From Transfiguration to Transhumanism: Taking seriously Theology as the Science of God



I have always been much more interested in the nature of God than in the nature of religion.

Anyone who has followed my work on science and religion in defence of intelligent design should have inferred as much, since I see “religion” as little more than the site where God (the “intelligent designer”) is discussed openly, especially in secular societies that would otherwise be allergic to the very idea of a deity. I mean here not any old deity but specifically the Biblical God in whose “image and likeness” humanity is said to have been created.

For this reason, I have always had time for theologians who take their profession literally – that is, as engaged in the “science of God” and not simply a kind of makeshift sociology of religion.

But theologians need to consider whether a “science of God” ultimately turns out to be anything other than science itself – or put more normatively, “science properly done.” In other words, as theologians try to fathom the mind of God, they are coming to terms with reality’s intelligibility, in the strict sense, which requires that we have minds that are designed not merely to make sense of our everyday environments (that is, the adaptive forms of intelligence that most “naturalistic” and “evolutionary” epistemologies try to explain) but more tellingly, the entire cosmos, the vast majority of which we have never empirically encountered, let alone inhabited. Of course, God would “always already” have access to the entirety of this extended realm. Theology’s task, then, put in today’s terms, is to “occupy God” – but that may mean abandoning conventional religion.

Lest this prospect sound too radical, it is worth recalling that Jews and Hindus regarded the early converts to Christ and Buddha, respectively, in exactly this light. And of course, within the history of Christendom, the rise of modern science is most naturally read as a successor move in this quest for God in spite of religion.

Arguably the only branch of Christianity that has managed to keep humanity’s nascent divinity within a strong clerical context is Eastern Orthodoxy, which has made the Transfiguration – the moment Jesus realizes his divine nature as “Christ” (theosis) – a defining episode in the religion. Yet, even in this case, it is noteworthy that a school of late-nineteenth and early-twentieth century Russian Orthodox thinkers known as “Cosmists” tried to leverage their faith into a hyperbolic scientific worldview that envisaged the physical resurrection of the dead and absolute human control over the Earth and the Heavens (via space travel).

In retrospect, Cosmism provides the clearest anticipation of today’s transhumanist aspirations. It also helps to explain Russia’s cultural receptivity to the most ambitious Soviet schemes for world domination, notwithstanding the regime’s official atheism. This point is worthy of much further study, a good recent introduction to which is George M. Young’s The Russian Cosmists.

However, Western Christianity is where the pursuit of God-in-spite-of-religion has been most clearly documented. Isaac Newton, still regarded as the greatest of all scientists, had a particular way of operationalizing the theological problematic, the quest to “occupy God”: How does reality appear to a being who exists equidistant from all points in space and time? In this respect, the unified world order sought by mathematical physics in the modern era may be seen as trying to deliver on the theological quest. Certainly this was the intention of the instigators of the seventeenth century Scientific Revolution, that harmonic convergence of the Renaissance, Reformation and Enlightenment which inaugurated the modern worldview. These people – Galileo, Kepler, Bacon, Descartes, Newton and so forth – were responsible for turning the “view from nowhere” – our simulation of the divine standpoint – into the gold standard for all knowledge, regardless of its immediate practical relevance.

Overcoming obstacles to accessing God: The Bible’s historic role

To be sure, many atheists and theists have been tempted to dismiss any scientific attempt to approximate the divine bandwidth as – if not outright fictional – a potentially dangerous diversion from forms of inquiry that might serve to ameliorate the “human condition,” understood as a sophisticated form of animal life firmly planted on Earth. Thus, complementing a concern for the future of the ecology and the lives of those most likely to be affected by radical changes to it has been a scepticism about pursuits that take us too far afield from our natural habitats, be they conducted in cyberspace or outer space.

Moreover, the theological remit of science has not been helped by the latter-day “aestheticism” of physicists such as Steven Weinberg, who won a Nobel Prize in 1979 for helping to unify the fundamental forces of nature but nowadays promotes one of the more civil versions of scientific atheism in the pages of The New York Review of Books. Innocent of all theology, physicists like Weinberg appeal to an excessively subjective sense of mathematical elegance to justify their pursuits, even as these pursuits continue to require enormous long-term funding – mostly from those who are forced to take the physicists at their word that such an experience of reality is worth having.

How, then, did people come to believe that their thoughts of worlds beyond the everyday could be the concrete target of their aspirations? Of course, people have always had outlandish ideas that might be reasonably called “supernatural.” However, these had been confined to the sphere of apparitions, dreams and prophecy – all of which traditionally implied a lack of control over what passed through one’s mind. In this respect, Roman Catholicism is rightly seen as a “broad church” in Christianity: it has tolerated a wide range of personal revelations – but typically in a therapeutic spirit, which explains the emphasis on pastoral counselling. Wild, even blasphemous ideas and acts have been tolerated as symptomatic of our fallen state, which are best treated in terms of various institutionalised “safety valves” for letting off psychic steam, most notably the confessional.

All of this changed once the Protestant Reformers reasserted the centrality of the Bible as the via regia of divine-human interaction, in terms of which one’s existence had to be justified. As people read the Bible for themselves, and saw that it addressed them as creatures born with unredeemed godlike powers, they began to trust their imaginings as indicative of deeper realities that, at least in principle, they were capable of accessing. Nearly sixty years ago, Arthur Koestler’s The Sleepwalkers dramatized this spiritual awakening, whose fixation on the Biblical text nurtured an interest in a second, complementary text, which Galileo, Bacon and others called “The Book of Nature,” written in the language of mathematics. This language, with its algebraic capacity to put exact limits on variation, was entrusted to discipline and focus the wide-ranging vagaries of the human imagination.

In today’s secular, materialistic climate, these matters are normally discussed in terms of the technology that turned this ideological sea change into an attractive commercial prospect, the printing press. But make no mistake, it was the Bible that was printed, which in turn framed the spirit in which published works were generally received as the centuries wore on. This is the only way to understand the debates over the moral import of novels that began with Rousseau’s The New Heloise in the mid-eighteenth century: such works, like the Bible before them, were treated as models for life. This is why even in this age of the internet, despite the substantial doubts that have been raised about the significance of books, secular people still feel a sense of blasphemy – only matched by their response to mass murder – whenever a regime authorizes the burning of books.

People still caught in early-twentieth-century-style science-religion conflict dynamics mistakenly think that this original phase of “bibliomania” was authoritarian in character: a propaganda campaign to force-feed the Bible. On the contrary, it provided the first great opportunity for ordinary people to own their faith by directly confronting the word of God, which the Church was seen as having inhibited. Under the circumstances, it is unsurprising that the mode of reading was literal – that is, focused on the exact meaning of the words that one encountered on the page. But this was by no means a naive mode of reading. The first modern Bible readers generally knew of prior “symbolic interpretations,” but these had effectively discouraged one from reading the book as a source of personal guidance.

Postmodernists and other more “ecumenical” theologians today often forget that talk by Aquinas and like-minded, typically Dominican scholastics of the Bible’s interpretive difficulties was originally an elite gesture to get people to divert their attention away from the sacred book and toward the Mass and other Catholic rituals and prescriptions. This move served to reinforce clerical authority in the same breath as it gave meaning to people’s everyday lives. “Paternalism” as a generalized political concept arose from this environment, most notably by reducing faith to a kind of “trust” in a designated father figure. Thus, the scholastic protagonist of our story, Aquinas’s great Franciscan opponent, John Duns Scotus – about whom more below – is seen in church history as an intellectual champion of the Immaculate Conception. Regardless of how one understands the virgin birth of Jesus, the very idea that Mary herself could have been born without sin suggests that God himself is no slave to the norms of patrilineal descent as represented by, say, the Petrine papal lineage.

Avoiding any such paternalist – if not patronising – mystification of the Bible, the literalists approached matters in a straightforwardly modern attitude. After all, why should Christianity (or the Abrahamic faiths more generally) be founded on a sacred book, unless that book was designed to be understood by all those who are called to the faith? Moreover, the sacred book is full of references to the divine use of language, not least as the premier creative medium (logos). And if humans are indeed created in the deity’s “image and likeness,” then there should be something especially luminous about encountering God through the Bible.

But this in turn is only possible if people can read the Bible in the same sense that they read anything else; hence, the stress on “literalism,” or in somewhat fancier terms, what after John Duns Scotus is called “univocal predication.” Thus, if the Bible claims that God is “all powerful,” then the deity should be understood as possessing an infinite amount of what we normally understand as “power.”

Science as a potential solution to the challenges of Biblical literalism

Little surprise that one of Scotus’s Oxford students, John Wycliffe, was the original campaigner to translate the Bible into the common languages of Europe – the starting shot of the Reformation. Yet, the exact meaning of “literalism” is tricky, especially when talking about God. Nonetheless, if humans are capable of redemption, then the task should not be insuperable. In this very important sense, Biblical literalists, just like scientific reductionists, have been “anti-mysterian,” in today’s terms: both hold that in principle nothing should lie beyond our comprehension, despite our clear liabilities. In particular, the following three issues need to be addressed for literalism to pay off theologically:

  1. God is said to bring together an infinite version of all the humanly recognizable virtues in his own unified being. Theologians following Scotus call these convergent perfections “coextensive transcendentals.” Thus, while goodness, power and knowledge are unevenly distributed across humanity, they are jointly maximized in God’s unified being. But this suggests that such virtues are manifested in humans in distorted form, due not only to our finite natures but also because the versions of each virtue that we do possess are not necessarily tempered by equally developed versions of the other virtues. This is an especially serious problem for today’s transhumanists, who often talk about the future of humanity in terms of indefinite extensions along a variety of value dimensions – living longer, healthier, more productive lives – without considering how such an extreme being would integrate those dimensions into a coherent and positive conception of the self.
  2. Because we are fallen creatures, our understanding of the virtues is corrupted by our own cognitive and moral limitations. Thus, we may well recognize that God is “all good” while misunderstanding the nature of the good that God is and does. This applies no less to the Biblical authors. From this standpoint, the question is not whether, say, the Resurrection occurred – let’s grant it – but whether Biblical authors who reported on the event understood what had been witnessed. Similarly, the question is not whether, say, the pre-Socratic philosopher Thales was really talking about water when he said “everything is water” – let’s grant it – but whether he understood the nature of the thing he was talking about. Galileo repeatedly alluded to this mode of reasoning: of course, the ancients were talking about the same sun that we see in the heavens, but they lacked the astronomical knowledge – not to mention the telescope – to make proper sense of what they saw. It was in this spirit that the American philosopher Saul Kripke, born to a rabbinical family, proposed a very influential theory of linguistic reference in the 1970s, according to which people may talk about the same thing for centuries, while ideas about that thing may change quite radically and perhaps in each case contain substantial error. But the bottom line is that there remains a “something” that is the common object of reference across time and space.
  3. Prior to the printing press, people normally encountered writing in the imperative mode, namely, as indicative of authority and possession. Only obedience, not sophisticated hermeneutics or even general semantic competence, would have been required as an appropriate response to what were effectively a load of “Do not trespass!” signs. As Rousseau famously stressed, this created a strong sense of self and other, the basis of legal entitlement. In contrast, the Bible’s mass dissemination starting in the fifteenth century – and the attendant spread of common literacy – inaugurated a subtler form of reading, one whose aim is to understand what is written as if the reader had written it. Thus, to read the Bible is not to eavesdrop on a private conversation among Jews, but to be engaged as part of the original intended audience – namely, anyone willing to listen. This is the version of “literalism” that captivated the Protestant imagination, resulting in what we would now call a performative understanding of the Bible, whereby the truth of the sacred book is revealed in how we enact and take forward its words in our own lives. A practicing scientist adopts exactly the same attitude toward Newton or Darwin. However, the point is less obvious in the scientific case because its culture of knowledge transmission generates a trail of textual revisions (in other words, textbook editions) that absolve scientists of the need to consult the founding texts of their discipline. It was in just this “scientific” spirit that the great Christian dissenter Joseph Priestley inspired the author of the United States Declaration of Independence to write the heavily redacted version of the New Testament popularly known as the “Jefferson Bible.”

Together these three issues draw attention to the trenchant character of human fallibility in the quest to leverage the Bible into the via regia to the mind of God. Perhaps the most relevant historian of science and religion in our times, Peter Harrison of the University of Queensland, has highlighted the revival of the doctrine of Original Sin in the early modern period as central to both the rise of the Biblical literalist sensibility and the development of science.

In particular, now with the Bible ready at hand, how should one understand the sense of loss incurred by humanity following Adam’s expulsion from Eden? Was it that Adam’s passions clouded rather than enhanced his reason? Was it that his senses had withered from their previous powers? Or, was it that the world itself refused to yield to his will as it had before? Answers varied radically, but in each case specific epistemologies – most of which survive in secular guise as “philosophies of science” – were proposed for recovering humanity’s divine status, a task increasingly understood as access to the ultimate representation of reality. When scientists brandish a capital “T” version of “truth” at their reprobate colleagues in the humanities and social sciences, it is to this originally theological quest that they allude malgré eux.

Can the study of human rationality be seen as a “Science of the Fall”?

It would be a mistake to conclude that the theological and scientific sides of this quest for Truth (or, God) can be reconciled very easily. But this is not due to the all too familiar polemics between religious and scientific “fundamentalists” that periodically erupt in the mass media. Much trickier is a general learned scepticism about the applicability of the classical paradigm of rationality to the human condition. Perhaps the most popular version of this paradigm is Homo economicus, which endows humans with a seemingly godlike capacity to set feasible objectives, survey possible courses of action, access relevant information, weigh evidence, reason logically and so on.

A good entry point to scepticism about this self-understanding of humanity is the history of psychological research into “heuristics” – that is, mental shortcuts designed to turn our cognitive limitations to our advantage. Colloquially we speak of these things as “rules of thumb” or, more provocatively, “biases,” “prejudices” or “stereotypes.” They basically trade off accuracy for efficiency in heroic feats of abstraction and generalisation. In the words of Gerd Gigerenzer of the Max Planck Institute, heuristics are “fast and frugal,” an image consistent with the general Darwinian line that intelligence is primarily adaptive to an organism’s normal environments. For Gigerenzer, what matters is that heuristics are sufficiently right, sufficiently often for us to carry on. Accuracy, let alone “Truth,” is not an overriding end.

However, to Daniel Kahneman, the 2002 Economics Nobelist who began experimentally studying heuristics in the 1970s, their systematically erroneous nature can be remedied only by either a programme of massive human cognitive self-improvement or the development of intelligent machines designed to adhere to the classical paradigm of rationality. Unlike Gigerenzer, Kahneman refused to take our cognitive limitations as a non-negotiable fact.

While I would not wish to label Kahneman a “transhumanist,” let alone a theologian, it is clear that his interpretation of heuristics moves us toward thinking in terms of a necessary step change in the re-orientation of our cognitive capacities – either inside our own brains or as offloaded to smart machines. In either case, under normal circumstances, there would seem to be a fundamental mismatch between the aspirations of our minds and the dispositions of our brains.

Here theologians should prick up their ears. A litmus test for a “theological” inclination is that one registers the distinction between “top-down” and “bottom-up” thinking as qualitatively different. An anti-theological (“naturalistic”) mode of thinking sees top-down as reducible to, if not superseded by, bottom-up thinking. This is Gigerenzer’s view, which also leads evolutionary psychologists to proclaim blandly that, say, all of mathematics can be ultimately explained in terms of incremental changes to our native capacities to count, calculate, measure, group and so on.

Of course, this is much easier said than done – and it certainly is nowhere close to being done. Defeat is implicitly conceded in a move popularised by Richard Dawkins’s great nemesis in the 1980s and 1990s, Stephen Jay Gould, who argued that just about everything that is normally taken to be distinctive of humans as a species turns out to be an “exaptation” – that is, a glorified genetic accident that happened to stick over time under a unique set of concatenated circumstances. Thus, most evolutionary explanations of religion involve treating it as a by-product of one or more survival strategies. It is only a matter of time before an evolutionist has the courage to explain science itself in exactly the same way – that is, as a potentially risky sideshow from the standpoint of natural selection.

What is so dissatisfying about Gould’s line of evolutionary inquiry is that scientists end up in exactly the same place as the religious believers from whom they would normally distance themselves: both concede the irreducible necessity of miracles – except that in the evolutionist’s case they have been naturalised as “exaptations.”

Consider, by contrast, that when one of the founders of the Neo-Darwinian synthesis, Julian Huxley, coined “transhumanism” in the 1950s, he saw it as marking an epochal moment of species self-consciousness. Instead of simply conforming to natural selection, Homo sapiens – courtesy of Darwin and other evolutionists – had acquired a theoretical understanding of the biological processes in which natural selection has played such a decisive role. If this moment constituted a “miracle,” it was the miracle to end all miracles. Henceforth, reasoned Huxley, we would exert greater control – as well as responsibility – over the basic mechanisms of life on the planet, what nowadays is called the anthropocene. The previously infallible and unaccountable workings of natural selection would be gradually converted to the corrigible and accountable workings of artificial selection on a planetary and perhaps even cosmic scale. In this context, Huxley channelled his abiding faith in eugenics toward promoting the worldview of the heretical Jesuit palaeontologist, Pierre Teilhard de Chardin.

In a time when a broad range of variously motivated humanists, scientists and technologists openly consider – sometimes with substantial financial backing – radical genetic interventions, the uploading of consciousness to machines, geoengineering and indefinite space travel and space colonisation, the theologian’s voice needs to be heard. However, the theologian’s role should not be one of spoiler, equivocator, fear-monger – or, for that matter, pathos provider. These are perfectly honourable roles for priests and religious people to adopt in the interests of themselves and their immediate constituencies.

However, theologians by virtue of their unique focus on God are especially well-equipped to discuss the images of the human future that we are in the process of projecting and, to a certain extent, living through. In this context, as Hegel realized so well two hundred years ago, the old theological disciplines of theodicy, eschatology and soteriology have much to say to us today.

Steve Fuller is Auguste Comte Professor of Social Epistemology at the University of Warwick. He is the author of more than twenty books, most recently (with Veronika Lipinska) The Proactionary Imperative: A Foundation for Transhumanism. This piece is a reflection on his plenary address to the 2014 meeting of the European Society for the Philosophy of Religion in Münster, Germany.

First published in ABC Religion and Ethics September 29th, 2014