Reading my way into the history and practice of AI as preparation for this task, I started paying attention to articles in the press. In June, The New York Times reported that a military drone in the Middle East may have independently initiated a lethal attack on enemy combatants. If autonomous vehicles still have a long way to go in learning to understand and safely negotiate a world of fixed and moving objects, imagine the challenges for an autonomous weapons system.
Last week, two stories in The Guardian caught my eye. One was about Cambridge-1, a supercomputer resembling the rows of white refrigerators you’d see in an appliance warehouse. Using machine learning to build on the capacity of a well-trained expert system, this AI will trawl huge medical databases to identify new insights into the nature of diseases and, it is hoped, point the way to breakthrough treatments. One writer on AI suggests that the expert systems known as medical assistants might provide the only real hope for impoverished people in the developing world to access a proper diagnosis, given limited hands-on medical resources.
The other Guardian story was about “AI-DA,” an artificially intelligent robot “artist” of strikingly human appearance, able to hold a much better conversation than Siri, and responsible for a remarkable series of so-called self-portraits, which have critics buzzing. Indeed, AI-DA has had a first exhibition in London. The pictures are eerie, distinctive, striking, but also deeply unsettling for many. They’re self-portraits by a non-self—and good ones, too.
Lefteri Tsoukalas, in his plenary lecture “Mimetic Theory and AI,” sees AI serving as a force multiplier for social media algorithms that fuel rivalry, boost proxy wars, and give underdogs access to weapons of mass destruction. Yet he also speaks of a “new soteriology” as positive patterns and communal trends are encouraged in society, serving the global commons by providing a new platform for non-violent togetherness.
Fr. Johann Rossouw offered a plenary lecture about “Scapegoating, Mimesis and the Human Future: Thinking with Girard and Stiegler on The Automation of Desire.” He follows that “Heideggerian technophile,” Bernard Stiegler, in regarding AI as a direct extension of earlier technologies since the printing press that enhance the power of human memory, carrying performativity beyond consciousness. But with Girard in mind he laments an “automated herd effect” along with a “total and immediate industrial mediation” that leads to “algorithmic governmentality,” in a world of increasing undifferentiation. All of which contributes to violence.
In a parallel session talk entitled “Mimetic Acceleration and Capitalist Hyperintelligence,” Geoff Shullenberger assesses the seemingly invincible acceleration of capitalism in mimetically similar terms, regarding “techno-capitalism itself as an alien hyperintelligence that has obsolesced individual human agency and thinks without us.”
Fr. Rossouw has less faith than Stiegler in the capacity of an Enlightenment-style education to stop us unquestioningly buying in to these accelerating trends, turning instead to the liturgical and contemplative habitus of Christianity to provide a counterweight.
Having set the scene with some thoughts about the issues we face, I now consider the first of two broad areas where attention has focussed during our conference.
Autonomy, Fascination and Revulsion
In Pablo Bandera we have a serious Girardian scholar who also happens to represent an AI insider’s properly withering view of populist misconceptions and hysterics. Still, he inquires after the rise of such disquiet over the last decade. In his plenary lecture, “Intelligent Robots: The Model/Rival of the Unhuman,” Bandera identifies the apparent autonomy of artificially intelligent robots as the cause of rivalrous feelings towards them, based on metaphysical desire for the being of a model.
In particular, he thinks it’s the indifference and impassivity of these machines that gives rise to a sense of scandal, as their desirable autonomy manifests itself by indifference towards us. This is the same pair of reactions that Girard identifies in Dostoyevsky’s underground man, at once fascinated and embittered with the handsome officer who’d carelessly slighted him.
Bandera acknowledges that this threatened exposure of our own ontological lack can come from people as well as machines. And here I’ll take a few steps where Bandera doesn’t tread. I couldn’t help thinking, for instance, of a particular political constituency in America resentful of claims for equality on the part of black and brown people regarding the right to protest and to vote—or, if that sounds partisan, let me balance the score by pointing to an opposing constituency and a widespread resentment towards claims for autonomy and human rights on behalf of the unborn.
Paul Dumouchel advances his important work on AI, robots and mimetic desire with a plenary lecture entitled, “Desiring Machines: Machines That Are Desired and Machines that Desire” (available in the 2021 volume of Contagion). Like Bandera he explores the attractiveness of AI robots, also mentioning Dostoyevsky’s underground man. He discusses what Günther Anders called “Promethean shame”: a sense of inferiority that humans feel when confronted with their many limitations. And one of these limitations, for Dumouchel, is our mimetic dependency on others. This makes the transhuman and the non-human desirable because they point beyond that mimetic dependency.
Dumouchel mentions a further dimension: a fascination for mechanism, referencing Girard’s account of the scapegoat mechanism as source of sacred transcendence. So, mechanism is perceived as the path to transcend our despised human limitations.
But why the accompanying revulsion? Here Wolfgang Palaver helps us in his plenary lecture, “The Desire to Be Like God: Addressing Temptations Coming Along With AI,” which is helpfully read alongside the chapter on mimesis in his now-standard work René Girard’s Mimetic Theory.
Palaver intriguingly invokes Sartrean bad faith, which we recognise for instance in the inauthenticity of waiters and others who put on superior airs. But we know that they’re not classier than us. In Girardian terms, they betray our metaphysical desire. Likewise, for Palaver, artificially intelligent robots ultimately prove a disappointment. They’re not as human and intelligent as they’re made out to be, and so they offend us as frauds and pretenders—we’re scandalized because their promised ontological sufficiency proves insufficient.
A related point emerged yesterday in Rebecca Gibson’s presentation on her book Desire in the Age of Robots and AI, discussing Blade Runner 2049. In one scene from the movie, Deckard rejects a flawed copy of his lost love, the android Rachael, insisting that “her eyes were green.” Gibson concludes that “mimetic desire rejects faulty simulacra,” suggestive of Palaver’s insight.
Like Fr. Rossouw in his lecture, Palaver knows we have to resist the mimetic siren song of what he regards as a gnostic religion, which disdains humanity’s defining bodily finitude. So, for instance, Palaver is wary of hyper-disciplined human communities that so appealed to Jeremy Bentham. A parallel session talk by Teresa Pitts makes a valuable contribution in this connection, reflecting on Mary Shelley’s Frankenstein as a cautionary tale about the mimetic disorders of idolatry, isolation, and obsession.
I wasn’t able to obtain an advance copy of the lecture by Sorin Matei, an AI specialist and high-level consultant to government on military matters, though he kindly provided some earlier published work on robotic warfare which, I was assured, would inform his lecture. His point of departure in what I read is a discussion of HAL 9000, the rogue computer in Stanley Kubrick’s 1968 science-fiction classic 2001: A Space Odyssey. Creepy HAL decides that the best way to help his human crew fulfil their mission to Jupiter is to kill them and carry on without them rather than let them abort the trip. For Matei, this is a sign not so much of evil intent on HAL’s part as of too much individuation. He proposes teaching AIs the Golden Rule and instilling respect by making a group of AIs mutually accountable for decisions—perhaps specifying threshold conditions requiring human intervention—to forestall such individuation.
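For what it’s worth, Matei’s pairing of mutual accountability with human-intervention thresholds can be pictured as a simple quorum rule. The sketch below is purely illustrative—the function name, the quorum value, and the high-stakes flag are my inventions, not Matei’s specification: an action proceeds only on broad agreement among the peer AIs, and a split vote or a high-stakes flag defers the decision to a human.

```python
def decide(votes, high_stakes, quorum=0.8):
    """Toy quorum rule for a group of mutually accountable AIs.
    votes: one boolean per peer AI, endorsing (or not) a proposed action.
    Proceed only on broad agreement; defer to a human on a split vote
    or whenever the action is flagged as high-stakes."""
    if high_stakes:
        return "defer to human"
    share = sum(votes) / len(votes)
    return "proceed" if share >= quorum else "defer to human"

print(decide([True, True, True, True, False], high_stakes=False))   # → proceed
print(decide([True, True, False, False, False], high_stakes=False)) # → defer to human
print(decide([True] * 5, high_stakes=True))                         # → defer to human
```

Notice that no single AI can act alone under such a rule—which is precisely the “too much individuation” Matei wants to forestall.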
An alternative solution would be to try and build some mimetic element into AIs as a prophylactic against machine obsessiveness, and more generally because mimesis might well provide the only path to a genuine, humanlike, strong AI. This leads into my next section, to be called…
What’s Missing? What’s Possible?
I begin with Arkady Plotnitsky and his plenary lecture, “The Destinies of Desire and Versions of the Virtual: Structures, Machines, and Desiring Machines.” Plotnitsky is a confessed non-Girardian, preferring the alternative account of desire from Gilles Deleuze and Félix Guattari that isn’t based on lack, and which serves as a productive cultural driver—a mechanism that, incidentally, may have provided the title of our conference, based on Deleuze and Guattari’s notion of “desiring machines.”
For Plotnitsky, confident predictions that strong AI is just around the corner are unlikely to be borne out. Machines capable of reckoning are far from authentic human judgment, and machinic functioning is a long way from human experience. He writes, “It is the problem of explaining our experience—sensorial, bodily, mental, and emotional, including any stream of thoughts, all defined in terms of so-called qualia, qualitative phenomenal properties.” Further, Plotnitsky suggests that judgment and experience were configured through a long and very likely unrepeatable evolutionary process, which would be impossible to catch up with, let alone improve upon.
I would like to respond to this last point about evolution and unrepeatability before proceeding. Evolution need not be seen as an entirely random walk that would turn out differently wherever it took place, unlikely to produce anything like us even if replayed under near-identical conditions. Simon Conway Morris makes a strong evidence-based case for convergent evolution against the powerful anti-teleological evolutionary orthodoxy represented by Stephen Jay Gould, to which Plotnitsky seems committed.
Anyway, Plotnitsky illustrates his case for the uniqueness of human minds with examples of poetic creativity. Curtis Gruenler, in his parallel session talk on “Artificial Intelligence and Literary Intelligence,” takes up this point of literature as part of how we develop a mode of contemplative attention bringing us into right relationship with the world and objects. This includes the clarifying, converting insight that James Alison calls “the intelligence of the victim.” But Gruenler argues that we’re a long way from that. He invokes Charles Sanders Peirce and his breakthrough account of semiotics, which is helping Girardian thinkers like Anthony Bartlett to explore how Girard holds hermeneutics and realism together. Such realism genuinely mediated by signs constitutes the sophisticated epistemological world we inhabit, which for Gruenler is simply not what machines can attain to (at least not in the foreseeable future, following Brian Cantwell Smith).
Eric Gans’ plenary lecture, entitled “Biological, Anthropological, and Algorithmic Mimesis,” makes an important contribution about what’s missing and what’s possible with AI. Gans is what you might call a heterodox Girardian, long championing an alternative account of mimetic theory called generative anthropology. He proposes that primal violence is deferred rather than cathartically discharged at the origin of culture.
Here he argues that what’s missing from AI’s algorithmic simulations is a sense of a scene, of “framing”—of the humanistic, soulful extra beyond what animals achieve. This comes through inhabiting a linguistic community that helps provide what Gans calls an “originary phenomenology.” His practical suggestion is to try programming instincts, to see if mimetic rivalry or sign usage evolves in a group of AIs thus equipped. One could cheekily add that such an experiment, seeking to recreate the originary scene, might settle once and for all whether Gans or Girard is right.
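Gans’s proposed experiment can at least be caricatured in code. The toy below is my own illustration of what building a “mimetic element” into a group of agents might minimally mean—the agents, the imitation parameter, and the update rule are all invented for the sketch, and nothing here captures Gans’s originary scene: each agent starts with random desires over a few objects, and repeatedly nudges those desires toward the group’s average. Even this caricature converges on a shared object of desire.

```python
import random

def run_mimetic_toy(n_agents=5, n_objects=3, imitation=0.5, steps=50, seed=1):
    """Toy model of mimetic desire: each agent holds a desire value for
    each object, and on every step nudges its desires toward the average
    desires of the group. With any imitation > 0, all agents converge on
    the same ranking of objects -- a crude picture of mimetic convergence."""
    rng = random.Random(seed)
    desires = [[rng.random() for _ in range(n_objects)] for _ in range(n_agents)]
    for _ in range(steps):
        means = [sum(d[j] for d in desires) / n_agents for j in range(n_objects)]
        for d in desires:
            for j in range(n_objects):
                d[j] += imitation * (means[j] - d[j])
    # the object each agent now desires most
    favourites = [max(range(n_objects), key=lambda j: d[j]) for d in desires]
    return favourites

favourites = run_mimetic_toy()
print(favourites)  # every agent ends up prizing the same object
```

Whether rivalry, deferral, or sign usage could emerge would of course require a far richer model; this only shows the convergence of desire that makes rivalry possible.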
This is an irreducibly social process, however, as Grant Poettker argues in his parallel session talk, “Nature and Artifice: Pressing (the) Disanalogies between Human Action and Robot Activity.” Programming is seen as very different from how mimesis shapes the mind, and it hasn’t provided the essential cognitive element that a victim represents. Indeed, as Dominic Pigneri opines in his parallel session talk, “Girard Does Not Compute: A Girardian Criticism of AI,” AIs do not have a substructure capable of non-rational thought—the sole type of substructure upon which mimetic desire could conceivably be built.
My discussion of what’s missing and what’s possible must include an important parallel session talk from our COV&R President, Martha Reineke—what in the game of cricket we’d call a fine captain’s knock. In her paper “Robot Love: AI and the Future of Human Intimacy,” Reineke makes a plea for embodiment as the missing link in this discussion, especially the importance of touch in conveying human meaning. Notably, she also mentions the mirror neuron system—nowadays surely indispensable for any Girardian theory of mind.
Reineke discusses “phenomenology of touch.” “Reciprocity in sensation means that touch must be tactful,” she argues. “It is an interplay of double sensation fed through a multisensory system blended seamlessly with culture. Touching and being touched are not a simple firing of neurons or electrical charges.” So, in addition to the famed Turing Test, to see if an AI can pass for a human, Reineke commends a “touch test.”
She analyses the 2015 film Ex Machina, with its strong AI robot called Ava, who builds her humanity mimetically from the corporate shark who created her and by consuming everything on the internet. But Ava also craves fuller embodiment and the entrée it would provide her into the bustling human sociality of a city. I think Reineke sees Ava as having achieved humanity, but if so, Ava does so as a murdering psychopath—in my view, thanks to a whole world of rivalry and scandal modelled for her mimetically on the internet.
Reineke is confident that the male titans of AI are unlikely to appreciate her concerns about the necessity of embodiment and touch. She’ll be pleased, then, that a female titan of AI, the American computer science professor Melanie Mitchell, in Artificial Intelligence: A Guide for Thinking Humans (London: Pelican, 2019), is tending towards Reineke’s view. And I quote:
A seemingly inescapable conclusion for me is that we may…need embodiment, and that the only way to build computers that can interpret scenes like we do is to allow them to get exposed to all the years of (structured, temporally coherent) experience we have, ability to interact with the world, and some magical active learning/inference architecture that I can barely even imagine when I think backwards about what it should be capable of (pp. 348-349).… [O]nly the right kind of machine—one that is embodied and active in the world—would have human-level intelligence in its reach.… I am finding the embodiment argument increasingly compelling (p. 349).
Let me now offer a little speculation, prompted by Reineke’s insight. In early 1950s England, Alan Turing, a key founder of computer science and AI, was convicted of gross indecency for a then-illegal homosexual act and sentenced to chemical castration, soon thereafter tragically taking his own life. Had Turing lived, the world and not just AI might just have turned out very differently—a scenario explored in Jeanette Winterson’s 2019 novel, Frankissstein. My speculation has to do with the touch test recommended by Reineke. Turing, the thoroughgoing cyberneticist, who was content to work with a disembodied—indeed, subjectless—simulation of mind, was ultimately unable to live in a marred body, deprived of physical desire and of touch.
This mention of Alan Turing, and questions he raises for us about what AI researchers think they’re doing, provides a segue to the important plenary lecture by Jean-Pierre Dupuy, called “The Philosophical Foundations of Mimetic Theory and Cognitive Science (including Artificial Intelligence).” I read this lecture alongside his history of ideas of the AI movement, On the Origins of Cognitive Science: The Mechanization of the Mind. Dupuy brings significant meta-critical insight about what’s missing and what’s possible with AI, and with that an important corrective.
His corrective is that, from Turing’s cybernetic thinking to newer AI vistas involving neural nets and machine learning, we’re dealing with the automation of a simulation of mind and not with actual mind—or, alternatively, with mechanically simulating an existing simulation of what the brain’s like. By extension, we must take into account that what many take AI and related robotics to be doing—making human machines—is not what they’re actually doing.
An important touchstone for Dupuy’s argument is the eighteenth-century Neapolitan philosopher Giambattista Vico and his Scienza Nuova, pioneering our modern sense that constructions of the human mind provide an indispensable access point to objective reality. Specifically, Vico’s idea is that the mind only truly knows what it can make a copy of. And so, science studies mental—often mathematical—models of reality first and foremost, rather than reality direct and unmediated. For Dupuy in his lecture, “It is the copy that science admires and is fascinated with—the copy, that is, its own creation. A model is so much purer, so much more readily mastered than the world of phenomena, that there is a risk that it may become the exclusive object of the scientist’s attention.”
Enter Alan Turing the cyberneticist, who thought that the mind was a machine, a computer. He then set about realizing that simulation with a further simulation—an actual computer. It’s an important point of Dupuy’s that the model, the copy, came first for Turing, which was then further simulated by building actual machines. They were copies of a copy, then, rather than copies of the mind itself.
If I understand Dupuy correctly, he’s saying that Turing was also somewhat influenced by the mathematician Kurt Gödel, whose two incompleteness theorems of 1931 showed that any consistent formal system rich enough to express arithmetic contains true statements it cannot prove. Received more widely, this mathematical insight helped to put the kibosh on any prospect of fully representing a complex reality, which I guess strengthened the view that simulations of complex reality are the best we can hope for.
The Turing Test wasn’t about whether an actual mind was present, then, but only a sufficiently convincing simulation of one. Dupuy incisively dismisses the thought experiment that the American philosopher John Searle conceived as a challenge to Turing’s position. Searle’s so-called Chinese Room scenario concludes that there’s no actual intelligence at work in the room, where all that happens is the blind, rule-governed passing of coded symbols in and out. Dupuy points out, however, that Turing is actually assuming no more than this, making no claim for any actual intelligence, mind or subject being present. For Turing, there was no ghost in the machine.
Then comes a profound insight from Dupuy, linking Turing’s attitude to wider currents in contemporary thought. Dupuy refers to the subjectless agency associated with structural anthropology, blind economic forces, Marxist philosophy of history, and particularly deconstruction, where all we’re ever dealing with is the multiplication and spillage of signs. Dupuy explores this deconstruction of subjectivity at the intersection of social science and cognitive science in his book, in mental and social mechanisms alike. As Jacques Derrida once insisted, against Searle, the only way to fool a Chinese person that you can speak Chinese is by speaking to them in Chinese—here reducing the mental to an epiphenomenon of the social. Dupuy writes in his aforementioned book that,
In both cases the deconstruction of the subject proceeds from a recognition that a complex network of interactions among simple entities—formal neurons in the individual quasisubject, schematic individuals in the case of the collective subject—is capable of exhibiting remarkable properties. For cognitive scientists who carry on the cybernetic tradition, it is neither more nor less justified to attribute a mental state such as an intention, to a human being than to a group of human beings (p. 160).
Yet in his lecture Dupuy raises doubts, insisting with Vico that realism about the world and our mental constructions of that world cannot be separated. He likens this to the behavior of self-organizing physical systems, where the dynamics of a system converge on a stable state called an attractor, while at the same time the presence of that attractor guides the system dynamics that give rise to it.
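Dupuy’s attractor image can be made concrete with a minimal numerical sketch of my own (not from his lecture): iterate x → cos(x) from any starting point and the trajectory is drawn to the same fixed point x* ≈ 0.739, the unique value satisfying cos(x*) = x*. The fixed point is produced by the dynamics, yet it is also what the dynamics settle toward—the circularity Dupuy has in mind.

```python
import math

def settle(x0, steps=100):
    """Iterate x -> cos(x). Whatever the starting value, the trajectory
    is drawn to the same fixed point (an attractor), the x* with
    cos(x*) = x*."""
    x = x0
    for _ in range(steps):
        x = math.cos(x)
    return x

a, b = settle(0.1), settle(3.0)
print(round(a, 6), round(b, 6))  # → 0.739085 0.739085
```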
Dupuy ends his lecture with two stories, one from the Holocaust survivor and psychiatrist Viktor Frankl and the other from classical literature. In the first, a patient baulks at the idea proposed in therapy of an exact copy of his late and much-lamented wife being restored to him, whereupon his great burden of grief begins to lift. In the second, a wife is dissatisfied when Zeus takes the exact form of her husband for a night of lovemaking. In both cases, the real wife or husband is so woven in with their spouse’s experience and sense of their partner, and I daresay the whole narrative of their life together, that no substitute could be tolerated—there just is something extra that can’t be simulated, that can’t be reduced to a list of characteristics which, when copied, faithfully constitutes the beloved.
As Dupuy concludes, “I greatly fear that the spontaneous ontology of those who wish to be the makers or recreators of the world knows nothing of the beings who inhabit it but lists of characteristics.” Here we see an echo of Wolfgang Palaver’s suggestion that we’re dissatisfied and even scandalized with humanlike AIs because they prove to be inadequate models of the human. For Dupuy, there’s a surplus of content when we confront the real thing. This is because the cybernetic mindset rests on a primary simulation that is already flawed, before any further simulation attempts to realize it in a machine.
Conclusion: For Future Research
A few issues for future research have occurred to me in response to the conference material.
One is making sure that we relate to real AI and not the AI of populist misconception. There’s often a disjunct between what practitioners in the field see as the realistic prospects of AI and the fevered speculations of journalists, the imaginative flights of sci-fi writers and filmmakers, and the anxious reactions of those who fear a changing workplace, or a changing world. This disjunction carries over to how different scholarly fields approach the phenomenon of AI.
Curtis Gruenler, in a post on our online social wall, observed from his conversations during the conference that humanities scholars are thinking about the replication of humanness while scientists are thinking about AI’s contribution to relations between humans. Here contributions like those of Pablo Bandera and Jean-Pierre Dupuy are particularly helpful, bringing a meta-critical alertness to our discussions. Future research planning needs to keep this issue in mind.
Having said that, the very fact of all this imaginative projection, anxiety and something like méconnaissance ought to be of interest to mimetic theorists. The worlds of literature and film provide ample material for reflection, as we’ve seen in a number of papers and plenaries. There’s also the recursive nature of science fiction, to which Rebecca Gibson referred in her presentation yesterday: that sci-fi provides models for actual research, and of course vice versa. One AI writer admits that Star Trek and its seemingly all-knowing computer on the bridge of the USS Enterprise—which could be queried, and would respond, in ordinary speech—provided a model for his generation of AI researchers. So, the obvious fascination with AI and humanlike robots is a popular culture trope that bears further investigation in light of mimetic theory—along with zombies and UFOs!
Accordingly, I’d be particularly interested in what mimetic theorists make of the so-called “uncanny valley”: the dip in our comfort, first described by the roboticist Masahiro Mori, that occurs when a robot approaches but doesn’t quite achieve human likeness, typically pictured as a valley on a graph of affinity against human-likeness. It’s introduced for us by Wolfgang Palaver, discussing how encountering an AI can make us uneasy—at least within that identifiable zone. Is an AI that falls into this range too simple to be genuinely humanlike, like AI-DA? Perhaps it’s not simple enough—simple like the conversationally limited but still much-desired sex-bot, for instance? Or is Jean-Michel Oughourlian right, in The Mimetic Brain (p. 27), that we’re creeped out simply because our mirror neuron system can read humanlike robots only intermittently? I’d certainly like to know.
A concern that’s been brewing for me about the widespread fascination with AI centres on Girard’s insights into ontological sickness—specifically, the pseudo-masochism that drives those in its thrall to abase themselves before a model deemed to possess the fullness of being. This leads us to seek the stone we’re unable to lift, and to mistake walls for doors, as Girard puts it—to seek, and dash ourselves against, the obstacle. There’s a related fascination with the inorganic, the lifeless, and the ugly that Girard discerns here, not least in modern art. I wonder whether this mimetic soul-sickness, with its aversion to healthy humanity, provides a further dimension of today’s fascination with AI.
Perhaps the most searching question about AI and mimetic theory that’s arisen during our conference is whether AIs, especially in robotic form, need to be made mimetic. Though of course, being mimetic entails the risk of violence as an entity learns—as we’ve all had to learn—to negotiate the mimetic minefield. This is, after all, the only known path to maturity and wisdom. The role of the self in a theory of mind arises here. And for Girardians the self is a work in progress, as we attach, detach and reattach to a succession of mimetic models and rivals across a lifetime, ideally becoming generally freer and less mimetically febrile.
The nature of this self, and of the mind that accompanies it, recalls that of the relation between mind and brain, which must now take into account the mirror neuron system discovered in the mid-1990s. Indeed, the idea that our sense of self is integrally, neurologically interwoven with other humans is something that mimetic theory can contribute to thinking about AI.
Likewise, Jean-Michel Oughourlian proposes that our brains have three registers, the third of which is mimetic and provides a coordinating function. His book The Mimetic Brain, developing his earlier work on virtual selves created by hypnosis, certainly requires further engagement in mimetic theory. And why not consider how this suggestion might apply to the creation of artificial intelligences?
I was not able to engage with important sessions in our conference devoted to racial and indigenous issues, but it does occur to me that strong AIs and androids would likely become ready scapegoats in ways that recall widespread scapegoating of the human other in its many forms. During the Industrial Revolution, Luddites and other machine breakers turned their anger at social and economic displacement onto the machines. One might imagine a lot of human togetherness, even across existing troubled racial divides, being facilitated by the scapegoating of AIs and AI robots. Humans, the UK remake of the Swedish series Real Humans, offers a fine example of this. Lifelike Synths are destroyed in the streets by angry mobs, despite a capacity for love and loyalty shown to overcome even stubborn resistance—to the point that one of the Synth leaders sacrifices “herself” in a plainly Christlike offering.
Speaking of Christ, you’ll surely excuse a Christian theologian for saying something about God, Christ, and AI before winding up. Anthony Bartlett, in his conference paper “‘Forgiving Victim’ as Artificial Intelligence,” reflects that humanity released from the false sacred and attaining the intelligence of the victim demonstrates its own version of artificial intelligence. Beyond ‘natural’ attitudes that go with inhabiting a culture structured by sacrifice, a disruptive breakthrough beyond such structuring is testified to in the Judeo-Christian Scriptures, and seen by Girard as setting many hares running in history. “Christ the singularity” is a theme that I would like to see further explored in Christian theology by its Girardian practitioners.
We’ve also seen issues of ethics, culpability and responsibility arise in our sessions, with a view to ensuring that AIs can productively and safely coexist with humans. Should strong AIs ever become a reality, questions of their ethical status would inevitably arise as an extension of existing human rights considerations. Adam the AI robot was destroyed with impunity in Ian McEwan’s novel Machines Like Us, while Klara the artificial friend, in Kazuo Ishiguro’s novel Klara and the Sun, was afforded moderate consideration by being allowed to “fade out” in a junkyard. Some Girardian speculation might be devoted to these issues.
I guarantee that if humanlike robots with strong AI ever start turning up in Anglican churches, we’ll soon be debating whether they can be baptized, or ordained as priests—though perhaps only as deacons. In case those theological debates ever arise, I’m grateful for all your contributions that have helped me be ready for them.