SOMA is quite a head-spinner. Much like the BioShocks and the Deus Exes, it confronts a number of powerful topics that extend far beyond the bounds of the game itself. Taking cues from the best science fiction, SOMA explores what it really means to be human, and what calamities we open ourselves up to by placing our lives in the hands of an artificial intelligence. With its dark and dreadful aesthetic, it raises some harrowing questions about the future of humankind. It does not, however, provide many answers. That is the point, of course, but it has left my head whirring with existential quandaries for days now. I can't keep them locked up any longer, and so I subject you, poor reader, to these confounding conundrums in the hope that we can silence them together.

[WARNING: SOMA SPOILERS FOLLOW!]

Much of SOMA's narrative revolves around the notion of replicating human consciousness in digital form. Though the game focuses on using the technology as a means of besting mortality, it mostly neglects the possibility of that technology being used for evil. There is one haunting scene where you revive a dead man's brain scan in order to trick him into divulging important information, but this represents only a fraction of the malevolence the technology makes possible. Brain scans would provide an effectively limitless supply of soldiers for any war one wanted to fight. Train a single expert assassin, upload their consciousness to a thousand humanoid robots, and let them loose on the leaders of the world. Or, for a less bloody coup, the most talented hackers could be copied a hundred times over and set to the task of subverting the world's networks. Slave labour, identity fraud, sexual predation fed recyclable victims: the potential atrocities are endless. And worst of all? SOMA suggests that the typical human mind would struggle with the transition to a non-human body, eventually rejecting it in the same way our immune systems reject foreign intruders. This schism of the self would inevitably lead to insanity - a neat way for oppressors to cover their tracks.

These worries dovetail with another major concern: what rights is a copied consciousness entitled to? Let's consider a similar present-day concept. Digital copies of books, movies, and games are often afforded fewer rights than the original. The fact that something is no longer unique and can be recreated at little to no expense typically reduces its price, but it also diminishes its perceived value. Many avid book-lovers refuse to acknowledge eBooks as real books. Movie aficionados regularly prefer physical copies to downloads and streaming. Even gamers, so close to the bleeding edge of technology, are slow to embrace the digital future. The content in all these cases is the same. Rationally, it doesn't make sense to ascribe different values to the same content in different packaging - and value, mind you, is not the same as price. Price should indeed be affected by the costs of production; value should not.

So, should a copy of a person be afforded rights commensurate with the original? What if the copy doesn’t need to eat or drink in order to survive? Should that impact their entitlement to things like wages and standards of living? Should the rules be different if both the original and the copy exist concurrently? Does the original deserve a measure of authority over the copy? Should the relationship be comparable to that of a spouse or a business partner, or should it be closer to a parent or legal guardian? Or maybe there should be no relationship whatsoever; who knows what kind of psychological damage could stem from confronting your flaws in physical form.

Carl can’t see he’s a robot; his mind won’t accept it. Do you leave him to go insane, or do you pull the plug?

It's not hard to come up with issues pertaining to the coexistence of a copy and the original, but what about the notion of consciousness continuity that drove many of the Pathos-II scientists to take their lives? As Catherine tells Simon numerous times, the scanning procedure produces a copy, not a transfer; the original consciousness remains in its original body, and the copy is, from then on, a different person. Continuity of self is not possible. If the original dies, whether from murder, suicide, or natural causes, they remain dead; what lives on is just a very convincing imitation.

But what about cases of terminal illness or injury? If death is inevitable, a copy is at least a chance for the person's legacy, if not their consciousness, to live on. Things get really messy when you start thinking about the details, though. When should the copy be brought to life? Only when the original dies? What if they're suffering from a slow, progressive illness like HIV, or they're in a persistent vegetative state? Should those years be spent in misery and mourning, or should the future be embraced as swiftly as possible? Resumption of employment, familial stability, emotional support: a copy could avert and ameliorate a whole lot of pain and despair. But what about the original? Would they then be forgotten, left to wither away while the world moves on without them? Should they even be kept alive if their role has been filled and their demise is inevitable? What purpose would they serve?

What’s the difference between pulling the plug on an organic person and a digital one?

Assuming, then, that a copy is deployed to replace the original, how should the copy be addressed? As the real deal? If it looks the same, talks the same, and acts the same, do its origins matter? Well, judging by the divisions in today's society, the answer is sadly yes. Racism, sexism, homophobia: the slightest, most irrelevant differences can produce a despicable amount of hatred. Copies would almost certainly suffer the same discrimination. As with eBooks and other digital products, a copied consciousness would be seen as inferior and thus fair game for prejudice. If it's just a bunch of 1s and 0s pretending to be a real human being, why should it be treated any better than an impostor?

So if that's the case, maybe the truth should be kept hidden, even from the copy itself. Knowing that it's a substitute - albeit a very good one - is bound to cause some mental distress. We are already vulnerable to dangerous thought patterns like impostor syndrome and inescapable spirals of inadequacy, and that's without the complications of replicant technology. The potential for existential crises and crippling neuroses would skyrocket for those told that they were just a duplicate of someone who came before. Would ignorance, then, be the better option? Even if it involved lying not just to the copy, but to friends, family - the entire world? And what about the grander ramifications? If anyone could be a copy, a cloud of doubt would hang over every social interaction: am I talking to the original, or a fake?

Is it wrong to deceive a digital person if they’re dead anyway?

A copied consciousness might not fare so well in the physical world, but what about in a virtual one? There, issues of originality and coexistence would be irrelevant. A virtual world can exist in isolation, unaffected by the state of the physical one. The question is, though, what purpose does the simulated world serve? Disconnected from the reality that created it, there can be no illusion of continuity, nor any propagation of legacy - not from the perspective of the original world, at least. In SOMA, the simulated world is an Ark preserving the remnants of the human race, and thus it has purpose. Short of an apocalypse, though, copying consciousnesses to a digital Earth seems like an ethical disaster.

Who would own the simulation? The people who created it, or the people living in it? Should the creators be allowed to change the simulated world, eliminating death and disease, perhaps, or even halting the ageing process? How should procreation be handled? By merging parental brain scans and wiping all memories, assuming that’s possible? Or should agelessness obviate the need for having children? Should the uploads be informed of their sterility, or should they be left to find out themselves?

And what about the prospect of experimentation? Is it okay to subject digital consciousnesses to simulated pain and suffering for the purpose of research that might improve life back in the physical world, much as we currently do with mice in the lab? How much control, if any, should one world have over the other? If none, is there actually any point to a simulated world outside of the Ark scenario?

Phew. Just thinking about all these implications tires me out. It’s good stuff, though; SOMA deserves praise for tackling such heady concepts without getting preachy or bogged down in extraneous detail. The questions it inspires are ones that we as a species will have to face in the future - the not-too-distant future, if technology keeps up its pace. It’s important, then, that we start thinking about these issues and how we might address them. On that note, I’d love to hear your thoughts on these or any other existential concerns you might have - SOMA-related or not. Get philosophical in the comments below!

Matt Sayer is 50% gamer, 50% writer, 50% programmer, and 100% terrible at maths. You can read more of his articles here, friend him on Steam here or tweet him cat photos at @sezonguitar