How far will you go to define yourself in a world where existence itself has lost all meaning?
The most important aspect people should consider when they talk about NieR: Automata’s story is why it’s set so far into the future. Yes, the game revolves around humanity being fucked – or more fucked if you’ve played the first NieR – but why couldn’t this story have happened a few years after the extinction rather than the few millennia we got? While a lot can definitely change in just three years, and Japan has been known to make games where humanity destroys itself within a week when put in a bad situation, universal philosophies are among the most indelible things to exist in this world. Even to this day, long after the war is over, there are people who still agree with Hitler’s ideals. But even old uncle Adolf’s philosophy has an expiration date, and while it’s true there exist some people who still believe in the Confederacy, they mostly belong in the same bubble as the four or five people who for some reason really like Futakoi.
Although the machines aren’t programmed to have emotions like the androids are, time takes its toll on all things in life, and that includes the machines in NieR: Automata and their strict protocols to obliterate humanity and serve the aliens. With only one prevalent philosophy surrounding the machines after thousands of years, there are going to be those who decide that “glory to mankind” might not be so bad. And it also shouldn’t be a surprise when their ideas of simulating this philosophy include orgy simulations and ritual suicide, while the ones who are closer to getting it right lack the safeguards we human beings generally have to keep us from piercing our own core life support during emotional moments. After all, when you combine their inherent programming with how humans have been extinct for a really, really, really long time, it would take a miracle for the true essence of humanity to exist through them, no matter how many French maid outfits they wear.
Existentialism is a philosophy concerned with finding self and the meaning of life through free will, choice, and personal responsibility. The belief is that people are searching to find out who and what they are throughout life as they make choices based on their experiences, beliefs, and outlook. And personal choices become unique without the necessity of an objective form of truth. An existentialist believes that a person should be forced to choose and be responsible without the help of laws, ethnic rules, or traditions. – All About Philosophy
Every single character in this game is artificial in some form. Even the Emil (if you’re reading this article and you don’t know who Emil is…) that we see in this game is actually one of several artificial constructs that the original Japanese version of Jack Skellington made in order to combat the aliens long ago. That means that they’re inherently programmed with their beliefs, never needing to choose for themselves or be responsible for their actions. It’s sort of like how a lot of humans grow up with the personal beliefs that their own parents have hammered into them since birth, only more restrictive, since machines aren’t supposed to have the free will necessary to diverge from said beliefs after they expire. That’s why, when the humans ended up going extinct, YoRHa was created in order to prevent the androids from ever experiencing the inevitable AI contradiction that occurs when a program is made to accomplish an impossible task – which would be even more of a disaster when you consider that the androids are programmed to have emotions (which isn’t the same as free will, before you guys argue with me on that point).
Unless it was programmed into them at the very start, the chances of a machine developing its own consciousness right away are very slim. And even with the march of time, developing the philosophy of existentialism from scratch isn’t exactly a quick process either. Although it did have origins beforehand (with Pascal’s version being one of the more famous), existentialism as we know it now was inspired by many 19th-century philosophers, including Søren Kierkegaard and Friedrich Nietzsche. In other words, it took a long time for humans themselves to open up their cognition to the possibility that free choices define your very nature. By the time the machines were even capable of doing that, there were no examples to follow, and it’s hard for the past to return once it has, well, passed.
So what do they do? Fall in love without seeing what’s wrong with changing the personality of their loved one to be more satisfying. Constantly improve their fighting strength for the sole purpose of being invincible. Do a Romeo and Juliet play without understanding what Romeo and Juliet is supposed to be about in the first place. While the androids can definitely recognize what’s wrong with this picture, who are they to argue when they don’t know of a better alternative themselves? And even if they did know the alternative, would it even work anymore?
Existentialism is broadly defined in a variety of concepts and there can be no one answer as to what it is, yet it does not support any of the following:
– wealth, pleasure, or honor make the good life
– social values and structure control the individual
– accept what is and that is enough in life
– science can and will make everything better
– people are basically good but ruined by society or external forces
– “I want my way, now!” or “It is not my fault!” mentality – All About Philosophy
That above quote may be true of existentialism today, but definitions change all the time. The word “fag” used to (and still does to an extent) mean a cigarette or exhaustion before it got turned into the offensive homophobic insult it is now. And to these machines, existentialism might as well support everything I just quoted above. After all, not supporting the “science can and will make everything better” belief is kind of hard in a world where science is basically all that’s alive besides the local wildlife. Plus, I know quite a few cultures that would disagree with the idea that “honor making the good life” has nothing to do with defining yourself.
Main characters 2B and 9S not only live in a dreary world of constant war, but the former has killed the latter every time he came close to learning the truth about how meaningless his role is. By repeatedly snuffing out his existence, 2B has been trying to keep 9S from losing his reason for existing, which in turn has caused her to suffer greatly. The fact that this irony can even exist is proof that existentialism as we know it now is not a permanent thing. While Earth may not have blindfolded otaku bait swinging their swords around in the far future (although most of you guys can dream), there’s no denying that time will eventually change things to the point that the past is unrecognizable.
Not even a likable guy like Emil is immune to what time can do to someone or something. Obviously I can’t speak for the one that everyone who played NieR truly got to know, but judging from the multiple machine copies of him (originally meant to fight the aliens, they ended up forgetting who the original characters actually were, to the point that one of them drives a shopping cart around whilst singing a hilariously annoying song), you may as well have given him a different name in Automata. Hell, the machines may as well forget about being human altogether and just act out under a new set of species behavior that we’ve never seen before. Or even act like the aliens that invaded Earth in the first place. So why don’t they?
Well, I’m not really sure, but I think there’s just something about having emotions that is intrinsically tied to being human. And no matter how much they change, humans will never be able to alter the very basics on which all philosophies are built. There are definitely other factors too, but the bottom line is that these machines eventually discovered their own form of emotion, and this made them unable to dismiss human activities as a viable method of existing, even if they don’t quite understand what said activities actually are.
The characters’ attempts to understand them pretty much serve as the meat that drives NieR: Automata’s plot. Every confrontation 2B has with a giant machine is basically a death match to see whether one ambiguous method of existence can outlast another. Everything Pascal struggles through regarding the people in his village is a test to see whether machines are actually capable of becoming the new human race. And everything revolving around 9S is the tragic result of what can happen if a new viable way of life isn’t discovered. Who is the one that can guide us down a set road that gives Earth’s new main inhabitants a defined future? Does said road even exist?
As crazy as Yoko Taro is, there’s no way he could ever know the answer to that. After all, he still lives in a world where individuals can think for themselves whilst drawing inspiration from others who do the same. What he can offer, though, are building blocks that we can hopefully preserve so that the future isn’t totally blind. I remember comparing NieR: Automata to Texhnolyze in my initial review of the former, and while that still holds true for the most part, the big and obvious difference is that despite the soul-crushing atmosphere they share, I never once felt that Automata’s story was going in the same direction the inhabitants of Lux went. The light at the end of the tunnel may be small, but I definitely saw it early on. And honestly, a larger light would not fit everything 2B, 9S, A2, and everyone who played the game went through to reach it.
I’m not really sure what the consensus on Ending E is, but I wouldn’t be surprised if people wondered what the ultimate takeaway was supposed to be from it. Even a purposefully ambiguous ending must at least leave those building blocks I mentioned earlier, usually in the form of a meaningful question. And “will these androids repeat their actions or not after their revival is complete?” generally isn’t considered an acceptable one after either of the endings involving that tower, where it either falls apart or launches an ark depending on whether you choose to play as A2 or 9S in the final confrontation.
Personally, I don’t think that question even matters. And I don’t think it matters whether these machines become the new human race either. What does matter is that despite (or indeed, because of) all the hardships they went through – watching these futuristic existential philosophies lead to the end of many existences whilst being challenged in the process – they were able to evolve to the point that the new methods of finding yourself will be better. And while whether or not they’ll still cause destruction is a different story altogether, just like Pods 042 and 153, I personally believe that the revived androids will one day come to realize themselves as individual beings. The hope they have right now may be small, but at least having it keeps the door to possibilities open, and finding even one speck of the stuff is an arduous task in and of itself.
I’m not confident in how long it will take for them to pass through the right door, but one thing is for sure: they have a lot of time to plan things out when they wake up.