Can Mario and Luigi ever know existential angst?

I was recently involved in an online forum discussion about the following interesting AI-related news:

See “Scientists are teaching Super Mario to think and feel” and “Computer Scientists Generate A Self-Aware Mario That Can Learn And Feel”.

It seems that a research group in Germany has remade the famous Super Mario character so that it now has adaptive capabilities, can learn from voice commands, and displays various human emotions such as sadness, greed, and happiness.

The question was: Is the simulation of emotional states equivalent to actually experiencing emotions?

Or as I like to think of it, can Mario and Luigi ever know existential angst? Can they ever grow into angst ridden emo cyber beings? Can they know joy, despair, euphoria, or pain? Can they ever really know what it’s like?


Different schools of thought within the philosophy of mind would answer this question differently. I will try to describe the answer that each position implies. Although the various positions on the mind-body problem are richer and far more diverse than the four I describe below, I believe these four basic positions are sufficient to address the question.

  • Behaviorism is based on the idea that a scientific theory can only describe observable phenomena; since we can never observe emotions and beliefs directly, we can only observe behavior. On this view, all mental and emotional states are reducible to behavior, and to say anything more about them is to make unscientific statements. From this point of view the “Mario Lives!” program does experience emotions, because it behaves according to those emotions, and there is nothing more to emotions than the corresponding behavior. Behaviorism was popular in the 1930s and 1940s but has since fallen out of favor because of a serious challenge: it cannot properly explain situations where someone has an emotion but is hiding it, nor situations where the same behavior can be explained by more than one emotion (is that person crying tears of sadness or tears of joy?). Put another way, a behaviorist has no way of telling whether someone is really experiencing pain or is simply a very good actor.
  • Functionalism appeared in the 1950s and attempts to solve the problem that behaviorism faces. Functionalism associates mental states with internal states, similar to the internal states of a computer or a Turing machine. These functional states might or might not lead to specific behaviors, but they stand in causal relationships with sensory inputs, with behavioral outputs, and with other functional states. Thus learning that a loved one has died will lead to a functional state of sadness, and a functional state of sadness can eventually lead to a functional state of depression or to certain beliefs, and so on. A functionalist will agree that the “Mario Lives!” program does experience emotions, provided its designers equip it with the proper functional states.
  • Type-Identity Physicalism (or Type Physicalism) makes the very strong claim that emotions and beliefs correspond exactly to the physical brain states of the person experiencing them, based on the idea that only the physical/material world has any concrete existence. The usual example is the sensation of pain: pain just is the firing of specific C-fiber nervous tissue and nothing else. Therefore “Mario Lives!” cannot experience emotions, because it doesn’t have the corresponding neural structures that these emotions are identified with. Type-identity physicalism is not very popular these days, because it is considered too strong: on this approach, even other living beings whose neural structure differs from that of humans cannot experience pain, since the definition of pain is so specific. See the question of multiple realizability.
  • Dualism is the position that mental states are of a different nature altogether than physical/material states (possibly a different substance, or part of a different realm). Descartes was one of the first to provide a serious philosophical argument for (substance) dualism with his famous “I think, therefore I am”. Although Descartes’ version of the argument is no longer popular, there are modern variations of it, such as Saul Kripke’s and Thomas Nagel’s, which have received considerable attention. For a dualist, the “Mario Lives!” program cannot experience emotions, since it lacks the specific properties or substance that make something mental as opposed to physical.
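The functionalist picture of internal states standing in causal relations to inputs, outputs, and each other can be sketched as a simple state machine. This is purely illustrative and not the actual “Mario Lives!” code; the state names and transitions are invented for the sake of the example:

```python
# A minimal sketch of a functionalist "emotional state" machine.
# NOT the actual "Mario Lives!" implementation -- states and
# transitions here are hypothetical illustrations only.

class EmotionalAgent:
    """Internal states stand in causal relations to sensory inputs,
    behavioral outputs, and other internal states."""

    def __init__(self):
        self.state = "content"

    def perceive(self, event):
        # Sensory input causes transitions between functional states.
        if event == "loved_one_died":
            self.state = "sad"
        elif event == "found_coin":
            self.state = "happy"
        elif event == "sad_for_a_long_time" and self.state == "sad":
            # One functional state can lead to another functional state.
            self.state = "depressed"
        return self.state

    def behave(self):
        # A functional state may (or may not) produce specific behavior.
        outputs = {"sad": "cry", "happy": "jump",
                   "depressed": "withdraw", "content": "wander"}
        return outputs.get(self.state, "wander")

agent = EmotionalAgent()
agent.perceive("loved_one_died")
print(agent.state, agent.behave())  # sad cry
```

Note that, unlike behaviorism, the state here is distinct from the behavior: the agent could be in the "sad" state even if its `behave` method were wired to hide it.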

From the article and the YouTube videos, it seems that the authors based their claim that Mario “experiences” emotions on purely functionalist criteria: Mario is explicitly equipped with internal emotional states that accord with functionalism’s conception of mental states. (On a side note, the Star Trek TNG episodes and movies where Data develops emotions also seem to be premised on functionalist criteria: Data simply has to be programmed and/or modified to start experiencing emotions.) So whether the Mario simulation experiences emotions or not really comes down to whether you agree with functionalism or not.

Here’s a more detailed way of looking at the question of whether the Mario program “experiences emotions”: Does the “Mario Lives!” program have subjective first-person experience? Is it capable of introspection? Is a functionalist emotional state really equivalent to feeling, to experiencing color, pain, love, or the flavor of maple walnut ice cream, first hand?

The above-mentioned sensations are what philosophers of mind call qualia, and the question now becomes part of the overall debate about the existence of qualia, about what David Chalmers calls the hard problem of consciousness, and about whether functionalism is sufficient to describe subjective first-person experience.

The best way to understand the debate about qualia and the hard problem is to look at a strong objection to functionalism, best illustrated by the story of Mary the dentist.

This objection was first put forward by Frank Jackson in his paper “Epiphenomenal Qualia” (and defended further in “What Mary Didn’t Know”). In the original argument, Mary is a neuroscientist and the sensation in question is color. I propose a variation on the argument in which Mary is a dentist and the sensation in question is tooth pain, since it is more plausible and relatable than Jackson’s original version:

  • Imagine a dentist called Mary, who is the top dentist in her field. She has aced every dentistry topic there ever was, can cure any patient who comes to her with a dental problem no matter how bad that patient’s condition, and has studied every biological, neurological and chemical aspect of what a toothache is. In short she knows every single physical and functional aspect of toothaches.
  • However, Mary has been an avid tooth brusher ever since she was a kid, and she has never ever had a toothache in her life.
  • Most would agree that because of this, she doesn’t really know what a toothache is like, since she has never had the subjective experience of one. She knows all there is to know about toothaches, but she lacks the first-person experience of a toothache.
  • If Mary were to suddenly stop brushing her teeth and start eating copious amounts of chocolate, within a few months she would develop a serious toothache. This new experience constitutes additional knowledge of toothaches that she didn’t have before she went through the subjective experience. It is argued that functionalism has no way of accounting for this additional knowledge that Mary gains from subjective experience. Functionalism is therefore incomplete, and there must be a purely mental (or non-physical) aspect to things such as pain, color, and love. This purely mental aspect is called qualia.

The problem is that qualia, the “what it’s like” part of a toothache, are impossible to quantify or describe in any terms other than all the functional information that Dr. Mary had from the beginning. We know that the qualia of a toothache are something more than the functional information about a toothache, but we have no way of specifying what that additional information is, other than that it exists. This is what David Chalmers calls the hard problem of consciousness, and it is why some people consider qualia to be an argument for dualism.

One possible response to this argument against functionalism comes from higher-order theories of consciousness: thoughts, beliefs, perceptions, and so on are 1st-order mental (functional) states, while subjective experience is described by 2nd-order mental states, that is, functional states about other functional states. Per such an approach, the “Mario Lives!” program would have to have not only functional states corresponding to various emotions, but also higher-order (2nd-order) functional states about feeling those emotions.

In particular, one family of higher-order theories of consciousness is the self-representational theories of consciousness, popularized by Douglas Hofstadter in his famous Gödel, Escher, Bach: An Eternal Golden Braid. The designers of “Mario Lives!” could try to implement a Hofstadter-style strange loop within their emotive Mario.
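The distinction between a 1st-order emotional state and a 2nd-order state about that state can be sketched in a few lines. Again, this is a hypothetical illustration of the idea, not anything from the actual “Mario Lives!” architecture:

```python
# A minimal sketch of a higher-order (2nd-order) functional state:
# a state whose content is another of the agent's own states.
# Hypothetical illustration only -- not the "Mario Lives!" design.

class HigherOrderAgent:
    def __init__(self):
        self.emotion = None   # 1st-order functional state
        self.meta = None      # 2nd-order state: a representation of
                              # the agent's own 1st-order state

    def feel(self, emotion):
        # A 1st-order state, directed at the external world.
        self.emotion = emotion

    def introspect(self):
        # The 2nd-order state is directed at the 1st-order state
        # itself, not at the external world.
        self.meta = f"I notice that I am {self.emotion}"
        return self.meta

agent = HigherOrderAgent()
agent.feel("sad")
print(agent.introspect())  # I notice that I am sad
```

On a higher-order theory, an agent with only the `feel` method has emotions; only an agent that also runs something like `introspect` could be said to experience them.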

Based on what I have read about the Mario simulation, the designers didn’t equip it with any 2nd-order functional states or self-representational states; they just equipped it with 1st-order emotional states. Put simply, the Mario program has emotions, but it doesn’t experience emotions. It is definitely not self-aware.

To get it to experience emotions, the team would have to go further than just providing it with internal emotional states: they would also have to provide it with a way to introspect, to view its own mental states. But then Mario would not only start experiencing emotions, it would also run a serious risk of becoming truly self-aware, in the sense of achieving self-perception, with all the risks that that entails… Instead of the rise of the machines coming from a military super-AI, a bunch of self-aware and angst-ridden emo-Marios and emo-Luigis would take over the world and become our overlords, jumping over and squashing anyone who tries to oppose their rule. Or things could be worse: someone could decide to equip the Pokemon Go characters with 2nd-order and self-representational functional states, making them achieve self-awareness. That would be the end of humanity for sure.



Categories: Artificial Intelligence, Data Science & Pattern Recognition, Machine Learning, Philosophy, Uncategorized
