The Only Difference Between Humans and Robots

A fun question is: what makes humans different from robots? Ignoring all other animals and focusing just on humans and robots, or humans and computers if you will, we can quickly turn to the Turing test. That test was designed to distinguish an intelligent machine from a human through a series of questions. But it focused on the thinking process of humans and computers, and I’m more interested in the subjective experience.

If we try to pinpoint what exactly creates the human experience, I fall back not on the way we think, not the way we move, not the way we sound, but on empathy, sympathy, and compassion. I feel that a computer program, almost by definition, cannot have compassion. Compassion is an urge to help other people. I suppose that you could program a computer to recognize when someone needs help, and that program could try to act in ways that help others, but could a computer ever feel an urge?

How about emotions in general? Can you create an emotional experience through circuits? It's a hard question, because feelings are often a synthesis of physical experiences merged with an elevation through the mind’s awareness, consciousness, and understanding. For example, when someone is in love they often say, “I have butterflies in my stomach.” But without a gastrointestinal system to know what it feels like when your stomach lifts, could that experience be transferred to a computer?

I suppose that every experience exists inside of a brain, which is really just a bunch of circuits, and so the answer could be that yes, there is no difference between a computer program and a human. But coming back to compassion, can you have a machine that feels compelled to help others when they are in need? In order to have compassion, that machine would need to have empathy, which would allow its own sensations to mimic those of someone else, so that it could accurately determine when that other thing or person needed help. A computer program could give sympathetic words, so I don't think sympathy as an observable action is itself unique to humans. And a computer program can learn empathy, but it would probably need the combination of sympathy, empathy, and compassion for it to resemble a human.

Then again, a human who lacks sympathy, empathy, and compassion would very closely resemble a robot. A sociopath with expert calculating and analytical abilities resembles a robot. One can ask if a super analytical brain that has the power of Big Blue and calculates its whole life is synonymous with a robot, and if so, then what makes this human special, or distinct from a computer that can perform the same algorithms and calculations?

If the entire human experience can be captured by an artificially intelligent machine, then humans are neither essential nor unique. Our entire biological beings could dissolve, leaving in their places a series of calculating circuits that learn and build without our input. Entire worlds could exist simply for these intelligent machines, but what would be the point? Could meaning, desire, emotion, and understanding ever be achieved by these machines? If so, then humans are not special due to their biology, but rather temporarily limited by their biology. And once we can shed this husk of a body, we can be better. But who determines better? More accurate? Faster processing? What’s the point? Why does it matter if there are machines on a planet building things, if there is no meaning? Can robots create their own meaning? Does that mean the end of humans? And does it mean that the machine’s programming limits and determines its value?

I can’t help but feel an instinctual pushback against this dystopian future, like so many writers before me. They describe a world devoid of emotions, of joy, of meaning and purpose. Instead, it’s a world defined by logic, efficiency, and production. Questions about meaning are irrelevant, and represent a flaw in the coding. What difference is there between an empty planet like our moon, and a planet filled with robots moving around? Is the movement itself meaningful? If so, then everything inherently has meaning, because everything is in motion of some kind. If that's true, then life and death are irrelevant, the distinction between man and machine is irrelevant, and concepts like meaning and purpose are irrelevant. This conclusion is about as close to nihilism as I can imagine, and I choose to create meaning in my life, rather than destroy it.

A life lived without meaning can be a very empty and anesthetized life. I choose to find meaning in our flaws, our emotions, our empathy, our sympathy, and our compassion. I reject the idea that efficiency and accuracy are everything, and instead believe that our subjective experiences are important to giving us purpose and meaning in life. If I am only my exterior objective actions, then a robot can replace me, and with more accuracy. But if I am both my inner subjective experiences as well as my actions, then regardless of how many machines may replace me, my life can still have meaning and purpose to me. That’s something that can’t be taken away.

Jess

A deep thinker, sharing his abstract thoughts with the world. 
