Perspectives on Artificial Intelligence: Ethics, Consciousness, and the Future of AI

The recent rise of natural language processing (NLP) models has put artificial intelligence (AI) on the map. Programs like ChatGPT have transformed modern AI from a novelty into a normalized part of everyday life. Today, 27% of people use AI “every day,” while a further 28% use it “several times a week.”

The discourse surrounding AI has largely centered on job losses and dystopian predictions. This is understandable, as we’ve all been raised on a healthy diet of Terminator, Blade Runner, and Ex Machina.

However, fears about the future of AI shouldn’t stifle critical thinking. If we want to secure a better tomorrow with AI, it would be prudent to discuss the ethics and application of AI today.

Consciousness and AI

Consciousness is a subject that has vexed philosophers for millennia. Socrates held that the most important of all philosophical commandments was to “know thyself,” Descartes declared “I think, therefore I am,” and many modern thinkers continue to debate Freud’s assertion that our conscious self is driven by an unconscious that “cannot be apprehended by the conscious mind.”

David Chalmers neatly summarized the tension between the immediacy of consciousness and the difficulty we have defining it by stating “There’s nothing we know about more directly [than consciousness]… but at the same time it’s the most mysterious phenomenon in the universe.”

Things become even murkier when considering the consciousness of artificial intelligence.

On the one hand, AI models seem to pass the litmus test of consciousness set down by classical thinkers like Plato and Socrates. Programs like ChatGPT can be read as an echo of Plato’s theory of forms: they generate answers by drawing on abstract, idealized patterns distilled from their training data.

A knee-jerk reaction to AI hallucinations may lead us to believe that ChatGPT also passes modern, psychoanalytic definitions of consciousness. It’s quite tempting (and exciting!) to assert that these hallucinations are proof that NLP models have strange, unknowable, unconscious selves much like our own.

However, anything beyond a surface-level understanding of AI hallucinations shows that loose semantics are muddying the issue. Unlike our own unintelligible hallucinations, AI’s hallucinations can be traced to gaps in training data and the statistical way these models generate text. This suggests that AI would not qualify as a Freudian or Lacanian split subject and, therefore, could not be considered “conscious” in the same way that animals are.

Ethics and the Moral Machine

Artificial intelligence may not be fully conscious, but that doesn’t make it any less useful. In fact, moral machines and reasoning software may lead us into a new era of objective ethical thinking. 

However, before we can utilize machines in decision-making and anti-bias training, it’s worth considering the way we train machine learning programs. This sentiment is echoed by Ana Sandoiu, who asserts that “philosophers, ethicists, passengers, and pedestrians have all come a little too late to the moral debate party.”

When training AI, we need to put forward a unanimous, unbiased vision of ethics. Unfortunately, no such version of ethics exists (at least, not in the real world). Consequently, we are almost certain to accidentally train AI with unconscious biases.

Fortunately, we have extensive experience in dealing with unreliable ethical actors — ourselves! We are constantly at odds with our own beliefs and routinely take actions that betray our own ethics. For example, every large society agrees that murder is wrong, yet murder occurs in every large society.

Perhaps, then, we can model AI training on ethical training that works in real life. Could laws that govern humans be used to govern artificial intelligence, too? This, of course, raises questions about how punishment can be levied against AI models. However, it appears that an imperfect solution may be the only route forward if we want to utilize the power of AI while minimizing the damage it may do.

While AI ethics at large is frustratingly relative, intergovernmental bodies like UNESCO have created recommendations designed to ensure that AI works for the good of society, humanity, and the natural world. These recommendations are designed to promote human dignity, guard against bias, and protect the health of our ecosystems.

Changing the Social Order

If we lived in an egalitarian society, AI would help humans by freeing up time for leisure. In this sense, artificial intelligence would not be a threat to human health and happiness, as automated programs could take over the means of production without jeopardizing the financial security of the masses.

Unfortunately, our social structure is governed by free-market economics rather than ethical imperatives. This means that AI may actually exacerbate inequality rather than end the exploitation of working people.

U.S. economist and Nobel laureate Joseph E. Stiglitz explains that AI could have a “macroeconomic effect on the level of inequality” and that this may result in a fall in GDP. This is largely due to the prevalence and power of free-market economics, which leaves working people vulnerable during times of technological upheaval.

Fortunately, AI isn’t ready to take over completely — yet. In fields like marketing, for example, AI cannot replace human writers, as NLP models are incapable of taking constructive criticism, externally fact-checking themselves, or creating authentically unique creative content.

It remains to be seen how AI will affect white-collar workers. However, if wielded correctly, AI could spearhead a new revolution that decreases working hours, increases leisure time, and gives more people a chance to determine their own life path.

Conclusion

Artificial intelligence poses problems for philosophers on multiple fronts. At first glance, it seems obvious that AI is not conscious or capable of original thought. However, a deeper inspection suggests that AI may experience the same hallucinations and incongruities that define human consciousness — if human consciousness can be defined.

The exact nature of consciousness continues to evade our understanding. After all, what is it that is conscious? Is there a sense in which Descartes’ dualism is correct, and consciousness is its own essence apart from the material that makes up the brain? Or is that an illusion, and consciousness simply the collection of so many neurons firing in concert? Whatever the case, consciousness seems to have self-awareness as one of its traits, and if the materialist view of consciousness is correct, it remains to be seen how a collection of neurons produces a concept of the self and then turns to examine that concept.

In order to grasp whether AI is or can be conscious — and, therefore, whether it can be ethical in a self-determining sense — we must first answer the question of what consciousness and the self are. If AI is never truly self-aware and able to critique its hallucinations, is it truly conscious?

For now, we can set aside the larger, thornier issue of consciousness in favor of determining AI’s purpose as an actor in society. We want it to be a helpful tool that really does solve problems for the greater good. The key, it seems, is to approach AI from a teleological perspective and work backward. This ensures that AI is used to promote human dignity and the preservation of ecosystems.