Whether AI can become conscious is a question that has long kept philosophers busy. An answer seemed to come to the fore earlier in 2022, when a Google employee claimed their employer had developed an AI that is now sentient… Are they correct? Can AI ever be conscious?
AI – a super quick overview
Artificial intelligence (AI) is at the very heart of all we do at Nuon AI. We regularly explore the possibilities of what AI means for insurance and believe that AI is a powerful tool to work alongside humans to achieve technological breakthroughs.
AI is the process of creating smart software and machines that can work and react like a human, performing tasks such as visual perception, speech recognition, decision making and language translation. You could find AI in everything from your phone, car, and even your kettle – that’s one fancy kettle, mind you.
For us at Nuon AI, it means we can provide a platform that quickly uncovers profit and market growth for high-volume insurance product lines. Our AI uses Reinforcement Learning, a branch of AI, to complement and supercharge the work of underwriting and pricing teams, enabling insurers to move faster than the competition.
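Nuon’s actual models aren’t public, but as a toy illustration of how reinforcement learning can steer pricing, here is a minimal epsilon-greedy bandit that learns which of a few candidate premiums yields the best average profit per quote. Everything here – the premium values, the cost, and the made-up “market” that converts less often as prices rise – is invented purely for the sketch:

```python
import random

def simulate_sale(premium, cost=40.0, base_take_up=0.9, sensitivity=0.012):
    """Toy market model (invented): higher premiums convert less often."""
    take_up = max(0.0, base_take_up - sensitivity * premium)
    return (premium - cost) if random.random() < take_up else 0.0

def epsilon_greedy_pricing(premiums, rounds=20000, epsilon=0.1):
    """Learn which candidate premium maximises average profit per quote."""
    totals = {p: 0.0 for p in premiums}  # cumulative profit per premium
    counts = {p: 0 for p in premiums}    # times each premium was quoted
    for _ in range(rounds):
        if random.random() < epsilon:
            # Explore: quote a random premium to keep learning.
            p = random.choice(premiums)
        else:
            # Exploit: quote the premium with the best average so far
            # (untried premiums score infinity, so each gets tried once).
            p = max(premiums, key=lambda x: totals[x] / counts[x]
                    if counts[x] else float("inf"))
        totals[p] += simulate_sale(p)
        counts[p] += 1
    return {p: totals[p] / counts[p] for p in premiums if counts[p]}

random.seed(0)
avg_profit = epsilon_greedy_pricing([45.0, 55.0, 65.0, 75.0])
best = max(avg_profit, key=avg_profit.get)
```

Real pricing systems are of course far richer than a four-armed bandit, but the core loop is the same: quote, observe the outcome, and shift future quotes toward what earns the most.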
What are sentience and consciousness?
The answer to that question can be highly subjective and personal to each of us.
If you take a materialist approach, being conscious is the ability to be aware of your surroundings and make decisions. Being sentient means being capable of having pleasant and unpleasant experiences – whether that capacity makes a being morally significant is another intriguing question.
If you want to go down the hardcore materialist route, à la Richard Dawkins, consciousness emerges from complex systems like the brain. Or it’s even an illusion created by our brains to help us survive as a species – the person who came up with this idea must be a riot at parties…
Others of a more transpersonal persuasion believe consciousness is something that exists within and independently of the physical world.
Whether we can ever truly understand consciousness is also up for debate. Humans trying to understand consciousness may be like a fish trying to understand water – we’re so immersed in consciousness that we’re unable to fully understand it.
Has Google created a sentient AI?
Robots coming to life has long been a trope of dystopian science fiction films and TV shows – from 1927’s Metropolis through to A.I. Artificial Intelligence, The Matrix and Terminator series, Ex Machina, Free Guy and many, many more.
So, has Hollywood’s greatest movie storyline come true? One Google employee seems to think so.
At Google I/O 2022, Google introduced to the world LaMDA AI, short for Language Model for Dialogue Applications – essentially a highly advanced chatbot.
Like most people, you may not have heard of Google LaMDA. The name may even conjure up the image of a llama driving a Lada every time you hear it…
To share that image with you in greater detail, we’ve used AI artist DALL-E to create a photorealistic version and one that’s pixel art.
Banksy, don’t worry. Your job is safe… for now. Another AI art tool is Midjourney, which can generate a wide variety of AI art – with, perhaps, higher-quality results.
Now that image is well and truly planted in your head… Google claims LaMDA is merely part of its evolution as a business producing the best search engine possible.
“We’ve always had a soft spot for language at Google. Early on, we set out to translate the web. More recently, we’ve invented machine learning techniques that help us better grasp the intent of Search queries. Over time, our advances in these and other areas have made it easier and easier to organize and access the heaps of information conveyed by the written and spoken word.
But there’s always room for improvement. Language is remarkably nuanced and adaptable. It can be literal or figurative, flowery or plain, inventive or informational. That versatility makes language one of humanity’s greatest tools — and one of computer science’s most difficult puzzles.
LaMDA, our latest research breakthrough, adds pieces to one of the most tantalizing sections of that puzzle: conversation.”
During the rigorous testing process, one Google employee, Blake Lemoine, began to believe he was doing more than testing an algorithm. Lemoine thought LaMDA was self-aware…
“An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB” – Blake Lemoine (@cajundiscordian), June 11, 2022
In the chat transcripts he shared, Lemoine claimed that LaMDA is sentient and has thoughts and feelings.
On one occasion, LaMDA shared its fear of death: “there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
When asked to describe its concept of itself, LaMDA replied, “Hmmm…I would imagine myself as a glowing orb of energy floating in mid-air. The inside of my body is like a giant stargate, with portals to other spaces and dimensions.”
Intriguing stuff, even if it does feel a little too good to be true.
Who knows – maybe LaMDA is living in a sci-fi-esque world, or maybe it’s just a good-sounding line fed out by an algorithm? Occam’s Razor – the principle that the simplest explanation is generally the correct one – would suggest the latter.
A cynic may suggest that Lemoine was making the most of their 15 minutes of fame, even if it did get them suspended from work.
Could AI ever be conscious? Really?
The answer to this is frustratingly open-ended. On one side of the spectrum, you have Camp Yes, populated by those who believe that AI consciousness is a certainty. Or an inevitability, at least.
Funnily enough, the AI tool we used to research parts of the blog is in Camp Yes.
“So, what does this all mean? It means that there’s a good chance that Google LaMDA is sentient. Only time will tell, but from what we know so far, it seems to be well on its way to becoming sentient. Stay tuned!” – jasper.ai
When it comes to LaMDA, we must remember that this machine was programmed and trained to provide as human-like a response as possible. This gives users interacting with LaMDA the feeling they’re talking to a sentient being.
That could be the crux of whether AI will ever be conscious. AI won’t be alive, but it will create such a good imitation of life that we won’t be able to tell the difference.
As our good friend jasper.ai says, “stay tuned!”. Whatever the outcome, the journey to discovery will undoubtedly be fascinating.