Can AI ever be sentient? On LaMDA and the wider debate

Thu, Jun 23, 2022 6-minute read

If you’re reading this then chances are you’re already well aware of the recent AI-themed story: Blake Lemoine, a Google engineer who worked on the LaMDA chatbot, was suspended from his job for claiming that the AI was sentient. (Well, actually, he was suspended for sharing company secrets, in the form of the chat transcript.)

If you’re not up to speed, then you won’t have to look far to find the story. I first read about it on The Register.

The huge interest was initially sparked by Google’s handling of the situation - everyone loves a little-person-vs-big-corp scenario - but everyone soon got bored of that and piled in on the actual claim: was the chatbot indeed sentient?

The tech community was very, very categorical and vocal in its agreement: it was definitely not sentient (with a strong undertone of “and whoever believes it is is a dumbass”). But the commentary has raged on.

One of the local newspapers here in Guernsey picked up on the story - out of curiosity and intrigue more so than anything - and has now published two articles on the topic after interviews: the first with Martyn Dorey and the second, published today, with yours truly. It was a phone interview and I did not review it before publication, so whilst there are one or two inaccuracies, and some editorial liberty has been taken, it is nevertheless a decent article and broadly in line with my view.

Most people know that I am highly skeptical of artificial intelligence as a concept. Not because I doubt the technology, but because of the way it is marketed. My PhD, back in the late 90s/early 00s, used many of these techniques - artificial neural networks, polynomial regression, and so on - which back then were just algorithms (and weren’t exactly new even then), but which are now referred to as machine learning and artificial intelligence.

The advancement of these techniques over the last 20-30 years has been nothing short of spectacular, but not in the least bit surprising. We have, in simple terms, got a lot better at computers. This is down to all sorts of reasons spanning the whole industry, and it is that collective growth that enables us to push the boundaries of what we can do. In all aspects.

It’s just that “artificial intelligence” sounds really sexy. And if you slap a robot face on it, then it’s something you can sell to Hollywood just as easily as you can sell it to investor vultures. It’s a win-win (for them).

Most ‘artificial intelligence’ nowadays is just advanced pattern matching. Sophisticated classification algorithms. Advancements will continue to come, the technology will get better, and we will do more. This is obvious and inevitable.
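
To make that concrete, here’s a minimal sketch of a nearest-centroid classifier - an entirely hypothetical example of my own, not anything from LaMDA or Google. Strip away the branding and a lot of what gets sold as ‘AI’ boils down to this sort of thing: matching new inputs against patterns averaged from past examples.

```python
# A minimal sketch of a nearest-centroid classifier (hypothetical
# example, not anything from LaMDA). "Learning" here is just averaging
# past examples; "deciding" is just measuring which average a new
# input sits closest to.

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    n = len(points)
    return [sum(xs) / n for xs in zip(*points)]

def train(labelled):
    """labelled: dict mapping class label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in labelled.items()}

def classify(model, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Toy usage: two clusters of 2-D points.
model = train({
    "spam": [[5.0, 1.0], [6.0, 2.0]],
    "ham":  [[1.0, 5.0], [2.0, 6.0]],
})
print(classify(model, [5.5, 1.5]))  # -> spam
```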

My gripe, though, is more with this relentless anthropomorphisation of it all. The depiction of robotic AI in the media is long- and well-established, and for theatrical purposes is mostly understandable. But from a real-use perspective, what is the value in it?

One reason is the way it changes how we interact and interface with technology. An area of particular interest is NLP - natural language processing - and the way devices (smart speakers, for example) can be made to talk in quasi-human ways, which does unlock a lot of opportunity. Most people are aware of ‘Google-fu’ - those skills you acquire(d) over the years for asking Google the right question in order to get the results you want. If NLP can be improved to the point where it can interpret your query in actual common language - and if Siri’s ability to just play some fudging music when I ask it to is anything to go by, we are still a long way away - then you can see this as one genuine reason to humanise this aspect of technology, as it will make it accessible to vastly more people. You could even go so far as to accept that in some scenarios, such as in care, having ‘more human’ devices (in both looks and interactions) would make these technologies more accessible and acceptable (compared with a HAL-style glowing red eye).
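
As a crude illustration of why this is hard - a toy, entirely hypothetical example, nothing like a real assistant’s pipeline - consider matching a free-form utterance to an intent with keyword patterns. It works right up until anyone phrases the request slightly differently, which is exactly the gap that better NLP has to close:

```python
# A toy intent matcher (hypothetical, not how Siri or Google Assistant
# actually work). Keyword patterns map an utterance to an intent;
# rephrase the request and it falls over.

import re

INTENTS = [
    (re.compile(r"\bplay\b.*\bmusic\b"), "play_music"),
    (re.compile(r"\bweather\b"), "get_weather"),
    (re.compile(r"\btimer\b.*\b(\d+)\b"), "set_timer"),
]

def parse(utterance):
    text = utterance.lower()
    for pattern, intent in INTENTS:
        match = pattern.search(text)
        if match:
            return intent, match.groups()
    return "unknown", ()

print(parse("Just play some music, please"))  # ('play_music', ())
print(parse("Set a timer for 10 minutes"))    # ('set_timer', ('10',))
print(parse("Stick some tunes on"))           # ('unknown', ()) - oops
```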

But beyond that, you have to ask the question: what, really, is intelligence? In either the human sense or the artificial/robotic sense? You could ask LaMDA an obscure question about virtually any topic and it would likely know the answer. But how is that actually intelligence? How is it even human? Other than for those blessed with eidetic memories, being able to regurgitate bizarre details is virtually the epitome of un-human.

That’s not to say it isn’t a valuable technology to have - absolutely the opposite (even if one has to account for bias or fake news). But it is not exactly ‘intelligence’. This is why Google & co are working on the ‘sensibleness’ - their word - of their chatbots: the ability to reply in context is what makes a human interaction actually human, so that rather than just spouting a fact, the bot provides a reply that is more believable.

Even more cynically, though, the Turing Test is regularly referenced as the test for these devices - whether or not the machine can fool an operator into believing it is human. But in the modern era of fake news, Cambridge Analytica and the like, how can that, philosophically, still be an acceptable motivation?

It is inevitable that the technology will continue to develop to the point that even the interaction itself becomes wholly indistinguishable from ‘normal’ human interactions.

But this doesn’t answer the question of why we should care so much about the technology being human. Why do we want technology to become sentient? A technological marvel it might be, but what actual benefit would it bring us? We’ve all seen the Terminator films, and the Skynet scenario (as referred to in my LinkedIn profile) is something we should all be very scared of. But sentient technology, as the article alludes to, brings a whole other level of issues… which was the main theme of the C4 series Humans.

We’ve got nearly 8bn people on Earth - we don’t exactly need any more, and we certainly don’t need to be trying to replicate any of the millions of flaws that humans have.

We shouldn’t waste time on sentience. It’s easy to overthink what ‘thinking’ really is - it’s making a decision based on the facts before you and your knowledge. This is what the current state-of-the-art AI does. And this is all it needs to do. For years the outcomes have been black and white, but as the technology advances, the decisions are becoming more grey through applied context. But do we really want to mimic the behaviour of humans, who make bad and unpredictable decisions because they’re tired or hungover? I think not.
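
To put that in concrete terms, here is a minimal sketch - my own hypothetical example, nobody’s production system - of ‘thinking’ as a decision over facts plus knowledge, where folding in context turns a black-and-white answer into a graded one:

```python
# A minimal sketch (hypothetical example): a decision made from facts
# plus knowledge. Early systems answered black or white; adding
# context turns the output into a graded score instead.

def decide(facts, context):
    """Score whether to flag a card transaction, 0.0 (no) to 1.0 (yes)."""
    score = 0.0
    if facts["amount"] > 1000:          # knowledge: large sums are riskier
        score += 0.5
    if facts["country"] != context["home_country"]:
        score += 0.25                   # context shifts the judgement...
    if context["travelling"]:
        score -= 0.25                   # ...and can shift it back again
    return max(0.0, min(1.0, score))

facts = {"amount": 2500, "country": "FR"}
print(decide(facts, {"home_country": "GB", "travelling": False}))  # 0.75
print(decide(facts, {"home_country": "GB", "travelling": True}))   # 0.5
```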

We should be focused on technology that achieves new things without needing to simulate a human. Let it find patterns, classify data, make decisions - and get a lot better at all of it. Save lives. Make life easier. These technologies will need the ability to ‘understand’ what it is to ‘be human’.

They just don’t themselves need to ‘be human.’
