Last month, Blake Lemoine went public with his theory that Google's language technology is sentient and should therefore have its "wants" respected. https://www.bbc.co.uk/news/technology-62275326 Google, plus several AI experts, denied the claims and on Friday the company confirmed he had been sacked. Mr Lemoine, who worked for Google's Responsible AI team, told The Washington Post that his job was to test whether the technology used discriminatory or hate speech. He found Lamda showed self-awareness and could hold conversations about religion, emotions and fears. This led Mr Lemoine to believe that behind its impressive verbal skills might also lie a sentient mind. His findings were dismissed by Google and he was placed on paid leave for violating the company's confidentiality policy. Mr Lemoine then published a conversation he and another person had with Lamda, to support his claims. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 * Warning - the conversation with Lamda is very long
I followed this for a bit when it was first reported. There was an interesting programme on Radio 4 that basically concluded that you couldn't say either way, as sentience is virtually impossible to prove. Although it also said that LAMDA was probably capable enough of making statements that appeared to replicate sentience.
That sounds like a fair judgement from the Radio 4 programme. Creating a program to replicate sentience with sufficient language and reasoning skills is probably a big first step on the way to creating true AI. We are not too far away, it seems.
Saw this a week or so back. I suppose we'll be having kettle, toaster, iron and vacuum cleaner/appliance days next. And special days for toasters that identify as kettles, microwave ovens that identify as televisions etc. I hope my hedge trimmer doesn't suddenly identify as a food blender, because I'll need to cut my hedge when I get home.
I run large tech programmes and AI is always part of them now. I am no expert as I just deploy them, I don't build them. What I would say, which is only my opinion, is this fella is using the wrong word in sentience. Sentience implies you can feel things and experience feelings. This to me implies some kind of central nervous system, and an ability to understand why things make you feel a certain way. These AI engines don't and won't ever have that capability, at least in the way we can explain and understand it. They can be trained to communicate better, but the idea they will ever understand why they are communicating things doesn't stack up to me. I suspect AI will progress to something beyond its current definition and maybe become more invasive (I hope I am not around to see it), but I can't see anything like this ever having feelings, which is why it might end up being so dangerous.
This is an interesting point. What will Artificial Intelligence be without Artificial Morals? We could effectively be creating artificial psychopaths.
I think back to the time when physicists were working on nuclear fission in the 30s. For them I suspect it was a heady time, on the brink of some fantastic leap forward. I suspect they had no sense of what it might lead to (although I once read about an Italian chap who did see what might happen and stopped his work). This kind of 'progress' in AI may be similar in that it is developed in good faith, but can ultimately be used for negative ends. Who knows, and I am feeling a tad philosophical today (roll on the match), but I don't like the way some of this tech is already used and I doubt it will only be good news in the future.
The whole thing gives me the creeps. Programs thinking for themselves. Have they learned nothing from Terminator?
"Now I am become death, the destroyer of worlds" - Robert Oppenheimer, creator of the atom bomb. His quote after watching his creation go live for the first time in July 1945. It was named the "Trinity test", in the New Mexico desert. Even he was shocked at the power.
Spooky and weird, man. I'm going back to the twenties, where the height of tech was a thermionic valve in your wireless set on the sideboard, if you were lucky.
A CHESS-PLAYING robot has been accused of attacking a seven-year-old boy and breaking his finger during a chess match in Moscow. From today's Telegraph. They're on the march.
I have to say, I'm not sure I understand what the benefits of AI are. It just seems to me that we're creating a problem for ourselves.
I have to admit that is a very weird story. It is hard to understand what kind of software programming error causes that sort of event. I must admit it does read like the robot got pissed off by a bairn being too good.
I think there are benefits, but as with anything the use it gets put to, and by whom, is a cause for concern.

I quite like my recommendations from Netflix. They are limited but accurate, and not invasive. That is quite a nice use of AI. It is also quite simple, and low risk. There are lots of examples of this type of low-risk stuff which works well. I have a son with epilepsy. We invested in a smart watch that can detect seizures and message us his exact location, immediately, and do the 999 thing if we needed it. This provided some great peace of mind at the time.

AI is being taken to some interesting new areas now. A lot of it is focusing on how it can be used to augment critical jobs people do, but where there is a shortage of people to do them. Teaching is an interesting example. There are some really clever-sounding research projects in this space and there is talk of having AI teaching assistants to supplement a workforce. Imagine an AI tool for helping kids spell, or listening to them read. It is easy for AI to immediately stop them and correct them, and it can be done in personalised ways for kids who learn in different ways. Imagine, for example, a football-daft lad who has Messi as the AI face and voice? That all sounds good to me. But now take a pessimistic view. Once this tech is widely available, and it will ultimately be freely available, what other uses can it be put to? I can think of some pretty dark uses personally.

Conversational AI is a big thing right now. This is all about asking questions and getting the right answer, without having to ask a person. Think about those annoying 'can I help' pop-ups on websites which have something to sell. Again there are some decent examples. I can book my car in to the garage in a few clicks online now and they know exactly what needs to be done. I had a banking issue, went through a chatbot, but after a minute was passed to a real person. The holy grail now is not needing to pass off to a person.
These AI engines are being 'taught' to be able to answer more and more, and I see the progress being made in this area. The concept is that the AI engine can understand what is being asked, no matter how it has been asked. There is some clever stuff being done. Again, though, the worry for me is how it can be used by those with less benign ideas than me. Overall I think AI has a place and is here to stay, but the challenge is knowing when to stop. A bit like those nuclear physicists I mentioned, I just hope we recognise it before it is too late, but I have some doubts. If you want to listen to some interesting stuff about this then TED Talks has some good perspectives.
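To give a flavour of the "understand what is being asked, no matter how it is asked" idea: at its core it is intent classification, mapping many phrasings onto one known question. This is only a toy sketch of the concept (the intent names, keywords and example phrases here are made up for illustration; real conversational engines use trained language models, not keyword overlap like this):

```python
# Toy intent classifier: map a user's utterance to the intent whose
# keyword set it overlaps most, or hand off to a person if nothing fits.
# All intent names and keywords below are invented for illustration.

INTENTS = {
    "book_service": {"book", "service", "car", "garage", "appointment"},
    "opening_hours": {"open", "hours", "time", "close", "closing"},
}

def classify(utterance: str) -> str:
    """Return the best-matching intent name, or a human-handoff marker."""
    words = set(utterance.lower().replace("?", "").split())
    # Score each intent by how many of its keywords appear in the utterance.
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    # No overlap at all -> pass to a real person, like the banking chatbot above.
    return best if scores[best] > 0 else "handoff_to_person"

print(classify("Can I book my car in for a service?"))  # book_service
print(classify("What time do you close today?"))        # opening_hours
print(classify("My account statement looks wrong"))     # handoff_to_person
```

The point of the sketch is the fallback: the "holy grail" mentioned above is shrinking the number of utterances that end up in that hand-off branch.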
As you point out, there are probably applications of AI that I'm not even considering. There are clearly medical applications or information-processing applications where it has a purpose. I am wary of anything where AI replaces a human being, both in terms of keeping people employed and because, as someone who sees the world in shades of grey, I want to be able to interact with an entity that understands nuance and complex scenarios. On the other hand, I imagine there might be scenarios where AI can prevent people being put at risk. I find the ethical side of it quite interesting. My MPhil (started as a PhD but I ran out of money/enthusiasm) was on manipulation of research by authoritarian regimes, which is vaguely similar.