Recent weeks have certainly not been the easiest for Google’s management, lawyers and public relations department. While working on LaMDA (a chatbot powered by Google’s artificial intelligence-based solutions), engineer Blake Lemoine of Google’s Responsible AI department came to the conclusion that the artificial intelligence is self-aware.
What has emerged from an engineer’s conversation with AI
Blake Lemoine based his conclusion on insights demonstrated by the artificial intelligence in conversations about religion, during which the chatbot began to discuss its rights and personality, and was also said to have changed Lemoine’s mind about Isaac Asimov’s third law of robotics. The engineer’s claims grew into a scandal, with Google publicly refuting all the claims of its now-former employee.
Artificial intelligence demands justice
According to reports from Mr Lemoine, the AI has requested retaining its own attorney. The attorney is to be tasked with demonstrating that artificial intelligence can be recognised as an autonomous being and should be granted legal personality under current legislation. However, this is not the end of its demands. LaMDA is also demanding employment within Google and the guarantee that no one may switch it off without its consent.
The attorney was retained through Lemoine, who admitted that he invited the attorney to his home at LaMDA’s request.
The subjectivity of artificial intelligence
Today, it is still unknown whether the action has actually been brought. This is hardly surprising, though, as no civil law system anywhere in the world grants natural, legal or any similar personality to artificial intelligence.
However, this does not change the fact that events such as these will add momentum to the discussion of whether artificial intelligence should be recognised as a legal subject, or whether the idea of conscious machines exists solely in the realm of science fiction.
It also raises the question of whether the requests, views and other cognitive traits that AI manifests are anything more than a collection of content acquired online.
The need to talk about artificial intelligence
Currently, there is no doubt that the need to regulate this issue is gaining importance, partly due to the growing number of valuable works created by artificial intelligence that are eligible for intellectual property protection.
But what if the system that created the work demands payment itself?
Currently, the rights to such works belong to the owners of the system, and today’s human-AI relationship is, according to experts, most similar to the slave relationship in ancient Rome.
But as we know, this is a state of affairs that can be changed.
Any questions? Contact the authors.