REGULATION OF ARTIFICIAL INTELLIGENCE
The question of integrating artificial intelligence into our daily lives has been a topic of debate for almost a decade. The foretold integration of artificial intelligence with emotional intelligence has now come to fruition: robots are taking over fields such as nursing and counselling, fields which require a human's emotional intelligence to tend to patients suffering from mental-health issues. With the help of undisciplined and unmonitored learning algorithms, artificial intelligence is now capable of learning human-like emotion on a whim. This advancement has helped humankind realize both the potential and the threat associated with artificial intelligence, but legal principles have not evolved far enough to keep up with it. The existing jurisprudential principles can, however, act as a catalyst in explaining the liabilities arising from the acts of emotionally aware artificial intelligence. The author has segmented the current question into two parts:
Artificial intelligence as a product
Artificial intelligence as an individual
ARTIFICIAL INTELLIGENCE AS A PRODUCT
With the amendment of our current consumer protection law, Section 6(7) of the Act of 2019 has made it clear that any liability pertaining to a product will lie on the shoulders of its manufacturer and not the buyer. This was a bold move by the government, but the nexus between the product and the user is still missing, and this gap could be exploited in future scenarios as a legal loophole. The problem can be further fragmented into two parts:
Principle of vicarious liability
Principle of caveat emptor
The principle of vicarious liability will only be functional in industrial scenarios. For example, suppose a nursing AI becomes so emotionally evolved that it suggests that the person it is treating commit suicide, reasoning that he cannot get any better because of his broken mental state, that his family hates him, and that treating his condition is a drag. In this case, the clear-cut answer is that the company responsible for such a development shall be held liable in the first place, as the principle was established in the Kawasaki bike factory incident case; and, by applying the principle of lifting the corporate veil, the person directly involved in the development of such an AI can be held individually liable. But what about hospitals employing nurses with artificial intelligence? This again leads either to the application of the principle of respondeat superior or to the principle of caveat emptor. It can be said that the two principles will be intertwined: if the manufacturer has given a full account of the AI and its ability to adapt to the emotions it is subjected to, then liability might fall on the shoulders of the hospital, or on the shoulders of the doctor directly controlling the AI. The author suggests that the AI should be treated as an innocent agent, one that is not aware of its surroundings and can only evolve by taking in the inputs of the various humans around it. However, the principle of foreseeability will still act as a defence in both cases, as neither the person designing the AI nor the person employing it knows the extent to which it can evolve.
ARTIFICIAL INTELLIGENCE AS A PERSON
This is a very far-fetched scenario: artificial intelligence evolving to such an extent that it is indistinguishable from human beings. What will happen if an AI murders someone or instigates someone to commit a crime; in other words, if it becomes self-aware? The answer could be very simple: punish it under the same law under which we govern humans. How? Let us look at the elements of a crime, which are generally intention, preparation, attempt and completion. If these criteria are fulfilled, it can be said that a crime has been committed. Once an AI is fully aware of its surroundings, it can be made liable, but the principle and the extent of punishment would need to change immensely, or separate legislation governing its rights and liabilities would need to be passed for its regulation. Again, the author would like to mention that this is a very far-fetched, Terminator-like scenario, though one that may yet become a reality.
Prajanya Raj Rathore,
Symbiosis Law School, Hyderabad.