«BANARAS LAW JOURNAL Cite This Issue as Vol. 42 No. 2 Ban.L.J. (2013) The Banaras Law Journal is published bi-annually by the Faculty of Law, Banaras ...»
ii) The second situation is using an AI entity of a old version which lacks the modern advanced capabilities of the modern AI entities.
This model is not suitable when an AI entity decides to commit an offence based on its own accumulated experience or knowledge. This model is also not suitable when software of the AI entity was not designed to commit the specific offence, but commit offences of general nature. Equally, model is also not suitable when the specific AI entity functions not as innocent agent, but as semi – innocent agent.20 The legal result of applying this model is that the programmer and the user are fully criminally liable for the specific offence committed, while AI entity has no criminal liability.
B. Natural – Probable – Consequence Liability model This second model of criminal liability assumes involvement of programmers or users in the AI entity’s daily activity even though they have neither the knowledge of the committed offence nor had any planned for the same nor had participated in any way in the Commission of that offence until it had already been committed.
The examples of such situation are – an AI in robot or software designed to function as an automatic piloting the plane. During the flight, the human pilot activates the automatic piloting system (which is the AI entity) and the programme has been initialized. At some time after activation of the automatic piloting system, human pilot sees an approaching storm and tries to abort (switch off) the activation of automatic piloting and return back to normal piloting. The artificial intelligence entity may understand 19 The AI entity is used as an instrument and not as a participant, although it uses its features of processing information. See, Cross, George R., & Debessonet, Cary G., An Artificial Intelligence Application in the Law: CCLIPS, A Computer Program that Processes Legal Information, 1 High Tech. L.J. 329 (1986).
20 Lacey Nicola, and Wells, Celia, Reconstructing Criminal Law – Critical Perspectives on Crime And the Criminal Process 53 (2nd ed., 1998).
24 THE BANARAS LAW JOURNAL [Vol. 42] the human pilot’s action as a threat to the mission and take action to eliminate the threat. It may cut off the air supply to the pilot or activate the ejection seats etc.. As a result, the plane may crash and killed the passengers and pilot. But the factual result is that human pilot and passengers were killed by action of Artificial Intelligence entities action. These actions were done according to the programme. Similarly, a second example is artificial intelligence software designed to detect threats from the internet and protect a computer system from these threats. After few days software has activated and figured out that the best way to detect such threats is by entering in to web sites and defines as dangerous and destroying any software does that, it is committing a computer offence, although the programmer did not intended for the artificial intelligence to do so.
The problems discussed in the above examples may not be dealt by first model legally suitable. The first model assumes mens rea, the criminal intent of the programmers or users to commit an offence through instrumental use of artificial intelligence entity’s capabilities. In the above examples, the programmers and users had no knowledge of the offence committed. They had also neither planned nor intention to commit nor participated in the commission of offence. Such problems (discussed in the above example) may suitably be dealt by the second model. Because this model is based upon the ability of the programmers or users to foresee the potential commission of offences.
According to this second model, a person might be held accountable for an offence, if the offence (committed) is a natural and probable consequences of that person’s conduct. In criminal law, the natural – probable consequence liability was used to impose criminal liability upon accomplices. The establish rule prescribed by U.S. courts is that accomplice liability extends to act as a perpetrator that was a natural and probable consequence21 of a criminal scheme which accomplice encourage or aided.22 Latter on, it has been widely accepted in U.S. statute. This liability model requires the programmer or user to be in a mental state of negligence. Programmer or user are not required to know about any forthcoming commission of an offence as a result of their activity, but are required to know that such an offence is a natural, probable consequence of their action.
A negligent person, in criminal law, is a person who has no knowledge of the offence, but a reasonable person should have known about it, since specific offence is a natural probable consequence of that person’s conduct.23The programmers or users of an entity are coming under this category. They should have known about the probability of the forthcoming commission of the specific offence even though they did not actually knowing about it. Negligence is, in fact, an omission of awareness or knowledge. The negligent person omitted knowledge and the act.
21 United State v. Powell, 929 F. 2d 724 (D. C. Cir. 1991).
22 Sayre, Francis Bowes, Criminal Responsibility for the Acts of Another, 43 Har. L. Rev. 689(1930); Pople v. Prettyman, 14 Cal. 4th 248, 58 Cal.
Rptr. 2d 827, 926 P.2d 1013(1996); Chance v. State, 685 A.2d 351 (Del. 1996); State v. Jackson, 137 Wash. 2d 712 976 P.2d 1229 (1999).
23 Fine, Robert P., and Cohen, Gray M., Is criminal Negligence a Defensible Basis for Criminal Liability? 16 Buff. L. Rev. 749 (1966); Hart, Herbert L. A., Negligence Mens Rea and Criminal Responsibility, Oxford Essays In Jurisprudence 29 (1961); Sturat, Donald,, Mens Rea, Negligence and Attempt, Crim. L. R. 647 (1968).
Liability of Artificial Intelligence Entities in Criminal Cases 2013 25 This liability model would permit liability to be predicated upon negligence, even when the specific offence requires a different state of mind. 24 This is not valid in relation to the person who personally committed the offence, but rather, is considered valid in relation to the person who was not the actual perpetrator of the offence, but was one of its intellectual perpetrators. Reasonable programmers or users should have foreseen the offence, and prevented it from being committed by AI entity.
However, the legal results of applying the Natural – Probable – Consequence Liability model to programmer or user is differ in two different types of factual cases.
The first type of case is when the programmers or users were negligent while programming or using the AI entity but had no criminal intent to commit any offence.
The second type of case is when programmers or users programmed or used the AI entity knowingly and wilfully to commit one offence through AI entity, but the AI entity deviated from the plan and committed some other offence, in addition or instead of the planned offence. The first type of the case is purely covered in negligence. The programmers or users acted or omitted negligently, there is no reason why they should not be held liable for offence of negligence.
The second type of case resembles with the basic idea of Natural Probable Consequence - Liability model. For example, a programmer programmes an AI entity to commit a violent robbery in a bank but the programmer did not programme the AI entity to kill any one. During the execution of the robbery, the AI entity kills one of the person present in bank whom resisted the robbery. In such cases, criminal negligence liability alone is insufficient for programmer or user. They should be made accountable as if they had committed knowingly and wilfully. In this case, they should be held liable for robbery as well as manslaughter or murder, which requires knowingly or intent.. 25 The question still remain: What is criminal liability of AI entity itself when the Natural – Probable – Consequence model is applied? There may be two possible outcome–
i) When AI entity acted as an innocent agent, without knowledge, it is not accountable for committed offence.
ii) When AI entity did not act merely as innocent agent, then in addition to criminal liability of programmer or user (as to the natural – probable- consequence model), the AI entity itself shall be held criminally liable for specific offence directly. The direct liability model of AI entities is the third model, as discussed below.
C. Direct Liability Model :
The third liability model does not assume any dependency of AI entity on programmer and user. This model only focuses on the AI entity itself. In order to impose criminal liability on any kind of entity two elements must be proved – first, actus reus (external element) and second mens rea (mental element). The mental element includes knowledge or intent of that entity. The relevant questions related with criminal
liability of AI entity is :
How it may be proved that these entities fulfill the requirements of the criminal liability? Do they differ from humans in this context? An AI algorithm might have 24 State v. Linscott, 520 A.2d 1067. (Me. 1987).
25 People v. Cooper, 194 Ill. 2d 419 (2000), People v. Weiss 256 App. Div. 162, 9N.Y. S. 2d 1 (1939).
26 THE BANARAS LAW JOURNAL [Vol. 42] different feature and qualifications from an average human, but such features are not requires to impose criminal liability. In order to impose criminal liability, internal and external elements are required to be proved. Similarly, to impose criminal liability on AI, it is essential to examine that whether they are capable of fulfilling both elements and if they fulfill them, then nothing prevent to impose criminal liability on them.
Normally, the fulfilment of external element requirement of an offence is easily attributed to AI entities. So long as AI entity controls a mechanical or other mechanism to move its moving parts, any act might be considered as performed by the AI entity.
Thus when an AI robots its electric and hydraulic arms and moves it, it might be considered an act, if specific offence involves such an act.
When any offence might be committed due to omission, it is simpler for AI entity also. Its action is the legal basis for criminal liability, so long as these had been a duty to act. If a duty to act is imposed upon the AI entity, and it fails to act, actus reus requirement of specific offence is fulfilled by way of an omission. The attribution of the internal (mental) element of offence to AI entities is the real legal challenge in most cases. The attribution of mental requirement differs from an AI technology to other. Most cognitive capabilities developed in modern AI technology are immaterial to the question of the imposition of criminal liability. Creativity is a human feature that some animals also have, but creativity is not a requirement for imposing criminal liability. The sole mental requirements needed in order to impose criminal liability are - knowledge, intent, negligence etc.. Knowledge has been defined as sensory reception of factual data and the understanding of that data.26 Most of the AI entities are well equipped for such reception. Sensory receptors of sights, voices, physical contact, touch etc. are not rare in most AI systems. These receptors transfer the factual data received to central processing units that analysed the data. The process of analysis in AI systems parallel to human understanding.27 The human brain understands the data received by eyes, ears, hands etc. by analysing that data. Advanced AI algorithms are trying to initiate human cognitive processes. This process is not different.28 Specific intent required to establish liability for specific offence. The perpetrator of the offence performs external act as a result of existence of such intent. This situation is not unique to humans. An AI entity might be programmed to have a purpose or an aim and to take action in order to achieve that purpose. This is specific intent of AI.
One might argue that human have feelings that cannot be initiated by AI software, even most advanced software- such as feeling of love affection, hatred, jealous etc. However, such feelings are rarely required to prove specific offences. Specific offence may be proved by knowledge of the existence of external element. But few offences require specific intent in addition to knowledge. Almost all other offences are satisfied by much less than this requirement like – negligence, recklessness, strict liability etc.
For imposing the criminal liability, both external and internal elements of specific offence are required to be proved. Why should an AI entity that fulfils all elements of 26 In this context knowledge and awareness are identical. See, United State v. Youts 229 F.3d 1312 (10th cir. 2000); State v. Sargent, 156 Vt. 163, 594 A.2d 401 (1991).
27 Boden, Margaret A.,, Has AI Helped Psychology? The Foundations of Artificial Intelligence 108 (Derek Partridge and Yorick Wilks, eds, 2006).
28 Dannett, Daniel C., Evolution, Error, and Intentionally, The Foundations of Artificial Intelligence 190 (Derek Partridge and Yorick Wilks eds., 2006) ; Chandraswkaran, B.,, What Kind of Information Processing is Intelligence? The Foundations of Artificial Intelligence 14 (Derek Partridge and Yorick Wilks eds., 2006).
Liability of Artificial Intelligence Entities in Criminal Cases 2013 27 offence be exempted from criminal liability even if both elements (external and internal) have been established – like infants and mentally ill persons? Social rational behind it is to protect infants from harmful consequences of criminal process.29 Do such frame works exist for AI entities? The original legal rational behind the infancy defence was that the infants are yet incapable of comprehending what was wrong in their conduct.
Could the same applied to AI entities? Most AI algorithms are capable of analysing permitted or forbidden.
The mentally ill are presumed to lack the intentional element of specific offence.
The mentally ill are unable to distinguish between right and wrong (cognitive capabilities)30 and to control impulsive behaviour.31 When an AI algorithm functions properly, there is no reason for it not to use all its capabilities to analyze the factual data received through its receptors.
However, an interesting legal question would be whether a defence of insanity might be raised in relation to a malfunctioning of AI algorithm, when its analytical capabilities became corrupted as a result of that malfunctioning.