Understanding the legal risks associated with artificial intelligence

October 1st, 2020


This article introduces the fourth industrial revolution in broad, philosophical terms, exploring the application of innovation and automation. At its core, artificial intelligence (AI) is the science of teaching computers how to learn, reason, perceive, infer, communicate, and make decisions like humans do (Sterling Miller ‘Part II: The future of artificial intelligence’ (static.legalsolutions.thomsonreuters.com, accessed 31-8-2020)). Artificial intelligence is the peak of today’s technological age as we push towards superintelligence. Not long ago, such technology was perceived as ‘science fiction’. Now, however, fundamental changes are being made to the way people work, with machines taking over the drudge work efficiently and at a speed quite impossible for humans to replicate.

Like any human or human-made system, AI is not infallible. For this reason, its deficiencies may have detrimental consequences for society. One challenge is that important decisions, which affect our livelihoods, are delegated to imperfect systems, often with no transparency about how those decisions are reached when automated systems are functioning, for example, how they conduct legal research (Aalia Manie ‘Artificial intelligence – apex of tech, policy challenges’ (www.itweb.co.za, accessed 31-8-2020)). Another related legal challenge is data privacy infringement. Artificial intelligence can also generate risks for human rights, not only by creating privacy threats and facilitating surveillance, but also by creating inequalities and discrimination (Barbara Rosen Jacobson ‘Artificial Intelligence, Justice and Human Rights’ (https://dig.watch, accessed 31-8-2020)).

Nonetheless, AI has its own advantages. For example, machine intelligence can assist humans with repetitive jobs that are monotonous in nature. Machines also think faster than humans and can be put to multitasking (Krishna Reddy ‘Artificial intelligence – advantages and disadvantages’ (https://content.wisestep.com, accessed 31-8-2020)). Equally, machine intelligence can be employed to carry out dangerous tasks that humans cannot do, because their parameters, unlike those of humans, can be adjusted; their speed and time calculations are limited only by those parameters.

Privacy infringement

Emerging AI is an ever-increasing public concern because of the many risks present where decisions are made by computers and not by humans. Artificial intelligence requires access to vast amounts of data, but poorly drawn laws and government policies can hinder beneficial access to data without actually reducing the risks of AI activities. Artificial intelligence also raises important ethical and privacy concerns that could erode trust in emerging technologies if not addressed thoughtfully. Artificial intelligence requires access to data because machines cannot learn unless they have large data sets from which to discern patterns (Mirjana Stankovic, Ravi Gupta, Bertrand A Rossert, Gordon I Myers and Marco Nicoli ‘Exploring Legal, Ethical and Policy Implications of Artificial Intelligence’ White Paper of the Global Forum on Law Justice and Development (2017)). This often involves the retention and automated processing of vast amounts of personal data, some of which may be sensitive (Amy Edwards ‘Using artificial intelligence to fight financial crime in the UK – a legal risk perspective’ (www.allenovery.com, accessed 31-8-2020)). Governments should, therefore, carefully assess whether existing data access laws should be updated to reflect the benefits of AI (Stankovic (op cit)).

Moreover, in an era of increasing data collection and use, privacy protection is more important than ever before. If advances in AI are to benefit society, policy frameworks must protect privacy without limiting innovation (Stankovic (op cit)). Yet AI devices have no inherent notion of privacy or of general principles of human dignity, which can result in violations of data protection legislation in South Africa (SA). It is true that: ‘The processing of data can infringe on a person’s personality primarily in two ways: where true personal information is processed, a person’s privacy is infringed, and where false or misleading information is processed, the person’s identity may be infringed’ (A Roos ‘Personal data protection in New Zealand: Lessons for South Africa?’ (2008) 11.4 PER 62 at 89).

There are two main aspects of AI that are of particular relevance for privacy. The first is that the software itself can make decisions, and the second is that the system develops by learning from experience. In order for a computer system to learn, it needs experience, and it obtains this experience from the information that humans feed into it. Some systems utilise personal data, while others use data that cannot be linked to individuals; where personal data is used, the processing may conflict with data protection rules, resulting in a legal risk.

Data protection in SA

Data protection is largely about safeguarding the rights of individuals to decide how information about themselves is used. This requires controllers to be open about the use of personal data, so that such use is transparent.

Privacy relates to personal facts, which a person has determined should be excluded from the knowledge of outsiders. ‘[I]t follows that privacy can be infringed only when someone learns of true private facts about the person against his or her determination and will’ (Roos (op cit)).

Privacy is expressly protected by s 14 of the Constitution. The constitutional right to informational privacy has been interpreted by the Constitutional Court as coming into play wherever a person has the ability to decide what they wish to disclose to the public and the expectation that such a decision will be respected is reasonable (Investigating Directorate: Serious Economic Offences and Others v Hyundai Motor Distributors (Pty) Ltd and Others: In re Hyundai Motor Distributors (Pty) Ltd and Others v Smit NO and Others 2001 (1) SA 545 (CC) at 557).

The Protection of Personal Information Act 4 of 2013 (POPIA) recognises the right to privacy enshrined in the Constitution and gives effect to this right by mandatory procedures and mechanisms for the handling and processing of personal information. People often provide information for one reason and do not realise that it may be used for other purposes as well. Personal information may, for instance, be used to predict the outcome of litigation and enable legal practitioners to provide more impactful advice to their clients in connection with dispute resolution issues (Garcia (op cit)). The processing of such information is, therefore, limited: personal information must be obtained in a lawful and fair manner, and a person processing data must ensure that proper security safeguards and measures are in place to protect against loss, damage, destruction and unauthorised or unlawful access or processing of the information.

It is evident that AI systems may result in a data breach, which poses a security, privacy and reputational risk. A law firm’s computer system can be hacked, with key and confidential data stolen or sabotaged. The hackers could be agents of organised crime or terrorist groups, rival law firms or simply malicious individuals taking pleasure in causing harm. As a result, cyber security is essential as both a prerequisite and an enabler for the fourth industrial revolution (Keith Campbell ‘The Fourth Industrial Revolution is upon us and South African industry must adapt’ (www.engineeringnews.co.za, accessed 31-8-2020)). The need to protect data is not limited to the carrying on of a business; legislation is also driving this need.

Data protection in the European Union (EU)

The rules governing the processing of personal data have their basis in some fundamental principles. Article 5 of the General Data Protection Regulation (GDPR) lists the principles that apply to all personal data processing. The essence of these principles is that personal data shall be used in a way that best protects the privacy of the data subject, and that each individual has the right to decide how their personal data is used. However, the use of personal data in the development of AI challenges several of these principles.

It is clear that the GDPR and POPIA are quite similar to each other. However, there is much debate on whether the GDPR adequately protects data privacy in the data-centric world we live in, because the GDPR does not protect legal entities, nor does it impose serious penalties for failing to protect, for example, an account number.

Data protection in the United States (US)

Unfortunately, in the US, there is no single, comprehensive federal (national) law regulating the collection and use of personal data. However, California was the first state to enact a security breach notification law (California Civil Code §1798.82). The law requires any person or business that owns or licenses computerised data that includes personal information to disclose any breach of the security of the system to all California residents whose unencrypted personal information was acquired by an unauthorised person.

There are no formal designations of controllers and processors under US law as there are in the laws of other jurisdictions. There are, however, specific laws that set forth different obligations based on whether an organisation would be considered a data owner or a service provider (Rosemary P Jay (ed) Data Protection & Privacy 2014 (London: Law Business Research Ltd 2013)).

Discrimination and biased AI systems

Artificial intelligence systems have the potential to reinforce pre-existing human biases. A machine has no predetermined concept of right and wrong, only those concepts which are programmed into it. A system that can learn for itself and act in ways unforeseen by its creators may act contrary to their original intentions (Stankovic (op cit)). While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can ‘bake discrimination into algorithms’ (Jacobson (op cit)). As a result, human biases are accentuated rather than resolved.

The truth is that many AI devices are better than human beings at identifying small differences. However, algorithms and machine learning may also develop (or embody) false correlations between appearance, origin or other human attributes that replicate and extend discriminatory practices (Stankovic (op cit)). Several recent controversies have illustrated this type of bias in a particularly shocking way. In 2015, Google Photos, a face-recognition application, caused an uproar when two young African Americans realised that one of their photos had been tagged as ‘Gorillas’ (Victor Demiaux and Yacine Si Abdallah ‘How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence’ (www.cnil.fr, accessed 31-8-2020)). An algorithm’s or model’s results may be incorrect or discriminatory if the training data renders a biased picture of reality, or if the data has no relevance to the area in question. Such use of personal data would be in contravention of the fairness principle (Stankovic (op cit)).

Whoever trains an algorithm in some way builds into it their own way of seeing the world and their own values or, at the very least, the values that are more or less directly inherent in the data gathered from the past (Abdallah (op cit)). Researcher Kate Crawford, in particular, has lifted the lid on the ingrained social, racial and gender bias that is rife in the circles from which those who train artificial intelligence today are recruited (Kate Crawford ‘Artificial Intelligence’s White Guy Problem’ (www.nytimes.com, accessed 31-8-2020)).

It is, therefore, clear that AI systems may create inequalities and discrimination, which are expressly prohibited by s 9 of the Constitution, thus posing a legal risk.

Cybercrimes and Cybersecurity Bill B6 of 2017

The Bill defines ‘data’ as ‘electronic representations of information in any form’. It provides that any person who unlawfully and intentionally secures access to data, a computer program, a computer data storage medium or a computer system is guilty of an offence. These are some of the sanctions available in SA against perpetrators who infringe data protection rules. The Bill operates alongside POPIA and emphasises the importance of data protection, especially given the constant advances in AI systems, which may pose a legal risk.


It is, therefore, clear that advances in AI hold the potential to fundamentally reshape the way we live. It is also true that the transformative nature of AI technology will impact law and policy. I submit that it cannot be predicted how this fourth industrial revolution will play out, but the protection of personal information appears to be an exercise worth pursuing if the threat of legal censure is to be avoided.

Sara Tony LLB (UJ) is a legal intern at Rand Water in Johannesburg.

This article was first published in De Rebus in 2020 (Oct) DR 9.