By Prof Michele van Eck
Artificial intelligence (AI) can positively change the way in which the law is practised, but AI is not without its risks. Some of the practical and ethical risks have been highlighted previously in, for example, Marciano Van Der Merwe 'Do legal practitioners truly understand the danger of ChatGPT?' 2023 (Sept) DR 14 and Prof Michele van Eck 'Pitfalls and traps for legal practitioners when using ChatGPT' 2023 (Sept) DR 11. Despite these warnings, the proper use of AI has increasingly become a point of ethical concern in the legal profession. From an international perspective, legal practitioners have been sanctioned on more than one occasion for questionable behaviour in the use of AI, specifically using ChatGPT to conduct legal research, which resulted in false, fabricated and fictitious case citations appearing in court documents (see, for example, the discussion in Prof Michele van Eck 'Error 404 or an error in judgment? An ethical framework for the use of ChatGPT in the legal profession' (2024) 3 TSAR 469). More recently, the American case of Matter of Weber 2024 NY Slip Op 24258 (NY Surrogate's Ct, 10-10-2024) addressed an expert witness's use of Microsoft Copilot to draft expert reports, the content and reliability of which were in question.
Similarly, South Africa has examples of legal practitioners' use of AI in Parker v Forsyth NO and Others (Johannesburg Regional Court) (unreported case no 1585/20, 29-6-2023) (Magistrate Chaitram) and Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others (KZP) (unreported case no 7940/2024P, 8-1-2025) (Bezuidenhout J). Although the use of AI by the legal practitioner and candidate legal practitioner in the Mavundla case was only implied by the court, in both instances our courts provided useful guidance on the expected conduct in the proper and ethical use of AI. As the Code of Conduct for all Legal Practitioners, Candidate Legal Practitioners and Juristic Entities (GG42364/29-3-2019), read with s 36(2) of the Legal Practice Act 28 of 2014, has yet to provide guidance on the ethical use of AI or of technology in general, the Parker and Mavundla cases serve as important guidance on the expected conduct of a legal practitioner in the use of AI technologies. In this regard, four broad principles may be extrapolated from these cases.
Although the Code of Conduct provides little guidance on a legal practitioner's expected conduct in the use of AI, a cursory reading of news articles, research on the topic and even De Rebus articles (as referenced above) highlights the potential risks associated with the use of AI. This, in turn, provides enough information for a legal practitioner to, at the very least, implement some internal mitigation mechanisms to manage such risks.
What is of interest, however, is that Bezuidenhout J in the Mavundla case experimented with ChatGPT by entering several questions related to the content of certain cases. The purpose of the experiment was to establish how accurate the application was, and the court found that the information provided was blatantly incorrect (para 50). The conclusion drawn from this experiment was that ChatGPT (and arguably AI tools in general) is unreliable 'as a source of information and legal research' (para 50). In fact, Bezuidenhout J stated that '[i]n my view, relying on AI technologies when doing legal research is irresponsible and downright unprofessional' (para 50), clearly indicating the court's disapproval of the use of AI in conducting legal research.
Although the appropriate use of AI may be useful and create internal efficiencies in legal practice, it is certainly not a shortcut around a legal practitioner's professional and ethical duties. The court in Parker noted that '… the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading' (para 90, see also Mavundla at para 42). After all, one could argue that much time is invested in a legal practitioner's training, skills, competencies and expertise, not only from a personal financial perspective but also in undergraduate studies, vocational training programmes and the like. The use of AI, like any other tool, does not remove the unique abilities, skills and competencies required of a legal practitioner. In fact, the '[c]ourts expect lawyers to bring a legally independent and questioning mind to bear on, especially, novel legal matters, and certainly not to merely repeat in parrot-fashion, the unverified research of a chatbot' (Parker at para 90, see also Mavundla at para 42).
Legal practitioners are generally expected to verify the information they present to the court and to their clients. In essence, the legal practitioner remains responsible for such information, irrespective of its source. This duty of verification is expanded on in the supervisory duties under rule 18.3 of the Code of Conduct, in terms of which a legal practitioner must supervise the work done by staff and candidate legal practitioners. AI does not change this duty. In fact, Bezuidenhout J agreed with the view that a 'supervisory role would include the verification of the accuracy and correctness of any information sourced from generative AI systems and other technologies and databases by staff, including candidate legal practitioners, in the legal practitioner's employ' (Mavundla at para 48). Practically, this means that output from AI should be fact-checked, and there should, to some extent, be internal mechanisms, such as training and policy measures, to address AI usage within legal practice.
One of the biggest challenges in the use of AI is its unreliability: AI tools often produce inaccurate or false information. When such information is presented to the court, a legal practitioner runs the risk of breaching rule 57.1 of the Code of Conduct, which reads: '[a] legal practitioner shall take all reasonable steps to avoid, directly or indirectly, misleading a court or a tribunal on any matter of fact or question of law. In particular, a legal practitioner shall not mislead a court or a tribunal in respect of what is in papers before the court or tribunal, including any transcript of evidence' (see also Mavundla at para 37).
Misleading the court can generally take one of two forms. First, it could involve a deliberate misstatement or lie, where the legal practitioner would 'consciously misstate the facts' and 'knowingly conceal the truth' (Mavundla at para 38, quoting Van der Berg v General Council of the Bar of SA [2007] 2 All SA 499 (SCA) at para 16). Second, misleading the court could 'occur through ignorance or negligence', where a legal practitioner makes a 'tacit representation … that no contradictory authority is known' to them (Mavundla at para 39, quoting Ulde v Minister of Home Affairs and Another 2008 (6) SA 483 (W) at para 37). These principles are particularly important given that AI is prone to producing fake, false and fabricated information. In this context, Bezuidenhout J held that the tacit representation that no contradictory authority is known to the legal practitioner 'should be expanded to include that a court should also be able to assume and rely on counsel's tacit representation that the authorities cited and relied upon do actually exist' (Mavundla at para 40). Central to all of this is the legal practitioner's duty to provide an honest account of the law (para 47).
These four general principles provide a broad overview of what may be expected of legal practitioners in the use of AI. As AI is not going away and the use of AI tools will likely only increase within the legal profession, it is perhaps time to consider how to address the ethical and professional expectations of legal practitioners when using AI technology. This may include training and policy measures in individual legal practices, as well as updating the Code of Conduct to address the ethical use of AI in the legal profession as a whole.
Prof Michele van Eck BCom (Law) LLB LLM (UJ) LLD (UP) BTh (SATS) BTh (Hons) (SATS) is an Associate Professor at the School of Law at the University of the Witwatersrand.
This article was first published in De Rebus in 2025 (May) DR 20.