Do legal practitioners truly understand the danger of ChatGPT?

September 1st, 2023

Picture source: Gallo Images/Getty

Chat Generative Pre-trained Transformer (ChatGPT) is a phenomenon by OpenAI. The chatbot was launched in November 2022 and has over 100 million users, with 13 million unique visitors per day. ChatGPT is said to threaten to make some professionals redundant and was used by Judge Juan Manuel Padilla Garcia in passing a judgment in Cartagena, Colombia.

ChatGPT uses conversational dialogue to generate responses to questions posed by the user. As a pre-trained transformer, it uses deep learning techniques to generate natural language text. It can generate code, write a thesis, produce reports, and pass a law examination from a University of Minnesota course (PM Parikh, DM Shah, KP Parikh ‘Judge Juan Manuel Padilla Garcia, ChatGPT, and a controversial medicolegal milestone’ (, accessed 29-7-2023)). In light of this widespread use and attention, the following identifies and discusses the limitations, and thereafter the dangers, of using ChatGPT within the South African legal profession.

OpenAI notes that it does not allow the use of its models for tailored legal advice without a qualified person reviewing the output. ChatGPT is not fine-tuned to give legal advice, and users should not rely on it as their only source of legal advice (OpenAI ‘Terms of use’ (, accessed 29-7-2023); ‘Disallowed usage of our models’ under ‘Usage policies’ (, accessed 29-7-2023)).

OpenAI does, however, envision legal practitioners’ use of ChatGPT. Its usage policies, under ‘We have further requirements for certain uses of our models’ (OpenAI ‘Usage policies’ (op cit)), impose further requirements for certain uses of ChatGPT. Where the user provides consumer-facing services in the legal industry, the user must make use of a disclaimer. The disclaimer must inform the user’s clients that AI is being used and of its potential limitations.

Additionally, the tool is sensitive to input phrasing, and the same prompt may yield different results when attempted multiple times. Instead of asking users to clarify their questions, the tool might guess what the user’s intended query is. However, to circumvent this, ChatGPT Plus allows users to constantly challenge such assumptions. OpenAI acknowledges that ChatGPT can write ‘plausible-sounding but incorrect or nonsensical answers’ (Ian Sample ‘ChatGPT: what can the extraordinary artificial intelligence chatbot do?’ (, accessed 29-7-2023)).

ChatGPT may limit the unique way in which legal practitioners service their clients. Output generated by ChatGPT is similar among its various users: a user can provide an input and receive output that is the same as, or similar to, the output received by other users (OpenAI ‘Terms of use’ (op cit) at provision 3(b)). This results in a standardisation of output among users and of the ways in which legal practitioners may service their clients.

The tool may also pose a danger to the learning and development of early-career legal professionals. ChatGPT relies on feedback to help OpenAI address its limitations. Michelle Mohney notes that the approach to developing ChatGPT includes reinforcement learning (Michelle Mohney ‘How ChatGPT could impact law and legal services delivery’ (, accessed 29-7-2023)). This includes human input to better align the dialogue with human expectations and intentions.

However, as contemplated by Paul W Glimcher, reinforcement learning entails a process whereby a conditioned response is brought about by a conditioned stimulus (Paul W Glimcher ‘Understanding dopamine and reinforcement learning: The dopamine reward prediction error hypothesis’ (, accessed 29-7-2023)). In other words, the more often content produced by ChatGPT is rated as accurate by users and ChatGPT trainers, the more reinforcement ChatGPT obtains that the content is accurate.

Similarly, early-career legal professionals gain experience through a comparable learning process. Professionals who rely solely on ChatGPT may be doing their learning and development an injustice by knowing only what the end content looks like, without understanding why. This will hamper their ability to perform professional work with the degree of skill, care, or attention that may reasonably be expected of an attorney, in accordance with para 18.14 of the Code of Conduct for all Legal Practitioners, Candidate Legal Practitioners and Juristic Entities (the Code).

Furthermore, OpenAI does not warrant the accuracy of the information produced. The company notes, in provision 7(b) of the ‘Terms of use’ (OpenAI ‘Terms of use’ (op cit)), that the services provided by ChatGPT are provided on an ‘as is’ basis. There is no warranty that the services are accurate or error free.

The use of ChatGPT may also pose a privacy threat to organisations. In terms of s 1 of OpenAI’s Privacy Policy (OpenAI ‘Privacy policy’ (, accessed 29-7-2023)), information is collected about the user’s browsing activities over time and across different websites, and the site does not respond to ‘Do Not Track’ signals. Furthermore, in terms of s 2 (‘How we use personal information’) of the Privacy Policy, personal information is used to provide, administer, maintain, improve, and analyse the services. How personal information is used may present a danger to the duty legal practitioners owe to their clients.

The Law Society of South Africa (LSSA) places duties on legal practitioners when using Internet-based technologies like ChatGPT. In accordance with the LSSA Guidelines on the Use of Internet-Based Technologies in Legal Practice, legal practitioners are required to take reasonable steps or reasonable protective measures to ensure that information provided by clients remains confidential. Thus, the way information is used by ChatGPT may pose a danger to the privacy of client information.

However, OpenAI holds that content provided by the user through the API (‘API content’) is not used to develop or improve the service; it is used only to provide and maintain the API services (OpenAI ‘Terms of use’ (op cit) at provision 3). Notably, there is no further elaboration on what is meant by ‘provide and maintain’, nor on how ‘provide and maintain’ differs from ‘develop or improve’ in these circumstances.

There is a clear indication that non-API content can be used to improve ChatGPT’s performance. Should there be a copyright complaint, OpenAI makes provision for how the user should go about initiating such a complaint (OpenAI ‘Privacy policy’ (op cit) at provision 9).

OpenAI further limits its liability in its terms of use. More specifically, provision 3(a) of OpenAI’s ‘Terms of use’ (op cit) states that the user owns all ‘Input’, and OpenAI assigns to the user the right, title, and interest in the ‘Output’. Input and output amount to content; content may be used by OpenAI, but the user is responsible for it, including ensuring that it does not violate any law or the terms of use.

I submit that this is a way for OpenAI to avoid potential liability. The user is responsible for the content, whereas OpenAI has the right of use. In the event that the content is inaccurate, legal professionals will ultimately be held liable.

Liability for legal practitioners is further evident in legislation, and legal practitioners are under legislative obligations when servicing their clients. Paragraph 3.6 of the Code holds that legal practitioners are to ‘maintain legal professional privilege and confidentiality regarding the affairs of present or former clients’. Any use of ChatGPT must comply with relevant legislation, such as the Electronic Communications and Transactions Act 25 of 2002 (ECTA) and the Protection of Personal Information Act 4 of 2013 (POPIA).

In terms of the ECTA, liability for legal practitioners may ensue if the chatbot is used contrary to the ECTA. The ECTA facilitates and regulates electronic communications. An electronic communication is understood as a data message sent by the originator, or on the originator’s behalf, other than where an information system programmed to operate automatically executed the sending. Furthermore, a person acting as an intermediary is expressly excluded from the definition of an originator.

An intermediary may provide a service in respect of a data message and, therefore, ChatGPT may be defined as an intermediary. Although legal practitioners may make use of ChatGPT as an intermediary, they will still be considered the originators of any data message. Thus, legal practitioners will be liable, in terms of the ECTA, for their use of artificial-intelligence-generated information.

Similarly, legal practitioners may incur liability in terms of POPIA. POPIA introduces conditions to ‘establish minimum requirements for the processing of personal information’. Subsection 19(1) states that ‘a responsible party must secure the integrity and confidentiality of personal information in its possession or under its control’. In accordance with subs 19(1)(b), responsible parties are to take reasonable measures to prevent ‘unlawful access to or processing of personal information’.

As mentioned above, ChatGPT makes use of non-API content to improve its performance. If the personal information of clients is used to improve the performance of ChatGPT’s services, this will breach POPIA. On conviction, the legal practitioner will be liable to imprisonment or a fine.

The Code states in para 3.3 that legal practitioners shall ‘treat the interests of their clients as paramount’, subject to their duty to the court, the interests of justice, the observance of the law, and ethical standards. Should any legal practitioner seek to use ChatGPT, their client must be informed.

However, OpenAI allows users to opt out of having their data used to improve its non-API services. OpenAI further notes that it removes any personally identifiable information from data intended to improve model performance (Michael Schade ‘How your data is used to improve model performance’ (, accessed 29-7-2023)).

In conclusion, ChatGPT has received widespread attention and use. Legal practitioners should use the tool to their benefit, in accordance with legislation and the Code, and can use it to implement new efficiencies in servicing clients. However, practitioners must remain aware of the dangers and limitations of using ChatGPT.

Marciano Van Der Merwe BA (Industrial Psychology and History) LLB (Wits) is Corporate Counsel in Johannesburg. Mr Van Der Merwe writes in his personal capacity.

This article was first published in De Rebus in 2023 (Sep) DR 14.