‘Artificial Intelligence (AI) is not the kind of utility that needs to be regulated once it is mature but needs to be regulated now. It is a powerful force, a new form of smart agency, which is already reshaping our lives, our interactions, and our environments. When people think about AI, they may have visions of the future. But AI is already in use’ (S Snail Ka Mtuze and M Morige ‘Towards drafting artificial intelligence (AI) legislation in South Africa’ (2024) 45.1 Obiter 161).
The legal industry is slowly embracing AI, but one question remains: should AI be used in drafting legal opinions and heads of argument? In South Africa, the Protection of Personal Information Act 4 of 2013 (POPIA) governs the collection and processing of personal information. Attorneys are subject to POPIA because they collect and process individuals’ personal information in their daily practice. AI’s potential to revolutionise legal research and drafting cannot be denied, yet its integration into sensitive areas such as drafting legal opinions and heads of argument raises ethical, legal and data protection concerns. This article examines these concerns within the context of POPIA, focusing on the role of AI in South African litigation and the consequences of its use.
AI tools are already being used to automate time-consuming tasks such as contract review, legal research and document management. These tools are powered by machine learning algorithms, which process high volumes of data and provide insights that might otherwise take hours to uncover. In this regard, AI is invaluable in streamlining routine legal tasks, thereby saving time and reducing costs. However, the use of AI in more complex and nuanced tasks, such as drafting legal opinions and heads of argument, presents challenges that demand careful consideration. These types of legal documents demand proper comprehension, research on a case-by-case basis and peer review for accuracy. Legal opinions require in-depth analysis, interpretation of legal principles, and application of the law to specific facts. The process involves legal reasoning, which cannot be fully replaced by machines, as it often requires human judgement, creativity, and an understanding of the broader societal and ethical implications of legal decisions. Similarly, heads of argument, which are legal documents outlining a party’s position in court, rely heavily on the legal practitioner’s ability to craft persuasive arguments, anticipate opposing arguments, and adapt to dynamic court proceedings.
Although AI-powered tools can assist with legal drafting through suggesting relevant legal precedents, summarising case law, and identifying key arguments, they fall short in understanding the subtleties of complex legal issues and ensuring that arguments are framed in a manner that aligns with the objectives of the client. This limitation raises a fundamental question: Can AI be relied on in drafting critical legal documents such as opinions and heads of argument?
AI systems are increasingly capable of supporting legal practitioners by providing quick access to vast legal databases, recommending relevant case law, and even generating drafts based on specific inputs. However, the decision to fully delegate the drafting of legal opinions or heads of argument to AI is fraught with risk. Legal professionals must ensure that the documents produced are not only legally sound but also ethically and contextually appropriate.
Legal opinions often require more than just a mechanical application of the law; they require a nuanced understanding of the client’s situation, an ability to predict the consequences of legal advice, and the exercise of professional judgement. While AI can assist in researching and organising large quantities of information, it cannot replace the expertise needed to interpret the law in a manner that reflects the client’s specific needs and circumstances. Furthermore, AI systems often rely on historical data, which may not always be up to date or applicable to the specific facts of a case.
An example of AI’s utility in legal opinion drafting is the use of AI-powered tools for case law research, which can swiftly identify relevant precedents. However, the task of interpreting these precedents and determining their application remains firmly within the domain of human lawyers. AI cannot factor in client-specific nuances or emerging legal trends that could influence the development of legal arguments.
Heads of argument present a succinct, persuasive outline of a party’s case, including the legal grounds for relief and supporting evidence, and can materially influence which party prevails. AI could be used in drafting heads of argument to organise case law, summarise legal principles, or even suggest possible arguments based on the information provided. However, the process of formulating a compelling legal narrative that is suited to the specific court or tribunal, and anticipating the responses of opposing counsel, requires skill, experience, and judgement that AI is currently incapable of replicating.
Moreover, heads of argument need to reflect the strategic aims of the party, incorporating the attorney’s legal expertise, knowledge of the judge’s preferences, and understanding of the case’s broader context. AI lacks the contextual sensitivity necessary for this level of legal strategy and persuasive argumentation.
The integration of AI tools in legal practice must comply with POPIA, South Africa’s privacy legislation designed to protect personal information. The use of AI in drafting legal documents, whether opinions or heads of argument, raises specific challenges regarding data collection, consent, and the ethical use of personal information.
AI systems require access to large quantities of data to function effectively. In the legal context, this may include client information, case files, and sensitive personal data. Under POPIA, legal practitioners must ensure that they have obtained informed consent from clients before processing their personal data through AI tools. The act of gathering and processing such data must be done with full transparency, ensuring that clients understand how their data will be used and protected. Legal professionals must ensure that any third-party service providers (eg, AI platforms) comply with POPIA’s requirements, particularly with regard to data storage and data transfer. For instance, AI systems which rely on cloud computing or external platforms must ensure that data is stored securely and that access to the data is restricted to authorised personnel only.
POPIA imposes the principle of data minimisation, which means that only the data necessary for a specific purpose (for example, drafting a legal document) should be collected and processed. This is particularly important when using AI to handle sensitive client information in the drafting process.
Legal practitioners must ensure that AI systems do not access or process unnecessary data, particularly when drafting legal opinions or heads of argument. For example, when using AI to assist with case law research, legal professionals must exercise caution in ensuring that only relevant and lawfully obtained information is processed by the system (S Bankins and P Formosa ‘The ethical implications of artificial intelligence (AI) for meaningful work’ (2023) 185 Journal of Business Ethics 725).
POPIA requires that organisations implement appropriate measures to protect personal data from unauthorised access or disclosure. For legal practitioners, using AI tools requires ensuring that any data processed by AI systems is securely encrypted and that the system adheres to stringent security protocols to prevent breaches. AI tools that assist in legal drafting should be regularly audited to ensure that they meet security standards and compliance requirements set out by POPIA. This is particularly critical when dealing with highly sensitive personal information, such as financial details or confidential client communications.
Legal practitioners remain accountable for the accuracy and ethical use of the AI tools they employ, even if the tools themselves generate drafts or assist in the decision-making process. Under POPIA, firms must ensure that they keep detailed records of how AI tools process personal data and that the data flows are documented for compliance purposes. Regular Data Protection Impact Assessments (DPIAs) should be conducted to assess the potential risks of AI systems used in legal drafting. These assessments will help identify and mitigate any risks related to privacy or security violations, ensuring that AI’s use is both lawful and transparent.
S Bankins and P Formosa ‘The ethical implications of artificial intelligence (AI) for meaningful work’ (2023) 185 J Bus Ethics 725 discuss the ethical implications of AI use in the workplace. While AI has a significant role to play in legal practice, over-relying on it for drafting complex legal documents such as opinions and heads of argument can be dangerous. AI systems, though sophisticated, lack the critical thinking and strategic insight that human lawyers bring to the table. This became evident in a recent case where an AI-generated legal research document was presented in court, only for the information to be proven inaccurate, causing harm to the client’s case (Bankins and Formosa (op cit)). Legal professionals must use AI as a tool to enhance their work, not as a substitute for their legal expertise. In the case of heads of argument, the lawyer’s ability to craft a compelling and persuasive narrative based on strategic legal analysis cannot be replaced by AI. Human judgement remains paramount in ensuring that the documents reflect the client’s best interests and the nuances of the legal and factual context.
Looking ahead to 2025, the role of AI in legal practice will undoubtedly continue to grow. While AI can assist in many aspects of legal drafting, such as research, summarisation, and document generation, legal practitioners must remain vigilant and avoid over-reliance on these tools, especially when drafting complex legal documents like legal opinions and heads of argument. The integration of AI into these areas should be approached with caution, ensuring that human expertise and oversight remain central to the process.
As AI technology advances, South African legal practitioners must also remain mindful of the legal and ethical implications of using AI in litigation. POPIA provides a crucial framework for ensuring that personal data is handled responsibly, and compliance with these regulations will be critical for maintaining the trust and integrity of the legal profession.
By carefully balancing the use of AI with traditional legal expertise, lawyers can harness the benefits of technology while ensuring that their practice remains aligned with the highest ethical standards and legal obligations.
Precious Muleya LLB LLM (UFH) is a legal practitioner at Precious Muleya Inc Attorneys in East London.
This article was first published in De Rebus in 2025 (May) DR 29.