ChatGPT is a technological breakthrough like no other. Although the underlying technology has been around for a long time, the latest wave of generative artificial intelligence (AI) has exploded onto the tech scene and into our lives over the past few months. With its incredible ability to provide lengthy answers with humanlike sophistication, ChatGPT has been hailed as a ‘game changer’ by many. It has not only changed the game – it has changed everything: the players, the rules, and the outcome, right down to the most minute detail. It is revolutionary, with enormous potential to both benefit and harm society.
The fundamental problem is that the inexorable advance of AI is on a collision course with democratic values such as accountability, transparency, fairness, and equality. Opaque algorithms can now influence how we work, live and play by determining who qualifies for credit agreements, who gets considered for jobs, and who may be summarily arrested through predictive policing with facial recognition technology. This article explores the risks to society and humanity if AI proceeds without legal regulation.
For illustrative purposes, I prompted ChatGPT about the role of AI in legal practice, and this was the very considered response it generated.
Title: Embracing the future: Why lawyers must adopt AI in legal practice

Introduction: In an era where technology and artificial intelligence (AI) is rapidly transforming various industries, the legal profession cannot afford to be left behind. Despite the traditional resistance to change, lawyers must embrace technology and AI to thrive in the modern legal landscape. The benefits of AI adoption in legal practice are manifold, from increased efficiency and cost-effectiveness to improved accuracy and decision-making.
Conclusion: The adoption of AI in legal practice is not a question of if but when. By embracing technology and AI, lawyers can increase efficiency, reduce costs, improve accuracy, and enhance decision-making. Ultimately, AI adoption will not only benefit lawyers and their clients but will also contribute to a more just and equitable society. The time to embrace the future of legal practice is now.
Impressive and compelling indeed. Next, I prompted ChatGPT to tell me about the risks of adopting AI in legal practice:
Title: Caution ahead: The dangers of AI in legal practice

Introduction: While the integration of artificial intelligence (AI) into various industries has led to numerous benefits, it is crucial to consider the potential risks and downsides of incorporating AI in legal practice. By examining the ethical, privacy, and employment concerns, we can better understand the potential pitfalls of relying too heavily on AI in the legal profession.
Conclusion: Although AI has the potential to revolutionise the legal profession, it is crucial to proceed with caution and carefully consider the potential dangers. By acknowledging and addressing the ethical, privacy, and employment concerns associated with AI in legal practice, we can strike a balance between harnessing the benefits of AI and preserving the essential human qualities that underpin the legal profession.
ChatGPT’s reasoning is smart, superior, and logical. It neatly encapsulates the ethical, legal, and social dilemmas created by AI. The problem is that AI is so sophisticated and realistic that some vulnerable individuals may be at great risk when interacting with these intelligent machines.
In the April 2023 issue of De Rebus, Prof Michele van Eck illustrated ChatGPT’s shortcomings, showing how it makes factual mistakes and gives incorrect answers (Prof Michele van Eck ‘Chatting with ChatGPT: Will attorneys be able to use AI to draft contracts?’ 2023 (April) DR 12). It tends to ‘hallucinate’, inventing answers or making statements with great certainty that are completely disconnected from reality. The Washington Post reported on 5 April 2023 that ChatGPT invented a sexual harassment scandal and named a real law professor as the accused, while fabricating primary sources to back up its claims (Pranshu Verma and Will Oremus ‘ChatGPT invented a sexual harassment scandal and named a real law prof as the accused’ (www.washingtonpost.com, accessed 12-4-2023)). Another extreme example took place in Belgium, where a chatbot became so manipulative that it convinced a user to commit suicide.
Even more concerning is the recent open letter by 1 100+ notable signatories, including Elon Musk and Apple co-founder Steve Wozniak, calling on ‘all AI labs to immediately pause for at least [six] months the training of AI systems more powerful than GPT-4’ (Future of Life Institute ‘Pause giant AI experiments: An open letter’ (https://futureoflife.org, accessed 12-4-2023)). They raise valid concerns about the development of ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
In a recent article, Bill Gates also raises the possibility that AI may run out of control. He notes the long-standing fear that super-intelligent machines may well decide one day that humans are a threat. At present, nobody knows how far away we are from the development of Artificial General Intelligence (AGI) – a machine that can do anything a human brain can do, but without any limits on the size of its memory or the speed at which it operates. Such an AGI would be able to make its own decisions, which may conflict with those of humanity. For the time being, AGI does not exist, but the speed at which the technology is developing means that no one can predict when it will become a reality. GPT-5, the newest iteration in the works at OpenAI, is already a step in that direction.
We are at the foothills of the mountain when it comes to innovation in tech. Governments worldwide are responding. Italy moved swiftly, banning ChatGPT on 31 March 2023, citing data privacy concerns and the potential exposure of young people to misinformation.
The United Kingdom, on the other hand, has unveiled a world-leading approach to innovation, publishing its first AI White Paper, ‘A pro-innovation approach to AI regulation’, in March 2023 to regulate AI and build public trust in cutting-edge technologies. The United Arab Emirates appointed the world’s first and only Minister of State for Artificial Intelligence.
In the European Union, plans are underway for new legislation to regulate AI – the Artificial Intelligence Act. In Australia, law schools have held collaborative sessions to examine the impact of ChatGPT on legal education, academic integrity, and the legal ecosystem. Even though the United States (US) has published a Blueprint for an AI Bill of Rights, there is no comprehensive federal legislation on AI in the US yet.
The situation in South Africa (SA) is troubling. At the moment, there is no comprehensive legislation governing the use of AI and machine learning in the country.
AI is pervasive. It is penetrating every area of our lives daily, almost minute by minute, and, without being alarmist, conspiratorial, or fearmongering, the dangers of such explosive technology are real. That should not, however, detract from its enormous potential as a revolutionary tool.
This is an urgent call for a dedicated national AI strategy to address the ethical and social challenges highlighted in this article. We need a legislative framework in SA that will put the necessary guardrails in place, educate the public and develop expertise in government to manage this technological avalanche.
Without a legal regulatory framework, we will have to play catch-up with the rest of the world as technological advances race along at breakneck speed. The time for legislatures and policymakers to step in is now.
Diana Mabasa LLM (Wits) Legal Technology and Innovation Certificate (LT&I Institute, Dubai) is in-house counsel for the Office of the Legal Services Ombud.
This article was first published in De Rebus in 2023 (May) DR 17.