By Haroon Aziz
South Africa’s Department of Communications and Digital Technologies issued its ‘South Africa National Artificial Intelligence Policy Framework’ document in August 2024. It is the first step towards developing artificial intelligence (AI) policy.
Its scope covers the development of ethical guidelines; privacy and data protection; safety and security; transparency and explainability; and fairness and the mitigation of bias.
Its strategic pillars cover development of talent and capacity; digital infrastructure; research, development, and innovation; and public sector implementation.
Its objectives include the development of a comprehensive policy, within the context of rapid global advancement in AI technology, that aligns with global AI governance standards and enhances South Africa's competitiveness in AI innovation.
It offers a generalised analysis of push-pull interactions to foster an enabling environment for AI policy, with aspirations towards social equity, addressing historical disparities, and promoting broad access to AI's benefits. It aims to make South Africa an AI leader in Africa and a significant player globally.
The policy document of August 2024 was preceded by the same Department's 'Discussion Document – AI National Government Summit' (dated October 2023). It draws on global best-practice documents. It concedes that South Africa has done limited research on productivity impact, societal impact, and risks. It examines six types of risk, namely robotic, social, criminal, existential, monopolisation, and military. It repeats an omission contained in the documents it takes as precedents, namely systemic risk-based governance. It relies on the European Union (EU) Artificial Intelligence Act (AI Act) as a 'concrete regulatory prescript' because it is the 'first comprehensive piece of legislation' with enforcement mechanisms. The AI Act defines 'risk' as the 'combination of the probability of an occurrence of harm and the severity of that harm' (my italics). The Discussion Document anticipates 'future potential AI harms' but it is 'less clear what the optimal form of regulation' should be.
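On a plain reading of that definition, risk combines two quantities: the likelihood of a harm occurring and the severity of that harm. A minimal sketch in Python may help to fix the idea; the product formula, the tier labels, and the threshold values below are illustrative assumptions for this article, not figures prescribed by the AI Act:

```python
# Illustrative only: one naive reading of the AI Act's definition of risk as
# the combination of the probability of harm and the severity of that harm.
# The tier labels and cut-off values are assumptions for illustration,
# not values prescribed by the Act.

def risk_score(probability: float, severity: float) -> float:
    """Combine probability (0-1) and severity (0-1) into a single score."""
    if not (0.0 <= probability <= 1.0 and 0.0 <= severity <= 1.0):
        raise ValueError("probability and severity must lie in [0, 1]")
    return probability * severity

def risk_tier(score: float) -> str:
    """Map a score to a governance tier (hypothetical thresholds)."""
    if score >= 0.6:
        return "unacceptable"
    if score >= 0.3:
        return "high"
    if score >= 0.1:
        return "limited"
    return "minimal"

# Example: a harm judged 40% likely with severity 0.8 scores 0.32 -> "high".
print(risk_tier(risk_score(0.4, 0.8)))
```

A regulator working from such a model would still have to decide, as a matter of policy, where the thresholds sit and how probability and severity are to be estimated in the first place.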
The first major harm revealed itself on 19 July 2024, when 8,5 million computers running Microsoft's Windows crashed after a faulty update from the security vendor CrowdStrike slipped past that vendor's internal quality-control mechanism, compromising its safety checks and automated quality assurance. It resulted in a loss of $1 trillion on the Nasdaq 100 (Jeran Wittenstein '$1 trillion rout hits Nasdaq 100 over AI jitters in worst day since 2022' (www.bloomberg.com, accessed 23-11-2024)). Insurer Parametrix estimated that United States (US) Fortune 500 companies faced $5.4 billion in losses (David Jones 'CrowdStrike disruption direct losses to reach $5.4B for Fortune 500, study finds' (www.cybersecuritydive.com, accessed 23-11-2024)). Services from aviation to media to hospitals to banking were disrupted.
The affected Windows systems ran 'advanced' security software supplied by CrowdStrike, which was supposed to protect them from external malicious software and hacking. The software, in spite of its 'intelligence', neither 'knew' what threats to look for nor how to respond to an internal threat that is organic to the character and unpredictable behaviour of the electron, on which AI and cyberspace pivot. The challenge for lawmakers is how to manage that untameable behaviour.
The failure exposed the invisible, silent, omnipresent, and monopolised web of IT interconnections, the multifaceted nature of modern IT ecosystems, and the global supply chains in which human trust is invested. The chaos that followed the historic failure now stands as a 'digital prophecy' of the urgent need for internal systemic risk-based governance of AI, in a world with less monopolisation of digital infrastructure and less vulnerability to chaos. While CrowdStrike was on the look-out for external disruptions, an internal disruption struck it like a 'digital COVID-19' virus, natural to the electron. Risk is systemic to AI. Domestic and international AI law should factor in this systemic character. It cannot be anything but risk-based governance. Its only certainty is its unpredictability.
On 13 March 2024, the European Parliament passed a legislative resolution on the proposal laying down harmonised rules on AI, the AI Act, with 523 votes in favour, 46 against, and 49 abstentions. The Act entered into force on 1 August 2024, although most of its obligations apply only in phases over the following years.
The formal action plan began in 2021 with the Commission proposal to the European Parliament and the Council on ‘laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts.’ It involved wide consultation with the European Central Bank; European Economic and Social Committee; Committee on Internal Market and Consumer Protection; Committee on Civil Liberties, Justice and Home Affairs; Committee on Culture and Education; Committee on Legal Affairs; Committee on Environment, Public Health and Food Safety; and Committee on Transport and Tourism.
This action plan organically changed the very nature of drafting a new epochal law. The drafters were a trans-disciplinary team with relevant expertise, competencies and powers, beyond political rhetoric. History demanded teamwork that included scientists.
The AI Act is very careful in the construction of its underlying assumptions as a model and very rich in details. The probability remains that science might throw up some unknown facts not anticipated by the law.
On 21 March 2024, the UN General Assembly (UNGA) adopted by consensus its first resolution on AI, 'Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development'. It was co-sponsored by 123 countries, including the US and China. It was a milestone in international law.
It reaffirms international law, including the UN Charter, and recalls the Universal Declaration of Human Rights. It concludes by acknowledging that 'the United Nations system, consistent with its mandate, uniquely contributes to reaching global consensus on safe, secure and trustworthy artificial intelligence systems, that is consistent with international law … and facilitating the inclusion, participation, and representation of developing countries in deliberations' (my italics).
On 1 July 2024, the UNGA adopted by consensus another resolution (A/78/L.86), on 'Enhancing International Cooperation on Capacity-building of Artificial Intelligence', with the support of all 193 member states, including China and the US. It was another milestone in international law, barely 14 weeks after the first.
Why the sudden, accelerated, and developmental pace of history? The answer may be found in the UN meetings coverage and press releases of the 78th session of the UNGA of 1 July 2024 – under the title: ‘Worried about Increasing Extreme Violence Worldwide, Speakers in General Assembly Call for Action-Oriented Measures to Protect Vulnerable Populations. Membership Also Adopts Resolutions on Artificial Intelligence.’
China proposed the resolution, which was co-sponsored by more than 140 countries, including the US. It placed AI development at the very centre of human development, especially in developing countries: to strengthen their AI capacity-building, to enhance their representation and voice in global AI governance, to promote a non-discriminatory business environment, and to give the UN a central role in international cooperation towards realising the UN's 2030 Agenda for Sustainable Development.
Many countries have yet to access, use, and benefit from AI technology, even as the global digital divide widens. The resolution focused on AI capacity-building and on international cooperation and dialogue. The focus on capacity-building is the essence of China's Global AI Governance Initiative and Global Development Initiative (State Council of the People's Republic of China 'UNGA adopts China-proposed resolution to enhance int'l cooperation on AI capacity-building' (https://english.www.gov.cn, accessed 23-11-2024)).
In addition to the two historic UN resolutions, the hybrid 2024 World AI Conference (WAIC) and High-Level Meeting on Global AI Governance was held from 4 to 6 July 2024 in Shanghai, China. With the theme 'Governing AI for Good and for All', WAIC focused on the practical implementation of policies and functioned as a platform for international cooperation and AI exchanges. It featured three main forums, on global governance, industrial development, and the scientific frontier, covering topics that included investment, financing, education, and talent development. More than 200 000 participants, including government officials, organisational representatives, and delegates from industry, universities, and research institutions, attended WAIC (He Yin '2024 World AI Conference: Governing AI for Good and for All' (http://en.people.cn, accessed 23-11-2024)).
The Valdai Discussion Club, in Moscow, held an expert discussion on this topic on 22 May 2024, with five African and two Russian experts as discussants. The focus was on the transition to knowledge-based economies as a key to non-western modernisation and industrialisation. Since independence, African countries have continued to face inequalities between themselves and the 'developed' countries, the challenge of integrating into the system of international relations, and the persistent problem of foreign dependency. The discussants saw modern industrialisation as a way out of inequality and into sustainable socio-economic growth, but a lack of resources, personnel, and financing proved a hindrance. The 'mobile revolution' was trapped in the same hindrances, which perpetuated dependency on private western IT giants.
There is an ever-widening gap between the old knowledge of the diplomatic community and the new knowledge of the scientific community. Semi-conductor engineers are leading the knowledge revolution in science, technology, innovation, and invention. AI and the related miniaturisation of semi-conductors (computer chips) are developing at rapid speed.
The build-up to the resolutions included shared approaches such as the Bletchley Declaration of the United Kingdom's AI Safety Summit; the Global Partnership on Artificial Intelligence's (GPAI) 'Global IndiaAI Summit' in India; the International Code of Conduct for Organizations Developing Advanced AI Systems (through the G7 Hiroshima AI Process); the G20 Principles for Responsible Stewardship of Trustworthy AI, adopted in Japan; and the OECD AI Principles.
While large language models (LLMs) like ChatGPT have amazingly useful capabilities, they have an inherent inability to discern truth in general. They churn out plausible-sounding text and can answer questions incorrectly, without critical thinking. They produce authoritative misinformation. It was for this reason that Meta's Galactica, an LLM for science, was withdrawn after a three-day public experiment. Although LLMs can produce authoritative-sounding text in styles such as legalese, bureaucratese, and lecture notes, they are untrustworthy and unreliable. LLMs have, however, proven useful in coding, entertainment, and translation.
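The point about plausibility without truth can be illustrated with a toy model. At bottom, text generation is sampling from learned probabilities over next words; no step in the loop checks the output against facts. The following sketch is a deliberately tiny, hypothetical illustration (real LLMs are vastly larger and more sophisticated, but share this basic generate-by-probability loop), and the training text is invented:

```python
import random
from collections import defaultdict

# Toy illustration: generation is sampling from learned word-to-word
# probabilities. Nothing in the loop verifies truth; fluency is the only
# criterion. The training text below is invented for the example.
corpus = ("the court held that the contract was valid "
          "the court held that the statute was unconstitutional "
          "the contract was signed by the parties").split()

# Build a bigram table: record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Emit plausible-looking text by sampling successors at random."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# The output reads like legalese, but no step asks whether it is true.
print(generate("the"))
```

Run a few times, this will happily assert that 'the statute was valid' or 'the contract was unconstitutional': fluent recombination, with no mechanism for discerning truth.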
As risks are systemic to AI, there are attempts at AI risk reduction, which should be included in South African policy formulation without curbing improvement of the technology. Ideally, such measures should be adopted voluntarily, with the potential for regulatory enforcement. OpenAI, Anthropic, and Google DeepMind have already formulated their own respective frameworks in preparation for the 2025 Artificial Intelligence Action Summit in France. As South Africa has deep knowledge of AI, it, too, should formulate its own framework for the Paris Summit through a panel of experts, simultaneously enhancing its digital sovereignty. One of the aims should be to prevent abuse of AI through irresponsible, reckless, and dangerous consumer behaviour.
There is a need for businesses, institutions, entrepreneurs, and others in South Africa to get creatively involved in AI governance matters from the Global South perspective because:
• The UN has initiated the development of international law on AI governance, and the electronic nature of AI demands systemic risk-based governance.
• Africa needs a mass-based philosophical approach to digital decolonisation, global governance, modernisation, industrialisation, knowledge-based economies, trans-disciplinary teamwork, human development, and scientific revolution – the key to which is miniaturised semi-conductor development.
• South Africa needs to widen its public consultation process on AI policy development and deliberately include semi-conductor engineers and jurisprudents, with a core focus on how to manage the untameable behaviour of the electron.
Haroon Aziz is a retired physicist, author, and researcher and is part of the leadership collective of the Apartheid Era Victims’ Families and Support Group.
This article was first published in De Rebus in 2025 (Jan/Feb) DR 40.