Understanding business liability when using AI-generated content

February 1st, 2024

If a business owner relies on ChatGPT in their business and it makes a mistake, who is to blame? In the age of artificial intelligence (AI), businesses are increasingly using tools like ChatGPT to streamline customer service, enhance marketing efforts, and generate content. AI can reduce the effort involved in producing content, particularly customer communications, and it can improve efficiency by freeing up human resources for other activities. AI tools, and large language models (LLMs) in particular, have huge potential to enhance the way we work, but they must be used judiciously; they should not be a substitute for expertise or originality. But where does liability sit when things go wrong with AI? What must businesses consider when AI models like ChatGPT generate content for customer interactions and public-facing information?

The rise of AI-generated content in business

In a very short time, AI-generated content has become an integral part of modern business operations. From chatbots that handle customer inquiries to content generation tools that create blog posts and social media updates, AI is revolutionising the way businesses communicate. ChatGPT in particular has gained attention for its ability to generate human-like text and handle a wide range of conversational tasks. Even technology laggards cannot ignore the impact and potential of ChatGPT. Smart organisations will develop a set of guidelines around the use of AI, to ensure it is used consistently, ethically, and responsibly. This will also help manage liability.

Types of AI-generated content liability

AI-generated content in business operations creates several potential areas of liability:

  • Accuracy and reliability of AI-generated responses. If the AI model provides incorrect or misleading information to customers, it can harm the customer experience and damage the company’s reputation.
  • Data privacy. Using AI in customer interactions often involves handling personal data. Companies must adhere to data privacy legislation, namely the Protection of Personal Information Act 4 of 2013 (POPIA), to protect customer information and ensure it is not misused.
  • Misrepresentation. AI-generated responses to common queries can be extremely human-like, because AI is trained on vast datasets of real human interactions. If customers believe they are interacting with a human when they are not, a company may face accusations of misrepresentation.
  • Legal compliance. AI-generated content must comply with relevant laws and regulations. Failure to do so can result in legal consequences, including fines and lawsuits.
  • Customer harm. If a customer relies on AI-generated content and suffers harm as a result, a business could face legal action. Customer harm might include financial losses, health-related issues, or other damages.

Mitigating AI-generated content liability

Businesses can mitigate the risks associated with AI-generated content by taking several proactive steps:

  • Implement robust review processes to ensure the accuracy and quality of AI-generated responses. Human oversight can catch errors and prevent misinformation.
  • Clearly communicate to customers when they are interacting with AI. Transparency builds trust and helps manage expectations. Offer to connect the customer to a human if the chatbot does not provide the assistance required (see the sketch after this list).
  • Follow strict data privacy practices to protect customer information. Implement encryption, secure storage, and data access controls to safeguard sensitive data.
  • Stay informed about the legal and regulatory landscape in the relevant industry. Ensure AI-generated content complies with all relevant laws and industry best practices.
  • Educate customers about the capabilities and limitations of AI-generated content. Encourage them to use critical thinking when relying on AI-generated information.
  • Educate employees to use generative AI for internal research and general tasks such as brainstorming, and not for original writing. Encourage them to consult with an expert to validate AI-generated content.
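
To make the transparency and escalation points above concrete, here is a minimal sketch in Python of how a customer-facing chatbot wrapper might disclose the use of AI up front and hand off to a human agent on request. The generate_reply stub and the trigger phrases are hypothetical placeholders for illustration, not the interface of any particular chatbot product:

    # Minimal sketch: disclose AI use up front and escalate to a human on request.
    # generate_reply() is a hypothetical stand-in for a call to an LLM service.

    AI_DISCLOSURE = (
        "You are chatting with an automated assistant. "
        "Type 'agent' at any time to speak to a human."
    )

    ESCALATION_TRIGGERS = {"agent", "human", "representative"}

    def generate_reply(message: str) -> str:
        """Placeholder for the actual AI model call."""
        return f"(AI response to: {message!r})"

    def handle_message(message: str, session_started: bool) -> tuple[str, bool]:
        """Return (reply, escalate_to_human) for one customer message."""
        if not session_started:
            # Transparency: tell the customer they are talking to AI, not a person.
            return AI_DISCLOSURE, False
        if message.strip().lower() in ESCALATION_TRIGGERS:
            # Escalation: route to a human rather than letting the AI guess.
            return "Connecting you to a human agent now.", True
        return generate_reply(message), False

A human review step can also sit behind generate_reply, in line with the first point in the list, for example holding back responses that make factual claims until a person has verified them.
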
Legal precedents and liability in AI-generated content

Currently, South Africa does not have comprehensive legislation governing the use of AI and generative language tools. Some countries are beginning to publish white papers and consider legislation, and legal precedents are emerging. If a liability issue arises, courts may consider factors such as –

  • the business’s level of control over the AI system;
  • the steps taken to ensure accuracy;
  • the transparency of the business regarding AI use;
  • compliance with data privacy regulations; and
  • whether the customer suffered harm or damages.

How might a customer suffer harm as a result of AI? A business’s responsibility here is no different to its responsibility when providing customers with any information, whatever the source. If a company is seen as an authority on a product or service offering, it has a responsibility to provide accurate information. Here is a theoretical example:

A retailer uses ChatGPT to handle online customer queries. A customer asks about the compatibility of a specific electronic device with their existing set-up. ChatGPT provides inaccurate information, and the customer purchases the device, only to find it is incompatible. They incur additional costs and inconvenience. The business might face potential liability due to –

  • inaccurate information provided by AI;
  • failure to properly review and verify AI responses; and
  • potential financial losses incurred by the customer.

The outcome of this case would depend on various factors, including the business’s efforts to ensure AI accuracy, its transparency with the customer, and the Consumer Protection Act 68 of 2008. There is currently a consultation underway in Europe on producer liability for digital products, but there is a lack of clarity on the way forward and this is unlikely to serve as a defence.

Manage risk, limit liability

As businesses increasingly integrate AI-generated content into operations, understanding and managing liability is essential. They need to balance AI’s potential for efficiency and customer engagement with its risks. Organisations need to raise internal awareness of accuracy, transparency, data privacy, legal compliance, and customer education in order to reduce their exposure to liability when using AI-generated content. They should review and verify content against a reliable source and be aware of potential biases. Attribution is helpful, such as: ‘This content was created by a generative AI tool with respect to [whatever product, service, or instruction applies].’
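
As a simple illustration of the attribution suggestion above, here is a short sketch, again in Python, of tagging AI-generated output before publication. The subject parameter is a hypothetical stand-in for whatever product, service, or instruction applies:

    def attribute_ai_content(text: str, subject: str) -> str:
        """Append a generative-AI attribution notice to content before publication."""
        notice = ("This content was created by a generative AI tool "
                  f"with respect to {subject}.")
        return f"{text}\n\n{notice}"

    # Example: tag an AI-drafted product description before it goes on the website.
    print(attribute_ai_content("The device supports mesh networking.", "our router range"))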

The landscape of AI-generated content liability is ever-evolving. Businesses looking to harness its benefits must take steps to minimise the potential legal challenges.

Simon Dippenaar BBusSci LLB PG Dip Legal Practice (UCT) is a legal practitioner at Simon Dippenaar and Associates Inc in Cape Town.

This article was first published in SA Lawyer in 2024 (January) DR 2.
