Gen AI Workplace Risk Assessment and Management for SMEs

Recent research shows that Generative AI is being used by 80% of small businesses in the UK (according to the GS10K Small Business Manifesto), while 46% currently leverage AI to optimise business performance. As adoption grows, the importance of Gen AI Workplace Risk Assessment and Management for SMEs cannot be overstated. Small businesses must ensure they are addressing data protection, ethical use, and AI compliance requirements under evolving regulations such as the EU AI Act. Developing a clear framework for responsible AI use in SMEs helps reduce operational risks, maintain employee trust, and ensure AI systems are applied safely and transparently.

While these figures demonstrate the courage and forward-thinking approach of SME owners, the GS10K Generation Growth Policy Series shows that they still have two main concerns when it comes to AI:

  • Lack of understanding and awareness
  • Data security and privacy concerns

With the government launching an AI skills training pilot for SMEs in May 2024, it is important both to allay fears and apprehensions and to fully understand the risks that adopting AI poses to your business.

Here we look at the areas of concern, suggest ways to mitigate the risks, and offer practical guidance to business owners looking to integrate generative AI into their businesses.

Intellectual Property (IP) Considerations

Generative AI can produce outputs that might be subject to IP rights, including copyright and database rights. However, the extent to which these outputs are protected under current UK law is unclear. For an output to be protected by copyright, it must be the “author’s own intellectual creation”. You should scrutinise the terms of use of any generative AI system to understand who owns the IP rights in its outputs. One key point to note is that an AI system cannot currently own copyright in its outputs (although UK law in this area may be under review).

Beyond ownership questions, outputs from generative AI systems could infringe third-party IP rights if the training data includes proprietary content used without permission. You need to assess the source of training data to avoid legal complications. This includes selecting AI systems that guarantee compliance with IP laws and providing clear guidelines for employees on acceptable use.

Data Protection and Cybersecurity

Large datasets used to train generative AI systems may contain personal data, raising concerns under the UK GDPR and the Data Protection Act 2018. Additionally, prompts used by employees might inadvertently include personal data. You should establish controls to minimise personal data processing and ensure compliance with relevant laws.
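As a purely illustrative example, the minimal sketch below shows one way such a control might look in practice: a simple pre-submission check that redacts obvious personal data (email addresses and UK-style mobile numbers in this hypothetical case) from prompt text before it is sent to any generative AI tool. The patterns, names, and scope are assumptions for illustration only, not a comprehensive or legally sufficient solution.

```python
import re

# Illustrative patterns only: a hypothetical, minimal set of checks.
# A real control would need much broader coverage and legal/DPO review.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_mobile": re.compile(r"\b07\d{3}\s?\d{6}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obvious personal data in a prompt with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@example.com; her mobile is 07700 900123."
    print(redact_prompt(raw))
    # -> Draft a reply to [EMAIL REDACTED]; her mobile is [UK_MOBILE REDACTED].
```

Even a basic check like this makes the policy concrete for staff: prompts are screened before they leave the business, rather than relying on memory alone.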

Interacting with generative AI tools can expose individuals and businesses to cybersecurity risks, such as malware from downloading AI software or mishandling of passwords. Implementing robust cybersecurity measures, including regular updates and password management protocols, is essential.

It is also key that your contractors and freelancers are fully trained in data protection risks when collecting and processing data on your behalf.

Confidentiality

There is a significant risk that confidential business information, such as trade secrets or client data, could be misused or lost if included in AI prompts. This information could become part of the training data, leading to substantial damage, including litigation and reputational harm. You should ensure that sensitive data is never included in AI prompts and establish strict confidentiality guidelines. Check carefully what information you are sharing, because anything entered into a shared system could be surfaced to other users of that system.

Discrimination and Bias

Generative AI can perpetuate or even exacerbate biases present in training data. Users must be vigilant about the potential for discriminatory or biased outputs. Implementing guidelines to prevent offensive or inappropriate content in prompts and conducting regular reviews of AI outputs can help mitigate these risks.

Inaccurate Outputs

Generative AI can produce outputs that are plausible but inaccurate or entirely fabricated, known as “hallucinations”. These can mislead business decisions and harm reputations. Users should enforce a rigorous review process to verify the accuracy of AI-generated content before relying on it.

Conducting an AI Audit

Businesses should audit their current use of generative AI to identify specific tools and applications, evaluate transparency, and assess legal, ethical, and reputational risks. Key questions include how AI is being used, the specific tools employed, and the current controls in place to manage associated risks.
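The sketch below (in Python, purely for illustration) shows one possible way to capture the answers to these audit questions in a simple, structured usage register. The record fields and the example entry are hypothetical assumptions rather than a prescribed format; a plain spreadsheet would serve just as well.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One row in a simple generative AI usage register (hypothetical format)."""
    tool: str                  # which AI tool or service is in use
    business_use: str          # how the tool is being used
    data_shared: str           # what categories of data go into prompts
    risks: list = field(default_factory=list)     # identified risks
    controls: list = field(default_factory=list)  # controls currently in place

# A single hypothetical entry, for illustration only.
register = [
    AIToolRecord(
        tool="General-purpose chatbot",
        business_use="Drafting first versions of marketing copy",
        data_shared="No personal or confidential data permitted in prompts",
        risks=["Inaccurate outputs", "Possible IP issues in generated text"],
        controls=["Human review before publication", "Staff prompt guidelines"],
    ),
]

for record in register:
    print(f"{record.tool}: {record.business_use} "
          f"({len(record.risks)} risks, {len(record.controls)} controls noted)")
```

Whatever the format, the point is to record each tool, its purpose, the data it sees, and the controls around it in one place, so gaps become visible.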

When considering future AI applications, you should evaluate potential benefits, appropriate tools, legal and ethical risks, and necessary controls. This foresight ensures that AI integration aligns with business goals while mitigating potential adverse impacts.

Establishing clear policies and guidelines on generative AI use is crucial. Business owners should define acceptable applications, permissible uses, and guidelines for input data. Opting out of allowing input data to be used for training purposes can help preserve proprietary rights.

Providing workforce training on the appropriate use of AI tools and implementing robust review processes for AI outputs can prevent misuse and ensure compliance with IP rights. Regularly updating policies to keep pace with technological advancements is also recommended.

Vendor Onboarding Considerations

When procuring third-party generative AI applications, you should define the tool’s purpose, desired outcomes, and associated legal risks. Ensuring contractual controls, compliance with internal policies, and thorough vendor due diligence are critical steps.

Users need a clear understanding of their rights concerning AI prompts, training data, and outputs. Knowing whether a tool operates as a closed system or contributes to a broader training model can influence risk management strategies.

Assessing Generative AI Risk to SMEs

Generative AI offers substantial opportunities for businesses in the UK to innovate and enhance operational efficiencies. By leveraging these technologies, you can unlock new levels of productivity and creativity. However, the integration of generative AI comes with significant risks that necessitate a proactive and informed approach to risk management. Protecting intellectual property, ensuring data protection, maintaining confidentiality, mitigating cybersecurity threats, and addressing potential biases and inaccuracies are crucial.

As generative AI continues to evolve, SMEs must remain vigilant and adaptable, continuously updating their policies and practices to navigate the dynamic landscape of AI technology. Balancing innovation with robust protection and awareness will enable businesses and individuals to harness the full potential of generative AI while safeguarding their business interests and maintaining trust with stakeholders.

We have created a Generative AI Risk Management Checklist for SMEs which will enable you to identify areas of concern for your business. Download your copy now. Our team are on hand to help you mitigate risks in areas you may uncover.