Worried About AI? Effective risk management is key for businesses
Depending on who you ask, AI is either a cure-all to supercharge businesses’ efficiency, an existential threat to humanity or (perhaps more realistically) somewhere in between.
With the growing number of AI tools on offer, the technology is not something businesses can afford to overlook. Many businesses are already using AI, and those that fail to do so could find themselves struggling to keep up. However, the rise of AI also brings significant risks that cannot simply be ignored.
In this article, we explain some of the opportunities AI offers businesses, the key risks that need to be considered and the practical steps you can take to manage those risks.
Key points for businesses to know
- AI has the potential to offer many benefits to businesses
- Failing to use AI could mean you are unable to compete with rivals who do
- But overinvesting in technology that fails to offer benefits could be costly
- If you do use AI, you must have the right oversight and risk management processes in place
- How fast AI will develop and what capabilities it may have in the near future are huge unknowns, so any speculation should be taken with a pinch of salt
- There is currently no legal requirement to disclose if you are using AI in your business (however, if you are asked, then being dishonest about it could be an issue)
- Businesses can specify in contracts and agreements that suppliers must not use AI (but this might be too restrictive)
- You should be very wary about relying on AI to get important details right, especially when it comes to contracts and any key factual information
- There are data security concerns that must be considered when deciding what information to share with AI tools
- AI also poses risk for businesses’ cybersecurity that must be guarded against
- Failing to stay on top of how AI is affecting your industry, and to make any necessary adaptations, could be fatal for your business
What opportunities does AI offer for businesses?
It is important to remember that AI tools can offer businesses a wide range of benefits. Some of the key potential advantages include:
- Making processes faster and more efficient
- Automating low-value tasks so employees can focus on higher-value work
- Allowing you to analyse large amounts of data that previously would have required too much time to be cost-effective
- Providing more personalised experiences for customers
- Offering near-instantaneous monitoring of large amounts of data
- Reducing mistakes by identifying or even eliminating the opportunity for human errors
There can also be many other benefits, depending on the needs of your business and the specific tools you are using.
What risks does AI pose for businesses and how can you manage them?
Rival businesses that use AI could outcompete you
When talking about the risks from AI, one of the first things to consider is the risk of not using the technology, or of failing to use it effectively. If your competitors make good use of AI, they may be able to cut their prices and/or offer a superior service. This could leave you unable to compete, costing you customers and possibly even causing your business to fail.
To manage this type of risk effectively, you will need to spend time assessing what AI tools have to offer your business and invest in training or hiring so you have the skills available to make the best use of the technology.
Wasting money on AI technology that goes nowhere
We seem to be in something of an AI gold rush, with many companies dashing to adopt the technology, while new tools and ‘AI gurus’ pop up left, right and centre. While some of these technologies and experts are likely to be beneficial, there is also a risk that businesses could end up spending a lot of time and money on things that do not add any real value to their operations.
Managing this type of risk is tricky. It can be very hard to know whether a tool or expert in an emerging field will be worth the money, since they are likely to have a limited track record. Having sound procurement processes, and being realistic that any money spent could be something of a gamble, can help to mitigate this risk.
Inaccurate AI-generated results leading to poor decisions
While AI can generally do things faster than people, that does not necessarily mean it can do them better or, in some cases, effectively at all. Without human oversight from someone with the right expertise, there is the danger that AI tools could produce work that is substandard or, worse, factually wrong. If information generated by AI is used to make key business decisions, those decisions could potentially be made on faulty grounds.
The key here is to make sure you do not lose the expertise in your organisation needed to provide proper oversight. Having people who understand what a piece of work is meant to look like and what potential problems could be caused by faulty work can help to avoid any AI-driven blunders.
Disputes from clients unhappy with your use of AI
Similar to the above point: if you are producing work for clients using AI, there is a danger that the work might not be good enough or could even be clearly wrong. If a client picks up on poor-quality work, this will likely undermine their trust in any case, but if they suspect the work was AI-generated, it could cause them to lose faith in your ability to meet their needs at all.
Again, as above, the answer here is to make sure any work being produced for clients is checked by someone with the relevant expertise to know if it is good enough or not. This can, ideally, allow you to benefit from the efficiency gains offered by AI while maintaining the quality of work you produce.
What you do not, currently, need to do is proactively disclose to clients that you are using AI. However, if a client asks whether you are using AI or what tools you are using more generally, you should not lie about it. You either need to tell them the truth or explain that you prefer not to disclose what specific tools you use.
Contractual requirements not to use AI
One issue that is starting to crop up is clients specifying that they do not want suppliers to use AI, particularly in reference to writing tools such as ChatGPT. If a client does state that they do not want you to use AI, then doing so could be considered a breach of contract, so it is important to take note of any such terms to avoid the risk of a dispute.
If you wish to limit a supplier’s use of AI, then you could consider making this a contractual requirement, but this might mean you are limiting a supplier’s ability to use AI in legitimate ways. This could, ultimately, mean they are not able to operate as efficiently, harming the quality of service you receive.
If you have been asked not to use AI or are considering specifying this for a supplier, you need to be clear about what the risks and benefits of AI are for your specific situation and be ready to have an open conversation with the other side about this.
Legal risks from poor quality contracts
For businesses looking to save on legal costs, getting AI to generate contracts might seem like an easy cost-cutting measure. The danger is that, at best, these contracts will be generic, without consideration for the specific needs of your business. At worst, an AI-generated contract could be wholly inappropriate, for example, relying on legal principles from a different country to the one you are in. This could mean the terms do not say what you think, do not provide the protection you require or are simply not enforceable.
This is one area where having human legal expertise is absolutely vital. While many of the terms of contracts can be fairly similar from business to business and situation to situation, knowing when these apply, when they need to be varied and what else needs to be included takes a high level of specialist knowledge and training.
Data security risks from sharing sensitive information with AI tools
One growing area of concern is what AI tools might do with any sensitive data that users input into them. Because these tools generally learn from the data they are fed, there is a risk that any sensitive information they are given could end up being shared with the wrong people, including those outside of your organisation. This could potentially put you in breach of your obligations under the Data Protection Act 2018.
To manage any data protection risks from AI, you need to have a clear policy around sharing sensitive data with AI tools. You should then make sure that anyone using the tools has appropriate training, so they know what information they can and cannot share.
Cybersecurity risks from hackers using AI tools
One of the biggest fears around AI is the potential for it to be used for nefarious purposes. AI-powered hacking tools could be used to overwhelm businesses’ cybersecurity measures, putting organisations at serious risk. Hackers could potentially bring business systems down and/or access sensitive information about the business, its employees and its customers.
Businesses will need to continue to be proactive about their cybersecurity measures, including making sure they have the right technology and people to minimise the risk from hackers. This is not an area where any business can be complacent, as cybercriminals can cause very serious harm.
AI could make your business redundant
Perhaps the issue many employees are most worried about is whether AI will take their jobs. This is also something businesses need to consider, as there is the possibility that AI could change industries so much that the service a business offers is no longer needed.
To stay ahead, businesses are going to need to keep on top of what AI tools are being used by rivals and customers, and how these are affecting their industries as a whole. Adopting AI tools and adapting your business offering may be essential to stay competitive. You might even need to pivot to serve different markets or offer alternative services. For some organisations, this may be the only option to survive.
Speak to us about your requirements
As with any type of risk management, you need to have the right framework in place, with proper governance, monitoring and reporting. Getting expert advice can help you to create an appropriate AI risk management framework and this is something our team can advise on, particularly in relation to the legal risks.
Whether you need help with managing the risks of AI or are looking for support with any other business legal matters, our highly experienced Company and Commercial Law team are here to help.
To discuss how we can help with any commercial law issues, please get in touch and we will be happy to advise.
Please note the contents of this article are given for information only and must not be relied upon. Legal advice should always be sought in relation to specific circumstances.