Ten top tips for SMEs using AI

29th January 2024

AI is currently a hot topic in the world of business. The launch of ChatGPT in late 2022 brought the idea of using AI at work to a much wider audience. Here are our top tips for SMEs who are considering whether AI technology can work for them.

1. AI can make your life easier, but be careful how

Before using AI in your business, make sure you have a real understanding of how it works and what it can and cannot do. No machine is ‘intelligent’ in the human sense, and these tools do not ‘learn’ as people do: they take in the information loaded into them and apply patterns from it back to you. Used well, AI can really free up employees to add value to your business by taking away routine tasks or getting projects started. Good examples are using AI to schedule social media, provide the first draft of a speech or deliver the outline of a project plan. The key is to see this as a first step and a way to free up your employees to do the detailed thinking and the complicated tasks. Always bear in mind tips two and three.

2. Consider confidentiality

Tempting as it may be to upload a set of sample documents to ChatGPT and have it write client pitches, proposals or advice papers, remember that machine learning works by amalgamating the information you upload. It is not a private or ring-fenced area, and anything you upload may be used to train the model and is effectively in the public domain. There is also almost no way to re-secure the information. It is important to have a policy on the use of AI, particularly ChatGPT, and to ensure that employees understand how its use may breach their confidentiality obligations to you.

If you do suffer a breach of this nature, depending on what has been uploaded, you will need to consider GDPR, reports to any regulator, damage to the client relationship and disciplinary action against the individual concerned. Prevention is far better than an attempted cure.

Different industries and businesses will have different risk profiles: where a business collects and processes certain types of personal data classed as “special category data”, a breach could be very serious, depending on the potential for harm to the data subjects.

3. Check you own the content produced, and it isn’t nonsense

Documents created using AI carry a high risk of drawing on authored content and breaching the IP rights of others. Journalist Rory Cellan-Jones was among a number of authors to recently discover poor-quality, AI-generated versions of their books for sale online – in his case, an AI version of his autobiography. If you create content using ChatGPT or other online tools, there is always the risk that you are using content owned by someone else, and that could bring IP enforcement issues. Cellan-Jones also found the rival book contained passages which were complete fiction. There is no quicker way to lose credibility with your clients than to serve them incorrect information that you cannot back up. Make sure work created using AI is clearly labelled as such internally and that you have quality control procedures in place where necessary.

4. Beware of discrimination

Many people are now familiar with the story of how Amazon had to discontinue an AI recruitment tool it built, because it learned to discriminate against women. The tool was trained on CVs submitted over a ten-year period, and those skewed heavily male. As a result, the tool downgraded mentions of women’s sports, teams and colleges. You may have outsourced decision-making to AI, but the liability when it goes wrong very much still rests with you.

5. It’s not a ‘plug and go’ solution

We have all probably experienced the web pop-up box that offers live ‘chat’ to sort out any problems we’re having. These are often now powered by AI, at least until the bot gives up and connects you to the call centre. DPD were in the news this month when their chatbot went a bit rogue. A frustrated customer trying to track down a parcel managed to make it swear at him, call itself useless and even write a poem about how awful the company was. Particularly for SMEs, consider whether your clients would rather wait longer for a human answer, and always ensure that you have staff assigned to oversee such services.

6. Be very careful when using AI for decision-making

With the increase in records being kept online and orders placed electronically, whether by business customers or individuals, there is a temptation to rely on AI to search those records and provide “decisions” as to creditworthiness, legitimacy of company ID and trading or order history. While the ability of AI to plough through large volumes of records and search or categorise them is remarkable, businesses should be careful not to rely on automated decisions alone. Instead, review AI-generated results to ensure they are accurate, taking into account the “human element” which may affect records. Data subjects can appeal against automated decision-making if it is inaccurate, and request that a human review the decision. If a business does rely on AI for evaluations, it should clearly state this in its privacy policy.

7. Fairness in employment

The GDPR rules also have implications for use of AI in workforce management. AI can be a useful tool to track data and trends across a workforce, but be careful when it comes to applying that learning to an individual. No employee should be subject to a dismissal, a decision on a flexible working application, an answer to a request for a disability adjustment or any other similar process by algorithm. Make sure that a human being uses the tools to guide them, but makes the decision themselves.

8. Personal Data and AI: protecting human rights

AI’s advantage lies in the volume of data it can analyse, but unless “ethical” safeguarding measures are designed in, AI does not distinguish between personal data and other data. As a result, AI will likely process personal data alongside non-personal data without checking for the usual data protection requirements – and data privacy is a human right intended to give individuals the ability to protect their private information. Data protection laws apply to information gathered by means of AI, and the company using the AI is the responsible entity. Before using AI, businesses should ensure that there are data protection safeguards built into the software to anonymise any personal data found – even if it is publicly available – to mitigate the risk of misuse.

9. Ensure your employees are trained to understand AI

Given the potential for unreliable decisions or results, employees should be trained on AI and on data protection, confidentiality and content ownership. This will ensure that employees are aware and equipped to question results “when they seem strange”, and will provide a safeguard against bias, discrimination, reliance on wrong information, or the publication of confidential information or copyrighted works. All employees should also be aware of the principle of “data minimisation”, which requires that only strictly necessary personal data be retained and processed.

10. Ensure technical and organisational measures to manage the AI

Give access to AI software only to those employees who need it – especially where that software includes personal data that the business has collected and needs to process. Consider using AI tools only where they have built-in encryption of personal data – or all data – to guard against unnecessary breach risks.

Check that your AI software has been designed and developed with data protection compliance at its core. Data minimisation should be observed, and any general data-scraping prevented. You should also check where the AI server is based and where any personal data might be stored when it is processed using AI software: some territories are not considered to provide adequate protection under UK data protection law. The same data protection laws need to be considered in the use of AI as in any other aspect of business functions.