As we mark April Fool’s Day and look ahead to the Easter break, like many I am looking forward to an overseas holiday.

Those who know me know that I like to spend as much time as I can over in France (just don’t get me started on the rolling 90-days-in-180-days rule), and for the last few years I have often used the AI-supported chat functionality on the Ryanair site, with differing results, it has to be said. But with the increasing speed of adoption of AI-augmented communication tools, it is clear to me that these will soon be an everyday feature of many sectors, including our own sector of Professional Services.

Within the UK, the Government has just published its first White Paper on AI, aimed at driving responsible innovation and maintaining public trust in this revolutionary technology.

It sets out five key areas of focus:

  • safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
  • fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
  • accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

Recently, as a team, MAP has been discussing the growing use of AI as a business tool and the opportunities, and risks, this creates for our clients and for us as their accountants.

On the face of it, Artificial Intelligence tools such as ChatGPT have their benefits and attractions, particularly around customer contact and queries. But we have to ask what happens when the bot suggests an action, or proposes an outcome, that a qualified accountant would not, based perhaps on knowledge of current or future legislation, or the specific intricacies of a given client.

This becomes a more significant issue in a regulated sector such as Accountancy, where it raises the question of what constitutes advice as compared to opinion.

It is entirely reasonable to assume that the client treats any response received from their accountant as considered professional advice, when in an AI-supported situation it is more likely to be simply what the tool has learned from previous interactions. This is all the more dangerous given the global scale of AI tools and the many differing financial laws and rules that apply in territories across the globe.

There is also logic, and some evidence, suggesting that these tools may develop a bias towards the questions asked most often, which means the validity of a response depends on the legitimacy of the questions that shaped it.

We know that, just like any tech you may have chosen to adopt, or that has been thrust upon you in what is now an “always online” landscape for your business, AI will get more complicated and, hopefully, better as a result. But my word of caution in the meantime: remember that in many situations within the UK, taxation being a prime example, the responsibility lies with you as the individual or company director.

Now, it may be too early in the AI product lifecycle, but I am fascinated to see the outcome of the first case where the defence is “The bot told me to do it”.

Stuart Brown