Artificial Intelligence in Employment
Mandy Armstrong, Employment Lawyer at Anderson Strathern LLP
I’m going to speak to you tonight about artificial intelligence in employment. The reason that I’ve chosen to talk about this topic is that it is in the news so much at the moment. I know certainly in terms of my own workplace, we are talking about how we can use AI to help us going forward, and certainly within the world of work, given the prevalence now of hybrid working and of remote working, the use of AI at work seems set to expand. From an employment lawyer’s perspective, one aspect that’s really interesting is the fact that our employment legislation was not drafted with this kind of working environment in mind, and that will present unique challenges for all of us who work within HR going forward. In terms of what I would like to cover tonight, I’m going to really briefly look at what AI is, how it is being used in the workplace at the moment, what the main employment law risks are, because I’m an employment lawyer so I look at everything based on risk, and what future regulation of the use of AI in work might look like.
So what is AI? When we’re talking about AI, we essentially mean computer systems that are capable of performing tasks that historically we would’ve thought of as requiring human intervention. It is already being used at work: the Department for Culture, Media and Sport reported adoption figures in January 2022, and those figures will have increased by now in 2023. In terms of how it’s being used in work, it has huge potential uses in relation to recruitment. AI can be used to help draft job specifications. We can use chatbots and virtual assistants to even ask candidates preliminary questions. It can be used in loads of other areas of work, from performance management to allocation of tasks to tracking of employees. In relation to redundancy, it can be used to select redundancy criteria, and it can be used to analyse pools of employees against those criteria.
Chatbots like ChatGPT, or generative AI as it’s known, can also have huge uses in the workplace. Chatbots can draft presentations for us, they can draft LinkedIn posts, they can draft emails, they can draft documents. But what are the risks? From an employment lawyer’s perspective, one of the biggest risks in relation to the use of AI in work is discrimination. That’s because we’re removing the human element, or the common sense element, of decision-making when we use AI technology. Amazon reported a few years ago an issue that it had with an algorithm it was using in relation to recruitment. The algorithm, based on the recruitment data that it was analysing, developed a practice of preferring applications from male candidates over female candidates, so it was penalising applications if they included words like “woman”, “girl”, or “female”, which is not great. It’s not what you want.
Deliveroo’s algorithm, Frank, has caused problems for them. Frank is the algorithm that determines which workers will be allocated which work, and Frank had developed a practice of preferring only those workers who could deliver at peak times. Obviously Frank, as an algorithm, was not able to look behind the reasons why: are there childcare issues there, are there disability issues, are there religion issues? So there are indirect discrimination risks in relation to good old Frank. Uber Eats is also in the news at the moment. It’s been sued in the East London Employment Tribunal by an employee who claims that its facial recognition software is racist. This employee was automatically suspended from work after the facial recognition software, which Uber Eats requires its drivers to use every morning, couldn’t recognise his skin tone and reported to Uber Eats that he was not the employee. He’s suing them at the moment in the tribunal.
There are other risks as well. There are unfair dismissal issues and constructive dismissal issues if we are relying on software where we can’t guarantee that that software has been fed the correct data in the first place. If we are relying on results from software that has been fed incorrect data, it won’t be reasonable to rely on those results to justify particular decisions, like dismissal decisions. Similarly, managers who are using AI technology have to be able to understand that technology, analyse that technology, and be able to explain to employees the rationale behind the decisions reached by that technology.
As well as discrimination, unfair dismissal, and constructive dismissal risks, there will also be data protection issues. The use of AI technology in relation to employees will involve data profiling, and it will involve employee monitoring. Just this week the ICO published new guidance on the monitoring of employees at work, so this is going to be a topical issue going forward. What about future regulation of AI? At the moment in the UK we’re really behind the curve: there is nothing to regulate how AI is used in the workplace. The Trades Union Congress announced just at the start of September that they were launching an AI taskforce which will look at drafting legislation to protect workers’ rights in relation to the use of AI at work. That legislation will be the Employment and AI Bill, which looks like it will be drafted in early 2024, and it will do various things including, we think, amending the Equality Act to protect against discriminatory algorithms like Frank.
See Mandy delivering her presentation at DisruptHR Aberdeen here >>>> https://vimeo.com/876137573
More talks from DisruptHR Aberdeen here >>>> https://hunteradams.co.uk/blog/disrupthr-aberdeen/