Opinion

RACHEL POTTER and VANESSA JACKLIN-LEVIN: Unfair discrimination and breach of human rights by AI

Picture: 123RF/PESHKOVA

The recent US court decision in Mobley v Workday is a sign of things to come in SA as artificial intelligence (AI) is being used to make decisions affecting people’s fundamental rights.

The absence of legislation or regulation in the country does not mean companies creating, selling and using AI software in SA are immune to legal action. In Mobley v Workday a district court in California ruled that AI service providers, such as cloud-based human capital software vendor Workday, could be directly liable for employment discrimination under federal anti-discrimination laws.

Derek Mobley, an African-American man over 40 years old with anxiety and depression, alleged that Workday’s AI-powered hiring tools discriminated against him and other job applicants based on race, age and disability. He had applied for more than 100 positions through companies using Workday’s platform since 2017 and was consistently rejected. 

The court ruling is not final and merely allows the case to move to the discovery phase, where both parties will gather more evidence. This is similar to the exception phase or the class action certification phase in the SA context — the court rules that there is a legal basis for liability, but the plaintiff still has to prove a claim on the facts against the specific defendant in question.

‘Agents’ of employers

Workday provides a broad suite of “human resource management services”, including providing its customers with a platform on the customer’s website to collect, process and screen job applications. The software in dispute in this case is an algorithmic decision-making tool used for applicant screening in the hiring process. Mobley claimed the tools provided by Workday disproportionately rejected applicants who were African American, over 40 or disabled.

He claimed the automated nature of the rejections, often occurring at late or early hours, indicated that the decisions were made by Workday’s AI-driven tools rather than human evaluators. The court accepted the argument that AI vendors/suppliers can be considered “agents” of employers, and could therefore fall within the definition of an employer under the relevant employment discrimination legislation. This means if an AI tool used by an employer discriminates against job applicants, the vendor/supplier providing that tool could be held directly responsible.

The court emphasised that Workday’s software was not merely following employers’ instructions but qualified as an agent because its tools are alleged to perform a traditional hiring function — rejecting candidates at the screening stage and recommending who to advance to subsequent stages — through the use of AI and machine learning.

In the SA context 

While there is no law specifically regulating AI in SA, the department of communications & digital technologies released an AI Policy Framework in August. The policy framework is broad, and the country is still far from implementing AI legislation and regulations. One of the strategic pillars of the AI policy laid out in the framework is fairness and mitigating bias, which includes human control of technology (a human-centred approach in AI systems); human-in-the-loop systems (ensuring critical AI decisions involve human oversight, especially in generative AI); and decision-making frameworks (developing frameworks for AI decision-making that prioritise human judgment).

SA, and indeed the world, is alive to the importance of human oversight in critical decision-making functions of AI, particularly those that affect fundamental human rights. In the EU, legislators signed the AI Act in June and it entered into force in August. The new law adopts a risk-based approach and classifies AI systems into several risk categories, with different degrees of regulation applying.

Outright prohibited AI practices include biometric categorisation systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation (except for lawful labelling or filtering for law enforcement purposes); and AI systems evaluating or classifying individuals or groups based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in unrelated contexts, or treatment that is unjustified or disproportionate to their behaviour.

As noted, the absence of legislation or regulation in SA does not mean companies creating, selling and using AI software in the country are immune to legal action. Our courts are entitled to develop the law to apply the rights contained in the bill of rights to individuals or companies, not only the state and public entities, if there is inadequate protection of a right by legislation.

Individuals may seek the protection of the courts where they are deprived of their rights to equality, dignity, privacy, housing, healthcare, food, water and social security, among others, by decision-making left to AI software, with or without human oversight. One approach may be to extend the law of vicarious liability (where an employer is strictly liable for the negligent or intentional conduct of its employees acting in the course and scope of their employment) or the law of agency to hold companies that use or sell AI software liable for the decisions, acts or omissions of that software.

Amid all the uncertainty in this space, one thing is certain: litigation is coming.

• Potter is a senior associate, and Jacklin-Levin a partner, at Bowmans. 
