The past week was a big one for artificial intelligence (AI), not because of a breakthrough in the technology or a new company launching. Rather, the good governance of AI was given a huge boost in the form of the first global standard on AI ethics.
A Unesco project presented to the world by director-general Audrey Azoulay on November 25, the framework has now officially been adopted by all 193 member states, including SA as well as Russia and China — the two countries generally considered to be the rebellious kids lurking at the back of the school bus in this regard. Notably, the US — where many of the world’s biggest tech companies are based — is not a member, though this may change under the Biden administration. Nor is Israel, which has a booming tech and innovation start-up scene.
The framework is the result of three years’ work, initiated by Unesco and incorporating input from multidisciplinary AI and ethics experts as well as public comment. From this point, member countries must take the “recommendation on the ethics of artificial intelligence” — and the values and principles therein — and build their own legal and regulatory infrastructure to help direct the development of AI technologies within their jurisdictions. These are expected to include ethical impact assessments and “strong enforcement mechanisms and remedial actions” as laid out in the framework.
The framework also covers the protection of personal data, the banning of social scoring and mass surveillance measures, and the need for countries to consider AI’s role in environmental matters such as carbon footprint and energy consumption.
AI ethics has been bubbling up in the zeitgeist for a while. Companies such as Dell and IBM, among many others, have convened meetings and appointed oversight committees on the matter because of the immense potential and the social challenges these tools bring. But this move is exciting because of its sheer scope and reach: one framework to rule them all, so to speak.
AI is exciting. It’s bigger than TikTok and much more meaningful than a billionaire in space — and was used in achieving both of those. But an AI tool designed to enhance security, such as facial recognition, can just as easily be used to deny or infringe on human rights. That’s not a statement of speculative possibility, but a reality in the here and now.
Unless we work in the field, we tend to think of AI as a magical (read: inscrutable) future technology, but it is already being used all around us and in ways that directly affect us. It is already in your phone — in the form of Siri or similar digital assistants — and shaping your social feeds and the adverts you see. Some companies use AI to determine your eligibility for a loan and the interest rate you are offered, or to screen their incoming CVs and job applications. It is being deployed in processing university and college admissions, and in diagnosing health issues and diseases.
Human bias
The rhetoric about technology and computer-based decision-making has created a pervasive fiction that when you leave a decision to software you’re being dealt with by a non-biased, rational engine. The connotations of the word “machine” reinforce this. When someone is being cold, we accuse them of being machine-like — the opposite of a feeling, thinking being.
But now we have inserted a kind of thinking into our machines, and to enable this we have trained the systems on data sets and given them a set of processes to follow. What is in those data sets and how we frame and weight those processes determines an outcome. That is how human bias is installed in supposedly non-biased machines.
Say, for example, you have a data set of people who have previously defaulted on their loans, and you tell your AI system to price that risk into the interest rates offered to new applicants. This sounds relatively reasonable at first glance, but it can mean excluding creditworthy applicants based on their current address or how “ethnic” (read: non-white) their names sound.
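To make that mechanism concrete, here is a deliberately toy sketch in Python. Every postcode, figure and variable name in it is invented for illustration, and no real lender works this crudely, but it shows the proxy effect: the model never sees race, yet the address carries it in anyway.

# A toy illustration of proxy bias in loan pricing. All names,
# postcodes and figures are hypothetical.
from collections import defaultdict

# Historical records of (postcode, defaulted). If past lending was
# itself biased, some postcodes are over-represented among defaults.
history = [
    ("2001", True), ("2001", True), ("2001", False),
    ("8001", False), ("8001", False), ("8001", True),
]

# "Training" here is just a per-postcode default rate, a crude
# stand-in for what a real model learns from correlated features.
counts = defaultdict(lambda: [0, 0])  # postcode -> [defaults, total]
for postcode, defaulted in history:
    counts[postcode][0] += int(defaulted)
    counts[postcode][1] += 1

BASE_RATE = 0.10  # 10% base interest rate (hypothetical)

def quoted_rate(postcode):
    defaults, total = counts.get(postcode, (0, 1))
    risk = defaults / total
    return BASE_RATE + 0.15 * risk  # premium scales with learned "risk"

# Two equally creditworthy applicants are quoted different rates
# purely because of where they live.
print(quoted_rate("2001"))  # 0.20
print(quoted_rate("8001"))  # 0.15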
Social scoring
For example, social scoring has been used in China to assign citizens a trustworthiness score, akin to a credit score. The exact variables are unknown — which is the first transgression, as ethical AI requires transparency — but media and activists have reported instances of people being blacklisted from public transport or having their internet speeds throttled because of the cumulative effect of their “untrustworthy” behaviours, which can include everything from tax avoidance to buying “too many” video games.
While I’m tempted to endorse a system that limits the rights of people who refuse vaccinations without good reason, you can see how quickly and easily such systems can be abused — and why we need checks and balances to guard against this.
Many studies (seriously, loads and loads of studies) have shown that algorithms can be racist, sexist and otherwise discriminatory, and these problems tend to deepen over time: a system with a minor inbuilt bias favouring men tends to become increasingly biased in that direction as it runs.
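A deliberately simple simulation shows how such a tilt compounds. This is not drawn from any of those studies; it is a hypothetical feedback loop, with made-up numbers, in which a screening system’s own approvals become its next round of training data:

# A toy feedback loop, not drawn from any cited study: a screening
# system starts with a small tilt towards men, its approvals become
# the next round's training data, and the tilt compounds.
approval = {"men": 0.52, "women": 0.48}  # hypothetical initial skew

for round_no in range(1, 6):
    total = approval["men"] + approval["women"]
    # Retraining on its own approvals nudges the model further
    # towards whichever group it already favours.
    approval["men"] = approval["men"] / total * 1.05
    approval["women"] = approval["women"] / total * 0.95
    print(round_no, round(approval["men"], 3), round(approval["women"], 3))

# The gap widens every round even though nobody "told" the system
# to discriminate; the skew in its own output feeds the next model.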
Take the rage you are feeling at how the Global North responded to our Omicron news, and you will quickly see why SA and our peers need to be concerned about systems built on unexamined biases — and why this framework is a glimmer of hope as we move towards a future in which AI is ubiquitous.
• Thompson Davy, a freelance journalist, is an impactAFRICA fellow and WanaData member.