BIG READ: As remote workers and robots find each other, remember the dark side of AI

Ismail Lagardien explores how artificial intelligence still acts only on instructions fed into it by humans, with all their biases and foibles

Picture: 123RF/FEODORA52

People around the world will continue to return to their jobs in varying arrangements as the Covid-19 pandemic becomes endemic, or is at least brought under better control. One of the unforeseen outcomes of successive lockdowns and working from home is that many people are reconsidering whether to return to the physical workplaces of their existing jobs. Some have become comfortable working from home, others less so, and yet others are weighing their chances in the gig economy.

All the while, people in information and communications technology (ICT), especially those in artificial intelligence (AI), have become more inventive and creative with ways to reshape and reorganise the way we work. Teaching, learning and corporate meetings have moved online, with platforms such as Zoom and Microsoft Teams providing avenues for exchange between colleagues, or between teachers and learners.

Viewed together, this is a vast and exceedingly exciting area of research and innovation that holds significant challenges for the future of work — a common refrain among global public policymakers. There are, however, significant drawbacks, especially when it comes to the recruitment and placement of candidates. This is where alarm bells have sounded, and where there has already been considerable fallout.

We can look at this problem of selection and placement of candidates from the top, but first step back 25-30 years or so. In the 1990s — the decade of globalisation — large institutions around the world, from the best universities to the UN, received thousands of applications for specific jobs or places. At universities such as the London School of Economics (my alma mater) or Oxbridge, the people who made decisions about admissions were quite literally handed large piles of applications — in some cases upwards of 5,000 from around the world. The UN system, which includes the World Bank (one of my former employers) and the IMF, faces similar floods of applications for limited places or positions.

If we consider, as an example, the World Bank’s Young Professionals Programme — a two-year leadership development programme at the start of a five-year employment contract — the institution has recruited at least 1,800 people from about 120 countries over a little more than 50 years. If we round the recruitment over this period up to 2,000 (if only to demonstrate the enormity of the selection process), that amounts to about 40 people a year over the past 50 years. Now compare that with the (conservative) estimate of 5,000 applications each year — an acceptance rate of less than 1%.

It is just as absurd within countries, especially poor countries with high levels of unemployment. For example, in a statement posted online in April the SA Revenue Service said it had received 88,009 applications for only about 500 actual placements.

Cash cows

In these institutions applications have historically been considered by humans. By the 1990s, the period during which I first encountered this selection and placement process, institutions had strict criteria for selection and placement — that is, if they did not simply take the first 1,000 applications from the top of the pile and ignore the rest.

Most British educational institutions preferred graduate students from outside the UK or EU; “Americans and Asians”, it was said, were the “cash cows” because they paid full fees, while students from the UK and EU paid “home fees”. That was one hurdle that was fairly easy to get past.

At the World Bank the criteria were a lot more comprehensive. It helped if you were from a developing country, had a PhD (preferably) or a master’s degree, were fluent in English and spoke a second language, preferably French, Russian, Spanish or Arabic (and, I would assume, these days Chinese). In all these cases humans made the selection decisions. There were also “mild” forms of nepotism and cronyism, where the applications of certain candidates were simply placed at the front of the queue.

In SA, where there is a debilitating skills shortage, the private sector has to turn away hundreds, possibly thousands, of candidates simply because there are too few actual jobs. As for jobs in the public service, there are variables at work that have to do with the governing party or “transformation” — an entirely different conversation. Again, in all cases humans made the decisions, even if not always for the right reasons.

Enter selection by AI. The significant scientific advances made in AI, as applied to recruitment, have by and large taken decision-making out of the hands of humans (who, it is assumed, have biases and prejudices) and transferred it to computers, which are presumed to be more objective. But that is all in theory.

Chess grandmaster Garry Kasparov ponders a move during the final match against chess supercomputer Deep Junior in the Man vs. Machine chess championship on February 7 2003 in New York City. Picture: MARIO TAMA/GETTY IMAGES

Outperform

The better-known (and somewhat brief) history of AI may be traced back to the early 20th century, notably the humanoid robot that impersonated Maria in the 1927 film Metropolis. In 1950 the British polymath Alan Turing explored the mathematical possibility of what may rightfully be described as AI in his paper “Computing Machinery and Intelligence”.

The “birth” of AI is often tagged to a 1956 conference at Dartmouth College, where the computer program the Logic Theorist demonstrated that, under the right conditions, it could outperform human mathematicians. It went on to prove theorems from Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, in some cases finding more elegant proofs than the originals.

After 1956, and for at least the next decade, a question kept popping up: can computers out-think humans? Fast forward to 1970, when optimism reached its peak and the cognitive and computer scientist Marvin Minsky suggested that within 10 years “we will have a machine with the general intelligence of an average human being”.

During the 1980s there was serious investment in, and rapid development of, computer processing and of what we now know as AI, albeit with relatively little hype or excitement, until the decade of globalisation. In the 1990s and beyond, AI took off, as it were. In 1997 IBM’s chess-playing computer Deep Blue beat the reigning world champion and grandmaster Garry Kasparov.

Fast forward to this, the third decade of the new century, and you can hardly have a conversation about work without hearing about AI and robotics. We are still a long way from computers passing what is referred to as the Turing test, let alone reaching the so-called singularity — the point at which computers start making decisions for themselves, without human coding or instruction, and we may no longer be able to control them — yet in application AI has already begun to show its dark side.

Face recognition

In 2018, at a conference on fairness, accountability and transparency, researchers from the Massachusetts Institute of Technology (MIT) and Stanford University found that commercially marketed facial analysis programs produced by major technology companies showed significant skin-type and gender biases. Based on statistical evidence, the researchers showed that the programs misclassified darker-skinned women far more often than lighter-skinned men.

In February 2018, MIT explained that “the findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. According to the paper, researchers at a major US technology company claimed an accuracy rate of more than 97% for a face recognition system they had designed. But the data set used to assess its performance was more than 77% male and more than 83% white.”
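
To see how this plays out, consider a minimal sketch in Python — with counts invented purely for illustration, not taken from the study — of how a reassuring overall accuracy figure can conceal a very different picture once the same results are broken down by group:

from collections import defaultdict

# Hypothetical evaluation records: (group, was the prediction correct?)
# The counts below are invented for illustration only.
records = (
    [("lighter-skinned men", True)] * 820 + [("lighter-skinned men", False)] * 5 +
    [("darker-skinned women", True)] * 80 + [("darker-skinned women", False)] * 40
)

# Overall accuracy looks impressive because the test set is heavily skewed.
overall = sum(correct for _, correct in records) / len(records)
print(f"Overall accuracy: {overall:.1%}")

# Breaking the same results down by group tells a different story.
by_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, correct in records:
    by_group[group][0] += correct
    by_group[group][1] += 1

for group, (hits, total) in by_group.items():
    print(f"{group}: {hits / total:.1%} accuracy on {total} faces")

When the evaluation data is dominated by one group, that group’s performance drowns out everyone else’s — which is precisely the concern raised about how these systems are trained and assessed.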

“What’s really important here is the method and how that method applies to other applications,” explained Joy Buolamwini, a researcher in the Media Lab’s Civic Media group at MIT, and first author on the new paper. “The same data-centric techniques that can be used to try to determine somebody’s gender are also used to identify a person when you’re looking for a criminal suspect or to unlock your phone. And it’s not just about computer vision. I’m really hopeful that this will spur more work into looking at [other] disparities.”

Buolamwini’s curiosity had been piqued a few years before she published the paper with Timnit Gebru, while she was working on a system she called Upbeat Walls — an interactive, multimedia art installation that allowed users to control colourful patterns projected on a reflective surface by moving their heads.

To track the user’s movements, the system used a commercial facial analysis program. The team that worked on the project was ethnically diverse, but when it came time to present the device in public the researchers found they had to rely on one of the lighter-skinned team members to demonstrate it: the system just did not seem to work reliably with darker-skinned users. Buolamwini started submitting photos of herself to commercial facial recognition programs and found they often failed to recognise the photos as featuring a human face at all. And when they did detect a face, they consistently misclassified her gender.

Bias in AI

In December 2020 Timnit Gebru, the Ethiopian-American computer scientist who worked on algorithmic bias among other topics, made headline news when she left her job at Google, where she was technical co-lead of the ethical AI team. While the issue is mired in accusations and denials by the company (as may be expected), The New York Times reported on December 3 2020 that Gebru “was fired by the company after criticising its approach to minority hiring and the biases built into today’s AI systems”.

In an email message which The New York Times had seen, Gebru expressed exasperation over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in AI.

“Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything,” Gebru reportedly said.

While the company made no official statement, according to The New York Times Jeff Dean, who oversees Google’s AI work, referred to Gebru’s departure as “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company”.

Former Google AI research scientist Timnit Gebru. Picture: KIMBERLY WHITE/GETTY IMAGES FOR TECHCRUNCH

The issue has swirled and swelled, and more and more questions are being asked about the ethics of AI and its application. These days, when many companies and institutions encourage online applications, humans no longer have to plough through 1,000 applications for 10 or even 100 places on a graduate course, or tens of thousands of applications in the private or state sector.

Computers (AI) are expected to shake applicants who “do not fit the criteria” out of the system. But — and here is the kicker — we should not gloss over the fact that computers act only on the instructions that are fed into them. As such, it should come as no surprise that the foibles of humans can be fed into the mathematical models on which AI runs.
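
A crude, entirely hypothetical sketch (the records, universities and scoring rule below are invented) shows how that happens: a screener “trained” on nothing more than past human hiring decisions simply replays whatever pattern — fair or otherwise — those decisions contained.

from collections import defaultdict

# Hypothetical history of human hiring decisions: (candidate's university, hired?)
history = [
    ("University A", True), ("University A", True), ("University A", False),
    ("University B", True), ("University B", False), ("University B", False),
    ("University B", False),
]

# "Training" here is nothing more than memorising past hire rates.
counts = defaultdict(lambda: [0, 0])  # university -> [hires, applicants]
for university, hired in history:
    counts[university][0] += hired
    counts[university][1] += 1
hire_rate = {u: hires / total for u, (hires, total) in counts.items()}

# New applicants are scored by replaying the old pattern, bias included.
for applicant_university in ("University A", "University B"):
    print(f"{applicant_university}: screening score {hire_rate[applicant_university]:.2f}")

If the historical decisions favoured one group for reasons that had nothing to do with merit, the automated scores will faithfully reproduce that preference — only now with a veneer of objectivity.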

As for the return to a postpandemic workplace, research by Microsoft among 30,000 employees across the world found that “70% of workers want flexible remote work options to continue [and] 66% of business decision-makers are considering redesigning physical spaces to better accommodate hybrid work”.

We may have reached a point where, between the pressures of a postpandemic workplace — what the BBC described as the coming “great resignation”, and “hybrid” working — and the biases embedded in recruiting before the pandemic, it becomes difficult to imagine that amoral algorithms, loaded with humanity’s foibles, will make any significant difference in attracting new talent.

For a long time during the early 2000s recruiting officers scanned applications looking for phrases such as “project completed”, “adaptability” or “willing to relocate”. With job applications increasingly moving online, algorithms may also draw on the information and “footprints” we leave on social media, including our spending habits. All told, it is difficult to shake the memory of that marvellous line in the 1958 novel The Leopard by Giuseppe Tomasi di Lampedusa: “For things to stay as they are, everything must change.”

• Ismail Lagardien has worked in the Office of the Chief Economist of the World Bank, in the Secretariat of the National Planning Commission and was Dean of Business and Economics Sciences at Nelson Mandela University.
