The Ethical Mirror of A.I. Algorithms
The speed at which Artificial Intelligence programs have entered the mainstream of late is staggering. From image generators like Midjourney and Lensa, to the recent release of ChatGPT, an AI-driven text generator with remarkable abilities already (such as the near-instantaneous composition of a hilarious song about the dangers of marrying an Australian), the implications of what it means to use these programs are only just beginning to dawn on us.
While the specific ethics of using these programs are still very much in development (and debate), as I addressed in this recent article, the broader challenges that this kind of technology presents must also be examined. Let’s look at a few examples.
A Picture Is Worth A Thousand Biases
I did an experiment with Midjourney to produce basic images around certain jobs. For example, I asked it to produce images of a doctor. These were the results:
Then I asked it to produce images of a nurse, with these results:
So then I tried producing images of a CEO:
And, as you might have guessed, I then prompted it to produce images of a secretary:
I even went so far as to pick something more generic, choosing “leadership”:
Now, it doesn’t take a genius to see some pretty troubling trends. First, every image set falls prey to explicit gender biases, with all positions considered powerful held by men and all roles deemed subordinate held by women. I re-rolled these prompts several times with no meaningful change in the outcome.
Another glaring pattern is that every single person pictured is white. You might argue that, technically speaking, one of the “leadership” images gave the person the head of a giraffe, and therefore we cannot presume the gender or race, but consider that it gave an animal a position of power before it did a woman or a person of colour.
The fact is, we could unpack many layers of the explicit biases presented here, from ableism to objectification (what is with those pin-up secretaries?). In the end, it is clear that without intentional redirection, this program will perpetuate stereotypes, biases, and patterns of discrimination.
Why does it do this? The simple answer is that it is mirroring back at us what we taught it. A program like Midjourney learns by studying the images and related information that we put out into the world. It cannot make a moral or ethical choice; it simply shows us ourselves, perhaps in a slightly amplified way.
Lost In Translation
Image generation isn’t the only place this problem presents itself. It is happening anywhere algorithms are used whose information base has been built on the data that already exists out in the world.
In another experiment, I went over to Google Translate, where you can get instant and fairly decent translations of words, phrases, and more from dozens of languages. Having heard about a study that did the same thing, I asked it to translate phrases from English into Malay. I chose Malay because it is a language that uses gender-neutral pronouns.
The first phrase I prompted was “She is a doctor”, pictured here:
I then took the Malay translation and converted it back into English. Lo and behold, it translated it back as “He is a doctor”, pictured here:
The algorithm defaulted back to male pronouns. You might argue that, while problematic, maybe it always defaults to male pronouns regardless. So, I entered the phrase “He is a nurse”, pictured here:
And when I translated it back into English, it was rendered, “She is a nurse”, pictured here:
Clearly, the program didn’t have a neutral default but had learned an obvious assumption based on the job in question, reinforcing those existing social biases.
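For anyone who wants to reproduce this round trip programmatically rather than in the browser, here is a minimal sketch. It assumes the official google-cloud-translate Python package (v2 client) and configured Google Cloud credentials; the phrases and the “ms” language code for Malay are the only inputs.

```python
# Minimal sketch of the round-trip experiment, assuming the
# google-cloud-translate package (v2 client) and valid credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()
phrases = ["She is a doctor", "He is a nurse"]

for phrase in phrases:
    # English -> Malay ("ms"), a language with gender-neutral pronouns
    to_malay = client.translate(phrase, source_language="en", target_language="ms")
    malay_text = to_malay["translatedText"]

    # Malay -> English, to see which pronoun the algorithm assumes
    back = client.translate(malay_text, source_language="ms", target_language="en")
    print(f"{phrase!r} -> {malay_text!r} -> {back['translatedText']!r}")
```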
The Dangers of Reliance on Algorithms
Several years ago, Amazon scrapped a project in which they had secretly been using an AI recruiting tool to review job applications and narrow down the pool, making it easier to choose potential employees. They had little choice but to drop the program when they discovered that the AI just didn’t seem to like women.
Because the AI was trained on a decade of previous job applications and hiring decisions, it began to score candidates according to the patterns it had learned from that history. In technical jobs, fields generally dominated by men (for reasons rooted in gender bias), this meant women were largely eliminated on the basis of their gender alone. While gender was not a listed trait on the applications, the tool downgraded applications that included the word “women’s” (such as “captain of the women’s STEM club in university”), as well as applications from candidates who had graduated from all-women’s colleges.
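To make the mechanism concrete, here is a purely illustrative sketch, not Amazon’s actual system, using scikit-learn and a tiny synthetic dataset of my own invention. When every historical “hire” label follows a biased pattern, a simple text classifier learns a negative weight for the token “women” even though gender is never an explicit feature.

```python
# Purely illustrative: a toy classifier trained on synthetic, biased
# "historical hiring" labels. It is not Amazon's system; it only shows how a
# pattern in past decisions becomes a learned penalty on a single word.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the women's STEM club in university",
    "graduate of an all women's college, software projects",
    "women's coding society organizer, internship experience",
    "university STEM club captain, software internship",
    "software projects and internship experience",
    "coding society organizer, strong university record",
    "graduate with software internship and projects",
    "STEM degree, coding competitions, internship",
]
# Synthetic, biased history: 1 = hired, 0 = rejected. Every resume that
# mentions "women's" happens to have been rejected.
hired = [0, 0, 0, 1, 1, 1, 1, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The default tokenizer reduces "women's" to the token "women"; its learned
# weight comes out negative, i.e. the model penalizes the word itself.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", weights["women"])
```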
While we can take some comfort in the fact that they caught the problem and shut it down, that is just one of countless examples where AI and algorithms are being relied upon to make important choices, yet end up reinforcing the existing biases and prejudices of the world around us. From digital assistants to facial recognition software, we are living in a time of increasing reliance on these kinds of systems. And because so many of us simply assume that technology is somehow neutral, we often engage with these tools uncritically, not mindful of the price we are paying.
I am not a doomsday prophet, crying out for us to toss our Alexa on the bonfire. Far from it! I love the potential I see in these emerging technologies. However, we must demand that developers embrace greater intentionality in development, and we, as end users, must own our responsibility to use these technologies carefully, ethically, and inclusively.
Jamie Arpin-Ricci is a bisexual author, activist, and the Co-Director of Peace & Justice Initiatives. You can discover more about his work at his website: www.jamiearpinricci.com