“With great power comes great responsibility.”

This famous quote from Ben Parker is a succinct summary of what the ethics of AI is about. In today’s fast-advancing world, we see new state-of-the-art models released almost every week. However, the allocation of resources is clearly one-sided: it is geared toward developing ever bigger and better models rather than balancing that development with proper vetting of biased training data.

Breakthrough models and accountability

For example, take the GPT-3 model: a breakthrough language model that was released last year and made headlines with its power. A model this large requires a great deal of data preprocessing, which is a feat in itself. But the real point of concern, the actual cleansing of the content, is often overlooked. GPT-3 isn’t exactly “pure”: it was trained on data containing profanity, racial stereotypes, and other harmful associations. Does this mean the scientists and engineers had malicious racial biases in mind when creating the model? Of course not. What it does mean is that they, like many others, neglected to stop and ask, “Is my model fair and representative, or is it biased in some way, and what are the repercussions of that?”

The whole concept of the ethics of AI comes down to accountability. The scientists and engineers may have had good intentions, but their implementation fell short. Without the proper checks in place, models aimed at helping people can have the opposite effect. I liken the situation to the popular show Doctor Who. The Doctor travels to various points in time and space. He possesses a wealth of knowledge combined with enormous power, but if that goes unchecked, he can drive himself mad or, worse, hurt others. He needs a companion who can humanize him and ensure that he does not take his power too far. When we have a large model, whether for imaging, speech, or text, we need a similar check to humanize it.

Sci-fi fictions vs. AI realities

In light of the worlds created by shows and movies like Doctor Who, I wonder: in this new age of AI, is sci-fi still unrealistic? Are we still in the realm where we think these things can never happen? Nope. We already see autonomous weapons being built in Israel, and the New York Police Department recently made viral news for deploying a Boston Dynamics robot on dispatches. The videos of these robots dancing at Christmas may look cheery, but the data sets used to train the models deployed in them are deeply biased. They are trained on current incarceration rates, and unfortunately, BIPOC demographics statistically have the highest incarceration rates in the United States.

Advancements of ethical governance in AI

In this past year, more has been done to advance the notion of responsible AI and the ethics of AI than ever before. The recent NeurIPS conference initiated a first-of-its-kind ethics evaluation panel, which reviewed final submissions by weighing the pros and cons of deploying the submitted models in industry. Moreover, the AI ethics committee at Google (since disbanded, which itself raised some issues) proposed a viable approach to enacting accountability: researchers should submit model cards with their models. Each model should come with a document explaining what the model is, what it is used for, its known limitations, its trade-offs, and any issues with specific demographic groups.
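To make the model-card idea concrete, here is a minimal sketch of one as a data structure. The field names and the `ModelCard` class are illustrative assumptions for this post, loosely mirroring the fields described above; they are not an official schema from Google or NeurIPS.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a model card as a data structure.
# Field names are assumptions mirroring the fields described
# in the text above, not an official schema.
@dataclass
class ModelCard:
    name: str
    description: str            # what the model is
    intended_use: str           # what it is used for
    known_limitations: list = field(default_factory=list)
    trade_offs: list = field(default_factory=list)
    demographic_concerns: list = field(default_factory=list)

    def to_report(self) -> str:
        """Render the card as a plain-text report for reviewers."""
        lines = [
            f"Model: {self.name}",
            f"Description: {self.description}",
            f"Intended use: {self.intended_use}",
            "Known limitations: " + "; ".join(self.known_limitations or ["none listed"]),
            "Trade-offs: " + "; ".join(self.trade_offs or ["none listed"]),
            "Demographic concerns: " + "; ".join(self.demographic_concerns or ["none listed"]),
        ]
        return "\n".join(lines)

card = ModelCard(
    name="example-text-model",
    description="Large language model trained on web text",
    intended_use="Research on text generation",
    known_limitations=["May reproduce profanity and stereotypes from training data"],
)
print(card.to_report())
```

Even a simple structure like this forces the author to state limitations explicitly, which is the accountability mechanism the model-card proposal is after.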

As long as such documentation exists, so does transparency, and researchers become more accountable. Of course, measures like these can subdue the harms embedded in models and reassure consumers who distrust AI for its failings, but they cannot eliminate those harms entirely. For a long time, technology has been able to avoid accountability, which can be attributed to the lack of proper regulation in the industry. The famous show Silicon Valley actually touched on the ethics of AI with the term “tethics,” and a similar idea is being proposed by faculty at UC Berkeley: there should be a binding code of ethics in technology, much as all doctors must uphold the Hippocratic Oath. If anything can cure the negative stigma around AI, it will be through “tethics” and accountability.

