The Ethics of AI: Issues & Dilemmas

The advent of artificial intelligence and its widespread adoption in today’s society have raised important questions about the ethical framework that the development and use of intelligent systems should abide by. That being said, why now?

If history has taught us anything, it is that this phase where humans question the morality of a novel technology was bound to arrive sooner or later.
We are now reaching a point of maturity in AI technology where attention is shifting from development and applications to ethics and regulation. In what follows, we shed light on some ethical dilemmas encountered in the various domains where AI is used today.
Some of these serve to highlight everyday situations where decisions are not so black and white and the line between right and wrong is awfully blurry.
Others question the long-standing definitions of some human constructs by sensibly pointing out how they may no longer be applicable in tomorrow’s society.

Bias in AI Systems

One of the most cited warnings against the naive use of AI is the range of biases that these systems exhibit when put to the test, biases which often lead to unintended consequences.
To put things into perspective, AI doesn’t know whether its decisions are biased or not; we are still ages away from an Artificial General Intelligence (AGI) capable of such high-level reasoning and introspection. In practice, a machine learning algorithm is trained using (input, output) data pairs to inductively approximate the true and unknown mapping between the input and output spaces, and hopefully gain the ability to generalize (i.e. predict) for new input data points that were not encountered during training. Bias stems from this process in different ways, either through the skewed distribution of the data used for learning (e.g. underrepresented populations) or through the design of the algorithms and training protocols themselves.
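To make the data-skew mechanism concrete, here is a minimal sketch (in Python with scikit-learn, using entirely synthetic data and illustrative group labels) of how a model trained on a dataset dominated by one group can perform well on that group while degrading sharply on an underrepresented one:

```python
# Minimal sketch: skewed training data produces group-dependent error rates.
# The groups, features, and decision boundaries are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has a different (hidden) decision boundary.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
for name, shift in [("group A (majority)", 0.0), ("group B (minority)", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

In this toy setup, the minority group’s accuracy hovers near chance, not because the model is malicious, but simply because it saw too few examples to learn that group’s decision boundary.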

To illustrate this point, consider search engines that persistently exhibit gender-skewed behavior. In Microsoft’s Bing, a text query such as “asian girls” will retrieve hyper-sexualized visual content, whereas the male equivalent, “asian boys”, will output mostly standard pictures of boys of Asian origin.
Another prevalent example is facial recognition software which, despite boasting high overall predictive performance, has been shown to have much higher error rates for people of color. While this shortcoming may be tolerable in low-stakes situations, it can have devastating repercussions in applications where the stakes are high, such as law enforcement practices like video surveillance.
Ultimately, this type of AI use will become a human rights issue that warrants legislative oversight.

The Trolley Dilemma in Autonomous Vehicles

As more autonomous cars hit the road, there are growing concerns regarding the design of the software and algorithms that control these vehicles. One particularly anticipated situation is reminiscent of the Trolley Dilemma, in which one has to decide whether to sacrifice one person to save a larger number of people.
The real-life analogue of this situation occurs when an imminent and potentially fatal collision cannot be avoided, and the car’s decision-making system has to decide whether to value the safety of the car’s occupants over those outside it, or vice versa. So what is the right course of action for a self-driving car in this situation? And how should this decision differ when the subject of the expected collision is a human jaywalker (i.e. a higher chance of a deadly outcome) as opposed to another vehicle or object? What about a child?

Naturally, philosophy holds the key to finding some answers to these dilemmas. Case in point: utilitarianism is a family of normative ethical theories that favor decisions which maximize utility, where utility can be thought of as the happiness or well-being of individuals. In the example of an unavoidable accident, utilitarianism would prescribe sacrificing the few for the sake of the many.
However, it is worth noting that even established theories like utilitarianism are met with criticism on aspects ranging from the definition of utility itself to the type of aggregate that should be maximized (average utility, total utility, minimum utility, etc.).
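To see why the choice of aggregate matters, consider this toy calculation (the options and utility numbers are invented purely for illustration):

```python
# Toy illustration: the choice of utility aggregate can flip the decision.
# Options and per-person utility values are made up for this example.
outcomes = {
    "swerve (protect the many)":      [0.9, 0.9, 0.9, 0.2],   # 3 pedestrians, 1 occupant
    "stay course (protect occupant)": [0.3, 0.3, 0.3, 0.95],
}

aggregates = {
    "total utility":               sum,
    "average utility":             lambda u: sum(u) / len(u),
    "minimum utility (worst-off)": min,
}

for agg_name, agg in aggregates.items():
    best = max(outcomes, key=lambda option: agg(outcomes[option]))
    print(f"{agg_name}: choose '{best}'")
```

With these made-up numbers, maximizing total or average utility favors protecting the many, while maximizing the utility of the worst-off individual favors the opposite choice; the ethical theory we encode directly changes the car’s behavior.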
At this point in time, we have more ethical questions than answers, and to make things worse, there are other highly influential economic factors that come into play.
For example, how can a car manufacturer market their self-driving vehicle when the client knows that in the case of a crash, their own car will jeopardize their safety for the sake of pedestrians and outsiders?
Does this mean that the decision is already made in a sense, and all we are left with is the illusion of a choice?

Artificial Creativity

There is a class of machine learning algorithms called generative models that is able to generate new data according to an approximation of the unknown process that generated the training examples. It wasn’t long before this type of modeling was applied to audio, text, image and video signals, paving the way for the concept of artificial creativity to emerge.
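As a bare-bones illustration of the idea, here is a sketch (Python with scikit-learn; the data is synthetic) that fits a simple Gaussian mixture to training samples and then draws brand-new points from the learned approximation. Modern audio, image and text generators are vastly more sophisticated, but the principle of sampling from a learned distribution is the same:

```python
# Bare-bones generative model: learn an approximation of the unknown
# data-generating process, then sample new data from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# "Training data" drawn from an unknown process (here, two hidden clusters).
data = np.vstack([
    rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(500, 2)),
    rng.normal(loc=[2.0, 1.0], scale=0.5, size=(500, 2)),
])

# Fit the generative model to the training examples...
model = GaussianMixture(n_components=2, random_state=0).fit(data)

# ...then generate brand-new samples that follow the learned distribution.
new_samples, _ = model.sample(5)
print(new_samples)
```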

In the world of arts, a generative AI can be trained on hundreds of paintings belonging to an artist in order to learn how to make new paintings that incorporate the style, creative process, and artistic identity of said artist. In 2016, a deep learning model was able to create a masterpiece dubbed “The Next Rembrandt”, centuries after the death of Rembrandt, the original author of the paintings used for learning.
Once the “wow” effect fades away, the immediate question that comes to mind is: who should be designated as the author of such a creation? Is it the AI itself? The engineering team who trained the model? Or is it the original artist whose past work served for the purpose of training? Incidentally, this problem is not confined to the world of arts but extends to other domains tied to creativity.
For example, in 2019, the European Patent Office (EPO) rejected a patent application in which an AI was designated as the inventor. This incident sparked debate about the conventional definition of authorship and its failure to meet modern-day expectations.
So how can we update our definition of the term “author” in a way that does justice to all parties involved: the AI, the people who created the AI, and the source/author of the training data? Furthermore, is AI art a threat to human artists, or a catalyst to push the boundaries of human creativity?
These questions deserve careful consideration because they can help us better understand the nature of involvement of AI in creative work, differentiate plagiarism from originality, and preserve the value of art in the cultural sector.

You Are Hereby Found Guilty: AI Meets Courtroom

Another interesting use case of AI can be found in the court of law, where judicial systems are making use of algorithms to evaluate cases, in some instances essentially replacing the human judge.
The increasing adoption of AI technology in areas ranging from predictive policing to sentencing is producing tangible value, mainly in the form of time efficiency, but it also brings up a set of ethical questions worth exploring.

The justice system is slow, and that can be attributed to multiple reasons, including the complex nature of legal processes, the high volume of cases and the lack of facilities, to cite a few.
This creates an opportunity for machine learning-based automation of mundane, repetitive tasks to speed up the wheels of justice. In 2019, China unveiled its first AI judge: a digital female character with a body, a voice, actions and facial expressions, designed to handle basic repetitive work in a new online litigation service.
Judges, however, are not the only target of such solutions. Algorithms and analytics tools are also used to benefit law firms by accelerating legal research and drawing valuable insights from cases. And for those who can’t afford to hire a lawyer, DoNotPay is here to save the day. This self-proclaimed “world’s first robot lawyer” is an AI-powered legal counsel that can assist in matters pertaining to small claims courts.

One of the most compelling arguments used to defend the use of AI in legal matters is the ability of this technology to be “neutral”. But we know all too well that AI is anything but neutral, as it suffers from embedded bias and discriminatory behavior, essentially reflecting our own prejudices as we explained before.
Another important issue is the lack of transparency in deep learning algorithms, which are very often regarded as black boxes.
Even if these neural networks are highly accurate, not understanding the why behind their decisions poses a problem: we cannot afford to blindly trust an accurate machine when the future of a person is at stake, nor can we accept a ruling without understanding which factors contributed to that decision (one partial probe is sketched after this paragraph).
Finally, training these models relies on data gathering protocols to build massive datasets, which also raises some privacy concerns.
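On the transparency point, practitioners do have partial probes. One common model-agnostic technique is permutation importance, which estimates how much a model leans on each input by shuffling that input and measuring the drop in performance. The sketch below uses synthetic data and made-up feature names; it reveals which features matter overall, but still falls well short of explaining an individual ruling:

```python
# Permutation importance: a model-agnostic probe of a black-box model.
# Data and feature names are synthetic placeholders, not real case data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
feature_names = ["prior_offenses", "age", "zip_code"]   # hypothetical features
X = rng.normal(size=(n, 3))
# Only the first two features actually influence this synthetic outcome.
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts performance most are the ones the model relies on.
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")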

Misinformation and Fake News

“It is only when they go wrong that machines remind you how powerful they are.” 
Clive James

Misinformation has always been used as a political weapon to manipulate crowds. Today, it is further amplified by the rise of specialized deep learning models that can produce new text, tamper with images or falsify speech. These algorithms have reached a level where their creations are hardly distinguishable from real signals anymore.
OpenAI’s GPT-3, which stands for Generative Pre-trained Transformer 3, is a large deep learning model that was trained on a massive corpus of text in order to capture the statistical regularities of natural language.
The model can then easily be used to generate paragraphs of text after being fed a few initial words. It wasn’t long before GPT-3 was leveraged to produce all sorts of highly plausible fake news, setting off multiple alarm bells regarding this horrific misuse of technology.
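GPT-3 itself sits behind OpenAI’s API, but the prompt-then-continue workflow is easy to demonstrate with its openly available predecessor GPT-2 through the Hugging Face transformers library (the prompt below is made up):

```python
# Prompt continuation with GPT-2 (an openly available predecessor of GPT-3).
from transformers import pipeline

# Downloads the GPT-2 weights on first run.
generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: scientists announced today that"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

Each run yields a different, fluent continuation of the prompt; scale that fluency up to GPT-3 and the fake-news concern becomes obvious.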
On the side of images and videos, recent deep learning research has made it possible to tamper with videos by replacing the face of a person in a source video with the face of a target person taken from an image. This technology, termed DeepFake, essentially allows us to put new words in a public figure’s mouth, star in our favorite movie, or pursue any other application you can imagine. To make matters worse, it also extends to audio signals, making it easy to fool people on the phone.
In 2019, a criminal used a deepfake to impersonate the voice of a German energy firm’s CEO in order to demand a fraudulent bank transfer of €220,000, in what is believed to be the first AI-based cybercrime. While technically intriguing, deepfakes introduce a new set of social issues, as they completely discredit hearing and sight, the two most trusted senses humans rely on to make decisions.
These deep learning models can be weaponized against women by synthetically embedding their faces into pornographic content, used to set someone up by placing them at the scene of a crime they didn’t commit, or exploited to spread a doctored declaration of war from the president of a powerful nation.

The proliferation of misinformation on the internet has become an alarming issue that demands immediate attention. Generating fake multimedia content, a time-consuming process once reserved for experts, is now accessible to the masses with no skill requirements.
Social media platforms are amongst the most concerned parties, since they constitute the main medium of “uncontrolled” communication that allows fake news to go viral. For this reason, Google, Facebook, Twitter and other tech giants are waging a war against misinformation and dangerous propaganda, but their efforts to this day have shown limited effect.

This article was an effort to cover some common dilemmas and issues related to the ethical and responsible use of AI, and to invite the reader to reflect on the topic. Raising awareness about the importance of ethics on the horizon of a new AI-augmented society is a cause that is constantly gaining traction. As researchers, technologists, authorities, philosophers and policymakers join forces to regulate AI use, we remain optimistic that the day will come when ethics will be an integral part of the development and use of this technology. It has to: the lives and livelihoods of millions of people depend on it.