
Human Rights and IoT: The Machine and the Agent

Hannah Sloan

- Last Updated: November 25, 2024


The last post (for now!) in the Human Rights and IoT series will take us into the blame game of human rights adjudication. Here are links to earlier posts on the right to fair and decent work, the right to privacy, and tracking human rights violations.

Human rights violations happen all the time. They will continue to happen. Technology is critical to certain types of protections, and in many cases, technology can help identify and adjudicate abuses. As technology becomes even more sophisticated, we may even start to think of "digital agents" as important actors in human rights practice. They have immense power over people’s lives, and their actions can cause human rights violations at a grand scale—even though the actions themselves can seem subtle.

Whose fault is it?

To understand who’s to “blame” for violations, we’ll start with this question:

Right now, if a human rights violation is committed, what happens?

Many governments have taken the tenets of human rights conventions and incorporated them into their own laws. If a person or organization commits a human rights violation, the violation is covered by the body of national law, so the judicial system is supposed to handle it and place appropriate blame on the guilty parties. If someone commits a human rights violation, but the country has not codified that human right into its legal system, then the violation may go unpunished and unremedied. In these cases, other states and human rights organizations will criticize the country publicly.

In the Universal Declaration of Human Rights, the state is the enforcing party: it is charged with protecting and upholding human rights. If a country isn't taking steps to remediate a violation, the state itself is at fault.

There are certain human rights violations for which the international community has "zero tolerance." These are called "peremptory norms," and they exist for crimes like genocide, enslavement, torture, and wars of aggression. In these cases, countries that allow these violations to happen will be brought before the UN Security Council to face sanctions and/or possible military intervention if the violations aren't remediated.

What Happens When Machines Enter the Picture?

"Digital agents" is a phrase I'm using to talk about machines with power: so much power that their actions can determine the scope of human rights protections and violations. If digital agents enter the picture, does this change how blame gets assigned?

It can get complicated quickly. Let’s use discrimination as an example.

In an extreme instance of discrimination, like apartheid, political participation is denied to people on the basis of race. Discrimination diminishes the capacity for other protections like the right to privacy, the right to a fair trial, and the right to free movement. This method of social control has resulted in inadequate access to food, housing, healthcare, employment, political decision-making, cultural life...the list goes on. Systematic disenfranchisement of a group of people leads to further erosion of health, education, and economic infrastructure. One egregious human rights violation leads to thousands more.

What happens if a machine or an IoT system brings about the same result? One biased algorithm on a housing application or job application can lead to systematic domino effects that influence entire societies. Maybe the simplest answer is that the people who program the machines should be held responsible for the actions of the machines. But is it really that simple?


Hannah Arendt, a prominent political philosopher, wrote at length about a concept called “the banality of evil.” Her classic case was that of Adolf Eichmann, a prominent Nazi SS officer in WWII and one of the major organizers of the Holocaust. Eichmann carried out the logistics of transporting Jews to concentration camps. In 1961, he went on trial for war crimes. He argued that he was essentially just doing his job. He was following orders and obeying the law of his regime. He claimed to have served as “a mere instrument.” Thus, Hannah Arendt coined the term “banality of evil” to refer to his seeming ordinariness and lack of guilt for the atrocities in which he participated.

An image of Adolf Eichmann on trial in 1961

Image Credit: 24 Heures

Adolf Eichmann, shown above on trial in Jerusalem in 1961, was judged not because his actions disobeyed the law of his state but because they violated international law: a type of moral law encoded in human rights conventions and enforced by international courts. Many people claim that he did have some sort of choice, and that he should have held himself to a higher moral standard than the Nazi regime. Many others say that he should still have been found guilty (which he was), regardless of any "choice" he had in performing heinous tasks, simply because of his complicity in genocide. Many have suspected he wasn't as detached and thoughtless as Arendt suggested. Whether or not this caricature of Eichmann is true to life, Arendt's analysis is of a person under a totalitarian regime, in which thoughtless obedience and the threat of retribution for noncompliance govern behavior.

Eichmann was judged not by his "programming" but by his actions. To some extent, we as people are judged not by our "programming" but by our compliance with a standard of moral principles: not by our choices per se but by our actions. Disagreement over people's intentions versus their actions creates huge contention in judicial systems. People do have limited options, and sometimes a crime is the lesser of two evils. Crime is harder to identify and to punish in these cases, but choice and coercion are important factors to consider.

How Will Machines Be Judged?

Will machines be held to the same standards? When government judicial systems disagree, human rights provide a standard. When machines—or interconnected systems of machines, as in IoT—commit violations, there are two cases:

1. They act in line with their programming.
2. They have attained a level of artificial intelligence that defies the expected and/or intended effects of their programming.

In either case, the "intention" behind the machine's or system's action isn't factored into the judgment of that action, and neither is the intention of the programmer.

A diagram of the trolley problem as applied to a driverless car

Image Credit: New York Magazine

For example, imagine a driverless car encounters the classic moral conundrum of altering its path to avoid the death of five people, instead killing one person (as shown above). What happens?

People disagree about whether it's good or desirable to program specific moral maxims into the cars themselves. “A recent poll by the MIT Media Lab found that half of the participants said that they would be likely to buy a driverless car that put the highest protection on passenger safety,” NPR reports. “But only 19 percent said they'd buy a car programmed to save the most lives.” If the market produces tons of driverless cars programmed for passenger safety, is someone to blame when wayward pedestrians or drivers are killed?
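As a thought experiment, here is a minimal sketch (in Python, with entirely hypothetical names and numbers) of what encoding one such maxim might look like. The point is not that real autonomous vehicles work this way; it is that someone has to commit a value judgment to code long before the car ever faces the choice.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """A hypothetical predicted result of a steering choice."""
    passengers_harmed: int
    pedestrians_harmed: int


def choose_path(stay: Outcome, swerve: Outcome, policy: str) -> str:
    """Pick a trajectory according to a hard-coded moral maxim.

    policy == "protect_passengers": minimize harm to occupants first.
    policy == "minimize_total_harm": minimize total people harmed.
    """
    if policy == "protect_passengers":
        key = lambda o: (o.passengers_harmed, o.pedestrians_harmed)
    elif policy == "minimize_total_harm":
        key = lambda o: o.passengers_harmed + o.pedestrians_harmed
    else:
        raise ValueError(f"unknown policy: {policy}")

    return "stay" if key(stay) <= key(swerve) else "swerve"


# The classic conundrum: staying on course harms five pedestrians,
# swerving harms the single passenger.
stay = Outcome(passengers_harmed=0, pedestrians_harmed=5)
swerve = Outcome(passengers_harmed=1, pedestrians_harmed=0)

print(choose_path(stay, swerve, policy="protect_passengers"))   # -> "stay"
print(choose_path(stay, swerve, policy="minimize_total_harm"))  # -> "swerve"
```

Notice that neither policy is neutral: the blame question above is really a question about who chose the policy argument, and why.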

For every action a complex machine takes (especially an Internet-connected one), there are multiple ways to program its inputs and outputs. Decisions must be made about how to control bias in algorithms and which biases to allow out of necessity. When catastrophe results from these decisions, people will look to machines, manufacturers, and programmers for answers.
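To make "controlling bias" a little more concrete, here is a minimal sketch (hypothetical data and function names) of one common after-the-fact audit: comparing a model's approval rates across groups and flagging ratios that fall below the "four-fifths" rule of thumb borrowed from US employment-discrimination analysis. It illustrates the kind of check a creator might run, not a complete fairness methodology.

```python
from collections import defaultdict


def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from some scoring model."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}


def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's rate.

    A ratio below ~0.8 (the "four-fifths" rule of thumb) is often treated
    as a red flag worth investigating, not as proof of discrimination.
    """
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}


# Hypothetical output of a housing-application scoring model.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 50 + [("group_b", False)] * 50
)

print(disparate_impact(decisions, reference_group="group_a"))
# {'group_a': 1.0, 'group_b': 0.625}  -- well under 0.8, so worth a closer look
```

A ratio like the 0.625 above doesn't prove discrimination on its own, but it is exactly the kind of artifact that regulators, litigators, and the programmers themselves would want recorded.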


So Really, Whose Fault Is It?

If a person commits a human rights violation, should the government have prevented it from happening? Is a person held liable only under their country's laws? Is it the government's fault for not encoding human rights into its judicial system? What if the government lacks the resources for its judicial system to be fully operable? Is that its fault too? At what point does it become an individual person's fault that they violated international law? None of these questions have easy answers, but scholars, judges, and politicians have spent decades creating processes to adjudicate these hard cases.

What about these things should change when we’re dealing with machines and networks instead of humans?

What we might expect "programmatically" from someone with human nature (subject to the effects of genetics, nurture, culture, and other influences on that nature) is to act in some predictable accordance with the known laws of their culture. Let's take "nature" and "nurture" together to approximate the "programming" a person has.

At some level, though, people are generally believed to have individual choice, and they should be judged for their actions.

What can we approximate as the “nature” and “nurture” of machines?

A diagram demonstrating cultural programming as part biology, part cultural influence

Image Credit: Caner Özcan

The question doesn’t have to be so existential. It doesn’t necessarily require us to think about the possibility of robots or IoT systems as capable of “choice” or independent action. People make harmful choices that are still within their “programming,” as evidenced by plenty of behavioral economics research. To deny this is to say that vigilantism isn't a prevalent worldview, or that willful ignorance isn’t employed to justify people’s actions.

People make decisions that are “in their programming.” They are also capable (depending on how much you believe in free will) of operating “outside their programming” as they continue to grow and develop into their own people after adolescence. They can take charge of their own “nurturing.”

We might reframe a government-individual relationship or, better yet, a parent-child relationship, as somewhat analogous to the relationship between creator and machine. Are we supposed to set everything up for the machine's functioning? If it has "a mind of its own," or if the effects of its algorithms and machine learning are somewhat unknown or unpredictable, then whose fault is it if they produce bad results? The difference in this metaphor is, of course, the absence of an individual human will in the case of the machine.

Should we expect perfectly lawful behavior from someone else who hasn't grown up in the same society, government, or values system? People are trained (socialized) and expected to comply with how they have been trained to interact with the world, whether or not that "training" was intentional. Machines don’t have individual will (at least...I hope not), but they do have the capacity to act on their learning in ways that are not anticipated.

Where Do We Go from Here?

The MIT Technology Review writes this: “Criminal liability usually requires an action and a mental intent. Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?” These questions will only become more relevant as machine errors and unpredictabilities leave victims and judges searching for blame and for reparations.

How Can We Encode “Morals” into Machines?

It's in the interest of the creators (designers, programmers, and implementers) to program morals into machines. It's in the interest of some "public good" to do so. It's in the interest of litigators to have something to reference. Regardless of whether we hold creators liable for the crimes of their machines, or try to blame the machines themselves for reaching unfavorable conclusions within their programming, we must prioritize the creation of programmable moral maxims that carry some universal weight. Machines will be held to these standards, as will those who program them. As IoT systems and individual machines take on more power to affect the world around them, the imperative for a sort of "programmable morality" only grows stronger.
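What a "programmable morality" might look like in practice is still an open question. One modest, purely illustrative pattern (hypothetical names and thresholds throughout) is a constraint layer that vetoes proposed actions violating encoded maxims and records which maxim fired, so that creators, and any later adjudication, have something concrete to reference.

```python
from datetime import datetime, timezone

# Hypothetical maxims: each is a (name, predicate) pair, where the predicate
# returns True when a proposed action would violate the maxim.
MAXIMS = [
    ("no_high_harm_risk", lambda action: action.get("harm_risk", 0) > 0.1),
    ("no_protected_attribute_use",
     lambda action: bool(action.get("uses_protected_attributes"))),
]

AUDIT_LOG = []  # in a real system this would be durable, append-only storage


def screen(action: dict) -> bool:
    """Return True if the action is allowed; log any maxim that blocks it."""
    for name, violates in MAXIMS:
        if violates(action):
            AUDIT_LOG.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "action": action,
                "blocked_by": name,
            })
            return False
    return True


proposed = {"kind": "deny_housing_application",
            "uses_protected_attributes": True,
            "harm_risk": 0.02}

if not screen(proposed):
    print("blocked:", AUDIT_LOG[-1]["blocked_by"])
    # -> blocked: no_protected_attribute_use
```

However these maxims end up being written, the hard part will not be the code; it will be agreeing on which maxims carry universal weight in the first place.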
