Artificial Intelligence (A.I.), “The Social Dilemma” and Human Rights

 

Agni Mentaki Tripodi

27 October 2020

These are Isaac Asimov’s Three Laws of Robotics:

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. [1]

A pretty unusual start? Probably not.

The three laws were introduced in Isaac Asimov’s 1942 short story “Runaround” (included in the 1950 collection I, Robot) and were devised to protect humans from interactions with robots.

I personally came across these rules around ten years ago, and in the paragraphs that follow I will try to explain why I find them so relevant now, probably more relevant than ever.

Here is a set of questions for you:

  • How close do you think we are to claiming that these rules have already been violated?
  • How many of you think that robots, in our context social media machines powered by A.I. technology, are already harming human beings?
  • How many of you are concerned that A.I. machines are close to becoming more intelligent than their human creators, and may even make autonomous decisions that could allow a human being to come to harm?
  • How far or how close are we to the “Singularity”, the time when the abilities of a computer overtake the abilities of the human brain?

[Singularity: the hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence with unforeseeable changes to human civilization (Kurzweil R., The Singularity is Near, 2006)]

Last week, during my flight from Geneva to Athens, I watched the recently launched Netflix documentary-drama “The Social Dilemma”, which explores the dangerous impact of social networking on humans. The documentary provides a graphic account of what the business model of a few companies is doing to us, and of how all this constant clicking, swiping and liking has affected our lives and societies. It features, among others, tech experts and former employees of social media giants (Google, Facebook, Twitter, Instagram, etc.) sounding the alarm about their own creations and explaining the harm that the addiction machines they built have caused to society. Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, says in his interview in the documentary:

“Social media isn’t a tool that’s just waiting to be used. It has its own goals, and it has its own means of pursuing them by using your own psychology against you.” In other words: “The tool is alive. It knows you. It’s feeding you information you think you want and need, but in reality it is eliciting actions and clicks as a way to fuel advertising.”

A few minutes later in the documentary, another statement by Tristan Harris struck me:

“If you are not paying for the product, then you ARE the product.”

What is this supposed to mean?

It means that Google, for example, will track what you search for in its search engine and then use that intelligence to its advantage to generate profit. It may suggest YouTube videos you might like, or simply present you with advertisements it thinks are relevant to you. Google does this because advertising is its main source of income.

It makes total sense to offer advertisers ‘targeted ads’ served to the right audience, because in most cases Google only gets paid if someone clicks on an ad link. In that sense, the more relevant and successful the advertising program, the more money Google makes.
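To see why relevance matters so much, consider a back-of-the-envelope calculation (all figures below are hypothetical, invented purely for illustration): under pay-per-click, the platform earns nothing for an impression that is ignored, so even a modest improvement in click-through rate multiplies revenue.

```python
# Toy pay-per-click revenue model -- all numbers are hypothetical.
# The platform is paid only when an ad is clicked, so revenue scales
# directly with the click-through rate (CTR) that targeting improves.
def expected_revenue(impressions: int, ctr: float, cost_per_click: float) -> float:
    """Expected revenue for a batch of ad impressions under pay-per-click."""
    return impressions * ctr * cost_per_click

untargeted = expected_revenue(impressions=1_000_000, ctr=0.001, cost_per_click=0.50)
targeted = expected_revenue(impressions=1_000_000, ctr=0.010, cost_per_click=0.50)

print(f"Untargeted: ${untargeted:,.0f}")  # $500
print(f"Targeted:   ${targeted:,.0f}")    # $5,000 -- ten times more, from relevance alone
```

That tenfold difference is the economic engine behind tracking: the more a platform knows about you, the more accurately it can predict what you will click.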

But, wait a second! Where is my right to privacy?

I don’t recall giving my consent to Google or the other platforms to track my searches, use my personal data to design more successful marketing tools, and then target me back with ads. That sounds totally wrong by any ethical measure, and at the same time so totally right as a for-profit business model.

According to a 5,000-person study published in 2017 in The American Journal of Epidemiology, high social media use can be harmful to our mental health: higher social media use correlated with self-reported declines in mental health, physical health and life satisfaction.


Persuasive design techniques, like push notifications and the endless scroll of your newsfeed, have created a feedback loop that keeps us glued to our devices.

Moreover, social media are criticized for contributing to increased rates of depression among teenagers. For example, if a person posts a selfie on Facebook or Instagram and does not receive enough “likes”, they may come away with the impression that they are not liked enough by the network, not likeable, not good-looking and so on, causing increased psychological and emotional stress.

But, wait a second! Does this mean that my right to health may be compromised by high social media use?

As the tech experts in the documentary describe, though, this harm was not intentional.

“When we first developed the platforms, we meant to bring society together and connect the world. We couldn’t have imagined the size of the impact these tech innovations could have on their audience.”

We have already touched on the impact of social media on the right to privacy and the right to health. But have you ever thought about what happens when such a powerful tool is used by bad actors with the power to manipulate people and serve their own agendas? As stated in The New York Times:

“The number of countries with political disinformation campaigns on social media doubled in the past two years.” “Social media advertising gives anyone the opportunity to reach huge numbers of people with phenomenal ease, giving bad actors the tools to sow unrest, fuel political divisions, manipulate the public and even control democratic processes such as elections.”

But, wait a second! Is our democracy also at stake?

In an internal Facebook report from 2018, we read that “64% of the people who joined extremist groups on Facebook did so because the algorithms steered them there.” That means that algorithms can promote content that amplifies discrimination and hate, with dangerous consequences for our right to equality and non-discrimination.
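A minimal sketch of how this can happen (the posts and scores below are invented for illustration, not taken from any real platform): if a feed ranks content purely by predicted engagement, and divisive content happens to be engaging, the divisive content rises to the top without anyone explicitly choosing it.

```python
# Hypothetical feed-ranking sketch: the only objective is predicted
# engagement; there is no term for accuracy, well-being or social harm.
posts = [
    {"title": "Local charity drive",      "predicted_engagement": 0.02},
    {"title": "Cute animal video",        "predicted_engagement": 0.05},
    {"title": "Outrage-bait conspiracy",  "predicted_engagement": 0.12},
    {"title": "Divisive political rumor", "predicted_engagement": 0.09},
]

# Rank purely by what the model predicts people will click on.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["title"])
# The most polarizing items surface first -- a direct consequence of the
# objective function, not of any editorial decision.
```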

It is important to mention at this point that some A.I.-related risks also spring from the way A.I. models are trained. What does this mean? In the 2018 McKinsey Global Institute study “Notes from the A.I. Frontier: Applying A.I. for Social Good”, we read that if the data sets used to train algorithms are based on historical data that incorporate racial or gender bias (even unintentionally, resulting solely from sampling bias), the applications derived from those algorithms will perpetuate, and may aggravate, that bias. It makes absolute sense.
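A small, self-contained experiment makes the mechanism concrete (the data here is synthetic, invented for this sketch; it is not from the McKinsey study): if past decisions favored one group, a model trained on those decisions learns to favor that group too, even when the underlying ability is identical.

```python
# Synthetic demonstration of bias inherited from historical training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)    # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)  # identically distributed in both groups

# Biased "historical" labels: past decisions favored group 0, so group
# membership leaks into what the model treats as ground truth.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score two equally skilled candidates who differ only in group.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"P(hired | skill=0, group={g}) = {p:.2f}")
# The model reproduces the historical preference for group 0,
# perpetuating the bias it was trained on.
```

The bias is in the data, so the model does not need to be malicious to be discriminatory.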

Let’s pause for a minute and try to reflect on what we have read so far.

In a sense, we briefly examined how A.I. machines, when misused, can harm the very people they are supposed to help. A.I. can be used maliciously to threaten the physical and emotional safety of individuals, as well as their digital safety and even their financial security, equity and fair treatment. For society at large, malicious uses of A.I. could threaten national security and economic and political stability.

 


But please, let’s not despair. The purpose of this article is not to demonize A.I. or spread pessimism, but to raise awareness of some of the risks of A.I., especially in the context of social media.

In almost every case in the history of human evolution, a technological innovation comes with a set of opportunities and a set of risks or threats. A.I., the very same technology criticized above, could equally contribute to tackling some of the world’s most challenging social problems: it can be used in research to cure cancer or to fight climate change; it can equally be used to decrease poverty, provide education for all, increase access to medicines and improve the health of people living in remote areas.

It always depends on how it is used.

And here comes the question: What now? What needs to be done? The answer is simple.

A.I. technology needs to be regulated, and corporate accountability should be increased for its creators: the tech companies and social media platforms. In the same way that companies all over the world are held accountable for the human rights violations they cause or contribute to through their operations, similar principles should be applied to the tech industry operating in the cyber environment. In most other industries the impact is more visible and easier to identify; in the tech industry it is yet to be identified.

To that end, cross-sector collaboration is highly encouraged. Stakeholders from civil society, A.I. researchers, and the public and private sectors need to collaborate in order to first identify the full spectrum of risks and then provide solutions to mitigate them. After that, a set of principles needs to be established and proper monitoring mechanisms put in place to ensure that tech companies comply with the guidelines.

Only then can risks be prevented, and A.I. applied for good causes, adding Shared Value to society.
