• 6 hours
  • Easy



Updated 3/22/2021

The Ethical Challenges of Artificial Intelligence


AI techniques possess enormous potential. The opportunities are many, but they also raise ethical concerns such as the use of personal data, environmental impacts, and the emergence of artificial intelligence systems capable of making decisions on their own.

In this chapter, we will discuss these challenges in greater detail. Each issue raised will be accompanied by questions to ask yourself that will help you stay vigilant and adopt responsible behaviors. Are you ready?

Understand the Challenges Associated With Data Use

You use digital services every day, such as sending and receiving emails. As you move around, you almost certainly use apps that record your daily routines.

There is a flip side to these mostly free, convenient services that you shouldn't ignore. The data you produce can be compiled and analyzed for use in advertising, among other things. Your profile is analyzed across all of your services to show you precisely targeted ads and features that match your habits.

The adage, "If it's free, you are the product," tends to be true concerning all big tech companies and is often a fair description of their business model.

So how can I make sure my information isn’t used for undesirable purposes?

By applying critical thinking! When you use a digital service, take a moment to identify which data are being collected and used.

For example, when you identify images to prove you’re not a robot (CAPTCHA), you are helping to train image recognition algorithms. When you accept web page cookies, you allow your activity to be tracked, which leads to algorithms that show you targeted ads. 

Some of these practices may seem acceptable to you; others not. The important thing is to be aware!

Some countries have questionable personal data practices. For example, the Chinese government conducts tight surveillance, analyzing its citizens' web activity, movements, and behaviors using image recognition. Most cities are subject to video surveillance. Personal data and AI systems are used to assign each citizen a score based on their actions, and this score determines their access to services such as credit and transportation.

How can I maintain control over the data being collected?

For the tech giants, the first step is to look at the information gathered to understand it better (for Google, you can review everything recorded about you on your account's My Activity page). Once you know, don't hesitate to change your privacy settings! For example, you can turn off GPS tracking for route mapping, fitness, and dating apps. You can also limit tracking by allowing location access only while you are using the app.

Are there companies that don’t collect data?

Yes! Services such as DuckDuckGo, Ecosia, Firefox, and Startpage don't collect – or use – your personal data.

The Challenges Associated With Algorithmic Decision-Making

As you have seen, the tech giants collect a lot of data for advertising purposes, but that's not all. With the advent of artificial intelligence, data also help automate certain tasks, meaning that algorithms, not humans, make the decisions.

What potential does that create for bias?

As you will see, algorithms can reproduce bias. Algorithms are trained on datasets, and data can be biased—for example, when they reflect the world inaccurately (such as a visual recognition algorithm trained primarily on images of white subjects) or when they mimic an imperfect world. If the data used to train an algorithm contain traces of discrimination, the algorithm may make discriminatory decisions.

Human resources departments rely more and more on artificial intelligence, including software that automatically analyzes the text of documents such as resumés to select the most relevant profiles for a particular job.

In 2015, Amazon developed hiring software to analyze the applicants for various job offers. Before long, the company realized that the algorithm was biased against female applicants. The system rejected them more often than males because it had been trained on data from Amazon’s organizational chart, which showed that 85% of Amazon employees were men. Once they discovered this, they discontinued using the tool.
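To see how skewed training data propagates into decisions, here is a minimal sketch of a toy "hiring model." Everything in it is hypothetical (the records, the function names, the 85/15 split mirroring the figure above); it is an illustration of the mechanism, not a reconstruction of Amazon's system.

```python
# Toy illustration with hypothetical data: a naive "hiring model" that scores
# applicants by how often similar past applicants were hired. The history
# mirrors the skew described above: most past hires are men.
from collections import defaultdict

# Hypothetical training set of (gender, hired?) records with a skewed history.
history = [("man", True)] * 85 + [("woman", True)] * 15 \
        + [("man", False)] * 15 + [("woman", False)] * 85

def train(records):
    """Estimate P(hired | gender) from historical records."""
    hires, totals = defaultdict(int), defaultdict(int)
    for gender, hired in records:
        totals[gender] += 1
        hires[gender] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)
print(model)  # {'man': 0.85, 'woman': 0.15}
# Two otherwise identical applicants now get very different scores purely
# because of gender correlations in the past data: the bias is reproduced.
```

The point of the sketch: no one programmed the model to discriminate; it simply learned the statistical regularities of a biased history, which is exactly the failure mode described above.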

Of course, such instances are not systematic; many tools that incorporate artificial intelligence are genuinely useful—including hiring tools, which most often do an excellent job of matching candidates with relevant job offers.

For example, the list of job offers you see has been compiled by an AI system according to your profile.

The thing to watch out for is bias. For example, do the tools serve everyone, or do they reproduce sexist or racist practices? Developers must apply critical thinking to AI solutions, and we must demand that they be transparent.

The Challenges Associated With Disinformation

Perhaps you have encountered weird or improbable videos circulating on the internet: Barack Obama insulting Donald Trump or Mark Zuckerberg talking about manipulating Facebook users. These fake videos, which began appearing in 2018, can seem startlingly real. The underlying technology leverages powerful artificial intelligence techniques. These synthetic media are often referred to as deepfakes.

Uh, but didn’t fake videos exist well before artificial intelligence?

Yes, but in the past, only a handful of experts could manipulate photos. Now a growing number of people can do it, and the results are more and more convincing. Visit the site This Person Does Not Exist to see portraits created using this artificial intelligence technique (refresh the page to scroll through them). It would be easy to mistake these artificially generated people for real ones.

Deepfakes can refer to manipulated photos, audio recordings, or videos. In 2019, a Chinese application called Zao made the news by allowing users to replace someone's face in a video or music clip with a photo of their choice. 

These new techniques have a high potential to deceive, increasing the risk of widespread disinformation.

What can we do about it?

There are two tools you can use: common sense and critical thinking. To protect against deepfakes, arm yourself with these, along with a few pieces of advice:

  • Question your sources and make sure your information is legitimate. For example, did it come from a recognized media source?

  • Crosscheck the information by looking at other information sites.

  • You can also use fact-checking sites, such as AFP Fact Check (USA/Canada), Africa Check (Africa), or Full Fact (UK).

If used correctly, this technique can have artistic applications such as creating drawings “in the style of” famous artists.

With this technology, you can transform a photo into a painting in Van Gogh or another artist’s style. You can also use it to compose music, create special effects in movies, or dub voices hyper-realistically (synching actors’ lips perfectly to the dubbing language).

Once again, the technology is not the problem: it’s how it is used.

The Challenges Associated With the Environment

You may sometimes hear that data is the “oil” of the 21st century. Well, this new “black gold” fuels artificial intelligence.

What is the environmental impact of the data industry?

The information economy, which includes the manufacture and use of hardware, has a significant environmental impact. In 2015, it was estimated that the industry accounted for approximately 3.5% of global CO2 emissions (to put this in context, air travel accounts for 2% and telecom networks for 0.5%).

The information economy can’t function without digital devices (smartphones, computers, tablets, etc.) or certain physical infrastructures that are more or less out of public view, such as fixed and mobile networks, business networks, and data centers. Data centers house the computer systems and associated components, such as servers, that make the internet and all of its services possible.

Concretely, whenever you use the internet or one of its services, your “clicks” go to a server somewhere in the world, which processes and responds to your request. That server needs energy to do its work!

How can people limit the environmental impact of routine digital activities?

There are some simple things you can do daily to reduce your digital environmental footprint. These include deleting old emails, unsubscribing from newsletters you don’t read, and limiting your use of streaming platforms.

However, the artificial intelligence boom is enlarging this environmental footprint. Storing and processing the data needed to develop AI programs consumes a lot of energy. The algorithms (the "recipes" computers use to make calculations) are very powerful because they are trained for days on powerful servers that require a lot of energy.
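You can get a feel for why training is so energy-hungry with a back-of-envelope calculation: energy is just power multiplied by time, summed over servers. All the numbers below are illustrative assumptions, not measurements of any real system.

```python
# Back-of-envelope estimate of training energy and emissions.
# All figures are hypothetical, chosen only to show the arithmetic.

def training_energy_kwh(num_servers, power_kw_per_server, hours):
    """Energy consumed by a training run: power x time, summed over servers."""
    return num_servers * power_kw_per_server * hours

def co2_kg(energy_kwh, grid_kg_co2_per_kwh):
    """CO2 emitted for that energy on a given electricity grid."""
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 10 servers drawing 5 kW each, training for 3 days (72 h),
# on a grid emitting 0.4 kg of CO2 per kWh.
energy = training_energy_kwh(10, 5.0, 72)   # 3600.0 kWh
emissions = co2_kg(energy, 0.4)             # 1440.0 kg of CO2
print(energy, emissions)
```

Even this modest hypothetical run consumes as much electricity as a household uses in months, which is why multi-day training on large server fleets adds up quickly.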

Although training AI algorithms requires large amounts of energy, AI applications can also be used to mitigate environmental impacts and save energy—for example, by streamlining production lines and optimizing the use of energy and other resources.

Let's Recap!

  • Maintain control over your personal data by choosing who you share it with and adjusting your settings.

  • Algorithmic bias can be introduced in artificial intelligence program design. Stay informed and demand a regulatory framework that minimizes this risk.

  • Deepfakes are used to spread disinformation. The best weapons against deepfakes are critical thinking and research to crosscheck your sources.

  • AI consumes a lot of energy, and we must keep environmental sustainability in mind moving forward.

In the next chapter, you will learn about the impact of artificial intelligence on labor and the job market.
