
Identify the Challenges in Creating Responsible and Trustworthy AI

Explore the Data Challenges

As we've seen, the tech giants collect a lot of data. They use this data for advertising purposes, but that's not all. With the advent of artificial intelligence, they can use this data to automate certain tasks. Humans are no longer making the decisions; algorithms are.

You need to be aware of how AI algorithms use data, especially when it's your personal data and when that data is biased. Let's take a look at each of these cases.

Check How Your Personal Data is Used

You send and receive emails all day long. You probably use your favorite mapping apps to help you get around. You use digital services every day.

There is a flip side to these mostly free, convenient services that you shouldn't ignore. The data you produce can be compiled and analyzed for use in advertising, among other things. A profile is then built from your usage and observed habits so that targeted ads and features can be suggested to you.

There’s an expression we use to describe this phenomenon: “if it’s free, you’re the product!” This is the case for most of the huge tech companies, which rely on it as their core business model.

So how can I make sure my information isn’t used for undesirable purposes?

By applying critical thinking! When you use a digital service, take a moment to identify which data is being collected and used. 

For example, when you’re asked to identify images to prove that you’re not a robot (as in the CAPTCHA system), you’re actually training algorithms to recognize certain images. When you allow your activity on a web page to be tracked by accepting cookies, you’re actually permitting algorithms to suggest targeted ads based on your preferences.

Some of these practices will seem acceptable to you, whereas others won’t. The important thing is to be aware!

In some countries, certain uses raise questions. This is the case in China, where the State now monitors citizens’ web activity, as well as their movements and behavior, using image recognition, with most cities under video surveillance. Personal data and AI systems are used to assign each citizen a score based on their actions, and this score determines their access to services such as credit and transportation.

How can I maintain control over the data being collected?

For the tech giants, the first step is to look at the information gathered so that you understand it better (for Google, you can go to Google Takeout to see all of your recorded data). Once you know what is being collected, don’t hesitate to change your privacy settings! For example, you can turn off GPS tracking for route mapping, fitness, and dating apps. You can also limit tracking by allowing location access only while you are using the app.

Are there companies that don’t collect data?

Yes! Companies such as DuckDuckGo, Ecosia, Firefox, and Startpage don’t collect – or use – your personal data.

Be Aware of Bias Risk

As you will see, algorithms can reproduce bias. Algorithms are never neutral, because they rely on the data they are trained on. For example, the data may be biased by being an imperfect representation of the world (as in the case of a visual recognition algorithm that learns predominantly from white subjects) or by mimicking an imperfect world. Thus, if the data used to train an algorithm contains traces of discrimination, the algorithm might make discriminatory decisions.

Let's look at a real-world example: recruitment. Today's human resources departments are using more and more artificial intelligence solutions. For example, software can automatically analyze documents such as CVs to select the most suitable candidates.

In 2015, Amazon developed hiring software to analyze applications for various job openings. Before long, the company realized that the algorithm was biased against female applicants. The system was automatically rejecting more female applicants than male applicants because it had been trained on data from Amazon’s organizational chart, which showed that 85% of Amazon employees were men. Once the bias was discovered, the company stopped using the tool.
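
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The records, the scoring rule, and the numbers are all made up for illustration; this is not Amazon’s actual system. A naive “model” simply learns the historical hiring rate for each group, so two applicants with identical experience end up with different scores.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (gender, years_of_experience, hired)
history = [
    ("M", 5, True), ("M", 3, True), ("M", 2, True), ("M", 4, False),
    ("F", 5, False), ("F", 6, True), ("F", 3, False), ("F", 4, False),
]

# "Training": count the observed hiring rate for each gender in the history.
counts = defaultdict(lambda: [0, 0])  # gender -> [number hired, total applicants]
for gender, _, hired in history:
    counts[gender][0] += int(hired)
    counts[gender][1] += 1

def score(gender, years_of_experience):
    """Naive score: the group's historical hiring rate, nudged by experience."""
    hired, total = counts[gender]
    return hired / total + 0.01 * years_of_experience

# Two applicants with identical experience get different scores purely
# because of the bias baked into the historical "training" data.
print(score("M", 5))  # prints 0.8
print(score("F", 5))  # prints 0.3
```

The point of the sketch is that the model never receives an instruction to discriminate: it simply reproduces the pattern present in the data it was given.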

Of course, such instances do not occur systematically; many tools that incorporate artificial intelligence are quite handy, including hiring tools, which most often do an excellent job of matching would-be applicants with relevant job offers.

If, for example, you are looking at job vacancies, AI systems are designed to show you the opportunities that best match your profile.

The thing to bear in mind is that these solutions should benefit everyone; they should not perpetuate sexist or racist behavior, for example. Developers must apply critical thinking to AI solutions, and we must demand that these solutions be transparent.

Explore the Challenges Associated With the Environment 

You may sometimes hear that data is the “oil” of the 21st century. Well, this new “black gold” is what fuels artificial intelligence.

What is the environmental impact of the data industry?

The information economy can’t function without digital devices (smartphones, computers, tablets, etc.) or certain physical infrastructures that are more or less out of public view, such as fixed and mobile networks, business networks, and data centers. Data centers house the computer systems and associated components, such as servers, that make the internet and all of its services possible.

Concretely, whenever you use the internet or one of its services, your “clicks” go to a server somewhere in the world, which processes and responds to your request. That server needs energy to do its work!

How can people limit the environmental impact of routine digital activities?

There are some simple things you can do daily to reduce your digital environmental footprint. These include deleting old emails, unsubscribing from newsletters you don’t read, and limiting your use of streaming platforms.

However, the artificial intelligence boom is enlarging this environmental footprint. Storing and processing the data to develop AI programs consumes a lot of energy. The reason why everyday algorithms are so powerful is that they've been trained for days on end, using extremely powerful, energy-hungry servers.
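
To give a rough sense of scale, here is a back-of-envelope sketch with entirely hypothetical figures: the cluster size, power draw, and training duration below are illustrative assumptions, not measurements from any real training run.

```python
# Back-of-envelope estimate: energy ≈ servers × power per server × hours of training.
# All figures below are hypothetical, for illustration only.

num_servers = 100          # assumed size of the training cluster
power_per_server_kw = 6.0  # assumed power draw per server, in kilowatts
training_days = 10         # assumed training duration

energy_kwh = num_servers * power_per_server_kw * training_days * 24
print(f"{energy_kwh:,.0f} kWh")  # prints: 144,000 kWh for this hypothetical run
```

The exact number matters less than the structure of the calculation: energy use scales with all three factors at once, which is why longer training runs on larger clusters add up so quickly.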

Although training AI algorithms requires large amounts of energy, AI applications can also be used to mitigate environmental impacts and save energy, for example by streamlining production lines and optimizing the use of energy and other resources.

Limit and Anticipate Risk With AI Governance

As we’ve seen, researching AI safety is essential when attempting to resolve issues of explainability, robustness and purpose definition. Research is also the key to limiting AI bias and environmental impact. But providing solutions to these technical problems is no guarantee that those who are developing AI solutions will adopt them. This is where AI governance comes in.

Governance is about developing, financing, supervising and regulating artificial intelligence in a way that aligns with the public interest.

Businesses are rarely interested in investing in AI safety for the systems they deploy. Companies operating at the cutting edge of AI, in particular, are more interested in racing to develop ever more powerful AI. This race can crowd out proper safety work, because any budget allocated to safety can’t be spent on profitable activities.

In these circumstances, institutions play an important role in hosting round-table discussions with stakeholders, encouraging coordinated efforts, incentivizing the adoption of best practices, and defining new rules to be followed. The European Union (EU) is currently putting together regulations known as “The AI Act,” with the aim of promoting development best practices and reducing the risks posed by AI. All businesses that use AI will need to comply with the regulations from 2025. So, if you work for one of these organizations, you now know that you’ll probably be affected!

Let’s Recap!

  • AI uses data. This data could be your own data, or it could be biased, so you need to stay aware. 

  • AI uses a lot of energy. Optimizing the environmental cost of AI is an important consideration for the future. 

  • Research is not enough to develop safe, responsible and trustworthy AI systems. Governance is a way of encouraging or compelling businesses to develop safe AI systems that serve the public interest.

In the next chapter, we will learn about the impact of artificial intelligence on labor and the job market.  
