The next generation of artificial intelligence faces the familiar problem of bias.
Imagine a computer and a human pitted against each other in a neutral contest. Who do you think would win? Many people would bet on the machine. But that's the wrong question.
Humans created computers, and humans design and train the systems that make modern technology work. When these systems are built, the biases of their human creators are reflected in them. That is essentially what people mean when they talk about AI bias. Like human bias, AI bias can become discrimination once it is translated into decisions or actions. And like many forms of discrimination, AI bias disproportionately affects communities that have faced oppression, historically and today.
Facial recognition software has long failed to recognize Black faces. Researchers and users have found anti-Black bias in AI applications from hiring to robotics to lending. An AI system may determine whether you are offered public housing, or whether a landlord rents to you. Generative AI is being touted as an antidote to the paperwork that drives burnout among medical professionals.
As generative AI tools such as ChatGPT and Google Bard have entered the mainstream, the unfair preferences, or biases, that have long plagued artificial intelligence have persisted. Their influence is everywhere, in the apps and software you encounter every day, from the automated sorting of social media feeds to customer service chatbots. AI bias can also shape some of the big decisions a company makes about you: whether to hire you for a job, lend you money to buy a house, or pay for your medical care.
The vocabulary of this technology (artificial intelligence, algorithms, large language models) can make its effects feel very technical. In some ways, AI bias is a technical problem with no easy solutions. But the central questions in combating AI bias don't require much expertise to understand: Why does bias permeate these systems? Who is harmed by it? Who is responsible for addressing the problem and the harm it creates in practice? Can we trust artificial intelligence with important tasks that affect human lives?
Here is a guide to help you work through these questions and figure out where we go from here.
What is artificial intelligence? What is an algorithm?
Many definitions of artificial intelligence rely on comparison with human reasoning: AI, these definitions say, is advanced technology designed to replicate human intelligence, capable of performing tasks that previously required human intervention. In practice, AI is software that can learn, make decisions, complete tasks, and solve problems.
An AI system learns how to do this from a dataset, often called its training data. A system trained to recognize faces learns to do so from a dataset of photos. One that generates text learns how to write from existing text fed into it. In 2023, most of the AI you hear about is generative AI, the kind that learns from large datasets how to produce new content, such as photos, audio clips, and text. Think of the image generator DALL-E or the chatbot ChatGPT. To work, AI needs algorithms, which are basically mathematical recipes: the instructions the software follows to complete a task. In artificial intelligence, algorithms provide the basis for how programs learn and what they do.
Okay, so what is AI bias, and how does it get into AI systems?
AI bias is like any other bias: it is an unfair prejudice or practice that exists in, or is enforced by, a system. It affects some communities more than others, and it is seeping into more and more corners of everyday life. One might encounter it as social media filters that don't work properly on dark skin, or test-proctoring software that fails to account for neurodivergent students' behavior. A biased AI system might determine the care someone receives from a doctor, or how they are treated by the criminal justice system.
Bias enters AI systems in many ways. But broadly speaking, says Sasha Luccioni, a machine learning ethics researcher at Hugging Face, an open-source AI startup, to understand what happens when an AI system goes astray, you just need to know that AI is fundamentally trained to recognize patterns and to complete tasks based on those patterns. Because of this, she said, an AI system "will go after dominant patterns, whatever they may be."
Those dominant patterns may emerge in the training data an AI system learns from, in the tasks it is asked to perform, and in the algorithms that underpin its learning process. Let's start with the first.
AI-driven systems are trained on existing data: photos, videos, audio recordings, or text. That data can be skewed in countless ways. Facial recognition software, for example, needs photos to learn how to recognize faces, but if the dataset it was trained on contained photos depicting primarily white people, the system may not work well on non-white faces. If accented English is underrepresented in the audio clips of a training dataset, an AI-powered captioning program may not transcribe those accents accurately. AI can only learn from what it is given.
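A toy sketch can make this concrete. The snippet below is illustrative only and uses entirely synthetic data (the groups, centers, and sample counts are all invented for this example): a crude model fit to a training set where one group vastly outnumbers another ends up serving the underrepresented group far worse.

```python
# Illustrative sketch (synthetic data, not from the article): a model trained
# on a skewed dataset performs worse for the underrepresented group.
import random

random.seed(0)

def make_samples(n, center):
    """Draw n one-dimensional 'feature' values around a group-specific center."""
    return [(random.gauss(center, 1.0), center) for _ in range(n)]

# Group A dominates the training data; group B is barely represented.
train = make_samples(950, center=0.0) + make_samples(50, center=4.0)

# A crude model: predict the single centroid of everything it has seen.
centroid = sum(x for x, _ in train) / len(train)

def error(samples):
    """Average distance between the model's one-size-fits-all prediction
    and each sample's true group center."""
    return sum(abs(centroid - c) for _, c in samples) / len(samples)

test_a = make_samples(100, center=0.0)
test_b = make_samples(100, center=4.0)

print(f"error on group A: {error(test_a):.2f}")  # small: the model fits group A
print(f"error on group B: {error(test_b):.2f}")  # large: group B was barely seen
```

The model isn't "trying" to disadvantage group B; it simply chased the dominant pattern in its training data, which is exactly the failure mode described above.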
Bias in a dataset may itself simply reflect a larger, systemic bias. As Karen Hao explains in MIT Technology Review, unrepresentative training data prompts AI systems to learn unrepresentative patterns. A system designed to automate decision-making and trained on historical data may simply learn to perpetuate the biases already present in that history.
Even when an AI system's creators try to remove the bias introduced by a dataset, their attempts can bring problems of their own. Making an algorithm "blind" to attributes like race or gender doesn't mean the AI won't find other ways to introduce bias into its decision-making; it may rediscover proxies for the very attributes it is supposed to ignore, as the Brookings Institution explained in a 2019 report. A system designed to evaluate job applications, for example, might be "blind" to an applicant's gender but learn to distinguish between male- and female-sounding names, or look for other signals on a resume, such as a degree from a women's college, if its training dataset favors male applicants.
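The proxy problem can also be sketched in a few lines. This is a deliberately simplified, hypothetical example with made-up numbers: the "gender" column is dropped entirely, yet a correlated feature (attendance at a women's college) still lets a naive model reproduce the bias in its historical hiring labels.

```python
# Illustrative sketch (synthetic data, not from the article): removing the
# gender column does not help when a correlated proxy feature remains.

# Each applicant: (gender, attended_womens_college, hired_in_the_past)
# The historical labels are deliberately biased so the effect is visible.
applicants = (
    [("M", 0, 1)] * 80 + [("M", 0, 0)] * 20 +                      # men: mostly hired
    [("F", 1, 0)] * 70 + [("F", 0, 0)] * 20 + [("F", 0, 1)] * 10   # women: mostly not
)

# "Blind" the model: drop the gender column entirely.
blinded = [(college, hired) for _, college, hired in applicants]

# A naive model: estimate the historical hire rate for each remaining feature value.
def hire_rate(rows, college_flag):
    group = [hired for college, hired in rows if college == college_flag]
    return sum(group) / len(group)

rate_no_college = hire_rate(blinded, 0)  # mixed group, mostly men in this data
rate_college = hire_rate(blinded, 1)     # exclusively women in this data

print(f"predicted hire rate, no women's college: {rate_no_college:.2f}")
print(f"predicted hire rate, women's college:    {rate_college:.2f}")
```

Even though gender never appears in the blinded data, the model's predictions split sharply along the proxy feature, reproducing the bias of the historical labels.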
Have I encountered AI bias?
For many Americans, AI-powered algorithms are already part of daily life, from the recommendation algorithms that drive online shopping to the posts they see on social media. Vincent Conitzer, a professor of computer science at Carnegie Mellon University, pointed out that the rise of chatbots such as ChatGPT gives these algorithms more opportunities to be biased. At the same time, companies like Google and Microsoft are looking to generative AI to power the search engines of the future, where users will be able to ask conversational questions and get clear, simple answers.
"One use of these chatbots might be, 'Okay, I'm going to visit this city. Which sites should I check out? Which neighborhoods are better to stay in?' That could have real business impact on real people," Conitzer said.
While generative AI is just beginning to appear in everyday technology, conversational search is already part of many people's lives. Voice-activated assistants have transformed how we search for information and stay organized, making everyday tasks (compiling a shopping list, setting a timer, managing a schedule) as easy as speaking; the assistant does the rest. But tools like Siri, Alexa, and Google Assistant have built-in biases.
Speech recognition technology has a history of failing in certain situations. These systems may not recognize requests from people whose native language is not English, or may misunderstand Black users. While some people can simply choose not to use these technologies, such failures can be especially devastating for people with disabilities who rely on voice-activated tools.
This form of bias is also seeping into generative AI. A recent study of tools designed to detect the use of ChatGPT in a given writing sample found that these detectors can falsely and unfairly flag writing by non-native English speakers as AI-generated. For now, ChatGPT is still new to many users. But as companies rush to incorporate generative AI into their products, Conitzer said, "these technologies will increasingly be integrated into products in a variety of ways that have a real impact on people."