Sunday, September 24, 2023

AI automatically discriminates. Here's how to spot it.

The next generation of artificial intelligence faces the familiar problem of bias.

Part of the discrimination issue of The Highlight. This story was co-produced with Capital B.

Suppose a computer and a human face off in a neutral contest. Who do you think will win? Many people would bet on the machine. But that's the wrong question.

Humans created computers, and humans design and train the systems that make modern technology work. When those systems are built, the biases of their human creators can be baked into them. When people refer to AI bias, that's essentially what they're talking about. Like human bias, AI bias can become discrimination once it's translated into decisions or actions. And like many forms of discrimination, AI bias disproportionately affects communities that have historically faced oppression, or still do.

Facial recognition software has long failed to recognize Black faces. Researchers and users have found anti-Black bias in AI applications ranging from hiring to robotics to lending. An AI system can determine whether you qualify for public housing or whether a landlord will rent to you. Generative AI is even being touted as an antidote to the paperwork that drives burnout among medical professionals.

As generative AI tools such as ChatGPT and Google Bard enter the mainstream, the unfair preferences, or biases, that have long plagued artificial intelligence persist. Their impact is everywhere: in the apps and software you encounter every day, from the automated sorting of social media feeds to customer service chatbots. AI bias can also shape some of the big decisions a company makes about you: whether to hire you for a job, lend you money to buy a house, or pay for your medical care.

The terms for this technology (artificial intelligence, algorithms, large language models) can make its effects feel forbiddingly technical. And in some ways, AI bias is a technical problem with no easy solution. But the central questions in combating AI bias don't require much expertise to understand: Why does bias permeate these systems? Who is harmed by AI bias? Who is responsible for addressing the problem and the harm it creates in practice? And can we trust artificial intelligence with important tasks that affect human lives?

Here is a guide to help you work through these questions and figure out where we go from here.

What is artificial intelligence? What is an algorithm?

Many definitions of artificial intelligence rely on comparisons to human reasoning: AI, by these definitions, is advanced technology designed to replicate human intelligence, capable of performing tasks that previously required human intervention. More plainly, AI is software that can learn, make decisions, complete tasks, and solve problems.

An AI system learns how to do this from a dataset, often called its training data. An AI system trained to recognize faces will learn to do so from a dataset made up of lots of photos. One that generates text will learn how to write from existing text fed into the system. In 2023, most of the AI you hear about is generative AI: the kind that learns from large datasets how to produce new content, such as photos, audio clips, and text. Think of the image generator DALL-E or the chatbot ChatGPT. For AI to work, it needs algorithms, which are basically mathematical recipes: the instructions the software follows as it completes a task. In an AI system, algorithms provide the basis for how the program learns and what it does.

OK, so what is AI bias, and how does it get into AI systems?

AI bias is like any other bias: an unfair preference or practice that exists within, or is enforced by, a system. It affects some communities more than others, and it is permeating more and more corners of everyday life. One might encounter it when a social media filter doesn't work properly on dark skin, or when test proctoring software fails to account for the behavior of neurodivergent students. A biased AI system could determine the care someone receives at the doctor's office or how they are treated by the criminal justice system.

Bias enters AI systems in many ways. But broadly speaking, says Sasha Luccioni, a machine learning ethics researcher at Hugging Face, an open-source artificial intelligence startup, to understand what happens when an AI system goes astray, you just need to know that AI is fundamentally trained to recognize patterns and to complete tasks by acting on those patterns. Because of this, she said, an AI system "will go after dominant patterns, whatever they may be."
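A minimal sketch of that pattern-seeking behavior, using entirely invented data: a trivial "model" that simply returns the most common label in its training data will reproduce whatever pattern dominates, whether or not that pattern is fair.

```python
from collections import Counter

# Toy sketch (invented data): absent other signal, many models default to
# the dominant pattern in their training data. Here, 90 percent of past
# decisions said "approve", so a pattern-matching baseline approves everyone.
past_decisions = ["approve"] * 90 + ["deny"] * 10

def dominant_pattern(labels):
    """Return the most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

print(dominant_pattern(past_decisions))  # prints "approve"
```

A real AI system is vastly more sophisticated, but the underlying tendency Luccioni describes is the same: it amplifies whatever pattern is most common in what it was shown.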

Those dominant patterns may emerge in the training data an AI system learns from, in the tasks it is asked to perform, and in the algorithms that underpin its learning process. Let's start with the first one.

AI-driven systems are trained on existing data: photos, videos, audio recordings, or text. That data can be skewed in countless ways. For example, facial recognition software needs photos to learn how to recognize faces, but if the dataset it was trained on contains photos depicting primarily white people, the system may not work well on nonwhite faces. If English spoken with a slight foreign accent isn't represented in the audio clips of a training dataset, an AI-powered captioning program may not transcribe that accent accurately. AI can only learn from what it is given.
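To see how skew in the training data alone can degrade performance for an underrepresented group, here is a toy simulation (all data invented): a nearest-neighbor "face matcher" trained on 95 examples from one group and only 5 from another. It is a deliberately crude stand-in for a real recognition system, not how any production software works.

```python
import random

random.seed(0)

# Toy sketch (all data invented): a 1-nearest-neighbor "face matcher".
# Group A contributes 95 training photos, group B only 5, so the model
# has far less information about what group B "faces" look like.
def make_faces(group, n):
    center = 0.0 if group == "A" else 1.5  # groups differ slightly in feature space
    return [(random.gauss(center, 1.0), group) for _ in range(n)]

train = make_faces("A", 95) + make_faces("B", 5)    # skewed training set
test = make_faces("A", 200) + make_faces("B", 200)  # balanced test set

def predict(x):
    # classify by the label of the closest training example
    return min(train, key=lambda t: abs(t[0] - x))[1]

def accuracy(group):
    points = [(x, g) for x, g in test if g == group]
    return sum(predict(x) == g for x, g in points) / len(points)

# Accuracy is much worse for the group the training data underrepresents.
print(f"group A accuracy: {accuracy('A'):.2f}")
print(f"group B accuracy: {accuracy('B'):.2f}")
```

Nothing about the algorithm targets group B; the disparity comes entirely from who is, and isn't, in the training set.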

The bias in a dataset may itself simply be a reflection of a larger systemic bias, as Karen Hao has explained in MIT Technology Review.

Unrepresentative training data prompts AI systems to recognize unrepresentative patterns. A system designed to automate a decision-making process and trained on historical data may simply learn to perpetuate the biases already present in that history.

The creators of an AI system may try to remove the bias a dataset introduces. But some attempts to reduce bias bring problems of their own. Making an algorithm "blind" to attributes like race or gender doesn't mean the AI won't find other ways to introduce bias into its decision-making; it might infer the very attributes it is supposed to ignore, as the Brookings Institution explained in a 2019 report. For example, a system designed to evaluate job applications might be "blind" to an applicant's gender, yet learn to distinguish between male- and female-sounding names, or to look for other signals on a resume, such as a degree from a women's college, if its training dataset favors male applicants.
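A sketch of that proxy problem, with a tiny invented dataset: the gender column has already been dropped, but a correlated attribute (here, a degree from a women's college) remains, and anything fit to the historical decisions simply learns the proxy instead.

```python
# Toy sketch (invented data): removing the "gender" column doesn't remove
# bias when a correlated proxy survives. These historical hiring decisions
# favored male applicants; "womens_college" acts as a proxy for gender.
history = [
    # (womens_college, hired) -- the gender column was already dropped ("blind")
    (False, True), (False, True), (False, True), (False, False),
    (True, False), (True, False), (True, False), (True, True),
]

def hire_rate(proxy_value):
    """Hiring rate the historical data exhibits for a given proxy value."""
    outcomes = [hired for college, hired in history if college == proxy_value]
    return sum(outcomes) / len(outcomes)

# A model fit to this data learns the proxy's pattern, not fairness:
print(hire_rate(False))  # 0.75 -- applicants without the proxy attribute
print(hire_rate(True))   # 0.25 -- applicants with the proxy attribute
```

The "blind" system never sees gender, yet its learned behavior reproduces the gendered pattern in the historical outcomes.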

Have I encountered AI bias?

Maybe.

For many Americans, AI-powered algorithms are already part of daily life, from the recommendation algorithms that drive online shopping to the posts they see on social media. Vincent Conitzer, a professor of computer science at Carnegie Mellon University, points out that the rise of chatbots such as ChatGPT provides even more opportunities for these algorithms to be biased. Meanwhile, companies like Google and Microsoft are looking to generative AI to power the search engines of the future, where users will be able to ask conversational questions and get clear, simple answers.

"One use of these chatbots might be, 'Okay, I'm going to visit this city. What sites should I check out? Which neighborhooods are better to live in?' That might have a real business impact on real people," Conitzer said.

While generative AI is just beginning to appear in everyday technology, conversational search is already part of many people's lives. Voice-activated assistants have transformed how we search for information and stay organized, making everyday tasks (compiling a shopping list, setting a timer, managing a schedule) as easy as speaking; the assistant does the rest. But tools like Siri, Alexa, and Google Assistant have built-in biases.

Speech recognition technologies have a history of failing certain users. They may not recognize requests from people whose first language isn't English, or may misunderstand Black users. And while some people can simply choose not to use these technologies to avoid those failures, the failures can be especially harmful for people with disabilities who rely on voice-activated technology.

This form of bias is permeating generative AI, too. A recent study of tools designed to detect the use of ChatGPT in a given writing sample found that the detectors can falsely, and unfairly, flag writing by non-native English speakers as AI-generated. For now, ChatGPT is still a novelty for many users. But as companies rush to incorporate generative AI into their products, Conitzer said, "these technologies will increasingly be integrated into products in a variety of ways that have a real impact on people."

Who is most hurt by AI bias?

To get a clear picture of how AI bias affects human lives, look at the criminal justice system. Courts use algorithms, shown to be biased against Black people, to create risk scores meant to predict a person's likelihood of reoffending. Those scores influence sentencing and a prisoner's chances of parole. Police departments are even incorporating facial recognition, and the technology's well-documented biases, into day-to-day policing.

An algorithm designed to assess whether an arrested person should be detained will draw on data from the US criminal justice system. That data, Conitzer said, can contain wrongful convictions, and it fails to capture people who committed crimes but were never caught.

"Some neighborhoods are far more heavily policed than others. It may look like other neighborhoods aren't committing much crime, but that might just be because they aren't being policed as closely," Conitzer explained. Algorithms trained on this data will pick up these biases in the criminal justice system, recognize them as a pattern, and make biased decisions based on them.
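The dynamic Conitzer describes can be sketched as a simple feedback loop (all numbers invented): two neighborhoods with identical underlying crime, where recorded crime depends on how heavily each is patrolled, and a "predictive" rule that sends patrols wherever recorded crime is highest.

```python
# Toy sketch (invented numbers): a predictive-policing feedback loop.
# Both neighborhoods have the same true crime rate by construction, but
# recorded crime scales with patrol presence, so the initial policing
# imbalance looks like a crime imbalance and gets locked in.
true_crime_rate = {"north": 0.10, "south": 0.10}  # identical by construction
patrols = {"north": 80, "south": 20}              # initial policing imbalance

for _ in range(5):
    # recorded crime ~ true rate * patrol presence (more patrols, more arrests)
    recorded = {n: true_crime_rate[n] * patrols[n] for n in patrols}
    # the "algorithm" reallocates 100 patrols in proportion to recorded crime
    total = sum(recorded.values())
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}

print(patrols)  # the 80/20 imbalance persists even though true rates are equal
```

The algorithm never observes the true crime rates, only the records its own deployment pattern produced, so the historical bias is exactly what it learns.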

Nor is apparent AI bias limited to a single institution. Consider the anti-cheating software schools turned to for virtual test-takers during the Covid-19 pandemic. Such software typically uses video analysis and facial recognition to watch for behaviors it has been trained to treat as signs of potential cheating. Students quickly discovered that virtual proctoring software designed to promote academic fairness did not work for all of them. Some popular proctoring programs failed to detect Black faces and penalized students who could not find a stable internet connection or a quiet, private testing space at home. Proctoring software can be particularly biased against students with a range of disabilities, and can heighten the anxiety of test-takers with certain mental health conditions.

As the Center for Democracy &amp; Technology points out, proctoring software may incorrectly flag students who require a screen reader, students who are blind or have other disabilities that cause irregular eye movements, and neurodivergent students who may pace or fidget during exams. Some proctoring services don't even allow bathroom breaks.

This sounds terrible! Is there any solution?

The good news is that many people are talking and thinking about how to reduce AI bias. The bad news: not everyone agrees on how to address this increasingly pressing problem.

OpenAI CEO Sam Altman recently told Rest of World that he believes these systems will eventually be able to repair themselves: "I'm optimistic that we're going to get to a world where these models can be a force for reducing bias in society, rather than reinforcing it," he said. "While the early systems before these techniques certainly reinforced bias, I think we can now say: we want a model to be unbiased, and it does a really good job of that."

Altman's solution essentially asks the world to trust the technology to heal itself, a process driven by the people who created it. For many AI and ethics experts, that's not enough. Luccioni, the Hugging Face ethics researcher, cites the example of generative AI tools meant to speed up medical paperwork, and argues that we should question whether artificial intelligence belongs in that field at all. "If ChatGPT writes the wrong prescription, someone dies," she said. And while note-taking isn't a task that requires a decade of education to master, assuming you can simply swap in an AI-powered tool to speed up a doctor's paperwork could eliminate important oversight.

Abby Ohlheiser is a freelance journalist and editor writing about technology, religion, and culture.


Illustration: Xia Gordon for Vox and Capital B