This episode was recorded in front of a live audience at EmTech Digital, MIT Technology Review’s annual artificial intelligence conference.
We meet:
- Nicol Turner Lee, Director of the Center for Technology Innovation at the Brookings Institution
- Anthony Green, Producer of the In Machines We Trust Podcast
Credits:
This episode was produced by Jennifer Strong, Anthony Green, Erin Underwood, and Emma Cillekens. It was edited by Michael Reilly, directed by Laird Nolan, and mixed by Garret Lang. Interlude art by Stephanie Arnett. Cover art by Eric Mongeon. Special thanks this week to Amy Lammers and Brian Bryson.
Full transcript:
[PREROLL]
[TR ID]
Jennifer Strong: Artificial intelligence is so deeply embedded in our daily lives that it is easy to forget it exists… but these systems, like the one powering an Instagram filter or setting the price of a ride home, can rely on pre-existing datasets that fail to depict the full picture of a consumer.
This means that people become outliers in that data—usually those who have historically been marginalized.
It’s why facial recognition technology is least accurate for women of color, and why ride-sharing services can actually be more expensive in low-income neighborhoods. So how do we stop this from happening?
And would you believe that a quote from Harry Potter and his magical world… might make a good starting point for this conversation?
This is Jennifer Strong, and in this episode our producer Anthony Green brings you a conversation about equity from MIT Technology Review’s AI conference, EmTech Digital. We’ll hear from Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, about what it takes to develop effective AI policy.
[EPISODE IN]
Anthony Green: So, are you much of a Harry Potter fan?
Nicol Turner Lee: Oh Lord, I, I, I haven’t watched a Harry Potter show since the kids were little, so I’ll give it a try.
[Laughter]
Anthony Green: Ha, well, that’s fair. No, honestly, I’m not even a big fan, but there’s a quote that’s stuck with me all these years. It says there will come a time when we must choose between what is right and what is easy, and it feels like that totally applies to how companies design these systems. Right? So I guess my question is, how can policymakers start working in the right direction when it comes to favorable outcomes for AI in decision-making?
Nicol Turner Lee: Mmm, that’s a good question. And thanks again for inviting me. You might be wondering why I’m sitting here. I am a sociologist. I’ve had the pleasure of being on this stage at several MIT conferences. But let me get into this before I answer your question, because I think the quote you cite points to what so many of my colleagues are talking about, which is the socio-technical implications of these systems.
Anthony Green: Hmm-hmm.
Nicol Turner Lee: So I’ve been doing this for about 30 years. Part of our challenge is that we don’t see equitable access to technology. When we think about these emerging, complex systems, in this case, we have to consider the extent to which they affect ordinary people, especially people who are already marginalized, already vulnerable in our society. So that statement makes a lot of sense, because if we’re not careful, I think technology itself will roll back some of the progress we’ve made in terms of equity and civil rights.
Anthony Green: Yeah, of course.
Nicol Turner Lee: Well, I’m going to date myself a bit here. I know I look a lot younger. When I was growing up, I used to run home to watch cartoons, right. There were two of them: I watched Fred Flintstone, if you all remember, he drove a car made of rocks, and I watched the Jetsons.
Anthony Green: Powered by his feet.
Nicol Turner Lee: Yes, yes! You are too young to know Fred Flintstone.
Anthony Green: Oh, Boomerang.
Nicol Turner Lee: But, but if you notice, you know, Fred Flintstone’s technology was outdated, correct?
Anthony Green: Yes.
Nicol Turner Lee: A stone as a wheel doesn’t work.
Anthony Green: Yes.
Nicol Turner Lee: The Jetsons’ technology really did. Beyond my PhD in sociology and my interest in technology, part of the challenge, and part of the reason I’m interested in this work, is that these systems are now so commonly used that people are surrounded by them in their everyday environments.
That is where I think we have to have more conversations, to get to your question in such a roundabout way. But I think it’s very important that we have these conversations now, before the technology accelerates further.
Anthony Green: One hundred percent. I mean, you know, all that said, policymaking alone isn’t going to be the only solution needed to solve these problems. So if you could talk a little bit about accountability, especially on the industry side, I’d love that.
Nicol Turner Lee: Mmm, the challenge for policymakers is that we are not necessarily technical experts. So we can see a problem; we can actually see it in the outcomes.
Anthony Green: Yes.
Nicol Turner Lee: But I don’t think many policymakers, outside of Ro Khanna and a few others, really know what it’s like to be in tech.
Anthony Green: Of course.
Nicol Turner Lee: They don’t understand how these results happen. They don’t understand what’s under the hood. Or, as people say, and I’m trying to get away from this language, it’s not really a black box, correct? It’s just a box.
Anthony Green: Right.
Nicol Turner Lee: Because there is some, uh, call it black box judgment. But when you think about policy and those outcomes, you have to ask yourself: how do policymakers take an organic, iterative model and then legislate or regulate it? I think that’s where people like me in the social sciences come in, to say what they should be looking for and where we need more conversations. So the accountability there is hard.
Anthony Green: Yes.
Nicol Turner Lee: Because not everyone in the room speaks the same language, right. Technologists are a bit eager to get to market. I call it unlicensed forgiveness. Well, my colleague at the Center for Technology Innovation, Tom Wheeler, has a great phrase: “build it, break it, then come back and fix it.” Well, guess what? That is unlicensed forgiveness. Because what happens? People go into foreclosure and we say we’re sorry. They get, uh, worse mortgage rates. They’re held longer in the criminal justice system because the models dictate those predictions.
Anthony Green: Yes.
Nicol Turner Lee: So policymakers haven’t totally kept up with the pace of innovation. We’ve been saying this for decades, but it’s really true.
Anthony Green: Of course. I mean, you’ve described this issue in the past as a civil and human rights issue.
Nicol Turner Lee: Yes. It is.
Anthony Green: Yes. So, I mean, can you expand on that? And how does it shape your conversations about policy?
Nicol Turner Lee: You know, it shapes my conversations from that perspective. I, I, you know, shameless plug, I have a book coming out on America’s digital divide. I call it, uh, Digitally Invisible: How the Internet Is Creating the New Underclass. It’s really about the digital divide, going beyond the binary of who’s online and who’s not, and really thinking about what the impact is when you’re not connected.
Anthony Green: Yes.
Nicol Turner Lee: And how do these emerging technologies affect you? So, to your point, I call it a civil rights issue because the pandemic showed that without internet access, you don’t actually have the same opportunities as everyone else. You cannot register for your vaccine. You cannot communicate with your friends and family. Fifty million school-age children were sent home, of which 15 to 16 million were unable to learn online. Now we are seeing the impact.
Anthony Green: Yes.
Nicol Turner Lee: So when we think about artificial intelligence systems that have now replaced what I call the death of the analog, replaced, uh, you know, how we used to do things in person, the laws we wrote in the civil rights era are breaking down. And, in my opinion, I don’t necessarily blame that on malfeasance by technologists. But what they’re doing is chipping away at opportunities that people fought for.
Anthony Green: Of course.
Nicol Turner Lee: Take the 2016 general election, when we let foreign agents come in and manipulate what voters saw. That’s a form of voter suppression.
Anthony Green: Yes.
Nicol Turner Lee: And there is nowhere to go, no Supreme Court or Congress, where you can say my vote was dismissed based on deep neural networks shaping what I saw.
Anthony Green: Yes.
Nicol Turner Lee: Or incorrect information about voting. We’re in a situation right now where, if you’re in a city like Boston and the Uber driver won’t pick you up because he sees your face in your profile, well, for the type of civil rights framework we have, which wasn’t built for the digital environment, you know, where do you go? So part of my work at the Brookings Institution is thinking about the flexibility and agility of these frameworks as they apply to emerging technologies. We don’t have easy answers, because these rules weren’t necessarily made for the 21st century.
Anthony Green: Yes.
Nicol Turner Lee: They were developed when my grandfather was telling me how he walked to school in the same pair of shoes, right, under the worst conditions, because he wanted an education. We don’t have that today. I think it’s worth discussing, as these technologies become more commonplace: how do we develop not only inclusive and fair AI, but legitimate AI as well? AI that makes sense to people, where people feel they have some recourse when there is malfeasance. So I’m going to talk about some of the work we’ve done there, but I think, you know, there’s a bunch of people like me, some of them at MIT, who are really trying to figure out how we hold people accountable for civil liberties and human freedom, and don’t let technology take the fall, you know, when things wreak havoc or go wrong.
Anthony Green: Don’t blame the robot.
Nicol Turner Lee: You know! I tell people the robots don’t discriminate. Sorry. You know, we do, and we have something to say about that. That’s where we start to focus on civil rights.
Anthony Green: Let’s go to the audience. Does anyone have questions?
Renee, Audience: Thank you very much. Renee, from São Paulo, Brazil.
Nicol Turner Lee: Hey!
Renee, Audience: This connects to the theme of the last talk. It’s about invisibility.
Nicol Turner Lee: Yes!
Renee, Audience: There are many ways to be invisible. If, if you have the wrong badge, you are invisible, like Harry Potter. If you are too old, if you have the wrong skin type. And there is another very interesting thing. When we talk about data and artificial intelligence, AI makes recommendations based on the available data.
Nicol Turner Lee: Yes, of course.
Renee, Audience: But the data is completely blind to those it does not see. So what kind of solutions are we building if they are based on data… data about the same people, always the same people? How do we make everyone visible?
Nicol Turner Lee: Yes!
Renee, Audience: So, thank you very much.
Nicol Turner Lee: No, I love this question. Can I jump in on this directly?
Anthony Green: Go ahead.
Nicol Turner Lee: You know, uh, my colleague and friend Renee Cummings, who is an artificial intelligence, uh, scientist in residence at the University of Virginia, introduced me to it a few months ago, and we did a podcast featuring her. The concept is called data trauma.
Anthony Green: Hmm.
Nicol Turner Lee: I wanted to walk you through that, because when I started thinking about the implications, it blew my mind, and it gets at Renee’s question. What does that mean? You know, when we talk about artificial intelligence, we often talk about framing the problem, the data we’re training it on, and the way we interpret or explain the results. But we never talk about the quality of the data, the facts the data itself contains, and the trauma of our society embedded in it. I don’t care what people say. If you’re training AI in criminal justice, and you’re trying to make a fair and impartial AI that can identify who should be detained or who should be released, we all know the specific algorithm I’m talking about, it’s trained on U.S. data in which people of color are disproportionately represented.
So my friend and I tell everyone this, and just to let you know, she’s not here, so she can’t get mad at me for saying it. I tell everyone, you need a social scientist as a friend. I don’t care who you are. If you’re a scientist, an engineer, a data scientist, and you don’t have a social scientist as your friend, you’re not being honest with the question, correct? Because what happens with this data? It comes with all this noise. Despite our ability as scientists to tease out the noise or abstract away from it, you still have an unequal base and foundation. So one of the things I’m trying to tell people is that we can acknowledge the trauma in the data we’re using. We can accept that our models are normative to the extent that there is bias: technical bias, social bias, outcome bias, and prediction bias. But we should disclose what those things are.
Anthony Green: Yes.
Nicol Turner Lee: That’s where the work I do, looking in particular at the use of proxies and the use of data, becomes very interesting to me. Which parts of the model are more detrimental to respondents and to results? Which parts should we disclose because we just don’t have the right data to predict accurately without some type of, you know, risk…
Anthony Green: Of course.
Nicol Turner Lee: …to that group of people.
Anthony Green: Yes.
Nicol Turner Lee: So to your question, I think if we admit that, you know, we can have an honest conversation about how to bring interdisciplinary backgrounds into these situations.
Anthony Green: We have one more question.
Kyle, Audience: Hi, Nicol.
Nicole Turner Lee: Hey.
Kyle, Audience: I appreciate your view. Well, my name is Kyle. I run… I’m a trained data scientist and I manage a team of AI and ML designers and developers. So, you know, the speed at which this industry is moving scares me. You mentioned GPT-3. We’re already talking about GPT-4 being developed, and the exponential leaps in capability that will come with it. What strikes me about what you mentioned is that legislators don’t understand what we’re doing. And I don’t think it should be left to us as data scientists to decide how to tie our own hands behind our backs.
Nicol Turner Lee: Yes, of course.
Kyle, Audience: And how to protect ourselves from unintended consequences.
Nicol Turner Lee: Yes, of course.
Kyle, Audience: So how do we engage? How do we help legislators understand the real risks, not the hype the media sometimes picks up on?
Nicol Turner Lee: Yes, yes. No, I love that question. I’m going to flip it and talk about it in the two ways I actually think about it. So I do think legislators have a role to play in this space, especially in those sensitive use cases.
So I tell people, and I keep giving this example: I love to buy boots, and I’m fine with the algorithm telling me as a consumer that I love boots. But as Latanya Sweeney’s work shows, what happens if you associate other things with me? Uh, what other, uh, attributes does this particular person have? When does she buy boots? How many pairs of boots does she have? Does she check her credit when buying boots? What computer did she use when she bought the boots? If you build a cumulative picture around me, then we have what Dr. Sweeney has described: these associations create risk.
So, to your first question, I think you are right. Policymakers should actually define guardrails, but I don’t think they need to do it for everything. I think we need to choose those areas that are the most sensitive. The EU calls them high-risk. Maybe we can draw from some of those models to help us think about what is high-risk and where we should spend more time, potentially with policymakers, working together.
I’m a big fan of regulatory sandboxes when it comes to co-design and co-evolution of feedback. Well, I published an article on an incentive-based rating system in an Oxford University Press book, and I can talk about that in a moment. But I also think, on the other hand, all of you have to take your reputational risk into account.
As we move into a much more digital society, developers have a responsibility to do their due diligence as well. As a company, you cannot afford to put out an algorithm or an autonomous system that you think is your best idea and then end up in the newspaper, because doing so erodes consumer trust in your product.
So I’m telling you, it goes both ways. There are guardrails that I think are worth having a conversation about. Facial recognition technology, for sure, because we don’t yet have technology that works accurately for all groups of people. Or the disparate impacts of financial products and services. I’ve found some great models in my work. In banking, they actually have triggers, because they have regulators who help them understand which proxies actually make a difference. We’ve just seen this in housing and in the appraisal market, where AI is being used to replace subjective decision-making but is instead fueling the kinds of discriminatory and predatory appraisals we already see. In some cases, we actually need policymakers to set up guardrails, and to do it more proactively. I keep telling policymakers that you can’t blame data scientists if the data is bad.
Anthony Green: Yes.
Nicol Turner Lee: Put more money into R&D. Help us fix datasets that over-represent certain areas or under-represent minority populations. The point is, it has to be a joint effort. I don’t think that if policymakers lead this alone, or if data scientists lead it alone in certain areas, we’re going to have good, successful solutions. I think you really need people to work together and collaborate on these principles. We create these models. Computers don’t. When we create algorithms, autonomous systems, or ad targeting, we know what we’re doing with those models. We know! We’re in this room, and we can’t sit back and say we don’t understand why we’re using these technologies. We know, because there’s a precedent for how they can be scaled in our society. But we need some accountability. That’s really what I’m trying to get at. Who holds us accountable for these systems we are creating?
It’s been so funny, Anthony, these last few, uh, weeks, because many of us have been watching the, uh, conflict in Ukraine. My daughter, I have a 15-year-old, keeps coming to me with all kinds of TikToks and other things she’s seen, asking, “Hey, Mom, do you know what’s going on?” And I’ve had to pull myself back, because I get really involved in the conversation without realizing, in some ways, that once I go down that road with her, I’m going deeper and deeper.
Anthony Green: Yes.
Nicol Turner Lee: And I think for those of us who are scientists, it goes back to that “I Have a Dream” speech. We have to decide which side of history we want to be on with these technologies. How much do we want to contribute down the rabbit hole? I think the great thing about AI is our ability to wrap human cognition into these iterative processes, far beyond anything we ever imagined watching the Jetsons.
That allows us to do things that none of us have been able to do in our lifetimes. Do we want to be on the right side of history? How do we approach these technologies so that we become better scientists?
Anthony Green: Of course.
Nicol Turner Lee: Not worse ones. I think that’s a valid question to ask this group. It’s a valid question to ask yourselves.
Anthony Green: I don’t know if we could have ended on a better note. We’re out of time! Nicol, we could go all day, but…
Nicol Turner Lee: I know. I always feel like a Baptist preacher, you know, so if I had the energy…
Anthony Green: Choir, can you sing?
Nicol Turner Lee: I know, right? I can’t sing, but you can sing the “I Have a Dream” speech, Anthony.
[Laughter]
Anthony Green: Oh man. You’re putting me on the spot and I’m already on stage.
Nicol Turner Lee: Yes, yes, haha.
Anthony Green: Nicol, thank you very much.
Nicol Turner Lee: Thank you as well. Appreciate it.
Anthony Green: Of course.
Nicol Turner Lee: Thank you, everyone.
[MIDROLL AD]
Jennifer Strong: This episode was produced by Anthony Green, Erin Underwood and Emma Cillekens. It was edited by Michael Reilly, directed by Laird Nolan, and mixed by Garret Lang. It was recorded in front of a live audience at the MIT Media Lab in Cambridge, Massachusetts, with special thanks to Amy Lammers and Brian Bryson.
Jennifer Strong: Thank you for listening. I’m Jennifer Strong.