
DAIR: Timnit Gebru








Spread the Power

"If we want AI that benefits our communities," Gebru says, "then what kind of processes should we follow?"

Two weeks before leaving Google, Gebru recalls, she hired a colleague from Morocco who eagerly tried to raise awareness about the abuses of social media in his home country. He talked with others about the Moroccan government's use of social media to harass citizens and journalists. He described the imprisonment of his friends. "How can we talk about ethics or responsibility when we have these companies that can simply say, 'Sorry, I don't care about Morocco'?" Gebru says. "Even at places like Stanford, we have too much concentrated power that is impacting the world, and yet the world has no opportunity to affect how technology is being developed."

Foundational to the work of DAIR is an effort to fracture this concentration of power and build instead a decentralized and local base of expertise. For example, Raesetje Sefala, a research fellow at DAIR who is using computer vision to analyze the geographic evolution of apartheid in South Africa, is based in Johannesburg and comes from the Lebowakgomo township in Limpopo province. "This is a very personal project for her," Gebru says. Likewise, Meron Estifanos, also a research fellow, has spent her career advocating for refugees in the Sinai. She has no background in the technical details of AI but understands deeply both how AI systems harm refugees and how social media is used as a tool of surveillance and harassment. (She herself is often subject to this harassment.)


CAS is hosting a year-long speaker series on tech in Africa.


For years, computer scientist Timnit Gebru has been voicing concerns about fairness in AI. She pointed out the field's lack of diversity in 2015 and, two years later, responded by co-founding the organization Black in AI. While a postdoctoral researcher at Microsoft, she investigated and wrote about bias in facial recognition software. After her term at Microsoft, she went on to co-lead a Google team focused on the ethics of artificial intelligence. There she raised a number of red flags, and was publicly and controversially fired in December 2020.

"What I've realized is that we can talk about the ethics and fairness of AI all we want, but if our institutions don't allow for this kind of work to take place, then it won't," Gebru says. "At the end of the day, this needs to be about institutional and structural change. If we had the opportunity to pursue this work from scratch, how would we want to build these institutions?"

Speaking at a recent event hosted by the Center for African Studies (CAS) in the School of Humanities and Sciences, co-sponsored by Stanford HAI, Gebru answered her own question. She discussed the animating principles of her organization, the Distributed AI Research Institute (DAIR), which supports independent, community-rooted AI research broadly and is currently prioritizing work that benefits Black people in Africa and across the African diaspora.








