Timnit Gebru, a prominent artificial intelligence computer scientist, is launching an independent artificial intelligence research institute focused on the technology's harms to marginalized groups, who often face disproportionate consequences from AI systems but have less influence in how they are developed.
Her new organization, the Distributed Artificial Intelligence Research Institute (DAIR), aims to both document harms and develop a vision for AI applications that can have a positive impact on those same groups. Gebru helped pioneer research into facial recognition software's bias against people of color, which prompted companies like Amazon to change their practices. A year ago, she was fired from Google over a research paper critiquing the company's lucrative AI work on large language models, which can help answer conversational search queries.
DAIR received $3.7 million in funding from the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundation and the Rockefeller Foundation.
“I’ve been frustrated for a long time about the incentive structures that we have in place and how none of them seem to be appropriate for the kind of work I want to do,” Gebru said.
Gebru said DAIR will join an existing ecosystem of smaller, independent institutes, such as Data & Society, Algorithmic Justice League, and Data for Black Lives. She hopes DAIR will be able to influence AI policies and practices inside Big Tech companies like Google from the outside — a tactic Gebru said she employed during her time at Google.
Even as the high-profile co-lead of Google’s Ethical AI group, Gebru said she was more successful at changing Google’s policies by publishing papers that were embraced externally by academics, regulators and journalists, rather than raising her concerns internally about bias, fairness and responsibility.
Gebru said she hopes to use the funding to break free of the broken incentives of Big Tech, where she said outspoken researchers can be sidelined, potential harms are evaluated only after an AI system is in use, and profitable AI projects — such as large language models, the subject of Gebru's contested paper at Google — are treated as inevitable once they have been deployed in the real world. She said there was little consideration for AI applications that did not rely on big data sets, or that pursued less profit-oriented aims, such as language revitalization.
SOURCE: The Washington Post, Nitasha Tiku