A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally between MIT's Media Lab, Harvard's Berkman Klein Center and seven smaller research efforts around the world.
The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers have a say in how AI is developed and deployed.
To that end, this first round of funding supports existing organizations working along those lines, as well as nurturing some newer ones.
The lion's share of this initial round, $5.9 million, will be split by MIT and Harvard, as the initial announcement indicated. Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analysis side of things.
Media and information quality: looking at how to understand and control the effects of autonomous information systems and influential algorithms like Facebook's news feed.
Social and criminal justice: perhaps the area where the bad influence of AI-type systems could be the most insidious; biases in data and interpretation could be baked into investigative and legal systems, giving them the illusion of objectivity. (Obviously the fund seeks to avoid this.)
Autonomous cars: although this may seem incongruous with the others, self-driving cars represent an immense social opportunity. Mobility is one of the most influential socioeconomic factors, and its reinvention offers a chance to improve the condition of nearly everyone on the planet, with great potential for both advancement and abuse.
Those two well-known organizations will be pursuing issues related to those areas (they're already working together anyway), while the seven smaller efforts are being more modestly funded.
Digital Asia Hub, FAT ML and ITS Rio will be hosting conferences and workshops to which experts across fields will be invited, advancing and enriching the conversations around various AI issues. ITS Rio will also be translating debates on these topics, a critical task, since there are important thinkers worldwide and these conversations shouldn't be limited by something as last-century as native language.
On the research side, AI Now will be looking at bias in data collection and healthcare; the Leverhulme Center will be looking at interpretability of AI-related data; and Data & Society will be conducting ethnographically informed studies on the human element of AI and data; for example, how demographic imbalances in who runs real estate businesses might inform the systems they create and use.
Access Now (which doesn't really fit in either category) will be working to create a set of guidelines for businesses and services looking to conform to major upcoming data regulations in the EU.
"For this initial cohort, we looked for projects that fit our goal of building networks across fields, and that would complement the work of our anchor partners at the Media Lab and Berkman Klein," said Knight's VP of Technology and Innovation, John Bracken, in an email to TechCrunch.
"We think it's vital that civil society has a strong voice in the development of artificial intelligence and machine learning. We see these projects as part of a growing set of researchers, engineers, and policymakers who will be part of ensuring that these new tools are developed ethically."
Although the funds are in the public interest, they aren't just handouts; I asked Bracken whether there were any concrete expectations for the organizations involved.
"Absolutely," he said. "The discussion around artificial intelligence is no longer a far-off, speculative thing. Each of the grants we're making has deliverables planned for the next twelve months, and we'll be showcasing them as they launch."
A few million bucks may seem like a drop in the bucket among the herds of unicorns we track here at TechCrunch, but it may come to look like a bargain once the studies and events being funded bear fruit and produce the kind of productive dialogue this fast-moving field needs.