Brussels has released a white paper with new guidelines for regulating Artificial Intelligence. For some reason, I find the directive of informing citizens that they are interacting with AI particularly interesting. The news below is copied from the MIT Technology Review newsletter of February 20, 2020.
The EU just released new guidelines for regulating AI
The news: The European Union’s newly released white paper containing guidelines for regulating AI acknowledges the potential for artificial intelligence to “lead to breaches of fundamental rights,” such as bias, suppression of dissent, and lack of privacy. It suggests legal requirements such as:
- Making sure AI is trained on representative data
- Requiring companies to keep detailed documentation of how the AI was developed
- Telling citizens when they are interacting with an AI
- Requiring human oversight for AI systems
The criticism: The new criteria are much weaker than the ones suggested in a version of the white paper leaked in January. That draft suggested a moratorium on facial recognition in public spaces for five years, while this one calls only for a “broad European debate” on facial recognition policy. Meanwhile, the paper’s guidelines for AI apply only to what it deems “high-risk” technologies.
When does this kick in?: The white paper is only a set of guidelines. The European Commission will start drafting legislation based on these proposals and comments at the end of 2020.
What else: The EU also released a paper on “European data strategy” that suggests it wants to create a “single European data space”—meaning a European data giant that will challenge the big tech companies of Silicon Valley.
Read next: Artificial intelligence development should be regulated, says Elon Musk. (TR)