The European Parliament has approved an EU Commission proposal aimed at ensuring that the application of artificial intelligence (AI) is safe. The proposal was amended by Members of the European Parliament (MEPs) to guarantee that AI systems are overseen by human beings and are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The AI Act aims to set harmonized rules for AI across the EU, with obligations scaled to the level of risk a system poses.
The new proposal includes a long list of disallowed practices, including subliminal techniques that materially distort a person’s behavior in a manner that causes physical or psychological harm. While there is some grey area as to what constitutes physical and psychological harm, the MEPs have made several changes to avoid intrusive and discriminatory uses of AI systems.
Focus on Transparency
The AI Act also places a significant emphasis on transparency. Developers of generative foundation models, such as GPT, must disclose that content was generated by AI, publish summaries of the copyrighted data used for training, and design their models to prevent the generation of illegal content.
Impact on Innovation
While the AI Act is designed to protect fundamental rights, strengthen democratic oversight, and ensure a mature system of AI governance and enforcement, Romanian MEP Dragos Tudorache emphasizes the importance of not stifling innovation in the AI start-up space. The act is expected to be the first of its kind in the world to outline safe and transparent practices for AI development.
To pass, the AI Act will need backing from the full Parliament at the June 12-15 plenary session, with a majority accepting the changes made by the MEPs. The amended list of prohibited AI practices now includes:

- "Real-time" remote biometric identification systems in publicly accessible spaces;
- "Post" remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization;
- Biometric categorization systems using sensitive characteristics;
- Emotion recognition systems in law enforcement, border management, workplaces, and educational institutions;
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, which violates human rights and the right to privacy.