Artificial Intelligence: EU Regulations
Hannes Werthner of the Digital Humanism Initiative offers an assessment of the EU AI Act agreement (see here for the public statement):
These have been 37 good hours for the EU AI Act and the (digital) world. Many from civil society, science, politics, and industry have stood up for this. […] The European Commission, the Parliament, and the Member States have agreed on the act (although the final wording still needs to be worked out). As you know, in recent weeks Germany, France, and Italy argued against strict regulation, fearing it might hinder innovation.

On the particularly controversial question of how foundation models (now also called general-purpose artificial intelligence, GPAI) are regulated, strict transparency rules were agreed for providers of large models such as OpenAI's GPT-4, while small and medium-sized companies and open-source models face less stringent requirements. The use of AI for biometric facial recognition in public spaces, also controversial until the end, was permitted, but only for law enforcement, not for blanket surveillance. Foundation models (GPAI) must meet the transparency requirements proposed by Parliament, including producing technical documentation, complying with EU copyright law, and publishing detailed summaries of the data used for training.

For models classified as high-risk due to their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, companies must carry out model evaluations, assess and mitigate systemic risks, report serious incidents to the Commission, ensure cybersecurity, and report on their energy efficiency. For the details known so far, see https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.