In a letter, Digital Europe, a digital technology trade association, has warned the EU against overregulating in its upcoming AI bill. Here is the official joint statement, signed by 33 European digital industry representatives.
"As the AI Act approaches the final weeks of negotiations, two sticking points remain. Firstly, how to handle foundation models and general-purpose artificial intelligence (GPAI). Secondly, the risk of misalignment with existing sectoral legislation.
Only 8% of European companies use AI – far from the Commission's 2030 target of 75% [1] – and barely 3% of the world's AI unicorns come from the EU [2]. Europe's competitiveness and financial stability highly depend on the ability of European companies and citizens to deploy AI in key areas like green tech, health, manufacturing or energy.
For Europe to become a global digital powerhouse, we need companies that can lead on AI innovation, including with foundation models and GPAI. We, the 33 undersigned European digital industry representatives, see a huge opportunity in foundation models and in the new innovative players emerging in this space, many of them born here in Europe. Let's not regulate them out of existence before they get a chance to scale, or force them to leave.
On scope, the Commission's own data shows that, for an SME of 50 employees, placing one single AI-enabled product on the market could result in compliance costs of well over €300,000 under the AI Act [4].
It is vital that we reduce this burden as much as possible, and let SMEs apply GPAI, foundation models and other new emerging AI technologies in their innovations.
This is why we support recent moves by Member States to limit the scope for foundation models to transparency standards. The AI Act does not have to regulate every new technology, and we strongly support the regulation's original scope focusing on high-risk uses, not specific technologies.
Further, key European sectors are already strongly regulated, and it is imperative to clarify and remove any overlaps and conflicts with existing sectoral legislation, such as the Medical Devices Regulation.
- The risk-based approach must remain at the core of the AI Act. It is supported by a broad alliance of industry and civil society and is key to ensuring that the regulatory framework is technology-neutral and focuses on truly high-risk use cases – it should not treat all AI software without an intended purpose as high risk.
- Regulatory flaws will be aggravated in key sectors such as healthcare. We should better align with the EU's existing, comprehensive product safety legislation to address conflicting requirements and overlaps, and so avoid disruptions to well-established sectoral frameworks.
- Regulating GPAI and foundation models requires focusing on information sharing, cooperation and compliance support across the value chain. The AI Act should allow companies to detail collaboration activities and allocate responsibilities among them. The concept of 'very capable' or 'high-impact' foundation models cannot be measured and is not future-proof.
- The EU's comprehensive copyright protection and enforcement framework already contains provisions that can help address AI-related copyright issues, such as the text and data mining exemption and the corresponding opt-out for rightsholders in Art. 4 of the Copyright Directive."
Banking 4.0 – "how was the experience for you"
"So many people are coming here to Bucharest, people that I see and interact with on LinkedIn, and now I get the chance to meet them in person. It was like being at the Football World Cup, but this was the LinkedIn World Cup of payments and open banking."
Many more interesting quotes in the video below: