The EU AI Act, which entered into force in August 2024, isn't standing still. Like most groundbreaking legislation, it's already facing calls for amendments and refinements. Companies that invested heavily in initial compliance efforts might feel a bit like they're chasing a moving target (and honestly, they're not wrong).
The European Commission has signaled that several key provisions will undergo review within the next two years. This isn't just bureaucratic tinkering - these changes could fundamentally alter how businesses approach AI compliance across the European market.
Table of contents
- Current state of AI Act implementation
- Key areas targeted for revision
- Foundation model regulation updates
- Risk assessment framework changes
- Enforcement mechanism adjustments
- Impact on different industry sectors
- Timeline for upcoming amendments
- Preparing for regulatory evolution
- Compliance strategy adaptation
Current state of AI Act implementation
The AI Act's phased rollout has created an interesting situation. The prohibitions on unacceptable-risk practices apply from February 2025, obligations for general-purpose AI models from August 2025, and most remaining provisions don't take effect until 2026 and 2027. This staggered approach has given both regulators and industry players time to identify practical challenges.
Early implementation feedback has highlighted several pain points. The definition of "AI system" proved broader than many companies anticipated. Risk categorization guidelines remain somewhat ambiguous in certain edge cases. And the compliance documentation requirements have proven more extensive than initial estimates suggested.
Member states are still establishing their national competent authorities, and they're taking different paths: some are handing enforcement to existing regulators such as data protection or market surveillance authorities, while others, like Spain with its dedicated AI supervision agency (AESIA), are building new bodies from scratch. These divergent approaches are already creating some inconsistency in interpretation and enforcement expectations.
The European AI Office has been busy publishing guidance documents, but many of these remain in draft form. Companies are operating with incomplete information, making strategic compliance decisions based on their best interpretation of evolving requirements.
Key areas targeted for revision
Several specific aspects of the AI Act are drawing criticism from industry groups, legal experts, and even some regulatory bodies themselves. The Commission has acknowledged that certain provisions need clarification or adjustment.
Definition scope refinements
The current definition of AI systems casts an extremely wide net. Simple rule-based systems and traditional statistical software could technically fall under the regulation's scope. This has created confusion for companies using basic automation tools that hardly qualify as "artificial intelligence" in the colloquial sense.
Industry associations have been pushing for a more precise definition that excludes conventional software applications. The Commission appears receptive to this feedback, particularly for low-risk applications that pose minimal societal harm.
Risk classification adjustments
The four-tier risk classification system (prohibited, high-risk, limited risk, minimal risk) has proven challenging to apply consistently. Real-world AI systems often don't fit neatly into predefined categories.
Consider an AI-powered customer service chatbot used by a bank. Is this a high-risk system because it operates in the financial sector? Or minimal risk because it only handles routine inquiries? Different legal interpretations have yielded different conclusions.
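To make the ambiguity concrete, here's a toy Python sketch in which two equally plausible classification rules, one keyed to sector and one to function, reach opposite conclusions about the same chatbot. Both rules are simplifications invented for illustration, not the Act's actual classification logic.

```python
# Toy illustration of classification ambiguity: two plausible rules disagree
# about the same bank chatbot. Both rules are hypothetical simplifications,
# not the AI Act's actual classification logic.
def classify_by_sector(system: dict) -> str:
    """Rule 1: anything deployed in a sensitive sector is treated as high-risk."""
    return "high-risk" if system["sector"] in {"finance", "healthcare"} else "minimal"

def classify_by_function(system: dict) -> str:
    """Rule 2: only systems affecting legal or financial outcomes are high-risk."""
    return "high-risk" if system["affects_legal_or_financial_outcomes"] else "minimal"

chatbot = {
    "sector": "finance",
    "affects_legal_or_financial_outcomes": False,  # only answers routine queries
}

print(classify_by_sector(chatbot))    # -> high-risk
print(classify_by_function(chatbot))  # -> minimal
```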
The Commission is considering more granular risk categories and clearer classification criteria. This would help companies make more confident compliance decisions without requiring extensive legal consultation for every AI deployment.
Foundation model regulation updates
Foundation models - what the Act's final text calls general-purpose AI (GPAI) models, the large language models and multimodal systems that power many consumer applications - have received significant attention since the Act's passage. The regulation established some requirements for these systems, but rapid technological advancement has outpaced the legislative framework.
The threshold for "systemic risk" foundation models is currently set at 10^25 FLOPs of compute used during training. This seemed reasonable when written, but model efficiency improvements mean that highly capable systems might fall below this threshold while still posing significant risks.
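To see how this plays out in practice, here's a rough back-of-the-envelope sketch. It uses the common "6 × parameters × training tokens" approximation for dense transformer training compute, which is an industry rule of thumb rather than anything the Act specifies, and the model figures below are purely illustrative.

```python
# Rough sketch: estimate whether a training run crosses the AI Act's 10^25 FLOP
# presumption threshold for systemic-risk general-purpose AI models.
# The 6 * parameters * tokens approximation is a common rule of thumb for dense
# transformers, not part of the regulation, and the example figures are made up.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * num_parameters * num_training_tokens

def presumed_systemic_risk(num_parameters: float, num_training_tokens: float) -> bool:
    return estimate_training_flops(num_parameters, num_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70-billion-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")               # ~6.3e24
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```

Under this rough estimate, the hypothetical model lands at about 6.3 × 10^24 FLOPs, just under the line despite being very capable, which is exactly the kind of case driving the push for capability-based criteria.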
Frontier systems like OpenAI's GPT-4 and Anthropic's Claude 3 are widely assumed to sit above this threshold and qualify as systemic-risk models. But what about smaller, highly specialized models that might be equally capable in narrow domains? The Commission is exploring more nuanced criteria that consider model capabilities rather than just computational resources.
Open source considerations
Open source foundation models present unique challenges. How do you regulate a model that thousands of developers might download, modify, and deploy? The original Act didn't provide clear guidance for this scenario.
Meta's release of Llama models highlighted these complexities. The company makes these models freely available, but has limited control over how they're used downstream. Should Meta bear responsibility for every application built on top of Llama? Most experts agree this would be impractical and potentially stifling to innovation.
Proposed amendments would create clearer liability frameworks for open source model providers. These would likely focus on responsible disclosure practices and basic safety testing rather than end-use monitoring.
Risk assessment framework changes
The current risk assessment requirements have proven burdensome for many companies, particularly smaller organizations without dedicated compliance teams. The documentation standards are comprehensive but sometimes redundant.
Companies must currently maintain detailed records of training data, model architecture decisions, testing procedures, and ongoing monitoring results. While this information is genuinely useful for ensuring AI safety, the administrative overhead has been substantial.
Streamlined documentation
Proposed changes would standardize risk assessment templates and reduce duplicative reporting requirements. Instead of requiring companies to create entirely custom documentation, they could use pre-approved frameworks adapted to their specific use cases.
This approach mirrors successful compliance frameworks in other industries. Financial services companies don't reinvent risk management from scratch - they adapt established methodologies to their particular circumstances.
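As a sketch of what a reusable template might look like in code, here's a minimal Python record a team could adapt for each system. The field names are hypothetical and aren't taken from the Act or any official template.

```python
# Minimal sketch of a reusable risk-assessment record. Field names are
# hypothetical, not prescribed by the AI Act or any pre-approved framework.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessmentRecord:
    system_name: str
    intended_purpose: str
    risk_tier: str                      # e.g. "high", "limited", "minimal"
    training_data_sources: list[str]
    known_limitations: list[str]
    mitigations: list[str]
    assessed_on: date = field(default_factory=date.today)
    reviewer: str = "unassigned"

    def summary(self) -> str:
        return f"{self.system_name} ({self.risk_tier} risk), assessed {self.assessed_on}"

record = RiskAssessmentRecord(
    system_name="product-recommendation-v3",
    intended_purpose="Rank catalogue items for logged-in shoppers",
    risk_tier="minimal",
    training_data_sources=["clickstream_2024", "catalogue_metadata"],
    known_limitations=["cold-start users", "seasonal drift"],
    mitigations=["fallback to popularity ranking", "quarterly bias review"],
)
print(record.summary())
```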
Proportional requirements
One-size-fits-all regulation rarely works well in practice. A startup deploying a simple recommendation algorithm shouldn't face the same compliance burden as a multinational corporation developing autonomous vehicle software.
The Commission is exploring tiered requirements based on company size, system complexity, and potential impact. Small companies might qualify for simplified procedures, while high-impact applications would keep robust oversight regardless of who deploys them.
Enforcement mechanism adjustments
The AI Act's enforcement mechanisms have revealed some practical challenges during early implementation. Coordination between national authorities remains inconsistent, and penalty structures might not provide appropriate incentives for compliance.
Penalty framework revisions
Current fines can reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. While these penalties are certainly attention-grabbing, they might be disproportionate for minor technical violations or good-faith compliance attempts.
The Commission is considering more graduated penalty structures. First-time violators or companies that self-report violations might face reduced penalties. This would encourage proactive compliance efforts rather than defensive strategies designed to minimize regulatory exposure.
Cross-border coordination
AI systems frequently operate across multiple EU member states, but enforcement currently happens at the national level. This creates potential for regulatory arbitrage and inconsistent treatment of similar violations.
Proposed amendments would strengthen the European AI Office's coordination role and establish clearer protocols for cross-border investigations. Companies would benefit from more predictable enforcement patterns across the EU market.
Impact on different industry sectors
Different industries are experiencing varying levels of disruption from AI Act requirements. Healthcare and financial services, which already operate under strict regulatory frameworks, have adapted more easily than sectors with historically lighter compliance obligations.
Healthcare sector adaptations
Healthcare AI systems often qualify as high-risk under the Act's classification system. Medical device manufacturers were already familiar with rigorous safety testing and documentation requirements, so AI compliance represented an extension of existing practices rather than a complete departure.
However, software-as-a-medical-device applications have faced unique challenges. These products blur traditional boundaries between medical devices and software applications. Proposed amendments would provide clearer guidance for digital health applications, particularly those using AI for diagnostic or treatment recommendations.
Financial services alignment
Banks and insurance companies have decades of experience with algorithmic auditing and bias testing. The AI Act's requirements for high-risk systems align closely with existing practices for credit scoring and automated decision-making.
The main challenge has been adapting these practices to newer AI techniques like large language models used for customer service or fraud detection. Proposed changes would recognize existing financial sector compliance frameworks and avoid duplicative requirements where appropriate.
Retail and e-commerce impacts
Online retailers using AI for product recommendations, pricing optimization, or customer targeting have faced significant compliance uncertainty. Many of these applications fall into regulatory gray areas - not clearly high-risk, but potentially more complex than minimal risk systems.
Proposed amendments would create clearer safe harbors for common e-commerce AI applications. Companies following established best practices for algorithmic transparency and user control would face streamlined compliance procedures.
Timeline for upcoming amendments
The Commission has outlined a preliminary timeline for reviewing and potentially amending key AI Act provisions. This schedule attempts to balance the need for regulatory stability with the reality of rapidly evolving technology.
Short-term adjustments (2024-2025)
The most urgent clarifications focus on definitional issues and classification criteria. Companies need clearer guidance to make informed compliance investments. Draft amendments addressing these areas are expected by mid-2025.
These short-term changes will likely take the form of implementing acts and delegated regulations rather than modifications to the primary legislation. This allows for faster adoption while maintaining the Act's fundamental structure.
Medium-term revisions (2025-2027)
More substantial amendments addressing enforcement mechanisms and penalty frameworks are planned for this timeframe. These changes require more extensive consultation with member states and industry stakeholders.
The Commission has committed to publishing a comprehensive review of the Act's effectiveness by early 2027. This review will inform more significant structural changes to the regulation's approach.
Long-term evolution (2027+)
Technology continues advancing rapidly, and the regulatory framework needs to remain relevant. The Commission has indicated openness to more fundamental revisions based on practical implementation experience and technological developments.
Areas like artificial general intelligence, quantum-enhanced AI systems, and brain-computer interfaces weren't addressed comprehensively in the original Act. Future amendments will likely expand coverage to these emerging technologies.
Preparing for regulatory evolution
Companies can take several practical steps to position themselves for successful adaptation as the AI Act evolves. The key is building flexible compliance systems that can accommodate regulatory changes without requiring complete overhauls.
Documentation best practices
Maintaining comprehensive records of AI system development and deployment decisions will remain important regardless of specific regulatory requirements. Companies should focus on creating documentation that serves multiple purposes - compliance, technical improvement, and business analysis.
Version control for AI models and training data becomes critical when regulations change. Being able to demonstrate how systems evolved over time helps establish good-faith compliance efforts even when requirements shift.
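Here's one lightweight way that could look in practice: a minimal Python sketch that hashes a training-data manifest and appends model version metadata to an append-only log. The paths and field names are hypothetical, and nothing about this format is mandated by the Act.

```python
# Minimal sketch: record which model version was trained on which data snapshot,
# so later audits can reconstruct the history. Paths and field names are
# hypothetical; this is not an officially mandated record format.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Content hash of a training-data manifest (or any other artefact)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_model_version(model_name: str, version: str, manifest: Path, registry: Path) -> dict:
    entry = {
        "model": model_name,
        "version": version,
        "training_data_manifest": str(manifest),
        "manifest_sha256": file_sha256(manifest),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    # Append-only JSON-lines log doubles as a simple audit trail.
    with registry.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

# Example usage with hypothetical paths:
# record_model_version("support-chatbot", "2.4.1",
#                      Path("data/manifest.csv"), Path("compliance/model_registry.jsonl"))
```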
Monitoring regulatory developments
The European AI Office publishes regular updates on implementation guidance and potential amendments. Companies should establish processes for tracking these developments and assessing their relevance to existing AI systems.
Industry associations and legal firms specializing in AI regulation often provide analysis and interpretation of regulatory changes. These resources require investment, but for smaller companies they can be more cost-effective than building equivalent expertise in-house.
Building adaptable systems
Technical architectures that separate AI models from business logic create more flexibility for compliance adaptations. When regulatory requirements change, companies can modify compliance procedures without rebuilding core applications.
This approach also facilitates testing different compliance approaches or adapting systems for multiple jurisdictions with varying requirements.
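A minimal sketch of that separation, with illustrative names rather than any specific framework: the business logic talks to the model through a narrow interface, so a compliance-driven change such as a stricter human-review threshold doesn't require touching the model or rebuilding the application.

```python
# Sketch of keeping the model behind a narrow interface so compliance rules
# (e.g. when to route to human review) can change independently of the model.
# Class and function names are illustrative, not from any specific framework.
from typing import Protocol

class Scorer(Protocol):
    def score(self, applicant: dict) -> float: ...

class RuleBasedScorer:
    """Stand-in model; an ML model could be swapped in behind the same interface."""
    def score(self, applicant: dict) -> float:
        return 0.8 if applicant.get("income", 0) > 50_000 else 0.4

def decide_application(applicant: dict, scorer: Scorer, review_threshold: float = 0.5) -> str:
    """Business logic stays stable; the scorer and the threshold policy evolve separately."""
    if scorer.score(applicant) < review_threshold:
        return "route to human review"
    return "approve automatically"

print(decide_application({"income": 62_000}, RuleBasedScorer()))
```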
Compliance strategy adaptation
Smart companies are treating AI Act compliance as an ongoing process rather than a one-time project. This mindset proves particularly valuable as regulatory requirements continue evolving.
Risk-based prioritization
Not every AI system requires the same level of compliance attention. Companies should focus their most significant efforts on high-risk applications while maintaining proportional oversight for lower-risk systems.
Regular risk assessments help identify when systems might move between categories due to changes in usage patterns, user populations, or regulatory interpretations.
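One way to operationalize this, sketched below with hypothetical triggers and thresholds that aren't drawn from the Act: a small check that flags a system for re-assessment when its usage or context shifts.

```python
# Illustrative sketch: flag systems for re-assessment when usage or context
# changes in ways that could move them between risk tiers. The trigger list
# and thresholds are hypothetical, not taken from the AI Act.
def reassessment_triggers(previous: dict, current: dict) -> list[str]:
    reasons = []
    if current["monthly_users"] > 2 * previous["monthly_users"]:
        reasons.append("user base more than doubled")
    if current["serves_minors"] and not previous["serves_minors"]:
        reasons.append("now serves minors")
    if current["sector"] != previous["sector"]:
        reasons.append("deployed in a new sector")
    return reasons

previous = {"monthly_users": 10_000, "serves_minors": False, "sector": "retail"}
current = {"monthly_users": 25_000, "serves_minors": True, "sector": "retail"}
print(reassessment_triggers(previous, current))
# -> ['user base more than doubled', 'now serves minors']
```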
Vendor management evolution
Many companies rely on third-party AI services rather than developing systems internally. Managing vendor compliance becomes increasingly complex as regulations evolve and vendor offerings change.
Contract terms should address how compliance responsibilities shift when regulations change. Clear allocation of liability and compliance obligations protects both parties and facilitates smoother adaptation to new requirements.
Training and awareness programs
Regulatory literacy among technical teams, product managers, and business stakeholders directly impacts compliance effectiveness. Regular training programs help ensure that AI Act considerations are integrated into routine decision-making processes.
These programs should cover both current requirements and anticipated changes. Teams make better decisions when they understand the direction of regulatory evolution, not just current obligations.
The AI Act represents one of the world's first comprehensive attempts to regulate artificial intelligence systems. Like most pioneering legislation, it's experiencing growing pains as theoretical frameworks meet practical implementation challenges.
Companies operating in the EU market need to stay informed about potential amendments while maintaining compliance with current requirements. This balancing act requires sophisticated legal and technical capabilities.
Building robust compliance frameworks that can adapt to regulatory evolution becomes a competitive advantage. Companies that view compliance as a technical and business capability rather than just a legal obligation are better positioned for long-term success.
For organizations seeking to maintain compliance with both current and future AI Act requirements, partnering with specialized compliance platforms can provide valuable support. ComplyDog offers comprehensive tools for managing GDPR and AI Act compliance obligations, helping companies adapt their practices as regulations continue evolving while maintaining focus on core business objectives.


