Microsoft Moves Toward Signing EU AI Code While Meta Stands Firm Against It
A clear split has emerged among leading tech companies as the European Union readies enforcement of its new AI regulations.
Microsoft’s president, Brad Smith, confirmed the company is likely to endorse the EU’s voluntary code of practice for general-purpose AI models, a framework aimed at helping firms comply with the EU’s AI Act.
Meanwhile, Meta Platforms has decisively refused to sign, calling the code excessive and legally uncertain.
Smith told Reuters on 18 July,
“I think it’s likely we will sign. We need to read the documents.”
He added that Microsoft values “the direct engagement by the AI Office with industry,” showing a willingness to cooperate with European regulators.
This contrasts sharply with Meta’s stance, where Joel Kaplan, Meta’s chief global affairs officer, wrote on LinkedIn that “Europe is heading down the wrong path on AI” and stated clearly that Meta “won’t be signing” due to “legal uncertainties” and rules that “go far beyond the scope of the AI Act.”
What the EU’s Voluntary AI Code Demands
Published on 10 July by the European Commission, the General-Purpose AI Code of Practice sets out transparency, copyright, and safety obligations for AI developers.
It requires providers to maintain up-to-date documentation on their AI models, publish summaries of training data, and adopt policies respecting EU copyright law.
The code also imposes risk assessments and continuous monitoring to prevent serious incidents, such as malfunctions or security breaches, with mandatory reporting to regulators for any significant event.
The framework applies to AI models with large-scale computational needs, measured by floating-point operations (FLOPs).
Models exceeding 10²³ FLOPs during training must meet basic rules, while those above 10²⁵ FLOPs face stricter risk management.
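The two compute thresholds can be sketched as a simple tier check. This is an illustrative sketch only: the function name and tier labels are invented for clarity, while the 10²³ and 10²⁵ FLOP cut-offs are the ones cited above.

```python
def compliance_tier(training_flops: float) -> str:
    """Classify an AI model by the training-compute thresholds cited
    in the EU code of practice.

    Illustrative only: tier labels are hypothetical; the 1e23 and
    1e25 FLOP cut-offs are the figures reported in the article.
    """
    if training_flops > 1e25:
        return "strict risk management"
    if training_flops > 1e23:
        return "basic transparency rules"
    return "below code thresholds"


# Example: a model trained with ~5e24 FLOPs clears the lower
# threshold but not the upper one.
print(compliance_tier(5e24))  # basic transparency rules
```

In practice, training compute is an estimate rather than a precise figure, which is one reason providers would need to document how their models were trained.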
Industry Divisions on Regulation Approach
Meta’s refusal echoes concerns from a group of about 45 European companies, including Airbus and ASML, which recently urged the Commission to delay the rollout by two years to avoid harming innovation.
Kaplan argued the code would “throttle the development and deployment of frontier AI models in Europe” and could “stunt European companies” building AI-driven businesses.
In contrast, companies like OpenAI and Mistral have already signed the code, suggesting varied strategies in managing EU regulatory demands.
The EU has warned firms rejecting the voluntary framework that they will face closer scrutiny and will need to prove compliance through alternative means.
Microsoft’s AI Investments Amid Workforce Changes
Microsoft is investing heavily in AI infrastructure, planning to spend around $80 billion (£68.6 billion) on data centres to support AI training.
Simultaneously, it is reducing its workforce by approximately 15,000 employees this year, largely affecting the Xbox division.
While some link job cuts to AI efficiency gains, Microsoft insists that AI productivity improvements are “not a predominant factor.”
Judson Althoff, Microsoft’s Chief Commercial Officer, highlighted that AI tools have helped save over $500 million in call centre costs and accelerated software development, with AI generating about 35% of new product code.
The firm’s investment in AI remains a priority, led by British AI pioneer Mustafa Suleyman, although relations with partners like OpenAI reportedly face tension.
EU’s AI Act Enforcement Timeline and Impact
The voluntary code is a stepping stone toward full compliance under the AI Act, which takes effect from 2 August 2025.
Enforcement for new AI models begins in August 2026, while existing models have until August 2027 to comply.
The code’s voluntary status initially allows industry input, but non-signatories will face more rigorous oversight.
This legislation bans some high-risk AI uses outright and mandates registration and risk management for other applications.
The EU’s approach also includes exemptions for free and open-source models under strict conditions but maintains oversight for the most powerful systems.
Marketing and Business Implications of AI Regulation
For marketing professionals, these regulations will influence AI tool selection and content creation practices.
Transparency rules require AI providers to disclose training datasets and capabilities, improving clarity for marketers relying on AI for campaign optimisation and personalised advertising.
Copyright compliance adds complexity to producing AI-generated content, as providers must respect content owners’ rights throughout the AI lifecycle.
Companies using AI for data analysis or automated decisions must also navigate heightened risk assessment requirements.
The phased enforcement gives businesses time to adjust current AI deployments and transition to compliant solutions, shaping AI’s integration into marketing and enterprise software.
Can Industry and Regulators Find Middle Ground?
The divergent reactions to the EU’s AI code reflect the broader challenge of balancing innovation with regulation.
While some companies embrace the framework as a way to shape rules and demonstrate responsibility, others fear excessive controls will stifle progress and competitiveness.
With global efforts like the G7 Hiroshima AI Process underway, the EU’s code is part of a complex landscape where tech firms must juggle multiple regulatory demands.
The question remains whether these approaches can converge to foster both trust and growth in AI technologies, or whether regulatory divergence will fragment the AI market itself.