Author: Zhang Feng
On January 26, 2026, the European Commission officially announced a new formal investigation into Elon Musk's X platform (formerly Twitter) under the Digital Services Act (DSA). The core focus of this investigation is the recommendation algorithm driven by the Grok AI model built into the X platform, with a key assessment of whether the algorithm poses a risk of disseminating illegal content, including fabricated explicit pornographic images, child sexual abuse material, and other violations.

I. Background and Main Contents of the Digital Services Act (DSA)
The EU's investigation into Platform X is based primarily on the Digital Services Act (DSA).
Starting in August 2023, the first group of designated services, including Amazon's online marketplace and Apple's App Store, became subject to the EU's Digital Services Act, and from February 17, 2024, the Act applied to all online platforms serving users in the EU. The DSA is a core component of the EU's digital strategy and one of the strictest and most systematic digital services regulations in the world. Its enactment was not accidental: it responds to the many governance challenges created by the rapid development of digital services, and it aims to regulate the conduct of digital service providers, protect user rights, and maintain fairness and security in the digital space.

(I) Background of the Formulation

With the rapid development of internet technology, digital services have become deeply integrated into the daily lives of EU citizens. At the same time, disorder in the digital space has grown increasingly prominent, posing a pressing problem for EU regulators. On the one hand, illegal or harmful content spreads ever more widely on digital platforms, harming citizens' personal rights and dignity and potentially triggering social conflict, disrupting public order, and even endangering national security. On the other hand, large digital platforms leverage dominant market positions to collect user data at will, manipulate recommendation algorithms, and restrict competition, damaging users' privacy and hindering innovation in the digital industry. The DSA was enacted precisely to address these challenges. Its core objective is to establish a secure, transparent, and fair digital single market that balances digital service innovation with the protection of user rights and the public interest, while strengthening the EU's voice in global digital governance.
By establishing unified rules covering all providers of digital services to users in the EU, regardless of where they are headquartered, the DSA effectively curbs regulatory arbitrage.

(II) Main Contents

The DSA covers all aspects of digital services regulation and is built around three core principles: tiered regulation, clear responsibility, and transparency and traceability. It sorts digital service providers into different tiers and imposes regulatory obligations in proportion to their size and influence.

Firstly, it clarifies the scope of regulation. The DSA's scope is extremely broad, covering all natural persons, legal persons, and other organizations providing digital services to users within the EU, regardless of whether they are headquartered inside or outside the EU: if a service's recipients include EU users, the provider must comply with the Act. This covers social media platforms, search engines, e-commerce platforms, cloud service providers, app stores, and other digital services. Stricter requirements apply to "Very Large Online Platforms" (VLOPs) and "Very Large Online Search Engines" (VLOSEs): under the DSA, platforms with more than 45 million average monthly active users in the EU are designated as VLOPs, a category that includes X, Meta's Facebook, and Instagram.

Secondly, platform responsibilities are tiered.
The DSA sorts digital service providers into four tiers by size and influence, assigning different obligations to each. Tier 1 consists of basic service providers (e.g., internet access services), which need only fulfill basic compliance obligations such as cooperating with regulatory investigations. Tier 2 consists of general digital service providers (e.g., small social media platforms or niche e-commerce platforms), which must also fulfill obligations such as content moderation and handling user complaints. Tier 3 consists of hosting service providers (e.g., cloud storage or forum hosting), which must conduct preliminary review of user-uploaded content and promptly remove obviously illegal content. Tier 4 consists of very large online platforms and very large online search engines, which must fulfill the strictest obligations, including conducting regular risk assessments, establishing independent compliance functions, and submitting to third-party audits.

Thirdly, it regulates algorithms and content management. This is one of the DSA's core elements and a key basis for the EU's investigation into the X platform. The Act explicitly requires digital service providers to govern the recommendation algorithms they use, ensuring transparency, explainability, and fairness, and prohibits using algorithms to spread illegal content, mislead users, or discriminate. Very large online platforms must also regularly disclose how their algorithms and recommendation logic operate, along with the risks those algorithms may pose, and take effective measures to mitigate the systemic risks they create.
At the same time, the Act requires all digital service providers to establish and improve mechanisms for reviewing illegal content, promptly handle user complaints about illegal content, and remove obviously illegal content (such as terrorist propaganda or child sexual abuse material) within a specified time.

Fourthly, it strengthens the protection of user rights. The DSA gives user rights a prominent place, expressly granting users the right to know, the right to choose, the right to complain, and the right to data privacy. For example, users have the right to understand the basic logic of a platform's recommendation algorithm and to refuse personalized recommendations; platforms must provide convenient complaint channels and respond to complaints about illegal content or violations within a specified time; and platforms must not collect or use user data arbitrarily, and must strictly comply with the General Data Protection Regulation (GDPR) to protect users' privacy. The Act also pays special attention to minors and vulnerable groups, requiring platforms to take special measures to prevent minors from accessing harmful content.

Fifthly, it establishes a strict enforcement and penalty mechanism. The DSA defines the enforcement powers of the European Commission and of member-state regulators, and establishes a unified coordination mechanism to ensure the rules are applied effectively.
For digital service providers that violate the DSA, the EU imposes penalties graded by the severity of the violation: minor violations draw warnings and rectification orders, while serious violations can draw fines of up to 6% of global annual turnover, a rate far exceeding most other regulatory regimes and one with strong deterrent effect. For platforms that persist in violations and refuse to rectify, the EU can also take interim measures, including restricting the platform's functions, suspending its services within the EU, and even forcing it out of the EU market.

II. Core Details of the EU Investigation into Platform X

The new investigation launched by the EU on January 26, 2026, is not the first regulatory action the EU has taken against Platform X. In late 2025, the European Commission fined Platform X €120 million for misleading users through its blue checkmark verification interface design, for a lack of transparency in its advertising database, and for refusing to share data with researchers. The new investigation focuses on Platform X's AI-driven recommendation algorithm. It represents both the EU's routine oversight of very large online platforms under the Digital Services Act (DSA) and targeted oversight of how AI technology is applied on digital platforms.

(I) Regulatory Body

The EU's investigation into Platform X follows a two-tiered regulatory structure: led by the European Commission and coordinated with member-state regulators. This structure, established under the DSA, ensures the uniformity and efficiency of regulatory work. The core regulatory body is the European Commission. Under the DSA, the Commission coordinates the regulation of digital services across the EU and has direct supervisory and enforcement powers over very large online platforms and very large online search engines.
This investigation was initiated directly by the European Commission, which is responsible for developing the investigation plan, collecting evidence, assessing Platform X's compliance, and ultimately determining whether violations occurred and what penalties to impose. The cooperating regulators are the Digital Services Coordinators (DSCs) of the member states. The DSA requires each EU member state to designate a DSC to cooperate with the European Commission and to handle digital services complaints and investigations within its territory. In this investigation, Ireland's DSC, as the coordinator of the member state where Platform X has its EU establishment, has worked closely with the European Commission, assisting in evidence collection, conducting on-site inspections, and liaising with Platform X's representative office in Ireland. Other member states' DSCs will also provide support as the European Commission requires, ensuring the investigation proceeds smoothly across the EU.

(II) Target of Regulation

The target of this investigation is clearly defined, with Elon Musk's X platform as the core focus. Specifically, the investigation concentrates on two elements of the X platform: its built-in artificial intelligence model, Grok, and the recommendation algorithm Grok drives. From a platform perspective, X's monthly active users in the EU far exceed the 45 million threshold stipulated by the DSA, classifying it as a very large online platform. It must therefore fulfill the strictest obligations the DSA imposes, which is one important reason the EU has made it a regulatory focus.
Since its acquisition by Elon Musk, the X platform has undergone several functional adjustments, including the integration of the Grok AI model to drive its recommendation algorithm and generate user content. This investigation specifically addresses the compliance risks arising from that adjustment. From a functional perspective, the core focus of the regulation is the Grok AI model and the recommendation algorithm it drives. Grok is an artificial intelligence model developed by Musk's AI company xAI. Since 2024, the X platform has deployed it in various ways: letting users generate text and images, providing contextual information for user-posted content, and optimizing the platform's recommendation algorithm to push personalized content. The focus of this EU investigation is to assess whether the Grok-driven recommendation algorithm risks disseminating illegal content, including fake explicit pornography, child sexual abuse material, and antisemitic content, and whether Platform X conducted a sufficient risk assessment of its use of Grok and implemented effective mitigation measures. The investigation also covers Platform X's risk assessment reporting. The European Commission noted that, in reviewing the risk assessment reports Platform X submitted under the DSA, it found no mention of the Grok AI model. This suggests Platform X may never have assessed the risks that Grok itself, or its integration into the platform, could pose to EU citizens, which is one of the key concerns of this investigation.
At the same time, the European Commission has expanded the scope of its earlier investigation into Platform X's recommendation system to assess the impact of the platform's recently announced switch to a Grok-based recommendation system, determining whether Platform X has comprehensively identified and mitigated the systemic risks defined by the DSA.

(III) Regulatory Basis

The core legal basis for this EU investigation of Platform X is the Digital Services Act, which entered into force in November 2022 and became fully applicable in February 2024. The investigation rests mainly on the following key provisions, which also define Platform X's compliance obligations and the EU's regulatory authority.

Firstly, the DSA's provisions on the obligations of very large online platforms. Under Articles 34 and 35 of the DSA, very large online platforms must conduct regular systemic risk assessments to identify risks such as the spread of illegal content and infringement of user rights arising from their services; they must adopt specific risk mitigation measures, prepare a dedicated risk assessment report, and submit it to the European Commission and member-state regulators. The EU found that Platform X failed to include the Grok AI model in its risk assessment report, potentially violating these provisions and failing to fulfill its risk assessment obligations.

Secondly, the DSA's provisions on algorithm regulation. Articles 27 and 38 of the DSA require online platforms to govern the recommender systems they use, ensuring their transparency, explainability, and fairness; platforms must not use algorithms to spread illegal content, mislead users, or discriminate, and must take effective measures to mitigate the systemic risks their algorithms pose.
The core of this investigation is to assess whether Platform X's Grok-based recommendation algorithm meets these requirements and whether it risks disseminating illegal content; if such problems exist, the platform violates the relevant DSA provisions.

Thirdly, the DSA's provisions on handling illegal content. The DSA requires all digital service providers to establish sound mechanisms for reviewing and handling illegal content, to promptly address user complaints, and to remove obviously illegal content within a specified time. The European Commission pointed out that antisemitic content, non-consensual deepfake videos of women, and child sexual abuse material were found on Platform X, and that the platform may not have acted in time, potentially violating the DSA's illegal-content provisions.

Fourthly, the DSA's provisions on enforcement and penalties. Article 66 and related provisions of the DSA set out the European Commission's enforcement powers, including initiating proceedings, collecting evidence, taking interim measures, adopting non-compliance decisions, and imposing penalties. Under these provisions, the Commission may open a formal investigation into Platform X and collect evidence through information requests, interviews, and on-site inspections. If Platform X is found in violation and fails to take substantive corrective action, the Commission may take interim measures and adopt a non-compliance decision, fining Platform X up to 6% of its global annual turnover.
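The two numeric rules that recur throughout this analysis, the 45-million-user designation threshold and the 6%-of-turnover fine ceiling, can be sketched as simple arithmetic. The figures in the example below are hypothetical placeholders for illustration only, not actual data about Platform X.

```python
# Illustrative sketch of two numeric rules in the DSA discussed above.
# The user count and turnover figures used in the example are hypothetical.

VLOP_THRESHOLD = 45_000_000   # more than 45M avg. monthly active EU users
MAX_FINE_RATE = 0.06          # fine ceiling: 6% of global annual turnover

def is_vlop(monthly_active_eu_users: int) -> bool:
    """A platform above the threshold can be designated a VLOP."""
    return monthly_active_eu_users > VLOP_THRESHOLD

def max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a DSA non-compliance fine for a given turnover."""
    return global_annual_turnover_eur * MAX_FINE_RATE

# Example with hypothetical figures:
print(is_vlop(250_000_000))     # True: well above the 45M threshold
print(max_fine(3_000_000_000))  # 180000000.0, i.e. a ceiling of EUR 180M
```

The actual fine in a given case is set below this ceiling according to the severity of the violation, so the function gives only the statutory upper bound.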
This investigation also draws on the EU's previous regulatory record with Platform X: at the end of 2025, the EU fined the platform €120 million for deceptive design, insufficient advertising transparency, and inadequate data access for researchers. The new investigation thus also serves as ongoing monitoring of Platform X's compliance, ensuring it effectively fulfills its DSA obligations.

III. The Reasonableness and Fairness of the EU's Investigation

The EU's investigation into Platform X under the Digital Services Act is an important step in fulfilling the EU's regulatory responsibilities for digital services, protecting user rights, and maintaining digital security, and it therefore has a certain rationality. However, the investigation's targeting, the fairness of the regulation, and its potential negative impacts all raise questions. The market therefore generally expects that, if the EU ultimately imposes severe penalties, Platform X may sue before the European Court of Justice, challenging the legality of the investigation and the reasonableness of the penalty decision. That would prolong the regulatory process and make the case a focal point of global digital regulation.

(I) Overall Alignment with Regulatory Objectives and Industry Development Needs

The rationale for the EU's investigation is reflected mainly in the following aspects, which align with the DSA's legislative spirit and the global trend of digital governance.

Firstly, it aligns with the DSA's legislative objectives and can effectively prevent the risks posed by AI algorithms. The core objective of the DSA is to regulate the behavior of digital service providers, protect user rights, and maintain the security and fairness of the digital space.
This investigation focuses on the Grok AI model and the recommendation algorithm it drives on Platform X, addressing the potential risks of illegal content dissemination arising from AI applications on digital platforms, which aligns closely with the DSA's legislative objectives. As AI technology develops rapidly, the risks posed by AI algorithms grow increasingly prominent; without stronger regulation, they could seriously harm user rights and public order. By launching this investigation promptly, the EU can head off such risks, press platforms to fulfill their compliance obligations, and discipline the application of AI algorithms, consistent with the DSA's legislative spirit.

Secondly, it aligns with global digital governance trends and pushes the digital services industry toward compliance. The core trend in global digital governance today is stronger regulation of digital platforms, especially of AI technology and recommendation algorithms, to protect user rights and keep the digital space secure. This EU investigation is an important exercise in that governance and can push the industry toward compliance and standardization while offering a reference for regulators elsewhere. Many countries and regions are now drafting or refining digital services rules, and the EU's investigation and its experience implementing the DSA will serve as reference points, promoting the standardization of digital regulation globally.

Thirdly, the investigation targets Platform X's actual violations, demonstrating a clear regulatory focus. The EU did not launch this investigation blindly; it acted on Platform X's actual violations.
The European Commission pointed out that antisemitic content, non-consensual deepfake videos of women, and child sexual abuse material were found on Platform X, and that the platform failed to conduct a sufficient risk assessment of the Grok AI model or reflect Grok in its risk assessment report, potentially violating the DSA. The investigation thus targets Platform X's actual violations and risks, shows clear focus, and can effectively push Platform X to rectify its practices and protect user rights.

Fourthly, it is a necessary measure for fulfilling regulatory responsibilities and strengthening the DSA's authority. As a globally prominent very large online platform, Platform X's compliance is a significant example for the entire digital services industry. Investigating it is a necessary step for the EU in fulfilling its regulatory responsibilities: it sends a clear signal to digital service providers worldwide, urges all platforms to comply strictly with the DSA, strengthens the DSA's authority, and helps ensure the rules are effectively enforced.

(II) Controversies Surrounding the Targeting, Fairness, and Impact of Regulation

While the EU's investigation has merit, several aspects have raised questions in practice, sparking wide discussion within the industry.

Firstly, the investigation's targeting is open to question: it may focus excessively on Platform X, inviting suspicions of "selective regulation." As a company owned by Elon Musk, Platform X has repeatedly clashed with EU regulators since its acquisition. The EU previously fined the platform €120 million, and this new investigation inevitably raises questions of selective regulation, that is, an overemphasis on Platform X while similar violations by other platforms are neglected.
For example, Meta's Facebook and Instagram and Google's YouTube also incorporate AI models, and their recommendation algorithms may likewise risk spreading illegal content, yet the EU has not launched comparable investigations into them, raising questions about regulatory fairness. Moreover, Platform X's Grok model has been online for a relatively short time and its risks have not fully manifested; investigating now may be seen as over-regulation that hinders the platform's innovation.

Secondly, overly strict regulatory standards may stifle AI innovation. The DSA's requirements for very large online platforms are extremely stringent, especially for AI algorithms: platforms must disclose algorithm logic, conduct risk assessments, and strengthen content review. These requirements do prevent risks, but they may also inhibit innovation. AI development needs room for trial and error, and overly strict requirements may deter platforms from boldly developing and applying AI technologies for fear of severe penalties. Platforms may, for example, cut investment in generative AI or limit AI model functionality, to the detriment of AI progress and the vitality of the digital services industry.

Thirdly, the fairness of the regulation warrants scrutiny, as there may be a tendency toward "territorial protection." The DSA covers all platforms providing digital services within the EU, but in actual enforcement the EU may apply different standards to EU-based and non-EU platforms, a form of territorial protectionism.
Platform X, headquartered in the US, is a non-EU platform, while Meta and Google, though also US-headquartered, have a larger business presence and greater compliance investment in the EU and may receive gentler treatment from EU regulators. Meanwhile, EU-based digital service platforms may enjoy a more lenient regulatory environment altogether, which would violate the DSA's principle of uniform regulation and fair treatment and harm fair competition in the global digital services industry.

Fourthly, the potential negative impacts have not been fully weighed, which could affect user experience and industry development. As noted above, the investigation may lead platforms to weaken personalized recommendations, restrict AI features, and raise service fees, affecting users' experience and rights. At the same time, excessive compliance costs may drive small and medium-sized platforms out of the EU market, increasing industry concentration toward a monopolistic structure that harms fair competition and innovation. The EU does not appear to have fully considered these potential negative effects when launching the investigation, nor formulated countermeasures, so the regulatory outcome may fall short of expectations or even backfire.

Fifthly, the lack of transparency in the investigation process may undermine the impartiality of its results. Although the EU has stated the investigation's focus and legal basis, the specific procedures, evidence-collection methods, and evaluation standards have not been fully disclosed to the public or to Platform X, which may affect the impartiality of the outcome. As the subject of the investigation, Platform X has the right to know the details and to a full opportunity to defend itself.
However, the opacity of the EU's investigation procedures may prevent Platform X from fully exercising those rights, affecting the fairness and reasonableness of the investigation's results.