Ireland's media regulator has opened formal probes into TikTok and LinkedIn, flagging their content-reporting tools as potentially incompatible with the EU's Digital Services Act (DSA). The move intensifies scrutiny of how large platforms let users report illegal content and whether those systems are truly user-friendly.
What prompted the investigation?
Coimisiún na Meán, Ireland's media regulator, told Reuters it believes some reporting flows on TikTok and LinkedIn could mislead users. In practice, the regulator says, certain interfaces record reports only when content appears to breach the platform’s own policies — not necessarily when users flag suspected illegal material. That difference, regulators argue, may prevent effective identification and takedown of illegal content.
What regulators say and why it matters
John Evans, the DSA commissioner at Coimisiún na Meán, stressed that the DSA guarantees users the right to report content they suspect is illegal. Platforms must offer easy, accessible, and non-deceptive reporting tools that support informed user decisions. In other words, a reporting tool must let users flag suspected illegal content, not only content that violates a platform's internal rules.

Evans also pointed out that, after engagement with regulators, some services have made significant changes to their reporting systems, likely motivated by the risk of hefty fines. Under the DSA, companies can face penalties of up to 6% of their annual global turnover if breaches are confirmed. Many major tech firms have their European headquarters in Ireland, putting the country's regulator at the center of enforcement.
Broader enforcement: DSA and GDPR probes
The TikTok and LinkedIn reviews are part of a wider compliance sweep by Coimisiún na Meán. Separately, Ireland's data protection authority has opened an investigation into X over allegations that it used user data to train its AI assistant, Grok. If proven, that practice could breach the EU's GDPR and carry fines of up to 4% of annual global turnover.
Key regulatory concerns
- Reporting interfaces that are hard to find or confusing for users.
- Systems that do not allow anonymous reports in sensitive cases, such as child sexual abuse material.
- Designs that may discourage users from reporting suspected illegal content.
What this means for platforms and users
For platforms, the message is clear: reporting tools must be transparent and must genuinely enable users to flag suspected illegal material. For users, the investigations highlight how reporting flows actually work and whether platforms provide clear, accessible options.
Imagine trying to report illegal content, only to discover that the form registers policy violations but not suspected crimes. That gap is precisely what regulators want closed. Coimisiún na Meán says it will keep engaging with services and will take stronger regulatory steps, including fines, if necessary.
The probes underline the EU’s broader push to hold Big Tech accountable: not just for moderating content, but for designing interfaces and workflows that protect users and make legal redress practical and straightforward.