CMPs: A Privacy Risk Masquerading as a Solution

17 May 2025 - tsp
Last update 18 May 2025
Reading time 5 mins

Consent Management Platforms (CMPs) were introduced as a response to the growing legal and societal pressure for stronger user privacy, especially in the EU under the GDPR and ePrivacy regulations. On the surface, they promise users control over their personal data, enabling them to selectively allow or deny cookies, trackers, and other data collection mechanisms. However, behind this surface lies a disturbing irony: CMPs themselves can become a new layer of surveillance rather than a shield against it (which is not to say that any specific CMP actually does this at the moment).

The Hidden Party in Every Page Load

CMPs are designed to give users control over what data is collected about them - for example, whether cookies are allowed for analytics, advertising, or personalization purposes. They are supposed to act as a privacy gateway: asking for consent before any tracking scripts are executed. However, due to how they are typically implemented, they introduce a new tracking vector themselves.

The issue arises because most CMPs are embedded from external domains. This means their JavaScript files are loaded from the CMP provider’s servers, not the site’s own domain. As a result, each time a page includes a CMP script, the browser sends an HTTP request to that third-party domain. This request automatically includes the user’s IP address, the address of the visited page via the Referer header, the browser’s User-Agent string, and any cookies previously set for the CMP provider’s domain.
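To make this concrete, here is a minimal sketch (TypeScript on Node, with an invented domain and purely illustrative logging - no claim that any real CMP does this) of what the provider’s server sees for every page that embeds its script:

```typescript
// Minimal sketch: what a hypothetical CMP provider's server receives whenever
// a page embeds its script from, say, cmp.example.com. Everything logged here
// is sent automatically by the visitor's browser; the script itself does not
// have to do anything.
import { createServer } from "node:http";

createServer((req, res) => {
  console.log({
    ip: req.socket.remoteAddress,         // network address of the visitor
    page: req.headers["referer"],         // URL of the page being visited
    userAgent: req.headers["user-agent"], // browser and OS details
    cookie: req.headers["cookie"],        // any cookie previously set for this domain
  });

  // Answer with the consent banner code so the embedding page keeps working.
  res.writeHead(200, { "Content-Type": "application/javascript" });
  res.end("/* consent banner code would go here */");
}).listen(8080);
```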

This is the same mechanism that tracking scripts from ad networks or analytics providers use to track users across multiple domains. So, while CMPs are meant to regulate tracking, they are often technically indistinguishable from the trackers they are supposed to manage. The only difference is a promise in how the data is used. Worse still, they are often not isolated - meaning they can gather consent behavior across a user’s journey through unrelated websites, especially when those sites use the same CMP vendor. Again, the only protection against this is a promise to adhere to the rules.

If a user visits five different news sites that all use the same CMP provider, that CMP now has metadata across all those visits. This undermines the very idea of local user consent: consent choice becomes an opportunity for central correlation.
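A deliberately simplified sketch of how such central correlation could look on the provider’s side - purely hypothetical, with invented names, and explicitly not an accusation against any real vendor - is shown below:

```typescript
// Hypothetical sketch only: a long-lived third-party cookie on the shared CMP
// domain is enough to link visits across unrelated sites. This demonstrates
// the capability, not the behaviour of any actual vendor.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const visitsByUser = new Map<string, string[]>(); // uid -> origins visited

createServer((req, res) => {
  // Reuse the identifier from a previously set cookie, or mint a new one.
  const match = /uid=([0-9a-f-]+)/.exec(req.headers["cookie"] ?? "");
  const uid = match?.[1] ?? randomUUID();

  // The Referer reveals which (unrelated) site embedded the CMP script.
  const origin = new URL(req.headers["referer"] ?? "https://unknown/").origin;
  visitsByUser.set(uid, [...(visitsByUser.get(uid) ?? []), origin]);

  res.writeHead(200, {
    "Content-Type": "application/javascript",
    // One year of Set-Cookie ties every future visit to the same identifier.
    "Set-Cookie": `uid=${uid}; Max-Age=31536000; SameSite=None; Secure`,
  });
  res.end("/* consent banner code */");
}).listen(8080);
```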

Certified - but by Whom, and for Whose Interest?

To participate in the IAB’s Transparency and Consent Framework (TCF), CMPs must be “certified”. In practice this means the CMP registers with IAB Europe, implements the framework’s standardized consent string and API, and passes the IAB’s compliance checks.

But critically, certification is not a privacy guarantee in the technical sense. It is a guarantee that the CMP plays well within the advertising ecosystem, plus a claim by some certification entity that the rules are being followed - and one has to trust that certifier. Many certified CMPs are directly connected to AdTech companies or operate on a business model that benefits from user profiling and analytics. One could see this as a conflict of interest, not a trust anchor.
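For context, this is roughly how an embedded vendor script queries the page’s CMP for consent data through the TCF v2 API; the __tcfapi entry point, the event names and the TC string are defined by the framework, while the typing below is abbreviated for illustration:

```typescript
// Sketch of a vendor script reading consent via the IAB TCF v2 API.
// The type declaration is simplified; real deployments use the full TCData shape.
declare global {
  interface Window {
    __tcfapi?: (
      command: string,
      version: number,
      callback: (data: any, success: boolean) => void
    ) => void;
  }
}

window.__tcfapi?.("addEventListener", 2, (tcData, success) => {
  const done =
    tcData?.eventStatus === "tcloaded" || tcData?.eventStatus === "useractioncomplete";
  if (success && done) {
    // The encoded TC string is what gets handed around the advertising ecosystem.
    console.log("TC string:", tcData.tcString);
  }
});

export {};
```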

A Flawed Concept: Trust Without Technical Guarantees

At the heart of the problem lies a flawed architectural assumption: we have to trust that the CMP doesn’t misuse its position. Yet nothing in the browser’s same-origin security model prevents that CMP from building a shadow profile of cross-site activity. There is no technical guarantee, just legal promises and (self-)certification - and in the tech world, especially on the internet, legal promises alone have repeatedly proven to be all but meaningless. History is filled with examples of major actors violating their own privacy policies or breaching user trust, with minimal consequences. Trust, without enforceable and verifiable technical boundaries, is a hollow protection. Politicians and jurists may not want to hear this, but they are not the ones setting the constraints in this arena - it is technological and technical reality that defines what is actually possible and enforceable. Legal declarations without accompanying technical mechanisms amount to little more than hopeful assertions.

Compare this to actual privacy-preserving approaches: consent handled entirely first-party, stored in origin-scoped browser storage, and enforced by the browser’s isolation mechanisms rather than by a third party’s promise.

Only origin-specific storage and isolation mechanisms have the potential to truly guarantee user privacy - because they work at a technical level, independently of legal frameworks, political will, or whether companies choose to follow the rules. They are a solution in the literal sense: they define what is possible and enforceable by the platform itself. Everything else - banners, certified promises, or consent tokens exchanged between parties - remains performative theater without any real binding power.
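As a minimal sketch of what such an approach could look like - key names and data shape are made up and follow no standard - consent is stored and evaluated entirely within the site’s own origin, and no request ever reaches a third party:

```typescript
// First-party consent sketch: the choice lives in origin-scoped localStorage,
// is read back locally, and gates any optional script loading. Nothing leaves
// the site's own origin.
type Consent = { analytics: boolean; ads: boolean; decidedAt: string };

const KEY = "site-consent"; // illustrative key name

export function getConsent(): Consent | null {
  const raw = localStorage.getItem(KEY);
  return raw ? (JSON.parse(raw) as Consent) : null;
}

export function saveConsent(choice: Omit<Consent, "decidedAt">): void {
  localStorage.setItem(
    KEY,
    JSON.stringify({ ...choice, decidedAt: new Date().toISOString() })
  );
}

// Only inject an analytics script if the visitor explicitly opted in.
export function loadAnalyticsIfAllowed(src: string): void {
  if (getConsent()?.analytics) {
    const script = document.createElement("script");
    script.src = src; // ideally also served from the site's own origin
    document.head.appendChild(script);
  }
}
```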

Conclusion: CMPs Shouldn’t Be Above Criticism

CMPs are not inherently evil - but they are based on trust in a context where trust is the wrong tool. In an environment shaped by technical enforcement, not promises, trust-based mechanisms are fragile and exploitable. The current ecosystem around them reinforces this vulnerability - undermining their legitimacy as a true privacy solution. When the very tools meant to enforce user consent become yet another potential surveillance layer, we have a systemic problem.

A better future for consent on the web will not be based on banners and hidden requests to third parties. It must be built on browser-level enforcement, real origin isolation, and a technical architecture that removes the need to trust.

Until then, CMPs may be (though they claim otherwise) just the privacy theater before the main act of data collection begins.
