Ads Are Coming to ChatGPT - But What Does It Already Know About You?
In case you missed our previous articles, and almost every other piece written about ChatGPT recently, ads are coming to the platform. We now know broadly how they will appear, where they will be placed, and which users will be affected. There is still plenty of uncertainty about how ads will affect the platform in the future. But one thing is now certain. They’re coming.
My concern is not really about their arrival. It is about what ChatGPT already knows about you.
For ad monetisation, conversational LLMs are potentially the most valuable intent-capture systems ever created, but monetising that intent without breaking trust is the hard part. So let’s unpack how advertising works today, why LLMs are fundamentally different, and what the implications could be.
How Ads Work Today
Search advertising is primarily driven by the intent expressed in a query. Advertisers bid on keywords through real-time auctions on platforms like Google Ads and Microsoft Advertising. Placement is determined by a mix of bid size, predicted click-through rate, relevance to the query, and landing-page quality.
That core signal is refined using first-party context such as location, device, language, and time of day. On top of this sit retargeting layers, often via cookies or logged-in identifiers, allowing advertisers to bid more aggressively or tailor messaging for users who have previously visited their site or shown related interest. Probabilistic audience models such as in-market and lookalike segments extend reach beyond explicit queries.
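The auction logic described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model: real platforms combine far more signals, and the exact formula here (bid multiplied by a composite quality score) is illustrative only, not Google's or Microsoft's actual implementation.

```python
# Simplified, illustrative ad-ranking sketch. The key idea: placement
# depends on predicted engagement and quality, not just bid size.
from dataclasses import dataclass

@dataclass
class AdCandidate:
    advertiser: str
    bid: float              # max cost-per-click bid, in currency units
    predicted_ctr: float    # model-predicted click-through rate (0-1)
    relevance: float        # relevance of ad to the query (0-1)
    landing_quality: float  # landing-page quality score (0-1)

def ad_rank(ad: AdCandidate) -> float:
    """Combine bid with quality signals; a high bid alone is not enough."""
    quality = ad.predicted_ctr * ad.relevance * ad.landing_quality
    return ad.bid * quality

def run_auction(candidates: list[AdCandidate]) -> list[AdCandidate]:
    """Order ads by rank; top slots go to the highest-ranked candidates."""
    return sorted(candidates, key=ad_rank, reverse=True)

ads = [
    AdCandidate("BigBidCo", bid=5.00, predicted_ctr=0.02,
                relevance=0.5, landing_quality=0.6),
    AdCandidate("RelevantCo", bid=2.50, predicted_ctr=0.06,
                relevance=0.9, landing_quality=0.9),
]
winner = run_auction(ads)[0]
print(winner.advertiser)  # RelevantCo wins despite the lower bid
```

Note how the lower bidder wins: its stronger relevance and quality signals outweigh the raw bid, which is the behaviour real auctions are designed to reward.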
For example, a consumer searches for “best running shoes for flat feet” and clicks through to a few brands without buying. Over the next several days they search “are stability shoes worth it” and “Nike vs ASICS for overpronation”. Each query updates the system’s understanding of intent. Brands the user has already engaged with are prioritised, ad copy becomes more specific, and follow-on ads appear across the web as the user moves closer to a purchase decision.
This system is powerful. But it is still limited by what the user types into a search box, one query at a time.
Why LLMs Are Different
Ads are now being actively tested in ChatGPT. OpenAI has confirmed they will appear at the bottom of responses, be clearly labelled, and will not influence the model’s answers. However, the company has not ruled out richer formats over time, including contextual recommendations or sponsored placements that sit alongside the conversational output itself, rather than traditional banners.
When people pour their daily questions into conversational systems, they create a far richer stream of intent data than search ever captured, and they do so voluntarily. This matters for several reasons.
1. Query depth and honesty
People phrase questions to LLMs the way they think. They volunteer uncertainty, emotional context, constraints, and follow-ups, often with complete candour. A search query might be “best mortgage rates UK”. A chat query is “we’re thinking of buying in 18 months, one income might drop, how risky is this?”. From a targeting perspective, that depth is enormously valuable.
2. Continuous intent, not isolated signals
Search gives you snapshots. Conversational systems give you a person’s whole life story. Over days or weeks, a model can infer life stage, purchase horizon, risk tolerance, brand sensitivity, and switching intent. That longitudinal signal is something ad platforms have always wanted but never fully achieved.
3. High-confidence commercial moments
LLMs are increasingly used at decision points: choosing software, planning travel, comparing insurance, preparing for interviews, troubleshooting purchases. Those are moments advertisers pay premiums for. The difference is that the model knows why the user is deciding, not just what they typed.
Third-party cookies infer intent probabilistically. Conversational input is explicit. We, as users, volunteer our goals, fears, budgets, and constraints in rich textual form. From a data science perspective, this is extraordinarily valuable.
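The longitudinal signal described above can be sketched, very loosely, as a profile that accumulates inferred attributes across conversations. This is a hypothetical illustration, not any platform's actual method: the attribute names (`purchase_horizon_months`, `risk_tolerance`, `brand_sensitivity`) and the decay constant are invented for the example.

```python
# Hypothetical sketch only: how longitudinal conversation signals *could*
# be blended into a running intent profile. Nothing here reflects any real
# platform's implementation; signal names are invented for illustration.

def update_profile(profile: dict[str, float], signals: dict[str, float],
                   decay: float = 0.9) -> dict[str, float]:
    """Exponential moving average: older evidence decays, so the most
    recent conversations dominate the inferred profile."""
    for key, value in signals.items():
        profile[key] = decay * profile.get(key, 0.0) + (1 - decay) * value
    return profile

profile: dict[str, float] = {}
# Signals a model might (hypothetically) infer from successive chats:
daily_signals = [
    {"purchase_horizon_months": 18.0, "risk_tolerance": 0.3},
    {"purchase_horizon_months": 12.0, "risk_tolerance": 0.2,
     "brand_sensitivity": 0.7},
]
for signals in daily_signals:
    update_profile(profile, signals)
```

The point of the sketch is the shape of the data, not the arithmetic: each conversation nudges a persistent profile, which is exactly the snapshot-to-life-story shift that isolated search queries never provided.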
The Trust Constraint
That said, there are real constraints.
Privacy and trust are existential issues for conversational AI. If users feel their conversations are being directly monetised against them, usage will fall. OpenAI appears acutely aware of this risk. That is why ads in ChatGPT are unlikely to resemble traditional search ads. Dropping banners or obvious sponsored links into a chat would feel jarring and invasive.
So far the company has said ads will not influence responses and won’t be shown to those under the age of 18. Conversations will not be sold to advertisers, sensitive topics such as politics and mental health will be excluded, and users can opt out of ad personalisation.
But we can’t rule out the possibility of future formats such as sponsored recommendation sets with multiple disclosed providers, or sponsored links woven carefully into responses. Even then, the data itself remains immensely valuable. Upstream monetisation through training signals, market intelligence, and demand forecasting is hard to ignore.
What Are the Risks?
If you believe LLM platforms will avoid the excesses and predatory practices of social media advertising, history suggests otherwise.
A leaked report from a former employee described how Meta targeted weight-loss and beauty ads at adolescent girls who deleted an Instagram story shortly after posting it. The assumption was that deletion signalled insecurity. Ads were delivered at moments of heightened self-consciousness.
Now imagine advertisers applying similar logic to users who openly share their fears, doubts, financial pressures, health concerns, and emotional struggles with an AI system. Would you want retailers to know every detail of what you have asked ChatGPT?
What have you told ChatGPT?
This is an extraordinary opportunity for advertisers. Less so for everyone else.
The web has seen this cycle before. Ad-driven monetisation pushes platforms toward overreach because the incentives are overwhelming. There is simply too much money on the table to assume this time will be different.