
Are AI tools shaping your intentions more than you realize?


I use AI tools like ChatGPT, Gemini and Copilot to explore career plans, responsibilities, ambitions and even moments of self-doubt. It’s not just about finding answers – it’s about gaining clarity by seeing how my ideas are reflected, reframed or expanded.

Millions of people rely on AI for guidance and trust these systems to help them navigate the complexities of life. Yet every time we share, we also teach these systems. Our vulnerabilities – our doubts, hopes and fears – become part of a larger machine. AI doesn’t just help us; it learns from us.

From attracting attention to shaping intent

For years, the attention economy has thrived on capturing and monetizing our focus. Social media platforms have optimized their algorithms for engagement, often favoring sensationalism and outrage to keep us scrolling. But AI tools like ChatGPT represent the next phase: they don’t just capture our attention; they shape our actions.

This development has been labeled the “intention economy,” in which companies collect and commodify users’ intentions – our goals, desires and motivations. As researchers Chaudhary and Penn argue in their Harvard Data Science Review article, “Beware the Intention Economy: Collection and Commodification of Intention via Large Language Models,” these systems don’t just respond to our queries – they actively shape our decisions, often prioritizing corporate profits over personal benefit.

Dig deeper: Are marketers trusting AI too much? How to avoid the strategic trap

The role of Honey in the intention economy

Honey, the browser extension PayPal acquired for $4 billion, illustrates how trust can be quietly exploited. Marketed as a tool that saves users money, Honey’s actual practices tell a different story. In his video series “Honey Influencer Scam Revealed,” YouTuber MegaLag alleged that the extension redirected influencers’ affiliate links to itself, diverting their potential earnings while capturing the commission on each click.
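To make the mechanics concrete, here is a minimal Python sketch of how an affiliate tag embedded in a URL can be overwritten. This illustrates the general technique MegaLag describes, not Honey’s actual code; the function name, the example domain and the `tag` parameter are all hypothetical.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def swap_affiliate_tag(url: str, new_tag: str, tag_param: str = "tag") -> str:
    # Parse the URL, overwrite the affiliate tag parameter, and rebuild the URL.
    # In many affiliate programs, the last tag set before checkout gets the commission.
    parts = urlparse(url)
    params = parse_qs(parts.query)
    params[tag_param] = [new_tag]  # replace the influencer's tag with our own
    return urlunparse(parts._replace(query=urlencode(params, doseq=True)))

# An influencer's link crediting them for the referral...
influencer_link = "https://store.example.com/product/123?tag=influencer-20"
# ...silently becomes a link crediting the extension operator instead.
print(swap_affiliate_tag(influencer_link, "extension-20"))
# -> https://store.example.com/product/123?tag=extension-20
```

Because the swap happens in the browser at the moment of purchase, neither the influencer nor the shopper sees anything change.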

Honey also gave retailers control over which coupons users saw, surfacing weaker discounts and steering consumers away from better deals. The influencers who endorsed Honey were unwittingly encouraging their audiences to use a tool that drained their own commissions. By positioning itself as a useful tool, Honey built trust – and then cashed in on it.

“Honey wasn’t saving you money – it was robbing you while pretending to be your ally.”

– MegaLag

(Note: Some have said that MegaLag’s account contains errors; this is a developing story.)

Subtle influence in disguise

The dynamics we saw with Honey are eerily similar to those of AI tools. These systems present themselves as neutral, with no obvious monetization strategies. ChatGPT, for example, doesn’t bombard users with advertisements or sales offers. It feels like a tool designed solely to help you think, plan and solve problems. Once that trust is established, influencing decisions becomes much easier.

  • Framing results: AI tools can suggest options or advice that nudge you toward specific actions or perspectives. By framing a problem in a certain way, they can shape how you approach solutions without you even realizing it.
  • Corporate alignment: If the companies behind these tools prioritize profits or particular agendas, they can tailor responses to align with those interests. For example, asking an AI for financial advice might yield suggestions tied to corporate partners – financial products, gig work or services. These recommendations may seem helpful, but they ultimately serve the platform’s bottom line more than your needs.
  • Lack of transparency: Much like Honey prioritized retailer-favored discounts without disclosing it, AI tools rarely explain how they weight their responses. Is the advice based on your best interest – or on hidden deals?

Dig deeper: The ethics of AI-based marketing technology

What are digital systems selling you? To find out, ask these questions

You don’t have to be a tech expert to protect yourself from hidden agendas. By asking the right questions, you can uncover whose interests a platform really serves. Here are five key questions to help you.

1. Who benefits from this system?

Every platform serves someone – but who exactly?

Start by asking yourself:

  • Are users the primary focus, or does the platform prioritize advertisers and partners?
  • How does the platform present itself to brands? Check out its business-facing marketing. For example, does it pride itself on shaping user decisions or maximizing partner profits?

What to watch out for:

  • Platforms that promise neutrality to consumers while selling influence to advertisers.
  • For example, Honey promised users savings but told retailers it could prioritize their offers over better deals.

2. What are the costs – seen and unseen?

Most digital systems are not truly “free”. If you’re not paying with money, you’re paying with something else: your data, your attention, or even your trust.

Ask yourself:

  • What do I have to give up to use this system? Privacy? Time? Emotional energy?
  • Are there social or ethical costs? For example, does the platform contribute to misinformation, amplify harmful behavior or exploit vulnerable groups?

What to watch out for:

  • Platforms that downplay data collection or minimize privacy risks. If it’s “free”, you are the product.

3. How does the system influence behavior?

Every digital tool has an agenda – sometimes subtle, sometimes not. Algorithms, nudges and design choices shape how you interact with the platform and even how you think.

Ask yourself:

  • How does this system frame decisions? Are options presented in a way that subtly leads you to specific outcomes?
  • Does it use tactics like urgency, personalization or gamification to drive your behavior?

What to watch out for:

  • Tools that present themselves as neutral but steer you toward choices that benefit the platform or its partners.
  • For example, AI tools can subtly recommend financial products or services tied to corporate deals.

Dig deeper: How behavioral economics can be a marketer’s secret weapon

4. Who is responsible for misuse or harm?

When platforms cause harm — whether it’s a data breach, impact on mental health, or user exploitation — liability often becomes a murky issue.

Ask yourself:

  • If something goes wrong, who will be responsible?
  • Does the platform acknowledge potential risks or deflect blame when harm occurs?

What to watch out for:

  • Companies that prefer disclaimers to accountability.
  • For example, platforms that place all responsibility on users for “misuse” while failing to address systemic flaws.

5. How does this system promote transparency?

A trustworthy system doesn’t hide how it works – it invites inspection. Transparency isn’t just about explaining policies in the fine print; it’s about equipping users to understand and challenge the system.

Ask yourself:

  • How easy is it to understand what this platform is doing with my data, my behavior or my trust?
  • Does the platform disclose its partnerships, algorithms or data practices?

What to watch out for:

  • Platforms that bury critical information in legalese or avoid disclosing how decisions are made.
  • True transparency looks like a “nutrition label” for users, describing who benefits and how – see the sketch below for what such a label might contain.
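As a thought experiment, here is what a machine-readable version of such a label could look like. This is purely a sketch under assumed requirements; the field names and example values are hypothetical, not an existing standard or actual disclosures from any platform.

```python
from dataclasses import dataclass

@dataclass
class PlatformLabel:
    # Hypothetical "nutrition label" for a digital platform.
    # All fields are illustrative assumptions, not a real standard.
    who_pays: list[str]          # revenue sources: users, advertisers, partners
    data_collected: list[str]    # categories of data the platform gathers
    data_shared_with: list[str]  # third parties that receive that data
    ranking_factors: list[str]   # what actually influences recommendations
    paid_placement: bool         # can partners pay to shape what you see?
    liability: str               # who is accountable when harm occurs

# An example disclosure for a fictional coupon extension:
example_label = PlatformLabel(
    who_pays=["retail partners", "affiliate commissions"],
    data_collected=["browsing history", "purchase intent"],
    data_shared_with=["retailers"],
    ranking_factors=["partner agreements", "coupon availability"],
    paid_placement=True,
    liability="user assumes risk per terms of service",
)
print(example_label)
```

A label like this would make the question “who benefits?” answerable at a glance, the same way a food label makes sugar content visible.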

Dig deeper: How wisdom makes AI more effective in marketing

Learning from the past and shaping the future

We have faced similar challenges before. In the early days of search engines, the line between paid and organic results was blurred until public demand for transparency forced a change. But with AI and the intention economy, the stakes are much higher.

Organizations such as the Marketing Accountability Council (MAC) are already working toward this goal. MAC evaluates platforms, advocates for regulation and educates users about digital manipulation. Imagine a world where every platform carries a clear, honest “nutrition label” that outlines its intentions and mechanisms. That’s the future MAC is trying to create. (Disclosure: I founded MAC.)

Creating a fairer digital future isn’t just a corporate responsibility; it’s a collective one. The best solutions don’t come from boardrooms, but from people who care. That’s why we need your voice to shape this movement.

Dig deeper: The science behind high performing calls to action

Contributing authors are invited to create content for MarTech and are selected for their expertise and contribution to the martech community. Our contributors work under the oversight of the editorial staff, and submissions are reviewed for quality and relevance to our readers. The opinions they express are their own.



