Super Bowl Ads - The Cost of Talking Loudly and Saying Nothing
If you believe AI is the next electricity, the next internet, the next fire we stole from the Gods… why are you selling it like a diet pill?
Why pay $266,000 per second to beam a message into 130 million living rooms to say: “Relax. Don’t worry about it. We’re the good guys. Also, our competitor is shady. Also, AGI is coming. Also, look over there, celebrity!”
After the Super Bowl, much of the media conversation fixated on the rivalry between Anthropic and OpenAI.
Anthropic showed up to take a swing. OpenAI showed up to call the swing “dishonest.” Then came the tweets. The clarifications. The counter-clarifications. Because nothing says “I don’t worry about the competition” like writing a novel in response to an ad.
Meanwhile, the actual viewer is sitting there thinking:
“Wait, am I supposed to download something? Fear the garage door? Be excited for the future?”
This year, nearly one-quarter of the ads were AI-related. That alone could have been interesting. Instead, it once again showed AI companies talking loudly, expensively, and mostly to each other.
What Super Bowl viewers didn’t understand is that these ads weren’t trying to explain AI so much as ease people into it. What they were really doing was normalization. And normalization is a weird thing; that’s how you get people used to living with something they didn’t ask for.
First it’s a joke.
Then it’s a feature.
Then it’s a default setting you can’t find the toggle for.
In trying to make the technology feel inevitable, the industry ended up spending something it didn’t have much of to spare - trust.
Most people already assume they’re being marketed to by machines, and a growing number believe they can tell when something has been generated, smoothed, or scaled past the point of human involvement. Whether they’re right is almost beside the point. The perception alone is enough to change how the message is interpreted.
That’s the “AI Bowl” in a sentence: a bunch of companies spending a fortune to talk past the public.
And you could argue, fairly, that none of this really matters. That these ads work. That the companies behind them are doing just fine. That awareness is awareness, and reach is reach, and the rest is noise.
So let’s look at it.
There are quite a few ways to measure whether an ad works: brand awareness, purchase intent, search lift, website traffic, social engagement, and long-term recall. The spreadsheets are endless, and marketers love them because they let us pretend certainty exists.
To do a proper analysis, you’d need data that’s impossible to get, but even with back-of-the-envelope math, the Super Bowl still looks defensible.
We have roughly 130 million viewers. Thirty seconds runs $8–10 million in airtime alone. Call it a CPM of roughly $62–77, which works out to 6–8 cents per view - before production, before Matthew McConaughey, before the full campaign cost creeps toward $50 million.
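If you want to sanity-check that napkin math, here’s a minimal sketch using the rough figures above (130 million viewers, $8–10 million for thirty seconds); the numbers are illustrative assumptions, not audited media buys:

```python
# Back-of-the-envelope math for a single Super Bowl spot,
# using the rough figures quoted in the text (assumptions, not audited numbers).

viewers = 130_000_000                               # approximate broadcast audience
airtime_costs = (8_000_000, 10_000_000)             # 30 seconds of airtime, USD, low/high estimate

for airtime in airtime_costs:
    cpm = airtime / (viewers / 1_000)               # cost per thousand impressions
    per_view = airtime / viewers                    # cost per single view
    per_second = airtime / 30                       # cost per second of airtime
    print(f"airtime ${airtime:,}: CPM ≈ ${cpm:.0f}, "
          f"per view ≈ {per_view * 100:.0f}¢, per second ≈ ${per_second:,.0f}")

# CPM lands around $62-77, roughly 6-8 cents per view,
# and about $267k-333k per second - before production or the rest of the campaign.
```

The low end of that per-second figure is where the $266,000-per-second number at the top of this piece comes from.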
On paper, it’s not irrational... until you compare it to the alternative.
“Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”
John Wanamaker
For the price of a single Super Bowl spot, a brand could run months of highly targeted Meta, TikTok, YouTube, PickYourPoison ads, reaching people who actually resemble their users, testing messages in real time, optimizing toward something measurable beyond “people saw it while refilling nachos.”
A $50 million budget can buy far more than a cultural cameo. It can buy time, iteration, and the chance to build trust. Which makes the decision to spend it all at once, on a message that barely bothered to explain itself, feel... what?
careless?
egotistical?
strangely unconcerned with whether anyone was actually listening?
more confident in its own importance than in its audience’s understanding?
There’s a popular trend on social media of clipping AI executives saying things that feel increasingly out of touch, sometimes bordering on the diabolical.
Like in this clip below of Sam Altman saying: “I think AI will probably, like most likely, sort of lead to the end of the world, but in the meantime, uh, there will be great companies created.”
... while everyone else is posting photos of budget Super Bowl snacks on Pinterest and trading tips on how to stretch the night a little further. The mood isn’t “what’s the next thing we’re all supposed to be excited about?” so much as “how do we get through this without spending more than we have?”
Against that backdrop, watching companies spend ten million dollars to needle each other feels oddly Orwellian, but I digress...
And then there was Amazon...
Amazon introduced Alexa+ with a Super Bowl spot featuring Chris Hemsworth narrowly avoiding a fatal accident caused by his own smart home.
I felt very uncomfortable watching this ad, not because of the unnecessary gruesomeness, but because of the kind of fear the ad decided to address… and, more importantly, the fears it avoided.
If Amazon wanted to do real satire, there were plenty of targets closer to the bone.
They could’ve made fun of the engineers laid off after years of loyalty, told their work was being “accelerated.”
They could’ve joked about white-collar redundancy, about resumes rewritten by the very systems that replaced the jobs behind them.
They could’ve touched on economic precarity, deskilling, and the creeping sense of being optional.
Instead, the ad picked a safer villain: the homicidal garage door. (lol)
It’s a strawman fear. Cinematic. Absurd. Easy to dismiss. No one really thinks Alexa is plotting murder. But lots of people do think AI is coming for their work, their leverage, their sense of usefulness.
So the joke rerouted the anxiety. Away from labor, power, and displacement. Toward spectacle. Toward something you can laugh off and forget. I blame Chris Hemsworth.
Humane marketing doesn’t try to correct or mock what people are afraid of; it starts by recognizing the fear they’re already living with and addresses it head-on.
I was initially planning on ending this article by pointing out the ads that were done “right.” But then again, what counts as “right” depends on the yardstick you measure it with.
According to the Kellogg School’s annual Super Bowl Advertising Review, Google’s Gemini ad ranked as the #1 ad for “tugging at the heartstrings” while showing practical utility (redecorating a child’s room).
Budweiser won the USA TODAY Ad Meter by simply showing a horse and an eagle, leaning on the brand’s recognizable identity rather than abstract digital battles.
Data from iSpot says that Ring’s ad promoting an AI feature to find lost dogs scored in the 100th percentile for both attention and likeability. This suggests audiences respond better when AI messaging addresses specific, relatable problems rather than broad promises (go figure!).
Ultimately, winning your audience’s trust is not done by spending tens of millions of dollars on ads, but by taking responsibility for the externalities your brand creates (social, economic, environmental) and not pretending those don’t exist.
These AI companies are trying to shape expectations about work, automation, surveillance, and power. When you’re operating at that scale, how you market is just as important as what you’re marketing.
Marketing is one of the most powerful levers companies have to redirect capital, attention, and behavior, not just demand.
Which raises an obvious question:
What else could $50 million have done?
Instead of spending $50 million to “win” a cultural moment, AI companies could have:
Invested in visible, verifiable climate mitigation, especially water and energy conservation, two areas where large-scale computing already carries a reputational cost.
Funded workforce transition programs for the very roles most likely to be displaced. Not “reskilling narratives,” but actual paid training, job placement, and income bridges.
Built public-facing transparency tools that help people understand how, when, and where AI systems are used.
Redirected marketing budgets toward public-interest campaigns that explain tradeoffs honestly: what AI does well, what it doesn’t, and where humans are still essential.
Partnered with municipalities, schools, or utilities to solve specific, boring problems (water leaks, energy waste, logistics inefficiencies).
The same resources could have been used to acknowledge the fears people already have, and do something concrete about them.
The irony is that trust can’t be bought in thirty-second increments, no matter how expensive they are. But it can be built, slowly, by showing people you understand the world they’re living in, and that you’re willing to invest in making it a little more livable, not just more inevitable.