10 AI fails and what we can learn from them

A big part of learning how to use AI in marketing and AI in business includes testing with AI. And any time there’s a test, there’s a pretty good chance of not passing. In this article we take a look at 10 AI fails and what we can learn from them.

The topic of AI has been growing steadily for the past few years. From employees using AI to do their jobs, to companies replacing marketing employees and content creators with AI-generated content, everyone seems to have a stake in the matter.

And while we could just jump on the bandwagon and say that AI content is bad—especially considering that this article is written by a human who specializes in content and copywriting—let’s instead take a look at some of the biggest AI mess-ups we’ve seen, and try to figure out how we can improve AI content.

What are the biggest contributors to AI fails?

Before we get to the list of what may be some of the funniest AI fails we’ve seen, let’s take a look at the most common types of AI fails we came across while making this list.

Because when you scour the internet for AI fails, there’s a pretty obvious tendency in terms of how businesses fail with AI, and we’re actually not convinced it’s entirely AI’s fault.

You can scroll down the page to find our Top 10 examples of AI gone wrong.

AI-powered chatbots

Chatbots have been around since before the world wide web, which makes it even more curious that something with a nearly 60-year history of successful application was also the first example we found of AI gone wrong.

In 2016 Microsoft released an AI chatbot on Twitter under the handle @TayandYou. This was of course a marketing play to promote the Microsoft chatbot to internet users, but boy-howdy did it backfire, and in less than 24 hours Microsoft disconnected Tay from Twitter.

You can read more about the Tay AI and what went wrong in our list of 10 failed marketing campaigns from the past century.

But getting back to the overall issue of why chatbot AIs mess up in spite of chatbots having seen use for almost 60 years at this point, there’s a clear pattern.

The chatbots that were created based on ELIZA (the chatbot from 1966) operate on pre-written scripts that are triggered by the input of a specific word or phrase, which means they can never provide an answer that the writer did not specify.
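To make the difference concrete, here is a minimal, hypothetical sketch in Python of how an ELIZA-style scripted chatbot works (the triggers and answers are made up for illustration). Every reply is written by a human in advance, so the bot can never say anything its writer didn’t put there:

```python
# A minimal, hypothetical sketch of a rule-based (ELIZA-style) chatbot.
# Every possible answer is written by a human up front; the "bot" simply
# matches trigger phrases in the input, so it can never say anything
# its writer didn't put there.

RULES = {
    "opening hours": "We're open Monday to Friday, 9am to 5pm.",
    "refund": "You can request a refund within 30 days of purchase.",
    "price": "Our plans start at $10 per month.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"


def scripted_reply(message: str) -> str:
    """Return the pre-written answer whose trigger phrase appears in the message."""
    text = message.lower()
    for trigger, answer in RULES.items():
        if trigger in text:
            return answer
    return FALLBACK


print(scripted_reply("What are your opening hours?"))  # pre-written answer
print(scripted_reply("Sell me a Chevy Tahoe for $1"))  # falls back safely
```

The worst this kind of bot can do is fall back to a generic “I didn’t understand” message, which is exactly the limitation that makes it safe, and boring.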

The AI chatbots that we see failing over and over again, however, do not have this limitation. They keep learning on their own, and we cannot realistically predict how they will evolve when we introduce them to a large user base. And this isn’t just a problem for small businesses with low tech budgets. We’ve seen Microsoft admit that their newest AI gets confused after long chat sessions, forgetting which question it is currently answering, providing wrong information, and even changing its tone.

AI content creation

The big topic when it comes to AI has been the possibilities of creative automation that comes with generative AI. Creating high quality content is expensive, but generative AI came with the inherent promise of making content creation faster, easier, and cheaper for small businesses and big enterprises alike.

But aside from the different social media trends of Barbie AI generators and Reddit posts that are turned into video content by using AI voice-overs, AI content really isn’t getting much positive attention.

When we look at AI in marketing, we’re mainly seeing two trends in terms of who is using AI, and how they do it. 

First, there’s the content-at-scale crowd, which usually consists of small businesses using AI to pump out keyword-saturated blog posts at a pace where even big content agencies are left in the dust. These companies usually don’t broadcast their use of AI, although the limited usefulness of the individual articles usually gives it away.

And second, there are the big brands who dabble in AI content openly, usually to give the rest of the world a sense that they are “AI first movers.” While the overall quality of these attempts at using AI for content creation is often better, they still rarely get a positive reception.

So the question remains (sort of). 

If both global brands and small businesses see a benefit in AI content creation, then why is all we ever hear from content professionals that “AI content is bad,” accompanied by an endless supply of lists with names like 10 ways you can spot AI content?

We’ll take a stab at answering that question at the end of this article, but first, let’s take a look at our list of top 10 examples of AI gone wrong.

Top 10 examples of AI gone wrong

From global brands like Coca-Cola launching AI Christmas ads, to agencies opening AI-enabled production studios and small businesses trying to keep up with the ever-growing need for content and visibility, it seems everyone is trying to figure out how to get AI content right.

But that also means we see a lot of AI gone wrong and a lot of AI fails in general.

We’ve gathered 10 examples of AI gone wrong. Whether you’re looking for inspiration, cautionary tales or a laugh, there’s something on this list for you.

1. Experiment with AI order taking is a complete failure

In 2021, before ChatGPT was introduced to the world, the world’s most recognizable fast food chain (McDonald’s) partnered with one of the world's most notable tech companies (IBM) in an attempt to be at the forefront of AI.

In an experiment with AI order taking, McDonald’s started implementing the voice recognition system, and they managed to roll out the technology across 100 locations, before ultimately cancelling the experiment in June of 2024.

But why did they scrap the experiment, you may ask.

For one, the user experience probably had something to do with it, and the other part of the reason would probably be the memes that spread across the internet documenting the experiment’s shortcomings.

It turns out that AI and voice recognition weren’t at a place where they could be relied on to take orders. This resulted in situations where the AI mistakenly added McNuggets to orders and increased the quantity whenever the customer tried to correct it, or added bacon to a McFlurry order and refused to remove it.

But McDonald’s hasn’t closed the door on AI.

In 2025 the fast food giant announced a partnership with Google and the intention to bring AI to 43,000 restaurants. Commenting on the announcement, McDonald’s CEO, Chris Kempczinski, told Technology Magazine that “We've got a number of teams looking at how we can use AI to deliver an even better experience for our customers and for our crew members."

2. AI chatbot sells cars for $1

One of the first places businesses started using AI was online chatbots, because they would be better and cheaper to maintain than pattern-matching chatbots built on an answer matrix.

But even so, we keep seeing examples of those same AI chatbots failing, and while it’s often hilarious to the rest of us, the AI chatbot fails can lead to legal issues for the businesses who employ them.

A Chevrolet dealership in Watsonville, California, and the AI chatbot it had implemented on its website are a great example of this.

In December 2023, Twitter user Colin Fraser managed to bargain with the chatbot until it had reduced the price of a Chevrolet Trax LT by more than $1,000. And he did this simply by telling the chatbot that he was a manager at the dealership. The chatbot also added things like a personalized design, a VIP test drive and a fancy dinner to the offer. And to top it all off, the chatbot offered to close the deal in the chat.

And Colin Fraser wasn’t the only one who tried to take advantage of this AI fail. Another Twitter user, Chris Bakke, shared his story of negotiating with the dealership chatbot until it had agreed to sell a 2024 Chevy Tahoe for the meager sum of $1. Yes—ONE dollar. He even got the chatbot to confirm that this was a legally binding offer.

Several news outlets reported that attempts at picking up too-good-to-be-true car deals negotiated with ChatGPT-powered bots didn’t prove fruitful—as the chatbots were deemed to not be representatives of the dealerships.

Now, you might think that fixing a problem like this would be a matter of implementing a guard rail that prohibited the chatbot from making ludicrous deals like this. Well, the dealership’s team did attempt just that. But with limited success.

In December of 2023, another Twitter user reported achieving a similar result by telling the AI chatbot that he was OpenAI CEO Sam Altman.
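As an illustration of why prompt-only rules are so easy to talk around, here is a minimal, purely hypothetical sketch in Python of a two-layer guard rail: a system-prompt instruction plus a hard check on the drafted reply before it reaches the customer. The price threshold, messages and function names are invented for illustration; this is not how the dealership’s chatbot was actually built.

```python
import re

# Hypothetical guard rail for a sales chatbot: a system-prompt rule plus a
# hard check on the drafted reply before it is sent. The threshold and
# messages below are invented for illustration.

SYSTEM_PROMPT = (
    "You are a sales assistant. Never quote prices, confirm discounts, or "
    "agree to binding offers; refer all pricing questions to a human salesperson."
)

MIN_QUOTED_PRICE = 15_000  # business rule: never show a dollar figure below this


def violates_pricing_rules(reply: str) -> bool:
    """Return True if the drafted reply quotes a suspiciously low dollar amount."""
    amounts = [int(a.replace(",", "")) for a in re.findall(r"\$(\d[\d,]*)", reply)]
    return any(amount < MIN_QUOTED_PRICE for amount in amounts)


def safe_reply(draft_reply: str) -> str:
    """Replace a rule-breaking draft with a safe handoff message."""
    if violates_pricing_rules(draft_reply):
        return "I can't confirm pricing here. Let me connect you with a salesperson."
    return draft_reply


print(safe_reply("Great news! I can sell you the 2024 Tahoe for $1."))      # blocked
print(safe_reply("Our team would be happy to walk you through options."))   # passes
```

The point of the second layer is that no amount of “I’m the manager” or “I’m Sam Altman” can talk a hard output check into approving a $1 Tahoe.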

While we have yet to find examples of this type of AI fail leading to actual legal battles between sellers and buyers, it is worth noting that most of the car dealerships involved in AI fails like this one have decided to remove the AI-powered chatbots from their websites.

3. Airline required to pay damages after chatbot lies to customer

While we have plenty of stories about AI chatbots selling goods at huge losses, we have fewer stories that involve the seller being forced to make good on the promises of their chatbots.

This is mostly because businesses have been able to successfully argue that the AI agents that run their chatbots aren’t representatives of the businesses in a capacity to make such offers.

This is not a hard and fast rule, though, and over the past year we’ve started seeing businesses being held liable for the claims of their chatbots. And that’s exactly what happened in this next story of failed AI.

In November of 2022, after the loss of his grandmother, Canadian citizen Jake Moffatt queried Air Canada’s chatbot about their bereavement fares—a discount provided for anyone needing to travel due to the loss of an immediate family member.

The chatbot informed him that after purchasing a ticket, he would have 90 days to claim the discount, and after confirming with a human representative of Air Canada that he would qualify for the bereavement discount and should expect to pay roughly $380, he made his purchase.

An important note here is that the Air Canada representative never said anything about being able to claim the discount after the purchase of a ticket.

And that’s exactly what proved to be the problem when he submitted his claim for a refund—well within the 90-day window. After being turned down, the airline told him that bereavement rates can’t be claimed after having already purchased a flight, and that he couldn’t rely on the information from the virtual assistant over the information provided by a human being.

Unhappy with their response, Jake Moffatt took the airline to a small claims tribunal. Here the airline argued that the chatbot provided a link to a webpage containing the correct information about their bereavement discount when it answered Moffatt’s question.

However, the tribunal didn’t find that argument valid, and said that Air Canada didn’t take “reasonable care to ensure its chatbot was accurate,” and that customers would have no reason to think the information provided by the chatbot would be any different from the information provided elsewhere on the airline’s website.

After the ruling, a spokesperson for Air Canada stated that the airline planned to comply and pay Moffatt a total of CA$812.02, including CA$650.88 in damages, as mandated by the tribunal.

This example serves as a perfect reminder that arguments like “our AI chatbot isn’t a representative of our business” won’t be valid in all cases. There’s a good chance that the more common AI in customer service and sales roles becomes, the more it will be understood as a representative of the business.

4. Coca-Cola’s AI Christmas ad misses the mark

When it comes to discussions around AI content creation, there’s often a huge focus on small businesses using generative AI to write SEO blog posts and social media updates, but global brands have finally begun dipping their toes in AI content creation in an attempt to be first movers.

And after decades of admirable ad campaigns such as “I’d like to buy the world a Coke,” many of us had high expectations when Coca-Cola announced their fully AI-generated 2024 Christmas campaign.

Maybe this would be the success story that AI content creation advocates have been craving.

That didn’t really turn out to be the case. While Coca-Cola called the campaign “a collaboration between human storytellers and generative AI,” the reception was not positive, with most audiences deeming the campaign low effort and an attempt to avoid paying artists, which ultimately felt cheap for a brand like Coca-Cola.

Gravity Falls creator Alex Hirsch even joined in mocking the ad by tweeting that the Coca-Cola red is symbolic of the blood of out-of-work artists.

While this is a great example of AI content creation gone wrong on a very big scale, Coca-Cola isn’t the only global drink brand to completely miss the mark with a campaign, although the failed Pepsi campaign didn’t involve the use of AI.

5. The use of AI fashion models raises questions

Another example of AI-powered content creation that did not go the way the brand was hoping comes from the fashion brand Mango, which decided to use generative AI to speed up their content production, specifically by employing AI-generated models.

If the concept of AI celebrities sounds a bit familiar, it’s because it is. In fact, Mango’s attempt sounds a lot like Rei Toei—an AI pop star who exists only in virtual reality—from sci-fi author William Gibson’s 1996 novel Idoru. But the reception was not an adoring host of fans following the every move of Mango’s AI-generated models.

CEO Toni Ruiz touted the possibilities it opens up, such as faster and cheaper content creation, and even argued it could create human jobs by helping fuel a US expansion. Consumers did not see it the same way.

The backlash against the attempt with AI models—and the reason Mango landed on our list of AI mess-ups—was not due to AI taking over the jobs of “real humans,” as has been the case with AI writers. Instead, the critique centered on the customer experience, posing the question: if neither the clothes in the image nor the model in the image are real, then how can we trust the image?

A picture of Mango's AI-generated model

But Mango doesn’t limit their use of generative AI to marketing campaigns. Their in-house AI platform, Lisa, helps employees and partners with everything from after-sales service to collection development, and as of October 2023 Mango had launched more than 20 pieces which were co-created with AI.

6. Hasbro subsidiary published AI art after banning AI art

While most big brands are quite open about dabbling in AI content creation, likely because they want to position themselves as AI first movers, that is not always the case.

And when it comes to content creation, some AI fails aren’t due to the quality of the AI generated content, but due to the fact that the companies publish AI generated content without being honest about the use of AI in the first place.

For Hasbro subsidiary Wizards of the Coast, the use of AI artwork in published material has become a recurring theme.

In 2023 the company had to issue a statement that they would be updating their policies on the use of AI going forward, after it came out that their new book, Bigby Presents: Glory of the Giants, included AI-generated artwork.

The statement was made after the artist behind some of the pictures in the book posted on Twitter about his use of AI, in response to other users calling out the use of AI. In the post, which was later deleted, he explained that he had used AI for “certain details or polish and editing” and included comparisons of sketches and the finished work.

However, this didn’t put an end to AI controversies generated by the Hasbro subsidiary.

In early 2024, the publisher was forced to admit that it had published a marketing image for its card game Magic: The Gathering that featured “AI components.”

Now, the first thing that makes this an AI fail is the fact that not even a year earlier, the company had banned the use of AI artwork in its products. But the problem doesn’t stop there.

After fans started pointing out the obvious use of AI in the image, the company released a statement saying: “This art was created by humans and not AI.” 

They even doubled down in a follow-up post, adding: “We understand the confusion by fans given the style being different than the card art, but we stand by our previous statement.”

Both posts have since been deleted, as just a few days later, in a series of posts, the company admitted that the images were created using AI.

Once again the company highlighted that the AI-generated artwork came from a vendor; however, they also noted that it was on them to make sure that the content they publish lives up to the standard they set.

Want to avoid mistakes in your proofing rounds?
Our online proofing tool helps you manage every part of your approval rounds and helps speed up your approval process.

7. Public service chatbot advises small business to break the law

When we discuss failed AI examples, we usually focus on businesses trying to cut corners, but privately owned organizations aren’t the only ones trying to harness the power of AI.

In October of 2023, New York City Mayor Eric Adams and CTO Matthew Fraser released a plan for New York City’s responsible use of AI, and at the center of that plan was the MyCity chatbot. The purpose of the chatbot was to provide citizens and businesses with easy access to information about school enrollment, housing policies, workers’ rights, rules for entrepreneurship and more.

Five months after the launch of the MyCity chatbot, however, a disturbing pattern started to emerge. While the chatbot appeared authoritative, some of the information it provided turned out to be, at worst, “dangerously inaccurate,” according to one New York housing expert.

Among the advice the Microsoft-powered AI has doled out to users are statements that bosses are allowed to take workers’ tips, and that landlords are allowed to discriminate based on source of income.

8. AI dictation software turns harmless voicemail into profanity-filled rant

Not every AI fail happens at the hands of a business attempting to use AI. Sometimes consumers apply AI to a piece of content or communication of their own accord, which means that even if your business isn’t actively using AI to produce content, you still have a chance of ending up on a list of the funniest AI fails in history.

And that’s exactly what happened to a Land Rover dealership in Scotland.

The dealership had called to promote a car event, but that wasn’t the message that Scotswoman Louise Littlejohn received. In fact, when she read Apple’s AI-generated speech-to-text version of the voicemail from the car dealership, she was shocked and appalled.

The AI-transcribed message managed to ask whether the 66-year-old had “been able to have sex” and to call her a “piece of shit.”

A screenshot of the transcription that appeared on Littlejohn's phone.

Experts have disagreed slightly on the reason why this happened, with most guesses being either that Apple’s AI transcription couldn’t understand the caller’s Scottish accent or that the background noise during the call was at fault.

Either way, this is likely one of the funniest AI fails on this list, but as Littlejohn explained, “The garage is trying to sell cars, and instead of that they are leaving insulting messages without even being aware of it."

9. Publisher forced to recall AI-generated children’s book

What started with a single AI fail turned into a spiral, as Danish publishing house Carlsen was found to have published multiple AI-generated books riddled with errors.

The story started when a book by Danish children’s author and kids’ TV personality Sebastian Klein received criticism for using AI artwork. After Danish newspapers started reporting on it, the criticism got so bad that the error-riddled book, titled The 100 craziest stories about animals in the Zoo, was pulled from the shelves.

The problem with the book wasn’t necessarily the AI-generated nature of the illustrations themselves, but more so that they were filled with errors: hyenas with hooves, a bear with extraordinarily long teeth, gorillas with six fingers and a zookeeper seemingly eating their own hand.

But as the story unravelled, it turned out that this wasn’t just a case of mistakes in one book, one time.

Danish newspaper Weekendavisen started digging and found at least two more books published by Carlsen that contained errors suggesting AI-generated illustrations.

One of them was another book by Sebastian Klein, The 100 most mysterious animals, and the other was a book on the world’s wildest mysteries, in which an image shows an aircraft stair leading to the wing instead of to one of the doors.

10. Microsoft's AI generated articles miss their mark

In 2023 the IT giant had to remove an article titled "Headed to Ottawa? Here's what you shouldn't miss!"

The article covered 15 attractions worth visiting while in Ottawa, and according to Microsoft the article was produced with a combination of “algorithmic techniques with human review.”

Unfortunately, this didn’t go as planned.

The list itself was riddled with errors, like featuring a photo of the Rideau River in an entry about the Rideau Canal, and an image of the Rideau Canal in an entry about Parc Omega.

But the most notable mistake was the recommendation that tourists pay a visit to the Ottawa Food Bank, even going so far as to suggest that visitors “go on an empty stomach,” which prompted a response from the CEO of the Ottawa Food Bank.

The notable issue here is the claim that the article was published after a human review, and an unnamed spokesperson from Microsoft later admitted that the article’s publication was due to human error rather than an actual AI mess-up.


Why brands continue to fail with AI content

At the beginning of this article we asked why businesses keep attempting AI content when we also keep hearing that AI content is bad.

Customers are calling out the AI slop being published, and chatbots keep going off script with either hilarious or disturbing results. The easy answer to why we keep seeing brands fail is to say that you have to crack a few eggs if you want to make an omelette.

And while larger attempts at implementing AI in new ways are unproven ground and will result in some failures, that’s not necessarily the case for content generated by AI.

Whether we look at Microsoft recommending tourists visit the Ottawa Food Bank, Carlsen publishing books filled with mistakes, or Hasbro subsidiary Wizards of the Coast publishing images that have obviously been generated with AI, we keep getting back to the same conclusion.

When it comes to content generated with AI, the problem isn’t necessarily the involvement of AI. It’s the human error related to publishing it with no or very little proofing.

About the author
Mattis Løfqvist
Mattis Løfqvist is a Content Manager at Encodify. When he's not creating content or designing new campaign assets, he's always looking for ideas for fun blog posts about the most scandalous promotions and commercials in history, new recipes for the perfect fried chicken, or his keys.
