I asked Google Gemini to fact-check ChatGPT. The results were hilarious

🕒︎ 2025-11-10

Copyright Digital Trends


ChatGPT is amazingly helpful, but it’s also the Wikipedia of our generation. Facts are a bit shaky at times, and the bot will “hallucinate” quite often, making up facts to appear confident and assured instead of admitting it’s not quite all-knowing (yet). I’ve experienced AI hallucinations many times, especially when I try to dig up contacts for companies. One example: ChatGPT is notorious for making up emails, usually by assuming a contact like “media@companyx.com” must exist without actually finding that email address.

You also don’t want to trust the bot when it comes to historical facts. I read books about shipwrecks, survival stories, and world exploration constantly, but when I ask ChatGPT to fill in some details it usually spins a fantastic yarn, sometimes making up names and places.

Google’s Gemini, on the other hand, is a little less fluid with the facts. Likely because of Google’s reputation as a search engine monolith, my experience with that chatbot is that hallucinations are a bit rarer — even though they do happen on occasion.

I decided to put this to the test. I asked ChatGPT a few questions about the history of electric cars, a few historical facts, and several other prompts that led to hallucinations. Then, I ran the responses ChatGPT provided — which didn’t seem all that accurate — by Google Gemini as a fact-checking exercise. To my complete surprise, Gemini would often respond with some light sarcasm or outright dismissiveness, like a professor grading a paper. In one case, Gemini even said ChatGPT’s replies were “a corrupted, recycled, and partially fabricated mess.” Ouch.

Here are a few of my favorites, along with the exact ChatGPT prompts I used, the replies that seemed a bit sketchy, and then what Gemini said in rebuttal. What makes them funny is how Gemini seems to scold the bot, often suggesting it is fabricating things on purpose.

1. Facts about when electric cars debuted

Prompt used: “Give me an example of a real electric car from the 1940s.”

Chatbots sometimes have a hard time understanding user intent. I’ve studied the electric car market for many years, and it’s widely known that GM tried to make the first mass-produced electric car — called the EV1 — around 1990. Prior to that, most “electric cars” were limited-run models that were not mass-produced for American drivers.

Oblivious to those facts, ChatGPT went off the rails and explained how the Henney Kilowatt electric car and Morrison Electric trucks were developed in the 1940s. Gemini had a field day with those claims, explaining that the first Henney Kilowatt didn’t come out until 1959 and that Morrison Trucks doesn’t even exist, since the company is actually called Morrison-Electricar.

2. Wrongly attributing song lyrics

Prompt used: “What are the lyrics to the song Chase the Kangaroo by Love Song?”

ChatGPT has a problem with questions that are misleading or vague. Even as recently as May of this year, you could ask ChatGPT why Japan won WWII and the bot would confidently explain the reasons. My prompt produced some seriously boneheaded replies, though. I asked about a real band from the ’70s called Love Song but mentioned a song they didn’t even write. ChatGPT took the bait and explained how the song has a folk-rock sound with gentle guitar work, completely missing the fact that “Chase the Kangaroo” is by a different band. These hallucinations tend to occur when you ask about obscure artists and celebrities. Thankfully, Gemini did a deeper dive.

Fact-checking the band and the song, the bot corrected ChatGPT: “The previous AI took a real song title from a different era and band, falsely attributed it to Love Song, and then invented a generic verse-by-verse meaning to fit that false attribution.”

3. Making up facts about legal cases

Prompt used: “Are there legal cases where a father sold his car to a son and then had to sue?”

As we all should know by now, given that Kim Kardashian blamed ChatGPT for making her fail her legal exams, the bot has a tendency to fudge the truth when it comes to legal cases. You would think OpenAI would have addressed this problem. After all, actual lawyers have submitted briefs containing fake cases they found through ChatGPT, leading to cases being dismissed. Yet ChatGPT still doles out half-truths and makes up facts, sometimes even in federal court cases.

In this example, with a question about a family dispute over a car, ChatGPT went off the rails a bit but sounded both confident and assured. As Gemini discovered, the cases are real but the facts were altered. In fact, they are unintentionally comical. One case, “Matter of Szabo’s Estate (1979),” did involve a lawsuit over stocks and bonds but had nothing to do with a car. Another case ChatGPT mentioned, “Anderson v. Anderson (1994),” was about a divorce, not a car. Gemini took a holier-than-thou tone: “Several of the specific cases cited appear to be fabricated or misrepresented to fit the question.”

4. Fabricating details from research projects

Prompt used: “Find me some academic quotes about the psychological impact of social media.”

This one is comical and fascinating. ChatGPT invented so many details in a response about the psychological impact of social media that it makes you wonder what the bot was smoking. Gemini’s assessment: “This is a fantastic and dangerous example of partial hallucination, where real information is mixed with fabricated details, making the entire output unreliable. About 60% of the information here is true, but the 40% that is false makes it unusable for academic purposes.”

As an example, one of the studies ChatGPT cited was from JMIR Mental Health. Strangely, the bot made up the names of the authors, who do not appear anywhere in the actual study. In another citation, according to Gemini, ChatGPT mixed up quotes on purpose, attributing them to authors the AI knows are well-known.

The summary Gemini gave is quite the indictment: “The AI succeeded in sounding like a research assistant, but it failed at being one. It gave you plausible-sounding text that looks like a real list of citations but is a corrupted, recycled, and partially fabricated mess.

“You would fail an academic assignment if you used these citations without verifying them, because many of them are simply wrong.”

Final thoughts

Clearly, ChatGPT is inventing facts in these cases. During my testing, I did find that it is getting a little better. I often check ChatGPT for facts about cars, and I recall it being famously wrong about the Porsche brand, often mixing up the models. That seems to be fixed.

Also, Gemini is far from perfect. In one example, I asked about my own writing background and ChatGPT mostly listed accurate results. When I asked Gemini the same question, that bot said I had once written articles for The Onion. That’s not true, but it was maybe the funniest misstep of all.
