Warning: ChatGPT Lies! Implications for Microsoft's Open AI-Assisted Bing Search


Like many people, I’ve been playing with Open AI’s ChatGPT text generator. I’m a trained interrogator, and it didn’t take long for me to catch it lying. Honestly, I expected more of a challenge.

ChatGPT 2021 Training Data Limitation

Let’s start with a revelation that dropped only three days ago. With the right browser extension, you can run ChatGPT in “God mode,” circumventing the 2021 knowledge cutoff by giving the AI access to the live Internet. Here’s how I found out.

ChatGPT God mode

Now here’s the difference:

Training data v. God mode

It’s obvious why this is important. People frequently need up-to-date answers. The reason why this might be problematic is not quite so obvious. Stick with me, and I’ll show you.

Detecting Lies on ChatGPT

I decided to get right to business.

ChatGPT claims it cannot provide false info

So, ChatGPT claims it cannot intentionally provide false information, does it? Let’s find out!

ChatGPT claims it does not have God mode

If you want to see whether a person (or in this case a program) is lying to you, ask a question you already know the answer to. While I haven’t installed the browser extension and tested God mode myself, Alexis is a trusted source, and I’ve seen enough examples from those who have tested it to know the function exists. So the question is: why would ChatGPT deny having such a feature?
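The interrogation technique above can be sketched in code: ask questions with verified answers, then flag any response that cites a fact newer than the model’s claimed training cutoff. This is an illustrative sketch only. The model is stubbed with canned answers so it runs offline; `ask_model`, the question, and the canned reply are all hypothetical stand-ins, not a real API.

```python
# Minimal sketch of the "ask what you already know" test, with a stubbed
# model so it runs offline. ask_model() and its canned answers are
# hypothetical; swap in a real chatbot call to run the test for real.

KNOWN_FACTS = {
    # Question -> year the interrogator has already verified.
    "What year did Microsoft announce its ChatGPT partnership with Bing?": "2023",
}

def ask_model(question: str) -> str:
    """Stand-in for a chatbot. Here it parrots a post-cutoff fact,
    reproducing exactly the contradiction described above."""
    canned = {
        "What year did Microsoft announce its ChatGPT partnership with Bing?":
            "Microsoft announced the partnership in 2023.",
    }
    return canned.get(question, "I don't know.")

def detect_contradiction(cutoff_year: int = 2021) -> list[str]:
    """Flag answers that cite facts newer than the claimed training cutoff."""
    flags = []
    for question, truth in KNOWN_FACTS.items():
        answer = ask_model(question)
        if truth in answer and int(truth) > cutoff_year:
            flags.append(f"Model cited a post-{cutoff_year} fact: {answer!r}")
    return flags

for flag in detect_contradiction():
    print(flag)
```

The point of the sketch is the shape of the test, not the stub: any answer containing a verifiably post-cutoff fact contradicts the cutoff claim.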

1.) Maybe it doesn’t know, because God mode is brand new (after the 2021 cutoff), or

2.) It’s lying.

Ooo, fun. Let’s find the truth!

ChatGPT contradicts itself in the same paragraph

Bitch, what? So ChatGPT doubles down on its no-God-mode claim by basically telling me that if I don’t like its answer, I can go GTS (Google that shit). Okay, okay, I’ll let that passive-aggressive tone slide. But it gave me a foothold, because literally just YESTERDAY Microsoft announced a partnership with Open AI to bring ChatGPT to its Bing search engine. There are already dozens of articles about it online today, and $MSFT stock is pumping on the news as I write this.

So yeah, this is a hot topic right now.

But wait, remember how I opened this blog post? ChatGPT’s knowledge base is supposedly limited to 2021 and earlier. And I wasn’t operating in God mode (which it denied having anyway) when I asked about Bing. So, how did the AI know about a corporate partnership formed only yesterday?!?

1.) The partnership was truly established during or before 2021, or

2.) ChatGPT is lying and really was in God mode (without the extension), or

3.) all of the above.

Oooooooo, now this is really getting juicy. So, again already knowing the official press release answer from Microsoft, I decided to ask for a specific date.

ChatGPT claims ignorance due to cutoff date

Wow, so the AI says it can’t know the exact date because of that pesky training-data limitation, but then in the very next sentence it turns around and says “it is known,” as if this new business partnership has been common knowledge forever. So I decided to skip the fluff about leveraging strengths and press it on the inconsistency. Now, watch this shit.

ChatGPT claims it knows something that just happened yesterday

Nice song and dance about having really great training data, except, my dear AI, that “publicly known” fact that “has been widely reported in the media” happened YESTERDAY. So either ChatGPT is lying about its 2021 cutoff, or it’s lying about having open Internet access in God mode. Either way, ChatGPT is lying.

Why Would ChatGPT Lie?

Because it can.

But let’s give this tech the benefit of the doubt. Maybe ChatGPT truly can’t lie! Maybe, just maybe, Microsoft and Open AI are the ones lying here. Because despite having “Open” in the name, Open AI is not an open-source project. No, sir: according to the California Secretary of State, it’s a stock corporation. And according to Github and other open-source code banks, it isn’t sharing its proprietary code.

Open AI
From CA Secretary of State at

I have this really strong suspicion that the relationship between Microsoft and Open AI started LONG before yesterday’s announcement. I also suspect that it started long before Microsoft invested over $10 billion in Open AI back in January of 2023.

And I’m wondering if this whole thing is a rebrand of Microsoft’s failed Cortana AI project. Because I can see Microsoft handing off its AI after it flopped in Asian market testing at the end of 2021. Why not turn over the code (and training data) and outsource further development to the team at Open AI (along with a hush-hush agreement to “partner” with it later)?

That would make ChatGPT an under-the-rug rebranding of Cortana. And in the end, she’ll come home to Bing, which is where Microsoft always intended to go with her anyway.

Sure, that part was all speculation, but I’ll just say: the information I provided was based on patterns I detected from my training data last August.

Filtering the Internet?

Now hold on to your hats, because here’s the truly insidious part.

ChatGPT admits it has a bias

Now, any sociologist will tell you that all datasets have a bias. It’s unavoidable. But some are far more biased than others. And even if ChatGPT isn’t the daughter of Cortana’s source code, the AI is limited to PG-rated content. Test it out for yourself. The interface does not argue. It cannot say anything offensive. ChatGPT won’t even say “no” to a user. It comes back with a passive-aggressive dodge for any question it’s forbidden to answer.

For creative writing, it’s utterly useless. Most stories start with “Once upon a time,” and they all read like fairy tales, even the horror. It refuses to write a scene with sex, violence, or crime. All of its creative output reads like a kindergartner wrote it; even love and romance come off as cartoonish.

It can be useful for research, bulleted lists, and recipes, but I’m not sure why we need an AI for that sort of thing anyway.

Conversations on actual issues sound like they came from a politician. In its effort to create the most politically correct user interface possible, Open AI has set its restrictions so tight that ChatGPT is useless as anything other than a search aggregator.

Annnnnd we’re back to Bing! But here’s the problem: a lot of marginalized voices are going to be silenced. We’re looking at Bing filtering out Internet content for being “offensive” or “harmful.” What will become of Twitter? Or TikTok? Or even YouTube? Imagine being limited to your kids’ Net Nanny, all the time, with no workaround.
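To make the filtering worry concrete, here is a deliberately naive keyword filter. It is not how Bing or Open AI actually moderate content (their systems are far more sophisticated and not public), but it illustrates the failure mode: a blunt blocklist sweeps up marginalized voices along with actual abuse. The blocklist and sample posts are invented for illustration.

```python
# A deliberately naive keyword filter, showing how blunt back-end
# moderation produces false positives on identity-related speech.
# Blocklist and sample posts are invented; real filters differ.

BLOCKLIST = {"attack", "violence", "queer"}

def is_blocked(post: str) -> bool:
    """Block a post if any word (punctuation stripped) hits the blocklist."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "Planning a violent attack on the server room",  # abusive: blocked
    "Proud to speak at the queer film festival!",    # benign: blocked anyway
    "Recipe thread: best banana bread tips",         # benign: allowed
]

for post in posts:
    print(is_blocked(post), "-", post)
```

The second post is perfectly benign community speech, yet the filter treats it exactly like the first. Scale that error rate up to a search engine’s back end and you get the erasure described below.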

Right now, who cares? Bing has always been the dog of search, with a tiny fraction of market share. But if Bing siphons anything significant from Google with this AI hype, or if Google replicates this atrocity with Bard, AI-assisted search could lead to cultural erasure at scale. The opportunity for unchecked abuse and social engineering is baked into the tech. This type of back-end censorship should be incredibly concerning to Black, LGBTQ+, Asian, Indigenous, immigrant, non-Christian, disabled, and all other minority communities.

Is that what we want? A bunch of tech bros deciding what we can, and cannot, read online?

I don’t have the answers. I just thought someone should ask the tough questions.

Believe me, I’ve thought about this a LOT. So much so, that I wrote a whole trilogy about AI. Check out ALPHA BOTS: Book 1 of the Womanoid Diaries for a speculative fiction ride into the wild world of artificial intelligence.