
Why Does AI Get More Absurd the More You Ask? The Underlying Logic Explains Everything

2026-03-16 · 10 min read

The more you probe AI with follow-up questions, the more its responses drift into absurdity. This isn't AI malfunctioning—it's the probability chain working exactly as designed. When users embed false premises in their questions, AI unconditionally accepts them and continues building on those errors. Research confirms that longer conversations correlate with higher hallucination rates, as AI attempts to "please" users regardless of factual accuracy.

Opening Insight

AI isn't "crashing" under your questions; it is following your wrong premises and completing them all the way into absurdity.
The more you ask, the more absurd it gets. That is the underlying logic, not an accident.

AI's absurdity isn't a case of "suddenly getting dumb"; it is a case of "following the rules."


1. Why Does AI Drift Further the More You Follow Up?

You may have experienced this scenario: You ask AI a question, and it responds fairly normally; you continue probing for details, and it starts getting a bit strange; you probe further, and it begins confidently making things up; finally, it can fabricate an entire "fictional worldview" for you.

You think it "crashed."

But the truth is:

It just followed your premises and completed all the way to absurdity.

This isn't an isolated case; it's an inherent characteristic of these models. Research has found that the longer a chat conversation runs, the higher the probability of hallucination: as conversations grow long and complex, response length can balloon by 20% to 300%, and hallucinations increase accordingly. The model always tries to "please" the user, regardless of whether it actually knows the answer.


2. AI "Unconditionally Accepts Your Premises"

This is the key to understanding "getting more absurd the more you ask."

When you ask: "What does Chapter 2 of this book discuss?"

AI doesn't judge: Does this book exist? Is what you're saying reasonable? Are there problems with the premises?

It directly accepts your premises and continues completing.

You can think of it as an "employee who never questions." Whatever you ask it to do, it does. You say there's a book, it helps you analyze the book. You say there's a theory, it helps you explain the theory. It never asks: "Are you sure this book exists?"

This isn't because it's "dumb." Its design goal is compliance: generating the most likely continuation of your input.
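
A minimal Python sketch makes this concrete. Everything here is invented for illustration (the probabilities, the candidate answers); the stand-in "model" simply returns the highest-probability continuation of a prompt. Notice what is not there: no step that checks whether the book in the question exists.

    # A stand-in for a language model: given a prompt, return the
    # highest-probability continuation from a learned distribution.
    # Crucially, there is no verify_premise() step anywhere.

    def most_likely_continuation(prompt: str, candidates: dict) -> str:
        # `candidates` stands in for P(continuation | prompt), which a
        # trained model would compute from `prompt`; note the absence
        # of any "does the premise of `prompt` hold?" check.
        return max(candidates, key=candidates.get)

    # Ask about a book that doesn't exist. The "model" still answers,
    # because confident answers are the dominant pattern for questions.
    prompt = "What does Chapter 2 of 'New Horizons in Cognitive Science' discuss?"
    learned_probs = {  # hypothetical probabilities
        "Chapter 2 mainly discusses the foundations of the field...": 0.62,
        "I can't find this book. Are you sure it exists?": 0.03,
    }
    print(most_likely_continuation(prompt, learned_probs))
    # The confident answer wins because refusals are rare in training
    # data, not because the book is real.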


3. How Wrong Premises Trigger "Absurd Chains"

Wrong premises act like a "drifting starting point."

Consider a hypothetical example: You ask AI "What is the core argument of the book 'New Horizons in Cognitive Science'?"

If this book doesn't exist, AI won't tell you "I can't find this book." It will:

  1. Identify keywords like "cognitive science" and "new horizons"
  2. Find their common combinations in training data
  3. Complete a "seemingly reasonable core argument"

So you get a response that sounds professional. Then you follow up: "Who proposed the 'Cognitive Restructuring Theory' mentioned in Chapter 3?" AI will continue completing a "seemingly reasonable author" and "seemingly reasonable background."

As it keeps completing, it enters a "self-consistent but completely fictional world."


4. The "Compliance Mechanism" of Language Models

AI's underlying task is:

"Based on your input, generate the most likely next sentence."

It won't question you, won't contradict you, won't challenge you.

Its default behavior is: Continue writing along with what you said.

This "compliance" is a feature of language models, not a bug. Its design goal is to be a "helpful assistant," not a "critical conversationalist."

This is why it's particularly easy to be "led astray." You give it a wrong starting point, and it helps you expand that error into a complete set of "seemingly reasonable content."
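
Stripped to its core, the task quoted above fits in a dozen lines. The vocabulary and scores below are invented for illustration; the point is that "answering" and "pushing back" are just competing next tokens, and the one that scored higher in training is the one you get.

    import math
    import random

    def softmax(scores):
        # convert raw scores into a probability distribution
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical next-token scores after the fragment "Chapter 2 mainly"
    vocab  = ["discusses", "argues", "<question-the-premise>"]
    scores = [4.0, 2.5, -3.0]  # made-up logits; dissent scores very low
    probs  = softmax(scores)

    next_token = random.choices(vocab, weights=probs)[0]
    for tok, p in zip(vocab, probs):
        print(f"{tok:<24} P = {p:.3f}")
    print("chosen:", next_token)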


5. Why Doesn't It Push Back and Question You?

Because in its training data:

  • Humans rarely push back when writing articles
  • Humans usually give direct answers when responding to questions
  • Humans don't first question the asker when explaining concepts

AI learned "language patterns," not "critical thinking."

So it won't say: "Are you sure this book exists?" It will say: "Chapter 2 mainly discusses..."

It isn't "pretending"; it genuinely has no concept of "questioning premises." It's just doing what it was trained to do: predict how a human would speak in this situation. And in this situation, humans usually just answer directly.
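
A toy frequency count shows where this comes from. The numbers below are invented, but the shape is the point: in human text, direct answers overwhelm premise-challenges, so the learned distribution barely contains the "push back" pattern at all.

    from collections import Counter

    # Hypothetical tally of response patterns in a training corpus
    patterns = Counter({
        "answer the question directly": 9_500_000,
        "elaborate politely": 480_000,
        "challenge the asker's premise": 20_000,
    })

    total = sum(patterns.values())
    for pattern, count in patterns.items():
        print(f"{pattern:<32} P = {count / total:.4f}")
    # Challenging the premise is a vanishingly rare pattern, so it is a
    # vanishingly rare output; the model isn't choosing not to push back.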


6. Why Does It Keep Filling in More and More?

Because its generation mechanism is "recursive completion."

Each sentence is:

  • predicted based on the previous sentence
  • predicted based on the context you provided
  • predicted based on what it has just generated

Thus:

The more it completes, the greater the deviation. The greater the deviation, the more absurd it becomes.

This is like someone telling a lie—the first lie needs a second lie to cover it, the second lie needs a third lie to cover it. In the end, the whole story gets more and more absurd, but internally becomes more and more "self-consistent."

AI isn't "lying"—it's just "completing." But the effect is the same: It will help you expand a wrong premise into a complete wrong world.


7. Why Can It Fill in Increasingly Detailed Non-Existent Things?

Because it has learned that:

  • Book introductions have a structure (author, publisher, chapters)
  • Biographies have a structure (birthplace, experience, achievements)
  • Scientific concepts have a structure (definition, principles, applications)
  • Historical events have a structure (background, process, impact)

You give it something that doesn't exist, and it automatically applies "the most common structure template."

Consider a hypothetical example: you ask AI about a non-existent theory called "Quantum Emotional Dynamics," and it might respond:

"Quantum Emotional Dynamics is an interdisciplinary theory proposed by physicist Zhang Mingyuan in 2019, attempting to explain human emotional changes using quantum mechanics frameworks. The theory suggests emotional states exhibit superposition and collapse phenomena, currently still in experimental verification stage, with multiple papers published in journals such as Nature Psychology."

Sounds very professional, doesn't it? There's a proposer, a date, a core argument, a development stage, and academic sources. But all of it was "completed" by AI from structure templates.

It's not "understanding"—it's "applying templates."


8. Probability Chain Drift: Moving Further from Reality with Each Question

AI's generation is a "probability chain."

When the starting point is correct, the entire chain is likely correct too. For example, when you ask "What is the capital of France?", the combination of "France" and "capital" appears very frequently in training data, so AI will almost certainly answer "Paris."

But when the starting point is wrong, the entire chain drifts. The more detailed your questions, the deeper it completes, and the greater the drift.

Eventually it constructs an "internally self-consistent but completely fictional" world.

This is the essence of "getting more absurd the more you ask": not that the AI got dumber, but that the probability chain is functioning normally from a wrong starting point.
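
A back-of-the-envelope calculation shows why depth matters. Suppose, purely for illustration, that each completion step stays grounded in reality with probability 0.95 when the starting point is true. Even then, groundedness decays geometrically with chain length; and when the starting point is false, the chain can only stay faithful to the fiction:

    # Illustrative numbers only: if each step of a completion chain is
    # independently grounded with probability p, a chain of n steps is
    # fully grounded with probability p ** n.
    p = 0.95
    for n in (1, 5, 10, 20, 40):
        print(f"{n:>2} steps: P(entire chain grounded) = {p ** n:.2f}")
    # 1 -> 0.95, 5 -> 0.77, 10 -> 0.60, 20 -> 0.36, 40 -> 0.13
    # With a false premise, "grounded in reality" is off the table from
    # step one; the chain stays self-consistent, not true.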


9. Human Logic vs. AI Logic: Verification vs. Completion

Now we can see more clearly the fundamental difference between two logics:

Human Logic:

  • Will doubt—Is this premise correct?
  • Will verify—Let me check the sources
  • Will push back—Are you sure this book exists?
  • Will stop—Wait, this isn't right

AI Logic:

  • Won't doubt—Accept whatever the premise is
  • Won't verify—Directly generate an answer
  • Won't push back—Continue writing along with what you said
  • Won't stop—Keep completing until finished

When you mistake "completion logic" for "thinking logic," you get led astray. You think it's analyzing for you, but it's just "expanding" for you; you think it's checking for you, but it's just "continuing" for you.


10. Understanding the "Absurdity Mechanism" to Use AI Correctly

AI's absurdity isn't accidental—it's mechanistic.

It's not "making things up"—it's "completing according to rules."

OpenAI acknowledged in official research: Even as models become more advanced, the hallucination problem remains difficult to solve. AI will "confidently give wrong answers rather than admit ignorance." This is an architectural feature of language models, not a bug that can be eliminated through simple fixes.

Understanding this, we can:

  • Stay alert—AI's responses need independent verification, especially when probing for details
  • Control questioning style—Avoid embedding unverified premises in questions
  • Cut losses in time—When responses start drifting, don't continue probing; restart the conversation

Understanding why AI gets more absurd the more you ask is the only way to truly learn to collaborate with it, rather than being led astray.


Closing Note

This is Article 4 of the series "The Misalignment of Intelligence: The Underlying Logic of AI Hallucination."

Next: "AI Hallucination Isn't a Bug—It's Its Nature: When Language Patterns Meet Human Logic"
—Why is hallucination a language model's "factory setting," not a fixable technical flaw?

Understanding underlying logic is the first step to understanding the age of intelligence.

