
You Ask AI About a Non-Existent Person, It Can Fabricate an Entire Life for Them

2026-03-16 9 mins read

Give AI any name, and it will generate a complete biography—birth year, education, career, achievements, and even personality traits. This isn't magic; it's structure completion. Real cases illustrate the danger: a Norwegian man searched his own name and discovered AI had fabricated a story that he murdered his two children. AI doesn't verify whether people exist—it applies learned patterns of "what a person's life should look like."

Opening Insight

AI doesn't know whether this person exists, but it knows "what a person should look like."
So you give it a name, and it can fabricate an entire life for you.

AI's "storytelling ability" isn't coincidence—it's an underlying mechanism. It's just applying templates.


1. Why Can AI Write Complete Life Stories for Non-Existent People?

You may have tried this: ask AI to introduce a name you made up on the spot. It will immediately give you a birth year, family background, education, career path, life turning points, even "representative works."

You'll be shocked: "I just made up this name—how can it write something so real-looking?"

The truth is:

AI doesn't know whether this person exists, but it knows "what a person should look like."

This isn't an isolated case. In 2025, a Norwegian man named Arve Hjalmar Holmen asked ChatGPT about his own name, and the AI told him he was a murderer who had killed his two children. The story came with a time, a place, and a motive—and it was completely fabricated. Holmen had never killed anyone; he was simply another victim of AI hallucination.

This case was covered by BBC and other media outlets, becoming a typical example of AI "fabricating false lives out of thin air."


2. AI Doesn't Judge Existence—It Only Completes "Person Templates"

Language models won't ask: Is this person real? Does this name appear in any database? Does this person have factual basis?

It only asks:

"How do humans typically write when introducing a person?"

So it automatically applies the "person template."

European privacy organization NOYB found in a 2024 investigation that ChatGPT not only fabricated Holmen's "murder story," but also created false information such as sexual harassment scandals and bribery accusations for other real people. AI isn't "spreading rumors"—it's just doing what it was designed to do: completing person templates.


3. The Underlying Mechanism of Person Generation: Structure Completion

During training, AI read countless person introductions: celebrity biographies, Wikipedia entries, news profiles, interview drafts, paper author bios.

It learned:

Person introduction = Background + Experience + Achievements + Influence

So when you give it a name, it will automatically complete: birthplace, family background, education path, career development, key events, representative works, influence.

This isn't "understanding"—it's "structure completion."

Like a fill-in-the-blank question: AI sees "Name" on the left, and automatically fills in all corresponding fields on the right. Whether this person exists or not, it doesn't care at all.
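The fill-in-the-blank behavior can be sketched as a toy program. This is purely an illustration of "structure completion," not how a real language model works; every field and filler value below is invented for the example.

```python
# Toy illustration of "structure completion": given only a name, every
# template slot is filled with a stock "most likely" value. Whether the
# person exists is never checked anywhere in this code.

PERSON_TEMPLATE = ["birth_year", "education", "career", "achievement"]

# Hypothetical fillers a model might have absorbed from countless
# biographies; the values here are invented for illustration.
MOST_LIKELY = {
    "birth_year": "born in 1975",
    "education": "studied at a well-known university",
    "career": "built a successful career in the field",
    "achievement": "received several industry awards",
}

def complete_person(name):
    """Fill every template slot for any name, real or made up."""
    bio = {field: MOST_LIKELY[field] for field in PERSON_TEMPLATE}
    bio["name"] = name
    return bio

bio = complete_person("a name you just made up")
```

Notice that the name itself is the only input: the function produces the same confident-looking "life" for anyone, which is exactly the failure mode described above.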


4. Why Can It Write "Seemingly Real Details"?

Because it learned: Person stories need details, more details mean more realism, and realism comes from "specificity."

So it will complete: a certain year in a certain place, a certain school, a certain company, a certain project, a certain award.

The New York Times found in a 2024 investigation that numerous AI-generated fake celebrity biographies appeared on Amazon. These biographies had complete birthplaces, education backgrounds, career experiences—but the people had never said these words, never done these things. AI just learned "what biographies look like," then applied templates to generate one after another.

These details might be completely fake, but language patterns make them "look reasonable."


5. Why Can It Fabricate "Birthplace, Education, Experience" for You?

Because these are the most frequently appearing fields in person templates.

AI will infer based on the name's linguistic characteristics: Which country does this name sound like, what cultural background, which career path is most common.

For example (hypothetical examples):

  • "Wang Jianguo" → AI might guess Chinese background, engineering or academic fields
  • "John Smith" → AI might guess English-speaking country background, business or legal fields
  • "Sofia Rossi" → AI might guess European background, art or fashion fields

It's not looking up information—it's making "linguistic statistical inferences." Based on the name's linguistic characteristics, guessing the most likely "life trajectory."
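The "statistical inference from a name" idea can be caricatured with a few crude rules. The heuristics below are invented for illustration and far cruder than the associations a real model learns, but they show the key point: the guess is driven by the spelling of the name, not by any lookup of a real person.

```python
# Toy sketch of guessing a "background" from a name's surface features
# alone. All rules here are hypothetical, purely for illustration.

def guess_background(name):
    """Return a stereotyped guess based only on spelling patterns."""
    if any(part in name for part in ("Wang", "Li", "Zhang")):
        return "Chinese background, engineering or academia"
    if name.endswith(("i", "o")) and " " in name:
        return "European background, art or fashion"
    return "English-speaking background, business or law"
```

No database is consulted; change one letter of the name and the entire "life trajectory" can change with it.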


6. Why Can It Write "Life Turning Points"?

Because it learned: Person stories need "turning points," turning points make stories more like "life," and humans love writing about "key moments" in biographies.

So it will automatically complete: a certain failure, a certain opportunity, a certain decision, a certain mentor, a certain event.

These are all "narrative structures," not "facts."

Narratology tells us: A good story needs conflict, turning points, growth. AI learned these narrative patterns from massive text, then applied them to every "person introduction." It doesn't know whether this story is real; it only knows that writing this way is "more like what humans would write."
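As a toy illustration of "narrative structure, not facts," a life story can be assembled entirely from stock story beats. Every beat below is invented; the point is that no factual input is involved anywhere.

```python
# Toy "narrative template": a complete-sounding life arc built purely
# from stock beats (setup, setback, mentor, turning point), with no
# factual input at all. All beats are invented for illustration.

STORY_BEATS = [
    "grew up with a passion for the field",
    "suffered an early setback",
    "met a mentor who changed everything",
    "made a bold decision that paid off",
]

def life_story(name):
    """Assemble a plausible-sounding arc for any name."""
    return f"{name} " + ", then ".join(STORY_BEATS) + "."
```

The output reads like a biography only because the beats follow the arc humans expect, which is precisely why such stories feel "more like life."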


7. Why Can It Write "Emotions, Motives, Personality"?

Because it learned: Person stories need "internal motivation," humans like to explain "why," and emotions make stories more real.

So it will complete: "He had loved... since childhood," "She decided because of one experience...", "His resilient personality meant..."

These aren't psychological analyses—they're "narrative inertia."

AI doesn't understand what "love" is, what "resilience" is, what "motivation" is. It just sees that humans frequently use these words and sentence structures when describing people, so it uses them too. It's not "understanding human nature"—it's "mimicking human narrative methods."


8. How Do Language Patterns Make Fictional Characters Look So Real?

AI's strength lies in: It can mimic human writing rhythm, construct self-consistent logical chains, fill in massive details, and maintain consistent style.

So what you see isn't a "real person," but a:

"Language-pattern-driven fictional character."

But it looks very real.

An American lawyer, Damon V. Able, discovered that AI had fabricated an entire "career history" for him—including cases he had never participated in, articles he had never published, and honors he had never received. The whole story was logically tight and detail-rich—but completely fake. Able later wrote: "AI created a fictional me that is more 'like' a successful lawyer than my real self."


9. Human Understanding of People vs. AI Generation of People: The Misalignment of Two Logics

Now we can see more clearly the fundamental difference between two logics:

Human understanding of people:

  • Based on facts—What is this person's real experience
  • Based on observation—What I've seen from their behavior
  • Based on experience—My experience interacting with them
  • Based on psychology—What motives drive their behavior

AI generation of people:

  • Based on patterns—How humans typically write about people
  • Based on structure—What is the template for person introductions
  • Based on probability—What content is most likely to appear here
  • Based on language—What sentence structures best fit narrative conventions

When you mistake "language generation" for "person understanding," you think AI "knows this person." But actually, it's just "fabricating" according to templates, never verifying any information.


10. Understanding AI's "Person Hallucination" to Use It Correctly

AI isn't "introducing a person"—it's "generating an instance of a person template."

It's not telling you the truth—it's telling you:

"Humans would typically write about a person this way."

Understanding this, we can:

  • Stay alert—The person AI describes might be fabricated, even with name, details, and story
  • Independently verify—Important person information needs source verification
  • Understand the mechanism—AI isn't "lying," it's just "completing templates"
  • Use reasonably—Leverage AI's narrative ability to create fictional characters, but don't treat it as a source of facts

Understanding AI's "person hallucination" is the only way to truly understand its boundaries.


Closing Note

This is Article 6 of the series "The Misalignment of Intelligence: The Underlying Logic of AI Hallucination."

Next: "Why Can AI Describe Non-Existent Books Convincingly?"
—A book's "structure" is easier to mimic than content—AI uses this to fool you.

Understanding underlying logic is the first step to understanding the age of intelligence.

