Give AI any name, and it will generate a complete biography—birth year, education, career, achievements, and even personality traits. This isn't magic; it's structure completion. Real cases illustrate the danger: a Norwegian man searched his own name and discovered AI had fabricated a story that he murdered his two children. AI doesn't verify whether people exist—it applies learned patterns of "what a person's life should look like."

Opening Insight
AI doesn't know whether this person exists, but it knows "what a person should look like."
So you give it a name, and it can fabricate an entire life for you.
AI's "storytelling ability" isn't a coincidence; it's an underlying mechanism. It's just applying templates.
You may have tried this: ask AI to introduce a name you made up on the spot. It will immediately give you a birth year, family background, education, career path, life turning points, even "representative works."
You'll be shocked: "I just made up this name—how can it write something so real-looking?"
The truth is:
AI doesn't know whether this person exists, but it knows "what a person should look like."
This isn't an isolated case. In 2025, a Norwegian man named Arve Hjalmar Holmen searched his own name on ChatGPT, and the AI told him he was a murderer who had killed his two children. The story came complete with time, place, and motive, yet it was entirely fabricated. Holmen had never killed anyone; he was simply another victim of AI hallucination.
This case was covered by BBC and other media outlets, becoming a typical example of AI "fabricating false lives out of thin air."
A language model doesn't ask: Is this person real? Does this name appear in any database? Is there any factual basis for this person?
It only asks:
"How do humans typically write when introducing a person?"
So it automatically applies the "person template."
European privacy organization NOYB found in a 2024 investigation that ChatGPT not only fabricated Holmen's "murder story," but also created false information such as sexual harassment scandals and bribery accusations for other real people. AI isn't "spreading rumors"—it's just doing what it was designed to do: completing person templates.
During training, AI read countless person introductions: celebrity biographies, Wikipedia entries, news profiles, interview drafts, paper author bios.
It learned:
Person introduction = Background + Experience + Achievements + Influence
So when you give it a name, it will automatically complete: birthplace, family background, education path, career development, key events, representative works, influence.
This isn't "understanding"—it's "structure completion."
Like a fill-in-the-blank question: AI sees "Name" on the left, and automatically fills in all corresponding fields on the right. Whether this person exists or not, it doesn't care at all.
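The fill-in-the-blank behavior can be sketched as a toy Python program. All names, field lists, and "plausible values" below are invented for illustration; a real model draws them from statistical patterns in its training text, not from lookup tables. The key point the sketch preserves: the function never checks whether the name refers to a real person.

```python
import random

# A toy "person template": the fields a model has learned a biography should contain.
PERSON_TEMPLATE = ["birth_year", "birthplace", "education", "career", "notable_work"]

# Hypothetical pools of plausible-sounding values, standing in for the
# statistical associations a real model has absorbed from training text.
PLAUSIBLE_VALUES = {
    "birth_year": ["1962", "1975", "1988"],
    "birthplace": ["a mid-sized coastal city", "a university town"],
    "education": ["a well-known national university", "a prestigious art academy"],
    "career": ["researcher turned entrepreneur", "journalist turned novelist"],
    "notable_work": ["an influential early project", "a widely cited book"],
}

def complete_person(name: str) -> dict:
    """Fill every template field for any name; existence is never checked."""
    bio = {"name": name}
    for field in PERSON_TEMPLATE:
        bio[field] = random.choice(PLAUSIBLE_VALUES[field])
    return bio

# Any invented name yields a complete, confident-looking biography.
print(complete_person("Arvid Qorlan"))  # a name made up on the spot
```

Note the design: there is no branch for "unknown person." The template is always completed, which is exactly why an invented name and a real one produce equally confident output.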
Because it learned: Person stories need details, more details mean more realism, and realism comes from "specificity."
So it will complete: a certain year in a certain place, a certain school, a certain company, a certain project, a certain award.
The New York Times found in a 2024 investigation that numerous AI-generated fake celebrity biographies appeared on Amazon. These biographies had complete birthplaces, education backgrounds, career experiences—but the people had never said these words, never done these things. AI just learned "what biographies look like," then applied templates to generate one after another.
These details might be completely fake, but language patterns make them "look reasonable."
Why these particular fields? Because they are the fields that appear most frequently in person templates.
AI will infer based on the name's linguistic characteristics: Which country does this name sound like, what cultural background, which career path is most common.
For example (hypothetical examples): a surname ending in "-sen" may prompt a Scandinavian backstory, while one ending in "-elli" may prompt an Italian one, with education and career filled in to match.
It's not looking up information—it's making "linguistic statistical inferences." Based on the name's linguistic characteristics, guessing the most likely "life trajectory."
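A minimal sketch of that kind of statistical guess, using invented surname-suffix heuristics (a real LLM learns far subtler associations from billions of examples, not hand-written rules like these):

```python
# Toy heuristics mapping surface features of a name to a guessed background.
# The suffix rules are invented for illustration only.
NAME_HINTS = [
    ("sen", "Scandinavian-sounding surname suffix"),
    ("ov", "Slavic-sounding surname suffix"),
    ("elli", "Italian-sounding surname suffix"),
]

def guess_background(name: str) -> str:
    """Guess a 'background' from the surname alone; no information is looked up."""
    surname = name.split()[-1].lower()
    for suffix, hint in NAME_HINTS:
        if surname.endswith(suffix):
            return hint
    return "no strong pattern; fall back to the most common template"

print(guess_background("Maria Rosselli"))  # Italian-sounding surname suffix
print(guess_background("Erik Johansen"))   # Scandinavian-sounding surname suffix
```

Nothing here consults a database. The "inference" is purely about what the name sounds like, which is the same reason a model's guessed life trajectory can feel culturally plausible while being factually empty.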
Because it learned: Person stories need "turning points," turning points make stories more like "life," and humans love writing about "key moments" in biographies.
So it will automatically complete: a certain failure, a certain opportunity, a certain decision, a certain mentor, a certain event.
These are all "narrative structures," not "facts."
Narratology tells us: A good story needs conflict, turning points, growth. AI learned these narrative patterns from massive text, then applied them to every "person introduction." It doesn't know whether this story is real; it only knows that writing this way is "more like what humans would write."
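The narrative arc itself (conflict, turning point, growth) can be written down as a template with slots. Every beat and filler below is an invented placeholder; the sketch only shows how a fixed story shape produces a convincing "life" for any name:

```python
# The narrative arc described above: conflict -> turning point -> growth.
# Beats and slot values are invented placeholders for illustration.
STORY_BEATS = [
    "In {year}, {name} faced {conflict}.",
    "A chance encounter with {mentor} became the turning point.",
    "{name} emerged transformed, going on to {achievement}.",
]

def tell_life_story(name: str) -> str:
    """Assemble a 'life' from fixed story beats; no fact is ever checked."""
    slots = {
        "name": name,
        "year": "2003",  # any specific year makes the story "sound real"
        "conflict": "a career-ending setback",
        "mentor": "an older colleague",
        "achievement": "found a successful company",
    }
    return " ".join(beat.format(**slots) for beat in STORY_BEATS)

print(tell_life_story("Jane Placeholder"))
```

Swap in any name and the arc still "works," because the structure, not the person, is carrying the story.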
Because it learned: Person stories need "internal motivation," humans like to explain "why," and emotions make stories more real.
So it will complete: "He had loved... since childhood," "She decided... because of one experience," "His resilient personality meant..."
These aren't psychological analyses—they're "narrative inertia."
AI doesn't understand what "love" is, what "resilience" is, what "motivation" is. It just sees that humans frequently use these words and sentence structures when describing people, so it uses them too. It's not "understanding human nature"—it's "mimicking human narrative methods."
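This "narrative inertia" is easy to demonstrate with a toy bigram model: it records only which word follows which in a few invented biography sentences, then chains words by frequency. It has no concept of "love" or "resilience," yet it echoes the human phrasing it has seen:

```python
import random
from collections import defaultdict

# A tiny bigram model trained on a few invented biography sentences.
# A real model sees billions of sentences; the principle is the same.
corpus = (
    "he had loved music since childhood . "
    "she had loved painting since childhood . "
    "his resilient personality carried him through ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def mimic(start: str, length: int = 6) -> str:
    """Chain words by observed frequency; no meaning is involved."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

random.seed(0)
print(mimic("had"))  # echoes human phrasing without understanding it
```

The model reproduces "had loved... since childhood" not because it knows what love is, but because that word sequence is frequent in its data, which is the whole argument of this section in miniature.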
AI's strength lies in: It can mimic human writing rhythm, construct self-consistent logical chains, fill in massive details, and maintain consistent style.
So what you see isn't a "real person," but a:
"Language-pattern-driven fictional character."
But it looks very real.
An American lawyer, Damon V. Able, discovered that AI had fabricated an entire "career history" for him—including cases he had never participated in, articles he had never published, and honors he had never received. The whole story was logically tight and detail-rich—but completely fake. Able later wrote: "AI created a fictional me that is more 'like' a successful lawyer than my real self."
Now we can see more clearly the fundamental difference between two logics:
Human understanding of people: start from facts, verify against memory or records, and admit ignorance when information is missing.
AI generation of people: start from a template, fill every field with the statistically likeliest pattern, and never check whether the person exists.
When you mistake "language generation" for "person understanding," you think AI "knows this person." But actually, it's just "fabricating" according to templates, never verifying any information.
AI isn't "introducing a person"—it's "generating an instance of a person template."
It's not telling you the truth—it's telling you:
"Humans would typically write about a person this way."
Understanding this, we can stop mistaking fluent fabrication for knowledge. Understanding AI's "person hallucination" is the only way to truly understand its boundaries.
This is Article 6 of the series "The Misalignment of Intelligence: The Underlying Logic of AI Hallucination."
Next: "Why Can AI Describe Non-Existent Books Convincingly?"
—A book's "structure" is easier to mimic than content—AI uses this to fool you.
Understanding underlying logic is the first step to understanding the age of intelligence.