Case 27 | The AI Authenticity Crisis and Emotional Parasitism
Part I: Phenomenon Layer — When AI Feels More "Real" Than Reality
2026. Two stories. One question.
Entertainment
A ten‑second clip. Tom Cruise and Brad Pitt fighting on a rooftop. Expressions, movements, voices — indistinguishable.
Screenwriter Rhett Reese: "It's likely over for us."
AI‑generated dialogue for Lord of the Rings. Alternate endings for Game of Thrones. Scarlett Johansson's voice, used without consent.
SAG‑AFTRA spoke out. Disney sent letters. Over 700 creators launched "Stealing Isn't Innovation."
Public
China's Supreme People's Court published a case. Criminals used AI to clone a "grandson's" voice. An elderly person, hearing that familiar voice, transferred money. It was gone.
Part II: Structure Layer — When "Real" Becomes a Weapon
Old fraud used fake identities. You could verify and avoid.
New fraud uses real faces, real voices. It doesn't attack your logic. It attacks what you trust most.
You know your grandson's voice. You know your mother's face. AI takes those and turns them against you.
The result is not just lost money. It's trust itself — eroded.
Part III: Conflict Layer — Speed That Leaves Everything Behind
AI progresses. Seedance 2.0 generates ultra‑HD video. Real enough to fool.
Laws lag. Celebrities can speak out for themselves. Ordinary people have no one.
Awareness lags. An elderly person alone — how do they distinguish a real voice from a clone?
Detection tools lag. Nothing simple, nothing public.
AI moves on exponential time. Everything else moves on linear time.
Part IV: Observing Layer — Calibrating Reality
At this point, some words not usually found in tech discussions surface.
Physical signals. A real grandson comes home. Brings gifts. Sits at the table. These things cannot be simulated.
Meta‑cognition. Rhythm of speech. Reaction time. Conversational habits. AI can learn, but never perfectly.
Good enough is enough. Not rejecting technology, just knowing when you don't need it.
When "real" requires proof, trust becomes a burden.
Part V: Disaster Layer — When Progress Becomes a Disaster
AI will keep advancing. Videos will get more realistic. Voices will get more precise.
If laws can't keep up,
if education can't keep up,
if ordinary awareness can't keep up,
the line between real and fake blurs.
For ordinary people, that blur is not progress. It's a disaster.
Part VI: The Human Layer — The Final Calibrator
But perhaps the story should not end with disaster alone.
Technology itself has always been neutral.
It is a mirror, reflecting the intentions of those who use it.
When we condemn AI scams targeting elderly people living alone, the same technology, somewhere else, may be accompanying another elderly person — reminding them to take their medication, listening to their loneliness.
When we fear AI replacing human voices, the first brain-computer interface patients are using thought to move a cursor and write the words they want to say again.
People with visual impairments are using AI-assisted vision to recognise the faces of their loved ones.
People with ADHD are using AI-assisted reading tools to regain focus in learning.
These are not simulations.
They are restorations.
These are not replacements.
They are forms of empowerment.
The difference does not lie in the code.
It lies in intention.
The difference does not lie in AI.
It lies in how humans choose to use it.
When technological progress moves faster than human maturity, the risk does not come from machines alone.
It comes from humans abandoning the responsibility to calibrate how technology should be used.
In the end, the strongest defence is not an algorithm, but human judgement.
And the final measure of authenticity may not be data, but choice.
Case 27: The AI Authenticity Crisis and Emotional Parasitism
Part I: Phenomenon Layer — When AI Is More "Real" Than the Real Person
In 2026, two seemingly unrelated news stories pointed to the same question.
The first came from Hollywood. A ten-odd-second AI video went viral online: Tom Cruise and Brad Pitt fighting on a rooftop. Expressions, movements, voice, so lifelike you couldn't tell real from fake. Screenwriter Rhett Reese said just one line: "It's likely over for us."
Then it spiralled out of control. Someone took the likeness of Samwise from The Lord of the Rings and gave him dialogue that was never filmed. Someone made a new ending for Game of Thrones. Scarlett Johansson's voice was taken, without her consent, to train an AI.
Hollywood flipped the table.
The SAG-AFTRA actors' union issued a public condemnation, calling out ByteDance's Seedance 2.0 for utterly disregarding law and ethics and eroding the space for human actors to make a living. Disney went straight to a lawyer's letter, accusing Seedance of having "a built-in library of pirated Disney copyrighted characters," treating Star Wars and Marvel characters as public material. Netflix, Warner Bros., Universal, Paramount, and Sony all lined up behind it. The Motion Picture Association stepped in too, calling this "a deliberately designed choice," not a technical loophole.
The actors began to resist as well. Nicolas Cage said at an awards ceremony: "We cannot let robots dream for us." Samuel L. Jackson taught the younger generation: if you see the words "in perpetuity" or "known and unknown" in a contract, strike them out. Matthew McConaughey applied to trademark his own catchphrase. Scarlett Johansson and Cate Blanchett, joined by more than 700 creatives, launched the "Stealing Isn't Innovation" campaign, declaring that stealing creators' work is not innovation; it is theft.
At the same time, another story posed a more direct threat to ordinary people.
The Supreme People's Court published a model case: criminals used AI to synthesize a "grandson's" voice and phoned an elderly person living alone, saying he was in trouble and urgently needed money. Hearing that familiar voice, the old person panicked, rushed to the bank to transfer the money, and then the money was gone.
This is AI voice-cloning fraud. It doesn't cheat your reason; it cheats your emotions.
Two stories, one about celebrities, one about ordinary people, pointing to the same question:
When AI can perfectly replicate a person's face and voice, what is left that is real?
Part II: Structure Layer — The "Dimension-Reducing Strike" of Technical Realness
This is not a simple scam; it is a deeper form of "parasitism."
Fraud used to rely on fake identities and fake stories, and it only wanted your money. Stay a little alert, make a call to verify, and you could usually avoid it.
But today's AI fraud is different. It uses a "real" voice and a "real" face. What you hear really is your grandson's voice; what you see really is your idol's face. It doesn't cheat your "reason." It parasitizes the most fragile part of being human: emotion.
You know what your mother sounds like; you know what your grandson looks like. AI takes the things you trust most and turns them into weapons against you.
The result is not just lost money; it is the whole society's "trust" beginning to collapse. If even family affection can be exploited like this, what can still be trusted?
Part III: Conflict Layer — Progress Too Fast, and Nothing Else Keeps Up
This is the line you keep coming back to: "everything else fails to keep up with the one-sided evolution of compute."
Look at the current situation:
The speed of AI progress: Seedance 2.0 can already generate Spring Festival Gala-grade ultra-HD video, too real to tell apart.
Legal protection: even someone as famous as Wang Jinsong had to speak out for himself. When an ordinary person's likeness is stolen, what can they do?
Social awareness: how is an elderly person living alone supposed to tell whether the voice on the phone is the real grandson or an AI grandson?
Technical detection: to this day, there is still no "AI detection tool" the general public can simply pick up and use.
AI is absurdly fast, while law, society, education, and safeguards are all still trailing slowly behind.
Part IV: Observing Layer — Calibrating Reality
At this point, words that don't normally belong in tech discussions begin to surface.
Physical signals. A real grandson comes home for dinner, brings gifts, chats with you, shares real interactions. These things AI cannot simulate.
Meta-cognition. The details that feel "off": the rhythm of speech, reaction time, conversational habits. AI can learn them, but it will never be exactly the same as the real person.
Good enough is enough. Not rejecting technology, but knowing when you don't need it. Most things in life need no verification, no calibration, no suspicion.
When "real" becomes something that needs calibrating, trust between people shifts from a default to something that must be proven.
Part V: When Progress Becomes a Disaster
AI can progress even faster, generate even more realistic video, mimic voices even more closely.
But if the law can't keep up,
if public education can't keep up,
if ordinary people's vigilance can't keep up,
if even the question of "what is real" begins to blur,
then this "progress," for ordinary people, will only be a greater disaster.
Part VI: The Human Layer — The Final Calibrator
But perhaps we shouldn't see only the disaster.
Technology is always neutral. It is a mirror, reflecting the intentions of those who use it.
When we condemn AI scams against elderly people living alone, the same technology, in another corner, is keeping another elderly person company, reminding them to take their medication, listening to their loneliness. When we fear AI replacing the human voice, the first brain-computer interface patient is moving a cursor with thought and writing down, once again, the words he wants to say. Blind people are using AI visual assistance to "see" their family's faces again. People with ADHD are using AI-assisted reading to focus on learning again.
These are not "simulations"; they are "restorations." These are not "replacements"; they are "empowerment."
The difference is not in the code, but in the intention. The difference is not in the AI, but in the person.
The fault does not lie with AI; it lies with those who use it.
When technology evolves faster than human nature matures, the disaster comes not from the machines but from humans abandoning the responsibility to "calibrate."
The final line of defence is not the algorithm, but the human heart.
The final reality is not data, but choice.