Signal vs Noise | 004 The Calibrator — Why 2028 Is Not a Prediction, It's a Structure
Your life is their laboratory. The question is whether you know it.
The announcement was impressive.
One trillion dollars in orders. Twenty-eight cities across four continents. A new platform that integrates seven different chips into five racks operating as a single supercomputer.
Jensen Huang stood on stage at GTC 2026 and described a world where AI infrastructure is everywhere — in the roads, in the sky, in space.
The audience applauded.
Nobody asked who pays when it goes wrong.
The oldest excuse in industry.
There is a logic that has served manufacturers for centuries.
I make the knife. What you do with it is your problem.
A restaurant serves the food. If you choke, that is not the kitchen's fault. A weapons manufacturer builds the gun. Whether it protects or destroys depends on the buyer. A knife-maker sells the blade. The chef who cuts their finger made an error, not the factory.
This logic has a name: product liability limitation.
It works when the product is passive. When it sits still until someone picks it up.
It starts to break down when the product makes decisions.
When the tool starts driving.
Tesla's Autopilot system has been involved in hundreds of accidents.
Tesla's position, consistently, has been clear:
Autopilot is a driver assistance feature. The driver must remain attentive at all times. By activating the system, the user accepts responsibility.
The user accepted the terms. The user is responsible.
Except the car was making decisions. The car was reading the road. The car was calculating speed and trajectory and response time.
The knife doesn't decide where to cut.
The car does.
Courts have begun to notice this distinction. Some rulings have found Tesla liable. The logic of the passive tool manufacturer is beginning to fail against the reality of active AI decision-making.
But the transition is slow. And in the meantime, the accidents keep happening.
The authorisation trap.
OpenClaw's terms of service run to thousands of words.
Somewhere in those words, you authorised the use of your data, your interactions, your patterns, your errors.
You authorised it because you wanted access to the tool.
You authorised it because everyone else was authorising it.
You authorised it because the alternative was being left behind.
The moment you clicked accept, something shifted.
If the AI produces a wrong answer that costs you money — you authorised it.
If the AI makes a recommendation that damages your business — you authorised it.
If the AI hallucinates a legal citation and a judge sanctions your lawyer — the lawyer authorised it.
The tool is not responsible. The manufacturer is not responsible.
You are responsible.
This is not an accident. This is the architecture.
The real experiment.
Jensen Huang is not testing in a simulation.
He is testing in Los Angeles. In San Francisco. In twenty-six other cities across four continents.
The vehicles on those roads are not prototypes in a closed track. They are production systems operating in environments shared with pedestrians, cyclists, children crossing streets, elderly drivers who don't know what Autopilot means.
This is the real laboratory.
And the subjects of the experiment did not sign a consent form.
They are simply living in those cities.
When the system learns from a near-miss, it gets better. When the system learns from an accident, it gets better. Either way, the system gets better.
The question of who absorbs the cost of the learning — that question does not appear in the earnings call.
The signal that was never there.
Here is the deeper problem.
AI systems learn from data. The more data, the better the learning. The more real-world testing, the more accurate the calibration.
But what happens when the data itself is corrupted?
For decades, humans packaged their credentials, inflated their expertise, manufactured their reviews, optimised their content for algorithms rather than for truth.
AI learned from all of it.
The legal citations that don't exist. The coaching certifications that are self-issued. The restaurant reviews written by people who were never there. The product descriptions written to rank, not to inform.
AI learned from all of it.
And now AI produces outputs that reflect that learning — confident, fluent, structured, and sometimes entirely disconnected from reality.
The system was trained on noise. It produces noise. The noise trains the next system.
This is not a technical failure. It is a calibration failure.
The real signal was never fed in consistently enough to correct the pattern.
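The loop is easy to sketch. The toy model below is an illustration, not a description of any real system: the signal value, the bias, and the shrinking share of real data are all invented for the sketch.

```python
import random

# Toy sketch of the noise loop (every number here is an invented
# assumption). A "true" signal sits at 1.0. Each generation trains on a
# mix of real observations and the previous generation's own outputs,
# which carry a small systematic bias: the packaging. As the share of
# real data shrinks, the estimate drifts, and each drifted estimate
# becomes the next generation's training data.

random.seed(42)

TRUE_SIGNAL = 1.0
BIAS = 0.15             # systematic distortion in manufactured content
estimate = TRUE_SIGNAL  # generation zero starts calibrated

for generation in range(1, 11):
    real_share = max(0.0, 1.0 - 0.1 * generation)  # real data keeps shrinking
    samples = []
    for _ in range(1000):
        if random.random() < real_share:
            samples.append(random.gauss(TRUE_SIGNAL, 0.1))      # real observation
        else:
            samples.append(random.gauss(estimate + BIAS, 0.1))  # recycled output
    estimate = sum(samples) / len(samples)  # "train" the next system on the mix
    print(f"generation {generation:2d}: real share {real_share:.1f}, estimate {estimate:.3f}")
```

By the later generations the system is training almost entirely on its own biased output, and the estimate drifts further from the signal with every round. Calibration failure, in miniature.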
The outsider's dilemma.
Here is the problem no algorithm can solve.
When you encounter a field you don't know — a medical specialist, a financial advisor, a coaching certification, a legal service — you have no internal reference point.
You go online. You find the websites with the best SEO. The credentials that sound most official. The reviews that seem most numerous.
You choose based on what you can see.
But what you can see was designed to be seen.
The real question — is this person actually good at what they do — cannot be answered by a search engine. It can only be answered by someone who has direct experience of the field. Someone who has been inside it long enough to know what competence actually looks like.
And here is the uncomfortable truth:
Sometimes, even when you know the credential is manufactured — you still need it.
Because the system requires it.
Because the door only opens for people who have the right label.
Because the network you need to join has its own initiation requirements, and they are not always rational, and they are not always fair, and they are real regardless.
You cannot always choose the real over the packaged.
Sometimes the packaged is the only door available.
What calibration actually means.
This is where most discussions of misinformation end.
Learn to think critically. Verify your sources. Don't believe everything you read.
This is correct. It is also insufficient.
Because the person who needs to verify a medical credential does not have a medical degree.
Because the person who needs to evaluate a financial advisor does not have access to their track record.
Because the person who needs to find a trustworthy contractor has never built a house.
Critical thinking requires a reference point. And reference points require prior experience.
The calibrator's role is not to replace that experience.
It is to help people understand what they are actually buying — before they buy it.
Not: this is false, do not trust it.
But: here is what this credential actually means. Here is what the packaging contains and what it does not. Here is the realistic gap between what is promised and what is delivered. Here is what questions to ask before you commit.
And sometimes: yes, you probably still need to get the credential. But now you know what it is.
The structure that was always there.
2028 is not a prediction.
It is the point at which several structural forces converge simultaneously.
The AI systems currently being trained on real-world data — in twenty-eight cities, on open roads, in production environments — will reach a level of capability that makes the displacement of cognitive labour undeniable rather than theoretical.
The economic dead cycle — capital reduces headcount, unemployment rises, consumption falls, revenue falls, capital reduces headcount again — will have completed enough rotations to be visible to people who are not economists. (A toy sketch of the loop appears at the end of this section.)
The legal systems that have been slowly building case law around AI liability — starting with lawyers, moving to autonomous vehicles, expanding to medical and financial decisions — will have accumulated enough precedent to begin changing behaviour.
The people who understood this structure early — who built real signal into systems that were filling with noise, who maintained calibrated judgment while the tools around them were optimising for engagement — will be the people others turn to.
Not because they predicted the future.
Because they understood the structure.
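The dead cycle, too, can be written down in a few lines. A minimal sketch with invented coefficients; the point is the shape of the loop, not the numbers:

```python
# Toy rotation of the dead cycle (the coefficients are assumptions made
# up for this sketch, not measurements). Headcount follows revenue,
# consumption follows headcount, revenue follows consumption: every pass
# through the loop contracts the whole system a little further.

employment = 1.00  # index values; 1.00 = the starting level
revenue = 1.00

for rotation in range(1, 7):
    employment = 0.90 * revenue             # capital cuts headcount
    consumption = 0.50 + 0.50 * employment  # demand tracks jobs
    revenue = consumption                   # revenue tracks demand
    print(f"rotation {rotation}: employment {employment:.3f}, revenue {revenue:.3f}")
```

Each rotation is smaller than the last, but every one of them shrinks both employment and revenue. You do not need to be an economist to read the printout.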
The only decision that is yours.
In a desert, a mirage is still a direction.
When the real path has disappeared, people follow what they can see. You cannot always blame them for that. The alternative is standing still in the heat.
But knowing that what you are following is a mirage changes something.
It changes how hard you commit. It changes what you keep in reserve. It changes how you position yourself relative to the horizon.
The calibrator does not take the mirage away.
The calibrator says: that is what this is. Now you decide.
Because the decision — what to pursue, what to accept, what to use as an entry ticket and what to hold as a genuine belief — that decision cannot be made for you.
The tools are becoming more powerful. The noise is becoming more sophisticated. The packaging is becoming indistinguishable from the real thing to systems that were never taught what real looks like.
In that environment, the most valuable thing is not a better algorithm.
It is a person who has seen enough of both — the real and the manufactured — to tell the difference.
And who is honest enough to say, when the situation calls for it:
I don't know for certain. But here is what I can see. And here is what that means for you.
That is the work.
It was always the work.
It will still be the work in 2028.
This is the final signal in a four-part series.
Part One: AI Made a Lawyer Lie to a Judge. Now What? → [Link]
Part Two: The Algorithm Says It Wants Real. But Can It Tell the Difference? → [Link]
Part Three: When the Label Becomes the Lifeboat → [Link]
"Your life is their laboratory. When AI learns by trial and error in real cities, who bears the cost? The calibrator's true position."