Cryptocurrency investors are facing a new kind of threat: scammers armed with state-of-the-art artificial intelligence. AI technology has fueled an explosion in crypto fraud over the past year, with reported AI-assisted scams surging 456% between mid-2024 and mid-2025.
Scammers now use AI to produce highly realistic fake videos, voices, and messages, making crypto scams more convincing and harder to detect than ever. According to the FBI, the United States alone logged nearly 150,000 crypto-related fraud complaints in 2024, with losses exceeding $3.9 billion. Globally, crypto scam losses topped $10.7 billion in 2024, and even that is likely just the tip of the iceberg, since only an estimated 15% of victims ever report the crime.
Even OpenAI CEO Sam Altman has publicly warned that AI's ability to mimic humans has "defeated most authentication" methods and could trigger a "significant fraud crisis." This article breaks down the ten most common AI-powered crypto scams in the wild, explains how they work with real examples, and, most importantly, shows how to protect yourself.

Chart: Transactions tied to AI-assisted crypto scams have risen sharply since 2021, with roughly 60% of scam deposit addresses in 2025 linked to AI tools, reflecting how rapidly fraudsters have adopted AI to operate at scale.
1. Deepfake Celebrity Endorsement Scams (Fake Videos and Livestreams)
One of the most prolific AI-enhanced crypto scams involves deepfake videos of celebrities promoting fraudulent projects. Scammers use AI to impersonate famous tech and crypto figures, with Elon Musk a favorite identity to fake, in videos promoting bogus giveaways or investment platforms. For example, criminal rings hijack popular YouTube channels and livestream doctored videos of Elon Musk (or other crypto personalities such as Ripple's Brad Garlinghouse or MicroStrategy's Michael Saylor) appearing to promise to "double your Bitcoin" or guarantee outsized returns. In reality, the footage and audio are entirely AI-generated; the real celebrities never said any of it. Viewers who send crypto to the advertised address hoping to "win" are simply robbed.
In the past, these crypto giveaway scams spliced together old interview footage. With deepfake technology, scammers can now insert entirely new content, making it appear the celebrity is personally endorsing the scheme. At first glance, these fake livestreams can look remarkably convincing. For example, Chainabuse (a public scam-reporting platform) received reports in mid-2024 of a deepfake Elon Musk livestream scam that snared multiple victims within a short window, netting the scammers significant funds almost instantly. Blockchain analysis later showed the associated scam addresses received at least $5 million from victims between March 2024 and January 2025, and another Musk deepfake scheme touting an "AI trading platform" pulled in more than $3.3 million. Research by security firm Sensity found that Musk is the celebrity most commonly deepfaked in investment scams, thanks to his credibility and frequent commentary on crypto. Scammers exploit exactly that public trust.
Worse still, real-time deepfake technology now enables live face swaps on video calls. A scammer can literally wear another person's face on Zoom. In one case, fraudsters impersonated a company's CFO and colleagues on a video conference using AI face swaps and tricked an employee into transferring $25 million. The deepfaked executives looked and sounded indistinguishable from the real people, a nightmare scenario for any organization. As Sam Altman put it, voice and video deepfakes are "becoming indistinguishable from reality." For the average investor, a deepfake video of a trusted CEO or project founder can be extremely persuasive.
How to spot and avoid it: Be extremely skeptical of any "too good to be true" crypto promotion, even if a celebrity appears on camera. Watch the video closely; deepfakes can show subtle flaws such as odd blinking, strange mouth movements, or unnatural facial lighting. Listen for stilted speech cadence or digital distortion in the voice. If a celebrity suddenly promises guaranteed returns or a giveaway (especially in a livestream asking you to send crypto first), it is almost certainly a scam. Verify through official channels; Elon Musk, for instance, has repeatedly stated that he does not do crypto giveaways. Never send funds to an address shown in a video. When in doubt, pause: scammers rely on urgency and impulse. Taking the time to check whether a promotion is genuine (it almost never is) is far better than falling into a deepfake celebrity trap.
2. Voice and Video Impersonation Scams (Fake Executives and Loved Ones)
AI deepfakes are not limited to famous billionaires; scammers also impersonate people you know personally or authority figures you trust. In these schemes, an AI-cloned voice or video of a relative or boss is used to lower the victim's guard. One chilling variant is the "loved one in distress" call: the victim answers the phone and hears a "grandchild" or "spouse" urgently begging for money, usually in crypto or by wire transfer. In reality, the family member never called; the scammer scraped voice samples from social media clips, cloned the voice with AI, and placed the call. These voice deepfakes have fooled victims around the world by preying on human emotion, and today a short audio clip is all it takes to produce a convincing clone.
Another variant targets companies and investors by impersonating executives or business partners. As mentioned above, in early 2024 the international engineering consultancy Arup was defrauded in Hong Kong when scammers used AI face swaps and cloned voices in a video meeting to trick an employee into transferring $25 million. A Hong Kong bank likewise lost over $10 million after an employee followed instructions from what sounded exactly like the "boss," all AI-generated voice. Scammers have even deepfaked startup founders and crypto exchange executives on video calls with clients and investors, issuing fake payout instructions. These "CEO deepfake" scams are flourishing now that so much business is conducted online: if the boss looks and sounds right on Zoom, people naturally comply. Real-time face-swap software overlays another person's features onto the scammer's face, so victims believe the real boss is speaking, and end up handing over large sums or critical security credentials. Many companies only discover the fraud after the money is gone.
Scammers are also stretching out the con in so-called "pig butchering" operations: fraud rings use brief video calls to "prove" their identity before continuing the grooming over text. Sometimes they hire real actors; sometimes they put attractive AI-generated personas on camera briefly, purely to win the victim's trust. Researchers at TRM Labs found criminal syndicates renting "deepfake-as-a-service" tools for AI-powered face and ID swaps, pay-as-you-go, a sign of a maturing criminal marketplace. Some scam rings reinvest a share of their proceeds into AI services; one such operation took in over $60 million from a single scheme, showing just how well face-swapping AI pays off for criminals.
How to protect yourself: The key is out-of-band verification. When an unexpected call or video chat asks about money or sensitive credentials, never act immediately, no matter how much it looks or sounds like the real person. Hang up and call back on a number you already know, or confirm through a separate channel. If the "boss" asks you to transfer funds, verify directly with the boss or another colleague by phone. Families can agree on a safe word in advance; a caller claiming to be a relative who cannot produce it is a scammer. On video calls, watch for oddities such as unnaturally smooth skin, strange eye or hair details, audio lag, or a robotic-sounding voice. Any payment instruction received by voice message or email must be double-checked through a separately verified channel. As Sam Altman has stressed, traditional authentication methods such as voice recognition are no longer safe. Treat any urgent demand for money or an immediate crypto transfer with extreme suspicion, even if it "comes from" someone you trust. A few extra minutes spent independently verifying identity can save you, or your entire company, from a devastating deepfake fraud.
3. AI-Enhanced Romance/Investment Scams ("Pig Butchering")
So-called pig butchering scams, in which fraudsters cultivate a long-term online relationship with a victim before draining their savings, have become far more dangerous with AI. In the traditional playbook (usually via dating apps or social networks), the scammer spends weeks or months building a romantic or mentor relationship, then introduces a supposedly high-yield crypto investment opportunity and persuades the victim to deposit funds into what is, in fact, a fabricated platform. Running these scams used to require heavy manpower, with operators chatting daily with victims while posing as lovers or mentors. Now AI chatbots and deepfakes let pig butchering operate at global scale, with automated conversation multiplying each scammer's reach.
Scam syndicates now use AI language models (LLMs) to handle most of the communication with victims. Using tools like ChatGPT, or illicit uncensored variants like "WormGPT" and "FraudGPT", they can generate fluent, charming messages in any language, 24/7. This means one scammer can manage dozens of victims concurrently, with AI crafting individualized loving texts, market analysis, or whatever the script requires. In fact, a 2023 investigation by Sophos found pig butchering groups had begun using ChatGPT to write their chats; one victim even received a strange pasted message that accidentally revealed it was AI-generated. The error aside, LLMs let scammers "hyper-personalize" their approach, adjusting tone and content to perfectly suit each victim's background and emotional state. The days of broken English or copy-paste scripts are over. With AI, the texts feel genuine, making victims even more susceptible to the eventual pitch.
AI is also breaking the language barrier that once limited these scams. Originally, many pig butchering rings were based in Southeast Asia targeting Chinese-speaking victims. Expansion to Western victims was hampered by scammers’ weaker English skills – awkward grammar was often a red flag. Now, LLM-based translation and writing lets a non-native speaker seamlessly scam someone in English, German, Japanese, or any lucrative market. Scammers feed incoming messages to an AI for translation and reply generation, enabling them to pose as cosmopolitan investors or romantic partners even in languages they don’t speak. This has vastly expanded the pool of targets. Well-educated professionals in Europe or North America who might have dismissed clumsy scam messages can now receive polished, perfectly localized correspondence from an “attractive entrepreneur” who befriended them online. The result? More victims fattened up (like “pigs”) for the eventual slaughter.
And let’s not forget deepfakes in pig butchering. While most of the grooming occurs via text, scammers sometimes schedule brief video calls to allay suspicions. Here they increasingly use face-swapped deepfake videos – for instance, a scammer may hire a woman to appear on camera but use AI to replace her face with the stolen photos they’ve been using in the profile. This “proof of life” strategy convinces the victim that their online sweetheart is real. (Some operations even advertise for “real face models” who, with the help of AI filters, appear as the victim’s dream partner on video.) Once trust is secured, the scammers direct victims to invest in bogus crypto platforms or “liquidity mining” programs, often showing fake profit screenshots to entice larger deposits. Victims have been known to refinance homes or drain 401(k)s under the illusion of a life together with their scammer or huge returns just ahead. It’s absolutely devastating – and AI only amplifies the deception.
How to avoid pig butchering scams: Be skeptical of online-only romances or sudden mentor relationships, especially ones that become intensely affectionate very quickly. If someone you have never met in person starts coaching you on crypto investing or asking for financial help, that is a glaring red flag. Run a reverse image search on their profile photos to check for stolen images (many pig butchering scammers use photos of models or of other victims). Watch for video-call red flags, such as a camera that is always low quality or a partner who never shows their full face; these can indicate an AI-generated appearance. Anyone pressuring you to invest quickly or claiming insider information is following a scam script. Remember: genuine investment professionals and genuine romantic partners do not promise guaranteed returns. The FBI and other agencies have issued formal warnings about pig butchering and crypto romance scams; educate yourself and warn the people around you. If you suspect you are dealing with a scammer, cut off contact immediately and never send money or crypto. If you have already been victimized, report it (anonymously if needed) to help authorities trace the criminal rings. Vigilance is your strongest defense; no amount of AI-generated sweet talk should lure you into the trap.
4. AI-Written Phishing Emails and Messages (Smarter Scams at Scale)
Phishing – those fraudulent emails or texts that trick you into clicking a malicious link or giving up private information – has long been a numbers game. Scammers blast out thousands of generic messages (“Your account is at risk, login here...”) hoping a few people take the bait. Now, AI is making phishing far more convincing and more targeted. With tools like generative language models, attackers can easily craft personalized, fluent messages that mimic the style of genuine communications, dramatically upping their success rate.
Large language models (LLMs) can generate phishing emails that are nearly indistinguishable from genuine messages from your bank, crypto exchange, or a friend. The grammatical errors and clumsy phrasing that once gave scams away are gone; AI-written copy is nearly flawless. For example, AI can draft an urgent email from the "[Your Crypto Exchange] Security Team" warning of a withdrawal attempt and providing a link to "secure your account," with polished wording that closely mimics official style. Scammers also use AI to crawl your social media and personalize the message, referencing a transaction you just made or even a friend's name, a technique known as spear phishing. That kind of targeting used to be laborious; now an AI agent can sweep all your public data in seconds.
There are even underground AI tools purpose-built for cybercrime. Because public models like ChatGPT restrict illicit use, criminals have developed or purchased black-market LLMs such as "WormGPT" and "FraudGPT" that have few guardrails. Sold on dark web forums, these malicious AIs can mass-produce phishing emails and malicious code, and even provide step-by-step scam tutorials. With a single prompt, a scammer with poor English can generate a high-quality phishing email or an entire fake website's worth of content. According to security firm KnowBe4, by 2024 roughly 74% of the phishing emails it analyzed showed signs of AI involvement; in other words, most phishing email is now effectively AI-written.
Beyond email, AI chatbots on messaging platforms are a growing threat. Scammers deploy bots on Telegram, Discord, and WhatsApp that converse with targets in real time, posing as humans to reel them in. Tweet a complaint about your crypto wallet, and a "support agent" (actually a bot) may DM you within minutes and walk you through fake "verification" steps designed to capture your private keys. Because the bot understands context and responds naturally, victims rarely suspect they are not talking to a person. These AI social-engineering tactics have fooled even experienced users; some have been lured by an "investment assistant" chatbot into trading, only to find it was a trap built to harvest API keys and account credentials.
Concept illustration of a digital scammer using AI. Phishing attacks and malware are increasingly aided by AI algorithms, which can generate realistic messages and even malicious code to steal passwords and crypto funds.
AI can also help attackers write malware and automate intrusions. Criminals with little programming skill can ask a permissive AI to write programs that drain crypto wallets or install keyloggers to capture seed phrases. Known ransomware and info-stealer samples have already been generated with AI assistance. While this edges into pure hacking territory, phishing emails are often the delivery vehicle for such malware, and AI lets criminals push out new variants faster than security teams can respond. AI also helps defeat account security: solving CAPTCHAs, generating fake IDs to pass verification (see Section 6), and even intelligently guessing passwords (strong keys remain hard to crack, but weak ones offer no protection anymore). Sam Altman has warned that selfie checks and voiceprint logins are already easily defeated by AI, and that traditional identity verification urgently needs an upgrade.
How to defend against AI phishing: The classic advice still applies: never click suspicious links or download attachments, no matter how authentic an email looks. Treat any email, text, or DM that pressures you to act fast or asks for login credentials, 2FA codes, or your seed phrase as hostile by default. Even if the formatting and language are perfect, check the sender's address and the destination URL carefully for misspellings or lookalike domains (for example, "binance.support.com" is not an official domain). For any unexpected request, the safest move is to contact support yourself through official channels rather than any link in the message. Never trust "support staff" who DM you first in online communities; legitimate companies do not ask for passwords in DMs. On the technical side, enable anti-phishing email filters, install reputable link-scanning and anti-malware tools, and keep your software updated. Above all, stay skeptical: if someone is rushing you to hand over information, slow down and think. Maintaining high suspicion toward unsolicited messages will defeat most AI-powered phishing. Remember: a legitimate company or friend will never fault you for taking an extra minute to verify; scammers count on you acting in haste.
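As a concrete illustration of the "check the real domain" advice above, here is a minimal Python sketch that flags links whose registrable domain is not on a personal allowlist. The `TRUSTED_DOMAINS` set and example URLs are illustrative assumptions, and production code should consult the Public Suffix List rather than naively taking the last two DNS labels:

```python
from urllib.parse import urlsplit

# Example allowlist of exchanges you actually use (adjust to your own).
TRUSTED_DOMAINS = {"binance.com", "coinbase.com", "kraken.com"}

def registrable_domain(hostname: str) -> str:
    """Naively take the last two DNS labels.
    Real code should use the Public Suffix List so that
    domains like 'example.co.uk' are handled correctly."""
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else hostname.lower()

def looks_legit(url: str) -> bool:
    """True only if the link's registrable domain is on the allowlist."""
    host = urlsplit(url).hostname or ""
    return registrable_domain(host) in TRUSTED_DOMAINS

# A subdomain of the real site passes; lookalikes do not.
print(looks_legit("https://www.binance.com/en/login"))   # True
print(looks_legit("https://binance.support.com/login"))  # False: domain is really support.com
print(looks_legit("https://b1nance.com/login"))          # False: '1' swapped for 'i'
```

The point of the exercise: "binance" appearing somewhere in a URL means nothing; only the registrable domain (the part just before the public suffix) identifies who you are really talking to.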
5. Fake “AI Trading” Bots and Platforms (The AI Hype Investment Scam)
The frenzy around artificial intelligence hasn’t just benefited scammers operationally – it’s also become the bait itself. In the past couple of years, there’s been a surge in fraudulent crypto projects and trading schemes that tout AI as their secret sauce. Scammers know that average investors are intrigued by AI’s potential to
generate profits. Thus, they create fake AI-powered trading bots, signal groups, or DeFi platforms that promise guaranteed returns through some advanced algorithm, when in reality it’s all smoke and mirrors.
One common ruse is the “AI trading bot” scam: You’re invited (often via Telegram or Reddit) to use a bot that allegedly leverages AI to trade crypto for huge gains. The bot might even show simulated results or a demo that makes a few profitable trades on a test account. But once you deposit your own funds for the bot to trade, it starts losing – or the scammers simply disappear with your money. In other cases, scammers promote an “AI investment fund” or mining pool – you send crypto to them to invest, lured by marketing buzzwords like “proprietary AI-driven strategy” – but it’s a Ponzi scheme. Early “investors” might get a bit back to prove it works, but eventually the operators vanish with the bulk of the funds, leaving behind a slick website and no accountability.
During the ChatGPT hype of 2023–2024, dozens of new crypto tokens and platforms emerged claiming some AI angle. While some were legitimate projects, many were outright scams or pump-and-dump schemes. Fraudsters would announce a token tied to AI development, watch funds pour in from excited investors, then abandon the project (a classic rug pull). The idea of AI was enough to inflate a token’s value before the crash. We also saw fake news being weaponized: deepfaked videos of Elon Musk and others were used to endorse an “AI crypto trading platform” (as mentioned earlier) to drive victims to invest. For example, one deepfake video encouraged people to invest in a platform by claiming it used AI to guarantee trading profits – nothing of the sort existed. These schemes often combined the trust in a celebrity with the mystique of AI tech to appear credible.
Not only do scammers lie about having AI, some actually use AI to enhance the illusion. TRM Labs noted a major pyramid scheme in 2024 named MetaMax that purported to give high returns for engaging with social media content. To appear legitimate, MetaMax’s website showed a CEO and team – but the “CEO” was just an AI-generated avatar created with deepfake tech. In other words, there was no real person, just an AI image and perhaps an AI voice, assuring investors that MetaMax was the next big thing. The scheme still managed to rake in close to $200 million (primarily from victims in the Philippines) before collapsing. Another scam site, babit.cc, went so far as to generate entire staff headshots via AI instead of using stolen photos of real people. While one might notice some uncanny perfection in those images, each passing month makes AI-generated faces more lifelike. It’s easy to see how future scam sites could have a full cast of seemingly credible executives – none of whom exist in reality.
How to avoid AI-themed investment scams: Approach any “too-good-to-be-true” investment opportunity with extreme caution – especially if it heavily markets AI capabilities without clear details. Do your homework: If a project claims to use AI, is there legitimate documentation or an experienced team behind it? Be wary if you can’t find any verifiable info on the founders (or if the only info is AI-created profiles). Never trust celebrity endorsements in the crypto space unless confirmed through official channels; 99% of the time, people like Musk, CZ, or Vitalik are not randomly giving out trading advice or funds doubling offers. If an AI trading bot is so great, ask why its creators are selling access for cheap or marketing on Telegram – wouldn’t they just use it privately to get rich? This logic check often reveals the scam. Also, remember that guaranteed returns = red flag. No matter how sophisticated an algorithm, crypto markets have risk. Legitimate firms will be clear about risks and won’t promise fixed high yields. As an investor, consider that scammers love buzzwords – “AI-powered, quantum, guaranteed, secret algorithm” – these are hooks for the gullible. Stick to known exchanges and platforms, and if you’re tempted by a new project, invest only what you can afford to lose after independently verifying it. When in doubt, seek opinions from trusted voices in the community. Often, a quick post on a forum or Reddit about “Has anyone heard of XYZ AI bot?” will surface warnings if it’s fraudulent. In short, don’t let FOMO over AI breakthroughs cloud your judgment – the only thing “automated” in many of these scams is the theft of your money.
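To see why "guaranteed returns" should set off alarms, it helps to simply compound the claimed rate. The 2%-per-day figure below is an illustrative assumption typical of scam pitches, not a quote from any real platform:

```python
# Sanity-check a "guaranteed" return claim by compounding it over a year.
def compound(principal: float, rate_per_period: float, periods: int) -> float:
    """Value of `principal` after `periods` at a fixed per-period return."""
    return principal * (1 + rate_per_period) ** periods

# $1,000 at a "modest-sounding" guaranteed 2% per day for 365 days:
value = compound(1_000, 0.02, 365)
print(f"${value:,.0f}")  # well over a million dollars
```

No real strategy, AI-driven or otherwise, turns $1,000 into seven figures in a year risk-free; any pitch whose math implies this is a Ponzi by construction.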
6. Synthetic Identities and KYC Bypass with AI
Cryptocurrency scams often involve a web of fake identities – not just the people being impersonated to victims, but also the accounts and entities the scammers use. AI now allows fraudsters to generate entire synthetic identities on demand, bypassing verification measures that were meant to weed out imposters. This has two major implications: (1) Scammers can open accounts on exchanges or services under false names more easily, and (2) they can lend an air of legitimacy to their scam websites by populating them with AI-generated “team members” or testimonials.
On the compliance side, many crypto platforms require KYC (Know Your Customer) checks – e.g. upload a photo ID and a selfie. In response, criminals have started using AI tools to create fake IDs and doctored selfies that can pass these checks. A common approach is using AI image generators or deepfake techniques to combine elements of real IDs or synthesize a person’s likeness that matches the name on a stolen ID. There was a recent anecdote in Decrypt of people using basic AI to generate fake driver’s license images to fool exchanges and banks. Even biometric verifications aren’t safe: AI can output a lifelike video of a person holding an ID or performing whatever motion the system requires. Essentially, a scammer could sit at their computer and have an AI puppet a fictional person to open accounts. These accounts are then used to launder stolen crypto or to set up scam platforms. By the time investigators realize “John Doe” who withdrew millions is not real, the trail has gone cold.
Likewise, when promoting scams, having fake “verified” identities helps. We touched on AI-generated CEOs in Section 5 – it’s part of a broader trend. Scammers can populate LinkedIn with employees who don’t exist (using AI headshots and auto-generated CVs), create fake user reviews with GAN-generated profile pics, and even generate fake customer support agents. Some victims have reported chatting with what they thought was an exchange support rep (perhaps via a pop-up chat on a phishing site), and the agent had a realistic avatar and name. Little did they know it was likely an AI bot backed by a fictitious persona. ThisPersonDoesNotExist (an AI tool that generates random realistic faces) has been a boon for fraudsters – every time a scam account or profile is flagged, they just generate a new unique face for the next one, making it hard for spam filters to keep up.
Even outside of scams targeting end-users, AI-aided identity fraud is facilitating crimes. Organised rings use deepfakes to fool banks’ video-KYC procedures, enabling them to set up mule accounts or exchange accounts that can convert crypto to cash under a false identity. In one case, Europol noted criminals using AI to bypass voice authentication systems at banks by mimicking account holders’ voices. And law enforcement now sees evidence that crypto scam proceeds are paying for these AI “identity kits” – TRM Labs traced crypto from pig butchering victims going to an AI service provider, likely for purchasing deepfake or fake ID tools. It’s a full criminal ecosystem: buy a fake identity, use it to set up scam infrastructure, steal money, launder it through exchanges opened with more fake IDs.
How to defend against synthetic identity scams: For individual users, this is less about something you might directly encounter and more about being aware that photos or “documentation” can be faked. If you’re dealing with a new crypto platform or service, do some due diligence: Is the team real and verifiable? If you video-call a “financial advisor” and something seems off (e.g., slight facial oddities), consider that they might not be who they claim. For companies, the onus is on strengthening KYC and fraud detection – e.g., using AI to fight AI, like checking if an ID photo is generated or if a selfie is a deepfake (there are algorithms that can detect subtle artifacts). As a user, one actionable tip is to protect your own identity data. Scammers often train their deepfake models on whatever info they can find about you online. Limiting what you share (e.g., don’t post videos of yourself publicly if avoidable, and keep profiles private) can reduce the raw material available to bad actors. Also, enable and insist on security measures beyond just ID checks – for instance, some banks will have you confirm a random phrase on video (harder for a deepfake to do on the fly, though not impossible). Ultimately, as Altman suggests, the way we verify identity needs to evolve. Multifactored and continuous verification (not just one snapshot) is safer. For now, as a consumer, prefer services that have robust security and be skeptical if an individual or site demands your personal documents or info without solid rationale. If you suspect an account or profile is fake (maybe a brand-new social profile contacting you about crypto investing), err on the side of caution and disengage. The less opportunity you give scammers to use fake identities on you, the better.
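The "random phrase on video" countermeasure mentioned above can be sketched in a few lines. The wordlist here is a toy assumption; a real deployment would draw from a large list such as the EFF diceware words:

```python
import secrets

# Toy wordlist for illustration; use a large published list in practice.
WORDS = ["amber", "canyon", "falcon", "harbor", "lantern", "meadow",
         "orbit", "pebble", "quartz", "saddle", "timber", "velvet"]

def challenge_phrase(n_words: int = 4) -> str:
    """Generate an unpredictable phrase the caller must repeat live on video.
    Because the phrase is chosen at verification time, a pre-rendered
    deepfake clip cannot possibly contain it."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(challenge_phrase())  # e.g. "quartz meadow amber saddle"
```

Real-time face-swap tools are closing this gap too, so treat a passed challenge as one signal among several, not as proof.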
7. AI-Powered Social Media Bots and Impersonators
Crypto scammers have long thrived on social media, from Twitter and Facebook to Telegram and Discord. Now, AI is turbocharging the bots and fake accounts that facilitate these scams, making them more effective and harder to distinguish from real users. If you’ve ever tweeted about crypto and gotten instant replies offering “support” or seen random friend requests from attractive people into crypto, you’ve likely encountered this problem. AI allows scammers to deploy armies of bots that are more believable than ever.
For one, generative AI lets each bot have a unique “personality.” Instead of 1,000 bots all posting the same broken-English comment about a giveaway, each can now produce unique, coherent posts that stay on a script but avoid obvious duplication. They can even engage in conversation. For example, on crypto forums or Telegram groups, an AI bot can infiltrate by blending in, chatting casually about the markets or latest NFTs, building credibility in the community. Then, when it DMs someone with a “great opportunity” or a phishing link, the target is less suspicious because they’ve seen that account being “normal” in the group for weeks. AI can also generate realistic profile pictures for these bots (using GANs or similar), so you can’t just do a reverse image search to catch a stolen photo. Many scam Twitter accounts nowadays sport AI-created profile pics – often of an appealing, friendly-looking person – with none of the telltale glitches that earlier AI images had. Even the bios and posts are AI-written to appear authentic.
Impersonation of legitimate accounts is another area where AI helps. We touched on deepfake video/voice impersonation, but on text-based platforms, the imposter might just copy the profile of a known figure or support desk. AI can assist by quickly mass-producing lookalike accounts (slightly misspelled handles, for instance) and generating content that matches the tone of the official account. When victims message these fake support accounts, AI chatbots can handle the interaction, walking them through “verification” steps that actually steal information. This kind of conversational phishing is much easier to scale with AI. In one noted scam, users in a Discord community got private messages from what looked like an admin offering help to claim an airdrop; an AI likely powered those chats to convincingly guide users through connecting their wallets – straight into a trap that stole their tokens. Chainalysis reported that AI chatbots have been found infiltrating popular crypto Discord/Telegram groups, impersonating moderators and tricking people into clicking malicious links or divulging wallet keys. The bots can even respond in real-time if someone questions them, using natural language, which throws off some of the usual tip-offs (like a long lag or irrelevant reply).
The scale is staggering – a single scammer (or small team) can effectively run hundreds of these AI-driven personas in parallel. They might use AI agents that monitor social media for certain keywords (like “forgot password MetaMask”) and automatically reply or DM the user with a prepared scam message. Before AI, they’d have to either do this manually or use crude scripts that were easily flagged. Now it’s all more adaptive. We also see AI being used to generate fake engagement: thousands of comments and likes from bot accounts to make a scam post or scam token seem popular. For instance, a fraudulent ICO might have dozens of “investors” on Twitter (all bots) praising the project and sharing their supposed profits. Anyone researching the project sees positive chatter and might be fooled into thinking it’s legit grassroots excitement.
How to fight social media bot scams: First, recognize the signs of bot activity. If you get an instant generic reply the moment you mention a crypto issue online, assume it’s malicious. Never click random links sent by someone who reached out unsolicited, even if their profile picture looks nice. Check profiles carefully: When was it created? Does it have a history of normal posts or is it mostly promotional? Often bots have brand-new accounts or weird follower/following ratios. On Telegram/Discord, adjust your privacy settings to not allow messages from members you don’t share a group with, or at least be wary of anyone messaging out of the blue. Official support will rarely DM you first. If someone impersonates an admin, note that reputable admins usually won’t conduct support via DM – they’ll direct you to official support channels. If a Twitter account claims to be support for a wallet or exchange, verify the handle against the company’s known handle (scammers love swapping an “0” for “O”, etc.). Utilize platform tools: Twitter’s paid verification is imperfect, but a lack of a blue check on a supposed “Binance Support” is a dead giveaway now. For Discord, communities sometimes have bot-detection tools – pay attention to admin warnings about scams and bots.
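The "0"-for-"O" handle trick described above can be caught mechanically by folding confusable characters before comparing. This is a minimal sketch with hypothetical handles; thorough detection would use the Unicode TS #39 confusables data rather than this four-character map:

```python
# Map a few common digit-for-letter swaps onto their letter lookalikes.
CONFUSABLES = str.maketrans("0135", "oles")

def skeleton(handle: str) -> str:
    """Reduce a handle to a 'skeleton' so lookalike spellings collide."""
    return handle.lower().lstrip("@").translate(CONFUSABLES).replace("_", "")

def is_suspicious_lookalike(candidate: str, official: str) -> bool:
    """Flag handles that differ from the official one as typed,
    but match it once confusable characters are folded."""
    return (candidate.lower().lstrip("@") != official.lower().lstrip("@")
            and skeleton(candidate) == skeleton(official))

# Hypothetical example: a scam account imitating a real support handle.
print(is_suspicious_lookalike("@Exchange5upp0rt", "@ExchangeSupport"))  # True
print(is_suspicious_lookalike("@ExchangeSupport", "@ExchangeSupport"))  # False
```

The same skeleton idea is what platform-side spam filters use at scale; as an individual, the manual equivalent is simply comparing the handle character by character against the one published on the company's official site.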
As users, one of the best defenses is a healthy cynicism about “friendly strangers” offering help or money online. Real people can be kind, but in the crypto social sphere, unsolicited help is more likely a con. So, if you’re flustered about a crypto problem, resist the urge to trust the first person who DMs you claiming they can fix it. Instead, go to the official website of the service in question and follow their support process. By denying scammers that initial engagement, their whole AI-bot advantage is nullified. And finally, report and block obvious bot accounts – many platforms improve their AI detection based on user reports. It’s an ongoing arms race: AI vs AI, with platforms deploying detection algorithms to counter malicious bots. But until they perfect that, staying vigilant and not engaging with probable bots will go a long way to keeping your crypto safe.
8. Autonomous “Agent” Scams – The Next Frontier
Looking ahead, the most unsettling prospect is fully automated scam operations – AI agents that conduct end-to-end scams with minimal human input. We’re already seeing early signs of this. An AI agent is essentially a software program that can make decisions and perform multi-step tasks on its own (often by invoking other AI models or software tools). OpenAI’s recently announced ChatGPT-powered agents that can browse, use apps, and act like a human online have raised both excitement and concern. Scammers are undoubtedly eyeing these capabilities to scale their fraud to new heights.
Imagine an AI agent designed for fraud: It could scan social media for potential targets (say, people posting about crypto investing or tech support needs), automatically initiate contact (via DM or email), carry on a realistic conversation informed by all the data it’s scraped about the person, and guide them to a scam outcome (like getting them to a phishing site or persuading them to send crypto). All the while, it adjusts its tactics on the fly – if the victim seems skeptical, the AI can change tone or try a different story, much as a human scammer might. Except this AI can juggle hundreds of victims simultaneously without fatigue, and operate 24/7. This is not science fiction; components of it exist now. In fact, TRM Labs warns that scammers are using AI agents to automate outreach, translation, and even the laundering of funds – for instance, summarizing a target’s social media presence to customize the con, or optimizing scam scripts by analyzing what has worked on past victims. There’s also talk of “victim persona” agents that simulate a victim to test new scam techniques safely. It’s a devious use of AI – scammers testing scams on AIs before deploying on you.
On the technical side, an AI agent can integrate with various tools: send emails, make VoIP calls (with AI voices), generate documents, etc. We could soon face automated phone scams where an AI voice calls you claiming to be from your bank’s fraud dept. and converses intelligently. Or an AI that takes over a hacked email account and chats with the victim’s contacts to request money. The combinations are endless. Sam Altman’s dire warning about adversarial AIs that could “take everyone’s money” speaks to this scenario. When AI can multi-task across platforms – perhaps using one GPT instance to talk to the victim, another to hack weak passwords, another to transfer funds once credentials are obtained – it becomes a full-fledged fraud assembly line with superhuman efficiency. And unlike human criminals, an AI doesn’t get sloppy or need sleep.
技術層面,AI agent可以對接唔同工具:發email、用AI語音打VoIP電話、生成文件等。好快我哋會遇到自動騙案電話——AI語音冒認銀行防詐部打畀你,同你有來有往咁傾計;或者AI淪陷咗人哋個email戶口之後,假扮受害人發訊息畀佢朋友搵錢。組合無窮無盡。Sam Altman就警告過,對抗性AI隨時會「攞晒所有人嘅錢」,講嘅正正係呢類情景。AI可以跨平台同步multi-task——可能一個GPT同受害人對話,另一個攻擊弱密碼,第三個攞到憑證之後負責轉錢——成條詐騙流水線連人都唔需要,效率「超人」。AI又唔會甩漏出錯,亦唔使瞓覺。
It’s worth noting that security experts and law enforcement are not standing still. They are exploring AI solutions to counter AI threats (more on that in the next section). But the reality is that the scalability of AI-driven scams will challenge existing defenses. Legacy fraud detection (simple rules, known bad keywords, etc.) may fail against AI that produces ever-variant, context-aware attacks. A big coordinated effort – involving tech companies, regulators, and users themselves – will be needed to mitigate this. Regulators have started discussing requiring labels on AI-generated content or better identity verification methods to counter deepfakes. In the interim, zero-trust approaches (don’t trust, always verify) will be crucial on an individual level.
當然,安全專家同執法單位都冇坐視不理,大家都積極研究AI安全技術(下節再詳談)。但AI大規模自動化騙局確實會衝擊現有防禦。依家傳統防詐(簡單規則、keyword黑名單等)對上會變化又識睇context嘅AI攻擊,可能會失效。真係要企業、監管、用戶合作大規模對抗。監管層已經傾緊要標註AI內容,或者推強身份認證去對抗deepfake。未有後著之前,用「零信任」原則(唔信、要查證)係自保關鍵。
Staying safe in the era of AI agents: Many of the tips already given remain your best armor – skepticism, independent verification, not oversharing data that agents can mine, etc. As AI agents arise, you should raise your suspicion for any interaction that feels slightly “off” or too formulaic. For instance, an AI might handle most of a scam chat but falter on an unexpected question – if someone ignores a personal question and continues pushing their script, be wary. Continue to use multi-factor authentication (MFA) on your accounts; even if an AI tricks you into revealing a password, a second factor (and especially a physical security key) can stop it from logging in. Monitor your financial accounts closely for unauthorized actions – AI can initiate transactions, but if you catch them quickly, you might cancel or reverse them. Importantly, demand authenticity in critical communications: if “your bank” emails or calls, tell them you will call back on the official number. No genuine institution will refuse that. As consumers, we may also see new tools emerge (perhaps AI-driven) for us to verify content – for example, browser plugins that can flag suspected AI-generated text or deepfake videos. Staying informed about such protective tech and using it will help level the playing field.
AI agent時代點自保:以上提過嘅方法,依然最有用——懷疑一切、多重查證、唔好亂放晒自己資料俾AI agent分析。見到網上有人同你傾偈,稍為感覺造作或者「太有劇本」,就要特別小心。例如AI script問到突發問題就可能露底,如果有人避開唔答你嘅私人問題,繼續催你做指定步驟,就唔好再信。保護賬戶最好用多重認證(MFA)——即使有人呃到你個密碼,有實體認證key都可以擋住對方登入。經常留意自己銀行、加密戶口有冇未經授權操作——AI可以自動發起交易,但你快手發現仲可以補救。有重要溝通,必須要求對方真身:銀行email或電話話有事,叫對方等等,自行打去官方客戶熱線核實先行動。
正規機構唔會拒絕你呢個要求。作為消費者,我哋可能會見到有新嘅工具出現(可能係AI驅動),等我哋可以驗證內容——例如有啲瀏覽器擴展功能,可以標記疑似AI生成文字或者deepfake影片。保持對呢啲防護科技嘅認識並採用佢哋,有助我哋同騙徒拉近技術差距。
Ultimately, in this AI arms race, human vigilance is paramount. By recognizing that the person on the other end might not be a person at all, you can adjust your level of trust accordingly. We’re entering a time when you truly can’t take digital interactions at face value. While that is disconcerting, being aware of it is half the battle. The scams may be automated, but if you automate your skepticism in response – treating every unsolicited ask as malicious until proven otherwise – you compel even the smartest AI con to surmount a very high bar to fool you.
最終,喺呢場AI軍備競賽之中,人類嘅警覺性係至關重要。當你意識到對面未必真係一個真人,你就可以調整自己對佢嘅信任程度。現時我哋已經進入咗一個唔可以單憑表面相信數碼互動嘅時代。雖然咁樣令人唔安心,但意識到呢點已經贏咗一半。騙局可以自動化,但如果你都「自動化你嘅懷疑精神」——用「除非證明唔係,否則全部陌生或突如其來嘅請求都當惡意」的態度應對,咁就連最聰明嘅AI行騙都要過一個非常高嘅門檻至可以呃到你。
9. How Authorities and Industry are Fighting Back
It’s not all doom and gloom – the same AI technology empowering scammers can be harnessed to detect and prevent fraud, and there’s a concerted effort underway to do just that. Blockchain analytics firms, cybersecurity companies, and law enforcement are increasingly using AI and machine learning to counter the wave of AI-powered crypto scams. It’s a classic cat-and-mouse dynamic. Here’s how the good guys are responding:
9. 當局同業界點樣反擊
情況唔一定咁悲觀——用嚟助長騙徒嘅AI科技,其實都可以用嚟偵測同預防詐騙,依家業界都好積極咁做緊。現時,區塊鏈分析公司、網絡安全公司、同埋執法部門都越嚟越多用AI同機器學習,去對抗新一波AI驅動嘅加密貨幣騙局。呢個就係典型嘅貓捉老鼠遊戲。睇下「正義一方」有咩招數:
- AI-driven scam detection: Companies like Chainalysis and TRM Labs have integrated AI into their monitoring platforms to spot patterns indicative of scams. For instance, machine learning models analyze text from millions of messages to pick up linguistic cues of AI-generation or social engineering. They also track on-chain behaviors – one report noted that about 60% of deposits into scam wallets are now linked to AI usage. By identifying wallets that pay for AI services or exhibit automated transaction patterns, investigators can flag likely scam operations early. Some anti-phishing solutions use AI vision to recognize fake websites (scanning for pixel-level mismatches in logos or slight domain differences) faster than manual reviews.
- AI詐騙檢測:好似Chainalysis同TRM Labs呢啲公司,已經將AI集成入佢哋嘅監控平台,用嚟搵出有詐騙跡象嘅行為模式。例如,機器學習會分析幾百萬條信息嘅文字,捕捉AI生成或者社交工程手法嘅語言特徵。佢哋亦會追蹤鏈上行為——有報告指出,依家大約六成詐騙錢包嘅入錢,都同AI有關。識別到有啲錢包用嚟支付AI服務費,或者顯示自動化交易特徵,調查人員就可以及早標記係高危騙局。一啲防釣魚方案會用AI圖像分析,偵測假網站(例如掃描Logo像素位唔吻合、網址有微細分別等),快過人手審查。
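As an illustration of the “automated transaction patterns” heuristic mentioned above, here is a minimal sketch with hypothetical data and thresholds (not any vendor’s actual model): it flags a wallet whose transfer intervals are machine-regular, since human activity tends to be bursty while bot-driven wallets often show near-constant timing.

```python
from statistics import mean, stdev

def looks_automated(timestamps, min_txs=6, cv_threshold=0.1):
    """Flag a wallet whose transfers arrive at near-constant intervals.

    cv_threshold is the maximum coefficient of variation (stdev/mean
    of the gaps between transfers) treated as 'too regular'. All
    thresholds here are illustrative guesses, not production values.
    """
    if len(timestamps) < min_txs:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg == 0:
        return True  # many transfers in the same instant: clearly scripted
    return stdev(gaps) / avg < cv_threshold

# Hypothetical Unix timestamps: one transfer every ~3600 s like clockwork
bot_like = [1_700_000_000 + i * 3600 + (i % 2) for i in range(10)]
# Irregular, human-looking activity
human_like = [1_700_000_000, 1_700_004_000, 1_700_020_000,
              1_700_021_000, 1_700_090_000, 1_700_200_000, 1_700_201_500]

print(looks_automated(bot_like))    # regular gaps: True
print(looks_automated(human_like))  # irregular gaps: False
```

Real systems combine many such signals (counterparty clustering, AI-service payments, amount patterns); this shows only the timing-regularity idea.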
- Authentication improvements: In light of Altman’s comments that voice and video can’t be trusted, institutions are moving toward more robust authentication. Biometrics may shift to things like device fingerprints or behavioral biometrics (how you type or swipe) that are harder for AI to mimic en masse. Regulators are nudging banks and exchanges to implement multi-factor and out-of-band verification for large transfers – e.g., if you request a big crypto withdrawal, maybe a live video call where you have to perform a random action, making it harder for a deepfake to respond correctly. The Fed and other agencies are discussing standards for detecting AI impersonation attempts, spurred by cases like the $25M deepfake CFO scam.
- 認證改進:Altman都講過,聲音同影片唔可以再盡信,機構開始轉向更強嘅認證方式。生物認證可能會轉用裝置指紋、或者行為生物識別(例如你打字、滑動嘅手法),AI要大規模仿製就難好多。監管機構都推動銀行同交易所,進行重大轉帳時加入多重同帶外(out-of-band)認證——例如大額加密提現,可能要用實時視像通話,加即場做隨機指定動作,咁deepfake就難以即時應對。因為出現過deepfake假冒CFO呃走2,500萬美元嘅案件,聯儲局同其他部門現正討論點樣制定偵測AI冒充嘅標準。
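The “perform a random action on a live call” idea can be sketched as a simple liveness challenge: the verifier picks actions a pre-rendered deepfake could not have anticipated, using a cryptographically secure random source. The action list and function names are illustrative assumptions, not any institution’s real protocol:

```python
import secrets

# Illustrative action pool - a real system would rotate and expand these.
ACTIONS = [
    "turn your head slowly to the left",
    "hold up three fingers",
    "cover one eye with your hand",
    "read this one-time phrase aloud",
]

def make_liveness_challenge(n=2):
    """Pick n distinct random actions the caller must perform live.

    A pre-rendered deepfake cannot know in advance which actions will
    be requested; secrets.SystemRandom gives CSPRNG-backed sampling,
    so the challenge is unpredictable to an attacker.
    """
    return secrets.SystemRandom().sample(ACTIONS, n)

challenge = make_liveness_challenge()
print(challenge)  # two randomly chosen actions from the pool
```

Real-time face-swap tools are improving at responding to such prompts, so this raises the bar rather than guaranteeing detection; it should be layered with out-of-band confirmation.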
- Awareness campaigns: Authorities know that public education is crucial. The FBI, Europol, and others have released alerts and held webinars to inform people about AI scam tactics. This includes practical advice (much of which we’ve echoed in this article) such as how to spot deepfake artifacts or phishy AI-written text. The more people know what to look for, the less effective the scams. Some jurisdictions are even considering mandated warning labels – for example, requiring political ads to disclose AI-generated content; such policies could extend to financial promotions as well.
- 宣傳教育活動:有關當局好清楚,提升公眾認知好重要。FBI、歐洲刑警組織等發出過多次警示,又開網上講座,教大家識別AI詐騙手法,包括好多實用貼士(我哋本文都提及過),例如點樣判斷deepfake嘅偽裝特徵,或者點睇得出AI寫嘅phishing文本。愈多人識得分辨,騙局效果就愈低。一啲司法轄區仲考慮強制加警示標籤——例如要求政治廣告一定要聲明AI生成,將來呢類政策甚至可能延伸到金融推廣。
- Legal and policy measures: While technology moves fast, there’s talk of tightening laws around deepfake abuse. A few U.S. states have laws against deepfakes used in elections or impersonating someone in a crime, which could be applied to scam cases. Regulators are also examining the liability of AI tool providers – if a product like WormGPT is clearly made for crime, can they go after its creators or users? In parallel, mainstream AI companies are working on watermarking AI-generated outputs or providing ways to verify authenticity (OpenAI, for instance, has researched cryptographic watermarking of GPT text). These could help distinguish real from AI if widely adopted.
- 法律與政策手段:雖然科技發展快,但針對deepfake濫用,已經有新一輪收緊法例嘅討論。美國部分州份已針對選舉用deepfake或冒充罪行立法,未來可以應用於詐騙案。監管機構亦研究緊AI工具開發者嘅責任——例如WormGPT明顯係造嚟犯罪,追究開發者或用戶做唔做得到?同時,大型AI公司都搞緊內容加水印或識別真偽(例如OpenAI研究過GPT文本嘅加密水印)。如果普及化,有助大家區分邊啲係真資訊邊啲係AI造。
- Collaboration and intelligence-sharing: One silver lining is that the threat of AI scams has galvanized cooperation. Crypto exchanges, banks, tech platforms, and law enforcement have been sharing data on scammer addresses, known deepfake tactics, and phishing trends. For example, if an exchange notices an account likely opened with fake credentials, they might alert others or law enforcement, preventing that same identity from being reused elsewhere. After major incidents, industry groups conduct post-mortems to learn how AI was leveraged and disseminate mitigation strategies.
- 合作同情報共享:正面啲講,AI騙局威脅令唔同行業齊心協力。加密交易所、銀行、科技平台、執法機構,有共享騙徒錢包地址、已知deepfake手法同phishing趨勢。例如某交易所發現假文件開戶,就會通知業界或者執法部門,阻止同一假身份喺其他地方再用。重大事故後,業界組織會檢討AI點被利用,並分享減少風險嘅做法。
- Victim support and intervention: Recognizing that many victims are too embarrassed to report (recall only ~15% report losses), some agencies have become proactive. The FBI’s Operation Level Up in 2024 actually identified thousands of likely pig butchering victims before they realized they were scammed, by analyzing financial flows, and managed to prevent an additional $285 million in losses by warning them in time. In other words, better detection allowed intervention in real-time. More such initiatives, possibly AI-aided, can save would-be victims by spotting the scam patterns earlier in the cycle (e.g., unusual repetitive transactions to a fake platform).
- 受害者支援及介入:考慮到唔少受害人因為尷尬未必肯報警(只有約15%有報案),有啲機構採取主動出擊。2024年美國FBI嘅「Level Up行動」就憑分析資金流,喺幾千名可能嘅殺豬盤(pig butchering)受害者發覺自己被騙之前搵到佢哋,及時警告,額外阻止咗2.85億美元損失。換句話說,偵測得快就可以實時介入。多啲咁嘅主動計劃(可能都會用AI協助),有助更早喺騙局循環中發現疑點(例如不尋常咁重複轉帳去假平台),救到更多人。
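The “unusual repetitive transactions” pattern can be sketched as a simple rule: flag destinations that receive repeated, strictly escalating deposits from one sender, the classic pig-butchering shape (small “wins” first, then ever-larger buy-ins). Data, names, and thresholds below are illustrative only, not the FBI’s actual methodology:

```python
from collections import defaultdict

def flag_escalating_deposits(transfers, min_count=4):
    """Return destination addresses receiving repeated, steadily growing
    deposits from the same sender.

    transfers: list of (sender, destination, amount) tuples in
    chronological order. A strictly increasing run of min_count or
    more deposits to one destination is treated as suspicious.
    """
    history = defaultdict(list)
    for sender, dest, amount in transfers:
        history[(sender, dest)].append(amount)
    flagged = set()
    for (sender, dest), amounts in history.items():
        if len(amounts) >= min_count and all(
            later > earlier for earlier, later in zip(amounts, amounts[1:])
        ):
            flagged.add(dest)
    return flagged

# Hypothetical demo data: alice's deposits escalate 100 -> 10,000
demo = [
    ("alice", "0xSCAM", 100), ("alice", "0xSCAM", 500),
    ("alice", "0xSCAM", 2_000), ("alice", "0xSCAM", 10_000),
    ("bob", "0xSHOP", 40), ("bob", "0xSHOP", 40),
]
print(flag_escalating_deposits(demo))  # {'0xSCAM'}
```

Catching this shape early is exactly what lets investigators warn a victim before the final, largest “investment” is sent.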
In the end, defeating AI-assisted scams will truly take a village: technology defenses, informed users, updated regulations, and cross-border law enforcement cooperation. While scammers have proven adept at integrating AI into their workflow, the countermeasures are ramping up in parallel. It’s an ongoing battle, but not a lost one. By staying aware of both the threats and the solutions emerging, the crypto community can adapt. Think of it this way – yes, the scammers have powerful new tools, but so do we. AI can help sift through massive amounts of data to find needles in the haystack (like clustering scam wallet networks or detecting deepfake content). It can also help educate, through AI-powered training simulations that teach people how to respond to scam attempts.
講到底,打擊AI詐騙要靠「眾志成城」:包括技術防線、用戶有知識、法規更新、執法跨境合作。騙徒證明咗好識將AI融入佢哋嘅流程,而對策亦正同步加強。呢場仗仲打緊,但未輸。只要同時緊貼風險同新出嘅解決方案,加密社群就可以不斷適應。咁諗啦——騙徒有強大新武器,我哋都一樣。AI可以幫手喺海量數據中搵針(例如將詐騙錢包網絡分組、偵測deepfake內容),亦可以協助教學,例如用AI模擬騙案,訓練大家識得點樣應對。
10. Protecting Yourself: Key Takeaways to Stay Safe
Having explored the myriad ways AI is being abused to steal crypto, let’s distill some practical protection tips. These are the habits and precautions that can make you a hard target, even as scams evolve:
10. 自保要點:保持安全嘅重點建議
了解過AI點樣畀人濫用去呃加密資產,以下係精練出來嘅實用自保貼士。養成呢啲習慣同注意事項,即使騙案變種都冇咁易中招:
- Be skeptical of unsolicited contact: Whether it’s an unexpected video call from a “friend,” a DM offering help, or an email about an investment opportunity, assume it could be a scam. It’s sad we have to think this way, but it’s the first line of defense. Treat every new contact or urgent request as potentially fraudulent until verified through a secondary channel.
- 小心一切突如其來嘅聯絡:無論係「朋友」突然視像Call你、有人DM主動幫你、或收到咩投資電郵,都要假設有機會係詐騙。雖然咁諗有啲無奈,但呢個係第一重防線。所有新聯絡或急切要求,都要用第二個途徑確認清楚先信。
- Verify identities through multiple channels: If you get a communication supposedly from a known person or company, confirm it using another method. Call the person on a known number, or email the official support address from the company’s website. Don’t rely on the contact info provided in the suspicious message – look it up independently.
- 多方核實身份:如果有人話自己係你識嘅人或公司,記得用唔同渠道再查一查。打電話去對方已知電話、或者自己上官網搵官方電郵去問。千祈唔好直接用可疑訊息入面嘅聯絡資料,最好自己獨立搵資料。
- Slow down and scrutinize content: Scammers (human or AI) rely on catching you off-guard. Take a moment to analyze messages and media. Check for the subtle signs of deepfakes (strange visual artifacts, lip-sync issues) and phishing (misspelled domains, unnatural requests for credentials). If something seems even slightly “off” about a message’s context or wording given who it claims to be from, trust your gut and investigate further.
- 冷靜分析、細心檢查內容:騙徒(無論真人定AI)都想捉你唔覺意。記住停一停,認真睇清楚每條訊息或影片。留意deepfake啲細微異常(例如畫面異常、嘴唇對唔上),釣魚訊息通常有錯字網址、要求不自然等。如果內容語氣或情境有半點唔對勁,記住靠直覺追查多啲。
- Use strong security measures: Enable two-factor authentication (2FA) on all crypto accounts and emails. Prefer app-based or hardware 2FA over SMS if possible (SIM-swap attacks are another risk). Consider using a hardware wallet for large holdings – even if a scammer tricks you, they can’t move funds without the physical device. Keep your devices secure with updated software and antivirus, to guard against any malware that does slip through.
- 用強安全措施:所有加密帳號同電郵都開2FA。最好用認證App或者硬件2FA,好過SMS(因為SIM轉移又係一個風險)。大量資金可以考慮用硬件錢包——就算騙徒呃倒你,冇實體裝置都搞唔到。手機/電腦要定時升級軟件同安裝殺毒,避免中惡意程式。
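Those app-based 2FA codes are typically TOTP (RFC 6238): an HMAC-SHA1 over a 30-second time counter, truncated to six digits. A self-contained sketch using only the Python standard library, checked against the RFC’s published test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, at=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the big-endian time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII key "12345678901234567890"
# (base32 below), time 59 s, 8 digits -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because the code depends on a shared secret and the current time, a phished password alone is useless to the attacker once the 30-second window passes; a hardware security key goes further still by binding the login to the genuine domain.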
- Keep personal info private: The less scammers can learn about you online, the less material their AI has to work with. Don’t share sensitive details on public forums (like email, phone, financial info). Be cautious with what you post on social media – those fun personal updates or voice clips could be harvested to target you with AI (for example, training a voice clone). Also, check privacy settings to limit who can message you or see your content.
- 保密個人資料:你上網留少啲足跡,騙徒AI就少啲材料。唔好喺公開討論區留個人資料(如電郵、電話、財務等)。社交媒體出Post要小心——啲私人相、語音分享都可能畀AI攞去訓練聲音Clone等。記住Check清楚私隱設定,限制邊啲人可以PM你或者睇到你啲內容。
- Educate yourself and others: Stay informed about the latest scam trends. Read up on new deepfake techniques or phishing strategies so you’ll recognize them. Share this knowledge with friends and family, especially those less tech-savvy, who might be even more at risk. For instance, explain to older relatives that AI can fake voices now, so they should always verify an emergency call. Empower everyone around you to be more vigilant.
- 增進自己同身邊人知識:緊貼最Update嘅騙案套路,了解新嘅deepfake手法或者釣魚技術。多啲同親友分享,特別係年紀大啲或唔熟科技嗰啲,佢哋更易中招。例如你要教長輩,依家AI已經可以假聲,所以收到緊急電話都要再核實。用知識武裝每個人,大家都多一重防護。
- Use trusted sources and official apps: When managing crypto, stick to official apps and websites. Don’t follow links sent to you – manually type the exchange or wallet URL. If you’re exploring new projects or bots, thoroughly research their credibility (look for reviews, news, the team’s background). Download software only from official stores or the project’s site, not from random links or files sent to you.
- 只用信得過同官方渠道:管理加密資產時,只用官方App同網站。唔好點人Send畀你嘅Link,記得手動入網址。遇到新項目或機械人,要查清楚口碑、新聞、團隊背景等。下載軟件時,記得淨係去官方商店或官網,唔好亂Click別人Send嘅連結或檔案。
- Leverage security tools: Consider browser extensions or services that block known phishing sites. Some password managers will warn you if you’re on an unknown domain that doesn’t match the saved site. Email providers increasingly use AI to flag likely scam emails – heed those warnings. There are also emerging deepfake detection tools (for images/videos); while not foolproof, they can provide another layer of assurance if you run a suspicious video through them.
- 靈活運用安全工具:可以試下用阻截已知釣魚網站嘅瀏覽器擴展。部分密碼管理器見你進入陌生網站時會警告。愈嚟愈多電郵商用AI標記可疑郵件——一定要留意呢啲警告。仲有新興deepfake檢測工具(圖片/影片),雖然未100%保證,但查可疑片段多一層防護都好。
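One common technique behind such phishing blockers is lookalike-domain detection: measure the edit distance between a visited domain and a list of trusted ones, and warn on a near-miss. A minimal sketch (the trusted list and threshold are illustrative assumptions):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative whitelist - a real tool would ship a large, curated list.
TRUSTED = ["binance.com", "coinbase.com", "kraken.com"]

def lookalike_warning(domain, max_dist=2):
    """Warn when a domain is suspiciously close to, but not equal to,
    a trusted domain - the signature of typosquatting."""
    for real in TRUSTED:
        d = edit_distance(domain.lower(), real)
        if 0 < d <= max_dist:
            return f"'{domain}' resembles '{real}' (distance {d}) - possible phishing"
    return None

print(lookalike_warning("binance.com"))   # None: exact match is safe
print(lookalike_warning("blnance.com"))   # flagged: one letter swapped
```

Production tools add homoglyph handling (e.g. Cyrillic “а” vs Latin “a”) and reputation data on top of this distance check.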
- Trust, but verify – or better yet, zero-trust: In crypto, a healthy dose of paranoia can save your assets. If a scenario arises where you must trust someone (say, an OTC trade or a new business partner), do thorough due diligence, and for major deals consider insisting on an in-person meeting. When your money is on the line, double-checking never hurts. As the saying goes, “Don’t trust, verify” – originally about blockchain transactions, it applies just as well to communications now.
- 信任但要驗證——甚至零信任:喺加密世界,「小心一點」可以救資產一命。如果一定要信對方(例如場外交易或者新生意夥伴),要做好詳細嘅盡職調查,如果涉及重大交易,甚至可以堅持要求對方親身見面。涉及你嘅金錢,double check無壞。俗語有云,“Don’t trust, verify”——本身係講區塊鏈交易,而家用嚟形容溝通同樣啱用。
- Report and seek help if targeted: If you encounter a scam attempt, report it to the platform (they do act on and remove offending accounts), and file a report with services such as Chainabuse or your government’s fraud-reporting site. Doing so helps the community and aids investigations. If you do fall victim, contact law enforcement immediately – recovering funds is difficult, but the earlier you report, the better the odds. Details from your case may also protect others from falling into the same trap.
- 如果成為受害目標,記得報告同求助:遇到詐騙企圖,要向平台舉報(佢哋真係會處理同刪除有問題嘅賬戶),同埋可以向例如Chainabuse或者政府嘅詐騙舉報網站報案。咁樣可以幫助社群,同時有助調查。如果唔好彩真係中招,記得即刻聯絡執法部門——雖然追回損失有難度,但越早報案越有機會追返。仲有,你個案嘅資料都可能幫到其他人,令佢哋唔會墮入陷阱。
總結嚟講,AI驅動嘅加密貨幣詐騙日益嚴重,但都唔係無法應對。只要認清騙徒常用手法——deepfake、聲音仿冒、AI chatbot、假AI投資產品等等——你就可以預判佢哋動向,避開陷阱。科技進步,騙術都會不斷升級:今日有deepfake片同GPT寫嘅電郵,聽日可能有更加高明嘅詐騙。不過話晒,99%嘅詐騙都係想你做一啲對自己唔利嘅事——匯錢、泄露機密、繞過安全措施。呢個時候,就要停一停,運用你學過嘅知識。保持警覺,保持資訊更新,你就有能力連最智能嘅AI騙局都擊敗。你最大嘅防線,始終都係你自己嘅批判思維。喺人工虛假盛行嘅世界,真正嘅懷疑精神最值錢。

