Everyday crypto investors are facing a frightening new threat: scam rings armed with advanced artificial intelligence. AI-enabled crypto fraud has exploded over the past year – reports of scams involving generative AI jumped 456% between mid-2024 and mid-2025.
Fraudsters now use AI to create lifelike fake videos, voices, and messages, making their crypto scams more convincing and harder to detect. According to the FBI, nearly 150,000 crypto-fraud complaints were filed in 2024, with $3.9 billion lost in the United States alone. Globally, crypto scam losses topped $10.7 billion in 2024 – and that may be only the tip of the iceberg, since an estimated 15% of victims ever report the crime.
Even OpenAI CEO Sam Altman has warned that AI has "fully defeated" most of the ways people authenticate themselves, and that a "significant fraud crisis" may be coming. This article breaks down the ten most prevalent AI-driven crypto scams, uses real cases to show how they work, and, most importantly, explains how to protect yourself.

Chart: The share of crypto scam transactions involving AI tools has grown sharply since 2021; by 2025, roughly 60% of scam deposit addresses were linked to AI-assisted schemes – a sign of how quickly criminal groups have adopted AI to scale their operations.
1. Deepfake Celebrity Endorsement Scams (Fake Videos and Livestreams)
One of the most prolific AI crypto scams is impersonating celebrities to promote fraudulent schemes. Scammers use AI deepfakes to digitally impersonate famous tech and crypto figures – Elon Musk is the favorite – in fake videos promoting giveaways or bogus investment platforms. For example, criminals hijack popular YouTube channels and livestream doctored videos of Musk (or others, such as Ripple's Brad Garlinghouse or MicroStrategy's Michael Saylor) appearing to promise to "double your Bitcoin" or deliver other impossibly high returns. In reality, both the video and the audio are AI-fabricated; the real person never said any of it. Viewers who send crypto to the advertised address are simply robbed.
The technique builds on older scams that recycled genuine interview footage; deepfakes go further, synthesizing entirely new audio and video so the celebrity appears to personally endorse the scam site. At first glance these fake livestreams look remarkably convincing. Chainabuse (a public scam-reporting site) received reports of one deepfake Musk stream in mid-2024 that hooked multiple victims within minutes and netted the scammers significant funds. Blockchain analysis showed that addresses tied to these scams received at least $5 million between March 2024 and January 2025, and another deepfake Musk stream touting an "AI trading platform" collected a further $3.3 million. According to an investigation by the security firm Sensity, Musk is the celebrity most often deepfaked in investment scams, because his image carries credibility and he talks about crypto frequently – scammers simply piggyback on his trustworthiness.
Worse still, live deepfake technology can now swap faces during real-time video calls. In other words, a scammer can become someone else on a Zoom call. In one case, criminals used AI face-swapping to impersonate a company's CFO and colleagues on a call and convinced an employee to transfer $25 million. The deepfaked executives looked and sounded completely real – a nightmare scenario for any business. As OpenAI's Altman put it, voice and video deepfakes have become "indistinguishable from reality." For ordinary crypto investors, a deepfake video of a trusted CEO or project founder can be extremely persuasive.
How to spot and avoid it: Treat any "too good to be true" crypto promotion with intense skepticism, even when a celebrity fronts it. Scrutinize the footage – deepfakes often show subtle anomalies such as unnatural blinking, odd mouth movements, or mismatched lighting around the face. Listen for a robotic vocal cadence or audio artifacts. If a famous figure suddenly goes "live" promising guaranteed profits or running a giveaway (especially one that asks you to send crypto first), it is essentially always a scam. Check official sources: Musk, for one, has repeatedly said he does not run crypto giveaways. Never send money to a random address shown in a video. If in doubt, pause – scammers depend on victims acting on impulse and urgency. Spending a few minutes verifying the event through official or news channels (it will almost always prove fake) is enough to keep you out of these deepfake traps.
2. Deepfake Voice and Video Impersonation Scams (Fake Executives or Loved Ones)
AI deepfakes are not limited to celebrity impersonation; scammers use the same technology to pose as people close to you or as trusted authorities. In these schemes, criminals deceive victims with AI-synthesized voices and video doubles. One chilling variant is the "family emergency" scam: the victim gets a call and hears a grandchild or spouse frantically begging for money (often in crypto or by wire transfer). In reality, the relative never called – the scammers may only have scraped audio from the victim's social media videos, used AI to clone the voice and intonation, and staged the fake plea. Such voice-cloning cases have occurred worldwide, and they exploit basic human instincts. With today's AI tools, a few seconds of audio are enough to produce a convincing clone.
Another variant targets businesses and investors by impersonating executives or partners. As mentioned above, in early 2024 scammers impersonated the CFO of the multinational firm Arup on a video conference – and that case is hardly unique. In Hong Kong, a bank employee followed instructions from a "boss" that was actually an AI-cloned voice and was defrauded of millions. Scammers have also posed as startup founders and crypto exchange executives, jumping on video calls with investors and users to trick them into sending payments or surrendering security codes. These "CEO impersonation" scams feed on the fact that business interaction has moved online. If your boss's face and voice check out on Zoom, why would you doubt them? Real-time face-swapping software can overlay a target's face directly onto the scammer's, so it genuinely looks like the real person speaking. Victims are then told to wire funds or hand over confidential credentials. Many companies only discover the fraud after the money is gone.
Scammers even combine techniques for long-term grooming: "pig butchering" crews have started using brief live video calls to prove their fake personas are real. They will use a deepfaked attractive face, or hire an actor for a short call, to win trust before continuing the con over text. TRM Labs researchers found scam syndicates paying to "rent" real-time deepfake services – evidence that demand for malicious deepfakes is growing. In one case, a crew spent part of its stolen crypto on AI services and went on to defraud victims of more than $60 million; the investment in deepfake technology clearly paid off for them.
The key to protecting yourself: always verify identity through a second channel. If you suddenly receive a call or video request for money or sensitive information, do not comply on the spot, no matter how real the person looks and sounds. Hang up, then call back on a number you already know, or check with someone else who knows them. If your "boss" messages you to make a transfer, confirm by phoning them directly or asking a colleague. Families can agree on an emergency code word – if the caller can't produce it, you know it's a scam. Be wary of any face that looks slightly off (too smooth, strange eyes or hair) or a voice with noticeable lag or a mechanical quality. Any payment instruction that arrives by video or email should be reconfirmed by phone or in person. As Altman observed, traditional safeguards like voiceprint authentication are no longer reliable. Treat any unverified, out-of-the-blue request for urgent transfers or crypto with extreme suspicion. A few extra minutes spent confirming identity will often save you or your company from a very expensive deepfake scam.
3. AI-Enhanced Romance and Investment Scams ("Pig Butchering")
So-called pig butchering scams – in which fraudsters invest time building an online relationship before draining the victim's savings – have become far deadlier with AI. In the classic setup, scammers cultivate a romance or friendship on social media or dating apps, spend weeks or months earning trust, then pitch a "can't-miss" crypto investment that lures the victim into depositing funds, all of which turns out to be fake. For the scammer this is labor-intensive: it takes sustained conversation and playing the role of lover or mentor. Now AI chatbots and deepfakes are turning what was once one-victim-at-a-time deception into industrial-scale scam factories with global reach.
Large scam syndicates now deploy AI large language models (LLMs) to automate most of their communication with targets. Using tools like ChatGPT – or illicit uncensored variants like "WormGPT" and "FraudGPT" – they can generate fluent, charming messages in any language, 24/7. This means one scammer can manage dozens of victims concurrently, with AI crafting individualized loving texts, market analysis, or whatever the script requires. In fact, a 2023 investigation by Sophos found pig butchering groups had begun using ChatGPT to write their chats; one victim even received a strange pasted message that accidentally revealed it was AI-generated. That slip aside, LLMs let scammers "hyper-personalize" their approach – adjusting tone and content to perfectly suit each victim's background and emotional state. The days of broken English or copy-paste scripts are over. With AI, the texts feel genuine, making victims even more susceptible to the eventual pitch.
AI is also breaking the language barrier that once limited these scams. Originally, many pig butchering rings were based in Southeast Asia targeting Chinese-speaking victims. Expansion to Western victims was hampered by scammers’ weaker English skills – awkward grammar was often a red flag. Now, LLM-based translation and writing lets a non-native speaker seamlessly scam someone in English, German, Japanese, or any lucrative market. Scammers feed incoming messages to an AI for translation and reply generation, enabling them to pose as cosmopolitan investors or romantic partners even in languages they don’t speak. This has vastly expanded the pool of targets. Well-educated professionals in Europe or North America who might have dismissed clumsy scam messages can now receive polished, perfectly localized correspondence from an “attractive entrepreneur” who befriended them online. The result? More victims fattened up (like “pigs”) for the eventual slaughter.
And let’s not forget deepfakes in pig butchering. While most of the grooming occurs via text, scammers sometimes schedule brief video calls to allay suspicions. Here they increasingly use face-swapped deepfake videos – for instance, a scammer may hire a woman to appear on camera but use AI to replace her face with the stolen photos they’ve been using in the profile. This “proof of life” strategy convinces the victim that their online sweetheart is real. (Some operations even advertise for “real face models” who, with the help of AI filters, appear as the victim’s dream partner on video.) Once trust is secured, the scammers direct victims to invest in bogus crypto platforms or “liquidity mining” programs, often showing fake profit screenshots to entice larger deposits. Victims have been known to refinance homes or drain 401(k)s under the illusion of a life together with their scammer or huge returns just ahead. It’s absolutely devastating – and AI only amplifies the deception.
How to avoid pig butchering scams: Maintain a healthy skepticism about online-only relationships or mentorships that start randomly and become unusually intense. If someone you’ve never met in person is guiding you to invest in crypto or asking for financial help, that’s a glaring red flag. Reverse-image search profile pictures to see if they’re stolen (many pig-butchering scammers use photos of models or other victims). Be wary of video calls where the person’s camera is oddly low quality or they won’t fully show their face – they might be hiding a deepfake anomaly. Pressure to invest quickly or claims of insider knowledge are signs of a con. Also, remember legitimate investment professionals or romantic partners do not typically promise guaranteed profits. The FBI and other agencies have issued formal warnings about pig butchering and crypto romance scams – so educate yourself and others about how these scams operate. If you suspect someone might be a scammer, cut off contact and never send them money or crypto. And if you’ve been targeted, report it (anonymously if needed) to help authorities track these rings. Awareness is your best defense, so that no matter how slick the AI-driven sweet talk is, you won’t fall for the trap.
4. AI-Written Phishing Emails and Messages (Smarter Scams at Scale)
Phishing – those fraudulent emails or texts that trick you into clicking a malicious link or giving up private information – has long been a numbers game. Scammers blast out thousands of generic messages (“Your account is at risk, login here...”) hoping a few people take the bait. Now, AI is making phishing far more convincing and more targeted. With tools like generative language models, attackers can easily craft personalized, fluent messages that mimic the style of genuine communications, dramatically upping their success rate.
Large Language Models (LLMs) can generate phishing lures that are almost indistinguishable from a real email from your bank, crypto exchange, or friend. They eliminate the grammar mistakes and awkward phrasing that often gave away older scams. For example, an AI can be instructed to write an urgent email from “[Your Crypto Exchange] Security” warning you of a withdrawal and providing a link to “secure your account.” The text will be polished and on-brand. Scammers also use AI to scan your social media and tailor messages precisely – referencing recent transactions or naming a friend – a technique known as spear phishing. This level of customization used to take significant effort, but an AI agent can do it in seconds by scraping public data.
There are even underground AI tools explicitly built for cybercrime. Since public models like ChatGPT have filters against illicit use, criminals have developed or bought black-market LLMs like “WormGPT” and “FraudGPT” that have no such restrictions. These malicious AIs, available on dark web forums, can output convincing phishing emails, malicious code, even step-by-step fraud advice. With a simple prompt, a scammer with limited English skills can produce a near-perfect email in any language, or generate a whole phishing website’s text content. According to cybersecurity training firm KnowBe4, by 2024 almost 74% of phishing emails they analyzed showed signs of AI usage – in other words, the majority of phishing attempts are now being turbocharged by AI’s writing capabilities.
Beyond email, AI chatbots pose a threat in messaging platforms. Scammers can deploy bots in Telegram, Discord, WhatsApp, etc., that engage users in real-time conversation, luring them just as a human would. For instance, you might tweet about having an issue with your crypto wallet and promptly get a reply from a “support rep” (actually a bot) that DMs you. The AI, pretending to be customer service, then guides you through a fake “verification” that steals your keys. Because the chatbot can understand and respond naturally to your questions, you’re less likely to realize it’s fake. This kind of AI-driven social engineering can trick even tech-savvy users. In one case, scammers set up a fraudulent “investment assistant” chatbot that promised to help users trade crypto – it was nothing more than a trap to collect API keys and account info.
Concept illustration of a digital scammer using AI. Phishing attacks and malware are increasingly aided by AI algorithms, which can generate realistic messages and even malicious code to steal passwords and crypto funds.
Furthermore, AI can assist in hacking by producing malware code and automating attacks. For example, a criminal with minimal coding skill can ask an uncensored AI to write a program that empties crypto wallets or installs a keylogger to capture seed phrases. There have been reports of basic ransomware and infostealers created with AI help. While this crosses into hacking more than scamming, the lines blur – often a phishing email delivers malware. With AI’s help, criminals can pump out new malware variants faster than security teams can block them. And AI can help bypass security checks too: solving CAPTCHAs, generating fake IDs to pass verification (see Section 6), even cracking passwords by intelligently guessing (though strong cryptographic keys remain safe, weak ones do not). Sam Altman cautioned that even selfie ID checks and “voiceprint” logins have become trivial for AI to fool, meaning the authentication methods we rely on need urgent upgrading.
How to stay safe from AI-powered phishers: The age-old advice still applies – never click suspicious links or download attachments from unknown senders, no matter how legit the message looks. Be on high alert for any communication (email, text, DM) that creates a sense of urgency or asks for your login credentials, 2FA codes, or seed phrase. Even if the formatting and language seem perfect, check the sender’s email/domain carefully – look for subtle errors or mismatched URLs (e.g. “binance.support.com” instead of the real domain). Confirm directly with the service or person if you get an unexpected request. Use official app/website channels rather than links provided in messages. On social platforms, distrust “support” that reaches out proactively; legitimate companies won’t ask for passwords via DM. Technically, enable phishing protections like email filters and web reputation tools – and keep your antivirus and software updated to catch malware. Most importantly, maintain a skeptical mindset. If you’re being pushed to act quickly or divulge info, that’s your cue to slow down and verify. By treating all unsolicited messages with caution, you can outsmart even AI-crafted scams. Remember, no genuine company or friend will mind if you take an extra minute to confirm authenticity – only scammers press for instant action.
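One concrete way to apply the "check the domain carefully" advice is an exact-hostname allowlist. The sketch below is a minimal illustration in Python (the hostnames in TRUSTED_HOSTS are placeholder examples, not an official list); the point is that exact matching defeats lookalike subdomains that merely contain a brand name:

```python
from urllib.parse import urlparse

# Assumed example allowlist - keep only the exact hostnames you actually use.
TRUSTED_HOSTS = {"www.binance.com", "accounts.binance.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL's hostname exactly matches an allowlisted host.

    Substring checks are deliberately avoided: "binance.support.com" contains
    the brand name, but its registrable domain is support.com, not binance.com.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_HOSTS

print(is_trusted_link("https://www.binance.com/en/login"))   # True
print(is_trusted_link("https://binance.support.com/login"))  # False - lookalike
```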
5. Fake “AI Trading” Bots and Platforms (The AI Hype Investment Scam)
The frenzy around artificial intelligence hasn't just benefited scammers operationally – it's also become the bait itself. In the past couple of years, there's been a surge in fraudulent crypto projects and trading schemes that tout AI as their secret sauce. Scammers know that average investors are intrigued by AI's potential to generate profits. Thus, they create fake AI-powered trading bots, signal groups, or DeFi platforms that promise guaranteed returns through some advanced algorithm, when in reality it's all smoke and mirrors.
One common ruse is the “AI trading bot” scam: You’re invited (often via Telegram or Reddit) to use a bot that allegedly leverages AI to trade crypto for huge gains. The bot might even show simulated results or a demo that makes a few profitable trades on a test account. But once you deposit your own funds for the bot to trade, it starts losing – or the scammers simply disappear with your money. In other cases, scammers promote an “AI investment fund” or mining pool – you send crypto to them to invest, lured by marketing buzzwords like “proprietary AI-driven strategy” – but it’s a Ponzi scheme. Early “investors” might get a bit back to prove it works, but eventually the operators vanish with the bulk of the funds, leaving behind a slick website and no accountability.
During the ChatGPT hype of 2023–2024, dozens of new crypto tokens and platforms emerged claiming some AI angle. While some were legitimate projects, many were outright scams or pump-and-dump schemes. Fraudsters would announce a token tied to AI development, watch funds pour in from excited investors, then abandon the project (a classic rug pull). The idea of AI was enough to inflate a token’s value before the crash. We also saw fake news being weaponized: deepfaked videos of Elon Musk and others were used to endorse an “AI crypto trading platform” (as mentioned earlier) to drive victims to invest. For example, one deepfake video encouraged people to invest in a platform by claiming it used AI to guarantee trading profits – nothing of the sort existed. These schemes often combined the trust in a celebrity with the mystique of AI tech to appear credible.
Not only do scammers lie about having AI, some actually use AI to enhance the illusion. TRM Labs noted a major pyramid scheme in 2024 named MetaMax that purported to give high returns for engaging with social media content. To appear legitimate, MetaMax’s website showed a CEO and team – but the “CEO” was just an AI-generated avatar created with deepfake tech. In other words, there was no real person, just an AI image and perhaps an AI voice, assuring investors that MetaMax was the next big thing. The scheme still managed to rake in close to $200 million (primarily from victims in the Philippines) before collapsing. Another scam site, babit.cc, went so far as to generate entire staff headshots via AI instead of using stolen photos of real people. While one might notice some uncanny perfection in those images, each passing month makes AI-generated faces more lifelike. It’s easy to see how future scam sites could have a full cast of seemingly credible executives – none of whom exist in reality.
How to avoid AI-themed investment scams: Approach any “too-good-to-be-true” investment opportunity with extreme caution – especially if it heavily markets AI capabilities without clear details. Do your homework: If a project claims to use AI, is there legitimate documentation or an experienced team behind it? Be wary if you can’t find any verifiable info on the founders (or if the only info is AI-created profiles). Never trust celebrity endorsements in the crypto space unless confirmed through official channels; 99% of the time, people like Musk, CZ, or Vitalik are not randomly giving out trading advice or funds doubling offers. If an AI trading bot is so great, ask why its creators are selling access for cheap or marketing on Telegram – wouldn’t they just use it privately to get rich? This logic check often reveals the scam. Also, remember that guaranteed returns = red flag. No matter how sophisticated an algorithm, crypto markets have risk. Legitimate firms will be clear about risks and won’t promise fixed high yields. As an investor, consider that scammers love buzzwords – “AI-powered, quantum, guaranteed, secret algorithm” – these are hooks for the gullible. Stick to known exchanges and platforms, and if you’re tempted by a new project, invest only what you can afford to lose after independently verifying it. When in doubt, seek opinions from trusted voices in the community. Often, a quick post on a forum or Reddit about “Has anyone heard of XYZ AI bot?” will surface warnings if it’s fraudulent. In short, don’t let FOMO over AI breakthroughs cloud your judgment – the only thing “automated” in many of these scams is the theft of your money.
6. Synthetic Identities and KYC Bypass with AI
Cryptocurrency scams often involve a web of fake identities – not just the people being impersonated to victims, but also the accounts and entities the scammers use. AI now allows fraudsters to generate entire synthetic identities on demand, bypassing verification measures that were meant to weed out imposters. This has two major implications: (1) Scammers can open accounts on exchanges or services under false names more easily, and (2) they can lend an air of legitimacy to their scam websites by populating them with AI-generated “team members” or testimonials.
On the compliance side, many crypto platforms require KYC (Know Your Customer) checks – e.g. upload a photo ID and a selfie. In response, criminals have started using AI tools to create fake IDs and doctored selfies that can pass these checks. A common approach is using AI image generators or deepfake techniques to combine elements of real IDs or synthesize a person’s likeness that matches the name on a stolen ID. There was a recent anecdote in Decrypt of people using basic AI to generate fake driver’s license images to fool exchanges and banks. Even biometric verifications aren’t safe: AI can output a lifelike video of a person holding an ID or performing whatever motion the system requires. Essentially, a scammer could sit at their computer and have an AI puppet a fictional person to open accounts. These accounts are then used to launder stolen crypto or to set up scam platforms. By the time investigators realize “John Doe” who withdrew millions is not real, the trail has gone cold.
Likewise, when promoting scams, having fake “verified” identities helps. We touched on AI-generated CEOs in Section 5 – it’s part of a broader trend. Scammers can populate LinkedIn with employees who don’t exist (using AI headshots and auto-generated CVs), create fake user reviews with GAN-generated profile pics, and even generate fake customer support agents. Some victims have reported chatting with what they thought was an exchange support rep (perhaps via a pop-up chat on a phishing site), and the agent had a realistic avatar and name. Little did they know it was likely an AI bot backed by a fictitious persona. ThisPersonDoesNotExist (an AI tool that generates random realistic faces) has been a boon for fraudsters – every time a scam account or profile is flagged, they just generate a new unique face for the next one, making it hard for spam filters to keep up.
Even outside of scams targeting end-users, AI-aided identity fraud is facilitating crimes. Organised rings use deepfakes to fool banks’ video-KYC procedures, enabling them to set up mule accounts or exchange accounts that can convert crypto to cash under a false identity. In one case, Europol noted criminals using AI to bypass voice authentication systems at banks by mimicking account holders’ voices. And law enforcement now sees evidence that crypto scam proceeds are paying for these AI “identity kits” – TRM Labs traced crypto from pig butchering victims going to an AI service provider, likely for purchasing deepfake or fake ID tools. It’s a full criminal ecosystem: buy a fake identity, use it to set up scam infrastructure, steal money, launder it through exchanges opened with more fake IDs.
How to defend against synthetic identity scams: For individual users, this is less about something you might directly encounter and more about being aware that photos or “documentation” can be faked. If you’re dealing with a new crypto platform or service, do some due diligence: Is the team real and verifiable? If you video-call a “financial advisor” and something seems off (e.g., slight facial oddities), consider that they might not be who they claim. For companies, the onus is on strengthening KYC and fraud detection – e.g., using AI to fight AI, like checking if an ID photo is generated or if a selfie is a deepfake (there are algorithms that can detect subtle artifacts). As a user, one actionable tip is to protect your own identity data. Scammers often train their deepfake models on whatever info they can find about you online. Limiting what you share (e.g., don’t post videos of yourself publicly if avoidable, and keep profiles private) can reduce the raw material available to bad actors. Also, enable and insist on security measures beyond just ID checks – for instance, some banks will have you confirm a random phrase on video (harder for a deepfake to do on the fly, though not impossible). Ultimately, as Altman suggests, the way we verify identity needs to evolve. Multifactored and continuous verification (not just one snapshot) is safer. For now, as a consumer, prefer services that have robust security and be skeptical if an individual or site demands your personal documents or info without solid rationale. If you suspect an account or profile is fake (maybe a brand-new social profile contacting you about crypto investing), err on the side of caution and disengage. The less opportunity you give scammers to use fake identities on you, the better.
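To make the "random phrase on video" idea above concrete, here is a minimal sketch, assuming a toy word list (a real deployment would use a much larger vocabulary and tie each challenge to the session). The defense works because the phrase is unpredictable at call time, so a pre-rendered deepfake clip cannot contain it:

```python
import secrets

# Toy vocabulary for illustration; a real system would use thousands of words.
WORDS = ["amber", "falcon", "quartz", "meadow", "copper", "violet", "harbor", "tundra"]

def liveness_challenge(n_words: int = 3) -> str:
    """Generate an unpredictable phrase the caller must speak on live video.

    A pre-recorded deepfake cannot contain a phrase chosen at call time;
    the attacker is forced into real-time synthesis, which is much harder
    (though, as noted above, not impossible).
    """
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(liveness_challenge())  # e.g. "quartz harbor amber"
```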
7. AI-Powered Social Media Bots and Impersonators
Crypto scammers have long thrived on social media, from Twitter and Facebook to Telegram and Discord. Now, AI is turbocharging the bots and fake accounts that facilitate these scams, making them more effective and harder to distinguish from real users. If you've ever tweeted about crypto and gotten instant replies offering "support," or seen random friend requests from attractive people into crypto, you've likely encountered this problem. AI allows scammers to deploy armies of bots that are more believable than ever.
For one, generative AI lets each bot have a unique “personality.” Instead of 1,000 bots all posting the same broken-English comment about a giveaway, each can now produce unique, coherent posts that stay on a script but avoid obvious duplication. They can even engage in conversation. For example, on crypto forums or Telegram groups, an AI bot can infiltrate by blending in, chatting casually about the markets or latest NFTs, building credibility in the community. Then, when it DMs someone with a “great opportunity” or a phishing link, the target is less suspicious because they’ve seen that account being “normal” in the group for weeks. AI can also generate realistic profile pictures for these bots (using GANs or similar), so you can’t just do a reverse image search to catch a stolen photo. Many scam Twitter accounts nowadays sport AI-created profile pics – often of an appealing, friendly-looking person – with none of the telltale glitches that earlier AI images had. Even the bios and posts are AI-written to appear authentic.
Impersonation of legitimate accounts is another area where AI helps. We touched on deepfake video/voice impersonation, but on text-based platforms, the imposter might just copy the profile of a known figure or support desk. AI can assist by quickly mass-producing lookalike accounts (slightly misspelled handles, for instance) and generating content that matches the tone of the official account. When victims message these fake support accounts, AI chatbots can handle the interaction, walking them through “verification” steps that actually steal information. This kind of conversational phishing is much easier to scale with AI. In one noted scam, users in a Discord community got private messages from what looked like an admin offering help to claim an airdrop; an AI likely powered those chats to convincingly guide users through connecting their wallets – straight into a trap that stole their tokens. Chainalysis reported that AI chatbots have been found infiltrating popular crypto Discord/Telegram groups, impersonating moderators and tricking people into clicking malicious links or divulging wallet keys. The bots can even respond in real-time if someone questions them, using natural language, which throws off some of the usual tip-offs (like a long lag or irrelevant reply).
The scale is staggering – a single scammer (or small team) can effectively run hundreds of these AI-driven personas in parallel. They might use AI agents that monitor social media for certain keywords (like “forgot password MetaMask”) and automatically reply or DM the user with a prepared scam message. Before AI, they’d have to either do this manually or use crude scripts that were easily flagged. Now it’s all more adaptive. We also see AI being used to generate fake engagement: thousands of comments and likes from bot accounts to make a scam post or scam token seem popular. For instance, a fraudulent ICO might have dozens of “investors” on Twitter (all bots) praising the project and sharing their supposed profits. Anyone researching the project sees positive chatter and might be fooled into thinking it’s legit grassroots excitement.
How to fight social media bot scams: First, recognize the signs of bot activity. If you get an instant generic reply the moment you mention a crypto issue online, assume it’s malicious. Never click random links sent by someone who reached out unsolicited, even if their profile picture looks nice. Check profiles carefully: When was it created? Does it have a history of normal posts or is it mostly promotional? Often bots have brand-new accounts or weird follower/following ratios. On Telegram/Discord, adjust your privacy settings to not allow messages from members you don’t share a group with, or at least be wary of anyone messaging out of the blue. Official support will rarely DM you first. If someone impersonates an admin, note that reputable admins usually won’t conduct support via DM – they’ll direct you to official support channels. If a Twitter account claims to be support for a wallet or exchange, verify the handle against the company’s known handle (scammers love swapping an “0” for “O”, etc.). Utilize platform tools: Twitter’s paid verification is imperfect, but a lack of a blue check on a supposed “Binance Support” is a dead giveaway now. For Discord, communities sometimes have bot-detection tools – pay attention to admin warnings about scams and bots.
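The profile checks described above can be condensed into a toy scoring heuristic. This is an illustrative sketch only – the signals and thresholds are assumptions, not a production bot detector:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Profile:
    created_at: datetime   # account creation time
    followers: int
    following: int
    instant_reply: bool    # replied within seconds of your keyword post

def bot_risk_score(p: Profile) -> int:
    """Score the red flags discussed above; higher means more bot-like."""
    score = 0
    age_days = (datetime.now(timezone.utc) - p.created_at).days
    if age_days < 30:
        score += 2  # brand-new account
    if p.following > 0 and p.followers / p.following < 0.1:
        score += 1  # follows many, followed by few
    if p.instant_reply:
        score += 2  # keyword-triggered instant reply is a strong signal
    return score    # e.g. treat >= 3 as "probably a bot, do not engage"

suspect = Profile(created_at=datetime.now(timezone.utc) - timedelta(days=10),
                  followers=4, following=900, instant_reply=True)
print(bot_risk_score(suspect))  # 5 -> likely bot
```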
As users, one of the best defenses is a healthy cynicism about “friendly strangers” offering help or money online. Real people can be kind, but in the crypto social sphere, unsolicited help is more likely a con. So, if you’re flustered about a crypto problem, resist the urge to trust the first person who DMs you claiming they can fix it. Instead, go to the official website of the service in question and follow their support process. By denying scammers that initial engagement, their whole AI-bot advantage is nullified. And finally, report and block obvious bot accounts – many platforms improve their AI detection based on user reports. It’s an ongoing arms race: AI vs AI, with platforms deploying detection algorithms to counter malicious bots. But until they perfect that, staying vigilant and not engaging with probable bots will go a long way to keeping your crypto safe.
8. Autonomous “Agent” Scams – The Next Frontier
Looking ahead, the most unsettling prospect is fully automated scam operations – AI agents that conduct end-to-end scams with minimal human input. We’re already seeing early signs of this. An AI agent is essentially a software program that can make decisions and perform multi-step tasks on its own (often by invoking other AI models or software tools). OpenAI’s recently announced ChatGPT-powered agents that can browse, use apps, and act like a human online have raised both excitement and concern. Scammers are undoubtedly eyeing these capabilities to scale their fraud to new heights.
Imagine an AI agent designed for fraud: It could scan social media for potential targets (say, people posting about crypto investing or tech support needs), automatically initiate contact (via DM or email), carry on a realistic conversation informed by all the data it’s scraped about the person, and guide them to a scam outcome (like getting them to a phishing site or persuading them to send crypto). All the while, it adjusts its tactics on the fly – if the victim seems skeptical, the AI can change tone or try a different story, much as a human scammer might. Except this AI can juggle hundreds of victims simultaneously without fatigue, and operate 24/7. This is not science fiction; components of it exist now. In fact, TRM Labs warns that scammers are using AI agents to automate outreach, translation, and even the laundering of funds – for instance, summarizing a target’s social media presence to customize the con, or optimizing scam scripts by analyzing what has worked on past victims. There’s also talk of “victim persona” agents that simulate a victim to test new scam techniques safely. It’s a devious use of AI – scammers testing scams on AIs before deploying on you.
On the technical side, an AI agent can integrate with various tools: send emails, make VoIP calls (with AI voices), generate documents, etc. We could soon face automated phone scams where an AI voice calls you claiming to be from your bank’s fraud dept. and converses intelligently. Or an AI that takes over a hacked email account and chats with the victim’s contacts to request money. The combinations are endless. Sam Altman’s dire warning about adversarial AIs that could “take everyone’s money” speaks to this scenario. When AI can multi-task across platforms – perhaps using one GPT instance to talk to the victim, another to hack weak passwords, another to transfer funds once credentials are obtained – it becomes a full-fledged fraud assembly line with superhuman efficiency. And unlike human criminals, an AI doesn’t get sloppy or need sleep.
It’s worth noting that security experts and law enforcement are not standing still. They are exploring AI solutions to counter AI threats (more on that in the next section). But the reality is that the scalability of AI-driven scams will challenge existing defenses. Legacy fraud detection (simple rules, known bad keywords, etc.) may fail against AI that produces ever-variant, context-aware attacks. A big coordinated effort – involving tech companies, regulators, and users themselves – will be needed to mitigate this. Regulators have started discussing requiring labels on AI-generated content or better identity verification methods to counter deepfakes. In the interim, zero-trust approaches (don’t trust, always verify) will be crucial on an individual level.
Staying safe in the era of AI agents: Many of the tips already given remain your best armor – skepticism, independent verification, not oversharing data that agents can mine, etc. As AI agents arise, you should raise your suspicion for any interaction that feels slightly "off" or too formulaic. For instance, an AI might handle most of a scam chat but falter on an unexpected question – if someone ignores a personal question and continues pushing their script, be wary. Continue to use multi-factor authentication (MFA) on your accounts; even if an AI tricks you into revealing a password, a second factor (and especially a physical security key) can stop it from logging in. Monitor your financial accounts closely for unauthorized actions – AI can initiate transactions, but if you catch them quickly, you might cancel or reverse it. Importantly, demand authenticity in critical communications: if "your bank" emails or calls, tell them you will call back on the official number. No genuine institution will refuse that. As consumers, we may also see new tools emerge (perhaps AI-driven) for us to verify content – for example, browser plugins that can flag suspected AI-generated text or deepfake videos. Staying informed about such protective tech and using it will help level the playing field.
Ultimately, in this AI arms race, human vigilance is paramount. By recognizing that the person on the other end might not be a person at all, you can adjust your level of trust accordingly. We’re entering a time when you truly can’t take digital interactions at face value. While that is disconcerting, being aware of it is half the battle. The scams may be automated, but if you automate your skepticism in response – treating every unsolicited ask as malicious until proven otherwise – you compel even the smartest AI con to surmount a very high bar to fool you.
9. How Authorities and Industry are Fighting Back
It’s not all doom and gloom – the same AI technology empowering scammers can be harnessed to detect and prevent fraud, and there’s a concerted effort underway to do just that. Blockchain analytics firms, cybersecurity companies, and law enforcement are increasingly using AI and machine learning to counter the wave of AI-powered crypto scams. It’s a classic cat-and-mouse dynamic. Here’s how the good guys are responding:
- AI-driven scam detection: Companies like Chainalysis and TRM Labs have integrated AI into their monitoring platforms to spot patterns indicative of scams. For instance, machine learning models analyze text from millions of messages to pick up linguistic cues of AI-generation or social engineering. They also track on-chain behaviors – one report noted that about 60% of deposits into scam wallets are now linked to AI usage. By identifying wallets that pay for AI services or exhibit automated transaction patterns, investigators can flag likely scam operations early. Some anti-phishing solutions use AI vision to recognize fake websites (scanning for pixel-level mismatches in logos or slight domain differences) faster than manual reviews. (A toy sketch of the text-analysis idea follows this list.)
- Authentication improvements: In light of Altman's comments that voice and video can't be trusted, institutions are moving toward more robust authentication. Biometrics may shift to things like device fingerprints or behavioral biometrics (how you type or swipe) that are harder for AI to mimic en masse. Regulators are nudging banks and exchanges to implement multi-factor and out-of-band verification for large transfers – e.g., if you request a big crypto withdrawal, maybe a live video call where you have to perform a random action, making it harder for a deepfake to respond correctly. The Fed and other agencies are discussing standards for detecting AI impersonation attempts, spurred by cases like the $25M deepfake CFO scam.
- Awareness campaigns: Authorities know that public education is crucial. The FBI, Europol, and others have released alerts and held webinars to inform people about AI scam tactics. This includes practical advice (much of which we've echoed in this article) such as how to spot deepfake artifacts or phishy AI-written text. The more people know what to look for, the less effective the scams. Some jurisdictions are even considering mandated warning labels – for example, requiring political ads to disclose AI-generated content; such policies could extend to financial promotions as well.
- Legal and policy measures: While technology moves fast, there's talk of tightening laws around deepfake abuse. A few U.S. states have laws against deepfakes used in elections or impersonating someone in a crime, which could be applied to scam cases. Regulators are also examining the liability of AI tool providers – if a product like WormGPT is clearly made for crime, can they go after its creators or users? In parallel, mainstream AI companies are working on watermarking AI-generated outputs or providing ways to verify authenticity (OpenAI, for instance, has researched cryptographic watermarking of GPT text). These could help distinguish real from AI if widely adopted.
- Collaboration and intelligence-sharing: One silver lining is that the threat of AI scams has galvanized cooperation. Crypto exchanges, banks, tech platforms, and law enforcement have been sharing data on scammer addresses, known deepfake tactics, and phishing trends. For example, if an exchange notices an account likely opened with fake credentials, they might alert others or law enforcement, preventing that same identity from being reused elsewhere. After major incidents, industry groups conduct post-mortems to learn how AI was leveraged and disseminate mitigation strategies.
- Victim support and intervention: Recognizing that many victims are too embarrassed to report (recall only ~15% report losses), some agencies have become proactive. The FBI's Operation Level Up in 2024 actually identified thousands of likely pig butchering victims before they realized they were scammed, by analyzing financial flows, and managed to prevent an additional $285 million in losses by warning them in time. In other words, better detection allowed intervention in real-time. More such initiatives, possibly AI-aided, can save would-be victims by spotting the scam patterns earlier in the cycle (e.g., unusual repetitive transactions to a fake platform).
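As promised in the first bullet, here is a deliberately tiny sketch of the text-analysis approach: vectorize message text, fit a classifier, and score new messages. It is not any vendor's actual model – the training snippets are invented and it assumes the scikit-learn package is installed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy corpus; real systems train on millions of labeled messages.
train_texts = [
    "urgent: verify your wallet now or your funds will be frozen",
    "double your bitcoin in 24 hours, guaranteed returns",
    "limited airdrop, connect your wallet to claim free tokens",
    "hey, are we still on for lunch tomorrow?",
    "your march invoice is attached, thanks",
    "meeting moved to 3pm, same room",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = scam-like, 0 = benign

# TF-IDF over word unigrams/bigrams feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

msg = ["guaranteed profit, send crypto today to secure your spot"]
print(model.predict(msg), model.predict_proba(msg)[:, 1])  # label + scam probability
```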
In the end, defeating AI-assisted scams will require “it takes a village”: technology defenses, informed users, updated regulations, and cross-border law enforcement cooperation. While scammers have proven adept at integrating AI into their workflow, the countermeasures are ramping up in parallel. It’s an ongoing battle, but not a lost one. By staying aware of both the threats and the solutions emerging, the crypto community can adapt. Think of it this way – yes, the scammers have powerful new tools, but so do we. AI can help sift through massive amounts of data to find needles in the haystack (like clustering scam wallet networks or detecting deepfake content). It can also help educate, through AI-powered training simulations that teach people how to respond to scam attempts.
10. Protecting Yourself: Key Takeaways to Stay Safe
Having explored the myriad ways AI is being abused to steal crypto, let’s distill some practical protection tips. These are the habits and precautions that can make you a hard target, even as scams evolve:
- Be skeptical of unsolicited contact: Whether it's an unexpected video call from a "friend," a DM offering help, or an email about an investment opportunity, assume it could be a scam. It's sad we have to think this way, but it's the first line of defense. Treat every new contact or urgent request as potentially fraudulent until verified through a secondary channel.
- Verify identities through multiple channels: If you get a communication supposedly from a known person or company, confirm it using another method. Call the person on a known number, or email the official support address from the company's website. Don't rely on the contact info provided in the suspicious message – look it up independently.
- Slow down and scrutinize content: Scammers (human or AI) rely on catching you off-guard. Take a moment to analyze messages and media. Check for the subtle signs of deepfakes (strange visual artifacts, lip-sync issues) and phishing (misspelled domains, unnatural requests for credentials). If something seems even slightly "off" about a message's context or wording given who it claims to be from, trust your gut and investigate further.
- Use strong security measures: Enable two-factor authentication (2FA) on all crypto accounts and emails. Prefer app-based or hardware 2FA over SMS if possible (SIM-swap attacks are another risk); a sketch of how app-based TOTP codes work follows this list. Consider using a hardware wallet for large holdings – even if a scammer tricks you, they can't move funds without the physical device. Keep your devices secure with updated software and antivirus, to guard against any malware that does slip through.
- Keep personal info private: The less scammers can learn about you online, the less material their AI has to work with. Don't share sensitive details on public forums (like email, phone, financial info). Be cautious with what you post on social media – those fun personal updates or voice clips could be harvested to target you with AI (for example, training a voice clone). Also, check privacy settings to limit who can message you or see your content.
- Educate yourself and others: Stay informed about the latest scam trends. Read up on new deepfake techniques or phishing strategies so you'll recognize them. Share this knowledge with friends and family, especially those less tech-savvy, who might be even more at risk. For instance, explain to older relatives that AI can fake voices now, so they should always verify an emergency call. Empower everyone around you to be more vigilant.
- Use trusted sources and official apps: When managing crypto, stick to official apps and websites. Don't follow links sent to you – manually type the exchange or wallet URL. If you're exploring new projects or bots, thoroughly research their credibility (look for reviews, news, the team's background). Download software only from official stores or the project's site, not from random links or files sent to you.
- Leverage security tools: Consider browser extensions or services that block known phishing sites. Some password managers will warn you if you're on an unknown domain that doesn't match the saved site. Email providers increasingly use AI to flag likely scam emails – heed those warnings. There are also emerging deepfake detection tools (for images/videos); while not foolproof, they can provide another layer of assurance if you run a suspicious video through them.
- Trust, but verify – or better yet, zero-trust: In crypto, a healthy dose of paranoia can save your assets. If a scenario arises where you must trust someone (say, an OTC trade or a new business partner), do thorough due diligence, and for major transactions consider insisting on meeting in person. When your money is at stake, double-checking is entirely reasonable. As the saying goes, "don't trust, verify" – coined for blockchain transactions, it now applies to communications too.
- Report scams and seek help if targeted: If you encounter a scam attempt, report it to the platform (they do act to remove malicious accounts) and to services like Chainabuse or government fraud-reporting sites. This helps the community and supports investigations. And if you are unlucky enough to be defrauded, contact law enforcement immediately – recovering funds is difficult, but the earlier you report, the better the odds, and your case may supply intelligence that keeps others from being victimized.
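Regarding the app-based 2FA recommendation in the list above: authenticator apps implement RFC 6238 TOTP, deriving a short-lived code from a secret shared only between the server and your app, so a phished password alone is not enough to log in. A minimal self-contained sketch (the secret below is a well-known documentation example, not a real credential):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The server and your authenticator app share only `secret_b32`; each
    code is valid for `period` seconds, so a stolen password alone does
    not let a scammer log in.
    """
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(time.time()) // period)   # current time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()    # HMAC-SHA1 of the step
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret commonly used in docs
```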
In conclusion, AI-powered crypto scams are a serious challenge, but not an unbeatable one. Once you know the adversary's playbook – deepfake video, voice cloning, AI chatbots, fake "AI investment" platforms, and the rest – you can anticipate the moves and sidestep them. Technology will keep evolving on both sides: today it's deepfake videos and GPT-written emails; tomorrow the cons will be slicker still. But at bottom, nearly every scam still needs you to act against your own interest – to send money, reveal a secret, or bypass a safeguard. That moment is your cue to pause and apply what you've learned here. Stay alert, keep learning, and you can outwit even the smartest AI-driven scams. Your strongest defense is your own critical thinking. In a world of artificial everything, genuine skepticism is worth its weight in gold.

