
Tether's BitNet Framework Runs 13B AI Models On An iPhone 16

Tether (USDT) released a cross-platform LoRA fine-tuning framework for Microsoft's BitNet large language models on Tuesday, enabling AI training on smartphones, consumer GPUs, and laptops without specialized Nvidia hardware.

The framework, part of the company's QVAC Fabric platform, is the first to support BitNet fine-tuning across non-Nvidia chips - including AMD, Intel, Apple Silicon, and mobile GPUs - according to Tether's announcement.

The release extends a framework Tether first launched in December 2025.

The new component adds BitNet-native LoRA fine-tuning and inference acceleration across heterogeneous consumer hardware, capabilities that previously required enterprise Nvidia systems or cloud infrastructure.

What the Benchmarks Show

Tether's engineers fine-tuned a 125-million-parameter BitNet model in approximately 10 minutes on a Samsung Galaxy S25 using a biomedical dataset of roughly 18,000 tokens.

A 1-billion-parameter model completed the same task in 1 hour 18 minutes on the S25 and 1 hour 45 minutes on an iPhone 16.

The company also demonstrated fine-tuning of models up to 3.8 billion parameters on flagship phones and up to 13 billion parameters on the iPhone 16.

On mobile GPUs, BitNet inference ran two to eleven times faster than on CPU. Memory consumption for the 1-billion-parameter BitNet model (TQ1_0) was 77.8% lower than a comparable Gemma-3-1B 16-bit model across both inference and LoRA fine-tuning workloads, per Tether's published benchmarks.
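The scale of that memory saving can be sanity-checked with back-of-envelope arithmetic. The figures below assume TQ1_0 packs roughly 1.69 bits per weight (the ternary quantization format used in llama.cpp); the exact packing rate is an assumption for illustration, and the comparison covers weights only:

```python
# Back-of-envelope weight-memory comparison for a 1B-parameter model.
# Assumes TQ1_0 packs ~1.6875 bits per weight (an assumption); a 16-bit
# model stores 16 bits per weight.
PARAMS = 1_000_000_000
bits_fp16 = 16
bits_tq1 = 1.6875

def gigabytes(bits_per_weight):
    """Total weight storage in GB at the given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

fp16_gb = gigabytes(bits_fp16)  # ~2.00 GB of weights
tq1_gb = gigabytes(bits_tq1)    # ~0.21 GB of weights

reduction = 1 - tq1_gb / fp16_gb
print(f"fp16: {fp16_gb:.2f} GB, TQ1_0: {tq1_gb:.2f} GB, "
      f"weight-only reduction: {reduction:.1%}")
```

The weight-only ceiling works out to roughly 89%, higher than the 77.8% Tether reports; that gap is consistent with whole-workload measurements, since activations, the KV cache, and adapter state are not stored in ternary form.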


Why It Matters for AI Development

BitNet uses a ternary weight system - values of -1, 0, or 1 - that compresses model size and cuts VRAM requirements sharply compared with standard 16-bit models. LoRA (Low-Rank Adaptation) reduces fine-tuning costs further by updating small adapter layers rather than retraining the entire model.

Combining both allows edge-device training that was previously out of reach.
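The two techniques compose naturally: the base weights are frozen in ternary form, and only a small low-rank adapter is trained. A minimal NumPy sketch (not Tether's actual code; shapes, names, and the absmean scaling rule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ternarize(w):
    """Quantize weights to {-1, 0, +1} with a per-tensor scale,
    using an absmean-style scaling rule (an illustrative choice)."""
    scale = np.abs(w).mean() + 1e-8
    return np.clip(np.round(w / scale), -1, 1), scale

d_in, d_out, rank = 64, 64, 4
W = rng.normal(size=(d_out, d_in))   # frozen base weight
Wq, s = ternarize(W)                 # ternary weights + one fp scale

# LoRA: only A and B are trained; the ternary base never changes.
A = rng.normal(scale=0.01, size=(rank, d_in))
B = np.zeros((d_out, rank))          # zero-init so the delta starts at 0

def forward(x):
    base = s * (Wq @ x)              # rescaled ternary projection
    delta = B @ (A @ x)              # low-rank adapter update
    return base + delta

y = forward(rng.normal(size=d_in))
```

Here the trainable parameter count is rank * (d_in + d_out) = 512, versus 4,096 frozen base weights, which is why adapter training fits in phone-class memory budgets.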

Tether CEO Paolo Ardoino said the framework supports federated learning workflows, where models update across distributed devices without sending data to centralized servers. The code is open source, released under the Apache 2.0 license.
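One common pattern for such workflows (a generic FedAvg-style sketch, not a description of Tether's implementation) is that each device shares only its small LoRA adapter matrices, and a coordinator averages them; raw training data never leaves the device:

```python
import numpy as np

rank, d_in, d_out = 4, 64, 64

def local_update(seed):
    """Stand-in for one device's locally trained LoRA adapter.
    In a real system these matrices would come from on-device training."""
    r = np.random.default_rng(seed)
    return (r.normal(size=(rank, d_in)),   # adapter matrix A
            r.normal(size=(d_out, rank)))  # adapter matrix B

# Three simulated devices each produce an adapter pair.
clients = [local_update(seed) for seed in range(3)]

# The coordinator aggregates by simple unweighted averaging (FedAvg
# without client weighting, for illustration).
A_avg = np.mean([a for a, _ in clients], axis=0)
B_avg = np.mean([b for _, b in clients], axis=0)
```

Because only the adapters (a few kilobytes here) cross the network, the communication cost per round is tiny compared with shipping full model weights or training data.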

The release comes as the boundary between cryptocurrency infrastructure and AI compute continues to narrow. Bitcoin miners including Core Scientific and HIVE Digital Technologies have pivoted significant capacity toward AI and high-performance computing, while a growing number of crypto platforms have begun integrating AI agent capabilities for on-chain transactions.

