Gradient CEO Eric Yang believes the next major shift in artificial intelligence won’t come from larger proprietary models or more powerful data centers.
Instead, he argues it will be driven by a foundational change in how models are trained: distributing training across a global, permissionless network of compute rather than inside the walls of a single corporate supercomputer.
Speaking about Gradient’s work in an interview with Yellow.com, Yang said today’s dominant AI labs, such as OpenAI, Google, Anthropic, and xAI, are built on the assumption that foundation models can only be trained inside massive, centralized infrastructure.
“AI benefits so much from centralization that nobody has been able to train large models across multiple data centers,” he said. Gradient is betting that this assumption is about to collapse.
Yang claims Gradient has already achieved successful reinforcement-learning training runs distributed across independent data centers, with performance that rivals centralized RLHF workflows.
He says this opens the door to something previously thought impossible: trillion-parameter model post-training conducted not by one company, but by thousands of compute providers around the world.
The economic implications are equally significant. Yang describes a global “bounty-driven” marketplace where GPU operators, data centers, and even small independent infrastructure providers compete to contribute compute to training jobs.
Contributors earn rewards for supplying compute at the lowest available price, while training costs fall below the centralized alternatives that currently dominate the market.
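The marketplace Yang describes amounts to a reverse auction: a training job posts a compute requirement, and the cheapest offers are filled first. A minimal sketch of that matching logic, with hypothetical provider names and prices (not Gradient's actual protocol), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str          # hypothetical provider identifier
    price_per_hour: float  # asking price in USD per GPU-hour
    gpu_hours: int         # capacity offered

def allocate_bounty(bids: list[Bid], hours_needed: int) -> list[tuple[str, int]]:
    """Greedily fill a job's compute requirement from the cheapest
    bids first, the core of a reverse-auction bounty market."""
    allocation = []
    for bid in sorted(bids, key=lambda b: b.price_per_hour):
        if hours_needed <= 0:
            break
        take = min(bid.gpu_hours, hours_needed)
        allocation.append((bid.provider, take))
        hours_needed -= take
    return allocation

bids = [
    Bid("datacenter-a", 2.50, 400),
    Bid("home-rig-b", 1.20, 50),
    Bid("provider-c", 1.80, 300),
]
# Cheapest capacity is consumed first:
print(allocate_bounty(bids, 300))  # [('home-rig-b', 50), ('provider-c', 250)]
```

In this toy example, a 300-GPU-hour job skips the expensive data center entirely, which is the cost-pressure mechanism Yang is pointing to.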
He also believes decentralized AI infrastructure provides meaningful security and trust advantages.
If inference can be performed entirely on user-owned hardware (MacBooks, desktops, home GPUs, or hybrid setups), then personal data never leaves the device.
“Today we’re leaking far more sensitive data into AI systems than we ever did into Google,” he said. “A sovereign model running locally changes that.”
Yang argues this transparency can extend to training itself.
If training-data provenance is recorded on-chain, users can see which environments and contributors shaped a model. That visibility, he says, is an antidote to the biases and opaque editorial control seen in centralized systems.
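The core idea behind on-chain provenance is tamper evidence: each training record commits to the one before it, so the history cannot be quietly edited. A minimal hash-chain sketch (illustrative only; the contributor names and record fields are assumptions, not Gradient's schema):

```python
import hashlib
import json

def record_provenance(chain: list[dict], contributor: str, dataset_hash: str) -> list[dict]:
    """Append a provenance entry linked to the previous entry by hash,
    mimicking how an on-chain log makes training history tamper-evident."""
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {"contributor": contributor, "dataset_hash": dataset_hash, "prev_hash": prev}
    # Hash a canonical serialization so any later edit changes the digest.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return chain + [entry]

chain: list[dict] = []
chain = record_provenance(chain, "lab-x", "sha256:abc...")
chain = record_provenance(chain, "provider-y", "sha256:def...")

# Each entry commits to the one before it, so rewriting an earlier
# record would break every later link.
assert chain[1]["prev_hash"] == chain[0]["entry_hash"]
```

A real deployment would anchor these digests to a public chain; the sketch only shows why such a log lets users audit which contributors shaped a model.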
In his view, the eventual AI landscape will not be ruled by a single large model but by a “sea of specialized models” trained and owned collaboratively.
“Every company will run AI just like they run analytics today,” Yang said. “When that happens, a global decentralized compute network becomes the only model that scales.”

