
Why Render Network Says The Real AI Bottleneck Isn’t GPU Shortage, It’s Wasted Compute

Murtuza Merchant · 3 minutes ago

A persistent assumption across the AI industry is that growth will soon be constrained by a global shortage of high-end GPUs.

Yet the constraint shaping the next phase of AI development may be less about absolute scarcity than about structural inefficiency.

According to Render Network’s Trevor Harries-Jones, a large share of the world’s compute capacity is sitting idle, a disconnect he sees as more important than supply constraints.

The Misunderstood GPU Shortage

“Forty percent of the GPUs in the world are idle,” he told Yellow.com in an interview on the sidelines of Solana's Breakpoint event. “People assume there’s a shortage, but there’s actually a glut of GPUs that are performant enough to do rendering and AI jobs.”

Harries-Jones argues that while demand for training-grade chips like Nvidia’s H100 remains intense, training itself represents only a small fraction of real-world AI workloads.

“Training is actually only a very small percentage of the AI usage,” he notes. “Inference takes up 80 percent.”

That imbalance, he suggests, opens the door for consumer hardware, lower-end GPUs, and new processor classes such as LPUs, TPUs and ASICs to absorb far more of the global compute load than many assume.

A second shift he highlights is the convergence of traditional 3D workflows with emerging AI-native asset formats.

Creators Push AI Toward Cinematic-Grade Pipelines

Techniques like Gaussian splatting, which preserves underlying 3D structure rather than generating flattened 2D frames, and the emergence of world models are starting to pull AI systems closer to the cinematic production pipeline.

These developments matter because they make AI outputs usable inside existing professional toolchains rather than sitting as standalone novelty formats.

Model size remains a challenge, but Harries-Jones expects quantization and model compression to continue shrinking open-weight systems until they run comfortably on consumer devices.

Smaller models, he says, are essential for decentralized networks that rely on distributed RAM and bandwidth rather than hyperscale clusters.
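To make the quantization point concrete, here is a minimal sketch of how numeric precision alone changes the memory a set of model weights needs. The parameter counts and the 16 GB consumer-VRAM budget are illustrative assumptions, not figures from the interview.

```python
# Rough weight-only memory footprint of open-weight models at different precisions.
# Parameter counts and the consumer-device budget are illustrative assumptions,
# not figures from Render Network or the interview.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate memory for the weights alone, in GB (ignores activations and KV cache)."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

CONSUMER_VRAM_GB = 16  # assumed budget for a high-end consumer GPU

for params in (7e9, 13e9, 70e9):
    for precision in ("fp16", "int8", "int4"):
        gb = weight_memory_gb(params, precision)
        fits = "fits" if gb <= CONSUMER_VRAM_GB else "does not fit"
        print(f"{params / 1e9:.0f}B @ {precision}: ~{gb:.1f} GB -> {fits} in {CONSUMER_VRAM_GB} GB")
```

Even this crude arithmetic shows why compression matters for decentralized networks: a 13B model that overflows consumer hardware at fp16 drops comfortably under the assumed budget at 4-bit precision.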


Where many expect rising model complexity to drive costs upward, he believes the opposite dynamic will dominate.

Training breakthroughs, such as recent Chinese model efforts that prioritized efficiency over scale, point toward a future where AI becomes cheaper even as usage accelerates.

“As the cost decreases,” he says, “you’re going to see more and more use cases emerge.”

Rather than compute scarcity, Harries-Jones anticipates a Jevons-paradox cycle: falling costs create more demand, and more demand incentivizes even more efficient systems.
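As a rough illustration of that feedback loop, the toy simulation below assumes a 20 percent efficiency gain per cycle and a demand elasticity of 1.5. Both numbers are invented for illustration, but they show how total compute consumed can keep rising even as the cost per unit falls.

```python
# Toy Jevons-paradox loop: efficiency lowers the cost per unit of compute,
# demand responds elastically, and total consumption rises anyway.
# The efficiency and elasticity figures are illustrative assumptions.

cost_per_unit = 1.00      # starting cost (arbitrary units)
demand_units = 100.0      # starting compute demand (arbitrary units)
EFFICIENCY_GAIN = 0.20    # cost falls 20% per cycle
DEMAND_ELASTICITY = 1.5   # demand rises 1.5% for every 1% drop in cost

for cycle in range(1, 6):
    cost_per_unit *= (1 - EFFICIENCY_GAIN)
    demand_units *= (1 + EFFICIENCY_GAIN * DEMAND_ELASTICITY)
    total_spend = cost_per_unit * demand_units
    print(f"cycle {cycle}: cost={cost_per_unit:.2f}, demand={demand_units:.0f}, spend={total_spend:.1f}")
```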

He also expects hybrid compute, a mix of on-device, local-network, and centralized cloud workloads, to define the next stage of the industry.

Similar to Apple’s distributed intelligence model, different environments will handle different tasks depending on latency, privacy, sensitivity, and scale.

Mission-critical workloads will still require compliant data centers, but non-sensitive or batch workloads can increasingly run on decentralized networks. Advances in encryption may eventually expand that boundary.
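A minimal sketch of what such a routing decision could look like follows; the tier names and thresholds are hypothetical, chosen only to illustrate the latency, sensitivity, and scale trade-offs described above, and do not come from Render Network or Apple.

```python
# Illustrative hybrid-compute router: picks an execution tier based on
# latency needs, data sensitivity, and job size. Tier names and thresholds
# are hypothetical assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Workload:
    latency_ms_budget: float   # how quickly a response is needed
    sensitive_data: bool       # regulated or private data involved
    gpu_hours: float           # rough size of the job

def route(w: Workload) -> str:
    if w.sensitive_data:
        # Mission-critical or regulated data stays in compliant infrastructure.
        return "on-device" if w.gpu_hours < 0.01 else "compliant data center"
    if w.latency_ms_budget < 50:
        # Tight latency budgets favor local execution.
        return "on-device"
    if w.gpu_hours > 1.0:
        # Large, non-sensitive batch jobs can go to a decentralized GPU network.
        return "decentralized network"
    return "centralized cloud"

print(route(Workload(latency_ms_budget=20, sensitive_data=False, gpu_hours=0.001)))
print(route(Workload(latency_ms_budget=5000, sensitive_data=False, gpu_hours=48)))
print(route(Workload(latency_ms_budget=200, sensitive_data=True, gpu_hours=2)))
```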

A Coming Wave Of 3D-First Content

Longer term, he sees a far broader shift underway: the mainstreaming of 3D, powered by AI.

Harries-Jones expects the next era of consumer-facing AI to revolve around immersive, 3D-native content rather than text or flat images.

“We’re going to consume more 3D content than ever before,” he says, pointing to early signals from immersive hardware and the rapid evolution of 3D-AI tooling.

The traditional bottleneck of motion graphics, highly technical workflows accessible only to niche experts, may give way to tools that let millions of users produce cinematic-grade scenes.

Creators, once resistant to AI, are now experimenting directly with these pipelines, accelerating the pace of tool refinement and shaping how hybrid workflows evolve.

Their feedback, he argues, is likely to influence the direction of the industry just as much as hardware trends.


Disclaimer and Risk Warning: The information provided in this article is for educational and informational purposes only and is based on the author's opinion. It does not constitute financial, investment, legal, or tax advice. Cryptocurrency assets are highly volatile and subject to high risk, including the risk of losing all or a substantial amount of your investment. Trading or holding crypto assets may not be suitable for all investors. The views expressed in this article are solely those of the author(s) and do not represent the official policy or position of Yellow, its founders, or its executives. Always conduct your own thorough research (D.Y.O.R.) and consult a licensed financial professional before making any investment decision.