Carmack’s Fiber Loop: Could Light Replace RAM for AI Systems?
AI Infrastructure · Thought Experiment
Legendary programmer John Carmack sparked a dense technical debate in February 2026 with a proposal to use fiber-optic loops as ultra-high-bandwidth caches for AI model weights — a concept that is theoretically fascinating but practically distant.
On February 6, 2026, a short post on X from John Carmack — the co-founder of id Software and one of the most respected engineers in computing history — set off an unusually technical conversation across the technology world. In it, Carmack floated the idea of using a long loop of optical fiber as a high-speed data cache for artificial intelligence model weights, essentially replacing conventional DRAM with light itself.
The proposal arrived at an acutely uncomfortable moment for the industry. DRAM prices are surging as AI’s insatiable appetite for memory bandwidth overwhelms supply. Data centers are strained; GPU clusters are memory-constrained. Carmack’s thought experiment was not a product roadmap or a business plan — he himself called it “amusing to consider” — but it was serious enough engineering to command attention from researchers, hardware architects, and commentators across the field.
“256 Tb/s data rates over 200 km distance have been demonstrated on single-mode fiber optic, which works out to 32 GB of data in flight, ‘stored’ in the fiber, with 32 TB/s bandwidth. Neural network inference and training can have deterministic weight reference patterns, so it is amusing to consider a system with no DRAM, and weights continuously streamed into an L2 cache by a recycling fiber loop.” — John Carmack, post on X, February 6, 2026
The Core Idea: Data in Flight as Storage
The concept exploits a fundamental property of optical fiber: propagation delay. Light in glass travels at roughly two-thirds of its speed in vacuum, so a signal takes close to a millisecond to traverse 200 kilometers of cable. That transit time, combined with extreme throughput, means a substantial volume of data exists inside the cable at any given moment, effectively “stored” in motion.
Carmack’s arithmetic is grounded in a real milestone. Single-mode fiber has demonstrated transmission rates of 256 Tb/s over distances of 200 kilometers. At that speed and distance, approximately 32 gigabytes of data are in the fiber at any instant. Configured as a closed loop, that cable would cycle data continuously, achieving an effective bandwidth of 32 TB/s — a figure that dwarfs current DRAM bandwidth by an enormous margin.
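Carmack’s arithmetic can be reproduced in a few lines. The line rate and distance come from the demonstration he cites; the refractive index below is a typical value for silica single-mode fiber, assumed here for illustration:

```python
C = 299_792_458          # speed of light in vacuum, m/s
N_GLASS = 1.468          # typical single-mode fiber core index (assumed value)
LINE_RATE_BPS = 256e12   # demonstrated line rate: 256 Tb/s
LENGTH_M = 200e3         # 200 km loop

v = C / N_GLASS                        # ~2.04e8 m/s: light slows in glass
transit_s = LENGTH_M / v               # one lap around the loop, ~0.98 ms
in_flight_bytes = LINE_RATE_BPS * transit_s / 8
bandwidth_Bps = LINE_RATE_BPS / 8      # 256 Tb/s expressed in bytes

print(f"transit time:   {transit_s * 1e3:.2f} ms")
print(f"data in flight: {in_flight_bytes / 1e9:.1f} GB")   # ~31 GB, i.e. Carmack's ~32 GB
print(f"bandwidth:      {bandwidth_Bps / 1e12:.0f} TB/s")  # 32 TB/s
```

Note that the two headline figures are linked: 32 TB/s is simply the 256 Tb/s line rate in bytes, and the ~32 GB capacity is that byte rate multiplied by the roughly one-millisecond transit time.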
The crucial insight Carmack adds is about AI workloads specifically. Neural network inference and training can access model weights in deterministic, sequential patterns — unlike the random-access demands of general-purpose computing. That predictability matters enormously. Delay-line memory — the ancient concept this echoes — only works when you know when data will be needed, because access is time-based rather than address-based. AI weight streaming fits this constraint naturally.
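The time-based constraint can be illustrated with a toy delay-line model, a minimal sketch rather than anything Carmack specified: words circulate past a single read tap, one per tick, so a consumer must know in advance at which tick its word will arrive rather than requesting it by address.

```python
from collections import deque

class DelayLoop:
    """Toy delay-line cache: a fixed set of words circulating past one tap."""

    def __init__(self, words):
        self.loop = deque(words)   # the contents "in flight"

    def tick(self):
        # One time step: read the word currently at the tap,
        # then recirculate it to the back of the loop.
        word = self.loop[0]
        self.loop.rotate(-1)
        return word

weights = [f"w{i}" for i in range(8)]
loop = DelayLoop(weights)

# Deterministic schedule: weight i passes the tap at tick i (mod loop length),
# so a consumer with a known, sequential access pattern never stalls.
seen = [loop.tick() for _ in range(8)]
assert seen == weights   # sequential access matches the circulating stream
```

A random access, by contrast, would have to wait up to a full lap for its word to come around, which is exactly why this scheme suits streaming weights but not general-purpose memory.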
A Modern Echo of Delay-Line Memory
Commenters on the original post were quick to note the historical parallel. Carmack himself acknowledged it: this is the modern equivalent of mercury delay-line memory, the technology used in early computers such as the UNIVAC I during the late 1940s and early 1950s. In those systems, data was encoded as acoustic pulses circulating through tubes of mercury, retrieved in timed loops rather than by direct addressing. Alan Turing famously experimented with the concept using gin as an alternative medium.
Carmack’s proposal revives the same principle, substituting a fiber-optic loop and photons for a mercury tube and sound waves — a trade that upgrades the concept by roughly seventy-five years of physics.
Elon Musk Joins the Conversation
On February 7, 2026, Elon Musk replied to Carmack’s post, suggesting an extension of the idea: using vacuum as the transmission medium rather than glass fiber. Since light travels faster in vacuum than in glass, this would reduce latency and signal loss. Musk also pointed out that higher refractive index materials could slow light deliberately, increasing data density per kilometer of cable.
The exchange drew attention but also skepticism. Commenters noted that “vacuum” fiber — known as hollow-core fiber — is a real research area, not science fiction, but that engineering it at scale remains challenging. The practicality of Musk’s extension was widely described as “iffy” even by those who found the broader concept interesting.
The Real Barriers
Carmack was candid about the obstacles, and the broader technical community added several more during the ensuing discussion.
Engineering challenges
Scale. A trillion-parameter AI model would require numerous fiber loops of this size operating in concert. Hundreds of kilometers of high-grade fiber would need to be packaged, routed, and integrated into operational data center environments — a physical and logistical undertaking of considerable complexity.
Signal integrity. Optical amplifiers and digital signal processors are required to maintain signal strength across long fiber distances. These components consume power, potentially offsetting one of the proposal’s main advantages: energy efficiency over DRAM.
Sequential-only access. Because the system is time-based rather than address-based, it only works for workloads that can be scheduled to match the data stream. General-purpose computing cannot use it. Even within AI, not all operations fit the deterministic weight-access model Carmack describes.
Cost. At current prices, 200 kilometers of high-grade single-mode fiber is expensive — and that is before accounting for the amplifiers, connectors, control systems, and infrastructure required to make a functional loop.
The More Practical Near-Term Alternative
Carmack’s post also pointed toward a grounded alternative that many researchers consider far closer to implementation. Rather than fiber loops, he suggested coupling large banks of inexpensive flash memory directly to AI accelerators, with carefully designed timing and pre-planned pipelines to ensure consistent data delivery. The key requirement: a standardized high-speed interface agreed upon by flash memory manufacturers and AI accelerator designers.
This is not a new idea in isolation. Research projects including Behemoth, FlashNeuron (both 2021), and FlashGNN have already explored using NAND flash as near-memory caches for neural network training. What is new is the urgency: given the scale of investment in AI infrastructure, the prospect of an industry-standard flash-to-accelerator interface now seems more commercially viable than it did even two years ago.
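The pre-planned pipeline idea can be sketched with a double-buffered prefetch, where reads from slow bulk storage overlap with compute so the accelerator never waits. The function names and the in-memory “flash” below are illustrative stand-ins, not any real interface:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for NAND flash holding weight chunks (illustrative data only).
FLASH = [list(range(i, i + 4)) for i in range(0, 16, 4)]

def read_chunk(i):
    return FLASH[i]       # stand-in for a slow bulk-storage read

def compute_layer(chunk):
    return sum(chunk)     # stand-in for a matmul over these weights

def stream_compute(n_chunks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(read_chunk, 0)             # prefetch first chunk
        for i in range(n_chunks):
            chunk = pending.result()                   # wait for current chunk
            if i + 1 < n_chunks:
                pending = io.submit(read_chunk, i + 1) # overlap the next read
            results.append(compute_layer(chunk))       # compute on current
    return results

print(stream_compute(4))   # [6, 22, 38, 54]
```

The point of the sketch is the scheduling discipline: because the order of weight accesses is known ahead of time, every read can be issued before it is needed, hiding the latency of the slower medium.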
What This Means for the Memory Market
Some coverage of this story has drawn sweeping conclusions — that DRAM prices will “collapse” or that the memory crisis will soon be “solved.” Those conclusions go well beyond what Carmack proposed or what the technical discussion supports.
The fiber loop concept is, by Carmack’s own framing, a thought experiment. Practical implementation is, by industry analyst estimates, likely five to ten years away at minimum — if it proves feasible at all. The flash memory alternative is closer to reality, but still requires significant industry coordination that has not yet occurred.
What the conversation does illuminate is a real and growing consensus in the hardware engineering community: the memory wall — not raw compute — is increasingly the limiting factor in AI infrastructure. Whether the solution involves fiber optics, flash memory, photonic integrated circuits, or some combination, the direction of travel is clear: the current DRAM-centric architecture of AI compute will need to evolve.
Fact Check
- Wrong: Multiple viral articles misidentify Carmack as “John W. Mamak” or attribute the proposal to “Craig Wright,” a completely unrelated figure known for cryptocurrency controversies. The correct name is John Carmack.
- Wrong: Some reports describe Carmack as the founder of “Did Software.” The correct name is id Software, co-founded by Carmack in 1991.
- Wrong: Several articles confuse 32 GB of data in-flight (a storage capacity figure) with 32 TB/s bandwidth (a throughput figure). These measure different things and cannot be substituted for one another.
- Overstated: Claims that this proposal will cause an “epic collapse” in DRAM prices are speculative. Carmack framed this as a thought experiment, and analysts estimate practical deployment remains years away at best.
- Accurate: The core technical numbers (256 Tb/s over 200 km, 32 GB in-flight, 32 TB/s effective bandwidth) correctly reflect demonstrated fiber-optic performance as cited in Carmack’s original post.
- Accurate: Elon Musk did respond on February 7, 2026, suggesting vacuum (hollow-core fiber) as a medium to reduce latency and loss.
Conclusion
John Carmack’s fiber loop proposal is a genuine contribution to how the engineering community thinks about AI memory constraints — even if it is unlikely to become a product anytime soon. It revives a 75-year-old principle of delay-line memory, scales it to the physics of modern optical networking, and applies it to the specific access patterns of neural network workloads. The result is a concept that is theoretically coherent, physically grounded in real demonstrated fiber speeds, and genuinely thought-provoking.
But it is not, as some viral summaries have claimed, an imminent solution to the DRAM crisis, a formal industry proposal, or a jointly authored plan by Carmack and Musk. It is a carefully reasoned thought experiment from one of computing’s most credible voices — and that, on its own terms, is worth taking seriously.
