Google Lyra Codec: Why Asterisk and FreeSWITCH Won’t Support It Anytime Soon
A Technical Analysis of the Barriers to Open-Source VoIP Platform Adoption
Introduction
When Google unveiled the Lyra voice codec in February 2021, it was greeted with considerable excitement in the telecommunications and voice-over-IP (VoIP) community.
Lyra promised something remarkable: near-natural-sounding voice quality at bitrates as low as 3.2 kbps — a level where traditional codecs produce barely intelligible audio.
For a world where billions of people still connect on 2G networks or congested mobile connections, this seemed like a transformational technology.
Nearly five years later, however, the two most widely deployed open-source telephony platforms — Asterisk (the engine behind FreePBX) and FreeSWITCH — have not integrated Lyra codec support.
This is not an oversight. It reflects a set of deep, structural barriers that are unlikely to be resolved in the near term. This article explains why.
What Makes Lyra Different — and Difficult
To understand why Lyra is hard to integrate, it helps to understand why it is impressive. Unlike traditional codecs such as G.711, G.729, or even Opus, which use classical digital signal processing to compress audio waveforms, Lyra takes an entirely different approach. It uses a neural network to extract a small set of acoustic features from the speaker’s voice, transmits only those features at extremely low bitrates, and then uses a generative machine learning model — derived from Google’s SoundStream architecture — to reconstruct (or synthesize) the voice on the receiving end.
Lyra V2, released in September 2022, improved on the original in every dimension: latency dropped from approximately 100ms to just 20ms (comparable to Opus at 26.5ms), encoding and decoding speed improved fivefold, and audio quality at 3.2–9.2 kbps now rivals or exceeds Opus at 10–14 kbps. The model was trained on thousands of hours of speech in over 70 languages, making it broadly applicable globally.
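The bandwidth gap is easy to quantify with simple arithmetic. At 3.2 kbps, a 20 ms Lyra frame carries only 8 bytes of codec payload, versus 160 bytes for G.711. A quick illustrative sketch (codec payload only; RTP/UDP/IP headers, which dominate at these packet sizes, are excluded):

```python
# Bytes of codec payload produced per 20 ms audio frame at various bitrates.
# Illustrative arithmetic only; real packets add ~40+ bytes of RTP/UDP/IP overhead.

def payload_bytes(bitrate_bps: float, frame_ms: float = 20) -> float:
    """Codec payload bytes for one frame of audio at the given bitrate."""
    return bitrate_bps * (frame_ms / 1000) / 8

for name, bps in [("G.711", 64_000), ("Opus @ 12 kbps", 12_000), ("Lyra @ 3.2 kbps", 3_200)]:
    print(f"{name:16} -> {payload_bytes(bps):.0f} bytes per 20 ms frame")
# G.711            -> 160 bytes per 20 ms frame
# Opus @ 12 kbps   -> 30 bytes per 20 ms frame
# Lyra @ 3.2 kbps  -> 8 bytes per 20 ms frame
```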
But all of these advantages come from the neural network itself — and that is precisely what makes integration into traditional PBX platforms so challenging.
Barrier 1: Lyra Requires ML Infrastructure, Not Just a Codec Module
Traditional codecs are implemented as compact, self-contained C libraries. Adding G.729 or Opus to Asterisk requires writing a relatively straightforward codec module that calls the library’s encode/decode functions. The codec operates purely in the signal processing domain — no external dependencies, no runtime model loading, no GPU or ML accelerator considerations.
Lyra V2 is fundamentally different. It is built on TensorFlow Lite, Google’s lightweight ML inference framework. Integrating Lyra into Asterisk or FreeSWITCH would require embedding TensorFlow Lite as a runtime dependency — a significant and complex addition to platforms that have been carefully designed to avoid heavyweight dependencies. This represents an architectural change, not a simple codec plugin. The Lyra codebase also uses Google’s Bazel build system, which is incompatible with the Makefiles and Autoconf systems that Asterisk and FreeSWITCH use — creating an additional layer of build engineering complexity.
Barrier 2: Server-Side Scalability Is a Fundamental Problem
A production Asterisk or FreeSWITCH server may handle hundreds or thousands of simultaneous calls. For traditional codecs, the CPU cost of encoding and decoding each call is negligible — G.711 in particular requires almost no computation. Even Opus, a modern and complex codec, is designed to be highly CPU-efficient.
Lyra’s neural network inference is inherently more expensive. While Google reports that Lyra V2 on a Pixel 6 Pro processes a 20ms audio frame in 0.57ms — 35 times faster than real-time — this is measured on a single-call basis using a modern smartphone with hardware acceleration. A telephony server handling 500 simultaneous calls would need to run 500 parallel Lyra inference operations every 20ms. Without dedicated ML hardware (which most VoIP servers do not have), this places an unreasonable CPU burden on server infrastructure, making Lyra economically impractical for server-side transcoding deployments.
This is not merely a performance concern — it changes the entire cost model for running a PBX. Operators who currently run efficient, low-cost infrastructure would face significant hardware upgrades or capacity reductions to support Lyra at scale.
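To make the cost concrete, here is a back-of-envelope estimate. It assumes (generously) that one server CPU core matches the Pixel 6 Pro's 0.57 ms per-frame inference figure, and that transcoding a bidirectional call requires one Lyra decode plus one Lyra encode; both assumptions are illustrative, not benchmarks:

```python
# Back-of-envelope capacity estimate for server-side Lyra transcoding.
# Assumptions (illustrative, not measured on server hardware):
#   - 0.57 ms of compute per 20 ms frame (Google's Pixel 6 Pro figure),
#   - transcoding one bidirectional call needs a decode AND an encode pass,
#   - a server CPU core performs roughly like the Pixel's.

FRAME_MS = 20
INFER_MS = 0.57          # per frame, per direction
CALLS = 500

frames_per_sec_per_call = 1000 / FRAME_MS                      # 50 frames/s
ms_per_call_per_sec = 2 * INFER_MS * frames_per_sec_per_call   # decode + encode
total_compute_ms = CALLS * ms_per_call_per_sec                 # inference ms per wall-clock second
cores_needed = total_compute_ms / 1000

print(f"Per call: {ms_per_call_per_sec:.1f} ms of inference per second")
print(f"{CALLS} calls: ~{cores_needed:.1f} dedicated cores for Lyra alone")
```

Even under these optimistic assumptions, a 500-call server would dedicate close to 30 cores to Lyra inference alone, compared to effectively zero for G.711.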
Barrier 3: Lyra Was Designed for Endpoints, Not Servers
Google designed Lyra to solve a specific problem: enabling high-quality voice calls on low-bandwidth mobile connections, running on the endpoint device (a smartphone or app). The canonical use case is a mobile app on a poor network — the phone compresses audio with Lyra, transmits a tiny bitstream across a congested connection, and reconstructs it on the other end.
Asterisk and FreeSWITCH are server-side infrastructure. Their role in a VoIP architecture is to route, bridge, and in many cases transcode calls between different endpoints and networks. If Lyra is used as a pure passthrough codec between two endpoints that both support it, the PBX does not need to understand Lyra at all — it simply forwards the RTP packets. The problem arises when the PBX needs to transcode, which is extremely common in real-world deployments: connecting a Lyra endpoint to a PSTN gateway (G.711), a SIP trunk (Opus or G.729), a legacy desk phone, or any device that doesn’t support Lyra.
This transcoding requirement is where server-side Lyra support becomes both necessary and problematic. In most enterprise and carrier deployments, pure passthrough is the exception rather than the rule.
Barrier 4: No IETF Standard or RTP Payload Definition
Every codec used in SIP-based VoIP infrastructure must have a standardized way to be negotiated between endpoints. This is done through SDP (Session Description Protocol) during call setup, where both parties agree on which codec to use and at what parameters. Standard codecs have IANA-registered RTP payload types and IETF RFCs that define exactly how they should be described in SDP.
Lyra has no such standard. There is no IETF RFC for Lyra, no registered IANA payload type, and no standardized SDP attribute definition. As experts in the WebRTC community have noted, Lyra’s standardization process has not even started, and adding it to the WebRTC standard — let alone to IETF’s SIP/RTP specifications — is considered unlikely within the next several years. Without this standardization, there is no interoperable, vendor-neutral way for two SIP endpoints to negotiate Lyra as a common codec. This alone is a blocking issue for any PBX platform that aims for broad compatibility.
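For comparison, this is what interoperable negotiation looks like today. A SIP endpoint offering Opus (per RFC 7587) with a G.711 fallback sends an SDP body along these lines; no equivalent rtpmap name, fmtp parameters, or payload-format specification exists for Lyra:

```
m=audio 49170 RTP/AVP 111 0
a=rtpmap:111 opus/48000/2
a=fmtp:111 maxaveragebitrate=20000;useinbandfec=1
a=rtpmap:0 PCMU/8000
```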
Barrier 5: Lyra Remains Effectively a Google-Proprietary Technology
Although Google open-sourced Lyra under the permissive Apache 2.0 license in 2021, it remains — in practical terms — a Google-controlled technology. As of early 2026, the only confirmed production deployment of Lyra is within Google’s own products: Google Meet uses Lyra to improve audio quality when user bandwidth is limited. No other major communication platform has adopted it.
Contrast this with Opus, which was developed through a collaborative industry effort, became an IETF standard (RFC 6716), was made mandatory in WebRTC, and is now supported by virtually every SIP client, PBX, and media gateway. Lyra has none of that ecosystem momentum. It is not supported in ffmpeg, not integrated into libwebrtc’s public codebase, and not present in any mainstream SIP softphone such as Linphone, Zoiper, or Bria.
For Asterisk and FreeSWITCH maintainers, adding a codec that no endpoints actually use would provide zero practical benefit to their user base.
Barrier 6: Absence of Community Demand
Both Asterisk and FreeSWITCH are community-driven open-source projects. Feature development is largely prioritized based on what enterprise deployers, telecom carriers, and individual contributors need and request. The overwhelming majority of Asterisk and FreeSWITCH deployments operate on reliable broadband or LTE connections, where Opus at 20–40 kbps performs excellently and Lyra’s ultra-low-bandwidth advantages are irrelevant.
There is no meaningful demand from the existing user base for Lyra support. The niche where Lyra excels — mobile apps serving users on extremely poor connections in emerging markets — is typically addressed by mobile app developers building on top of WebRTC or custom media stacks, not by deploying Asterisk servers. The communities and use cases simply do not overlap significantly.
What Would Need to Change for Lyra to Gain Adoption
For Lyra to eventually find its way into Asterisk or FreeSWITCH, several things would need to happen — and most of them are outside the control of those projects’ maintainers.
IETF Standardization: Lyra would need a formal IETF RFC defining its RTP payload format and SDP negotiation attributes. Without this, it cannot be used in interoperable SIP infrastructure.
WebRTC Inclusion: If Google were to integrate Lyra natively into Chrome and the public libwebrtc codebase, it would rapidly gain endpoint adoption, creating demand for server-side support. Industry observers note that this could happen, but Google has made no such commitment.
Endpoint Adoption by SIP Clients: If major SIP softphones like Linphone or Zoiper added Lyra support, it would create a real-world interoperability need that PBX platforms would be pressured to address.
Build System Simplification: Google would need to provide a Lyra build path that doesn’t require the Bazel build system, making it feasible for open-source projects using standard Makefiles to integrate it.
Server-Side Efficiency Improvements: A lighter-weight, SIMD-optimized Lyra variant suitable for server transcoding at scale would need to be developed — something analogous to how Opus was engineered for low CPU overhead.
Conclusion
Google Lyra is a genuinely impressive technical achievement. Its ability to deliver intelligible, natural-sounding voice at 3.2 kbps — where competing codecs produce unintelligible noise — makes it a potentially transformational technology for connecting the world’s most underserved internet users. The Apache 2.0 license means cost is no barrier to adoption.
But the path from a compelling open-source codec to integration in battle-tested, widely deployed telephony infrastructure is long and demanding. Lyra faces structural barriers on multiple fronts simultaneously: it requires ML infrastructure that doesn’t fit the traditional codec module model; it was designed for endpoints rather than server-side transcoding; it lacks IETF standardization required for SIP interoperability; it has no ecosystem of supporting endpoints; and the communities that develop and deploy Asterisk and FreeSWITCH have no pressing need for it today.
None of these barriers are insurmountable in principle. But overcoming all of them requires sustained investment and coordination — from Google, from the IETF standards community, from SIP client developers, and from the open-source telephony community itself. As of early 2026, none of that coordination is visibly underway. Asterisk and FreeSWITCH users should not expect native Lyra support in the near future.
If your use case genuinely requires Lyra — for example, building a mobile app serving users in bandwidth-constrained emerging markets — the practical path today is to implement it at the endpoint level using Google’s open-source SDK, and design your architecture to avoid server-side transcoding of Lyra streams.
