
The Cerebras IPO is one of the most anticipated tech listings of 2026 — and for good reason. Cerebras Systems, the AI chip startup behind the world’s largest computer chip, has filed to go public on Nasdaq under the ticker CBRS, backed by a $10 billion OpenAI deal and $510 million in 2025 revenue.
If you’re an investor, a tech professional, or simply someone trying to understand why this offering matters, this guide breaks down every critical detail: the company’s technology, financials, risks, key partnerships, and what the Cerebras IPO signals for the broader AI chip market.
What Is Cerebras Systems?
Cerebras Systems is a Sunnyvale, California-based AI hardware company founded in 2016 by Andrew Feldman and Sean Lie. It builds the Wafer-Scale Engine (WSE), widely recognized as the largest AI chip ever manufactured — a single silicon wafer that functions as one unified processor.
Unlike conventional AI chips, which are made by cutting a wafer into dozens of individual dies, Cerebras uses the entire wafer as one chip. The WSE-3, its current flagship, is physically around 57 times larger than Nvidia’s H100 GPU. That scale translates directly into faster AI training and inference, particularly for large language models.
Feldman, who previously sold server startup SeaMicro to AMD for $334 million in 2012, describes Cerebras as building “the fastest AI hardware for training and inference.” The company has 708 employees and has raised approximately $2.8 billion across eight funding rounds since its founding.
Why Cerebras Is Filing for Its IPO Now
The Cerebras IPO in 2026 is actually the company’s second attempt at going public. Its first filing, submitted in 2024, was withdrawn after a U.S. national-security review by the Committee on Foreign Investment in the United States (CFIUS). The review was triggered by a large investment from G42, an Abu Dhabi-based AI and investment company — and at the time, G42 accounted for 87% of Cerebras’s revenue, raising serious concerns about dependence on a foreign-linked customer.
Three key changes cleared the path for the 2026 Cerebras IPO:
- CFIUS clearance: Cerebras restructured G42’s equity stake to non-voting shares, removing G42 from governance influence. The review was formally resolved in early 2026.
- Revenue diversification: By the end of 2025, G42’s share of Cerebras’s revenue had fallen from 87% to just 24%, as new enterprise and hyperscaler customers came aboard.
- The OpenAI mega-deal: In January 2026, Cerebras secured a landmark compute contract with OpenAI reportedly worth more than $10 billion — arguably the single biggest AI infrastructure deal between two private companies in history.
Add to that a $1 billion Series H round in February 2026 at a $23 billion valuation, led by Tiger Global and backed by AMD, Fidelity, Benchmark Capital, Coatue, and Altimeter, and the company had both the momentum and the credibility to go public.
The IPO is expected to price in mid-May 2026, with Morgan Stanley acting as lead underwriter. Cerebras is targeting a raise of approximately $2 billion on Nasdaq.
The Wafer-Scale Engine: How Cerebras Chips Compare to Nvidia
What Makes the WSE-3 Different?
Conventional AI accelerators — including Nvidia’s H100 and H200 GPUs — are manufactured as individual chips, then connected across circuit boards via high-bandwidth interconnects. This design introduces latency and bandwidth bottlenecks whenever data needs to pass between chips during model training or inference.
The Cerebras WSE-3 is designed to remove those bottlenecks. By keeping all processing cores, on-chip memory, and interconnects within a single wafer-sized chip, data rarely needs to leave the die for models that fit in on-chip memory. The result is dramatically lower memory latency and higher throughput for workloads like large language model inference — the exact use case OpenAI and other foundation model companies care most about.
Cerebras WSE-3 vs. Nvidia H100: Head-to-Head Comparison
| Feature | Cerebras WSE-3 | Nvidia H100 |
|---|---|---|
| Chip size | Full wafer (~46,225 mm²) | ~814 mm² |
| On-chip memory | 44 GB SRAM | 80 GB HBM3 |
| AI cores | 900,000+ | ~16,900 CUDA cores |
| Memory bandwidth | 21 PB/s (on-chip) | 3.35 TB/s |
| Primary strength | Low-latency LLM inference | Versatile training & inference |
| Software ecosystem | Cerebras SDK (growing) | CUDA (dominant) |
| Availability | Cloud + direct purchase | Widely available |
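As a quick sanity check, the headline ratios can be recomputed directly from the spec table. These are vendor-reported figures, treated here as approximate:

```python
# Recompute the headline ratios from the spec table above.
# All inputs are vendor-reported figures, treated as approximate.

wse3_area_mm2 = 46_225   # WSE-3: full wafer
h100_area_mm2 = 814      # Nvidia H100 die
wse3_bw_pb_s = 21.0      # WSE-3 on-chip SRAM bandwidth (PB/s)
h100_bw_tb_s = 3.35      # H100 HBM bandwidth (TB/s)

size_ratio = wse3_area_mm2 / h100_area_mm2
bw_ratio = (wse3_bw_pb_s * 1000) / h100_bw_tb_s  # convert PB/s to TB/s

print(f"Die-area ratio: ~{size_ratio:.0f}x")           # ~57x, matching the text
print(f"Memory-bandwidth ratio: ~{bw_ratio:,.0f}x")    # ~6,269x (on-chip vs. HBM)
```

The ~57x die-area figure matches the claim earlier in the article; the bandwidth gap is even larger, which is why the architecture targets latency-sensitive inference.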
The hardware advantage is real — and it is why OpenAI reportedly chose Cerebras over Nvidia for fast inference workloads. As Feldman put it bluntly in a recent Wall Street Journal interview, “[Nvidia] didn’t want to lose the fast inference business at OpenAI, and we took that from them.”
Cerebras IPO Financials: Revenue, Valuation & Key Metrics
Revenue and Profitability
According to the SEC filing submitted on April 18, 2026, Cerebras reported:
- $510 million in total revenue for 2025, representing 76% year-over-year growth from 2024
- $87.9 million in GAAP net income for 2025 — a dramatic reversal from a $485 million net loss in 2024
- A non-GAAP net loss of $75.7 million when certain one-time items are excluded
- $24.6 billion in remaining performance obligations as of December 31, 2025 — a forward revenue backlog that represents extraordinary visibility into future revenues
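Two figures worth deriving from these numbers: the size of the swing between the GAAP and non-GAAP results, and how many years of 2025-level revenue the backlog represents (a rough coverage ratio, not a forecast):

```python
# Derived figures from the 2025 numbers quoted from the filing.
gaap_net_income_m = 87.9     # GAAP net income, USD millions
non_gaap_result_m = -75.7    # non-GAAP net loss, USD millions
revenue_m = 510.0            # 2025 revenue, USD millions
backlog_m = 24_600.0         # remaining performance obligations, USD millions

swing_m = gaap_net_income_m - non_gaap_result_m  # net effect of excluded items
coverage = backlog_m / revenue_m                 # backlog vs. one year of 2025 revenue

print(f"GAAP vs. non-GAAP swing: ${swing_m:.1f}M")            # $163.6M of net exclusions
print(f"Backlog coverage: ~{coverage:.0f}x 2025 revenue")     # ~48x
```

The $163.6 million swing is exactly the item investors will want disclosed in detail, and the ~48x backlog coverage is the basis for the "extraordinary visibility" claim.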
Historical Revenue Growth
| Year | Revenue | YoY Growth |
|---|---|---|
| 2022 | $24.6M | — |
| 2023 | $78.7M | +220% |
| 2024 | $289M (est.) | +267% |
| 2025 | $510M | +76% |
Valuation
The Cerebras IPO is targeting a valuation in the $22–25 billion range, consistent with the $23 billion post-money valuation established in the February 2026 Series H round. Secondary-market pricing on pre-IPO platforms recently suggested an implied valuation of $26–28 billion, indicating healthy institutional appetite ahead of the roadshow.
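At the targeted range, the implied trailing price-to-sales multiple is easy to compute. This is a rough screen only — it ignores cash, backlog, and growth rate:

```python
# Implied trailing price-to-sales multiple at the targeted IPO range.
revenue_2025_b = 0.510             # 2025 revenue, USD billions
valuation_range_b = (22.0, 25.0)   # targeted valuation range, USD billions

low_ps = valuation_range_b[0] / revenue_2025_b
high_ps = valuation_range_b[1] / revenue_2025_b
print(f"Trailing P/S: ~{low_ps:.0f}x to ~{high_ps:.0f}x")  # ~43x to ~49x
```

A 43–49x trailing sales multiple is rich for a hardware company, which is why the $24.6 billion backlog figures so prominently in the bull case.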
Risks Investors Should Know Before the Cerebras IPO
The Cerebras IPO story is compelling, but it carries real risks that any serious investor needs to weigh carefully.
Customer Concentration
Even after diversification, Cerebras still depends heavily on a small number of large customers. The OpenAI agreement — while enormously valuable — also means that a breakdown in that relationship could materially harm revenues. The filing explicitly states that OpenAI “represents a substantial portion of our projected revenues over the next several years.”
Software Ecosystem Gap
Nvidia’s CUDA platform has been refined over nearly two decades and is deeply embedded in the AI development workflow. Virtually every major AI framework, model, and research workflow is optimized for CUDA. Cerebras’s software stack, while functional, lacks the breadth, community support, and tooling depth of CUDA. Closing this gap will take years of sustained investment.
Manufacturing Dependency
Cerebras relies entirely on TSMC to manufacture its wafer-scale chips. Because the WSE-3 uses an entire silicon wafer, yield management and production capacity are uniquely challenging. Any disruption at TSMC — geopolitical, logistical, or technical — could directly impair Cerebras’s ability to fulfill its customer commitments.
Regulatory Overhang
The CFIUS review has been resolved, but Cerebras still has contractual relationships with G42, a UAE-based entity. Shifts in U.S.–UAE geopolitical relations or new national-security policies could reintroduce compliance complications.
Non-GAAP Profitability
While Cerebras is technically GAAP-profitable in 2025, the non-GAAP picture shows a $75.7 million loss. Investors will need to scrutinize exactly which items are being excluded and whether those exclusions are recurring in nature.
The OpenAI and AWS Deals That Changed Everything
Two partnerships fundamentally transformed the Cerebras IPO narrative.
The OpenAI Partnership
In January 2026, OpenAI signed a multi-year compute agreement with Cerebras valued at more than $10 billion, providing up to 750 MW of AI processing capacity through 2028. As part of the broader arrangement, OpenAI also gave Cerebras a $1 billion loan at a 6% annual interest rate to help build data center infrastructure. OpenAI received warrants to purchase up to 33.4 million shares of Cerebras non-voting Class N stock — warrants that fully vest only if OpenAI purchases 2 gigawatts of computing power from Cerebras over the life of the deal.
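The loan and warrant terms translate into simple numbers. Note that the pro-rata vesting calculation below is purely illustrative — the terms quoted above only specify full vesting at 2 gigawatts, not a linear schedule:

```python
# Arithmetic on the loan and warrant terms described above.
loan_usd = 1_000_000_000   # OpenAI's loan to Cerebras
annual_rate = 0.06         # 6% annual interest per the reported terms

annual_interest = loan_usd * annual_rate
print(f"Simple annual interest: ${annual_interest / 1e6:.0f}M")  # $60M/year

vesting_target_gw = 2.0    # compute purchases required for the warrants to fully vest
contracted_mw = 750        # capacity in the announced agreement

# Illustrative only: assumes linear vesting, which the reported terms do not state.
fraction = (contracted_mw / 1000) / vesting_target_gw
print(f"Announced capacity covers {fraction:.1%} of the vesting target")  # 37.5%
```

In other words, the announced 750 MW gets OpenAI only partway to the 2 GW full-vesting threshold — the warrant structure is an incentive for substantially more purchasing.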
This deal does two things simultaneously: it validates Cerebras’s chip architecture at the highest level of the AI industry, and it provides the revenue durability that institutional investors need to support the Cerebras IPO at a multi-billion-dollar valuation.
The AWS Agreement
Shortly before the IPO filing, Cerebras also announced an agreement with Amazon Web Services to deploy Cerebras chips in Amazon data centers. The AWS deal is significant for two reasons. First, it moves Cerebras into hyperscaler infrastructure — historically Nvidia’s most protected territory. Second, it creates a distribution channel that can reach AWS’s enormous enterprise customer base without Cerebras needing to build its own direct sales motion at scale.
Together, OpenAI and AWS represent a powerful legitimacy stack for the Cerebras IPO: the world’s leading AI lab and the world’s largest cloud provider are both betting on Cerebras hardware.
Is the Cerebras IPO a Good Investment?
The direct answer: the Cerebras IPO presents a genuinely differentiated hardware bet in a market that has overwhelmingly favored Nvidia — but it comes with above-average concentration risk and an immature software ecosystem.
Here is a structured way to think about the investment case:
Bull case for the Cerebras IPO:
- The only public pure-play alternative to Nvidia in AI inference chips
- $24.6 billion in remaining performance obligations provides multi-year revenue visibility
- OpenAI and AWS partnerships validate both technology and business model
- Revenue growing at 76% YoY with improving margins
- GAAP profitability achieved in 2025, rare for hardware startups at this stage
Bear case for the Cerebras IPO:
- Customer concentration in a handful of large accounts remains high
- CUDA’s software moat is vast and Cerebras has not yet matched it
- Non-GAAP loss of $75.7 million suggests underlying cost structure challenges
- TSMC dependency creates supply chain fragility
- IPO valuation of $22–25 billion leaves less margin of safety than earlier private rounds
For long-term investors who believe AI inference will continue to scale rapidly and that the market will support more than one major chip architecture, the Cerebras IPO is worth close attention. For risk-averse investors, the customer concentration and software ecosystem gap may warrant waiting for at least two post-IPO quarterly reports before committing.
What the Cerebras IPO Means for the AI Chip Market
The Cerebras IPO is more than a single company’s public debut. It is a stress test for a market thesis: does Wall Street believe that specialized AI hardware can challenge Nvidia’s dominance?
For years, the AI chip landscape has looked like a monoculture. Nvidia’s GPUs power roughly 80–90% of all AI training and inference workloads globally, supported by CUDA’s ecosystem lock-in. But the economics of AI are shifting. As inference — running models in production — becomes the dominant cost driver for AI companies, the need for low-latency, high-throughput architectures is intensifying. That is precisely where the Cerebras WSE-3 architecture is designed to compete.
The Cerebras IPO also arrives alongside a broader wave of anticipated listings. High-profile private companies like SpaceX, Anthropic, and OpenAI are all expected to eventually access public markets, creating a new category of “AI infrastructure” stocks that institutional investors are actively trying to establish positions in.
If the Cerebras IPO prices well and its shares hold up in early trading, it could open the door for other AI chip startups — including Groq, SambaNova, and Tenstorrent — to pursue public offerings of their own. Conversely, a disappointing debut could cool investor appetite for capital-intensive AI hardware plays for years.
The market is watching closely.
Frequently Asked Questions (FAQ)
What makes this AI hardware company different from traditional chipmakers?
Unlike conventional semiconductor companies that produce smaller chips cut from silicon wafers, this company builds a full wafer-scale processor. That means the entire silicon wafer functions as a single chip rather than being divided into multiple smaller units. This architecture significantly reduces latency and improves data transfer speeds, especially for large-scale AI workloads. Traditional GPUs rely on connecting multiple chips together, which introduces communication overhead, while a unified processor eliminates many of those bottlenecks. This design is particularly beneficial for training and running large language models, where speed and efficiency are critical.
Why is this public offering generating so much attention?
The excitement comes from a combination of technological innovation, strong financial growth, and high-profile partnerships. The company is positioning itself as a serious challenger in a market long dominated by a single major player. Additionally, its rapid revenue growth and shift toward profitability have caught investor attention. The involvement of major AI and cloud companies also adds credibility, suggesting that its technology is not just experimental but already being adopted at scale.
How does the company generate revenue?
Revenue primarily comes from selling AI hardware systems and providing cloud-based access to its computing infrastructure. Instead of relying solely on hardware sales, the company has adopted a hybrid model that includes long-term contracts for AI compute services. These agreements often span multiple years and involve large-scale deployments for enterprise clients, research institutions, and AI-focused organizations. This approach creates more predictable revenue streams compared to traditional hardware sales cycles.
Is this company profitable?
The financial picture is improving, but it requires careful interpretation. While recent reports show positive net income under standard accounting rules, adjusted figures indicate that the company is still investing heavily in growth. These investments include research and development, infrastructure expansion, and scaling operations. For investors, the key question is whether these expenses will lead to sustained profitability in the future.
What are the biggest risks involved?
There are several important risks to consider. First, the company relies on a relatively small number of large customers, which means losing even one major client could significantly impact revenue. Second, its software ecosystem is still developing, while competitors have well-established platforms with widespread adoption. Third, manufacturing depends on external foundries, which introduces supply chain risks. Finally, regulatory and geopolitical factors could affect international partnerships and operations.
How does it compare to existing AI chip leaders?
The main difference lies in architecture and specialization. While established players offer versatile chips that support a wide range of applications, this company focuses on optimizing performance for specific AI workloads. Its technology excels in scenarios where large models require fast data processing with minimal latency. However, competitors maintain an advantage in software support, developer tools, and overall ecosystem maturity. This creates a trade-off between raw performance and ease of integration.
Who are the key partners and why do they matter?
Strategic partnerships play a major role in validating the company’s technology. Collaborations with leading AI organizations and cloud providers demonstrate that the hardware can handle real-world, large-scale workloads. These partnerships also provide access to broader customer bases and infrastructure, reducing the need for the company to build everything independently. In many cases, such deals include long-term commitments that ensure steady demand for its products and services.
What does this mean for the future of AI infrastructure?
This development signals a shift toward more specialized hardware solutions in the AI industry. As AI models grow larger and more complex, the demand for efficient, high-performance computing continues to increase. If this approach proves successful, it could encourage other companies to explore alternative chip architectures, leading to greater competition and innovation in the market. Over time, this could reduce reliance on a single dominant provider and create a more diverse ecosystem.
Should investors consider participating immediately or wait?
The answer depends on individual risk tolerance and investment strategy. Early participation offers the potential for high returns if the company successfully captures market share and scales its technology. However, it also comes with higher uncertainty, particularly given the company’s reliance on a few large customers and its evolving software ecosystem. More cautious investors may prefer to wait for additional financial results and market performance after the listing before making a decision.
How could this impact the broader technology sector?
If successful, this move could reshape how investors view AI infrastructure companies. It may open the door for other hardware-focused startups to access public markets, increasing competition and accelerating innovation. Additionally, it could influence how enterprises choose their AI infrastructure, potentially shifting demand toward more specialized solutions. In the long run, this could lead to faster advancements in AI capabilities and more efficient deployment of large-scale models.
Key Takeaways
- Cerebras Systems filed for its IPO on April 18, 2026, targeting a Nasdaq listing under the ticker CBRS in mid-May 2026
- The company is seeking to raise approximately $2 billion at a valuation of $22–25 billion
- 2025 revenue reached $510 million, up 76% year-over-year, with $87.9 million in GAAP net income
- The WSE-3 chip is physically ~57x larger than Nvidia’s H100 and designed for faster LLM inference
- A $10 billion deal with OpenAI and a new AWS data center agreement validate the technology and business model
- Key risks include customer concentration, CUDA’s software moat, and TSMC manufacturing dependency
- The Cerebras IPO will serve as a landmark test of whether public markets are ready to back a credible Nvidia alternative