Cerebras Systems, a pioneer in high-performance AI compute, has launched a groundbreaking solution set to transform AI inference. On August 27, 2024, the company announced Cerebras Inference, the fastest AI inference service in the world. With performance metrics that dwarf those of traditional GPU-based systems, Cerebras Inference delivers 20 times the speed at a fraction of the cost, setting a new benchmark in AI computing.
Unprecedented Speed and Cost Efficiency
Cerebras Inference is designed to deliver exceptional performance across a range of AI models, particularly in the rapidly evolving segment of large language models (LLMs). For example, it processes 1,800 tokens per second for the Llama 3.1 8B model and 450 tokens per second for the Llama 3.1 70B model. This performance is not only 20 times faster than that of NVIDIA GPU-based solutions but also comes at a significantly lower cost. Cerebras offers the service starting at just 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B, representing a 100x improvement in price-performance over existing GPU-based offerings.
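As a rough illustration of what those published figures imply, here is a minimal back-of-the-envelope sketch, using only the prices and throughput numbers quoted above (not an official Cerebras rate card or calculator), that estimates cost and wall-clock generation time for a given token budget:

```python
# Back-of-the-envelope estimate based on the figures quoted in this article.
# Prices (USD per million tokens) and throughput (tokens/second) come from the
# text above; they are not an official Cerebras pricing API.
QUOTED = {
    "llama-3.1-8b":  {"usd_per_million_tokens": 0.10, "tokens_per_second": 1800},
    "llama-3.1-70b": {"usd_per_million_tokens": 0.60, "tokens_per_second": 450},
}

def estimate(model: str, tokens: int) -> tuple[float, float]:
    """Return (estimated cost in USD, estimated generation time in seconds)."""
    spec = QUOTED[model]
    cost = tokens / 1_000_000 * spec["usd_per_million_tokens"]
    seconds = tokens / spec["tokens_per_second"]
    return cost, seconds

if __name__ == "__main__":
    for model in QUOTED:
        cost, seconds = estimate(model, 1_000_000)
        print(f"{model}: ~${cost:.2f} and ~{seconds / 60:.1f} min to generate 1M tokens")
```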
Maintaining Accuracy While Pushing the Boundaries of Speed
One of the most impressive aspects of Cerebras Inference is its ability to maintain state-of-the-art accuracy while delivering unmatched speed. Unlike approaches that sacrifice precision for speed, Cerebras' solution stays in the 16-bit domain for the entirety of the inference run. This ensures that the performance gains do not come at the expense of model output quality, a critical factor for developers focused on precision.
Micah Hill-Smith, Co-Founder and CEO of Artificial Analysis, highlighted the significance of this achievement: "Cerebras is delivering speeds an order of magnitude faster than GPU-based solutions for Meta's Llama 3.1 8B and 70B AI models. We're measuring speeds above 1,800 output tokens per second on Llama 3.1 8B, and above 446 output tokens per second on Llama 3.1 70B – a new record in these benchmarks."
The Growing Importance of AI Inference
AI inference is the fastest-growing segment of AI compute, accounting for roughly 40% of the total AI hardware market. The advent of high-speed AI inference, such as that offered by Cerebras, is akin to the introduction of broadband internet: it unlocks new opportunities and heralds a new era for AI applications. With Cerebras Inference, developers can now build next-generation AI applications that require complex, real-time performance, such as AI agents and intelligent systems.
Andrew Ng, Founder of DeepLearning.AI, underscored the importance of speed in AI development: "DeepLearning.AI has multiple agentic workflows that require prompting an LLM repeatedly to get a result. Cerebras has built an impressively fast inference capability which will be very helpful to such workloads."
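To see why per-token speed matters so much for these workloads, consider a minimal, purely illustrative sketch of an agentic loop: each step must wait for the previous model response before the next prompt can be issued, so end-to-end latency is roughly the number of steps times the per-call generation time. The `call_llm` function and the loop structure below are hypothetical placeholders, not a description of DeepLearning.AI's or Cerebras' actual workflows.

```python
# Illustrative only: a generic multi-step "agentic" loop in which each prompt
# depends on the previous reply, so the calls cannot be parallelized and total
# latency scales as (number of steps) x (per-call generation time).
from typing import Callable

def run_agent(call_llm: Callable[[str], str], task: str, max_steps: int = 5) -> str:
    """call_llm is any function that sends a prompt and returns the model's reply."""
    context = task
    for step in range(max_steps):
        reply = call_llm(f"Step {step + 1}: given '{context}', what should happen next?")
        context = reply          # the next prompt is built from this reply
        if "DONE" in reply:      # hypothetical stop condition
            break
    return context
```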
Broad Industry Support and Strategic Partnerships
Cerebras has garnered strong support from industry leaders and has formed strategic partnerships to accelerate the development of AI applications. Kim Branson, SVP of AI/ML at GlaxoSmithKline, an early Cerebras customer, emphasized the transformative potential of this technology: "Speed and scale change everything."
Other companies, such as LiveKit, Perplexity, and Meter, have also expressed enthusiasm for the impact Cerebras Inference will have on their operations. These companies are leveraging Cerebras' compute capabilities to create more responsive, human-like AI experiences, improve user interaction in search engines, and enhance network management systems.
Cerebras Inference: Tiers and Accessibility
Cerebras Inference is available in three competitively priced tiers: Free, Developer, and Enterprise. The Free Tier provides free API access with generous usage limits, making it accessible to a broad range of users. The Developer Tier offers a flexible, serverless deployment option, with Llama 3.1 models priced at 10 cents and 60 cents per million tokens. The Enterprise Tier caters to organizations with sustained workloads, offering fine-tuned models, custom service level agreements, and dedicated support, with pricing available upon request.
Powering Cerebras Inference: The Wafer Scale Engine 3 (WSE-3)
At the heart of Cerebras Inference is the Cerebras CS-3 system, powered by the industry-leading Wafer Scale Engine 3 (WSE-3). This AI processor is unmatched in its size and speed, offering 7,000 times more memory bandwidth than NVIDIA's H100. The WSE-3's massive scale enables it to serve many concurrent users while sustaining blistering speeds. This architecture allows Cerebras to sidestep the trade-offs that typically constrain GPU-based systems, providing best-in-class performance for AI workloads.
Seamless Integration and a Developer-Friendly API
Cerebras Inference is designed with developers in mind. It features an API that is fully compatible with the OpenAI Chat Completions API, allowing for easy migration with minimal code changes. This developer-friendly approach ensures that integrating Cerebras Inference into existing workflows is as seamless as possible, enabling rapid deployment of high-performance AI applications.
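Because the API follows the OpenAI Chat Completions format, an existing OpenAI SDK client can in principle be repointed at the Cerebras endpoint by changing only the base URL, API key, and model name. The sketch below assumes the endpoint URL, the CEREBRAS_API_KEY environment variable, and the "llama3.1-8b" model identifier; these are illustrative guesses, so check the official Cerebras documentation for the actual values.

```python
# Minimal sketch of pointing an OpenAI Chat Completions client at Cerebras Inference.
# ASSUMPTIONS: the base URL, the CEREBRAS_API_KEY environment variable, and the
# "llama3.1-8b" model name are illustrative; consult the Cerebras docs for real values.
import os
from openai import OpenAI  # the standard OpenAI Python SDK

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # assumed environment variable for your key
)

response = client.chat.completions.create(
    model="llama3.1-8b",                     # assumed model identifier
    messages=[{"role": "user", "content": "Summarize wafer-scale inference in one sentence."}],
)
print(response.choices[0].message.content)
```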
Cerebras Systems: Driving Innovation Across Industries
Cerebras Systems is not only a leader in AI computing but also a key player across a range of industries, including healthcare, energy, government, scientific computing, and financial services. The company's solutions have been instrumental in driving breakthroughs at institutions such as the National Laboratories, Aleph Alpha, the Mayo Clinic, and GlaxoSmithKline.
By providing unmatched speed, scalability, and accuracy, Cerebras is enabling organizations across these sectors to tackle some of the most challenging problems in AI and beyond. Whether it is accelerating drug discovery in healthcare or expanding computational capabilities in scientific research, Cerebras is at the forefront of innovation.
Conclusion: A New Era for AI Inference
Cerebras Systems is setting a new standard for AI inference with the launch of Cerebras Inference. By offering 20 times the speed of traditional GPU-based systems at a fraction of the cost, Cerebras is not only making AI more accessible but also paving the way for the next generation of AI applications. With its cutting-edge technology, strategic partnerships, and commitment to innovation, Cerebras is poised to lead the AI industry into a new era of unprecedented performance and scalability.
For more information on Cerebras Systems and to try Cerebras Inference, visit www.cerebras.ai.