WEKA Introduces New WEKApod Appliances to Accelerate Enterprise AI Deployments

WEKApod Nitro and WEKApod Prime Provide Customers with Flexible, Affordable, Scalable Solutions to Fast-Track AI Innovation

WekaIO (WEKA), the AI-native data platform company, unveiled two new WEKApod™ data platform appliances: the WEKApod Nitro for large-scale enterprise AI deployments and the WEKApod Prime for smaller-scale AI deployments and multi-purpose high-performance data use cases. WEKApod data platform appliances are turnkey solutions that combine WEKA® Data Platform software with best-in-class high-performance hardware to provide a powerful data foundation for accelerated AI and modern performance-intensive workloads.

The WEKA Data Platform delivers scalable AI-native data infrastructure purpose-built for even the most demanding AI workloads, sustainably accelerating GPU utilization and retrieval-augmented generation (RAG) data pipelines while providing efficient write performance for AI model checkpointing. Its advanced cloud-native architecture enables ultimate deployment flexibility, seamless data portability, and robust hybrid cloud capability.

WEKApod delivers all the capabilities and benefits of WEKA Data Platform software in an easy-to-deploy appliance ideal for organizations leveraging generative AI and other performance-intensive workloads across a broad spectrum of industries. Key benefits include:

WEKApod Nitro: Delivers exceptional performance density at scale, with over 18 million IOPS in a single cluster, making it ideal for large-scale enterprise AI deployments and AI solution providers training, tuning, and inferencing LLM foundation models. WEKApod Nitro is certified for NVIDIA DGX SuperPOD™. Capacity starts at half a petabyte of usable data and is expandable in half-petabyte increments.

WEKApod Prime: Seamlessly handles high-performance data throughput for HPC, AI training, and inference, making it ideal for organizations that want to scale their AI infrastructure while maintaining cost efficiency and balanced price-performance. For customers with less extreme data processing requirements, WEKApod Prime offers flexible configurations that scale up to 320 GB/s read bandwidth, 96 GB/s write bandwidth, and up to 12 million IOPS. Organizations can customize configurations with optional add-ons, paying only for what they need and avoiding overprovisioning. Capacity starts at 0.4PB of usable data, with options extending up to 1.4PB.

“Accelerated adoption of generative AI applications and multi-modal retrieval-augmented generation has permeated the enterprise faster than anyone could have predicted, driving the need for affordable, highly performant, and flexible data infrastructure solutions that deliver extremely low latency, drastically reduce the cost per token generated, and can scale to meet the current and future needs of organizations as their AI initiatives evolve,” said Nilesh Patel, chief product officer at WEKA. “WEKApod Nitro and WEKApod Prime offer unparalleled flexibility and choice while delivering exceptional performance, energy efficiency, and value to accelerate customers’ AI projects anywhere and everywhere they need them to run.”
