New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems
New mixture of experts benchmark tracks emerging architectures for AI models

Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release …