Geekbench AI 1.0 Launches Cross-Platform Benchmarking Tool for AI Performance
Geekbench AI 1.0, a cross-platform benchmarking tool that evaluates a device's AI performance, is now available for download as an app on Android, iOS, Linux, macOS, and Windows.
On Thursday, Primate Labs released Geekbench AI 1.0, a benchmarking suite that evaluates a device's artificial intelligence (AI) performance. The app is free to download on all major platforms. To produce a device's score, it runs a series of tests on the CPU, GPU, and neural processing unit (NPU), and developers can choose which AI frameworks and models to use for the workloads.
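Primate Labs has not published the suite's internals, but the core idea of pointing the same workload at different compute units is straightforward to illustrate. The sketch below is a hypothetical harness, not Geekbench AI's code: it uses ONNX Runtime (one of several frameworks a benchmark like this could build on) to time an arbitrary model on different execution providers. The model path and the provider list are placeholders.

```python
# Hypothetical sketch: timing one model on different ONNX Runtime backends.
# This is NOT Geekbench AI's code; it only illustrates backend selection.
import time
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"  # placeholder: any float32 ONNX model

def benchmark(providers, runs=50):
    # Each execution provider targets a different compute unit (CPU, GPU, ...).
    session = ort.InferenceSession(MODEL_PATH, providers=providers)
    inp = session.get_inputs()[0]
    # Substitute 1 for dynamic dimensions; a real suite feeds real datasets.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.rand(*shape).astype(np.float32)
    session.run(None, {inp.name: data})  # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {inp.name: data})
    return runs / (time.perf_counter() - start)  # inferences per second

# CUDAExecutionProvider requires the onnxruntime-gpu package and a GPU.
for providers in (["CPUExecutionProvider"], ["CUDAExecutionProvider"]):
    print(providers[0], f"{benchmark(providers):.1f} inferences/sec")
```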
“Geekbench AI is a benchmarking suite with a testing methodology for machine learning, deep learning, and AI-centric workloads, all with the same cross-platform utility and real-world workload reflection that our benchmarks are well-known for,” the company stated in a blog post announcing the app.
The company highlighted that the app automatically runs ten distinct AI workloads, each across three distinct data types. This breadth of testing gives users a clearer picture of how well AI performs on their devices. The app is available for Android, iOS, Linux, macOS, and Windows and can benchmark smartphones, tablets, laptops, desktops, and similar devices.
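In Geekbench AI's case, the three data types are the precisions at which each workload runs: single precision (FP32), half precision (FP16), and quantized 8-bit integers. The NumPy sketch below is illustrative only; real workloads run full model inference rather than a lone matrix multiply, but it shows what the same operation looks like at each precision.

```python
# Sketch: one matrix workload at three precisions (single, half, quantized).
# Illustrative only; Geekbench AI runs complete AI models, not bare matmuls.
import time
import numpy as np

def timed_matmul(a, b, runs=20):
    start = time.perf_counter()
    for _ in range(runs):
        np.matmul(a, b)
    return (time.perf_counter() - start) / runs  # seconds per run

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)
x = rng.standard_normal((512, 512)).astype(np.float32)

# Single precision: the FP32 baseline.
fp32 = timed_matmul(w, x)
# Half precision: same values in 16-bit floats (faster on supporting hardware).
fp16 = timed_matmul(w.astype(np.float16), x.astype(np.float16))
# Quantized: values scaled into 8-bit integers, trading accuracy for speed.
# (Accumulate in int32 to avoid overflow.)
scale = 127 / np.abs(w).max()
int8 = timed_matmul((w * scale).astype(np.int8).astype(np.int32),
                    (x * scale).astype(np.int8).astype(np.int32))
print(f"fp32 {fp32*1e3:.2f} ms, fp16 {fp16*1e3:.2f} ms, int8 {int8*1e3:.2f} ms")
```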
Notably, the preview versions of the app were called Geekbench ML; the company renamed it because OEMs have begun using the term AI to describe these workloads. To handle the complexity of measuring AI performance, the benchmark also takes the device's workloads, hardware, and AI framework into account.
At its core, the Geekbench AI app tests a device for both speed and accuracy, revealing any trade-offs it makes between performance and efficiency. Results also depend on factors such as the datasets, frameworks, and runtimes used, and the workloads span tasks like computer vision and natural language processing (NLP).
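The accuracy side of that trade-off can be made concrete with a small example. The sketch below quantizes a weight matrix to 8-bit integers and measures how far the result drifts from a full-precision reference; the relative-error metric here is a stand-in, since Primate Labs has not published its exact scoring formula.

```python
# Sketch: measuring the accuracy lost to quantization, the kind of
# speed-vs-accuracy trade-off an AI benchmark's accuracy scores capture.
# The relative-error metric is a hypothetical stand-in for real scoring.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal((256, 256)).astype(np.float32)
inputs = rng.standard_normal((256,)).astype(np.float32)

# Full-precision reference output.
reference = weights @ inputs

# Quantize the weights to int8, then dequantize and recompute.
scale = 127 / np.abs(weights).max()
quantized = np.round(weights * scale).astype(np.int8)
approx = (quantized.astype(np.float32) / scale) @ inputs

# Lower relative error means the faster quantized path gave up less accuracy.
rel_error = np.linalg.norm(reference - approx) / np.linalg.norm(reference)
print(f"relative error after int8 quantization: {rel_error:.4%}")
```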
Users can check the CPU, GPU, and NPU performance of various devices on Primate Labs' ML Benchmarks leaderboard to identify the best-performing ones. The minimum software requirements for the application are as follows:

Linux: Ubuntu 22.04 LTS (64-bit) or later, 4GB RAM (AMD or Intel processor)
macOS: macOS 14 or later, 8GB RAM (Apple Silicon or Intel processor)
Windows: Windows 10 (64-bit) or later, 8GB RAM (AMD, ARM, or Intel processor)