Nvidia has a reputation for pushing the boundaries of computer hardware, and at CES 2025, the company took another giant leap forward. CEO Jensen Huang introduced Project Digits, a desktop-sized, personal AI supercomputer slated for release in May 2025. Priced from $3,000, Project Digits is built around the brand-new GB10 Grace Blackwell Superchip, which promises enough raw power to train and run advanced AI models—right from your desk.

Compact Form Factor, Massive Performance

In many ways, Project Digits follows in the footsteps of Nvidia’s Jetson family of AI-focused mini computers, but at a dramatically higher level of capability. Historically, running models with hundreds of billions of parameters has required enormous, power-hungry hardware housed in data centers. Project Digits flips that script. Featuring a sleek, Mac Mini–like design, this AI supercomputer is approximately the size of a small desktop PC and plugs into an ordinary power outlet. Despite its compact footprint, it can handle AI models with up to 200 billion parameters in a single box.

For especially ambitious projects, two Project Digits units can be linked together, enabling training and inference on models of up to 405 billion parameters. To put that in perspective, Meta’s largest Llama 3.1 model weighs in at 405 billion parameters, meaning a pair of these desktop machines can tackle that scale right on a desk.

Under the Hood: The GB10 Grace Blackwell Superchip

The core innovation behind Project Digits is the GB10 Grace Blackwell Superchip, the latest in Nvidia’s hardware lineup designed expressly for artificial intelligence workloads:

FP4 Precision:

The GB10 can achieve up to 1 petaflop of AI performance by leveraging FP4 precision (a 4-bit floating-point format that trades numerical precision for speed and memory savings). In practical AI tasks like large language models and advanced machine learning pipelines, FP4 precision can significantly boost training and inference speed without sacrificing too much accuracy.
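To see what "approximation" means concretely, here is an illustrative sketch (not Nvidia's actual implementation) of how an E2M1-style FP4 format works: each number is rounded to one of only a handful of representable magnitudes, and in real systems a per-block scale factor maps weights into that range first:

```python
# Positive magnitudes representable by the FP4 E2M1 format.
FP4_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 (E2M1) value, keeping the sign."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(FP4_VALUES, key=lambda v: abs(abs(x) - v))
    return sign * mag

# Example weights: small values collapse to coarse steps, which is the
# accuracy/speed trade-off the FP4 format makes.
weights = [0.03, -1.2, 2.7, 5.9]
print([quantize_fp4(w) for w in weights])  # → [0.0, -1.0, 3.0, 6.0]
```

Because each weight takes only 4 bits instead of 16 or 32, both memory traffic and arithmetic throughput improve, which is where the headline petaflop figure comes from.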

Grace CPU Integration:

The Superchip pairs Nvidia’s CUDA and Tensor Cores with a Grace CPU containing 20 Arm-based cores. The CPU-to-GPU connection is managed via NVLink-C2C, which reduces bottlenecks and keeps data moving quickly between the chip’s compute components.

Co-Development with MediaTek:

Nvidia worked closely with MediaTek—well-known for its high-performance, power-efficient Arm-based chip designs—to optimize the GB10 for power efficiency and cooling. As a result, Project Digits can run on standard office power and cooling systems, removing one of the biggest hurdles to personal AI computing.

High Memory Capacity & Storage

Large-scale AI models can consume a staggering amount of memory, both during training (which demands high GPU memory capacity) and inference (where real-time responses benefit from quick data access). To address this need, every Project Digits unit comes equipped with:

  • 128GB Unified Memory – Traditional laptops today commonly have 16GB or 32GB of RAM. By contrast, Project Digits’ 128GB of high-speed memory provides enough capacity to manage enormous datasets and neural network operations.
  • Up to 4TB NVMe Storage – NVMe storage is key for rapid read-write operations, especially when shuffling large training files. Users can store substantial amounts of data, models, and logs directly on the device.

For users grappling with extremely large datasets, this unified memory architecture means the CPU and GPU share the same pool, cutting down on the usual overhead of copying data between them.
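A quick back-of-envelope calculation (weights only, in decimal gigabytes, ignoring activations and KV caches) shows why 128GB of unified memory pairs naturally with FP4 and the quoted parameter counts:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory needed for model weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# 200B parameters at FP4 (0.5 bytes each) ≈ 100 GB — fits in one unit's 128 GB.
print(model_memory_gb(200, 0.5))   # → 100.0

# The same model at FP16 (2 bytes each) would need ~400 GB.
print(model_memory_gb(200, 2.0))   # → 400.0

# 405B parameters at FP4 ≈ 202.5 GB — fits across two linked units (256 GB).
print(model_memory_gb(405, 0.5))   # → 202.5
```

The exact capacity a real deployment needs depends on runtime overheads, but the arithmetic makes clear why low-precision formats are what bring these model sizes within reach of a desktop box.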

Software Stack: Nvidia AI Enterprise & Beyond

Hardware is only half of the equation for AI development. Nvidia has bundled a comprehensive set of software tools and frameworks to help data scientists and developers hit the ground running:

Nvidia DGX OS (Linux-based): The operating system underpins Project Digits, delivering HPC (high-performance computing)–grade stability and broad compatibility with AI workflows.

Nvidia AI Enterprise Suite: Provides orchestration tools, acceleration libraries, and pre-integrated solutions, ensuring that what you build on Project Digits will deploy seamlessly to cloud or data center setups.

Supported Frameworks: PyTorch, TensorFlow, and the broader Python tooling ecosystem, including Jupyter notebooks.

  • Nvidia NeMo: A specialized framework for creating, tuning, and deploying language and speech models.
  • Nvidia RAPIDS: Data science libraries that leverage GPUs for end-to-end data analytics, from ingestion and cleaning to training and visualization.

By aligning Project Digits with its existing enterprise software ecosystem, Nvidia allows AI practitioners to prototype, test, and deploy across different environments without rewriting code. It’s an especially enticing proposition for startups and research teams looking to move from experimentation to large-scale, production-grade solutions.

Use Cases: From Research Labs to Startups

At $3,000, Project Digits sits in a sweet spot for a variety of audiences:

AI Researchers & Academics

Instead of contending for shared cluster resources, labs can offer each researcher or student their own supercomputer. This speeds up iteration, fosters hands-on experimentation, and can handle everything from natural language processing to genomics.

Startups & Independent Developers

With a lower up-front investment, smaller companies can quickly develop and fine-tune models in-house. When the time comes to scale, they can easily move these workloads to a hosted environment, all while preserving the same code and architecture.

Enterprises & Prototyping Teams

Large enterprises often rely on centralized HPC resources that require scheduling and advance reservations. Project Digits could operate as a personal playground for R&D teams to test new ideas before pushing them into production.

Hobbyists & AI Enthusiasts

While still a sizable investment, this system opens the door to AI experimentation without requiring complex or space-consuming data center hardware. In certain scenarios, it might be overkill—but for those serious about exploring AI, it offers an unmatched level of personal computing power.

The Broader Nvidia AI Ecosystem

Project Digits joins a growing lineup of Nvidia AI-focused computers designed for different scales and budgets:

  • Jetson Orin Nano Super (from $249): Announced in December 2024, this cost-effective model targets hobbyists, tinkerers, and small startups who need basic on-device AI. It supports models up to 8 billion parameters—far smaller than Digits, but still powerful for edge applications and robotics projects.
  • Nvidia Data Center GPUs (A100, H100, Grace Hopper, etc.): For organizations needing massive, cloud-based compute, these GPUs dominate HPC clusters worldwide.
  • DGX & HGX Systems: Nvidia’s turnkey AI training systems for large enterprises and research institutions, featuring multiple GPUs linked together with high-bandwidth interconnects.

With Project Digits, Nvidia is catering to users who need data-center-class performance on an individual scale—bridging a gap that once left smaller teams (and independent developers) unable to afford or physically host powerful AI hardware.

Looking Ahead

Nvidia’s core message is that “AI will be mainstream in every application for every industry…” as CEO Jensen Huang put it. By placing a personal AI supercomputer on the desk of every data scientist, student, and developer, the company is betting that new, innovative applications will emerge at a rapid clip. The combination of advanced hardware and a robust software stack may significantly lower barriers to entry for anyone wanting to experiment with AI, beyond what was possible even a year or two ago.

Project Digits hits the market in May 2025, with a starting price of $3,000 and multiple configurations likely available. While it remains to be seen how widely adopted personal supercomputing will become, Nvidia’s track record suggests the potential is huge. From code experimentation and academic research to early-stage commercial products, Project Digits could be the device that brings HPC-scale AI power into mainstream offices, classrooms, and homes.

In short, Nvidia’s announcement of Project Digits is a watershed moment for personal AI computing. It’s small enough to fit on a desk yet powerful enough to train 200-billion-parameter models. Coupled with the advanced Grace CPU, NVLink-C2C interconnect, and Nvidia’s deep software ecosystem, Project Digits just might mark a new era—one where next-level AI creation and research are as accessible as a desktop computer.