NemoClaw is NVIDIA's open-source AI agent reference stack. Getting it running correctly — with the right security policies, model configuration, and agent integrations — takes time and expertise.
Full installation on your infrastructure — Linux server, cloud VM, or on-prem hardware. We configure Docker, Node.js, and the NemoClaw runtime with your system specs.
Connect your agents to the right Nemotron models — local via Ollama or cloud via NVIDIA's Endpoint API — based on your latency, cost, and data privacy requirements.
Build purpose-built agents on top of NemoClaw: customer support bots, internal knowledge assistants, workflow automation, and more — integrated with your existing systems.
Configure NemoClaw's four security layers — Network, Filesystem, Process, and Inference — using NVIDIA's OpenShell runtime (Landlock, seccomp, netns). Apply policy presets for Slack, Discord, Telegram, and custom platforms.
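To make the four layers concrete, here is a rough sketch of what a per-platform policy preset could look like. The schema, field names, and values below are illustrative assumptions for this page, not NemoClaw's or OpenShell's actual configuration format:

```typescript
// Illustrative only: this policy shape is an assumption, not a real NemoClaw schema.
type SecurityPolicy = {
  network: { allowOutbound: string[]; namespaceIsolation: boolean }; // netns layer
  filesystem: { landlockPaths: { path: string; access: "ro" | "rw" }[] }; // Landlock layer
  process: { seccompProfile: string; maxChildren: number }; // seccomp layer
  inference: { allowedModels: string[]; cloudFallback: boolean }; // inference layer
};

// A hypothetical preset for a Slack-connected agent: outbound traffic limited
// to Slack, writes confined to one directory, local-only inference.
const slackPreset: SecurityPolicy = {
  network: { allowOutbound: ["slack.com:443"], namespaceIsolation: true },
  filesystem: { landlockPaths: [{ path: "/var/lib/nemoclaw", access: "rw" }] },
  process: { seccompProfile: "default", maxChildren: 4 },
  inference: { allowedModels: ["nemotron-mini"], cloudFallback: false },
};
```

The point of a preset like this is that each platform (Slack, Discord, Telegram) gets its own tight allow-list rather than one permissive global policy.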
Configure the NemoClaw privacy router to mix local and cloud inference based on data classification. This keeps sensitive business data on-premises while using cloud models for general queries.
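The routing logic can be sketched in a few lines. Everything here is a stand-in: the keyword classifier is a naive placeholder for real classification rules, and the endpoint URLs are assumptions (port 11434 is Ollama's default; the cloud URL is a placeholder, not a confirmed NemoClaw setting):

```typescript
// Sketch of data-classification routing; all names and endpoints are hypothetical.
type Route = "local" | "cloud";

// Naive keyword matcher standing in for your actual classification rules.
const SENSITIVE = [/\bssn\b/i, /salary/i, /patient/i, /invoice/i];

function routeQuery(text: string): Route {
  return SENSITIVE.some((p) => p.test(text)) ? "local" : "cloud";
}

function endpointFor(route: Route): string {
  // localhost:11434 is Ollama's default port; the cloud URL is a placeholder.
  return route === "local"
    ? "http://localhost:11434/api/generate"
    : "https://cloud.example.com/v1/chat/completions";
}
```

In practice the classifier would be driven by your data handling policy (step one of our process), not a keyword list, but the routing shape stays the same: classify first, then pick the endpoint.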
NemoClaw is active alpha software. We monitor releases, apply updates, and handle breaking changes — so your agents keep running without you needing to track upstream changes.
Need additional agents, custom integrations, or ongoing maintenance? Contact us for a custom quote.
We've done this before. The process is straightforward.
We spend 30 minutes understanding your infrastructure, use case, and data handling requirements. This shapes every decision that follows — which Nemotron model to use, whether local or cloud inference fits, which security policies apply.
We install NemoClaw on your server (or help you provision one). This covers Docker configuration, Node.js 20+, and the NemoClaw runtime — confirmed against NVIDIA's official requirements: 4+ vCPU, 8–16 GB RAM, 20–40 GB disk.
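The stated minimums are easy to sanity-check from Node itself. The snippet below is a generic host check we might run during setup, not a NemoClaw tool; it covers Node version, vCPUs, and RAM (disk space is omitted because checking it portably requires platform-specific calls):

```typescript
import os from "node:os";

// Checks a host against the stated minimums: Node 20+, 4+ vCPU, 8 GB+ RAM.
function meetsRequirements(nodeMajor: number, vcpus: number, ramGb: number): boolean {
  return nodeMajor >= 20 && vcpus >= 4 && ramGb >= 8;
}

// process.version looks like "v20.11.1"; take the major component.
const nodeMajor = Number(process.version.slice(1).split(".")[0]);
const vcpus = os.cpus().length;
const ramGb = os.totalmem() / 1024 ** 3;

console.log(meetsRequirements(nodeMajor, vcpus, ramGb) ? "host OK" : "host below spec");
```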
We configure inference routing (Ollama local vs. NVIDIA Endpoint API), set up the privacy router with your data classification rules, and apply security policies using NVIDIA OpenShell — Landlock, seccomp, and network namespace controls.
We build your first OpenClaw agent on top of NemoClaw — connected to the channels your team uses (Slack, Discord, Telegram, or custom) — and integrate it with your existing tools and workflows.
You get full documentation of the setup, a runbook for common operations, and 30 days of support for questions. Your team knows exactly what's running and why.
NemoClaw is an open-source reference stack built by NVIDIA. It provides a secure, sandboxed runtime for AI agents using NVIDIA's OpenShell (Landlock, seccomp, and network namespace isolation), with support for Nemotron models and a privacy router for mixing local and cloud inference. It's Apache 2.0 licensed and currently in alpha.
Not for basic setup. NemoClaw runs on Ubuntu 22.04+, macOS Apple Silicon via Docker Desktop, or Windows via WSL — with 4+ vCPU, 8–16 GB RAM, and Docker installed. You don't need a GPU to use NemoClaw with NVIDIA's cloud Endpoint API. A GPU becomes relevant if you want to run large Nemotron models locally via Ollama.
OpenClaw is the open-source AI agent framework that handles orchestration, channel integrations (WhatsApp, Discord, Telegram), memory, and scheduling. NemoClaw provides the secure inference and sandboxed execution layer. They're complementary — OpenClaw manages what an agent does; NemoClaw controls how it does it securely.
NemoClaw is alpha software as of March 2026. The core functionality works, but the API surface is still evolving. For production use, our setup service includes stability hardening, a documented upgrade path, and ongoing support to manage upstream changes from NVIDIA.
Full NemoClaw installation and configuration, Nemotron model setup (local Ollama or NVIDIA Endpoint API), security policy configuration, one OpenClaw agent built and deployed to your channels of choice, privacy router setup, and 30 days of post-setup support. Custom agents beyond the first one are quoted separately.
Yes. We've deployed on AWS EC2, Google Cloud, DigitalOcean, Hetzner, and bare metal. If it runs Ubuntu 22.04 and meets the hardware requirements, we can work with it.
Get in touch and we'll scope out your setup. Most projects are live within one week of kickoff.
Contact CodeClaw → Book a 15-minute call or fill out the form — we'll scope your setup for free.
Or email us directly at [email protected]