OpenJet

Physical Intelligence on the Edge.

You can't fit a $3,000 GPU on a drone or a conveyor belt. You can't send proprietary factory data to the cloud.
OpenJet lets edge devices dynamically tap into the heavy local compute you already own. Run open-source models on the device or on a server on your network — air-gapped, zero API bills, and you own the entire compute stack end-to-end.

Air-Gapped • Open-Source Models • Zero API Bills • Own Your Compute

Install and launch

$ git clone https://github.com/l-forster/open-jet.git
$ cd open-jet
$ ./install.sh
$ open-jet --setup
$ open-jet

How it works

One heavy GPU server in a secure closet. Cheap devices across the floor.

OpenJet bridges the gap between where AI needs to run and where the compute actually lives — without ever touching the public internet.

Run on-device or on your server

Run the model directly on edge hardware, or connect to a GPU server on your local network. Same agent, flexible compute.
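One way to wire up this choice is to probe for the GPU server on the local network and fall back to on-device inference when nothing answers. A minimal sketch, assuming a `llama-server` instance exposing its `/health` endpoint; the `OPENJET_SERVER` variable and `pick_backend` helper are hypothetical illustrations, not real OpenJet configuration:

```shell
#!/bin/sh
# Decide where inference runs: a GPU server on the local network,
# or the edge device itself. OPENJET_SERVER and pick_backend are
# illustrative names, not part of OpenJet.

pick_backend() {
    server_url="$1"
    # llama-server exposes a /health endpoint; a quick probe with a
    # short timeout tells us whether the heavy box is reachable.
    if curl -sf --max-time 2 "${server_url}/health" >/dev/null 2>&1; then
        echo "server"
    else
        echo "local"
    fi
}

BACKEND=$(pick_backend "${OPENJET_SERVER:-http://192.168.1.50:8080}")
echo "running inference on: ${BACKEND}"
```

Because the probe never leaves the LAN, the fallback decision itself stays air-gapped: an unreachable server simply means the agent runs on-device.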

Open-source models only

No proprietary APIs, no vendor lock-in. OpenJet runs open-source, locally hosted models you control and can audit.

Fully air-gapped

Your data never leaves your network. No cloud calls, no telemetry, no external dependencies on the main path.

One-command setup

Auto-detects your hardware, provisions the backend, and gets a working agent ready — without manual runtime assembly.

The setup engine

Hardware detection that actually works.

OpenJet handles the heavy lifting of running local AI so you do not have to. It detects your hardware, provisions the backend, and applies the right settings so the agent is fast and stable on whatever you point it at.

Auto-detects Jetson, CUDA, Apple Silicon, or CPU-only hardware to maximize GPU offload
Reuses your existing `llama-server` binary when you already have one
Downloads, builds, and provisions `llama.cpp` entirely from scratch if you do not
Automatically pulls recommended `.gguf` models, or lets you bring your own
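The detection step above can be sketched in plain shell. This is an illustrative approximation of the kinds of checks involved, not OpenJet's actual probe order: Jetson boards ship `/etc/nv_tegra_release`, discrete NVIDIA GPUs answer to `nvidia-smi`, and Apple Silicon reports `arm64` from `uname` on Darwin.

```shell
#!/bin/sh
# Rough hardware-detection sketch: classify the host so the right
# llama.cpp build options and GPU offload settings can be chosen.
# The probe order and labels here are illustrative, not OpenJet's.

detect_hardware() {
    if [ -f /etc/nv_tegra_release ]; then
        # Jetson boards ship this file with the L4T release string.
        echo "jetson"
    elif command -v nvidia-smi >/dev/null 2>&1; then
        # A working nvidia-smi implies a discrete CUDA-capable GPU.
        echo "cuda"
    elif [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
        # Apple Silicon: llama.cpp offloads via Metal.
        echo "metal"
    else
        echo "cpu"
    fi
}

echo "detected: $(detect_hardware)"
```

Each branch maps to a different build of `llama.cpp` (CUDA, Metal, or CPU-only), which is why detection has to happen before the backend is provisioned.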

For companies

A manufacturing plant, a defense contractor, or a robotics lab — own the entire compute stack.

Buy one heavy GPU server, deploy 50 cheap devices across the floor. No round-trips to the cloud, zero API bills, and commercial terms your legal and security teams can sign off on.

Commercial license

Enterprise and production use requires a paid license.

Security paperwork

Includes a security attestation document for internal review.

Support

Email support with a 48-hour response target.

Simple pricing

$200/mo per team or $500/mo unlimited, with annual billing available.