Run on-device or on your server
Run the model directly on edge hardware, or connect to a GPU server on your local network. Same agent, flexible compute.
You can't fit a $3,000 GPU on a drone or a conveyor belt. You can't send proprietary factory data to the cloud.
OpenJet lets edge devices dynamically tap into the heavy local compute you already own. Run open-source models on the device or on a server on your network — air-gapped, zero API bills, and you own the entire compute stack end-to-end.
Air-Gapped • Open-Source Models • Zero API Bills • Own Your Compute
Install and launch
$ git clone https://github.com/l-forster/open-jet.git
$ cd open-jet
$ ./install.sh
$ open-jet --setup
$ open-jet
How it works
OpenJet bridges the gap between where AI needs to run and where the compute actually lives — without ever touching the public internet.
Run the model directly on edge hardware, or connect to a GPU server on your local network. Same agent, flexible compute.
No proprietary APIs, no vendor lock-in. OpenJet runs open-source, locally hosted models you control and can audit.
Your data never leaves your network. No cloud calls, no telemetry, no external dependencies on the critical path.
Auto-detects your hardware, provisions the backend, and hands you a working agent — no manual runtime assembly.
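A deployment like this is typically driven by a small per-device config that picks on-device or server-backed inference. The sketch below is purely illustrative — none of these keys, the server address, or the model name are confirmed by OpenJet's documentation:

```toml
# Hypothetical edge-device config — key names and values are assumptions.
backend = "remote"              # or "local" to run the model on-device
server  = "192.168.1.50:8080"   # GPU server on the local network
model   = "llama-3.1-8b"        # any open-source model you host yourself
```

Flipping `backend` between `local` and `remote` is what "same agent, flexible compute" means in practice: the agent logic stays identical while inference moves to wherever the hardware lives.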
The setup engine
OpenJet handles the heavy lifting of running local AI so you do not have to. It detects your hardware, provisions the backend, and applies the right settings so the agent is fast and stable on whatever you point it at.
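To make "detects your hardware" concrete, here is a minimal sketch of the kind of check a setup engine performs before provisioning a backend. This is an illustration of the general technique, not OpenJet's actual implementation:

```shell
# Sketch of GPU auto-detection — illustrative only, not OpenJet's real code.
if command -v nvidia-smi >/dev/null 2>&1; then
  backend="cuda"   # NVIDIA driver tooling present: provision a GPU backend
else
  backend="cpu"    # no GPU tooling found: fall back to a CPU inference backend
fi
echo "selected backend: $backend"
```

A real setup engine would go further — probing VRAM, quantization support, and available runtimes — but the shape is the same: inspect the machine, then pick settings that are fast and stable on that hardware.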
For companies
Buy one heavy GPU server, deploy 50 cheap devices across the floor. No cloud round-trips, zero API bills, and commercial terms your legal and security teams can sign off on.
Enterprise and production use requires a paid license.
Includes a security attestation document for internal review.
Email support with a 48-hour response target.
$200/mo per team or $500/mo unlimited, with annual billing available.
Join the OpenJet Discord for setup help and model discussion.