📦 JarvisLabs GPU
v1.0.1 — Agent guide for running and monitoring GPU experiments on JarvisLabs.ai with the jl CLI.
JarvisLabs GPU CLI (jl) — Agent Guide

Install the CLI

Install the JarvisLabs CLI as a tool:

uv tool install jarvislabs

Alternative Python install:

pip install jarvislabs
Getting Started

Verify auth with jl status --json before doing anything. If not logged in, use jl setup --token <token> --yes. You can also authenticate via export JL_API_KEY="...".

Use --help on any command to discover flags (e.g., jl run --help, jl create --help). If something goes wrong, use jl run logs, jl run status, and jl exec to diagnose — don't guess.
Mental Model

Machine commands (jl create/list/get/pause/resume/destroy/rename/ssh/exec/upload/download) = GPU instance lifecycle and access.

jl run = managed job on an instance. Uploads code, sets up a Python environment, runs your script in the background with log tracking.

jl exec = run any command on an instance. Use it for system checks (nvidia-smi, ps, df), debugging failed runs, inspecting files, or any raw shell access. No environment setup, no tracking. This is your escape hatch when jl run doesn't cover your use case.

Instances

Creating

jl create --gpu L4 --storage 40 --yes --json
jl create --gpu L4 --spot --yes --json
jl create --vm --cpu --yes --json
--gpu is required for GPU instances. Use --spot only for GPU containers, not GPU VMs or CPU VMs. CPU VMs are created with --vm --cpu; omit --vcpus/--ram to use the smallest available CPU plan from the backend. Run jl create --help for all available flags.
Lifecycle

Instances have three states that matter: Running (billing active), Paused (compute billing stopped, storage billing continues, data persists), and Destroyed (everything deleted).

jl pause --yes --json    # stop compute billing, keep data
jl resume --yes --json   # restart a paused instance
jl destroy --yes --json  # permanently delete
Resume caveats:

- Resume is region-locked: an instance always resumes in its original region. GPU swaps are only possible within that region.
- Resume may return a new machine_id. Always use the returned ID for subsequent operations.
- Spot resume is explicit: pass --spot when you want a paused GPU container to resume as spot. Without --spot, resume is on-demand.
- CPU VM resume uses the CPU VM backend path. Pass both --vcpus and --ram if you want to change CPU size on resume.
- Run jl resume --help for all available flags (GPU swap, storage expansion, rename, etc.).
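Because resume may hand back a new machine_id, it helps to always adopt the ID from the parsed jl resume --json output rather than reusing the old one. A minimal sketch, assuming the parsed output is a dict that may carry a "machine_id" key (the field name comes from the caveat above; the surrounding JSON shape is an assumption):

```python
# Hedged sketch: after `jl resume --json`, continue with the ID the
# backend returned; fall back to the old ID only if none is present.
def id_after_resume(old_id: str, resume_output: dict) -> str:
    new_id = resume_output.get("machine_id")  # field per the resume caveats
    return new_id or old_id
```

Feed this the result of json.loads on the resume output before any subsequent pause/destroy/exec call.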
SSH, exec, upload, and download only work on running instances.
Regions & GPUs

Valid region codes for new instances: IN2, EU1.

IN1 is winding down. New instances and filesystems can no longer be created in IN1. Existing IN1 instances can still be resumed, paused, destroyed, and renamed; existing IN1 filesystems can still be listed, resized, and removed. Guide users with IN1 resources to the migration doc: https://docs.jarvislabs.ai/in1-migration.

If --region is omitted, the CLI picks a region based on GPU availability.
Constraints:

- EU1: H100 and H200 only; single-GPU launches only right now; 100 GB minimum storage (auto-bumped).
- VM template: IN2 and EU1 only; requires at least one SSH key; 100 GB minimum storage.
Run jl gpus to check GPU availability and pricing. The output shows GPU Containers and GPU VMs tables with separate availability for each. Spot prices are shown only for GPU containers.

Run jl resources when you also need CPU VM availability and pricing. It shows GPU containers, GPU VMs, and CPU VMs, with one shared available/unavailable legend at the end.

How to read jl gpus --json availability:
- num_free_devices: free GPUs on that server. These can be used for normal creates, and also for spot creates when spot_price is present.
- effective_num_free_devices: GPUs available for on-demand creates on that server, including GPUs currently used by spot instances that can be preempted.
- These counts are per server; they are not the complete regional GPU capacity.
- workload_type tells which launch type the row belongs to: "container" means use it for normal GPU container creates, "vm" means use it for GPU VM creates, and null means the row applies to both containers and VMs.

Ports & Services
Container instances expose default HTTP ports (each gets its own HTTPS URL):

- 8889: JupyterLab (url field)
- 7007: IDE (vs_url field)
- 6006: available on generic templates like pytorch (endpoints[0])
VM instances (jl create --gpu ... --vm) get SSH-only access. VMs require at least one SSH key registered (jl ssh-key add). Use ssh_command from jl get --json.
To expose a service (FastAPI, Gradio, etc.), bind to 0.0.0.0:6006 — it's accessible via endpoints[0] on generic templates. Use --http-ports "7860,8080" at creation or resume to expose custom ports. Custom port URLs appear in endpoints after the default 6006 entry.
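As a concrete illustration of the bind rule, here is a minimal stdlib-only sketch that serves a health endpoint on 0.0.0.0:6006 so it becomes reachable through endpoints[0] on generic templates. The handler and route are illustrative; any framework works as long as it binds the same host and port.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Health(BaseHTTPRequestHandler):
    """Illustrative handler; swap in your FastAPI/Gradio app in practice."""

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the managed-run logs quiet

def serve(host: str = "0.0.0.0", port: int = 6006) -> None:
    # Binding to 0.0.0.0:6006 is what makes the service visible at endpoints[0].
    HTTPServer((host, port), Health).serve_forever()
```

For extra ports (7860, 8080, ...) pass --http-ports at creation or resume and bind additional servers the same way.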
Run jl get --json to find all service URLs (url, vs_url, endpoints).
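Those fields can be pulled out of parsed jl get --json output with a small helper. A sketch assuming the parsed result is a dict carrying url, vs_url, and an endpoints list (field names per the text above; the exact JSON shape may vary by template):

```python
# Hedged sketch: gather every service URL from parsed `jl get --json`
# output, skipping fields that are missing or null.
def service_urls(info: dict) -> list[str]:
    urls = [info.get("url"), info.get("vs_url")]   # JupyterLab, IDE
    urls.extend(info.get("endpoints") or [])       # 6006 entry, then custom ports
    return [u for u in urls if u]
```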
Managed Runs

How jl run works

jl run uploads your code to an instance, sets up a Python environment, and runs your script in the background with log and e