feat: Wire --experimental-mps and --gpu-share CLI flags #402

Open
noahgift wants to merge 7 commits into main from gpu-share-mps-cli
Conversation

@noahgift noahgift commented Mar 4, 2026

Summary

  • Adds --experimental-mps and --gpu-share <PCT> flags to apr finetune
  • MPS env vars set before CUDA context creation, with validation + warnings
  • Fixes CliError::InvalidInput -> CliError::ValidationFailed in train.rs

Test plan

  • Compilation check passes (only pre-existing serde_yaml_ng errors remain)
  • PMAT complexity gates pass
  • Manual: apr finetune model.apr --experimental-mps --gpu-share 33

Refs #216

🤖 Generated with Claude Code

noahgift and others added 7 commits March 4, 2026 13:29
APR CPU was 23x slower than llama.cpp because it used the F32 AprTransformer
instead of the fused Q4K engine. Now routes through OwnedQuantizedModel
(same path as GGUF/SafeTensors), achieving parity with GGUF CPU (~18 tok/s).
Wire --trace flag through to AppState.inference_trace for all serve paths.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
- Add `apr gpu` command: displays GPU UUID, VRAM capacity, active
  reservations, and available budget from the entrenar VRAM ledger
- Add `apr gpu --json` for machine-readable output
- Add `--wait-gpu <SECS>` flag to `apr finetune`: polls VRAM ledger
  until sufficient budget is available (GPU-SHARE-003)
- Wire wait_gpu parameter through dispatch → finetune::run()
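
The `--wait-gpu <SECS>` behavior can be sketched as a simple poll-until-deadline loop. This is a minimal illustration, not entrenar's actual ledger API: `available_vram_bytes` is a hypothetical stand-in for querying the VRAM ledger.

```rust
use std::time::{Duration, Instant};

/// Hypothetical stand-in for querying the entrenar VRAM ledger.
fn available_vram_bytes() -> u64 {
    8 * 1024 * 1024 * 1024 // pretend 8 GiB is free
}

/// Poll the ledger until `needed` bytes are free or `wait_secs` elapses.
fn wait_for_vram(needed: u64, wait_secs: u64) -> Result<(), String> {
    let deadline = Instant::now() + Duration::from_secs(wait_secs);
    loop {
        if available_vram_bytes() >= needed {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err(format!("timed out waiting for {needed} bytes of VRAM"));
        }
        // Back off briefly between ledger polls.
        std::thread::sleep(Duration::from_millis(500));
    }
}
```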

Co-Authored-By: Claude Opus 4.6 <[email protected]>
(Refs #206)

Wire --adapters DATA:CHECKPOINT pairs through finetune command to
MultiAdapterPipeline. Parses adapter specs, loads independent corpora,
creates round-robin adapter slots on shared frozen base model.
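
Parsing an `--adapters DATA:CHECKPOINT` pair might look like the sketch below. Splitting on the first `:` is an assumption about the spec format (it would mis-handle paths containing colons):

```rust
/// Split an `--adapters DATA:CHECKPOINT` spec into its two paths.
fn parse_adapter_spec(spec: &str) -> Result<(String, String), String> {
    match spec.split_once(':') {
        Some((data, ckpt)) if !data.is_empty() && !ckpt.is_empty() => {
            Ok((data.to_string(), ckpt.to_string()))
        }
        _ => Err(format!(
            "invalid adapter spec '{spec}', expected DATA:CHECKPOINT"
        )),
    }
}
```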

Also fixes serde_yaml_ng → serde_yaml migration in distill and serve_plan.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
… (Refs #206)

Call save_adapter_checkpoint() at the end of each epoch for every adapter
slot. Each adapter saves metadata.json + model.safetensors to its own
checkpoint_dir/epoch-N/ independently.
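
The per-epoch layout described above (each slot writing `checkpoint_dir/epoch-N/` with `metadata.json` and `model.safetensors`) can be sketched as follows; the file contents here are placeholders, not the real serialization:

```rust
use std::fs;
use std::path::Path;

/// Write one adapter slot's checkpoint for the given epoch into
/// `checkpoint_dir/epoch-N/` (placeholder file contents).
fn save_adapter_checkpoint(checkpoint_dir: &Path, epoch: usize) -> std::io::Result<()> {
    let dir = checkpoint_dir.join(format!("epoch-{epoch}"));
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("metadata.json"), b"{}")?;    // placeholder metadata
    fs::write(dir.join("model.safetensors"), b"")?;  // placeholder weights
    Ok(())
}
```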

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Implements two new train subcommands for GPU-SHARE Phase 3:
- `apr train submit --cluster cluster.yaml --model model.apr --adapter ...`
  Places adapter jobs across cluster nodes using greedy placement,
  shows launch commands (local + SSH). Supports --dry-run and --json.
- `apr train cluster-status --cluster cluster.yaml`
  Displays cluster node info, GPUs, VRAM, and adapter capacity.
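
Greedy placement in this style can be sketched as "first node with enough free VRAM wins"; the `Node` shape and MB-level accounting below are assumptions, not entrenar's actual cluster types:

```rust
/// Hypothetical view of a cluster node from cluster.yaml.
struct Node {
    name: String,
    free_vram_mb: u64,
}

/// Greedy placement: each job lands on the first node that can fit it,
/// debiting that node's free VRAM; `None` means no node can host the job.
fn place_jobs(nodes: &mut [Node], job_costs_mb: &[u64]) -> Vec<Option<String>> {
    job_costs_mb
        .iter()
        .map(|&cost| {
            for node in nodes.iter_mut() {
                if node.free_vram_mb >= cost {
                    node.free_vram_mb -= cost;
                    return Some(node.name.clone());
                }
            }
            None
        })
        .collect()
}
```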

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Enable queue-based CUDA dispatch for APR, GGUF, and SafeTensors GPU
serve handlers. All three paths now use CudaBatchConfig::default()
to spawn a dedicated GPU worker thread, eliminating RwLock contention.
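
The queue-based dispatch pattern can be sketched with a channel and a single owner thread; this is an illustrative skeleton (the `Job` type and echo "forward pass" are placeholders, not the CudaBatchConfig machinery):

```rust
use std::sync::mpsc;
use std::thread;

/// One request; a real server would carry tokens, sampling params, etc.
enum Job {
    Infer(String, mpsc::Sender<String>),
}

/// Spawn a dedicated worker thread that owns the (hypothetical) CUDA
/// context; callers enqueue jobs over a channel instead of contending
/// on a shared RwLock around the model.
fn spawn_gpu_worker() -> mpsc::Sender<Job> {
    let (tx, rx) = mpsc::channel::<Job>();
    thread::spawn(move || {
        while let Ok(Job::Infer(prompt, reply)) = rx.recv() {
            // Placeholder for the actual forward pass on the GPU.
            let _ = reply.send(format!("echo: {prompt}"));
        }
    });
    tx
}
```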

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Wires entrenar's MPS module into the finetune command. MPS env vars
are set before CUDA context creation. Validation rejects <10% thread
allocation, warns on <30% and missing pinned memory limits. Also fixes
CliError::InvalidInput -> CliError::ValidationFailed in train.rs.
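
The validation rules above (reject below 10%, warn below 30%) could be sketched like this; the function signature and `String`-based errors are assumptions for a self-contained example, not the actual `CliError` plumbing:

```rust
/// Validate a `--gpu-share <PCT>` value against the thresholds in this PR:
/// reject < 10%, warn < 30%. Returns an optional warning on success.
fn validate_gpu_share(pct: u32) -> Result<Option<String>, String> {
    if pct < 10 {
        return Err(format!(
            "--gpu-share {pct}% rejected: below the 10% minimum thread allocation"
        ));
    }
    if pct < 30 {
        return Ok(Some(format!(
            "--gpu-share {pct}% is low; consider 30% or more"
        )));
    }
    Ok(None)
}
```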

Refs #216

Co-Authored-By: Claude Opus 4.6 <[email protected]>