feat: Wire --experimental-mps and --gpu-share CLI flags #402
Open
Conversation
APR CPU was 23x slower than llama.cpp because it used the F32 AprTransformer instead of the fused Q4K engine. It now routes through OwnedQuantizedModel (the same path as GGUF/SafeTensors), reaching parity with GGUF CPU (~18 tok/s). Also wires the --trace flag through to AppState.inference_trace for all serve paths. Co-Authored-By: Claude Opus 4.6 <[email protected]>
- Add `apr gpu` command: displays GPU UUID, VRAM capacity, active reservations, and available budget from the entrenar VRAM ledger - Add `apr gpu --json` for machine-readable output - Add `--wait-gpu <SECS>` flag to `apr finetune`: polls VRAM ledger until sufficient budget is available (GPU-SHARE-003) - Wire wait_gpu parameter through dispatch → finetune::run() Co-Authored-By: Claude Opus 4.6 <[email protected]>
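The `--wait-gpu <SECS>` behaviour above can be sketched as a bounded polling loop. This is an illustrative stand-in, assuming a ledger query that returns free VRAM in MiB; `available_vram_mb` and `wait_for_gpu` are hypothetical names, and the real query lives in entrenar's VRAM ledger, not shown here.

```rust
use std::time::{Duration, Instant};

// Stub for the entrenar VRAM-ledger query (illustrative only).
fn available_vram_mb() -> u64 {
    8192
}

/// Poll until `needed_mb` of VRAM budget is free or `wait_secs` elapses.
fn wait_for_gpu(needed_mb: u64, wait_secs: u64) -> Result<(), String> {
    let deadline = Instant::now() + Duration::from_secs(wait_secs);
    loop {
        if available_vram_mb() >= needed_mb {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err(format!("timed out waiting for {needed_mb} MiB of VRAM"));
        }
        std::thread::sleep(Duration::from_millis(500));
    }
}

fn main() {
    // Budget is already available in the stub, so this returns immediately.
    assert!(wait_for_gpu(4096, 1).is_ok());
}
```

A fixed deadline (rather than a retry count) keeps the flag's contract simple: the command either starts within `--wait-gpu` seconds or fails with a clear error.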
(Refs #206) Wire --adapters DATA:CHECKPOINT pairs through finetune command to MultiAdapterPipeline. Parses adapter specs, loads independent corpora, creates round-robin adapter slots on shared frozen base model. Also fixes serde_yaml_ng → serde_yaml migration in distill and serve_plan. Co-Authored-By: Claude Opus 4.6 <[email protected]>
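Parsing a `DATA:CHECKPOINT` pair might look like the sketch below. `AdapterSpec` and `parse_adapter_spec` are illustrative names, not the PR's actual types; the real code feeds MultiAdapterPipeline.

```rust
#[derive(Debug, PartialEq)]
struct AdapterSpec {
    data: String,
    checkpoint: String,
}

fn parse_adapter_spec(raw: &str) -> Result<AdapterSpec, String> {
    // Split on the first ':' so the checkpoint path may itself contain colons.
    match raw.split_once(':') {
        Some((data, checkpoint)) if !data.is_empty() && !checkpoint.is_empty() => Ok(AdapterSpec {
            data: data.to_string(),
            checkpoint: checkpoint.to_string(),
        }),
        _ => Err(format!("invalid adapter spec '{raw}', expected DATA:CHECKPOINT")),
    }
}

fn main() {
    let spec = parse_adapter_spec("corpus.jsonl:ckpt/medical").unwrap();
    println!("{spec:?}");
    // Missing either half is rejected with a usage-style error.
    assert!(parse_adapter_spec("no-separator").is_err());
}
```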
(Refs #206) Call save_adapter_checkpoint() at the end of each epoch for every adapter slot. Each adapter saves metadata.json + model.safetensors to its own checkpoint_dir/epoch-N/ independently. Co-Authored-By: Claude Opus 4.6 <[email protected]>
Implements two new train subcommands for GPU-SHARE Phase 3: - `apr train submit --cluster cluster.yaml --model model.apr --adapter ...` Places adapter jobs across cluster nodes using greedy placement, shows launch commands (local + SSH). Supports --dry-run and --json. - `apr train cluster-status --cluster cluster.yaml` Displays cluster node info, GPUs, VRAM, and adapter capacity. Co-Authored-By: Claude Opus 4.6 <[email protected]>
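Greedy placement as described above can be illustrated with a minimal sketch: each job goes to the node with the most remaining VRAM that still fits it. The `Node` and job shapes here are assumptions for illustration, not the PR's cluster.yaml schema.

```rust
#[derive(Debug)]
struct Node {
    name: String,
    free_vram_mb: u64,
}

/// Assign each (job, vram-need) pair to the node with the largest
/// remaining budget that can hold it; jobs that fit nowhere are skipped.
fn place_jobs(nodes: &mut [Node], jobs: &[(String, u64)]) -> Vec<(String, String)> {
    let mut placements = Vec::new();
    for (job, need_mb) in jobs {
        if let Some(node) = nodes
            .iter_mut()
            .filter(|n| n.free_vram_mb >= *need_mb)
            .max_by_key(|n| n.free_vram_mb)
        {
            node.free_vram_mb -= need_mb;
            placements.push((job.clone(), node.name.clone()));
        }
    }
    placements
}

fn main() {
    let mut nodes = vec![
        Node { name: "gpu-a".into(), free_vram_mb: 24_000 },
        Node { name: "gpu-b".into(), free_vram_mb: 16_000 },
    ];
    let jobs = vec![("lora-1".to_string(), 10_000), ("lora-2".to_string(), 10_000)];
    // lora-1 lands on gpu-a (largest budget), lora-2 then fits best on gpu-b.
    println!("{:?}", place_jobs(&mut nodes, &jobs));
}
```

Greedy best-fit like this is a common first pass for `--dry-run` style planning; it is not optimal, but it is deterministic and easy to explain in the launch-command output.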
Enable queue-based CUDA dispatch for APR, GGUF, and SafeTensors GPU serve handlers. All three paths now use CudaBatchConfig::default() to spawn a dedicated GPU worker thread, eliminating RwLock contention. Co-Authored-By: Claude Opus 4.6 <[email protected]>
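The dispatch pattern above, a single worker thread that owns the device state while handlers talk to it over a channel, can be sketched with std primitives. The GPU work is stubbed with an echo; `Request` and `spawn_gpu_worker` are illustrative names, not the PR's CudaBatchConfig API.

```rust
use std::sync::mpsc;
use std::thread;

enum Request {
    Generate { prompt: String, reply: mpsc::Sender<String> },
    Shutdown,
}

/// Spawn a dedicated worker that exclusively owns the (stand-in) GPU
/// context. Handlers send requests over the channel, so no RwLock is
/// shared across request paths.
fn spawn_gpu_worker() -> mpsc::Sender<Request> {
    let (tx, rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        for req in rx {
            match req {
                Request::Generate { prompt, reply } => {
                    // Real code would run CUDA inference here.
                    let _ = reply.send(format!("echo: {prompt}"));
                }
                Request::Shutdown => break,
            }
        }
    });
    tx
}

fn main() {
    let worker = spawn_gpu_worker();
    let (reply_tx, reply_rx) = mpsc::channel();
    worker
        .send(Request::Generate { prompt: "hi".into(), reply: reply_tx })
        .unwrap();
    println!("{}", reply_rx.recv().unwrap());
    let _ = worker.send(Request::Shutdown);
}
```

Because the worker is the sole owner of the model, requests serialize naturally in queue order, which removes reader/writer contention without any locking discipline on the handler side.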
Wires entrenar's MPS module into the finetune command. MPS env vars are set before CUDA context creation. Validation rejects <10% thread allocation, warns on <30% and missing pinned memory limits. Also fixes CliError::InvalidInput -> CliError::ValidationFailed in train.rs. Refs #216 Co-Authored-By: Claude Opus 4.6 <[email protected]>
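The validation thresholds described above (reject below 10%, warn below 30%) could be expressed as a small pure function. `GpuShareCheck` and `validate_gpu_share` are hypothetical names for illustration; the actual checks live in the finetune wiring.

```rust
#[derive(Debug, PartialEq)]
enum GpuShareCheck {
    Ok,
    Warn(String),
    Reject(String),
}

/// Validate a --gpu-share percentage: <10% is rejected, 10-29% warns,
/// 30% and above passes cleanly.
fn validate_gpu_share(pct: u32) -> GpuShareCheck {
    match pct {
        0..=9 => GpuShareCheck::Reject(format!("{pct}% is below the 10% minimum")),
        10..=29 => GpuShareCheck::Warn(format!("{pct}% may starve the training run")),
        _ => GpuShareCheck::Ok,
    }
}

fn main() {
    // 33 matches the value used in the test plan below.
    assert_eq!(validate_gpu_share(33), GpuShareCheck::Ok);
    assert!(matches!(validate_gpu_share(5), GpuShareCheck::Reject(_)));
    println!("{:?}", validate_gpu_share(20));
}
```

Keeping the check pure (percentage in, verdict out) makes it easy to unit-test independently of the MPS env-var setup, which must still happen before CUDA context creation.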
Summary

- Wire `--experimental-mps` and `--gpu-share <PCT>` flags to `apr finetune`
- Fix `CliError::InvalidInput` -> `CliError::ValidationFailed` in `train.rs`

Test plan

- … (`serde_yaml_ng` errors)
- `apr finetune model.apr --experimental-mps --gpu-share 33`

Refs #216
🤖 Generated with Claude Code