Pipeline Scope

This app runs the full training pipeline for the maths-conjecture-solutions project.

It is an autonomous training workflow for DeepSeek-Math that runs multi-stage curriculum fine-tuning on the Space's GPU, executes post-training quality evaluation, and publishes only qualified adapters, checkpoints, and run reports to your Hugging Face model repository.

  1. Pull released parquet splits from NorthernTribe-Research/math-conjecture-training-corpus.
  2. Build runtime training configuration from configs/deepseek_math_sota.yaml.
  3. Execute multi-stage DeepSeek-Math curriculum fine-tuning via scripts/train_sota.py.
  4. Run post-training evaluation with pass@k-style sampling and family-level metrics.
  5. Enforce autonomous quality gates before adapter promotion/push.
  6. Stream live terminal telemetry and structured run summaries.
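Steps 4 and 5 above can be sketched as follows. This is a minimal, hypothetical illustration: the `pass_at_k` estimator is the standard unbiased formula, but the family names, sample counts, and the 0.4 threshold are placeholders, not the project's actual gate configuration.

```python
# Hypothetical sketch of pass@k scoring (step 4) and the quality gate (step 5).
# Family names, counts, and the threshold below are illustrative only.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n sampled solutions, c correct, k drawn."""
    if n - c < k:
        return 1.0  # too few failures to fill a k-sample with all-wrong answers
    return 1.0 - comb(n - c, k) / comb(n, k)

def gates_pass(family_scores: dict, min_pass_at_k: float = 0.4) -> bool:
    """Promote the adapter only if every problem family clears the threshold."""
    return all(score >= min_pass_at_k for score in family_scores.values())

scores = {
    "number_theory": pass_at_k(n=16, c=5, k=4),   # 16 samples, 5 correct
    "combinatorics": pass_at_k(n=16, c=9, k=4),   # 16 samples, 9 correct
}
print(gates_pass(scores))  # → True
```

The gate is all-or-nothing across families, mirroring step 5's "enforce quality gates before adapter promotion/push": one failing family blocks the push.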

Autonomous Mode is enabled by default and applies full-stage execution parameters automatically. Continuous Auto-Restart is also enabled by default, so the next training cycle starts as soon as the current one finishes.
