Math Conjecture Trainer

An autonomous training operations console for DeepSeek-Math: it runs multi-stage curriculum fine-tuning on Space GPU, executes post-training quality evaluation, and publishes only qualified adapters, checkpoints, and run reports to your Hugging Face model repository.

The console runs the full training operations lane for the maths-conjuncture-solutions project:

  1. Pull released parquet splits from NorthernTribe-Research/math-conjecture-training-corpus.
  2. Build runtime training configuration from configs/deepseek_math_sota.yaml.
  3. Execute multi-stage DeepSeek-Math curriculum fine-tuning via scripts/train_sota.py.
  4. Run post-training evaluation with pass@k-style sampling and family-level metrics.
  5. Enforce autonomous quality gates before adapter promotion/push.
  6. Stream live terminal telemetry, tactical visualization, and structured run summaries.
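The pass@k-style sampling in step 4 can be sketched with the standard unbiased estimator, `1 - C(n-c, k) / C(n, k)`, where `n` samples are drawn per problem and `c` of them are correct. The function name and the sampling harness around it are illustrative assumptions, not this project's actual evaluation code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct completions."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-sample draw
        # must contain at least one correct completion.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. with 2 samples, 1 correct, a single draw succeeds half the time:
pass_at_k(2, 1, 1)  # -> 0.5
```

Family-level metrics would then aggregate these per-problem estimates over each problem family before the quality gates are checked.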

Autonomous Mode is enabled by default and applies full-stage execution parameters automatically.
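A minimal sketch of the autonomous promotion gate in step 5, assuming the run produces per-metric evaluation scores; the metric names and thresholds below are illustrative, not the project's actual gate configuration:

```python
from dataclasses import dataclass

@dataclass
class EvalReport:
    pass_at_1: float           # fraction solved with a single sample
    pass_at_8: float           # fraction solved within 8 samples
    worst_family_score: float  # weakest problem-family accuracy

def passes_quality_gate(report: EvalReport,
                        min_pass_at_1: float = 0.30,
                        min_pass_at_8: float = 0.55,
                        min_family_score: float = 0.20) -> bool:
    """Promote the adapter only if every metric clears its floor."""
    return (report.pass_at_1 >= min_pass_at_1
            and report.pass_at_8 >= min_pass_at_8
            and report.worst_family_score >= min_family_score)
```

In autonomous mode a gate like this would run unattended after evaluation, so a failing adapter is simply not pushed rather than prompting for manual review.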
