# Train Your First Model
This is the first supported end-to-end WorldFlux tutorial after installation.
It uses the scaffolded project flow that already exists in the repository today:

- create a project with `worldflux init`
- run a short local training job with `worldflux train`
- validate outputs with `worldflux verify --mode quick`
- inspect the generated helper files such as `inference.py`
If you have not completed the installation smoke test yet, do that first: CPU Success Path.
## 1. Create a Project

Install the CLI if needed:

```shell
uv tool install worldflux
```

Generate a new project:

```shell
worldflux init my-first-worldflux-model
cd my-first-worldflux-model
```

The scaffold creates these files for you:

- `worldflux.toml`
- `train.py`
- `dataset.py`
- `local_dashboard.py`
- `dashboard/index.html`
- `inference.py`
## 2. Check the Generated Config

Open `worldflux.toml` and confirm the newcomer-safe defaults:

- `training.backend = "native_torch"`
- `verify.mode = "quick"`
- `data.source` matches the scaffolded environment (`gym` for the default Atari/MuJoCo path)
You do not need parity/proof setup for this tutorial.
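A sketch of what those defaults can look like in `worldflux.toml`, assuming a conventional TOML table layout (only the three dotted keys above come from this tutorial; leave any other keys in your generated file as scaffolded):

```toml
[training]
backend = "native_torch"  # newcomer-safe local backend

[verify]
mode = "quick"            # quick compatibility check, not proof-mode parity

[data]
source = "gym"            # default Atari/MuJoCo path
```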
## 3. Run the Contract-Smoke Lane

Start with a small CPU run:

```shell
worldflux train --steps 5 --device cpu
```

What to expect in this lane:

- a training summary panel in the terminal
- an `outputs/` directory with checkpoints and `run_manifest.json`
- a local dashboard URL such as `http://127.0.0.1:8765`, because the scaffolded project includes `local_dashboard.py` and `dashboard/index.html`

If you want a longer run later, increase `training.total_steps` in `worldflux.toml` or pass a larger `--steps` value.

The contract-smoke lane is about command success, artifact creation, and manifest structure. It is not the same thing as meaningful local training.
## 4. Verify the Contract-Smoke Result Locally

Run the quick compatibility check:

```shell
worldflux verify --target ./outputs --mode quick
```

This is the supported first verification path for local projects. It is intentionally different from proof-mode parity workflows.

Quick verify may still miss the synthetic threshold if the run is too short. In the contract-smoke lane, treat that as a workflow warning: the command executed and the artifacts are interpretable, but the run is not yet a strong quality signal.

Check `outputs/run_manifest.json` after the run. In the smoke lane you should expect:

- `run_classification = "contract_smoke"`
- `data_mode = "random"` unless you already configured a real environment-backed dataset
- `degraded_modes = []` for the happy path, or explicit fallback markers if the scaffold had to degrade
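The smoke-lane expectations above can be checked with a few lines of standard-library Python. The three field names come from this tutorial; the helper function and sample dictionary are illustrative assumptions, not part of the WorldFlux API:

```python
def is_clean_smoke_run(manifest: dict) -> bool:
    """True when the manifest matches the happy-path contract-smoke lane."""
    return (
        manifest.get("run_classification") == "contract_smoke"
        and manifest.get("degraded_modes") == []
    )

# Sample manifest with the happy-path values described above.
manifest = {
    "run_classification": "contract_smoke",
    "data_mode": "random",
    "degraded_modes": [],
}
print(is_clean_smoke_run(manifest))  # True
```

For a real run, load the dictionary from `outputs/run_manifest.json` with `json.load` before passing it in.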
## 5. Promote to Meaningful Local Training

Once the smoke lane passes, switch to a real environment-backed run:

- For the guaranteed DreamerV3 lane, install Atari extras with `uv sync --extra training --extra atari`
- Set `data.source = "gym"` in `worldflux.toml`
- Set `data.gym_env = "ALE/Breakout-v5"`
- Increase `training.total_steps` to a value that is meaningful for your local test
- Rerun `worldflux train`
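Taken together, those configuration steps change `worldflux.toml` roughly as follows. The table layout and the sample step count are assumptions for illustration; only the dotted key names and the `ALE/Breakout-v5` value come from this page:

```toml
[data]
source = "gym"
gym_env = "ALE/Breakout-v5"

[training]
total_steps = 50000  # illustrative value; pick one meaningful for your hardware
```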
This lane is successful only if `outputs/run_manifest.json` shows:

- `run_classification = "meaningful_local_training"`
- `data_mode = "offline"` or `data_mode = "online"`
- no `degraded_modes`

If `random_replay_fallback`, `scaffold_runtime_fallback`, or `env_collection_unavailable` appears, treat the run as smoke only and fix the data path before using it as evidence.
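The promotion gate above can be expressed as one small check. The field names and accepted values are taken from this page; the function itself is an illustrative sketch rather than a WorldFlux API:

```python
def is_meaningful_run(manifest: dict) -> bool:
    """True only when the manifest meets all three promotion criteria."""
    return (
        manifest.get("run_classification") == "meaningful_local_training"
        and manifest.get("data_mode") in {"offline", "online"}
        # Any entry here (e.g. random_replay_fallback) demotes the run to smoke.
        and not manifest.get("degraded_modes")
    )

manifest = {
    "run_classification": "meaningful_local_training",
    "data_mode": "online",
    "degraded_modes": [],
}
print(is_meaningful_run(manifest))  # True
```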
## 6. Inspect the Scaffolded Helper Script

The scaffold also creates `inference.py`. Use it as the starting point for short rollout and imagination checks once you are ready to inspect a trained checkpoint with the same environment you use for development.
## 7. What to Do Next

- Tune `training.total_steps`, `training.batch_size`, and `training.learning_rate` in `worldflux.toml`
- Tune `data.source`, `gameplay.enabled`, and `visualization.enabled` in `worldflux.toml` when you want to change how `worldflux train` collects data and displays the local dashboard
- Read Quick Start for core API usage
- Read Training API Guide for trainer and replay buffer details
- Read Parity only when you need advanced proof-oriented workflows