Protocol API (Autogenerated)¶
This page is generated from Python docstrings via mkdocstrings.
WorldModel¶
Bases: Module, ABC
Base class for all world models.
supports(capability) ¶
Return True if the model advertises a capability.
require(capability, message=None) ¶
Raise an error if the model does not support a capability.
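Capability names are model-defined strings. A minimal guard around an optional feature might look like this sketch; the "reward_head" name is an illustrative assumption, not a capability defined by this API:

```python
state = model.encode(obs)                 # any latent state works here
if model.supports("reward_head"):         # "reward_head" is illustrative
    predictions = model.decode(state)
else:
    predictions = None                    # fall back when unsupported

# Or fail fast with a descriptive error:
model.require("reward_head", message="This pipeline needs reward prediction.")
```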
validate_batch_contract(batch) ¶
Validate batch keys/layouts against model I/O contract.
validate_state_contract(state) ¶
Validate state tensor keys/shapes against model I/O contract.
io_contract() ¶
Return runtime I/O contract.
Subclasses should override this when they expose richer modality/state specs; the default implementation preserves backward compatibility for existing models.
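A sketch of checking inputs against the declared contract before training or inference; the batch keys below are placeholders, the real keys come from the model's contract:

```python
contract = model.io_contract()           # inspect declared modalities/state specs

batch = {"obs": obs_tensor, "action": action_tensor}  # placeholder keys
model.validate_batch_contract(batch)     # raises on key/layout mismatches
model.validate_state_contract(state)     # raises on key/shape mismatches
```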
encode(obs, deterministic=False) ¶
Encode observation to latent state.
transition(state, action, conditions=None, deterministic=False) ¶
Predict next state (prior/imagination).
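A single encode-then-predict step, assuming `obs` and `action` are tensors laid out per the model's I/O contract:

```python
state = model.encode(obs, deterministic=True)   # observation -> latent state
next_state = model.transition(state, action)    # prior prediction, no new obs
```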
async_encode(obs, deterministic=False) async ¶
Asynchronous non-blocking variant of encode.
async_transition(state, action, conditions=None, deterministic=False) async ¶
Asynchronous non-blocking variant of transition.
async_decode(state, conditions=None) async ¶
Asynchronous non-blocking variant of decode.
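The async variants mirror their synchronous counterparts and can be composed in an event loop; a minimal sketch:

```python
import asyncio

async def imagine_one_step(model, obs, action):
    state = await model.async_encode(obs, deterministic=True)
    next_state = await model.async_transition(state, action)
    return await model.async_decode(next_state)

predictions = asyncio.run(imagine_one_step(model, obs, action))
```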
update(state, action, obs, conditions=None) ¶
Update state with observation (posterior).
decode(state, conditions=None) ¶
Decode latent state to predictions.
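Together, transition() and update() form the classic predict/correct cycle; a sketch:

```python
prior = model.transition(state, action)         # predict before the obs arrives
posterior = model.update(state, action, obs)    # correct once the obs is known
predictions = model.decode(posterior)           # decode from the corrected state
```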
plan_step(state, action, conditions=None, deterministic=False) ¶
Optional planner hook. Default delegates to transition().
sample_step(state, action=None, conditions=None, deterministic=False) ¶
Optional sampler hook for generative families.
If an action is provided, transition first and then decode; otherwise decode the given state.
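The documented default behavior corresponds to this sketch:

```python
def sample_step_default(model, state, action=None, conditions=None,
                        deterministic=False):
    # Transition first when an action is given, then decode;
    # otherwise decode the state directly.
    if action is not None:
        state = model.transition(state, action, conditions=conditions,
                                 deterministic=deterministic)
    return model.decode(state, conditions=conditions)
```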
teacher_forcing_step(state, action, obs, conditions=None) ¶
Optional training hook. Default delegates to update().
rollout(initial_state, action_sequence, conditions=None, deterministic=False, mode='autoregressive') ¶
Default rollout implementation using transition + decode.
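A rollout sketch over a precomputed action sequence; the [T, batch, action_dim] layout matches the Trajectory contract documented below:

```python
state = model.encode(obs, deterministic=True)
trajectory = model.rollout(
    initial_state=state,
    action_sequence=actions,        # e.g. a [T, batch, action_dim] tensor
    deterministic=False,
    mode="autoregressive",          # the documented default
)
```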
async_rollout(initial_state, action_sequence, conditions=None, deterministic=False, mode='autoregressive') async ¶
Asynchronous non-blocking variant of rollout.
loss(batch) abstractmethod ¶
Compute training loss.
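Since loss() is the only abstract method, every concrete subclass must implement it. The following is an illustrative sketch only; the reconstruction objective, the "obs" batch key, and the LossOutput field name are assumptions, not part of the documented contract:

```python
import torch.nn.functional as F

class MyWorldModel(WorldModel):  # assumes WorldModel is in scope
    def loss(self, batch):
        # Illustrative objective: posterior-encode the batch, decode,
        # and score a reconstruction term. Real models combine
        # family-specific terms (dynamics, reward, KL, ...).
        state = self.encode(batch["obs"])        # "obs" key is a placeholder
        recon = self.decode(state)
        recon_loss = F.mse_loss(recon, batch["obs"])
        return LossOutput(total=recon_loss)      # field name is an assumption
```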
save_pretrained(path) ¶
Save model weights and config using a unified directory layout.
contract_fingerprint() ¶
Return a stable fingerprint for this model's declared IO contract.
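A sketch pairing the two: record the fingerprint when saving, then compare after reload to detect contract drift. The path is illustrative, and the reload mechanism is whatever your loading entry point returns:

```python
fingerprint = model.contract_fingerprint()
model.save_pretrained("checkpoints/my-world-model")   # illustrative path

# After reloading the checkpoint elsewhere:
assert reloaded.contract_fingerprint() == fingerprint, "IO contract drifted"
```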
ActionPayload¶
Polymorphic action container that supports multiple control modalities.
validate(*, api_version='v0.2') ¶
Validate payload consistency.
ConditionPayload¶
Optional side-conditions for conditional world modeling.
validate(*, strict=False, allowed_extra_keys=None, extra_schema=None, api_version='v0.2') ¶
Validate the naming of condition extras and the optional allow-list contract.
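A sketch of validating both payloads before handing them to a model. Construction arguments are omitted because they depend on the control modality, and the allow-list entry is an illustrative assumption:

```python
action_payload.validate(api_version="v0.2")

condition_payload.validate(
    strict=True,                     # reject extras outside the allow-list
    allowed_extra_keys={"goal"},     # illustrative allow-list entry
    api_version="v0.2",
)
```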
WorldModelInput¶
ModelOutput¶
LossOutput¶
Standardized loss container.
items() ¶
Compatibility helper for iterating over losses.
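items() makes a LossOutput easy to feed into logging code; a sketch:

```python
loss_output = model.loss(batch)
for name, value in loss_output.items():
    print(f"{name}: {float(value):.4f}")   # e.g. forward each term to a logger
```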
State¶
Generic state container (tensor dictionary + metadata).
validate() ¶
Validate state tensor shapes and batch consistency.
serialize(version='v1', format='binary') ¶
Serialize state with a versioned binary envelope.
Binary envelope layout:
- magic (4 bytes)
- version id (1 byte)
- metadata length (4 bytes)
- metadata JSON bytes
- raw tensor bytes
deserialize(payload) classmethod ¶
Deserialize state from State.serialize(...) payload.
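A round-trip sketch that also peeks at the header fields described above. The header byte order is an assumption here; production code should treat the payload as opaque and always go through deserialize():

```python
import json
import struct

state = model.encode(obs)
payload = state.serialize(version="v1", format="binary")

# Envelope per the layout above: 4-byte magic, 1-byte version id,
# 4-byte metadata length (little-endian assumed), then metadata JSON.
magic = payload[:4]
version_id = payload[4]
(meta_len,) = struct.unpack_from("<I", payload, 5)
metadata = json.loads(payload[9:9 + meta_len].decode("utf-8"))

restored = State.deserialize(payload)   # the supported way to read it back
```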
to_shared_memory(*, namespace='worldflux-state', allow_copy_from_cuda=False) ¶
Create shared-memory descriptor for zero-copy CPU state exchange.
Notes
- CPU contiguous tensors retain zero-copy semantics when re-attached.
- CUDA tensors require `allow_copy_from_cuda=True` and are copied to CPU.
from_shared_memory(descriptor, *, copy=False) classmethod ¶
Attach a state from shared-memory descriptor created by to_shared_memory.
close_shared_memory(*, unlink=False) ¶
Close attached shared-memory handles, optionally unlinking segments.
unlink_shared_memory(descriptor) staticmethod ¶
Unlink shared-memory segments created by to_shared_memory.
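A typical same-host handoff, sketched end to end (error handling omitted; `State` is assumed to be in scope):

```python
# Producer: export contiguous CPU tensors without copying.
descriptor = state.to_shared_memory(namespace="worldflux-state")

# Consumer (possibly another process): attach, read, detach.
attached = State.from_shared_memory(descriptor, copy=False)
# ... read tensors from `attached` ...
attached.close_shared_memory()          # detach handles, keep segments alive

# Owner: remove the segments once all consumers have detached.
State.unlink_shared_memory(descriptor)
```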
Trajectory¶
Imagination rollout trajectory in latent space.
Attributes:
| Name | Type | Description |
|---|---|---|
| states | list[State] | List of latent states [T+1] (initial + T steps) |
| actions | Tensor | Action tensor [T, batch, action_dim] |
| rewards | Tensor \| None | Predicted rewards [T, batch] (optional) |
| values | Tensor \| None | Predicted values [T+1, batch] (optional) |
| continues | Tensor \| None | Continue probabilities [T, batch] (optional) |
The trajectory maintains the invariant that len(states) == actions.shape[0] + 1, representing the initial state plus one state per action taken.
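The invariant can be asserted directly on a rollout result; a sketch:

```python
trajectory = model.rollout(initial_state=state, action_sequence=actions)

T = trajectory.actions.shape[0]
assert len(trajectory.states) == T + 1       # initial state + one per action
if trajectory.rewards is not None:
    assert trajectory.rewards.shape[0] == T  # one predicted reward per action
```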