Recipe: Model Refine
Record -> clean -> retrain -> redeploy
This recipe matches the current sense-core keypoint workflow:

- Record a dataset with `s6 track`.
- Clean bad samples with `s6 data filter`.
- Train and export a refined keypoint model with `s6 cog keypoint`.
- Replace the model asset your pipeline already loads, then rerun `s6 track`.
0. Before You Start

- Use the `torch` environment for local training and tests.
- `s6 cog keypoint` reads an `AugmentedKeypointDataset` config, not the pipeline config.
- If the config path you pass to `s6 cog keypoint` does not exist, the command writes the default config template and exits.
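The write-template-and-exit behavior follows a common CLI pattern. A minimal Python sketch of that pattern (the default template contents here are placeholders, not the tool's actual defaults, and `ensure_config` is a hypothetical helper, not part of s6):

```python
import json
from pathlib import Path

# Placeholder defaults -- the real template is written by `s6 cog keypoint`.
DEFAULT_CONFIG = {"base_dir": [], "data_mappings": {"x": [], "y": []}}

def ensure_config(path):
    """Return True if the config exists; otherwise write a default template.

    Mirrors the CLI behavior: a missing config is materialized so the user
    can edit it and rerun, rather than failing with an error.
    """
    cfg = Path(path)
    if cfg.exists():
        return True
    cfg.write_text(json.dumps(DEFAULT_CONFIG, indent=2))
    return False  # caller should edit the template, then rerun
```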
1. Record Data

Use `s6 track` with a live camera source and keep inference enabled so the recorded dataset includes the frame context your pipeline writes.

```shell
# Record live gst input to a dataset directory
s6 track -i gst -o ./datasets/run_01 -x

# Add the UI if you want to inspect the live overlays while recording
s6 track -i gst -o ./datasets/run_01 -v
```
Notes

- Omit `--record-only` so the pipeline runs and writes its `context["export"]` payload into the recorded samples.
- `-o` writes a `StructuredDataset` to the target directory.
- `--uplink` is optional if you want to forward live telemetry snapshots to a visualizer at the same time.
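Before moving on to cleaning, it can help to sanity-check how much data a run actually captured. A minimal sketch, assuming each recorded sample is stored as one file somewhere under the dataset directory (the actual `StructuredDataset` on-disk layout may differ):

```python
from pathlib import Path

def count_recorded_files(dataset_dir):
    """Count files under a recorded dataset directory.

    The one-file-per-sample layout is an assumption; treat the result as a
    rough sanity check, not an exact sample count.
    """
    root = Path(dataset_dir)
    if not root.is_dir():
        return 0
    return sum(1 for p in root.rglob("*") if p.is_file())
```

A near-zero count after a long recording session usually means the output path or record flags were wrong.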
2. Clean The Dataset

`s6 data filter` opens the recorded dataset and removes bad samples in place.

```shell
# Defaults assume B/image and B/tip_point
s6 data filter ./datasets/run_01

# Override the datakeys when your dataset schema differs
s6 data filter ./datasets/run_01 \
    --image-key LL/image --point-key LL/tip_point
```
Controls

- `a`/`d` move backward and forward.
- `x` deletes the current sample.
- `q` exits.

Notes

- If your dataset includes masks, repeat `--mask-key` once per image key.
- The filter can also be driven from an `AugmentedKeypointDataset` config with `--config`.
3. Prepare Training Config

Create a small JSON or YAML config for `s6 cog keypoint`. A minimal config usually looks like this:

```json
{
  "base_dir": ["./datasets/run_01"],
  "data_mappings": {
    "x": ["B.image"],
    "y": [["B.tip_point"]]
  }
}
```
Notes

- `data_mappings.x` and `data_mappings.y` must have the same length.
- `data_mappings.y` is nested: each outer row matches one `x` image key, and each inner list defines the ordered point keys assembled into that sample's keypoint tensor.
- Single-keypoint configs still use singleton inner lists such as `[["B.tip_point"]]`.
- Add `mask` plus `num_segmentation_classes` only if you are training segmentation too.
- For LL/LR semi-supervised triangulation losses, add `calibration_file`, enable `stereo_pairing`, set nonzero `loss_terms` weights, and use an even training `--batch_size` because each batch is flattened as LL, LR pairs.
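The constraints above can be checked before launching a training run. A minimal sketch, assuming the JSON layout shown in the example (the helper itself is illustrative, not part of s6):

```python
def validate_mappings(cfg, batch_size=None):
    """Return a list of problems with a keypoint dataset config dict."""
    errors = []
    x = cfg.get("data_mappings", {}).get("x", [])
    y = cfg.get("data_mappings", {}).get("y", [])
    if len(x) != len(y):
        errors.append("data_mappings.x and data_mappings.y must have the same length")
    if not all(isinstance(row, list) for row in y):
        errors.append("data_mappings.y must be nested: one inner list per image key")
    # Stereo pairing flattens each batch into LL, LR pairs, so batches must be even.
    if cfg.get("stereo_pairing") and batch_size is not None and batch_size % 2:
        errors.append("--batch_size must be even when stereo_pairing is enabled")
    return errors
```

Running it over the minimal example config should produce no errors, while a flat (non-nested) `y` or an odd stereo batch size should be flagged.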
4. Preview And Train

Preview the dataset first, then train once the samples look right.

```shell
s6 cog keypoint -c ./cfg/ds_keypoint.json --preview-data
s6 cog keypoint -c ./cfg/ds_keypoint.json --train -e 50 -b 16 -lr 1e-3
```

Useful flags

- `--dry-run` logs the graph and runs one training step.
- `--precision bf16` forces bf16 autocast on CUDA.
- `--no-tb` disables TensorBoard logging.
5. Export The Model

Export an ONNX model after training.

```shell
# Train and export from the same run
s6 cog keypoint -c ./cfg/ds_keypoint.json --train --deploy

# Export from the latest saved checkpoint with a fixed batch size
s6 cog keypoint -c ./cfg/ds_keypoint.json --restore latest --deploy --deploy-batch-size 2
```

Notes

- Without `--deploy-path`, the export lands under `assets/models/` using the config prefix, batch size, timestamp, and precision tag.
- `--deploy-untrained` allows export without loading a checkpoint first.
- `--deploy-batch-size` must be at least 1.
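The default export name combines the pieces listed above. A minimal sketch of that kind of naming scheme, to make the components concrete (the exact separator and ordering s6 uses are assumptions here):

```python
from datetime import datetime

def export_name(prefix, batch_size, precision, when=None):
    """Compose an ONNX filename from config prefix, batch size,
    timestamp, and precision tag -- an illustrative scheme, not
    necessarily the one s6 emits."""
    when = when or datetime.now()
    stamp = when.strftime("%Y%m%d_%H%M%S")
    return f"assets/models/{prefix}_b{batch_size}_{stamp}_{precision}.onnx"
```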
6. Redeploy

Put the exported ONNX file where the pipeline already expects it, then rerun `s6 track`.

- `PipelineT1` loads a platform-specific T1 release asset name from `asset_path(...)`.
- The pipeline config does not currently expose a model-path field, so this is an asset replacement step rather than a config edit.
- Current pipeline consumers still assume single-tip outputs, so multi-keypoint training/export is a model-prep step until those consumers are updated.

```shell
s6 track -i gst -v --config ./configs/pipeline.config.yaml
```
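Since redeploying is a file swap rather than a config edit, it is worth keeping a backup of the asset being replaced. A minimal sketch, assuming the pipeline reads a single ONNX file at a known location (the concrete path comes from `asset_path(...)` in your deployment; `replace_model_asset` is a hypothetical helper):

```python
import shutil
from pathlib import Path

def replace_model_asset(new_onnx, asset_path):
    """Back up the current model asset, then drop the new export in its place."""
    asset = Path(asset_path)
    if asset.exists():
        # Keep the previous model next to the new one for easy rollback.
        shutil.copy2(asset, Path(str(asset) + ".bak"))
    asset.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(new_onnx, asset)
```

Rolling back is then just copying the `.bak` file over the asset and rerunning `s6 track`.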