Recipe: Model Refine
Keypoint Model Loop: Record → Clean → Retrain
This recipe walks through a practical loop to improve the keypoint model:

1. Record a dataset while the pipeline runs and writes predicted keypoints.
2. Manually filter out bad samples.
3. Retrain a finer model using the cleaned dataset.
The commands rely on the `s6` CLI (see docs/applications.md for details).
0. Prerequisites
- Capture server: start the multi‑camera stream in another terminal (or use an existing one): `s6 stream`
- Camera ordering (optional but recommended): author ROIs once so camera frames are matched reliably: `s6 id`
- Pipeline config: ensure your tracking pipeline is set to load a keypoint model (ONNX or Torch) via `configs/pipeline.config.json|yaml`.
1. Record while running inference

Use `s6 track` without `--record-only` so the inference pipeline runs and predicted keypoints are added to each sample.
# Live network input, record to a dataset folder, and save logs
s6 track -i network -o ./datasets/run_01 -x
# Tip: add a UI if you want to visualize results while recording
s6 track -i network -o ./datasets/run_01 -v
Notes

- `-i network` selects the network capture source (the running stream server).
- `-o` writes a StructuredDataset; frames are saved and annotations exported by the pipeline.
- Omit `-r`/`--record-only` so the inference pipeline runs.
- Use `--service` to expose a telemetry WebSocket alongside recording if desired.
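The exact on-disk layout of a StructuredDataset isn't spelled out here; as a rough sketch, assuming per-sample records land in an `annotations.jsonl` file keyed by the datakeys used above (`B/image`, `B/tip_point` — both assumptions), a quick post-recording sanity check could look like:

```python
import json
from pathlib import Path

def dataset_stats(root):
    """Count samples and flag keypoints outside [0, 1].

    Assumes each JSONL line holds an image path under "B/image" and an
    (x, y) keypoint under "B/tip_point" in normalized coordinates --
    adjust the filename and keys to your actual dataset layout, and
    compare against the frame size instead if coordinates are in pixels.
    """
    jsonl = Path(root) / "annotations.jsonl"  # assumed filename
    total, suspicious = 0, 0
    for line in jsonl.read_text().splitlines():
        rec = json.loads(line)
        x, y = rec["B/tip_point"]
        total += 1
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            suspicious += 1
    return total, suspicious

# Tiny self-contained demo with a synthetic dataset:
import tempfile
root = Path(tempfile.mkdtemp())
(root / "annotations.jsonl").write_text(
    '{"B/image": "frames/0.png", "B/tip_point": [0.4, 0.6]}\n'
    '{"B/image": "frames/1.png", "B/tip_point": [1.3, 0.2]}\n'
)
total, suspicious = dataset_stats(root)  # → (2, 1)
```

Anything flagged here is a good candidate to revisit in the interactive filter in the next step.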
2. Filter out wrong predictions
Open the dataset in the interactive filter to remove bad samples.
# Defaults assume image at B/image and keypoint at B/tip_point
s6 data filter ./datasets/run_01
# If your datakeys differ, pass them explicitly
s6 data filter ./datasets/run_01 \
--image-key B/image --point-key B/tip_point
Controls

- `a`/`d` to navigate, `x` to delete, `q` to quit.
- A zoomed crop renders around the keypoint to help spot mistakes quickly.
Result
The dataset on disk is updated in place; deleted entries are removed from the JSONL and file storage.
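For large runs, a scripted pre-pass can thin out obvious failures before the interactive review. A minimal sketch — assuming annotations live in an `annotations.jsonl` file with normalized keypoints under a `B/tip_point` key (both assumptions; match them to your StructuredDataset layout) — that drops records whose keypoint hugs the frame border, a common failure mode for off-frame tips:

```python
import json
from pathlib import Path

def drop_border_points(root, margin=0.02):
    """Rewrite annotations.jsonl in place, dropping records whose
    keypoint lies within `margin` of the frame edge (normalized coords).

    The filename and the "B/tip_point" key are assumptions about the
    dataset layout; note this only edits the JSONL, not file storage.
    """
    jsonl = Path(root) / "annotations.jsonl"
    kept = []
    for line in jsonl.read_text().splitlines():
        rec = json.loads(line)
        x, y = rec["B/tip_point"]
        if margin <= x <= 1 - margin and margin <= y <= 1 - margin:
            kept.append(line)
    jsonl.write_text("\n".join(kept) + ("\n" if kept else ""))
    return len(kept)

# Tiny self-contained demo: one centered point, one on the edge.
import tempfile
root = Path(tempfile.mkdtemp())
(root / "annotations.jsonl").write_text(
    '{"B/image": "frames/0.png", "B/tip_point": [0.4, 0.6]}\n'
    '{"B/image": "frames/1.png", "B/tip_point": [0.005, 0.5]}\n'
)
kept = drop_border_points(root)  # → 1 (the edge point is dropped)
```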
3. Prepare training config

Create a small JSON file describing the dataset and augmentation for the keypoint trainer. Example: `cfg/ds_keypoint.json`.
{
"base_dir": ["./datasets/run_01"],
"datakeys": ["B.image", "B.tip_point"],
"mirror": {"enabled": false},
"crop": {"enabled": true, "crop_factor": 0.9, "output_size": 256},
"rotation": {"enabled": true, "max_rotation": 180.0},
"blur": {"enabled": false},
"occlusion": {"enabled": false},
"color_jitter": {"brightness": 0.2, "contrast": 0.2, "gamma": 0.2},
"sampling": {"enabled": false, "bins": 10, "seed": 0}
}
Tip: running the keypoint script (`s6 cog keypoint`) once will write a default template if the config path doesn’t exist.
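If you prefer to generate or validate the config programmatically (e.g. in a batch script), a small sketch mirroring that write-default-if-missing behavior — the field names come from the example above, but `load_or_create` and its validation are illustrative, not part of the `s6` CLI:

```python
import json
from pathlib import Path

# Default template matching the example config above.
DEFAULT_CFG = {
    "base_dir": ["./datasets/run_01"],
    "datakeys": ["B.image", "B.tip_point"],
    "mirror": {"enabled": False},
    "crop": {"enabled": True, "crop_factor": 0.9, "output_size": 256},
    "rotation": {"enabled": True, "max_rotation": 180.0},
    "blur": {"enabled": False},
    "occlusion": {"enabled": False},
    "color_jitter": {"brightness": 0.2, "contrast": 0.2, "gamma": 0.2},
    "sampling": {"enabled": False, "bins": 10, "seed": 0},
}

def load_or_create(path):
    """Load a trainer config, writing the default template if absent."""
    p = Path(path)
    if not p.exists():
        p.write_text(json.dumps(DEFAULT_CFG, indent=2))
        return dict(DEFAULT_CFG)
    cfg = json.loads(p.read_text())
    missing = set(DEFAULT_CFG) - set(cfg)
    if missing:
        raise KeyError(f"config missing sections: {sorted(missing)}")
    return cfg

# Demo: first call writes the template, second call reads it back.
import tempfile
cfg_path = Path(tempfile.mkdtemp()) / "ds_keypoint.json"
cfg = load_or_create(cfg_path)
```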
4. Preview data
Sanity‑check the augmentation and labels before training.
s6 cog keypoint --config ./cfg/ds_keypoint.json --preview-data
5. Train a finer model

Run training on the cleaned dataset. Tune epochs, batch size, and learning rate to your setup.
s6 cog keypoint --config ./cfg/ds_keypoint.json \
--train -e 50 -b 16 -lr 1e-3
Optionally export an ONNX model after training:
s6 cog keypoint --config ./cfg/ds_keypoint.json \
--deploy ./models/keypoint_v2.onnx
Notes

- Checkpoints are saved under `checkpoints/`; use `--restore latest` to resume.
- TensorBoard logs are written to `logs/cog/…` unless `--no-tb` is passed.
6. Put the model back into the pipeline

Point your pipeline config to the newly trained model (ONNX or Torch). Restart `s6 track` and verify improvements in the UI and/or telemetry.
s6 track -i network -v --config ./configs/pipeline.config.yaml
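To quantify "improvements" beyond eyeballing the UI, you can compare per-sample keypoint error between the old and new model on a held-out slice. A hypothetical sketch in plain Python — how you obtain the prediction lists is up to your pipeline; the 256-pixel scale matches the crop `output_size` from the training config:

```python
from math import hypot

def mean_pixel_error(pred, gt, image_size=256):
    """Mean Euclidean keypoint error in pixels for paired (x, y) lists.

    Coordinates are assumed normalized to [0, 1]; image_size converts
    the error to pixels (256 here, matching the crop output_size in the
    training config).
    """
    dists = [hypot(px - gx, py - gy)
             for (px, py), (gx, gy) in zip(pred, gt)]
    return sum(dists) / len(dists) * image_size

# Demo with made-up predictions from two model versions:
gt  = [(0.50, 0.50), (0.25, 0.75)]
old = [(0.60, 0.50), (0.25, 0.70)]   # previous model
new = [(0.52, 0.50), (0.25, 0.74)]   # retrained model
e_old = mean_pixel_error(old, gt)    # ≈ 19.2 px
e_new = mean_pixel_error(new, gt)    # ≈ 3.8 px
```

A lower mean error on the same held-out samples is a more defensible signal than a visual spot check alone.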
That’s it. Iterate on steps 1–6 to progressively refine your keypoint model.