Blender Simulation Renderer (s6 sim render)¶
Runs Blender headless to render frames from named cameras in a `.blend` scene and writes a small, directory‑backed dataset. Images are passed as NumPy arrays into `StructuredDataset.write()`, which saves them under the dataset directory and inserts the relative image path into `data.jsonl` automatically.
- Entrypoint: `src/s6/app/sim/render.py`
- In‑Blender script: `src/s6/app/sim/render_animation.py`
- Storage API: `structured_dataset.StructuredDataset`
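For orientation, here is a minimal sketch of that write path. The constructor shown is an assumption; only the `write()` payload shape is documented on this page:

```python
import numpy as np
from structured_dataset import StructuredDataset

# Assumed constructor: a dataset rooted at a directory, backed by data.jsonl.
ds = StructuredDataset("./temp/my_sim")

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # uint8 BGR image
# Arrays are saved as JPEGs under <root>/<camera>/ and the record appended
# to data.jsonl stores the relative path instead of the pixels.
ds.write({"frame": 0, "L": {"image": frame}})
```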
Why use it¶
- Generate reproducible, labelled test sequences without hardware.
- Log per‑frame camera intrinsics/extrinsics for downstream geometry.
- Produce a dataset layout that `s6 track -i <dataset_dir>` can replay directly.
Quick start¶
```bash
# Ensure Blender’s Python has NumPy/Pillow
python -m s6.app.sim.install_package numpy Pillow pydantic

# Render 60 frames from cameras L, R, and B
s6 sim render \
  --blend-file /path/to/scene.blend \
  --output-directory ./temp/my_sim \
  --cameras L --cameras R --cameras B \
  --frame-count 60
```
Log specific object locations (relative to identity camera):
```bash
s6 sim render \
  --blend-file /path/to/scene.blend \
  --output-directory ./temp/my_sim \
  --cameras L --cameras R --cameras B \
  --objects Cube,Sphere \
  --frame-count 60
```
Arguments (from `s6.app.sim.render`):

- `--blend-file`: path to the `.blend` file (required)
- `--scene-name`: optional scene to activate before running
- `--blender`: Blender executable (default: `blender` on PATH)
- `--cameras`: camera object names to render (repeatable or comma-separated)
- `--frame-count`: number of frames to produce (default: 60)
- `--output-directory`: dataset root (images + `data.jsonl`) (required)
- `--identity-camera`: name treated as identity in calibration (default: `L`)
- `--objects`: scene object names to log per-frame 3D location in identity camera frame (repeatable or comma-separated)
Output layout¶
The renderer appends one JSON record per timeline frame to `data.jsonl` and, via `StructuredDataset`, saves one JPEG per camera under a subfolder named after the camera.
```
./temp/my_sim/
├─ data.jsonl          # JSON Lines, one record per timeline frame
├─ L/
│  ├─ image_00000.jpeg
│  ├─ image_00001.jpeg
│  └─ ...
├─ R/
│  ├─ image_00000.jpeg
│  ├─ image_00001.jpeg
│  └─ ...
└─ B/
   ├─ image_00000.jpeg
   ├─ image_00001.jpeg
   └─ ...
```
Each JSON record contains image references (auto‑injected by `StructuredDataset`), for example:

```json
{
  "frame": 0,
  "L": { "image": "L/image_00000.jpeg" },
  "R": { "image": "R/image_00000.jpeg" },
  "B": { "image": "B/image_00000.jpeg" }
}
```

A single calibration file is also written once per run under `<output>/configs/calibration.config.json` (see “How it works” below).
How it works:
- The in‑Blender script renders each camera view to a temporary PNG on disk for reliability across Blender builds, loads it as a NumPy array (`uint8`, BGR), and calls `StructuredDataset.write({"L": {"image": np_array}, ...})`.
- `StructuredDataset` saves arrays as JPEG under `<root>/<camera>/image_XXXXX.jpeg` and replaces them with relative paths in `data.jsonl`.
- A calibration file is written to `<output>/configs/calibration.config.json` using OpenCV‑convention camera extrinsics and the simplified intrinsics described below (both derived from Blender’s camera settings). The `--identity-camera` (default `L`) defines the world frame; its extrinsic is identity and the others are expressed relative to it.
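A condensed sketch of the per‑frame loop, using standard `bpy` calls; the actual `render_animation.py` may structure this differently:

```python
import bpy
import numpy as np
from PIL import Image

def render_camera_to_array(cam_name: str, tmp_path: str) -> np.ndarray:
    """Render the named camera to a temporary PNG, then load it back as uint8 BGR."""
    scene = bpy.context.scene
    scene.camera = bpy.data.objects[cam_name]
    scene.render.image_settings.file_format = "PNG"
    scene.render.filepath = tmp_path
    bpy.ops.render.render(write_still=True)  # temp file avoids empty "Render Result" buffers
    rgb = np.asarray(Image.open(tmp_path).convert("RGB"), dtype=np.uint8)
    return rgb[:, :, ::-1]  # RGB -> BGR

# Per timeline frame: advance the timeline, render every camera, write one record.
# scene.frame_set(i)
# dataset.write({cam: {"image": render_camera_to_array(cam, tmp)} for cam in cams})
```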
Renderer details:
- Output directory is resolved to an absolute path before invoking Blender and inside the Blender script to avoid working‑directory surprises.
- Temporary render files are created under `<output>/.render_tmp/` and removed after they are read back.
- Blender’s Python must have NumPy and Pillow; use `s6.app.sim.install_package` (see below).
Replay in s6 track¶
`src/s6/app/track.py` uses `DatasetContextGenerator` to load datasets. The tracking pipeline requires `L.image`, `R.image`, and `B.image` to be present, so include all three cameras when rendering if you plan to run the full pipeline:
```bash
python -m s6.app.track -i ./temp/my_sim -o ./temp/my_sim_run
```
During replay, `StructuredDataset` auto‑loads the image paths back into NumPy arrays, so the pipeline receives images at `context["L"]["image"]`, `context["R"]["image"]`, and `context["B"]["image"]`.
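The loading step is equivalent to the manual sketch below (illustrative only; the real loader is `DatasetContextGenerator` backed by `StructuredDataset`):

```python
import json
from pathlib import Path

import numpy as np
from PIL import Image

root = Path("./temp/my_sim")
for line in (root / "data.jsonl").open():
    context = json.loads(line)
    # Resolve each camera's relative image path back into a pixel array,
    # mirroring what StructuredDataset does automatically on replay.
    for cam in ("L", "R", "B"):
        rel_path = context[cam]["image"]
        context[cam]["image"] = np.asarray(Image.open(root / rel_path))
    # context now has the shape the tracking pipeline expects.
```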
Notes on calibration¶
- Intrinsics (`K`): computed assuming horizontal sensor fit with `fx = fy`:
  - `fx = f_mm * (res_x_px / sensor_width_mm)`, `fy = fx`.
  - Principal point at the image center: `cx = res_x_px / 2`, `cy = res_y_px / 2`.
  - Pixel aspect and vertical fit are ignored by design.
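As a worked example of these formulas (the helper name here is illustrative, not the renderer’s actual function):

```python
import numpy as np

def intrinsics_from_blender(f_mm: float, sensor_width_mm: float,
                            res_x_px: int, res_y_px: int) -> np.ndarray:
    """Build K as described above: horizontal fit, fx = fy, centered principal point."""
    fx = f_mm * (res_x_px / sensor_width_mm)
    return np.array([[fx, 0.0, res_x_px / 2.0],
                     [0.0, fx, res_y_px / 2.0],
                     [0.0, 0.0, 1.0]])

# A 50 mm lens on a 36 mm-wide sensor at 1920x1080 gives fx = fy ≈ 2666.7.
K = intrinsics_from_blender(50.0, 36.0, 1920, 1080)
```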
- Extrinsics: OpenCV‑style world→camera transform with axis conversion.
  - OpenCV camera axes: `+X` right, `+Y` down, `+Z` forward.
  - Computed via `blender_camera_to_opencv_extrinsics()` and exported as a 4×4 `T_world_cam`.
  - Translation is divided by a constant `WORLD_SCALE` (default `10.0`). Edit `WORLD_SCALE` in `src/s6/app/sim/render_animation.py` to adjust for your scene units.
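A plausible reconstruction of the conversion from the conventions above; the real implementation is `blender_camera_to_opencv_extrinsics()` in `render_animation.py`, so treat this as a sketch, not the source:

```python
import numpy as np

# Axis change from Blender camera axes (+X right, +Y up, -Z forward)
# to OpenCV camera axes (+X right, +Y down, +Z forward).
R_BCAM_TO_CV = np.diag([1.0, -1.0, -1.0])
WORLD_SCALE = 10.0  # must match the constant in render_animation.py

def world_to_opencv_camera(matrix_world: np.ndarray) -> np.ndarray:
    """matrix_world: the camera's 4x4 camera->world transform as Blender reports it."""
    R_cw = matrix_world[:3, :3]                   # camera -> world rotation
    t_cw = matrix_world[:3, 3]                    # camera origin in world coords
    R = R_BCAM_TO_CV @ R_cw.T                     # world -> OpenCV-camera rotation
    t = R_BCAM_TO_CV @ (-R_cw.T @ t_cw) / WORLD_SCALE
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                                      # 4x4 T_world_cam
```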
These values can be fed directly into `s6.schema.CalibrationConfig` or `s6.vision.Camera` for evaluation.
Object Logging¶
Purpose: record selected scene objects’ 3D locations in the identity camera frame for each rendered frame.
- Enable via CLI: pass one or more object names using `--objects`.
  - Repeatable: `--objects Cube --objects Sphere`
  - Comma-separated: `--objects Cube,Sphere`
- Coordinate frame: OpenCV camera axes of the identity camera (`--identity-camera`, default `L`): `+X` right, `+Y` down, `+Z` forward.
- Scaling: object world locations are divided by the same `WORLD_SCALE` (default `10.0`) used for camera extrinsic translations, then transformed by the identity camera’s world→camera matrix (see the sketch after the example record).
- Dataset entry shape per frame:
```json
{
  "frame": 0,
  "L": { "image": "L/image_00000.jpeg" },
  "R": { "image": "R/image_00000.jpeg" },
  "objects": {
    "Cube": { "location": [0.0, 0.0, 0.0] },
    "Sphere": { "location": [0.0, 0.0, 0.0] }
  }
}
```
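A sketch of that transform, assuming `T_world_cam` is the identity camera’s exported 4×4 world→camera matrix:

```python
import numpy as np

def object_location_in_identity_frame(loc_world: np.ndarray,
                                      T_world_cam: np.ndarray,
                                      world_scale: float = 10.0) -> np.ndarray:
    """Scale the world location by WORLD_SCALE, then map it through the
    identity camera's world->camera transform (OpenCV axes)."""
    p = np.append(loc_world / world_scale, 1.0)  # homogeneous point
    return (T_world_cam @ p)[:3]                 # the logged [x, y, z]
```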
If the identity camera is missing or is not a `CAMERA` object, object logging is skipped with a warning.
See also¶
- Tracking CLI: application/track
- Storage API: reference/s6.utils (module `s6.utils.datastore`)
Install Blender Python Packages¶
Use the helper to install into Blender’s embedded Python:
```bash
python -m s6.app.sim.install_package numpy Pillow pydantic --blender /path/to/Blender
```
- Required for the renderer: `numpy`, `Pillow`
- Optional for dataset model serialization: `pydantic`
Calibration CLI (s6.app.sim.calib)¶
Calibrates intrinsics for `L`/`R`/`B` from a `StructuredDataset` using ChArUco detection. Uses the board definition from `s6.utils.calibration` (`DICT_4X4_50`, 8×8, square = 0.015 m, marker = 0.011 m).
Examples:
```bash
# Calibrate all available cameras and write calibration.charuco.json
python -m s6.app.sim.calib --dataset ./temp/my_sim

# Choose cameras and limit frames
python -m s6.app.sim.calib --dataset ./temp/my_sim \
  --cameras L --cameras R --max-frames 300 --min-corners 15

# Preview detections (no calibration, interactive)
python -m s6.app.sim.calib --dataset ./temp/my_sim --preview --preview-max-frames 200
```
Behavior:
- Non‑preview mode saves detection overlays to `<dataset>/calib_metadata/<CAM>/frame_XXXXXX.png` while scanning frames.
- Runs `cv2.calibrateCamera` with 2D–3D correspondences built from detected ChArUco corners and the known board geometry (see the sketch below).
- Writes `<dataset>/calibration.charuco.json` with per‑camera `K`, `dist`, `rms`, `image_size`, and `frames_used`/`total`.
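A condensed sketch of that loop, assuming the OpenCV ≥ 4.7 `cv2.aruco` API from `opencv-contrib-python`; `frame_paths` is a placeholder for one camera’s images, and the module’s actual structure may differ:

```python
import cv2

# Board definition matching s6.utils.calibration: DICT_4X4_50, 8x8 squares,
# square = 0.015 m, marker = 0.011 m.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard((8, 8), 0.015, 0.011, dictionary)
detector = cv2.aruco.CharucoDetector(board)

obj_points, img_points, image_size = [], [], None
for path in frame_paths:  # placeholder: one camera's frames from the dataset
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    corners, ids, _, _ = detector.detectBoard(gray)
    if ids is not None and len(ids) >= 15:  # cf. --min-corners
        # The known board geometry supplies the 3D side of each correspondence.
        obj, img = board.matchImagePoints(corners, ids)
        obj_points.append(obj)
        img_points.append(img)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
```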
Requirements: `opencv-contrib-python` (for `cv2.aruco`).
ChArUco Utilities (s6.app.sim.charuco_detect)¶
Generates a board image matching the calibration settings and runs live ChArUco/ArUco detection from a webcam.
Useful for quick visual checks and for ensuring the printed board matches the configured dimensions.
Run directly:
```bash
python -m s6.app.sim.charuco_detect
```
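For reference, generating a matching board image with the OpenCV ≥ 4.7 API looks roughly like this (a sketch; the module bundles its own generation code):

```python
import cv2

# Same geometry as the calibration board: DICT_4X4_50, 8x8, 0.015 m / 0.011 m.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard((8, 8), 0.015, 0.011, dictionary)
cv2.imwrite("charuco_board.png", board.generateImage((1600, 1600)))
```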
Troubleshooting¶
- Black images or empty buffers in headless Blender: the renderer uses a temp‑file approach to avoid empty “Render Result” buffers. Ensure `Pillow` is installed in Blender’s Python.
- “Camera not found” warnings: ensure the `.blend` contains camera objects named exactly `L`, `R`, `B`, or pass the correct names via `--cameras`.
- `track.py` requires `B.image`: include camera `B` when generating datasets intended for the full tracking pipeline.