Blender Simulation Renderer (s6 sim render)¶

s6 sim render launches Blender in background mode, runs render_animation.py inside the Blender process, and writes a StructuredDataset to disk. The dataset contains one frame record per timeline frame, one JPEG per camera per frame, and a calibration file for the rendered camera setup.

  • Entrypoint wrapper: src/s6/app/sim/render.py

  • In-Blender script: src/s6/app/sim/render_animation.py

  • Dataset writer: structured_dataset.StructuredDataset

What it doesΒΆ

  • Renders named cameras from a .blend scene.

  • Writes image data to <output>/<camera>/image_XXXXX.jpeg.

  • Appends one JSON record per timeline frame to <output>/data.jsonl.

  • Writes <output>/configs/calibration.config.json once per run.

  • Optionally logs selected scene object locations in the identity camera frame.

UsageΒΆ

# Render 60 frames from cameras L, R, and B
s6 sim render \
  --blend-file /path/to/scene.blend \
  --output-directory ./temp/my_sim \
  --cameras L --cameras R --cameras B \
  --frame-count 60

The wrapper accepts comma-separated camera names or repeated flags:

s6 sim render \
  --blend-file /path/to/scene.blend \
  --output-directory ./temp/my_sim \
  --cameras L,R,B
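Both invocation styles should produce the same camera list. A minimal sketch of the normalization, assuming the wrapper flattens each flag value on commas (the real parsing in render.py may differ):

```python
def normalize_cameras(values):
    """Flatten repeated --cameras values and comma-separated lists
    into one list of camera names."""
    cameras = []
    for value in values:
        # Each flag occurrence may itself hold a comma-separated list.
        cameras.extend(name.strip() for name in value.split(",") if name.strip())
    return cameras

# Repeated flags and a comma-separated list normalize identically:
assert normalize_cameras(["L", "R", "B"]) == ["L", "R", "B"]
assert normalize_cameras(["L,R,B"]) == ["L", "R", "B"]
```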

To log object positions relative to the identity camera:

s6 sim render \
  --blend-file /path/to/scene.blend \
  --output-directory ./temp/my_sim \
  --cameras L --cameras R --cameras B \
  --objects Cube --objects Sphere

CLI FlagsΒΆ

From src/s6/app/sim/render.py:

  • --blend-file: path to the .blend file, required.

  • --scene-name: optional scene name to activate before running.

  • --blender: path to the Blender executable; defaults to the Blender app path set in render.py.

  • --cameras: camera names to render. Repeat the flag or pass a comma-separated list.

  • --frame-count: number of timeline frames to render, default 60.

  • --output-directory: dataset root for images and data.jsonl, required.

  • --identity-camera: camera treated as the calibration identity, default L.

  • --objects: scene object names to log per frame, repeatable or comma-separated.
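As a rough sketch, the flag surface above maps onto an argparse definition like the following. This is a hypothetical reconstruction for illustration only: render.py may use a different CLI library, and the real --blender default is the app path in render.py, not the placeholder used here.

```python
import argparse

# Hypothetical reconstruction of the documented flag surface.
parser = argparse.ArgumentParser(prog="s6 sim render")
parser.add_argument("--blend-file", required=True)
parser.add_argument("--scene-name")
parser.add_argument("--blender", default="blender")  # placeholder default
parser.add_argument("--cameras", action="append", default=[])
parser.add_argument("--frame-count", type=int, default=60)
parser.add_argument("--output-directory", required=True)
parser.add_argument("--identity-camera", default="L")
parser.add_argument("--objects", action="append", default=[])

args = parser.parse_args(
    ["--blend-file", "scene.blend", "--output-directory", "out", "--cameras", "L,R,B"]
)
```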

Output LayoutΒΆ

Example output tree:

./temp/my_sim/
├─ data.jsonl
├─ configs/
│  └─ calibration.config.json
├─ L/
│  ├─ image_00000.jpeg
│  └─ ...
├─ R/
│  ├─ image_00000.jpeg
│  └─ ...
└─ B/
   ├─ image_00000.jpeg
   └─ ...

Each JSONL record contains the timeline frame number plus image references for the rendered cameras. When --objects is used, the record also includes an objects map with per-object 3D locations in the identity camera frame.
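Since the records are plain JSON lines, they can be consumed without any project code. A minimal sketch of reading them back, assuming only the layout described above (the image fields hold paths relative to the dataset root):

```python
import json
from pathlib import Path

def read_records(dataset_root):
    """Yield one parsed record per line of <output>/data.jsonl."""
    with open(Path(dataset_root) / "data.jsonl") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# record["L"]["image"] is a dataset-relative path such as
# "L/image_00000.jpeg"; join it with dataset_root to load the JPEG.
```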

Calibration FileΒΆ

The renderer writes <output>/configs/calibration.config.json once per run. The file contains calibration entries for L, R, and B, plus fallback entries for RGBL and RGBR.

Implementation details from render_animation.py:

  • Extrinsics are written as world-to-camera transforms in OpenCV camera axes.

  • The identity camera gets an identity extrinsic in the exported config.

  • Intrinsics are derived from the Blender scene camera and render settings.

  • Translation values are scaled by WORLD_SCALE = 10.0.
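The extrinsic export above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual render_animation.py code: it assumes a 4x4 Blender camera-to-world matrix as input, and that the Blender-to-OpenCV axis change is the usual Y/Z flip (Blender cameras look down -Z with +Y up; OpenCV cameras look down +Z with +Y down).

```python
import numpy as np

WORLD_SCALE = 10.0  # matches the scale noted above

# Flipping the camera Y and Z axes converts Blender camera
# coordinates to OpenCV camera coordinates.
BLENDER_TO_OPENCV = np.diag([1.0, -1.0, -1.0])

def world_to_camera_opencv(camera_to_world_4x4):
    """Sketch: world-to-camera rotation and translation in OpenCV axes,
    with translation divided by WORLD_SCALE. The real script may
    compose these steps differently."""
    world_to_camera = np.linalg.inv(camera_to_world_4x4)
    R = BLENDER_TO_OPENCV @ world_to_camera[:3, :3]
    t = BLENDER_TO_OPENCV @ world_to_camera[:3, 3] / WORLD_SCALE
    return R, t
```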

Object LoggingΒΆ

Object logging is optional and only runs when --objects is provided.

  • Object locations are transformed into the identity camera frame.

  • The identity camera must exist and be a camera object, or object logging is skipped with a warning.

  • Object world coordinates are divided by the same WORLD_SCALE = 10.0 used for camera translation before transformation.
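The scale-then-transform order described above can be sketched like this. It is an illustration under assumptions, not the real implementation: R_wc and t_wc are assumed to be the identity camera's world-to-camera rotation and already-scaled translation, matching the exported calibration.

```python
import numpy as np

WORLD_SCALE = 10.0  # same scale applied to camera translations

def object_in_identity_frame(world_location, R_wc, t_wc):
    """Sketch: divide the object's Blender world coordinates by
    WORLD_SCALE first, then express the point in the identity
    camera's frame."""
    p = np.asarray(world_location, dtype=float) / WORLD_SCALE
    return R_wc @ p + t_wc
```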

Example record:

{
  "frame": 0,
  "L": { "image": "L/image_00000.jpeg" },
  "R": { "image": "R/image_00000.jpeg" },
  "B": { "image": "B/image_00000.jpeg" },
  "objects": {
    "Cube": { "location": [0.0, 0.0, 0.0] },
    "Sphere": { "location": [0.0, 0.0, 0.0] }
  }
}

Replay In s6 trackΒΆ

The rendered dataset can be replayed with the normal tracking CLI using the -i input flag:

python -m s6.app.track -i ./temp/my_sim -o ./temp/my_sim_run

StructuredDataset loads the saved image paths back into NumPy arrays during replay, so the pipeline sees images at context["L"]["image"], context["R"]["image"], and context["B"]["image"].

Blender Python PackagesΒΆ

The Blender process needs numpy and Pillow available to load rendered frames. Use the helper if Blender’s embedded Python is missing packages:

python -m s6.app.sim.install_package numpy Pillow pydantic --blender /path/to/Blender

  • Required for rendering: numpy, Pillow

  • Optional for fallback dataset serialization: pydantic