Recipe: Pipeline Chrome Trace¶

Create a Chrome/Perfetto timeline of the tracking pipeline.

This recipe shows how to use the track app’s logging option (-x/--output-log) to export a Chrome Trace of the pipeline so you can inspect the timing and nesting of each stage.


What you’ll get¶

  • A Chrome Trace JSON at logs/runs/<timestamp>/perf.log.json (loadable in Chrome’s tracing or Perfetto).

  • A companion metrics.json with per‑frame, human‑readable events for offline stats.


0) Prerequisites¶

  • s6 installed and runnable (pip install -e . in the repo).

  • Either a dataset to replay or access to the running stream server.

  • Optional: a pipeline config at configs/pipeline.config.json|yaml to keep runs consistent.


1) Run track with logging enabled¶

Enable logs with -x/--output-log. Do not use --record-only if you want the pipeline to run and be profiled.

# Replay a dataset headlessly and write logs
s6 track ./datasets/session_001 -x

# Or: live network cameras with a UI and logs
s6 track -i network -v -x

# You can also pin a pipeline config
s6 track ./datasets/session_001 -x --config ./configs/pipeline.config.yaml

Notes

  • Logs are written under logs/runs/<YYYYMMDD_HHMMSS>/ for each run; a small helper for locating the newest run is sketched below.

  • The trace files are finalized when the run exits (Ctrl+C in headless mode, or closing the UI).
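
Because run directories are named by timestamp, a plain lexicographic sort is also chronological. Here is a minimal sketch for picking up the newest run from Python; the logs/runs/<YYYYMMDD_HHMMSS>/ layout is taken from the note above, and paths are relative to wherever you launched s6 track:

from pathlib import Path

# Run directories are timestamped (YYYYMMDD_HHMMSS), so sorting the names is chronological.
runs = sorted(p for p in Path("logs/runs").iterdir() if p.is_dir())
latest = runs[-1]
print("latest run :", latest)
print("trace file :", latest / "perf.log.json")
print("metrics    :", latest / "metrics.json")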


2) Locate the trace files¶

After you stop the run, check the latest directory under logs/runs/:

ls -1 logs/runs/
ls -1 logs/runs/<timestamp>/

You should see at least:

  • perf.log.json — Chrome/Perfetto trace (traceEvents format, displayTimeUnit: ms); a quick way to peek at it from Python is sketched after this list.

  • metrics.json — combined session + per‑frame readable events.
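
To sanity-check the trace without opening a browser, you can load it directly. This is a minimal sketch; it assumes the traceEvents layout noted above and the standard Chrome Trace event fields, and the path is a placeholder for your actual run directory:

import json
from pathlib import Path

trace_path = Path("logs/runs/<timestamp>/perf.log.json")  # substitute your run directory
data = json.loads(trace_path.read_text())

# The file described above is {"traceEvents": [...], ...}; fall back to a bare list just in case.
events = data["traceEvents"] if isinstance(data, dict) else data
names = sorted({e.get("name", "?") for e in events})
print(f"{len(events)} events across {len(names)} distinct scopes")
print("\n".join(names))

If the scope names you expect (see the tips in the next step) are missing, the run probably exited before the trace was flushed; see Troubleshooting below.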


3) Inspect the timeline¶

You can open perf.log.json in either viewer:

  • Chrome: navigate to chrome://tracing, click “Load” and select the file.

  • Perfetto: open https://ui.perfetto.dev and drag‑and‑drop the file.

Tips

  • Event names match pipeline scopes (e.g., search_det, view_synthesis, view_synthesis_warp, local_centroid_det, and model stages decorated with trace_function); a script to summarize these scopes offline is sketched after these tips.

  • Use the search box to jump to a scope; zoom with W/S or mousewheel; pan with A/D.
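
The perf-stats tool in the next step computes timing summaries for you; if you prefer to reduce the raw trace yourself, the sketch below shows one way. It assumes the exporter emits standard complete ("X") events whose dur field is in microseconds, the usual Chrome Trace convention; adjust the filter if your trace uses begin/end ("B"/"E") pairs instead:

import json
from collections import defaultdict
from statistics import mean

with open("logs/runs/<timestamp>/perf.log.json") as f:  # substitute your run directory
    data = json.load(f)

events = data["traceEvents"] if isinstance(data, dict) else data
durations_us = defaultdict(list)
for e in events:
    if e.get("ph") == "X" and "dur" in e:  # complete events carry their own duration
        durations_us[e["name"]].append(e["dur"])

for name, durs in sorted(durations_us.items()):
    print(f"{name:30s} n={len(durs):5d}  mean={mean(durs) / 1000:.2f} ms  max={max(durs) / 1000:.2f} ms")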


4) Optional: summarize stats¶

If you want quick per‑scope timing stats or to compare two runs, use the perf‑stats tool:

# Show stats for the latest run under ./logs
s6 perf-stats

# Compare two runs (directories or explicit metrics files)
s6 perf-stats ./logs/runs/<run_A> ./logs/runs/<run_B>

# Save a Markdown report
s6 perf-stats ./logs/runs/<run_A> -o perf.md

Troubleshooting¶

  • No perf.log.json? Ensure -x/--output-log is passed and exit the run to flush files.

  • Empty trace? Make sure you did not pass --record-only; the inference pipeline must run to emit trace events.

  • Different behavior across runs? Pin --config to a known pipeline config so runs stay comparable.


See also¶

  • Application details: application/track.md

  • Stats tooling: application/perf-stats.md