---
title: System Architecture Overview
updated:
  - 2025-11-03 MinhHan
  - 2025-13-32 MinhHan (update sections 7, 8, 2, 1)
---

# System Architecture Overview

This document provides a high-level overview of the execution flow and modular design of the tracking system. It serves as a technical reference for developers working on both the C++ and Python sides of the vision pipeline.

---

## 1. Overview

The system is organized into four major layers:

1. **Entry Layer (`app/track.py`)**
   - Command-line interface and runtime entry point.
   - Initializes configuration, logging, and pipeline settings.
   - Manages execution modes (UI, service, headless, record-only).

2. **Context Layer (`app/_contextgenerators.py`)**
   - Handles data input and frame acquisition.
   - Supports multiple sources: live network streams, local cameras, recorded datasets, or database entries.

3. **Pipeline Layer (`app/_pipeline.py` and submodules)**
   - Performs the main image-processing pipeline.
   - Composed of modular stages:
     - Boundary detection (`_pipeline_boundary_stage.py`)
     - Camera-view processing (`_pipeline_process_camera_view.py`)
     - Triangulation (`_pipeline_triangulate_stage.py`)
     - Tip detection (`_pipeline_tip_detection_stage.py`)
     - Experimental extensions (`_pipeline_experimental_stage.py`)

4. **Interface Layer (`app/_gui.py` / service mode)**
   - GUI via PyQt6 (`MainWindow`) connected through multiprocessing queues.
   - Optional REST/WebSocket server for telemetry (FastAPI + Uvicorn).

---

## 2. Execution Flow Summary

1. **Program start**
   - `app/track.py` is executed as the entry point.
   - Command-line arguments are parsed (input mode, logging, config, UI/service flags, etc.).

2. **Configuration and context initialization**
   - Loads the pipeline configuration file (`pipeline.config.json` or `.yaml`).
   - Creates a `context_generator` based on the input type:
     - `CameraContextGenerator`
     - `PhysicalCameraContextGenerator`
     - `DatabaseContextGenerator`
     - `DatasetContextGenerator`

3. **Pipeline selection**
   - If `--record-only` is set, the pipeline is disabled (frames are recorded only).
   - Otherwise, the actual `pipeline()` function is imported from `_pipeline.py`.

4. **Execution modes**
   - **Service mode** (`--service`):
     - Starts a background process for context generation and pipeline execution.
     - Launches a FastAPI server with:
       - `/ws/telemetry`: WebSocket streaming live `ctx["export"]` data.
       - `/queue_size`: returns the current queue depth.
   - **UI mode** (`--ui`):
     - Launches the PyQt6 GUI and creates a shared queue and event pool.
     - Displays real-time processing results.
   - **Headless mode**:
     - Runs `context_generator.run(None, pipeline)` directly in the main process.

5. **Output**
   - Logs and results are stored under `logs/runs//` when enabled.
   - Optional dataset export is controlled by `--output-dataset` and `--manually-output-dataset`.

A minimal dispatch sketch covering these steps is shown at the end of Section 3.

---

## 3. File Hierarchy (Simplified)

```
project_root/
├── app/
│   ├── track.py                          # Entry point
│   ├── _contextgenerators.py             # Context input sources
│   ├── _pipeline.py                      # Main pipeline manager
│   ├── _pipeline_boundary_stage.py
│   ├── _pipeline_process_camera_view.py
│   ├── _pipeline_triangulate_stage.py
│   ├── _pipeline_tip_detection_stage.py
│   ├── _pipeline_experimental_stage.py
│   └── _gui.py                           # GUI components (PyQt6)
└── vision/
    └── detectors.py                      # Detection utilities
```
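To make the execution flow in Section 2 concrete, the sketch below shows one way `app/track.py` could dispatch between context generators and execution modes. The generator class names, CLI flags, and `service_mode()` come from this document; `build_context_generator`, `launch_gui`, the constructor arguments, and the exact argument parsing are illustrative assumptions rather than the real implementation.

```python
# Hypothetical dispatch sketch for app/track.py; helper names marked below are assumptions.
import argparse


def build_context_generator(source: str):
    """Assumed helper: pick a context generator from the input argument (Section 2, step 2)."""
    from app import _contextgenerators as cg

    if source.startswith("db:"):
        return cg.DatabaseContextGenerator(source.removeprefix("db:"))  # db:name input
    if source == "local":
        return cg.PhysicalCameraContextGenerator()
    if source == "dataset":
        return cg.DatasetContextGenerator()
    return cg.CameraContextGenerator(source)  # live network stream


def main() -> None:
    parser = argparse.ArgumentParser(description="Tracking system entry point")
    parser.add_argument("input", help="network URL, 'local', 'dataset', or 'db:<name>'")
    parser.add_argument("--record-only", action="store_true")
    parser.add_argument("--service", action="store_true")
    parser.add_argument("--ui", action="store_true")
    args = parser.parse_args()

    context_generator = build_context_generator(args.input)

    # Step 3: --record-only disables the pipeline (frames are recorded only).
    pipeline = None
    if not args.record_only:
        from app._pipeline import pipeline

    # Step 4: dispatch on execution mode.
    if args.service:
        service_mode(context_generator, pipeline)   # signature assumed; FastAPI + Uvicorn
    elif args.ui:
        launch_gui(context_generator, pipeline)     # assumed PyQt6 launcher helper
    else:
        context_generator.run(None, pipeline)       # headless mode


if __name__ == "__main__":
    main()
```

The real `track.py` also handles logging setup, the command queue, and the dataset-export flags, which this sketch omits.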
---

## 4. System Flowchart

```{mermaid}
flowchart TD
    A[["Start: app/track.py"]] --> B[Parse CLI arguments]
    B --> C["Load config & setup environment"]
    C --> D{Input type?}
    D -->|network| D1[CameraContextGenerator]
    D -->|local| D2[PhysicalCameraContextGenerator]
    D -->|db:name| D3[DatabaseContextGenerator]
    D -->|dataset| D4[DatasetContextGenerator]
    C --> E{Record-only?}
    E -->|yes| F["Record only (no pipeline)"]
    E -->|no| G[Import pipeline from _pipeline.py]
    C --> H{Execution mode?}
    H -->|Service| I["service_mode()"]
    H -->|UI| J[Launch PyQt6 MainWindow]
    H -->|Headless| K["Run context_generator.run() directly"]

    %% Service mode branch
    I --> I1["Background: context_generator.run(q, cmd_queue, pipeline)"]
    I --> I2[FastAPI + Uvicorn server]
    I2 --> I3[/"ws/telemetry<br/>Stream ctx.export + queue stats"/]
    I2 --> I4[GET /queue_size]

    %% UI branch
    J --> J1[Create multiprocessing Manager/Queue/Event/Pool]
    J1 --> J2["Background: context_generator.run(q, cmd_queue, pipeline)"]
    J --> J3["MainWindow.connect(q)<br/>display processed data"]

    %% Headless
    K --> K1["context_generator.run(None, pipeline)"]

    %% Pipeline internals
    G --> P[Pipeline stages]
    P --> P1[Boundary Stage]
    P --> P2[Process Camera View]
    P --> P3[Triangulate Stage]
    P --> P4[Tip Detection Stage]
    P --> P5[Experimental Stage]

    %% End
    I4 --> Z([End])
    J3 --> Z
    K1 --> Z
    P5 --> Z
```
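As a rough illustration of the service-mode branch above, the following sketch shows one way the FastAPI web layer could be structured. The `/ws/telemetry` and `/queue_size` routes and the idea of streaming `ctx["export"]` plus queue statistics come from this document; the payload shape, polling interval, port, and use of a local queue are assumptions.

```python
# Illustrative sketch of the service-mode web layer (not the real implementation).
import asyncio
import queue

import uvicorn
from fastapi import FastAPI, WebSocket

app = FastAPI()

# In the real system this would be a multiprocessing Manager().Queue() that the
# background context_generator.run(q, cmd_queue, pipeline) process fills with
# ctx["export"] payloads; a plain queue keeps this sketch self-contained.
telemetry_queue: "queue.Queue[dict]" = queue.Queue()


@app.get("/queue_size")
def queue_size() -> dict:
    """Return the current depth of the telemetry queue."""
    return {"queue_size": telemetry_queue.qsize()}


@app.websocket("/ws/telemetry")
async def ws_telemetry(websocket: WebSocket) -> None:
    """Stream export payloads and queue statistics to the connected client."""
    await websocket.accept()
    while True:
        try:
            export = telemetry_queue.get_nowait()
        except queue.Empty:
            await asyncio.sleep(0.01)  # yield to the event loop instead of blocking
            continue
        await websocket.send_json({"export": export, "queue_size": telemetry_queue.qsize()})


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Polling with a short `asyncio.sleep` is simply the easiest way to bridge a synchronous queue into the async WebSocket loop; the real service mode may use a different mechanism.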
---

## 5. Design Principles

- **Modularity:** Each stage and context generator is self-contained, enabling isolated development and debugging.
- **Parallelism:** Background processes (via `multiprocessing.Pool` and `Manager.Queue`) allow concurrent frame acquisition and processing.
- **Interoperability:** Designed for hybrid Python/C++ usage (e.g., `pybind11`-linked modules).
- **Extensibility:** New stages or detectors can be added by implementing the corresponding `_pipeline_*_stage.py` modules and registering them (see the stage skeleton sketched at the end of this document).

---

## 6. Related Documents

| File | Description |
|------|-------------|
| [`context_generators.md`](context_generators.md) | Details of each input generator type |
| [`pipeline.md`](pipeline.md) | Main pipeline flow |
| [`stages.md`](stages.md) | Substage breakdown and inter-stage data |
| [`detectors.md`](detectors.md) | Detection logic and blob analysis |
| [`gui.md`](gui.md) | GUI architecture and queue interaction |
| [`service_mode.md`](service_mode.md) | API design and telemetry stream |

---
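Section 5 notes that new stages are added by implementing `_pipeline_*_stage.py` modules and registering them. The document does not spell out the stage interface, so the skeleton below assumes the common pattern of a callable that receives and returns a shared `ctx` dictionary (only `ctx["export"]` is confirmed by this document); the module name, function name, data keys, and registration list are hypothetical.

```python
# Hypothetical new stage module, e.g. app/_pipeline_my_new_stage.py.
# The ctx-dict interface and the registration shown in the comment are assumptions.
from typing import Any


def my_new_stage(ctx: dict[str, Any]) -> dict[str, Any]:
    """Read upstream results from ctx, attach this stage's output, and pass ctx on."""
    frames = ctx.get("frames", [])                 # upstream data, key name assumed
    result = [len(frame) for frame in frames]      # placeholder computation
    ctx["my_new_result"] = result
    ctx.setdefault("export", {})["my_new_result"] = result  # expose via telemetry
    return ctx


# Registration in _pipeline.py might then amount to appending the callable to an
# ordered list of stages, for example:
#
#     STAGES = [boundary_stage, process_camera_view, triangulate_stage,
#               tip_detection_stage, my_new_stage]
```

See [`stages.md`](stages.md) for the actual substage breakdown and inter-stage data.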