s6.schema.data
Pydantic models for rendered instrument-tracking dataset frames.
Defines the schema of entries written by instrument_render_v3.py and read by convert.py.
- class s6.schema.data.WorldTarget(*, mesh_name: str, vertices: List[str], positions: List[ConstrainedListValue[float]])
Bases: BaseModel
- mesh_name: str
- vertices: List[str]
- positions: List[ConstrainedListValue[float]]
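The field types suggest that `vertices` and `positions` are parallel lists, with `vertices[i]` naming the 3-D point stored in `positions[i]`. An illustrative payload under that assumption (the mesh and vertex names are invented for illustration):

```python
# Hypothetical WorldTarget payload; names and coordinates are made up.
# Assumption: vertices[i] labels positions[i], so the lists run in parallel.
world_target = {
    "mesh_name": "instrument_a",
    "vertices": ["v0", "v1"],
    "positions": [[0.10, 0.02, 0.75], [0.08, 0.01, 0.70]],
}

# Sanity check on the assumed parallel-list invariant.
assert len(world_target["vertices"]) == len(world_target["positions"])
```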
- class s6.schema.data.World(*, targets: List[WorldTarget])
Bases: BaseModel
- targets: List[WorldTarget]
- class s6.schema.data.Projection(*, vertex: str, uv: ConstrainedListValue[float], box_xyxy: ConstrainedListValue[float] | None = None)
Bases: BaseModel
- vertex: str
- uv: ConstrainedListValue[float]
- box_xyxy: ConstrainedListValue[float] | None
- class s6.schema.data.CameraTarget(*, mesh_name: str, projections: List[Projection])
Bases: BaseModel
- mesh_name: str
- projections: List[Projection]
- class s6.schema.data.CameraFrame(*, image_path: str, targets: List[CameraTarget], depth_map_path: str | None = None)
Bases: BaseModel
- image_path: str
- targets: List[CameraTarget]
- depth_map_path: str | None
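A hypothetical CameraFrame payload may help picture the nesting. All paths, mesh names, and coordinates below are invented for illustration, and `box_xyxy` is assumed to hold pixel coordinates:

```python
import json

# Illustrative CameraFrame payload; every concrete value here is invented.
camera_frame = {
    "image_path": "frames/cam_left/000001.png",  # invented path
    "targets": [
        {
            "mesh_name": "instrument_a",  # invented mesh name
            "projections": [
                # box_xyxy assumed to be pixel-space (x1, y1, x2, y2)
                {"vertex": "v0", "uv": [0.42, 0.61],
                 "box_xyxy": [250.0, 280.0, 290.0, 320.0]},
            ],
        }
    ],
    "depth_map_path": None,  # optional field
}

# The payload round-trips cleanly through JSON, as a pydantic input must.
assert json.loads(json.dumps(camera_frame)) == camera_frame
```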
- class s6.schema.data.InstrumentTrackingFrame(*, world: World, uuid: UUID, cameras: Dict[str, CameraFrame])
Bases: BaseModel
- world: World
- uuid: UUID
- cameras: Dict[str, CameraFrame]
- class Config
Bases: object
- extra = 'ignore'
- allow_population_by_field_name = True
- validate_all = True
- to_yolo(camera_name: str, class_to_id: Dict[str, int], image_size: Tuple[int, int], box_ratio: float = 0.1) → YoloFrame
Convert this frame to YOLO format for the given camera.
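The conversion math is not documented here, but a plausible sketch: when a projection carries a pixel-space `box_xyxy`, it gets normalized into YOLO's center/size format; when it does not, a fallback box of `box_ratio` times the image size could be centered on the `uv` point. Both helpers below are assumptions about the behavior, not the actual implementation:

```python
def xyxy_to_yolo(box_xyxy, image_size):
    """Pixel-space (x1, y1, x2, y2) -> normalized YOLO (xc, yc, w, h).
    Assumption: this is what to_yolo does when box_xyxy is present."""
    x1, y1, x2, y2 = box_xyxy
    img_w, img_h = image_size
    return ((x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h,
            (x2 - x1) / img_w, (y2 - y1) / img_h)

def uv_fallback_box(uv, box_ratio):
    """Assumed fallback when box_xyxy is None: uv (already normalized)
    becomes the box center, box_ratio the normalized width and height."""
    return (uv[0], uv[1], box_ratio, box_ratio)

print(xyxy_to_yolo((100, 50, 300, 150), (640, 480)))
print(uv_fallback_box((0.42, 0.61), 0.1))
```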
- class s6.schema.data.YoloBox(*, class_id: int, x_center: float, y_center: float, width: float, height: float)
Bases: BaseModel
A single YOLO bounding box entry.
- class_id: int
- x_center: float
- y_center: float
- width: float
- height: float
- class s6.schema.data.YoloFrame(*, image_path: str, boxes: List[YoloBox])
Bases: BaseModel
YOLO annotation for a single image.
- image_path: str
- boxes: List[YoloBox]
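YoloFrame maps naturally onto the standard YOLO label-file layout: one `class_id x_center y_center width height` line per box. A stdlib sketch of that serialization (plain dicts stand in for YoloBox instances; the `yolo_lines` helper is hypothetical, not part of this module):

```python
def yolo_lines(boxes):
    """Render box dicts as YOLO label-file lines, one box per line."""
    return "\n".join(
        f"{b['class_id']} {b['x_center']:.6f} {b['y_center']:.6f} "
        f"{b['width']:.6f} {b['height']:.6f}"
        for b in boxes
    )

print(yolo_lines([
    {"class_id": 0, "x_center": 0.5, "y_center": 0.5,
     "width": 0.1, "height": 0.2},
]))  # 0 0.500000 0.500000 0.100000 0.200000
```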