Releases: roboflow/trackers
Trackers 2.3.0
Changelog
🚀 Added
- Added `OCSORTTracker`, a clean re-implementation of OC-SORT. OC-SORT shifts to an observation-centric paradigm, using real detections to correct Kalman filter errors accumulated during occlusions. It introduces Observation-Centric Re-Update (ORU) for state recovery, Observation-Centric Momentum (OCM) for direction-consistency-weighted association, and Observation-Centric Recovery (OCR) for second-stage heuristic matching. OC-SORT achieves the highest HOTA on MOT17 and DanceTrack with default parameters. (#207)
| Algorithm | Description | MOT17 HOTA | SportsMOT HOTA | SoccerNet HOTA | DanceTrack HOTA |
|---|---|---|---|---|---|
| SORT | Kalman filter + Hungarian matching baseline. | 58.4 | 70.9 | 81.6 | 45.0 |
| ByteTrack | Two-stage association using high and low confidence detections. | 60.1 | 73.0 | 84.0 | 50.2 |
| OC-SORT | Observation-centric recovery for lost tracks. | 61.9 | 71.7 | 78.4 | 51.8 |
```python
import cv2
import supervision as sv
from inference import get_model
from trackers import OCSORTTracker

model = get_model("rfdetr-medium")
tracker = OCSORTTracker()

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

cap = cv2.VideoCapture("<SOURCE_VIDEO_PATH>")
if not cap.isOpened():
    raise RuntimeError("Failed to open video source")

while True:
    ret, frame = cap.read()
    if not ret:
        break

    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)

    frame = box_annotator.annotate(frame, detections)
    frame = label_annotator.annotate(
        frame, detections, labels=[str(tracker_id) for tracker_id in detections.tracker_id]
    )

    cv2.imshow("RF-DETR + OC-SORT", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

trackers-2.3.0-promo.mp4
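The Observation-Centric Momentum term described above weights association by direction consistency. A minimal sketch of that idea, independent of the Trackers internals (all names below are illustrative, not the library's API):

```python
import numpy as np

def direction_consistency_cost(prev_obs, last_obs, detections):
    """Angle difference between a track's historical motion direction
    (prev_obs -> last_obs) and the direction toward each candidate
    detection (last_obs -> detection). All points are (x, y) centers."""
    track_dir = np.asarray(last_obs, dtype=float) - np.asarray(prev_obs, dtype=float)
    cand_dirs = np.asarray(detections, dtype=float) - np.asarray(last_obs, dtype=float)
    track_angle = np.arctan2(track_dir[1], track_dir[0])
    cand_angles = np.arctan2(cand_dirs[:, 1], cand_dirs[:, 0])
    diff = np.abs(cand_angles - track_angle)
    # Wrap to [0, pi] so opposite directions cost the most.
    return np.minimum(diff, 2 * np.pi - diff)

# A track moving right: the candidate ahead of it costs less
# than the candidate behind it.
costs = direction_consistency_cost(
    prev_obs=(0.0, 0.0),
    last_obs=(10.0, 0.0),
    detections=[(20.0, 0.0), (-5.0, 0.0)],
)
```

In OC-SORT this angular term is added with a small weight to the IoU association cost, so detections that continue a track's established direction are preferred over equally overlapping ones that reverse it.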
- Added `trackers download` CLI command and `download_dataset` Python API. Download benchmark datasets directly from the command line or from code. Supports MOT17 and SportsMOT with split and asset filtering. (#262)
```bash
# List available datasets
trackers download --list

# Download full dataset
trackers download mot17

# Download specific split and asset type
trackers download mot17 --split train --annotations-only

# Custom output directory
trackers download sportsmot --split val -o ./datasets
```

```python
from trackers import download_dataset, Dataset, DatasetSplit, DatasetAsset

download_dataset(
    dataset=Dataset.MOT17,
    split=[DatasetSplit.VAL],
    asset=[DatasetAsset.ANNOTATIONS, DatasetAsset.DETECTIONS],
    output_dir="./data",
)
```

| Dataset | Description | Splits | Assets | License |
|---|---|---|---|---|
| `mot17` | Pedestrian tracking with crowded scenes and frequent occlusions. | train, val, test | frames, annotations, detections | CC BY-NC-SA 3.0 |
| `sportsmot` | Sports broadcast tracking with fast motion and similar-looking targets. | train, val, test | frames, annotations | CC BY 4.0 |
MOT17_MOT17-02-DPM.mp4
SportsMOT_v_-6Os86HzwCs_c001.mp4
- Added `--track-ids` flag to `trackers track` CLI command. Filter displayed tracks by track ID to focus on specific objects in a scene. (#280)
```bash
trackers track --source video.mp4 --output output.mp4 \
    --model rfdetr-medium \
    --tracker bytetrack \
    --track-ids 1,2
```

🌱 Changed
- Made `--source` optional in `trackers track` when `--detections` is provided and no visual output is requested, enabling frameless tracking for evaluation workflows. (#322)
- Optimized `xcycsr_to_xyxy` and `xyxy_to_xcycsr` bounding box converters for the single-box hot path, reducing per-call overhead in inner tracking loops. (#296)
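For reference, `xcycsr` here presumably follows the SORT-style state parameterization: center x, center y, scale `s` (box area), and aspect ratio `r` (width over height). A simplified sketch of the round-trip for a single box (not the optimized library code, just the math the converters implement under that assumption):

```python
import math

def xyxy_to_xcycsr(x1, y1, x2, y2):
    # Center, area (scale), and aspect ratio of an axis-aligned box.
    w, h = x2 - x1, y2 - y1
    return ((x1 + x2) / 2, (y1 + y2) / 2, w * h, w / h)

def xcycsr_to_xyxy(xc, yc, s, r):
    # Invert: w = sqrt(s * r), then h = s / w.
    w = math.sqrt(s * r)
    h = s / w
    return (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

box = (10.0, 20.0, 50.0, 100.0)
assert xcycsr_to_xyxy(*xyxy_to_xcycsr(*box)) == box
```

Handling the one-box case with scalar arithmetic like this, instead of allocating array intermediates on every call, is the kind of per-call saving a single-box hot path targets.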
🛠️ Fixed
- Fixed a bug in MOT evaluation where ground-truth entries with `conf=0` (distractors) were not filtered, causing artificially low scores on MOT17. Tracker entries with `id < 0` are now also excluded. Results now match TrackEval exactly. (#322)
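In MOT ground-truth files, each row carries a confidence column where `0` marks a distractor region that should be ignored rather than scored. A sketch of the filtering described above (illustrative, not the library's code):

```python
def filter_mot_rows(gt_rows, tracker_rows):
    """gt_rows / tracker_rows: (frame, track_id, x, y, w, h, conf) tuples."""
    # Drop ground-truth distractors (conf == 0) so they neither count
    # as misses nor penalize overlapping tracker boxes.
    gt = [row for row in gt_rows if row[6] != 0]
    # Drop tracker entries with invalid ids (id < 0).
    trk = [row for row in tracker_rows if row[1] >= 0]
    return gt, trk

gt, trk = filter_mot_rows(
    gt_rows=[(1, 1, 0, 0, 10, 10, 1), (1, 2, 5, 5, 10, 10, 0)],
    tracker_rows=[(1, 7, 0, 0, 10, 10, 0.9), (1, -1, 5, 5, 10, 10, 0.8)],
)
```

Without the first filter, every distractor box counts as a missed target, which is exactly the artificial score deflation the fix removes.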
🏆 Contributors
@JVSCHANDRADITHYA (Chandradithya Janaswami), @salmanmkc (Salman Chishti), @AlexBodner (Alexander Bodner), @Borda (Jirka Borovec), @SkalskiP (Piotr Skalski)
Trackers 2.2.0
Changelog
🚀 Added
- Added camera motion compensation for stable trajectory visualization. (#263)
trackers-2.2.0-promo.mp4
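The idea behind camera motion compensation, sketched with a known per-frame transform (in practice the transform is estimated from consecutive frames; the names below are illustrative, not the library's API): warp stored track positions by the camera motion so trajectories stay anchored to the scene rather than to the moving image frame.

```python
import numpy as np

def compensate(points, affine):
    """Apply a 2x3 affine camera-motion matrix to (N, 2) track points."""
    pts = np.asarray(points, dtype=float)
    return pts @ affine[:, :2].T + affine[:, 2]

# Camera panned 5 px right: scene content appears shifted 5 px left
# in the new frame, so the compensation translates by (-5, 0).
affine = np.array([[1.0, 0.0, -5.0],
                   [0.0, 1.0,  0.0]])
trajectory = [(100.0, 50.0), (110.0, 50.0)]
compensated = compensate(trajectory, affine)
```

Applying the same warp to every stored trajectory point each frame keeps drawn trails aligned with the objects even as the camera pans.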
- Added `trackers track` CLI command. Full tracking pipeline from the command line. Point it at a video, webcam, RTSP stream, or image directory. (#242, #230, #252, #243)

```bash
trackers track --source video.mp4 --output output.mp4 \
    --model rfdetr-medium \
    --model.confidence 0.3 \
    --classes person \
    --show-labels --show-trajectories
```

- Added `trackers eval` CLI command. Evaluate tracker predictions against ground truth using standard MOT metrics. (#210, #211, #212, #214, #215, #223, #224, #226, #250)

```bash
trackers eval \
    --gt-dir data/gt \
    --tracker-dir data/trackers \
    --metrics CLEAR HOTA Identity \
    --columns MOTA HOTA IDF1 IDSW
```

```
Sequence            MOTA      HOTA      IDF1      IDSW
----------------------------------------------------------
MOT17-02-FRCNN      75.600    62.300    72.100    42
MOT17-04-FRCNN      78.200    65.100    74.800    31
----------------------------------------------------------
COMBINED            75.033    62.400    72.033    73
```
- Added Trackers Playground on Hugging Face Spaces. Interactive Gradio demo with model and tracker selection, COCO class filtering, visualization flags, and cached examples. (#249)
- Added interactive CLI command builder to the docs. Generate `trackers track` commands with interactive controls. (#242)
🏆 Contributors
@omkar-334 (Omkar Kabde), @Aaryan2304 (Aaryan Kurade), @juan-cobos (Juan Cobos Álvarez), @Borda (Jirka Borovec), @SkalskiP (Piotr Skalski)
Trackers 2.1.0
Changelog
Warning
Starting with version 2.1.0, the Trackers package drops support for Python 3.9. If your environment still relies on Python 3.9, stay on Trackers 2.0.x or upgrade your Python runtime to 3.10 or newer.
Warning
Starting with version 2.1.0, the Trackers package drops support for `DeepSORTTracker` and `ReIDModel`. We plan to bring back improved ReID support in future releases.
🚀 Added
- Added support for ByteTrack, a fast tracking-by-detection algorithm focused on stable identities under occlusion. We evaluated both the SORT and ByteTrack implementations on three standard multiple object tracking benchmarks: MOT17, SportsMOT, and SoccerNet Tracking.
| Algorithm | Trackers API | MOT17 HOTA | MOT17 IDF1 | MOT17 MOTA | SportsMOT HOTA | SoccerNet HOTA |
|---|---|---|---|---|---|---|
| SORT | `SORTTracker` | 58.4 | 69.9 | 67.2 | 70.9 | 81.6 |
| ByteTrack | `ByteTrackTracker` | 60.1 | 73.2 | 74.1 | 73.0 | 84.0 |
```python
import cv2
import supervision as sv
from rfdetr import RFDETRMedium
from trackers import ByteTrackTracker

tracker = ByteTrackTracker()
model = RFDETRMedium()

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

video_capture = cv2.VideoCapture("<SOURCE_VIDEO_PATH>")
if not video_capture.isOpened():
    raise RuntimeError("Failed to open video source")

while True:
    success, frame_bgr = video_capture.read()
    if not success:
        break

    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    detections = model.predict(frame_rgb)
    detections = tracker.update(detections)

    annotated_frame = box_annotator.annotate(frame_bgr, detections)
    annotated_frame = label_annotator.annotate(
        annotated_frame, detections, labels=[str(tracker_id) for tracker_id in detections.tracker_id]
    )

    cv2.imshow("RF-DETR + ByteTrack", annotated_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

video_capture.release()
cv2.destroyAllWindows()
```

rf-detr-1.4.0-and-trackers-2.1.0-promo.mp4
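ByteTrack's two-stage association can be sketched in a few lines: match high-confidence detections to tracks first, then give still-unmatched tracks a second chance against the low-confidence detections. The sketch below uses greedy IoU matching for brevity (the real algorithm uses Hungarian assignment), and all names are illustrative:

```python
def iou(a, b):
    # IoU of two xyxy boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def byte_associate(tracks, boxes, scores, high=0.5, iou_min=0.3):
    """Greedy two-stage association; returns {track_idx: box_idx}."""
    matches, free_tracks = {}, list(range(len(tracks)))
    # Stage 1: high-confidence detections; stage 2: the rest.
    for keep in (lambda s: s >= high, lambda s: s < high):
        candidates = [i for i, s in enumerate(scores) if keep(s)]
        for t in list(free_tracks):
            best = max(candidates, key=lambda i: iou(tracks[t], boxes[i]), default=None)
            if best is not None and iou(tracks[t], boxes[best]) >= iou_min:
                matches[t] = best
                free_tracks.remove(t)
                candidates.remove(best)
    return matches

tracks = [(0, 0, 10, 10), (100, 100, 110, 110)]
boxes = [(1, 0, 11, 10), (99, 100, 109, 110)]
scores = [0.9, 0.2]  # second detection is low-confidence
matches = byte_associate(tracks, boxes, scores)
```

The second track only matches in stage two, via the 0.2-score detection: keeping such low-confidence boxes in the second stage is what lets ByteTrack hold identities through partial occlusions, where detector scores dip.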
🏆 Contributors
@tstanczyk95 (Tomasz Stańczyk), @AlexBodner (Alexander Bodner), @Borda (Jirka Borovec), @SkalskiP (Piotr Skalski)