Real-time multi-camera multi-object tracker using YOLOv5 or YOLOv7 and StrongSORT with OSNet

Overview

StrongSORT with OSNet for YOLOv5 and YOLOv7 (Counter)


Official YOLOv5
Official YOLOv7

Implementation of the paper "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors"


Introduction

This repository contains a highly configurable two-stage tracker that adapts to different deployment scenarios. The detections generated by YOLOv5 or YOLOv7, a family of object detection architectures and models pretrained on the COCO dataset, are passed to StrongSORT, which combines motion and appearance information based on OSNet in order to track the objects. It can track any object that your YOLOv5 or YOLOv7 model was trained to detect.
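
The flow is: a YOLO model produces per-frame detections, and StrongSORT associates them across frames using motion (Kalman filter) and appearance (OSNet ReID) cues. A minimal sketch of that two-stage loop is shown below; it loads a YOLOv5 model via torch.hub for illustration, and update_tracks is a hypothetical placeholder for the StrongSORT update that track_v5.py / track_v7.py perform internally.

    # Illustrative two-stage detect-then-track loop (not the repo's exact code).
    # Assumes torch and opencv-python are installed; update_tracks is a hypothetical
    # stand-in for the StrongSORT update step done inside track_v5.py / track_v7.py.
    import cv2
    import torch

    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # COCO-pretrained detector

    def update_tracks(detections):
        """Hypothetical placeholder for StrongSORT's motion + appearance association."""
        return detections  # a real tracker returns boxes with persistent track IDs

    cap = cv2.VideoCapture('test.mp4')  # placeholder video path
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])            # stage 1: detection (BGR -> RGB)
        detections = results.xyxy[0].cpu().numpy()   # rows of (x1, y1, x2, y2, conf, cls)
        tracks = update_tracks(detections)           # stage 2: association over time
    cap.release()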

Before you run the tracker

  1. Clone the repository recursively:

git clone --recurse-submodules https://github.com/bharath5673/StrongSORT-YOLO.git

If you already cloned and forgot to use --recurse-submodules, you can run git submodule update --init

  2. Make sure that you fulfill all the requirements: Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install them, run:

pip install -r requirements.txt
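
A quick way to confirm the torch>=1.7 requirement is met and to see whether a GPU will be used (a small sanity-check sketch):

    # Sanity-check the environment before running the tracker.
    import torch

    print("torch version:", torch.__version__)           # should be >= 1.7
    print("CUDA available:", torch.cuda.is_available())  # tracking falls back to CPU if False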

Tracking sources

Tracking can be run on most video formats.
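
Sources are read frame by frame, so roughly anything OpenCV's VideoCapture can open should work: a webcam index, a video file, or a network stream. A small sketch with placeholder paths/URLs:

    # Check that a given --source value can actually be opened (placeholder values).
    import cv2

    for source in (0,                                  # local webcam (`--source 0`)
                   'test.mp4',                         # video file
                   'rtsp://user:pass@host/stream'):    # hypothetical network stream URL
        cap = cv2.VideoCapture(source)
        print(source, 'opened:', cap.isOpened())
        cap.release()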

Select object detectors and ReID model

Yolov5

There is a clear trade-off between model inference speed and accuracy. To meet your speed/accuracy needs, you can select any YOLOv5 family model for automatic download:

$ python track_v5.py --source 0 --yolo-weights weights/yolov5n.pt --img 640
                                            yolov5s.pt
                                            yolov5m.pt
                                            yolov5l.pt 
                                            yolov5x.pt --img 1280
                                            ...
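
To get a feel for the trade-off before choosing a weights file, you can time the detector on its own via torch.hub (a rough sketch; absolute numbers depend entirely on your hardware):

    # Rough single-image latency comparison across YOLOv5 variants (illustrative only).
    import time
    import torch

    img = 'https://ultralytics.com/images/zidane.jpg'  # sample image used in the YOLOv5 docs
    for name in ('yolov5n', 'yolov5s', 'yolov5m'):
        model = torch.hub.load('ultralytics/yolov5', name)  # downloads weights automatically
        t0 = time.time()
        model(img)
        print(f'{name}: {time.time() - t0:.3f}s (first call includes warm-up)')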

Yolov7

There is a clear trade-off between model inference speed and accuracy. To meet your speed/accuracy needs, you can select any YOLOv7 family model for automatic download:

$ python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --img 640
                                            yolov7.pt
                                            yolov7x.pt 
                                            yolov7-w6.pt 
                                            yolov7-e6.pt 
                                            yolov7-d6.pt 
                                            yolov7-e6e.pt
                                            ...
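
The same kind of comparison can be scripted around track_v7.py itself; the sketch below simply shells out to the command shown above with different weights and times the whole run:

    # Time short runs of track_v7.py with different YOLOv7 weights (illustrative only).
    import subprocess
    import time

    for weights in ('weights/yolov7-tiny.pt', 'weights/yolov7.pt'):
        t0 = time.time()
        subprocess.run(['python', 'track_v7.py',
                        '--source', 'test.mp4',        # replace with a short clip of your own
                        '--yolo-weights', weights,
                        '--img', '640'],
                       check=True)
        print(f'{weights}: {time.time() - t0:.1f}s total')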

StrongSORT

The above applies to StrongSORT ReID models as well. Choose a ReID model based on your needs from the ReID model zoo:

$ python track_v*.py --source 0 --strong-sort-weights osnet_x0_25_market1501.pt
                                                      osnet_x0_5_market1501.pt
                                                      osnet_x0_75_msmt17.pt
                                                      osnet_x1_0_msmt17.pt
                                                      ...
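
These weights drive the appearance branch through torchreid's FeatureExtractor (bundled under strong_sort/deep/reid). As a standalone sketch of what the ReID model does, assuming the chosen weights file and a couple of person crops are available locally:

    # Extract appearance embeddings with an OSNet ReID model (illustrative torchreid usage).
    from torchreid.utils import FeatureExtractor

    extractor = FeatureExtractor(
        model_name='osnet_x0_25',                 # must match the weights file
        model_path='osnet_x0_25_market1501.pt',   # assumed to be downloaded locally
        device='cpu',                             # or 'cuda'
    )
    features = extractor(['crop1.jpg', 'crop2.jpg'])  # placeholder person-crop images
    print(features.shape)                             # e.g. torch.Size([2, 512])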

Filter tracked classes

By default the tracker tracks all MS COCO classes.

If you only want to track persons, I recommend getting these weights for increased performance:

python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 0  # tracks persons only

If you want to track a subset of the MS COCO classes, add their corresponding indices after the --classes flag:

python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 15 16  # tracks cats and dogs only

Counter


Get real-time counts of every tracked object, without any ROIs, trajectories, or line intersections:

$ python track_v*.py --source test.mp4 --yolo-weights weights/v*.pt --save-txt --count --show-vid
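
Conceptually, the counter just accumulates the set of unique track IDs seen per class, so no ROI, trajectory, or line-crossing logic is needed. A simplified sketch of that idea (not the repo's exact implementation):

    # Simplified notion of the --count feature: unique track IDs per class.
    from collections import defaultdict

    seen = defaultdict(set)  # class name -> set of track IDs observed so far

    def update_counts(tracks):
        """tracks: iterable of (track_id, class_name) pairs produced for one frame."""
        for track_id, class_name in tracks:
            seen[class_name].add(track_id)

    update_counts([(1, 'person'), (2, 'person'), (3, 'car')])
    update_counts([(1, 'person'), (3, 'car'), (4, 'car')])
    print({cls: len(ids) for cls, ids in seen.items()})  # {'person': 2, 'car': 2}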

A YOLOv5 model trained on MS COCO can detect the standard 80 COCO object classes. Notice that the indexing for the classes in this repo starts at zero.
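
To look up the zero-indexed class IDs for the --classes flag, you can print them from any COCO-pretrained YOLOv5 model (a sketch using torch.hub):

    # Print the zero-indexed COCO class names used by the --classes flag.
    import torch

    model = torch.hub.load('ultralytics/yolov5', 'yolov5n')
    names = model.names  # list or dict of class names, depending on the YOLOv5 version
    if isinstance(names, dict):
        names = [names[i] for i in sorted(names)]
    for idx, name in enumerate(names):
        print(idx, name)  # e.g. 0 person, 15 cat, 16 dog, ...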

MOT compliant results

Results can be saved to your experiment folder runs/track/<yolo_model>_<deep_sort_model>/ by running:

python track_v*.py --source ... --save-txt
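
The saved text files are plain MOT-style rows, so they are easy to post-process. A parsing sketch under the assumption that columns follow the usual MOT order (frame, id, bb_left, bb_top, width, height, ...); verify the exact layout and file name written by your run before relying on it:

    # Read MOT-style tracking results back in (column order assumed, path hypothetical).
    from pathlib import Path

    results_file = Path('runs/track/exp/tracks.txt')  # adjust to your experiment folder
    for line in results_file.read_text().splitlines():
        fields = line.replace(',', ' ').split()
        frame, track_id = int(fields[0]), int(fields[1])
        left, top, width, height = map(float, fields[2:6])
        print(frame, track_id, left, top, width, height)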

Cite

If you find this project useful in your research, please consider citing it:

@misc{yolov5-strongsort-osnet-2022,
    title={Real-time multi-camera multi-object tracker using YOLOv5 and StrongSORT with OSNet},
    author={Mikel Broström},
    howpublished = {\url{https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet}},
    year={2022}
}

@article{wang2022yolov7,
  title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
  author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2207.02696},
  year={2022}
}

Acknowledgements

Comments
  • failure for Filter tracked classes

    I tried to track a single class by running python track_v7.py --source ex1_video_1.mp4 --yolo-weights weights/yolov7x.pt --classes 32


    I found that the output of the video still contains multiple classes. Thank you for your help.

    opened by lpc-eol 4
  • can't play saved video when `--save-vid` is used together with `--count`

    @bharath5673, I use the following command to save a demo video: python track_v7.py --source demo/video.mp4 --yolo-weights weights/best.pt --save-txt --count --save-vid --draw. It does save a video file of non-zero size, but media players (VLC, PotPlayer, Windows Media Player, etc.) can't play it. It has no length information either.

    However, when I use the above command without --count, it works as expected: I can play the saved video, and its size is slightly bigger.

    Could you check if this happens on your end?

    It'd also be great if you could let me know the exact versions of opencv-python and related packages. Thanks.

    opened by bit-scientist 3
  • Object Path Trajectory

    Hi,

    Can someone help me with how to update the code in track.py (both for v5 and v7) to show each moving object's path trajectory, as shown in the video link below, rather than just assigning a unique number to each object?

    https://www.youtube.com/watch?v=5sya7sl9wWc

    Thank you

    opened by MAli-Farooq 3
  • KeyError: 'assets' issues

    Traceback (most recent call last):
      File "/home/student/cql/t/yolov7/utils/google_utils.py", line 26, in attempt_download
        assets = [x['name'] for x in response['assets']]  # release assets
    KeyError: 'assets'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/student/cql/t/track_v7.py", line 382, in <module>
        detect()
      File "/home/student/cql/t/track_v7.py", line 90, in detect
        model = attempt_load(weights, map_location=device)  # load FP32 model
      File "/home/student/cql/t/yolov7/models/experimental.py", line 87, in attempt_load
        attempt_download(w)
      File "/home/student/cql/t/yolov7/utils/google_utils.py", line 30, in attempt_download
        tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
    IndexError: list index out of range

    Process finished with exit code 1

    Thank you for your help.

    opened by lpc-eol 2
  • vehicle_side in annotation file

    First of all, thank you for the work you have done, it's quite helpful.

    In track_v5.py you defined vehicle_side as 0 for left and 1 for right side of the vehicle, right? I have a couple of questions regarding this.

    1. What does vehicle_side imply here? I could see that 0 (left side) is assigned if the bbox_right is smaller than the half of frame_width. I don't think it applies to all cases. In my demo video, a vehicle with id1 is coming from top-left towards bottom right at an intersection. The txt file for this vehicle assigned 0 (left side), but the visible vehicle_side is right. Can you explain it a little more?
    2. Why could one need to know the vehicle side in general?

    Thank you.

    opened by bit-scientist 1
  • Unknown model

    When running the command python track_v7.py --source test_inputs\chase.mp4 --yolo-weights weights/yolov7-tiny.pt --img 640 I get the below error:

    YOLOR v0.1-3-g9ee1835 torch 1.12.1+cpu CPU

    Fusing layers...
    Model Summary: 200 layers, 6219709 parameters, 229245 gradients
    Traceback (most recent call last):
      File "yolov5_yolov7_tracking\track_v7.py", line 378, in <module>
        detect()
      File "yolov5_yolov7_tracking\track_v7.py", line 124, in detect
        StrongSORT(
      File "yolov5_yolov7_tracking\strong_sort\strong_sort.py", line 40, in __init__
        self.extractor = FeatureExtractor(
      File "yolov5_yolov7_tracking\strong_sort/deep/reid\torchreid\utils\feature_extractor.py", line 71, in __init__
        model = build_model(
      File "yolov5_yolov7_tracking\strong_sort/deep/reid\torchreid\models\__init__.py", line 114, in build_model
        raise KeyError(
    KeyError: "Unknown model: None. Must be one of ['resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'resnet50_fc512', 'se_resnet50', 'se_resnet50_fc512', 'se_resnet101', 'se_resnext50_32x4d', 'se_resnext101_32x4d', 'densenet121', 'densenet169', 'densenet201', 'densenet161', 'densenet121_fc512', 'inceptionresnetv2', 'inceptionv4', 'xception', 'resnet50_ibn_a', 'resnet50_ibn_b', 'nasnsetmobile', 'mobilenetv2_x1_0', 'mobilenetv2_x1_4', 'shufflenet', 'squeezenet1_0', 'squeezenet1_0_fc512', 'squeezenet1_1', 'shufflenet_v2_x0_5', 'shufflenet_v2_x1_0', 'shufflenet_v2_x1_5', 'shufflenet_v2_x2_0', 'mudeep', 'resnet50mid', 'hacnn', 'pcb_p6', 'pcb_p4', 'mlfn', 'osnet_x1_0', 'osnet_x0_75', 'osnet_x0_5', 'osnet_x0_25', 'osnet_ibn_x1_0', 'osnet_ain_x1_0', 'osnet_ain_x0_75', 'osnet_ain_x0_5', 'osnet_ain_x0_25']"

    opened by marioRaouf 1
  • IndexError: list index out of range

    Running yolov5 works with no issue; however, running yolov7 gives the following error:

    $ python3 track_v7.py --yolo-weights yolov7.pt --strong-sort-weights osnet_x0_25_msmt17.pt --source ~/TownCenter/TownCenter.mp4 --save-vid --show-vid --device 0
    Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, config_strongsort='strong_sort/configs/strong_sort.yaml', count=False, device='0', draw=False, exist_ok=False, exp_name='exp', hide_class=False, hide_conf=False, hide_labels=False, img_size=640, iou_thres=0.45, line_thickness=1, nosave=True, project='runs/track', save_conf=False, save_img=False, save_txt=False, save_vid=True, show_vid=True, source='/home/dl/TownCenter/TownCenter.mp4', strong_sort_weights='osnet_x0_25_msmt17.pt', trace=False, update=False, yolo_weights=['yolov7.pt'])
    YOLOR 🚀 v0.1-115-g072f76c torch 1.13.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3090, 24265.3125MB)
    
    Traceback (most recent call last):
      File "/home/****/yolov7/StrongSORT-YOLO/yolov7/utils/google_utils.py", line 26, in attempt_download
        assets = [x['name'] for x in response['assets']]  # release assets
    KeyError: 'assets'
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "track_v7.py", line 387, in <module>
        detect()
      File "track_v7.py", line 90, in detect
        model = attempt_load(weights, map_location=device)  # load FP32 model
      File "/home/****/yolov7/StrongSORT-YOLO/yolov7/models/experimental.py", line 251, in attempt_load
        attempt_download(w)
      File "/home/****/yolov7/StrongSORT-YOLO/yolov7/utils/google_utils.py", line 31, in attempt_download
        tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
    IndexError: list index out of range
    
    opened by Ishihara-Masabumi 0
  • StrongSORT Features

    Hi, I have tested this code with yolov7 to track passengers in moving cars, but I'm getting low tracking accuracy. Have you implemented the AFLink and GSI features of StrongSORT?

    opened by irojy 0
  • IndexError: list assignment index out of range

    Running over yolov5 works with no issue; however, running over yolov7 gives an error:

    (strongsort) C:\Users\Lenovo>cd \projects\strongsort

    (strongsort) C:\projects\strongsort>python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --classes 0
    Namespace(yolo_weights=['weights/yolov7-tiny.pt'], strong_sort_weights=WindowsPath('C:/projects/strongsort/weights/osnet_x0_25_msmt17.pt'), config_strongsort='strong_sort/configs/strong_sort.yaml', source='0', img_size=640, conf_thres=0.25, iou_thres=0.45, device='', show_vid=True, save_txt=False, save_img=False, save_conf=False, nosave=True, save_vid=False, classes=[0], agnostic_nms=False, augment=False, update=False, project='runs/track', exp_name='exp', exist_ok=False, trace=False, line_thickness=1, hide_labels=False, hide_conf=False, hide_class=False, count=False, draw=False)
    YOLOR 2022-11-21 torch 1.13.0+cpu CPU

    Fusing layers...
    Model Summary: 200 layers, 6219709 parameters, 229245 gradients
    1/1: 0... success (640x480 at 30.00 FPS).

    Traceback (most recent call last):
      File "C:\projects\strongsort\track_v7.py", line 386, in <module>
        detect()
      File "C:\projects\strongsort\track_v7.py", line 189, in detect
        curr_frames[i] = im0
    IndexError: list assignment index out of range

    (strongsort) C:\projects\strongsort>

    Tried a few quick solutions but no luck so far...

    opened by mamounjamous 0
  • IndexError

    I encountered this error when I used the camera.

    Traceback (most recent call last):
      File "track_v7.py", line 386, in <module>
        detect()
      File "track_v7.py", line 189, in detect
        curr_frames[i] = im0
    IndexError: list assignment index out of range

    opened by ThrowYouARem 1
  • SyntaxError

    Hello, when I try to run python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --img 640

    I get the following Error:

    File "track_v7.py", line 209 s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string ^ SyntaxError: invalid syntax

    opened by P-Pan089 1