YOLOv3、YOLOv4、YOLOv5、YOLOv5-Lite、YOLOv6、YOLOv7、YOLOX、YOLOX-Lite、TensorRT、NCNN、Tengine、OpenVINO

Overview

YOLOU: United, Study and easier to Deploy

We created YOLOU to make the algorithms of the YOLO series easier to study, and to pay tribute to our predecessors.

Here, "U" stands for United: the project gathers the main YOLO-series algorithms in one place so that newcomers can learn object detection more easily. To help put AI technology into practice, YOLOU also integrates the corresponding deployment technology, accelerating the path from the algorithms we study to real-world value.

YOLOU

At present, the YOLO series algorithms mainly included in YOLOU are:

Anchor-based: YOLOv3, YOLOv4, YOLOv5, YOLOv5-Lite, YOLOv7

Anchor-free: YOLOv6, YOLOX, YOLOX-Lite

Comparison of benchmark results

| Model | Size (pixels) | mAP@0.5 | mAP@0.5:0.95 | Params (M) | GFLOPs | TensorRT FP32 (b16) ms/fps | TensorRT FP16 (b16) ms/fps |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv5n | 640 | 45.7 | 28.0 | 1.9 | 4.5 | 0.95/1054.64 | 0.61/1631.64 |
| YOLOv5s | 640 | 56.8 | 37.4 | 7.2 | 16.5 | 1.7/586.8 | 0.84/1186.42 |
| YOLOv5m | 640 | 64.1 | 45.4 | 21.2 | 49.0 | 4.03/248.12 | 1.42/704.20 |
| YOLOv5l | 640 | 67.3 | 49.0 | 46.5 | 109.1 | | |
| YOLOv5x | 640 | 68.9 | 50.7 | 86.7 | 205.7 | | |
| YOLOv6-T | 640 | | | | | | |
| YOLOv6-n | 640 | | | | | | |
| YOLOv6 | 640 | 60.0 | 41.3 | 20.4 | 28.8 | 3.06/326.93 | 1.27/789.51 |
| YOLOv7 | 640 | 69.7 | 51.4 | 37.6 | 53.1 | 8.18/113.88 | 1.97/507.55 |
| YOLOv7-X | 640 | 71.2 | 53.7 | 71.3 | 95.1 | | |
| YOLOv7-W6 | 640 | 72.6 | 54.9 | | | | |
| YOLOv7-E6 | 640 | 73.5 | 56.0 | | | | |
| YOLOv7-D6 | 640 | 74.0 | 56.6 | | | | |
| YOLOv7-E6E | 640 | 74.4 | 56.8 | | | | |
| YOLOX-s | 640 | 59.0 | 39.2 | 8.1 | 10.8 | 2.11/473.78 | 0.89/1127.67 |
| YOLOX-m | 640 | 63.8 | 44.5 | 23.3 | 31.2 | 4.94/202.43 | 1.58/632.48 |
| YOLOX-l | 640 | | | 54.1 | 77.7 | | |
| YOLOX-x | 640 | | | 104.5 | 156.2 | | |
| v5-Lite-e | 320 | 35.1 | | 0.78 | 0.73 | 0.55/1816.10 | 0.49/2048.47 |
| v5-Lite-s | 416 | 42.0 | 25.2 | 1.64 | 1.66 | 0.72/1384.76 | 0.64/1567.36 |
| v5-Lite-c | 512 | 50.9 | 32.5 | 4.57 | 5.92 | 1.18/850.03 | 0.80/1244.20 |
| v5-Lite-g | 640 | 57.6 | 39.1 | 5.39 | 15.6 | 1.85/540.90 | 1.09/916.69 |
| X-Lite-e | 320 | 36.4 | 21.2 | 2.53 | 1.58 | 0.65/1547.58 | 0.46/2156.38 |
| X-Lite-s | 416 | Training… | Training… | 3.36 | 2.90 | | |
| X-Lite-c | 512 | Training… | Training… | 6.25 | 5.92 | | |
| X-Lite-g | 640 | 58.3 | 40.7 | 7.30 | 12.91 | 2.15/465.19 | 1.01/990.69 |

All pretrained YOLOU weights can be downloaded from Baidu Drive (code: YOLO).

How to use

Install

```shell
git clone https://github.com/jizhishutong/YOLOU
cd YOLOU
pip install -r requirements.txt
```

Training

```shell
python train.py --mode yolov6 --data coco.yaml --cfg yolov6.yaml --weights yolov6.pt --batch-size 32
```
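The `--data` flag points at a YAML file describing the dataset. A minimal sketch of such a file in the YOLOv5-style format this repo follows (the paths, class count, and class names below are illustrative, not taken from the repo):

```yaml
# Illustrative dataset config, consumed by train.py via --data.
train: ../custom/images/train/   # directory of training images
val: ../custom/images/val/       # directory of validation images

nc: 2                   # number of classes
names: ['cat', 'dog']   # class names, index-aligned with the IDs in the label files
```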

Detect

```shell
python detect.py --source 0  # webcam
                          file.jpg  # image
                          file.mp4  # video
                          path/  # directory
                          path/*.jpg  # glob
                          'https://youtu.be/NUsoVlDFqZg'  # YouTube
                          'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```

DataSet

```yaml
train: ../coco/images/train2017/
val: ../coco/images/val2017/
```

```
├── images             # xx.jpg example
│   ├── train2017
│   │   ├── 000001.jpg
│   │   ├── 000002.jpg
│   │   └── 000003.jpg
│   └── val2017
│       ├── 100001.jpg
│       ├── 100002.jpg
│       └── 100003.jpg
└── labels             # xx.txt example
    ├── train2017
    │   ├── 000001.txt
    │   ├── 000002.txt
    │   └── 000003.txt
    └── val2017
        ├── 100001.txt
        ├── 100002.txt
        └── 100003.txt
```
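Each `labels/*.txt` file uses the standard YOLO annotation format: one object per line as `class_id x_center y_center width height`, with all coordinates normalized to [0, 1]. A minimal sketch of converting one such line back to pixel coordinates (the example line and image size are illustrative):

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert one YOLO label line to (class_id, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    # Scale normalized center/size up to the image dimensions.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Center/size -> corner coordinates.
    return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

# A box centered in a 640x640 image, covering half of each dimension:
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5", 640, 640))  # (0, 160.0, 160.0, 480.0, 480.0)
```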

Export ONNX

```shell
python export.py --weights ./weights/yolov6/yolov6s.pt
```

To make deployment easier, every model included in YOLOU has been lightly adapted so that one set of pre- and post-processing code serves them all: the ONNX files they export share the same input format and output layout.

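Because the exported models share one output layout, a single post-processing routine can serve them all. A hedged sketch in NumPy, assuming the common YOLOv5-style row layout `[cx, cy, w, h, objectness, class scores...]` (the layout, function name, and threshold are illustrative, not taken from the repo):

```python
import numpy as np

def decode_predictions(pred: np.ndarray, conf_thres: float = 0.25):
    """Filter raw detections of shape (N, 5 + num_classes) and return
    (boxes_xyxy, scores, class_ids) above the confidence threshold."""
    scores = pred[:, 4:5] * pred[:, 5:]      # objectness * class probability
    class_ids = scores.argmax(axis=1)        # best class per detection
    confidences = scores.max(axis=1)
    keep = confidences > conf_thres          # drop low-confidence rows
    cx, cy, w, h = pred[keep, 0], pred[keep, 1], pred[keep, 2], pred[keep, 3]
    # Center/size -> corner coordinates (x1, y1, x2, y2).
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    return boxes, confidences[keep], class_ids[keep]

# Two candidate detections for a 2-class model; only the first passes the threshold.
raw = np.array([[100, 100, 40, 40, 0.9, 0.8, 0.1],
                [200, 200, 20, 20, 0.1, 0.5, 0.5]], dtype=np.float32)
boxes, scores, ids = decode_predictions(raw)
print(boxes)   # [[ 80.  80. 120. 120.]]
print(ids)     # [0]
```

In a real pipeline this step would run on the output of an ONNX Runtime or TensorRT session, followed by non-maximum suppression.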
Detection result comparisons (images in the original README) between each original model and its YOLOU implementation: YOLOv5, YOLOv6, YOLOv7, YOLOX, YOLOv5-Lite, YOLOX-Lite.

Reference

https://github.com/ultralytics/yolov5

https://github.com/WongKinYiu/yolor

https://github.com/ppogg/YOLOv5-Lite

https://github.com/WongKinYiu/yolov7

https://github.com/meituan/YOLOv6

https://github.com/ultralytics/yolov3

https://github.com/Megvii-BaseDetection/YOLOX

https://github.com/WongKinYiu/ScaledYOLOv4

https://github.com/WongKinYiu/PyTorch_YOLOv4

https://github.com/shouxieai/tensorRT_Pro

https://github.com/Tencent/ncnn

https://github.com/Gumpest/YOLOv5-Multibackbone-Compression

https://github.com/positive666/yolov5_research

https://github.com/cmdbug/YOLOv5_NCNN

https://github.com/OAID/Tengine

Citing YOLOU

If you use YOLOU in your research, please cite our work and give it a star:

```
@misc{yolou2022,
  title={YOLOU: United, Study and easier to Deploy},
  author={ChaucerG},
  year={2022}
}
```