# TensorRT_Inference_Demo

<div align="center">

[CUDA](https://developer.nvidia.com/cuda-toolkit-archive)
[TensorRT 8.x](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
[Ubuntu 20.04](https://releases.ubuntu.com/20.04/)
</div>

## 1. Introduction
This repo uses TensorRT 8.x to deploy well-trained models.

## 2. Update

- [x] [YOLOv5](https://github.com/ultralytics/yolov5)
- [x] [YOLOv5-seg](https://github.com/ultralytics/yolov5)
- [x] [YOLOv7](https://github.com/WongKinYiu/yolov7)
- [x] [YOLOv8](https://github.com/ultralytics/ultralytics)
- [ ] [YOLOv8-seg](https://github.com/ultralytics/ultralytics)

## 3. Supported Models

| Models | Device | Batch Size | Mode | Input Shape (HxW) | FPS |
|:-|:-:|:-:|:-:|:-:|:-:|
| YOLOv5-n v7.0 | RTX3090 | 1 | FP32 | 640x640 | 264 |
| YOLOv5-s v7.0 | RTX3090 | 1 | FP32 | 640x640 | 210 |
| YOLOv5-s v7.0 | RTX3090 | 32 | FP32 | 640x640 | - |
| YOLOv5-m v7.0 | RTX3090 | 1 | FP32 | 640x640 | 140 |
| YOLOv5-l v7.0 | RTX3090 | 1 | FP32 | 640x640 | 105 |
| YOLOv5-x v7.0 | RTX3090 | 1 | FP32 | 640x640 | 75 |
| YOLOv7 | RTX3090 | 1 | FP32 | 640x640 | 115 |
| YOLOv7x | RTX3090 | 1 | FP32 | 640x640 | - |
| YOLOv8-n | RTX3090 | 1 | FP32 | 640x640 | 222 |
| YOLOv8-s | RTX3090 | 1 | FP32 | 640x640 | 171 |
| YOLOv8-m | RTX3090 | 1 | FP32 | 640x640 | 122 |
| YOLOv8-l | RTX3090 | 1 | FP32 | 640x640 | 88 |
| YOLOv8-x | RTX3090 | 1 | FP32 | 640x640 | 68 |
| RT-DETR | RTX3090 | 1 | FP32 | 640x640 | - |
| SOLO(r50) | RTX3090 | 1 | FP32 | 480x640 | - |
| SOLOv2(r50) | RTX3090 | 1 | INT8 | 480x640 | - |

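The FPS figures above map directly to per-frame latency via `latency_ms = 1000 / FPS`. A minimal sketch of the conversion, if you want to compare models by latency instead of throughput (the helper name below is ours, not from this repo):

```python
def fps_to_latency_ms(fps: float) -> float:
    """Convert throughput (frames per second) to per-frame latency in milliseconds."""
    return 1000.0 / fps

# YOLOv5-n at 264 FPS (from the table above) is roughly 3.8 ms per frame.
print(round(fps_to_latency_ms(264), 2))
```

Note these are end-to-end demo numbers on an RTX 3090; your latency will vary with GPU, batch size, and precision mode.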
## 4. Install
1. Clone the repo.
```shell
git clone https://github.com/Li-Hongda/TensorRT_Inference_Demo.git
cd TensorRT_Inference_Demo/object_detection
```
2. Change the path [here]() to your TensorRT path and [here]() to your CUDA path. Then build:
```shell
mkdir build && cd build
cmake ..
make -j$(nproc)
```
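The path edits in step 2 normally live in the project's `CMakeLists.txt`. A minimal sketch of what such an edit might look like; the variable names and install locations below are assumptions for illustration, so match them against the repo's actual `CMakeLists.txt`:

```cmake
# Hypothetical sketch: point the build at local TensorRT and CUDA installs.
# Replace both paths with your own install locations.
set(TENSORRT_ROOT /opt/TensorRT-8.4.1.5)          # assumed TensorRT layout
set(CUDA_TOOLKIT_ROOT_DIR /usr/local/cuda-11.3)   # assumed CUDA layout

include_directories(${TENSORRT_ROOT}/include ${CUDA_TOOLKIT_ROOT_DIR}/include)
link_directories(${TENSORRT_ROOT}/lib ${CUDA_TOOLKIT_ROOT_DIR}/lib64)
```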
3. If compilation succeeds, the executable will be generated in `bin` under the repo directory. Then run it with a command like:
```shell
cd bin
./object_detection yolov5 /path/to/input/dir false
```