All speed tests were performed on an RTX 3090 with the COCO val set. The time measured here is the sum of image preprocessing, inference, and postprocessing time; since image loading and visualization are not counted, the actual speed will be a little slower.

FPS* means that the time of image loading, image processing, and visualization is taken into account; FPS counts only image processing time (preprocess, inference, postprocess).
2. Change the path [here](https://github.com/Li-Hongda/TensorRT_Inference_Demo/blob/main/object_detection/CMakeLists.txt#L19) to your TensorRT path, and [here](https://github.com/Li-Hongda/TensorRT_Inference_Demo/blob/main/object_detection/CMakeLists.txt#L11) to your CUDA path. Then,
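The edit amounts to pointing two variables in `object_detection/CMakeLists.txt` at your local installs. The exact variable names below are an assumption for illustration; check the linked lines for the names the project actually uses.

```cmake
# Hypothetical sketch -- verify against the linked CMakeLists.txt lines.
set(CUDA_TOOLKIT_ROOT_DIR /usr/local/cuda)   # your CUDA path
set(TENSORRT_ROOT /opt/TensorRT-8.6.1.6)     # your TensorRT path
```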
```
mkdir build && cd build
cmake ..
make -j$(nproc)
```
3. If compilation succeeds, the executable file will be generated in `bin` in the repo directory. Then enjoy yourself with a command like this: