# README.md
You can install the library via pip:

```bash
pip install patched_yolo_infer
```
[PyPI](https://pypi.org/project/patched-yolo-infer/) - Click here to visit the PyPI page of `patched-yolo-infer`.
Note: If CUDA support is available, it's recommended to pre-install PyTorch with CUDA support before installing the library. Otherwise, the CPU version will be installed by default.
```python
import cv2
from patched_yolo_infer import MakeCropsDetectThem, CombineDetections

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

element_crops = MakeCropsDetectThem(
    # ... (crop and model parameters)
)
```
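To make the cropping behaviour concrete, here is a stand-alone sketch of how a grid of overlapping patches can be computed. This is an illustrative reimplementation, not the library's internals — the helper name `compute_crop_boxes` and the exact stepping formula are assumptions; in the library the overlap parameters are given as percentages of the crop size:

```python
def compute_crop_boxes(img_w, img_h, shape_x, shape_y, overlap_x, overlap_y):
    """Return (x1, y1, x2, y2) crop boxes covering the image.

    overlap_x / overlap_y are overlaps in percent of the crop size.
    Illustrative sketch only, not the library's own code.
    """
    step_x = max(1, int(shape_x * (1 - overlap_x / 100)))
    step_y = max(1, int(shape_y * (1 - overlap_y / 100)))
    boxes = []
    y = 0
    while True:
        x = 0
        while True:
            # Clamp each crop to the image borders.
            boxes.append((x, y, min(x + shape_x, img_w), min(y + shape_y, img_h)))
            if x + shape_x >= img_w:
                break
            x += step_x
        if y + shape_y >= img_h:
            break
        y += step_y
    return boxes

# A Full-HD frame with 640x640 crops and 25% overlap yields a 4x2 grid:
boxes = compute_crop_boxes(1920, 1080, shape_x=640, shape_y=640, overlap_x=25, overlap_y=25)
print(len(boxes))  # → 8
```

Each crop is then passed through the detector independently, which is why the overlap matters: objects cut by one patch border should fall entirely inside a neighbouring patch.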
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| nms_threshold | float | 0.3 | IoU/IoS threshold for non-maximum suppression. The lower the value, the fewer objects remain after suppression. |
| match_metric | str | IOS | Matching metric, either 'IOU' or 'IOS'. |
| class_agnostic_nms | bool | True | Determines the NMS mode in object detection. When set to True, NMS operates across all classes, ignoring class distinctions and suppressing less confident bounding boxes globally. Otherwise, NMS is applied separately for each class. |
| intelligent_sorter | bool | True | Enable sorting by area and rounded confidence. If False, sorting is done only by confidence (standard NMS). |
| sorter_bins | int | 5 | Number of bins to use for intelligent_sorter. A smaller number of bins makes the NMS more reliant on object sizes rather than confidence scores. |
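The difference between the two `match_metric` options can be seen with a short self-contained sketch (the helper functions below are illustrative, not part of the library): IoU normalizes the intersection by the union of the two boxes, whereas IoS normalizes by the area of the smaller box, so a small box fully nested inside a larger one scores 1.0 and is reliably suppressed:

```python
def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def intersection(a, b):
    # a, b are (x1, y1, x2, y2) boxes
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def iou(a, b):
    # Intersection over Union.
    inter = intersection(a, b)
    return inter / (area(a) + area(b) - inter)

def ios(a, b):
    # Intersection over Smaller-area box.
    return intersection(a, b) / min(area(a), area(b))

big = (0, 0, 100, 100)
small = (10, 10, 50, 50)  # fully inside `big`
print(iou(big, small))  # → 0.16
print(ios(big, small))  # → 1.0
```

This is why IoS is the default here: patch-based inference tends to produce partial duplicates of one object at patch borders, and IoS catches a small fragment overlapping a full detection even when their IoU is low.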
4. **Enhancing Detection Within Patches**: To detect more objects within a single crop, increase the `imgsz` parameter and lower the confidence threshold (`conf`). All parameters available for configuring Ultralytics model inference are also accessible during the initialization of the `MakeCropsDetectThem` element.
5. **Handling Duplicate Suppression Issues**: If you encounter issues with duplicate suppression from overlapping patches, consider adjusting the `nms_threshold` and `sorter_bins` parameters in `CombineDetections`, or modifying the overlap and size parameters of the patches themselves. (Often lowering `sorter_bins` to 4 or 2 helps.)
6. **Handling Multi-Class Detection Issues**: If you are working on a multi-class detection or instance segmentation task, it may be beneficial to switch to `class_agnostic_nms=False` in the `CombineDetections` parameters. The default mode, with `class_agnostic_nms` set to True, is particularly effective when handling many closely related classes in pre-trained YOLO networks (for example, when classes like `car` and `truck` are often confused). If, in your scenario, an object of one class can physically be inside an object of another class, you should definitely set `class_agnostic_nms=False`.
7. **High-Quality Instance Segmentation**: For tasks requiring high-quality results in instance segmentation, detailed guidance is provided in the next section of the README.
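The effect of `class_agnostic_nms` can be demonstrated with a simplified greedy NMS in plain Python (an illustrative sketch, not the library's implementation): in class-agnostic mode all detections compete with each other, while in per-class mode a box can only suppress boxes of its own class.

```python
def nms(dets, threshold=0.5, class_agnostic=True):
    """Greedy NMS over (box, score, class_id) tuples. Illustrative sketch only."""
    def iou(a, b):
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        inter = max(0, w) * max(0, h)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    kept = []
    for box, score, cls in sorted(dets, key=lambda d: d[1], reverse=True):
        # In per-class mode, a box is only suppressed by a kept box of the same class.
        if all(iou(box, kbox) < threshold or (not class_agnostic and cls != kcls)
               for kbox, _, kcls in kept):
            kept.append((box, score, cls))
    return kept

# Two nearly identical boxes labelled 'car' and 'truck' (a common confusion):
dets = [((0, 0, 100, 100), 0.9, "car"), ((2, 2, 100, 100), 0.8, "truck")]
print(len(nms(dets, class_agnostic=True)))   # → 1 (the 'truck' box is suppressed)
print(len(nms(dets, class_agnostic=False)))  # → 2 (classes compete separately)
```

This mirrors tip 6 above: class-agnostic suppression removes cross-class duplicates of the same physical object, but it must be turned off when objects of different classes legitimately occupy the same region.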
---
## __How to improve the quality of the algorithm for the task of instance segmentation:__
An example of working with this mode is presented in a Google Colab notebook - [![Open In Colab][colab_badge]][colab_ex1_memory_optimize]
---
## __How to automatically determine optimal parameters for patches (crops):__
To efficiently process a large number of images of varying sizes and contents, manually selecting the optimal patch sizes and overlaps can be difficult. To address this, an algorithm has been developed to automatically calculate the best parameters for patches (crops).
The `auto_calculate_crop_values` function operates in two modes:
1. **Resolution-Based Analysis**: This mode evaluates the resolution of the source images to determine the optimal patch sizes and overlaps. It is faster but may not yield the highest quality results because it does not take into account the actual objects present in the images.
2. **Neural Network-Based Analysis**: This advanced mode employs a neural network to analyze the images. The algorithm performs a standard inference of the network on the entire image and identifies the largest detected objects. Based on the sizes of these objects, it selects patch parameters so that the largest objects are fully contained within a patch, while overlapping patches ensure comprehensive coverage. In this mode, you must pass in the model that will be used for patch-based inference in the subsequent steps.
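The idea behind the network-based mode can be sketched as follows. Everything below is an illustrative heuristic under stated assumptions — the helper `patch_params_from_objects` and the `margin` factor are not part of the library: take the largest box from a full-image inference pass, choose a patch large enough to contain it with a margin, and make the overlap at least as large as that object so it always fits entirely inside some patch.

```python
def patch_params_from_objects(img_w, img_h, boxes, margin=2.0):
    """Pick square patch size and overlap from full-image detections.

    boxes: (x1, y1, x2, y2) detections from a full-image inference pass.
    margin: how many times larger than the biggest object a patch should be.
    Illustrative heuristic only; the library's selection logic may differ.
    """
    # Largest side of any detected object.
    largest = max(max(x2 - x1, y2 - y1) for x1, y1, x2, y2 in boxes)
    # Patch side: a multiple of the largest object, clamped to the image size.
    shape = min(int(largest * margin), img_w, img_h)
    # Overlap (in percent) at least as wide as the largest object,
    # so every object fits whole into at least one patch.
    overlap = min(90, int(100 * largest / shape))
    return shape, shape, overlap, overlap

shape_x, shape_y, overlap_x, overlap_y = patch_params_from_objects(
    1920, 1080, [(0, 0, 120, 80), (500, 500, 700, 640)]
)
print(shape_x, overlap_x)  # → 400 50
```

The real function additionally runs the model itself and exposes the arguments listed below; this sketch only shows why object sizes, not just image resolution, drive the choice.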
Possible arguments of the `auto_calculate_crop_values` function:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| image | np.ndarray | | The input image in BGR format. |
| mode | str | "network_based" | The type of analysis to perform. Either "resolution_based" for Resolution-Based Analysis or "network_based" for Neural Network-Based Analysis. |
| model | ultralytics model | YOLO("yolov8m.pt") | Pre-initialized model object for "network_based" mode. If not provided, the default YOLOv8m model will be used. |
| classes_list | list | None | A list of class indices to consider for object detection in "network_based" mode. If None, all classes will be considered. |
| conf | float | 0.25 | The confidence threshold for detection in "network_based" mode. |
Usage example:

```python
import cv2
from ultralytics import YOLO
from patched_yolo_infer import auto_calculate_crop_values

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

# Calculate the optimal crop size and overlap for an image
shape_x, shape_y, overlap_x, overlap_y = auto_calculate_crop_values(
    image=img, mode="network_based", model=YOLO("yolov8m.pt")
)
```
An example of working with `auto_calculate_crop_values` is presented in a Google Colab notebook - [![Open In Colab][colab_badge]][colab_ex1_auto_calculate_crop_values]
---

# patched_yolo_infer/README.md
Interactive notebooks are provided to showcase the functionality of the library.
__Check these Colab examples:__
Patch-Based-Inference Example - [**Open in Colab**](https://colab.research.google.com/drive/1XCpIYLMFEmGSO0XCOkSD7CcD9SFHSJPA?usp=sharing)
Example of using various functions for visualizing basic YOLOv8/v9 inference results - [**Open in Colab**](https://colab.research.google.com/drive/1eM4o1e0AUQrS1mLDpcgK9HKInWEvnaMn?usp=sharing)
## Usage
```python
import cv2
from patched_yolo_infer import MakeCropsDetectThem, CombineDetections

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

element_crops = MakeCropsDetectThem(
    # ... (crop and model parameters)
)
```
Class implementing combining masks/boxes from multiple crops + NMS (Non-Maximum Suppression):

- **nms_threshold** (*float*): IoU/IoS threshold for non-maximum suppression.
- **match_metric** (*str*): Matching metric, either 'IOU' or 'IOS'.
- **class_agnostic_nms** (*bool*): Determines the NMS mode in object detection. When set to True, NMS operates across all classes, ignoring class distinctions and suppressing less confident bounding boxes globally. Otherwise, NMS is applied separately for each class. (Default is True)
- **intelligent_sorter** (*bool*): Enable sorting by area and rounded confidence parameter. If False, sorting will be done only by confidence (standard NMS). (Default is True)
- **sorter_bins** (*int*): Number of bins to use for intelligent_sorter. A smaller number of bins makes the NMS more reliant on object sizes rather than confidence scores. (Defaults to 5)
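The `intelligent_sorter` idea can be sketched in plain Python (an illustrative reimplementation; the exact binning the library uses may differ): confidences are quantized into `sorter_bins` bins, and ties within a bin are broken by box area, so with fewer bins the ordering relies more on object sizes.

```python
def intelligent_sort(dets, sorter_bins=5):
    """Order (confidence, area) detections for NMS. Illustrative sketch only.

    Confidence is quantized into sorter_bins bins; within a bin, larger
    boxes come first, so fewer bins means area matters more.
    """
    return sorted(dets, key=lambda d: (round(d[0] * sorter_bins), d[1]), reverse=True)

dets = [(0.95, 100), (0.91, 5000), (0.40, 9000)]
print(intelligent_sort(dets))
# The (0.91, 5000) box outranks the slightly more confident (0.95, 100) box:
# both fall into the same confidence bin, so area breaks the tie.
```

This is why lowering `sorter_bins` helps against duplicates from overlapping patches: the full detection of an object (large area) wins over a partial, border-clipped duplicate even when the fragment's confidence is marginally higher.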
---
## __How to improve the quality of the algorithm for the task of instance segmentation:__
In this approach, all operations under the hood are performed on binary masks of the recognized objects. Storing these masks consumes a lot of memory, so this method requires more RAM and slightly more processing time. However, recognition accuracy improves significantly, which is especially noticeable when there are many densely packed objects of different sizes. We therefore recommend this approach in production when accuracy matters more than speed, and when your computational resources allow storing hundreds of binary masks in RAM.
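A rough memory estimate helps to judge whether this mode fits your hardware. Assuming each binary mask is stored at full frame resolution with one byte per pixel (as uint8 masks typically are in NumPy/OpenCV), the cost is simply height × width × number of masks:

```python
def masks_memory_mb(height, width, n_masks, bytes_per_pixel=1):
    """Approximate RAM needed to hold n_masks full-frame binary masks."""
    return height * width * n_masks * bytes_per_pixel / 1024 ** 2

# About 593 MB for 300 masks of a 1920x1080 frame:
print(round(masks_memory_mb(1080, 1920, 300)))  # → 593
```

So a scene with a few hundred detected instances at Full HD already approaches a gigabyte of mask storage, which is the trade-off this section describes.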
```python
boxes = result.filtered_boxes
masks = result.filtered_masks
classes_ids = result.filtered_classes_id
classes_names = result.filtered_classes_names
```
---
## __How to automatically determine optimal parameters for patches (crops):__
To efficiently process a large number of images of varying sizes and contents, manually selecting the optimal patch sizes and overlaps can be difficult. To address this, an algorithm has been developed to automatically calculate the best parameters for patches (crops).
The `auto_calculate_crop_values` function operates in two modes:
1. **Resolution-Based Analysis**: This mode evaluates the resolution of the source images to determine the optimal patch sizes and overlaps. It is faster but may not yield the highest quality results because it does not take into account the actual objects present in the images.
2. **Neural Network-Based Analysis**: This advanced mode employs a neural network to analyze the images. The algorithm performs a standard inference of the network on the entire image and identifies the largest detected objects. Based on the sizes of these objects, it selects patch parameters so that the largest objects are fully contained within a patch, while overlapping patches ensure comprehensive coverage. In this mode, you must pass in the model that will be used for patch-based inference in the subsequent steps.
Usage example:

```python
import cv2
from ultralytics import YOLO
from patched_yolo_infer import auto_calculate_crop_values

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

# Calculate the optimal crop size and overlap for an image
shape_x, shape_y, overlap_x, overlap_y = auto_calculate_crop_values(
    image=img, mode="network_based", model=YOLO("yolov8m.pt")
)
```