# README.md (+35 −1)

```python
import cv2
from patched_yolo_infer import MakeCropsDetectThem, CombineDetections

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

element_crops = MakeCropsDetectThem(
```
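The core idea behind `MakeCropsDetectThem`, tiling the image into overlapping patches before running detection on each one, can be sketched in a few lines. This is an illustrative sketch only, not the library's implementation; `crop_boxes` and its parameters are invented for this example:

```python
def crop_boxes(img_w, img_h, shape_x, shape_y, overlap_x=25, overlap_y=25):
    """Return (x1, y1, x2, y2) windows tiling the image with the given
    overlap (in percent), clamped at the right and bottom edges.
    Illustrative sketch only, not the library's actual cropping code."""
    step_x = max(1, int(shape_x * (1 - overlap_x / 100)))
    step_y = max(1, int(shape_y * (1 - overlap_y / 100)))
    boxes = []
    for y in range(0, img_h, step_y):
        for x in range(0, img_w, step_x):
            boxes.append((x, y, min(x + shape_x, img_w), min(y + shape_y, img_h)))
    return boxes
```

Each returned window would then be passed to the detector, and the per-patch results merged afterwards, which is the role `CombineDetections` plays in the library.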
6. **Handling Multi-Class Detection Issues**: If you are working on a multi-class detection or instance segmentation task, it may be beneficial to switch to `class_agnostic_nms=False` in the `CombineDetections` parameters. The default, `class_agnostic_nms=True`, is particularly effective when handling a large number of closely related classes in pre-trained YOLO networks (for example, when there is frequent confusion between classes like `car` and `truck`). If, in your scenario, an object of one class can physically be inside an object of another class, you should definitely set `class_agnostic_nms=False`.

7. **High-Quality Instance Segmentation**: For tasks requiring high-quality instance segmentation results, detailed guidance is provided in the next section of the README.

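The difference described in tip 6 can be illustrated with a small pure-Python greedy NMS sketch (illustrative only, not the library's code): with class-agnostic NMS, a high-scoring `car` box suppresses an overlapping `truck` box, while per-class NMS keeps both.

```python
def box_iou(a, b):
    # Boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def greedy_nms(dets, iou_thr=0.5, class_agnostic=True):
    """dets: list of (box, score, class_name). Toy greedy NMS sketch."""
    keep = []
    for det in sorted(dets, key=lambda d: d[1], reverse=True):
        suppressed = any(
            (class_agnostic or det[2] == kept[2])
            and box_iou(det[0], kept[0]) > iou_thr
            for kept in keep
        )
        if not suppressed:
            keep.append(det)
    return keep

# Two heavily overlapping boxes with different labels:
dets = [((0, 0, 100, 100), 0.9, "car"),
        ((5, 5, 105, 105), 0.8, "truck")]
# class_agnostic=True: the truck box is suppressed by the car box
# class_agnostic=False: both boxes survive
```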
---
## __How to improve the quality of the algorithm for the task of instance segmentation:__
An example of working with this mode is presented in a Google Colab notebook - [![Open In Colab][colab_badge]][colab_ex1_memory_optimize]
---

## __How to automatically determine optimal parameters for patches (crops):__

To efficiently process a large number of images of varying sizes and contents, manually selecting the optimal patch sizes and overlaps can be cumbersome. To address this, an algorithm has been developed to automatically calculate the best parameters for patches (crops).

The `auto_calculate_crop_values` function operates in two modes:

1. **Resolution-Based Analysis**: This mode evaluates the resolution of the source images to determine the optimal patch sizes and overlaps. It is faster but may not yield the highest quality results because it does not take into account the actual objects present in the images.

2. **Neural Network-Based Analysis**: This advanced mode employs a neural network to analyze the images. The algorithm performs a standard inference of the network on the entire image and identifies the largest detected objects. Based on the sizes of these objects, the algorithm selects patch parameters to ensure that the largest objects are fully contained within a patch, and overlapping patches ensure comprehensive coverage. In this mode, it is necessary to input the model that will be used for patch-based inference in the subsequent steps.
Example of use:

```python
import cv2
from ultralytics import YOLO
from patched_yolo_infer import auto_calculate_crop_values

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

# Calculate the optimal crop size and overlap for an image
```
An example of working with `auto_calculate_crop_values` is presented in a Google Colab notebook - [![Open In Colab][colab_badge]][colab_ex1_auto_calculate_crop_values]
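The idea behind the network-based mode can be sketched with a toy heuristic. Everything here is an assumption for illustration: `suggest_crop_params`, its margin/overlap constants, and its return convention are invented and are not the library's actual algorithm. The sketch sizes each patch to fully contain the largest detected object with a safety margin, and overlaps patches enough that any such object fits wholly inside at least one patch:

```python
def suggest_crop_params(img_w, img_h, largest_obj_w, largest_obj_h,
                        margin=1.5, min_overlap_frac=0.5):
    """Toy heuristic: patch dimensions hold the largest object plus a
    margin; overlap (as a percent of patch size) covers at least half
    of that object so it is never split across all patches."""
    shape_x = min(img_w, int(largest_obj_w * margin))
    shape_y = min(img_h, int(largest_obj_h * margin))
    overlap_x = min(90, int(100 * largest_obj_w * min_overlap_frac / shape_x))
    overlap_y = min(90, int(100 * largest_obj_h * min_overlap_frac / shape_y))
    return shape_x, shape_y, overlap_x, overlap_y

# A 1920x1080 image whose largest detected object is 200x100 px:
print(suggest_crop_params(1920, 1080, 200, 100))  # (300, 150, 33, 33)
```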
# patched_yolo_infer/README.md (+32 −4)

Interactive notebooks are provided to showcase the functionality of the library.

__Check out these Colab examples:__

Patch-Based-Inference Example - [**Open in Colab**](https://colab.research.google.com/drive/1XCpIYLMFEmGSO0XCOkSD7CcD9SFHSJPA?usp=sharing)

Example of using various functions for visualizing basic YOLOv8/v9 inference results - [**Open in Colab**](https://colab.research.google.com/drive/1eM4o1e0AUQrS1mLDpcgK9HKInWEvnaMn?usp=sharing)

## Usage
```python
import cv2
from patched_yolo_infer import MakeCropsDetectThem, CombineDetections

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

element_crops = MakeCropsDetectThem(
```
---

## __How to improve the quality of the algorithm for the task of instance segmentation:__

In this approach, all operations under the hood are performed on binary masks of the recognized objects. Storing these masks consumes a lot of memory, so this method requires more RAM and slightly more processing time. However, recognition accuracy improves significantly, which is especially noticeable when many objects of different sizes are densely packed. We therefore recommend this approach in production when accuracy matters more than speed, and when your computational resources allow storing hundreds of binary masks in RAM.
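The memory cost mentioned above is easy to estimate: a full-resolution binary mask stored as one byte per pixel costs width x height bytes, so a few hundred masks on an HD image already reach hundreds of MiB. These are back-of-the-envelope numbers, not measurements of the library:

```python
def masks_memory_mib(n_masks, width, height, bytes_per_pixel=1):
    # Full-size uint8 binary masks: one byte per pixel per mask
    return n_masks * width * height * bytes_per_pixel / 2**20

# 300 masks on a 1920x1080 image:
print(round(masks_memory_mib(300, 1920, 1080)))  # ~593 MiB
```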
```python
boxes = result.filtered_boxes
masks = result.filtered_masks
classes_ids = result.filtered_classes_id
classes_names = result.filtered_classes_names
```

---

## __How to automatically determine optimal parameters for patches (crops):__

To efficiently process a large number of images of varying sizes and contents, manually selecting the optimal patch sizes and overlaps can be cumbersome. To address this, an algorithm has been developed to automatically calculate the best parameters for patches (crops).

The `auto_calculate_crop_values` function operates in two modes:

1. **Resolution-Based Analysis**: This mode evaluates the resolution of the source images to determine the optimal patch sizes and overlaps. It is faster but may not yield the highest quality results because it does not take into account the actual objects present in the images.

2. **Neural Network-Based Analysis**: This advanced mode employs a neural network to analyze the images. The algorithm performs a standard inference of the network on the entire image and identifies the largest detected objects. Based on the sizes of these objects, the algorithm selects patch parameters to ensure that the largest objects are fully contained within a patch, and overlapping patches ensure comprehensive coverage. In this mode, it is necessary to input the model that will be used for patch-based inference in the subsequent steps.
Example of use:

```python
import cv2
from ultralytics import YOLO
from patched_yolo_infer import auto_calculate_crop_values

# Load the image
img_path = "test_image.jpg"
img = cv2.imread(img_path)

# Calculate the optimal crop size and overlap for an image
```