
Commit 65dfb55

committed
Implementing Patching at Different Resolutions
1 parent 6d54849 commit 65dfb55

3 files changed: 12 additions & 3 deletions


README.md

Lines changed: 10 additions & 1 deletion
@@ -134,7 +134,7 @@ Class implementing combining masks/boxes from multiple crops + NMS (Non-Maximum
 
 | **Argument** | **Type** | **Default** | **Description** |
 |----------------------|-------------------|-------------|-------------------------------------------------------------------------------------------------------------------------|
-| element_crops |MakeCropsDetectThem| | Object containing crop information. |
+| element_crops |MakeCropsDetectThem| | Object containing crop information. This can be either a single MakeCropsDetectThem object or a list of objects. |
 | nms_threshold | float | 0.3 | IoU/IoS threshold for non-maximum suppression. The lower the value, the fewer objects remain after suppression. |
 | match_metric | str | IOS | Matching metric, either 'IOU' or 'IOS'. |
 | class_agnostic_nms | bool | True | Determines the NMS mode in object detection. When set to True, NMS operates across all classes, ignoring class distinctions and suppressing less confident bounding boxes globally. Otherwise, NMS is applied separately for each class. |
@@ -286,6 +286,14 @@ shape_x, shape_y, overlap_x, overlap_y = auto_calculate_crop_values(
 
 An example of working with `auto_calculate_crop_values` is presented in a Google Colab notebook - [![Open In Colab][colab_badge]][colab_ex1_auto_calculate_crop_values]
 
+---
+
+## __Implementing Patching at Different Resolutions__
+
+The image can be cropped into patches at several different resolutions: smaller patches make small objects easier to detect, while larger patches capture large objects, so the combined result covers a wider range of object sizes in the frame. To achieve this, process the image multiple times through MakeCropsDetectThem with different patch parameters, then pass the resulting list of element_crops objects to CombineDetections.
+
+An example of using this approach can be seen in this Google Colab notebook - [![Open In Colab][colab_badge]][colab_ex1_different_resolutions]
 
 
 [nb_example1]: https://nbviewer.org/github/Koldim2001/YOLO-Patch-Based-Inference/blob/main/examples/example_patch_based_inference.ipynb
@@ -297,3 +305,4 @@ An example of working with `auto_calculate_crop_values` is presented in Google C
 [yt_link2]: https://www.youtube.com/watch?v=nBQuWa63188
 [colab_ex1_memory_optimize]: https://colab.research.google.com/drive/1XCpIYLMFEmGSO0XCOkSD7CcD9SFHSJPA?usp=sharing#scrollTo=DM_eCc3yXzXW
 [colab_ex1_auto_calculate_crop_values]: https://colab.research.google.com/drive/1XCpIYLMFEmGSO0XCOkSD7CcD9SFHSJPA?usp=sharing#scrollTo=Wkt1FkAkhCwQ
+[colab_ex1_different_resolutions]: !!!
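The added section can be illustrated with a short, self-contained sketch. This is not the library's implementation (MakeCropsDetectThem and CombineDetections hide these details), and all function names and the stubbed detections below are hypothetical; it only shows the idea: build crop grids at two patch resolutions, collect detections from both passes, and merge them with class-agnostic NMS.

```python
# Hypothetical sketch of multi-resolution patching; not the library's code.

def make_crop_grid(img_w, img_h, shape_x, shape_y, overlap_pct=50):
    """Top-left corners of overlapping patches tiling the image."""
    step_x = max(1, int(shape_x * (1 - overlap_pct / 100)))
    step_y = max(1, int(shape_y * (1 - overlap_pct / 100)))
    return [(x, y, shape_x, shape_y)
            for y in range(0, max(img_h - shape_y, 0) + 1, step_y)
            for x in range(0, max(img_w - shape_x, 0) + 1, step_x)]

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def class_agnostic_nms(dets, thr=0.3):
    """Greedy NMS over all classes at once; dets are (x1, y1, x2, y2, score)."""
    kept = []
    for d in sorted(dets, key=lambda d: d[4], reverse=True):
        if all(iou(d[:4], k[:4]) <= thr for k in kept):
            kept.append(d)
    return kept

# Two passes over a 640x640 image: fine 320x320 patches for small objects,
# one coarse 640x640 "patch" for large ones (detections are stubbed here).
fine_grid = make_crop_grid(640, 640, 320, 320, overlap_pct=50)   # 9 patches
coarse_grid = make_crop_grid(640, 640, 640, 640, overlap_pct=0)  # 1 patch
small_dets = [(10, 10, 20, 20, 0.9)]                             # fine scale
large_dets = [(10, 10, 21, 21, 0.6), (100, 100, 400, 380, 0.8)]  # coarse scale
merged = class_agnostic_nms(small_dets + large_dets, thr=0.3)
# The lower-confidence duplicate of the small box is suppressed;
# two detections remain.
```

In the actual library, the two passes would be two MakeCropsDetectThem objects created with different shape_x/shape_y values, and the final merge is what CombineDetections performs when given the list of both objects.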

patched_yolo_infer/README.md

Lines changed: 1 addition & 1 deletion
@@ -105,7 +105,7 @@ Class implementing cropping and passing crops through a neural network for detec
 **CombineDetections**
 Class implementing combining masks/boxes from multiple crops + NMS (Non-Maximum Suppression).\
 **Args:**
-- **element_crops** (*MakeCropsDetectThem*): Object containing crop information.
+- **element_crops** (*MakeCropsDetectThem*): Object containing crop information. This can be either a single MakeCropsDetectThem object or a list of objects.
 - **nms_threshold** (*float*): IoU/IoS threshold for non-maximum suppression.
 - **match_metric** (*str*): Matching metric, either 'IOU' or 'IOS'.
 - **class_agnostic_nms** (*bool*): Determines the NMS mode in object detection. When set to True, NMS operates across all classes, ignoring class distinctions and suppressing less confident bounding boxes globally. Otherwise, NMS is applied separately for each class. (Default is True)
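The 'IOU' vs 'IOS' choice for match_metric can be made concrete with a minimal sketch (hypothetical helper names, not the library's code). In patch-based inference, a fragment of an object detected in one crop is often fully contained in the full box from a neighbouring crop, which yields a high IoS (intersection over the smaller box) even when the IoU is only moderate.

```python
# Hypothetical helpers illustrating the two matching metrics; not library code.

def box_area(b):
    return (b[2] - b[0]) * (b[3] - b[1])

def box_intersection(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return ix * iy

def match(a, b, metric="IOS"):
    inter = box_intersection(a, b)
    if metric == "IOU":          # intersection over union
        denom = box_area(a) + box_area(b) - inter
    else:                        # IOS: intersection over the smaller box
        denom = min(box_area(a), box_area(b))
    return inter / denom if denom else 0.0

full = (0, 0, 100, 100)      # complete object seen in one crop
fragment = (0, 0, 30, 100)   # partial box from a neighbouring crop
iou_val = match(full, fragment, "IOU")  # 3000 / 10000 = 0.3
ios_val = match(full, fragment, "IOS")  # 3000 / 3000  = 1.0
```

Here the IoU sits exactly at the default 0.3 threshold, while the IoS of 1.0 unambiguously flags the fragment for suppression, which is why 'IOS' is the default.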

setup.py

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@
 long_description = "\n" + fh.read()
 
 
-VERSION = '1.3.1'
+VERSION = '1.3.2'
 DESCRIPTION = '''Patch-Based-Inference for detection/segmentation of small objects in images.'''
 
 setup(
