
Python library, version 1.3.8

Released by @Koldim2001 on 10 Jan 08:55

This Python library simplifies SAHI-like inference for instance segmentation tasks, enabling the detection of small objects in images. It supports both object detection and instance segmentation, working with a wide range of Ultralytics models.

The library also offers flexible customization of how inference results are visualized for all models, both in the standard approach (a direct network run) and in the unique patch-based variant.

Model Support: The library supports multiple Ultralytics deep learning models, such as YOLOv8, YOLOv8-seg, YOLOv9, YOLOv9-seg, YOLO11, YOLO11-seg, FastSAM, and RTDETR. Users can select from pre-trained options or use custom-trained models to best meet their task requirements.

pip install patched-yolo-infer==1.3.8

🚀 MAIN UPDATES:
Progress Tracking for Patching and Inference:
This update introduces the ability to monitor the progress of patching and inference tasks; you can now follow the status of these processes in real time.
Custom Progress Callback Function:
You can provide your own custom function to display the progress status. By default, the library uses tqdm to show a progress bar.

HOW TO USE: You need to pass additional parameters when creating an instance of MakeCropsDetectThem:

  1. show_processing_status (boolean, default: False):
    If set to True, a tqdm progress bar will be displayed for the cropping and object detection processes.

  2. progress_callback (function, default: None):
    This parameter allows you to pass a custom callback function to handle progress updates. The function should accept three arguments:

  task (str): The name of the task (e.g., "cropping" or "detecting").
  current (int): The current progress value.
  total (int): The total number of steps in the task.

Example:

# Custom Callback function to output progress
def progress_callback(task, current, total):
    print(f"{task}: {current / total * 100:.2f}%")

element_crops = MakeCropsDetectThem(
    image=img,
    model_path="yolov8m.pt",
    segment=False,
    show_crops=False,
    shape_x=600,
    shape_y=600,
    overlap_x=25,
    overlap_y=25,
    conf=0.5,
    iou=0.7,
    show_processing_status=True,
    # Uncomment to use the custom callback instead of the default tqdm bar:
    # progress_callback=progress_callback,
)

result = CombineDetections(element_crops, nms_threshold=0.25)
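To make the callback contract concrete, the stand-alone simulation below mimics how a patch pipeline might invoke the callback once per crop for each stage. Note that `simulate_patch_pipeline` is a hypothetical driver written for illustration only; it is not part of the library's internals.

```python
def simulate_patch_pipeline(n_crops, progress_callback):
    """Invoke the callback once per crop, first for the cropping stage,
    then for the detection stage (illustration only, not library code)."""
    for task in ("cropping", "detecting"):
        for current in range(1, n_crops + 1):
            progress_callback(task, current, n_crops)

# Collect the (task, current, total) events to inspect the contract.
events = []
simulate_patch_pipeline(3, lambda task, cur, tot: events.append((task, cur, tot)))
```

After running, `events` holds three `"cropping"` updates followed by three `"detecting"` updates, each carrying the current step and the total step count.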


With a custom callback, users are free to handle progress updates however they like.

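For instance, a callback could render its own in-place text progress bar instead of the default tqdm one. The sketch below is plain Python with no library dependencies; `format_progress` is a helper introduced here for illustration, not a library function.

```python
def format_progress(task, current, total, bar_len=30):
    # Build a fixed-width text bar like "cropping: [#####---] 5/8 (62%)".
    filled = int(bar_len * current / total)
    bar = "#" * filled + "-" * (bar_len - filled)
    return f"{task}: [{bar}] {current}/{total} ({current / total:.0%})"

def progress_callback(task, current, total):
    # Overwrite the same line until the task finishes, then start a new line.
    end = "\n" if current == total else "\r"
    print(format_progress(task, current, total), end=end)
```

Passing this `progress_callback` to `MakeCropsDetectThem` (with `progress_callback=progress_callback`) would replace the tqdm output with this custom bar.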