Below is an example of how to process a video stream and save the final processed video with the results of patch-based instance segmentation inference (as shown in the GIF from the previous comment):
import cv2
from ultralytics import YOLO
from patched_yolo_infer import MakeCropsDetectThem, CombineDetections, visualize_results

# Load the YOLOv8 model
model = YOLO("yolov8m-seg.pt")  # or yolov8m-seg.engine in case of TensorRT

# Open the video file
cap = cv2.VideoCapture("video.mp4")

# Check if the video file was successfully opened
if not cap.isOpened():
    exit()

# Get the frames per second (fps) of the video
fps = cap.get(cv2.CAP_PROP_FPS)

# Get the width and height of the video frames
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # Codec for MP4
out = cv2.VideoWriter('output.mp4', fourcc, fps, (width, height))

while True:
    # Read a frame from the video
    ret, frame = cap.read()

    # Break the loop if there are no more frames
    if not ret:
        break

    # Detect elements in the frame using the YOLOv8 model
    element_crops = MakeCropsDetectThem(
        image=frame,
        model=model,
        segment=True,
        shape_x=640,
        shape_y=500,
        overlap_x=35,
        overlap_y=35,
        conf=0.2,
        iou=0.75,
        imgsz=640,
        resize_initial_size=True,
        show_crops=False,
        batch_inference=True,
        classes_list=[0, 1, 2, 3, 4, 5, 6]
    )

    # Combine the detections from the different crops
    result = CombineDetections(element_crops, nms_threshold=0.2, match_metric='IOS')

    # Visualize the results on the frame
    frame = visualize_results(
        img=result.image,
        confidences=result.filtered_confidences,
        boxes=result.filtered_boxes,
        polygons=result.filtered_polygons,
        classes_ids=result.filtered_classes_id,
        classes_names=result.filtered_classes_names,
        segment=True,
        thickness=3,
        show_boxes=False,
        fill_mask=True,
        show_class=False,
        alpha=1,
        return_image_array=True
    )

    # Resize the frame for display
    scale = 0.5
    frame_resized = cv2.resize(frame, (-1, -1), fx=scale, fy=scale)

    # Display the frame
    cv2.imshow('video', frame_resized)

    # Write the frame to the output video file
    out.write(frame)

    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the video capture and writer objects
cap.release()
out.release()

# Close all OpenCV windows
cv2.destroyAllWindows()
Hi, I contacted you a couple of months ago about this library, but back then it was too slow for my use case. I saw you tell someone that it got faster, so I tried the code you gave them, but it doesn't work and raises an error. Here is the error itself:
"C:\Users\Mohammad karaca\PycharmProjects\yolo_teknofest.venv\Scripts\python.exe" "C:\Users\Mohammad karaca\PycharmProjects\yolo_teknofest.venv\patched_yolo.py"
Traceback (most recent call last):
File "C:\Users\Mohammad karaca\PycharmProjects\yolo_teknofest.venv\patched_yolo.py", line 34, in
element_crops = MakeCropsDetectThem(
TypeError: __init__() got an unexpected keyword argument 'batch_inference'
Originally posted by @Koldim2001 in #8 (comment)
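For context, this TypeError means the MakeCropsDetectThem constructor in the installed copy of patched_yolo_infer does not accept a batch_inference keyword, which usually points to an older release of the library than the one the example above was written for. The snippet below is a minimal troubleshooting sketch, not part of the original report: it assumes the distribution is named patched_yolo_infer (adjust if your environment uses a different name) and that older releases accept the same arguments minus batch_inference. It checks the installed version and tests the call on a single frame without that keyword.

import cv2
from importlib.metadata import PackageNotFoundError, version

from ultralytics import YOLO
from patched_yolo_infer import MakeCropsDetectThem

# Print the installed library version so it can be compared with the latest release.
# The distribution name "patched_yolo_infer" is an assumption.
try:
    print("patched_yolo_infer version:", version("patched_yolo_infer"))
except PackageNotFoundError:
    print("patched_yolo_infer is not installed as a distribution package")

# Load the same model and grab a single frame so the call can be tested in isolation.
model = YOLO("yolov8m-seg.pt")
cap = cv2.VideoCapture("video.mp4")
ret, frame = cap.read()
cap.release()

if ret:
    # Fallback: same arguments as in the example above, minus batch_inference,
    # which the installed (presumably older) release does not accept.
    element_crops = MakeCropsDetectThem(
        image=frame,
        model=model,
        segment=True,
        shape_x=640,
        shape_y=500,
        overlap_x=35,
        overlap_y=35,
        conf=0.2,
        iou=0.75,
        imgsz=640,
        resize_initial_size=True,
        show_crops=False,
        classes_list=[0, 1, 2, 3, 4, 5, 6],
    )
    print("MakeCropsDetectThem ran without the batch_inference keyword")

If the version check confirms an old install, upgrading the package (for example with pip install -U patched_yolo_infer, assuming that is the published package name) should make the batch_inference keyword from the example above available.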