Releases: roboflow/supervision
supervision-0.3.1
supervision-0.3.0
🚀 Added
New methods in the `sv.Detections` API:

- `from_transformers` - convert Object Detection 🤗 Transformers result into `sv.Detections`
- `from_detectron2` - convert Detectron2 result into `sv.Detections`
- `from_coco_annotations` - convert COCO annotation into `sv.Detections`
- `area` - dynamically calculated property storing bbox area
- `with_nms` - initial implementation (class agnostic only) of `sv.Detections` NMS
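For intuition about what `with_nms` does, here is an illustrative, self-contained sketch of class-agnostic NMS in plain numpy (not the library's actual implementation):

```python
import numpy as np

def class_agnostic_nms(xyxy: np.ndarray, confidence: np.ndarray, iou_threshold: float = 0.5) -> np.ndarray:
    """Return indices of boxes kept after class-agnostic non-max suppression."""
    x1, y1, x2, y2 = xyxy.T
    areas = (x2 - x1) * (y2 - y1)
    order = confidence.argsort()[::-1]  # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the top box against the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]  # drop heavily overlapping boxes
    return np.array(keep)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = class_agnostic_nms(boxes, scores)  # the second box overlaps the first and is suppressed
```

Because the suppression ignores `class_id`, overlapping boxes of different classes also suppress each other; that is the limitation the release note flags.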
🌱 Changed
- Make the `sv.Detections.confidence` field `Optional`.
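Since `confidence` may now be `None`, downstream code that formats labels should guard for that case. A minimal sketch (the `format_label` helper is hypothetical, not part of the library):

```python
from typing import Optional

def format_label(class_name: str, confidence: Optional[float]) -> str:
    # confidence may be None now that the field is Optional
    if confidence is None:
        return class_name
    return f"{class_name} {confidence:0.2f}"

with_score = format_label("person", 0.87)  # 'person 0.87'
without_score = format_label("person", None)  # 'person'
```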
🏆 Contributors
supervision-0.2.0
💪 Killer features
- Support for `PolygonZone` and `PolygonZoneAnnotator` 🔥
Code example
```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)
```

- Advanced `sv.Detections` filtering with a pandas-like API.
```python
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
```

- Improved integration with YOLOv5 and YOLOv8 models.
```python
import torch
import supervision as sv

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)
```

```python
from ultralytics import YOLO
import supervision as sv

model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
```

🚀 Added
- `supervision.get_polygon_center` function - takes in a polygon as a 2-dimensional `numpy.ndarray` and returns the center of the polygon as a `Point` object
- `supervision.draw_polygon` function - draw a polygon on a scene
- `supervision.draw_text` function - draw text on a scene
- `supervision.ColorPalette.default()` class method - to generate a default `ColorPalette`
- `supervision.generate_2d_mask` function - generate a 2D mask from a polygon
- `supervision.PolygonZone` class - to define polygon zones and validate if `supervision.Detections` are in the zone
- `supervision.PolygonZoneAnnotator` class - to draw `supervision.PolygonZone` on a scene
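For intuition, `get_polygon_center`-style behaviour can be sketched in plain numpy, assuming the center is taken as the mean of the vertices rounded to pixel coordinates (the library's exact definition may differ):

```python
import numpy as np

def polygon_center(polygon: np.ndarray) -> tuple:
    """Center of a polygon given as an (N, 2) array of vertices."""
    # mean of the vertices, rounded to integer pixel coordinates
    cx, cy = np.round(polygon.mean(axis=0)).astype(int)
    return int(cx), int(cy)

square = np.array([[0, 0], [100, 0], [100, 100], [0, 100]])
center = polygon_center(square)  # (50, 50)
```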
🌱 Changed
- `VideoInfo` API - change the property name `resolution` -> `resolution_wh` to make it more descriptive; convert `VideoInfo` to a dataclass
- `process_frame` API - change the argument name `frame` -> `scene` to make it consistent with other classes and methods
- `LineCounter` API - rename class `LineCounter` -> `LineZone` to make it consistent with `PolygonZone`
- `LineCounterAnnotator` API - rename class `LineCounterAnnotator` -> `LineZoneAnnotator`
🏆 Contributors
supervision-0.1.0
🚀 Added
- Add project license
- `DEFAULT_COLOR_PALETTE`, `Color`, and `ColorPalette` classes
- initial implementation of `Point`, `Vector`, and `Rect` classes
- `VideoInfo` and `VideoSink` classes as well as `get_video_frames_generator` util
- `show_frame_in_notebook` util
- `draw_line`, `draw_rectangle`, `draw_filled_rectangle` utils added
- initial version of `Detections` and `BoxAnnotator` added
- initial implementation of `LineCounter` and `LineCounterAnnotator` classes
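For intuition, the line-crossing logic behind a `LineCounter`-style class can be sketched with a 2D cross-product side test (illustrative only, not the library code): a detection has crossed the line when its side of the line flips between frames.

```python
import numpy as np

def side_of_line(start: np.ndarray, end: np.ndarray, point: np.ndarray) -> int:
    """Sign of the 2D cross product: +1 left of start->end, -1 right, 0 on the line."""
    v = end - start
    w = point - start
    cross = v[0] * w[1] - v[1] * w[0]
    return int(np.sign(cross))

start, end = np.array([0, 0]), np.array([0, 100])  # vertical counting line
before = side_of_line(start, end, np.array([-5, 50]))  # point on one side
after = side_of_line(start, end, np.array([5, 50]))    # point on the other side
crossed = before != after and 0 not in (before, after)
```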
