Releases · roboflow/supervision
supervision-0.10.0
🚀 Added
- Ability to load and save `sv.ClassificationDataset` in a folder structure format. (#125)

```python
>>> import supervision as sv

>>> cs = sv.ClassificationDataset.from_folder_structure(
...     root_directory_path='...'
... )

>>> cs.as_folder_structure(
...     root_directory_path='...'
... )
```

- Support for `sv.ClassificationDataset.split`, allowing you to divide a `sv.ClassificationDataset` into two parts. (#125)
```python
>>> import supervision as sv

>>> cs = sv.ClassificationDataset(...)
>>> train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_cs), len(test_cs)
(700, 300)
```
- Ability to extract masks from Roboflow API results using `sv.Detections.from_roboflow`. (#110)
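A minimal sketch of the new mask support, assuming a segmentation response from the Roboflow inference API; the `roboflow_result` payload and the `class_list` argument are illustrative placeholders, not taken from the release notes.

```python
>>> import supervision as sv

>>> # JSON returned by the Roboflow inference API for a segmentation model (abbreviated)
>>> roboflow_result = {...}
>>> detections = sv.Detections.from_roboflow(
...     roboflow_result=roboflow_result,
...     class_list=['person', 'car']  # assumed parameter; check the Detections API docs
... )
>>> detections.mask.shape  # boolean masks, one per detection (illustrative shape)
(2, 640, 640)
```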
- Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.
🌱 Changed
- `sv.get_video_frames_generator` documentation to better describe actual behavior. (#135)
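For reference, a typical call to the generator whose documentation was updated; `source_path` and `stride` mirror the `sv.ImageSink` example in the 0.9.0 notes below, and the video path is a placeholder.

```python
>>> import supervision as sv

>>> # iterate over every tenth frame of a local video file
>>> for frame in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
...     print(frame.shape)
```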
🏆 Contributors
supervision-0.9.0
🚀 Added
- Ability to select `sv.Detections` by index, list of indexes or slice. Here is an example illustrating the new selection methods. (#118)

```python
>>> import supervision as sv

>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2
```

- Ability to extract masks from YOLOv8 results using `sv.Detections.from_yolov8`. Here is an example illustrating how to extract boolean masks from the result of the YOLOv8 model inference. (#101)
```python
>>> import cv2
>>> from ultralytics import YOLO
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> image.shape
(640, 640, 3)

>>> model = YOLO('yolov8s-seg.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_yolov8(result)
>>> detections.mask.shape
(2, 640, 640)
```

- Ability to crop the image using `sv.crop`. Here is an example showing how to get a separate crop for each detection in `sv.Detections`. (#122)
```python
>>> import cv2
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> detections = sv.Detections(...)
>>> len(detections)
2
>>> crops = [
...     sv.crop(image=image, xyxy=xyxy)
...     for xyxy
...     in detections.xyxy
... ]
>>> len(crops)
2
```

- Ability to conveniently save multiple images into a directory using `sv.ImageSink`. An example shows how to save every tenth video frame as a separate image. (#120)
```python
>>> import supervision as sv

>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
...     for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
...         sink.save_image(image=image)
```

🛠️ Fixed
- Inconvenient handling of `sv.PolygonZone` coordinates. Now `sv.PolygonZone` accepts coordinates in the form of `[[x1, y1], [x2, y2], ...]` that can be both integers and floats. (#106)
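A short sketch of the relaxed coordinate handling; the `frame_resolution_wh` argument and the `trigger` call reflect the PolygonZone API as assumed here, so verify them against the documentation.

```python
>>> import numpy as np
>>> import supervision as sv

>>> # integer and float vertex coordinates can now be mixed freely
>>> polygon = np.array([[100, 100], [400.5, 100], [400.5, 300], [100, 300.25]])
>>> zone = sv.PolygonZone(
...     polygon=polygon,
...     frame_resolution_wh=(640, 480)  # assumed required in this version
... )
>>> detections = sv.Detections(...)
>>> in_zone = zone.trigger(detections=detections)  # boolean array, one entry per detection
```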
🏆 Contributors
supervision-0.8.0
🚀 Added
- Support for dataset inheritance. The current `Dataset` got renamed to `DetectionDataset`. Now `DetectionDataset` inherits from `BaseDataset`. This change was made to enforce the future consistency of APIs of different types of computer vision datasets. (#100)
- Ability to save datasets in YOLO format using `DetectionDataset.as_yolo`. (#100)
```python
>>> import supervision as sv

>>> ds = sv.DetectionDataset(...)
>>> ds.as_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )
```

- Support for `DetectionDataset.split`, allowing you to divide a `DetectionDataset` into two parts. (#102)
```python
>>> import supervision as sv

>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_ds), len(test_ds)
(700, 300)
```

🌱 Changed
- Default value of the `approximation_percentage` parameter changed from `0.75` to `0.0` in `DetectionDataset.as_yolo` and `DetectionDataset.as_pascal_voc`. (#100)
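If you relied on the old default, a sketch like the one below restores the previous behaviour; the directory keyword arguments mirror the `as_yolo` example above, while the exact `as_pascal_voc` signature is an assumption to check against the docs.

```python
>>> import supervision as sv

>>> ds = sv.DetectionDataset(...)
>>> # pass the old default explicitly to keep the previous polygon approximation
>>> ds.as_pascal_voc(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     approximation_percentage=0.75
... )
```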
🏆 Contributors
supervision-0.7.0
🚀 Added
- `Detections.from_yolo_nas` to enable seamless integration with the YOLO-NAS model. (#91)
- Ability to load datasets in YOLO format using `Dataset.from_yolo`. (#86)
- `Detections.merge` to merge multiple `Detections` objects together. (#84)
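A minimal sketch of `Detections.merge`; both inputs are placeholders and the final assertion is illustrative.

```python
>>> import supervision as sv

>>> detections_1 = sv.Detections(...)
>>> detections_2 = sv.Detections(...)
>>> merged = sv.Detections.merge([detections_1, detections_2])
>>> len(merged) == len(detections_1) + len(detections_2)
True
```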
🌱 Changed
- `LineZoneAnnotator.annotate` to allow for custom text for the in and out tags. (#44)
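The custom labels are configured roughly as below; the `custom_in_text` / `custom_out_text` parameter names, their placement on the annotator constructor, and the `line_counter` keyword are assumptions, so verify them against the API reference.

```python
>>> import cv2
>>> import supervision as sv

>>> frame = cv2.imread(...)
>>> line_zone = sv.LineZone(start=sv.Point(0, 300), end=sv.Point(640, 300))
>>> line_annotator = sv.LineZoneAnnotator(
...     custom_in_text='entered',   # assumed parameter name
...     custom_out_text='exited'    # assumed parameter name
... )
>>> annotated_frame = line_annotator.annotate(frame=frame, line_counter=line_zone)
```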
🛠️ Fixed
- `LineZoneAnnotator.annotate` does not return the annotated frame. (#81)
🏆 Contributors
supervision-0.6.0
🚀 Added
- Initial `Dataset` support and ability to save `Detections` in Pascal VOC XML format. (#71)
- New `mask_to_polygons`, `filter_polygons_by_area`, `polygon_to_xyxy` and `approximate_polygon` utilities. (#71)
- Ability to load a Pascal VOC XML object detection dataset as `Dataset`. (#72)
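A rough tour of the new polygon utilities; the shapes and thresholds are illustrative, and `polygon_to_mask` (the renamed `generate_2d_mask`, see below) is used to round-trip the example.

```python
>>> import numpy as np
>>> import supervision as sv

>>> # a single rectangular polygon, rasterized into a binary mask
>>> polygon = np.array([[10, 10], [90, 10], [90, 60], [10, 60]])
>>> mask = sv.polygon_to_mask(polygon=polygon, resolution_wh=(100, 100))

>>> # recover polygons from the mask and drop tiny ones
>>> polygons = sv.filter_polygons_by_area(polygons=sv.mask_to_polygons(mask=mask), min_area=50)

>>> # bounding box and a simplified outline for the first polygon
>>> sv.polygon_to_xyxy(polygon=polygons[0])
>>> sv.approximate_polygon(polygon=polygons[0], percentage=0.05)
```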
🌱 Changed
- Order of `Detections` attributes to make it consistent with the order of objects in the `__iter__` tuple. (#70)
- `generate_2d_mask` renamed to `polygon_to_mask`. (#71)
🏆 Contributors
supervision-0.5.2
supervision-0.5.1
🛠️ Fixed
- Fixed `Detections.__getitem__` method not returning the mask for the selected item.
- Fixed `Detections.area` crashing for mask detections.
🏆 Contributors
supervision-0.5.0
🚀 Added
- `Detections.mask` to enable segmentation support. (#58)
- `MaskAnnotator` to allow easy `Detections.mask` annotation. (#58)
- `Detections.from_sam` to enable native Segment Anything Model (SAM) support. (#58)
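A compressed sketch tying the three additions together; the SAM setup is abbreviated, and `sam_result` is assumed to be the list of dicts produced by SAM's automatic mask generator.

```python
>>> import cv2
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> sam_result = [...]  # output of SamAutomaticMaskGenerator.generate (assumed)

>>> detections = sv.Detections.from_sam(sam_result=sam_result)
>>> detections.mask.shape  # one boolean mask per detection (illustrative shape)
(3, 640, 640)

>>> mask_annotator = sv.MaskAnnotator()
>>> annotated_image = mask_annotator.annotate(scene=image.copy(), detections=detections)
```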
🌱 Changed
- `Detections.area` behaviour to work not only with boxes but also with masks. (#58)
🏆 Contributors
supervision-0.4.0
🚀 Added
- `Detections.empty` to allow easy creation of empty `Detections` objects. (#48)
- `Detections.from_roboflow` to allow easy creation of `Detections` objects from Roboflow API inference results. (#56)
- `plot_images_grid` to allow easy plotting of multiple images on a single plot. (#56)
- Initial support for Pascal VOC XML format with the `detections_to_voc_xml` method. (#56)
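A quick sketch of `Detections.empty` and `plot_images_grid`; the images and grid size are placeholders, and `titles` is an assumed optional argument.

```python
>>> import cv2
>>> import supervision as sv

>>> detections = sv.Detections.empty()
>>> len(detections)
0

>>> image_1 = cv2.imread(...)
>>> image_2 = cv2.imread(...)
>>> sv.plot_images_grid(
...     images=[image_1, image_2],
...     grid_size=(1, 2),
...     titles=['before', 'after']  # assumed optional parameter
... )
```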
🌱 Changed
- `show_frame_in_notebook` refactored and renamed to `plot_image`. (#56)
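The renamed helper is used as below; the optional `size` argument is an assumption.

```python
>>> import cv2
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> sv.plot_image(image=image, size=(8, 8))  # size assumed optional
```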



