Wrapping `yolov5/detect.py`

class Detector[source]

Detector(weight_path, conf_threshold=0.4, iou_threshold=0.45, imgsz=416, save_dir='save_dir', write_annotated_images_to_disk=False)

A wrapper for loading saved YOLOv5 weights and running inference

Requirements: GPU enabled
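Because the wrapper expects a GPU, it can save time to confirm that CUDA is visible before loading any weights. A quick sketch of such a check, using PyTorch (which YOLOv5 already depends on):

import torch

# Detector assumes a CUDA-capable GPU; fail fast if one isn't visible
assert torch.cuda.is_available(), "No CUDA device detected"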

weight_path = 'ipynb_tests/02_train_datadump/<AutoWeight> - 1/weights/best.pt'

repos = []
repos.append("Image Repo/unlabeled/21-3-18 rowing 8-12 /")
repos.append("Image Repo/unlabeled/21-3-22 rowing (200) 1:53-7:00")
repos.append("Image Repo/unlabeled/21-3-22 rowing (200) 7:50-12:50")

import os

# gather absolute paths to every image in each repo
dirs = []
for repo in repos:
  files = os.listdir(repo)
  absolute_paths = [os.path.join(repo, file) for file in files]
  dirs.append(absolute_paths)
detector = Detector(weight_path=weight_path)
Fusing layers... 
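The thresholds shown in the signature above can be overridden at construction. As a sketch (parameter names taken directly from the signature), a stricter detector that drops low-confidence boxes:

# keep only boxes with confidence >= 0.6 (the default is 0.4)
strict_detector = Detector(weight_path=weight_path, conf_threshold=0.6)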
# take four sample images from each repo
samples = dirs[0][3:7] + dirs[1][3:7] + dirs[2][3:7]
for i in range(len(samples)):
  res = detector.process_image(samples[i])
  print(i, "|", res)
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-18 rowing 8-12 /100.jpg: 0 | [{'predictions': ['4 0.329688 0.385417 0.021875 0.0902778 0.809661', '2 0.275781 0.375694 0.0203125 0.0902778 0.821206', '7 0.358594 0.388889 0.021875 0.0888889 0.854578']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-18 rowing 8-12 /101.jpg: 1 | [{'predictions': ['9 0.329297 0.384722 0.0210938 0.0861111 0.655793', '2 0.275391 0.375694 0.0210938 0.0902778 0.687569', '9 0.357812 0.392361 0.0234375 0.0875 0.749733']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-18 rowing 8-12 /102.jpg: 2 | [{'predictions': ['6 0.366016 0.390278 0.0164062 0.0861111 0.723955', '2 0.275391 0.375 0.0195312 0.0888889 0.786876', '9 0.331641 0.384722 0.0210938 0.0861111 0.851035']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-18 rowing 8-12 /103.jpg: 3 | [{'predictions': ['2 0.275781 0.375 0.01875 0.0861111 0.825081', '9 0.330078 0.385417 0.0226563 0.0875 0.87784', '2 0.360156 0.390278 0.0203125 0.0888889 0.895363']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 1:53-7:00/19.jpg: 4 | []
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 1:53-7:00/6.jpg: 5 | []
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 1:53-7:00/25.jpg: 6 | [{'predictions': ['3 0.827734 0.869444 0.0351562 0.122222 0.50515']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 1:53-7:00/9.jpg: 7 | []
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 7:50-12:50/7.jpg: 8 | [{'predictions': ['0 0.441797 0.385417 0.0148437 0.0736111 0.849962', '9 0.412891 0.378472 0.0226563 0.0791667 0.858775', '9 0.364453 0.372222 0.0226563 0.0833333 0.8981', '7 0.389062 0.375694 0.021875 0.0791667 0.932286']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 7:50-12:50/0.jpg: 9 | [{'predictions': ['4 0.411719 0.378472 0.021875 0.0819444 0.866474', '9 0.365234 0.371528 0.0226563 0.0819444 0.889749', '9 0.436719 0.381944 0.0234375 0.075 0.911735']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 7:50-12:50/4.jpg: 10 | [{'predictions': ['6 0.394922 0.372917 0.0164062 0.0763889 0.435396', '4 0.411328 0.377083 0.0210938 0.0791667 0.895521', '9 0.365625 0.371528 0.021875 0.0791667 0.896913', '5 0.436328 0.379861 0.0210938 0.0763889 0.917066']}]
image 1/1 /content/drive/My Drive/Coding/ModelAssistedLabel/Image Repo/unlabeled/21-3-22 rowing (200) 7:50-12:50/12.jpg: 11 | [{'predictions': ['9 0.412109 0.377778 0.0226563 0.0805556 0.842007', '9 0.364844 0.372222 0.0234375 0.0861111 0.853243', '7 0.435937 0.383333 0.0234375 0.0805556 0.907213', '7 0.388672 0.375 0.0226563 0.0833333 0.929839']}]
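Each prediction string follows the YOLO label convention: class index, then normalized x-center, y-center, width, and height, with the model's confidence appended as a sixth field. A minimal sketch of unpacking one result (the helper and its field names are illustrative, not part of the library):

def parse_prediction(line):
  "Split a 'class x y w h confidence' string into a labeled dict."
  cls, x, y, w, h, conf = line.split()
  return {"class": int(cls),
          "x_center": float(x), "y_center": float(y),
          "width": float(w), "height": float(h),
          "confidence": float(conf)}

for pred in detector.process_image(samples[0])[0]["predictions"]:
  print(parse_prediction(pred))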

Human-friendly Labels

Human-readable information about the class identities is stored in the data.yaml file. By default, data.yaml is created from the Defaults class. Let's take a look:

from ModelAssistedLabel.core import Defaults
print(Defaults().data_yaml)
train: ../train/images
val: ../valid/images

nc: 10
names: ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0']

To convert the "names" variable to a Python-friendly format, we do the following manipulation:

import ast, re

# the key needs to be wrapped in quotes to parse as a dict
substitute = "names"

# select the last line of the YAML string
classlist = Defaults().data_yaml.split("\n")[-1]

# add quotes around `names`, but only at the start of the string
classlist = re.sub('^%s' % substitute, f"'{substitute}'", classlist)

# surround the string with curly braces so Python reads it as a dict
classlist = f"{{{classlist}}}"

# parse the string as a dict
classlist = ast.literal_eval(classlist)

And now the class names are available in the format used by yolov5:

classlist
{'names': ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0']}
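As an aside, the same dict can be recovered by parsing the YAML directly instead of manipulating strings. A sketch, assuming the pyyaml package is installed:

import yaml

# safe_load parses the whole data.yaml string; `names` comes back as a list
parsed = yaml.safe_load(Defaults().data_yaml)
assert parsed['names'] == classlist['names']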

Visualizing the YOLOv5 Output

class Viewer[source]

Viewer(weight_path, class_arr)

Connects a set of pre-trained weights to an image. Also incorporates the human-friendly class labels, rather than dealing with each label's raw index.
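Note that in this dataset the class indices are offset from the digits they represent: index 0 maps to '1' and index 9 maps to '0'. A lookup on the first prediction from the output above shows why the human-friendly mapping matters (a sketch, reusing classlist from earlier):

# the first field of a prediction string is the raw class index
pred = '4 0.329688 0.385417 0.021875 0.0902778 0.809661'
class_index = int(pred.split()[0])
print(classlist['names'][class_index])  # prints '5', not '4'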

Set up a Viewer object to investigate the behavior of a model:

v = Viewer([weight_path], classlist['names'])
Fusing layers... 
%matplotlib inline
# plot each sample image with its predicted labels overlaid
results = []
for image in samples:
  result = v.plot_for(image)
  results.append(result)
(Annotated plots rendered in the notebook; output omitted here.)