Welcome to WuJiGu Developer Q&A Community for programmer and developer-Open, Learning and Share

python - Making an automatic annotation tool

I am trying to make an automatic annotation tool for YOLO object detection that uses a previously trained model to find the detections, and I managed to put together some code, but I am stuck a little. As far as I know, this is the annotation format YOLO expects:

18 0.154167 0.431250 0.091667 0.612500

With my code I get:

0.5576068858305613, 0.5410404056310654, -0.7516528169314066, 0.33822181820869446

I am not sure why the third number is negative, or whether I need to shorten my floats. I will post the code below; after completing this project I will share the whole tool in case someone wants to use it.

def convert(size, box):
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0
    y = (box[2] + box[3]) / 2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return (x, y, w, h)

The function above converts the coordinates to YOLO format. For size you need to pass (w, h), and for box you need to pass (x, x + w, y, y + h).
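As a quick sanity check of that calling convention (all numbers made up): passing genuine corner coordinates produces sensible normalized values, which is why the function misbehaves when fed YOLO's raw center/size output instead:

```python
def convert(size, box):
    # expects size = (image_width, image_height) and
    # box = (xmin, xmax, ymin, ymax) in pixels
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0   # center x in pixels
    y = (box[2] + box[3]) / 2.0   # center y in pixels
    w = box[1] - box[0]           # box width in pixels
    h = box[3] - box[2]           # box height in pixels
    return (x * dw, y * dh, w * dw, h * dh)

# hypothetical corner box: x=128, y=192, w=64, h=96 on a 640x480 image
x, y, w, h = convert((640, 480), (128, 128 + 64, 192, 192 + 96))
print(tuple(round(v, 6) for v in (x, y, w, h)))  # → (0.25, 0.5, 0.1, 0.2)
```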

net = cv2.dnn.readNetFromDarknet(config_path, weights_path)
# path_name = "images/city_scene.jpg"
path_name = image
image = cv2.imread(path_name)
file_name = os.path.basename(path_name)
filename, ext = file_name.split(".")

h, w = image.shape[:2]
# create 4D blob
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)

# sets the blob as the input of the network
net.setInput(blob)

# get all the layer names
ln = net.getLayerNames()
ln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]  # flatten() handles both OpenCV 4.x return shapes
# feed forward (inference) and get the network output
# measure how much it took in seconds
start = time.perf_counter()
layer_outputs = net.forward(ln)
time_took = time.perf_counter() - start
print(f"Time took: {time_took:.2f}s")

boxes, confidences, class_ids = [], [], []
b = []
a = []
# loop over each of the layer outputs
for output in layer_outputs:
    # loop over each of the object detections
    for detection in output:
        # extract the class id (label) and confidence (as a probability)
        # of the current object detection
        scores = detection[5:]
        class_id = np.argmax(scores)
        confidence = scores[class_id]
        # discard weak predictions by ensuring the detected
        # probability is greater than the minimum probability
        if confidence > CONFIDENCE:
            # scale the bounding box coordinates back relative to the
            # size of the image, keeping in mind that YOLO actually
            # returns the center (x, y)-coordinates of the bounding
            # box followed by the box's width and height
            box = detection[0:4] * np.array([w, h, w, h])
            (centerX, centerY, width, height) = box.astype("float")

            # use the center (x, y)-coordinates to derive the top
            # and left corner of the bounding box
            x = int(centerX - (width / 2))
            y = int(centerY - (height / 2))
            a = w, h
            convert(a, box)
            boxes.append([x, y, int(width), int(height)])

            confidences.append(float(confidence))
            class_ids.append(class_id)

idxs = cv2.dnn.NMSBoxes(boxes, confidences, SCORE_THRESHOLD, IOU_THRESHOLD)

font_scale = 1
thickness = 1


# ensure at least one detection exists
if len(idxs) > 0:
    # loop over the indexes we are keeping
    for i in idxs.flatten():
        # extract the bounding box coordinates
        x, y = boxes[i][0], boxes[i][1]
        w, h = boxes[i][2], boxes[i][3]
        # draw a bounding box rectangle and label on the image
        color = [int(c) for c in colors[class_ids[i]]]
        ba = w, h
        print(w, h)


        cv2.rectangle(image, (x, y), (x + w, y + h), color=color, thickness=thickness)
        text = "{}".format(labels[class_ids[i]])
        conf = "{:.3f}".format(confidences[i])
        int1, int2 = (x, y)
        print(text)
        # print(convert(ba, box))
        # b = w, h
        # print(convert(b, boxes))
        # print(convert(a, box))  # coordinates
        ivan = str(int1)
        b.append([text, ivan])
        # a.append(float(conf))
        # print(a)



        # calculate text width & height to draw the transparent boxes as background of the text
        (text_width, text_height) = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX,
                                                    fontScale=font_scale, thickness=thickness)[0]
        text_offset_x = x
        text_offset_y = y - 5
        box_coords = ((text_offset_x, text_offset_y), (text_offset_x + text_width + 2, text_offset_y - text_height))
        overlay = image.copy()
        cv2.rectangle(overlay, box_coords[0], box_coords[1], color=color, thickness=cv2.FILLED)
        # add opacity (transparency) to the box
        image = cv2.addWeighted(overlay, 0.6, image, 0.4, 0)
        # now put the text (label: confidence %)
        cv2.putText(image, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=font_scale, color=(0, 0, 0), thickness=thickness)


    text = "{}".format(labels[class_ids[i]])
    conf = "{:.3f}".format(confidences[i])


1 Answer


The problem is the indexing in your function. YOLO's raw output already stores the box as center/size, not corners, so `w = box[1] - box[0]` computes `center_y - center_x`, which is where the negative value comes from. The output layout is:

box[0]=>center x
box[1]=>center y
box[2]=>width of your bbox
box[3]=>height of your bbox

and, according to the documentation, YOLO labels look like this:

<object-class> <x> <y> <width> <height>

where x and y are the center of the bounding box, so your code should be:

def convert(size, box):
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = box[0] * dw
    y = box[1] * dh
    w = box[2] * dw
    h = box[3] * dh
    return (x, y, w, h)
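To see the fix in action (made-up numbers, assuming a 640×480 image and a detection already scaled back to pixels): the corrected function yields the normalized values directly, and rounding to six decimals when writing the label line matches the format shown in the question, which also answers the concern about shortening the floats:

```python
def convert(size, box):
    # size = (image_width, image_height), box = (cx, cy, w, h) in pixels
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = box[0] * dw
    y = box[1] * dh
    w = box[2] * dw
    h = box[3] * dh
    return (x, y, w, h)

# hypothetical detection: center (160, 240), size 64x96 on a 640x480 image
x, y, w, h = convert((640, 480), (160, 240, 64, 96))
class_id = 18  # example class index

# YOLO label line: "<object-class> <x> <y> <width> <height>", six decimals
line = f"{class_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"
print(line)  # → 18 0.250000 0.500000 0.100000 0.200000
```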
