The preprocess_image() function returns a cv2.UMat value, which needs to be reshaped from 3 dimensions (h, ch, w) to 4 dimensions (h, ch, w, 1). So I need to convert it to a numpy array, or, if possible, reshape the cv2.UMat variable directly and convert it to a PyTorch tensor that can be assigned to reshaped_images_tensor.

img_w = 640
img_h = 640
img_ch = 3

umat_img = cv2.UMat(img)
display_one(umat_img, "RESPONSE")  # function created by me to display the image

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

with torch.no_grad():
    processed_img = preprocess_image(umat_img, model_image_size=(img_h, img_ch, img_w))
    # ___________write YOUR CODE here________
    reshaped_images_tensor = torch.from_numpy(processed_img.reshape(img_h, img_ch, img_w, 1)).float().to(device)
    # images_tensor.reshape(img_h, img_ch, img_w, 1)
    outputs = model(reshaped_images_tensor)
    _, predicted = torch.max(outputs, 1)
    c = predicted.squeeze()
    output_probability(predicted, processed_img, umat_img)

if ord('q') == cv2.waitKey(10):
    exit(0)
3 Answers
一只名叫tom的猫
The current answer is technically correct. For clarity, and in case some of the links change, you can do this:

import cv2

img_array = umat_img.get()
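To tie this back to the question, here is a minimal sketch of the full UMat → numpy → tensor path. It uses a dummy frame in place of the real preprocessing output and omits the model call; the shapes and variable names follow the question, everything else is an assumption for illustration.

import cv2
import numpy as np
import torch

img_h, img_w, img_ch = 640, 640, 3

# Dummy frame standing in for the real preprocessing output (assumption).
img = np.random.rand(img_h, img_w, img_ch).astype(np.float32)
umat_img = cv2.UMat(img)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# A cv2.UMat has no .reshape(), so copy the data back to a host numpy array first.
img_array = umat_img.get()  # ndarray, shape (640, 640, 3)

# Build the 4-D tensor the question asks for: (h, ch, w, 1).
# Note: reshape only relabels the memory layout; if the channel axis must
# actually be moved, use .permute() instead.
reshaped_images_tensor = (
    torch.from_numpy(img_array)
    .reshape(img_h, img_ch, img_w, 1)
    .float()
    .to(device)
)

print(reshaped_images_tensor.shape)  # torch.Size([640, 3, 640, 1])

The resulting tensor can then be fed to the model inside the torch.no_grad() block from the question.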