The code and video I used for this project are available on GitHub:
Let’s explore a toy problem where I recorded a video of a ball thrown vertically into the air. The goal is to track the ball in the video and plot its position p(t), velocity v(t) and acceleration a(t) over time.
Let’s define our reference frame to be the camera, and for simplicity we only track the vertical position of the ball in the image. We expect the position to be a parabola, the velocity to decrease linearly and the acceleration to be constant.
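For reference, these expectations follow directly from the constant-acceleration kinematics of free fall, with initial position p₀, initial velocity v₀ and gravitational acceleration g:

p(t) = p₀ + v₀·t − ½·g·t²
v(t) = v₀ − g·t
a(t) = −g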
Ball Segmentation
As a first step, we need to identify the ball in each frame of the video sequence. Since the camera stays static, an easy way to detect the ball is to use a background subtraction model, combined with a color model to remove the hand in the frame.
First let’s get our video clip displayed with a simple loop using VideoCapture from OpenCV. We simply restart the video clip once it has reached its end. We also make sure to play back the video at the original frame rate by calculating the sleep_time in milliseconds based on the FPS of the video. Also make sure to release the resources at the end and close the windows.
import cv2

cap = cv2.VideoCapture("ball.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS))

while True:
    ret, frame = cap.read()
    if not ret:
        # restart the clip once it has reached its end
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue

    cv2.imshow("Frame", frame)

    # play back at the original frame rate
    sleep_time = 1000 // fps
    key = cv2.waitKey(sleep_time) & 0xFF
    if key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
Let’s first work on extracting a binary segmentation mask for the ball. This essentially means that we want to create a mask that is active for pixels of the ball and inactive for all other pixels. To do this, I’ll combine two masks: a motion mask and a color mask. The motion mask extracts the moving parts and the color mask mainly eliminates the hand in the frame.
For the color filter, we can convert the image to the HSV color space and select a specific hue range (20–100) that contains the green colors of the ball but no skin color tones. I don’t filter on the saturation or brightness values, so we can use the full range (0–255).
# filter based on color
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask_color = cv2.inRange(hsv, (20, 0, 0), (100, 255, 255))
To create a motion mask we can use a simple background subtraction model. We use the first frame of the video for the background by setting the learning rate to 1. In the loop, we apply the background model to get the foreground mask, but don’t integrate new frames into it by setting the learning rate to 0.
...

# initialize background model with the first frame
bg_sub = cv2.createBackgroundSubtractorMOG2(varThreshold=50, detectShadows=False)
ret, frame0 = cap.read()
if not ret:
    print("Error: cannot read video file")
    exit(1)
bg_sub.apply(frame0, learningRate=1.0)

while True:
    ...
    # filter based on motion
    mask_fg = bg_sub.apply(frame, learningRate=0)
In the next step, we can combine the two masks and apply an opening morphology to get rid of the small noise, and we end up with a perfect segmentation of the ball.
# combine both masks
mask = cv2.bitwise_and(mask_color, mask_fg)
mask = cv2.morphologyEx(
    mask, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13, 13))
)
Tracking the Ball
The only thing we’re left with is our ball in the mask. To track the center of the ball, I first extract the contour of the ball and then take the center of its bounding box as reference point. In case some noise makes it through our mask, I filter the detected contours by size and only look at the largest one.
# find the largest contour, corresponding to the ball we want to track
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(contours) > 0:
    largest_contour = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest_contour)
    center = (x + w // 2, y + h // 2)
We can also add some annotations to our frame to visualize our detection. I’m going to draw two circles, one for the center and one for the perimeter of the ball.
cv2.circle(frame, center, 30, (255, 0, 0), 2)
cv2.circle(frame, center, 2, (255, 0, 0), 2)
To keep track of the ball position, we can use a list. Every time we detect the ball, we simply append the center position to the list. We can also visualize the trajectory by drawing lines between each of the segments in the tracked position list.
tracked_pos = []

while True:
    ...
    if len(contours) > 0:
        ...
        tracked_pos.append(center)

    # draw trajectory
    for i in range(1, len(tracked_pos)):
        cv2.line(frame, tracked_pos[i - 1], tracked_pos[i], (255, 0, 0), 1)
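To see how all of these pieces fit together, here is a condensed sketch of the full segmentation and tracking loop, built only from the snippets above (simplified to stop at the end of the clip instead of restarting it):

import cv2

cap = cv2.VideoCapture("ball.mp4")
fps = int(cap.get(cv2.CAP_PROP_FPS))

# background model initialized with the first frame
bg_sub = cv2.createBackgroundSubtractorMOG2(varThreshold=50, detectShadows=False)
ret, frame0 = cap.read()
bg_sub.apply(frame0, learningRate=1.0)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13, 13))
tracked_pos = []

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # color mask + motion mask, combined and cleaned up
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask_color = cv2.inRange(hsv, (20, 0, 0), (100, 255, 255))
    mask_fg = bg_sub.apply(frame, learningRate=0)
    mask = cv2.bitwise_and(mask_color, mask_fg)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # track the largest contour and annotate the frame
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if len(contours) > 0:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        center = (x + w // 2, y + h // 2)
        tracked_pos.append(center)
        cv2.circle(frame, center, 30, (255, 0, 0), 2)

    for i in range(1, len(tracked_pos)):
        cv2.line(frame, tracked_pos[i - 1], tracked_pos[i], (255, 0, 0), 1)

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1000 // fps) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()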
Creating the Plot
Now that we can track the ball, let’s start exploring how we can plot the signal using matplotlib. As a first step, we create the final plot at the end of our video, and then in a second step we worry about how to animate it in real time. To show the position, velocity and acceleration we can use three horizontally aligned subplots:
import matplotlib.pyplot as plt

fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(10, 2), dpi=100)

axs[0].set_title("Position")
axs[0].set_ylim(0, 700)
axs[1].set_title("Velocity")
axs[1].set_ylim(-200, 200)
axs[2].set_title("Acceleration")
axs[2].set_ylim(-30, 10)

for ax in axs:
    ax.set_xlim(0, 20)
    ax.grid(True)
We’re only interested in the y position in the image (array index 1), and to get a zero-offset position plot, we can subtract the first position.
import numpy as np

pos0 = tracked_pos[0][1]
pos = np.array([pos0 - p[1] for p in tracked_pos])
For the velocity we can use the difference in position as an approximation, and for the acceleration we can use the difference of the velocity.
vel = np.diff(pos)
acc = np.diff(vel)
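Note that np.diff takes differences between consecutive frames, so these signals are in pixels per frame and pixels per frame squared rather than physical units. With the frame spacing Δt = 1/fps, the finite differences approximate the true derivatives as

vel[n] = pos[n+1] − pos[n] ≈ v(tₙ)·Δt
acc[n] = vel[n+1] − vel[n] ≈ a(tₙ)·Δt²

which is fine for our purposes, since we only care about the shape of the curves.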
And now we can plot these three values:
axs[0].plot(range(len(pos)), pos, c="b")
axs[1].plot(range(len(vel)), vel, c="b")
axs[2].plot(range(len(acc)), acc, c="b")

plt.show()
Animating the Plot
Now on to the fun part: we want to make this plot dynamic! Since we’re working in an OpenCV GUI loop, we cannot directly use the show function from matplotlib, as this would just block the loop and our program wouldn’t run. Instead we need to make use of some trickery ✨
The main idea is to draw the plots in memory into a buffer and then display this buffer in our OpenCV window. By manually calling the draw function of the canvas, we can force the figure to be rendered to a buffer. We can then get this buffer and convert it to an array. Since the buffer is in RGBA format, but OpenCV uses BGR, we need to convert the color order.
fig.canvas.draw()

buf = fig.canvas.buffer_rgba()
plot = np.asarray(buf)
plot = cv2.cvtColor(plot, cv2.COLOR_RGBA2BGR)
Make sure that the axs.plot calls are now inside the frame loop:
while True:
    ...
    axs[0].plot(range(len(pos)), pos, c="b")
    axs[1].plot(range(len(vel)), vel, c="b")
    axs[2].plot(range(len(acc)), acc, c="b")
    ...
Now we can simply display the plot using the imshow function from OpenCV.
cv2.imshow("Plot", plot)
And voilà, you get your animated plot! However, you’ll notice that the performance is quite low. Re-drawing the full plot every frame is quite expensive. To improve the performance, we need to make use of blitting. This is an advanced rendering technique that draws the static parts of the plot into a background image and only re-draws the changing foreground elements. To set this up, we first need to define a reference to each of our three plots before the frame loop.
pl_pos = axs[0].plot([], [], c="b")[0]
pl_vel = axs[1].plot([], [], c="b")[0]
pl_acc = axs[2].plot([], [], c="b")[0]
Then we need to draw the background of the figure once before the loop and store the background of each axis.
fig.canvas.draw()
bg_axs = [fig.canvas.copy_from_bbox(ax.bbox) for ax in axs]
In the loop, we can now update the data for each of the plots. Then for each subplot we need to restore the region’s background, draw the new plot and then call the blit function to apply the changes.
# update plot data
pl_pos.set_data(range(len(pos)), pos)
pl_vel.set_data(range(len(vel)), vel)
pl_acc.set_data(range(len(acc)), acc)

# blit position
fig.canvas.restore_region(bg_axs[0])
axs[0].draw_artist(pl_pos)
fig.canvas.blit(axs[0].bbox)

# blit velocity
fig.canvas.restore_region(bg_axs[1])
axs[1].draw_artist(pl_vel)
fig.canvas.blit(axs[1].bbox)

# blit acceleration
fig.canvas.restore_region(bg_axs[2])
axs[2].draw_artist(pl_acc)
fig.canvas.blit(axs[2].bbox)
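Put together, the per-frame plotting then looks roughly like this (a condensed sketch, assuming the figure, the pl_* line references and the bg_axs backgrounds set up above):

while True:
    ...
    # update the line data, then restore/draw/blit each subplot
    pl_pos.set_data(range(len(pos)), pos)
    fig.canvas.restore_region(bg_axs[0])
    axs[0].draw_artist(pl_pos)
    fig.canvas.blit(axs[0].bbox)
    ...  # same for pl_vel and pl_acc

    # grab the updated canvas buffer and hand it to OpenCV
    plot = cv2.cvtColor(np.asarray(fig.canvas.buffer_rgba()), cv2.COLOR_RGBA2BGR)
    cv2.imshow("Plot", plot)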
And there we go: the plotting is sped up and the performance has drastically improved.
In this post, you learned how to apply simple computer vision techniques to extract a moving foreground object and track its trajectory. We then created an animated plot using matplotlib and OpenCV. The plotting is demonstrated on a toy example video with a ball being thrown vertically into the air. However, the tools and techniques used in this project are useful for all kinds of tasks and real-world applications! The full source code is available on my GitHub. I hope you learned something today, happy coding and take care!