



In this tutorial, you will learn how to use OpenCV and Deep Learning to detect vehicles in video streams, track them, and apply speed estimation to determine the MPH/KPH of the moving vehicle.

This tutorial is inspired by PyImageSearch readers who have emailed me asking for speed estimation computer vision solutions.

As pedestrians taking the dog for a walk, escorting our kids to school, or marching to our workplace in the morning, we've all experienced dangerous, fast-moving vehicles operated by inattentive drivers that almost mow us down.

Many of us live in apartment complexes or housing neighborhoods where ignorant drivers disregard safety and zoom by, going way too fast.

We feel nearly powerless. These drivers disregard speed limits, crosswalk areas, school zones, and "children at play" signs altogether. When there is a speed bump, they speed up almost as if they are trying to catch some air!

Is there anything we can do?

In most cases, the answer is unfortunately "no" — we have to look out for ourselves and our families by being careful as we walk in the neighborhoods we live in.

But what if we could catch these reckless neighborhood miscreants in action and provide video evidence of the vehicle, speed, and time of day to local authorities?

In fact, we can.

In this tutorial, we'll build an OpenCV project that:

  1. Detects vehicles in video using a MobileNet SSD and Intel Movidius Neural Compute Stick (NCS)
  2. Tracks the vehicles
  3. Estimates the speed of a vehicle and stores the evidence in the cloud (specifically in a Dropbox folder).

Once in the cloud, you can provide the shareable link to anyone you choose. I sincerely hope it will make a difference in your neighborhood.

Let's take a ride of our own and learn how to estimate vehicle speed using a Raspberry Pi and Intel Movidius NCS.

Note: Today's tutorial is actually a chapter from my new book, Raspberry Pi for Computer Vision. This book shows you how to push the limits of the Raspberry Pi to build real-world Computer Vision, Deep Learning, and OpenCV projects. Be sure to pick up a copy of the book if you enjoy today's tutorial.


OpenCV Vehicle Detection, Tracking, and Speed Estimation

In this tutorial, we will review the concept of VASCAR, a method police use for measuring the speed of moving objects using distance and timestamps. We'll also see how there is a human component that leads to error and how our method can correct for that human error.

From there, we'll design our computer vision system to collect timestamps of cars in order to measure speed (with a known distance). By eliminating the human component, our system will rely on our knowledge of physics and our software development skills.

Our system relies on a combination of object detection and object tracking to find cars in a video stream at different waypoints. We'll briefly review these concepts so that we can build out our OpenCV speed estimation driver script.

Finally, we'll deploy and test our system. Calibration is necessary for all speed measurement devices (including RADAR/LIDAR) — ours is no different. We'll learn to conduct drive tests and how to calibrate our system.

What is VASCAR and how is it used to measure speed?

Figure 1: Visual Average Speed Computer and Recorder (VASCAR) devices allow police to measure speed without RADAR or LIDAR, both of which can be detected. We will use a VASCAR-esque approach with OpenCV to detect vehicles, track them, and estimate their speeds without relying on the human component.

Visual Average Speed Computer and Recorder (VASCAR) is a method for calculating the speed of vehicles — it does not rely on RADAR or LIDAR, but it borrows from those acronyms. Instead, VASCAR is a simple timing device relying on the following equation:
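The equation itself (referred to as Equation 1.1 throughout this post) appeared as an image in the original layout; written out, it is simply

\text{speed} = \frac{\text{distance}}{\text{time}} \qquad \text{(Equation 1.1)}

As a quick worked example, a car that covers 14.94 meters in 2.0 seconds is traveling roughly 7.47 m/s, or about 16.7 MPH.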

Police use VASCAR where RADAR and LIDAR are illegal or when they don't want to be detected by RADAR/LIDAR detectors.

In order to use the VASCAR method, police must know the distance between two fixed points on the road (such as signs, lines, trees, bridges, or other reference points).

  • When a vehicle passes the first reference point, they press a button to start the timer.
  • When the vehicle passes the second reference point, the timer is stopped.

The speed is automatically computed, as the computer already knows the distance per Equation 1.1.

Speed measured by VASCAR is severely limited by the human factor.

For example, what if the police officer has poor eyesight or poor reaction time?

If they press the button late (first reference point) and then early (second reference point), your speed will be calculated as faster than you are actually going, since the time component is smaller.

If you are ever issued a ticket by a police officer and it says VASCAR on it, then you have a very good chance of getting out of the ticket in court. You can (and should) fight it. Be prepared with Equation 1.1 above and to explain how significant the human component is.

Our project relies on a VASCAR approach, but with four reference points. We will average the speed between all four points with a goal of having a better estimate of the speed. Our system is also dependent upon the distance and time components.
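To make that four-point idea concrete, here is a minimal, self-contained sketch (not the project's driver script) that averages the three segment speeds computed from hypothetical timestamps at waypoints A through D, assuming the measured span is split evenly across the three segments:

# a minimal sketch of the four-waypoint averaging idea (hypothetical values,
# not the actual driver script covered later in this post)
from datetime import datetime, timedelta

# pretend timestamps recorded as a car crosses waypoints A, B, C, and D
t0 = datetime.now()
timestamps = {"A": t0,
              "B": t0 + timedelta(seconds=0.5),
              "C": t0 + timedelta(seconds=1.0),
              "D": t0 + timedelta(seconds=1.5)}

# assume a 16 m roadway span split evenly across the three segments
segmentDistance = 16 / 3.0

# compute a speed estimate (in meters/second) for each of the three segments
speeds = []
for (p1, p2) in [("A", "B"), ("B", "C"), ("C", "D")]:
    elapsed = (timestamps[p2] - timestamps[p1]).total_seconds()
    speeds.append(segmentDistance / elapsed)

# average the three estimates and convert meters/second to MPH
avgSpeed = sum(speeds) / len(speeds)
print("{:.2f} m/s = {:.2f} MPH".format(avgSpeed, avgSpeed * 2.23694))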

For further reading about VASCAR, please refer to the VASCAR Wikipedia article.

Configuring your Raspberry Pi 4 + OpenVINO environment

Figure 2: Configuring OpenVINO on your Raspberry Pi for detecting/tracking vehicles and measuring vehicle speed.

This tutorial requires a Raspberry Pi 4B and a Movidius NCS2 (or higher once faster versions are released in the future). Lower-end Raspberry Pi and NCS models are simply not fast enough. Another option is to use a capable laptop/desktop without OpenVINO altogether.

Configuring your Raspberry Pi 4B + Intel Movidius NCS for this project is admittedly challenging.

I suggest you (1) pick up a copy of Raspberry Pi for Computer Vision, and (2) flash the included pre-configured .img to your microSD. The .img that comes included with the book is worth its weight in gold.

For the stubborn few who wish to configure their Raspberry Pi 4 + OpenVINO on their own, here is a brief guide:

  1. Head to my BusterOS install guide and follow all instructions to create an environment named cv . Ensure that you use an RPi 4B model (either 1GB, 2GB, or 4GB).
  2. Head to my OpenVINO installation guide and create a second environment named openvino . Be sure to download the latest OpenVINO and not an older version.

At this point, your RPi will have both a normal OpenCV environment as well as an OpenVINO-OpenCV environment. You will use the openvino environment for this tutorial.

Now, simply plug your NCS2 into a blue USB 3.0 port (for maximum speed) and follow along for the rest of the tutorial.

Caveats:

  • Some versions of OpenVINO struggle to read .mp4 videos. This is a known problem that PyImageSearch has reported to the Intel team. Our preconfigured .img includes a fix — Abhishek Thanki edited the source code and compiled OpenVINO from source. This blog post is long enough as is, so I cannot include the compile-from-source instructions. If you encounter this issue, please encourage Intel to fix the problem, and either (A) compile from source, or (B) pick up a copy of Raspberry Pi for Computer Vision and use the pre-configured .img.
  • We will add to this list if we discover other caveats.

Project Structure

Let's review our project structure:

|-- config
|   |-- config.json
|-- pyimagesearch
|   |-- utils
|   |   |-- __init__.py
|   |   |-- conf.py
|   |-- __init__.py
|   |-- centroidtracker.py
|   |-- trackableobject.py
|-- sample_data
|   |-- cars.mp4
|-- output
|   |-- log.csv
|-- MobileNetSSD_deploy.caffemodel
|-- MobileNetSSD_deploy.prototxt
|-- speed_estimation_dl_video.py
|-- speed_estimation_dl.py

Our config.json file holds all the project settings — we will review these configurations in the next section. Inside Raspberry Pi for Computer Vision, you'll find configuration files with most chapters. You can tweak each configuration to your needs. These come in the form of commented JSON or Python files. Using the json_minify package, comments are parsed out so that the JSON Python module can load the data as a Python dictionary.
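As a rough illustration of that loading step (the actual Conf utility class ships with the pyimagesearch module, so this is only a sketch of the same idea), stripping the comments and parsing the remainder looks like this:

# a sketch of how a commented JSON config can be loaded: strip the // comments
# with json_minify, then parse the remainder with the standard json module
import json
from json_minify import json_minify

with open("config/config.json", "r") as f:
    conf = json.loads(json_minify(f.read()))

print(conf["speed_limit"])  # -> 15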

We will be taking advantage of both the CentroidTracker and TrackableObject classes in this project. The centroid tracker is identical to previous people/vehicle counting projects in the Hobbyist Bundle (Chapters 19 and 20) and Hacker Bundle (Chapter 13). Our trackable object class, on the other hand, includes additional attributes that we will keep track of, including timestamps, positions, and speeds.

A sample video compilation of vehicles passing in front of my colleague Dave Hoffman's house is included (cars.mp4).

This video is provided for demo purposes; however, take note that you should not rely on video files for accurate speeds — the FPS of the video, in addition to the speed at which frames are read from the file, will impact speed readouts.

Videos like the one provided are great for ensuring that the program functions as intended, but again, accurate speed readings from video files are not likely — for accurate readings you should be utilizing a live video stream.

The output/ folder will store a log file, log.csv, which includes the timestamps and speeds of vehicles that have passed the camera.

Our pre-trained Caffe MobileNet SSD object detector (used to detect vehicles) files are included in the root of the project.

A testing script is included — speed_estimation_dl_video.py . It is identical to the live script, with the exception that it uses a prerecorded video file. Refer to this note:

Note: OpenCV cannot automatically throttle a video file framerate according to the true framerate. If you use speed_estimation_dl_video.py as well as the supplied cars.mp4 testing file, keep in mind that the speeds reported will be inaccurate. For accurate speeds, you must set up the full experiment with a camera and have real cars drive by. Refer to the next section, "Calibrating for Accuracy", for a real live demo in which a screencast was recorded of the live system in action.

The driver script, speed_estimation_dl.py, interacts with the live video stream, the object detector, and calculates the speeds of vehicles using the VASCAR approach. It is one of the longer scripts we cover in Raspberry Pi for Computer Vision.

Speed Estimation Config file

When I'm working on projects involving many configurable constants, as well as input/output files and directories, I like to create a separate configuration file.

In some cases I use JSON, and in other cases Python files. We could argue all day over which is easiest (JSON, YAML, XML, .py, etc.), but for most projects inside of Raspberry Pi for Computer Vision we use either a Python or JSON configuration in place of a lengthy list of command line arguments.

Let's review config.json, our JSON configuration settings file:

{
    // maximum consecutive frames a given object is allowed to be
    // marked as "disappeared" until we need to deregister the object
    // from tracking
    "max_disappear": 10,

    // maximum distance between centroids to associate an object --
    // if the distance is larger than this maximum distance we'll
    // start to mark the object as "disappeared"
    "max_distance": 175,

    // number of frames to perform object tracking instead of object
    // detection
    "track_object": 4,

    // minimum confidence
    "confidence": 0.4,

    // frame width in pixels
    "frame_width": 400,

    // dictionary holding the different speed estimation columns
    "speed_estimation_zone": {"A": 120, "B": 160, "C": 200, "D": 240},

    // real world distance in meters
    "distance": 16,

    // speed limit in mph
    "speed_limit": 15,

The "max_disappear" and "max_distance" variables are used for centroid tracking and object association:

  • The "max_disappear" frame count signals to our centroid tracker when to mark an object as disappeared (Line 5).
  • The "max_distance" value is the maximum Euclidean distance in pixels for which we'll associate object centroids (Line 10). If centroids exceed this distance we mark the object as disappeared.

Our "track_object" value represents the number of frames to perform object tracking rather than object detection (Line 14).

Performing detection on every frame would be too computationally expensive for the RPi. Instead, we use an object tracker to lessen the load on the Pi. We'll then intermittently perform object detection every N frames to re-associate objects and improve our tracking.

The "confidence" value is the probability threshold for object detection with MobileNet SSD. Detect objects (i.east. cars, trucks, buses, etc.) that don't come across the conviction threshold are ignored (Line 17).

Each input frame will be resized to a "frame_width" of 400 (Line 20).

As mentioned previously, we have four speed estimation zones. Line 23 holds a dictionary of the frame's columns (i.e. x-pixels) separating the zones. These columns are obviously dependent upon the "frame_width".
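For reference, the sample values shown above (120, 160, 200, and 240 at a 400-pixel frame width) sit at 30%, 40%, 50%, and 60% of the frame width, so if you change "frame_width" you can regenerate the columns with a quick calculation like this (a sketch, not part of the project code):

# regenerate the speed estimation zone columns as fixed fractions of the frame
# width (the sample config's 120/160/200/240 at width 400 are 30-60%)
frame_width = 400
fractions = {"A": 0.30, "B": 0.40, "C": 0.50, "D": 0.60}
zones = {point: int(frame_width * frac) for (point, frac) in fractions.items()}
print(zones)  # {'A': 120, 'B': 160, 'C': 200, 'D': 240}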

Figure 3: The camera's FOV is measured at the roadside carefully. Often calibration is required. Refer to the "Calibrating for Accuracy" section to learn about the calibration process for neighborhood speed estimation and vehicle tracking with OpenCV.

Line 26 is the most important value in this configuration. You will have to physically measure the "distance" on the road from one side of the frame to the other side.

It will be easier if you have a helper to make the measurement. Have the helper watch the screen and tell you when you are standing at the very edge of the frame. Put the tape down on the ground at that point. Stretch the tape to the other side of the frame until your helper tells you that they see you at the very edge of the frame in the video stream. Take note of the distance in meters — all your calculations will be dependent on this value.

As shown in Figure 3, there are 49 feet between the edges of where cars will travel in the frame relative to the positioning of my camera. The conversion of 49 feet to meters is 14.94 meters.
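That conversion is just the standard feet-to-meters factor:

49 \text{ ft} \times 0.3048 \tfrac{\text{m}}{\text{ft}} \approx 14.94 \text{ m}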

So why does Line 26 of our configuration reflect "distance": 16?

The value has been tuned for system calibration. See the "Calibrating for Accuracy" section to learn how to test and calibrate your system. Secondly, had the measurement been made at the center of the street (i.e. further from the camera), the distance would have been longer. The measurement was taken next to the street by Dave Hoffman so he would not get run over by a car!

Our speed_limit in this example is 15mph (Line 29). Vehicles traveling less than this speed will not be logged. Vehicles exceeding this speed will be logged. If you need all speeds to be logged, you can set the value to 0.

The remaining configuration settings are for displaying frames to our screen, uploading files to the cloud (i.e., Dropbox), as well as output file paths:

    // flag indicating if the frame must be displayed
    "display": true,

    // path to the object detection model
    "model_path": "MobileNetSSD_deploy.caffemodel",

    // path to the prototxt file of the object detection model
    "prototxt_path": "MobileNetSSD_deploy.prototxt",

    // flag used to check if dropbox is to be used and dropbox access
    // token
    "use_dropbox": false,
    "dropbox_access_token": "YOUR_DROPBOX_APP_ACCESS_TOKEN",

    // output directory and csv file name
    "output_path": "output",
    "csv_name": "log.csv"
}

If you set up "display" to true on Line 32, an OpenCV window is displayed on your Raspberry Pi desktop.

Lines 35-38 specify our Caffe object detection model and prototxt paths.

If you elect to "use_dropbox", and so you must set the value on Line 42 to truthful and fill up in your access token on Line 43. Videos of vehicles passing the camera will be logged to Dropbox. Ensure that you accept the quota for the videos!

Lines 46 and 47 specify the "output_path" for the log file.

Camera Positioning and Constants

Figure 4: This OpenCV vehicle speed estimation project assumes the camera is aimed perpendicular to the road. Timestamps of a vehicle are collected at waypoints ABCD or DCBA. From there, our speed = distance / time equation is put to use to calculate three speeds among the four waypoints. Speeds are averaged together and converted to km/hr and miles/hr. As you can see, the distance measurement is different depending on where (edges or centerline) the tape is laid on the ground/road. We will account for this by calibrating our system in the "Calibrating for Accuracy" section.

Figure 4 shows an overhead view of how the project is laid out. In the case of Dave Hoffman's house, the RPi and camera are sitting in his road-facing window. The measurement for the "distance" was taken at the side of the road on the far edges of the FOV lines for the camera. Points A, B, C, and D mark the columns in a frame. They should be equally spaced in your video frame (denoted by the "speed_estimation_zone" pixel columns in the configuration).

Cars pass through the FOV in either direction while the MobileNet SSD object detector, combined with an object tracker, assists in grabbing timestamps at points ABCD (left-to-right) or DCBA (right-to-left).

Centroid Tracker

Figure 5: Top-left: To build a simple object tracking algorithm using centroid tracking, the first step is to accept bounding box coordinates from an object detector and use them to compute centroids. Top-right: In the next input frame, three objects are now present. We need to compute the Euclidean distances between each pair of original centroids (circles) and new centroids (squares). Bottom-left: Our simple centroid object tracking method has associated objects with minimized object distances. What do we do about the object in the bottom left though? Bottom-right: We have a new object that wasn't matched with an existing object, so it is registered as object ID #3.

Object tracking via centroid association is a concept we have already covered on PyImageSearch; however, let's take a moment to review.

A simple object tracking algorithm relies on keeping track of the centroids of objects.

Typically an object tracker works hand-in-hand with a less-efficient object detector. The object detector is responsible for localizing an object. The object tracker is responsible for keeping track of which object is which by assigning and maintaining identification numbers (IDs).

The object tracking algorithm we're implementing is called centroid tracking as it relies on the Euclidean distance between (1) existing object centroids (i.e., objects the centroid tracker has already seen before) and (2) new object centroids between subsequent frames in a video. The centroid tracking algorithm is a multi-step process (a minimal sketch of the association step follows the list below). The five steps include:

  1. Step #1: Accept bounding box coordinates and compute centroids
  2. Step #2: Compute Euclidean distance between new bounding boxes and existing objects
  3. Step #3: Update (x, y)-coordinates of existing objects
  4. Step #4: Register new objects
  5. Step #5: Deregister old objects
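Here is a minimal, self-contained sketch of the association step (Steps #2 and #3) using hypothetical centroids — the real CentroidTracker (covered in the resources below) also handles registration, deregistration, and the "disappeared" bookkeeping:

# a minimal sketch of centroid association (Steps #2 and #3) with hypothetical
# centroids; the real CentroidTracker also registers/deregisters objects and
# tracks "disappeared" counts
import numpy as np
from scipy.spatial import distance as dist

# objects we are already tracking (ID -> centroid) and this frame's detections
existing = {0: (90, 200), 1: (250, 210)}
newCentroids = np.array([(95, 202), (305, 215), (252, 208)])

# pairwise Euclidean distances: rows = existing objects, cols = new centroids
D = dist.cdist(np.array(list(existing.values())), newCentroids)

# greedily match each existing object to its closest unclaimed new centroid
usedCols = set()
for row in D.min(axis=1).argsort():
    col = D[row].argmin()
    if col in usedCols:
        continue
    objectID = list(existing.keys())[row]
    existing[objectID] = tuple(int(v) for v in newCentroids[col])  # Step #3
    usedCols.add(col)

# any unmatched new centroid would be registered as a brand new object ID
unmatched = [tuple(int(v) for v in c) for (i, c) in enumerate(newCentroids)
    if i not in usedCols]
print(existing)   # -> {0: (95, 202), 1: (252, 208)}
print(unmatched)  # -> [(305, 215)]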

The CentroidTracker class is covered in the following resources on PyImageSearch:

  • Simple Object Tracking with OpenCV
  • OpenCV People Counter
  • Raspberry Pi for Computer Vision:
    • Creating a People/Footfall Counter — Chapter 19 of the Hobbyist Bundle
    • Building a Traffic Counter — Chapter 20 of the Hobbyist Bundle
    • Building a Neighborhood Vehicle Speed Monitor — Chapter 7 of the Hacker Bundle
    • Object Detection with the Movidius NCS — Chapter 13 of the Hacker Bundle

Tracking Objects for Speed Estimation with OpenCV

In order to track and calculate the speed of objects in a video stream, we need an easy way to store information regarding the object itself, including:

  • Its object ID.
  • Its previous centroids (so we can easily compute the direction the object is moving).
  • A dictionary of timestamps corresponding to each of the four columns in our frame.
  • A dictionary of x-coordinate positions of the object. These positions reflect the actual position at which the timestamp was recorded so speed can accurately be calculated.
  • A "last point" boolean that serves as a flag to indicate that the object has passed the final waypoint (i.e. column) in the frame.
  • The calculated speed in MPH and KMPH. We calculate both, and the user can choose which he/she prefers to use by a small modification to the driver script.
  • A boolean to indicate if the speed has been estimated (i.e. calculated) yet.
  • A boolean indicating if the speed has been logged in the .csv log file.
  • The direction through the FOV the object is traveling (left-to-right or right-to-left).

To accomplish all of these goals we can define an instance of TrackableObject — open up the trackableobject.py file and insert the following code:

# import the necessary packages
import numpy as np

class TrackableObject:
    def __init__(self, objectID, centroid):
        # store the object ID, then initialize a list of centroids
        # using the current centroid
        self.objectID = objectID
        self.centroids = [centroid]

        # initialize the dictionaries to store the timestamp and
        # position of the object at various points
        self.timestamp = {"A": 0, "B": 0, "C": 0, "D": 0}
        self.position = {"A": None, "B": None, "C": None, "D": None}
        self.lastPoint = False

        # initialize the object speeds in MPH and KMPH
        self.speedMPH = None
        self.speedKMPH = None

        # initialize two booleans, (1) used to indicate if the
        # object's speed has already been estimated or not, and (2)
        # used to indicate if the object's speed has been logged or
        # not
        self.estimated = False
        self.logged = False

        # initialize the direction of the object
        self.direction = None

The TrackableObject constructor accepts an objectID and centroid. The centroids list will contain an object's centroid location history.

We will have multiple trackable objects — one for each car that is being tracked in the frame. Each object will have the attributes shown on Lines 8-29 (detailed above).

Lines 18 and 19 hold the speed in MPH and KMPH. We need a function to calculate the speed, so let's define the function now:

    def calculate_speed(self, estimatedSpeeds):
        # calculate the speed in KMPH and MPH
        self.speedKMPH = np.average(estimatedSpeeds)
        MILES_PER_ONE_KILOMETER = 0.621371
        self.speedMPH = self.speedKMPH * MILES_PER_ONE_KILOMETER

Line 33 calculates the speedKMPH attribute as an average of the three estimatedSpeeds between the four points (passed as a parameter to the function).

There are 0.621371 miles in one kilometer (Line 34). Knowing this, Line 35 calculates the speedMPH attribute.

Speed Estimation with Computer Vision and OpenCV

Figure 6: OpenCV vehicle detection, tracking, and speed estimation with the Raspberry Pi.

Before we begin working on our driver script, let's review our algorithm at a high level:

  • Our speed formula is speed = distance / time (Equation 1.1).
  • We have a known distance constant measured by a tape at the roadside. The camera will face the road perpendicular to the distance measurement, unobstructed by obstacles.
  • Meters per pixel are calculated by dividing the distance constant by the frame width in pixels (Equation 1.2).
  • Distance in pixels is calculated as the difference between the centroids as they pass by the columns for the zone (Equation 1.3). Distance in meters is then calculated for the particular zone (Equation 1.4).
  • Four timestamps (t) will be collected as the car moves through the FOV past the four waypoint columns of the video frame.
  • Three pairs of the four timestamps will be used to determine three delta t values.
  • We will calculate three speed values (as shown in the numerator of Equation 1.5) for each of the pairs of timestamps and estimated distances.
  • The three speed estimates will be averaged for an overall speed (Equation 1.5).
  • The speed is converted and made available in the TrackableObject class as speedMPH or speedKMPH. We will display speeds in miles per hour. Small changes to the script are required if you prefer to have the kilometers per hour logged and displayed — be sure to read the notes as you follow along in the tutorial.

The following equations represent our algorithm:
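(The original post rendered these equations as images; reconstructed from the descriptions above, they are approximately:)

\text{speed} = \frac{\text{distance}}{\text{time}} \qquad (1.1)

\text{MPP} = \frac{\text{distance constant (meters)}}{\text{frame width (pixels)}} \qquad (1.2)

d_{\text{pixels}}(i, j) = \lvert \text{position}_j - \text{position}_i \rvert \qquad (1.3)

d_{\text{meters}}(i, j) = d_{\text{pixels}}(i, j) \times \text{MPP} \qquad (1.4)

\text{average speed} = \frac{1}{3} \sum_{(i,j) \in \{(A,B),(B,C),(C,D)\}} \frac{d_{\text{meters}}(i, j)}{\Delta t_{i,j}} \qquad (1.5)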

Now that we understand the methodology for calculating speeds of vehicles and we have defined the CentroidTracker and TrackableObject classes, let's work on our speed estimation driver script.

Open a new file named speed_estimation_dl.py and insert the following lines:

# import the necessary packages
from pyimagesearch.centroidtracker import CentroidTracker
from pyimagesearch.trackableobject import TrackableObject
from pyimagesearch.utils import Conf
from imutils.video import VideoStream
from imutils.io import TempFile
from imutils.video import FPS
from datetime import datetime
from threading import Thread
import numpy as np
import argparse
import dropbox
import imutils
import dlib
import time
import cv2
import os

Lines 2-17 handle our imports, including our CentroidTracker and TrackableObject for object tracking. The correlation tracker from Davis King's dlib is also part of our object tracking method. We'll use the dropbox API to store data in the cloud in a separate Thread so as not to interrupt the flow of the main thread of execution.

Let's implement the upload_file function now:

def upload_file(tempFile, client, imageID):
    # upload the image to Dropbox and cleanup the temporary image
    print("[INFO] uploading {}...".format(imageID))
    path = "/{}.jpg".format(imageID)
    client.files_upload(open(tempFile.path, "rb").read(), path)
    tempFile.cleanup()

Our upload_file function will run in one or more separate threads. It accepts the tempFile object, Dropbox client object, and imageID as parameters. Using these parameters, it builds a path and then uploads the file to Dropbox (Lines 22 and 23). From there, Line 24 then removes the temporary file from local storage.

Let's go ahead and load our configuration:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--conf", required=True,
    help="Path to the input configuration file")
args = vars(ap.parse_args())

# load the configuration file
conf = Conf(args["conf"])

Lines 27-33 parse the --conf command line argument and load the contents of the configuration into the conf dictionary.

We'll then initialize our pretrained MobileNet SSD CLASSES and Dropbox client if required:

# initialize the list of class labels MobileNet SSD was trained to
# detect
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]

# check to see if the Dropbox should be used
if conf["use_dropbox"]:
    # connect to dropbox and start the session authorization process
    client = dropbox.Dropbox(conf["dropbox_access_token"])
    print("[SUCCESS] dropbox account linked")

And from there, we'll load our object detector and initialize our video stream:

# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(conf["prototxt_path"],
    conf["model_path"])
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

# initialize the video stream and allow the camera sensor to warmup
print("[INFO] warming up camera...")
#vs = VideoStream(src=0).start()
vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# initialize the frame dimensions (we'll set them as soon as we read
# the first frame from the video)
H = None
W = None

Lines 50-52 load the MobileNet SSD net and set the target processor to the Movidius NCS Myriad.

Using the Movidius NCS coprocessor (Line 52) ensures that our FPS is high enough for accurate speed calculations. In other words, if we have a lag between frame captures, our timestamps can become out of sync and lead to inaccurate speed readouts. If you prefer to use a laptop/desktop for processing (i.e. without OpenVINO and the Movidius NCS), be sure to delete Line 52.

Lines 57-63 initialize the Raspberry Pi video stream and frame dimensions.

We have a handful more initializations to take care of:

# instantiate our centroid tracker, then initialize a list to store
# each of our dlib correlation trackers, followed by a dictionary to
# map each unique object ID to a TrackableObject
ct = CentroidTracker(maxDisappeared=conf["max_disappear"],
    maxDistance=conf["max_distance"])
trackers = []
trackableObjects = {}

# keep the count of total number of frames
totalFrames = 0

# initialize the log file
logFile = None

# initialize the list of various points used to calculate the avg of
# the vehicle speed
points = [("A", "B"), ("B", "C"), ("C", "D")]

# start the frames per second throughput estimator
fps = FPS().start()

For object tracking purposes, Lines 68-71 initialize our CentroidTracker, trackers list, and trackableObjects dictionary.

Line 74 initializes a totalFrames counter, which will be incremented each time a frame is captured. We'll use this value to calculate when to perform object detection versus object tracking.

Our logFile object will be opened later on (Line 77).

Our speed will be based on the ABCD column points in our frame. Line 81 initializes a list of pairs of points for which speeds will be calculated. Given our four points, we can calculate the three estimated speeds and then average them.

Line 84 initializes our FPS counter.

With all of our initializations taken care of, let's begin looping over frames:

# loop over the frames of the stream
while True:
    # grab the next frame from the stream, store the current
    # timestamp, and store the new date
    frame = vs.read()
    ts = datetime.now()
    newDate = ts.strftime("%m-%d-%y")

    # check if the frame is None, if so, break out of the loop
    if frame is None:
        break

    # if the log file has not been created or opened
    if logFile is None:
        # build the log file path and create/open the log file
        logPath = os.path.join(conf["output_path"], conf["csv_name"])
        logFile = open(logPath, mode="a")

        # set the file pointer to end of the file
        pos = logFile.seek(0, os.SEEK_END)

        # if we are using dropbox and this is an empty log file then
        # write the column headings
        if conf["use_dropbox"] and pos == 0:
            logFile.write("Year,Month,Day,Time,Speed (in MPH),ImageID\n")

        # otherwise, we are not using dropbox and this is an empty log
        # file then write the column headings
        elif pos == 0:
            logFile.write("Year,Month,Day,Time,Speed (in MPH)\n")

Our frame processing loop begins on Line 87. We begin by grabbing a frame and taking our first timestamp (Lines 90-92).

Lines 99-115 initialize our logFile and write the column headings. Notice that if we are using Dropbox, one additional column is present in the CSV — the image ID.

Note: If you prefer to log speeds in kilometers per hour, be sure to update the CSV column headings on Line 110 and Line 115.

Let's preprocess our frame and perform a couple of initializations:

    # resize the frame
    frame = imutils.resize(frame, width=conf["frame_width"])
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    # if the frame dimensions are empty, set them
    if W is None or H is None:
        (H, W) = frame.shape[:2]
        meterPerPixel = conf["distance"] / W

    # initialize our list of bounding box rectangles returned by
    # either (1) our object detector or (2) the correlation trackers
    rects = []

Line 118 resizes our frame to a known width directly from the "frame_width" value held in the config file.

Note: If you change "frame_width" in the config, be sure to update the "speed_estimation_zone" columns as well.

Line 119 converts the frame to RGB format for dlib's correlation tracker.

Lines 122-124 initialize the frame dimensions and calculate meterPerPixel. The meters per pixel value helps to calculate our three estimated speeds among the four points.

Note: If your lens introduces distortion (i.e. a wide area lens or fisheye), you should consider a proper camera calibration via intrinsic/extrinsic camera parameters so that the meterPerPixel value is more accurate. Calibration will be a future PyImageSearch blog topic.

Line 128 initializes an empty list to hold bounding box rectangles returned by either (1) our object detector or (2) the correlation trackers.

At this point, we're ready to perform object detection to update our trackers:

    # check to see if we should run a more computationally expensive
    # object detection method to aid our tracker
    if totalFrames % conf["track_object"] == 0:
        # initialize our new set of object trackers
        trackers = []

        # convert the frame to a blob and pass the blob through the
        # network and obtain the detections
        blob = cv2.dnn.blobFromImage(frame, size=(300, 300),
            ddepth=cv2.CV_8U)
        net.setInput(blob, scalefactor=1.0/127.5, mean=[127.5,
            127.5, 127.5])
        detections = net.forward()

Object detection will only occur on multiples of "track_object" frames per Line 132. Performing object detection only every N frames reduces the expensive inference operations. We'll perform object tracking whenever possible to reduce computational load.

Line 134 initializes our new list of object trackers to update with accurate bounding box rectangles so that correlation tracking can do its job later.

Lines 138-142 perform inference using the Movidius NCS.

Let's loop over the detections and update our trackers:

        # loop over the detections
        for i in np.arange(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated
            # with the prediction
            confidence = detections[0, 0, i, 2]

            # filter out weak detections by ensuring the `confidence`
            # is greater than the minimum confidence
            if confidence > conf["confidence"]:
                # extract the index of the class label from the
                # detections list
                idx = int(detections[0, 0, i, 1])

                # if the class label is not a car, ignore it
                if CLASSES[idx] != "car":
                    continue

                # compute the (x, y)-coordinates of the bounding box
                # for the object
                box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
                (startX, startY, endX, endY) = box.astype("int")

                # construct a dlib rectangle object from the bounding
                # box coordinates and then start the dlib correlation
                # tracker
                tracker = dlib.correlation_tracker()
                rect = dlib.rectangle(startX, startY, endX, endY)
                tracker.start_track(rgb, rect)

                # add the tracker to our list of trackers so we can
                # utilize it during skip frames
                trackers.append(tracker)

Line 145 begins a loop over detections.

Lines 148-159 filter the detection based on the "confidence" threshold and CLASSES type. We only look for the "car" class using our pretrained MobileNet SSD.

Lines 163 and 164 calculate the bounding box of an object.

We then initialize a dlib correlation tracker and begin tracking the rect ROI found by our object detector (Lines 169-171). Line 175 adds the tracker to our trackers list.

Now let's handle the event that we'll be performing object tracking rather than object detection:

    # otherwise, we should utilize our object *trackers* rather than
    # object *detectors* to obtain a higher frame processing
    # throughput
    else:
        # loop over the trackers
        for tracker in trackers:
            # update the tracker and grab the updated position
            tracker.update(rgb)
            pos = tracker.get_position()

            # unpack the position object
            startX = int(pos.left())
            startY = int(pos.top())
            endX = int(pos.right())
            endY = int(pos.bottom())

            # add the bounding box coordinates to the rectangles list
            rects.append((startX, startY, endX, endY))

    # use the centroid tracker to associate the (1) old object
    # centroids with (2) the newly computed object centroids
    objects = ct.update(rects)

Object tracking is less of a computational load on our RPi, so most of the time (i.e. except every N "track_object" frames) we will perform tracking.

Lines 180-185 loop over the available trackers and update the position of each object.

Lines 188-194 add the bounding box coordinates of the object to the rects list.

Line 198 then updates the CentroidTracker's objects using either the object detection or object tracking rects.

Let's loop over the objects now and take steps towards calculating speeds:

    # loop over the tracked objects
    for (objectID, centroid) in objects.items():
        # check to see if a trackable object exists for the current
        # object ID
        to = trackableObjects.get(objectID, None)

        # if there is no existing trackable object, create one
        if to is None:
            to = TrackableObject(objectID, centroid)

Each trackable object has an associated objectID. Lines 204-208 create a trackable object (with ID) if necessary.

From here we'll check if the speed has been estimated for this trackable object yet:

        # otherwise, if there is a trackable object and its speed has
        # not yet been estimated then estimate it
        elif not to.estimated:
            # check if the direction of the object has been set, if
            # not, calculate it, and set it
            if to.direction is None:
                y = [c[0] for c in to.centroids]
                direction = centroid[0] - np.mean(y)
                to.direction = direction

If the speed has not been estimated (Line 212), then we first need to determine the direction in which the object is moving (Lines 215-218).

Positive direction values indicate left-to-right motion and negative values indicate right-to-left motion.

Knowing the direction is important so that we can estimate our speed between the points properly.

With the direction in hand, now let's collect our timestamps:

            # if the direction is positive (indicating the object
            # is moving from left to right)
            if to.direction > 0:
                # check to see if timestamp has been noted for
                # point A
                if to.timestamp["A"] == 0:
                    # if the centroid's x-coordinate is greater than
                    # the corresponding point then set the timestamp
                    # as current timestamp and set the position as the
                    # centroid's x-coordinate
                    if centroid[0] > conf["speed_estimation_zone"]["A"]:
                        to.timestamp["A"] = ts
                        to.position["A"] = centroid[0]

                # check to see if timestamp has been noted for
                # point B
                elif to.timestamp["B"] == 0:
                    # if the centroid's x-coordinate is greater than
                    # the corresponding point then set the timestamp
                    # as current timestamp and set the position as the
                    # centroid's x-coordinate
                    if centroid[0] > conf["speed_estimation_zone"]["B"]:
                        to.timestamp["B"] = ts
                        to.position["B"] = centroid[0]

                # check to see if timestamp has been noted for
                # point C
                elif to.timestamp["C"] == 0:
                    # if the centroid's x-coordinate is greater than
                    # the corresponding point then set the timestamp
                    # as current timestamp and set the position as the
                    # centroid's x-coordinate
                    if centroid[0] > conf["speed_estimation_zone"]["C"]:
                        to.timestamp["C"] = ts
                        to.position["C"] = centroid[0]

                # check to see if timestamp has been noted for
                # point D
                elif to.timestamp["D"] == 0:
                    # if the centroid's x-coordinate is greater than
                    # the corresponding point then set the timestamp
                    # as current timestamp, set the position as the
                    # centroid's x-coordinate, and set the last point
                    # flag as True
                    if centroid[0] > conf["speed_estimation_zone"]["D"]:
                        to.timestamp["D"] = ts
                        to.position["D"] = centroid[0]
                        to.lastPoint = True

Lines 222-267 collect timestamps for cars moving from left-to-right for each of our columns, A, B, C, and D.

Let's inspect the calculation for column A:

  1. Line 225 checks to see if a timestamp has been recorded for point A — if not, we'll proceed to do so.
  2. Line 230 checks to see if the current x-coordinate of the centroid is greater than column A.
  3. If so, Lines 231 and 232 record a timestamp and the exact x-position of the centroid.
  4. Columns B, C, and D use the same method to collect timestamps and positions, with one exception. For column D, the lastPoint is marked as True. We'll use this flag later to indicate that it is time to perform our speed formula calculations.

Now let's perform the same timestamp, position, and last point updates for right-to-left traveling cars (i.e. direction < 0):

            # if the direction is negative (indicating the object
            # is moving from right to left)
            elif to.direction < 0:
                # check to see if timestamp has been noted for
                # point D
                if to.timestamp["D"] == 0:
                    # if the centroid's x-coordinate is lesser than
                    # the corresponding point then set the timestamp
                    # as current timestamp and set the position as the
                    # centroid's x-coordinate
                    if centroid[0] < conf["speed_estimation_zone"]["D"]:
                        to.timestamp["D"] = ts
                        to.position["D"] = centroid[0]

                # check to see if timestamp has been noted for
                # point C
                elif to.timestamp["C"] == 0:
                    # if the centroid's x-coordinate is lesser than
                    # the corresponding point then set the timestamp
                    # as current timestamp and set the position as the
                    # centroid's x-coordinate
                    if centroid[0] < conf["speed_estimation_zone"]["C"]:
                        to.timestamp["C"] = ts
                        to.position["C"] = centroid[0]

                # check to see if timestamp has been noted for
                # point B
                elif to.timestamp["B"] == 0:
                    # if the centroid's x-coordinate is lesser than
                    # the corresponding point then set the timestamp
                    # as current timestamp and set the position as the
                    # centroid's x-coordinate
                    if centroid[0] < conf["speed_estimation_zone"]["B"]:
                        to.timestamp["B"] = ts
                        to.position["B"] = centroid[0]

                # check to see if timestamp has been noted for
                # point A
                elif to.timestamp["A"] == 0:
                    # if the centroid's x-coordinate is lesser than
                    # the corresponding point then set the timestamp
                    # as current timestamp, set the position as the
                    # centroid's x-coordinate, and set the last point
                    # flag as True
                    if centroid[0] < conf["speed_estimation_zone"]["A"]:
                        to.timestamp["A"] = ts
                        to.position["A"] = centroid[0]
                        to.lastPoint = True

Lines 271-316 grab timestamps and positions for cars as they pass by columns D, C, B, and A (again, for right-to-left tracking). For A the lastPoint is marked as True.

Now that a car's lastPoint is True, we can calculate the speed:

            # check to see if the vehicle is past the last point and
            # the vehicle's speed has not yet been estimated, if yes,
            # then calculate the vehicle speed and log it if it's
            # over the limit
            if to.lastPoint and not to.estimated:
                # initialize the list of estimated speeds
                estimatedSpeeds = []

                # loop over all the pairs of points and estimate the
                # vehicle speed
                for (i, j) in points:
                    # calculate the distance in pixels
                    d = to.position[j] - to.position[i]
                    distanceInPixels = abs(d)

                    # check if the distance in pixels is zero, if so,
                    # skip this iteration
                    if distanceInPixels == 0:
                        continue

                    # calculate the time in hours
                    t = to.timestamp[j] - to.timestamp[i]
                    timeInSeconds = abs(t.total_seconds())
                    timeInHours = timeInSeconds / (60 * 60)

                    # calculate distance in kilometers and append the
                    # calculated speed to the list
                    distanceInMeters = distanceInPixels * meterPerPixel
                    distanceInKM = distanceInMeters / 1000
                    estimatedSpeeds.append(distanceInKM / timeInHours)

                # calculate the average speed
                to.calculate_speed(estimatedSpeeds)

                # set the object as estimated
                to.estimated = True
                print("[INFO] Speed of the vehicle that just passed"\
                    " is: {:.2f} MPH".format(to.speedMPH))

        # store the trackable object in our dictionary
        trackableObjects[objectID] = to

When the trackable object's (1) last point timestamp and position have been recorded, and (2) the speed has not yet been estimated (Line 322), we'll proceed to estimate speeds.

Line 324 initializes a list to hold three estimatedSpeeds. Let's calculate the three estimates now.

Line 328 begins a loop over our pairs of points:

We calculate the distanceInPixels using the position values (Lines 330 and 331). If the distance is 0, we'll skip this pair (Lines 335 and 336).

Next, we calculate the elapsed time between the two points in hours (Lines 339-341). We need the time in hours because we are calculating kilometers per hour and miles per hour.

We then calculate the distance in kilometers by multiplying the pixel distance by the estimated meterPerPixel value (Lines 345 and 346). Recall that meterPerPixel is based on (1) the width of the FOV at the roadside and (2) the width of the frame.

The speed is calculated per Equations 1.1-1.4 (distance over time) and added to the estimatedSpeeds list.

Line 350 makes a call to the TrackableObject class method calculate_speed to average out our three estimatedSpeeds in both miles per hour and kilometers per hour (Equation 1.5).

Line 353 marks the speed as estimated.

Lines 354 and 355 then print the speed in the terminal.

Note: If you prefer to print the speed in km/hr, be sure to update both the string to KMPH and the format variable to to.speedKMPH.

Line 358 stores the trackable object in the trackableObjects dictionary.

Phew! The hard part is out of the way in this script. Let's wrap up, first by annotating the centroid and ID on the frame:

        # draw both the ID of the object and the centroid of the
        # object on the output frame
        text = "ID {}".format(objectID)
        cv2.putText(frame, text, (centroid[0] - 10, centroid[1] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
        cv2.circle(frame, (centroid[0], centroid[1]), 4,
            (0, 255, 0), -1)

A small dot is drawn on the centroid of the moving car with the ID number next to it.

Next we'll go ahead and update our log file and store vehicle images in Dropbox:

        # check if the object has not been logged
        if not to.logged:
            # check if the object's speed has been estimated and it
            # is higher than the speed limit
            if to.estimated and to.speedMPH > conf["speed_limit"]:
                # set the current year, month, day, and time
                year = ts.strftime("%Y")
                month = ts.strftime("%m")
                day = ts.strftime("%d")
                time = ts.strftime("%H:%M:%S")

                # check if dropbox is to be used to store the vehicle
                # image
                if conf["use_dropbox"]:
                    # initialize the image id, and the temporary file
                    imageID = ts.strftime("%H%M%S%f")
                    tempFile = TempFile()
                    cv2.imwrite(tempFile.path, frame)

                    # create a thread to upload the file to dropbox
                    # and start it
                    t = Thread(target=upload_file, args=(tempFile,
                        client, imageID,))
                    t.start()

                    # log the event in the log file
                    info = "{},{},{},{},{},{}\n".format(year, month,
                        day, time, to.speedMPH, imageID)
                    logFile.write(info)

                # otherwise, we are not uploading vehicle images to
                # dropbox
                else:
                    # log the event in the log file
                    info = "{},{},{},{},{}\n".format(year, month,
                        day, time, to.speedMPH)
                    logFile.write(info)

                # set the object as logged
                to.logged = True

At a minimum, every vehicle that exceeds the speed limit will be logged in the CSV file. Optionally, Dropbox will be populated with images of the speeding vehicles.

Lines 369-372 check to see if the trackable object has been logged, the speed estimated, and if the car was speeding.

If so, Lines 374-377 extract the year, month, day, and time from the timestamp.

If an image will be logged in Dropbox, Lines 381-391 store a temporary file and spawn a thread to upload the file to Dropbox.

Using a separate thread for a potentially time-consuming upload is critical so that our main thread isn't blocked, impacting FPS and speed calculations. The filename will be the imageID on Line 383 so that it can easily be found later if it is associated in the log file.

Lines 394-404 write the CSV data to the logFile. If Dropbox is used, the imageID is the last value.
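For reference, a logged row in log.csv ends up looking roughly like this (hypothetical values; the trailing image ID column only appears when Dropbox is enabled):

Year,Month,Day,Time,Speed (in MPH),ImageID
2019,12,02,17:42:10,26.08,174210123456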

Note: If you prefer to log the kilometers per hour speed, simply update to.speedMPH to to.speedKMPH on Line 395 and Line 403.

Line 396 marks the trackable object as logged.

Let'south wrap up:

    # if the *display* flag is set, then display the current frame
    # to the screen and record if a user presses a key
    if conf["display"]:
        cv2.imshow("frame", frame)
        key = cv2.waitKey(1) & 0xFF

        # if the `q` key is pressed, break from the loop
        if key == ord("q"):
            break

    # increment the total number of frames processed thus far and
    # then update the FPS counter
    totalFrames += 1
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# check if the log file object exists, if it does, then close it
if logFile is not None:
    logFile.close()

# close any open windows
cv2.destroyAllWindows()

# clean up
print("[INFO] cleaning up...")
vs.stop()

Lines 411-417 display the annotated frame and look for the q keypress, in which case we'll quit (break).

Lines 421 and 422 increment totalFrames and update our FPS counter.

When we have broken out of the frame processing loop, we perform housekeeping, including printing FPS stats, closing our log file, destroying GUI windows, and stopping our video stream (Lines 424-438).

Vehicle Speed Estimation Deployment

Now that our code is implemented, we'll deploy and test our system.

I highly recommend that you conduct a handful of controlled drive-bys and tweak the variables in the config file until you are achieving accurate speed readings.

Prior to any fine-tuning calibration, we'll just ensure that the program is working. Be sure you have met the following requirements prior to trying to run the application:

  • Position and aim your camera perpendicular to the road as per Figure 3.
  • Ensure your camera has a clear line of sight with limited obstructions — our object detector must be able to detect a vehicle at multiple points as it crosses through the camera's field of view (FOV).
  • It is best if your camera is positioned far from the road. The further points A and D are from each other at the point at which cars pass on the road, the better the distance / time calculations will average out and produce more accurate speed readings. If your camera is close to the road, a wide-angle lens is an option, but then you'll need to perform camera calibration (a future PyImageSearch blog topic).
  • If you are using Dropbox functionality, ensure that your RPi has a solid WiFi, Ethernet, or even cellular connection.
  • Ensure that you have set all constants in the config file. We may elect to fine-tune the constants in the next section.

Assuming you have met each of the requirements, you are now ready to deploy and run your program. First, we must set up our environment.

Pre-configured Raspbian .img users: Please activate your virtual environment as follows:

$ source ~/start_openvino.sh          

Using that script ensures that (1) the virtual environment is activated, and (2) Intel's environment variables are loaded.

If you installed OpenVINO on your own (i.e. you aren't using my Pre-configured Raspbian .img): Please source the setupvars.sh script as follows (adjust the command to where your script lives):

$ workon <env_name>
$ source ~/openvino/inference_engine_vpu_arm/bin/setupvars.sh

Note: You may have sourced the environment in your ~/.bashrc file per the OpenVINO installation instructions. In this case, the environment variables are set automatically when you launch a terminal or connect via SSH. That said, you will still need to use the workon command to activate your virtual environment.

Again, if you are using my Pre-configured Raspbian .img that comes with either Practical Python and OpenCV + Case Studies (Quickstart and Hardcopy) and/or Raspberry Pi for Computer Vision (all bundles), then the ~/start_openvino.sh script will call setupvars.sh automatically.

If you do not perform either of the above steps, you will encounter an Illegal Instruction error.

Enter the following command to start the program and begin logging speeds:

$ python speed_estimation_dl.py --conf config/config.json
[INFO] loading model...
[INFO] warming up camera...
[INFO] Speed of the vehicle that just passed is: 26.08 MPH
[INFO] Speed of the vehicle that just passed is: 22.26 MPH
[INFO] Speed of the vehicle that just passed is: 17.91 MPH
[INFO] Speed of the vehicle that just passed is: 15.73 MPH
[INFO] Speed of the vehicle that just passed is: 41.39 MPH
[INFO] Speed of the vehicle that just passed is: 35.79 MPH
[INFO] Speed of the vehicle that just passed is: 24.10 MPH
[INFO] Speed of the vehicle that just passed is: 20.46 MPH
[INFO] Speed of the vehicle that just passed is: 16.02 MPH
Figure 7: OpenCV vehicle speed estimation deployment. Vehicle speeds are calculated after they leave the viewing frame. Speeds are logged to CSV and images are stored in Dropbox.

As shown in Figure 7 and the video, our OpenCV system is measuring speeds of vehicles traveling in both directions. In the next section, we will perform drive-by tests to ensure our system is reporting accurate speeds.

Note: The video has been post-processed for demo purposes. Keep in mind that we do not know the vehicle's speed until after the vehicle has passed through the frame. In the video, the speed of the vehicle is displayed while the vehicle is in the frame for a better visualization.

Note: OpenCV cannot automatically throttle a video file framerate according to the true framerate. If you use speed_estimation_dl_video.py as well as the supplied cars.mp4 testing file, keep in mind that the speeds reported will be inaccurate. For accurate speeds, you must set up the full experiment with a camera and have real cars drive by. Refer to the next section, "Calibrating for Accuracy", for a real live demo in which a screencast was recorded of the live system in action. To use the script, run this command: python speed_estimation_dl_video.py --conf config/config.json --input sample_data/cars.mp4

On occasions when multiple cars are passing through the frame at the same time, speeds will be reported inaccurately. This can occur when our centroid tracker mixes up centroids. This is a known drawback of our algorithm. To solve the issue, additional algorithm engineering will need to be conducted by you as the reader. One suggestion would be to perform instance segmentation to accurately segment each vehicle.

Credits:

  • Audio for demo video: BenSound

Calibrating for Accuracy

Figure 8: Neighborhood vehicle speed estimation and tracking with OpenCV drive test results.

You may find that the system produces slightly inaccurate readouts of the vehicle speeds going by. Do not disregard the project just yet. You can tweak the config file to get closer and closer to accurate readings.

We used the following approach to calibrate our system until our readouts were spot-on:

  • Begin recording a screencast of the RPi desktop showing both the video stream and terminal. This screencast should record throughout testing.
  • Meanwhile, record a voice memo on your smartphone throughout testing, stating what your drive-by speed is each time you pass.
  • Drive by the computer-vision-based VASCAR system in both directions at predetermined speeds. We chose 10mph, 15mph, 20mph, and 25mph to compare our speed to the VASCAR calculated speed. Your neighbors might think you're weird as you drive back and forth past your house, but just give them a nice grin!
  • Sync the screencast to the audio file so that it can be played back.
  • The speed +/- differences can be jotted down as you play back your video with the synced audio file.
  • With this information, tune the constants:
    • (1) If your speed readouts are a little high, then decrease the "distance" constant
    • (2) Conversely, if your speed readouts are slightly low, then increase the "distance" constant.
  • Rinse and repeat until you are satisfied. Don't worry, you won't burn too much fuel in the process.

PyImageSearch colleagues Dave Hoffman and Abhishek Thanki found that Dave needed to increase his distance constant from 14.94m to 16.00m.
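To see why the adjustment works in that direction, remember that the reported speed scales linearly with the distance constant. The quick sketch below uses a made-up elapsed time of 1.33 seconds between the outer waypoints, purely for illustration:

# illustrative numbers only; the 1.33 second elapsed time is made up
elapsed = 1.33                   # seconds a car takes to cross the FOV
MPS_TO_MPH = 2.23694             # 1 meter/second is about 2.23694 MPH

for distance in (14.94, 16.00):  # original and calibrated constants
    speed_mph = (distance / elapsed) * MPS_TO_MPH
    print("distance = {:.2f} m -> {:.2f} MPH".format(distance, speed_mph))

# distance = 14.94 m -> 25.13 MPH
# distance = 16.00 m -> 26.91 MPH

Because the timestamps are whatever the camera observed, raising the distance constant raises every computed speed, which is exactly why readouts that are too low call for a larger constant.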

Be sure to refer to the following final testing video, which corresponds to the timestamps and speeds in the table in Figure 8:

Here are the results of an example calculation with the calibrated constant. Be sure to compare Figure 9 to Figure 4:

Figure 9: An example calculation using our calibrated distance for OpenCV vehicle detection, tracking, and speed estimation.

With a calibrated system, you're now ready to let it run for a full day. Your system is likely only configured for daytime use unless you have streetlights on your road.

Note: For nighttime use (outside the scope of this tutorial), you may need infrared cameras and infrared lights and/or adjustments to your camera parameters (refer to the Raspberry Pi for Computer Vision Hobbyist Bundle Chapters 6, 12, and 13 for these topics).

What's next? I recommend PyImageSearch University.

Course data:
35+ total classes • 39h 44m video • Last updated: April 2022
★★★★★ 4.84 (128 Ratings) • 13,800+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That's not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • 35+ courses on essential computer vision, deep learning, and OpenCV topics
  • 35+ Certificates of Completion
  • 39+ hours of on-demand video
  • Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
  • Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 450+ tutorials on PyImageSearch
  • Easy one-click downloads for code, datasets, pre-trained models, etc.
  • Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University

Summary

In this tutorial, we utilized Deep Learning and OpenCV to build a system to monitor the speeds of moving vehicles in video streams.

Rather than relying on expensive RADAR or LIDAR sensors, we used:

  • Timestamps
  • A known distance
  • And a simple physics equation to calculate speeds.

In the police world, this is known as Visual Average Speed Computer and Recorder (VASCAR). Police rely on their eyesight and button-pushing reaction time to collect timestamps — a method that barely holds up in court in comparison to RADAR and LIDAR.

But of course, we are engineers, so our system seeks to eliminate the human error component when computing vehicle speeds automatically with computer vision.

Using both object detection and object tracking we coded a method to calculate four timestamps via four waypoints. We then let the math do the talking:

We know that speed equals distance divided by time. Three speeds were calculated among the three pairs of points and averaged for a solid estimate.
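Here is a compact sketch of that averaging step. The function and variable names are illustrative; the real driver script stores per-waypoint positions and timestamps on each trackable object:

import numpy as np

def average_speed_mph(positions, timestamps, meters_per_pixel):
    # positions: pixel x-coordinates recorded at waypoints A-D
    # timestamps: datetime objects recorded at the same waypoints
    speeds = []
    for (p1, p2) in (("A", "B"), ("B", "C"), ("C", "D")):
        meters = abs(positions[p2] - positions[p1]) * meters_per_pixel
        seconds = abs((timestamps[p2] - timestamps[p1]).total_seconds())
        speeds.append(meters / seconds)      # meters per second
    # average the three segment speeds and convert to MPH
    return np.average(speeds) * 2.23694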

One drawback of our automated system is that it is only as good as the key distance constant.

To combat this, we measured carefully and then conducted drive-bys while looking at our speedometer to verify operation. Adjustments to the distance constant were made if needed.

Yes, there is a human component in this verification method. If you have a cop friend that can help you verify with their RADAR gun, that would be even better. Maybe they will even ask for your data to provide to the city to encourage them to place speed bumps, stop signs, or traffic signals in your area!

Another area that needs further engineering is ensuring that trackable object IDs do not get swapped when vehicles are moving in different directions. This is a challenging problem to solve, and I encourage discussion in the comments.

We hope you enjoyed today's blog post!

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below!

Download the Source Code and Free 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
