Application of Visual Tracking Technology in Intelligent Transportation

Intelligent video analysis has become widely used in the monitoring of urban rail transit. However, the monitoring environment of urban rail transit is complex: it covers a large area with a long perimeter and contains many facilities such as multiple platforms and fences. This complexity creates many difficulties for intelligent analysis, and the relatively new TLD (Tracking-Learning-Detection) visual tracking technology can address these problems.

The biggest characteristic of the TLD tracking system is that it continuously learns the locked target, acquiring its latest appearance features so that tracking can be refined in time and kept in its best state. In other words, only a single frame of the stationary target is provided at the start, but as the target keeps moving, the system keeps detecting it, learns how its angle, distance, depth of field, and other aspects change, and identifies it in real time; after a period of learning, the target can no longer escape.

TLD technology consists of three parts: the tracker, the learning process, and the detector. It combines tracking and detection strategies, making it an adaptive and reliable tracking technique. In TLD, the tracker and the detector run in parallel, and the results of both are fed into the learning process. The learned model in turn acts on the tracker and the detector and updates them in real time, ensuring that tracking continues even when the target's appearance changes.

Tracker

The TLD tracker uses an overlapping-block tracking strategy, and each individual block is tracked with the Lucas-Kanade optical flow method. Before tracking starts, TLD requires the target to be specified with a rectangular box. The overall target motion is taken as the median of all local block displacements; this local tracking strategy can cope with partial occlusion.
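As a rough illustration of this strategy, the sketch below (assuming Python with OpenCV and NumPy; the function name, grid size, and box format are illustrative, and the forward-backward consistency check often used with this kind of tracker is omitted) tracks a grid of local points with the Lucas-Kanade method and moves the box by their median displacement:

```python
import cv2
import numpy as np

def track_bbox_median_flow(prev_gray, curr_gray, bbox, grid=10):
    """Move a bounding box by the median displacement of a grid of local
    points tracked with Lucas-Kanade optical flow (a simplified sketch of
    the overlapping-block strategy; bbox is (x, y, w, h))."""
    x, y, w, h = bbox
    # Evenly spaced grid of points covering the target box.
    xs = np.linspace(x, x + w, grid)
    ys = np.linspace(y, y + h, grid)
    pts = np.array([[px, py] for py in ys for px in xs], dtype=np.float32).reshape(-1, 1, 2)

    # Lucas-Kanade optical flow for every local point.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.flatten() == 1
    good_prev = pts[ok].reshape(-1, 2)
    good_next = nxt[ok].reshape(-1, 2)
    if len(good_prev) == 0:
        return None  # every local block was lost

    # Median of the local displacements: robust when some blocks are occluded.
    dx = float(np.median(good_next[:, 0] - good_prev[:, 0]))
    dy = float(np.median(good_next[:, 1] - good_prev[:, 1]))
    return (x + dx, y + dy, w, h)
```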

Learning Process

The learning process of TLD is based on the online model. The online model is a collection of image blocks of size 15×15, and these blocks come from the results of the tracker and the detector. The initial online model is the target image specified at the start of tracking.

The online model is a dynamic model that grows or shrinks as the video sequence progresses. Its development is driven by two kinds of events: growth events and pruning events. In practice, the environment, the target itself, and other factors constantly change the target's appearance, so the target images produced by the tracker also contain more and more impurities. If all the target images along the tracking trajectory are regarded as a feature space, then as the video sequence progresses the feature space produced by the tracker keeps expanding; this is called a growth event. To prevent the impurities (non-target images) brought in by growth events from degrading tracking, the opposite pruning event is used as a balance: pruning events remove the impurities introduced by growth events. The interaction of the two events keeps the online model consistent with the current tracking target.
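A minimal sketch of such an online model (Python with NumPy assumed; the similarity measure and the threshold values are illustrative assumptions rather than the exact TLD implementation) is:

```python
import numpy as np

class OnlineModel:
    """Minimal sketch of the online model: a growing and shrinking
    collection of 15x15 grayscale patches."""

    def __init__(self, initial_patch):
        # The model starts from the target patch specified at the start of tracking.
        self.patches = [np.asarray(initial_patch, dtype=np.float32)]

    @staticmethod
    def similarity(a, b):
        """Normalized cross-correlation between two patches, mapped into [0, 1]."""
        a = (a - a.mean()) / (a.std() + 1e-6)
        b = (b - b.mean()) / (b.std() + 1e-6)
        return 0.5 * (float((a * b).mean()) + 1.0)

    def distance(self, patch):
        """Distance of a patch to the model: 1 minus similarity to the closest stored patch."""
        return 1.0 - max(self.similarity(patch, p) for p in self.patches)

    def grow(self, patch, threshold=0.35):
        """Growth event: add a patch from the tracking trajectory if it is close to the model."""
        if self.distance(patch) < threshold:
            self.patches.append(np.asarray(patch, dtype=np.float32))

    def prune(self, wrong_patches, threshold=0.8):
        """Pruning event: drop stored patches that resemble known non-target images."""
        self.patches = [p for p in self.patches
                        if all(self.similarity(p, w) < threshold for w in wrong_patches)]
```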

The expansion of the feature space brought about by growth events comes from the tracker, which selects suitable samples from the target images on the tracking trajectory and updates the online model accordingly. There are three selection strategies, detailed below.

· Image blocks similar to the target image to be tracked are added to the online model;

· If the tracking target image in the current frame is similar to that of the previous frame, the current tracking result is added to the online model;

· Calculate the distance between the target images on the tracking trajectory and the online model, and select target images that follow a specific pattern: the distance is small at first, then increases gradually, and finally recovers to a smaller value. The system repeatedly checks whether this pattern appears and adds the target images within the pattern to the online model.

This sample selection for growth events ensures that the online model always follows the latest state of the tracked target and avoids losing the target because the model is not updated in time. The last selection strategy is also one of the distinctive features of TLD technology and embodies its adaptive tracking: when tracking drifts, the tracker adapts to the background automatically rather than jumping abruptly back to the tracking target.
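For illustration only, the three selection strategies could be combined roughly as below, reusing the OnlineModel sketch from above; the thresholds, the trajectory bookkeeping, and the way the strategies are combined are all assumptions:

```python
def should_grow(model, tracked_patch, prev_patch, trajectory_distances, sim_threshold=0.6):
    """Hypothetical check deciding whether the current tracked patch enters
    the online model. `trajectory_distances` holds the model distances of
    recent target images along the tracking trajectory."""
    # Strategy 1: the patch is similar to the target images already in the model.
    similar_to_model = model.distance(tracked_patch) < 1.0 - sim_threshold
    # Strategy 2: the current tracked patch resembles the previous frame's patch.
    stable = OnlineModel.similarity(tracked_patch, prev_patch) > sim_threshold
    # Strategy 3: the model distance was small, grew for a while, and has recovered.
    d = list(trajectory_distances)
    recovered = len(d) >= 3 and max(d) > d[0] and d[-1] <= d[0]
    return similar_to_model or stable or recovered
```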

The pruning event assumes that there is only one target per frame. Once the tracker and the detector have both identified the target position, the remaining detected images are treated as wrong samples and deleted from the online model.

The samples in the online model provide the material for the TLD learning process. In addition, TLD employs two kinds of constraints when generating the classifier (a random forest): P constraints and N constraints. The P constraint specifies that image blocks close to the target image on the tracking trajectory are positive samples; conversely, the N constraint specifies that image blocks far from it are negative samples. The P-N constraints reduce the error rate of the classifier, and within a certain range the error rate approaches zero.
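A hedged sketch of the two constraints (the threshold values and the reuse of the patch similarity from the OnlineModel sketch are assumptions):

```python
def pn_label(trajectory_patch, detected_patches, p_threshold=0.65, n_threshold=0.5):
    """Label detector responses with the P and N constraints: patches close
    to the target image on the tracking trajectory become positive training
    samples, patches far from it become negatives; the rest stay unlabeled."""
    positives, negatives = [], []
    for patch in detected_patches:
        sim = OnlineModel.similarity(patch, trajectory_patch)
        if sim > p_threshold:
            positives.append(patch)      # P constraint: near the trajectory -> positive
        elif sim < n_threshold:
            negatives.append(patch)      # N constraint: far from the trajectory -> negative
    return positives, negatives
```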

Detector

TLD includes a fast, reliable detector that provides the necessary support for the tracker. When the tracker's result fails, the detector's result is used to correct it and the tracker is re-initialized. The specific approach is as follows.

· For each frame, the tracker and the detector run at the same time; the tracker predicts a single target position, while the detector may return multiple candidate images;

· When determining the final position of the target, priority is given to the tracker's result: if the similarity of the tracked image to the original target image is greater than a certain threshold, the tracked result is accepted; otherwise, the image with the highest similarity to the original target is selected from the detector's results and serves as the tracking result;

· If the detector's result is chosen in the second step, the tracker's initial target model is updated at this point: the existing target model is replaced with the current tracking result, the samples in the previous model are deleted, and sample collection restarts from the new one.
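The decision logic of these three steps might look like the following sketch (continuing the OnlineModel sketch; here similarity to the online model stands in for similarity to the original target image, and the acceptance threshold is an assumption):

```python
def integrate(model, tracker_patch, tracker_box, detections, accept_threshold=0.6):
    """Combine tracker and detector output for one frame.
    `detections` is a list of (patch, box) pairs; returns (box, reinit_needed)."""
    # Prefer the tracker's result if it is similar enough to the target model.
    if tracker_patch is not None and (1.0 - model.distance(tracker_patch)) > accept_threshold:
        return tracker_box, False
    if not detections:
        return None, False               # target lost in this frame
    # Otherwise take the detection most similar to the target model ...
    best = max(detections, key=lambda d: 1.0 - model.distance(d[0]))
    # ... and signal that the tracker's target model must be rebuilt from it.
    return best[1], True
```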

The detector is a random forest classifier trained from the samples in the online model. The selected feature is the edge direction of an area, called the 2bitBP feature, which is insensitive to lighting interference. After quantization the feature has four possible codes, and for a given area the feature code is unique. Multiscale feature calculations can be carried out efficiently with integral images.
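A sketch of how such a quantized edge-direction feature could be computed with an integral image (the precise 2bitBP definition is an assumption here; NumPy is used for the summed-area table):

```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero first row/column so box sums need no special cases."""
    return np.pad(np.asarray(gray, dtype=np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle (x, y, w, h), read from the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_bit_bp(ii, x, y, w, h):
    """2bitBP-style feature: compare left/right and top/bottom halves of the area
    and return one of four codes describing the dominant edge direction."""
    left = box_sum(ii, x, y, w // 2, h)
    right = box_sum(ii, x + w // 2, y, w - w // 2, h)
    top = box_sum(ii, x, y, w, h // 2)
    bottom = box_sum(ii, x, y + h // 2, w, h - h // 2)
    if abs(left - right) >= abs(top - bottom):      # vertical edge dominates
        return 0 if left >= right else 1
    return 2 if top >= bottom else 3                # horizontal edge dominates
```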

Each image block is represented by a number of 2bitBP features, and these features are divided into groups of equal size, each group giving a different view of the image block's appearance. The classifier used for detection is a random forest: the forest consists of trees, each tree is built from one feature group, and each feature in the tree serves as a decision node.

The random forest is updated and evolved online through growth events and pruning events. At the beginning, each tree is built from the features of the original target template and has only one "branch". As growth events select positive samples, the random forest keeps adding new "branches", while pruning events remove unused "branches". This real-time detector employs a scanning-window strategy: the input frame is scanned over locations and scales, and the classifier is applied to each sub-window to decide whether it belongs to the target.
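Putting the detector together, the following sketch (reusing the 2bitBP helpers above; the fern-style leaf indexing, posterior averaging, and scanning step are assumptions) shows how each tree maps a feature group to a leaf, how growth and pruning events change the per-leaf counts, and how a scanning window applies the classifier across locations and scales:

```python
class FernForest:
    """Sketch of the detector's random forest: each tree is one group of
    2bitBP features whose codes form a leaf index with positive/negative counts."""

    def __init__(self, feature_groups):
        self.feature_groups = feature_groups          # per tree: list of relative areas (fx, fy, fw, fh)
        self.pos = [dict() for _ in feature_groups]   # leaf -> positive count
        self.neg = [dict() for _ in feature_groups]   # leaf -> negative count

    def leaf(self, ii, group, box):
        """Combine the 2-bit codes of one feature group into a single leaf index."""
        bx, by, bw, bh = box
        code = 0
        for fx, fy, fw, fh in group:
            c = two_bit_bp(ii, bx + int(fx * bw), by + int(fy * bh),
                           max(1, int(fw * bw)), max(1, int(fh * bh)))
            code = (code << 2) | c
        return code

    def update(self, ii, box, is_positive):
        """Growth event: a new labelled sample adds evidence to (or creates) a 'branch'."""
        for t, group in enumerate(self.feature_groups):
            table = self.pos[t] if is_positive else self.neg[t]
            key = self.leaf(ii, group, box)
            table[key] = table.get(key, 0) + 1

    def prune(self):
        """Pruning event: drop 'branches' dominated by negative evidence."""
        for t in range(len(self.feature_groups)):
            self.pos[t] = {k: c for k, c in self.pos[t].items()
                           if c > self.neg[t].get(k, 0)}

    def posterior(self, ii, box):
        """Average positive posterior over all trees for one sub-window."""
        scores = []
        for t, group in enumerate(self.feature_groups):
            key = self.leaf(ii, group, box)
            p, n = self.pos[t].get(key, 0), self.neg[t].get(key, 0)
            scores.append(p / (p + n) if p + n else 0.0)
        return sum(scores) / len(scores)

def scan_windows(image_shape, base_size, scales=(0.8, 1.0, 1.2), step=0.1):
    """Scanning-window strategy: enumerate sub-windows over location and scale."""
    img_h, img_w = image_shape
    for s in scales:
        w, h = int(base_size[0] * s), int(base_size[1] * s)
        dx, dy = max(1, int(step * w)), max(1, int(step * h))
        for y in range(0, img_h - h, dy):
            for x in range(0, img_w - w, dx):
                yield (x, y, w, h)
```

In use, a frame would be processed by computing its integral image once, iterating over scan_windows, and keeping the sub-windows whose posterior exceeds a chosen threshold.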

TLD technology skillfully combines the tracker, the detector, and the learning process to achieve target tracking.
