How does the traffic sign recognition system work?


The driver of any vehicle makes dozens of decisions every minute while constantly monitoring the situation: surrounding cars, road markings and signs, correct use of the controls. Doing all of this at once is genuinely hard. It takes a long time before a motorist gains enough experience to control the situation confidently, and even then the possibility of an accident never disappears.

The traffic sign recognition system is designed to assist drivers in everyday driving and take some of the load off the human brain, especially when the driver is tired behind the wheel.

Traffic Sign Recognition System

SUBJECT AND METHOD

Public roads provide plenty of typical situations for testing such devices. Highways near Moscow are rich in various speed limits, narrower roads carry "lollipop" no-overtaking signs, and urban highways and expressways have non-standard signs that the video eye must read at fairly high speed. We drove the route twice: in daylight and after dark. During the day the windshield was periodically wetted by rain and generously splashed with mud by heavy trucks, so the test stayed true to real road conditions.

Unlike most electronic assistants, the one responsible for recognizing signs is relatively simple in design. The camera picks out plates similar in shape, symbol set and layout, and checks them against its internal catalog. The scanner does not react to visually similar restrictions on maximum vehicle weight or height (also large numbers with a red border). That said, amusing mistakes do happen. While overtaking a truck, the Opel suddenly showed "30" on the display: the scanner had read a miniature speed limit sign painted on the sprinkler truck's tank.
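As an illustration only, the catalog-matching idea can be sketched with a toy example: hypothetical 3x3 "templates" and a normalized cross-correlation score. Production systems use far more robust detectors and classifiers; the names and templates below are mine, not any manufacturer's.

```python
import numpy as np

def match_score(candidate, template):
    """Normalized cross-correlation between a candidate patch and a template.

    Returns a value in roughly [-1, 1]; higher means a closer match."""
    c = (candidate - candidate.mean()) / (candidate.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float((c * t).mean())

def recognize(candidate, catalog):
    """Pick the best-matching sign from a catalog of {name: template} pairs."""
    best_name, best_score = None, -1.0
    for name, template in catalog.items():
        score = match_score(candidate, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy 3x3 "templates": a diagonal stroke vs. a vertical stroke.
catalog = {
    "diagonal": np.eye(3),
    "vertical": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float),
}
noisy_diagonal = np.eye(3) + 0.05  # candidate with a uniform brightness shift
name, score = recognize(noisy_diagonal, catalog)
print(name)  # diagonal
```

Because each patch is mean-subtracted and variance-normalized, a uniform brightness change does not affect the score, which is one reason correlation-style matching tolerates lighting differences.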

Similar developments from different automakers

Most major automakers are developing sign recognition systems. Solution names from different manufacturers:

  • Speed Limit Assist from Mercedes-Benz.
  • Road Sign Information from Volvo.
  • Traffic Sign Recognition (TSR) in cars from Audi, Ford, BMW and Volkswagen.
  • Opel Eye from the manufacturer of the same name.

The difference between the systems lies in the quality of the equipment used and the logic of the algorithms for recognizing objects on the road.

ROAD CHECK

The BMW disappoints in the first minutes: it rarely notices "80" signs on the right side of wide Moscow highways. The Opel is a little more attentive. But you can feel that these conditions are abnormal for the systems: on such a wide road, the information should be duplicated on overhead gantries or on the median barrier.

Outside the city the assistants were more confident: there was less distracting information, and the signs were much closer to the car. Still, there was a fly in the ointment. The Opel is sensitive to a sign's orientation: if it is slightly tilted or turned, the camera misses it. The BMW has quirks of its own. The Bavarian's standard navigation stores speed limits for all roads, and if the scanner sees no sign, the computer falls back on map data. It would often be better if it didn't: this electronic guide does not always track settlement boundaries accurately when switching between the "60" and "90" limits, and sometimes it displays inexplicable limits of 50 or 70 km/h in the middle of a forest or open field. The Opel system is not tied to navigation; it tracks only real roadside information and therefore misinforms the driver less often.
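The BMW's fallback behavior described above, trust a freshly scanned sign when one is available and otherwise use navigation map data, amounts to a simple priority rule. A minimal sketch (the function and parameter names are mine, not the manufacturer's):

```python
from typing import Optional

def effective_speed_limit(camera_reading: Optional[int],
                          map_limit: Optional[int]) -> Optional[int]:
    """Prefer a freshly scanned sign; fall back to navigation map data.

    As the road test shows, the map fallback can mislead when settlement
    boundaries are out of date, so a real camera reading always wins.
    """
    if camera_reading is not None:
        return camera_reading
    return map_limit

print(effective_speed_limit(80, 60))    # 80: the scanned sign wins
print(effective_speed_limit(None, 90))  # 90: no sign seen, map data used
```

The Opel's behavior corresponds to calling this with `map_limit=None` always: no fallback, fewer false prompts, but also no information when a sign is missed.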

Characteristics of Neoline G-Tech X77

  • video recording at a resolution of 1920 x 1080 pixels, 30 or 60 fps
  • viewing angle 140°
  • IPS screen 2 inches, non-touch
  • Micro SD memory card support (now up to 256 GB)
  • supercapacitor for autonomous power supply (see review)
  • Micro USB connection
  • GPS module mounted on glass
  • sensors: accelerometer
  • modes: HDR, night shooting, interval or continuous (cyclic) recording
  • operating temperature -10 to +60 °C; storage temperature -20 to +70 °C
  • power supply: 5 V, 1.5 A (the plug has a USB port for accessories or charging a smartphone)
  • Dimensions: 74 x 42 x 35 mm / 93 grams

VISUAL DEFECTS

A short but heavy rain spoiled the Opel's accuracy score. Drops on the glass that the wipers cannot clear in time cut the assistant's vigilance in half. High beams at night had a similar effect on its eyesight: the electronics consistently ignored signs caught in the headlight glare. The BMW allows itself no such whims.

But don't expect the system to warn you about a sign in advance. Despite a claimed 100-meter range, the cameras of both the Opel and the BMW put information on the display no earlier than the moment the sign reaches the front bumper. The "Bavarian" even takes a theatrical pause and issues the message only when the sign post is already looming in the rearview mirror. This behavior does not depend on weather conditions or time of day.

What is a traffic sign recognition system

The development is designed to increase road safety and make driving easier. Engineers are creating solutions that automatically recognize road signs and record information about permissible speeds and restrictions, including direction of travel, the presence of intersections, railway crossings and other data.


Traffic Sign Recognition System

The more information the system gathers from its surroundings, the safer the car and the driving process become. It is physically difficult for a driver to track every parameter of the road, especially on long trips. A software solution can compensate for inattention and reduce the influence of the human factor behind the wheel.

Traffic sign recognition is one of the components required for self-driving cars. The machine must independently determine markings, restrictions, signs and traffic conditions.

Kirill Mileshkin

“The colored no-overtaking and end-of-restriction symbols shown on the head-up display are impossible to miss, while the real signs at the roadside are easy to overlook. Darkness, drops or dirt on the windshield are no problem for the camera. I wish I had such sharp eyes! As a result, the BMW beats the Opel in two key respects: scanning quality and information presentation. As for the incorrect navigation prompts about speed limits, it never overstated them relative to the real limit.”


Exploring the Dataset

First, we import all the necessary libraries.

import os
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import img_to_array
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout
from tensorflow.keras import backend as K
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score

To train the neural network, we will use images from the "train" folder, which contains 43 subfolders, one per class. We initialize two lists, data and labels, which will store the images we load along with the corresponding class labels.

data = []
labels = []

Next, using the os module, we iterate over all the classes and append the images and their corresponding labels to the data and labels lists. The PIL library is used to open the image files.

classes = 43  # the "train" folder contains 43 class subfolders

for num in range(classes):
    path = os.path.join('train', str(num))
    imagePaths = os.listdir(path)
    for img in imagePaths:
        image = Image.open(os.path.join(path, img))
        image = image.resize((30, 30))
        image = img_to_array(image)
        data.append(image)
        labels.append(num)

This loop loads each image, resizes it to a fixed 30x30 pixels, and stores all images and their labels in the data and labels lists.

The lists then need to be converted into NumPy arrays to feed into the model.

data = np.array(data)
labels = np.array(labels)

The shape of the data is (39209, 30, 30, 3), meaning there are 39,209 images of 30x30 pixels, and the final 3 indicates that the images are in color (three RGB channels).

print(data.shape, labels.shape)
# (39209, 30, 30, 3) (39209,)

From the sklearn package, we use the train_test_split() method to split the training and testing data, using 80% of the images for training and 20% for testing. This is a typical split for this amount of data.

X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# (31367, 30, 30, 3) (7842, 30, 30, 3) (31367,) (7842,)

Let's check how many classes we have and how many images are in the training set for each class and plot the class distribution chart.

def cnt_img_in_classes(labels):
    count = {}
    for i in labels:
        if i in count:
            count[i] += 1
        else:
            count[i] = 1
    return count

samples_distribution = cnt_img_in_classes(y_train)

def diagram(count_classes):
    plt.bar(range(len(count_classes)), sorted(list(count_classes.values())), align='center')
    plt.xticks(range(len(count_classes)), sorted(list(count_classes.keys())), rotation=90, fontsize=7)
    plt.show()

diagram(samples_distribution)


Distribution Chart
From the graph, we can see that the training data set is not balanced, but we can deal with this by using a data augmentation technique.

import random
from imgaug import augmenters as iaa

def aug_images(images, p):
    augs = iaa.SomeOf((2, 4), [
        iaa.Crop(px=(0, 4)),
        iaa.Affine(scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}),
        iaa.Affine(translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)}),
        iaa.Affine(rotate=(-45, 45)),
        iaa.Affine(shear=(-10, 10))
    ])
    seq = iaa.Sequential([iaa.Sometimes(p, augs)])
    return seq.augment_images(images)

def augmentation(images, labels):
    min_imgs = 500  # bring every class up to at least 500 samples
    classes = cnt_img_in_classes(labels)
    for i in range(len(classes)):
        if classes[i] < min_imgs:
            add_num = min_imgs - classes[i]
            imgs_for_augm = []
            lbls_for_augm = []
            for j in range(add_num):
                im_index = random.choice(np.where(labels == i)[0])
                imgs_for_augm.append(images[im_index])
                lbls_for_augm.append(labels[im_index])
            augmented_class = aug_images(imgs_for_augm, 1)
            images = np.concatenate((images, np.array(augmented_class)), axis=0)
            labels = np.concatenate((labels, np.array(lbls_for_augm)), axis=0)
    return (images, labels)

X_train, y_train = augmentation(X_train, y_train)
After augmentation, our training dataset has the following shape.

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# (36256, 30, 30, 3) (7842, 30, 30, 3) (36256,) (7842,)

Let's check the data distribution again.

augmented_samples_distribution = cnt_img_in_classes(y_train)
diagram(augmented_samples_distribution)


Distribution diagram after augmentation

The graph shows that our set has become more balanced. Next, from the keras.utils package, we use the to_categorical method to convert the labels in y_train and y_test into one-hot encoding.

y_train = to_categorical(y_train, 43)
y_test = to_categorical(y_test, 43)
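The walkthrough stops at label encoding, but the imports at the top (Sequential, Conv2D, MaxPool2D, Dense, Flatten, Dropout) hint at the network that comes next. A minimal CNN for 30x30 RGB inputs and 43 classes might look like this; it is a sketch consistent with those imports, not necessarily the author's exact architecture:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout

model = Sequential([
    # Two convolution blocks extract edge and shape features from 30x30 RGB inputs.
    Conv2D(32, (5, 5), activation='relu', input_shape=(30, 30, 3)),
    Conv2D(32, (5, 5), activation='relu'),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPool2D(pool_size=(2, 2)),
    Dropout(0.25),
    # Classification head: one softmax output per sign class.
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(43, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# Training would then be a single call, e.g. (epoch count is illustrative):
# model.fit(X_train, y_train, epochs=15, validation_data=(X_test, y_test))
```

The categorical_crossentropy loss is the reason the labels were one-hot encoded with to_categorical in the step above.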

Maxim Sachkov

“I would prefer the Opel Eye. The Bavarian's electronic assistant resembles an overly protective grandmother carefully shielding her beloved grandson from every possible misfortune, while its counterpart from Rüsselsheim is like a young father who gives the child plenty of freedom and steps in only in dangerous situations. I prefer it when a person relies on himself rather than constantly expecting help from others, though even such people need practical backup at difficult moments.”

Useful little things

A holder with a GPS receiver attaches to the glass with 3M tape, which sticks very firmly. The kit nevertheless includes one spare strip, just in case.

I really like the magnetic mount used on all G-Tech X series DVRs. To remove the recorder, you just pull it with a little force; one hand is enough. But you can't turn the recorder so that the lens faces the interior: in that position the power is interrupted and the device simply does not work.

Date, time, license plate number and GPS coordinates are all stamped directly onto the video; you only need to set this up once and forget it. The Neoline G-Tech X77 footage can therefore be used as evidence in court. Just don't throw away the box and accompanying documentation; all of it can be useful in a lawsuit, if it comes to that.

Previously entered data and current settings are not lost even if you remove the recorder and take it home for several weeks, which matters in severe winter frost or when you want to keep the device out of the sun. The G-Tech X77 has a built-in supercapacitor, a kind of battery replacement, only more stable, at least according to Neoline. Without a direct connection to the car's on-board network or a power bank the device does not work, of course, but the supercapacitor is more than enough to preserve the settings.

The Neoline G-Tech X77 can monitor the car in parking mode. Even with the engine off, the built-in accelerometer registers vehicle movement or impacts, and the lens tracks movement in front of the car. In both cases the device starts an emergency recording. Such videos go into a special protected folder on the memory card, from which the files can be deleted only by a full format. You can also start an emergency recording manually with the large physical button on the left side.
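The accelerometer trigger can be sketched as a threshold check on the measured g-force. The threshold value and the logic below are illustrative assumptions; Neoline does not publish its firmware logic.

```python
def is_impact(ax, ay, az, threshold_g=1.5):
    """Flag an impact when acceleration deviates sharply from rest.

    At rest the magnitude of the acceleration vector is about 1 g
    (gravity alone); a jolt or collision pushes it well past the
    threshold, which should start an emergency recording.
    """
    magnitude = (ax**2 + ay**2 + az**2) ** 0.5
    return abs(magnitude - 1.0) > threshold_g

print(is_impact(0.0, 0.0, 1.02))  # False: car at rest, gravity only
print(is_impact(2.5, 0.3, 1.0))   # True: sharp jolt along the x axis
```

A real device would also debounce the signal over several samples to avoid triggering on a door slam or a passing truck's vibration.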

Parking mode is activated through the menu and works when the recorder is wired directly to the car's fuse box, which requires a separate accessory, the Neoline Fuse Cord X7, for 990 rubles. Keep in mind that in this mode the device actively drains the car's battery, so the battery should be healthy, not worn out, and well charged.
