Software and Hardware Composition of Augmented Reality Technology in a Surveillance System

An augmented reality system needs display technology, tracking and positioning technology, interface and visualization technology, and calibration technology.
Tracking and positioning technology and calibration technology jointly detect position and orientation and report the data to the AR system, so that the coordinates of the tracked object in the real world are unified with its coordinates in the virtual world, achieving a seamless combination of the virtual object and the user's environment.
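The coordinate unification described above amounts to applying a calibration transform to every tracked pose. Below is a minimal Python sketch, assuming a known 4x4 calibration matrix T_virtual_from_real; the names and numbers are illustrative, not taken from the original system.

```python
# Minimal sketch: mapping a tracked real-world pose into the virtual-world frame
# via an (assumed known) calibration transform.
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a position vector and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def to_virtual_world(T_virtual_from_real, tracked_position, tracked_rotation):
    """Express a pose measured by the tracker (real world) in virtual-world coordinates."""
    T_real = pose_to_matrix(tracked_position, tracked_rotation)
    return T_virtual_from_real @ T_real

# Example: identity rotation, tracker reports the object 1 m in front of the sensor.
T_cal = np.eye(4)                                  # calibration result (assumed)
T_obj = to_virtual_world(T_cal, np.array([0.0, 0.0, 1.0]), np.eye(3))
print(T_obj[:3, 3])                                # object position in the virtual world
```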
To generate accurate positioning, an augmented reality system requires extensive calibration. The measured values include camera parameters, the viewing area, sensor offsets, object positions, and deformations.
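For the camera-parameter part of that calibration, a common offline approach is chessboard calibration. The sketch below uses OpenCV's standard routines; the board size and image paths are assumptions for illustration, not details from the system described here.

```python
# Hedged sketch of offline camera calibration with a chessboard target.
import glob
import cv2
import numpy as np

board = (9, 6)                                     # inner corners per row/column (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):              # hypothetical calibration images
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

# Recovers the camera matrix (focal lengths, principal point) and distortion coefficients.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```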
A fixed-camera augmented reality system consists mainly of an image acquisition system and an observation display system. Built as a single telescopic-tube structure, it is relatively bulky. To reduce the system's volume and weight, the two parts are separated in the design: the camera system remains fixed on a bracket, while the observation system becomes a handheld device or a head-mounted display. This eliminates the connecting and supporting parts of the telescopic-tube structure, and the weight of the bracket is greatly reduced. Because of the lighter structure, the whole system is easier to maintain and use. In the system, two cameras shoot the real scene, and stereoscopic imaging combined with virtual models produces an augmented reality effect with stereoscopic vision, which greatly improves the practicality of the system and makes it well suited to security monitoring.
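How the dual-camera frames and the virtual models might be combined into a stereoscopic image can be sketched as a per-eye alpha blend followed by side-by-side packing. The frame and overlay sources here are placeholders, and the actual fusion pipeline may differ.

```python
# Sketch: fuse each real camera frame with the rendered virtual layer for the same eye,
# then pack the two eyes side by side for a stereoscopic display.
import numpy as np

def fuse(real_frame, virtual_rgba):
    """Alpha-blend a rendered virtual layer (RGBA, floats in 0..1) over a real frame."""
    alpha = virtual_rgba[..., 3:4]
    return (1.0 - alpha) * real_frame + alpha * virtual_rgba[..., :3]

def stereo_composite(left_real, right_real, left_virtual, right_virtual):
    left = fuse(left_real, left_virtual)
    right = fuse(right_real, right_virtual)
    return np.concatenate([left, right], axis=1)   # side-by-side stereo output
```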
Hardware and software of the surveillance augmented reality system
A motorized mount adjusts the angle of the camera, while a tracker fixed to the user's head measures the observer's head movement in real time.
First, the hardware system
Computer. The computer is the brain of the whole system; all images and data converge here. It therefore performs a large number of calculations, including processing the inertial tracker's measurements, controlling the follow-up motor platform, building the virtual world, controlling the virtual camera, generating stereoscopic images, and realizing the augmented reality effect.
Head-mounted display. The head-mounted display is the system's display output device: the computer-rendered stereoscopic augmented reality image is output to it for the user to observe, so it is also the device in the system with the most direct contact with the observer. The user's sense of immersion depends to a large extent on the imaging quality and fit of the head-mounted display, not only on the augmented reality effect rendered by the computer. The head-mounted display must therefore ensure that the output image is not distorted while keeping the unit as light and thin as possible.
Inertial tracker. It measures the observer's head movements and sends the data to the computer in real time. To keep every device matched with the virtual camera, all devices share the inertial tracker's measurement results, so the tracker is effectively the main control element of the system, and its accuracy directly affects the accuracy of the entire system.
Follow-up motor platform. This platform performs the system's real-scene acquisition and consists of a dual-camera system, a follow-up platform, and a drive motor. The dual-camera system is fixed to the follow-up platform and moves with it, while the drive motor rotates the platform. After the computer obtains the observer's head movement data, it analyzes and calculates the direction in which the camera system needs to shoot and sends a signal to the motor; the motor then drives the follow-up platform to rotate as required, turning the camera system to change the image acquisition direction.
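A minimal sketch of that head-to-motor mapping follows, with hypothetical angle limits standing in for the platform's real mechanical range.

```python
# Sketch: convert head yaw/pitch from the inertial tracker into target angles
# for the pan/tilt motors of the follow-up platform, clamped to assumed limits.
def head_pose_to_motor_targets(yaw_deg, pitch_deg, yaw_limit=170.0, pitch_limit=45.0):
    clamp = lambda value, limit: max(-limit, min(limit, value))
    return clamp(yaw_deg, yaw_limit), clamp(pitch_deg, pitch_limit)

# Example: the observer looks 30 degrees right and 10 degrees up.
pan_target, tilt_target = head_pose_to_motor_targets(30.0, 10.0)
```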
Second, the software system
As the data processing center of the system, the computer processes the relevant data, calculates the rotation speed and travel of the motor, sends control signals to the platform, and drives the real cameras on the platform to change their shooting direction.
At the same time, the follow-up platform feeds its current position back to the computer as the reference for the computer's next calculation, forming closed-loop control.
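One simple way to realize such closed-loop control is a proportional controller that drives the platform toward the head direction using the fed-back position. The gain and step limit below are illustrative assumptions, not the system's actual parameters.

```python
# Simplified closed-loop sketch: each cycle, the platform reports its angle, the
# computer compares it with the target (head direction) and commands a bounded step.
def motor_command(target_angle, reported_angle, gain=0.8, max_step=5.0):
    error = target_angle - reported_angle
    step = max(-max_step, min(max_step, gain * error))
    return reported_angle + step          # next angle to drive the platform toward

angle = 0.0
for _ in range(10):                       # each iteration: command, then read feedback
    angle = motor_command(target_angle=30.0, reported_angle=angle)
```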
To keep the final virtual image matched, the computer must adjust the pose of the virtual camera while controlling the follow-up motor platform, so that the virtual camera's shooting angle always agrees with that of the real cameras. Each frame captured by the camera system is transmitted to the computer, where it is rendered and fused with the virtual scene to generate an augmented reality stereoscopic image that is output to the head-mounted display for the user to view.
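Keeping the virtual camera matched to the real cameras can be sketched as copying the platform's reported orientation to the virtual camera before each frame is rendered. The VirtualCamera class here is a hypothetical stand-in for whatever rendering engine the system actually uses.

```python
# Sketch: the virtual camera always shoots in the same direction as the real camera pair.
class VirtualCamera:
    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

    def set_orientation(self, yaw, pitch):
        self.yaw, self.pitch = yaw, pitch

def sync_virtual_camera(virtual_cam, platform_yaw, platform_pitch):
    # Apply the follow-up platform's reported orientation before rendering the frame.
    virtual_cam.set_orientation(platform_yaw, platform_pitch)
```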
In the above process, the tasks performed by the software system fall into two phases: offline and real-time. In the offline phase, the real cameras are calibrated and the distortion of the head-mounted display is computed to generate a distortion-correction map.
The real cameras are calibrated in order to build the virtual world and set up the virtual scene. The distortion-correction map is mainly used to adjust the images to be shown on the head-mounted display so that they are given barrel distortion before display; after passing through the pincushion distortion of the head-mounted display's optics, they return to their normal proportions for the user to view. Moreover, once the distortion-correction map is available, the image correction can be performed on the GPU, which greatly reduces the burden on the CPU and increases the speed of the system.
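A hedged sketch of how such a distortion-correction map could be precomputed: a radial barrel pre-distortion is stored as a per-pixel remap so that the display's pincushion distortion cancels it at viewing time. The coefficient k is an assumed lens parameter; in practice it would come from the head-mounted display calibration mentioned above.

```python
# Sketch: build a remap that samples further from the image center as radius grows,
# compressing the periphery of the output (barrel distortion). Computed once, offline.
import numpy as np
import cv2

def barrel_map(width, height, k=0.22):
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    x = (xs - width / 2) / (width / 2)        # normalize to [-1, 1] about the center
    y = (ys - height / 2) / (height / 2)
    r2 = x * x + y * y
    scale = 1.0 + k * r2                      # assumed radial model, not measured optics
    map_x = (x * scale) * (width / 2) + width / 2
    map_y = (y * scale) * (height / 2) + height / 2
    return map_x, map_y

map_x, map_y = barrel_map(1280, 720)
frame = np.zeros((720, 1280, 3), np.uint8)    # stand-in for a rendered frame
# At run time (ideally on the GPU), each frame is warped with the stored map:
corrected = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```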
During real-time operation, the system must, on the one hand, track the user's head movement and adjust the shooting direction of the real and virtual cameras; on the other hand, it must collect the camera images, perform rendering and image fusion, and finally output the result for the user to view. These two parts of the software are relatively independent and therefore run in different threads.
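The two relatively independent parts can be organized as two threads, as in the simplified sketch below; the loop bodies are placeholder comments rather than the system's actual worker functions.

```python
# Sketch: one thread tracks the head and steers the follow-up platform, the other
# captures, renders, fuses, and outputs frames. Both loop until shutdown.
import threading
import time

running = True

def tracking_loop():
    while running:
        # read the inertial tracker, command the follow-up platform, sync the virtual camera
        time.sleep(0.005)

def render_loop():
    while running:
        # grab camera frames, render the virtual scene, fuse, warp, send to the display
        time.sleep(0.016)

threads = [threading.Thread(target=tracking_loop), threading.Thread(target=render_loop)]
for t in threads:
    t.start()
time.sleep(0.1)        # run briefly for demonstration
running = False
for t in threads:
    t.join()
```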
At present, augmented reality has not yet been popularized and developed on a large scale, but its scope of application keeps widening, gradually extending from industry to medical care, entertainment, interaction, games, and other fields. Many security companies have already integrated video surveillance technology with AR effectively, and augmented reality technology will play a significant role in security monitoring.
