
Anyone who has tried to park on a busy city street knows that drivers can use all the help they can get. Parking spaces are often tight, and a mistake can mean trading paint and insurance information with the vehicles on either side. Automakers keep introducing more advanced technologies to make parking easier.
As Get My Parking reports, parking aids are a fairly recent invention: they first reached the mass market in 2003 with the Toyota Prius. The technology starts with electromagnetic and ultrasonic parking-distance sensors mounted on the front and rear bumpers, letting drivers play a game of hot and cold to get into tight spaces. The next key development was mounting cameras on the outside of the vehicle, usually on the rear bumper, doing away with the annoying beeps of older sensor-only systems. This feature effectively lets the driver see what is around the outside of the vehicle rather than just hearing about it.
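That "hot and cold" feedback loop is easy to picture in code. Here is a minimal Python sketch (the sensor readings are simulated, not a driver for any real hardware) in which the beeps speed up as the measured distance shrinks:

```python
import time

def beep_interval(distance_cm: float) -> float | None:
    """Map distance to the pause between beeps: silence beyond ~150 cm,
    slow beeps far away, rapid beeps as the obstacle gets close."""
    if distance_cm > 150:
        return None              # out of range: stay quiet
    return max(0.05, distance_cm / 100.0)

# Simulated ultrasonic readings: backing up from 2 m to about 20 cm.
for distance_cm in range(200, 15, -15):
    interval = beep_interval(distance_cm)
    if interval is not None:
        print(f"\a{distance_cm} cm")   # \a rings the terminal bell
        time.sleep(interval)
```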
Setting aside automated parking systems, the natural evolution of the rear camera is the bird's-eye view camera, which Nissan first introduced in 2007. The 360-degree bird's-eye view camera lets drivers see the vehicle and its surroundings from above, as if they were looking down from a few feet above the roof. Obviously, there is no GoPro mounted on a selfie stick on top of the car, nor a drone permanently hovering over it, so how exactly does this technology work?
Texas Instruments is one manufacturer and supplier of such systems. In a white paper, the company explains that these advanced driver-assistance systems use a set of 180-degree cameras, a system-on-chip (SoC) processor, and some clever programming to stitch together images from four to six cameras mounted on the front bumper, rear bumper, and sides of the vehicle. Each camera captures an ultra-wide view of its surroundings, and algorithms align the geometry in overlapping images from adjacent cameras, effectively stitching them together.
Collecting and combining the data is only part of the equation, though. Before stitching the images from the different cameras together, the SoC must correct the distortion and artifacts produced by each camera's fisheye lens, then apply a perspective transformation so every image looks more like a photo taken from above. After correction, the system adjusts brightness, white balance, and color balance to compensate for differences between the cameras. This ensures that the final image reaching the driver through the infotainment system looks cohesive, with no visible seams. The result is a seemingly magical view of the car's surroundings, but with a gap where the vehicle itself should be.
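TI's white paper describes these stages at a block-diagram level; the snippet below is a minimal Python/OpenCV sketch of the same idea, not TI's implementation. The intrinsics K and D, the ground-plane point correspondences, and the camera frame are hypothetical placeholders, and the photometric matching step is reduced to simple averaging in the overlap regions:

```python
import cv2
import numpy as np

CANVAS = (600, 600)  # width, height of the composite bird's-eye image

def undistort_fisheye(frame, K, D):
    """Step 1: remove fisheye lens distortion using the camera's
    (per-vehicle calibrated) intrinsic matrix K and coefficients D."""
    return cv2.fisheye.undistortImage(frame, K, D, Knew=K)

def warp_to_topdown(frame, src_pts, dst_pts):
    """Step 2: perspective-transform the flattened image onto the ground
    plane. src_pts are four image points known to lie on the ground;
    dst_pts are where those points belong in the top-down canvas."""
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    return cv2.warpPerspective(frame, M, CANVAS)

def blend(views):
    """Step 3: stitch the warped views. Pixels seen by one camera are
    copied; pixels where adjacent cameras overlap are averaged so no
    hard seam remains."""
    acc = np.zeros((CANVAS[1], CANVAS[0], 3), np.float32)
    hits = np.zeros((CANVAS[1], CANVAS[0], 1), np.float32)
    for v in views:
        mask = (v.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        acc += v.astype(np.float32) * mask
        hits += mask
    return (acc / np.maximum(hits, 1)).astype(np.uint8)

# Placeholder calibration for a single camera; real values come from
# per-vehicle R&D, as the article notes.
K = np.array([[420.0, 0.0, 640.0], [0.0, 420.0, 360.0], [0.0, 0.0, 1.0]])
D = np.array([[0.08], [-0.02], [0.0], [0.0]])
src = np.float32([[400, 500], [880, 500], [1200, 700], [80, 700]])
dst = np.float32([[150, 0], [450, 0], [450, 250], [150, 250]])

frame = np.zeros((720, 1280, 3), np.uint8)  # stand-in for a camera frame
top_down = blend([warp_to_topdown(undistort_fisheye(frame, K, D), src, dst)])
```

A production system runs this pipeline for all four to six cameras on every frame, which is why a dedicated SoC handles it rather than the infotainment CPU.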
The system fills this gap by overlaying a rendering of the vehicle onto the resulting video feed. The system is not plug-and-play; every vehicle design requires some research and development to tune the algorithm's parameters. On larger vehicles in particular, it is not always easy to judge the vehicle's size on the in-cabin screen, so manufacturers often add perimeter lines and guides that show the projected path based on the steering wheel's orientation.
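Those bending guide lines come from straightforward geometry. As a rough illustration (not any manufacturer's implementation), the kinematic bicycle model turns a steering angle and wheelbase into a turning radius, and the overlay traces arcs of that radius; the dimensions below are hypothetical:

```python
import numpy as np

WHEELBASE_M = 2.7   # distance between axles (hypothetical)
TRACK_M = 1.6       # distance between left and right wheels (hypothetical)

def guide_lines(steer_deg: float, length_m: float = 4.0, steps: int = 40):
    """Return two guide lines as (x, y) points in metres behind the rear
    axle, using the kinematic bicycle model: the car pivots around a
    point offset R = wheelbase / tan(steering angle) to one side."""
    s = np.linspace(0.0, length_m, steps)            # distance travelled
    if abs(steer_deg) < 0.5:                         # ~straight wheel
        return [[(sign * TRACK_M / 2, -d) for d in s] for sign in (-1, 1)]
    R = WHEELBASE_M / np.tan(np.radians(steer_deg))  # turning radius
    theta = s / R                                    # angle swept
    lines = []
    for sign in (-1, 1):                             # the two wheel paths
        r = R + sign * TRACK_M / 2                   # this wheel's radius
        lines.append(list(zip(R - r * np.cos(theta), -r * np.sin(theta))))
    return lines

# e.g. wheel turned 15 degrees: the guides bend toward the turn
for line in guide_lines(15.0):
    print(line[0], line[-1])   # start and end of each projected arc
```

Scaling the metre coordinates into screen pixels and drawing them over the composite view produces the familiar curved guides that sweep as the driver turns the wheel.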