
Nodar innovates stereo vision calibration for autonomous vehicles

Yusin Hu, DIGITIMES Asia, Taipei

Nodar COO and co-founder Brad Rosen. Credit: Nodar

Stereo vision, first used as far back as the late 1800s, is now entering the autonomous driving space. Nodar, a Boston-based software company, is developing an algorithm that aligns camera measurements in software. Unlike traditional stereo vision, which installs two cameras close together on a rigid metal beam to maintain alignment, Nodar's software maintains alignment through signal processing, given fast GPU processors.

"Stereo vision systems are very sensitive to the relative alignment between cameras", said Brad Rosen, COO and co-Founder at Nodar. He added that "even a misalignment of a hundredth of a degree can cause erroneous measurement. This is why legacy stereo vision requires a rigid beam securing the cameras. However, the distance these systems can 'see' is limited to about 50 meters because the cameras must be positioned very close together on the beam."

Nodar's Hammerhead stereo vision system, named after the hammerhead shark for the wide spacing between its eyes, has many advantages over lidar systems for autonomous driving, according to Rosen. First, cameras are much lower-cost, have higher resolution, and have longer lifetimes than lidar, making them suitable for the mass-produced vehicle market. With Hammerhead, the cameras can be mounted independently on the vehicle and far apart, enabling the system to "see" far ahead, up to 1,000 meters. Hammerhead can also work with any kind of camera.

With Nodar's system, online calibration takes roughly 3 TOPS of processing power and runs on every frame, in real time. The system, which was demonstrated at CES 2023 in early January, can produce 10 to 20 frames per second with up to five million pixels per frame.
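
For a rough sense of what those figures imply, the arithmetic below simply combines the quoted numbers; it is illustrative, not a published Nodar compute budget:

```python
# Rough compute budget implied by the stated figures (illustrative arithmetic only).
calib_ops_per_s = 3e12       # ~3 TOPS quoted for online calibration
fps = 20                     # upper end of the stated frame rate
pixels_per_frame = 5e6       # up to five million pixels per frame

pixel_rate = fps * pixels_per_frame          # ~1e8 pixels per second
ops_per_pixel = calib_ops_per_s / pixel_rate # ~30,000 calibration ops per pixel
print(f"{pixel_rate:.0e} pixels/s, ~{ops_per_pixel:,.0f} calibration ops per pixel")
```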

The Nodar engine takes raw images from two cameras as input and outputs high-density 3D data in real time, while dynamically adjusting the cameras' intrinsic and extrinsic parameters.
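
For readers unfamiliar with that pipeline, the sketch below shows a generic two-camera depth chain in OpenCV: rectify, match, and reproject to 3D. It is not Nodar's engine; it assumes the intrinsics (K1, K2, dist1, dist2) and extrinsics (R, T) are already known, whereas Nodar's contribution is keeping those parameters calibrated in software on every frame.

```python
import cv2
import numpy as np

# Minimal sketch of a generic stereo depth pipeline (not Nodar's engine).
# Inputs: two raw images plus camera intrinsics and extrinsics.
def depth_from_pair(img_left, img_right, K1, dist1, K2, dist2, R, T):
    h, w = img_left.shape[:2]

    # Rectify both views so epipolar lines become horizontal rows.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, dist1, K2, dist2, (w, h), R, T)
    map_l = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, (w, h), cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, (w, h), cv2.CV_32FC1)
    rect_l = cv2.remap(img_left, *map_l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_right, *map_r, cv2.INTER_LINEAR)

    # Semi-global block matching; production systems use denser, faster matchers.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
    disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0

    # Reproject disparity into a dense 3D point cloud (X, Y, Z per pixel).
    return cv2.reprojectImageTo3D(disparity, Q)
```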

"Currently, Nodar Hammerhead works with Nvidia GPUs and the system is expected to output 20 frames per second at five megapixels per frame. " Rosen said.

The software-enabled alignment will allow carmakers to place the cameras more freely in a car – whether on the car roof, on the A-pillars, by the headlights, or in the sideview mirrors.

Rosen said this software-based processing could not have been achieved ten years ago; it depends on the ultra-fast GPU processors available today. He added that published research conducted by Daimler and others showed that stereo vision performed best among optical systems in bad weather conditions (see Note 1 for reference).

Nodar Hammerhead DevKit launch

At CES this year, Nodar launched the Hammerhead DevKit, which uses a Lucid Triton 5.4MP camera and an NVIDIA Jetson Orin processor. Rosen said the company has already been working with European auto suppliers and OEMs for several years on delivering the product in the automotive sector, and it hopes to announce new partners this year. The Hammerhead system is most likely to be deployed on L4 autonomous trucks first.

The company has completed many proofs of concept (POCs) in other markets that could advance toward operational autonomy: air taxis, heavy equipment, construction cranes, autonomous ferries, personal boating, and autonomous farming.

Rosen said the system could unlock the mass market for self-driving vehicles and accelerate the adoption of up to 250 million L3 autonomous vehicles in the next 10 years.

Most importantly, since the value lies in the software, the Nodar system is agnostic to the cameras and the compute platform, Rosen said. There are still a few requirements the hardware must meet, but automotive-grade cameras and GPUs can run the system.

Nodar's wide-baseline stereo vision allows the system to see up to 1,000 meters ahead. Detecting obstacles on the highway from as far away as possible is vital for autonomous trucking fleets. Rosen said being able to detect dangerous objects at long range is a critical requirement for the L4 autonomous trucking industry because truck crashes are usually fatal and must be avoided.

As long as obstacles fall within the cameras' overlapping fields of view, the system can detect objects as small as 10 centimeters from 150 meters away or, for example, an overturned motorcycle at 350 meters on the highway.
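
A back-of-the-envelope check of how many pixels such targets cover helps put those claims in context; the sensor resolution and field of view below are assumptions for illustration, not published Hammerhead specifications:

```python
import numpy as np

# How many pixels a target subtends at a given range, for an assumed sensor.
sensor_px = 2880     # assumed horizontal resolution of a ~5.4 MP sensor (e.g. 2880 x 1860)
hfov_deg = 30.0      # assumed narrow, telephoto-style horizontal field of view

def pixels_on_target(size_m, range_m):
    angle_deg = np.degrees(np.arctan2(size_m, range_m))   # angular size of the target
    return angle_deg * (sensor_px / hfov_deg)             # pixels per degree times angle

print(f"10 cm object at 150 m: ~{pixels_on_target(0.10, 150):.1f} px")
print(f"~1.2 m motorcycle at 350 m: ~{pixels_on_target(1.2, 350):.1f} px")
```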

Rosen stressed that the cameras can be mounted anywhere on the car, which makes the solution highly flexible for automobile design.

Note 1: Published paper: "Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios," 2019.