Smart 3D Cameras

From MIT Technology Roadmapping
Revision as of 01:52, 20 November 2019

Work in Progress

Technology Roadmap Sections and Deliverables

The Smart 3D Camera roadmap is a level 2 roadmap as it enables the level 1 roadmaps for autonomous navigation of robots, drones and cars.

  • 2S3DCAM - Smart 3D Camera

Roadmap Overview

Smart 3D Cameras use a pair of identical optical imaging sensors, plus IR projectors in certain use cases, to capture stereo images of the environment. These images are processed to calculate the disparity and then extract depth information for all pixels. In addition to producing the depth map, the scene is segmented to extract objects of interest, which are then identified using trained neural nets. Note that this roadmap focuses on passive stereo vision cameras that DO NOT use structured light.

Smart3DCamera.png

Smart 3D Camera decomposition.jpg
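The disparity-to-depth step described above follows the standard pinhole stereo relation Z = f·B/d. The sketch below is illustrative only; the focal length, baseline, and disparity values are hypothetical and not taken from any camera on this roadmap.

```python
# Standard pinhole stereo relation: depth Z = f * B / d, with focal
# length f in pixels, baseline B in meters, and disparity d in pixels.
# All numbers here are hypothetical, for illustration only.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 10 cm baseline, 5 px disparity -> ~14 m
print(depth_from_disparity(700.0, 0.10, 5.0))
```

A larger baseline or focal length extends the usable range, while the smallest resolvable disparity step bounds the depth accuracy, which is why Range and Accuracy both appear as figures of merit for this camera.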

Design Structure Matrix (DSM) Allocation

Smart3DCam DSM.png

The 2S3DCAM roadmap is part of the larger company effort to develop an autonomous navigation stack as it enables 1ANAV.

Roadmap Model using OPM

Smart 3D Camera OPM.png

Figures of Merit

Figure of Merit | Units | Description
Million Disparity Estimations Per Second (MDE/s) | (10^6) px·Hz | Comparison metric defined as MDE/s = image resolution × disparities × frame rate
Power Consumption | W | Power consumed by the entire stereo camera and image processing pipeline to produce a depth map
Image resolution | px | Number of pixels in the captured image
Range | m | The maximum sensing distance
Accuracy | m | The measurement confidence in each depth data point
Frame rate | fps (Hz) | The scanning frame rate of the entire system
Depth Pixels | px | The number of data points in the generated depth map
Cost | $ | The commercial price for a customer, at volume
Energy Consumed per Depth Pixel (E_dpx) | W/(px·Hz) | Total power consumed by the sensing and processing pipeline divided by the disparity estimations per second

The Energy Consumed per Depth Pixel, E_dpx, is the total energy cost of acquiring and processing the image divided by the product of the image resolution (n_px), the number of disparities (n_d), and the frame rate (f): E_dpx = P / (n_px × n_d × f).

EnergyEquation.png
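As a minimal sketch of these two FOMs, assuming the definitions above (the example sensor figures are hypothetical):

```python
# MDE/s = image resolution * disparities * frame rate (in millions), and
# E_dpx = total pipeline power / (n_px * n_d * f). Example values below
# are hypothetical.

def mde_per_s(n_px: int, n_d: int, f_hz: float) -> float:
    """Million disparity estimations per second."""
    return n_px * n_d * f_hz / 1e6

def e_dpx(power_w: float, n_px: int, n_d: int, f_hz: float) -> float:
    """Energy per depth pixel, in W/(px*Hz)."""
    return power_w / (n_px * n_d * f_hz)

# Hypothetical 640x480 sensor, 64 disparities, 30 fps, 5 W pipeline
print(mde_per_s(640 * 480, 64, 30))      # 589.824 MDE/s
print(e_dpx(5.0, 640 * 480, 64, 30))
```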

Alignment with Company Strategic Drivers

# | Strategic Driver | Alignment and Targets
1 | To develop a compact, high-performance, low-power smart 3D camera that can detect objects in both indoor and outdoor environments | The 2S3DCAM roadmap will target the development of a passive stereo camera with onboard computing that has a sensing range of >20 m and a sensing speed of >30 fps, at an energy cost lower than 1 mW/px, in a 15 cm × 5 cm × 5 cm footprint.
2 | To enable autonomous classification and identification of relevant objects in the scene | The 2S3DCAM roadmap will enable AI neural nets to run onboard the camera to perform image classification and recognition.

Positioning of Company vs. Competition

StereoCam Comparison.png

Technical Model

The most important FOM is the Energy Consumed per Depth Pixel, E_dpx, which is the total energy cost of acquiring and processing the image divided by the product of the image resolution (n_px), the number of disparities (n_d), and the frame rate (f).

EnergyEquation.png

Since the image resolution and number of disparities are constants for a comparison, the relationship can be described as:

EnergyDiffe.png
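With n_px and n_d held constant, ranking candidate pipelines by E_dpx reduces to ranking them by power over frame rate. A small sketch (the candidate names and figures are hypothetical):

```python
# With resolution and disparity count fixed, E_dpx is proportional to
# P / f, so candidates can be compared on power per unit frame rate.
# All candidate figures below are hypothetical.

candidates = {
    "GPU pipeline":  {"power_w": 25.0, "fps": 90.0},
    "FPGA pipeline": {"power_w": 5.0,  "fps": 60.0},
}

ranked = sorted(candidates.items(), key=lambda kv: kv[1]["power_w"] / kv[1]["fps"])
for name, c in ranked:
    print(f"{name}: P/f = {c['power_w'] / c['fps']:.3f} W per fps")
```

In this toy comparison the FPGA pipeline ranks first despite its lower frame rate, because its power draw is proportionally smaller.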


The parameters that affect frame rate are the image resolution, the number of disparities, and the processor/image sensor technology. The curves below were generated empirically from the publications analyzed in this roadmap.

SpeedPower.png

The normalized model with three controllable parameters is shown in the tornado chart below. The choice of imaging sensor and processor has a significantly larger impact on power consumption, followed by the image resolution and then by the number of disparities.

Tornado.png
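A tornado chart of this kind can be produced with a one-at-a-time sensitivity sweep: vary each parameter between its bounds while holding the others at baseline, and record the swing in the output. The power model and parameter ranges below are hypothetical placeholders standing in for the empirical model, chosen so that the ordering matches the chart (sensor/processor choice first, then resolution, then disparities).

```python
# One-at-a-time sensitivity sweep behind a tornado chart. The model and
# all parameter ranges are hypothetical placeholders, not the empirical
# model from the roadmap.

def power_model(sensor_factor: float, n_px: float, n_d: float) -> float:
    # Toy model: power scales with the work done per frame, weighted by
    # a sensor/processor technology factor (lower = more efficient).
    return sensor_factor * n_px * n_d * 1e-8

baseline = {"sensor_factor": 1.0, "n_px": 1280 * 720, "n_d": 128}
ranges = {
    "sensor_factor": (0.05, 3.0),       # e.g. neuromorphic/ASIC vs. GPU
    "n_px": (640 * 480, 1920 * 1080),   # VGA to full HD
    "n_d": (64, 256),                   # disparity search range
}

swings = {}
for name, (lo, hi) in ranges.items():
    args_lo = dict(baseline, **{name: lo})
    args_hi = dict(baseline, **{name: hi})
    swings[name] = abs(power_model(**args_hi) - power_model(**args_lo))

# Largest swing first, as in a tornado chart
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: swing = {swing:.2f} W")
```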

This informs the variable selection in the morphological matrix below. The cells highlighted in green are favorable choices and the final choice is boxed in purple.

MorphMatrix.png

Financial Model


List of R&T Projects and Prototypes

1. Project Morphy (TRL3 → TRL7): Morphy is an ambitious R&D program to accelerate the technology maturity of neuromorphic sensors and computers for use in production-grade cameras. This project will license the two patents listed below and validate concepts from paper #2.

2. Project Edge (TRL6 → TRL9): Edge will develop methods to leverage FPGAs and ASICs to perform pixel computation closer to the sensor in lieu of power-hungry GPUs. This project will reproduce and improve upon the results from paper #1.

3. Project Nimbus (TRL6 → TRL9): Nimbus is an ambitious project to simplify ML algorithms and models so that they can be performant on embedded devices. This project will build upon ideas from paper #3.

These three projects can be classified as follows:

R d Portfolio.png

Key Publications, Presentations and Patents

Patents

  • Dawson et al. Neuromorphic Digital Focal Plane Array. US Pat Pending. US20180278868A1

This patent claims new techniques for creating imaging sensors that leverage the principle of neuromorphism to embed pixel processing directly onto the sensor. For a Smart 3D camera, this presents a disruptive option for two FOMs: reduced power consumption and increased frame rate.

Dawson et al.png

  • Bobda et al. Reconfigurable 3D Pixel-Parallel Neuromorphic Architecture for Smart Image Sensor. Pending. US20190325250A1

This patent also leverages neuromorphism, but instead of only tracking pixel changes, it embeds processing elements into different regions of the image sensor. With this technology, it becomes feasible to embed intelligent processing of shapes and features close to the image capture system. By leveraging image-sensor-embedded processing, an order-of-magnitude improvement in power efficiency and performance can be achieved.

Bobda et al.png

Publications

  • Michalik et al. Real time smart stereo camera based on FPGA-SoC. 2017. IEEE-RAS

This work presents a real-time smart stereo camera system implementing the full stereo processing pipeline in a single FPGA device. The paper introduces a novel memory-optimized stereo processing algorithm, "Sparse Retina Census Correlation" (SRCC), that combines two well-established window-based stereo matching approaches. The presented smart camera solution demonstrated real-time stereo processing of 1280×720 pixel depth images with 256 disparities on a Zynq XC7Z030 FPGA device at 60 fps. This approach is ~3x faster than the nearest competitor.

Michalik et al .png
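For comparison across systems, the reported operating point translates into the MDE/s figure of merit defined in the Technical Model section as follows:

```python
# Throughput reported above: 1280x720 depth maps, 256 disparities, 60 fps
n_px = 1280 * 720
n_d = 256
fps = 60
mde_s = n_px * n_d * fps / 1e6
print(f"{mde_s:,.1f} MDE/s")  # 14,155.8 MDE/s
```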

  • Andreopoulos et al. A Low Power, High Throughput, Fully Event-Based Stereo System. 2018. IEEE/CVF CVPR

This paper uses neuromorphic event-based hardware to implement stereo vision. This is the first time that an end-to-end stereo pipeline (image acquisition and rectification, multi-scale spatiotemporal stereo correspondence, winner-take-all, and disparity regularization) is implemented fully on event-based hardware. Using a cluster of TrueNorth neurosynaptic processors, the authors demonstrate the ability to process bilateral event-based inputs streamed live by Dynamic Vision Sensors (DVS) at up to 2,000 disparity maps per second, producing high-fidelity disparities that are in turn used to reconstruct, at low power, the depth of events produced by rapidly changing scenes. The system consumes ~200x less power, at 0.058 mW/pixel.

Andrepoulos et al.png

  • Shin et al. A 1.92mW Feature Reuse Engine based on inter-frame similarity for low-power object recognition in video frames. 2014. IEEE

This paper proposes a Feature Reuse Engine (FReE) to achieve low-power object recognition in video frames. Unlike previous works, the proposed FReE reuses 58% of the features from the previous frame by exploiting inter-frame similarity. The power consumption of the object recognition processor is reduced by 31% with the proposed FReE, which consumes only 1.92 mW in a 130nm CMOS technology. This has potential to reduce power consumption for smart stereo cameras.

Technology Strategy Statement

Develop smart 3D cameras that cost less than $500 and that can sense and classify the world at greater than 720p resolution and 30 fps, at a power consumption below 1 mW per depth pixel.