As any driver knows, accidents can happen in the blink of an eye — so when it comes to the camera system in autonomous vehicles, processing time is critical. The time it takes for the system to snap an image and deliver the data to the microprocessor for image processing could mean the difference between avoiding an obstacle and getting into a major accident.

In-sensor image processing, in which important features are extracted from raw data by the image sensor itself rather than by a separate microprocessor, can speed up visual processing. To date, demonstrations of in-sensor processing have been limited to emerging research materials that are, at least for now, difficult to incorporate into commercial systems.

Now, researchers at Harvard University have developed the first in-sensor processor that could be integrated into commercial silicon imaging sensor chips, known as complementary metal-oxide-semiconductor image sensors. These chips are used in nearly all commercial devices, including smartphones, that need to capture visual information.

The study was supported in part by the U.S. National Science Foundation and is published in Nature Electronics.

“This work is an excellent example of the translation of cutting-edge research to application,” said Tomasz Durakiewicz, a program director in NSF’s Directorate for Mathematical and Physical Sciences. “The in-sensor programmable image processing saves costs and time and provides the computing power needed for interactions of machines with visual images.”

Said Harvard’s Donhee Ham, senior author of the paper, “Our work can harness the mainstream semiconductor electronics industry to rapidly bring in-sensor computing to a wide variety of real-world applications.”

Ham and his colleagues developed a silicon photodiode array. Commercially available image sensing chips also have a silicon photodiode array to capture images, but the team’s photodiodes are electrostatically doped, meaning that the sensitivity of individual photodiodes, or pixels, to incoming light can be tuned by voltages. An array that connects multiple voltage-tunable photodiodes together can perform an analog version of the multiplication and addition operations central to many image processing pipelines, extracting the relevant visual information as soon as the image is captured.

“These dynamic photodiodes can concurrently filter images as they are captured, allowing for the first stage of vision processing to be moved from the microprocessor to the sensor itself,” said Houk Jang, a co-first author of the paper.

The silicon photodiode array can be programmed into different image filters to remove unnecessary details or noise for various applications. An imaging system in an autonomous vehicle, for example, may call for a high-pass filter to track lane markings, while other applications may call for a blurring filter for noise reduction.
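To illustrate the two filter types mentioned, here is a small NumPy sketch using standard textbook kernels (the kernel values are conventional choices, not taken from the paper): a Laplacian high-pass kernel that responds strongly to edges such as lane markings, and a box-blur kernel that averages away pixel noise.

```python
import numpy as np

# Textbook kernels, not values from the paper.
HIGH_PASS = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)  # Laplacian edge detector
BOX_BLUR = np.full((3, 3), 1.0 / 9.0)              # uniform local average

def apply_filter(image, kernel):
    """Valid-mode 2D cross-correlation (equivalent to convolution
    here, since both kernels are symmetric)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

# A bright vertical stripe on a dark background, like a lane marking.
image = np.zeros((5, 5))
image[:, 2] = 1.0

edges = apply_filter(image, HIGH_PASS)  # peaks along the stripe
smooth = apply_filter(image, BOX_BLUR)  # stripe energy spread out
```

Running this, `edges` has its largest values along the stripe (the high-pass filter keeps the lane marking and suppresses flat regions), while `smooth` spreads the stripe’s intensity across neighboring pixels.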

“Looking ahead, we foresee the use of this silicon-based in-sensor processor not only in machine vision applications, but also in bio-inspired applications, wherein early information processing allows for the co-location of sensor and compute units, like in the brain,” said Henry Hinton, a co-first author of the paper.

Next, the team aims to increase the density of photodiodes and integrate them with silicon integrated circuits.

“By replacing the standard nonprogrammable pixels in commercial silicon image sensors with the programmable ones developed here, imaging devices can intelligently trim out unneeded data, thus [increasing efficiency] in both energy and bandwidth to address the demands of the next generation of sensory applications,” said Jang.
