Color pixels made of nanowires offer new paradigm for cameras

Digital cameras achieve color by using red, green, and blue Bayer color filters through which light passes on its way to image sensors, which then convert the light into electrical signals. This technology is widespread, but the filters have disadvantages: limited durability, a low absorption coefficient, and fabrication complexity.
 
In addition, the light absorbed by a color filter cannot be converted into photocurrent. As pixel densities continue to increase, recovering this otherwise wasted light becomes important for maximizing efficiency.
 
In the past few years, researchers have been investigating new ways to achieve color in digital cameras that don’t rely on conventional organic dye filters. In a new paper published in Nano Letters, a team of researchers from Harvard University in Cambridge, Massachusetts, and Zena Technologies Inc., in Topsfield, Massachusetts, has presented a new filter-free approach to color imaging.
 
The technique uses silicon nanowires with different radii to absorb specific wavelengths, and thus colors, of light and convert the light into photocurrent.
 
"Our nanowire-based approach performs color imaging without conventional color filters," coauthor Kenneth B. Crozier of Harvard University told Phys.org. "This has two major advantages. First, our approach simplifies the fabrication process.
 
Nanowire-based image sensor pixels with different color responses can be defined at the same time through a single lithography step. This means no additional materials or repeated deposition steps are needed for separating colors. Second, our approach opens the way to increase the efficiency of an image sensor.
 
Each nanowire captures light of a specific color, and converts it to photocurrent. If we add a substrate photodetector, we can capture the remainder of the spectrum. In this way, the image sensor can have higher efficiency, as photons would not be discarded by absorptive filters."
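The efficiency argument above can be sketched with a toy model. This is purely illustrative and not the authors' device physics: the photon counts, absorption fractions, and filter transmission below are invented numbers chosen to show the bookkeeping, namely that a filter-based pixel discards the light its filter absorbs, while a nanowire pixel's substrate photodetector collects whatever the wire does not.

```python
# Toy comparison (illustrative assumptions, not measured values):
# a Bayer-style pixel loses the bands blocked by its filter, while a
# nanowire pixel converts its target band in the wire and collects the
# remaining light in a substrate photodetector underneath.

# Hypothetical per-band photon counts reaching one pixel (arbitrary units).
incident = {"red": 100.0, "green": 100.0, "blue": 100.0}

def filter_pixel(target_band, incident, filter_transmission=0.9):
    """Filter-based pixel: only the target band passes the filter;
    the other bands are absorbed by the filter and lost."""
    return incident[target_band] * filter_transmission

def nanowire_pixel(target_band, incident, wire_absorption=0.9):
    """Nanowire pixel: the wire converts (most of) its target band to
    photocurrent; a substrate photodetector collects everything else."""
    wire_current = incident[target_band] * wire_absorption
    substrate_current = sum(v for b, v in incident.items() if b != target_band)
    substrate_current += incident[target_band] * (1 - wire_absorption)
    return wire_current, substrate_current

if __name__ == "__main__":
    for band in incident:
        f = filter_pixel(band, incident)
        w, s = nanowire_pixel(band, incident)
        print(f"{band:5s} pixel: filter collects {f:.0f}; "
              f"nanowire collects {w:.0f} (color) + {s:.0f} (substrate)")
```

Under these made-up numbers, each nanowire pixel delivers the same color signal as its filtered counterpart while the substrate recovers the photons a filter would have thrown away.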