Computing and Optics Team Up over Metasurfaces
Jumping spiders excel at hunting. They sneak up on their prey—an ant or a fly—and pounce, leaping several times their own body length to land directly on their target. And they manage to calculate the distance of their jumps with a brain the size of a poppy seed. They need little brain power because of the way their eyes work, so scientists are turning to new types of optics to mimic what those eyes can do. The hope is that the right optical design can reduce the computational burden in sensor systems, paving the way for tiny, low-power image processing for miniature drones or self-driving cars.
The spiders perform depth calculations using a technique called "depth from defocus." Their eyes hold multiple layers of retinas, each capturing an image of the target that is only partially in focus. Comparing the different amounts of defocus from one image to another, they can calculate depth with relatively little computational power.
Machines can also derive depth from defocus, but that generally entails taking a photo, changing the aperture of the lens, and taking another, then measuring the difference in defocus and performing calculations. Not only does the equipment involve the complexity of moving parts, it also allows time to pass between shots, slowing the response and allowing the possibility that the image will be blurred by motion.
Researchers at Harvard University wanted to see if they could do what a simple spider could, but stacking semitransparent artificial retinas on top of each other didn't seem practical. Instead, they turned to a metalens, an optic studded with carefully engineered nanostructures that provides fine-grained control of light waves. Federico Capasso, professor of applied physics at Harvard, has already shown he can make a metalens capable of performing multiple functions at once, so he and his team created one that could take two images of the same scene simultaneously, each with a different defocus. Meanwhile, Todd Zickler, a professor of computer science at Harvard, developed an image-processing algorithm that could compare the defocus in each pixel with relatively little computing power. Combining the two technologies—metasurface and algorithm—allowed the researchers to emulate the spider's ability.
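The per-pixel comparison of defocus can be illustrated in a few lines of code. The sketch below is an illustration of the general depth-from-defocus idea, not the Harvard team's actual algorithm: it compares local sharpness between two images of the same scene captured with different amounts of blur.

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding (a stand-in for optical defocus)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def local_contrast(img, k=2):
    """Per-pixel variance in a (2k+1) x (2k+1) window, a crude sharpness cue."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1].var()
    return out

def relative_depth(img_a, img_b, eps=1e-9):
    """Ratio near 1 where img_a is the sharper image, near 0 where img_b is."""
    ca, cb = local_contrast(img_a), local_contrast(img_b)
    return ca / (ca + cb + eps)
```

Mapping that relative-sharpness ratio to a physical distance requires calibrating against the lens's blur-versus-depth behavior, which is where having two simultaneous, differently defocused images from a single metalens pays off.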
The metalens provides the raw data needed for the simple algorithm to be useful, says Capasso, and that's important if the goal is to make a depth-sensing camera that can operate in a small space. "You want to drive your power consumption down," he says, and without a metalens, "the kind of calculation you need to get depth sensing takes a lot of resources in terms of GPU."
Metasurfaces have been attracting a lot of attention in the photonics community in recent years. They're capable of controlling the direction and focus of light beams, changing the light's phase, and even altering polarization. They accomplish these feats thanks to nanometer-scale structures built on a surface, with the size, shape, orientation, spacing, and dielectric constant of those structures tuned to create the desired effects.
Some of the most exciting developments in metasurfaces lie at the intersection of optics and computing. Metasurfaces could lift some of the computational burden from applications with size and power constraints, such as virtual reality headsets and autonomous driving systems, by performing mathematical operations based on light. They might both aid in and benefit from the processes involved in fabricating computer chips. And computational advancements, such as neural networks, are helping to move the design of metasurfaces forward.
Finding the Edge
Researchers at AMOLF, a scientific institute in the Netherlands, recently showed they could use a metasurface to perform a mathematical operation that allowed them to do edge detection, an important step in image recognition. Their metasurface consisted of a sheet of sapphire less than half a millimeter thick, studded with silicon nanopillars that were 206 nm thick, 142 nm tall, and 300 nm apart. They also made a miniature copy of Vermeer's painting Girl with a Pearl Earring out of chromium dots on a transparent sheet.
They placed the metasurface atop a CCD detector and shone red laser light through the sheet. The metasurface's optical modes were tuned to resonate with the laser light in such a way that the low spatial frequencies of the image, its smooth, slowly varying regions, were filtered out, while the high spatial frequencies that define sharp features passed through. That eliminated the details of the girl's face but clearly showed her outline.
The metasurface effectively performed a mathematical derivative, says Andrea Cordaro, a PhD student at AMOLF and an author of the paper. While a computer might take several milliseconds to perform the operation, depending on the size of the image and the power of the computer, this calculation happened in the time it took for light to pass through the thickness of the metasurface—basically instantaneously, he says. And the metasurface itself requires no power to achieve that. If such a setup were used in an imaging system for an autonomous vehicle or for a virtual reality headset, it could cut down on power demands and speed up processing time, making for smaller, less expensive systems. While the initial demonstration was performed on a one-dimensional image, Cordaro says the team has since extended the work to 2D.
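In digital form, derivative-based edge detection corresponds to convolving the image with a second-derivative (Laplacian) kernel. The sketch below is the conventional digital computation, shown for comparison with what the metasurface does optically:

```python
import numpy as np

def laplacian_edges(img):
    """Discrete 2D Laplacian (second spatial derivative).
    Flat regions map to zero; only abrupt intensity changes survive,
    which is why the output of such a filter looks like an outline."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
```

On a photograph, this zeroes out smooth areas such as the face and keeps the outline; the optical version produces an equivalent result in the time it takes light to cross the metasurface.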
The metastructure that can solve integral equations with waves, demonstrated by the Engheta group. Credit: Eric Sucar/University of Pennsylvania
He's also collaborating with Nader Engheta, a professor of electrical engineering, physics, and materials science at the University of Pennsylvania. In 2014, Engheta proposed that he could use metasurfaces to perform mathematical calculations, creating an optical version of an analog computer. Last year, Engheta and his team demonstrated that concept, building photonic structures that can solve a general integral equation based on the waveforms that pass through them.
Engheta describes the device as looking like Swiss cheese. It consists of a block of polystyrene, a dielectric material, with a specific distribution of air holes. The metasurface encodes an equation, much like the circuit of an old computer in the predigital days. When a wave with a certain distribution of phase and intensity passes through the system, the metamaterial alters it in a programmed way, and the equation produces a new distribution of phase and intensity as a result. Changing the wave changes the input, so the same equation will produce a different output.
For instance, if you wanted to model the acoustics of a concert hall, you could create an equation that would represent the geometry of the hall. The incoming wave would carry information about the volume and location of the instruments. The output would tell you how loud the music would be at a given point in the hall.
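The wave-based solver has a simple numerical analogue. Discretized, an integral equation of the second kind can be written g = f + K·g, and the recirculation of waves through the structure plays the role of fixed-point iteration. The sketch below is a toy numerical counterpart, not a model of the device itself:

```python
import numpy as np

def solve_by_recirculation(K, f, n_iter=200):
    """Fixed-point iteration for g = f + K @ g, a discretized integral
    equation of the second kind. Each pass through the loop is analogous
    to the wave making another trip through the kernel; the iteration
    converges when the kernel's spectral radius is below 1."""
    g = f.copy()
    for _ in range(n_iter):
        g = f + K @ g
    return g
```

For a weak kernel the iterate converges to the same answer as directly inverting the system, which is what the steady-state field in the photonic structure encodes.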
The team's particular metasurface was designed to work at microwave frequencies, just for ease of experimental design, but Engheta says it can work at all sorts of wavelengths. It should be possible to design a system that works at the telecommunication wavelength of 1550 nm and that will fit on a microchip. And while the device took hundreds of nanoseconds to produce a solution, at optical wavelengths it should work much faster, in picoseconds.
Marrying metasurfaces to computer chips is something Capasso wants to see. His team created 45 separate all-glass metalenses on a single wafer, using the same deep ultraviolet lithography techniques that chipmakers use to create computer circuits. Their 45 lenses were each 1 cm in diameter, etched with 160 million nanopillars of various thicknesses in a radial pattern on a fused silica wafer. "My vision is that in the future you will be able to fabricate [lenses] with the same technology you use to fabricate integrated circuits," Capasso says.
Forty-five fabricated 1-cm metalenses on a 4-inch glass wafer. Credit: Joon-Suh Park/Harvard University
If Capasso's vision comes to pass, then factories that build complementary metal-oxide-semiconductor sensors for cell phone cameras could also build the lenses, simplifying manufacturing. The traditional plastic lenses in today's cell phone cameras are curved, and five or six of them are stacked together, creating a thick system that requires careful optical alignment. Metalenses are flat, and can be designed so only one or two are required, making a thinner, lighter, more easily aligned stack. Given the size of the smartphone market and the trend toward adding multiple cameras, "it's really huge, I think, potentially," he says. In fact, Capasso's startup Metalenz was created to try to commercialize the technology.
The ability of metasurfaces to combine multiple optical functions would simplify other imaging systems as well, making them easier to build and use. For instance, one metasurface could replace a whole set of birefringent filters for capturing the polarization of light. That could lead to more compact, less complex depth-sensing cameras that could be used in everything from aerial drones to the imaging systems of self-driving cars.
Etching the Future
While metasurfaces may benefit from well-established silicon manufacturing processes, they could also improve those processes, says Yashi (Alex) Yi, who runs the Integrated Nano Optoelectronics and Intelligent Micro Systems Laboratory at the University of Michigan. Each of the individual nanostructures on a metasurface tailors the light passing through it, and they're often designed to collectively achieve a single result: to focus various wavelengths of light to the same point, for instance.
Yi realized designers could take a different tack. Instead of designing the nanostructures to project every incoming light beam to a single focal point, "potentially you can project every light shining onto every single nanostructure to a different point." In other words, he realized, it's possible to make an artificial focus pattern that could not be created with traditional optics.
Each nanostructure shifts the phase of light, and with a million phase shifters on a chip, it's possible to design them to project lines and arcs in any desired pattern. Yi created a prototype chip that projected a U and an M, but the real potential lies not in drawing university logos but in patterning the photoresists used to make microchips. "It can potentially replace the hundreds of lenses for the lithography tool," Yi says. He's working on finding the right materials to build the nanostructures—higher refractive indexes provide larger phase changes, which enable him to manipulate the light more easily—while still using CMOS processes to fabricate the metasurface. He hopes to demonstrate a metalens for lithography later this year.
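For the standard single-focus case, the phase each nanostructure must impart is fixed by geometry. The sketch below computes that textbook hyperbolic profile; it is the conventional single-focus design, not Yi's multi-point projection scheme:

```python
import numpy as np

def focusing_phase(x, y, focal_length, wavelength):
    """Phase (in radians) a nanostructure at position (x, y) must impart
    so that a normally incident plane wave converges to a focus at
    distance focal_length:
        phi = -(2*pi / lambda) * (sqrt(x^2 + y^2 + f^2) - f)
    A multi-point projector instead assigns each structure its own
    independently chosen phase target."""
    r_term = np.sqrt(x**2 + y**2 + focal_length**2) - focal_length
    return -2.0 * np.pi / wavelength * r_term
```

The required phase is zero at the lens center and grows in magnitude toward the edge, which is one reason higher-refractive-index materials, with their larger accessible phase shifts, make the design easier.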
Not all nanostructures used in metasurfaces are variations on pillars, however. Jonathan Fan, a professor of electrical engineering at Stanford University, favors freeform devices, oddly shaped structures that he says look like abstract art. While pillars act essentially as waveguides or phase shifters, freeform nanostructures interact with the light differently. Incoming light bounces around inside the structure, then comes out in a way that creates constructive or destructive interference, scattering the light in a desired direction, whether as a lens, a grating, or a prism.
But designing the best freeform shape for a given task is challenging, and designing another one for a slightly different task means a lot more work. "If you had to recreate this abstract art for each and every application from scratch, it would be very computationally expensive," Fan says.
So he has turned to artificial intelligence to help with the design process. AI has seen a surge of growth over the last few years through the use of neural networks, which mimic how the brain works to recognize patterns from large sets of data. Fan uses a specialized version called a generative neural network that can learn to create patterns based on the training data: instead of learning to recognize a cat in a photo, it can create a brand-new image of a cat.
Essentially, Fan's computer is trained on a set of existing freeform device designs, then learns to generate new variations on those designs. For instance, given some metasurfaces that focus red wavelengths and others that focus blue, the computer will produce a range of devices that focus green light. Given a sampling of devices optimized for slightly different wavelengths, slightly different materials, or slightly different tasks, the computer interpolates the design space that those fit into, then suggests other devices that will also fit.
It's difficult, however, to know if the computer-generated freeform device is the best one for a given task. So Fan introduced a new approach: dataless training. In this case, the neural network, called a global topology optimization network, is given noise as input data. With nothing to compare to, it produces a range of freeform structures with no known function. The researchers take those structures—essentially blobs—test them against the known rules of optics, and tell the computer which ones are slightly better for the function they want. The computer takes those and performs the same task, and the process repeats until the machine produces a narrow distribution of devices, centered around the best one.
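Stripped of the neural network, the loop described above can be sketched as an evolutionary search: random candidates, a physics-based score, and repeated reselection until the population narrows. This is a toy stand-in that uses a simple analytic score in place of a Maxwell solver; the real system trains a generator network with electromagnetic gradients.

```python
import numpy as np

def optimize_from_noise(score, dim=16, pop=32, n_iter=100, seed=0):
    """Toy version of dataless optimization: start from pure noise,
    score every candidate 'structure', and resample around the best
    quarter with shrinking noise until the population concentrates
    around a high-scoring design."""
    rng = np.random.default_rng(seed)
    candidates = rng.standard_normal((pop, dim))  # blobs with no known function
    sigma = 0.3
    for _ in range(n_iter):
        scores = np.array([score(c) for c in candidates])
        elite = candidates[np.argsort(scores)[-pop // 4:]]  # keep the top quarter
        picks = elite[rng.integers(len(elite), size=pop)]
        candidates = picks + sigma * rng.standard_normal((pop, dim))
        sigma *= 0.95  # the distribution narrows over time
    scores = np.array([score(c) for c in candidates])
    return candidates[np.argmax(scores)]
```

With a score function that rewards proximity to some target design, the population reliably collapses onto it; in the photonic case the score comes from solving Maxwell's equations for each candidate structure.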
"It works because we have Maxwell's equations, and Maxwell's equations tell us exactly how light-matter interactions occur," Fan says. "We are able to use neural networks to design not just metasurfaces, but actually all different types of photonic components at the limits of performance."
Pushing the limits is what all of these scientists have in mind. As researchers develop new and more complex designs for metasurfaces, they're opening up new ways to manipulate light that traditional optics could never achieve. They're gaining control, point by point, over the amplitude, phase, and polarization of a light beam, which could benefit computer imaging systems at the same time that computing and manufacturing technology make new metasurfaces possible.
"The possibilities in wavefront engineering are enormous," Capasso says. "I think, honestly, this is a revolution."
Neil Savage writes about science and technology in Lowell, Massachusetts.