Speedy Terahertz-Based System May Detect Explosives

Terahertz spectroscopy, which uses the band of electromagnetic radiation between microwaves and infrared light, is a promising security technology because it can extract the spectroscopic “fingerprints” of a wide range of materials, including chemicals used in explosives.

But traditional terahertz spectroscopy requires a radiation source that’s heavy and about the size of a large suitcase, and it takes 15 to 30 minutes to analyze a single sample, rendering it impractical for most applications.

In the latest issue of the journal Optica, researchers from MIT’s Research Laboratory of Electronics and their colleagues present a new terahertz spectroscopy system that uses a quantum cascade laser, a source of terahertz radiation that’s the size of a computer chip. The system can extract a material’s spectroscopic signature in just 100 microseconds.

The device is so fast because it emits terahertz radiation in what’s known as a “frequency comb,” meaning a range of frequencies that are perfectly evenly spaced.
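As a toy illustration (with made-up numbers, not the laser’s actual parameters), a frequency comb is simply a set of spectral lines at f_n = f_0 + n × Δf:

```python
# Toy illustration of a frequency comb: evenly spaced spectral lines.
# The offset and spacing below are arbitrary example values, not the
# parameters of the MIT laser.
f_offset = 3.0e12     # 3 THz, hypothetical starting frequency (Hz)
f_spacing = 10.0e9    # 10 GHz, hypothetical tooth-to-tooth spacing (Hz)
num_teeth = 8

comb = [f_offset + n * f_spacing for n in range(num_teeth)]
print([f"{f / 1e12:.3f} THz" for f in comb])
# Every adjacent pair of lines is separated by exactly f_spacing.
```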

“With this work, we answer the question, ‘What is the real application of quantum-cascade laser frequency combs?’” says Yang Yang, a graduate student in electrical engineering and computer science and first author on the new paper. “Terahertz is such a unique region that spectroscopy is probably the best application. And QCL-based frequency combs are a great candidate for spectroscopy.”

Different materials absorb different frequencies of terahertz radiation to different degrees, giving each of them a unique terahertz-absorption profile. Traditionally, however, terahertz spectroscopy has required measuring a material’s response to each frequency separately, a process that involves mechanically readjusting the spectroscopic apparatus. That’s why the method has been so time consuming.

Because the frequencies in a frequency comb are evenly spaced, however, it’s possible to mathematically reconstruct a material’s absorption fingerprint from just a few measurements, without any mechanical adjustments.
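The sketch below conveys the idea, though it is not the reconstruction algorithm used in the paper: because each comb tooth sits at a known, evenly spaced frequency, a single simultaneous measurement samples the absorption profile at all of those frequencies, and the curve between samples can be filled in numerically. The absorption profile here is invented for illustration.

```python
# Conceptual sketch only: sample an invented absorption curve at the comb
# teeth in "one shot," then fill in the curve between samples numerically.
import numpy as np

comb_freqs = 3.0e12 + 10.0e9 * np.arange(40)   # hypothetical comb teeth (Hz)

def fake_absorption(f):
    """Invented transmission profile with one 'fingerprint' dip near 3.2 THz."""
    return 1.0 - 0.6 * np.exp(-((f - 3.2e12) / 30e9) ** 2)

samples = fake_absorption(comb_freqs)          # one measurement: all teeth at once

fine_freqs = np.linspace(comb_freqs[0], comb_freqs[-1], 2000)
reconstructed = np.interp(fine_freqs, comb_freqs, samples)

# The dip near 3.2 THz is recovered without scanning any mechanical element.
print(f"minimum transmission ~ {reconstructed.min():.2f} "
      f"at {fine_freqs[reconstructed.argmin()] / 1e12:.2f} THz")
```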

Getting even

The trick is evening out the spacing in the comb. Quantum cascade lasers, like all electrically powered lasers, bounce electromagnetic radiation back and forth through a “gain medium” until the radiation has enough energy to escape. They emit radiation at multiple frequencies that are determined by the length of the gain medium.

But those frequencies are also dependent on the medium’s refractive index, which describes the speed at which electromagnetic radiation passes through it. And the refractive index varies for different frequencies, so the gaps between frequencies in the comb vary, too.
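A minimal numerical sketch, using the standard Fabry-Perot mode condition f_m = m·c/(2·n·L) and made-up cavity parameters rather than the actual device’s, shows how a frequency-dependent refractive index unevens the spacing:

```python
# Minimal sketch of why dispersion unevens the comb. The cavity length and
# the refractive-index model below are illustration values, not the device's.
c = 3.0e8          # speed of light (m/s)
L = 3.0e-3         # hypothetical cavity length: 3 mm

def mode_freq(m, n):
    """Fabry-Perot mode condition: f_m = m * c / (2 * n * L)."""
    return m * c / (2 * n * L)

# Constant refractive index -> perfectly even spacing.
even = [mode_freq(m, 3.6) for m in range(200, 205)]

# Refractive index that drifts slightly with mode number (dispersion)
# -> the gaps between neighboring modes are no longer identical.
uneven = [mode_freq(m, 3.6 + 1e-4 * (m - 200)) for m in range(200, 205)]

print("even spacing (GHz):  ", [round((b - a) / 1e9, 3) for a, b in zip(even, even[1:])])
print("uneven spacing (GHz):", [round((b - a) / 1e9, 3) for a, b in zip(uneven, uneven[1:])])
```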

To even out their lasers’ frequencies, the MIT researchers and their colleagues use an oddly shaped gain medium, with regular, symmetrical indentations in its sides that alter the medium’s refractive index and restore uniformity to the distribution of the emitted frequencies.

Yang; his advisor, Qing Hu, the Distinguished Professor in Electrical Engineering and Computer Science; and first author David Burghoff, who received his PhD in electrical engineering and computer science from MIT in 2014 and is now a research scientist in Hu’s group, reported this design in Nature Photonics in 2014. But while their first prototype demonstrated the design’s feasibility, it in fact emitted two frequency combs, clustered around two different central frequencies, with a gap between them, which made it less than ideal for spectroscopy.

In the new work, Yang and Burghoff, who are joint first authors; Hu; Darren Hayton and Jian-Rong Gao of the Netherlands Institute for Space Research; and John Reno of Sandia National Laboratories developed a new gain medium that produces a single, unbroken frequency comb. Like the previous gain medium, the new one consists of hundreds of alternating layers of gallium arsenide and aluminum gallium arsenide, with different but precisely calibrated thicknesses.

Getting practical

As a proof of concept, the researchers used their system to measure the spectral signature not of a chemical sample but of an optical device called an etalon, made from a wafer of gallium arsenide, whose spectral properties could be calculated theoretically in advance, providing a clear standard of comparison. The new system’s measurements were a very good fit for the etalon’s terahertz-transmission profile, suggesting that it could be useful for detecting chemicals.
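For an ideal lossless etalon, the transmission follows the standard Airy formula, which is why the spectrum can be predicted ahead of time. The sketch below uses a typical terahertz-range refractive index for gallium arsenide and an arbitrary example thickness; the wafer used in the actual experiment may differ.

```python
# Hedged sketch: transmission of an ideal lossless etalon via the Airy formula.
# The thickness is an arbitrary example; n ~ 3.6 is typical for GaAs at
# terahertz frequencies.
import numpy as np

n = 3.6                         # refractive index of GaAs (approximate)
L = 500e-6                      # hypothetical wafer thickness: 500 micrometers
R = ((n - 1) / (n + 1)) ** 2    # Fresnel reflectance of each face
F = 4 * R / (1 - R) ** 2        # coefficient of finesse

f = np.linspace(2.0e12, 4.0e12, 5000)                  # terahertz frequencies (Hz)
T = 1.0 / (1.0 + F * np.sin(2 * np.pi * n * L * f / 3e8) ** 2)

# Transmission peaks repeat every c / (2 n L), giving a regular,
# theoretically known pattern to compare measurements against.
print(f"free spectral range ~ {3e8 / (2 * n * L) / 1e9:.1f} GHz")
```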

Although terahertz quantum cascade lasers are chip-scale devices, they need to be cooled to very low temperatures, so they require refrigerated housings that can be inconveniently bulky. Hu’s group continues to work on quantum cascade lasers that operate at ever higher temperatures, but in the new paper, Yang and his colleagues demonstrated that they could extract a reliable spectroscopic signature from a target using only very short bursts of terahertz radiation. That sharply reduces the laser’s average power consumption, which could make terahertz spectroscopy practical even while cryogenic cooling is still required.

“We used to consume 10 watts, but my laser turns on only 1 percent of the time, which significantly reduces the refrigeration constraints,” Yang explains. “So we can use compact-sized cooling.”
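The arithmetic behind that remark is straightforward: pulsing a 10-watt laser at a 1 percent duty cycle cuts the average heat load to a tenth of a watt.

```python
# Back-of-the-envelope arithmetic for the quote above: a 10 W laser running
# 1 percent of the time dissipates 0.1 W on average, which is what relaxes
# the refrigeration requirement.
peak_power_w = 10.0
duty_cycle = 0.01
average_power_w = peak_power_w * duty_cycle
print(f"average dissipated power: {average_power_w:.1f} W")
```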

“This paper is a breakthrough, because these kinds of sources were not available in terahertz,” says Gerard Wysocki, an assistant professor of electrical engineering at Princeton University. “Qing Hu is the first to actually present terahertz frequency combs that are semiconductor devices, all integrated, which promise very compact broadband terahertz spectrometers.”

“Because they used these very inventive phase correction techniques, they have demonstrated that even with pulsed sources you can extract data that is reasonably high resolution already,” Wysocki continues. “That’s a technique that they are pioneering, and this is a great first step toward chemical sensing in the terahertz region.”

Lincoln Laboratory Supercomputing Center

MIT Lincoln Laboratory has been a world leader in interactive supercomputing since the 1950s. In 1955, TX-0, the first fully transistor-based computer, was built to support a wide range of research at the laboratory and the MIT campus, and became the basis for the second-largest computing company in the world, Digital Equipment Corporation. In 2001, the laboratory developed Parallel Matlab, which enabled thousands of researchers worldwide to use interactive supercomputing for high-performance data analysis. In 2008, the laboratory demonstrated the largest single problem ever run on a computer, using its TX-2500 supercomputer, a part of the system called LLGrid. In April, the laboratory recognized the importance of the world-class LLGrid computing capability by establishing the Lincoln Laboratory Supercomputing Center (LLSC).

LLSC is based, in part, on the LLGrid infrastructure, but was developed to enhance computing power and accessibility for more than 1,000 researchers across the Institute. “By establishing the LLSC, Lincoln Laboratory will be able to better address supercomputing needs across all Laboratory missions, develop new supercomputing capabilities and technologies, and spawn even closer collaborations with MIT campus supercomputing initiatives,” says Jeremy Kepner, laboratory fellow and head of the LLSC. “These brilliant engineers, scientists, faculty, and students use our capabilities to conduct research in diverse fields such as space observations, robotic vehicles, communications, cybersecurity, machine learning, sensor processing, electronic devices, bioinformatics, and air traffic control.”

Only 13 years ago, the laboratory’s supercomputing capability, LLGrid, was composed of a single 16-processor system. Albert Reuther, manager of LLSC, says that a “different kind of supercomputing” was clearly needed to meet the needs of laboratory researchers. Since then, the capability has expanded to thousands of processors across several systems. In addition, Reuther says that the center differs from others like it because of the team’s “focus on interactive supercomputing for high-performance data analysis,” and the “extremely ‘green’ computing center in Holyoke, Massachusetts, which allows our computers to run 93 percent carbon-free.”

“This new level of supercomputing capability will be a key technology for the computational fluid dynamics (CFD) work performed in the Structural and Thermal-Fluids Engineering Group,” says Nathan J. Falkiewicz. Falkiewicz explains that the new capability will allow his team to take advantage of the parallelism inherent in existing CFD codes to significantly reduce simulation time for computationally taxing problems, as well as enable simulation for certain classes of problems that would otherwise have “prohibitively long” execution times without access to large core-count, high-performance computing clusters.
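As a rough illustration of that reasoning (the numbers are generic, not Lincoln Laboratory’s), Amdahl’s law relates the fraction of a simulation that can run in parallel to the speedup gained from adding cores:

```python
# Illustrative sketch, not Lincoln Laboratory's figures: Amdahl's law,
# speedup = 1 / ((1 - p) + p / N), where p is the parallel fraction of the
# run and N is the number of cores.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (16, 256, 4096):
    print(cores, "cores ->", round(speedup(0.99, cores), 1), "x faster")
# A run that is 99 percent parallel and takes a week on one core would
# finish in roughly two hours on a few thousand cores.
```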

Orion S. Crisafulli of the Active Optical Systems Group says that the supercomputing capability has enabled his team, in collaboration with MIT campus, to run complex simulations in the performance investigation of a compact microlidar system. “Access to a large number of compute nodes, each with substantial memory and a streamlined job submission process, has shortened the run time for our simulations from a week to a few hours,” Crisafulli says. “This allows us to explore a significantly larger system parameter space than we would otherwise be able to, and ultimately achieve a more complete understanding of the capabilities of the microlidar system concept.”

Reuther says that the LLSC exists today in large part because of the researchers who utilize supercomputing capabilities to produce cutting-edge research results, as well as many other supporters: “LLSC has been blessed to have the support of visionaries in the Director’s Office, the Technology Office, and the Steering Committee who have seen the potential of supercomputing to enable all of the Laboratory’s missions.” Reuther also credits the MIT Lincoln Laboratory Beaver Works Center for playing a critical role in the LLSC’s collaborations with campus.

“Creating the Lincoln Laboratory Supercomputing Center has been a goal for the team for many years, and it is tremendously rewarding to see it come to fruition,” Kepner says. “Laboratory researchers will see continued improvement in the LLSC systems, MIT Campus will benefit from our unique interactive supercomputing technologies, and Laboratory and campus researchers will be able to collaborate more closely on their joint research projects.”

Patching up Web applications

By exploiting some peculiarities of the popular Web programming framework Ruby on Rails, MIT researchers have developed a system that can quickly comb through tens of thousands of lines of application code to find security flaws.

In tests on 50 popular Web applications written using Ruby on Rails, the system found 23 previously undiagnosed security flaws, and it took no more than 64 seconds to analyze any given program.

The researchers will present their results at the International Conference on Software Engineering, in May.

According to Daniel Jackson, professor in the Department of Electrical Engineering and Computer Science, the new system uses a technique called static analysis, which seeks to describe, in a very general way, how data flows through a program.

“The classic example of this is if you wanted to do an abstract analysis of a program that manipulates integers, you might divide the integers into the positive integers, the negative integers, and zero,” Jackson explains. The static analysis would then evaluate every operation in the program according to its effect on integers’ signs. Adding two positives yields a positive; adding two negatives yields a negative; multiplying two negatives yields a positive; and so on.

“The problem with this is that it can’t be completely accurate, because you lose information,” Jackson says. “If you add a positive and a negative integer, you don’t know whether the answer will be positive, negative, or zero. Most work on static analysis is focused on trying to make the analysis more scalable and accurate to overcome those sorts of problems.”
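A minimal sketch of that sign abstraction, written in Python for illustration, makes the information loss concrete:

```python
# Every integer is summarized as 'pos', 'neg', or 'zero', and operations are
# evaluated on those summaries. Adding a positive to a negative yields
# 'unknown', which is exactly the loss of precision Jackson describes.
NEG, ZERO, POS, UNKNOWN = "neg", "zero", "pos", "unknown"

def abstract_add(a, b):
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b:
        return a            # pos + pos = pos, neg + neg = neg
    return UNKNOWN          # pos + neg could be anything

def abstract_mul(a, b):
    if ZERO in (a, b):
        return ZERO
    if UNKNOWN in (a, b):
        return UNKNOWN
    return POS if a == b else NEG   # matching signs multiply to pos

print(abstract_add(POS, POS))   # pos
print(abstract_mul(NEG, NEG))   # pos
print(abstract_add(POS, NEG))   # unknown -- the analysis has lost precision
```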

With Web applications, however, the cost of accuracy is prohibitively high, Jackson says. “The program under analysis is just huge,” he says. “Even if you wrote a small program, it sits atop a vast edifice of libraries and plug-ins and frameworks. So when you look at something like a Web application written in a language like Ruby on Rails, if you try to do a conventional static analysis, you typically find yourself mired in this huge bog. And this makes it really infeasible in practice.”

That vast edifice of libraries, however, also gave Jackson and his former student Joseph Near, who graduated from MIT last spring and is now doing a postdoc at the University of California at Berkeley, a way to make static analysis of programs written in Ruby on Rails practical.

A library is a compendium of code that programmers tend to use over and over again. Rather than rewriting the same functions for each new program, a programmer can just import them from a library.

Ruby on Rails — or Rails, as it’s called for short — has the peculiarity of defining even its most basic operations in libraries. Every addition, every assignment of a particular value to a variable, imports code from a library.

Near rewrote those libraries so that the operations defined in them describe their own behavior in a logical language. That turns the Rails interpreter, which converts high-level Rails programs into machine-readable code, into a static-analysis tool. With Near’s libraries, running a Rails program through the interpreter produces a formal, line-by-line description of how the program handles data.
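The following Python-flavored sketch only conveys the idea; Near’s actual libraries are written for Ruby on Rails and describe behavior in a formal logic. Each basic operation records a description of what it does as it runs, so executing the program doubles as analyzing it:

```python
# Hypothetical sketch of "self-describing" library operations: each one
# appends a logical description of its own behavior, so simply running the
# program yields a line-by-line account of how it handles data.
trace = []

def assign(var, value):
    trace.append(f"{var} := {value!r}")
    return value

def fetch(table, owner):
    trace.append(f"read rows of {table} where owner = {owner!r}")
    return [f"<row of {table}>"]

# A toy "application" written against these self-describing operations:
user = assign("current_user", "alice")
records = fetch("medical_records", user)

print("\n".join(trace))
# current_user := 'alice'
# read rows of medical_records where owner = 'alice'
# A checker can now inspect `trace` instead of re-deriving the program's behavior.
```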

In his PhD work, Near used this general machinery to build three different debuggers for Ruby on Rails applications, each requiring different degrees of programmer involvement. The one described in the new paper, which the researchers call Space, evaluates a program’s data access procedures.

Near identified seven different ways in which Web applications typically control access to data. Some data are publicly available, some are available only to users who are currently logged in, some are private to individual users, some users — administrators — have access to select aspects of everyone’s data, and so on.

For each of these data-access patterns, Near developed a simple logical model that describes what operations a user can perform on what data, under what circumstances. From the descriptions generated by the hacked libraries, Space can automatically determine whether the program adheres to those models. If it doesn’t, there’s likely to be a security flaw.
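Here is a hypothetical sketch of that kind of check; Space’s real models are expressed in a formal logic rather than Python, and the data names below are invented. Each access pattern becomes a rule stating who may read a piece of data, and the checker compares the program’s observed accesses against the rule:

```python
# Hypothetical sketch of the kind of check Space performs. Each access
# pattern is a small rule; the checker flags observed accesses that the
# program permits but the rule does not.
def owner_only(record_owner, requesting_user, is_admin=False):
    """'Private to individual users' pattern: only the owner may read."""
    return requesting_user == record_owner

def admin_or_owner(record_owner, requesting_user, is_admin=False):
    """Administrators may also read selected data."""
    return is_admin or requesting_user == record_owner

# Accesses the program was observed to allow (invented example data):
observed_accesses = [
    {"data": "profile_photo", "owner": "alice", "user": "bob", "admin": False},
    {"data": "medical_record", "owner": "alice", "user": "alice", "admin": False},
]

policy = {"profile_photo": admin_or_owner, "medical_record": owner_only}

for access in observed_accesses:
    rule = policy[access["data"]]
    allowed = rule(access["owner"], access["user"], is_admin=access["admin"])
    if not allowed:
        print("possible security flaw:", access)
```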

Using Space does require someone with access to the application code to determine which program variables and functions correspond to which aspects of Near’s models. But that isn’t an onerous requirement: Near was able to map correspondences for all 50 of the applications he evaluated. And that mapping should be even easier for a programmer involved in an application’s development from the outset, rather than coming to it from the outside as Near did.