Monthly Archives: September 2016

Germanium tin laser could increase processing speed of computer chips

A multi-institutional team of researchers, led by University of Arkansas engineering professor Shui-Qing “Fisher” Yu and including a leading semiconductor equipment manufacturer, has fabricated an “optically pumped” laser made of the alloy germanium tin grown on silicon substrates.

The augmented material could lead to the development of fully integrated silicon photonics, including both circuits and lasers, and thus faster micro-processing speed at much lower cost.

The researchers’ findings were published in Applied Physics Letters.

Germanium tin holds great promise as a semiconducting material for future optical integration of computer chips, because it harnesses efficient emission of light, which silicon, the standard material for making computer chips, cannot do. In recent years, materials scientists and engineers, including Yu and several of his colleagues on this project, have focused on the development of germanium tin, grown on silicon substrates, to build a so-called optoelectronics “superchip” that can transmit data much faster than current chips.

Yu and his colleagues’ most recent contribution to this effort is an optically pumped laser using germanium tin. Optically pumped means the material is excited by injecting light, analogous to injecting an electrical current.

“We reduced the laser threshold 80 percent at a lasing operation temperature up to 110 Kelvin,” Yu said. “This is significant progress compared with the previously reported best result and shows that germanium tin holds great promise as an on-chip laser.”

The temperature 110 Kelvin is equal to about -261 Fahrenheit.
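The stated figure follows from the standard Kelvin-to-Fahrenheit conversion, which can be checked in a couple of lines:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from Kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

print(kelvin_to_fahrenheit(110))  # about -261.7 degrees Fahrenheit
```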

On this project, Yu and his colleagues worked with ASM America Inc.’s research and development staff, who developed the growth methods. ASM’s methods produce low-cost and high-quality germanium tin in an industry standard chemical vapor deposition reactor.

Mummy visualization impresses in computer journal

In the article, Anders Ynnerman, professor of scientific visualization at Linköping University and director of Visualization Center C, together with colleagues from Linköping University, Interspectral AB, the Interactive Institute Swedish ICT, and the British Museum, describes the technology behind the visualization.

The Gebelein Man, who was mummified by natural processes, and the collaboration with the British Museum form the framework for the article, which focuses on the development of the technology behind the visualization table, a display that has received a great deal of attention.

“It was challenging to obtain sufficiently high performance of the visualization such that visitors can interact with the table in real-time, without experiencing delays. Further, the interaction must be both intuitive and informative,” says Anders Ynnerman.

Several thousand images of the mummy taken by computed tomography (CT) are stored in the table. In this case, 10,000 virtual slices through the complete mummy have been imaged, each one as thin as 0.3 mm. Fast graphics processors then create volumetric (3D) images in real time to display whatever the visitors want to look at.
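Conceptually, the stored slices form a 3D scalar field: stacking the 2D CT slices along the scan axis yields the volume that the graphics processors render. A minimal sketch with NumPy, where the array sizes are illustrative rather than the table's actual resolution:

```python
import numpy as np

# Stand-ins for CT slices: each slice is a 2D array of attenuation values.
num_slices, height, width = 100, 512, 512
slices = [np.zeros((height, width)) for _ in range(num_slices)]

# Stack the slices along the scan axis to form one 3D volume.
volume = np.stack(slices, axis=0)
print(volume.shape)  # (100, 512, 512)
```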

The degree to which the mummy attenuates (absorbs) the X-rays is recorded by the CT scanner and converted, with the aid of a specially developed transfer function, to different colours and degrees of transparency. Bone, for example, gives a signal that is converted to a light grey colour, while soft tissue and metal objects give completely different signals that are represented by other colours or structures.
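A transfer function of this kind is essentially a lookup from attenuation value to colour and opacity. The thresholds and colours below are illustrative assumptions, not the values used in the exhibit:

```python
import numpy as np

def transfer_function(hu):
    """Map CT attenuation values (Hounsfield-like units) to RGBA.
    Thresholds and colours are illustrative, not the exhibit's settings."""
    rgba = np.zeros(hu.shape + (4,))
    bone = hu > 300
    soft = (hu > -100) & (hu <= 300)
    rgba[bone] = [0.85, 0.85, 0.85, 0.9]  # bone: light grey, nearly opaque
    rgba[soft] = [0.80, 0.50, 0.40, 0.2]  # soft tissue: warm, translucent
    # Air and background (hu <= -100) stay fully transparent.
    return rgba

print(transfer_function(np.array([-500.0, 0.0, 1000.0])))
```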

“The table displays 60 images per second, which our brain interprets as continuous motion. Sixty times each second, virtual beams, one for each pixel on the screen, are projected through the dataset and a colour contribution for each is determined. We use the latest type of graphics processor, the type that is used in gaming computers,” says Patric Ljung, senior lecturer in immersive visualization at Linköping University.
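The per-pixel beams described here are the rays of a classic volume ray caster: each ray steps through the dataset, and the colour and opacity of the samples it meets are composited front to back. A simplified single-ray sketch, with sampling and shading omitted:

```python
import numpy as np

def composite_ray(samples_rgba):
    """Front-to-back alpha compositing of RGBA samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for r, g, b, a in samples_rgba:
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination: the ray is nearly opaque
            break
    return color, alpha

# Two translucent samples: a red one in front of a green one.
color, alpha = composite_ray([(1, 0, 0, 0.5), (0, 1, 0, 0.5)])
print(color, alpha)  # [0.5 0.25 0.], 0.75
```

Repeating this for every pixel, sixty times a second, is the workload that the article's gaming-class graphics processors absorb.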

This makes it possible for visitors to interact with the table. The desiccated skin of the mummy can be peeled away in the image and only the parts that consist of bone displayed. When this is done, it becomes clear that the Gebelein Man was killed by a stab through the shoulder.

The principles that have determined the design of the table are also described in the article. The design arose in close collaboration between the personnel at the museum and Interactive Institute Swedish ICT, working within the framework of Visualization Center C in Norrköping.

The design is minimalist and intuitive. The display table must be rapid, and no delay in the image can be tolerated. It must be able to withstand use by the six million visitors to the museum each year, and much emphasis has been placed on creating brief narrative texts with the aid of information points. Simple and self-explanatory icons have been used, and several favourable viewpoints and parameters have been preprogrammed in order to increase the table’s robustness.

“Allowing a broader public to visualize scientific phenomena and results makes it possible for them to act as researchers themselves. We allow visitors to investigate the same data that the researchers have used. This creates incredible possibilities for new ways to communicate knowledge, to stimulate interest, and to engage others. It’s an awesome experience — watching the next generation of young researchers be inspired by our technology,” says Anders Ynnerman.

Training computers to differentiate between people

The conundrum of deciding which real person a name refers to occurs in a wide range of environments, from the bibliographic — which Anna Hernandez authored a specific study? — to law enforcement — which Robert Jones is attempting to board an airplane flight?

Two computer scientists from the School of Science at Indiana University-Purdue University Indianapolis and a Purdue University doctoral student have developed a novel machine-learning method that provides better solutions to this perplexing problem. They report that the new method improves on existing approaches to name disambiguation because it works on streaming data, enabling the identification of previously unencountered John Smiths, Maria Garcias, Wei Zhangs and Omar Alis.

Existing methods can disambiguate an individual only if the person’s records are present in the machine-learning training data. The new method, by contrast, performs non-exhaustive classification: it can detect that a new record appearing in streaming data belongs to a fourth John Smith even if the training data contains records of only three different John Smiths. Non-exhaustiveness is essential for name disambiguation because training data can never be exhaustive; it is impossible to include records of every living John Smith.
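The paper's model is Bayesian, but the core idea of non-exhaustive classification can be illustrated with a much simpler distance-based stand-in: assign a record to the nearest known class, or open a new class when every known class is too far away. The names, feature vectors, and threshold below are hypothetical:

```python
import numpy as np

def classify_or_new(record, class_means, threshold=2.0):
    """Assign a record to its nearest known class, or flag it as a new,
    previously unseen person when all known classes are too far away.
    A simplified stand-in for the paper's Bayesian non-exhaustive model."""
    dists = {name: np.linalg.norm(record - mean)
             for name, mean in class_means.items()}
    best = min(dists, key=dists.get)
    return "new-class" if dists[best] > threshold else best

# Two known John Smiths, summarized by the mean of their feature vectors.
means = {"smith-1": np.array([0.0, 0.0]), "smith-2": np.array([5.0, 5.0])}
print(classify_or_new(np.array([0.1, 0.1]), means))    # smith-1
print(classify_or_new(np.array([20.0, 20.0]), means))  # new-class
```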

“Bayesian Non-Exhaustive Classification — A Case Study: Online Name Disambiguation using Temporal Record Streams” by Baichuan Zhang, Murat Dundar and Mohammad al Hasan is published in Proceedings of the 25th International Conference on Information and Knowledge Management. Zhang is a Purdue graduate student. Dundar and Hasan are IUPUI associate professors of computer science and experts in machine learning.

“We looked at a problem applicable to scientific bibliographies using features like keywords and co-authors, but our disambiguation work has many other real-life applications — in the security field, for example,” said Hasan, who led the study. “We can teach the computer to recognize names and disambiguate information accumulated from a variety of sources — Facebook, Twitter and blog posts, public records and other documents — by collecting features such as Facebook friends and keywords from people’s posts using the identical algorithm. Our proposed method is scalable and will be able to group records belonging to a unique person even if thousands of people have the same name, an extremely complicated task.

“Our innovative machine-learning model can perform name disambiguation in an online setting instantaneously and, importantly, in a non-exhaustive fashion,” Hasan said. “Our method grows and changes when new persons appear, enabling us to recognize the ever-growing number of individuals whose records were not previously encountered. Also, some names are more common than others, so the number of individuals sharing such a name grows faster than for rarer names. While working in a non-exhaustive setting, our model automatically detects such names and adjusts the model parameters accordingly.”

Machine learning employs algorithms — sets of steps — to train computers to classify records belonging to different classes. Algorithms are developed to review data, to learn patterns or features from the data, and to enable the computer to learn a model that encodes the relationship between patterns and classes so that future records can be correctly classified. In the new study, for a given name value, computers were “trained” by using records of different individuals with that name to build a model that distinguishes between individuals with that name, even individuals about whom information had not been included in the training data previously provided to the computer.

“Features” are bits of information with some degree of predictive power to define a specific individual. The researchers focused on three types of features:

  1. Relational or association features to reveal persons with whom an individual is associated: for example, relatives, friends, and colleagues
  2. Text features, such as keywords in documents: for example, repeated use of sports-, culinary-, or terrorism-associated keywords
  3. Venue features: for example, institutions, memberships or events with which an individual is currently or was formerly associated
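These three feature families can be flattened into a single bag-of-features representation per record, which disambiguation pipelines can then compare. A minimal sketch; the field names and tag prefixes are invented for illustration:

```python
def record_features(record):
    """Flatten relational, text, and venue features into one tagged set."""
    feats = set()
    # Relational/association features: people the individual is linked to.
    feats.update(f"coauthor:{c}" for c in record.get("coauthors", []))
    # Text features: keywords drawn from the individual's documents.
    feats.update(f"kw:{k}" for k in record.get("keywords", []))
    # Venue features: institutions, memberships, or events.
    feats.update(f"venue:{v}" for v in record.get("venues", []))
    return feats

def jaccard(a, b):
    """Overlap between two feature sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

r1 = record_features({"coauthors": ["a"], "keywords": ["ml"], "venues": ["CIKM"]})
print(r1)  # {'coauthor:a', 'kw:ml', 'venue:CIKM'}
```

Records with high feature overlap are likely the same person; a record overlapping no known cluster is a candidate for a new, previously unseen individual.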

The study was funded by the National Science Foundation through CAREER awards to Hasan and Dundar in 2012 and 2013, respectively.

The researchers hope to continue this line of inquiry, scaling up with the support of enhanced technologies, including distributed computing platforms.