
Recognizing sounds by watching video

In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook.

But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data first has to be annotated by hand, which is prohibitively expensive for all but the highest-demand applications.

Sound recognition may be catching up, however, thanks to researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). At the Neural Information Processing Systems conference next week, they will present a sound-recognition system that outperforms its predecessors but required no hand-annotated data during training.

Instead, the researchers trained the system on video. First, existing computer vision systems that recognize scenes and objects categorized the frames of the video. The new system then learned correlations between those visual categories and the natural sounds that accompany them.
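The paper’s exact architecture isn’t detailed here, but the general recipe, often called cross-modal teacher-student training, can be sketched in a few lines. In the minimal PyTorch sketch below, a frozen, pretrained vision network (the hypothetical `vision_teacher`) supplies soft category labels for a video frame, and a small audio network learns to predict the same distribution from the clip’s soundtrack. Every name and layer size is illustrative, not the authors’ actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical student: a small 1-D convolutional network over raw audio
# that predicts a distribution over the same categories the vision
# "teacher" recognizes. Layer sizes are illustrative only.
class AudioStudent(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, waveform):               # waveform: (batch, 1, samples)
        h = self.features(waveform).squeeze(-1)
        return self.classifier(h)              # unnormalized class scores

def distillation_loss(student_logits, teacher_probs):
    # Match the student's distribution to the teacher's soft labels with
    # KL divergence; no human annotation is involved anywhere.
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    teacher_probs, reduction="batchmean")

# One training step, assuming `vision_teacher` is a frozen, pretrained
# image classifier applied to a frame drawn from the same video clip.
def train_step(student, vision_teacher, optimizer, frame, waveform):
    with torch.no_grad():
        teacher_probs = F.softmax(vision_teacher(frame), dim=1)
    loss = distillation_loss(student(waveform), teacher_probs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teacher’s predictions stand in for human annotations, the audio network can be trained on arbitrarily large collections of unlabeled video.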

“Computer vision has gotten so good that we can transfer it to other domains,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We’re capitalizing on the natural synchronization between vision and sound. We scale up with tons of unlabeled video to learn to understand sound.”

The researchers tested their system on two standard databases of annotated sound recordings, and it was between 13 and 15 percent more accurate than the best-performing previous system. On a data set with 10 different sound categories, it could categorize sounds with 92 percent accuracy, and on a data set with 50 categories it performed with 74 percent accuracy. On those same data sets, humans are 96 percent and 81 percent accurate, respectively.

“Even humans are ambiguous,” says Yusuf Aytar, the paper’s other first author and a postdoc in the lab of MIT professor of electrical engineering and computer science Antonio Torralba. Torralba is the final co-author on the paper.

“We did an experiment with Carl,” Aytar says. “Carl was looking at the computer monitor, and I couldn’t see it. He would play a recording and I would try to guess what it was. It turns out this is really, really hard. I could tell indoor from outdoor, basic guesses, but when it comes to the details — ‘Is it a restaurant?’ — those details are missing. Even for annotation purposes, the task is really hard.”

Complementary modalities

Because collecting and processing audio data takes far less power than doing the same for visual data, the researchers envision that a sound-recognition system could be used to improve the context sensitivity of mobile devices.

When coupled with GPS data, for instance, a sound-recognition system could determine that a cellphone user is in a movie theater and that the movie has started, and the phone could automatically route calls to a prerecorded outgoing message. Similarly, sound recognition could improve the situational awareness of autonomous robots.

6 times more processing power and 20 times more bandwidth

The new TX-Green computing system at the MIT Lincoln Laboratory Supercomputing Center (LLSC) has been named the most powerful supercomputer in New England, the 43rd most powerful in the U.S., and the 106th most powerful in the world. The TOP500 project ranks the world’s 500 most powerful supercomputers twice a year, based on the LINPACK benchmark, a measure of a system’s floating-point computing power, i.e., how fast a computer solves a dense system of linear equations.
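TOP500 scores come from the tuned, distributed HPL implementation of LINPACK; the toy NumPy sketch below only illustrates what is being measured: time a dense solve of Ax = b and convert the standard 2n^3/3 floating-point operation count into a rate. The problem size here is tiny and the result is purely conceptual, not comparable to a real TOP500 run.

```python
import time
import numpy as np

# A toy illustration of what LINPACK measures: solve a dense n-by-n
# linear system Ax = b and report floating-point operations per second.
n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)        # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = 2.0 * n**3 / 3.0         # standard LINPACK operation count
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s on a {n}x{n} system")
```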

Established in early 2016, the LLSC was developed to enhance computing power and accessibility for more than 1,000 researchers across the laboratory. The LLSC uses interactive supercomputing to augment the processing power of desktop systems to process large sets of sensor data, create high-fidelity simulations, and develop new algorithms. Located in Holyoke, Massachusetts, the new system is the only zero-carbon supercomputer on the TOP500 list; it uses energy from a mixture of hydroelectric, wind, solar, and nuclear sources.

In November, Dell EMC installed a new petaflop-scale system, which consists of 41,472 Intel processor cores and can perform 10^15 operations per second. Compared with the LLSC’s previous technology, the new system provides 6 times more processing power and 20 times more bandwidth. It enables research across laboratory areas such as space observation, robotic vehicles, communications, cybersecurity, machine learning, sensor processing, electronic devices, bioinformatics, and air traffic control.

The LLSC mission is to address supercomputing needs, develop new supercomputing capabilities and technologies, and collaborate with MIT campus supercomputing initiatives. “The LLSC vision is to enable the brilliant scientists and engineers at Lincoln Laboratory to analyze and process enormous amounts of information with complex algorithms,” says Jeremy Kepner, Lincoln Laboratory Fellow and head of the LLSC. “Our new system is one of the largest on the East Coast and is specifically focused on enabling new research in machine learning, advanced physical devices, and autonomous systems.”

National Inventors Hall of Fame

Is the internet old or new? According to MIT professor of mathematics Tom Leighton, co-founder of Akamai, it is just getting started. His opinion carries weight: his firm, launched in 1998 with pivotal help from Danny Lewin SM ’98, keeps the internet speedy by copying and channeling massive amounts of data into orderly and secure places that are quick to access. Now the National Inventors Hall of Fame (NIHF) has recognized Leighton and Lewin’s work, naming both as 2017 inductees.

“We think about the internet and the tremendous accomplishments that have been made and, the exciting thing is, it’s in its infancy,” Leighton says in an Akamai video. Online commerce, which has grown rapidly and is now denting mall sales, has huge potential, especially as dual-screen use grows. Soon, mobile devices will link to television, and viewers will be able to change channels on their phones and click to buy the cool sunglasses Tom Cruise is wearing on the big screen. “We are going to see [that] things we never thought about existing will be core to our lives within 10 years, using the internet,” Leighton says.

Leighton’s former collaborator, Danny Lewin, was pivotal to the early development of Akamai’s technology. Tragically, Lewin died as a passenger on an American Airlines flight that was hijacked by terrorists and crashed into New York’s World Trade Center on Sept. 11, 2001. Lewin, a former Israel Defense Forces officer, is credited with trying to stop the attack.

According to Akamai, Leighton, Lewin, and their team “developed the mathematical algorithms necessary to intelligently route and replicate content over a large network of distributed servers,” which relieved the congestion then becoming known as the “World Wide Wait.” The company now handles nearly 3 trillion internet interactions each day.
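The NIHF citation doesn’t name specific algorithms, but one widely known technique co-authored by Leighton and Lewin around that time is consistent hashing (Karger et al., 1997), which assigns content to a changing pool of servers while remapping only a small fraction of keys when servers come and go. The Python sketch below is a minimal illustration of that idea, not Akamai’s production system; server and key names are hypothetical.

```python
import bisect
import hashlib

# Minimal consistent-hashing ring: servers and content keys hash onto the
# same circle, and a key is served by the first server at or after its
# hash. Adding or removing one server remaps only a small share of keys.
class ConsistentHashRing:
    def __init__(self, servers, replicas=100):
        self.replicas = replicas              # virtual points per server
        self.ring = []                        # sorted (hash, server) pairs
        for server in servers:
            self.add(server)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.replicas):
            bisect.insort(self.ring, (self._hash(f"{server}#{i}"), server))

    def lookup(self, key):
        i = bisect.bisect(self.ring, (self._hash(key), ""))
        return self.ring[i % len(self.ring)][1]

ring = ConsistentHashRing(["edge-1", "edge-2", "edge-3"])
print(ring.lookup("/videos/launch.mp4"))      # same key -> same server
```

The virtual points per server smooth out the load; with only one point per server, a single unlucky hash position could leave one server responsible for most of the circle.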

The NIHF describes Leighton and Lewin’s contributions as pivotal to making the web fast, secure, and reliable. Their tools were applied mathematics and algorithms, and they focused on congested nodes identified by Tim Berners-Lee, inventor of the World Wide Web and an MIT professor with an office near Leighton. Leighton, an authority on parallel algorithms for network applications who earned his PhD at MIT, holds more than 40 U.S. patents involving content delivery, internet protocols, algorithms for networks, cryptography, and digital rights management. He served as Akamai’s chief scientist for 14 years before becoming chief executive officer in 2013.