MIT News – CSAIL | Robotics | Computer Science and Artificial Intelligence Laboratory (CSAIL) | Robots | Artificial intelligence
https://news.mit.edu/rss/topic/robotics

Real-time data for a better response to disease outbreaks
The startup Kinsa uses its smart thermometers to detect and track the spread of contagious illness before patients go to the hospital.
Fri, 21 Aug 2020 00:00:00 -0400
https://news.mit.edu/2020/kinsa-health-0821
Zach Winn | MIT News Office
<p>Kinsa was founded by MIT alumnus Inder Singh MBA ’06, SM ’07 in 2012, with the mission of collecting information about when and where infectious diseases are spreading in real time. Today the company is fulfilling that mission on several fronts.</p>

<p>It starts with families. More than 1.5 million of Kinsa’s&nbsp;“smart” thermometers have been sold or given away across the country, including hundreds of thousands to families from low-income school districts. The thermometers link to an app that helps users decide if they should seek medical attention based on age, fever, and symptoms.</p>

<p>At the community level, the data generated by the thermometers are anonymized and aggregated, and can be shared with parents and school officials, helping them understand what illnesses are going around and prevent the spread of disease in classrooms.</p>

<p>By working with over 2,000 schools to date in addition to many businesses, Kinsa has also developed predictive models that can forecast flu seasons each year. In the spring of this year, <a href=”https://www.medrxiv.org/content/10.1101/2020.06.07.20078956v1″ target=”_blank”>the company showed</a> it could predict flu spread 12-20 weeks in advance at the city level.</p>

<p>The milestone prepared Kinsa for its most profound scale-up yet. When Covid-19 came to the U.S., the company was able to estimate its spread in real time by tracking fever levels above what would normally be expected. Now Kinsa is working with health officials in five states and three cities to help contain and control the virus.</p>

<p>“By the time the CDC [U.S. Centers for Disease Control] gets the data, it has been processed, deidentified, and people have entered the health system to see a doctor,” says Singh, who is Kinsa’s CEO as well as its founder. “There’s a huge delay between when someone contracts an illness and when they see a doctor. The current health care system only sees the latter; we see the former.”</p>

<p>Today Kinsa finds itself playing a central role in America’s Covid-19 response. In addition to its local partnerships, the company has become a central information hub for the public, media, and researchers with its Healthweather tool, which maps unusual rates of fevers — among the most common symptoms of Covid-19 — to help visualize the prevalence of illness in communities.</p>

<p>Singh says Kinsa’s data complement other methods of containing the virus like testing, contact tracing, and the use of face masks.</p>

<p><strong>Better data for better responses</strong></p>

<p>Singh’s first exposure to MIT came while he was a graduate student at Harvard University’s Kennedy School of Government.</p>

<p>“I remember I interacted with some MIT undergrads, we brainstormed some social-impact ideas,” Singh recalls. “A week later I got an email from them saying they’d prototyped what we were talking about. I was like, ‘You prototyped what we talked about in a week!?’ I was blown away, and it was an insight into how MIT is such a do-er campus. It was so entrepreneurial. I was like, ‘I want to do that.’”</p>

<p>Soon Singh enrolled in the Harvard-MIT Program in Health Sciences and Technology, an interdisciplinary program where he earned his master’s and MBA degrees while working with leading research hospitals in the area. The program also set him on a course to improve the way we respond to infectious disease.</p>

<p>Following his graduation, he joined the Clinton Health Access Initiative (CHAI), where he brokered deals between pharmaceutical companies and low-resource countries to lower the cost of medicines for HIV, malaria, and tuberculosis. Singh described CHAI as a dream job, but it opened his eyes to several shortcomings in the global health system.</p>

<p>“The world tries to curb the spread of infectious illness with almost zero real-time information about when and where disease is spreading,” Singh says. “The question I posed to start Kinsa was ‘how do you stop the next outbreak before it becomes an epidemic if you don’t know where and when it’s starting and how fast it’s spreading’?”</p>

<p>Kinsa was started in 2012 with the insight that better data were needed to control infectious diseases. In order to get that data, the company needed a new way of providing value to sick people and families.</p>

<p>“The behavior in the home when someone gets sick is to grab the thermometer,” Singh says. “We piggy-backed off of that to create a communication channel to the sick, to help them get better faster.”</p>

<p>Kinsa started by selling its thermometers and creating a sponsorship program for corporate donors to fund thermometer donations to Title 1 schools, which serve high numbers of economically disadvantaged students. Singh says 40 percent of families that received a Kinsa thermometer through that program did not previously have any thermometer in their house.</p>

<p>The company says its program has been shown to help schools improve attendance, and has yielded years of real-time data on fever rates that Kinsa compares against official estimates and uses to develop its models.</p>

<p>“We had been forecasting flu incidence accurately several weeks out for years, and right around early 2020, we had a massive breakthrough,” Singh recalls. “We showed we could predict flu 12 to 20 weeks out — then March hit. We said, let’s try to remove the fever levels associated with cold and flu from our observed real time signal. What’s left over is unusual fevers, and we saw hotspots across the country. We observed six years of data and there’d been hot spots, but nothing like we were seeing in early March.”</p>
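
<p>Kinsa has not published the details of its model; the following is a minimal sketch, in Python, of the general idea Singh describes: build a seasonal baseline of expected fever rates from past seasons, subtract it from the observed real-time signal, and flag weeks where the residual is unusually large. All names and numbers are illustrative.</p>

<pre><code>
import numpy as np

def expected_baseline(history):
    """Average fever rate for each week of the year over past seasons.

    `history` has shape (seasons, weeks); a real model would also account
    for trend, holidays, and thermometer coverage in each region.
    """
    return history.mean(axis=0)

def unusual_fever(observed, history, z_threshold=3.0):
    """Flag weeks whose fever rate sits far above the seasonal baseline."""
    baseline = expected_baseline(history)
    spread = history.std(axis=0) + 1e-9          # avoid division by zero
    z_scores = (observed - baseline) / spread
    return z_scores > z_threshold

# Toy example: six past seasons, plus a current season with a spike in week 10.
rng = np.random.default_rng(0)
history = 0.02 + 0.01 * np.sin(np.linspace(0, 2 * np.pi, 52)) \
          + rng.normal(0, 0.001, (6, 52))
current = history.mean(axis=0).copy()
current[10] += 0.02                               # anomalous fever spike
print(np.flatnonzero(unusual_fever(current, history)))   # -> [10]
</code></pre>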

<p>The company quickly made its real-time data available to the public, and on March 14, Singh got on a call with the former New York State health commissioner, the former head of the U.S. Food and Drug Administration, and the man responsible for Taiwan’s successful Covid-19 response.</p>

<p>“I said, ‘There’s hotspots everywhere,’” Singh recalls. “They’re in New York, around the Northeast, Texas, Michigan. They said, ‘This is interesting, but it doesn’t look credible because we’re not seeing case reports of Covid-19.’ Lo and behold, days and weeks later, we saw the Covid cases start building up.”</p>

<p><strong>A tool against Covid-19</strong></p>

<p>Singh says Kinsa’s data provide an unprecedented look into the way a disease is spreading through a community.</p>

<p>“We can predict the entire incidence curve [of flu season] on a city-by-city basis,” Singh says. “The next best model is [about] three weeks out, at a multistate level. It’s not because we’re smarter than others; it’s because we have better data. We found a way to communicate with someone consistently when they’ve just fallen ill.”</p>

<p>Kinsa has been working with health departments and research groups around the country to help them interpret the company’s data and react to early warnings of Covid-19’s spread. It’s also helping companies around the country as they begin bringing employees back to offices.</p>

<p>Now Kinsa is working on expanding its international presence to help curb infectious diseases on multiple fronts around the world, just like it’s doing in the U.S. The company’s progress promises to help authorities monitor diseases long after Covid-19.</p>

<p>“I started Kinsa to create a global, real-time outbreak monitoring and detection system, and now we have predictive power beyond that,” Singh says. “When you know where and when symptoms are starting and how fast they’re spreading, you can empower local individuals, families, communities, and governments.”</p>

The startup Kinsa, founded by MIT alumnus Inder Singh MBA ’06, SM ’07, uses data generated by its thermometers to detect and track contagious illness earlier than methods that rely on hospital testing.
Image: Courtesy of Kinsa

Rewriting the rules of machine-generated art
An artificial intelligence tool lets users edit generative adversarial network models with simple copy-and-paste commands.
Tue, 18 Aug 2020 15:00:00 -0400
https://news.mit.edu/2020/rewriting-rules-machine-generated-art-0818
Kim Martineau | MIT Quest for Intelligence
<p>Horses don’t normally wear hats, and deep generative models, or GANs, don’t normally follow rules laid out by human programmers. But a new tool developed at MIT lets anyone go into a GAN and tell the model, like a coder, to put hats on the heads of the horses it draws.&nbsp;</p>

<p>In&nbsp;<a href=”https://arxiv.org/abs/2007.15646″>a new study</a>&nbsp;appearing at the&nbsp;<a href=”https://eccv2020.eu/” target=”_blank”>European Conference on Computer Vision</a> this month, researchers show that the deep layers of neural networks can be edited, like so many lines of code, to generate surprising images no one has seen before.</p>

<p>“GANs are incredible artists, but they’re confined to imitating the data they see,” says the study’s lead author,&nbsp;<a href=”https://people.csail.mit.edu/davidbau/home/” target=”_blank”>David Bau</a>, a PhD student at MIT. “If we can rewrite the rules of a GAN directly, the only limit is human imagination.”</p>

<p>Generative adversarial networks, or GANs, pit two neural networks against each other to create hyper-realistic images and sounds. One neural network, the generator, learns to mimic the faces it sees in photos, or the words it hears spoken. A second network, the discriminator, compares the generator’s outputs to the original. The generator then iteratively builds on the discriminator’s feedback until its fabricated images and sounds are convincing enough to pass for real.</p>
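
<p>The adversarial setup can be written down in a few lines. The PyTorch sketch below shows one training step for a toy generator and discriminator operating on plain vectors rather than images; the architectures and hyperparameters are placeholders, not those used in the study.</p>

<pre><code>
import torch
from torch import nn

latent_dim, data_dim = 16, 64

# Generator maps random noise to fake samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator: push scores of real data toward 1 and of fakes toward 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: produce samples the discriminator scores as real.
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on a dummy batch of "real" data.
print(train_step(torch.randn(32, data_dim)))
</code></pre>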

<p>GANs have captivated artificial intelligence researchers for their ability to create representations that are stunningly lifelike and, at times, deeply bizarre, from a receding cat that&nbsp;<a href=”http://news.mit.edu/2020/visualizing-the-world-beyond-the-frame-0506″ target=”_self”>melts into a pile of fur</a>&nbsp;to a wedding dress standing in a church door as if&nbsp;<a href=”http://news.mit.edu/2019/visualizing-ai-models-blind-spots-1108″ target=”_self”>abandoned by the bride</a>. Like most deep learning models, GANs depend on massive datasets to learn from. The more examples they see, the better they get at mimicking them.&nbsp;</p>

<p>But the new study suggests that big datasets are not essential. If you understand how a model is wired, says Bau, you can edit the numerical weights in its layers to get the behavior you desire, even if no literal example exists. No dataset? No problem. Just create your own.</p>

<p>“We’re like prisoners to our training data,” he says. “GANs only learn patterns that are already in our data. But here I can manipulate a condition in the model to create horses with hats. It’s like editing a genetic sequence to create something entirely new, like inserting the DNA of a firefly into a plant to make it glow in the dark.”</p>

<p>Bau was a software engineer at Google, and had&nbsp;led the development&nbsp;of Google Hangouts and Google Image Search, when he decided to go back to school. The field of deep learning was exploding and he wanted to pursue foundational questions in computer science. Hoping to learn how to build transparent systems that would empower users, he joined the lab of MIT Professor&nbsp;<a href=”http://web.mit.edu/torralba/www/” target=”_blank”>Antonio Torralba</a>. There, he began probing deep nets and their millions of mathematical operations to understand how they represent the world.</p>

<p>Bau showed that you could slice into a GAN, like layer cake, to isolate the artificial neurons that had learned to draw a particular feature, like a tree, and switch them off to make the tree disappear. With this insight, Bau helped create <a href=”http://gandissect.res.ibm.com/ganpaint.html?project=churchoutdoor&amp;layer=layer4″>GANPaint</a>, a tool that lets users add and remove features like doors and clouds from a picture. In the process, he discovered that GANs have a stubborn streak: they wouldn’t let you draw doors in the sky.</p>

<p>“It had some rule that seemed to say, ‘doors don’t go there,’” he says. “That’s fascinating, we thought. It’s like an ‘if’ statement in a program. To me, it was a clear signal that the network had some kind of inner logic.”</p>

<p>Over several sleepless nights, Bau ran experiments and picked through the layers of his models for the equivalent of a conditional statement. Finally, it dawned on him. “The neural network has different memory banks that function as a set of general rules, relating one set of learned patterns to another,” he says. “I realized that if you could identify one line of memory, you could write a new memory into it.”&nbsp;</p>
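
<p>The “write a new memory” idea can be illustrated by treating a layer’s weights as a linear associative memory. The numpy sketch below is a simplified illustration of that view, not the authors’ full method: a rank-one update forces the layer to map a chosen key (a context such as “horse head”) to a new value (a desired output such as “hat”) while disturbing the rest of the mapping as little as possible.</p>

<pre><code>
import numpy as np

def rewrite_memory(W, key, new_value):
    """Minimal rank-one edit of a linear associative memory.

    W maps keys to values (value = W @ key). The returned matrix is the
    closest one to W in Frobenius norm that maps `key` to `new_value`;
    keys orthogonal to `key` are mapped exactly as before.
    """
    k = key / np.linalg.norm(key)
    error = new_value - W @ k              # what the layer currently gets wrong
    return W + np.outer(error, k)          # rank-one correction along the key

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                # stand-in for one layer's weights
key = rng.normal(size=8)                   # "context" feature, e.g. horse head
key /= np.linalg.norm(key)
value = rng.normal(size=8)                 # desired output feature, e.g. hat

W_new = rewrite_memory(W, key, value)
print(np.allclose(W_new @ key, value))     # True: the new rule is stored
</code></pre>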

<p>In a <a href=”https://www.youtube.com/watch?v=i2_-zNqtEPk&amp;feature=youtu.be” target=”_blank”>short version of his ECCV talk</a>, Bau demonstrates how to edit the model and rewrite memories using an intuitive interface he designed. He copies a tree from one image and pastes it into another, placing it, improbably, on a building tower. The model then churns out enough pictures of tree-sprouting towers to fill a family photo album. With a few more clicks, Bau transfers hats from human riders to their horses, and wipes away a reflection of light from a kitchen countertop.</p>

<p>The researchers hypothesize that each layer of a deep net acts as an associative memory, formed after repeated exposure to similar examples. Fed enough pictures of doors and clouds, for example, the model learns that doors are entryways to buildings, and clouds float in the sky. The model effectively memorizes a set of rules for understanding the world.</p>

<p>The effect is especially striking when GANs manipulate light. When GANPaint added windows to a room, for example, the model automatically added nearby reflections. It’s as if the model had an intuitive grasp of physics and how light should behave on object surfaces. “Even this relationship suggests that associations learned from data can be stored as lines of memory, and not only located but reversed,” says Torralba, the study’s senior author.&nbsp;</p>

<p>GAN editing has its limitations. It’s not easy to identify all of the neurons corresponding to objects and animals the model renders, the researchers say. Some rules also appear edit-proof; some changes the researchers tried to make failed to execute.</p>

<p>Still, the tool has immediate applications in computer graphics, where GANs are widely studied, and in training expert AI systems to recognize rare features and events through data augmentation. The tool also brings researchers closer to understanding how GANs learn visual concepts with minimal human guidance. If the models learn by imitating what they see, forming associations in the process, they may be a springboard for new kinds of machine learning applications.&nbsp;</p>

<p>The study’s other authors are Steven Liu, Tongzhou Wang, and Jun-Yan Zhu.</p>

A new GAN-editing tool developed at MIT allows users to copy features from one set of photos and paste them into another, creating an infinite array of pictures that riff on the new theme — in this case, horses with hats on their heads.
Image: David Bau

Data systems that learn to be better
Storage tool developed at MIT CSAIL adapts to what its datasets’ users want to search.
Mon, 10 Aug 2020 16:05:00 -0400
https://news.mit.edu/2020/mit-data-systems-learn-be-better-tsunami-bao-0810
Adam Conner-Simons | MIT CSAIL
<p>Big data has gotten really, really big: By 2025, all the world’s data will add up to <a href=”https://www.bernardmarr.com/default.asp?contentID=1846″>an estimated 175 trillion gigabytes</a>. For a visual, if you stored that amount of data on DVDs, it would stack up tall enough to circle the Earth 222 times.&nbsp;</p>

<p>One of the biggest challenges in computing is handling this onslaught of information while still being able to efficiently store and process it. A team from MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) believes that the answer rests with something called “instance-optimized systems.”&nbsp;&nbsp;</p>

<p>Traditional storage and database systems are designed to work for a wide range of applications because of how long it can take to build them — months or, often, several years. As a result, for any given workload such systems provide performance that is good, but usually not the best. Even worse, they sometimes require administrators to painstakingly tune the system by hand to provide even reasonable performance.&nbsp;</p>

<p>In contrast, the goal of instance-optimized systems is to build systems that optimize and partially re-organize themselves for the data they store and the workload they serve.&nbsp;</p>

<p>“It’s like building a database system for every application from scratch, which is not economically feasible with traditional system designs,” says MIT Professor Tim Kraska.&nbsp;</p>

<p>As a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. <a href=”https://arxiv.org/pdf/2006.13282.pdf”>Tsunami</a> uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. What’s more, its datasets can be organized via a series of “learned indexes” that are up to 100 times smaller than the indexes used in traditional systems.&nbsp;</p>
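
<p>A learned index replaces a conventional tree index with a model that predicts where a key sits in sorted storage, plus a small corrective search bounded by the model’s worst-case error. The Python sketch below is a toy single-model version (the systems described above use hierarchies of models and multidimensional layouts); the class and variable names are illustrative.</p>

<pre><code>
import bisect
import numpy as np

class LearnedIndex:
    """Toy learned index: a linear model predicts a key's position in a
    sorted array, and a bounded search fixes any prediction error."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys))
        positions = np.arange(len(self.keys))
        # Fit position ~= slope * key + intercept by least squares.
        self.slope, self.intercept = np.polyfit(self.keys, positions, deg=1)
        predicted = self.slope * self.keys + self.intercept
        self.max_err = int(np.ceil(np.abs(predicted - positions).max()))

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # Search only the small window around the predicted position.
        i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return None

rng = np.random.default_rng(1)
data = np.unique(rng.integers(0, 1_000_000, size=10_000))
index = LearnedIndex(data)
print(index.lookup(int(data[1234])) == 1234)   # True
</code></pre>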

<p>Kraska has been exploring the topic of learned indexes for several years, going back to his influential <a href=”https://arxiv.org/abs/1712.01208″>work with colleagues at Google</a> in 2017.&nbsp;</p>

<p>Harvard University Professor Stratos Idreos, who was not involved in the Tsunami project, says that a unique advantage of learned indexes is their small size, which, in addition to space savings, brings substantial performance improvements.</p>

<p>“I think this line of work is a paradigm shift that’s going to impact system design long-term,” says Idreos. “I expect approaches based on models will be one of the core components at the heart of a new wave of adaptive systems.”</p>

<p><a href=”https://arxiv.org/abs/2004.03814″>Bao</a>, meanwhile, focuses on improving the efficiency of query optimization through machine learning. A query optimizer rewrites a high-level declarative query to a query plan, which can actually be executed over the data to compute the result to the query. However, often there exists more than one query plan to answer any query; picking the wrong one can cause a query to take days to compute the answer, rather than seconds.&nbsp;</p>

<p>Traditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.</p>

<p>By fusing the two systems together, Kraska hopes to build the first instance-optimized database system that can provide the best possible performance for each individual application without any manual tuning.&nbsp;</p>

<p>The goal is to not only relieve developers from the daunting and laborious process of tuning database systems, but to also provide performance and cost benefits that are not possible with traditional systems.</p>


<p>Traditionally, the systems we use to store data are limited to only a few storage options and, because of it, they cannot provide the best possible performance for a given application. What Tsunami can do is dynamically change the structure of the data storage based on the kinds of queries that it receives and create new ways to store data, which are not feasible with more traditional approaches.</p>

<p>Johannes Gehrke, a managing director at Microsoft Research who also heads up machine learning efforts for Microsoft Teams, says that this work opens up many interesting applications, such as doing so-called “multidimensional queries” in main-memory data warehouses. Harvard’s Idreos also expects the project to spur further work on how to maintain the good performance of such systems when new data and new kinds of queries arrive.</p>

<p>Bao is short for “bandit optimizer,” a play on words related to the so-called “multi-armed bandit” analogy where a gambler tries to maximize their winnings at multiple slot machines that have different rates of return. The multi-armed bandit problem is commonly found in any situation that has tradeoffs between exploring multiple different options, versus exploiting a single option — from risk optimization to A/B testing.</p>
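
<p>Bao itself combines a learned cost model with Thompson sampling over query hints inside PostgreSQL; the Python sketch below shows only the underlying bandit idea in its simplest, epsilon-greedy form. It keeps per-plan runtime statistics, usually picks the plan that has been fastest so far, and occasionally explores an alternative. The plan names and runtimes are made up.</p>

<pre><code>
import random

class PlanBandit:
    """Epsilon-greedy bandit over candidate query plans (toy illustration)."""

    def __init__(self, plans, epsilon=0.1):
        self.plans = plans
        self.epsilon = epsilon
        self.totals = {p: 0.0 for p in plans}   # summed runtimes per plan
        self.counts = {p: 0 for p in plans}

    def choose(self):
        untried = [p for p in self.plans if self.counts[p] == 0]
        if untried:
            return untried[0]                    # try every plan at least once
        if random.random() < self.epsilon:
            return random.choice(self.plans)     # explore
        # Exploit: the plan with the lowest average runtime so far.
        return min(self.plans, key=lambda p: self.totals[p] / self.counts[p])

    def record(self, plan, runtime):
        self.totals[plan] += runtime
        self.counts[plan] += 1

# Simulated workload: plan "B" is usually fastest, but runtimes are noisy.
true_mean = {"A": 4.0, "B": 1.0, "C": 2.5}       # seconds
bandit = PlanBandit(list(true_mean))
random.seed(0)
for _ in range(200):
    plan = bandit.choose()
    bandit.record(plan, max(0.1, random.gauss(true_mean[plan], 0.3)))
print(max(bandit.counts, key=bandit.counts.get))  # almost always "B"
</code></pre>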

<p>“Query optimizers have been around for years, but they often make mistakes, and usually they don’t learn from them,” says Kraska. “That’s where we feel that our system can make key breakthroughs, as it can quickly learn for the given data and workload what query plans to use and which ones to avoid.”</p>

<p>Kraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time. In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.</p>

<p>“Our hope is that a system like this will enable much faster query times, and that people will be able to answer questions they hadn’t been able to answer before,” says Kraska.</p>

<p>A related paper about Tsunami was co-written by Kraska, PhD students Jialin Ding and Vikram Nathan, and MIT Professor Mohammad Alizadeh. A paper about Bao was co-written by Kraska, Marcus, PhD students Parimarjan Negi and Hongzi Mao, visiting scientist Nesime Tatbul, and Alizadeh.</p>

<p>The work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation.&nbsp;</p>

One of the biggest challenges in computing is handling a staggering onslaught of information while still being able to efficiently store and process it.

Shrinking deep learning’s carbon footprint
Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence.
Fri, 07 Aug 2020 17:00:00 -0400
https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807
Kim Martineau | MIT Quest for Intelligence
<p>In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can&nbsp;<a href=”https://www.gwern.net/GPT-3″>write creative fiction</a>, translate&nbsp;<a href=”https://twitter.com/michaeltefula/status/1285505897108832257″>legalese into plain English</a>, and&nbsp;<a href=”https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html”>answer obscure trivia</a>&nbsp;questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.</p>

<p>But it came at a hefty price: at least $4.6 million and&nbsp;<a href=”https://lambdalabs.com/blog/demystifying-gpt-3/”>355 years in computing time</a>, assuming the model&nbsp;was trained on a standard neural network chip, or GPU.&nbsp;The model’s colossal size — 1,000 times larger than&nbsp;<a href=”https://arxiv.org/pdf/1810.04805.pdf”>a typical</a>&nbsp;language model — is the main factor in&nbsp;its high cost.</p>

<p>“You have to throw a lot more computation at something to get a little improvement in performance,” says&nbsp;<a href=”http://ide.mit.edu/about-us/people/neil-thompson”>Neil Thompson</a>, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”</p>

<p>Some of the excitement over AI’s recent progress has shifted to alarm. In a&nbsp;<a href=”https://arxiv.org/abs/1906.02243″>study last year</a>, researchers at the University of Massachusetts at Amherst estimated that training&nbsp;a large deep-learning model&nbsp;produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency.&nbsp;Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.&nbsp;</p>

<p>“We need to rethink the entire stack — from software to hardware,” says&nbsp;<a href=”http://olivalab.mit.edu/audeoliva.html”>Aude Oliva</a>, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence.&nbsp;“Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”</p>

<p>Computational limits have dogged neural networks from their earliest incarnation —&nbsp;<a href=”https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon”>the perceptron</a>&nbsp;— in the 1950s.&nbsp;As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters&nbsp;— the mathematical operations, or weights, that tie the model together —&nbsp;making it 100 times bigger than its predecessor, itself just a year old.</p>

<p>In&nbsp;<a href=”https://arxiv.org/pdf/2007.05558.pdf”>work posted</a>&nbsp;on the pre-print server arXiv,&nbsp;Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.</p>

<p><strong>Toward leaner, greener algorithms</strong></p>

<p>The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea for recognizing actions in video and in real life to make models more compact.&nbsp;In a paper at the&nbsp;<a href=”https://eccv2020.eu/”>European Conference on Computer Vision</a> (ECCV) in August, researchers at the&nbsp;<a href=”https://mitibmwatsonailab.mit.edu/”>MIT-IBM Watson AI Lab</a>&nbsp;describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.</p>

<p>Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.</p>
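
<p>A rough sketch of the selection step (not the paper’s actual architecture): a cheap scoring function stands in for the policy network, the top-scoring frames are kept at full resolution, and the rest are aggressively downsampled before a classifier sees the summary. All sizes and the scoring rule are placeholders.</p>

<pre><code>
import numpy as np

def select_frames(frames, scores, keep=4, low_res_factor=4):
    """Keep the `keep` highest-scoring frames at full resolution and
    shrink the rest, mimicking the high/low-resolution split above."""
    keep_idx = set(np.argsort(scores)[-keep:])
    summary = []
    for i, frame in enumerate(frames):
        if i in keep_idx:
            summary.append(frame)                                      # full res
        else:
            summary.append(frame[::low_res_factor, ::low_res_factor])  # coarse
    return summary

# 16 fake grayscale frames of a clip, 64x64 pixels each.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(16)]
# Stand-in for a learned policy network: score frames by pixel "activity."
scores = [float(np.abs(np.diff(f, axis=0)).mean()) for f in frames]

summary = select_frames(frames, scores)
print([f.shape for f in summary[:6]])
# A downstream classifier would label the clip from this cheaper summary.
</code></pre>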

<p>“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author,&nbsp;<a href=”http://rogerioferis.com/”>Rogerio Feris</a>, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”</p>

<p>In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search.&nbsp;<a href=”https://songhan.mit.edu/”>Song Han</a>, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.&nbsp;</p>

<p>In&nbsp;<a href=”https://hanlab.mit.edu/projects/spvnas/papers/spvnas_eccv.pdf”>a paper at ECCV</a>, Han and his colleagues propose a model architecture for three-dimensional scene&nbsp;recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used&nbsp;an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.&nbsp;</p>
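
<p>The evolutionary-search loop itself is simple, even though evaluating real architectures is expensive. The toy Python sketch below evolves a population of candidate “architectures” (here just tuples of layer widths) against a stand-in fitness that rewards a proxy for accuracy and penalizes compute; real searches instead measure accuracy and latency on target hardware.</p>

<pre><code>
import random

SEARCH_SPACE = [16, 32, 64, 128, 256]            # allowed layer widths

def random_arch(depth=4):
    return tuple(random.choice(SEARCH_SPACE) for _ in range(depth))

def mutate(arch):
    i = random.randrange(len(arch))
    return arch[:i] + (random.choice(SEARCH_SPACE),) + arch[i + 1:]

def fitness(arch):
    # Stand-in objective: wider layers give diminishing proxy "accuracy"
    # but cost more compute; the trade-off sets an optimal width.
    accuracy = sum(w ** 0.5 for w in arch)
    compute = sum(w for w in arch)
    return accuracy - 0.05 * compute

def evolve(generations=50, population_size=20, survivors=5):
    population = [random_arch() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        children = [mutate(random.choice(parents))
                    for _ in range(population_size - survivors)]
        population = parents + children
    return max(population, key=fitness)

random.seed(0)
print(evolve())    # the best width configuration under this toy objective
</code></pre>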

<p>In&nbsp;<a href=”https://arxiv.org/pdf/2005.14187.pdf”>another recent paper</a>, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny&nbsp;Raspberry Pi.&nbsp;Separating the search and training process leads to huge reductions in computation, they say.</p>

<p>In a third approach, researchers are probing the essence of deep nets to see if it might be possible to&nbsp;train a small part of even hyper-efficient networks like those above.&nbsp;Under their proposed <a href=”https://arxiv.org/abs/1803.03635″>lottery ticket hypothesis</a>, PhD student&nbsp;<a href=”http://www.jfrankle.com/”>Jonathan Frankle</a>&nbsp;and MIT Professor&nbsp;<a href=”https://people.csail.mit.edu/mcarbin/”>Michael Carbin</a>&nbsp;proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”&nbsp;</p>

<p>They showed that an algorithm could retroactively&nbsp;find these winning subnetworks in&nbsp;small image-classification models. Now,&nbsp;<a href=”https://arxiv.org/abs/1912.05671″>in a paper</a>&nbsp;at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer&nbsp;influences the training outcome.&nbsp;</p>
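
<p>In outline, the procedure is iterative magnitude pruning with rewinding: train, prune the smallest surviving weights, rewind the survivors to their values from an early point in training, and repeat. The PyTorch sketch below shows that loop on a toy network; the training function is a stand-in, and the masking convention is simplified relative to the papers.</p>

<pre><code>
import copy
import torch
from torch import nn

def magnitude_mask(model, old_mask, fraction=0.2):
    """Prune `fraction` of the surviving weights with the smallest magnitude."""
    new_mask = {}
    for name, param in model.named_parameters():
        if "weight" not in name:
            new_mask[name] = old_mask[name]        # leave biases alone
            continue
        alive = param.data.abs()[old_mask[name].bool()]
        k = max(1, int(fraction * alive.numel()))
        threshold = alive.kthvalue(k).values       # k-th smallest survivor
        new_mask[name] = old_mask[name] * (param.data.abs() > threshold).float()
    return new_mask

def winning_ticket(model, train_fn, rounds=3, rewind_steps=50):
    """Iterative magnitude pruning with weight rewinding (toy outline).

    `train_fn(model, mask, steps)` is assumed to train the model while
    keeping masked-out weights at zero."""
    mask = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    train_fn(model, mask, rewind_steps)            # short "early" phase
    rewind = copy.deepcopy(model.state_dict())     # the weights to rewind to
    for _ in range(rounds):
        train_fn(model, mask, 500)                 # train to completion
        mask = magnitude_mask(model, mask)         # drop the smallest weights
        model.load_state_dict(rewind)              # rewind survivors, don't reinit
    return mask                                    # the "winning ticket"

# Minimal usage with a dummy training objective (a real one would fit data).
net = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 2))

def dummy_train(model, mask, steps):
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(min(steps, 50)):                # keep the demo short
        loss = model(torch.randn(8, 20)).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        for name, p in model.named_parameters():   # re-apply the pruning mask
            p.data *= mask[name]

ticket = winning_ticket(net, dummy_train, rounds=2)
print({n: int(m.sum()) for n, m in ticket.items() if "weight" in n})
</code></pre>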

<p>In less than two years, the lottery ticket idea has been cited&nbsp;<a href=”https://scholar.google.com/citations?user=MlLJapIAAAAJ&amp;hl=en”>more than 400 times</a>, including by Facebook researcher Ari Morcos, who has&nbsp;<a href=”https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks/”>shown</a>&nbsp;that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.&nbsp;</p>

<p>“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”</p>

<p>Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.</p>

<p><strong>Hardware designed for efficient deep net algorithms</strong></p>

<p>As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.</p>

<p>Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with&nbsp;the discovery that video-game graphical chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.&nbsp;</p>

<p>Much of this work hinges on finding ways to&nbsp;store and reuse data locally, across the chip’s processing cores,&nbsp;rather than waste time and energy shuttling data to and from&nbsp;a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.</p>

<p><a href=”https://www.rle.mit.edu/eems/”>Vivienne Sze</a>, a professor at MIT, has literally written&nbsp;<a href=”http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1530″>the book</a>&nbsp;on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely-varying shapes of both large and small deep learning models. Called&nbsp;<a href=”https://ieeexplore.ieee.org/document/8686088″>Eyeriss 2</a>, the chip uses 10 times less energy than a mobile GPU.</p>

<p>Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.&nbsp;</p>
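
<p>The data-reuse principle is the same one behind blocking, or tiling, in software: fetch a small tile once and reuse it for as many computations as possible before moving on. The short Python sketch below counts element loads for a naive versus a tiled matrix multiply; on a chip like Eyeriss the tiles would live in on-chip buffers rather than main memory, and the counts are illustrative only.</p>

<pre><code>
def naive_loads(n):
    # Every one of the n*n outputs re-reads a full row of A and column of B.
    return 2 * n * n * n

def tiled_loads(n, tile):
    # Each (tile x tile) block of A and B is loaded once per block pair and
    # reused for tile*tile partial products before being evicted.
    blocks = n // tile
    return 2 * (blocks ** 3) * (tile * tile)

n, tile = 1024, 32
print(naive_loads(n) / tiled_loads(n, tile))   # 32.0: far fewer memory loads
</code></pre>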

<p>“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”</p>

<p>Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost his title to a computer, but his performance&nbsp;<a href=”https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/”>was fueled</a>&nbsp;by a mere 20 watts of power. AlphaGo, by contrast, drew an estimated megawatt of power, or roughly 50,000 times more.</p>

<p>Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.</p>

<p>An electrochemical device, developed at MIT and recently&nbsp;<a href=”https://www.nature.com/articles/s41467-020-16866-6″>published in <em>Nature Communications</em></a>, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them.&nbsp;The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.</p>

<p>“Even though the device is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says&nbsp;the study’s senior author, <a href=”https://web.mit.edu/nse/people/faculty/yildiz.html”>Bilge Yildiz</a>, a professor at MIT.</p>

<p>Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.</p>

<p>“For all of these reasons, we need to embrace efficient AI,” she says.</p>

Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain.
Image: Niki Hinkle/MIT Spectrum

Shrinking deep learning’s carbon footprinthttps://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807
Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence.
Fri, 07 Aug 2020 17:00:00 -0400
https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807
Kim Martineau | MIT Quest for Intelligence
<p>In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can&nbsp;<a href=”https://www.gwern.net/GPT-3″>write creative fiction</a>, translate&nbsp;<a href=”https://twitter.com/michaeltefula/status/1285505897108832257″>legalese into plain English</a>, and&nbsp;<a href=”https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html”>answer obscure trivia</a>&nbsp;questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.</p>

<p>But it came at a hefty price: at least $4.6 million and&nbsp;<a href=”https://lambdalabs.com/blog/demystifying-gpt-3/”>355 years in computing time</a>, assuming the model&nbsp;was trained on a standard neural network chip, or GPU.&nbsp;The model’s colossal size — 1,000 times larger than&nbsp;<a href=”https://arxiv.org/pdf/1810.04805.pdf”>a typical</a>&nbsp;language model — is the main factor in&nbsp;its high cost.</p>

<p>“You have to throw a lot more computation at something to get a little improvement in performance,” says&nbsp;<a href=”http://ide.mit.edu/about-us/people/neil-thompson”>Neil Thompson</a>, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”</p>

<p>Some of the excitement over AI’s recent progress has shifted to alarm. In a&nbsp;<a href=”https://arxiv.org/abs/1906.02243″>study last year</a>, researchers at the University of Massachusetts at Amherst estimated that training&nbsp;a large deep-learning model&nbsp;produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency.&nbsp;Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.&nbsp;</p>

<p>“We need to rethink the entire stack — from software to hardware,” says&nbsp;<a href=”http://olivalab.mit.edu/audeoliva.html”>Aude Oliva</a>, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence.&nbsp;“Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”</p>

<p>Computational limits have dogged neural networks from their earliest incarnation —&nbsp;<a href=”https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon”>the perceptron</a>&nbsp;— in the 1950s.&nbsp;As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters&nbsp;— the mathematical operations, or weights, that tie the model together —&nbsp;making it 100 times bigger than its predecessor, itself just a year old.</p>

<p>In&nbsp;<a href=”https://arxiv.org/pdf/2007.05558.pdf”>work posted</a>&nbsp;on the pre-print server arXiv,&nbsp;Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.</p>

<p><strong>Toward leaner, greener algorithms</strong></p>

<p>The human perceptual system is extremely efficient at using data. Researchers have borrowed this idea to make models for recognizing actions in video more compact.&nbsp;In a paper at the&nbsp;<a href=”https://eccv2020.eu/”>European Conference on Computer Vision</a> (ECCV) in August, researchers at the&nbsp;<a href=”https://mitibmwatsonailab.mit.edu/”>MIT-IBM Watson AI Lab</a>&nbsp;describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.</p>

<p>Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.</p>
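
<p>A minimal sketch of the general idea, not the authors’ actual model: a cheap scoring head ranks precomputed per-frame features, and only the top few frames are passed to a heavier classifier. The module names, sizes, and the hard top-k selection below are illustrative assumptions.</p>

<pre><code>
import torch
import torch.nn as nn

class FrameSelector(nn.Module):
    """Toy sketch: score per-frame features cheaply, classify only the top frames."""
    def __init__(self, feat_dim=512, num_classes=400, keep=4):
        super().__init__()
        self.keep = keep
        self.scorer = nn.Linear(feat_dim, 1)            # lightweight "policy" head
        self.classifier = nn.Sequential(                # heavier recognition head
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, frame_feats):                     # frame_feats: (batch, frames, feat_dim)
        scores = self.scorer(frame_feats).squeeze(-1)   # one relevance score per frame
        # Hard top-k selection is a simplification; the paper trains its policy
        # network with more careful machinery than this.
        idx = scores.topk(self.keep, dim=1).indices
        idx = idx.unsqueeze(-1).expand(-1, -1, frame_feats.size(-1))
        picked = torch.gather(frame_feats, 1, idx)      # keep only the selected frames
        return self.classifier(picked.mean(dim=1))      # pool kept frames, then label

clip_features = torch.randn(2, 16, 512)                 # 2 clips, 16 frames each
print(FrameSelector()(clip_features).shape)             # torch.Size([2, 400])
</code></pre>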

<p>“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author,&nbsp;<a href=”http://rogerioferis.com/”>Rogerio Feris</a>, research manager at the MIT-IBM Watson AI Lab. “We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”</p>

<p>In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search.&nbsp;<a href=”https://songhan.mit.edu/”>Song Han</a>, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.&nbsp;</p>

<p>In&nbsp;<a href=”https://hanlab.mit.edu/projects/spvnas/papers/spvnas_eccv.pdf”>a paper at ECCV</a>, Han and his colleagues propose a model architecture for three-dimensional scene&nbsp;recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used&nbsp;an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.&nbsp;</p>

<p>In&nbsp;<a href=”https://arxiv.org/pdf/2005.14187.pdf”>another recent paper</a>, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny&nbsp;Raspberry Pi.&nbsp;Separating the search and training process leads to huge reductions in computation, they say.</p>
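
<p>In the abstract, evolutionary architecture search works by repeatedly mutating a population of candidate architectures and keeping the fittest. The toy sketch below assumes a made-up search space and a stand-in fitness function; the real systems train or estimate accuracy and measure cost on the target hardware.</p>

<pre><code>
import random

# Toy evolutionary search over architecture "genes"; the search space and the
# fitness function are stand-ins, not the ones used in the papers above.
SEARCH_SPACE = {"depth": [2, 4, 6, 8], "width": [64, 128, 256], "kernel": [3, 5, 7]}

def sample():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    child = dict(arch)
    gene = random.choice(list(SEARCH_SPACE))
    child[gene] = random.choice(SEARCH_SPACE[gene])
    return child

def fitness(arch):
    # Placeholder objective: reward an assumed accuracy proxy, penalize an assumed compute cost.
    accuracy_proxy = 0.1 * arch["depth"] + 0.001 * arch["width"]
    cost_proxy = arch["depth"] * arch["width"] * arch["kernel"] ** 2 / 1e5
    return accuracy_proxy - cost_proxy

population = [sample() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                 # keep the fittest candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best architecture:", max(population, key=fitness))
</code></pre>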

<p>In a third approach, researchers are probing the essence of deep nets to see if it might be possible to&nbsp;train a small part of even hyper-efficient networks like those above.&nbsp;Under their <a href=”https://arxiv.org/abs/1803.03635″>lottery ticket hypothesis</a>, PhD student&nbsp;<a href=”http://www.jfrankle.com/”>Jonathan Frankle</a>&nbsp;and MIT Professor&nbsp;<a href=”https://people.csail.mit.edu/mcarbin/”>Michael Carbin</a>&nbsp;proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”&nbsp;</p>

<p>They showed that an algorithm could retroactively&nbsp;find these winning subnetworks in&nbsp;small image-classification models. Now,&nbsp;<a href=”https://arxiv.org/abs/1912.05671″>in a paper</a>&nbsp;at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer&nbsp;influences the training outcome.&nbsp;</p>
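
<p>Schematically, the lottery-ticket recipe is iterative magnitude pruning with rewinding: train, remove the smallest surviving weights, reset the survivors to their values from an early snapshot, and repeat. In the sketch below the training step is a placeholder, and the pruning fraction and rewind point are arbitrary choices rather than the paper’s settings.</p>

<pre><code>
import numpy as np

rng = np.random.default_rng(0)

def train(weights, mask):
    """Placeholder for real training: nudge only the unpruned weights."""
    return weights + 0.01 * rng.standard_normal(weights.shape) * mask

# Schematic lottery-ticket loop: train, prune the smallest surviving weights,
# rewind the survivors to an early snapshot, and repeat.
weights = rng.standard_normal(1000)
mask = np.ones_like(weights)

early_weights = train(weights, mask)              # snapshot early in training (the rewind point)
for _ in range(5):
    trained = train(early_weights * mask, mask)   # stand-in for training to completion
    survivors = np.abs(trained[mask == 1])
    threshold = np.quantile(survivors, 0.2)       # prune the 20% smallest surviving weights
    mask *= (np.abs(trained) >= threshold)
    # "Rewind": the surviving weights restart from the early snapshot, not from scratch.

print(f"remaining weights: {int(mask.sum())} of {mask.size}")
</code></pre>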

<p>In less than two years, the lottery ticket idea has been cited&nbsp;<a href=”https://scholar.google.com/citations?user=MlLJapIAAAAJ&amp;hl=en”>more than 400 times</a>, including by Facebook researcher Ari Morcos, who has&nbsp;<a href=”https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks/”>shown</a>&nbsp;that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.&nbsp;</p>

<p>“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”</p>

<p>Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.</p>

<p><strong>Hardware designed for efficient deep net algorithms</strong></p>

<p>As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.</p>

<p>Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. The trend began with&nbsp;the discovery that video-game graphics chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.&nbsp;</p>

<p>Much of this work hinges on finding ways to&nbsp;store and reuse data locally, across the chip’s processing cores,&nbsp;rather than waste time and energy shuttling data to and from&nbsp;a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.</p>

<p><a href=”https://www.rle.mit.edu/eems/”>Vivienne Sze</a>, a professor at MIT, has literally written&nbsp;<a href=”http://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1530″>the book</a>&nbsp;on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely-varying shapes of both large and small deep learning models. Called&nbsp;<a href=”https://ieeexplore.ieee.org/document/8686088″>Eyeriss 2</a>, the chip uses 10 times less energy than a mobile GPU.</p>

<p>Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.&nbsp;</p>

<p>“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”</p>

<p>Other hardware innovators are focused on reproducing the brain’s energy efficiency. Former Go world champion Lee Sedol may have lost to a computer, but his performance&nbsp;<a href=”https://jacquesmattheij.com/another-way-of-looking-at-lee-sedol-vs-alphago/”>was fueled</a>&nbsp;by a mere 20 watts of power. AlphaGo, by contrast, burned an estimated megawatt of energy, or 50,000 times more.</p>

<p>Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.</p>

<p>An electrochemical device, developed at MIT and recently&nbsp;<a href=”https://www.nature.com/articles/s41467-020-16866-6″>published in <em>Nature Communications</em></a>, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them.&nbsp;The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.</p>

<p>“Even though the device is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says&nbsp;the study’s senior author, <a href=”https://web.mit.edu/nse/people/faculty/yildiz.html”>Bilge Yildiz</a>, a professor at MIT.</p>

<p>Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.</p>

<p>“For all of these reasons, we need to embrace efficient AI,” she says.</p>

Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain.
Image: Niki Hinkle/MIT Spectrum

3 Questions: John Leonard on the future of autonomous vehicleshttps://news.mit.edu/2020/mit-3-questions-john-leonard-future-of-autonomous-vehicles-0804
MIT Task Force on the Work of the Future examines job changes in the AV transition and how training can help workers move into careers that support mobility systems.
Tue, 04 Aug 2020 14:07:00 -0400
https://news.mit.edu/2020/mit-3-questions-john-leonard-future-of-autonomous-vehicles-0804
MIT Task Force on the Work of the Future
<p><em>As part of the MIT Task Force on the Work of the Future’s new series of research briefs, professor of mechanical engineering John Leonard teamed with professor of aeronautics and astronautics and the Dibner Professor of the History of Engineering and Manufacturing David Mindell and with doctoral candidate Erik Stayton to explore the future of autonomous vehicles (AV) — an area that could arguably be called the touchstone for the discussion of jobs of the future in recent years. Leonard is the Samuel C. Collins Professor of Mechanical and Ocean Engineering in the Department of Mechanical Engineering,&nbsp;a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and member of the MIT Task Force on the Work of the Future.&nbsp;His research addresses&nbsp;navigation and mapping for autonomous mobile robots operating in challenging environments.&nbsp;</em></p>

<p><em>Their research brief, “<a href=”http://workofthefuture.mit.edu/2020-Research-Brief-Leonard-Mindell-Stayton” target=”_blank”>Autonomous Vehicles, Mobility, and Employment Policy: The Roads Ahead</a>,” looks at how the AV transition will affect jobs and explores how sustained investments in workforce training for advanced mobility can help drivers and other mobility workers transition into new careers that support mobility systems and technologies. It also highlights the policies that will greatly ease the integration of automated systems into urban mobility systems, including investing in local and national infrastructure, and forming public-private partnerships. Leonard spoke recently on some of the findings in the brief.</em></p>

<p><strong>Q:</strong> When would you say Level 4 autonomous vehicle systems — those that can operate without active supervision by a human driver — will increase their area of operation beyond today’s limited local deployments?</p>

<p><strong>A: </strong>The widespread deployment of Level 4 automated vehicles will take much longer than many have predicted — at least a decade for favorable environments, and possibly much longer. Despite substantial recent progress by the community, major challenges remain before we will see the disruptive rollout of fully automated driving systems that have no safety driver onboard over large areas.&nbsp;Expansion will likely be gradual, and will happen region-by-region in specific categories of transportation, resulting in wide variations in availability across the country. The key question is not just “when,” but “where” will the technology be available and profitable?</p>

<p>Driver assistance and active safety systems (known as Level 2 automation) will continue to become more widespread on personal vehicles.&nbsp;These systems, however, will have limited impacts on jobs, since a human driver must be on board and ready to intervene at any moment.&nbsp;Level 3 systems can operate without active engagement by the driver for certain geographic settings, so long as the driver is ready to intervene when requested; however, these systems will likely be restricted to low-speed traffic.</p>

<p>Impacts on trucking are also expected to be less than many have predicted, due to technological challenges and risks that remain, even for more structured highway environments.</p>

<p><strong>Q:</strong> In the brief, you make the argument that AV transition, while threatening numerous jobs, will not be “jobless.” Can you explain?&nbsp; What are the likely impacts to mobility jobs — including transit, vehicle sales, vehicle maintenance, delivery, and other related industries?</p>

<p><strong>A: </strong>The longer rollout time for Level 4 autonomy provides time for sustained investments in workforce training that can help drivers and other mobility workers transition into new careers that support mobility systems and technologies.&nbsp;Transitioning from current-day driving jobs to these jobs represents potential pathways for employment, so long as job-training resources are available. Because the geographical rollout of Level 4 automated driving is expected to be slow, human workers will remain essential to the operation of these systems for the foreseeable future, in roles that are both old and new.&nbsp;</p>

<p>In some cases, Level 4 remote driving systems could move driving jobs from vehicles to fixed-location centers, but these might represent a step down in job quality for many professional drivers. The skills required for these jobs are largely unknown, but they are likely to be a combination of call-center, dispatcher, technician, and maintenance roles with strong language skills. More advanced engineering roles could also be sources of good jobs if automated taxi fleets are deployed at scale, but will require strong technical training that may be out of reach for many.&nbsp;</p>

<p>Increasing availability of Level 2 and Level 3 systems will result in changes in the nature of work for professional drivers, but will not necessarily affect job numbers to the extent that other systems might, because these systems do not remove drivers from vehicles.&nbsp;</p>

<p>While the employment implications of widespread Level 4 automation in trucking could eventually be considerable, as with other domains, the rollout is expected to be gradual. Truck drivers do more than just drive, and so human presence within even highly automated trucks would remain valuable for other reasons such as loading, unloading, and maintenance. Human-autonomous truck platooning, in which multiple Level 4 trucks follow a human-driven lead truck, may be more viable than completely operator-free Level 4 operations in the near term.</p>

<p><strong>Q: </strong>How should we prepare policy in the three key areas of infrastructure, jobs, and innovation?</p>

<p><strong>A:</strong> Policymakers can act now to prepare for and minimize disruptions to the millions of jobs in ground transportation and related industries that may come in the future, while also fostering greater economic opportunity and mitigating environmental impacts by building safe and accessible mobility systems. Investing in local and national infrastructure, and forming public-private partnerships, will greatly ease integration of automated systems into urban mobility systems.&nbsp;&nbsp;</p>

<p>Automated vehicles should be thought of as one element in a mobility mix, and as a potential feeder for public transit rather than a replacement for it, but unintended consequences such as increased congestion remain risks. The crucial role of public transit for connecting workers to workplaces will endure: the future of work depends in large part on how people get to work.</p>

<p>Policy recommendations in the trucking sector include strengthening career pathways for drivers, increasing labor standards and worker protections, advancing public safety, creating good jobs via human-led truck platooning, and promoting safe and electric trucks.</p>

Professor John Leonard says the widespread deployment of Level 4 automated vehicles, which can operate without active supervision by a human driver, will take much longer than many have predicted.

New US postage stamp highlights MIT researchhttps://news.mit.edu/2020/new-us-postal-stamp-highlights-mit-research-0802
For the robotics category in a new series celebrating innovation, the USPS chose the bionic prosthesis designed and built by the Media Lab&#039;s Biomechatronics group.
Sun, 02 Aug 2020 00:00:00 -0400
https://news.mit.edu/2020/new-us-postal-stamp-highlights-mit-research-0802
Alexandra Kahn | MIT Media Lab
<p>Letter writers across the country will soon have a fun and beautiful new Forever stamp to choose from, featuring novel research from the Media Lab’s Biomechatronics research group.&nbsp;</p>

<p>The stamp is part of a new U.S. Postal Service (USPS) series on innovation, representing computing, biomedicine, genome sequencing, robotics, and solar technology.&nbsp;For the robotics category, the USPS chose the bionic prosthesis designed and built by Matt Carney PhD ’20 and members of the Biomechatronics group, led by Professor Hugh Herr.</p>

<p>The image used in the stamp was taken by photographer Andy Ryan, whose portfolio spans&nbsp;images from around the world, and who for many years has been capturing the MIT experience — from stunning architectural shots to the research work of labs across campus. Ryan suggested the bionic work of the biomechatronics group to USPS to represent the future of robotics. Ryan also created the images that became the computing and solar technology stamps in the series.&nbsp;</p>

<p>“I was aware that Hugh Herr and his research team were incorporating robotic elements into the prosthetic legs they were developing and testing,” Ryan notes.&nbsp;“This vision of robotics was, in my mind, a true depiction of how robots and robotics would manifest and impact society in the future.”&nbsp;</p>

<p>With encouragement from Herr, Ryan submitted high-definition, stylized, and close-up images of Matt Carney working on the group’s latest designs.&nbsp;</p>

<p>Carney, who recently completed his PhD in media arts and sciences at the Media Lab, views bionic limbs as the ultimate humanoid robot, and an ideal innovation to represent and portray robotics in 2020. He was all-in for sharing that work with the world.</p>

<p>“Robotic prostheses integrate biomechanics, mechanical, electrical, and software engineering, and no piece is off-the-shelf,” Carney says. “To attempt to fit within the confines of the human form, and to match the bandwidth and power density of the human body, we must push the bounds of every discipline: computation, strength of materials, magnetic energy densities, sensors, biological interfaces, and so much more.”</p>

<p>In his childhood, Carney himself collected stamps from different corners of the globe, and so the selection of his research for a U.S. postal stamp has been especially meaningful.&nbsp;</p>

<p>“It’s a freakin’ honor to have my PhD work featured as a USPS stamp,” Carney says, breaking into a big smile. “I hope this feat is an inspiration to young students everywhere to crush their homework, and to build the skills to make a positive impact on the world. And while I worked insane hours to build this thing — and really tried to inspire with its design as much as its engineering — it’s truly the culmination of powered prosthesis work pioneered by Dr. Hugh Herr and our entire team at the Media Lab’s Biomechatronics group, and it expands on work from a global community over more than a decade of development.”</p>

<p>The new MIT stamp joins a venerable list of other stamps associated with the Institute.&nbsp;Previously issued stamps have featured Apollo 11 astronaut and moonwalker Buzz Aldrin ScD ’63, Nobel Prize winner Richard Feynman ’39, and architect Robert Robinson Taylor, who graduated from MIT in 1892 and is considered the nation’s first academically trained African American architect, followed by Pritzker Prize-winning architect I.M. Pei ’40, whose work includes the Louvre Glass Pyramid and the East Building of the National Gallery of Art in Washington, as well as numerous buildings on the MIT campus.&nbsp;</p>

<p>The new robotics stamp, however, is the first to feature MIT research, as well as members of the MIT community.</p>

<p>“I’m deeply honored that a USPS Forever stamp has been created to celebrate technologically advanced robotic prostheses, and along with that, the determination to alleviate human impairment,” Herr says. “Through the marriage of human physiology and robotics, persons with leg amputation can now walk with powered prostheses that closely emulate the biological leg. By integrating synthetic sensors, artificial computation, and muscle-like actuation, these technologies are already improving people’s lives in profound ways, and may one day soon bring about the end of disability.”</p>

<p>The Innovation Stamp series will be <a href=”http://about.usps.com/newsroom/national-releases/2020/0706-usps-announces-new-stamps-celebrating-innovation.pdf”>available for purchase</a> through the U.S. Postal Service later this month.</p>


Matt Carney, together with a team of researchers from the Media Lab’s Biomechatronics group, designed and built the robotic prosthesis featured on the new U.S. postage stamp.
Photo: Andy Ryan

An automated health care system that understands when to step inhttps://news.mit.edu/2020/machine-learning-health-care-system-understands-when-to-step-in-0731
Machine learning system from MIT CSAIL can look at chest X-rays to diagnose pneumonia — and also knows when to defer to a radiologist.
Fri, 31 Jul 2020 14:15:01 -0400
https://news.mit.edu/2020/machine-learning-health-care-system-understands-when-to-step-in-0731
Adam Conner-Simons | MIT CSAIL
<p>In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using&nbsp;a <a href=”https://www.siliconrepublic.com/companies/facebook-content-moderation-automated” target=”_blank”>combination of automated filtering and human moderators</a>. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists&nbsp;<a href=”http://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507″ target=”_blank”>better detect different forms of cancer</a>.&nbsp;</p>

<p>What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn’t always merely a question of who does a task “better”; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.</p>

<p>To tackle this complex issue, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have developed a machine learning system that can either make a prediction about a task, or defer the decision to an expert. Most importantly, it can adapt when and how often it defers to its human collaborator, based on factors such as its teammate’s availability and level of experience.</p>

<p>The team trained the system on multiple tasks, including looking at chest X-rays to diagnose specific conditions such as atelectasis (lung collapse) and cardiomegaly (an enlarged heart). In the case of cardiomegaly, they found that their human-AI hybrid model performed 8 percent better than either could on their own (based on AU-ROC scores).&nbsp;&nbsp;</p>

<p>“In medical environments where doctors don’t have many extra cycles, it’s not the best use of their time to have them look at every single data point from a given patient’s file,” says PhD student Hussein Mozannar, lead author with David Sontag, the <span class=”person__info__def”>Von Helmholtz Associate Professor of Medical Engineering in the Department of Electrical Engineering and Computer Science</span>, of a new paper about the system that was recently presented at the International Conference on Machine Learning. “In that sort of scenario, it’s important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.”</p>

<p>The system has two parts: a “classifier” that can predict a certain subset of tasks, and a “rejector” that decides whether a given task should be handled by either its own classifier or the human expert.</p>
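
<p>In its simplest form, the deferral logic can be sketched as below: the rejector routes an input to the human expert when the classifier’s confidence is too low, with the bar adjusted by how costly the expert’s time is assumed to be. The real system learns this rule from data rather than hard-coding a threshold; the names and numbers here are illustrative.</p>

<pre><code>
import numpy as np

def predict_or_defer(probs, threshold=0.8, expert_cost=0.1):
    """Toy rejector: defer to the human expert only when the classifier is unsure.

    probs: (n_samples, n_classes) class probabilities from the classifier.
    A higher expert_cost (a busy or expensive expert) lowers the bar for the
    model to answer on its own. The real system learns this rule from data.
    """
    confidence = probs.max(axis=1)
    defer = confidence < (threshold - expert_cost)
    labels = probs.argmax(axis=1)
    return ["expert" if d else int(y) for d, y in zip(defer, labels)]

probs = np.array([[0.95, 0.05],   # confident: the classifier handles it
                  [0.55, 0.45]])  # uncertain: routed to the human expert
print(predict_or_defer(probs))    # [0, 'expert']
</code></pre>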

<p>Through experiments on tasks in medical diagnosis and text/image classification, the team showed that their approach not only achieves better accuracy than baselines, but does so with a lower computational cost and with far fewer training data samples.</p>

<p>“Our algorithms allow you to optimize for whatever choice you want, whether that’s the specific prediction accuracy or the cost of the expert’s time and effort,” says Sontag, who is also a member of MIT’s Institute for Medical Engineering and Science. “Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa.”</p>

<p>The system’s particular ability to help detect offensive text and images could also have interesting implications for content moderation. Mozannar suggests that it could be used at companies like Facebook in conjunction with a team of human moderators. (He is hopeful that such systems could minimize the number of hateful or traumatic posts that human moderators have to review every day.)</p>

<p>Sontag clarified that the team has not yet tested the system with human experts, but instead developed a series of “synthetic experts” so that they could tweak parameters such as experience and availability. In order to work with a new expert it’s never seen before, the system would need some minimal onboarding to get trained on the person’s particular strengths and weaknesses.</p>

<p>In future work, the team plans to test their approach with real human experts, such as radiologists for X-ray diagnosis. They will also explore how to develop systems that can learn from biased expert data, as well as systems that can work with — and defer to — several experts at once.<strong>&nbsp;</strong>For example, Sontag imagines a hospital scenario where the system could collaborate with different radiologists who are more experienced with different patient populations.</p>

<p>“There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability,” says Sontag. “We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.”&nbsp;</p>

<p>Mozannar is affiliated with both CSAIL and the MIT Institute for Data, Systems, and Society (IDSS). The team’s work was supported, in part, by the National Science Foundation.</p>


The system either queries the expert to diagnose the patient based on their X-ray and medical records, or looks at the X-ray to make the diagnosis itself.
Image courtesy of MIT CSAIL.

Algorithm finds hidden connections between paintings at the Methttps://news.mit.edu/2020/algorithm-finds-hidden-connections-between-paintings-met-museum-0729
A team from MIT helped create an image retrieval system to find the closest matches of paintings from different artists and cultures.
Wed, 29 Jul 2020 10:00:00 -0400
https://news.mit.edu/2020/algorithm-finds-hidden-connections-between-paintings-met-museum-0729
Rachel Gordon | MIT CSAIL

<p>Art is often heralded as the greatest journey into the past, solidifying a moment in time and space; the beautiful vehicle that lets us momentarily escape the present.&nbsp;</p>

<p>With the boundless treasure trove of paintings that exist, the connections between these works of art from different periods of time and space can often go overlooked. It’s impossible for even the most knowledgeable of art critics to take in millions of paintings across thousands of years and be able to find unexpected parallels in themes, motifs, and visual styles.&nbsp;</p>

<p>To streamline this process, a group of researchers from MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) and Microsoft created an algorithm to discover hidden connections between paintings at the Metropolitan Museum of Art (the Met) and Amsterdam’s Rijksmuseum.&nbsp;</p>

<p>Inspired by a special exhibit “Rembrandt and Velazquez” in the Rijksmuseum, the new “MosAIc” system finds paired or “analogous” works from different cultures, artists, and media by using deep networks to understand how “close” two images are. In that exhibit, the researchers were inspired by an unlikely, yet similar pairing: Francisco de Zurbarán’s “The Martyrdom of Saint Serapion”<em> </em>and Jan Asselijn’s “The Threatened Swan,” two works that portray scenes of profound altruism with an eerie visual resemblance.</p>

<p>“These two artists did not have a correspondence or meet each other during their lives, yet their paintings hinted at a rich, latent structure that underlies both of their works,” says CSAIL PhD student Mark Hamilton, the lead author on a paper about “MosAIc.”&nbsp;</p>

<p>To find two similar paintings, the team used a new algorithm for image search to unearth the closest match by a particular artist or culture. For example, in response to a query about “which musical instrument is closest to this painting of a blue-and-white dress,” the algorithm retrieves an image of a blue-and-white porcelain violin. These works are not only similar in pattern and form, but also draw their roots from a broader cultural exchange of porcelain between the Dutch and Chinese.&nbsp;</p>

<p>“Image retrieval systems let users find images that are semantically similar to a query image, serving as the backbone of reverse image search engines and many product recommendation engines,” says Hamilton. “Restricting an image retrieval system to particular subsets of images can yield new insights into relationships in the visual world. We aim to encourage a new level of engagement with creative artifacts.”&nbsp;</p>

<p><strong>How it works&nbsp;</strong></p>

<p>For many, art and science are irreconcilable: one grounded in logic, reasoning, and proven truths, and the other motivated by emotion, aesthetics, and beauty. But recently, AI and art took on a new flirtation that, over the past 10 years, developed into something more serious.&nbsp;</p>

<p>A large branch of this work, for example, has previously focused on generating new art using AI. There was the <a href=”http://nvidia-research-mingyuliu.com/gaugan/”>GauGAN</a> project developed by researchers at MIT, NVIDIA, and the University of California at Berkeley; Hamilton and others’ previous <a href=”https://gen.studio/”>GenStudio</a> project; and even an AI-generated artwork that sold at Sotheby’s for <a href=”https://www.fastcompany.com/90305344/the-future-of-ai-art-goes-up-for-auction-at-sothebys-for-50000″>$51,000</a>.&nbsp;</p>

<p>MosAIc, however, doesn’t aim to create new art so much as help explore existing art. One similar tool, Google’s “<a href=”https://artsexperiments.withgoogle.com/xdegrees/ogGvLdZg_9FlIQ/jwEy-G0atUfUJg”>X Degrees of Separation</a>,” finds paths of art that connect two works of art, but MosAIc differs in that it only requires a single image. Instead of finding paths, it uncovers connections in whatever culture or media the user is interested in, such as finding the shared artistic form of “Anthropoides paradisea” and “Seth Slaying a Serpent, Temple of Amun at Hibis.”<em>&nbsp;</em></p>

<p>Hamilton notes that building out their algorithm was a tricky endeavor, because they wanted to find images that were similar not just in color or style, but in meaning and theme. In other words, they’d want dogs to be close to other dogs, people to be close to other people, and so forth. To achieve this, they probe a deep network’s inner “activations” for each image in the combined open access collections of the Met and the Rijksmuseum. Distance between the “activations” of this deep network, which are commonly called “features,” was how they judged image similarity.</p>

<p>To find analogous images between different cultures, the team used a new image-search data structure called a “conditional KNN tree” that groups similar images together in a tree-like structure. To find a close match, they start at the tree’s “trunk” and follow the most promising “branch” until they are sure they’ve found the closest image. The data structure improves on its predecessors by allowing the tree to quickly “prune” itself to a particular culture, artist, or collection, quickly yielding answers to new types of queries.</p>
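
<p>A minimal sketch of a conditional retrieval query: restrict the search to artworks matching a condition (here, a culture), then return the nearest neighbor in feature space. The features and metadata below are made up, and brute-force filtering stands in for the conditional KNN tree, which answers the same kind of query efficiently on large collections.</p>

<pre><code>
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for deep-network "activations" (features) of six artworks, plus metadata.
features = rng.standard_normal((6, 128))
cultures = np.array(["Dutch", "Dutch", "Chinese", "Chinese", "Spanish", "Spanish"])
titles = np.array(["work A", "work B", "work C", "work D", "work E", "work F"])

def conditional_nearest(query_feat, culture):
    """Closest artwork to the query, restricted to a given culture.

    Brute-force filtering is used here for clarity; the conditional KNN tree in
    the paper answers the same kind of query efficiently at museum scale.
    """
    subset = np.flatnonzero(cultures == culture)
    dists = np.linalg.norm(features[subset] - query_feat, axis=1)
    return titles[subset[np.argmin(dists)]]

query = features[0] + 0.05 * rng.standard_normal(128)   # an image resembling "work A"
print(conditional_nearest(query, "Chinese"))             # its closest *Chinese* analogue
</code></pre>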

<p>What Hamilton and his colleagues found surprising was that this approach could also be applied to helping find problems with existing deep networks, related to the surge of “deepfakes” that have recently cropped up. They applied this data structure to find areas where probabilistic models, such as the generative adversarial networks (GANs) that are often used to create deepfakes, break down. They coined these problematic areas “blind spots,” and note that they give us insight into how GANs can be biased. Such blind spots further show that GANs struggle to represent particular areas of a dataset, even if most of their fakes can fool a human.&nbsp;</p><p><strong>Testing MosAIc&nbsp;</strong></p>

<p>The team evaluated MosAIc’s speed, and how closely it aligned with our human intuition about visual analogies.</p>

<p>For the speed tests, they wanted to make sure that their data structure provided value over simply searching through the collection with quick, brute-force search.&nbsp;</p>

<p>To understand how well the system aligned with human intuitions, they made and released two new datasets for evaluating conditional image retrieval systems. One dataset challenged algorithms to find images with the same content even after they had been “stylized” with a neural style transfer method. The second dataset challenged algorithms to recover English letters across different fonts. A bit less than two-thirds of the time, MosAIc was able to recover the correct image in a single guess from a “haystack” of 5,000 images.</p>

<p>“Going forward, we hope this work inspires others to think about how tools from information retrieval can help other fields like the arts, humanities, social science, and medicine,” says Hamilton. “These fields are rich with information that has never been processed with these techniques and can be a source for great inspiration for both computer scientists and domain experts. This work can be expanded in terms of new datasets, new types of queries, and new ways to understand the connections between works.”&nbsp;</p>

<p>Hamilton wrote the paper on MosAIc alongside Professor Bill Freeman and MIT undergraduates Stefanie Fu and Mindren Lu. The MosAIc website was built by Fu, Lu, Zhenbang Chen, Felix Tran, Darius Bopp, Margaret Wang, Marina Rogers, and Johnny Bui at the Microsoft Garage winter externship program.</p>

A machine learning system developed at MIT was inspired by an exhibit in Amsterdam’s Rijksmuseum that featured the unlikely but similar pairing of Francisco de Zurbarán’s “The Martyrdom of Saint Serapion” (left) and Jan Asselijn’s “The Threatened Swan.”
Image courtesy of MIT CSAIL.

Looking into the black boxhttps://news.mit.edu/2020/looking-black-box-deep-learning-neural-networks-0727
Recent advances give theoretical insight into why deep learning networks are successful.
Mon, 27 Jul 2020 16:45:01 -0400
https://news.mit.edu/2020/looking-black-box-deep-learning-neural-networks-0727
Sabbi Lall | McGovern Institute for Brain Research
<p>Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers <a href=”http://www.pnas.org/content/early/2020/06/08/1907369117/tab-article-info”>recently reviewed</a> their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.</p>

<p>“Deep learning was in some ways an accidental discovery,” explains Tommy Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”</p>

<p><strong>Climbing data mountains</strong></p>

<p>Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.</p>

<p>One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? Understanding why could potentially help advance deep learning applications.</p>

<p>“Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet.”</p>

<p>The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches.&nbsp;</p>

<p>“The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. Our review explains theoretically why deep networks are so good at representing this complexity.”</p>

<p>The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.</p>
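
<p>As a simplistic illustration of compositionality, the function below combines eight inputs using nothing but repeated application of a single two-argument local rule, the kind of hierarchical structure the theory argues deep networks are well matched to. The particular local rule is arbitrary.</p>

<pre><code>
import math

def local_rule(a, b):
    """An arbitrary two-argument 'local interaction' applied at every node of the hierarchy."""
    return math.tanh(a + b)

def compose(values):
    """Evaluate a binary tree of local rules: pairs, then pairs of pairs, and so on.

    Assumes the number of inputs is a power of two, purely to keep the sketch short.
    """
    while len(values) > 1:
        values = [local_rule(values[i], values[i + 1]) for i in range(0, len(values), 2)]
    return values[0]

# A function of eight variables built entirely from one local rule applied hierarchically,
# mirroring how an image is built from patches, then collections of patches, and so on.
print(compose([0.1, 0.4, -0.2, 0.3, 0.7, -0.5, 0.0, 0.2]))
</code></pre>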

<p><strong>Generalization puzzle</strong></p>

<p>There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.</p>
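
<p>A much simpler setting where implicit regularization can be observed directly is overparameterized linear regression: gradient descent started from zero converges to the minimum-norm solution that fits the data exactly, with no explicit constraint added. The sketch below, which is illustrative and not drawn from the paper, checks this numerically.</p>

<pre><code>
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized linear regression: 20 data points, 100 parameters.
# Infinitely many weight vectors fit the data exactly; which one does
# plain gradient descent pick when started from zero?
X = rng.standard_normal((20, 100))
y = rng.standard_normal(20)

w = np.zeros(100)
lr = 0.002
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)        # gradient of 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y     # the minimum-norm interpolating solution

print("training error:", np.linalg.norm(X @ w - y))                 # ~0: data fit exactly
print("gap to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~0: implicit bias
</code></pre>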

<p>The work has a number of implications going forward. Though deep learning is actively being applied in the world, this has so far occurred without a comprehensive underlying theory. A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow development of even more powerful learning approaches.</p>

<p>“In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”</p>

Neural network

Commentary: America must invest in its ability to innovatehttps://news.mit.edu/2020/america-innovate-endless-frontier-0724
Presidents of MIT and Indiana University urge America’s leaders to support bipartisan innovation bill.
Fri, 24 Jul 2020 09:52:12 -0400
https://news.mit.edu/2020/america-innovate-endless-frontier-0724
Zach Winn | MIT News Office
<p>In July of 1945, in an America just beginning to establish a postwar identity, former MIT vice president Vannevar Bush set forth a vision that guided the country to decades of scientific dominance and economic prosperity. Bush’s report to the president of the United States, <a href=”https://www.nsf.gov/od/lpa/nsf50/vbush1945.htm”>“Science: The Endless Frontier,”</a> called on the government to support basic research in university labs. Its ideas, including the creation of the National Science Foundation (NSF), are credited with helping to make U.S. scientific and technological innovation the envy of the world.</p>

<p>Today, America’s lead in science and technology is being challenged as never before, write MIT President L. Rafael Reif and Indiana University President Michael A. McRobbie in <a href=”https://www.chicagotribune.com/opinion/commentary/ct-opinion-universities-scientific-research-20200723-sv5envi5wzetni6kdjm73kr23a-story.html”>an op-ed</a> published today by <em>The Chicago Tribune</em>. They describe a “triple challenge” of bolder foreign competitors, faster technological change, and a merciless race to get from lab to market.</p>

<p>The government’s decision to adopt Bush’s ideas was bold and controversial at the time, and similarly bold action is needed now, they write.</p>

<p>“The U.S. has the fundamental building blocks for success, including many of the world’s top research universities <a href=”https://www.aau.edu/research/featured-research/battling-covid-19″>that are at the forefront of the fight against COVID-19</a>,” reads the op-ed. “But without a major, sustained funding commitment, a focus on key technologies and a faster system for transforming discoveries into new businesses, products and quality jobs, in today’s arena, America will not prevail.”</p>

<p>McRobbie and Reif believe <a href=”https://www.young.senate.gov/imo/media/doc/Endless%20Frontier%20Act%20Bill%20Text%205.21.2020.pdf” target=”_blank”>a bipartisan bill</a> recently introduced in both chambers of Congress can help America’s innovation ecosystem meet the challenges of the day. Named the “Endless Frontier Act,” the bill would support research focused on advancing key technologies like artificial intelligence and quantum computing. It does not seek to alter or replace the NSF, but to “create new strength in parallel,” they write.&nbsp;</p>

<p>The bill would also create scholarships, fellowships, and other forms of assistance to help build an American workforce ready to develop and deploy the latest technologies. And, it would facilitate experiments to help commercialize new ideas more quickly.</p>

<p>“Today’s leaders have the opportunity to display the far-sighted vision their predecessors showed after World War II — to expand and shape our institutions, and to make the investments to adapt to a changing world,” Reif and McRobbie write.</p>


<p>Both university presidents acknowledge that measures such as the Endless Frontier Act require audacious choices. But if leaders take the right steps now, they write, those choices will seem, in retrospect, obvious and wise.</p>

<p>“Now as then, our national prosperity hinges on the next generation of technical triumphs,” Reif and McRobbie write. “Now as then, that success is not inevitable, and it will not come by chance. But with focused funding and imaginative policy, we believe it remains in reach.”</p>

Neural vulnerability in Huntington’s disease tied to release of mitochondrial RNAhttps://news.mit.edu/2020/neural-vulnerability-huntingtons-disease-tied-to-mitochondrial-rna-release-0721
Unique survey of gene expression by cell type in humans and mice reveals several deficits affecting the most vulnerable neurons.
Tue, 21 Jul 2020 12:00:01 -0400
https://news.mit.edu/2020/neural-vulnerability-huntingtons-disease-tied-to-mitochondrial-rna-release-0721
David Orenstein | Picower Institute for Learning and Memory
<p>In the first study to comprehensively track how different types of brain cells respond to the mutation that causes Huntington’s disease (HD), MIT neuroscientists found that a significant cause of death for an especially afflicted kind of neuron might be an immune response to genetic material errantly released by mitochondria, the cellular components that provide cells with energy.</p>

<p>In different cell types at different stages of disease progression, the researchers measured how levels of RNA differed from normal in brain samples from people who died with Huntington’s disease and in mice engineered with various degrees of the genetic mutation. Among several novel observations in both species, one that particularly stood out is that RNAs from mitochondria were misplaced within the brain cells, called spiny projection neurons (SPNs), that are ravaged in the disease, contributing to its fatal neurological symptoms. The scientists observed that these stray RNAs, which look different to cells than RNA derived from the cell nucleus, triggered a problematic immune reaction.</p>

<p>“When these RNAs are released from the mitochondria, to the cell they can look just like viral RNAs, and this triggers innate immunity and can lead to cell death,” says study senior author Myriam Heiman, associate professor in MIT’s Department of Brain and Cognitive Sciences, the Picower Institute for Learning and Memory, and the Broad Institute of MIT and Harvard. “We believe this to be part of the pathway that triggers inflammatory signaling, which has been seen in HD before.”</p>

<p>Picower Fellow Hyeseung Lee and former visiting scientist Robert Fenster are co-lead authors of <a href=”https://www.cell.com/neuron/fulltext/S0896-6273(20)30475-X” target=”_blank”>the study</a> published in <em>Neuron</em>.</p>

<p><strong>Mitochondrial mishap</strong></p>

<p>The team’s two different screening methods, “<a href=”https://picower.mit.edu/innovations-inventions/trap”>TRAP</a>,” which can be used in mice, and single-nucleus RNA sequencing, which can also be used in mice and humans, not only picked up the presence of mitochondrial RNAs most specifically in the SPNs but also showed a deficit in the expression of genes for a process called oxidative phosphorylation that fuel-hungry neurons employ to make energy. The mouse experiments showed that this downregulation of oxidative phosphorylation and increase in mitochondrial RNA release both occurred very early in disease, before most other gene expression differences were manifest.</p>

<p>Moreover, the researchers found increased expression of an immune system protein called PKR, which has been shown to be a sensor of the released mitochondrial RNA. In fact, the team found that PKR was not only elevated in the neurons, but also activated and bound to mitochondrial RNAs.</p>

<p>The new findings appear to converge with other clinical conditions that, like Huntington’s disease, lead to damage in a brain region called the striatum, Heiman said. In a condition called Aicardi-Goutières syndrome, the same brain region can be damaged because of a misregulated innate immune response. In addition, children with thiamine deficiency suffer mitochondrial dysfunction, and a prior study has shown that mice with thiamine deficiency show PKR activation, much like Heiman’s team found.</p>

<p>“These non-HD human disorders that are characterized by striatal cell death extend the significance of our findings by linking both the oxidative metabolism deficits and autoinflammatory activation phenomena described here directly to human striatal cell death absent the [Huntington’s mutation] context,” they wrote in <em>Neuron</em>.</p>

<p><strong>Other observations</strong></p>

<p>Though the mitochondrial RNA release discovery was the most striking, the study produced several other potentially valuable findings, Heiman says.</p>

<p>One is that the study produced a sweeping catalog of substantial differences in gene expression, including ones related to important neural functions such as synaptic circuit connections and circadian clock function. Another, based on the team’s analysis of the results, is that a master regulator of these alterations to gene transcription in neurons may be the retinoic acid receptor beta (or “Rarb”) transcription factor. Heiman says that this could be a clinically useful finding because there are drugs that can activate Rarb.</p>
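
<p>As a rough illustration of how a candidate master regulator such as Rarb might be flagged (not the team’s actual method, and with made-up numbers), a hypergeometric test can ask whether a factor’s annotated target genes are over-represented among the differentially expressed genes:</p>

```python
# Hypothetical enrichment test: are a transcription factor's annotated targets
# over-represented among differentially expressed (DE) genes?  All counts below
# are invented for illustration.
from scipy.stats import hypergeom

total_genes = 15000      # genes tested for differential expression
tf_targets = 400         # genes annotated as targets of the factor
de_genes = 1200          # genes called differentially expressed
overlap = 85             # DE genes that are also annotated targets

# Probability of an overlap at least this large arising by chance.
p_value = hypergeom.sf(overlap - 1, total_genes, tf_targets, de_genes)
print(f"enrichment p-value: {p_value:.2e}")
```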

<p>“If we can inhibit transcriptional misregulation, we might be able to alter the outcome of the disease,” Heiman speculates. “It’s an important hypothesis to test.”</p>

<p>Another, more basic, finding in the study is that many of the gene expression differences the researchers saw in neurons in the human brain samples matched well with the changes they saw in mouse neurons, providing additional assurance that mouse models are indeed useful for studying this disease, Heiman says. The question has dogged the field somewhat because mice typically don’t show as much neuron death as people do.</p>

<p>“What we see is that actually the mouse models recapitulate the gene-expression changes that are occurring in these HD human neurons very well,” she says. “Interestingly, some of the other, non-neuronal, cell types did not show as much conservation between the human disease and mouse models, information that our team believes will be helpful to other investigators in future studies.”</p>

<p>The single-nucleus RNA sequencing study was part of a longstanding collaboration with Manolis Kellis’s group in MIT’s Computer Science and Artificial Intelligence Laboratory. Together, the two labs hope to expand these studies in the near future to further understand Huntington’s disease mechanisms.</p>

<p>In addition to Heiman, Lee, and Fenster, the paper’s other authors are Sebastian Pineda, Whitney Gibbs, Shahin Mohammadi, Fan Gao, Jose Davila-Velderrain, Francisco Garcia, Martine Therrien, Hailey Novis, Hilary Wilkinson, Thomas Vogt, Manolis Kellis, and Matthew LaVoie.</p>

<p>The CHDI Foundation, the U.S. National Institutes of Health, the Broderick Fund for Phytocannabinoid Research at MIT, and the JPB Foundation funded the study.</p>

MIT neuroscientists have linked the vulnerability of neurons in Huntington’s disease to the release of mitochondrial RNA and an associated immune system response. In this image, on the right are neurons from a Huntington’s model mouse showing much more PKR (a marker of immune response to mitochondrial RNA) in green than the neurons on the left, which are from a healthy mouse.
Image: Hyeseung Lee

MIT Schwarzman College of Computing announces first named professorshipshttps://news.mit.edu/2020/mit-schwarzman-college-computing-announces-first-named-professorships-0720
Honorees will receive additional support to pursue their research and develop their careers.
Mon, 20 Jul 2020 15:20:01 -0400
https://news.mit.edu/2020/mit-schwarzman-college-computing-announces-first-named-professorships-0720
MIT Schwarzman College of Computing
<p>The MIT Stephen A. Schwarzman College of Computing announced its first two named professorships, which were awarded, effective July 1, to Frédo Durand and Samuel Madden in the Department of Electrical Engineering and Computer Science (EECS). These named positions recognize their outstanding achievements and the future potential of their academic careers.</p>

<p>“I’m thrilled to acknowledge Frédo and Sam for their outstanding contributions in research and education. These named professorships recognize them for their extraordinary achievements,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing.</p>

<p><a href=”http://people.csail.mit.edu/fredo/” target=”_blank”>Frédo Durand</a>, a professor of computer science and engineering in EECS, has been named the&nbsp;inaugural Amar Bose Professor of Computing. The professorship, named after Amar Bose, a former longtime member of the MIT faculty and the founder of Bose Corporation, is granted in recognition of the recipient’s excellence in teaching, research, and mentorship in the field of computing. A member of the Computer Science and Artificial Intelligence Laboratory, Durand has research interests spanning most aspects of picture generation and creation, including rendering and computational photography. His recent focus includes video magnification for revealing the invisible, differentiable rendering, and compilers for productive high-performance imaging.</p>

<p>He received an inaugural Eurographics Young Researcher Award in 2004; an NSF CAREER Award in 2005; an inaugural Microsoft Research New Faculty Fellowship in 2005; a Sloan Foundation Fellowship in 2006; a Spira Award for distinguished teaching in 2007; and the ACM SIGGRAPH Computer Graphics Achievement Award in 2016.</p>

<p><a href=”http://db.csail.mit.edu/madden/” target=”_blank”>Samuel Madden</a> has been named the inaugural College of Computing Distinguished Professor of Computing. A professor of electrical engineering and computer science in EECS, Madden is being honored as an outstanding faculty member who is recognized as a leader and innovator. His research is in the area of database systems, focusing on database analytics and query processing, ranging from clouds to sensors to modern high-performance server architectures. He co-directs the Data Systems for AI Lab initiative and the Data Systems Group, which investigate systems and algorithms for data, including applying machine learning methods to data systems and engineering data systems to run machine learning at scale.&nbsp;</p>

<p>Madden was named one of <em>MIT Technology Review</em>’s “35 Innovators Under 35” in 2005, and received an NSF CAREER Award in 2004 and a Sloan Foundation Fellowship in 2007. He has also received best paper awards at VLDB 2004 and 2007 and at MobiCom 2006. In addition, he was recognized with a “test of time” award at SIGMOD 2013 for his work on acquisitional query processing and a 10-year best paper award at VLDB 2015 for his work on the C-Store system.</p>

Frédo Durand (left) and Sam Madden are the recipients of the first two named professorships in the Schwarzman College of Computing.

Better simulation meshes well for design software (and more)https://news.mit.edu/2020/better-simulation-meshes-well-for-design-software-and-more-0720
New work on 2D and 3D meshing aims to address challenges with some of today’s state-of-the-art methods.
Mon, 20 Jul 2020 14:50:01 -0400
https://news.mit.edu/2020/better-simulation-meshes-well-for-design-software-and-more-0720
Adam Conner-Simons | MIT CSAIL
<p>The digital age has spurred the rise of entire industries aimed at simulating our world and the objects in it. Simulation is what helps movies have realistic effects, automakers test cars virtually, and scientists analyze geophysical data.</p>

<p>To simulate physical systems in 3D, researchers often program computers to divide objects into sets of smaller elements, a procedure known as “meshing.” Most meshing approaches tile 2D objects with patterns of triangles or quadrilaterals (quads), and tile 3D objects with patterns of triangular pyramids (tetrahedra) or bent cubes (hexahedra, or “hexes”).</p>

<p>While much progress has been made in the fields of computational geometry and geometry processing, scientists surprisingly still don’t fully understand the math of stacking together cubes when they are allowed to bend or stretch a bit. Many questions remain about the patterns that can be formed by gluing cube-shaped elements together, which relates to an area of math called topology.</p>

<p>New work out of MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a>&nbsp;(CSAIL) aims to explore several of these questions. Researchers have published a series of papers that address shortcomings of existing meshing tools by seeking out mathematical structure in the problem. In collaboration with scientists at the University of Bern and the University of Texas at Austin, their work shows how areas of math like algebraic geometry, topology, and differential geometry could improve physical simulations used in computer-aided design (CAD), architecture, gaming, and other sectors.</p>

<p>“Simulation tools that are being deployed ‘in the wild’ don’t always fail gracefully,” says MIT Associate Professor Justin Solomon, senior author on the three new meshing-related papers. “If one thing is wrong with the mesh, the simulation might not agree with real-world physics, and you might have to throw the whole thing out.”&nbsp;</p>

<p>In <a href=”https://diglib.eg.org/handle/10.1111/cgf14074″ target=”_blank”>one paper</a>, a team led by MIT undergraduate Zoë Marschner developed an algorithm aimed specifically at repairing issues that often trip up existing approaches to hex meshing.</p>

<p>For example, some meshes contain elements that are partially inside-out or that self-intersect in ways that can’t be detected from their outer surfaces. The team’s algorithm works in iterations to repair those meshes in a way that untangles any such inversions while remaining faithful to the original shape.</p>

<p>“Thorny unsolved topology problems show up all over the hex-meshing universe,” says Marschner. “Until we figure them out, our algorithms will often fail in subtle ways.”</p>

<p>Marschner’s algorithm uses a technique called “sum-of-squares (SOS) relaxation” to pinpoint exactly where hex elements are inverted (which researchers describe as being “invalid”). It then moves the vertices of the hex element so that the hex is valid at the point where it was previously most invalid. The algorithm repeats this procedure to repair the hex.</p>
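
<p>As a simplified sketch of what “invalid” means here (and not the paper’s SOS machinery, which can certify validity over a hex’s whole interior), the code below checks the sign of the Jacobian determinant at each of a hex’s eight corners; a non-positive value flags a locally inverted element. The vertex ordering and test shapes are assumptions.</p>

```python
# Illustrative corner-Jacobian check for inverted ("invalid") hexahedra.
# The published method uses a sum-of-squares relaxation, which is stronger:
# corner checks can miss inversions in a hex's interior.
import numpy as np

# For each corner, its three neighboring vertices, ordered so that a unit cube
# (VTK-style vertex numbering) yields a positive determinant at every corner.
CORNER_NEIGHBORS = [
    (0, 1, 3, 4), (1, 2, 0, 5), (2, 3, 1, 6), (3, 0, 2, 7),
    (4, 7, 5, 0), (5, 4, 6, 1), (6, 5, 7, 2), (7, 6, 4, 3),
]

def corner_jacobians(hex_vertices):
    """Return the eight corner Jacobian determinants of a hex (8x3 array)."""
    v = np.asarray(hex_vertices, dtype=float)
    dets = []
    for c, a, b, d in CORNER_NEIGHBORS:
        edges = np.column_stack([v[a] - v[c], v[b] - v[c], v[d] - v[c]])
        dets.append(np.linalg.det(edges))
    return np.array(dets)

# A unit cube is valid: every corner determinant is positive.
cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(corner_jacobians(cube))                # all 1.0

# Dragging a top corner below the base inverts the element.
bad = list(cube)
bad[6] = (0.2, 0.2, -0.5)
print(corner_jacobians(bad).min() > 0)       # False: flagged as invalid
```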

<p>In addition to being published at this week’s Symposium on Geometry Processing, Marschner’s work earned her MIT’s 2020 Anna Pogosyants UROP Award.</p>

<p>A <a href=”https://dl.acm.org/doi/abs/10.1145/3374209″ target=”_blank”>second paper</a> spearheaded by PhD student Paul Zhang improves meshing by incorporating curves, edges, and other features that provide important cues for the human visual system and pattern recognition algorithms.&nbsp;</p>

<p>It can be difficult for computers to find these features reliably, let alone incorporate them into meshes. By using an existing construction called an “octahedral frame field” that is traditionally used for meshing 3D volumes, Zhang and his team have been able to develop 2D surface meshes without depending on unreliable methods that try to trace out features ahead of time.&nbsp;</p>

<p>Zhang says that they’ve shown that these so-called “feature-aligned” constructions automatically create visually accurate quad meshes, which are widely used in computer graphics and virtual reality applications.</p>

<p>“As the goal of meshing is to simultaneously simplify the object and maintain accuracy to the original domain, this tool enables a new standard in feature-aligned quad meshing,” says Zhang.&nbsp;</p>

<p>A <a href=”https://dl.acm.org/doi/abs/10.1145/3366786″ target=”_blank”>third paper</a> led by PhD student David Palmer links Zhang and Marschner’s work, advancing the theory of octahedral fields and showing how better math provides serious practical improvement for hex meshing.&nbsp;</p>

<p>In physics and geometry, velocities and flows are represented as “vector fields,” which attach an arrow to every point in a region of space. In 3D, these fields can twist, knot around, and cross each other in remarkably complicated ways. Further complicating matters, Palmer’s research studies the structure of “frame fields,” in which more than one arrow appears at each point.</p>

<p>Palmer’s work gives new insight into the ways frames can be described and uses them to design methods for placing frames in 3D space. Building off of existing work, his methods produce smooth, stable fields that can guide the design of high-quality meshes.</p>
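
<p>A hedged, two-dimensional analogue may help make the idea concrete: a “cross field” assigns four rotationally symmetric directions to every point, and encoding each cross by the angle 4θ removes the 90-degree ambiguity so that simple neighbor averaging produces a smooth field. The grid, boundary condition, and iteration count below are arbitrary; true octahedral frames in 3D need a more elaborate symmetric representation, which is what this line of work studies.</p>

```python
# Illustrative 2D "cross field" smoothing (a simplified analogue of the 3D
# octahedral frame fields discussed above).  Each cell holds an angle theta,
# defined only up to 90-degree rotations; averaging cos(4*theta), sin(4*theta)
# respects that symmetry.
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, np.pi / 2, size=(20, 20))  # random initial crosses

# Fix the boundary crosses, e.g. aligned with the domain's axes.
theta[0, :] = theta[-1, :] = 0.0
theta[:, 0] = theta[:, -1] = 0.0

for _ in range(500):
    c, s = np.cos(4 * theta), np.sin(4 * theta)
    # Jacobi-style averaging of the symmetry-aware representation.
    c_avg = c[:-2, 1:-1] + c[2:, 1:-1] + c[1:-1, :-2] + c[1:-1, 2:]
    s_avg = s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:]
    theta[1:-1, 1:-1] = np.arctan2(s_avg, c_avg) / 4

# With an axis-aligned boundary, the interior relaxes toward alignment too.
print(np.degrees(np.abs(theta[1:-1, 1:-1]).max()))
```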

<p>Solomon says that his team aims to eventually characterize all the ways that octahedral frames twist and knot around each other to create structures in space.&nbsp;</p>

<p>“This is a cool area of computational geometry where theory has a real impact on the quality of simulation tools,” says Solomon.&nbsp;</p>

<p>Palmer cites organizations like Sandia National Labs that conduct complicated physical simulations involving phenomena like nonlinear elasticity and object deformation. He says that, even today, engineering teams often build or repair hex meshes almost completely by hand.&nbsp;</p>

<p>“Existing software for automatic meshing often fails to produce a complete mesh, even if the frame field guidance ensures that the mesh pieces that are there look good,” Palmer says. “Our approach helps complete the picture.”</p>

<p>Marschner’s paper was co-written by Solomon, Zhang, and Palmer. Zhang’s paper was co-written by Solomon, Josh Vekhter, and Etienne Vouga at the University of Texas at Austin, Professor David Bommes of the University of Bern in Germany, and CSAIL postdoc Edward Chien. Palmer’s paper was co-written by Solomon and Bommes. Zhang and Palmer’s papers will be presented at the SIGGRAPH computer graphics conference later this month.</p>

<p>The projects were supported, in part, by Adobe Systems, the U.S. Air Force Office of Scientific Research, the U.S. Army Research Office, the U.S. Department of Energy, the Fannie and John Hertz Foundation, MathWorks, the MIT-IBM Watson AI Laboratory, the National Science Foundation, the Skoltech-MIT Next Generation program, and the Toyota-CSAIL Joint Research Center.</p>

Recent work from MIT CSAIL addresses how computers divide objects into sets of smaller elements, a procedure known as “meshing.” Zhang et al. produced a range of detailed 2D images without depending on unreliable methods that try to trace out features like curves and edges ahead of time.
Image courtesy of MIT CSAIL.

Tackling the misinformation epidemic with “In Event of Moon Disaster”https://news.mit.edu/2020/mit-tackles-misinformation-in-event-of-moon-disaster-0720
New website from the MIT Center for Advanced Virtuality rewrites an important moment in history to educate the public on the dangers of deepfakes.
Mon, 20 Jul 2020 05:00:00 -0400
https://news.mit.edu/2020/mit-tackles-misinformation-in-event-of-moon-disaster-0720
MIT Open Learning
<p>Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “<a href=”http://moondisaster.org/”>In Event of Moon Disaster</a>.”</p>

<p>This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be.&nbsp;</p>

<p>“Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it’s become a crucial issue of our time,” says D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality, part of MIT Open Learning. “With this project — and a course curriculum on misinformation being built around it — our powerfully talented XR Creative Director Francesca Panetta is pushing forward one of the center’s broad aims: using AI and technologies of virtuality to support creative expression and truth.”</p>

<p>Alongside the film, <a href=”http://moondisaster.org” target=”_blank”>moondisaster.org</a> features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources.&nbsp;</p>

<p>“This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Panetta.</p>

<p>Also part of the launch is a new documentary, “To Make a Deepfake,” a 30-minute film by <em>Scientific American</em>, that uses “In Event of Moon Disaster” as a jumping-off point to explain the technology behind AI-generated media. The documentary features prominent scholars and thinkers on the state of deepfakes, on the stakes for the spread of misinformation and the twisting of our digital reality, and on the future of truth.</p>

<p>The project is supported by the MIT Open Documentary Lab and the Mozilla Foundation, which awarded “In Event of Moon Disaster” a Creative Media Award last year. These awards are part of Mozilla’s mission to realize more trustworthy AI in consumer technology. <a href=”https://mailtrack.io/trace/link/9e489a848e99d1eceb2c08bb8cf8f5986d07ad4c?url=https%3A%2F%2Fblog.mozilla.org%2Fblog%2F2019%2F09%2F17%2Fexamining-ais-effect-on-media-and-truth%2F&amp;userId=5427979&amp;signature=3bd187fb1c19bc28″>The latest cohort of awardees</a> uses art and advocacy to examine AI’s effect on media and truth.</p>

<p>Says J. Bob Alotta, Mozilla’s vice president of global programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”&nbsp;</p>

<p>“In Event of Moon Disaster” <a href=”http://news.mit.edu/2019/mit-apollo-deepfake-art-installation-aims-to-empower-more-discerning-public-1125″ target=”_self”>previewed last fall</a> as a physical art installation at the International Documentary Film Festival Amsterdam, where it won the Special Jury Prize for Digital Storytelling; it was selected for the 2020 Tribeca Film Festival and Cannes XR. The new website is the project’s global digital launch, making the film and associated materials available for free to all audiences.</p>

<p>The past few months have seen the world move almost entirely online: schools, talk shows, museums, election campaigns, doctor’s appointments — all have made a rapid transition to virtual. When every interaction we have with the world is seen through a digital filter, it becomes more important than ever to learn how to distinguish between authentic and manipulated media.&nbsp;</p>

<p>“It’s our hope that this project will encourage the public to understand that manipulated media plays a significant role in our media landscape,” says co-director Burgund, “and that, with further understanding and diligence, we can all reduce the likelihood of being unduly influenced by it.”</p>

Using sophisticated AI and machine learning technologies, the “In Event of Moon Disaster” team merged Nixon’s face with the movements of an actor reading a speech the former president never actually delivered.
Image: MIT Center for Advanced Virtuality

Faculty receive funding to develop artificial intelligence techniques to combat Covid-19https://news.mit.edu/2020/faculty-receive-funding-develop-novel-ai-techniques-combat-covid-19-0717
C3.ai Digital Transformation Institute awards $5.4 million to top researchers to steer how society responds to the pandemic.
Fri, 17 Jul 2020 15:30:01 -0400
https://news.mit.edu/2020/faculty-receive-funding-develop-novel-ai-techniques-combat-covid-19-0717
School of Engineering | MIT Schwarzman College of Computing
<p>Artificial intelligence has the power to help put an end to the Covid-19 pandemic. Not only can techniques of machine learning and natural language processing be used to track and report Covid-19 infection rates, but other AI techniques can also be used to make smarter decisions about everything from when states should reopen to how vaccines are designed. Now, MIT researchers working on seven groundbreaking projects on Covid-19 will be funded to more rapidly develop and apply novel AI techniques to improve medical response and slow the pandemic spread.</p>

<p>Earlier this year, the <a href=”https://c3dti.ai/” target=”_blank”>C3.ai Digital Transformation Institute</a> (C3.ai DTI) was formed with the goal of attracting the world’s leading scientists to join in a coordinated and innovative effort to advance the digital transformation of businesses, governments, and society. The consortium is dedicated to accelerating advances in research and to combining machine learning, artificial intelligence, the internet of things, ethics, and public policy to enhance societal outcomes. MIT, under the auspices of the School of Engineering, joined the C3.ai DTI consortium, along with C3.ai, Microsoft Corporation, the University of Illinois at Urbana-Champaign, the University of California at Berkeley, Princeton University, the University of Chicago, Carnegie Mellon University, and, most recently, Stanford University.</p><p>The initial call for project proposals aimed to embrace the challenge of abating the spread of Covid-19 and advance the knowledge, science, and technologies for mitigating the impact of pandemics using AI. Out of a total of 200 research proposals, 26 projects were selected and awarded $5.4 million to continue AI research to mitigate the impact of Covid-19 in the areas of medicine, urban planning, and public policy.</p>

<p>The <a href=”https://c3dti.ai/c3-ai-digital-transformation-institute-announces-covid-19-awards/”>first round of grant recipients was recently announced</a>, and among them are five projects led by MIT researchers from across the Institute: Saurabh Amin, associate professor of civil and environmental engineering; Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management; Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of the MIT Institute for Data, Systems, and Society; David Gifford, professor of biological engineering and of electrical engineering and computer science; and Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, head of the Department of Electrical Engineering and Computer Science, and deputy dean of academics for MIT Schwarzman College of Computing.</p>

<p>“We are proud to be a part of this consortium, and to collaborate with peers across higher education, industry, and health care to collectively combat the current pandemic, and to mitigate risk associated with future pandemics,” says Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “We are so honored to have the opportunity to accelerate critical Covid-19 research through resources and expertise provided by the C3.ai DTI.”</p>

<p>Additionally, three MIT researchers will collaborate with principal investigators from other institutions on projects blending health and machine learning. Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science, join Ziv Bar-Joseph from Carnegie Mellon University for a project using machine learning to seek treatment for Covid-19. Aleksander Mądry, professor of computer science in the Department of Electrical Engineering and Computer Science, joins Sendhil Mullainathan of the University of Chicago for a project using machine learning to support emergency triage of pulmonary collapse due to Covid-19 on the basis of X-rays.</p>

<p>Bertsimas’s project develops automated, interpretable, and scalable decision-making systems based on machine learning and artificial intelligence to support clinical practices and public policies as they respond to the Covid-19 pandemic. When it comes to reopening the economy while containing the spread of the pandemic, Ozdaglar’s research provides quantitative analyses of targeted interventions for different groups that will guide policies calibrated to different risk levels and interaction patterns. Amin is investigating the design of actionable information and effective intervention strategies to support safe mobilization of economic activity and reopening of mobility services in urban systems. Dahleh’s research innovatively uses machine learning to determine how to safeguard schools and universities against the outbreak. Gifford was awarded funding for his project that uses machine learning to develop more informed vaccine designs with improved population coverage, and to develop models of Covid-19 disease severity using individual genotypes.</p>
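
<p>To make one of the modeling ideas above concrete, the sketch below simulates a minimal two-group (“multi-risk”) SIR epidemic of the kind the targeted-intervention project title refers to; the contact rates, group structure, and time step are purely illustrative and not taken from the funded work.</p>

```python
# Illustrative two-group SIR model: each group has its own within- and
# between-group transmission rates, so interventions can target one group.
# All parameters are invented for demonstration.
import numpy as np

beta = np.array([[0.20, 0.05],   # transmission within/between a high-risk
                 [0.05, 0.10]])  # group (row 0) and a low-risk group (row 1)
gamma = 0.1                      # recovery rate (per day)

S = np.array([0.999, 0.999])     # susceptible fraction in each group
I = np.array([0.001, 0.001])     # infected fraction in each group
R = np.zeros(2)                  # recovered fraction in each group

dt = 0.5
for _ in range(int(200 / dt)):   # simulate 200 days with forward Euler
    new_infections = S * (beta @ I) * dt
    recoveries = gamma * I * dt
    S, I, R = S - new_infections, I + new_infections - recoveries, R + recoveries

print("final recovered fraction per group:", np.round(R, 3))
```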

<p>“The enthusiastic support of the distinguished MIT research community is making a huge contribution to the rapid&nbsp;start and significant progress of the C3.ai Digital Transformation Institute,” says Thomas Siebel, chair and CEO of C3.ai. “It is a privilege to be working with such an accomplished team.”</p>

<p>The following projects are the MIT recipients of the inaugural C3.ai DTI Awards:&nbsp;</p>

<p>”Pandemic Resilient Urban Mobility: Learning Spatiotemporal Models for Testing, Contact Tracing, and Reopening Decisions” — Saurabh Amin, associate professor of civil and environmental engineering; and Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science</p>

<p>”Effective Cocktail Treatments for SARS-CoV-2 Based on Modeling Lung Single Cell Response Data” — Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science (Principal investigator: Ziv Bar-Joseph of Carnegie Mellon University)</p>

<p>”Toward Analytics-Based Clinical and Policy Decision Support to Respond to the Covid-19 Pandemic” — Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management and associate dean for business analytics; and Alexandre Jacquillat, assistant professor of operations research and statistics</p>

<p>”Reinforcement Learning to Safeguard Schools and Universities Against the Covid-19 Outbreak” — Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of MIT Institute for Data, Systems, and Society; and Peko Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of engineering</p>

<p>”Machine Learning-Based Vaccine Design and HLA Based Risk Prediction for Viral Infections” — David Gifford, professor of biological engineering and of electrical engineering and computer science</p>

<p>”Machine Learning Support for Emergency Triage of Pulmonary Collapse in Covid-19″ — Aleksander Mądry,<em> </em>professor of computer science in the Department of Electrical Engineering and Computer Science (Principal investigator: Sendhil Mullainathan of the University of Chicago)</p>

<p>”Targeted Interventions in Networked and Multi-Risk SIR Models: How to Unlock the Economy During a Pandemic” — Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, department head of electrical engineering and computer science, and deputy dean of academics for MIT Schwarzman College of Computing; and Daron Acemoglu, Institute Professor</p>

Out of a total of 200 research proposals, 26 projects were selected and awarded $5.4 million to continue AI research to mitigate the impact of Covid-19 in the areas of medicine, urban planning, and public policy.

Letting robots manipulate cableshttps://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity.
Mon, 13 Jul 2020 07:00:00 -0400
https://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Rachel Gordon | MIT CSAIL
<p>For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.</p>

<p>Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and from the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics humans. The team’s <a href=”http://www.roboticsproceedings.org/rss16/p029.pdf” target=”_blank”>new system</a> uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.</p>

<p>One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.&nbsp;</p>

<p>The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based <a href=”http://news.mit.edu/2017/gelsight-robots-sense-touch-0605″ target=”_self”>“GelSight” sensors</a>, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.</p>

<p>The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.</p>
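
<p>The paper’s controllers are not reproduced here, but a minimal sketch of the parallel structure described above might look like the following; the sensor fields, gains, and limits are assumptions for illustration only.</p>

```python
# Hypothetical sketch of two controllers running in parallel: one regulates
# grip force so the cable slides smoothly, the other adjusts the gripper pose
# to keep the cable centered between the fingertips.  Not the authors' code.
from dataclasses import dataclass

@dataclass
class TactileEstimate:
    friction_force: float   # tangential force as the cable slides (N)
    cable_offset: float     # cable position relative to fingertip center (mm)
    cable_angle: float      # cable orientation in the finger frame (rad)

def grip_force_controller(est, current_force, target_friction=0.4,
                          gain=2.0, f_min=0.5, f_max=5.0):
    """Squeeze harder if the cable slips too freely, relax if it drags."""
    error = target_friction - est.friction_force
    return min(max(current_force + gain * error, f_min), f_max)

def pose_controller(est, k_offset=0.02, k_angle=0.5):
    """Translate and rotate the gripper to re-center the cable."""
    dy = -k_offset * est.cable_offset    # lateral correction per step
    dyaw = -k_angle * est.cable_angle    # rotational correction per step
    return dy, dyaw

# One control step with made-up tactile readings.
est = TactileEstimate(friction_force=0.2, cable_offset=3.0, cable_angle=0.1)
print(grip_force_controller(est, current_force=1.0), pose_controller(est))
```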

<p>When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.</p>

<p>As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.&nbsp;</p>

<p>“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”&nbsp;</p><p><strong>String me along</strong>&nbsp;</p><p>Cable following is challenging for two reasons. First, it requires controlling the “grasp force” (to enable smooth sliding), and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).&nbsp;&nbsp;</p><p>This information is hard to capture from conventional vision systems during continuous manipulation, because it’s usually occluded, expensive to interpret, and sometimes inaccurate.&nbsp;</p>

<p>What’s more, this information can’t be directly observed with just vision sensors, hence the team’s use of tactile<em> </em>sensors. The gripper’s joints are also flexible — protecting them from potential impact.&nbsp;</p>

<p>The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and to manipulation at different speeds.&nbsp;</p>

<p>When the team compared different controllers on their gripper, their control policy retained the cable in hand over longer distances than three alternatives. For example, the “open-loop” controller followed only 36 percent of the total length, easily lost the cable when it curved, and needed many regrasps to finish the task.&nbsp;</p>

<p><strong>Looking ahead&nbsp;</strong></p>

<p>The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.&nbsp;</p>

<p>In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.</p>

<p>Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez,&nbsp;MIT associate professor of mechanical engineering; and Edward Adelson, the <span class=”person__info__def”>John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences</span>.&nbsp;</p>

<p>This work was supported by the Amazon Research Awards, the Toyota Research Institute, and the Office of Naval Research.</p>

The system uses a pair of soft robotic grippers with high-resolution tactile sensors to successfully manipulate freely moving cables.
Photo courtesy of MIT CSAIL.

Letting robots manipulate cableshttps://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity.
Mon, 13 Jul 2020 07:00:00 -0400
https://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Rachel Gordon | MIT CSAIL
<p>For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.</p>

<p>Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and from the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s <a href=”http://www.roboticsproceedings.org/rss16/p029.pdf” target=”_blank”>new system</a> uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.</p>

<p>One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.&nbsp;</p>

<p>The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based <a href=”http://news.mit.edu/2017/gelsight-robots-sense-touch-0605″ target=”_self”>“GelSight” sensors</a>, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.</p>

<p>The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.</p>

<p>When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.</p>

<p>As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.&nbsp;</p>

<p>“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”&nbsp;</p><p><strong>String me along</strong>&nbsp;</p><p>Cable following is challenging for two reasons. First, it requires controlling the “grasp force” (to enable smooth sliding), and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).&nbsp;&nbsp;</p><p>This information is hard to capture from conventional vision systems during continuous manipulation, because it’s usually occluded, expensive to interpret, and sometimes inaccurate.&nbsp;</p>

<p>What’s more, this information can’t be directly observed with just vision sensors, hence the team’s use of tactile<em> </em>sensors. The gripper’s joints are also flexible — protecting them from potential impact.&nbsp;</p>

<p>The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and also to those at different speeds.&nbsp;</p>

<p>When comparing different controllers applied to the team’s gripper, their control policy could retain the cable in hand for longer distances than three others. For example, the “open-loop” controller only followed 36 percent of the total length, the gripper easily lost the cable when it curved, and it needed many regrasps to finish the task.&nbsp;</p>

<p><strong>Looking ahead&nbsp;</strong></p>

<p>The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.&nbsp;</p>

<p>In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.</p>

<p>Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez,&nbsp;MIT associate professor of mechanical engineering; and Edward Adelson, the <span class=”person__info__def”>John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences</span>.&nbsp;</p>

<p>This work was supported by the Amazon Research Awards, the Toyota Research Institute, and the Office of Naval Research.</p>

The system uses a pair of soft robotic grippers with high-resolution tactile sensors to successfully manipulate freely moving cables.
Photo courtesy of MIT CSAIL.

Letting robots manipulate cableshttps://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity.
Mon, 13 Jul 2020 07:00:00 -0400
https://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Rachel Gordon | MIT CSAIL
<p>For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.</p>

<p>Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and from the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s <a href=”http://www.roboticsproceedings.org/rss16/p029.pdf” target=”_blank”>new system</a> uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.</p>

<p>One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.&nbsp;</p>

<p>The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based <a href=”http://news.mit.edu/2017/gelsight-robots-sense-touch-0605″ target=”_self”>“GelSight” sensors</a>, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.</p>

<p>The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.</p>

<p>When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.</p>

<p>As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.&nbsp;</p>

<p>“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”&nbsp;</p><p><strong>String me along</strong>&nbsp;</p><p>Cable following is challenging for two reasons. First, it requires controlling the “grasp force” (to enable smooth sliding), and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).&nbsp;&nbsp;</p><p>This information is hard to capture from conventional vision systems during continuous manipulation, because it’s usually occluded, expensive to interpret, and sometimes inaccurate.&nbsp;</p>

<p>What’s more, this information can’t be directly observed with just vision sensors, hence the team’s use of tactile<em> </em>sensors. The gripper’s joints are also flexible — protecting them from potential impact.&nbsp;</p>

<p>The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and also to those at different speeds.&nbsp;</p>

<p>When comparing different controllers applied to the team’s gripper, their control policy could retain the cable in hand for longer distances than three others. For example, the “open-loop” controller only followed 36 percent of the total length, the gripper easily lost the cable when it curved, and it needed many regrasps to finish the task.&nbsp;</p>

<p><strong>Looking ahead&nbsp;</strong></p>

<p>The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.&nbsp;</p>

<p>In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.</p>

<p>Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez,&nbsp;MIT associate professor of mechanical engineering; and Edward Adelson, the <span class=”person__info__def”>John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences</span>.&nbsp;</p>

<p>This work was supported by the Amazon Research Awards, the Toyota Research Institute, and the Office of Naval Research.</p>

The system uses a pair of soft robotic grippers with high-resolution tactile sensors to successfully manipulate freely moving cables.
Photo courtesy of MIT CSAIL.

Letting robots manipulate cableshttps://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity.
Mon, 13 Jul 2020 07:00:00 -0400
https://news.mit.edu/2020/letting-robots-manipulate-cables-0713
Rachel Gordon | MIT CSAIL
<p>For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.</p>

<p>Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and from the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s <a href=”http://www.roboticsproceedings.org/rss16/p029.pdf” target=”_blank”>new system</a> uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.</p>

<p>One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.&nbsp;</p>

<p>The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based <a href=”http://news.mit.edu/2017/gelsight-robots-sense-touch-0605″ target=”_self”>“GelSight” sensors</a>, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.</p>

<p>The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.</p>

<p>When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.</p>

<p>As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.&nbsp;</p>

<p>“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”&nbsp;</p><p><strong>String me along</strong>&nbsp;</p><p>Cable following is challenging for two reasons. First, it requires controlling the “grasp force” (to enable smooth sliding), and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).&nbsp;&nbsp;</p><p>This information is hard to capture from conventional vision systems during continuous manipulation, because it’s usually occluded, expensive to interpret, and sometimes inaccurate.&nbsp;</p>

<p>What’s more, this information can’t be directly observed with just vision sensors, hence the team’s use of tactile<em> </em>sensors. The gripper’s joints are also flexible — protecting them from potential impact.&nbsp;</p>

<p>The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and also to those at different speeds.&nbsp;</p>

<p>When comparing different controllers applied to the team’s gripper, their control policy could retain the cable in hand for longer distances than three others. For example, the “open-loop” controller only followed 36 percent of the total length, the gripper easily lost the cable when it curved, and it needed many regrasps to finish the task.&nbsp;</p>

<p><strong>Looking ahead&nbsp;</strong></p>

<p>The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.&nbsp;</p>

<p>In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.</p>

<p>Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez,&nbsp;MIT associate professor of mechanical engineering; and Edward Adelson, the John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences.&nbsp;</p>

<p>This work was supported by the Amazon Research Awards, the Toyota Research Institute, and the Office of Naval Research.</p>

The system uses a pair of soft robotic grippers with high-resolution tactile sensors to successfully manipulate freely moving cables.
Photo courtesy of MIT CSAIL.

Empowering kids to address Covid-19 through codinghttps://news.mit.edu/2020/empowering-kids-covid-19-coding-0709
MIT App Inventor Challenge allows children to create apps that tackle the coronavirus pandemic.
Thu, 09 Jul 2020 11:50:20 -0400
https://news.mit.edu/2020/empowering-kids-covid-19-coding-0709
Abby Abazorius | MIT News Office
<p>When schools around the world closed their doors due to the coronavirus pandemic, the team behind <a href=”http://appinventor.mit.edu/”>MIT App Inventor</a> — a web-based, visual-programming environment that allows children to develop applications for smartphones and tablets — began thinking about how they could not only help keep children engaged and learning, but also empower them to create new tools to address the pandemic.</p>


<p>In April, the App Inventor team launched a new challenge that encourages children and adults around the world to build mobile technologies that could be used to help stem the spread of Covid-19, aid local communities, and provide moral support to people around the world.</p><p>“Many people, including kids, are locked down at home with little to do and with a sense of loss of control over their lives,” says Selim Tezel, a curriculum developer for MIT App Inventor. “We wanted to empower them to take action, be involved in a creative process, and do something good for their fellow citizens.”</p><p>Since the <a href=”http://appinventor.mit.edu/blogs/selim/2020/03/14/CoronavirusAppChallenge”>Coronavirus App Inventor Challenge</a> launched this spring, there have been submissions from inventors ranging in age from 9 to 72 years and from coders around the globe, including New Zealand, the Democratic Republic of Congo, Italy, China, India, and Spain. While the App Inventor platform has historically been used in classrooms as an educational tool, Tezel and Hal Abelson, the Class of 1922 Professor in the Department of Electrical Engineering and Computer Science, explain that they have seen increased individual engagement with the platform during the pandemic, particularly on a global scale.</p><p>“The nice thing about App Inventor is that you’re learning about coding, but it also gives you something that you can actually do and a chance to contribute,” says Abelson. “It provides kids with an opportunity to say, ‘I’m not just learning, I’m doing a project, and it’s not only a project for me, it’s a project that can actually help other people.’ I think that can be very powerful.”</p><p>Winners are announced on a monthly basis, with apps honored for creativity, design, and overall inventiveness. Challenge participants have addressed a wide variety of issues associated with the pandemic, from health and hygiene to mental health and education. For example, April’s Young Inventors of the Month, Bethany Chow and Ice Chow from Hong Kong, developed an <a href=”http://ai2.appinventor.mit.edu/?galleryId=5597360410984448″>app</a> aimed at motivating users to stay healthy. Their app features a game that encourages players to adopt healthy habits by collecting points that they can use to defeat virtual viruses, as well as an optional location tracker function that can alert users if they have frequented a location that has a Covid-19 outbreak.</p><p>Akshaj Singhal, an 11-year-old from India, was selected as the June Inventor of the Month in the Young Inventors category, which includes children 12 years old and younger, for his app called <a href=”http://ai2.appinventor.mit.edu/?galleryId=4986860229492736″>Covid-19 Warrior</a>. The app offers a host of features aimed at spreading awareness of Covid-19, including a game and quiz to test a user’s knowledge of the virus, as well as local daily Covid-19 news updates and information on how to make your own mask.</p><p>The challenge has attracted participants with varying levels of technical expertise, allowing aspiring coders a chance to hone and improve their skills. Prayanshi Garg, a 12-year-old from India, created her first <a href=”http://ai2.appinventor.mit.edu/?galleryId=5152209850990592″>app</a> for the challenge, an educational quiz aimed at increasing awareness of Covid-19. 
Vansh Reshamwala, a 10-year-old from India, created an <a href=”http://ai2.appinventor.mit.edu/?galleryId=5010744702009344″>app</a> that features a recording of his voice sharing information about ways to help prevent the spread of Covid-19 and thanking heroes for their efforts during the pandemic.</p><p>Participants have also been able to come together virtually to develop apps during a time when social interactions and team activities are limited. For example, three high school students from Singapore developed <a href=”http://ai2.appinventor.mit.edu/?galleryId=6048165371707392″>Maskeraid</a>, an app that connects users in need of assistance with volunteers who are able to help with a variety of services.</p><p>“The ultimate goal is to engage our very creative App Inventor community of all ages and empower them during this time,” says Tezel. “We also see this time as an incredible opportunity to help people vastly improve their coding skills.&nbsp; When one is confronted by a tangible challenge, one’s skills and versatility can grow to meet the challenge.”</p><p>The App Inventor team plans to continue hosting the challenge for so long as the pandemic is having a worldwide impact. Later this month, the App Inventor team will be hosting a virtual <a href=”https://hack2020.appinventor.mit.edu/”>hackathon</a> or worldwide “appathon,” an event that will encourage participants to create apps aimed at improving the global good.</p><p>“Our global App Inventor community never ceases to amaze us,” says Tezel.&nbsp;“We are delighted by how inventors of all ages have been rising to the challenge of the coronavirus, empowering themselves by putting their coding skills to good use for the well-being of their communities.”</p>

A new challenge launched by MIT App Inventor — a web-based, visual-programming environment that allows children to develop applications for smartphones and tablets — encourages kids and adults to build mobile technologies that could be used to help stem the spread of Covid-19, aid local communities, and provide moral support to people around the world. This image includes four screenshots from apps submitted to the site that were made by participants.
Image from MIT App Inventor website and edited by MIT News.

Exploring interactions of light and matterhttps://news.mit.edu/2020/juejun-hu-light-and-matter-0701
Juejun Hu pushes the frontiers of optoelectronics for biological imaging, communications, and consumer electronics.
Tue, 30 Jun 2020 23:59:59 -0400
https://news.mit.edu/2020/juejun-hu-light-and-matter-0701
David L. Chandler | MIT News Office
<p>Growing up in a small town in Fujian province in southern China, Juejun Hu was exposed to engineering from an early age. His father, trained as a mechanical engineer, spent his career working first in that field, then in electrical engineering, and then civil engineering.</p><p>“He gave me early exposure to the field. He brought me books and told me stories of interesting scientists and scientific activities,” Hu recalls. So when it came time to go to college — in China students have to choose their major before enrolling — he picked materials science, figuring that field straddled his interests in science and engineering. He pursued that major at Tsinghua University in Beijing.</p><p>He never regretted that decision. “Indeed, it’s the way to go,” he says. “It was a serendipitous choice.” He continued on to a doctorate in materials science at MIT, and then spent four and a half years as an assistant professor at the University of Delaware before joining the MIT faculty. Last year, Hu earned tenure as an associate professor in MIT’s Department of Materials Science and Engineering.</p><p>In his work at the Institute, he has focused on optical and photonic devices, whose applications include improving high-speed communications, observing the behavior of molecules, designing better medical imaging systems, and developing innovations in consumer electronics such as display screens and sensors.</p><p>“I got fascinated with light,” he says, recalling how he began working in this field. “It has such a direct impact on our lives.”</p><p>Hu is now developing devices to transmit information at very high rates, for data centers or high-performance computers. This includes work on devices called optical diodes or optical isolators, which allow light to pass through only in one direction, and systems for coupling light signals into and out of photonic chips.</p><p>Lately, Hu has been focusing on applying machine-learning methods to improve the performance of optical systems. For example, he has developed an algorithm that improves the sensitivity of a spectrometer, a device for analyzing the chemical composition of materials based on how they emit or absorb different frequencies of light. The new approach made it possible to shrink a device that ordinarily requires bulky and expensive equipment down to the scale of a computer chip, by improving its ability to overcome random noise and provide a clean signal.</p><p>The miniaturized spectrometer makes it possible to analyze the chemical composition of individual molecules with something “small and rugged, to replace devices that are large, delicate, and expensive,” he says.</p><p>Much of his work currently involves the use of metamaterials, which don’t occur in nature and are synthesized usually as a series of ultrathin layers, so thin that they interact with wavelengths of light in novel ways. These could lead to components for biomedical imaging, security surveillance, and sensors on consumer electronics, Hu says. Another project he’s been working on involved developing a kind of optical zoom lens based on metamaterials, which uses no moving parts.</p><p>Hu is also pursuing ways to make photonic and photovoltaic systems that are flexible and stretchable rather than rigid, and to make them lighter and more compact. This could &nbsp;allow for installations in places that would otherwise not be practical. 
“I’m always looking for new designs to start a new paradigm in optics, [to produce] something that’s smaller, faster, better, and lower cost,” he says.</p><p>Hu says the focus of his research these days is mostly on amorphous materials — whose atoms are randomly arranged as opposed to the orderly lattices of crystal structures — because crystalline materials have been so well-studied and understood. When it comes to amorphous materials, though, “our knowledge is amorphous,” he says. “There are lots of new discoveries in the field.”</p><p>Hu’s wife, Di Chen, whom he met when they were both in China, works in the financial industry. They have twin daughters, Selena and Eos, who are 1 year old, and a son Helius, age 3. Whatever free time he has, Hu says, he likes to spend doing things with his kids.</p><p>Recalling why he was drawn to MIT, he says, “I like this very strong engineering culture.” He especially likes MIT’s strong system of support for bringing new advances out of the lab and into real-world application. “This is what I find really useful.” When new ideas come out of the lab, “I like to see them find real utility,” he adds.</p>

MIT professor Juejun Hu specializes in optical and photonic devices, whose applications include improving high-speed communications, observing the behavior of molecules, and developing innovations in consumer electronics.
Image: Denis Paiste

The MIT Press and UC Berkeley launch Rapid Reviews: COVID-19https://news.mit.edu/2020/mit-press-and-uc-berkeley-launch-rapid-reviews-covid-19-0629
The new open access, rapid-review overlay journal aims to combat misinformation in Covid-19 research.
Mon, 29 Jun 2020 15:35:01 -0400
https://news.mit.edu/2020/mit-press-and-uc-berkeley-launch-rapid-reviews-covid-19-0629
MIT Press
<p><a href=”https://mitpress.mit.edu/” target=”_blank”>The MIT Press</a> has announced the launch of <a href=”http://rapidreviewscovid19.mitpress.mit.edu/” target=”_blank”><em>Rapid Reviews: COVID-19</em></a> (<em>RR:C19</em>), an open access, rapid-review overlay journal that will accelerate peer review of Covid-19-related research and deliver real-time, verified scientific information that policymakers and health leaders can use.</p>

<p>Scientists and researchers are working overtime to understand the SARS-CoV-2 virus and are producing an unprecedented amount of preprint scholarship that is publicly available online but has not been vetted yet by peer review for accuracy. Traditional peer review can take four or more weeks to complete, but <em>RR:C19’s </em>editorial team, led by Editor-in-Chief Stefano M. Bertozzi, professor of health policy and management and dean emeritus of the <a href=”https://publichealth.berkeley.edu/”>School of Public Health</a> at the University of California at Berkeley, will produce expert reviews in a matter of days.</p>

<p>Using artificial intelligence tools, a global team will identify promising scholarship in preprint repositories, commission expert peer reviews, and publish the results on an open access platform in a completely transparent process. The journal will strive for disciplinary and geographic breadth, sourcing manuscripts from all regions and across a wide variety of fields, including medicine; public health; the physical, biological, and chemical sciences; the social sciences; and the humanities. <em>RR:C19 </em>will also provide a new publishing option for revised papers that are positively reviewed.</p>

<p>Amy Brand, director of the MIT Press, sees the no-cost open access model as a way to increase the impact of global research and disseminate high-quality scholarship. “Offering a peer-reviewed model on top of preprints will bring a level of diligence that clinicians, researchers, and others worldwide rely on to make sound judgments about the current crisis and its amelioration,” says Brand. “The project also aims to provide a proof-of-concept for new models of peer review and rapid publishing for broader applications.”</p>

<p>Made possible by a $350,000 grant from the Patrick J. McGovern Foundation and hosted on <a href=”https://pubpub.org”>PubPub</a>, an open-source publishing platform from the Knowledge Futures Group for collaboratively editing and publishing journals, monographs, and other open access scholarly content, <em>RR:C19</em> will limit the spread of misinformation about Covid-19, according to Bertozzi.</p>

<p>“There is an urgent need to validate — or debunk — the rapidly growing volume of Covid-19-related manuscripts on preprint servers,” explains Bertozzi. “I’m excited to be working with the MIT Press, the Patrick J. McGovern Foundation, and the Knowledge Futures Group to create a novel publishing model that has the potential to more efficiently translate important scientific results into action. We are also working with <a href=”http://covidscholar.org”>COVIDScholar</a>, an initiative of UC Berkeley and Lawrence Berkeley National Lab, to create unique AI/machine learning tools to support the review of hundreds of preprints per week.”</p>

<p>“This project signals a breakthrough in academic publishing, bringing together urgency and scientific rigor so the world’s researchers can rapidly disseminate new discoveries that we can trust,” says Vilas Dhar, trustee of the Patrick J. McGovern Foundation. “We are confident the <em>RR:C19 </em>journal will quickly become an invaluable resource for researchers, public health officials, and healthcare providers on the frontline of this pandemic. We’re also excited about the potential for a long-term transformation in how we evaluate and share research across all scientific disciplines.”</p>

<p>On the collaboration around this new journal, Travis Rich, executive director of the Knowledge Futures Group, notes, “At a moment when credibility is increasingly crucial to the well-being of society, we’re thrilled to be partnering with this innovative journal to expand the idea of reviews as first-class research objects, both on PubPub and as a model for others.”</p>

<p><em>RR:C19</em> will publish its first reviews in July 2020 and is actively recruiting potential reviewers and contributors. To learn more about this project and its esteemed editorial board, visit<a href=”http://rapidreviewscovid19.mitpress.mit.edu/”> rapidreviewscovid19.mitpress.mit.edu</a>.</p>

Rapid Reviews: COVID-19 (RR:C19) is an open access, rapid-review overlay journal that will accelerate peer review of Covid-19-related research.

CSAIL robot disinfects Greater Boston Food Bankhttps://news.mit.edu/2020/csail-robot-disinfects-greater-boston-food-bank-covid-19-0629
Using UV-C light, the system can disinfect a warehouse floor in half an hour — and could one day be employed in grocery stores, schools, and other spaces.
Sun, 28 Jun 2020 23:59:59 -0400
https://news.mit.edu/2020/csail-robot-disinfects-greater-boston-food-bank-covid-19-0629
Rachel Gordon | MIT CSAIL
<p>With every droplet that we can’t see, touch, or feel dispersed into the air, the threat of spreading Covid-19 persists. It’s become increasingly critical to keep these heavy droplets from lingering — especially on surfaces, which are welcoming and generous hosts.&nbsp;</p>

<p>Thankfully, our chemical cleaning products are effective, but using them to disinfect larger settings can be expensive, dangerous, and time-consuming. Across the globe there are thousands of warehouses, grocery stores, schools, and other spaces where cleaning workers are at risk.</p>

<p>With that in mind, a team from MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL), in collaboration with <a href=”https://www.avarobotics.com/”>Ava Robotics</a> and the <a href=”https://www.gbfb.org/”>Greater Boston Food Bank</a> (GBFB), designed a new robotic system that powerfully disinfects surfaces and neutralizes aerosolized forms of the coronavirus.</p>
<p>The approach uses a custom UV-C light fixture designed at CSAIL that is integrated with Ava Robotics’ mobile robot base. The results were encouraging enough that researchers say that the approach could be useful for autonomous UV disinfection in other environments, such as factories, restaurants, and supermarkets.&nbsp;</p>

<p>UV-C light has proven to be effective at killing viruses and bacteria on surfaces and aerosols, but it’s unsafe for humans to be exposed. Fortunately, Ava’s telepresence robot doesn’t require any human supervision. Instead of the telepresence top, the team subbed in a UV-C array for disinfecting surfaces. Specifically, the array uses short-wavelength ultraviolet light to kill microorganisms and disrupt their DNA in a process called ultraviolet germicidal irradiation.</p>

<p>The complete robot system is capable of mapping the space — in this case, GBFB’s warehouse — and navigating between waypoints and other specified areas. In testing the system, the team used a UV-C dosimeter, which confirmed that the robot was delivering the expected dosage of UV-C light predicted by the model.</p>

<p>“Food banks provide an essential service to our communities, so it is critical to help keep these operations running,” says Alyssa Pierson, CSAIL research scientist and technical lead of the UV-C lamp assembly. “Here, there was a unique opportunity to provide additional disinfecting power to their current workflow, and help reduce the risks of Covid-19 exposure.”&nbsp;</p>

<p>Food banks are also facing a particular demand due to the stress of Covid-19. The United Nations projected that, because of the virus, the number of people facing severe food insecurity worldwide <a href=”https://www.wfp.org/news/covid-19-will-double-number-people-facing-food-crises-unless-swift-action-taken”>could double to 265 million</a>. In the United States alone, the five-week total of job losses has risen to 26 million, potentially pushing millions more into food insecurity.&nbsp;</p>

<p>During tests at GBFB, the robot was able to drive by the pallets and storage aisles at a speed of roughly 0.22 miles per hour. At this speed, the robot could cover a 4,000-square-foot space in GBFB’s warehouse in just half an hour. The UV-C dosage delivered during this time can neutralize approximately 90 percent of coronaviruses on surfaces. For many surfaces, this dose will be higher, resulting in more of the virus neutralized.</p>
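<p>Those figures are roughly self-consistent. Assuming an effective disinfection swath of about 7 feet along the robot’s path (the swath width is our assumption, not a number reported by the team), a quick back-of-the-envelope check looks like this:</p>

<pre><code>
# Back-of-the-envelope check of the reported coverage numbers.
# The ~7 ft effective swath width is an assumption for illustration.

speed_mph = 0.22
minutes = 30
area_sqft = 4000

distance_ft = speed_mph * 5280 * (minutes / 60)   # roughly 581 ft traveled
swath_ft = area_sqft / distance_ft                # implied swath of about 6.9 ft

print(f"Distance traveled: {distance_ft:.0f} ft")
print(f"Implied disinfection swath: {swath_ft:.1f} ft")
</code></pre>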

<p>Typically, this method of ultraviolet germicidal irradiation is used largely in hospitals and medical settings, to sterilize patient rooms and stop the spread of microorganisms like methicillin-resistant <em>Staphylococcus aureus</em> and <em>Clostridium difficile</em>; the UV-C light also works against airborne pathogens. While it’s most effective in the direct “line of sight,” it can get to nooks and crannies as the light bounces off surfaces and onto other surfaces.&nbsp;</p>

<p>”Our 10-year-old warehouse is a relatively new food distribution facility with AIB-certified, state-of-the-art cleanliness and food safety standards,” says Catherine D’Amato, president and CEO of the Greater Boston Food Bank. “Covid-19 is a new pathogen that GBFB, and the rest of the world, was not designed to handle. We are pleased to have this opportunity to work with MIT CSAIL and Ava Robotics to innovate and advance our sanitation techniques to defeat this menace.”&nbsp;</p>

<p>As a first step, the team teleoperated the robot to teach it the path around the warehouse; once that path is learned, the robot can move around autonomously, without the team needing to navigate it remotely.&nbsp;</p>

<p>It can travel to defined waypoints on its map, such as the loading dock, then the warehouse shipping floor, then back to its base. Those waypoints are defined by an expert human user in teleoperation mode, and new waypoints can be added to the map as needed.&nbsp;</p>

<p>Within GBFB, the team identified the warehouse shipping floor as a “high-importance area” for the robot to disinfect. Each day, workers stage aisles of products and arrange them for up to 50 pickups by partners and distribution trucks the next day. By focusing on the shipping area, the robot prioritizes disinfecting items leaving the warehouse, reducing the chance of spreading Covid-19 out into the community.</p>

<p>Currently, the team is exploring how to use its onboard sensors to adapt to changes in the environment, such that in new territory, the robot would adjust its speed to ensure the recommended dosage is applied to new objects and surfaces.&nbsp;</p>
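<p>A simple way to think about that adjustment: the dose a surface receives scales with exposure time, so for a fixed lamp output the robot must cap its speed to hit a target dose. The irradiance, swath length, and target dose in the sketch below are placeholder values, not the team’s numbers.</p>

<pre><code>
# Hedged sketch: cap the travel speed so each surface receives at least a
# target UV-C dose. Dose ~ irradiance x exposure time, and exposure time
# ~ swath length / speed. All numbers below are placeholders.

def max_speed_for_dose(irradiance_mw_cm2, swath_length_cm, target_dose_mj_cm2):
    """Return the largest speed (cm/s) that still delivers the target dose."""
    return irradiance_mw_cm2 * swath_length_cm / target_dose_mj_cm2

# Made-up example: 0.2 mW/cm^2 at the surface, 100 cm swath, 10 mJ/cm^2 target.
print(max_speed_for_dose(0.2, 100, 10.0), "cm/s upper bound")
</code></pre>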

<p>A unique challenge is that the shipping area is constantly changing, so each night, the robot encounters a slightly new environment. When the robot is deployed, it doesn’t necessarily know which of the staging aisles will be occupied, or how full each aisle might be. Therefore, the team notes that they need to teach the robot to differentiate between the occupied and unoccupied aisles, so it can change its planned path accordingly.</p>

<p>As far as production went, “in-house manufacturing” took on a whole new meaning for this prototype and the team. The UV-C lamps were assembled in Pierson’s basement, and CSAIL PhD student Jonathan Romanishin crafted a makeshift shop in his apartment for the electronics board assembly.&nbsp;</p>

<p>“As we drive the robot around the food bank, we are also researching new control policies that will allow the robot to adapt to changes in the environment and ensure all areas receive the proper estimated dosage,” says Pierson. “We are focused on remote operation to minimize&nbsp; human supervision, and, therefore, the additional risk of spreading Covid-19, while running our system.”&nbsp;</p>

<p>For immediate next steps, the team is focused on increasing the capabilities of the robot at GBFB, as well as eventually implementing design upgrades. Their broader intention focuses on how to make these systems more capable at adapting to our world: how a robot can dynamically change its plan based on estimated UV-C dosages, how it can work in new environments, and how to coordinate teams of UV-C robots to work together.</p>

<p>“We are excited to see the UV-C disinfecting robot support our community in this time of need,” says CSAIL director and project lead Daniela Rus. “The insights we received from the work at GBFB have highlighted several algorithmic challenges. We plan to tackle these in order to extend the scope of autonomous UV disinfection in complex spaces, including dorms, schools, airplanes, and grocery stores.”&nbsp;</p>

<p>Currently, the team’s focus is on GBFB, although the algorithms and systems they are developing could be transferred to other use cases in the future, like warehouses, grocery stores, and schools.&nbsp;</p>

<p>”MIT has been a great partner, and when they came to us, the team was eager to start the integration, which took just four weeks to get up and running,” says Ava Robotics CEO Youssef Saleh. “The opportunity for robots to solve workplace challenges is bigger than ever, and collaborating with MIT to make an impact at the food bank has been a great experience.”&nbsp;</p>

<p>Pierson and Romanishin worked alongside Hunter Hansen (software capabilities), Bryan Teague of MIT Lincoln Laboratory (who assisted with the UV-C lamp assembly), Igor Gilitschenski and Xiao Li (assisting with future autonomy research), MIT professors Daniela Rus and Saman Amarasinghe, and Ava leads Marcio Macedo and Youssef Saleh.&nbsp;</p>

<p>This project was supported in part by Ava Robotics, which provided its platform and team support.</p>

In tests, the CSAIL team’s robot could disinfect a 4,000-square-foot space in the food bank’s warehouse in just half an hour.
Photo: Alyssa Pierson/CSAIL

Improving global health equity by helping clinics do more with lesshttps://news.mit.edu/2020/macro-eyes-vaccine-chain-health-equity-0626
The startup macro-eyes uses artificial intelligence to improve vaccine delivery and patient scheduling.
Thu, 25 Jun 2020 23:59:59 -0400
https://news.mit.edu/2020/macro-eyes-vaccine-chain-health-equity-0626
Zach Winn | MIT News Office
<p>More children are being vaccinated around the world today than ever before, and the prevalence of many vaccine-preventable diseases has dropped over the last decade. Despite these encouraging signs, however, the availability of essential vaccines has stagnated globally in recent years, according to the World Health Organization.</p><p>One problem, particularly in low-resource settings, is the difficulty of predicting how many children will show up for vaccinations at each health clinic. This leads to vaccine shortages, leaving children without critical immunizations, or to surpluses that can’t be used.</p><p>The startup macro-eyes is seeking to solve that problem with a vaccine forecasting tool that leverages a unique combination of real-time data sources, including new insights from front-line health workers. The company says the tool, named the Connected Health AI Network (CHAIN), was able to reduce vaccine wastage by 96 percent across three regions of Tanzania. Now it is working to scale that success across Tanzania and Mozambique.</p><p>“Health care is complex, and to be invited to the table, you need to deal with missing data,” says macro-eyes Chief Executive Officer Benjamin Fels, who co-founded the company with Suvrit Sra, the Esther and Harold E. Edgerton Career Development Associate Professor at MIT. “If your system needs age, gender, and weight to make predictions, but for one population you don’t have weight or age, you can’t just say, ‘This system doesn’t work.’ Our feeling is it has to be able to work in any setting.”</p><p>The company’s approach to prediction is already the basis for another product, the patient scheduling platform Sibyl, which has analyzed over 6 million hospital appointments and reduced wait times by more than 75 percent at one of the largest heart hospitals in the U.S. Sibyl’s predictions work as part of CHAIN’s broader forecasts.</p><p>Both products represent steps toward macro-eyes’ larger goal of transforming health care through artificial intelligence. And by getting their solutions to work in the regions with the least amount of data, they’re also advancing the field of AI.</p><p>“The state of the art in machine learning will result from confronting fundamental challenges in the most difficult environments in the world,” Fels says. “Engage where the problems are hardest, and AI too will benefit: [It will become] smarter, faster, cheaper, and more resilient.”</p><p><strong>Defining an approach</strong></p><p>Sra and Fels first met about 10 years ago when Fels was working as&nbsp;an algorithmic trader&nbsp;for a hedge fund and Sra was&nbsp;a visiting faculty member&nbsp;at the University of California at Berkeley. The pair’s experience crunching numbers in different industries alerted them to a shortcoming in health care.</p>
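<p>Fels’s point about missing data can be illustrated with a toy predictor that degrades gracefully when a feature is absent instead of refusing to produce a forecast. This is a minimal sketch of that idea with invented features and weights, not macro-eyes’ actual model.</p>

<pre><code>
# Toy illustration (not macro-eyes' model): a linear predictor that falls
# back to population averages when a feature is missing, so it can still
# produce a forecast for any clinic.

POPULATION_MEANS = {"age_months": 18.0, "weight_kg": 10.5, "distance_km": 4.0}
WEIGHTS = {"age_months": -0.3, "weight_kg": 0.1, "distance_km": -0.8}
BIAS = 25.0

def expected_attendance(record):
    """Estimate expected child attendance from a possibly incomplete record."""
    estimate = BIAS
    for feature, weight in WEIGHTS.items():
        value = record.get(feature)
        if value is None:                      # feature unavailable for this population
            value = POPULATION_MEANS[feature]  # degrade gracefully instead of failing
        estimate += weight * value
    return max(estimate, 0.0)

print(expected_attendance({"age_months": 12}))  # weight and distance unknown
</code></pre>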

<p>“A question that became an obsession to me was, ‘Why were financial markets almost entirely determined by machines — by algorithms — and health care the world over is probably the least algorithmic part of anybody’s life?’” Fels recalls. “Why is health care not more data-driven?”</p>

<p>Around 2013, the co-founders began building machine-learning algorithms that measured similarities between patients to better inform treatment plans at Stanford School of Medicine and another large academic medical center in New York. It was during that early work that the founders laid the foundation of the company’s approach.</p><p>“There are themes we established at Stanford that remain today,” Fels says. “One is [building systems with] humans in the loop: We’re not just learning from the data, we’re also learning from the experts. The other is multidimensionality. We’re not just looking at one type of data; we’re looking at 10 or 15 types, [including] images, time series, information about medication, dosage, financial information, how much it costs the patient or hospital.”</p><p>Around the time the founders began working with Stanford, Sra joined MIT’s Laboratory for Information and Decision Systems (LIDS) as a principal research scientist. He would go on to become a faculty member in the Department of Electrical Engineering and Computer Science and MIT’s Institute for Data, Systems, and Society (IDSS). The mission of IDSS, to advance fields including data science and to use those advances to improve society, aligned well with Sra’s mission at macro-eyes.</p><p>“Because of that focus [on impact] within IDSS, I find it my focus to try to do AI for social good,” Sra says. “The true judgment of success is how many people did we help? How could we improve access to care for people, wherever they may be?”</p>

<p>In 2017, macro-eyes received a small grant from the Bill and Melinda Gates Foundation to explore the possibility of using data from front-line health workers to build a predictive supply chain for vaccines. It was the beginning of a relationship with the Gates Foundation that has steadily expanded as the company has reached new milestones, from building accurate vaccine utilization models in Tanzania and Mozambique to integrating with supply chains to make vaccine supplies more proactive. To help with the latter mission, Prashant Yadav recently joined the board of directors; Yadav worked as a professor of supply chain management with the MIT-Zaragoza International Logistics Program for seven years and is now a senior fellow at the Center for Global Development, a nonprofit thinktank.</p>

<p>In conjunction with their work on CHAIN, the company has deployed another product, Sibyl, which uses machine learning to determine when patients are most likely to show up for appointments, to help front-desk workers at health clinics build schedules. Fels says the system has allowed hospitals to improve the efficiency of their operations so much they’ve reduced the average time patients wait to see a doctor from 55 days to 13 days.</p>
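<p>At its core, that kind of scheduling aid comes down to estimating a show-up probability for each appointment and ordering the schedule around it. The features and coefficients below are invented; this is a sketch of the general idea, not Sibyl’s model.</p>

<pre><code>
import math

# Illustrative show-up model in logistic form; the features and
# coefficients are invented, not Sibyl's.

def show_up_probability(days_since_booking, prior_no_shows, travel_km):
    z = 1.5 - 0.05 * days_since_booking - 0.7 * prior_no_shows - 0.04 * travel_km
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(requests):
    """Order appointment requests by predicted likelihood of attending."""
    return sorted(requests, key=lambda r: show_up_probability(**r), reverse=True)

requests = [
    {"days_since_booking": 3, "prior_no_shows": 0, "travel_km": 5},
    {"days_since_booking": 40, "prior_no_shows": 2, "travel_km": 20},
]
for r in prioritize(requests):
    print(r, round(show_up_probability(**r), 2))
</code></pre>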

<p>As a part of CHAIN, Sibyl similarly uses a range of data points to optimize schedules, allowing it to accurately predict behavior in environments where other machine learning models might struggle.</p>

<p>The founders are also exploring ways to apply that approach to help direct Covid-19 patients to health clinics with sufficient capacity. That work is being developed with Sierra Leone Chief Innovation Officer David Sengeh SM ’12 PhD ’16.</p>

<p><strong>Pushing frontiers</strong></p>

<p>Building solutions for some of the most underdeveloped health care systems in the world might seem like a difficult way for a young company to establish itself, but the approach is an extension of macro-eyes’ founding mission of building health care solutions that can benefit people around the world equally.</p><p>“As an organization, we can never assume data will be waiting for us,” Fels says. “We’ve learned that we need to think strategically and be thoughtful about how to access or generate the data we need to fulfill our mandate: Make the delivery of health care predictive, everywhere.”</p><p>The approach is also a good way to explore innovations in mathematical fields the founders have spent their careers working in.</p><p>“Necessity is absolutely the mother of invention,” Sra says. “This is innovation driven by need.”</p><p>And going forward, the company’s work in difficult environments should only make scaling easier.</p><p>“We think every day about how to make our technology more rapidly deployable, more generalizable, more highly scalable,” Sra says. “How do we get to the immense power of bringing true machine learning to the world’s most important problems without first spending decades and billions of dollars in building digital infrastructure? How do we leap into the future?”</p>

The startup macro-eyes is bringing new techniques in machine learning and artificial intelligence to global health problems like vaccine delivery and patient scheduling with its Connected Health AI Network (CHAIN).
Courtesy of macro-eyes

Identifying a melody by studying a musician’s body languagehttps://news.mit.edu/2020/music-gesture-artificial-intelligence-identifies-melody-by-musician-body-language-0625
Music gesture artificial intelligence tool developed at the MIT-IBM Watson AI Lab uses body movements to isolate the sounds of individual instruments.
Thu, 25 Jun 2020 11:25:01 -0400
https://news.mit.edu/2020/music-gesture-artificial-intelligence-identifies-melody-by-musician-body-language-0625
Kim Martineau | MIT Quest for Intelligence
<p>We listen to music with our ears, but also our eyes, watching with appreciation as the pianist’s fingers fly over the keys and the violinist’s bow rocks across the ridge of strings. When the ear fails to tell two instruments apart, the eye often pitches in by matching each musician’s movements to the beat of each part.&nbsp;</p>

<p>A <a href=”http://music-gesture.csail.mit.edu/” target=”_blank”>new artificial intelligence tool</a> developed by the&nbsp;<a href=”https://mitibmwatsonailab.mit.edu/” target=”_blank”>MIT-IBM Watson AI Lab</a>&nbsp;leverages the virtual eyes and ears of a computer to separate similar sounds that are tricky even for humans to differentiate. The tool improves on earlier iterations by matching the movements of individual musicians, via their skeletal keypoints, to the tempo of&nbsp;individual parts, allowing listeners to isolate a single flute or violin among multiple flutes or violins.&nbsp;</p>

<p>Potential applications for the work range from sound mixing, such as turning up the volume of an instrument in a recording, to reducing the confusion that leads people to talk over one another on video-conference calls. The work will be presented at the virtual&nbsp;<a href=”http://cvpr2020.thecvf.com/”>Computer Vision and Pattern Recognition</a>&nbsp;conference this month.</p>

<p>“Body keypoints provide powerful structural information,” says the study’s lead author,&nbsp;<a href=”https://mitibmwatsonailab.mit.edu/people/chuang-gan/”>Chuang Gan</a>, an IBM researcher at the lab. “We use that here to improve the AI’s ability to listen and separate sound.”&nbsp;</p>

<p>In this project, and in others like it, the researchers have capitalized on synchronized audio-video tracks to recreate the way that humans learn. An AI system that learns through multiple sense modalities may be able to learn faster, with fewer data, and without humans having to add pesky labels to each real-world representation. “We learn from all of our senses,” says Antonio Torralba, an MIT professor and co-senior author of the study. “Multi-sensory processing is the precursor to embodied intelligence and AI systems that can perform more complicated tasks.”</p>

<p>The current tool, which uses&nbsp;body gestures&nbsp;to separate sounds, builds on earlier work that harnessed motion cues in sequences of images. Its earliest incarnation,&nbsp;PixelPlayer, let you&nbsp;<a href=”http://news.mit.edu/2018/ai-editing-music-videos-pixelplayer-csail-0705″>click on an instrument</a>&nbsp;in a concert video to make it louder or softer. An&nbsp;<a href=”https://arxiv.org/abs/1904.05979″>update</a>&nbsp;to PixelPlayer allowed you to distinguish between two violins in a duet by matching each musician’s movements with the tempo of their part. This newest version adds keypoint data, favored by sports analysts to track athlete performance, to extract finer grained motion data to tell nearly identical sounds apart.</p>
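<p>Conceptually, the keypoints turn each musician’s motion into a time series that can be aligned with the audio and used to condition a separation mask. The sketch below uses made-up array shapes, and a hand-crafted mask stands in for what the lab’s system would learn; it only illustrates the conditioning idea, not the actual network.</p>

<pre><code>
import numpy as np

# Conceptual sketch with made-up shapes: use one musician's keypoint
# motion as a conditioning signal for a separation mask. A real system
# would learn this mapping; the hand-crafted mask only shows the idea.

def motion_energy(keypoints):
    """keypoints: array of shape (frames, joints, 2) for one musician."""
    velocity = np.diff(keypoints, axis=0)                  # frame-to-frame joint motion
    return np.linalg.norm(velocity, axis=-1).sum(axis=-1)  # one motion value per frame

def conditioned_mask(mixture_spec, energy):
    """Weight a (freq_bins, frames) mixture spectrogram by one player's motion envelope."""
    envelope = energy / (energy.max() + 1e-8)              # normalize motion to [0, 1]
    return mixture_spec[:, 1:] * envelope[np.newaxis, :]   # drop frame 0 to match np.diff
</code></pre>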

<p>The work highlights the importance of visual cues in training computers to have a better ear, and using sound cues to give them sharper eyes. Just as the current study uses musician pose information to isolate similar-sounding instruments, previous work has leveraged sounds to isolate similar-looking animals and objects.&nbsp;</p>

<p>Torralba and his colleagues have shown that deep learning models trained on paired audio-video data can learn to&nbsp;<a href=”http://news.mit.edu/2016/computer-learns-recognize-sounds-video-1202″>recognize natural sounds</a>&nbsp;like birds singing or waves crashing. They can also pinpoint the geographic coordinates of a&nbsp;<a href=”https://arxiv.org/abs/1910.11760″>moving car</a>&nbsp;from the sound of its engine and tires rolling toward, or away from, a microphone.&nbsp;</p>

<p>The latter study suggests that sound-tracking tools might be a useful addition in self-driving cars, complementing their cameras in poor driving conditions. “Sound trackers could be especially helpful at night, or in bad weather, by helping to flag cars that might otherwise be missed,” says Hang Zhao, PhD ’19, who contributed to both the motion and sound-tracking studies.</p>

<p>Other authors of the CVPR music gesture study are Deng Huang and Joshua Tenenbaum at MIT.</p>

Researchers use skeletal keypoint data to match the movements of musicians with the tempo of their part, allowing listeners to isolate similar-sounding instruments.
Image courtesy of the researchers.

Cynthia Breazeal named Media Lab associate directorhttps://news.mit.edu/2020/cynthia-breazeal-named-media-lab-associate-director-0619
Expert in personal social robots will work with lab faculty and researchers to develop strategic research initiatives, and to explore new funding mechanisms.
Fri, 19 Jun 2020 15:15:01 -0400
https://news.mit.edu/2020/cynthia-breazeal-named-media-lab-associate-director-0619
MIT Media Lab
<p>Cynthia Breazeal has been promoted to full professor and named associate director of the Media Lab, joining the two other associate directors: Hiroshi Ishii and Andrew Lippman. Both appointments are effective July 1.</p><p>In her new associate director role, Breazeal will work with lab faculty and researchers to develop new strategic research initiatives. She will also play a key role in exploring new funding mechanisms to support broad Media Lab needs, including multi-faculty research efforts, collaborations with other labs and departments across the MIT campus, and experimental executive education opportunities.&nbsp;</p><p>“I am excited that Cynthia will be applying her tremendous energy, creativity, and intellect to rally the community in defining new opportunities for funding and research directions,” says Pattie Maes, chair of the lab’s executive committee. “As a first step, she has already organized a series of informal charrettes, where all members of the lab community can participate in brainstorming collaborations that range from tele-creativity, to resilient communities, to sustainability and climate change.”&nbsp;</p><p>Most recently, Breazeal has led an MIT collaboration between the Media Lab, MIT Stephen A. Schwarzman College of Computing, and MIT Open Learning to develop <a href=”https://aieducation.mit.edu/”>aieducation.mit.edu</a>, an online learning site for grades K-12, which shares a variety of online activities for students to learn about artificial intelligence, with a focus on how to design and use AI responsibly.&nbsp;</p><p>While assuming these new responsibilities, Breazeal will continue to head the lab’s Personal Robots research group, which focuses on developing personal social robots and their potential for meaningful impact on everyday life — from educational aids for children, to pediatric use in hospitals, to at-home assistants for the elderly.</p><p>Breazeal is globally recognized as a pioneer in human-robot interaction. Her book, “Designing Sociable Robots” (MIT Press, 2002), is considered pivotal in launching the field. In 2019 she was named an AAAI fellow. Previously, she received numerous awards including the National Academy of Engineering’s Gilbreth Lecture Award and <em>MIT Technology Review</em>’s TR100/35 Award. Her robot Jibo was on the cover of <em>TIME</em> magazine in its Best Inventions list of 2017, and in 2003 she was a finalist for the National Design Awards in Communications Design. In 2014, <em>Fortune</em> magazine recognized her as one of the Most Promising Women Entrepreneurs. The following year, she was named one of <em>Entrepreneur</em> magazine’s Women to Watch.</p>

<p>Breazeal earned a BS in electrical and computer engineering from the University of California at Santa Barbara, and MS and ScD degrees from MIT in electrical engineering and computer science.</p>

Cynthia Breazeal has been promoted to full professor and named associate director of the Media Lab.
Photo courtesy of Cynthia Breazeal.

Bringing the predictive power of artificial intelligence to health carehttps://news.mit.edu/2020/closedloop-ai-predictive-health-care-0619
The startup ClosedLoop has created a platform of predictive models to help organizations improve patient care.
Thu, 18 Jun 2020 23:59:59 -0400
https://news.mit.edu/2020/closedloop-ai-predictive-health-care-0619
Zach Winn | MIT News Office
<p>An important aspect of treating patients with conditions like diabetes and heart disease is helping them stay healthy outside of the hospital — before they return to the doctor’s office with further complications.</p><p>But reaching the most vulnerable patients at the right time often has more to do with probabilities than clinical assessments. Artificial intelligence (AI) has the potential to help clinicians tackle these types of problems, by analyzing large datasets to identify the patients that would benefit most from preventative measures. However, leveraging AI has often required health care organizations to hire their own data scientists or settle for one-size-fits-all solutions that aren’t optimized for their patients.</p><p>Now the startup ClosedLoop.ai is helping health care organizations tap into the power of AI with a flexible analytics solution that lets hospitals quickly plug their data into machine learning models and get actionable results.</p><p>The platform is being used to help hospitals determine which patients are most likely to miss appointments, acquire infections like sepsis, benefit from periodic checkups, and more. Health insurers, in turn, are using ClosedLoop to make population-level predictions around things like patient readmissions and the onset or progression of chronic diseases.</p><p>“We built a health care data science platform that can take in whatever data an organization has, quickly build models that are specific to [their patients], and deploy those models,” says ClosedLoop co-founder and Chief Technology Officer Dave DeCaprio ’94. “Being able to take somebody’s data the way it lives in their system and convert that into a model that can be readily used is still a problem that requires a lot of [health care] domain knowledge, and that’s a lot of what we bring to the table.”</p><p>In light of the Covid-19 pandemic, ClosedLoop has also created a model that helps organizations identify the most vulnerable people in their region and prepare for patient surges. The open source tool, called the C-19 Index, has been used to connect high-risk patients with local resources and helped health care systems create risk scores for tens of millions of people overall.</p><p>The index is just the latest way that ClosedLoop is accelerating the health care industry’s adoption of AI to improve patient health, a goal DeCaprio has worked toward for the better part of his career.</p><p><strong>Designing a strategy</strong></p><p>After working as a software engineer for several private companies through the internet boom of the early 2000s, DeCaprio was looking to make a career change when he came across a project focused on genome annotation at the Broad Institute of MIT and Harvard.</p><p>The project was DeCaprio’s first professional exposure to the power of artificial intelligence. It blossomed into a six-year stint at the Broad, after which he continued exploring the intersection of big data and health care.</p><p>“After a year in health care, I realized it was going to be really hard to do anything else,” DeCaprio says. “I’m not going to be able to get excited about selling ads on the internet or anything like that. Once you start dealing with human health, that other stuff just feels insignificant.”</p><p>In the course of his work, DeCaprio began noticing problems with the ways machine learning and other statistical techniques were making their way into health care, notably in the fact that predictive models were being applied without regard for hospitals’ patient populations.</p><p>“Someone would say, ‘I know how to predict diabetes’ or ‘I know how to predict readmissions,’ and they’d sell a model,” DeCaprio says. “I knew that wasn’t going to work, because the reason readmissions happen in a low-income population of New York City is very different from the reason readmissions happen in a retirement community in Florida. The important thing wasn’t to build one magic model but to build a system that can quickly take somebody’s data and train a model that’s specific for their problems.”</p><p>With that approach in mind, DeCaprio joined forces with former co-worker and serial entrepreneur Andrew Eye, and started ClosedLoop in 2017. The startup’s first project involved creating models that predicted patient health outcomes for the Medical Home Network (MHN), a not-for-profit hospital collaboration focused on improving care for Medicaid recipients in Chicago.</p><p>As the founders created their modeling platform, they had to address many of the most common obstacles that have slowed health care’s adoption of AI solutions.</p><p>Often the first problem startups run into is making their algorithms work with each health care system’s data. Hospitals vary in the type of data they collect on patients and the way they store that information in their system. Hospitals even store the same types of data in vastly different ways.</p><p>DeCaprio credits his team’s knowledge of the health care space with helping them craft a solution that allows customers to upload raw data sets into ClosedLoop’s platform and create things like patient risk scores with a few clicks.</p><p>Another limitation of AI in health care has been the difficulty of understanding how models get to results. With ClosedLoop’s models, users can see the biggest factors contributing to each prediction, giving them more confidence in each output.</p><p>Overall, to become ingrained in customers’ operations, the founders knew their analytics platform needed to give simple, actionable insights. That has translated into a system that generates lists, risk scores, and rankings that care managers can use when deciding which interventions are most urgent for which patients.</p><p>“When someone walks into the hospital, it’s already too late [to avoid costly treatments] in many cases,” DeCaprio says. “Most of your best opportunities to lower the cost of care come by keeping them out of the hospital in the first place.”</p><p>Customers like health insurers also use ClosedLoop’s platform to predict broader trends in disease risk, emergency room over-utilization, and fraud.</p><p><strong>Stepping up for Covid-19</strong></p><p>In March, ClosedLoop began exploring ways its platform could help hospitals prepare for and respond to Covid-19. The efforts culminated in a company hackathon over the weekend of March 16. By Monday, ClosedLoop had an open source model on GitHub that assigned Covid-19 risk scores to Medicare patients. By that Friday, it had been used to make predictions on more than 2 million patients.</p><p>Today, the model works with all patients, not just those on Medicare, and it has been used to assess the vulnerability of communities around the country. Care organizations have used the model to project patient surges and help individuals at the highest risk understand what they can do to prevent infection.</p><p>“Some of it is just reaching out to people who are socially isolated to see if there’s something they can do,” DeCaprio says. “Someone who is 85 years old and shut in may not know there’s a community based organization that will deliver them groceries.”</p><p>For DeCaprio, bringing the predictive power of AI to health care has been a rewarding, if humbling, experience.</p><p>“The magnitude of the problems are so large that no matter what impact you have, you don’t feel like you’ve moved the needle enough,” he says. “At the same time, every time an organization says, ‘This is the primary tool our care managers have been using to figure out who to reach out to,’ it feels great.”</p>
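<p>For a concrete picture of this kind of workflow, the hedged sketch below trains a generic readmission-risk model on synthetic tabular data, ranks patients by predicted risk, and reports which features drive the predictions. It uses scikit-learn and invented feature names; it is not ClosedLoop’s platform, its models, or its data.</p>

<pre><code>
# Generic risk-scoring sketch on synthetic data (not ClosedLoop's system).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features; a real deployment would map raw EHR/claims fields to these.
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.integers(0, 6, n),     # prior admissions in the past year
    rng.integers(0, 2, n),     # diabetes diagnosis flag
    rng.normal(0, 1, n),       # standardized lab-value summary
])
logit = 0.04 * (X[:, 0] - 50) + 0.6 * X[:, 1] + 0.8 * X[:, 2] + 0.3 * X[:, 3] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic readmission labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

risk_scores = model.predict_proba(X_te)[:, 1]      # per-patient risk score
top10 = np.argsort(risk_scores)[::-1][:10]         # candidates for outreach

# Which features drive the model's predictions overall?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["age", "prior_admissions", "diabetes", "lab_summary"],
                       imp.importances_mean):
    print(f"{name:>16}: {score:.3f}")
</code></pre>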

The startup ClosedLoop.ai, co-founded by an MIT alumnus, is using a platform of AI models to help hospitals make predictions based on their patient data.
Image: MIT News, with images courtesy of the researchers

MIT and Toyota release innovative dataset to accelerate autonomous driving researchhttps://news.mit.edu/2020/mit-toyota-release-visual-open-data-accelerate-autonomous-driving-research-0618
DriveSeg contains precise, pixel-level representations of many common road objects, but through the lens of a continuous video driving scene.
Thu, 18 Jun 2020 14:55:01 -0400
https://news.mit.edu/2020/mit-toyota-release-visual-open-data-accelerate-autonomous-driving-research-0618
MIT AgeLab
<p><em>The following was issued as a joint release from the MIT AgeLab and Toyota Collaborative Safety Research Center.</em></p>

<p>How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?</p>

<p>These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the <a href=”https://csrc.toyota.com/”>Toyota Collaborative Safety Research Center</a> (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.</p>

<p>Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.</p>

<p>“In sharing this dataset, we hope to encourage researchers, the industry, and other innovators to develop new insight and direction into temporal AI modeling that enables the next generation of assisted driving and automotive safety technologies,” says Bryan Reimer, principal researcher. “Our longstanding working relationship with Toyota CSRC has enabled our research efforts to impact future safety technologies.”</p>

<p>“Predictive power is an important part of human intelligence,” says Rini Sherony, Toyota CSRC’s senior principal engineer. “Whenever we drive, we are always tracking the movements of the environment around us to identify potential risks and make safer decisions. By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”</p>

<p>To date, self-driving data made available to the research community have primarily consisted of troves of static, single images that can be used to identify and track common objects found in and around the road, such as bicycles, pedestrians, or traffic lights, through the use of “bounding boxes.” By contrast, DriveSeg contains more precise, pixel-level representations of many of these same common road objects, but through the lens of a continuous video driving scene. This type of full-scene segmentation can be particularly helpful for identifying more amorphous objects — such as road construction and vegetation — that do not always have such defined and uniform shapes.</p>

<p>According to Sherony, video-based driving scene perception provides a flow of data that more closely resembles dynamic, real-world driving situations. It also allows researchers to explore data patterns as they play out over time, which could lead to advances in machine learning, scene understanding, and behavioral prediction.</p>

<p>DriveSeg is available for free and can be used by researchers and the academic community for non-commercial purposes at the links below. The dataset comprises two parts. <a href=”https://ieee-dataport.org/open-access/mit-driveseg-manual-dataset”>DriveSeg (manual)</a> is 2 minutes and 47 seconds of high-resolution video captured during a daytime trip around the busy streets of Cambridge, Massachusetts. The video’s 5,000 frames are densely annotated manually with per-pixel human labels of 12 classes of road objects.</p>

<p><a href=”https://ieee-dataport.org/open-access/mit-driveseg-semi-auto-dataset”>DriveSeg (Semi-auto)</a> is 20,100 video frames (67 10-second video clips) drawn from <a href=”https://agelab.mit.edu/avt”>MIT Advanced Vehicle Technologies (AVT)</a> Consortium data. DriveSeg (Semi-auto) is labeled with the same pixel-wise semantic annotation as DriveSeg (manual), except annotations were completed through a novel semiautomatic annotation approach developed by MIT. This approach leverages both manual and computational efforts to coarsely annotate data more efficiently at a lower cost than manual annotation. This dataset was created to assess the feasibility of annotating a wide range of real-world driving scenarios and assess the potential of training vehicle perception systems on pixel labels created through AI-based labeling systems.</p>
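<p>As a small, hypothetical example of working with this style of annotation, the sketch below tallies how often each class appears across a folder of per-pixel label masks. The directory layout, file format, and class list are assumptions made for illustration; consult the DriveSeg dataset page for the actual specification.</p>

<pre><code>
# Hypothetical example of summarizing per-pixel video annotations; the paths and
# class names are assumptions, not the DriveSeg distribution's actual format.
import numpy as np
from pathlib import Path
from PIL import Image

CLASSES = ["road", "sidewalk", "vehicle", "pedestrian", "bicycle", "traffic light",
           "sign", "vegetation", "construction", "building", "sky", "other"]  # assumed 12-class list

def class_frequencies(mask_dir: str) -> np.ndarray:
    """Count how often each class label appears across all annotated frames."""
    counts = np.zeros(len(CLASSES), dtype=np.int64)
    for mask_path in sorted(Path(mask_dir).glob("*.png")):
        mask = np.asarray(Image.open(mask_path))   # assumed (H, W) integer label per pixel
        ids, freq = np.unique(mask, return_counts=True)
        for i, f in zip(ids, freq):
            if i < len(CLASSES):
                counts[i] += f
    return counts

if __name__ == "__main__":
    counts = class_frequencies("driveseg_manual/masks")   # hypothetical local path
    for name, c in zip(CLASSES, counts):
        print(f"{name:>14}: {c} pixels")
</code></pre>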

<p>To learn more about the technical specifications and permitted use-cases for the data, visit the <a href=”https://agelab.mit.edu/driveseg”>DriveSeg dataset page.</a></p>

Sample frames from MIT AgeLab’s annotated video dataset
Image courtesy of Li Ding, Jack Terwilliger, Rini Sherony, Bryan Reimer, and Lex Fridman.

MIT-Takeda program launcheshttps://news.mit.edu/2020/mit-takeda-program-launches-research-ai-and-human-health-0618
Research projects will harness the power of artificial intelligence to positively impact human health.
Thu, 18 Jun 2020 14:20:01 -0400
https://news.mit.edu/2020/mit-takeda-program-launches-research-ai-and-human-health-0618
School of Engineering
<p>In February, researchers from MIT and Takeda Pharmaceuticals joined together to celebrate the official launch of the <a href=”http://news.mit.edu/2020/mit-school-engineering-takeda-join-to-advance-artificial-intelligence-health-research-0106″>MIT-Takeda Program</a>. The MIT-Takeda Program aims to fuel the development and application of artificial intelligence (AI) capabilities to benefit human health and drug development. Centered within the Abdul Latif Jameel Clinic for Machine Learning in Health (<a href=”https://www.jclinic.mit.edu/”>Jameel Clinic</a>), the program brings together the MIT School of Engineering and Takeda Pharmaceuticals, to combine knowledge and address challenges of mutual interest.&nbsp; &nbsp;</p>

<p>Following a competitive proposal process, nine inaugural research projects were selected. The program’s flagship research projects include principal investigators from departments and labs spanning the School of Engineering and the Institute. Research includes diagnosis of diseases, prediction of treatment response, development of novel biomarkers, process control and improvement, drug discovery, and clinical trial optimization.</p>

<p>“We were truly impressed by the creativity and breadth of the proposals we received,” says Anantha P. Chandrakasan, dean of the School of Engineering, Vannevar Bush Professor of Electrical Engineering and Computer Science, and co-chair of the MIT-Takeda Program Steering Committee.</p>

<p>Engaging with researchers and industry experts from Takeda, each project team will bring together different disciplines, merging theory and practical implementation, while combining algorithm and platform innovations.</p>

<p>“This is an incredible opportunity to merge the cross-disciplinary and cross-functional expertise of both MIT and Takeda researchers,” says Chandrakasan. “This particular collaboration between academia and industry is of great significance as our world faces enormous challenges pertaining to human health. I look forward to witnessing the evolution of the program and the impact its research aims to have on our society.”&nbsp;</p>

<p>“The shared enthusiasm and combined efforts of researchers from across MIT and Takeda have the opportunity to shape the future of health care,” says Anne Heatherington, senior vice president and head of Data Sciences Institute (DSI) at Takeda, and co-chair of the MIT-Takeda Program Steering Committee. “Together we are building capabilities and addressing challenges through interrogation of multiple data types that we have not been able to solve with the power of humans alone that have the potential to benefit both patients and the greater community.”</p>

<p>The following are the inaugural projects of the MIT-Takeda Program. Included are the MIT teams collaborating with Takeda researchers, who are leveraging AI to positively impact human health.</p>

<p>”AI-enabled, automated inspection of lyophilized products in sterile pharmaceutical manufacturing”: Duane Boning, the Clarence J. LeBel Professor of Electrical Engineering and faculty co-director of the Leaders for Global Operations program; Luca Daniel, professor of electrical engineering and computer science; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president for open learning; and Brian Subirana, research scientist and director of the MIT Auto-ID Laboratory within the Department of Mechanical Engineering.</p>

<p>”Automating adverse effect assessments and scientific literature review”: Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Jameel Clinic faculty co-lead; Tommi Jaakkola, the Thomas Siebel&nbsp;Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society; and Jacob Andreas, assistant professor of electrical engineering and computer science.</p>

<p>”Automated analysis of speech and language deficits for frontotemporal dementia”: James Glass, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president for open learning; and Brian Subirana, research scientist and director of the MIT Auto-ID Laboratory within the Department of Mechanical Engineering.</p>

<p>”Discovering human-microbiome protein interactions with continuous distributed representation”: Jim Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science and Department of Biological Engineering, Jameel Clinic faculty co-lead, and MIT-Takeda Program faculty lead; and Timothy Lu, associate professor of electrical engineering and computer science and of biological engineering.</p>

<p>”Machine learning for early diagnosis, progression risk estimation, and identification of non-responders to conventional therapy for inflammatory bowel disease”: Peter Szolovits, professor of computer science and engineering, and David Sontag, associate professor of electrical engineering and computer science.</p>

<p>”Machine learning for image-based liver phenotyping and drug discovery”: Polina Golland, professor of electrical engineering and computer science; Brian W. Anthony, principal research scientist in the Department of Mechanical Engineering; and Peter Szolovits, professor of computer science and engineering.</p>

<p>”Predictive in silico models for cell culture process development for biologics manufacturing”: Connor W. Coley, assistant professor of chemical engineering, and J. Christopher Love, the Raymond A. (1921) and Helen E. St. Laurent Professor of Chemical Engineering.</p>

<p>”Automated data quality monitoring for clinical trial oversight via probabilistic programming”: Vikash Mansinghka, principal research scientist in the Department of Brain and Cognitive Sciences; Tamara Broderick, associate professor of electrical engineering and computer science; David Sontag, associate professor of electrical engineering and computer science; Ulrich Schaechtle, research scientist in the Department of Brain and Cognitive Sciences; and Veronica Weiner, director of special projects for the MIT Probabilistic Computing Project.</p>

<p>”Time series analysis from video data for optimizing and controlling unit operations in production and manufacturing”: Allan S. Myerson, professor of chemical engineering; George Barbastathis, professor of mechanical engineering; Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering; and Bernhardt Trout, the Raymond F. Baddour, ScD, (1949) Professor of Chemical Engineering.</p>

<p>“The flagship research projects of the MIT-Takeda Program offer real promise to the ways we can impact human health,” says Jim Collins. “We are delighted to have the opportunity to collaborate with Takeda researchers on advances that leverage AI and aim to shape health care around the globe.”</p>

Researchers present at the MIT-Takeda launch event earlier this year.

What jumps out in a photo changes the longer we lookhttps://news.mit.edu/2020/what-jumps-out-photo-changes-longer-we-look-0617
Researchers capture our shifting gaze in a model that suggests how to prioritize visual information based on viewing duration.
Wed, 17 Jun 2020 14:35:01 -0400
https://news.mit.edu/2020/what-jumps-out-photo-changes-longer-we-look-0617
Kim Martineau | MIT Quest for Intelligence
<p>What seizes your attention at first glance might change with a closer look. That elephant dressed in&nbsp;red wallpaper&nbsp;might initially grab your eye until&nbsp;your gaze&nbsp;moves to the woman on the living room couch and the surprising realization that the pair appear to be sharing a quiet moment together.</p>

<p>In a study being presented at the virtual&nbsp;<a href=”http://cvpr2020.thecvf.com/” target=”_blank”>Computer Vision and Pattern Recognition</a>&nbsp;conference this week, researchers show that our attention moves in distinctive ways the longer we stare at an image, and that these viewing patterns can be replicated by artificial intelligence models. The work suggests immediate ways of improving how visual content is teased and eventually displayed online. For example, an automated cropping tool might zoom in on the elephant for a thumbnail&nbsp;preview or zoom out to include the intriguing details that&nbsp;become visible once a reader clicks on the story.</p>

<p>“In the real world, we look at the scenes around us and our attention also moves,” says&nbsp;<a href=”http://anelise.mit.edu/”>Anelise Newman</a>, the study’s co-lead author and a master’s student at MIT. “What captures our interest over time varies.” The study’s senior authors are&nbsp;<a href=”http://web.mit.edu/zoya/www/”>Zoya Bylinskii</a> PhD ’18,&nbsp;a research scientist at Adobe Research,&nbsp;and&nbsp;<a href=”http://olivalab.mit.edu/audeoliva.html”>Aude Oliva</a>, co-director of the MIT Quest for Intelligence and a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory.</p>

<p>What researchers know about saliency, and how humans perceive images, comes from experiments in which&nbsp;participants are shown pictures for a fixed period of time. But in the real world, human attention often shifts abruptly. To simulate this variability, the researchers used a crowdsourcing user interface called CodeCharts to show participants photos at three durations — half a second, 3 seconds, and 5 seconds — in a set of online experiments.&nbsp;</p>

<p>When the image disappeared, participants were asked to report where they had last looked by typing in a three-digit code on a gridded map corresponding to the image. In the end, the researchers were able to gather heat maps of where in a given image participants had collectively focused their gaze at different moments in time.&nbsp;</p>
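<p>A minimal sketch of this aggregation step, assuming a simple row-major mapping from each reported three-digit code to a grid cell (the real CodeCharts encoding may differ), might look like the following.</p>

<pre><code>
# Aggregating self-reported gaze codes on a grid into a heat map.
# Grid size, the code-to-cell mapping, and the example codes are assumptions.
import numpy as np

GRID_ROWS, GRID_COLS = 10, 15

def code_to_cell(code: int):
    # Hypothetical mapping: each code indexes one grid cell, row-major.
    return divmod(code, GRID_COLS)  # (row, col)

def heatmap(codes, sigma=1.0):
    """Accumulate reported gaze cells, then blur so reports spread to neighbors."""
    counts = np.zeros((GRID_ROWS, GRID_COLS))
    for code in codes:
        r, c = code_to_cell(code)
        counts[r, c] += 1
    xs = np.arange(-3, 4)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, counts)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, blurred)
    return blurred / blurred.max()

# Made-up reports from viewers who saw an image for half a second.
half_second_codes = [47, 47, 48, 62, 47, 33]
print(heatmap(half_second_codes).round(2))
</code></pre>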

<p>At the split-second interval, viewers focused on faces or a visually dominant animal or object. By 3 seconds, their gaze had shifted to action-oriented features, like a dog on a leash, an archery target, or an airborne frisbee. At 5 seconds, their gaze either shot back, boomerang-like, to the main subject, or it lingered on the suggestive details.&nbsp;</p>

<p>“We were surprised at just how consistent these viewing patterns were at different durations,” says the study’s other lead author,&nbsp;<a href=”https://cfosco.github.io/”>Camilo Fosco</a>, a PhD student at MIT.</p>

<p>With&nbsp;real-world&nbsp;data in hand, the researchers next trained a deep learning model to predict the focal points of images it had never seen before, at different viewing durations. To reduce the size of their model, they included a recurrent module that works on compressed representations of the input image, mimicking the human gaze as it explores an image at varying durations. When tested, their model outperformed the state of the art at predicting saliency across viewing durations.</p>
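<p>The sketch below is an illustrative stand-in, not the authors’ model: a small convolutional encoder produces a compressed representation of the image, and a recurrent module steps through it once per viewing duration to emit a saliency map for each. All layer choices here are assumptions.</p>

<pre><code>
# Illustrative multi-duration saliency sketch (assumed architecture, not the paper's).
import torch
import torch.nn as nn

class MultiDurationSaliency(nn.Module):
    def __init__(self, n_durations=3, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(          # compress the image 8x spatially
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.recurrent = nn.GRUCell(channels, channels)  # evolves attention over "time"
        self.decoder = nn.Conv2d(channels, 1, 1)         # per-location saliency logit
        self.n_durations = n_durations

    def forward(self, image):
        feats = self.encoder(image)                     # (B, C, H/8, W/8)
        b, c, h, w = feats.shape
        seq = feats.permute(0, 2, 3, 1).reshape(b * h * w, c)
        state = torch.zeros_like(seq)
        maps = []
        for _ in range(self.n_durations):
            state = self.recurrent(seq, state)          # one step per viewing duration
            sal = self.decoder(state.view(b, h, w, c).permute(0, 3, 1, 2))
            maps.append(torch.sigmoid(sal))             # (B, 1, H/8, W/8)
        return torch.stack(maps, dim=1)                 # (B, n_durations, 1, H/8, W/8)

model = MultiDurationSaliency()
maps = model(torch.rand(2, 3, 256, 256))
print(maps.shape)  # torch.Size([2, 3, 1, 32, 32])
</code></pre>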

<p>The model has potential applications for editing and rendering compressed images and even improving the accuracy of automated image captioning. In addition to guiding an editing tool to crop an image for shorter or longer viewing durations, it could prioritize which elements in a compressed image to render first for viewers. By clearing away the visual clutter in a scene, it could improve the overall accuracy of current photo-captioning techniques. It could also generate captions for images meant for split-second viewing only.&nbsp;</p>

<p>“The content that you consider most important depends on the time you have to look at it,” says Bylinskii. “If you see the full image at once, you may not have time to absorb it all.”</p>

<p>As more images and videos are shared online, the need for better tools to find and make sense of relevant content is growing. Research on human attention offers insights for technologists. Just as computers and camera-equipped mobile phones helped create the data overload, they are also giving researchers new platforms for studying human attention and designing better tools to help us cut through the noise.</p>

<p>In a related study accepted to the&nbsp;<a href=”https://chi2020.acm.org/”>ACM Conference on Human Factors in Computing Systems</a>, researchers outline the relative benefits of four web-based user interfaces, including CodeCharts, for gathering human attention data at scale. All four tools capture attention without relying on traditional eye-tracking hardware in a lab, either by collecting self-reported gaze data, as CodeCharts does, or by recording where subjects click their mouse or zoom in on an image.</p>

<p>“There’s no one-size-fits-all interface that works for all use cases, and our paper focuses on teasing apart these trade-offs,” says Newman, lead author of the study.</p>

<p>By making it faster and cheaper to gather human attention data, the platforms may help to generate new knowledge on human vision and cognition. “The more we learn about how humans see and understand the world, the more we can build these insights into our AI tools to make them more useful,” says Oliva.</p>

<p>Other authors of the CVPR paper are Pat Sukhum, Yun Bin Zhang, and Nanxuan Zhao. The research was supported by the Vannevar Bush Faculty Fellowship program, an Ignite grant from the SystemsThatLearn@CSAIL, and cloud computing services from MIT Quest.</p>

An MIT study shows viewers’ attention shifts the longer they gaze at an image. Given just a half-second to look at the photo at left, in online experiments, they focused on the elephant, as shown in this heat map.
Image courtesy of the researchers.

Photorealistic simulator made MIT robot racing competition a live online experiencehttps://news.mit.edu/2020/photorealistic-simulator-made-mit-robot-racing-competition-live-online-experience-0609
Teaching assistants in Robotics: Science and Systems pulled out all the stops to help engineering students race across the finish line this spring.
Tue, 09 Jun 2020 15:20:01 -0400
https://news.mit.edu/2020/photorealistic-simulator-made-mit-robot-racing-competition-live-online-experience-0609
Ashley Belanger | School of Engineering
<p>Every spring, the basement of the Ray and Maria Stata Center becomes a racetrack for tiny self-driving cars that tear through the halls one by one. Sprinting behind each car on foot is a team of three to six students, sometimes carrying wireless routers or open laptops extended out like Olympic torches. Lining the basement walls, their classmates cheer them on, knowing the effort it took to program the algorithms steering the cars around the course during this annual MIT autonomous racing competition.</p>

<p>The competition is the final project for Course 6.141/16.405 (Robotics: Science and Systems). It’s an end-of-semester event that gets pulses speeding, and prizes are awarded for finishing different race courses with the fastest times out of 20 teams.</p>

<p>With campus evacuated this spring due to the Covid-19 pandemic, however, not a single robotic car burned rubber in the Stata Center basement. Instead, a new race was on as Luca Carlone, the Charles Stark Draper Assistant Professor of Aeronautics and Astronautics and member of the Institute for Data, Systems, and Society; Nicholas Roy, professor of aeronautics and astronautics; and teaching assistants (TAs) including Marcus Abate, Lukas Lao Beyer, and Caris Mariah Moses had only four weeks to figure out how to bring the excitement of this highly-anticipated race online.</p>

<p>Because the lab sometimes uses a simple simulator for other research, Carlone says they considered taking the race in that direction. With this simple simulator, students could watch as their self-driving cars snaked around a flat map, like a car depicted by a dot moving along a GPS navigation system. Ultimately, they decided that wasn’t the right route. The racing competition needed to be noisy. Realistic. Exciting. The dynamics of the car needed to be nearly as complex as the robotic cars the students had planned to use. Building on his prior research in collaboration with MIT Lincoln Laboratory, Abate worked with Lao Beyer and engineering graduate student Sabina Chen to develop a new photorealistic simulator at the last minute.</p>

<p>The race was back on, and Carlone was impressed by how everything from the cityscape to the sleek car designs looked “as realistic as possible.”</p>

<p>“The modifications involved introducing an outdoor environment based on open-source assets, building in realistic car dynamics for the agent, and adding lidar sensors,” Abate says. “I also had to revamp the interfacing with Python and Robot Operating System (ROS) to make it all plug-and-play for the students.”</p>
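<p>In practice, “plug-and-play” ROS code of the sort students wrote tends to follow a subscribe-and-publish pattern. The hedged sketch below reads lidar scans and publishes steering commands; the topic names, message types, and trivial steering rule are assumptions for illustration, not the course’s actual interface or any team’s solution.</p>

<pre><code>
#!/usr/bin/env python
# Minimal ROS node sketch: subscribe to simulated lidar, publish drive commands.
# Topic names ("/scan", "/drive") and the Ackermann message are illustrative assumptions.
import rospy
from sensor_msgs.msg import LaserScan
from ackermann_msgs.msg import AckermannDriveStamped

class SimpleFollower:
    def __init__(self):
        self.drive_pub = rospy.Publisher("/drive", AckermannDriveStamped, queue_size=1)
        rospy.Subscriber("/scan", LaserScan, self.on_scan, queue_size=1)

    def on_scan(self, scan):
        # Clamp out-of-range readings, then steer gently toward the side with more free space.
        ranges = [min(r, scan.range_max) for r in scan.ranges]
        n = len(ranges)
        if n < 2:
            return
        left = sum(ranges[n // 2:]) / (n - n // 2)
        right = sum(ranges[:n // 2]) / (n // 2)
        msg = AckermannDriveStamped()
        msg.header.stamp = rospy.Time.now()
        msg.drive.speed = 2.0                          # m/s, constant for this sketch
        msg.drive.steering_angle = 0.3 if left > right else -0.3
        self.drive_pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("simple_follower")
    SimpleFollower()
    rospy.spin()
</code></pre>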

<p>What that means is that the race ran a lot like a racing game, such as Gran Turismo or Forza. Only instead of sitting on your couch thumbing the joystick to direct the car, students developed algorithms to anticipate every roadblock and bend ahead. For students, programming for this new environment was perhaps the biggest adjustment. “The simulator used an outdoor scene and a full-sized car with a very different dynamics model than the real-life race car in the Stata basement,” Abate says.</p>

<p>The TAs also had to adjust to complications behind the scenes of the race’s new setting. “A huge amount of effort was put into the new simulator, as well as into the logistics of obtaining and evaluating students’ software,” Lao Beyer says. “Usually, teams are able to configure the software on their race car however they want, but it is very difficult to accommodate for such a diversity of software setups in the virtual race.”</p><p>Once the simulator was ready, there was no time to troubleshoot, so TAs made themselves available to debug on the fly any issues that arose. “I think that saved the day for the final project and the final race,” Carlone says.</p>

<p>Programming their autonomous racing code wasn’t the only way that students customized their race experience, though. Co-instructor Jane Abbott brought Writing, Rhetoric, and Professional Communication (WRAP) into the course. As coordinator of the communication-intensive team that focused on helping teams work effectively, she says she was worried the silence that often looms on Zoom would suck out all the energy of the race. She suggested the TAs add a soundtrack.</p>

<p>In the end, the remote race ran for nearly four hours, bringing together more than 100 people in one Zoom call with commentators and Mario Kart music playing. “We got to watch every student’s solution with some cool visualization code running that showed the trajectory and any obstacles hit,” says Samuel Ubellacker, an electrical engineering and computer science student who raced this year. “We got to see how each team’s solution ran much clearer in the simulator because the camera was always following the race car.”</p>

<p>For Yorai Shaoul, another electrical engineering and computer science student in the race, getting out of the basement helped him become more engaged with other teams’ projects. “Before leaving campus, we found ourselves working long hours in the Stata basement,” Shaoul says. “So focused on our robot, we failed to notice that other teams were right there next to us the whole time.”</p>

<p>During the race, other programming solutions his team had overlooked became clear. “The TAs showcased and narrated each team’s run, finally allowing us to see the diverse approaches other teams were developing,” Shaoul says.</p>

<p>“One thing that was nice: When we’ve done it live in the tunnels, you can only see a part of it,” Abbott says. “You sort of stand at a fixed point and you see the car go by. It’s like watching the marathon: you see the runners for 100 yards and then they’re gone.”</p>

<p>Over Zoom, participants could watch every impressive cruise and spectacular crash as it happened, plus replays. Many stayed to watch, and Lao Beyer says, “We managed to retain as much excitement and suspense about the final challenge as possible.” Ubellacker agrees: “It was certainly an unforgettable experience!”</p>

<p>Students who don’t bro down with Mario could also choose the music to accompany their races. One team picked the “Titanic” movie theme “My Heart Will Go On”; its lyrics, “Near, far, wherever you are,” are a wink to the extra challenge of collaborating as teams at a distance.</p>

<p>One of the masters of ceremonies for the 2020 race, Marwa Abdulhai ’20, was a TA last year and says one obvious benefit of the online race is that it’s a lot easier to figure out why your car crashed. “Pros of this virtual approach have been allowing students to race through the track multiple times and knowing that the car’s performance was primarily due to the algorithm and not any physical constraints,” Abdulhai says.</p>

<p>For Ubellacker that was actually a con, though: “The biggest element that I missed without having a physical car was not being able to experience the differences between simulation and real life.” He says, “Part of the fun to me is designing a system that works perfectly in the simulator, and then getting to figure out all the crazy ways it will fail in the real world!”</p>

<p>Shaoul says instead of working on one car, sometimes it felt like they were working on five individual cars that lived on each team member’s computer. “With one car, it was easy to see how well it did and what required fixing, whereas virtually it was more ambiguous,” Shaoul says. “We faced challenges with keeping track of up-to-date code versions and also simple communication.”</p>

<p>Carlone was concerned students wouldn’t be as invested in their algorithms without the experience of seeing the car’s performance play out in real life to motivate them to push harder. “Every year, the record time on that Stata Center track was getting better and better,” he says. “This year, we were a bit concerned about the performance.”</p>

<p>Fortunately, many students were very much still in the race, with some teams beating the most optimistic predictions, despite having to adjust to new racing conditions and greater challenges collaborating as a team fully online. The winning students completed the race courses sometimes three times faster than other teams, without any collisions. “It was just beyond expectation,” Carlone says.</p><p>Although this shift in the final project somewhat changed the takeaways from the course, Carlone says the experience will still advance algorithmic skills for students working on robotics, as well as introducing them to the intensity of communication required to work effectively as remote teams. “Many robotics groups are doing research using photorealistic simulation, because you can test conditions that you cannot test on the real robot,” he says. Co-instructor Roy says it worked so well, the new simulator might become a permanent feature of the course — not to replace the physical race, but as an extra element. “The robotics experience was good,” Carlone says of the 2020 race, but still: “The human experience is, of course, different.”</p>

More than 100 people participated in a four-hour online robot race, which served as the final project for MIT Course 6.141/16.405 (Robotics: Science and Systems).

Learning the ropes and throwing lifelineshttps://news.mit.edu/2020/student-geeticka-chauhan-0609
PhD student Geeticka Chauhan draws on her experiences as an international student to strengthen the bonds of her MIT community.
Mon, 08 Jun 2020 23:59:59 -0400
https://news.mit.edu/2020/student-geeticka-chauhan-0609
Sofia Tong | MIT News correspondent
<p>In March, as her friends and neighbors were scrambling to pack up and leave campus due to the Covid-19 pandemic, Geeticka Chauhan found her world upended in yet another way. Just weeks earlier, she had been elected council president of MIT’s largest graduate residence, Sidney-Pacific. Suddenly the fourth-year PhD student was plunged into rounds of emergency meetings with MIT administrators.</p><p>From her apartment in Sidney-Pacific, where she has stayed put due to travel restrictions in her home country of India, Chauhan is still learning the ropes of her new position. With others, she has been busy preparing to meet the future challenge of safely redensifying the living space of more than 1,000 people: how to regulate high-density common areas, handle noise complaints as people spend more time in their rooms, and care for the mental and physical well-being of a community that can only congregate virtually. “It’s just such a crazy time,” she says.</p><p>She’s prepared for the challenge. During her time at MIT, while pursuing her research using artificial intelligence to understand human language, Chauhan has worked to strengthen the bonds of her community in numerous ways, often drawing on her experience as an international student to do so.</p><p><strong>Adventures in brunching</strong></p><p>When Chauhan first came to MIT in 2017, she quickly fell in love with Sidney-Pacific’s thriving and freewheeling “helper culture.” “These are all researchers, but they’re maybe making brownies, doing crazy experiments that they would do in lab, except in the kitchen,” she says. “That was my first introduction to the MIT spirit.”</p><p>Next thing she knew, she was teaching Budokon yoga, mashing chickpeas into guacamole, and immersing herself in the complex operations of a <a href=”http://news.mit.edu/2019/serving-brunch-mit-graduate-community-0219″ target=”_blank”>monthly brunch</a> attended by hundreds of graduate students, many of whom came to MIT from outside the U.S. In addition to the genuine thrill of cracking <a href=”https://gradadmissions.mit.edu/blog/graduate-student-becomes-chickpea-master-masher” target=”_blank”>300 eggs in 30 minutes</a>, working on the brunches kept her grounded in a place thousands of miles from her home in New Delhi. “It gave me a sense of community and made me feel like I have a family here,” she says.</p><p>Chauhan has found additional ways to address the particular difficulties that international students face. As a member of the Presidential Advisory Council this year, she gathered international student testimonies on visa difficulties and presented them to MIT’s president and the director of the International Students Office. And when a friend from mainland China had to self-quarantine on Valentine’s Day, Chauhan knew she had to act. As brunch chair, she organized food delivery, complete with chocolates and notes, for Sidney-Pacific residents who couldn’t make it to the monthly event. “Initially when you come back to the U.S. from your home country, you really miss your family,” she says. “I thought self-quarantining students should feel their MIT community cares for them.”</p><p><strong>Culture shock</strong></p><p>Growing up in New Delhi, math was initially one of her weaknesses, Chauhan says, and she was scared and confused by her early introduction to coding. Her mother and grandmother, with stern kindness and chocolates, encouraged her to face these fears. 
“My mom used to teach me that with hard work, you can make your biggest weakness your biggest strength,” she explains. She soon set her sights on a future in computer science.</p><p>However, as Chauhan found her life increasingly dominated by the high-pressure culture of preparing for college, she began to long for a feeling of wholeness, and for the person she left behind on the way. “I used to have a lot of artistic interests but didn’t get to explore them,” she says. She quit her weekend engineering classes, enrolled in a black and white photography class, and after learning about the extracurricular options at American universities, landed a full scholarship to attend Florida International University.</p><p>It was a culture shock. She didn’t know many Indian students in Miami and felt herself struggling to reconcile the individualistic mindset around her with the community and family-centered life at home. She says the people she met got her through, including <a href=”http://users.cs.fiu.edu/~markaf/” target=”_blank”>Mark Finlayson</a>, a professor studying the science of narrative from the viewpoint of natural language processing. Under Finlayson’s guidance she developed a fascination with the way AI techniques could be used to better understand the patterns and <a href=”https://www.aclweb.org/anthology/C18-1001.pdf” target=”_blank”>structures in human narratives</a>. She learned that studying AI wasn’t just a way of imitating human thinking, but rather an approach for deepening our understanding of ourselves as reflected by our language. “It was due to Mark’s mentorship that I got involved in research” and applied to MIT, she says.</p><p><strong>The holistic researcher</strong></p><p>Chauhan now works in the Clinical Decision Making Group led by Peter Szolovits at the Computer Science and Artificial Intelligence Laboratory, where she is focusing on the ways natural language processing can address health care problems. For her master’s project, she worked on the problem of relation extraction and built a tool to digest clinical literature that would, for example, help pharmacologists easily assess negative drug interactions. Now, she’s finishing up a <a href=”https://www.csail.mit.edu/research/quantification-pulmonary-edema-chest-radiographs” target=”_blank”>project</a> integrating visual analysis of chest radiographs and textual analysis of radiology reports for quantifying pulmonary edema, to help clinicians manage the fluid status of their patients who have suffered acute heart failure.</p><p>“In routine clinical practice, patient care is interweaved with a lot of bureaucratic work,” she says. “The goal of my lab is to assist with clinical decision making and give clinicians the full freedom and time to devote to patient care.”</p><p>It’s an exciting moment for Chauhan, who recently submitted a paper she co-first authored with another grad student, and is starting to think about her next project: interpretability, or how to elucidate a decision-making model’s “thought process” by highlighting the data from which it draws its conclusions. She continues to find the intersection of computer vision and natural language processing an exciting area of research. But there have been challenges along the way.</p><p>After the initial flurry of excitement her first year, personal and faculty expectations of students’ independence and publishing success grew, and she began to experience uncertainty and imposter syndrome. “I didn’t know what I was capable of,” she says.
“That initial period of convincing yourself that you belong is difficult. I am fortunate to have a supportive advisor that understands that.”</p><p>Finally, one of her first-year projects showed promise, and she came up with a master’s thesis plan in a month and submitted the project that semester. To get through, she says, she drew on her “survival skills”: allowing herself to be a full person beyond her work as a researcher so that one setback didn’t become a sense of complete failure. For Chauhan, that meant working as a teaching assistant, drawing henna designs, singing, enjoying yoga, and staying involved in student government. “I used to try to separate that part of myself with my work side,” she says. “I needed to give myself some space to learn and grow, rather than compare myself to others.”</p><p>Citing a <a href=”https://www.washingtonpost.com/opinions/catherine-rampell-women-should-embrace-the-bs-in-college-to-make-more-later/2014/03/10/1e15113a-a871-11e3-8d62-419db477a0e6_story.html” target=”_blank”>study</a> showing that women are more likely to drop out of STEM disciplines when they receive a B grade in a challenging course, Chauhan says she wishes she could tell her younger self not to compare herself with an ideal version of herself. Dismantling imposter syndrome requires an understanding that qualification and success can come from a broad range of experiences, she says: It’s about “seeing people for who they are holistically, rather than what is seen on the resume.”</p>
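<p>To make the multimodal project described above a bit more concrete, here is a heavily simplified sketch, written in PyTorch, of a model that fuses an image encoding of a chest radiograph with a text encoding of its radiology report to predict an edema severity level. The architecture, input sizes, and four-level severity scale are illustrative assumptions, not the lab’s actual model.</p>
<pre><code># Toy multimodal model: radiograph image + radiology report text -> severity class.
# All sizes and the bag-of-words text encoder are stand-ins for illustration.
import torch
import torch.nn as nn

class EdemaScorer(nn.Module):
    def __init__(self, vocab_size=5000, n_classes=4):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.text_enc = nn.EmbeddingBag(vocab_size, 16)   # averages report token embeddings
        self.classifier = nn.Linear(16 + 16, n_classes)   # e.g., severity levels 0-3

    def forward(self, xray, report_tokens):
        joint = torch.cat([self.image_enc(xray), self.text_enc(report_tokens)], dim=1)
        return self.classifier(joint)

model = EdemaScorer()
xray = torch.rand(2, 1, 224, 224)          # batch of grayscale radiographs
reports = torch.randint(0, 5000, (2, 30))  # batch of tokenized reports
print(model(xray, reports).shape)          # torch.Size([2, 4])
</code></pre>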

PhD student Geeticka Chauhan is finishing up a project integrating visual analysis of chest radiographs and textual analysis of radiology reports, to help clinicians assess the proper balance of treatments for acute heart failure.
Illustration: Jose-Luis Olivares, MIT

Engineers put tens of thousands of artificial brain synapses on a single chiphttps://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608
The design could advance the development of small, portable AI devices.
Mon, 08 Jun 2020 12:18:05 -0400
https://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608
Jennifer Chu | MIT News Office
<p>MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.</p><p>The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.</p><p>Their results, published today in the journal <em>Nature Nanotechnology</em>, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.</p><p>“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”</p><p><strong>Wandering ions</strong></p><p>Memristors, or memory transistors, are an essential element in neuromorphic computing. In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse — the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.</p><p>A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.</p><p>Like a brain synapse, a memristor would also be able to “remember” the value associated with a given current strength, and produce the exact same signal the next time it receives a similar current. This could ensure that the answer to a complex equation, or the visual classification of an object, is reliable — a feat that normally involves multiple transistors and capacitors.</p><p>Ultimately, scientists envision that memristors would require far less chip real estate than conventional transistors, enabling powerful, portable computing devices that do not rely on supercomputers, or even connections to the Internet.</p><p>Existing memristor designs, however, are limited in their performance. A single memristor is made of a positive and negative electrode, separated by a “switching medium,” or space between the electrodes. 
When a voltage is applied to one electrode, ions from that electrode flow through the medium, forming a “conduction channel” to the other electrode. The received ions make up the electrical signal that the memristor transmits through the circuit. The size of the ion channel (and the signal that the memristor ultimately produces) should be proportional to the strength of the stimulating voltage.</p><p>Kim says that existing memristor designs work pretty well in cases where voltage stimulates a large conduction channel, or a heavy flow of ions from one electrode to the other. But these designs are less reliable when memristors need to generate subtler signals, via thinner conduction channels.</p><p>The thinner a conduction channel, and the lighter the flow of ions from one electrode to the other, the harder it is for individual ions to stay together. Instead, they tend to wander from the group, disbanding within the medium. As a result, it’s difficult for the receiving electrode to reliably capture the same number of ions, and therefore transmit the same signal, when stimulated with a certain low range of current.</p><p><strong>Borrowing from metallurgy</strong></p><p>Kim and his colleagues found a way around this limitation by borrowing a technique from metallurgy, the science of melding metals into alloys and studying their combined properties.</p><p>“Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium,” Kim says.</p><p>Engineers typically use silver as the material for a memristor’s positive electrode. Kim’s team looked through the literature to find an element that they could combine with silver to effectively hold silver ions together, while allowing them to flow quickly through to the other electrode.</p><p>The team landed on copper as the ideal alloying element, as it is able to bind both with silver, and with silicon.</p><p>“It acts as a sort of bridge, and stabilizes the silver-silicon interface,” Kim says.</p><p>To make memristors using their new alloy, the group first fabricated a negative electrode out of silicon, then made a positive electrode by depositing a slight amount of copper, followed by a layer of silver. They sandwiched the two electrodes around an amorphous silicon medium. In this way, they patterned a millimeter-square silicon chip with tens of thousands of memristors.</p><p>As a first test of the chip, they recreated a gray-scale image of the Captain America shield. They equated each pixel in the image to a corresponding memristor in the chip. They then modulated the conductance of each memristor in proportion to the gray level of the corresponding pixel.</p><p>The chip produced the same crisp image of the shield, and was able to “remember” the image and reproduce it many times, more reliably than chips made of other materials.</p><p>The team also ran the chip through an image processing task, programming the memristors to alter an image, in this case of MIT’s Killian Court, in several specific ways, including sharpening and blurring the original image. Again, their design produced the reprogrammed images more reliably than existing memristor designs.</p><p>“We’re using artificial synapses to do real inference tests,” Kim says. “We would like to develop this technology further to have larger-scale arrays to do image recognition tasks.
And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”</p><p>This research was funded, in part, by the MIT Research Support Committee funds, the MIT-IBM Watson AI Lab, Samsung Global Research Laboratory, and the National Science Foundation.</p>
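<p>As a purely numerical illustration of the image-storage idea described above, the toy code below maps gray-level pixel values onto a range of conductances and “reads” them back with a small voltage via Ohm’s law. The conductance range, read voltage, and noise model are arbitrary stand-ins, not values from the paper.</p>
<pre><code># Toy model of storing a gray-scale image as memristor conductances and reading
# it back electrically. Parameter values are illustrative, not from the paper.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed conductance range, in siemens
V_READ = 0.1                # assumed read voltage, in volts

def write_image(image):
    """Map pixel intensities in [0, 1] to per-memristor conductances."""
    return G_MIN + image * (G_MAX - G_MIN)

def read_image(conductances, noise=0.0):
    """Read currents at V_READ, add ion-wander-like noise, and rescale to pixels."""
    currents = conductances * V_READ
    currents = currents + noise * np.random.randn(*currents.shape) * currents
    return (currents / V_READ - G_MIN) / (G_MAX - G_MIN)

image = np.random.rand(64, 64)                        # stand-in for a gray-scale image
restored = read_image(write_image(image), noise=0.01)
print("mean absolute error:", np.abs(restored - image).mean())
</code></pre>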

A close-up view of a new neuromorphic “brain-on-a-chip” that includes tens of thousands of memristors, or memory transistors.
Credit: Peng Lin

If transistors can’t get smaller, then coders have to get smarterhttps://news.mit.edu/2020/mit-csail-computing-technology-after-moores-law-0605
MIT CSAIL researchers say improving computing technology after Moore’s Law will require more efficient software, new algorithms, and specialized hardware.
Fri, 05 Jun 2020 11:50:01 -0400
https://news.mit.edu/2020/mit-csail-computing-technology-after-moores-law-0605
Adam Conner-Simons | MIT CSAIL
<p>In 1965, Intel co-founder Gordon Moore predicted that the number of transistors that could fit on a computer chip would grow exponentially — and they did, doubling about every two years. For half a century, Moore’s Law has endured: Computers have gotten smaller, faster, cheaper, and more efficient, enabling the rapid worldwide adoption of PCs, smartphones, high-speed internet, and more.</p>

<p>This miniaturization trend has led to silicon chips today that have almost unimaginably small circuitry. Transistors, the tiny switches that implement computer microprocessors, are so small that 1,000 of them laid end-to-end are no wider than a human hair. And for a long time, the smaller the transistors were, the faster they could switch. But today, we’re approaching the limit of how small transistors can get. As a result, over the past decade researchers have been scratching their heads to find other ways to improve performance so that the computer industry can continue to innovate.</p>

<p>While we wait for the maturation of new computing technologies like quantum, carbon nanotubes, or photonics (which may take a while), other approaches will be needed to get performance as Moore’s Law comes to an end. In a recent journal article published in <em>Science</em>, a team from MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) <a href=”https://science.sciencemag.org/content/368/6495/eaam9744.editor-summary” target=”_blank”>identifies three key areas</a> to prioritize to continue to deliver computing speed-ups: better software, new algorithms, and more streamlined hardware.</p>

<p>Senior author Charles E. Leiserson says that the performance benefits from miniaturization have been so great that, for decades, programmers have been able to prioritize making code-writing easier rather than making the code itself run faster. The inefficiency that this tendency introduces has been acceptable, because faster computer chips have always been able to pick up the slack.</p>

<p>“But nowadays, being able to make further advances in fields like machine learning, robotics, and virtual reality will require huge amounts of computational power that miniaturization can no longer provide,” says Leiserson, the Edwin Sibley Webster Professor in MIT’s Department of Electrical Engineering and Computer Science. “If we want to harness the full potential of these technologies, we must change our approach to computing.”</p>

<p>Leiserson co-wrote the <a href=”https://science.sciencemag.org/content/368/6495/eaam9744.editor-summary” target=”_blank”>paper</a>, published this week, with Research Scientist Neil Thompson, professors Daniel Sanchez and Joel Emer, Adjunct Professor Butler Lampson, and research scientists Bradley Kuszmaul and Tao Schardl.</p>

<p><strong>No more Moore</strong></p>

<p>The authors make recommendations about three areas of computing: software, algorithms, and hardware architecture.</p>

<p>With software, they say that programmers’ previous prioritization of productivity over performance has led to problematic strategies like “reduction”: taking code that worked on problem A and using it to solve problem B. For example, if someone has to create a system to recognize yes-or-no voice commands, but doesn’t want to code a whole new custom program, they could take an existing program that recognizes a wide range of words and tweak it to respond only to yes-or-no answers.</p>

<p>While this approach reduces coding time, the inefficiencies it creates quickly compound: if a single reduction is 80 percent as efficient as a custom solution, and you then add 20 layers of reduction, the code will ultimately be 100 times less efficient than it could be.</p>
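<p>A quick back-of-the-envelope check makes that compounding concrete. Under a simple multiplicative model in which each of 20 reduction layers retains 80 percent of the achievable performance, the resulting code runs at roughly 1 percent of the speed of a custom solution, on the order of the hundredfold penalty described above:</p>
<pre><code># Compounding inefficiency of stacked reductions, assuming each layer
# independently retains 80 percent of the achievable performance.
layers = 20
efficiency_per_layer = 0.8
overall = efficiency_per_layer ** layers            # fraction of custom-code speed
print(f"overall efficiency: {overall:.4f}")         # about 0.0115
print(f"slowdown factor:    {1 / overall:.0f}x")    # about 87x, i.e., on the order of 100x
</code></pre>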

<p>“These are the kinds of strategies that programmers have to rethink as hardware improvements slow down,” says Thompson. “We can’t keep doing ‘business as usual’ if we want to continue to get the speed-ups we’ve grown accustomed to.”</p>

<p>Instead, the researchers recommend techniques like parallelizing code. Much existing software has been designed using ancient assumptions that processors can do only one operation at a time. But in recent years multicore technology has enabled complex tasks to be completed thousands of times faster and in a much more energy-efficient way.&nbsp;</p>
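<p>The snippet below shows the flavor of that recommendation in miniature: the same embarrassingly parallel workload run first serially and then across worker processes with Python’s standard library. The workload and pool size are arbitrary choices for illustration, not a benchmark from the paper.</p>
<pre><code># Serial vs. process-parallel execution of an embarrassingly parallel workload.
import time
from multiprocessing import Pool

def work(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 16

    t0 = time.time()
    serial = [work(n) for n in tasks]
    t1 = time.time()

    with Pool() as pool:          # one worker per available core by default
        parallel = pool.map(work, tasks)
    t2 = time.time()

    assert serial == parallel
    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s")
</code></pre>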

<p>“Since Moore’s Law will not be handing us improved performance on a silver platter, we will have to deliver performance the hard way,” says Moshe Vardi, a professor in computational engineering at Rice University. “This is a great opportunity for computing research, and the [MIT CSAIL] report provides a road map for such research.”&nbsp;</p>

<p>As for algorithms, the team suggests a three-pronged approach that includes exploring new problem areas, addressing concerns about how algorithms scale, and tailoring them to better take advantage of modern hardware.</p>

<p>Lastly, in terms of hardware architecture, the team advocates that hardware be streamlined so that problems can be solved with fewer transistors and less silicon. Streamlining includes using simpler processors and creating hardware tailored to specific applications, the way a graphics-processing unit is tailored for computer graphics.&nbsp;</p>

<p>“Hardware customized for particular domains can be much more efficient and use far fewer transistors, enabling applications to run tens to hundreds of times faster,” says Schardl. “More generally, hardware streamlining would further encourage parallel programming, creating additional chip area to be used for more circuitry that can operate in parallel.”</p>

<p>While these approaches may be the best path forward, the researchers say that it won’t always be an easy one. Organizations that use such techniques may not know the benefits of their efforts until after they’ve invested a lot of engineering time. Plus, the speed-ups won’t be as consistent as they were with Moore’s Law: they may be dramatic at first, and then require large amounts of effort for smaller improvements.&nbsp;</p>

<p>Certain companies have already gotten the memo.</p>

<p>“For tech giants like Google and Amazon, the huge scale of their data centers means that even small improvements in software performance can result in large financial returns,” says Thompson.&nbsp; “But while these firms may be leading the charge, many others will need to take these issues seriously if they want to stay competitive.”</p>

<p>Getting improvements in the areas identified by the team will also require building up the infrastructure and workforce that make them possible.&nbsp;&nbsp;</p>

<p>“Performance growth will require new tools, programming languages, and hardware to facilitate more and better performance engineering,” says Leiserson. “It also means computer scientists being better educated about how we can make software, algorithms, and hardware work together, instead of putting them in different silos.”</p>

<p>This work was supported, in part, by the National Science Foundation.</p>

We’re approaching the limit of how small transistors can get. As a result, over the past decade researchers have been working to find other ways to improve performance so that the computer industry can continue to innovate.

Giving soft robots feelinghttps://news.mit.edu/2020/giving-soft-robots-senses-0601
In a pair of papers from MIT CSAIL, two teams enable better sense and perception for soft robotic grippers.
Mon, 01 Jun 2020 09:00:00 -0400
https://news.mit.edu/2020/giving-soft-robots-senses-0601
Rachel Gordon | MIT CSAIL
<p>One of the hottest topics in robotics is the field of soft robots, which utilizes squishy and flexible materials rather than traditional rigid materials. But soft robots have been limited due to their lack of good sensing. A good robotic gripper needs to feel what it is touching (tactile sensing), and it needs to sense the positions of its fingers (proprioception). Such sensing has been missing from most soft robots.</p>

<p>In a new pair of papers, researchers from MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.&nbsp;</p>

<p>“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing.&nbsp;</p>
<p><a href=”https://arxiv.org/abs/1910.01287″>One paper</a> builds off last year’s <a href=”http://news.mit.edu/2019/new-robot-hand-gripper-soft-and-strong-0315″>research</a> from MIT and Harvard University, where a team developed a soft and strong robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus’ flytrap, to pick up items that are as much as 100 times its weight.&nbsp;</p>

<p>To get that newfound versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the gripper not only pick up objects as delicate as potato chips, but also classify them — letting the robot better understand what it’s picking up, while also exhibiting that light touch.&nbsp;</p>

<p>When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of grip.</p>

<p>“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.”&nbsp;</p>

<p>In <a href=”https://arxiv.org/pdf/1910.01287.pdf”>a second paper</a>, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).&nbsp;</p>

<p>The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy.&nbsp;</p>

<p>“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”&nbsp;</p>

<p><strong>Magic ball senses&nbsp;</strong></p>

<p>The magic ball gripper is made from a soft origami structure, encased by a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms to its structure.&nbsp;</p>

<p>While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the greater intricacies of delicacy and understanding were still out of reach — until they added the sensors.&nbsp;&nbsp;</p>

<p>When the sensors experience force or strain, the internal pressure changes, and the team can measure this change in pressure to recognize the same object when the gripper feels it again.&nbsp;</p>

<p>In addition to the latex sensor, the team also developed an algorithm which uses feedback to let the gripper possess a human-like duality of being both strong and precise — and 80 percent of the tested objects were successfully grasped without damage.&nbsp;</p>
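<p>As a rough illustration of how such pressure signatures could be turned into object labels, the sketch below generates synthetic per-sensor pressure patterns for ten objects and classifies new grasps by nearest centroid. The signal model, sensor count, and classifier are stand-ins; the team’s actual pipeline is not described here.</p>
<pre><code># Toy nearest-centroid classifier over synthetic grasp pressure signatures.
# Sensor count, pressure ranges, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_OBJECTS, N_TRIALS = 4, 10, 20

# Each object gets a characteristic mean pressure pattern across the sensors.
prototypes = rng.uniform(1.0, 5.0, size=(N_OBJECTS, N_SENSORS))   # kPa, illustrative

def sample_grasp(obj):
    """Simulate one grasp: the object's pattern plus sensor noise."""
    return prototypes[obj] + rng.normal(0.0, 0.2, size=N_SENSORS)

# Average several noisy grasps per object to form training centroids.
centroids = {obj: np.mean([sample_grasp(obj) for _ in range(N_TRIALS)], axis=0)
             for obj in range(N_OBJECTS)}

def classify(reading):
    return min(centroids, key=lambda obj: np.linalg.norm(reading - centroids[obj]))

correct = sum(classify(sample_grasp(obj)) == obj
              for obj in range(N_OBJECTS) for _ in range(N_TRIALS))
print(f"accuracy: {correct / (N_OBJECTS * N_TRIALS):.0%}")
</code></pre>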

<p>The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.&nbsp;</p>

<p>Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage using this new sensor technology. Eventually, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.&nbsp;</p>

<p>Hughes co-wrote the new paper with Rus; they will present it virtually at the 2020 International Conference on Robotics and Automation.&nbsp;</p>

<p><strong>GelFlex</strong></p>

<p>In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way there must be rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.</p>

<p>To create GelFlex, the team used silicone material to fabricate the soft and transparent finger, and put one camera near the fingertip and the other in the middle of the finger. Then, they painted reflective ink on the front and side surface of the finger, and added LED lights on the back. This allows the internal fish-eye camera to observe the status of the front and side surface of the finger.&nbsp;</p>

<p>The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed. The gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum.&nbsp;</p>
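<p>As a minimal sketch of what one of those networks might look like, the PyTorch snippet below regresses a single bending angle from an internal camera frame. The layer sizes, input resolution, and output units are illustrative assumptions rather than the architecture reported in the paper.</p>
<pre><code># Toy convolutional regressor: internal camera frame -> estimated bending angle.
import torch
import torch.nn as nn

class BendAngleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # single output: bending angle (e.g., in degrees)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BendAngleNet()
frame = torch.rand(1, 3, 128, 128)   # stand-in for one fisheye camera frame
print(model(frame).shape)            # torch.Size([1, 1])
</code></pre>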

<p>During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. Out of 80 trials, only three were classified incorrectly.&nbsp;</p>

<p>In the future, the team hopes to improve the proprioception and tactile sensing algorithms, and utilize vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors, but should be attainable with embedded cameras.</p>

<p>Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.</p>

Professor Ted Adelson’s team created a soft robotic finger that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).
Photo courtesy of the researchers.

Giving soft robots feelinghttps://news.mit.edu/2020/giving-soft-robots-senses-0601
In a pair of papers from MIT CSAIL, two teams enable better sense and perception for soft robotic grippers.
Mon, 01 Jun 2020 09:00:00 -0400
https://news.mit.edu/2020/giving-soft-robots-senses-0601
Rachel Gordon | MIT CSAIL
<p>One of the hottest topics in robotics is the field of soft robots, which utilizes squishy and flexible materials rather than traditional rigid materials. But soft robots have been limited due to their lack of good sensing. A good robotic gripper needs to feel what it is touching (tactile sensing), and it needs to sense the positions of its fingers (proprioception). Such sensing has been missing from most soft robots.</p>

<p>In a new pair of papers, researchers from MIT’s <a href=”http://csail.mit.edu”>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.&nbsp;</p>

<p>“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing.&nbsp;</p>
<p><a href=”https://arxiv.org/abs/1910.01287″>One paper</a> builds off last year’s <a href=”http://news.mit.edu/2019/new-robot-hand-gripper-soft-and-strong-0315″>research</a> from MIT and Harvard University, where a team developed a soft and strong robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus’ flytrap, to pick up items that are as much as 100 times its weight.&nbsp;</p>

<p>To get that newfound versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the gripper not only pick up objects as delicate as potato chips, but it also classifies them — letting the robot better understand what it’s picking up, while also exhibiting that light touch.&nbsp;</p>

<p>When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of grip.</p>

<p>“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.”&nbsp;</p>

<p>In <a href=”https://arxiv.org/pdf/1910.01287.pdf”>a second paper</a>, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).&nbsp;</p>

<p>The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy.&nbsp;</p>

<p>“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”&nbsp;</p>

<p><strong>Magic ball senses&nbsp;</strong></p>

<p>The magic ball gripper is made from a soft origami structure, encased by a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms to its structure.&nbsp;</p>

<p>While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the greater intricacies of delicacy and understanding were still out of reach — until they added the sensors.&nbsp;&nbsp;</p>

<p>When the sensors experience force or strain, the internal pressure changes, and the team can measure this change in pressure to identify when it will feel that again.&nbsp;</p>

<p>In addition to the latex sensor, the team also developed an algorithm which uses feedback to let the gripper possess a human-like duality of being both strong and precise — and 80 percent of the tested objects were successfully grasped without damage.&nbsp;</p>

<p>The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.&nbsp;</p>

<p>Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage using this new sensor technology. Eventually, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.&nbsp;</p>

<p>Hughes co-wrote the new paper with Rus, which they will present virtually at the 2020 International Conference on Robotics and Automation.&nbsp;</p>

<p><strong>GelFlex</strong></p>

<p>In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way there must be rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.</p>

<p>To create GelFlex, the team used silicone material to fabricate the soft and transparent finger, and put one camera near the fingertip and the other in the middle of the finger. Then, they painted reflective ink on the front and side surface of the finger, and added LED lights on the back. This allows the internal fish-eye camera to observe the status of the front and side surface of the finger.&nbsp;</p>

<p>The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed. The gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum.&nbsp;</p>

<p>During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. Out of 80 trials, only three were classified incorrectly.&nbsp;</p>

<p>In the future, the team hopes to improve the proprioception and tactile sensing algorithms, and utilize vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors, but should be attainable with embedded cameras.</p>

<p>Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.</p>

<p></p>

Professor Ted Adelson’s team created a soft robotic finger that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).
Photo courtesy of the researchers.

Undergraduates develop next-generation intelligence toolshttps://news.mit.edu/2020/undergraduates-develop-next-generation-intelligence-tools-0526
UROP students explore applications in robotics, health care, language understanding, and nuclear engineering.
Tue, 26 May 2020 14:35:01 -0400
https://news.mit.edu/2020/undergraduates-develop-next-generation-intelligence-tools-0526
Kim Martineau | MIT Quest for Intelligence
<p>The coronavirus pandemic has driven us apart physically while reminding us of the power of technology to connect. When MIT shut its doors in March, much of campus moved online, to virtual classes, labs, and chatrooms. Among those making the pivot were students engaged in independent research under MIT’s Undergraduate Research Opportunities Program (UROP).&nbsp;</p>

<p>With regular check-ins with their advisors via Slack and Zoom, many students succeeded in pushing through to the end. One even carried on his experiments from his bedroom, after schlepping&nbsp;his Sphero Bolt robots home in a backpack. “I’ve been so impressed by their resilience and dedication,” says Katherine Gallagher, one of three artificial intelligence engineers at MIT Quest for Intelligence who works with students each semester on intelligence-related applications. “There was that initial week of craziness and then they were right back to work.” Four projects from this spring are highlighted below.</p>

<p><strong>Learning to explore the world with open eyes and ears</strong></p>

<p>Robots rely heavily on images beamed through their built-in cameras, or surrogate “eyes,” to get around. MIT senior Alon Kosowsky-Sachs thinks they could do a lot more if they also used their microphone “ears.”&nbsp;</p>

<p>From his home in Sharon, Massachusetts, where he retreated after MIT closed in March, Kosowsky-Sachs is training four baseball-sized Sphero Bolt robots to roll around a homemade arena. His goal is to teach the robots to pair sights with sounds, and to exploit this information to build better representations of their environment. He’s working with&nbsp;<a href="https://people.csail.mit.edu/pulkitag/">Pulkit Agrawal</a>, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, who is interested in designing algorithms with human-like curiosity.</p>

<p>While Kosowsky-Sachs sleeps, his robots putter away, gliding&nbsp;through an object-strewn rink he built for them from two-by-fours. Each burst of movement becomes a pair of one-second video and audio clips. By day, Kosowsky-Sachs trains a “curiosity” model aimed&nbsp;at pushing the robots to become bolder, and more skillful, at navigating their obstacle course.</p>

<p>“I want them to see something through their camera, and hear something from their microphone, and know that these two things happen together,” he says. “As humans, we combine a lot of sensory information to get added insight about the world. If we hear a thunderclap, we don’t need to see lightning to know that a storm has arrived. Our hypothesis is that robots with a better model of the world will be able to accomplish more difficult tasks.”</p>
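
<p>The project’s own code is not published in this article, so the sketch below is only illustrative: the encoder shapes, clip formats, and names are assumptions. It shows one common way to “pair sights with sounds”: training a small network to judge whether a one-second video clip and a one-second audio clip were recorded together.</p>

<pre><code>
import torch
import torch.nn as nn

class ClipEncoder(nn.Module):
    """Tiny convolutional encoder for a one-second clip (stacked video frames
    or an audio spectrogram); sizes here are placeholders."""
    def __init__(self, in_channels, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

# One encoder per modality, plus a head that predicts whether the audio and
# video snippets were recorded during the same burst of movement.
video_enc = ClipEncoder(in_channels=3)   # e.g. an RGB frame from the video clip
audio_enc = ClipEncoder(in_channels=1)   # e.g. a log-mel spectrogram of the audio clip
match_head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))

params = (list(video_enc.parameters()) + list(audio_enc.parameters())
          + list(match_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(video, audio, is_matched):
    """video: (B, 3, H, W); audio: (B, 1, F, T); is_matched: (B, 1) with 1.0
    for true pairs and 0.0 for deliberately shuffled (mismatched) pairs."""
    features = torch.cat([video_enc(video), audio_enc(audio)], dim=1)
    loss = loss_fn(match_head(features), is_matched)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the call shapes.
if __name__ == "__main__":
    v = torch.randn(8, 3, 64, 64)
    a = torch.randn(8, 1, 64, 64)
    y = torch.randint(0, 2, (8, 1)).float()
    print(training_step(v, a, y))
</code></pre>

<p>True pairs come straight from the synchronized recordings; mismatched pairs can be made simply by shuffling audio clips across different bursts of movement.</p>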

<p><strong>Training a robot agent to design a more efficient nuclear reactor&nbsp;</strong></p>

<p>One important factor driving the cost of nuclear power is the layout of its reactor core. If fuel rods are arranged in an optimal fashion, reactions last longer, burn less fuel, and need less maintenance. As engineers look for ways to bring down the cost of nuclear energy, they are eying the redesign of the reactor core.</p>

<p>“Nuclear power emits very little carbon and is surprisingly safe compared to other energy sources, even solar or wind,” says third-year student Isaac Wolverton. “We wanted to see if we could use AI to make it more efficient.”&nbsp;</p>

<p>In a project with Josh Joseph, an AI engineer at the MIT Quest, and&nbsp;<a href="http://web.mit.edu/nse/people/faculty/shirvan.html">Koroush Shirvan</a>, an assistant professor in MIT’s Department of Nuclear Science and Engineering, Wolverton spent the year training a reinforcement learning agent to find the best way to lay out fuel rods in a reactor core. To simulate the process, he turned the problem into a game, borrowing a machine learning technique for producing agents with superhuman abilities at chess and Go.</p>

<p>He started by training his agent on a simpler problem: arranging colored tiles on a grid so that as few tiles as possible of the same color would touch. As Wolverton increased the number of options, from two colors to five, and from four tiles to 225, he grew excited as the agent continued to find the best strategy. “It gave us hope we could teach it to swap the cores into an optimal arrangement,” he says.</p>
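
<p>The article does not include code for this warm-up task, but its objective is easy to make concrete. The sketch below is a rough illustration: the grid size and color count come from the description above, while a plain random-search baseline merely stands in for the reinforcement learning agent Wolverton actually trained.</p>

<pre><code>
import random

def same_color_contacts(grid):
    """Count horizontally or vertically adjacent cells sharing a color (lower is better)."""
    rows, cols = len(grid), len(grid[0])
    contacts = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and grid[r][c] == grid[r][c + 1]:
                contacts += 1
            if r + 1 < rows and grid[r][c] == grid[r + 1][c]:
                contacts += 1
    return contacts

def random_search(rows=15, cols=15, n_colors=5, iters=5000, seed=0):
    """Keep the best of many random color assignments for a 15x15 (225-tile) grid."""
    rng = random.Random(seed)
    best_grid, best_score = None, float("inf")
    for _ in range(iters):
        grid = [[rng.randrange(n_colors) for _ in range(cols)] for _ in range(rows)]
        score = same_color_contacts(grid)
        if score < best_score:
            best_grid, best_score = grid, score
    return best_grid, best_score

if __name__ == "__main__":
    _, score = random_search()
    print("fewest same-color contacts found:", score)
</code></pre>

<p>A reinforcement learning agent replaces the random proposals with a learned placement policy, but it is rewarded against the same kind of “few same-color neighbors” objective before moving on to fuel rods and enrichment levels.</p>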

<p>Eventually, Wolverton moved to an environment meant to simulate a 36-rod reactor core, with two enrichment levels and 2.1 million possible core configurations. With input from researchers in Shirvan’s lab, Wolverton trained an agent that arrived at the optimal solution.</p>

<p>The lab is now building on Wolverton’s code to try to train an agent in a life-sized 100-rod environment with 19 enrichment levels.&nbsp;“There’s no breakthrough at this point,” he says. “But we think it’s possible, if we can find enough compute resources.”</p>

<p><strong>Making more livers available to patients who need them</strong></p>

<p>About 8,000 patients in the United States receive liver transplants each year, but that’s only half the number who need one. Many more livers might&nbsp;be made available if hospitals had a faster way to screen them, researchers say. In a collaboration with&nbsp;Massachusetts General Hospital, MIT Quest is evaluating whether automation could help to boost the nation’s supply of viable livers.&nbsp;&nbsp;</p>

<p>In approving&nbsp;a liver for transplant, pathologists estimate its fat content from a slice of tissue. If it’s low enough, the liver is deemed ready for transplant. But&nbsp;there are often not enough qualified doctors to review tissue samples&nbsp;on the tight timeline needed to match livers with recipients.&nbsp;A shortage of doctors, coupled with the subjective nature of analyzing tissue, means that viable livers are inevitably discarded.</p>

<p>This loss represents a huge opportunity for machine learning, says third-year student Kuan Wei Huang, who joined the project to explore AI applications in health care. The project involves training a deep neural network to pick out globules of fat on&nbsp;liver tissue slides to estimate the liver’s overall fat content.</p>

<p>One challenge, says Huang, has been figuring out how to handle variations in how different pathologists classify fat globules. “This makes it harder to tell whether I’ve created the appropriate masks to feed into the neural net,” he says. “However, after meeting with experts in the field, I received clarifications and was able to continue working.”</p>

<p>Trained on images labeled by pathologists, the model will eventually learn to isolate fat globules&nbsp;in unlabeled images on its own. The final output will be&nbsp;a fat content estimate with pictures of highlighted fat globules showing how the model arrived at its final count. “That’s the easy part — we just count up the&nbsp;pixels in the highlighted globules&nbsp;as a percentage of the overall biopsy and we have our fat content estimate,” says the Quest’s Gallagher, who is leading the project.</p>
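
<p>That final counting step can be spelled out directly. The sketch below assumes the network has already produced binary masks for the fat globules and for the tissue region; the array names are illustrative, not taken from the project.</p>

<pre><code>
import numpy as np

def fat_fraction(fat_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Return fat pixels as a percentage of all tissue pixels in the slide image."""
    fat_pixels = np.count_nonzero(fat_mask & tissue_mask)
    tissue_pixels = np.count_nonzero(tissue_mask)
    if tissue_pixels == 0:
        raise ValueError("tissue mask is empty")
    return 100.0 * fat_pixels / tissue_pixels

# Toy 4x4 "slide": the top row is background, and 3 of the 12 tissue pixels are fat.
tissue = np.ones((4, 4), dtype=bool)
tissue[0, :] = False
fat = np.zeros((4, 4), dtype=bool)
fat[1, 0] = fat[2, 2] = fat[3, 3] = True
print(f"estimated fat content: {fat_fraction(fat, tissue):.1f}%")   # 25.0%
</code></pre>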

<p>Huang says he’s excited by the project’s potential to help people. “Using machine learning to address medical problems is one of the best ways that a computer scientist can impact the world.”</p>

<p><strong>Exposing the hidden constraints of what we mean in what we say</strong></p>

<p>Language shapes our understanding of the world in subtle ways, with slight variations in the words we use conveying sharply different meanings. The sentence, “Elephants live in Africa and Asia,” looks a lot like the sentence “Elephants eat twigs and leaves.”&nbsp;But most readers will conclude that the elephants in the first sentence are split into distinct groups living on separate continents, yet they will not apply the same reasoning to the second sentence, because eating twigs and eating leaves can both be true of the same elephant in a way that living on different continents cannot.</p>

<p>Karen Gu is a senior majoring in computer science and molecular biology, but instead of putting cells under a microscope for her SuperUROP project, she chose to look at sentences like the ones above. “I’m fascinated by the complex and subtle things that we do to constrain language understanding, almost all of it subconsciously,” she says.</p>

<p>Working with&nbsp;<a href="http://www.mit.edu/~rplevy/">Roger Levy</a>, a professor in MIT’s Department of Brain and Cognitive Sciences, and postdoc MH Tessler, Gu explored how prior knowledge guides our interpretation of syntax and ultimately, meaning. In the sentences above, prior knowledge about geography and mutual exclusivity interact with syntax to produce different meanings.</p>
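
<p>A toy calculation, not the project’s model and with invented priors, illustrates how that kind of world knowledge can tip the balance between readings of a sentence like “Elephants live in Africa and Asia.”</p>

<pre><code>
def preferred_reading(p_both):
    """p_both: prior probability that one individual can satisfy both conjuncts.
    Toy assumption: both readings are equally likely a priori, and the
    'each individual does both' reading is only plausible when individuals
    can actually satisfy both predicates."""
    prior_split, prior_each = 0.5, 0.5
    likelihood_split = 1.0        # a population split into groups is fine either way
    likelihood_each = p_both      # 'each does both' needs individuals that can do both
    unnorm = {
        "split into groups": prior_split * likelihood_split,
        "each does both": prior_each * likelihood_each,
    }
    total = sum(unnorm.values())
    return {reading: round(score / total, 3) for reading, score in unnorm.items()}

# "live in Africa and Asia": one elephant rarely lives on two continents.
print(preferred_reading(p_both=0.05))   # split reading strongly preferred
# "eat twigs and leaves": one elephant easily eats both.
print(preferred_reading(p_both=0.95))   # the two readings end up nearly tied
</code></pre>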

<p>After steeping herself in linguistic theory, Gu built a model to explain how, word by word, a given sentence produces meaning. She then ran a set of online experiments to see how human subjects would interpret analogous sentences in a story. Her experiments, she says, largely validated intuitions from linguistic theory.</p>

<p>One challenge, she says, was having to reconcile two approaches for studying language. “I had to figure out how to combine formal linguistics, which applies an almost mathematical approach to understanding how words combine, and probabilistic semantics-pragmatics, which has focused more on how people interpret whole utterances.”</p>

<p>After MIT closed in March, she was able to finish the project from her parents’ home in East Hanover, New Jersey. “Regular meetings with my advisor have been really helpful in keeping me motivated and on track,” she says. She says she also got to improve her web-development skills, which will come in handy when she starts work at Benchling, a San Francisco-based software company, this summer.</p>

<p>Spring semester Quest UROP projects were funded, in part, by the MIT-IBM Watson AI Lab and Eric Schmidt,&nbsp;technical advisor to Alphabet Inc., and his wife, Wendy.</p>

Students participating in MIT Quest for Intelligence-funded UROP projects include: (clockwise from top left) Alon Kosowsky-Sachs, Isaac Wolverton, Kuan Wei Huang, and Karen Gu.
Photo collage: Samantha Smiley

Source: MIT News