Artificial intelligence on the edge – WSU News

Many of us may not even understand exactly where or what the Cloud is.
Yet much of the data and many of the programs that control our lives live on this Cloud of distant computer servers, with the instructions that run our devices arriving over the Internet.
As the prevalence of artificial intelligence (AI)-driven devices grows, researchers would like to bring some of that decision-making back to our own devices. WSU researchers have developed a novel framework to more efficiently use AI algorithms on mobile platforms and other portable devices. They presented their most recent work at the 2020 Design Automation Conference and the 2020 International Conference on Computer Aided Design.
Jana Doppa
“The goal is to push intelligence to mobile platforms that are resource-constrained in terms of power, computation, and memory,” said Jana Doppa, George and Joan Berry Associate Professor in the School of Electrical Engineering and Computer Science. “This has a huge number of applications ranging from mobile health, augmented and virtual reality, self-driving cars, digital agriculture, and image and video processing mobile applications.”
Voice-recognition software, mobile health, robotics, and Internet-of-Things devices all use artificial intelligence to keep society moving at an ever-faster and automated pace. Self-driving cars powered by AI algorithms remain somewhere on the not-too-distant horizon.
The decisions for these increasingly sophisticated devices are all made in the Cloud, but as demands increase, the Cloud becomes increasingly problematic, Doppa said. For one thing, it isn’t fast enough: for a device in a self-driving car to decide to turn right while “looking” both ways, information must travel from the car to the Cloud and back again.
“The time required to make decisions might not meet real-time requirements,” said Partha Pande, Boeing Centennial Chair professor in the School of Electrical Engineering and Computer Science, who collaborated on this research.
Many rural or underdeveloped areas also lack easy access to the infrastructure that AI-related communications require, and transferring information back and forth through the Cloud can raise privacy concerns.
At the same time, however, running sophisticated computer algorithms directly on portable devices is also problematic: a phone’s processor is far less powerful than a server’s, its memory is small, and heavy decision-making quickly drains the battery.
“We need to run the algorithms in a resource-constrained environment,” Pande said.
Doppa’s group came up with a framework that is able to run complex neural network-based algorithms locally using less power and computation.
The researchers took an approach that prioritizes problem solving. Just as human decision-making devotes more or less brain power to problems of varying complexity, their framework spends substantial energy only on the complex parts of a problem while using fewer resources for the easy ones.
“By doing this, we are improving performance and saving a lot of energy,” Doppa said.
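One common way to realize this idea is “early exit” inference: a cheap model handles each input first, and a larger, more energy-hungry model runs only when the cheap one is unsure. The sketch below is purely illustrative; the toy confidence scores, threshold, and stage functions are invented for this example and are not the WSU researchers’ actual method.

```python
# Illustrative sketch of input-adaptive ("early exit") inference: spend
# compute only on hard inputs. All numbers here are hypothetical.

def cheap_stage(x):
    """A small, low-energy model: a toy confidence based on input magnitude."""
    confidence = min(abs(x) / 10.0, 1.0)
    return ("nonnegative" if x >= 0 else "negative", confidence)

def expensive_stage(x):
    """A larger, high-energy model, invoked only when the cheap stage is unsure."""
    return ("nonnegative" if x >= 0 else "negative", 0.99)

def predict(x, confidence_threshold=0.8):
    """Run the cheap stage first; escalate only when confidence is low."""
    label, conf = cheap_stage(x)
    if conf >= confidence_threshold:
        return label, "cheap"        # early exit: easy inputs stop here
    label, _ = expensive_stage(x)
    return label, "expensive"        # hard inputs pay the full energy cost

print(predict(12))   # clearly easy input -> ('nonnegative', 'cheap')
print(predict(-1))   # ambiguous input  -> ('negative', 'expensive')
```

Because most real-world inputs tend to be easy, the expensive stage runs rarely, which is where the average energy savings come from.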
So, for instance, in a digital agriculture application, their more efficient software and hardware could be embedded on an unmanned aerial vehicle (UAV), allowing it to make crop-spraying decisions with lower computational and energy requirements.
Nitthilan Kanappan Jayakodi
The researchers have applied their algorithms to virtual/augmented reality as well as image editing applications. They are the first to adapt state-of-the-art AI approaches for structured outputs to a mobile platform. These include Graph Convolutional Networks (GCNs), which are used to produce three-dimensional object shapes from images in augmented and virtual reality, and Generative Adversarial Network (GAN) technology, which is used to generate synthetic images. In the case of the GAN technology, the researchers’ solution achieved more than 50% energy savings at the cost of about a 10% loss in accuracy.
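As a back-of-the-envelope illustration of how that kind of tradeoff arises (the stage energies, accuracies, and exit fraction below are invented for this sketch, not the paper’s measurements): if a fraction of inputs exit early on a cheap stage, the average energy and accuracy are simple weighted averages of the two stages.

```python
# Hypothetical numbers illustrating an energy/accuracy tradeoff of the kind
# described above; these are NOT measurements from the WSU papers.

def tradeoff(exit_fraction, cheap_energy, full_energy, cheap_acc, full_acc):
    """Average energy savings and accuracy when `exit_fraction` of inputs
    stop at the cheap stage and the rest run the full model."""
    avg_energy = exit_fraction * cheap_energy + (1 - exit_fraction) * full_energy
    avg_acc = exit_fraction * cheap_acc + (1 - exit_fraction) * full_acc
    savings = 1 - avg_energy / full_energy
    return savings, avg_acc

# Suppose 70% of inputs exit early on a stage using 1/5 the full model's energy:
savings, acc = tradeoff(0.70, cheap_energy=1.0, full_energy=5.0,
                        cheap_acc=0.85, full_acc=0.95)
print(f"{savings:.0%} energy saved at {acc:.0%} accuracy")  # 56% energy saved at 88% accuracy
```

Raising the exit threshold shifts the balance back toward accuracy at the cost of energy, which is the knob such systems tune.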
“Since mobile platforms are constrained by resources, there is a great need for low-overhead solutions for these emerging GCNs and GANs to perform energy-constrained inference,” said Nitthilan Kanappan Jayakodi, a graduate student in the School of Electrical Engineering and Computer Science who was lead author on the research and was selected as a Richard Newton Young Fellow from the ACM Special Interest Group on Design Automation for his outstanding research contributions. “To the best of our knowledge, this is the first work on studying methods to deploy emerging GCNs and GANs to predict complex structured outputs on mobile platforms.”
The work was funded by the National Science Foundation and the U.S. Army Research Office.