Machine learning at the edge still has a ways to go – Stacey on IoT

  • Lauren
  • June 15, 2020

When it comes to edge computing and machine learning, there’s a lot of marketing hype out there that ignores the reality of current deployments. Dan Jeavons, general manager of data science at Shell, said at a Stacey on IoT event on Thursday that while the oil and gas giant is keen to deploy machine learning at the edge, the reality is that current costs, complexity, and trade-offs stand in the way.

“So in the cloud, [machine learning] is very real,” he said. “It’s very big; it’s very now. And we’ve talked very publicly about some of the things we’ve done in that space.” But when it comes to the edge, he said, “This is quite difficult. We’ve struggled to get the scale that we want on the edge, so I should unpack some of the issues behind that.”
Dan Jeavons, GM of data science at Shell, told me that machine learning at the edge is still in the earliest stages, and there are a lot of issues to work out.
Jeavons went on to talk about how having a trillion sensors at the edge means that you have a trillion places where things can go wrong. People still have to change batteries on devices, and the organization still has to have systems that can attest to the security of those devices and make sure they are working. And in many cases, the edge sensors or computing devices are located in the wild where anyone might interfere with them.
The second challenge at the edge is cost. Companies turn to edge processing for latency-sensitive operations, when there’s a large amount of data and bandwidth costs might be too high, and when there are connectivity challenges, such as with a platform in the middle of an ocean. But an edge deployment is still a computer and, in many cases, a collection of highly developed sensors. So before you invest in putting thousands of devices out there, you have to weigh the costs against the business case.
When Jeavons is tasked with evaluating the business case, oftentimes the cost of latency or bandwidth looks reasonable compared to what it costs to buy a ton of gear. “If you’re really talking about a high-cost edge, it doesn’t work at scale because you end up coming back to the point where we’ll accept the latency on the use case in order to move the data back to the cloud,” he said. “And often the answer is that that’s more cost-effective. So I think that’s another scenario where the cost point for a low-cost edge with the right computational capabilities is emerging.”
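The trade-off Jeavons describes can be sketched as a back-of-envelope comparison: amortized edge hardware plus upkeep versus the bandwidth and cloud compute needed to ship the data back. All of the numbers and function names below are illustrative assumptions, not Shell’s actual figures.

```python
# Rough monthly cost comparison for a single site.
# Every figure here is a hypothetical assumption for illustration.

def cloud_cost(gb_per_month, cost_per_gb, monthly_cloud_compute):
    """Stream raw sensor data to the cloud and process it there."""
    return gb_per_month * cost_per_gb + monthly_cloud_compute

def edge_cost(device_capex, months_amortized, monthly_maintenance):
    """Process locally on edge hardware; amortize the upfront cost."""
    return device_capex / months_amortized + monthly_maintenance

cloud = cloud_cost(gb_per_month=500, cost_per_gb=0.10, monthly_cloud_compute=40)
edge = edge_cost(device_capex=3000, months_amortized=36, monthly_maintenance=25)

print(f"cloud: ${cloud:.2f}/month, edge: ${edge:.2f}/month")
```

With these assumed numbers, accepting the latency and moving the data back to the cloud comes out cheaper, which mirrors Jeavons’ point that a high-cost edge often loses the comparison at scale.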
He added that, currently, model migration is a huge issue standing in the way of more machine learning at the edge. “Just being able to manage thousands of models is a real logistical headache. And it’s something that we’ve been trying to figure out,” he said. “If you add to that thousands of models running in thousands of edge devices where latency may be a problem, how do you manage those sorts of logistics? And how do you maintain that at scale?”
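To make the logistics problem concrete, here is a minimal sketch of the bookkeeping involved in tracking which model version each device in a fleet is actually running. The class, field names, and rollout flow are hypothetical; a real system would also handle intermittent connectivity, attestation, and rollbacks.

```python
# Minimal fleet model registry: which model version is each device running,
# and which devices still need the latest rollout? All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Fleet:
    # device_id -> model version the device last reported running
    deployed: dict = field(default_factory=dict)
    desired_version: str = "v1"

    def set_desired(self, version):
        """Declare the version every device should eventually run."""
        self.desired_version = version

    def report(self, device_id, version):
        """A device phones home with the version it is actually running."""
        self.deployed[device_id] = version

    def stale_devices(self):
        """Devices whose deployed model lags the desired version."""
        return [d for d, v in self.deployed.items() if v != self.desired_version]

fleet = Fleet()
for i in range(5):
    fleet.report(f"sensor-{i}", "v1")
fleet.set_desired("v2")
fleet.report("sensor-0", "v2")   # only one device has picked up the update
print(fleet.stale_devices())     # the other four still need the rollout
```

Even this toy version hints at the headache: with thousands of models across thousands of devices, the registry itself becomes a distributed-systems problem rather than a spreadsheet.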
Jeavons didn’t have an answer to either of those questions, but there’s a real opportunity for anyone who can figure this out. And while Jeavons spent his time discussing why we’re still in the early days of doing machine learning at the edge, others at the event homed in on use cases.
For example, John Deere’s Julian Sanchez discussed how the farm equipment company is using computer vision on combines and tractors to differentiate between seeds and weeds, while Karen Panetta, IEEE Fellow and the dean of graduate education for the School of Engineering at Tufts University, talked about her work providing computer vision for firefighters to help them see through smoke.
Broadly speaking, we are trying to use machine learning at the edge to help machines develop the senses that humans have in order to perceive the world around them. Computer vision is all about teaching a computer to “see” things and determine what they are; for example, whether what they’re focused on is a seed or a person lighting a cigarette at a gas station (one of Shell’s test cases). In industrial manufacturing, machine learning at the edge uses vibration or sound (in other words, touch and hearing) to understand a machine’s health. There’s even the possibility of teaching machines to “taste.” As Sanchez noted, John Deere would like to use machine learning at the edge to detect the concentration of minerals in soil.
So while today it’s still incredibly complicated to deploy machine learning at the edge of an entire business such as Shell’s, we already have companies applying it to individual, useful business cases in ways that can help improve their products and produce a real return on investment. If you’d like to learn more, you can watch the sessions from this event by visiting