3 Vectors of Artificial Intelligence and Machine Learning

Hosted for the global cloud computing community, Amazon Web Services' re:Invent 2021 brought together developers, engineers, IT executives and the technical decision-makers who are transforming how the world around us operates. The early stages of IT infrastructure were inflexible and expensive, but this year's conference brought to light the next shift in the digital journey, highlighting the cloud's leading role as an enabler of how businesses work with machine learning (ML) and artificial intelligence (AI).

In this on-the-show-floor video from the event, we looked at the three areas that are reshaping business processes and environments: the intelligent applications that embed AI/ML and take advantage of data, the system of enablers that allows them to reach scale, and the chips that power them. We spoke with Tom Trahan, vice president of business development at CircleCI; Matt McIlwain, managing director at Madrona Venture Group; and Luis Ceze, CEO at OctoML. TNS Publisher Alex Williams hosted these conversations.

Watch our recap here and read our lightly edited transcript of the video below.

[Embedded video: AWS re:Invent 2021 recap]

At AWS re:Invent, we saw the intersection of three vectors in the artificial intelligence and machine learning landscape. Number one: a new generation of data clouds and intelligent applications. Number two: scalable AI/ML development. And number three: chips, chips, and more chips.

A New Generation of Data Clouds and Intelligent Applications

Number one, data clouds are leading to a new generation of intelligent applications. Madrona Venture Group is marking this shift in intelligent apps with a new list it calls the #IA40. With support from Goldman Sachs, the list represents early-, mid- and late-stage private companies. It also showcases technology-enabling companies like Databricks and DataRobot that are empowering intelligent applications. Continuous development workflows are emerging as data gets programmed more deeply into the apps.

Matt McIlwain, Madrona Venture Group: I think you're going to see the conceptual equivalents of things like GitLab and GitHub that are going to need to be built out on the data side. An interesting example applied to life sciences — and they're a great company — is called Insitro. They published an open source project — something I think they call redun — which is a version of a data pipeline. Ultimately, those two worlds must come together: I built my data models; I've trained them; I think they're ready for deployment; and they're being deployed with this contextually relevant software to go solve a problem. So, I think what we'll see most interestingly in the next three to five years is how those two parallel paths get built out, especially on the data pipeline side, and then how they integrate and come together to deal with issues like model drift, ongoing model learning and improving of the models, based on what we're seeing out in the real world.

Scalable AI/ML Development

Williams: Is MLOps for real? It's happening. With services such as AutoML, it's getting easier to scale out machine learning models. OctoML is based on Apache TVM, an end-to-end machine learning compiler framework for CPUs, GPUs and accelerators. Apache TVM is a project that originated at the University of Washington; TVM stands for Tensor Virtual Machine. It provides a common layer across hardware targets that exposes a clean interface to the upper layers of the stack and to machine learning frameworks such as TensorFlow and PyTorch.
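
To make that concrete, here is a minimal sketch of compiling and running a model with Apache TVM, assuming the standard Python API of the era; the toy PyTorch model, input name and "llvm" CPU target are illustrative, not from the event:

```python
# A minimal sketch of compiling a PyTorch model with Apache TVM.
# The toy model, input name and "llvm" CPU target are illustrative.
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Trace a tiny PyTorch model so TVM's Relay frontend can import it.
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()).eval()
example = torch.randn(1, 8)
scripted = torch.jit.trace(model, example)

# Import into Relay; swapping the target string retargets the same
# model to CPUs, GPUs or accelerators.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 8))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module with the graph executor.
device = tvm.cpu()
runtime = graph_executor.GraphModule(lib["default"](device))
runtime.set_input("input0", tvm.nd.array(example.numpy()))
runtime.run()
print(runtime.get_output(0))
```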

Luis Ceze, OctoML: What Apache TVM does is create a set of common primitives across all sorts of different hardware, from embedded CPUs to server CPUs, small GPUs, large GPUs, accelerators and so on. And then it uses machine learning internally to produce efficient machine learning code. So, in essence, it uses machine learning for machine learning code optimization. The reason that's important is because, by and large today, the work done to get a model ready for deployment is manual. It involves a lot of manual software engineering to get your model ready to be deployed. And we (OctoML) automate that using these machine learning-based techniques.
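
The "machine learning for machine learning code optimization" Ceze describes corresponds to TVM's auto-scheduler, which trains a cost model on measured runs to steer the search for fast kernel schedules. A hedged sketch, assuming the stock auto_scheduler API; the matmul workload, sizes and trial count are illustrative:

```python
# Sketch of TVM's ML-driven kernel tuning (auto_scheduler).
# Workload, sizes and trial count are illustrative choices.
import tvm
from tvm import te, auto_scheduler

@auto_scheduler.register_workload
def matmul(N, M, K):
    # Declare a plain matrix multiply; the scheduler searches for
    # an efficient implementation of it on the chosen target.
    A = te.placeholder((N, K), name="A")
    B = te.placeholder((K, M), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]

target = tvm.target.Target("llvm")
task = auto_scheduler.SearchTask(func=matmul, args=(128, 128, 128), target=target)

# Each measured trial feeds a learned cost model that proposes the
# next candidate schedules: machine learning optimizing ML code.
options = auto_scheduler.TuningOptions(
    num_measure_trials=64,
    measure_callbacks=[auto_scheduler.RecordToFile("matmul.json")],
)
task.tune(options)
schedule, args = task.apply_best("matmul.json")
```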

Williams: AWS SageMaker Canvas now offers no-code/low-code capability, as does AWS Amplify.

McIlwain: What I found even more interesting is the effort to make machine learning more accessible. There's a new capability on AWS SageMaker that is essentially a no-code ability. No code and low code are a popular trend right now in software in general; applying that to the machine learning area, we think, will be a big trend in the years ahead. And there's Amazon being out early on that with their SageMaker, you know, Canvas offering. So, I found that to be very interesting. Also, Amazon has been working on their own versions of chipsets that are Arm-based. They've got the Graviton chips; they came out with Graviton3, so I think there's that whole generation of chips as well that's quite interesting. Because model training is becoming increasingly specific — I choose particular frameworks that have different data types, and I want to train them on very specific types of instances at high capacity — and that's what they're trying to enable.
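
As a concrete counterpart to that last point, pinning a particular framework version and instance type is exactly what the SageMaker Python SDK exposes. A minimal sketch, in which the training script, IAM role and S3 paths are hypothetical placeholders:

```python
# Hedged sketch: launching a SageMaker training job on a specific
# framework version and instance type. The script, role ARN and S3
# locations below are hypothetical placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                         # hypothetical script
    role="arn:aws:iam::123456789012:role/Example",  # placeholder role
    framework_version="1.9",                        # pin the framework
    py_version="py38",
    instance_count=1,
    instance_type="ml.c5.2xlarge",                  # pin the instance type
)
estimator.fit({"train": "s3://example-bucket/train"})  # placeholder path
```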

Chips, Chips, and More Chips

Williams: Number three: chips, chips and more chips. The AWS Graviton chip is an example of the specialized hardware that fits both continuous development environments and AI/ML workloads. The primary difference comes down to the amount of processing power used: lower energy consumption means lower costs, with savings of up to 30% to 40% on your compute.

Tom Trahan, CircleCI: And so, with CircleCI supporting workloads running on Arm, they're able to do a multi-architecture build in the same pipeline flows. So, with one commit into their repositories, they're able to build for Intel architecture and for Arm architecture, across the multiple form factors in which they want their application to be available and distributed. And they are able to do that all in a single pipeline, as opposed to previously, when they would run separate solutions to try to accomplish those two things.
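
A single-pipeline, multi-architecture setup of the kind Trahan describes might look like the following CircleCI config sketch; the job names, machine image and build command are illustrative assumptions, with arm.medium as the Arm resource class CircleCI offers:

```yaml
# Illustrative CircleCI config: one commit triggers both an x86 and
# an Arm build in a single workflow. Image and commands are assumptions.
version: 2.1
jobs:
  build-amd64:
    machine:
      image: ubuntu-2004:current
    resource_class: medium        # x86 executor
    steps:
      - checkout
      - run: make build           # hypothetical build command
  build-arm64:
    machine:
      image: ubuntu-2004:current
    resource_class: arm.medium    # Arm executor
    steps:
      - checkout
      - run: make build
workflows:
  multi-arch:
    jobs:
      - build-amd64
      - build-arm64
```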

Williams: In conclusion, three themes surfaced at AWS re:Invent. Number one: data clouds are emerging as a middle layer on cloud services; Snowflake and services such as Amazon Redshift allow for an easier convergence of different data silos, leading to a new generation of intelligent applications. Number two: AI and ML are moving out of the research lab, and with that shift comes the first emergence of simpler ways to deploy machine learning models at scale. Number three: a new generation of chips is emerging for compute-intensive workloads, and those chips are also being used in CI/CD environments.