Everything Artificial Intelligence has ever been, hopes to be, or currently is to the enterprise is encapsulated in a single emergent concept: a hybrid term that details exactly where the technology stands today and just where it’s headed in the coming year.
The ModelOps notion is so emblematic of AI because it acknowledges the technology’s full breadth (from machine learning to its knowledge base), which Gartner indicates involves rules, agents, knowledge graphs, and more.
ModelOps is about more than simply operationalizing and governing AI models. It’s about doing so quickly, at scale, with full accountability, and in a manner that resolves the most mission-critical business problems, if not those of society as well.
Moreover, it involves doing so onsite while leveraging the advantages of the cloud and, when it comes to AI’s machine learning prowess, with a range of approaches rooted in supervised, unsupervised, and even reinforcement learning.
Implicit in these capabilities is the need to position machine learning models at the edge, transcend traditional training data limitations (and methods), and ingest everything from streaming to static data for a predictive exactness based on the most current data possible.
Or, as SAS Chief Data Scientist Wayne Thompson put it, “Right now, most organizations are just checking the scores for the model and seeing if the model’s scores have changed using an older offline model. What is state of the art is actually putting the model into the training environment, and deploy and train simultaneously and update the model’s weights.”
In many ways, ModelOps is just an updated term for model management, albeit one that acknowledges that AI is more than mere statistics while prioritizing timely deployments. ModelOps is perfected when organizations can expedite the creation and operation of tailored models for any specific use case. Thompson cited a banking example where the institution “wanted a push button system and have that thing run much like a factory. And, yes they want to be able to checkpoint and see if things are going out of whack at any point in time, but they want to truly automate.”
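Thompson’s “deploy and train simultaneously” pattern can be illustrated with a minimal sketch: a linear model that serves a score for each incoming record, then immediately takes one gradient step on that same record so its weights track the freshest data. The class, learning rate, and data below are invented for illustration, not SAS’s implementation.

```python
# Minimal sketch of "deploy and train simultaneously": score each record,
# then update the model's weights on that same record (illustrative only).

class OnlineLinearModel:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        # Deployment path: serve a prediction.
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        # Training path: one SGD step on the record just scored,
        # so weights reflect current data rather than a stale batch.
        err = self.score(x) - y
        for i, xi in enumerate(x):
            self.w[i] -= self.lr * err * xi
        self.b -= self.lr * err

model = OnlineLinearModel(n_features=2)
# Simulated stream where the true relation is y = 2*x1 + 3*x2
stream = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0), ([1.0, 1.0], 5.0)] * 500
for x, y in stream:
    prediction = model.score(x)   # deploy: serve the score
    model.update(x, y)            # train: refresh the weights immediately
```

The same two-method loop is the essence of the banking “factory” Thompson describes: scoring and weight updates happen in the same pass over production data.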
Platforms tailored around model management facilitate these boons in plentiful ways. Firstly, they can score models and their results by placing “these models into packages like this ASTORE or into a scoring function and hand that over to a much more conservative, much more structured, much more highly regulated [audience]: something that has 99999 reliability associated with it,” Thompson said. They can also integrate model production into workflows with APIs, illustrating the cloud’s mounting importance to AI. Most importantly, they can input models into production points to dynamically adjust weights and measurements with real data, as opposed to stale or historic data.
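The idea of handing a model to a more conservative, highly regulated audience as a self-contained scoring artifact can be sketched generically; the JSON format below is a hypothetical stand-in for a package like ASTORE, and every name in it is invented.

```python
# Hedged sketch of packaging a trained model as a portable scoring artifact
# (a generic stand-in for formats such as SAS's ASTORE; names illustrative).
import json

def package_model(weights, bias):
    """Serialize the model's coefficients into a portable artifact."""
    return json.dumps({"weights": weights, "bias": bias})

def load_scoring_function(artifact):
    """Rebuild a pure scoring function from the artifact alone --
    the consumer needs no training code, just this loader."""
    params = json.loads(artifact)
    w, b = params["weights"], params["bias"]
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b

artifact = package_model([0.4, -1.2], 0.05)   # e.g. persisted to a model registry
score = load_scoring_function(artifact)
risk = score([1.0, 2.0])                      # 0.4 - 2.4 + 0.05, i.e. about -1.95
```

Because the artifact is plain data, it can also sit behind an API endpoint, which is how model production is typically integrated into cloud workflows.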
The Internet of Things and edge computing provide peerless opportunities to update models in real time to counter model drift, which will otherwise intrinsically occur over time. Whereas ModelOps use cases in finance involve automating the scaffolding and delivery of models at scale for targeted customer micro-segmentation, compelling IoT deployments center on preeminent public (and private) health concerns with streaming data and, oftentimes, computer vision. The predominant problem with the so-called AIoT is fitting credible models into endpoint devices “because deep learning models are so big,” Thompson reflected.
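One common way to check whether a deployed model is drifting is to compare its recent score distribution against a training-time baseline with a Population Stability Index. The sketch below assumes this conventional metric and its usual rule-of-thumb threshold, not any particular vendor’s method; the data is synthetic.

```python
# Illustrative drift check: Population Stability Index between a baseline
# score distribution and a recent one (thresholds are rules of thumb).
import math

def psi(baseline, recent, bins=10):
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0          # guard against a degenerate range

    def histogram(scores):
        counts = [0] * bins
        for s in scores:
            i = min(int((s - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        n = len(scores)
        return [max(c / n, 1e-6) for c in counts]  # avoid log(0)

    b, r = histogram(baseline), histogram(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

baseline = [i / 100 for i in range(100)]          # scores at training time
drifted  = [0.5 + i / 200 for i in range(100)]    # recent scores, shifted upward
value = psi(baseline, drifted)
# A PSI above roughly 0.25 is a common signal that retraining is due.
```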
A reliable solution is to position them into an “ASTORE file, which is just a binary blob that we pack all these co-efficients into so that it’s transparent to you,” Thompson remarked. “That binary file gets stored and compacted, and then can be shared.” With this method, organizations can support computer vision and object detection use cases to ensure people preserve social distancing, implement contact tracing, or just monitor equipment asset health in the Industrial Internet. Moreover, they can leverage an approach in which models adjust to the actual production data, while utilizing architecture and hardware best practices for TinyML.
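Packing coefficients into a compact, shareable binary blob can be sketched in a few lines. The format below (a count header, 32-bit floats, deflate compression) is invented for illustration in the spirit of the ASTORE file Thompson describes, not its actual layout.

```python
# Minimal sketch of compacting model coefficients into a binary blob that
# can be shipped to an edge device (invented format, for illustration).
import struct
import zlib

def pack(coefficients):
    # Header: coefficient count as uint32, then 32-bit floats, then compress.
    raw = struct.pack(f"<I{len(coefficients)}f", len(coefficients), *coefficients)
    return zlib.compress(raw)

def unpack(blob):
    raw = zlib.decompress(blob)
    (n,) = struct.unpack_from("<I", raw)
    return list(struct.unpack_from(f"<{n}f", raw, offset=4))

weights = [0.125, -2.5, 3.75]   # float32-exact values round-trip losslessly
blob = pack(weights)            # compact payload, ready to share with a device
restored = unpack(blob)
```

Storing weights as 32-bit (or smaller) values is also the basic move behind TinyML-style quantization, trading precision for footprint on constrained hardware.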
The vitality of the cloud to ModelOps is almost unparalleled, particularly with the current emphasis on remote collaboration. Competitive ModelOps solutions are containerized for ease of deployment; numerous model procurement options are cloud accessible. Almost any workflow is merely an API call away. The cloud is increasingly becoming the setting for training machine learning models, operating as a fecund launching point for machine learning’s three chief forms:
- Supervised Learning: This machine learning variety requires labeled training data. According to Thompson, “Some of these models, like a recommendation engine, we actually learn online as we’re building out a user item dataset.” These real-time inputs in the cloud, updated every time a customer makes a purchase, for example, provide optimal training for models. “The more that you can actually train that model as the data is being collected, your model is going to be much fresher,” Thompson acknowledged.
- Reinforcement Learning: This machine learning type eschews typical training datasets; instead, an agent dynamically interacts with an environment according to a series of what Thompson called rewards and constraints. Cutting-edge cloud measures enable organizations to let agents learn in simulation, then “swap out the simulated environment with the real environment; it’s the same API so the reinforcement agent is agnostic to this,” Thompson observed. “And, it’s possible once again to deploy and train simultaneously.”
- Unsupervised Learning: This learning variety involves training data without labels. It’s based on clustering techniques and dimensionality reduction measures, which are enormously beneficial when dealing with data at scale. These measures “help you do dimension reduction by being able to, let’s say, reduce 500 variables into three,” said Gul Ege, SAS Senior Director of Advanced Analytics, Research and Development. This capacity is critical for IoT streaming data deployments, like observing manufacturing lines with computer vision, and “devising what are the parts of this data you really need to keep without going to the cloud if you need to,” Ege explained. “The rest is either just noise or…too much data saying the same thing.”
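The simulated-to-real swap Thompson describes hinges on both environments honoring the same API, so the agent’s code never changes when the real system is plugged in. A toy sketch, with all class, policy, and reward names invented:

```python
# Sketch of the "same API" swap: the agent only sees reset()/step(),
# so a simulated environment can be replaced by the real one unchanged.

class SimulatedEnv:
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Toy dynamics: reward 1.0 when the action matches the next
        # state's parity; the episode ends after five steps.
        self.state += 1
        reward = 1.0 if action == self.state % 2 else 0.0
        done = self.state >= 5
        return self.state, reward, done

class RealEnv(SimulatedEnv):
    # In production this would talk to live systems; it only has to honor
    # the same reset()/step() contract, so the agent is agnostic to it.
    pass

def run_episode(env, policy):
    state, total, done = env.reset(), 0.0, False
    while not done:
        state, reward, done = env.step(policy(state))
        total += reward
    return total

parity_policy = lambda s: (s + 1) % 2            # always guesses the next parity
sim_return = run_episode(SimulatedEnv(), parity_policy)
real_return = run_episode(RealEnv(), parity_policy)  # identical call site
```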
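Ege’s example of collapsing hundreds of variables into a handful can be sketched with plain principal component analysis; here six observed variables (standing in for 500) are reduced to two (standing in for three), on synthetic data invented for illustration.

```python
# Hedged sketch of dimension reduction via PCA: project redundant
# high-dimensional readings onto their top principal components.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))              # two true underlying drivers
mixing = rng.normal(size=(2, 6))
# Six observed variables that mostly restate the two drivers, plus noise
X = latent @ mixing + 0.01 * rng.normal(size=(200, 6))

Xc = X - X.mean(axis=0)
# Eigendecomposition of the covariance matrix; keep the top components
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:2]]
reduced = Xc @ components                       # 200 x 2: the data worth keeping

# Fraction of total variance the kept components explain; the remainder
# is "just noise or too much data saying the same thing."
explained = eigvals[order[:2]].sum() / eigvals.sum()
```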
Predictive vs. Prescriptive Analytics
Robotic Process Automation has arisen as one of the most widely adopted means of translating AI’s predictive effectiveness into a prescriptive one. According to One Network COO Joe Bellini, these bots are capable of “not only being able to analyze and predict, but actually prescribe and execute. And, you can make it autonomous.” By equipping virtual agents with machine learning capabilities, they can identify trends in supply chain networks between organizations, for example, and ascertain how best to react to a predicted shortage. “The agent can not only recommend prescriptions based on the data that’s available in the network, but the agent can also execute the decisions that are made—make it actionable,” Bellini confirmed. “So, actually change a plan, change a schedule, change a load on a carrier, reallocate.”
Bots can also implement the necessary details to profit from prescriptive analytics, which Automation Anywhere SVP of Products and Solutions Marketing Kevin Murray characterized as “the last mile activities around automation.” He outlined a help desk use case in which AI systems recommend a replacement product for a customer before bots are tasked with actuating “the procurement, the inventory, the shipping, all the information that happens downstream after that magic recommendation has occurred.” Pertinent RPA trends for AI include assembling individual bots into comprehensive digital assistants able to integrate with data, systems, and computing paradigms across environments. Pairing these capabilities with reinforcement learning may prove a long-term solution for AI’s progression into general AI.
Natural Language Technologies
Conversational AI is still the summit of natural language technologies because it amalgamates facets of Natural Language Processing, Natural Language Understanding, Natural Language Generation, and Natural Language Querying. It’s a practical way to interact with systems sans physical contact, which is lauded in contemporary social settings. According to TopQuadrant CTO Ralph Hodgson, however, natural language technologies may soon include a synthesis with image data, heralding a convergence between the realms of image recognition and NLP. Based on what he called “word vectors”, these capabilities can “make a text document appear with an image,” Hodgson noted. Current applications include unstructured textual data, which is surging throughout the enterprise. “Neural networks are going to do what they do for images against text—documents,” Hodgson predicted.
The overall import of ModelOps is twofold. It ensures model accountability for data governance staples like lifecycle management, and expeditiously positions models where they avail the enterprise most. Many of these deployments, whether involving natural language interactions or computer vision, are at the network’s edge. The present public health crisis reinforces this need and ModelOps’ merit.
“It kind of seems silly that we were going to have stores without cashiers a year ago, but now it seems pretty relevant,” opined Jacob Smith, Equinix VP of Bare Metal Strategy & Marketing. “Same with drones that deliver your groceries.” Such relevancy makes it imperative to continually update predictive models with the latest data for simultaneous deployment, training, and optimization.
About the Author
Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.