
2023 Trends in Artificial Intelligence and Machine Learning: Generative AI Unfolds

At present, the potential for generative Artificial Intelligence, the class of predominantly advanced machine learning that analyzes existing content to produce strikingly similar new content, is boundless.

These technologies have transcended Natural Language Generation, in which they achieved much of their early renown via paradigms such as Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-trained Transformer 3 (GPT-3), and other deep neural networks. Although it is still utilized to create verbal summaries of documents and analytics results, generative AI is now widely employed to compose poetry, music, visual art, and many other things once thought reserved for the realm of human ingenuity.

A closer examination of the technological underpinnings driving these applications reveals several pointed findings that will shape their influence on enterprise AI in the new year:

  • Statistical Techniques: Generative AI primarily relies on AI’s statistical foundation, not its knowledge foundation. It excels in machine learning, eschews machine reasoning, and doesn’t actually understand the underlying domains to which it’s applied.
  • Deep Learning: Generative AI models are predicated on the massive scale and compute required for deep learning. Consequently, they're subject to the same caveats raised about this form of AI for years regarding the applications for which it is and isn't suitable; the latter involve transparency, explainability, and interpretability.
  • Neural Networks: Because of its reliance on deep neural networks such as Generative Adversarial Networks, generative AI tends to forsake simpler machine learning models that are still highly relevant for business cases like fraud detection, information security, and recommendations (a minimal sketch of the adversarial setup follows this list).
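
To make the Generative Adversarial Network reference above concrete, here is a minimal sketch of the generator/discriminator pairing, assuming PyTorch. The layer sizes, dimensions, and random data are illustrative assumptions, not any production model.

```python
# Minimal GAN sketch (illustrative only): a generator that fabricates samples
# and a discriminator that judges how "real" they look.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to synthetic samples that mimic the training data.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128),
    nn.ReLU(),
    nn.Linear(128, data_dim),
)

# Discriminator: scores how likely a sample is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

noise = torch.randn(8, latent_dim)             # a batch of random latent vectors
fake_samples = generator(noise)                # "new content" resembling the training data
realism_scores = discriminator(fake_samples)   # the adversary's judgment of each sample
print(realism_scores.shape)                    # torch.Size([8, 1])
```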

Still, generative AI’s benefits of automation, time-to-action, and scalability are the very reasons organizations rely on AI in the first place. Prudent companies will harness these advantages within broader frameworks for mitigating the shortfalls of advanced machine learning, providing tangible business value for decision support, customer satisfaction, workload optimization, and cost reduction.

By tempering the use of deep neural networks within highly reliable constructs such as rules-based systems, ensemble modeling, graph techniques, and others, generative AI may prove the vast leap forward its champions proclaim it is.

GPT-3

There are innumerable deployments of Natural Language Generation that are directly attributable to both generative AI and its advanced machine learning methods. “GPT-3 can sometimes generate, given a context and a particular bound scenario, specific boilerplate code or boilerplate text that can be useful for… summaries,” posited Ignacio Segovia, Altimetrik Head of Product Engineering. Consequently, there are increasing numbers of use cases for implementing these deep learning approaches to deliver a rapid synopsis of interactions between customers and bots, company representatives, and system interfaces. On the one hand, this approach benefits customer segments because “this person who’s looking for a retirement plan might be actually a better candidate to be in this other type of retirement plan, just because that particular summarization of a conversation indicated that,” Segovia remarked.
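
As a rough illustration of this kind of conversation summarization, the sketch below uses the Hugging Face transformers summarization pipeline; the model name, transcript, and generation settings are assumptions for illustration, not the deployment Segovia describes.

```python
# Illustrative sketch: summarize a customer conversation with an off-the-shelf model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = (
    "Customer: I'm 52 and want to retire at 60, but my current plan feels too risky. "
    "Agent: We can look at plans weighted toward bonds with a guaranteed income option. "
    "Customer: Yes, predictable income matters more to me than growth at this point."
)

summary = summarizer(transcript, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])  # a short synopsis a representative or downstream system can act on
```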

On the other hand, when transformer architecture is involved, it’s possible to provide summaries of otherwise unmanageable quantities of data that benefit an organization’s employees. “Copilot takes GitHub’s entire known space of code, imagine that, and renders that very quickly into boilerplate code where the developer can simply worry about the validity of it, formulate it, and fine-tune it,” Segovia commented. Although this particular use case doesn’t involve writing code from scratch, GPT-3’s synopsis of code, at scale, is remarkable for developers building innovative applications or simply improving existing ones.
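
In the same spirit, the sketch below prompts an openly available code-generation model for boilerplate. The model named here is an assumption standing in for Copilot, which is a proprietary service, and the prompt is purely illustrative.

```python
# Illustrative sketch: generate boilerplate code from a short prompt, which the
# developer must still validate and fine-tune.
from transformers import pipeline

codegen = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = (
    "# Python function that validates an email address with a regular expression\n"
    "def is_valid_email(address):"
)
completion = codegen(prompt, max_new_tokens=64, do_sample=False)
print(completion[0]["generated_text"])
```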

Algorithmic Rules

As Segovia alluded to, one of the most desired outputs of generative AI is the capability to produce code for developers writing new applications or designing web pages. At present, however, there are methods by which AI can improve code to lower costs and increase efficiency for universal needs—like cloud computing. By coupling AI’s knowledge foundation and its statistical foundation (which Gartner has termed composite AI), intelligent systems can optimize cloud workloads, find the most expensive queries in popular tools like Snowflake, and determine how to decrease their costs. According to Bluesky CEO Mingsheng Hong, there are “computer algorithms [that] do what human experts used to do; they can scale up and analyze not just 10, 100 queries, but millions of queries.”

What’s significant about this application is that it doesn’t rely on deep learning, but on what Hong described as “rules-based algorithms.” Employing rules with supervised learning approaches enables the system to refine human-created queries and optimize their cost-efficiency by providing “tuning recommendations across changing the query code and changing the data layout,” Hong revealed. Although AI isn’t used to create the code for querying cloud sources, it still modifies it, as needed, to boost cost efficiency and workload optimization. The horizontal applicability of this use case is as noteworthy as the fact that the composite AI approach it relies on is not typically considered part of generative AI. However, it’s still assistive for writing code, which is one of the primary deployments of generative AI for practical business utility.
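
A minimal sketch of this rules-based triage pattern appears below; the thresholds, field names, and recommendations are illustrative assumptions and do not reflect Bluesky’s actual logic or Snowflake’s interfaces.

```python
# Illustrative sketch: apply expert-style rules to per-query statistics and emit
# tuning recommendations, the way a human reviewer would, but across the whole workload.
from dataclasses import dataclass

@dataclass
class QueryStats:
    query_id: str
    bytes_scanned: int
    elapsed_seconds: float
    used_clustering_key: bool

def tuning_recommendations(q: QueryStats) -> list[str]:
    """Return cost-tuning advice for one query based on simple expert rules."""
    recs = []
    if q.bytes_scanned > 50_000_000_000:   # assumed threshold: full scans over ~50 GB
        recs.append("add a WHERE filter or select fewer columns to cut bytes scanned")
    if not q.used_clustering_key:          # a data-layout change rather than a query change
        recs.append("consider re-clustering the table on the most common filter column")
    if q.elapsed_seconds > 300:
        recs.append("review warehouse sizing; long runtimes may warrant a larger warehouse")
    return recs

# Scan many query profiles the way a human expert would review a handful.
workload = [
    QueryStats("q1", 80_000_000_000, 412.0, False),
    QueryStats("q2", 2_000_000_000, 12.5, True),
]
for q in workload:
    for rec in tuning_recommendations(q):
        print(f"{q.query_id}: {rec}")
```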

Classic Machine Learning 

The pervasiveness of advanced machine learning has not rendered, and likely will not render, traditional machine learning obsolete. It’s not uncommon to provide fraud detection, Anti-Money Laundering measures, and cyber security via classic machine learning, while fortifying much-needed explainability in the process. Ensemble modeling techniques, which combine the predictive prowess of multiple machine learning models to exceed what any single one achieves, are instrumental in maximizing the value of traditional machine learning algorithms for use cases involving generative AI. With this approach, simple machine learning models each analyze one aspect of a transaction (like its device or amount). Fusing them preserves their traceability while covering an enlarging scope of fraudulent activity. Thus, organizations can “easily have a textual output next to a [fraud detection] score saying the size of this transaction is way too high for this economic sector,” mentioned Martin Rehak, Resistant AI CEO.
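
The sketch below illustrates that ensemble pattern with scikit-learn: simple, traceable models each score one aspect of a transaction, their probabilities are fused, and a textual note accompanies the score. The features, thresholds, and synthetic data are assumptions for illustration, not Resistant AI’s implementation.

```python
# Illustrative sketch: per-aspect models (amount, device) fused into one fraud score,
# with human-readable reasons alongside it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: one model looks only at the amount, another only at device risk.
amounts = rng.uniform(10, 10_000, size=(500, 1))
device_risk = rng.uniform(0, 1, size=(500, 1))
labels = ((amounts[:, 0] > 8_000) | (device_risk[:, 0] > 0.9)).astype(int)

amount_model = LogisticRegression(max_iter=1000).fit(amounts, labels)
device_model = LogisticRegression(max_iter=1000).fit(device_risk, labels)

def score_transaction(amount: float, risk: float, sector_median: float) -> tuple[float, list[str]]:
    """Fuse per-aspect probabilities and attach textual reasons for the score."""
    p_amount = amount_model.predict_proba([[amount]])[0, 1]
    p_device = device_model.predict_proba([[risk]])[0, 1]
    fused = (p_amount + p_device) / 2   # a simple average keeps each vote traceable
    reasons = []
    if amount > 3 * sector_median:
        reasons.append("transaction size is far above the norm for this economic sector")
    if p_device > 0.8:
        reasons.append("device profile resembles previously flagged devices")
    return fused, reasons

score, notes = score_transaction(amount=9_500, risk=0.95, sector_median=1_200)
print(f"fraud score {score:.2f}: " + "; ".join(notes))
```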

In this use case, NLG facilitates the explanation of classic machine learning models that are inherently traceable. Those algorithms are pivotal to success because they are readily explained and don’t require as much example data for sufficient training as deep learning does without representation learning techniques. What’s critical about this application is it suggests classic and advanced machine learning aren’t mutually exclusive. “You start with simpler models, and as you build the expertise you transition in the…middle…in the ensemble to deep learning principles as soon as you can,” Rehak noted. “We can alternate between traditional models and deep learning, and flip between them based on the needs of the rest of the ensemble.”

Graph Analysis 

Graph techniques are particularly influential for AI applications in which “you have relationships and you want to exploit those relationships,” observed Suman Bera, Senior Software Engineer at Katana Graph. Whether assessing aspects of loan origination, healthcare treatments, information security, or fraud, graphs can uncover even minute relationships between data to improve analytics. According to Rehak, it’s not uncommon that, “in the ensemble, some of the algorithms are based on graphs, and these are some of those that get the highest weight.” Graph neural networks will continue to impact enterprise AI next year by excelling in high-dimensionality spaces for use cases involving predictions about nodes and their links.

Additional approaches predicated on embedding, which is similar to the word embeddings involved in model-based methods for natural language technologies, identify relationships in data that users otherwise wouldn’t see. These capabilities are ideal for expediting certain aspects of feature detection for building machine learning models. Moreover, graph neural networks are primed for taming the rigors of unstructured data—even for the most unwieldy use cases. “Graph neural networks work on things that are graph structures,” Bera explained. “There’s still a structure to it, but not your conventional grid structure.” The grid structures Bera referenced are applicable to forms of computer vision. Graph neural networks are of some use for such deployments, but also for pinpointing irregular structures found in life sciences use cases, for example, like spurring new pharmaceuticals to market.
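
As a rough sketch of a graph neural network layer operating on such an irregular, non-grid structure, the example below assumes PyTorch Geometric; the tiny graph, feature sizes, and embedding dimension are illustrative, not drawn from any production fraud or life-sciences workload.

```python
# Illustrative sketch: one graph convolution over a small, irregular graph,
# producing node embeddings that could feed node- or link-prediction tasks.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Four nodes (e.g., accounts or molecular atoms) connected by directed edges.
edge_index = torch.tensor([[0, 1, 1, 2, 3],
                           [1, 0, 2, 3, 1]], dtype=torch.long)
x = torch.randn(4, 8)                        # an 8-dimensional feature vector per node
graph = Data(x=x, edge_index=edge_index)

conv = GCNConv(in_channels=8, out_channels=2)    # learn 2-d node embeddings
node_embeddings = conv(graph.x, graph.edge_index)
print(node_embeddings.shape)                 # torch.Size([4, 2])
```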

Benchmarks 

In many ways, generative AI has already become a formidable instrument in the toolset for enterprise applications of machine learning and Artificial Intelligence. Its usefulness for NLG, which frequently involves aspects of Natural Language Understanding, is fairly widespread. However, it’s important not to sequester generative AI capabilities from more time-honored approaches involving traditional machine learning, symbolic AI, and graph technologies. Tempering generative AI’s deep neural networks with these other methods for AI and machine learning can only bolster its productivity, resulting in more transparent and accountable applications in core enterprise use cases.

“How standards are setting up benchmarks for accuracy is constantly changing,” Segovia reflected. “Things that were relatively accurate three or four years ago are completely out of date. Most of the industry, if you go to any of the research papers that are out there, are essentially outdated benchmarks. That’s one area that I think needs to be improved considerably: the standardization for benchmarking.”

About the Author

Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.
