
Goldman: AI tools have potential in finance beyond smart stock trading

Continuing the first-day Technology and Automation focus of VentureBeat’s Transform 2020 digital conference, one of the day’s featured AI leaders was Charles Elkan, Goldman Sachs’ global head of machine learning, whose fireside chat offered concrete guidance on using AI within the world of finance. While AI isn’t ready to replace humans, Elkan suggested, it has a unique ability to provide actionable guidance based on large quantities of data, assuming companies are realistic about its capabilities and limitations.

Drawing on his past experience as Amazon’s ML director and as a professor at U.C. San Diego, Elkan is highly familiar with time-series forecasts, which traditionally rely on historical data (years of it, if possible) to predict future needs. With modern ML, including deep learning neural networks, Elkan said, 52-week forecasts can be produced for products that are almost brand new: natural language processing locates similar products by searching catalog descriptions, and the sales trends of those products are then examined to deduce how a new version will perform. Amazon made its Forecast tools generally available via AWS last year, promising up to 50% more precise forecasts than traditional systems.
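To make the idea concrete, here is a minimal, hypothetical sketch of that approach: compare a new product’s catalog description against existing descriptions, then borrow the weekly sales curves of the most similar items as a first forecast. The catalog data, the TF-IDF similarity measure, and the weighted-average forecast are illustrative assumptions only, not a description of Amazon’s or Goldman’s actual systems.

```python
# Sketch: forecast a brand-new product by finding textually similar catalog
# items and borrowing their sales history. All data here is made up; a real
# system would use richer features and a learned forecasting model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Existing catalog: description plus 52 weeks of unit sales per product.
catalog = {
    "wireless noise-cancelling headphones": np.random.poisson(120, 52),
    "bluetooth over-ear headphones":        np.random.poisson(90, 52),
    "usb-c charging cable 2m":              np.random.poisson(300, 52),
}
new_description = "wireless over-ear headphones with noise cancellation"

# Step 1: NLP similarity search over catalog descriptions (TF-IDF for brevity).
descriptions = list(catalog.keys())
vectors = TfidfVectorizer().fit_transform(descriptions + [new_description])
similarities = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# Step 2: weight each similar product's weekly sales by its text similarity
# to produce a 52-week forecast for the new, history-less product.
weights = similarities / similarities.sum()
forecast = sum(w * sales for w, sales in zip(weights, catalog.values()))
print(np.round(forecast[:8]))  # first eight forecast weeks
```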

Elkan also offered a clear guide to creating products based on machine learning. He identified the key as a product manager who serves first as the user’s voice when working with developers, then as the developers’ voice when speaking with the product’s users. The product manager’s chief task is to focus on the user-facing problem the ML is supposed to solve, and make sure that the overall product appropriately delivers that solution.

Along the way to the final product, the manager needs to understand the size of the ML solution’s opportunity, determine what sort of output will be useful for the organization, and obtain quality, useful data to train the machine. In addition to understanding how the solution fits with the company’s existing systems, the manager needs to quantify its latency, the volume of processing it will handle, and its system-level needs, then help with UI design and create guardrails and monitoring for its ongoing use.
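As a loose illustration of what such guardrails and monitoring might look like in practice, the sketch below wraps a model call with a latency budget, a confidence floor, and logging, falling back to a safe default when either limit is breached. The function names, thresholds, and fallback behavior are assumptions for illustration, not anything described by Elkan.

```python
# Hypothetical guardrail around an ML prediction: log every call, enforce a
# latency budget and a minimum confidence, and otherwise defer to a default.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ml_guardrail")

MAX_LATENCY_S = 0.2    # latency budget agreed with the product manager
MIN_CONFIDENCE = 0.6   # below this, defer to the existing non-ML process

def guarded_predict(model_fn, features, default_answer):
    """Run the model, but enforce latency and confidence guardrails."""
    start = time.monotonic()
    label, confidence = model_fn(features)
    latency = time.monotonic() - start

    log.info("prediction=%s confidence=%.2f latency=%.3fs", label, confidence, latency)

    if latency > MAX_LATENCY_S or confidence < MIN_CONFIDENCE:
        log.warning("guardrail triggered; returning default answer")
        return default_answer
    return label

# Example: a toy model that is only 55% confident, so the guardrail fires.
print(guarded_predict(lambda x: ("approve", 0.55), {"amount": 100}, "manual review"))
```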

ML products can fail for multiple reasons, Elkan said, particularly due to issues of scope, input, output, and perception. An ML-based solution might be asked to make decisions that are beyond a machine’s abilities (such as judging the wisdom of venture capital investments) or within its scope but lacking the predictive accuracy to be useful. It might also lack access to necessary real-time or historical data, or be held to unrealistically high standards compared with current alternatives. Elkan referenced a package-sniffing dog as a biological neural network trained for a specific task, illustrating that while stakeholders may be comfortable with only a basic explanation of how such a biological model works, they may apply a very different standard of comfort or understanding before deploying machine solutions.

During a “lightning round” of questions from Wing Venture Capital partner Rajeev Chand, Elkan was asked whether ML models can be better at predicting stock prices than traders and analysts (“sometimes, yes”) and, critically, how bias can be removed from ML models. We can remove the biases we’re aware of and can quantify, Elkan noted, but the challenge is becoming aware of the biases we might not be thinking about so that we can consciously remove them. The topic of ML and AI bias has been simmering for years but has recently attracted considerably more attention, thanks to some particularly embarrassing examples and belated corporate interest following Black Lives Matter protests.
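As a rough illustration of what it means to quantify a known bias, the snippet below computes a demographic parity gap, the difference in positive-decision rates between two groups, for a handful of model decisions. The data, the grouping, and the 5% tolerance are entirely hypothetical; real reviews use multiple metrics and domain-specific thresholds.

```python
# Quantifying one kind of bias: the gap in positive-outcome rates between
# two groups (demographic parity difference). Data here is illustrative only.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 1])                  # model decisions
group    = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])  # protected attribute

rate_a = approved[group == "a"].mean()
rate_b = approved[group == "b"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
if parity_gap > 0.05:
    print("gap exceeds tolerance; investigate features and training data")
```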

Audience questions for Elkan included how financial data can be collected in an increasingly privacy-focused world, something he said is governed by stringent legal guidelines that Goldman carefully follows, and whether corporate anti-fraud AI is now being attacked by criminal-developed fraud AI, which Elkan confirmed is already happening. He was also asked whether any Goldman clients were actually asking for AI, and he quickly said yes: in addition to external clients asking for AI solutions, he works with clients inside the company who are curious about and looking for useful AI.