Organizations across industries continue to rely on legacy systems and applications, yet mainframe systems can be hard to maintain, improve, and expand. Because these aging but reliable systems sit at the core of many businesses, any downtime or defect can have disastrous consequences, from financial losses to embarrassment in the marketplace. Unfortunately, making changes to these systems is often costly and risky, largely due to a lack of detailed understanding of how they work. Why? Because today's code search tools, linters, and program analysis tools fall well short of actually mitigating the risks associated with maintaining and improving these applications.
Legacy Enterprises Are Stuck in the Past
The constantly evolving business marketplace demands that these systems and applications keep up with the rapid pace of change, yet far too many organizations say their legacy applications cannot evolve fast enough to support those shifting demands.
Roughly 10,000 mainframes are actively in use across the globe, including at 96 of the world's 100 largest banks, nine of the 10 biggest insurance companies in the world, 23 of the 25 largest retailers in the U.S., and 71 percent of Fortune 500 companies. Meanwhile, the typical organization, regardless of industry, spends between 60 and 80 percent of its IT budget simply maintaining existing mainframe systems and applications.
Managing aging mainframe applications becomes even more complex when the engineers who wrote the original code are no longer with the company and developers new to the system can't build comprehensive knowledge of its functionality fast enough, all while these critical applications must keep running.
Current Tools Can Help, but Not Enough
Today, developers spend roughly 75 percent of their time querying and understanding code to fix a bug or make a necessary change, as opposed to writing new code. While code search tools, linters, and static and dynamic analysis tools can all help developers improve their efficiency and effectiveness, most fall short of pinpointing the specific lines of code that require attention, especially given the intertwined nature of program instructions in a codebase with millions of lines. With software repositories growing to unprecedented sizes, developers responsible for maintaining and preserving a system's functionality say it's becoming harder to find bugs in code without machine assistance. Even worse, correcting and then validating the fix of a single bug can take days, weeks, or even longer.
Semantic search tools like Sourcegraph help developers search for words and complex phrases, accelerating the rate at which they can build a mental model of the code. But code search tools are notorious for false positives, so human programmers still have to pull all the pieces together themselves, weeding out those false positives just to find the code they need to change. And even with all their training, experienced developers still sometimes make mistakes, and changing the code might break it rather than fix it. Worse yet, code completion tools such as GitHub Copilot do not mitigate risk either. In fact, they can actually introduce risk, because their suggestions are imprecise (the code Copilot offers is correct only 29 percent of the time) and may be disproportionately relied upon by less experienced developers. Further, even the most advanced static and dynamic analysis tools do not find behavior in code; they merely reflect the code in ways that developers still have to interpret, possibly incorrectly.
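To see why text-based search produces false positives, consider a minimal sketch (the toy code lines and identifiers below are invented for illustration): a plain substring search for a field name matches comments, log strings, and unrelated identifiers, while only a more precise query finds the line that actually matters.

```python
import re

# Toy "codebase": only one line actually mutates the balance field,
# but a naive text search for "balance" matches every line.
codebase = [
    'def update_balance(account, amount):',
    '    # rebalance handled elsewhere, not here',
    '    account.balance += amount',
    '    log("balance change requested")',
    '    trial_balance_report(account)',
]

# Naive substring search: every line is a hit (all false positives but one).
matches = [line for line in codebase if 'balance' in line]

# A more precise query for the exact field access finds the single relevant line.
relevant = [line for line in codebase if re.search(r'\baccount\.balance\b', line)]

print(len(matches))   # 5 hits from the naive search
print(len(relevant))  # 1 line that actually touches the field
```

Even the precise query only narrows the candidates; deciding whether that one line is the bug still falls to the developer's mental model of the program.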
In cases like these, identifying issues is simple enough, but homing in on all of the relevant code sprawled throughout the system to remediate the problem is difficult and time-intensive. Even with the best code search tools, linters, and program analysis tools, developers still have to mentally simulate several aspects of a program's detailed execution path at once to fully resolve an issue, which is cognitively demanding, even for developers with a wealth of experience.
Perhaps worst of all, today’s tools don’t have any way to effectively validate the scope and accuracy of a proposed change. The influence of the proposed change may be difficult to identify, and the data necessary for comprehensive test coverage is amorphous. Such is the nature of code. While even the most advanced tools can help developers identify the relevant code, developers still have to conceptualize the functionality that the code represents to bring the intent to light and expose the bug that they seek to fix.
When a programmer fails to understand how changing code in one area impacts the system as a whole, even a minor tweak can be catastrophic. Consequently, software developers still have to undergo the time-consuming, mentally challenging, and potentially error-inducing effort that is building a mental model of what the code does so that they can make any necessary adjustments.
So, how can developers better mitigate risk while managing legacy applications? By employing a novel approach to artificial intelligence. With AI, developers can more efficiently identify the specific lines of code relevant to the task at hand.
Artificial Intelligence Can Help Fix Broken Code
Across the IT sphere, AI is becoming a key strategy for organizing and optimizing processes, and software application maintenance is no exception.
When it comes to keeping applications running on the mainframe, AI tools can improve maintenance by helping developers better understand the codebase. Using a collaborative approach known as augmented intelligence, developers can “ask” the AI tool where a specific behavior is coming from and be guided to the code associated with that behavior. By reinterpreting what the computation represents and converting it into concepts, AI ensures software developers no longer have to pore over millions of lines of code to unearth the intent of previous developers, because the behavior (such as a bug) is revealed at machine speed.
AI can identify all possible behaviors in advance, without repeatedly searching through code to reach the specific lines in question the way a human does. That means such tools can collaborate with developers to narrow down to the specific code that needs to change. Developers can then propose changes while they are coding, without recompiling or checking the code in, and the AI mitigates risk by verifying whether the change is isolated to the specific behavior. Neo-classical AI approaches allow the AI to simulate the proposed change, just as a human would mentally, but with machine precision and without having to actually deploy the code to see what happens. Furthermore, such AI tools can be made part of the continuous integration/continuous deployment (CI/CD) pipeline by exposing a developer's request to the AI programmatically via an API, ensuring that a given behavior will never change in the future.
Deploying AI in this way to look at code is almost like hiring a skilled human programmer. While AI tools won’t yet make changes to the code on their own, they will direct developers precisely to where any changes are needed. Then programmers skilled in the specific programming language — even if they are not totally familiar with the particular system or application — can make the necessary fixes with minimal risk.