A whopping 66% of AI decision-makers are worried about achieving their company’s ethical objectives

66 percent of AI/ML decision-makers fear that their company's ethical business objectives cannot be met, according to recent research from InRule Technology based on data from a Forrester poll. High costs (60 percent), poor business outcomes (47 percent), and noncompliance with regulatory standards (37 percent) round out the list of top concerns. Only 56 percent are confident in their ability to address these risks.

As many as 66 percent of decision-makers believe that AI and ML are important to the success of digital decision-making, a figure expected to rise to 95 percent within the next three years. Yet stumbling blocks remain when it comes to implementing the variety of technologies used to automate decision-making processes.

Although robotic process automation (RPA) and digital process automation (DPA) are widely used, they fall short of the definition of "automated": 70 percent of users identified DPA-only solutions as difficult for maintaining complex decisions, while 41 percent of organisations use a business process management system with an external decision engine. Automated solutions are in high demand, but companies are coping with a wide range of technologies that don't always work together and need continual monitoring and review. Nearly two-thirds of respondents said that technical, organisational, and operational difficulties, notably scope creep and model drift, hamper agility and scalability.

AI/ML decision-making technologies have the potential to improve operational efficiency, but IT decision-makers are concerned that negative bias could lead to erroneous judgments (58 percent), inconsistent decisions (46 percent), and a loss of revenue (39 percent). This isn't an unwarranted worry: more than three-quarters of IT decision-makers say their firm has already encountered negative bias in its digital decision-making projects.