April 20, 2024


It’s easy to get excited about AI projects, especially when you hear about all the amazing things people are doing with AI, from conversational and natural language processing (NLP) systems to image recognition, autonomous systems, predictive analytics, and pattern and anomaly detection. However, when people get excited about AI projects, they tend to overlook some important red flags. And it’s these red flags that cause over 80% of AI projects to fail.

One of the biggest reasons for AI project failure is that companies do not justify the use of AI in terms of ROI. Simply put, many projects are not worth the time and expense, given the cost, complexity, and difficulty of implementing AI systems.

Organizations are rushing through the exploration phase of AI adoption, moving straight from simple proof-of-concept “demos” to production without first assessing whether the solution will have a positive return. A big reason for this is that measuring the ROI of an AI project can prove more difficult than first anticipated. Too often, teams are pressured by senior management, peers, or external groups to get started with their AI efforts, and projects move forward without a clear answer to the problem they’re actually trying to solve or the ROI they expect to see. When companies struggle to develop a clear understanding of what to expect when it comes to AI ROI, the result is invariably a misalignment of expectations.

Misaligned and unmeasured ROI expectations

So what happens when an AI project’s ROI doesn’t align with management’s expectations? One of the most common reasons AI projects fail is that the return on investment (ROI) does not justify the investment of money, resources, and time. If you’re going to spend your time, effort, human resources, and money implementing an AI system, you want a well-defined positive return.

Even worse than a misaligned ROI is the fact that many organizations don’t even measure or quantify ROI in the first place. ROI can be measured in various ways, from financial performance, such as revenue generation or cost reduction, to on-time performance, shifting or reallocating critical resources, improving reliability and safety, reducing errors, improving quality control, or strengthening security and compliance. It’s easy to see how an AI project could deliver a positive ROI: if you spend a hundred thousand dollars on an AI project to eliminate two million dollars of potential cost or liability, then it’s worth every dollar spent to reduce that liability. However, you will only see that ROI if you plan ahead and manage it.
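To make that arithmetic concrete, here is a minimal sketch of the calculation behind the example above. The dollar figures and the break-even threshold are illustrative assumptions, not a prescribed formula from any particular methodology:

```python
# Minimal ROI sketch: (expected benefit - project cost) / project cost.
# The figures below mirror the article's example and are illustrative only.

def simple_roi(expected_benefit: float, project_cost: float) -> float:
    """Return ROI as a ratio of net gain to cost."""
    return (expected_benefit - project_cost) / project_cost

cost = 100_000        # hypothetical AI project spend
benefit = 2_000_000   # hypothetical cost or liability eliminated

roi = simple_roi(benefit, cost)
print(f"ROI: {roi:.0%}")  # 1900%
print("Worth pursuing" if roi > 0 else "Re-evaluate the project")
```

The point is not the formula itself but that the inputs are estimated, written down, and tracked before the project starts, so the ROI can actually be managed rather than assumed.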

Management guru Peter Drucker once said, “You can’t manage what you don’t measure.” The act of measuring and managing AI ROI is what separates those who see positive value from AI from those who end up abandoning their projects after pouring years of effort and millions of dollars into them.

Boiling the ocean and biting off more than you can chew

Another big reason companies don’t see the ROI they expect is that projects try to squeeze in too much at once. Iterative, agile best practices, especially those used by AI-specific methodologies such as CPMAI, clearly advise project owners to “Think big. Start small. Iterate often.” Unfortunately, many unsuccessful AI implementations have taken the opposite approach: thinking big, starting big, and iterating rarely. A prime example is Walmart’s investment in AI-powered shelf-scanning robots: in 2017, Walmart invested in robots to scan store shelves, and by 2022 it had pulled them from its stores.

Clearly Walmart had adequate resources and smart people, so you can’t blame the failure on bad people or bad technology. Rather, the main issue was a poor fit between the solution and the problem. Walmart realized that it was simply cheaper and easier to use human employees who already worked in the stores to complete the same tasks the robots were meant to do. Another example of a project not returning the expected results can be found in the various applications of the Pepper robot in supermarkets, museums, and tourist areas. Better people or better technology would not have solved this problem; a better approach to managing and evaluating AI projects probably would have. In other words, methodology.

Adopt a step-by-step approach to executing AI and machine learning projects

Did these companies get caught up in the technology hype? Maybe they just wanted a robot roaming the aisles for the “cool” factor? But being cool doesn’t solve any real business problem or address a pain point. Don’t do AI for AI’s sake. If you’re doing AI just for AI’s sake, then don’t be surprised if you don’t get a positive ROI.

So, what can companies do to ensure a positive ROI for their projects? First, stop implementing AI projects for AI’s sake. Successful companies take a step-by-step approach to executing AI and machine learning projects. As mentioned earlier, methodology is often the secret sauce missing from failed AI projects. Organizations are now seeing benefits from implementing approaches such as the Cognitive Project Management for AI (CPMAI) methodology, which builds on decades of data-centric project approaches such as CRISP-DM and incorporates established agile best practices to deliver short, iterative project sprints.

All of these approaches start with the business user and requirements in mind. The first step of CRISP-DM, CPMAI, and even agile is figuring out whether you should go ahead with an AI project at all. These methodologies acknowledge that alternative approaches, such as straightforward automation, direct programming, or even adding more people, may be better suited to solving the problem.

The ‘AI Go No Go’ analysis

If AI is the right solution, then you should make sure you can answer yes to a variety of questions to assess whether you are ready to start your AI project. The set of questions you need to ask to determine whether to proceed with an AI project is called an “AI Go No Go” analysis, and it is part of the first phase of the CPMAI methodology. The AI Go No Go analysis asks users a series of nine questions in three general categories. For an AI project to really move forward, you need three things in alignment: business feasibility, data feasibility, and technology/execution feasibility. The first of the three categories addresses business feasibility and asks whether there is a clear problem definition, whether the organization is really willing to adopt the change once it is built, and whether there is sufficient return on investment or impact.

These may seem like very basic questions, but all too often they are overlooked. The second set of questions deals with data, including data quality, data quantity, and data access. The third set of questions is about implementation, including whether you have the right team and skill sets, whether you can run the model as needed, and whether the model can be used where it is intended.
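As a rough illustration of how such a checklist might be captured and enforced, here is a minimal sketch. The question wording below paraphrases the three categories described above; it is not the official CPMAI questionnaire, and the structure is purely an assumption for demonstration:

```python
# Hypothetical go/no-go checklist, paraphrasing the three feasibility
# categories discussed above. Not the official CPMAI question set.

GO_NO_GO_QUESTIONS = {
    "business feasibility": [
        "Is there a clear problem definition?",
        "Is the organization willing to adopt the change once it is built?",
        "Is there sufficient ROI or impact?",
    ],
    "data feasibility": [
        "Is the data of sufficient quality?",
        "Is there enough data?",
        "Can the team actually access the data?",
    ],
    "implementation feasibility": [
        "Do you have the right team and skill sets?",
        "Can you run the model as needed?",
        "Can the model be used where it is intended?",
    ],
}

def go_no_go(answers: dict[str, bool]) -> str:
    """Proceed only if every question is honestly answered 'yes'."""
    unresolved = [q for q, ok in answers.items() if not ok]
    return "GO" if not unresolved else f"NO GO (unresolved: {unresolved})"

# Usage: one honest boolean per question; a single 'no' stops the project.
answers = {q: True for qs in GO_NO_GO_QUESTIONS.values() for q in qs}
answers["Is there sufficient ROI or impact?"] = False
print(go_no_go(answers))
```

The design point mirrors the article: the analysis is all-or-nothing, so a single unresolved question means you are either not ready yet or should not proceed at all.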

The hardest part of asking these questions is being honest with the answers. It’s important to be truly honest when considering whether to go ahead with the project: if you answer ‘no’ to one or more of these questions, it means you’re either not ready to go ahead yet, or you shouldn’t go ahead at all. Don’t just do it anyway, because if you do, don’t be surprised when you’ve wasted a lot of time, energy, and resources without getting the ROI you were hoping for.
