

Artificial Intelligence Starts Down the Slope of Disillusionment
Ignore Occam’s Razor at your peril
Occam’s Razor is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements, otherwise known as parsimony. Many organizations are ignoring this principle and trying to solve problems with Artificial Intelligence (AI), or with the wrong type of AI, when simpler solutions would deliver similar results. For example, one current problem with large language models (LLMs) is that their answers are purposefully non-deterministic. If you are looking for one exact answer, an LLM may not be the right solution. These issues are starting to push AI down the slope of disillusionment. In this article, we make recommendations for how to structure AI projects for success.
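To make the non-determinism point concrete, here is a minimal Python sketch of temperature-based token sampling, the mechanism behind it. The vocabulary and scores are invented for illustration and do not come from any real model.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Pick the next token from a softmax over (made-up) model scores."""
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the same answer
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    weights = [math.exp(s) / total for s in scaled.values()]
    return random.choices(list(scaled), weights=weights)[0]

# Toy next-token scores for the prompt "The capital of France is"
logits = {"Paris": 4.0, "Lyon": 1.5, "London": 0.5}

print([sample_token(logits, temperature=0.0) for _ in range(5)])  # always 'Paris'
print([sample_token(logits, temperature=1.5) for _ in range(5)])  # can vary
```

Production chat services typically run with a temperature above zero, which is why asking the same question twice can produce two different answers.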
Executive Summary
AI should be used when it can improve the outcome over conventional methods without onerous cost.
AI is more than ChatGPT! Different types of AI are good at solving different types of problems. Applying a more complex solution than necessary violates Occam’s Razor.
The rush to use AI as a solution regardless of the problem has started to push companies down the slope of disillusionment with AI.
In order to effectively use AI, organizations must scope the problem, identify the least complex solution (which may not be AI), and compare cost/benefit of different potential solutions.
One often overlooked issue is that AI is only as good as the data it can access. If the data is not curated to be clean, accurate, broad, and deep, quality will suffer (a minimal illustration follows this summary).
In addition, AI solutions need to be reviewed for ethical and risk management considerations.
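To illustrate the data-curation point above, here is a minimal sketch of the kind of audit a dataset should pass before it ever reaches a model. Every record below is invented for illustration.

```python
# Minimal data-quality audit: duplicates, gaps, and implausible values.
records = [
    {"customer": "ACME Corp", "revenue": 1_200_000, "region": "NA"},
    {"customer": "ACME Corp", "revenue": 1_200_000, "region": "NA"},  # duplicate
    {"customer": "Globex",    "revenue": None,      "region": "EU"},  # missing value
    {"customer": "Initech",   "revenue": -50_000,   "region": "NA"},  # implausible
]

seen, issues = set(), []
for i, r in enumerate(records):
    key = (r["customer"], r["region"])
    if key in seen:
        issues.append((i, "duplicate row"))
    seen.add(key)
    if r["revenue"] is None:
        issues.append((i, "missing revenue"))
    elif r["revenue"] < 0:
        issues.append((i, "negative revenue"))

print(issues)  # [(1, 'duplicate row'), (2, 'missing revenue'), (3, 'negative revenue')]
```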
AI Is Now a Marketing Term
Companies are rushing to incorporate AI everywhere they can, especially in their marketing materials. I have opined before that while nearly every company claims to benefit from AI, those that actually use it effectively will have a huge advantage over their competitors in the next few years. Not every company can be a winner, and currently the largest technology companies appear to hold the biggest advantages.
Corporate management’s demand that technology teams come up with “something, anything!” they can call AI has short-circuited the normal project planning, budgeting, and review process at many companies. Many are also in danger of ignoring risk management (including cyber risk) and even ethical considerations.
Simply put, AI should be used when it can improve the outcome over conventional methods without onerous cost. This is an important consideration, as some companies have discovered they can achieve significant improvement over conventional methods with AI, but the additional cost more than eats up the benefit. Over time, the cost of developing and using AI will come down, but one must proceed cautiously at this time.
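As a hypothetical sketch of that trade-off, with every figure invented for illustration, the comparison below ranks candidate solutions on net benefit rather than raw capability.

```python
# Hypothetical annual figures for one workflow; every number here is invented.
candidates = {
    "rules-based system": {"benefit": 200_000, "cost": 50_000},
    "classical ML model": {"benefit": 260_000, "cost": 90_000},
    "LLM-based system":   {"benefit": 300_000, "cost": 400_000},  # big lift, bigger bill
}

for name, c in candidates.items():
    net = c["benefit"] - c["cost"]
    print(f"{name:20s} net benefit: ${net:>10,}")

best = max(candidates, key=lambda n: candidates[n]["benefit"] - candidates[n]["cost"])
print(f"Best on net benefit: {best}")
```

In this toy example the LLM delivers the biggest improvement but loses on net benefit, exactly the trap described above.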
AI Is More than ChatGPT!
Different types of AI are good at solving different types of problems. Many of these were covered in detail in the following article.
ChatGPT is not always the solution! One common source of failure in AI projects is using the wrong type of AI for the problem at hand, for example, using an LLM as a database. The table below gives concrete examples of problems and the AI-related solutions suited to each.
Source: STAY Sustainable blog
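To make the “LLM as a database” point concrete: if the problem is retrieving one exact, auditable value, a plain SQL query is deterministic and nearly free. A minimal sketch using Python’s built-in sqlite3 module follows; the table and values are hypothetical.

```python
import sqlite3

# A plain SQL lookup returns one exact, repeatable answer -- no hallucinations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (customer TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('ACME Corp', 1523.75)")

row = conn.execute(
    "SELECT balance FROM accounts WHERE customer = ?", ("ACME Corp",)
).fetchone()
print(f"Balance: ${row[0]:,.2f}")  # always $1,523.75
```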
Violating Occam’s Razor
Even more important, organizations need to ask if they should even be using AI at all to solve a certain problem. This is analogous to what we saw with blockchain technology. Once the hype died down, organizations would simply ask “Can I do this without blockchain?” If the answer was “yes,” that was the last question.
Similarly, organizations need to have a robust process for AI projects that includes whether AI is necessary for the project at all. I happen to like the process outlined in “Bulletproof Problem Solving,” where I would argue that the most important step is the first, defining the problem to be solved. Questions that should be asked include:
Are we solving the right problem?
Is the scope of the problem set correctly for success of the project?
What are the different ways to solve the problem (or parts of it), including AI, and what are the costs, pros, and cons of each?
Should we build or buy part of the solution?
Does the potential solution align with Occam’s Razor?
In the end, if management asked for an AI solution, but there is a better answer, you had better be able to communicate that very well.
Source: Bulletproof Problem Solving
Down the Slope of Disillusionment
We probably saw a local maximum of hype for LLMs specifically, and AI in general, earlier this year. As organizations spend heavily on large AI projects, some, as with any technology project, will succeed, some will not meet expectations, and some will fail to deliver at all. This will push many organizations down the slope of disillusionment as they realize that AI is not a magical solution to all their problems.
Another issue with AI-driven solutions is that they tend to be far more resource-intensive than conventional ones. The cost to train, and then run, large AI systems can reach millions of dollars, and the electricity consumed in the process can rival that of a city the size of Boulder, Colorado.
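A back-of-the-envelope calculation shows the scale. Every input below (GPU count, power draw, training time, electricity price) is an illustrative assumption, not a measurement of any real system.

```python
# Rough training-energy estimate; every input is an illustrative assumption.
gpus = 10_000            # accelerators used for training (assumed)
watts_per_gpu = 700      # draw per GPU, including overhead (assumed)
days = 30                # wall-clock training time (assumed)
price_per_kwh = 0.10     # USD, industrial rate (assumed)

kwh = gpus * watts_per_gpu / 1000 * 24 * days  # -> about 5,040,000 kWh
print(f"Energy: {kwh:,.0f} kWh, electricity cost: ${kwh * price_per_kwh:,.0f}")
```

Even with modest assumptions, the electricity bill alone runs well into six figures, before counting hardware, staff, or the ongoing cost of inference.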
One More Thing…
We should not have to write this, but AI solutions need to be reviewed for ethical and risk management considerations. Best practice requires not just internal but also third-party review. While guidelines in the US are voluntary, the EU has published rules that take a risk-based approach; you can see a summary and analysis of those rules in the following post.
Risk management, regulatory compliance, and cybersecurity for AI are still nascent areas, and it may be tough for risk generalists to handle all of them. In most cases, very specialized knowledge is required. Ethical considerations are a separate, but related, matter, and include bias, transparency, accountability, and social responsibility.
There are two opposing futures. In one, everyone has an AI co-pilot, AI works well and with minimal bias, and the resulting productivity gains raise everyone’s standard of living. In the other, AI is inaccurate yet still displaces middle-class workers into unemployment and underemployment, a race to the bottom with a cratering global economy. Let us, as a society, advocate for the path closest to the first.
News of the Week
I hope to see many of you at the Risk Live! Summit in New York City on October 19th!
https://www.risklive.net/north-america
Bibliography
Conn, C., & McLean, R. (2019). Bulletproof Problem Solving: The One Skill That Changes Everything. Wiley.
Copyright © 2023 by Alec Crawford
#artificialintelligence #innovation #STAYblog