Every day we are bombarded with new ChatGPT feats. The enormous airtime given to this product, aimed at the general public, has hidden far greater advances in AI, advances that are much more relevant for companies than GenAI for the masses…
THE HYPE
Despite the hype, GenAI is just the tip of the iceberg. When it comes to Intelligent Automation, we could even say it is the “tip of the tip of the iceberg”. Tools such as Midjourney or ChatGPT were designed and trained only to generate images and text, with minimal planning or task-execution capabilities. They can also only be retrained by those who provide them, which makes them inflexible for real use in business environments.
In fact, the major limitations we still feel today in Intelligent Automation are better addressed by other AI technologies:
1 “Understanding” unstructured data, such as text, images, video, time series, or even the non-numerical data that abounds in databases. Systems like ChatGPT are trained to produce text from text, not to create a useful internal representation that can be reused for other things. To be useful, language models need to be trained for other specific tasks, often different for each client, which calls for Transfer Learning tools that can be retrained with just a few examples (a minimal fine-tuning sketch follows this list).
2 Replacing humans at critical decision points, nowadays handled with human-in-the-loop technology. These decisions are difficult to formalize (and therefore to program). We need systems that can learn decision rules from a “human model” and that can easily explain the decisions they make. We don’t want a robot deciding whether an insurance claim is approved, based on criteria the robot has learned, if we can’t ask “why” and receive a convincing, detailed answer. Explaining its decisions is one of Deep Learning’s Achilles’ heels (a sketch of an explainable alternative follows this list).
3 Replacing humans at points in the process that involve sequences of actions: plans that may need to be revised if conditions change. Technologies such as Reinforcement Learning, in which an agent learns a policy from trial and error rather than from fixed rules, are the way forward here, and the big AI players have made remarkable advances (a toy example follows this list).
4 Intelligent robot orchestration. Massive automation leads to the need to manage hundreds of processes running on many dozens of robots (software agents programmed with a particular mission). This is impossible without Intelligent Orchestration, one of the areas where we predict AI can help a lot by automating the tasks that humans still need to do.
For example, today’s orchestrators only let you define execution priority by hand; what we really need is for “priority” to be determined automatically, based on each process’s SLAs (a simple sketch of the idea follows this list). These are combinatorial optimization problems for which Deep Learning is still poorly suited.
5 Finding automatic ways of mapping processes (process mining) or identifying unit tasks and their sequences (task mining). There are already some attempts to use AI for this type of activity, but it will take major advances before people can be taken out of these steps (a minimal sketch of the underlying idea follows this list). Once again, systems based on Reinforcement Learning may lead the way.
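Item 1 in practice: the sketch below adapts a general-purpose pretrained language model to a client-specific classification task using just a handful of labeled examples. It assumes the Hugging Face transformers library and PyTorch; the model name, the toy texts and the hyperparameters are illustrative assumptions, not a prescription.

```python
# A minimal transfer-learning sketch: reuse a pretrained backbone,
# retrain only lightly on a few client-specific examples.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative choice of backbone; any pretrained encoder would do.
name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# A handful of made-up, client-specific examples (1 = claim, 0 = other).
texts = ["Water damage in the kitchen, claim attached.",
         "Please update my postal address."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for _ in range(10):                      # a few passes over the few examples
    out = model(**batch, labels=labels)  # loss is computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```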
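Item 2 in practice: one explainable alternative to a neural “black box” is a decision tree learned from past human decisions, as in this minimal scikit-learn sketch. The features, data and thresholds are made up for illustration; they are not real underwriting criteria.

```python
# Learn decision rules from decisions humans actually made, then
# print the rules themselves: the model *is* its own explanation.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [claim_amount_eur, customer_years, prior_claims]
X = [[200, 5, 0], [4500, 1, 2], [150, 8, 0], [9000, 2, 3],
     [300, 3, 1], [7000, 1, 1], [120, 10, 0], [5200, 4, 2]]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = approve, 0 = escalate to a human

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a neural network, we can answer "why" in plain if/then form:
print(export_text(tree, feature_names=["amount", "years", "prior_claims"]))
```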
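Item 3 in practice: the toy below uses tabular Q-learning, the simplest form of Reinforcement Learning, to learn a plan over a five-step “process” where an action can fail and the plan must adapt. The environment is an invented stand-in, not a real workflow.

```python
# Tabular Q-learning over a toy 5-step process: the agent learns,
# from experience alone, to advance at every step despite failures.
import random

N_STATES, ACTIONS = 5, [0, 1]          # 0 = retry current step, 1 = advance
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    """Advancing usually works but sometimes fails; reward only at the end."""
    s2 = s + 1 if (a == 1 and random.random() < 0.8) else s
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else -0.01), done

for _ in range(2000):                  # learn from trial and error
    s, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Learned plan: expected to be "advance" (1) at every step.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
```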
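Item 4 in practice: a deliberately simple “least slack first” rule that derives priority automatically from each process’s SLA. A real orchestrator faces a genuine combinatorial optimization problem across robots and queues; this greedy heuristic, with invented process names and numbers, only illustrates the idea.

```python
# Derive execution priority from SLAs instead of hand-set priority levels.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    sla_minutes_left: float   # time until the process breaches its SLA
    est_minutes: float        # estimated run time on a robot

def slack(job: Job) -> float:
    # Slack = time to breach minus time needed; smaller slack = more urgent.
    return job.sla_minutes_left - job.est_minutes

queue = [Job("invoice-ocr", 120, 15),
         Job("claims-triage", 20, 18),
         Job("report-batch", 600, 45)]

for job in sorted(queue, key=slack):
    print(f"{job.name}: slack={slack(job):.0f} min")
# claims-triage runs first: it is about to breach its SLA.
```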
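Item 5 in practice: much of process mining starts by extracting a directly-follows graph from event logs, as in this sketch. The toy log is invented; real tools add filtering, statistics and full model discovery on top of this basic step.

```python
# Build a directly-follows graph ("B directly follows A") from an event log.
from collections import Counter

# (case_id, activity) pairs, ordered in time within each case.
log = [(1, "receive"), (1, "validate"), (1, "approve"),
       (2, "receive"), (2, "validate"), (2, "reject"),
       (3, "receive"), (3, "validate"), (3, "approve")]

edges = Counter()
last_seen = {}
for case, act in log:
    prev = last_seen.get(case)
    if prev is not None:
        edges[(prev, act)] += 1      # activity 'act' directly follows 'prev'
    last_seen[case] = act

for (a, b), n in edges.most_common():
    print(f"{a} -> {b}  (seen {n}x)")
# receive -> validate (3x), validate -> approve (2x), validate -> reject (1x)
```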
Having said that, it’s time to point out that the biggest obstacle to the use of AI in Intelligent Automation is not technological. It will be much harder to manage all the organizational, ethical, moral and even legal impacts that AI advances will bring to organizations.
The Accountability Challenge
AI can be used, roughly speaking, in three types of activities: classifying things, deciding on the basis of these classifications and acting on decisions. Any step in this chain is subject to error, and we have learned to live with our mistakes. We have created safety mechanisms to ensure that mistakes are rarer as their consequences increase in severity, as well as creating laws that “hold people accountable” when mistakes happen.
No matter how advanced AI becomes, it will be impossible to eliminate mistakes completely. We will have to learn to live with mistakes for which it will be very difficult to find a responsible person. With AI taking over the process, this “accountability system” no longer makes sense: a robot is “not afraid” of breaking the law, and we can’t “put” a robot in jail hoping that it will “learn and change”.
The need for justice goes hand in hand with the notion of accountability, preferably human accountability. The more we move towards global automation, the more we will need to evolve laws that were created exclusively for humans.
In the foreseeable future, there will still be a need for a human to “take responsibility” for the actions of a robot, even one made of software. Today, accountability remains one of the big open issues. Who is responsible? Is it the company that sold the AI software? Whoever collected and selected the training data? The programmer who wrote the code? The project manager? It is not yet clear whose responsibility this is, or will be.
Explaining and Accepting the Decision
The systems we have today can’t explain the “reason why” they made a certain classification, took a specific decision, or implemented an action plan: neural networks are a “black box”, a set of numbers feeding a huge mathematical equation. It’s very difficult for a doctor to accept a recommendation from a “system” when they disagree with it, can’t trace the steps that led to the final decision, and can’t even ask “why”. And we probably don’t want to take the doctor out of the loop.
Not only because the system can make mistakes, but mainly because we want someone accountable for the decision. Our society lives well with doctors’ mistakes (as with those of all other professionals); in extreme cases, we rely on our laws to resolve them.
Beyond LLMs
Remember the iceberg: we only see the tip that’s out of the water, but it is supported by much more ice, invisible underwater. Many automation problems can be solved with much simpler techniques than LLMs, or even than neural networks, with clear advantages when it comes to explaining “why” (the decision-tree sketch above is one example). With all the hype surrounding GenAI, it’s easy to fall into the temptation of wanting to use it for everything, forgetting that there are hundreds of more suitable solutions.
In a business environment, this temptation is even more dangerous.
In organizations, it’s very important to convince top management to sit around the table with the people they trust most, including legal and middle management, and try to identify the main processes that can be improved, automated and made autonomous.
At the same time, the accountability assumed by each member of the team should be reviewed, given the greater autonomy that “intelligent agents” bring to the decision-making process, which may ultimately run without any human intervention and without the ability to explain itself.
Final Message
ARPA is working hard with its clients to help them rethink the digital transition challenge that these technologies bring, and to filter out the hype surrounding them, allowing companies to take a “deeper look” at AI and RPA instead of the lighter, superficial look that seems to be everywhere.
There is no magic: we simply ensure the best fit of AI and RPA to real problems. We go beyond building the action plan that best suits each customer: as a strongly “hands-on” partner, we pursue implementation with our solutions and expertise, connecting the “body” with the “brain”.
There is no doubt that AI’s time has come. What companies do with that certainty is what will distinguish successful businesses from those that simply ignore how this technology can help their businesses survive and thrive.
Our suggestion is that you invite us to your “table” when this theme arises! Let’s do it together!
“Nothing has the power of an idea whose time has come.” – Victor Hugo