Are Agents Effective Decision Engines?
Achieving Loan Origination Efficiency in Companies with Cognitive Agents
Would you trust your agent to make a decision for you? I don't mean drafting a Reddit comment or debating a topic with you. I mean really trusting the agent to make a real, life-altering decision without your input: investing in a stock for you, sending a payment, deciding on health care coverage, or approving a loan.
I suppose that in early 2025, most people won't.
Why is that? In my opinion it boils down to two dimensions:
Opportunity, and
Trust.
There are not many real-life processes where we engage with agents yet, nor do we trust that they would make the right decision if put in such a position. The reality is that, despite all the hype, most AI agent implementations underperform because the people who implement them don't know what they need to do; they lack the domain expertise. Non-agentic AI solutions, on the other hand, have been used to create operational efficiencies for decades. One of the areas where AI is heavily in use is Financial Services.
One very common implementation of operational AI in Financial Services is the decision engine. FICO is a vendor of such a system that I have used in the past. FICO's solution is, in its most basic implementation, a decision tree of arbitrary complexity.
A simple example of a decision tree:
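Since I can only describe the diagram here, a minimal sketch in Python conveys the same idea; the thresholds and field names below are made up for illustration and are not an actual credit policy.

```python
def decide(application: dict) -> str:
    """A toy decision tree for a loan application.
    Thresholds and field names are illustrative, not a real credit policy."""
    if application["credit_score"] < 600:
        return "DECLINE"
    if application["debt_to_income"] > 0.45:
        return "REFER_TO_UNDERWRITER"  # a human reviews the edge case
    if application["requested_amount"] > 50_000 and application["years_employed"] < 2:
        return "REFER_TO_UNDERWRITER"
    return "APPROVE"

print(decide({"credit_score": 720, "debt_to_income": 0.30,
              "requested_amount": 20_000, "years_employed": 5}))  # APPROVE
```

Every branch is an explicit, inspectable question, which is exactly what makes this structure easy to govern.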
Part of why decision engines are popular is that they provide an easy-to-understand, easy-to-govern system for making decisions.
Does that make them AI Agents though?
Great question. Even though they lack the defined LLM brain, memory, tool use, and all the other fancy frameworks we currently use to build agents, I think they are, because it can be argued that they have similar components.
What you see here is an example of one of the most impactful applications of decision engines I have worked on: loan origination.
Decision Engine vs Agents in Financial Applications:
In loan origination, real-time risk assessment, fraud detection, and automated underwriting are immediate and immense value drivers that materialize in the form of response time to the customer, non-performing loans, and net credit losses.
Loan Origination
My team and I introduced automated credit decisioning in 15 countries around the world, effectively streamlining the evaluation and approval processes and reducing response time to the customer from roughly five business days to under one minute without increasing non-performing loans. That was deemed such an achievement at the time that my team and I won the regional excellence award two years in a row.
While I can't share too many details, we used a combination of a statistical scorecard for application scoring and policy rules for the edge cases (those with less than a 0.3% likelihood of occurring) to build our decision engine.
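To make that structure concrete, here is a rough sketch of how a scorecard score and a small set of edge-case policy rules can be combined; the weights, cutoff, and rule conditions are invented for illustration and are nothing like the production values.

```python
# Illustrative only: weights, cutoff, and rules are invented, not the real policy.
SCORECARD_WEIGHTS = {"months_on_book": 0.8, "on_time_payment_ratio": 45.0,
                     "bureau_score_scaled": 30.0}
CUTOFF = 55.0

def application_score(features: dict) -> float:
    """Simple additive scorecard: a weighted sum of normalized applicant features."""
    return sum(SCORECARD_WEIGHTS[name] * features[name] for name in SCORECARD_WEIGHTS)

def policy_rules(applicant: dict) -> str | None:
    """Edge-case rules (<0.3% of cases) that override the scorecard."""
    if applicant["age"] < 20 and applicant["is_ceo"]:
        return "REFER_TO_UNDERWRITER"
    if applicant["active_fraud_flag"]:
        return "DECLINE"
    return None  # no rule fired, fall back to the scorecard

def decide(applicant: dict, features: dict) -> str:
    override = policy_rules(applicant)
    if override:
        return override
    return "APPROVE" if application_score(features) >= CUTOFF else "DECLINE"
```

The scorecard carries the statistical weight of the decision, while the handful of rules catches the cases the model was never trained to see.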
Based on my real-life experience, implementing a decision engine in loan origination proved to offer several advantages:
A fully digital workflow reduced manual intervention, leading to faster processing times, improved data quality, and therefore lower operational costs;
Standardized statistical decision-making processes minimize human errors and biases, ensuring fair evaluations; and since automated systems can be designed to adhere strictly to regulatory requirements, they also reduce the risk of non-compliance;
and finally
Faster decisions and transparent processes improve customer satisfaction and trust.
Who doesn't want a faster decision (if it's positive)? On that note, the systems I implemented would approve the right customer fast, but if there was any doubt they would bring the human back into the loop to confirm and refine the deal details with the customer. For example, in some cases it was necessary to bring in an additional guarantor or a higher down payment to ensure that the customer was both able and willing to pay. When we implemented the system, we started with a low automatic approval rate (2%-5%) and revised the risk instruments annually, managing the ratio higher without losing track of actually incurred losses.
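The routing logic behind that human-in-the-loop setup can be sketched roughly like this; the probability bands are hypothetical, since the real ones were calibrated against actually incurred losses.

```python
# Hypothetical thresholds: the real bands were calibrated against incurred losses.
AUTO_APPROVE_BAND = 0.90   # probability-of-good above which the system approves alone
AUTO_DECLINE_BAND = 0.40   # below which it declines alone

def route(probability_of_good: float) -> str:
    """Approve the clear cases automatically; send anything doubtful to a human."""
    if probability_of_good >= AUTO_APPROVE_BAND:
        return "AUTO_APPROVE"
    if probability_of_good <= AUTO_DECLINE_BAND:
        return "AUTO_DECLINE"
    # The grey zone: an underwriter refines the deal, e.g. by asking for a
    # guarantor or a higher down payment.
    return "HUMAN_REVIEW"

decisions = [route(p) for p in (0.95, 0.72, 0.35)]
auto_rate = sum(d.startswith("AUTO") for d in decisions) / len(decisions)
print(decisions, f"automation rate: {auto_rate:.0%}")
```

Widening the bands over time is what moved the automation rate up without surprising anyone on the loss side.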
Hence, I'd summarize that in an increasingly technology-enabled financial landscape, decision engines allow financial institutions to offer faster and more reliable services while maintaining high standards of accuracy and compliance.
For me that’s a net positive.
Now, what do true AI agents do differently, then?
The Role of AI Agents
At Mercedes we always had the tradition of relying on structured policies written by experienced humans with the right intuition, augmented by historical precedent as observed in data. However, we did have one problem.
We could never use neural-network-based models, because
neural networks do not keep a traceable state of a value throughout their layers, and therefore it is difficult to explain why the decision engine has made its decision.
A Mercedes-Benz is still a luxury product. Even though I had access to region-wide historical records, the data would simply not be enough to develop a reliable model per market.
But that was 2020. Now we are half a decade later; might we actually be in a spot where true cognitive agents are ready for operational prime time?
Case Study: Agent-Driven Loan Origination
That's the "if I were to do it again" segment. Bottom line up front: I suppose not too many parts of the flow would be different.
Automated Credit Scoring: For audit requirements, the agent would still build its decision on a robust statistical model. This can either be built in house or bought from an external vendor like Transunion, Dun and Bradstreet, or FICO. It is important that the data sources are clean and holistic and include, at minimum, transaction history, repayment performance data, and external credit bureau reports.
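For illustration, a minimal scorecard-style model could be trained along these lines with scikit-learn; the column names are placeholders, and a production model would of course go through full validation, calibration, and audit.

```python
# Sketch only: column names are placeholders and the data frame is assumed to
# already contain cleaned transaction, repayment, and bureau features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["avg_monthly_inflow", "missed_payments_12m", "bureau_score"]

def train_scorecard(history: pd.DataFrame) -> LogisticRegression:
    """Fit a logistic regression as the statistical backbone of the scorecard.
    Coefficients stay inspectable, which keeps the model auditable."""
    model = LogisticRegression(max_iter=1000)
    model.fit(history[FEATURES], history["defaulted"])
    return model

def probability_of_default(model: LogisticRegression, applicant: pd.DataFrame) -> float:
    return float(model.predict_proba(applicant[FEATURES])[0, 1])
```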
The reality, though, is that while a FICO score is generally useful and predictive, it does not really give a full picture. What if the applicant has never used credit before? What if the last credit event was seven years ago? Therefore, it always makes sense to augment the decision-making with a set of business rules.
Business rules/Reasoning: In a decision tree structure, policy rules are quite easy to implement. You ask a question that can be answered with yes or no, e.g., is the applicant below 20 years old and the CEO of a construction company? The problem is that next-generation AI agents do not have a clear business rule interface. How do they make decisions then? As I have shown in my game-theoretical work, you can't just give the rule to the agent and have it reason over it. Even in early 2025, it is quite clear that this would not provide results that one would put into a production workflow that ultimately makes a credit decision. One possible approach is to fine-tune the model with exactly these policy rules in JSON format, but it likely makes the most sense to keep the policy rules in the rule engine and provide the output to the agent's context.
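A rough sketch of that last option: the rule engine evaluates the policy rules deterministically, and only the verdict is serialized into the agent's context. The rule IDs and conditions here are hypothetical.

```python
import json

# Hypothetical rule set; in practice this lives in the existing rule engine.
POLICY_RULES = [
    {"id": "R-017", "description": "Applicant under 20 and CEO of a construction company",
     "check": lambda a: a["age"] < 20 and a["role"] == "CEO" and a["industry"] == "construction"},
    {"id": "R-031", "description": "Exposure above single-obligor limit",
     "check": lambda a: a["total_exposure"] > 250_000},
]

def evaluate_rules(applicant: dict) -> dict:
    """Run every rule deterministically and return a machine-readable verdict."""
    fired = [r["id"] for r in POLICY_RULES if r["check"](applicant)]
    return {"rules_fired": fired, "policy_verdict": "REFER" if fired else "PASS"}

# The agent never sees the rule logic, only the outcome, serialized into its context.
rule_output = evaluate_rules({"age": 19, "role": "CEO", "industry": "construction",
                              "total_exposure": 40_000})
print(json.dumps(rule_output))  # {"rules_fired": ["R-017"], "policy_verdict": "REFER"}
```

The agent reasons over the verdict, not over the rule book, which keeps the rules governable in one place.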
Loan Application / Context: When the agent picks up a loan application, its decision-making must be shielded from any other credit application data that might poison the reasoning process. This is also one of those cases where you want the model to be reliable and stable. There is a model risk governance component that is amplified when it is not possible to sandbox the model. You can, of course, just link to a future version of GPT via API, but you run the risk that the system responds differently to the same prompt in the same context. For proper credit decisioning you need the system to be deterministic, i.e., given the same information it must provide the same response. To allow for a deterministic system, it is ultimately critical to manage context, and in the case of loan applications, all components of the applicant and the deal are considered context.
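One way to sketch that context discipline: whitelist the applicant and deal fields, pin the model version, and set the temperature to zero. The field names, model identifier, and request shape below are assumptions for illustration, not a specific vendor's API.

```python
from dataclasses import dataclass, asdict
import json

# Only these fields may reach the model; anything else is stripped out.
ALLOWED_FIELDS = ("applicant_id", "income", "requested_amount", "term_months",
                  "scorecard_score", "policy_verdict")

@dataclass(frozen=True)
class DealContext:
    applicant_id: str
    income: float
    requested_amount: float
    term_months: int
    scorecard_score: float
    policy_verdict: str

def build_context(raw: dict) -> DealContext:
    """Whitelist the fields so other applications' data cannot leak into the prompt."""
    return DealContext(**{k: raw[k] for k in ALLOWED_FIELDS})

def decision_request(ctx: DealContext) -> dict:
    """Pin the model version and set temperature to 0 to keep responses repeatable.
    The request shape loosely mirrors common chat-completion APIs; it is not a real client."""
    return {
        "model": "agent-model-2025-01",  # pinned version, never "latest"
        "temperature": 0,
        "messages": [{"role": "user",
                      "content": json.dumps(asdict(ctx), sort_keys=True)}],
    }
```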
Historical data: It should be no surprise that we give the agent access to the applicant's and the application's historical data via a tool. However, as of early 2025, there is hardly a difference between tool use and an API call, so I treat providing the information to the agent as pretty much the same task either way.
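As a sketch, the "tool" can be little more than a typed wrapper around the same REST call; the endpoint, schema, and return values below are hypothetical.

```python
# Hypothetical tool definition: the agent calls it exactly like any other API.
HISTORY_TOOL = {
    "name": "get_applicant_history",
    "description": "Return prior applications and repayment history for an applicant.",
    "parameters": {
        "type": "object",
        "properties": {"applicant_id": {"type": "string"}},
        "required": ["applicant_id"],
    },
}

def get_applicant_history(applicant_id: str) -> dict:
    """Underneath, the 'tool' is just the same REST call we would make anyway."""
    # response = requests.get(f"https://core-banking.example/applicants/{applicant_id}/history")
    # return response.json()
    return {"applicant_id": applicant_id, "prior_loans": 2, "late_payments_24m": 0}
```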
In closing
To answer the question: they can be, if provided the right guardrails and governance structure. We need to realize that with continuous advancements in agent tech, the question is no longer if companies should adopt AI-driven decision engines, but how fast they can do so without facing fines and disrupting their operations. Most financial services firms are likely already operating agent innovation projects to evaluate how they can use agents in their workflows. The reality is that regulations are in place for a reason, and most new entrants would be well advised to ensure that they understand why those regulations exist. Otherwise, they will pay the price.