Under the Microscope: AI Credit Decisioning

Panelists:

  • Jose Guido, Sr. Advisor, Credit Risk, Deloitte (moderator)

  • Sarah Sun, Sr. Director, Risk Modelling, RBC

  • Matt Browning, President, Trust Science

  • Dinesh Mistry, SVP, Risk & Strategy, Northlake Financial

Abstract: In this panel moderated by Jose Guido (Senior Advisor, Credit Risk, Deloitte), leaders across the AI decisioning stack discussed how artificial intelligence is already influencing real credit decisions in automotive lending. Sarah Sun (Senior Director, Risk Modelling, RBC) brought the bank risk and regulatory perspective, Dinesh Mistry (SVP, Risk & Strategy, Northlake Financial) shared the fully automated non-bank lending perspective, and Matt Browning (President, Trust Science) covered the technology and analytics underpinning modern credit models. The discussion moved beyond buzzwords to focus on practical use cases, organizational readiness, governance expectations, and how AI—if implemented responsibly—can improve both risk management and the consumer lending experience.

👉 Check out the full VIDEO here.


Jose Guido: AI is live. It’s here. It’s not something that’s coming—it’s already influencing real credit decisions. That’s exactly what we’re here to unpack today. So maybe we start with a simple question: what actually counts as AI in your organizations? And how do you separate the hype from the real work being done?


AI in lending is less about headlines and more about better risk visibility

Matt Browning (Trust Science): For us, AI isn’t primarily about large language models or the things getting the headlines. What we focus on is enabling lenders to apply data tools and techniques differently within their lending operations.

Often lenders have a concept or opportunity they want to pursue. Our role is helping them quantify the value of that opportunity and then applying the right data sources, modeling techniques, and tools to achieve it.

That might mean incorporating cash flow data, bringing in new datasets outside traditional bureaus, or using different modeling approaches to create a much more layered view of risk. It’s not necessarily about flashy AI—it’s about building a deeper and more refined understanding of borrower behavior.


Fully automated lenders view AI as an operational tool—not a buzzword

Dinesh Mistry (Northlake Financial): One thing lenders have to be careful about is getting swept up in the hype. You really need to understand what AI means for your specific business.

At Northlake, our decisioning is 100% automated—we have no credit underwriters. So AI for us is part of a broader journey: a tool that extends and amplifies what we already do well.

We use AI across several parts of the organization—from components of the decision engine to how we handle customer interactions. But our approach is very deliberate. Every deployment starts with a thesis: if we do this successfully, what outcome do we expect and how does it move the business forward?

The key is discipline. AI is powerful, but it needs to be tied to real use cases and measurable outcomes.


Banks have used AI-like techniques for decades—but the new wave is changing productivity

Sarah Sun (RBC): I actually think about AI in two ways.

First, many of the techniques we now call AI—things like neural networks—have existed in credit risk modeling for decades. Data scientists have been using versions of these tools since the 1980s.

But the second wave is different. The newer generation of tools—especially co-pilot style technologies—is dramatically accelerating productivity.

For example, our team recently used an AI co-pilot to complete work in three days that normally would have taken three or four months. That’s when the shift really becomes tangible.

And if your organization doesn’t yet have the infrastructure to support modern modeling techniques—things like embeddings, neural nets, or scalable compute—you’re already behind.


From theory to practice: how AI is actually used in credit decisioning

Jose Guido: Let’s move from definitions to practical use cases. How are you using AI in day-to-day credit decisioning?

Dinesh Mistry (Northlake Financial): Credit risk has evolved significantly over time. Early on it was based on gut instinct from experienced lenders. Then we moved into expert models and logistic regression. Later we added business intelligence and deeper analytics.

AI is simply the next step in that evolution.

But the most important caution is making sure the intelligence doesn’t become “artificial” in the wrong way—meaning detached from the business reality.

One of the biggest opportunities AI creates is reducing cycle time in data preparation, which is historically the slowest part of model development. AI tools can help clean, normalize, and identify patterns in data that previously required enormous manual effort.
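To make the data-preparation point concrete, here is a deliberately simplified sketch of the kind of cleaning and normalization work being automated. The field names and rules are hypothetical illustrations, not any panelist's actual pipeline:

```python
# Illustrative only: tiny normalization helpers of the sort that AI-assisted
# data-prep tooling can generate and apply at scale. All rules are invented.

def normalize_income(raw: str) -> float:
    """Convert messy income strings like '$52,000 ' or '52k' to a float."""
    s = raw.strip().lower().replace("$", "").replace(",", "")
    if s.endswith("k"):
        return float(s[:-1]) * 1000
    return float(s)

def normalize_record(record: dict) -> dict:
    """Clean one loan-application record into a consistent shape."""
    return {
        "applicant": record["applicant"].strip().title(),
        "income": normalize_income(record["income"]),
    }

raw = {"applicant": "  jane doe ", "income": "$52,000"}
print(normalize_record(raw))  # {'applicant': 'Jane Doe', 'income': 52000.0}
```

In practice this logic spans hundreds of fields and edge cases, which is why automating it compresses the slowest phase of model development.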

Our philosophy is simple:

  • Crawl, walk, run: start small, validate results, ensure humans understand the outputs, and then scale.

  • Speed to change: if a deployment doesn’t work as expected, we need the ability to quickly undeploy or pivot without damaging the customer experience.

AI is powerful—but it has to be implemented responsibly.


Matt Browning (Trust Science): The biggest challenge we see isn’t technical. It’s organizational.

Many lenders hesitate because they think they need a perfect solution from day one. But that’s not how this works. The key is simply starting somewhere.

Pick a use case, experiment, learn, and iterate.

Because if you keep doing what you’ve always done, you’ll actually get worse results than you’ve always gotten—because the rest of the industry is moving forward.


The hidden challenge: balancing AI ambition with organizational reality

One of the biggest tensions discussed during the panel was the pressure some organizations feel to promise massive AI-driven results immediately.

Dinesh Mistry (Northlake Financial): Sometimes leadership needs to separate short-term excitement from long-term sustainability.

You can deploy something quickly and see early success—but if the foundations aren’t there, the downside can be severe.

AI is exciting. But building the foundation for AI is not glamorous work. It requires discipline around data, infrastructure, governance, and talent.

Running too fast without those foundations can create serious problems later.


What AI readiness actually looks like

Jose Guido: If readiness isn’t the flashy part of AI adoption, what does it actually involve?

Sarah Sun (RBC): One thing I’m noticing across organizations is that many teams are limiting themselves to a single AI tool. But different tools have different strengths.

Experimentation across tools often exposes where your real gaps are—especially in areas like data infrastructure or compute capacity.

The key point is that experiments only matter if they can eventually be productionized. That requires scalable data systems, modern compute environments, and teams capable of operationalizing what they build.


Dinesh Mistry (Northlake Financial): Culture is just as important as technology.

AI adoption cannot simply be mandated overnight. Many employees still fear the technology because they don’t fully understand it.

Organizations need to bring people along on the journey. That means building collaboration between risk teams, operations teams, and model developers.

In many lending organizations today, human judgment quietly compensates for model limitations. If you remove those humans, you might discover your model isn’t as strong as you thought.

AI implementation requires transparency, collaboration, and a culture that supports learning.


Matt Browning (Trust Science): None of this works without leadership commitment.

AI adoption cannot be isolated in one small team. It requires organization-wide participation, tool access, and leadership support to drive adoption.

There’s no “AI team in the corner” that can magically transform the entire company.


Regulation and governance remain critical guardrails

Jose Guido: Let’s talk about regulation. How do regulators fit into this evolving landscape?

Sarah Sun (RBC): Regulators are actively trying to understand AI just like the rest of us. They’re engaging with the industry, hosting discussions, and working to develop frameworks.

But strong governance remains essential.

At RBC, AI is treated as a tool for modelers, not an autonomous decision-maker. Every step—from data input to final credit decision—passes through checks and balances involving model validation teams, credit strategists, and business partners.

If automated decisions deteriorate in quality, regulators will quickly start asking difficult questions.


Dinesh Mistry (Northlake Financial): Whether you’re regulated or not, you still need a strong moral compass.

Even with automated systems, decisions must remain explainable.

AI systems can unintentionally introduce bias because they learn patterns from historical data. That means models must be constrained and governed carefully.
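One simple form that governance monitoring can take is comparing approval rates across applicant groups, sometimes called a demographic-parity check. The sketch below is a hypothetical illustration of that idea, not a description of any panelist's actual controls:

```python
# Hypothetical sketch: compare approval rates across groups as one
# basic bias check. Real governance frameworks use many such metrics.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rates(sample))  # group A approves at twice the rate of group B
```

A large gap between groups does not prove bias on its own, but it is the kind of signal that should trigger the human review and model constraints described above.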

And the idea that you can simply buy datasets, feed them into an AI system, and generate a working credit model is a myth. It doesn’t work that way.


Matt Browning (Trust Science): I’m often more concerned about the consumer and the business partner than I am about regulators.

If you keep the consumer at the center of your decisions—and think about headline risk—you’re likely moving in the right direction.

It’s also important to remember that non-banks and banks are not “regulated versus unregulated.” They’re simply regulated differently, which is why non-banks often move faster.


What excites the panel most about the future of AI in auto lending

When asked what excites them most about the future of AI, three themes emerged.

Matt Browning: Real-time data and real-time decisioning. The ability to detect events—like payday loans or fraud signals—as they happen could fundamentally change credit risk management.

Dinesh Mistry: The biggest benefit may ultimately be for consumers. AI can reduce friction in the lending journey and help lenders introduce verification steps only where they’re actually needed.

Sarah Sun: I’m excited about the possibility of making processes simply easier. Today’s workflows contain endless exceptions and friction points. AI tools can smooth those processes and bring consistency across the system.


Advice for lenders beginning their AI journey

As the discussion wrapped up, panelists offered a final piece of advice to lenders still evaluating AI adoption.

  • Give teams access to tools and encourage experimentation.

  • Build a culture that removes fear and encourages collaboration.

  • Start small and build momentum.

Jose Guido: Even for skeptics, AI is already proving to be a powerful productivity tool. What lenders build on top of that productivity will ultimately determine who leads the next phase of credit decisioning.


Here are 10 key insights from the panel:

  1. AI is already influencing real lending decisions
    The conversation has moved beyond experimentation into real operational impact.

  2. Many “AI breakthroughs” are actually better data practices
    Alternative datasets and improved feature engineering often deliver the biggest gains.

  3. Crawl-walk-run adoption is safer than rapid transformation
    Gradual deployment reduces risk to both lenders and consumers.

  4. Data preparation is one of the biggest opportunities for AI
    Automating data cleaning and normalization can significantly accelerate model development.

  5. Infrastructure readiness determines long-term success
    Data systems, compute capacity, and production environments are critical foundations.

  6. Culture may be the biggest adoption barrier
    Fear and misunderstanding can slow progress more than technology limitations.

  7. Explainability must remain central to AI-driven decisions
    Credit models must remain transparent and governed.

  8. Regulators are learning alongside the industry
    Open collaboration helps reduce friction and improve outcomes.

  9. Non-banks are currently moving faster than traditional banks
    Structural differences in governance and regulation drive this divergence.

  10. Consumers stand to benefit the most if AI is implemented responsibly
    Reduced friction, faster decisions, and improved fraud detection could significantly improve the lending experience.

Sign up for the CLA Finance Summit Series