Abstract: In this panel discussion, industry leaders explored the real-world impact and future of AI in financial services, distinguishing between overhyped areas like generative AI and underappreciated innovations such as agentic AI. Panelists emphasized that while large institutions benefit from budget and talent, smaller financial institutions can leverage cloud-native systems and agile governance to implement AI more efficiently. The conversation highlighted the importance of adopting AI holistically—integrating people, process, and technology—while cautioning against deploying solutions without clear business value or user adoption strategies. Looking ahead to 2030, panelists envisioned AI enabling fully automated, personalized financial services but warned that progress could be hindered by legacy infrastructure, data quality issues, and slow regulatory adaptation. To accelerate innovation, they advocated for better talent retention in Canada and faster adoption of open banking frameworks.
Thomas Purves: Banking should be intelligent and automated. I don’t think Hannah from the last panel could have teed this up better, and I don’t know how anyone could disagree with that statement. I also agree with Gary: I think AI is at the heart of where we want those solutions of the future to live and what we want them to be enabled by. But as much as I think AI is as transformative for our industry as mobile and the Internet were before it, we’ve got to be honest: there’s also a tsunami of hype and, frankly, BS around AI that we’re probably all fatigued from hearing about. So I’ve got this esteemed panel here today who are going to help us cut through some of that buzz and hype and hopefully give you some straight facts you can use.
So I’m going to kick this off with a quick round-robin question I’m going to ask each of you personally. From your own perspective, what’s one area of AI right now that you think is overhyped, and one that’s underhyped?
Rob Dunlap: Okay, so I’m actually going to use the same word, which maybe is a little unconventional. I think AI agents are overhyped right now. It feels like every single point functionality, API, whatever it might be that exists in a workflow is all of a sudden an AI agent, which is maybe a little disingenuous and very hype-ish. But agentic AI, AI that actually cycles through multiple iterations to solve a problem and find the most optimal solution, I think is vastly underhyped right now, probably a little bit misunderstood, but it’s going to have a dramatic impact on our industry and across the technology space.
Janet Lin: So in the last 12 to 18 months, across industries, whether it’s finance, telecom, or retail, a lot of AI use cases have been applied to customer service. We heard a lot about that; everybody has implemented some kind of AI application in the customer service journey. However, I think in the next 6 to 12 months the real AI use cases should go broader and deeper. Broader, from a banking perspective, into underwriting, sales, marketing, and servicing, as well as fraud and compliance use cases. And deeper, in terms of the intersections between different lines of business, including client-facing and back-office functions.
Simon Sun: Before answering, I think it’s important to define AI in three parts: generative AI (interfaces like chatbots), traditional AI (machine learning and predictive models), and agentic AI (decision-making systems). Generative AI is the most overhyped—many people overestimate its capabilities, especially in decisioning, where bias and hallucinations remain serious issues. A U.S. study even showed people of color needed a 50% higher credit score to get loan approval. The most promising but underhyped area is agentic AI, which combines different AI types to power full decision flows, though it’s still early and developing.
Thomas Purves: Rob, that’s a good segue. Generative AI may be new, but predictive AI has been around for a long time, and we’re just beginning to unlock how to combine those approaches. You’ve been in AI for more than a decade. How has working in this space changed, and what’s different now versus what has stayed the same?
Rob Dunlap: Yeah, I think Simon’s framing of the terms is really important, and it’s important to understand that just because we have this fantastic new technology, generative AI, large language models, foundation models, et cetera, it doesn’t mean it’s the right or proper application for every problem. It’s not going to solve everything. But when you take all these technologies together, along with more standard IT technologies, you can develop and build solutions we could never build before. When we look back 10 years, or even just five years, we think about statistical modeling, forecasting, optimization. No one would really be up on a panel in front of this type of audience talking about that very much.
There wasn’t as much hype, and it was the domain of the data scientist, the mathematician, the statistical modeler, whoever it might have been. The really interesting thing that I think has changed because of generative AI is democratization. Put up your hand if you haven’t used a large language model. Hopefully no one puts up their hand, right? That’s pretty staggering. We’re not going to ask everyone who has used one to raise their hand, because it would be overwhelming. That’s a really fantastic and remarkable factor for all of us as we figure out how it’s going to change the way we work and, obviously, the way we live.
But to Simon’s point, it also needs to be something we trust, something that is grounded and is going to be reliable for us. For some use cases there isn’t much of a risk profile. If you’re asking it for recommendations on a recipe, there isn’t much risk. But obviously the things we’re here talking about today carry a much higher risk profile, and we need to ensure they’re safe enough to leverage.
Thomas Purves: Janet, getting down to practicality: when it comes to actually implementing AI projects and programs, smaller institutions and larger ones come to the table with different resources and different constraints. If you’re a smaller bank versus a larger bank, how do you play to your strengths and think about what your individual AI strategy should be?
Janet Lin: I would like to say that AI, at the end of the day, is a tool, a disruptive new tool, and generative AI in particular only came to market in the last two years. Large banks do have the advantage of resources, in terms of budget and in terms of attracting talent. We know that two of the five big banks have acquired companies that build LLMs to blend into their business, and some banks have not only acquired solutions but started to build their own internal LLMs with hundreds of AI engineers and PhDs. That’s their advantage. On the other side, they also have large, diverse data sets, but that comes with expense: how to train models on those large, diverse data sets is a problem they also need to solve.
From a smaller FI perspective, the advantage of the small FIs is that, from a technology standpoint, we are much more cloud native; we don’t have that much legacy. From a governance and engagement perspective, and keeping in mind that AI is not just a technology transformation but an organizational transformation, line-of-business engagement is much more streamlined and simplified compared to the large banks, and the governance structure is also more streamlined. From a data perspective, a lot of the small FIs’ data is cloud native, which makes it much easier to leverage for training models. And from a delivery perspective, small FIs normally have a more agile delivery approach that drives business outcomes faster.
So those are the pros and cons. At the end of the day, AI is there for us to leverage in solving a business problem. We just need to understand the pros and the cons, the efficiencies and the constraints of the tool, and make our best use of it.
Rob Dunlap: Do you mind if I build on that just a tiny bit, Tom? Those are all super well-made points, Janet. And as you mentioned, Tom, I’m a retired, or rather a recovering, consultant after 10 or 12 years in the AI consulting space. I made the change to move to a SaaS company, to move to Zafin, because what I see in the future is a lot of these AI tools and capabilities being embedded in the technology that you may choose to buy.
And because AI, generative AI specifically and how we put it together with agentic AI, is such an accelerant for innovation, the ability of a smaller FI to buy a best-in-breed point solution that carries really strategic AI capabilities with it is far greater than it was over the last decade, and certainly before that. So while the big FIs are going to continue to want to build their own technology, and they’re going to invest millions and billions in that, the ability of any challenger organization to leverage emerging technology from SaaS companies is going to be a really interesting dynamic and accelerant for them.
Thomas Purves: Yeah, we’ve definitely seen those challenges. There are some emerging players, whether fintechs or SaaS vendors, who have the technology acumen but don’t have the data. And then you have the other side of the coin: institutions that have the data but don’t necessarily have the acumen, or are too encumbered by legacy to make effective use of it. We need to find ways for those parties to collaborate.
Rob Dunlap: Talk about the foundation for a great partnership, right? Yeah. Totally symbiotic.
Thomas Purves: Now, leading into a question I want to ask you, Simon. I was on this stage last year for a very similar panel. We had some great bankers up here talking about how they’d been running a ton of experiments in AI. They were really excited about the potential, but couldn’t talk much about anything that was in production yet. I think this is emblematic of something I’ve seen pretty widely: it’s easy to spin up experiments, or to create a corporate mandate that says we need to start playing with AI, but a lot of the things we play with or start to build don’t necessarily grow up to be robust, production-ready, useful tools.
So Simon, with the benefit of your perspective, what do you think it takes for an AI program, project, or experiment that you spin up to go the distance and make it all the way to production?
Simon Sun: It’s a great question. I totally agree with Janet’s point. When we talk about an AI project, it’s not really a technology project. We have to look at it holistically as an organization, thinking about people, process, and technology. If you just focus on the technology itself, a lot of things will go sideways that you may not even expect. So having that holistic view when you design the whole process, the whole project, is critical. The second thing is to stay focused on what matters. At the end of the day we’re financial institutions, right? We’re banks, we’re insurers. So what really drives the revenue for the business? To Janet’s point, whether it’s a gen AI project or any other AI project, AI is just a tool, right?
As fancy as the name may sound, and as excited as investors might be to hear that you’re doing something innovative, it essentially has to come back to real value, to first principles. What kind of value are you bringing to the organization? Don’t create a project just for the sake of saying, hey, I have a gen AI model or a large language model. Then what? So what, right? Focus on the foundational stuff; I think that will be very meaningful. The other tip I can give is to start small, for several reasons. One is that these gen AI and agentic AI processes are usually more engineering-focused than statistically or mathematically focused.
So if you’re an engineer, whether a computer engineer, an electrical engineer, whatever, you know that starting small with thorough testing really helps you identify potential risks at an earlier stage. The other reason small-scale testing is important is that we’ve all seen how powerful this new technology can be. If you want to be a responsible organization, putting guardrails in place is very important, especially since most AI technologies are like a black box and we have to deal with regulators. Really understanding the boundaries, the guardrails, is more important than just implementing the fancy technology itself.
Thomas Purves: Anything you guys want to pile onto that? Janet, what’s your bar for something that’s going to get the green light to be deployed?
Janet Lin: Yeah. As a technologist who has been in software engineering and technology implementation for 20 years, I’d like to share that the whole AI delivery methodology is very different from the traditional SDLC. Number one, it’s a cross-functional, organizational transformation. At the same time, the architecture of how we design generative or agentic AI needs to be thought through centrally so we can create scalable, reliable solutions; you do not want to lose that long-term architectural view of your AI solutions. That’s one. Two, rethink your traditional SDLC and give it more of an AI flavor: address the specific concerns around inaccuracy that AI brings, and put in additional controls, model validation, and ongoing monitoring.
Those are things you do not necessarily need to do in traditional software engineering but that come into play on the new AI journey. So, just to share those two points.
Rob Dunlap: Yeah, those are both great points, and I’d just add one more. Traditionally, and Simon, I’d be interested in whether you agree with this, what I’ve found is that one of the biggest challenges with AI projects is adoption. You build a really fantastic application, it does some fantastic optimization or acts as an assistant or whatever it might be, and then people don’t use it. And you think to yourself, wow, we just spent a million dollars, or whatever amount of money, and no one is actually leveraging the technology. Often that’s because the users are a little bit of an afterthought, whether it’s intentional or not. You’re not embedding it in a workflow that’s intuitive for them, the user interface isn’t right for them to navigate, or, as Simon mentioned, if a decision is a black box, I need to understand why.
Why is it recommending this? If it can’t tell me, then I’m going to trust my own intuition. One of the really remarkable things about generative AI specifically is that our ability to build prototypes has accelerated from months to weeks or potentially days; everyone in the room has probably heard about vibe coding. If you’re building an AI application for end users, there’s no excuse not to show it to more than just your champion users. Show it to lots of users. Maybe outside of the customer service space your customers won’t necessarily want to see your MVP, but show it to the users, get their feedback, and make sure it’s intuitive for them, even if the back end isn’t actually functional or built yet, so that when you push it to production you know it’s actually going to be useful and they’re going to adopt it.
And then as a result you’re going to be able to achieve the ROI that you associated with it.
Simon Sun: Yeah, I agree 100%, Rob. I work at a software company, so no matter how fancy the solution our R&D team develops, it really depends on the customer: whether they’re going to like the feature or not, whether they’re going to use it or not. So the adoption phase is definitely the most critical one. And to your point, that comes through several channels. You may have an advocate within the organization who really helps you roll out the project to different users. Another thing I was thinking is that with the powerful interactivity we have with gen AI, it’s much easier for users to test without reading all the manuals.
So hopefully, if you’re rolling out the project to a small group of people, that can make things faster, because it’s easier to interact with the software and the solution. But definitely get more from the user perspective and seek their feedback, so you can have a higher ROI eventually.
Thomas Purves: I think that’s all very insightful, and it also depends on what your benchmark is. Are you expecting too much of the AI, or do you just need it to be better than what it’s replacing? I have a friend who is a CEO at a major brokerage in the US, and they had an AI application they were going to launch. It was actually the engineering team that said they weren’t comfortable launching it yet, because they knew it wasn’t perfect and would make mistakes sometimes. My friend said, guys, I have 7,000 customer service agents around the world. Do you have any idea how many escalations I deal with on a daily basis for all the screw-ups and mistakes that those 7,000 humans make?
It just has to be a little bit better than that and we’re winning, with the understanding that the failure modes will be different from the mistakes humans make. It comes down to what your expectations are, whether you understand the nature of it, and how you manage that risk. Now I want to turn the conversation back to the idea that banking should be intelligent and automated. Cast your mind forward: say it’s 2030 and AI has been highly effective in transforming the customer experience and the products we’ve come to know in banking. What does that world look like for you?
Simon Sun: I don’t know if we’re able to predict 2030; even next year, I think, will be very difficult to predict.
Thomas Purves: Give it enough time, give it enough time that AI has transformed banking. What would the good case look like?
Simon Sun: Yeah, I don’t want to predict the technology itself, what will be available and what won’t, but if we focus more on the outcome, on what can be better, I think it’s responsibility: how are we going to build trustworthy AI? I think the financial industry, and AI in general, is very similar to the pharmaceutical industry. We all know how powerful a drug can be, and we all know how dangerous a drug can be as well. I don’t think anybody will buy a drug over the counter without knowing it has been approved by the FDA or Health Canada or whoever. Pharmaceutical companies definitely go through several phases of clinical trials when they develop new drugs. I don’t think we are that mature from a regulation perspective for AI in general.
I know OSFI and other regulators are doing as much as they can to design a governance framework, but I think a lot of the information, the requirements, has to come from the doers. If you are building these applications, you understand what the pain points are, and you understand how powerful and how dangerous this technology can be. So how can you embed bias assessment, transparency, explainability, and all those good things into the building phase of your technology? That’s something I’m expecting to see.
Thomas Purves: And from your perspective, thinking about the end value to consumers and businesses, what are you thinking AI should eventually be able to unlock in terms of that value?
Janet Lin: So I’ll go back to my earlier point: AI is just another disruptive technology; it’s not something too fancy. Think about the cloud 10 years ago, or the Internet 20 or 30 years ago. Generative AI just came up in the last two years, and with any disruptive technology, when it first arrives the controls and validations are not mature. This is normal, and we need to be comfortable with that type of disruptive technology. And if your question is about 2030, we have five years; the runway is long.
I’m very confident the technology will mature, because the foundation of the technology is so powerful and robust, and the controls, the regulations, and the governance will strengthen super fast in the next couple of months or years. So back to your question, with that bigger context: what I would imagine from a consumer-facing banking perspective in 2030 is that anything you can do by walking into a branch today should be able to happen digitally with this technology. When you open a bank account, a virtual agent will be there to assist you, navigating you through account opening, your wealth, any financial needs. That, to me, is 2030.
Rob Dunlap: For me, it’s truly personalized customer service, where your bank, or whoever the FI is that’s supporting you, feels like they truly understand your needs, can bundle and bring together products and services to support your individual needs, and actually service you in a way that is, on an ongoing basis, truly adapted to you. Today we can’t do that. For one, in my view, we can’t eke out a much better digital experience; there are only so many more millions of dollars we can build into mobile and web apps. And at the end of the day, the human experience is not scalable; there are only so many one-on-one interactions you can have. So I think that’s one piece, the customer-facing piece. The second piece is for employees, for the people who actually work within FIs: it should feel like a partnership.
If everything goes really well, they’re going to feel like they still add a huge amount of value for end customers, that they play a pivotal role in actually providing the services of the FI, but that they’re being assisted and supported every step of the way by an AI solution in a way that is a true partnership for them.
Thomas Purves: I want to wake up in this world, whether it’s 2030 or whenever, where even as a Canadian, as a consumer or as a business, when money comes into my account it is automatically allocated to the right deposit, the right credit, or the right wealth management account on the fly. My expenses are managed and optimized, my cash flow is forecasted and financed, and if I need credit, something is sourcing that for me. If I have excess cash, something useful is being done with it. It’s automagic, it just happens, and I’ve got agents doing this for me. In my small business, I never have to manually reconcile payments between the books again; it just works. That would be great, and I can imagine agents and AI helping a lot in that solution.
But let’s say it’s 2030 or so and that’s not the world we live in, or at least not the world anyone in Canada lives in. What do you foresee as the number one blocker, the biggest impediment to reaching the end-state goal I just described?
Rob Dunlap: I can start: technical debt and legacy systems. I think that’s a huge thing that basically every FI, maybe with the exception of some smaller or challenger FIs, is going to really struggle with over the next five years. I also think it’s a unique opportunity, because the technology now exists to do things like hollow out and squeeze your core if you’re a major banking or lending client. That’s one of the reasons I’m so excited to be at Zafin now. There’s a huge opportunity for us to use this new technology to modernize in ways that previously we felt carried too much enterprise risk. So that’s something that could hold us back, but hopefully it will actually become a win in terms of reducing operational risk.
Simon Sun: I agree with what Rob just mentioned, but on top of that I’ll add another barrier: data. We may take it for granted; data is everywhere, right? But believe it or not, we could use up the available data sooner than we think. And the other thing: if we don’t believe the society we live in is perfect, I think we can also accept that the data being used for model training is not perfect either.
Thomas Purves: So, speaking of data as the oxygen for AI, what we heard from the earlier panels is, you know what would be great for data? Rich, widely available open banking data on payments in Canada. I don’t know what happened in this country. I was in the US for a long time, and I come back and it’s like nothing has moved in the last 10 or 15 years, while the rest of the world has advanced in those domains and we’re very slowly playing catch-up.
If you had a 30-second elevator pitch to Mark Carney, what from a policy perspective do you think would make a difference: less regulation, more regulation, or more of a kick in the pants for the industry? What would it be? Anyone want to take that?
Rob Dunlap: I won’t pick up the data one; I’ll leave that for one of you two. I think AI right now is a talent war, and we have so much fantastic talent in Canada. We build so much talent through Canadian universities, and so much of it goes south of the border. Nothing against the United States, and I won’t get political at all, but we are also at a really unique time when people in the United States, and Canadians who have gone down to work there, are thinking, you know what, maybe this isn’t the perfect thing for me. So I think we should have, and should continue to have, an even more aggressive strategy of supporting and bringing talent into Canada and growing organizations in Canada. I think that’s the most strategic thing we could do in AI.
AI Agents Are Overhyped, But Agentic AI Is Underhyped
Several panelists agreed that “AI agents” are being overused as a buzzword. However, true agentic AI—where systems iterate toward optimal solutions autonomously—is still in its early stages and vastly underutilized despite its transformative potential.
AI Hype vs. Practical Use
There’s wide industry consensus that AI is as transformative as the Internet or mobile, but we’re currently experiencing a “tsunami of hype.” Many institutions struggle to move beyond experimentation into scalable, production-ready solutions that create real value.
Generative AI Democratizes Access, but Not Trust
Generative AI has made AI more accessible across roles and organizations, but trust, transparency, and risk management remain major adoption barriers—especially in high-stakes applications like lending or customer decisioning.
Large Banks vs. Small FIs: Different Strengths
Large banks have deep budgets, in-house talent, and broad data sets—but they also face complex governance and legacy tech constraints. Smaller FIs are more agile, cloud-native, and often able to adopt off-the-shelf AI solutions faster by partnering with SaaS providers.
Data Quality and Legacy Systems Are Critical Blockers
Access to rich, clean, and diverse datasets—as well as overcoming legacy infrastructure—are major hurdles to meaningful AI adoption. Institutions cannot build trust or effectiveness in AI without addressing both.
AI Projects Need Organizational Buy-In, Not Just Code
Successful AI implementation requires cross-functional transformation—not just tech upgrades. Adoption suffers when user needs, workflows, and interface design are treated as afterthoughts, especially if decisioning remains a “black box.”
Guardrails and Governance Are Still Maturing
Like pharmaceuticals, AI can be powerful or dangerous. Embedding bias checks, explainability, and model validation into the development phase—not just after deployment—is essential for responsible AI use in finance.
Canada Risks Falling Behind in Open Data & Talent Retention
Slow progress on open banking and poor data access is stalling innovation. Meanwhile, Canada is losing top AI talent to the U.S. The panel advocated for stronger talent retention strategies and smarter policy to keep the country competitive.
The Vision for 2030: Automagic Banking
In the ideal future, banking is invisible, intelligent, and automatic. Tasks like account optimization, expense management, credit sourcing, and reconciliation are all AI-driven in real-time—freeing consumers from manual financial management.
Adoption Trumps Perfection
AI doesn’t have to be flawless to be valuable—it just has to be better than current manual systems. Institutions must balance risk tolerance with practical progress, or risk falling into analysis paralysis while competitors move ahead.