
Welcome to the Jungle: Agentic Systems in Financial Services

Step Three in Understanding the Power of Agentforce in the Financial Sector

A Field Guide to Agentic AI: Financial Services Edition

Welcome back, explorers, to our third venture into the fascinating world of autonomous agents in their enterprise habitat. I’m your guide, John J.C. Cosgrove, and today we venture into dangerous territory – the financial services ecosystem, where evolution happens at the speed of market trades and only the fittest survive.

Introduction: The Perfect Habitat

If you thought the customer service savannah was dicey, strap in. We’re entering the dog-eat-dog jungle of financial services and banking – a sector I know intimately. Here, numbers aren’t just figures on a spreadsheet; they’re the lifeblood of institutions that have employed consultants of my ilk since before the advent of personal computing, let alone our latest evolutionary leap: the large language model.


What many don’t realise is that beneath those solid, masonry-bounded infrastructures we conjure when imagining banking institutions lies the beating heart of technological innovation. Almost every major competitive advancement either finds its way to the financial sector or is born there outright. The global economy’s reliance on the speed, precision, and intelligence of these systems has transformed them into hotbeds of innovation.

In the cutthroat world of investment banking and foreign exchange, single milliseconds of advantage translate into billions of dollars of potential upside. The scale of gains from seemingly marginal technological benefits has positioned the finance sector at the cutting edge of computational innovation since the dawn of computing itself.

Machine learning, contrary to popular belief, didn’t cut its teeth in Silicon Valley – it honed its predatory instincts on the hard flagstones of Wall Street and in the light-speed fibre optic cables of high-velocity trading. It’s fitting that we now turn our attention to the more intricate capabilities of Agentforce within this OG machine learning environment.

Walk with me, reader, on the wild side. To the financial sector…

The API Whisperer: Legacy Systems as Opportunity

One word: “legacy”.

Something of a dirty word in our profession of IT and transformation. We use it, I think, too often to classify the epic achievements and long nights of hard work of the preceding consultants we hope to replace. Perhaps that’s a touch cynical, but forgive me if my years of building sandcastles on the beach as the tide comes in leave me philosophical about the IT industry at large.

The truth is, some of the most magnificent thinking machines ever built – the great mainframes – remain at the beating heart of the global economy, lovingly maintained by the great banking institutions. These machines, with core architectures based on innovations 40, even 50 years old, are indeed legacy, but I’d argue it’s a legacy we should celebrate – one that has stood the test of time.

Field note: Celebrate, yes; but perhaps from a distance… if you need to integrate data from one of the steel behemoths, we can help you with that, but it’s not for the faint of heart. Where does the 770kg IBM z15 sit? Not in the cloud, my friend…

Here’s what most people mean when they characterise the existing landscape as legacy: these systems are old-fashioned and clunky compared to today’s UI and UX standards. We’ve become accustomed to a 15-second gratification cycle on our iPhones, forgetting that mere years ago, waiting a minute for an app to perform its magic was revolutionary.

But it’s this environment, of all industries, that we really need to watch. Why? Because dropping our new little creatures into this tool-dense, interconnected ecosystem of machine-optimised high-performance systems will reveal what this pattern of automation can truly achieve.

The financial services sector, operating under the highest regulatory scrutiny imaginable, built these systems for one purpose: to work at scale and work between systems. Not to be pretty, not to be fast, but to function reliably in a tool-dense ecosystem optimised for machine interoperability. This is how the banking sector has operated since its first adoption of computers, and it’s precisely why it’s uniquely positioned to unlock the potential of this new pattern of automation.

Tool Use: The Critical Evolution

The financial services environment presents us with a rich ecosystem of pre-built workflow systems ripe for orchestration. Consider the interface between sophisticated buyers – brokers and investment groups – and their expert account managers, advisors and support staff in a partnered institution. Account managers choreograph complex information flows about asset classes, instruments, and investments, ensuring the right data reaches the right people at precisely the right moment.

This represents a multi-layered B2B relationship chain where brokers accessing our portal often interface with multiple suppliers simultaneously, each providing specialised services. Our challenge lies in equipping this portal with comprehensive self-service capabilities that can handle sophisticated technical requirements while maintaining a competitive edge.

Let’s examine the core workflows that power this ecosystem, starting with that fundamental financial gatekeeping ritual: KYC – “Know Your Customer.” Modern banking regulation demands rigorous identity verification, varying by jurisdiction and entity type. This involves extracting specific information from individuals and verifying it against government repositories or third-party systems with precision and security.


We might then progress to credit checks, whether for loans or leveraged investments, which require bridging multiple services with the delicacy of a tightrope walker. This process demands additional documentation and follows strict sequences, often spanning multiple systems. Account status management follows, tracking funds under management through various KPIs: net inflows, outflows, total flows, and current valuations – dozens of heartbeat-like metrics that need to be tracked.

The system also handles market intelligence – monitoring communications, sentiment analysis, and interaction patterns. Many institutions employ machine learning for propensity modeling, cross-sell recommendations, and retention analysis. These components form a sophisticated network of interdependent services, each one a specialised creature in our technological ecosystem.

Finally, we have the operational workflows: transaction processing, complaint handling, and documentation management. Each represents a distinct API call, yet their real power emerges when orchestrated together. This environment, with its emphasis on machine interoperability and regulatory compliance, creates an ideal testing ground for agentic systems.

Now, let’s see what happens when our little agentic creatures pick up these instruments like a metaphorical stick about to be poked down the anthill.

Field note: I wanted to build up to a full 2001: A Space Odyssey moment here, but I’ve been informed by marketing that I’ve used up my obsessive nerd quota halfway through blog 2. For those in the know, the monkeys are picking up the metaphorical bone axes at this point and the trumpet solo has started…

KYC Flow – Simple LLM workflow with External API call

The KYC flow demonstrates the elegant simplicity of LLM tool use. The agent assumes the role of interrogator, collecting KYC information through natural conversation before passing it to verification systems. Done well, the effect is refined butler rather than bouncer, with the LLM asking probing and clarifying questions if things don’t go smoothly.


Interestingly, the LLM uses existing verification flows by becoming bilingual. Remember from the earlier blogs that the “chat” interface is really an illusion – we control what is sent to the LLM and what we do with its output. When we give an LLM the ability to use a tool, it learns to output a structured format – often literally a slash command, or a block of JSON. The same program that monitors the output to display it to you as chat now detects that a different type of output has been generated: a “tool call”. The program instantly acts like a router or switch – it invokes the API or service instead of displaying that text to you, the human user. The actual API calls are then fired off by the existing system, based on the parameters the LLM has provided. The LLM doesn’t have to fully understand how the API call works – it just needs to know what parameters to supply and how to ‘swing’ the tool.
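To make that router concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the tool name, the JSON shape, and the dispatcher are stand-ins for whatever your platform actually emits and invokes – but it shows the switch-like behaviour described above: plain text goes to the chat window, structured output gets routed to a service instead.

```python
import json

# Hypothetical tool registry; in a real deployment these entries would wrap
# the institution's existing verification services.
TOOLS = {
    "verify_identity": lambda p: f"KYC check queued for {p['customer_id']}",
}

def handle_llm_output(output: str) -> str:
    """Route LLM output: plain text is displayed as chat, while a JSON
    tool call is dispatched to the matching service instead."""
    stripped = output.strip()
    if stripped.startswith("{"):  # the model emitted a structured tool call
        call = json.loads(stripped)
        return TOOLS[call["tool"]](call["parameters"])
    return output  # ordinary chat text, show it to the user as-is
```

A reply like `Thanks, I have everything I need.` passes straight through to the user, while `{"tool": "verify_identity", "parameters": {"customer_id": "C-1042"}}` never reaches the chat window at all – it is handed to the verification service.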

Field note: See! Like 2001: A Space Odyssey! You can HEAR the trumpets! DA-DAAAA!!! dum dum dum dum dum dum dum dum dum DUM….

Credit Check Flow – Chained LLM workflow with External API calls

The credit check flow builds upon the KYC foundation but introduces more complexity through its sequential nature – a particularly elaborate dance where each step must be executed perfectly before the next can begin. Additional documentation must be attached and processed in a strict sequence, often bridging multiple services or systems. This makes it an excellent example of the chained workflow pattern we explored in Anthropic’s blog post, with gates requiring evaluation before proceeding to subsequent steps.


What makes this flow fascinating is its interdependencies – the credit check typically requires completed KYC verification and may branch based on outcomes, like a choose-your-own-adventure novel where each choice must be meticulously documented. Documents need to be attached and potentially interpreted by the LLM, introducing opportunities for retrieval augmented generation. The flow demonstrates how we can chain workflows, implement gates, and handle dependencies in a way that proves invaluable for financial services applications.
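As a sketch of the chained pattern, the gate logic might look like the following Python. The step names, fields and threshold are invented for illustration – real gates would call the KYC service, document store and credit bureau – but the shape is the same: evaluate each gate in sequence and stop the chain at the first failure.

```python
def run_credit_check_chain(application: dict) -> dict:
    """Evaluate each gate in order; the chain halts at the first failure."""
    steps = [
        # Gate names and thresholds are illustrative only.
        ("kyc_complete", lambda a: a.get("kyc_verified", False)),
        ("documents_attached", lambda a: len(a.get("documents", [])) > 0),
        ("bureau_score", lambda a: a.get("credit_score", 0) >= 600),
    ]
    for name, gate in steps:
        if not gate(application):
            # Fail closed, and report which gate blocked us so the agent
            # (or a human) can remediate and re-enter the chain.
            return {"approved": False, "blocked_at": name}
    return {"approved": True, "blocked_at": None}
```

Returning the name of the blocking gate, rather than a bare failure, is what later lets an agent reason about remediation instead of simply erroring out.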

Account Status Summary – Multi-step LLM summary workflow with queries

Account status intelligence is about enriching our understanding through database queries, analyses, forecasting and intelligence gathering to inform rich client conversations. Agentforce provides powerful built-in capabilities to convert records into LLM-friendly text for quick interpretation and summarisation, which makes this particular use case a staple in most of our deployments already.


But the real power emerges from building dedicated RAG-augmented prompt flows designed to extract and summarise key information across an account’s related records in standardised formats. Using Prompt Builder allows us to take what’s already a superpower in finance (creating quantitative account position summaries) and transform it into a tool leverageable across the institution. The ability to use LLMs to trivially reformat information into standard layouts at scale transforms your data into an army of perfectly coordinated librarians, each one capable of instantly reorganising vast amounts of information into whatever format is most useful. You don’t just get your answer – you get the perfect format for that answer, and you get it every time.

We implement multiple chaining calls so summaries can roll up into higher-level summaries while preserving the detail, nested like Russian dolls where each layer builds on the one beneath. This means agents can also drill down through Prompt Builder flows to get more granular information when needed, with the precision of a microscopist adjusting their lens. The potential applications and permutations of this pattern are endless and extremely promising for transforming how we handle account intelligence.
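A toy illustration of the rollup idea, with a trivial first-sentence “summariser” standing in for a real prompt-template call: summarise each related record, then roll those summaries up into a single account-level line, keeping the per-record detail available for drill-down.

```python
def summarise(text: str) -> str:
    """Stand-in for an LLM prompt-template call: keep the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."

def rollup(records: dict) -> dict:
    """Summarise each related record, then join those pieces into one
    account-level line (a real rollup would run a further summarisation
    prompt over the joined text), keeping the detail for drill-down."""
    details = {name: summarise(body) for name, body in records.items()}
    account_line = " | ".join(f"{k}: {v}" for k, v in details.items())
    return {"summary": account_line, "details": details}
```

The nesting is the point: each layer is cheap on its own, and the agent can descend from the account line into any record’s detail when a conversation calls for it.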

Knowledge Service – RAG LLM workflow

Here’s another fascinating specimen in our technological menagerie. We mentioned how knowledge bases are critical for service, but a close runner-up may be everything you need to know about regulatory compliance and financial instrument descriptions in a major investment bank. There can be an enormous amount of supporting documentation – not mere Wiki pages, but comprehensive tomes – required to meet all disclosure obligations. And there can be thousands of these instruments just in one branch of one section of one institution, like an entire library dedicated to describing the various species of financial butterflies.


This presents an incredible opportunity to harness the retrieval augmented generation pattern. Simply ensuring this information is correctly stored in a vector database – yes, just like the one we’ve already got waiting for us inside Data Cloud – has enormous potential to accelerate the speed with which this information can be identified, retrieved and summarised. Not just for account managers but for direct presentation through self-service tools – like a broker portal.
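For intuition, here is the retrieval half of RAG in miniature. The bag-of-letters “embedding” below is a deliberately crude stand-in for a real embedding model (and the plain list of documents for a real vector database such as the one in Data Cloud), but the mechanics are the same: embed the query, rank documents by similarity, and hand the top hits to the LLM as context.

```python
import math

def embed(text: str) -> list:
    """Deliberately crude stand-in for an embedding model: letter counts."""
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a, b) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Rank documents by similarity to the query – the 'R' in RAG.
    The top hits would then be placed into the LLM prompt as context."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

Swap in a real embedding model and a vector store, and the same three steps scale to those thousands of instrument disclosure tomes.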

Transaction Service – Precise API call

When it comes to executing financial transactions, precision is paramount. While our agents need to facilitate routine transactions and requests, we cannot allow LLMs to directly process or make decisions about financial transactions, lest their occasional hallucinations have consequences more severe than a misplaced semicolon. The solution is to use LLMs to guide users to the right tools – a well-informed concierge who knows exactly which department can help but wouldn’t dream of handling your valuables personally.

The key architectural principle is keeping the LLM’s role simple and constrained, giving clear boundaries to an enthusiastic but occasionally overconfident assistant. Rather than passing transaction parameters through the LLM, we implement precise API calls that invoke dedicated screen flows where users can input transaction details directly. These details then flow through traditional, deterministic programming interfaces without LLM interpretation or modification.
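The constraint is easy to express in code. In this sketch (the flow names are invented), the LLM’s entire contribution is a single flow name; anything outside the allow-list is rejected, and the amounts and account numbers never pass through the model at all.

```python
# Flow names are invented; a real deployment maps to its own screen flows.
SCREEN_FLOWS = {"transfer", "fee_waiver", "statement_request"}

def route_transaction_intent(llm_choice: str) -> str:
    """The LLM contributes only a flow name; everything else is deterministic."""
    if llm_choice not in SCREEN_FLOWS:
        # Anything outside the allow-list is refused outright.
        raise ValueError(f"Unknown flow: {llm_choice!r}")
    # Platform code (not the LLM) launches the screen flow; the user keys
    # amounts and account numbers directly into that screen.
    return f"LAUNCH:{llm_choice}"
```

The allow-list is the whole safety argument: a hallucinated flow name raises an error rather than moving money.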


This approach of keeping components simple is critical because the real power of agents emerges from the interaction of many simple elements rather than individual complex ones, much like how a colony of ants can build elaborate structures through the combination of simple behaviours. The LLM’s role is facilitative rather than executive – directing users to the appropriate transaction interfaces while ensuring all actual processing happens through proven, reliable systems. This pattern of simplicity and separation of concerns helps manage the emergent complexity that arises when multiple agent components interact.

Escalation Service – LLM Handoff workflow

The ability for an LLM agent to “phone home” is as critical as the net beneath a trapeze artist. A break-glass component for escalation and human handoff provides an essential safety net and exit path when the agent needs assistance or encounters issues it can’t handle alone.

The escalation workflow itself is elegantly simple – it either creates a new case or transfers ownership of an existing one, routing it through traditional systems to get human help. This can be as straightforward as a /handoff command triggering standard programming interfaces, often requiring no complex LLM decision-making at all.
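A minimal sketch of that break-glass path, with the command syntax and case shape invented for illustration: the same monitoring program from the tool-call pattern spots the /handoff command and routes a case to a human queue instead of displaying the text.

```python
def process_output(llm_output: str, case_queue: list) -> str:
    """Spot the break-glass command; otherwise pass chat text through."""
    if llm_output.startswith("/handoff"):
        reason = llm_output.removeprefix("/handoff").strip() or "unspecified"
        # Create (or reassign) a case via the normal, deterministic path.
        case_queue.append({"owner": "human_queue", "reason": reason})
        return "A specialist will take it from here."
    return llm_output
```

Note there is no LLM decision-making inside the handoff itself – once the command appears, plain deterministic code takes over.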


This handoff capability opens up interesting possibilities beyond just human escalation. While our example shows human handoff, the same mechanism allows for agent-to-agent transfers, which is a core feature of how Agentforce operates – seamlessly passing control between specialised agents as needed, like runners in a relay race, each knowing exactly when to pass the baton.

The Symphony of Swarms

Now I need to explain to you another important fundamental part of how LLMs work in the wild – the system prompt, or as we’re now calling it, the “developer prompt”. This core concept is perhaps the most critical for understanding how to architect and develop with these systems.

What we found very early on with LLMs and GPTs is that, much as humans exhibit primacy and recency effects, they have an enormous bias toward the things said at the very start of the string we feed them. This quickly led to discoveries about how we could change the behaviour of the LLM by putting developer instructions right at the front and then appending the rest of the instructions from the user.

But here’s the really important thing: this is quite literally exactly what Agentforce does with its topic system. Those six workflows we just discussed? They become six perfect candidate topics, each containing precise instructions for what the agent should try to do and, yes, actions – which means connecting all of the existing flows already built inside the environment.


This is why the most important thing to master in Agentforce is the topic classification section. When there’s an interaction with the agent, Agentforce composes a special prompt in which these scope descriptions are listed like a menu, allowing the LLM to interpret the conversation and select which topic should become active – effectively rotating in the appropriate system prompt for specialised handling.
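Conceptually – and the topic names and prompt wording here are invented, not Agentforce’s actual internals – the classification step amounts to composing a menu prompt and then swapping in the winning topic’s instructions as the system prompt:

```python
# Invented topic scope descriptions: the "menu" the classifier chooses from.
TOPICS = {
    "KYC Verification": "Collect and verify a customer's identity details.",
    "Credit Check": "Assess credit for loans or leveraged investments.",
    "Knowledge": "Answer questions about instruments and disclosures.",
}

def classification_prompt(utterance: str) -> str:
    """Compose the menu-style prompt the LLM uses to pick a topic."""
    menu = "\n".join(f"- {name}: {desc}" for name, desc in TOPICS.items())
    return (
        "Select the single topic that best matches the user's message.\n"
        f"Topics:\n{menu}\nUser: {utterance}\nTopic:"
    )

def activate(topic: str) -> str:
    """Rotate the chosen topic's instructions in as the system prompt."""
    return f"SYSTEM: You are handling '{topic}'. {TOPICS[topic]}"
```

The quality of those one-line scope descriptions is exactly what the next paragraph is about – they are the only thing the classifier has to go on.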

Your success with Agentforce deployments hinges on crafting precise, methodical classification descriptions. As a developer or architect, you’re not just defining system prompts – you’re establishing the conditions for topic hand-offs. Overlapping or ambiguous definitions will cause your agent to behave unpredictably and create hard-to-debug issues.

Field note: Yes… you will literally get… “I’m sorry Dave, I can’t do that”…

Consider a broker saying “hi I’d like to inquire about a particular type of investment instrument” – the topic classifier immediately interprets this context to determine the right topic. With well-structured classifiers and contextual information from authentication passed through the agent’s session, it can smoothly transition to the KYC topic, loading the appropriate interview prompts to verify the customer’s identity.

We now have a complex organism – an agent – with a powerful reasoning and planning core and a set of tool-enhanced workflows radiating outward like appendages. This creature is deeply integrated into its environment by virtue of the fact that it leverages tools that are already built specifically FOR that environment. It can read and write to that environment in the form of the databases and records it manipulates, and it understands the context of its current interactions with the user(s) it is helping. And it’s not even a single creature, really – it’s a swarm of them, each one with its own specialised role and its own set of tools.


Whatever your take on the hype of agentic AI, you must admit: this new creature sounds formidable.

Evolution in Action

Let’s see this evolution in action. A broker logs into their portal and immediately the agent springs to life. First, it verifies their identity through the Authentication topic, checking credentials and session tokens. Then, as the broker begins asking about their client’s portfolio, the agent seamlessly transitions to Portfolio Management, analysing positions and market conditions.

The agent doesn’t just respond to queries – it proactively manages the interaction. When the broker mentions a new investment opportunity, the agent immediately recognises the need for KYC verification. It maintains context throughout the entire interaction, remembering previous discussions about risk tolerance and investment preferences. This isn’t just a chatbot responding to prompts – it’s an intelligent system navigating a complex regulatory and operational landscape.


Suddenly, disaster strikes. The broker wants to leverage up a position, but the credit check workflow is failing. The agent interprets the response from the check API and identifies that something is wrong with the client’s registration information at the credit agency. This is strange, though – the same information worked fine for KYC. The agent has BOTH of these events in its current context, so it can do a lot more than just error out – it can use the context to fix the problem. The agent updates the incorrect information in the client record to match the verified information from the KYC step and re-runs the credit check. It works.
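That self-healing move can be sketched in a few lines. The record shapes and the single-retry policy are assumptions for illustration: the bureau check fails on a mismatch, the agent treats the KYC-verified data already in its context as the trusted copy, patches the record, and retries once.

```python
def bureau_check(client: dict, bureau_record: dict) -> bool:
    """Stand-in for the credit agency API: fails on a registration mismatch."""
    return client["registered_name"] == bureau_record["registered_name"]

def check_with_self_heal(client: dict, bureau_record: dict, kyc: dict) -> str:
    """On failure, reconcile the client record against the KYC-verified
    data held in the session context, then retry the check once."""
    if bureau_check(client, bureau_record):
        return "approved"
    # KYC succeeded earlier in this session, so its data is the trusted copy.
    client["registered_name"] = kyc["registered_name"]
    return "approved" if bureau_check(client, bureau_record) else "escalate"
```

Capping the loop at one retry, with escalation as the fallback, keeps the agent from thrashing against a problem it cannot actually fix.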


What makes this particularly powerful is the agent’s ability to retain context across topic switches. When discussing a new investment product, it doesn’t just access product information – it considers the client’s KYC status, their portfolio composition, and their risk profile (maybe even the recent leverage application). This context retention is crucial in financial services, where decisions must account for multiple factors simultaneously. The agent wraps up the interaction by summarising detailed compliance information on the new instruments via RAG and it even schedules a follow-up call with the client to discuss the product with their account manager. It writes meticulous summaries of the interaction to the account record, meaning that any future agent interaction will fold this interaction into the context for the next one.

Well. Looks like there’s a new king of the jungle…

This is where the financial services environment proves itself as the perfect testing ground for intelligent agents. The complexity of interactions, the need for precise documentation, and the regulatory requirements create an ecosystem where agentic AI can truly demonstrate its value. We’re not just building a better interface – we’re creating a new way of managing financial services interactions that is more efficient, more accurate, and more responsive to user needs.

Conclusion: Survival of the Fittest

The financial services jungle is evolving, and agentic AI is proving remarkably well-adapted to this environment. Through the sophisticated combination of topic management, context retention, reasoning, planning and tool use, we’re witnessing the emergence of systems that can navigate the complexity of financial services with unprecedented efficiency.

The key to survival in this environment isn’t merely raw processing power or sophisticated algorithms – it’s the ability to think in systems, to understand how different components interact, and to create agents that can adapt to changing conditions. As we continue to develop and deploy these systems, we’re not just improving individual processes – we’re evolving the entire ecosystem of financial services technology.


Welcome to the jungle, indeed. It’s going to be an exciting ride.

John 'JC' Cosgrove

Partner, Cloudwerx

JC is a pioneer in seamlessly embedding data into businesses. From the early days of “big data” hype to today’s cutting-edge innovations, his mission has always been clear: bring data to life, make businesses smarter, and push boundaries. And the best part? The journey is just getting started.

LinkedIn: https://www.linkedin.com/in/johnnycosgrove/
