In this episode, we chat with Dan Jermyn, Chief Decision Scientist for Commonwealth Bank of Australia, about an artificial intelligence-enabled digital system the bank uses to communicate with its 15 million+ customers.
As you’ll hear in the episode, Dan has a track record of leading teams involved in creating groundbreaking data-driven tools for the financial sector in both the UK and Australia. We invited him to join us today to talk about the Customer Engagement Engine or CEE – a system that uses customer data and artificial intelligence to help the bank communicate with its customers across all of its platforms. CEE is fast becoming a fundamental part of how CBA thinks about engaging with its customers, and is one example of how digital infrastructure with the capacity to connect data to action has the potential to shape the future.

This episode was developed in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.
Disclaimers
The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.
Episode Credits
Podcast Creator – Liz Williams
Hosts – Zena Assaad, Liz Williams
Guest – Dan Jermyn
Producers – Zena Assaad, Liz Williams
Assistant producer – Brenda Martin
Episode artwork – Zena Assaad
Audio editing – Liz Williams
See episode transcript for links to musical credits and references.
Episode Transcript
Liz: Hi everyone, I’m Liz Williams.
Zena: And I’m Zena Assaad.
And this is The Algorithmic Futures Podcast.
Liz: Join us, as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.
Zena: In today’s episode, our guest is Dan Jermyn, the Chief Decision Scientist at the Commonwealth Bank of Australia, or CBA.
He and his team are responsible for creating data-driven solutions that improve the experience of the bank’s 15 million plus customers, and ultimately change how CBA—and the financial sector more broadly—think about applying data science tools.
Liz: We’ve invited Dan to join us today because organisations like CBA are places where our technological futures are already being created.
It can be hard to tell what these futures might look like, because often, these systems are proprietary, and are being created behind closed doors. In this episode, through Dan’s words, we’re hoping to give you some sense of what one of these systems looks like. We’re also going to share some of the history that has, in many ways, contributed to the form these systems take today.
Zena: We will also explore how companies like CBA are thinking about building and managing AI-enabled technologies responsibly for the future – and how places like Australia and the EU are governing these technologies.
Liz: We’re focusing our discussion with Dan today on the “Customer Engagement Engine”, or CEE – a digital system that makes it possible for CBA to send out tailored communications to its customers through a range of channels — from messages in its banking app to the conversation a customer has with a local teller in the bank branch.
Zena: This system is interesting because it has become fundamental to how the bank thinks about interacting with its customers.
Liz: It is also made possible by vast quantities of data and “artificial intelligence”, or AI, a term which defies a single definition.
Zena: In this episode, when we talk about AI, we’re referring to a computer system that uses data to make decisions. AI systems are special in that they have the capacity to learn from the data they collect. This ability can help these systems make better decisions over time.
Liz: Before we get into what the Customer Engagement Engine is and how it works, we want to help you understand Dan’s background, and how he came to his role as Chief Decision Scientist for CBA.
Dan: I have been at the bank for, I think, four and a half years now. It’s gone by really quickly. I never set out to do this kind of role, mostly because it didn’t really exist when I set out, I suppose. I came from Cardiff in old South Wales, not the new and improved version that I currently find myself in.
I thought I was going to do scientific-type things. I had a computer. I had some vague notion that something along those lines might end up being what I did, but I actually did academic-type things.
I started a PhD in Computing — that was going to be me. Then at that point, one of my friends came along and suggested that we start a business looking at digital analytics, which was very hot at the time. So I did, and we got into technology and the emerging tech space, and it became obvious very early that analytics in particular was going to be a very significant thing to be looking at.
From there, we developed a piece of software. We looked into engineering, we started selling those sorts of things. It all went very well, and then I found myself needing a real job, having sold the company, which was why I ended up at the Royal Bank of Scotland. There I started to look at a broader set of analytics, and this is really where data science and some of the statistical and advanced machine learning techniques were starting to come into play. And you could start to see some of the incredible things that were going to be possible with big data.
I spent a few years there and then got the opportunity to come out here to Australia to work at Commonwealth Bank, and I couldn’t turn it down. And so I moved to Commonwealth Bank of Australia, where I ostensibly started with digital integration of what became the Customer Engagement Engine, this behemoth that we have now, which is the way that we communicate with our customers across all channels.
Liz: We asked Dan to tell us more about this “behemoth” – the Customer Engagement Engine.
Dan: The Customer Engagement Engine, or CEE, is effectively our way of ensuring that we serve our customers consistently across many channels. We’re a multi-channel organization: we have branches, call centers, web chat, email, the mobile app, of course. As we do that, how do we make sure that we’re treating our customers in a way that treats them as individuals, is cognizant of their preferences, and is relevant and helpful to them? So, the Customer Engagement Engine really is a way for us to combine those experiences and make sure that we’re consistent. If you come into a branch, we should know what’s the next best conversation to have with you, and that’s what the Customer Engagement Engine does.
It creates a series of prompts or suggestions around what is the most relevant thing that we can do for this customer in this moment right now, based on what we understand about them and their preferences. Originally it was conceived mostly to help think about messaging to customers: how do we remain personal as a bank at scale, and with a digital transformation, how are we going to make sure that we’re joined up, so that when you choose to bank with us as a customer, we treat you as that individual no matter how you choose to bank with us? You choose the channel that’s most convenient for you, and we think about you as an individual, and are consistent and relevant with that.
But as it’s developed over time, we’ve started to realize that it actually gives us the opportunity to create new experiences for our customers, new ways to bank with us, or new value-added propositions that we haven’t really conceived previously. And machine learning plays a key part here. We now create, I think, 35 million decision points every single day around the next best conversation to have with a customer, whether you are a frontline agent with somebody coming into the branch, or somebody calls you up, or you log onto our mobile app, where we present a notification if we have something relevant and helpful to you.
Zena: Machine learning is one category of “artificial intelligence”. Its success as a technique depends on access to vast quantities of data.
If you want to use machine learning to create a model that can make decisions for you based on data, you need to “teach” it – a process called training.
The training process requires data that is representative of the kind of data you expect the model to encounter once it’s deployed.
Liz: There are a few different categories of machine learning, and the training process depends on the category you’re working with. For example, there’s supervised machine learning, which is often used for classification work. When you train a supervised machine learning model, humans first need to “tag” each entry of data with the answer they want the model to come up with.
Zena: Liz, I think it helps to think about this with an example. Can I have a go at sharing one?
Liz: Absolutely.
Zena: Let’s say you took your credit card transaction data for a month and tagged it with a purchase category. eBay is tagged with ‘LEGO’, Woolworths is tagged with ‘groceries’, and so on. You could then train a machine learning model to classify your transaction data based on this tagged data – to figure out what percentage of your spending falls in the ‘LEGO’ category, for example.
Once you’ve trained your model, you could then provide untagged data – say the next month’s credit card statement – and it would be able to calculate how likely each entry is to fall in one of those purchase categories, and return the most likely category as its outcome.
Liz: So what happens if you feed it someone else’s credit card data?
Zena: It’d still give you a category for each entry, and the likelihood that the category it provides is correct. If the other person has similar spending habits and lives somewhere with access to similar stores, the model could work really well.
Liz: But if, say, the credit card statement was from someone who lived on the other side of the world …?
Zena: In that case, it probably wouldn’t work very well.
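To make Zena’s example concrete, here is a minimal sketch of that kind of supervised classifier, written in Python with scikit-learn. The merchants, categories, and tags are all invented for illustration – this shows the general technique, not CBA’s system or any real bank’s data.

```python
# A minimal sketch of the supervised-learning example above: train a
# classifier on transactions hand-tagged with purchase categories, then
# ask it to label an untagged statement. All merchants, categories, and
# amounts are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Last month's statement, tagged with the answers we want the model to learn.
tagged = [
    ("EBAY LEGO STORE", "LEGO"),
    ("WOOLWORTHS METRO", "groceries"),
    ("WOOLWORTHS ONLINE", "groceries"),
    ("EBAY BRICKLINK", "LEGO"),
    ("SHELL SERVICE STATION", "fuel"),
]
descriptions = [d for d, _ in tagged]
categories = [c for _, c in tagged]

# Turn merchant descriptions into word counts, then fit a simple classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, categories)

# Next month's (untagged) statement: the model returns the most likely
# category for each entry, plus a probability for every category.
new_entries = ["WOOLWORTHS METRO", "EBAY LEGO STORE"]
print(model.predict(new_entries))        # most likely category per entry
print(model.predict_proba(new_entries))  # likelihood of each category
```

As the hosts note below, a model like this only works as well as its training data: feed it a statement full of merchants it has never seen, and its guesses degrade quickly.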
Liz: There are other kinds of machine learning approaches, too, and the training works a bit differently in each case. For unsupervised learning techniques, you don’t tag the data. The model just looks for clusters and patterns in the data.
Zena: So this is an approach they might use for fraud detection, right?
Liz: Yes. I can imagine that same credit card transaction data would have certain patterns to it, which an unsupervised model would pick up. Then, if a transaction that didn’t fit those patterns came along, the model could tell me it was different.
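Here is a similarly minimal sketch of that unsupervised idea, using an off-the-shelf anomaly detector (scikit-learn’s IsolationForest). Again, the transaction data is invented, and this illustrates the technique rather than how any bank actually detects fraud.

```python
# A sketch of the unsupervised idea above: no tags, the model just learns
# what "normal" transactions look like and flags ones that don't fit.
# This is an illustration, not CBA's method; the data is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per transaction: [amount in dollars, hour of day].
normal_spending = np.array([
    [42.50, 17], [8.90, 8], [61.00, 18], [12.30, 12], [55.75, 19],
    [9.99, 9], [47.20, 17], [14.60, 13], [38.00, 18], [11.25, 8],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_spending)

# A 3am transaction for an unusually large amount: predict() returns -1
# for outliers and +1 for points that fit the learned patterns.
suspicious = np.array([[2500.00, 3]])
print(detector.predict(suspicious))              # e.g. [-1] -> flagged
print(detector.predict(np.array([[40.0, 18]])))  # e.g. [1] -> looks normal
```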
Zena: There are other approaches, too, but for now, I guess the important thing to note is these approaches are really sensitive to the data you train them with. If the data is rubbish, or is inappropriate for the context, well, so is your model.
Liz: That’s right, Zena. Recent breakthroughs in machine learning – along with vast improvements in computation and access to ever-increasing pools of data — are driving something of a renaissance in the use of these approaches for organisations like CBA.
Dan: 35 million times a day, that’s a huge amount. And so machine learning helps us to coordinate all of those messages that we have, all of those types of conversations that we can have across many individual segments. We have, I think 400 machine learning models in production, running over 157 billion data points. And because it runs in our mobile app where customer experience is incredibly important, and we can’t have you waiting around for a page to load, the response time is incredibly quick. Within a few hundred milliseconds, we have to respond. So, the scale of what we’re doing is quite extraordinary.
What does it mean for our customers? As I say, in the early days it was around thinking about, if we would like to talk to you about how to avoid a fee, or how to produce a better financial wellbeing outcome for yourself, then we would orchestrate the messaging in that fashion. We now have the opportunity to do things that are more experiential. A great example is our benefits finder feature within the mobile app. Now this is an incredible thing that the team have created. I think it’s a way to think about all of the government benefits or rebates, both federal and state, that are available.
People can get money back on compulsory third party insurance or toll relief, or all kinds of different types of benefits, but it can be difficult for customers to access those, or to know that they are entitled to them. So we thought, wouldn’t it be great to use this power that we’ve got with the Customer Engagement Engine, to provide relevant benefits back to customers to notify them that, “Hey, there’s some money here that you are entitled to potentially, would you like to know about that? Would you like to be directed to the place where you can go and see if that’s applicable to you?”
We can do that in an incredibly relevant and sophisticated way and make it much easier for our customers to access benefits that they’re entitled to, and that’s been in market for, I think, a little over a year now. In the past year, we returned half a billion dollars to customers – money they were entitled to, they just didn’t realize it. It’s really those sorts of experiences that the Customer Engagement Engine is critical to us delivering these days.
[Music by ummbrella from Pixabay]
Liz: The work CBA is doing with the Customer Engagement Engine is, in many respects, built on more than a century’s worth of change.
In 1911 – the same year that CBA itself was first established through Australia’s Commonwealth Bank Act 1911 – a man named Frederick Winslow Taylor published a book called “The Principles of Scientific Management.”
In it, Taylor proposed an approach for creating more efficient organisations using what he called a “scientific” approach to management.
Zena: Taylor’s work created massive change in a number of industries and laid the groundwork for how the Western world thought about automation – both in human work practices and in the integration of machinery into production processes. The financial sector was no exception.
Liz: Part of it was timing. Taylor’s ideas came in just as banks themselves began to see a clear need for improving the efficiency of their practices. Whereas banks were often small, local businesses before, the early 20th century saw banks significantly increase in scale – in terms of their internal operations, but also in terms of the number and type of customers they served.
Zena: CBA itself started as a single branch, right?
Liz: Yes, that’s right. Its first branch opened in Melbourne in 1912. Over the next few decades, it would expand its customer base by serving Australian servicemen both at home and abroad during World Wars I and II. It would also merge with other banks in New South Wales and Queensland, and establish an interest in international markets via the creation of the Commonwealth Trading Bank.
Zena: This is all before computers and the internet.
Liz: Yeah. Can you imagine the colossal pile of paperwork that would have involved?
Zena: Yeah, the scale of that would have been immense.
I’d imagine that all of those records would have had to be created, maintained, and updated by countless workers managing accounts over local, state, and national borders.
Liz: Yes, that’s right, Zena.
Zena: So there was a need to streamline things.
Liz: Yes. Taylor’s ideas helped banks like CBA establish standard practices for bookkeeping and encouraged financial institutions to introduce time-saving technologies like typewriters and accounting machines into their work practices. All of this very much paved the way for standardization of data practices in the financial world.
Zena: So it sounds like computers would have been natural additions to the financial sector in the 1950s and 60s.
Liz: That’s right. They had basically laid the groundwork well before the digital computer. Fast forward another half century, and banks have access to vast stores of data in the form of customer feedback, transaction data, demographic information, and more.
Zena: This data is both an opportunity and a responsibility. Given this, we wanted to know more about how Dan and his team think about their role in the bank. We asked Dan to tell us about his title, his team, and about the kind of impact the work they do could potentially have for CBA customers.
Dan: You know, the decision science thing was a carefully considered thing for us. So, although a lot of people look at what we do and think, “It’s machine learning, it’s AI, you are a data scientist,” we think about this much more broadly than that.
It’s apparent to us that in order to get the kind of outcomes that we want from technology, from AI, from data science, we need to think about all kinds of aspects that go beyond the mere statistical modeling. How do we think about impacts on customers? How do we think about impacts on the staff that are affected by the new tooling or capabilities that we introduce? What are the wider community implications of that? How do we think about experimentation, behavioral economics? So we wanted to make sure that as we were developing these new platforms, we brought together a range of experts from very disparate, diverse backgrounds and experiences.
Through that, we create better experiences, more rounded experiences that are more reflective of the communities that we serve and more broadly helpful to a wider set of people, by bringing in those different views, so the decisions that we create are really important to us. How do you drive improved decision making? That takes all of those skills coming together. One of the things that we’re really proud of within the team is that although we run incredibly technical programs and pieces of work, a lot of the people who work within the team started their career with the Commonwealth Bank on the front line, serving customers, working in branches, working in the call centers.
And through that experience, you really get an invaluable perspective into how people – customers from a range of walks of life – experience financial services. What are the issues that pop up that we could really help with and deal with? So, we create, I think, a better decision making process throughout the bank by bringing these people together, and that’s why the decision science function that we have here was such an important part of how we get to deploy the Customer Engagement Engine and other AI and machine learning and technological advancements in a way that sits nicely with our purpose and is true to what we want those outcomes to be.
Liz: CBA isn’t alone in needing to find ways to create better products and experiences for diverse customer sets. This is something that’s coming through in legislation and guidelines, too, both here in Australia and abroad.
Zena: In Australia, we see this in the Treasury Laws Amendment (Design and Distribution Obligations and Product Intervention Powers) Act 2019.
This legislation puts the onus on places like CBA to make sure the financial objectives, situation, and needs of their customers are taken into account.
Liz: This can be a significant challenge, particularly since the tools organisations build are often shaped by the perspectives and lived experiences of those involved in building them. Given this, we wanted to hear more about what Dan’s team looks like, and how they come together to produce a product CBA is happy to release.
Dan: I suppose there’s a practical and a broader answer to that question. I’ll start with the broader, which is: whenever you’ve got a group of people with different skills, culture becomes incredibly important. So, at the outset we think about improving a product or a service, or creating something new, but to get that early alignment and engagement and good-faith idea of everybody pulling in the right direction, the commitment to purpose is uniform. Whilst we have a great deal of different experiences, different techniques, good, robust debate about the right or the wrong way, or different ways to do things, everybody starts from a position of thinking that the outcome here has to be true to our purpose.
We have to improve the financial wellbeing of our customers and communities, and if we all stick to that, then we’re going to produce something great. Let’s just think about the best way to do that. The other thing to consider is, we are a team within a large institution, and we depend on, and work closely with, others across data teams, product teams, marketing, HR. It’s critical, really, in every interaction to think about the broader bank and the people that we’re connecting with to create these pieces of work.
It’s never the case that we will foist upon the organization something that hasn’t had engagement with the people whom it is going to affect, and broader stakeholders for whom it may have consequences or considerations that we haven’t thought of yet. Once we do that, we’re pretty well equipped to get executing quickly. There is a very dynamic culture here. Technology plays a part. We’ve got access to some great tooling that allows us to quickly and safely scale some of the implementations that we do. We’re very privileged in that respect. We’ve got some pretty cool kit that we get to play with. We’ve also got super smart people who are really adept at producing things quickly.
That’s why experimentation, and the culture of learning, is very important too. We’re conscious that some of the things that we introduce have never existed previously – maybe never anywhere else in the world, never mind within this bank. And so, experimentation is a critical part of ensuring that as we develop these things, the consequences that we hope for happen, and if not, how do we need to adapt or respond to what we learn from pushing these things into production.
Zena: The idea that Dan shares of improving the financial wellbeing of communities – not just individual customers – reminded us of another kind of ‘experiment’, though perhaps those involved in it wouldn’t have called it that.
Liz: This experiment was called Project Cybersyn and the year was 1971.
Cybersyn was the brainchild of Stafford Beer, who was known for creating the field of ‘management cybernetics’. Back then, it was a pretty obscure field, but it caught the attention of a Chilean engineer named Fernando Flores, who had been tasked with helping nationalize major industries in Chile following President Salvador Allende’s rise to power.
Allende had promised to fundamentally change Chile’s economy, and wanted to align the new nationalized economy with socialist principles. Basically, he wanted a decentralized government, and wanted workers to be active partners in managing the country (and its economy).
Zena: Beer said he had an idea that could help Allende achieve this. The idea was based on Beer’s Viable System Model – an approach for thinking about any complex system using human biology as an analogy for system governance and communication. This would grow into Project Cybersyn.
Liz: Economic data – specifically relating to production – was central to this project. Beer proposed that Chile collect data on all the production facilities Allende’s administration was swiftly working to nationalize, and send this data to a central computer. This data could be used to predict – and potentially prevent – economic disaster.
Zena: How did it do this? By looking for signals in the data that pointed to trouble, and sending this information back to local production managers so they could decide how to act.
Liz: We’re over-simplifying here, but that was the gist.
Zena: It was an interesting idea that came complete with important-looking graphs and futuristic control rooms. In practice, computational limitations, budget constraints, technical challenges, and finally, a military coup would come between Beer’s vision and success.
Liz: Now think back to Dan’s description of the Customer Engagement Engine. You have centralization of data and model-based insight, combined with messages sent back to the figurative coalface via bank tellers, messages in apps, and so on, to nudge individual action. When you put these two stories side by side, it’s hard not to see parallels between the two systems.
Given this, we wanted to learn more about what it was like to run an experiment with CEE.
Dan: Obviously experimentation is, as we’ve established, really, really important. We need to make sure that as we’re implementing these experiences, they land in the way that we think they will, and that they’ll have a positive outcome for our customers, and we need to be able to do that in a controlled way and a measurable way, so the science becomes very important, of course. The flexibility of the system is such that as we think of new conversations, we can be really, really targeted in how many customers we go to, of what type, under what circumstance, and how we hold out a statistically significant control group, to make sure that the incremental response that we generate from the conversation is positive for our customers’ financial wellbeing.
This is where decision science comes into its own. We have the data scientists who think about the modeling and about relevance. We have experimental scientists who know how to create a robust experimental framework and think about the statistics behind that, which can be very complicated as we start to implement multiple variables into the test. These are the sorts of things that would be, I think, prohibitively difficult to do if we did not have the Customer Engagement Engine, which is very technologically advanced and allows us to switch these things on and off, get them into production, get them into different channels. If we want to try something in digital, or try something through a push notification or an email, we have the flexibility to do that. That’s critical, because it means that we can learn on a small scale, make sure that we’re having the overall system impact that we would like to have – one that’s positive and beneficial for our customers – and then if it works, fantastic, we scale that up. And if it doesn’t, we understand why, we learn why, and we adapt the parameters and make sure that as we progress, we are learning all the time and getting better and better all the time. That’s why machine learning is really important too, because it creates this positive feedback loop for us.
The more we learn about our customers, the hope is that we become more and more relevant. The more customers say this is fantastic, we really like this, we implement more.
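Dan doesn’t spell out the statistics, but the holdout approach he describes is a classic controlled experiment: compare an outcome rate for customers who received a conversation against a held-out control group who didn’t. Here is a minimal sketch of that comparison, using a standard two-proportion z-test from statsmodels. The counts are invented; this is our illustration of the general idea, not CBA’s experimental framework.

```python
# A minimal sketch of the holdout-control idea Dan describes: compare the
# response rate of customers shown a message against a control group that
# wasn't, and test whether the lift is statistically significant.
# The counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

treated_responses, treated_n = 540, 10_000   # saw the message
control_responses, control_n = 430, 10_000   # held out

stat, p_value = proportions_ztest(
    count=[treated_responses, control_responses],
    nobs=[treated_n, control_n],
    alternative="larger",  # one-sided: did the message improve the rate?
)
lift = treated_responses / treated_n - control_responses / control_n
print(f"lift = {lift:.2%}, z = {stat:.2f}, p = {p_value:.4f}")
# If p is below the chosen threshold (say 0.05), scale the message up;
# if not, dig into why and adapt, as Dan describes.
```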
Zena: This idea – making sure that any impact communications have on customers is beneficial – comes up over and over again in Dan’s story.
Liz: There’s the implied possibility of harm, both in Dan’s choice of words and in the legislation governing the financial sector in Australia and around the world.
Zena: But what does harm look like in this context? Is it just about making sure a customer’s money is safe? Or is it more complicated than that?
I asked Dan to help us understand what “safety” means in the financial world.
Dan: I think it’s a great question, and there are many dimensions to this. As you say, as a financial institution, first and foremost we have to protect our customers and their financial assets, and make sure that banking with us is a safe and secure thing for all our customers. But then in terms of safety, I think there are much broader implications now, as we start to have broader and hopefully deeper and more meaningful relationships with our customers and the services that we provide for them.
And particularly as we start to use techniques like artificial intelligence to produce personalization at scale, we start to think about things like bias and discrimination, and think about what are the controls that we need to have in place to make sure that as we’re using technology, we’re doing so in a fair, transparent, and ethical manner. And so, a broad part of our agenda within the deployment of some of the great tooling and capability that we get to give to our customers is to think about how we check for unintended consequences of some of the things that we do. So experimentation, guardrails, explainability, disparity – all of these things that are very topical within AI are things we are diligent about and work incredibly hard on, in terms of the capability that we provide to our people who are developing these things, to make sure that we do so in a way that remains true to our purpose, so that not only do we have good intentions, but we have good, robust controls as well, that help to ensure safety in the things that we deploy.
[Music by madirfan from Pixabay]
Liz: We were also interested in how Dan and his team think about customer awareness of the data – their data – and how it’s used in the Customer Engagement Engine. I asked Dan how aware customers were of the types of data it collects and uses. How does he think about potential unintended consequences of a customer’s lack of awareness?
Dan: It’s obviously incredibly important to us, as we deploy these capabilities, that customer consent and choice are considered paramount, and have to be adhered to, of course, in the way that we deal with our customers and provide them with the service that they ask for and want, and that is helpful to them. I think there’s an increasing awareness more broadly in the community that technology can have a really positive impact on people’s lives, and people have certain expectations. I think these days you think about the experience that you get on your smartphone, and the type of relevance and the type of activity that’s possible these days.
There’s an awareness that there’s technology underpinning this, but perhaps not necessarily a complete cognizance of exactly what’s going on under the hood. So, I think it’s really important to us, as we develop these capabilities, that we not only meet, but exceed community expectations. That’s why the ethical implementation of AI has been a critical point for us over the course of the last couple of years.
Liz: “Ethical AI” has also been a topic of conversation amongst creators and policymakers around the world. In Australia, for example, the Department of Industry, Science, Energy and Resources recently released an “Artificial Intelligence Ethics Framework”, which begins with eight ethics principles designed to “ensure AI is safe, secure and reliable”.
Zena: In Europe, the European Commission published a set of “Ethics Guidelines for Trustworthy Artificial Intelligence” in 2019.
Liz: And late last year, UNESCO’s General Conference adopted the “Recommendation on the Ethics of Artificial Intelligence” – a first step towards establishing international standards for ‘ethical’ AI.
Zena: These guidelines and recommendations are voluntary, and are meant to provide a guide for organisations that want to create AI-driven technologies with ethics in mind. But this broader discourse, which introduces terms like “bias” and “explainability”, comes through in some of the thinking Dan shares about CBA’s work in this space.
Dan: We think about the explainability of the algorithms that we introduce, both at the global and the local level.
What that means is, if you are using AI to create something that’s really helpful to the customer, how do you explain how you got there, so that you can understand the system that’s being used? At the global level, the drivers of the modeling – the type of features that are driving the outcomes – need to be explainable; and at the local level, for an individual customer, how did you arrive at a certain type of decision? The second piece is on disparity. To your point about potential unintended consequences, we may wish to do good things, but how do we know that we are, and how do we safeguard against potentially introducing bias in ways that we hadn’t predicted or hadn’t expected?
So disparity measurement is one of the ways in which we address this issue. Think about groups of customers: how does the impact of the messaging that we created land with those groups, who responds in what way, and how many messages do we send across, say, demographic groups, for example? Does that meet what we would’ve expected to happen, or does something look out of whack? And if it is out of whack, is that explainable, or is it symptomatic of an unintended consequence that is not something that we would want to happen for our customers?
It’s important to introduce tooling that provides a level of clarity and understanding and safeguarding against these things, but it’s also important to note that the concept of ethics is thousands of years old, and a school of philosophy that we could only pretend to understand in its entirety. I don’t think you can algorithmically solve for that. You need to involve humans in the process to really understand what’s going on.
So at every stage of the Customer Engagement Engine, we have humans in the loop thinking about the models that we are using, the type of messaging, the responses that we’re getting, the way it lands with customers: is this having the positive financial wellbeing impact that we hoped it would for our customers? If it does, fantastic; if it doesn’t, let’s make sure that we have the checks and balances in place so that we can adapt the approach accordingly.
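Dan names disparity measurement without detailing a metric. One simple version of such a check – sketched below with invented groups, counts, and a threshold borrowed from the common “four-fifths” heuristic, none of which CBA has described – compares message rates across demographic groups against a reference group and flags anything “out of whack” for human review:

```python
# A simple sketch of the disparity check Dan describes: compare how often
# a message lands with each demographic group, relative to a reference
# group, and flag anything that looks out of whack. Groups, counts, and
# the 0.8 threshold (the common "four-fifths" heuristic) are illustrative
# assumptions, not CBA's actual method.
message_counts = {"group_a": 8_200, "group_b": 7_900, "group_c": 4_100}
group_sizes = {"group_a": 10_000, "group_b": 10_000, "group_c": 10_000}

rates = {g: message_counts[g] / group_sizes[g] for g in group_sizes}
reference = max(rates.values())  # compare everyone to the highest rate

for group, rate in sorted(rates.items()):
    ratio = rate / reference
    flag = "OUT OF WHACK - investigate" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.1%}, ratio vs reference={ratio:.2f} ({flag})")

# A flagged ratio isn't automatically a problem: as Dan says, the next step
# is a human asking whether the gap is explainable or an unintended consequence.
```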
Zena: This concept of financial wellbeing keeps coming up as a theme in Dan’s responses. We asked him to tell us a bit more about it.
Dan: We think about financial wellbeing as a good marker for whether we’re having the positive impact on our customers’ lives that we would like to have. Financial wellbeing is a broad concept, but at Commonwealth Bank we’ve done a lot of work to codify and quantify it, and think scientifically about it as a concept. We’ve published a financial wellbeing index, and a scale and methodology around that, which look at both observed and self-reported financial wellbeing status among our customer base.
That provides a way for us to think about the outcomes that we’re having in the activities that we create, and a way to, I guess, check that we’re having the positive impact that you would like. One of the really fascinating things that we found as we developed the framework is this: you would think that financial wellbeing is closely correlated to income, and of course it is – if you earn more money, you are generally more likely to have good financial wellbeing – but actually one of the most important drivers is customer behavior. We find, all the time, a very large number of instances where customers who have pretty much exactly the same income, live in pretty much the same part of the country, and look broadly the same from an analytical perspective have hugely different financial wellbeing.
That’s because some people have very good habits around saving or paying their own debts, and other people live their lives in a slightly different way and choose to use their money in different kinds of ways. So the concept of behavioral economics in the way that we think about the messaging that we deliver through the Customer Engagement Engine is really important. And the financial wellbeing index allows us to make sure that as we’re implementing new strategies to communicate with our customers, help them to avoid fees, help them to make sure that their finances are arranged in a way that’s most advantageous to them, we have a way to measure the impact that we’re having. That’s one of the ways in which we can think about the financial wellbeing impact of the work that we’re doing.
Liz: Focusing specifically on the Customer Engagement Engine here, it sounds kind of like CBA is using CEE as an educational tool.
Zena: Sort of? It does make sense.
The European Commission, for example, has found that financial literacy is important for financial wellbeing, and is currently working on a financial competence framework with the OECD to help EU citizens improve their understanding of finance.
But I don’t think CBA thinks of their messaging to customers as education. I would guess there’s no curriculum underpinning a Next Best Conversation, for example.
Liz: Yes, Zena, I’m guessing you’re right there. But by helping customers understand how their own behaviour can influence their financial wellbeing, it sounds to me like there’s some opportunity for customers to learn something about their own financial practices.
Zena: To me, this underpins the importance of making sure the messaging is responsibly designed and is having the intended effect.
Liz: At this point in our conversation, we wanted to take a closer look at one of the technologies CBA has built with the help of the Customer Engagement Engine. One of the tools Dan and his team implemented over the past couple of years is CBA’s “emergency assistance support”, which combines the CEE’s access to data and communications with tools for predicting extreme weather scenarios.
Zena: Think Customer Engagement Engine + weather model, combined with some thinking on how best to deliver messages to customers in challenging times.
Liz: We asked Dan to tell us more about the emergency support package and the role the Customer Engagement Engine has played in helping customers access it.
Dan: As a bank, for a long period of time, we’ve recognized that Australia – a country that I get to call home, and which is magnificent and wonderful – unfortunately does experience extreme weather events, things like drought and bushfires. And when those happen, we want to be able to support our customers in those communities, so the bank offers a support package that involves things like fee forgiveness, or thinking about repayments that need to be made and taking into account that this probably isn’t a great time for the customer. Let’s think about how we can provide support and be cognizant of the circumstances.
So the emergency assistance package is a fantastic thing that the bank does for its customers at a time when they really need us most. The thing that makes the Customer Engagement Engine so critical to the process now is that this has gone from a service that was available to customers if they happened to find out about it, or if we were able to reach them in time. With the Customer Engagement Engine, we’re able to proactively reach out to them, or, in really any channel that they come to us through, present that messaging to them, so that they know that we have their back when they need us most.
Over the course of the last year, we’ve really upped the game in terms of the sophistication of that. Real time has become incredibly important. If an event is emerging, this is not the kind of thing where you can wait a few days or a few weeks to implement something into production. You’ve got to respond in the moment. We get events now where, within an hour or two of a natural disaster emerging, our smart weather data model picks up those signals in real time, across any natural disaster that’s occurring, and feeds them into the Customer Engagement Engine. It looks at the customers who are likely to be affected in the region where the disaster is occurring, and makes sure that for those customers, the next best conversation is absolutely about the support package that we have for them.
You think about how important that can be for a customer. Let’s take the example of a bushfire. If you’re in an area where a bushfire is happening and you go into the branch there, everybody knows that there’s a fire. You can smell it in the air. In fact, your branch is probably not even open. So, in the local area, people understand that this is incredibly high stress and they need to support you. But if you need the bank and you call up, you may be speaking to somebody hundreds or thousands of miles away. So in that moment, how do we make sure that the agent who’s speaking to the customer knows, hey, this is somebody who’s in an area where they really need our help with the emergency support package?
The Customer Engagement Engine means that in real time, we are able to make sure that the agent, as they’re speaking to that customer, knows their circumstances and knows that the very next best thing to do is talk them through the support: we’ve got your back, and here’s how it works.
Liz: We asked Dan to share what he and his team needed to think through before deploying something like this to all its customers.
Dan: I think there are two dimensions to the problem statement. One is technological. It’s pretty advanced, what we’re doing here. In order to be able to ingest that data in real time and parse it into geospatial data – and the data sources that we get are not always consistent in the way that they think about labeling geographies; you may have postcodes, you may have GPS coordinates – there’s a certain amount of processing that needs to happen to make sure that we’re mapping that appropriately to customer locations. Then obviously there’s managing the messaging that goes through the channels. There’s a lot of clever stuff that happens technologically.
But I think the other thing that we consider, not just in the example of the emergency assistance, but in all of our next best conversations, is how is this going to be presented to the customer? What’s the messaging that we’re going to use? Not only from a customer perspective, as in how do we make sure that we’re clear and the customer will understand, and it portrays the help in the right way to the customer, but also particularly in the case of those Next Best Conversations that go to the front line, how is the member of staff going to understand the information that we are giving them, such that they can then go on to have the conversation in the right way?
How do we provide the right level of information, the right level of context, without being too complicated or overwhelming? Coming back to the whole notion of decision science and the way that we think about the team, that’s why those people with frontline experience within the team are invaluable, because they really understand: “It’s great that you’ve given me all of this information, Data Scientist, but there’s no way in a phone call that I’m going to be able to read through all of that, and then quickly tell a customer to hold on for a little while, while I understand what’s going on.”
We have things like language governance, what’s the language of the Next Best Conversation, and that brings in people from a range of different stakeholder groups to make sure that we get it just right.
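As an aside on the geography problem Dan mentions: different feeds may label the same area with a postcode or with GPS coordinates, so somewhere in the pipeline both have to be normalized before customers can be matched to an event. Here is a toy sketch of that step – the postcode table, event location, and distance rule are all invented for illustration, not drawn from CBA’s system.

```python
# A toy sketch of the geography-normalization step Dan alludes to: incoming
# disaster feeds may label an area by postcode or by GPS coordinates, and
# both need to map onto the same customer locations. The lookup table,
# coordinates, and matching rule are invented for illustration.
import math

# Hypothetical postcode centroids (postcode -> (latitude, longitude)).
POSTCODE_CENTROIDS = {
    "2620": (-35.35, 149.23),
    "2546": (-36.39, 150.07),
}

def to_coords(location):
    """Normalize a postcode string or a (lat, lon) pair to coordinates."""
    if isinstance(location, str):
        return POSTCODE_CENTROIDS[location]
    return location

def within_event(location, event_centre, radius_km=50.0):
    """Crude distance check: is a customer location inside the event area?"""
    lat1, lon1 = to_coords(location)
    lat2, lon2 = event_centre
    # Equirectangular approximation; fine at this scale for a sketch.
    km_per_deg = 111.0
    dx = (lon2 - lon1) * km_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * km_per_deg
    return math.hypot(dx, dy) <= radius_km

bushfire_centre = (-35.5, 149.3)  # invented event location
print(within_event("2620", bushfire_centre))            # True: postcode feed, nearby
print(within_event((-36.39, 150.07), bushfire_centre))  # False: GPS feed, too far
```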
Zena: There’s the act of creating something like this – and then there’s the act of making something like this robust. The latter presents a different kind of challenge – particularly for an organization with a customer base as large as CBA’s.
Liz: We asked Dan how his team thinks about making sure tools like this are robust before they roll them out to all their customers.
Dan: Data quality is incredibly important, and it has been for the Customer Engagement Engine for all sorts of things. Not just for the smart weather data either: for everything we’re talking to customers about – their home loans or their credit cards, their finances – you’ve got to be correct. One of the things that’s really inherent and fundamental to the Customer Engagement Engine is the checks and balances around the data that underpins it. So one of the things that we do to safeguard against this is implement a series of automated controls that look at the delivery of data that feeds the engine.
Do we have the right fields showing up at the right time? Do we have nulls appearing where nulls shouldn’t happen? And if so, what do we do under those circumstances? How do we make sure we don’t have an unintended consequence from those? There are a huge number of checks and balances in the underlying data. There are also things that we do like simulation. How do you think about the potential impact if this thing happens, or this thing happens? It’s not just about making sure that you have good controls around the processes that underpin the system; you also need the ability to consider the what-ifs. What are the things that haven’t happened to us previously? If that thing happens, what would it do, what would our response be, and do we need a control that we don’t currently have to safeguard against it? That’s the type of thing that I think is key in this situation.
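Here is a toy sketch of the kind of automated control Dan describes: checking that expected fields arrive and that nulls don’t appear where they shouldn’t. The schema, field names, and rules are invented for illustration.

```python
# A toy sketch of the automated data controls Dan describes: before a
# feed reaches the engine, check that expected fields arrived and that
# nulls don't appear where they shouldn't. The schema and rules here
# are invented for illustration.
EXPECTED_FIELDS = {"customer_id", "postcode", "product", "balance"}
NON_NULLABLE = {"customer_id", "product"}

def check_feed(records):
    """Return a list of data-quality problems found in a batch of records."""
    problems = []
    for i, record in enumerate(records):
        missing = EXPECTED_FIELDS - record.keys()
        if missing:
            problems.append(f"record {i}: missing fields {sorted(missing)}")
        for field in NON_NULLABLE & record.keys():
            if record[field] is None:
                problems.append(f"record {i}: null in non-nullable '{field}'")
    return problems

feed = [
    {"customer_id": 1, "postcode": "2600", "product": "home_loan", "balance": 350_000},
    {"customer_id": None, "postcode": "3000", "product": "credit_card", "balance": 1_200},
    {"customer_id": 3, "product": "savings", "balance": 5_000},  # postcode missing
]
for problem in check_feed(feed):
    print(problem)  # in a real pipeline, these would trigger alerts or holds
```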
Liz: And finally, we asked Dan what the term “socially responsible algorithms” means to him.
Dan: That’s a really big question, isn’t it? I think it encompasses a lot of the themes that we’ve talked about today. So what is social responsibility? I think you think about the impact that you are having, and how you can be confident that it’s a positive impact across a range of society that isn’t necessarily reflective of you and your circumstances, but much broader than that. That’s why, as we deploy the Customer Engagement Engine, the people who work on it come from all walks of life. The customer groups that we deploy it to are experiencing a vast range of different things.
So social responsibility means measuring the outcomes that you are having and checking that they’re positive for our society. In our case we’re thinking about financial wellbeing, but we’re also thinking about our impact on things like ecology or the workforce, or how we’re having a positive impact on the communities that we serve. How do we make sure everybody has access to the things that we are developing, in a fair way? You think about things like our mobile app: where does that work?
What communities have access to good mobile services, and how do we make sure that we have a footprint that covers all sorts of circumstances for which maybe one particular channel isn’t appropriate? How do you do it transparently? Social responsibility, I think, is about making everybody that you are working for aware of what you are doing, being transparent about the way that you are doing it, and opening a dialog: we think this is a great way to do things. What do you think? Does this work for you? Does it work for different groups? Transparency is very important. And I think, fundamentally, measurement: how do you make sure that you are measuring the impact that you are having?
I think I come to work every day with a bunch of wonderful people and we try to do great things for our customers, with the very best of intent, but we absolutely need to measure it, because our best intentions may produce unintended results. So I think social responsibility in terms of algorithms is really about measuring the outcomes that you have. That’s what responsibility is: taking responsibility for your actions. And you need to know what the impact of your actions was.
Liz: Thank you for joining us today on the Algorithmic Futures podcast. To learn more about the podcast, the Social Responsibility of Algorithms workshop series, Project Cybersyn, or some of the history we’ve shared on this podcast, please visit our website: algorithmicfutures.org. And if you’ve enjoyed this episode, please like us on Apple Podcasts and share with others. It really helps us a lot.
Now, to end with some comments and disclaimers.
This podcast has come about because I am involved in a research collaboration with CBA through my work in the ANU School of Cybernetics. There’s no financial gain for me or for the School in producing this podcast, but I do want to maintain a good working relationship with CBA. That could be seen as a conflict of interest here.
All information we present here is for your education and enjoyment and should not be taken as advice specific to your situation.
This podcast episode was created in support of the Algorithmic Futures Policy Lab – a collaboration between the Australian National University (ANU) School of Cybernetics, ANU Fenner School of Environment and Society, ANU Centre for European Studies, CNRS LAMSADE and DIMACS at Rutgers University. The Algorithmic Futures Policy Lab receives the support of the Erasmus+ Programme of the European Union. The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this podcast episode’s contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
(End transcript.)
Intrigued by what you heard on today’s episode? Read more here:
Frederick Winslow Taylor, The Principles of Scientific Management
Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (MIT Press, 2014)
B. Bátiz-Lazo and T. Boyns, “The business and financial history of mechanization and technological change in twentieth-century banking,” Accounting, Business & Financial History, 14(3), 225–232 (2006).