Every bank needs a nudge...


Create a nudge unit to drive risk management effectiveness and customer delight.



This article examines banking risk management organizations in the context of rapidly growing and improving information technology. It encourages organizations to create a risk management-oriented nudge unit: an analytically focused team that performs behavioral and automation-based testing and analytics. A nudge unit helps banks understand the value of risk management and make decisions that optimize risk and improve customer delight.

 

Jeff Hulett and several collaborators authored this article. Jeff's career includes financial services and consulting-related leadership roles in the decision sciences, behavioral economics, data sciences, loan operations management, and risk and compliance. Jeff has advanced degrees in Mathematics, Finance, and Economics.

 

This article is presented with the following sections:

1. How do you think about risk management?

2. How do you think about data?

3. The big convergence and the Nudge Unit

4. The Nudge Unit - examples:

  • The mortgage servicing example

  • Large credit card company behavioral testing takeaways

  • Compliance automation example

5. Banking automation - Every data science shop needs Radar O'Reilly

6. Conclusion, Notes, and Appendix

 

1. How do you think about risk management?


Many think of risk management as a cost center, that is, a necessary cost of doing business. It is treated like a constraint, not an objective. Many do not consider overachieving in risk management to be a good thing.


Business operations are different. They are associated with revenue generation. In a business context, overachieving is code for making more money and making customers happy. There is nothing wrong with that. Correct?


I have heard risk management and business operations compared in this way:

“The business is responsible for making the money and risk management is responsible for keeping the money.”

I like this way of thinking about it, but it raises the question central to this article. How do you know whether a risk management action caused the bank to keep the money, or whether the money would have been kept anyway? What if risk-related budgets were geared more toward risk effectiveness? More on this later.


2. How do you think about data?


There are important differences between non-curated data and curated information. Our modern world is drowning in non-curated data, which makes curated information more difficult to realize. It is getting harder to separate the signal from the noise. In fact, even our definition of censorship has changed. It is no longer the "book burning" style withholding of curated information of past generations. Today it is the opposite: censorship is associated with drowning others in non-curated data noise and purposefully creating a confidence-reducing, curation-challenged environment. As sociologist Zeynep Tufekci said:

“The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself.” [i]

Banks [ii], by their very nature, extensively use customer data. Just as a car manufacturer uses metals, alloys, plastics, and computer components to make cars, banks use credit, income, and wealth data to make financial products. The bank's financial product manufacturing process is, in a very real sense, the process of curating (structuring) non-curated customer data to create a curated, information-based product. As such, the bank has a tremendous opportunity to leverage its own economic natural resource. It is a matter of transforming non-curated data into useful curated information.


Just add a few data scientists and you are good to go! Right? Maybe? ... Not really. There is certainly more to it, but the intention is directionally correct.


Next, let’s summarize our perspective on the convergence of data and risk management. These are the building blocks of an operational nudge unit.


3. The big convergence and the Nudge Unit

The following "connects the dots" between the two converging risk management and data perspectives. This lays the groundwork for the behavioral economics-focused nudge unit.

  1. We appreciate it is difficult to quantify the true effectiveness of risk management. At a minimum, we take it on faith that risk management is important, and bank leadership provides organizational budget funding. The budget allocation is often judgmental, anchored in last year's budget amount. At the other extreme, banks may react by spending billions of dollars to remediate regulatory mishaps. The headline examples are almost endless; the $25 billion National Mortgage Settlement is a good one. For those of us who have lived through these regulatory actions, we appreciate that the headline regulatory cost is just a starting point. The actual cost, including operating, legal, and consulting expenses, is some multiple of the headline cost.

  2. Data is more available than ever. True, there is an increased focus on data security and on minding customer data. The point is, data availability has increased dramatically, especially as the bank's ability to efficiently manage the data has improved. To wit:

  • Decreased cost of data storage and cloud technologies,

  • Increased bandwidth and data transportation technology like APIs,

  • Improved data analysis effectiveness with data science tools (like Artificial Intelligence), and

  • The ability to convert unstructured data (like documents or voice recordings) into useful structured data with Optical Character Recognition (OCR) and Natural Language Processing (NLP) technologies.

The Nudge Unit defined

The remainder of this article is about the big convergence and the bank's nudge unit(s). That is:

  1. Utilizing the nudge unit to solve the causal and related risk management measurement challenge,

  2. Leveraging fast-growing and improving data technology, and, all the while,

  3. Driving risk management and broader customer objectives.

First, this concept is not new; only the products and scale are changing. In the 1980s and 1990s, starting primarily with Citibank, Capital One, and other bank data pioneers, credit card companies began realizing the power of data. In fact, Capital One, the brainchild of Richard Fairbank and Nigel Morris, was built specifically to utilize an "Information-Based Strategy" or "IBS." Full disclosure, I previously worked for both organizations (or predecessor organizations). To some degree, the use of data in credit card companies is easier, since the card product is already "data and automation-ready." The change today occurs because traditionally "data and automation-challenged" bank products, like mortgages, are now far more open to analytics and automation. This occurred because of the data management improvements mentioned earlier. Please see our Automation Adaptability Framework in the appendix for the key features that drive automation adaptability across bank products.


What is a nudge unit? These are bank organizations that 1) quantify the value of risk management (or other bank organization) actions and 2) drive risk management effectiveness via the use of Randomized Control Trials (RCT) or other related automation and analytical strategies. [iii] RCT is generally considered the gold standard for scientific studies, specifically for establishing causality. In the current data science world, causality is a big challenge. Done correctly, an RCT is a dependable way to determine that x caused y. In this case, a specific risk management policy (x) caused the bank to better serve a bunch of customers and/or save a bunch of money (y). As will be discussed in the next examples, RCT is not the only way to establish some level of causality, but it is a good way AND it is available today!
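To make the RCT idea concrete, below is a minimal sketch in Python of how a nudge unit might evaluate whether a new risk management treatment outperformed the existing (control) approach. The counts are hypothetical, and the two-proportion z-test shown is only one of several valid ways to read an RCT result.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: did the treatment (a) outperform the control (b)?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)                # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under H0
    z = (p_a - p_b) / se
    return p_a - p_b, z

# Hypothetical outcomes: customers randomly assigned to a new risk policy (treatment)
# or to the existing policy (control); "success" = account cured / loss avoided.
lift, z = two_proportion_ztest(success_a=540, n_a=5000, success_b=480, n_b=5000)
print(f"lift = {lift:.3%}, z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

In practice, a nudge unit would also size the test before launch and translate a statistically significant lift into dollars kept, which is exactly the "value of risk management" evidence leadership is looking for.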

This article takes no position on exactly where such a unit should sit within a banking organization. This depends on the specific organizational circumstances. With that said, the following considerations maximize the nudge unit's chances of success:

  1. Scope breadth - the nudge unit will be more successful the broader its organizational scope.

  2. Data resource access - data is the nudge unit’s raw material. Broad data access is a success enabler.

  3. Customer access - access to customer operations is a success enabler.

  4. Analytical resources - the nudge unit needs access to unique programming and analytical computing platforms. The nudge unit will hire top analytical talent from a variety of disciplines.

  5. Testing operations - the Nudge Unit will need its own testing operations.

I do want to point out a significant data science challenge today as it relates to banking. In today's platform company world, led by companies like Google, Amazon, and Netflix, the need for causality is sometimes downplayed. In the case of selling, say, a consumer product via Google, a causal determination is not so important. For example, the fact that someone searched for a product in the past does not cause them to buy the same product in the future; it is just a probabilistic correlation. That correlation is potentially enough to identify a group of customers more likely to buy a product when presented with a search engine advertisement.

For Google, that may be fine. Banking does not usually have this luxury. This means banking is in the business of why. If a loan applicant is declined for a loan, by law, the bank must provide the customer a causal-based "why" adverse action explanation. If a customer's security portfolio value drops, the customer is going to demand why. If a person applies for a mortgage, is declined, and belongs to a protected class (like race, gender, or age), the bank is required by law to document why the applicant was declined and confirm the decline decision was not a result of disparate treatment. [iv]
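As an illustration of the "business of why," here is a minimal sketch of one common way to generate adverse action reasons from a points-based scorecard (the "max points lost" approach). The scorecard attributes, point values, and reason texts below are hypothetical and for illustration only; they are not drawn from any actual bank model.

```python
# Hypothetical points-based scorecard: each attribute maps an applicant value to points.
SCORECARD = {
    "credit_utilization": lambda v: 40 if v < 0.30 else (25 if v < 0.60 else 10),
    "months_since_delinquency": lambda v: 35 if v is None else (20 if v > 24 else 5),
    "debt_to_income": lambda v: 30 if v < 0.35 else (15 if v < 0.45 else 5),
}
MAX_POINTS = {"credit_utilization": 40, "months_since_delinquency": 35, "debt_to_income": 30}

REASON_TEXT = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "months_since_delinquency": "Delinquency on prior accounts",
    "debt_to_income": "Income insufficient for amount of credit requested",
}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Rank attributes by points lost versus each attribute's maximum ('max points lost')."""
    points_lost = {
        attr: MAX_POINTS[attr] - score_fn(applicant[attr])
        for attr, score_fn in SCORECARD.items()
    }
    ranked = sorted(points_lost, key=points_lost.get, reverse=True)
    return [REASON_TEXT[a] for a in ranked[:top_n]]

# Example: a declined applicant's attribute values (all hypothetical).
print(adverse_action_reasons({"credit_utilization": 0.85,
                              "months_since_delinquency": 6,
                              "debt_to_income": 0.50}))
```

The point is not the specific math: it is that the bank must be able to trace a decline back to explicit, documented drivers, which correlation-only models do not provide on their own.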


In the next section, we provide examples of how banks could use a nudge unit (or a similar organization) to drive risk management effectiveness in a causal context.


4. Examples of nudge units and related organizations

 

The mortgage servicing example

It is no secret that the post-pandemic world is likely to have its share of financial troubles. The Consumer Financial Protection Bureau (CFPB) has been vocal about lenders, and in particular mortgage companies, being ready to help their customers. On April 1, 2021, it issued a bulletin that made those expectations very clear.

"CFPB Compliance Bulletin Warns Mortgage Servicers: Unprepared is Unacceptable"

But borrowers are not always easy to reach. Generally speaking, proactive customer contact can lead to a higher resolution rate and a lower loss rate. So, what can a mortgage lender do to improve its contact rates? The United States has something to learn from its neighbors across the pond...


The Behavioural Insights Team (BIT), also known unofficially as the "Nudge Unit", is a U.K.-based social purpose organization that generates and applies behavioral insights to inform policy and improve public services. [v]

In the build-up to and aftermath of the 2008 financial crisis, Northern Ireland was hit particularly hard by a housing boom and bust. Many homeowners still face negative equity, delinquency, and ultimately the risk of foreclosure. One of the key behavioral challenges is encouraging at-risk homeowners to engage proactively with their lenders so that effective solutions can be found. BIT was commissioned by the Department for Communities Northern Ireland to develop and test a range of behavioral interventions to increase loss mitigation-related customer contact and engagement. BIT published a report in June 2018 outlining the results, which are summarized below.


Please note, BIT is using Randomized Control Trials (RCTs). In this case, the control group is important because it confirms the baseline customer contact environment before the new contact intervention. That is, RCTs are necessary to determine causality: whether the new risk management tactics caused improvement compared to the existing tactics. Please keep in mind, these results need to be validated in your unique environment. While likely a good starting point, they are not a substitute for performing your own RCT testing.


Given the potential post-pandemic challenges to mortgage servicers globally, this is a particularly interesting study. Customers with payment challenges are often difficult to contact. Plus, many people no longer have a traditional landline-based home phone. As such, if a customer does not wish to be contacted, it is easier today for them to avoid calls from their lenders.

Communicating with collections customers is generally challenging, yet it is a critical step in encouraging a customer payment or developing a loss mitigation strategy. The tests utilized several different communication approaches to drive customer response. These approaches included:

  • Letters, email, and text reminders;

  • Behaviorally informed calls to action including personalization, loss aversion, and reciprocity; and

  • Handwritten notes to increase salience.

The response was measured by contact rate for both inbound calling and collections agent outbound calling. The first and third test results demonstrate a significant customer contact improvement over the baseline control results. While a higher contact rate does not automatically lead to a one-to-one decrease in loss rates, it will very likely drive directional improvement (a simple sketch of assigning and reading such a test follows the list below). The results will inform ongoing collections effectiveness by:

  1. Increasing collections customer contact and

  2. Effectively resolving customer delinquency.
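As a sketch of how a BIT-style contact test can be set up and read, the snippet below randomly assigns delinquent customers to treatment arms and compares each arm's contact rate to the control. The arm names and outcome counts are hypothetical and are not BIT's published figures.

```python
import random

ARMS = ["control", "reminder_letter", "behavioral_call_to_action", "handwritten_note"]

def assign_arm(customer_id: str, seed: int = 42) -> str:
    """Deterministically randomize each delinquent customer into a test arm."""
    rng = random.Random(f"{seed}:{customer_id}")  # stable, repeatable per-customer assignment
    return rng.choice(ARMS)

print(assign_arm("ACCT-000123"))  # the same customer always lands in the same arm

# Hypothetical outcomes per arm: (customers contacted, customers assigned)
results = {
    "control": (180, 1000),
    "reminder_letter": (235, 1000),
    "behavioral_call_to_action": (205, 1000),
    "handwritten_note": (260, 1000),
}

base_rate = results["control"][0] / results["control"][1]
for arm, (contacted, n) in results.items():
    rate = contacted / n
    print(f"{arm:28s} contact rate {rate:6.1%}  uplift vs control {rate - base_rate:+6.1%}")
```

The control arm is what turns "contact rates went up" into "the intervention caused contact rates to go up," which is the whole point of running the test as an RCT.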


Large credit card company behavioral testing takeaways

Earlier in my career, I was a behavioral economist in large banking and consumer lending organizations. I led BIT-like / nudge unit teams. Our job was to use a wide range of analytical techniques (some AI-related), data sources, Randomized Control Trial (RCT) techniques, and other behavioral techniques to manage bank credit loss exposure and optimize lending program performance. We were large enough that we had our own testing operation, which meant we had dedicated customer agents trained for behavioral testing. We also had systems designed to integrate results with the test design parameters. These were some of the coolest jobs I ever had!


Based on my nudge unit experiences, the following are some important takeaways:

  • Listen to your customers! Go beyond "The Matrix" data view of your customers. Have customer round tables and focus groups. Great testing ideas come from listening to your customers. Personally, I think all data scientists and related analysts should have some kind of regular customer interaction.

  • Seek unique data about your customers. This could be an insight from existing data or it could be new and unique data.

  • Some "data digging" is ok. Data Scientists often do not like data digging. Data digging is code for the messy ETL-related data processes needed for less structured data sets. It can be grinding work. I call this the "meta meta data." That is, the story behind the data dictionary. It can be time-consuming and take away from primary data analysis. While I hope Data Scientists spend the majority of their time analyzing, some data digging can be both instructive and can lead to a "digging for gold" outcome by finding unique competitive insights.

  • Test new Artificial Intelligence techniques. My observation is that new analytical techniques are not always better than "tried and true" techniques like regression and decision trees. However, we always learn something new and useful in the process, beyond whether the new AI technique was effective. It is worth the exploration, just not for the reasons you may expect.

  • Test execution is critical. Commit the resources for proper test execution. Testing systems may include:

  1. Testing program guides,

  2. Coding to differentiate test and control groups,

  3. Collecting performance results,

  4. Scripting for agents or customers,

  5. Availability of characteristic data and related testing information, and

  6. Analytical resources to analyze and provide post-test results and recommendations.

  • Test with a successful scaling outcome in mind. Meaning, assuming success, how will this test be rolled out and scaled in the base business? Unfortunately, I have seen successful tests that failed to impact business results because of a failure to scale.

  • Causality is key! RCT is necessary to drive confidence in the causal nature of your results. It will also help business leaders understand the value of risk testing. Often, a small (but statistically significant) percentage risk test gain will lead to a significant bottom-line improvement. By the way, not every test is suitable for RCT. If you test without a control, be very explicit about what you hope to learn and potential learning limitations.

  • Useful for all financial services products. While this example is likely most useful in the credit card context, with today's information technology, it can be useful for all financial services products.

Compliance automation example

This final example is specific to obligation compliance testing. This example will be different from the other article examples for a couple of reasons.

  1. It is newer. Traditionally, analytics has been more focused on credit risk management than on compliance risk management, and

  2. While Randomized Control Trials (RCT) are certainly possible, this example is focused more on automating compliance testing.

In the main, control groups are not always necessary or even desired for compliance.


Conceptually, a control in compliance testing could mean being out of compliance. It would likely not make sense for a bank to purposely be out of compliance for the sake of an RCT! However, a bank may want to test the effectiveness of certain compliance communications or disclosures. For example, what if communication effectiveness were an objective for compliance disclosures? I know! Crazy talk! Sadly, most consumer disclosures seem to be designed not to be read and not to be understood. Behavioral economists describe such "not to be understood" disclosures as containing sludge. If a progressive organization wanted to reduce disclosure sludge, it could do so via an RCT structure. Also, natural experiments may sometimes occur across multiple operating groups performing a similar function. This could create a valid RCT-like test environment. [vi]


By the way, this example is a composite of actual recent experiences across multiple banking organizations.


Automated testing can:

  1. increase compliance testing coverage,

  2. decrease testing costs, and

  3. improve testing quality.

From a customer and regulator standpoint, the bank's customer communication and documents (letters, statements, emails, texts, promissory notes, disclosures, etc.) are the customer's "system of record." That is, customer communication and documentation are the ultimate confirmation source that the bank has met various regulatory, investor, and other obligations. Because customer communication is often stored as unstructured data, testing it requires cost-effective automation capabilities to interpret documents, ingest data, and evaluate bank obligations. See the following graphic to compare the bank's and the customer's perspectives.

Also, an operational complication can arise when third parties are involved in creating and transmitting customer communication and documentation. Given this, the ability to structure data and apply obligation tests is critical for testing the "customer view" and is the essence of automated compliance testing.
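To illustrate what automated obligation testing against the "customer view" can look like, here is a minimal sketch that applies simplified rule checks to letter text that has already been extracted (for example, via OCR). The obligation patterns are illustrative placeholders, not actual regulatory language.

```python
import re
from dataclasses import dataclass

@dataclass
class ObligationTest:
    name: str
    pattern: str  # simplified rule: language or data the letter is expected to contain

def run_tests(letter_text: str, tests: list[ObligationTest]) -> dict[str, bool]:
    """Apply each obligation test to the OCR-extracted letter text."""
    return {t.name: bool(re.search(t.pattern, letter_text, flags=re.IGNORECASE)) for t in tests}

# Illustrative (not actual regulatory language) checks for an adverse action letter.
TESTS = [
    ObligationTest("states_principal_reasons", r"principal reason"),
    ObligationTest("names_credit_bureau",      r"(equifax|experian|transunion)"),
    ObligationTest("discloses_ecoa_notice",    r"equal credit opportunity act"),
]

letter = """Dear Customer, your application was declined. The principal reasons were ...
Information was obtained from TransUnion. Notice under the Equal Credit Opportunity Act: ..."""
failures = [name for name, passed in run_tests(letter, TESTS).items() if not passed]
print("PASS" if not failures else f"FAIL: {failures}")
```

In a real program, the rules library, the document extraction step, and the exception workflow would each be far richer, but the pattern of "structure the customer media, then test it against coded obligations" is the core of automated compliance testing.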


In general, automated testing is an iterative, updating process as communications, documents, and algorithms are validated. Below are key automation outcome categories, resolutions, and success suggestions, depending on the nature of the automated testing outcomes.

For more information, please see our article Making the most of Statistics and Automation. [vii]

 

5. Banking automation - Every data science shop needs Radar O'Reilly


The T.V. sitcom M*A*S*H ran in the 1970s and early 1980s and was a funny show about an Army hospital unit in the Korean War. (In case you are wondering, I only watched the re-runs....) What could data science possibly have in common with this T.V. show? Turns out, quite a bit. But first, let me build some related perspectives on data science in the banking world.

Bank incrementalism's impact on data science

Banks become big banks mostly through consolidation. The consolidation catalyst may come from many sources:

  • Often a big economic downturn is a cause,

  • Sometimes law changes are a cause (think of the reduced interstate banking restrictions in the 1990s), or

  • A regulatory change may be a cause, resulting in different scale economies (think of the CFPB requirements that kick in at the $10B bank asset size).

The following graphic shows consolidations for some of the biggest U.S. banks from 1990 to 2010. Certainly, a similar consolidation trend exists for most U.S. banks. "Eat or be eaten!" seems to be the mantra.

So, what does this mean for data science in banking? In a word, banking suffers from "incrementalism." This occurs for a multitude of reasons, including 1) our human nature to think shorter term (i.e., recency bias), 2) SEC registrant quarterly reporting requirements encouraging short-term reporting and related short-term thinking, and 3) the consolidation norm specific to banking. In the data science world, data is the raw material enabling analytical success. Access to data is critical. Unfortunately, in the incremental banking context, data can be hard to locate, access, and utilize. The following is one of my favorite aphorisms about data:

“Where is the wisdom? Lost in the knowledge. Where is the knowledge? Lost in the information.” - T. S. Eliot

“Where is the information? Lost in the data. Where is the data? Lost in the damn database!” - Joe Celko

Generally, in bigger banks, data can be very siloed across different operating groups and different operating systems (aka, systems of record), with various levels of care. Also, because of bank incrementalism, acquired bank systems are not always fully integrated into the acquiring bank's systems. Today, with an increasing focus on information security, data accessibility is generally more restricted and may require special permissions. All this creates friction for the data scientist. Often, doing really interesting data analysis and driving actionable business insight is only about 20% of the data scientist's job; the remaining time is spent wrangling data and handling other administrative tasks. So, this is the data scientist's reality. Is it getting better?

Some days, yes: better data warehousing, APIs, or tool access arrives. Some days, no: the next wave of bank consolidations or more information security rules arrives.

If you are in a data science group, especially a group focused on operational analysis and compliance testing, this reality is likely particularly acute. This occurs because you are closely tied to the operating system's data availability. For example, compliance testing, especially specific to customer obligations, requires access to core system-of-record data and documents. The gold standard is to directly test the customer's communication media (letters, texts, statements, online, auto agent, etc.) against the regulatory, investor, or related obligations. Because of organizational complexity, separate systems, third-party involvement, infosec requirements, etc., automation-enabled testing of customer media may be very challenging. Please see our article AI and automated testing are changing our workplace for more information.

Building your data science shop like a M*A*S*H unit

A practical solution to enhance data availability may be found in the following metaphor. A M*A*S*H unit runs with a couple of primary operating groups. Those include the expert doctors, nurses, and orderlies that attend to the patients - think of Hawkeye or Margaret Houlihan. Also, the M*A*S*H unit includes leadership, like Colonel Blake or Colonel Potter. Naturally, data science shops also have both experts and leaders (the data scientists and the data science leadership). So far, so good. But what I see missing most often in data science shops is the single most important factor in making a M*A*S*H unit run. That is, Radar O'Reilly. Radar is not just a company clerk; he is the grease that makes the M*A*S*H unit run. Radar is the one who knows how to get things done, knows all the Army supply sergeants, and knows the company clerks at the other Army M*A*S*H units. As such, Radar knows where to get the raw material to ensure the M*A*S*H unit operates effectively. Radar knows his way around the Army and how to work back channels.

In the context of data science, Radar knows where the data is, whom to contact to get the data, how to get the metadata/data dictionary/data ontologies, the nuances of the infosec rules, and how to stay ahead of the next big change affecting data availability. To me, asking a data scientist to run down data is like asking surgeons to buy their own sutures. For whatever reason, big bank data science organizations do not seem to hire the Radar O'Reilly types. If I were starting a new data science organization in a big bank, my first hire would be Radar O'Reilly. Sure, I would hire a crack team of data scientists, junior analysts, and application engineers, and license Python, R, SAS, RPA/OCR engines, or related tools and storage. But Radar would come first, since it is hard to analyze anything if your data raw material is elusive and regularly at risk.


6. Conclusion


This article considered bank risk management organizations in the context of growing and improving information technology. Since banking is in the business of "why," we encourage organizations to have a risk management-oriented nudge unit. That is, an analytically focused organization that performs behavioral and automation-based testing and analytics. It helps banks understand the value of risk management and make decisions to optimize risk and improve customer delight. We also suggested some organizational pitfalls and how to manage a data science organization using the M*A*S*H unit metaphor.


Notes


[i] Please see the article It's the (Democracy-Poisoning) Golden Age of Free Speech. Tufekci points to the use of Twitter and other social media as an example of non-curated data drowning strategies used by some politicians. Also, see our article Information curation in a world drowning in data noise. This article provides insight and tools to be a responsible information curator in a world drowning in data noise.


[ii] In this article, "bank" is used broadly as a convenient synonym for all financial services companies, bank or non-bank. Certainly, there are nuances between financial product companies' regulatory charters that may drive differences in the effectiveness of a nudge unit.


[iii] There is much literature on Randomized Control Trials. Please see this article for a nice overview: Randomized Controlled Trials


[iv] See our article for the banking legal structure impacting causality and the use of data:

Hulett, Resolving Lending Bias - a proposal to improve credit decisions with more accurate credit data, The Curiosity Vine, 2021


Judea Pearl, in The Book of Why, does a nice job of describing the importance of causation in terms of a ladder. Causality requires reaching at least the second rung of the causation ladder, whereas correlation sits only on the first rung.


[v] Utilizing behavioral economics and behavioral psychology theory, BIT helps companies, governments, and related organizations improve various socially important goals. The Behavioural Insights Team is headed by psychologist David Halpern. BIT is affiliated with the United Kingdom government. It was originally chartered as a cabinet-level office. Today, it is a UK-based social purpose limited company. BIT has performed over 500 RCTs and runs over 750 projects. The following Mortgage Servicing example is one of BIT's projects.


[vi] Thaler, Sunstein, Nudge, The Final Edition, 2021

Thaler and Sunstein make a great case and provide an approach to reduce disclosure-based sludge. They call it Smart Disclosure. While it is intended more for government, it could be used in banking and applied to common product disclosures. While they do not advocate for a particular approach, my head goes straight to blockchain-enabled technology. What if people who close financial products provided anonymized disclosures to a central "smart disclosure" engine? This engine would convert anonymized documents into data able to be loaded on a blockchain. The data could then be made available to consumer-friendly apps to let users know what to "really" expect in a manner that is much more consumer-friendly than the current disclosure sets.


[vii] Key automation outcome categories, defining False Positives and False Negatives -


False Positives: A false positive error, or false positive, is a result that indicates a given condition exists when it does not. For example, a cancer test indicates a person has cancer when they do not. A false positive error is a type I error where the test is checking a single condition and wrongly gives an affirmative (positive) decision. However, it is important to distinguish between the type 1 error rate and the probability of a positive result being false. The latter is known as the false-positive risk.


False Negatives: A false negative error, or false negative, is a test result that wrongly indicates that a condition does not hold. For example, when a cancer test indicates a person does not have cancer, but they do. The condition "the person has cancer" holds, but the test (the cancer test) fails to realize this condition, and wrongly decides that the person does not have cancer. A false negative error is a type II error occurring in a test where a single condition is checked for, and the result of the test is erroneous, that the condition is absent.


Implications

Depending on the test context, the error type has significantly different implications. The cancer example is closest to banking transactional testing. That is, a false positive can be annoying or cause a patient or client unnecessary apprehension. A false negative can be deadly; that is, the cancer remains undetected. In the case of bank risk testing, a false positive can create a customer service problem or a false risk signal. A false negative can enable the very risk it is trying to detect, that is, not identifying credit, compliance, or fraud risk when it exists. False negatives are often the basis for regulatory enforcement action.


- Excerpt from our article Making the most of Statistics and Automation
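To make the distinction in the note above between the type I error rate and the false-positive risk concrete, the short sketch below computes both from a hypothetical automated-testing confusion matrix (the counts are made up for illustration).

```python
# Hypothetical results from an automated compliance test run, compared to a manual QA review.
true_positive  = 45    # test flagged an issue, and the issue was real
false_positive = 15    # test flagged an issue, but none existed (type I error)
false_negative = 5     # test missed a real issue (type II error)
true_negative  = 935   # test correctly passed the item

false_positive_rate = false_positive / (false_positive + true_negative)  # type I error rate
false_negative_rate = false_negative / (false_negative + true_positive)  # type II error rate
# "False-positive risk": of all items the test flagged, what share of flags were wrong?
false_positive_risk = false_positive / (false_positive + true_positive)

print(f"false positive rate (type I):  {false_positive_rate:.1%}")
print(f"false negative rate (type II): {false_negative_rate:.1%}")
print(f"false-positive risk:           {false_positive_risk:.1%}")
```

Note how the two "false positive" measures can diverge: a test can have a tiny type I error rate yet still produce many wrong flags relative to its total flags, which is why both views matter when tuning automated testing.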

Appendix

The following diagram describes the typical loan products most adaptable to automation (on the right side of the axis), as opposed to those least adaptable to automation (to the left). Generally, higher-volume, homogeneous products will be more adaptable to automation. Below are the loan products and their related features. These features help dimension the products and their relationship to automation adaptability.


Today’s automation and AI related tools make it easier to unlock adaptability in traditionally “Lower Automation Adaptability” products.

