Insights from a FinServ CISO Veteran: Achieving Operational Resilience Through Transparency

Let’s be real – achieving operational resilience, particularly in a financial institution, is difficult, to say the least. For most organizations, it means major challenges in adapting existing operational and technology infrastructures to the complexities of an increasingly remote workforce. Rapid digital transformation has made it even harder to keep up with changes in operational risk controls.

How do organizations overcome these challenges? Transparency. When you manage risk, especially through the lens of operational resilience, you need a thorough understanding of what is happening in your environment to accurately scope the systems involved. That understanding lets you better manage the business risk of critical processes and achieve more focused compliance.

Read on for an in-depth conversation I had with Charles Blauner, a longtime global CISO in financial services, about how he has tackled operational resilience in his organizations. Charles, former CISO at JP Morgan and Global Head of Information Security at Citi, shares his insight on shifting from a risk-topic-oriented perspective to a business-process-oriented one, and on how transparency plays a key role in achieving operational resilience. We also discuss how Application Relationship Management solves these challenges.

Kate: Operational resiliency seems to be top of mind right now for many organizations. Do you think this is a new trend, or have we just not paid as much attention to some of these issues that are prevalent now?

Charles: Operational resilience has been important for companies for a long time, particularly in finance. I think you have to credit the Bank of England for putting out one of the first whitepapers talking about operational resilience and bringing the topic to the attention of CEOs and the Board of Directors. 

Operational resilience really means thinking about all the risks we’ve managed for a long time, but shifting the perspective from one that is risk domain-oriented to one that is oriented around each business process. One aspect that is not new is that the foundation of good security and operational resilience is the identification of critical assets and core business processes, with the intention of protecting those assets and operations regardless of where they run or from where they are accessed.

So at the foundation, nothing’s actually changed, but everything around you has changed radically. Those assets are now in a myriad of different places. How you’re accessing those assets, and from where, has also changed. So you have to take a step back and think about the world a little differently, and that requires a new approach.

Kate: When you’re talking to a Chief Risk Officer (CRO) or when you’re talking to a board, how do you start to really correlate this concept of operational resiliency into the overarching management of risk?

Charles: Again, changing the perspective from being one that is purely risk topic oriented to one that is oriented around the business process is key. It’s understanding what the risk is to the resiliency of a business process. When put into the context of the business process, the discussion with the CRO or the board is one with which they are familiar and comfortable.

Under the covers, however, you have to know the collection of systems and assets that is actually responsible for delivering that business process. If you’re a bank, it may be your funds transfer system or your Demand Deposit Account (DDA) system. If you’re a beverage company, it’s the control of the manufacturing process. If you’re an airline, it’s your safety systems.

If you start to evolve your thinking around operational resiliency you’ll start to change the conversation you have with your board members and senior management, and that will make a lot of other things simpler. By being able to actually look at the process and system in a deterministic way, it becomes the starting place for how you think about the operational resilience of this business process and what assets you should care about. 

Kate: How does compliance factor into this?

Charles: When you manage compliance risk, especially through the lens of operational resilience, you really need a thorough understanding of what is happening in your environment to be compliant. SWIFT and PCI, for example, set a high bar from a security perspective, and you definitely want to manage the relevant business processes and their risk profiles properly.

Looking more closely, if you’re a bank, your SWIFT and funds transfer environments have one risk profile. Your HR systems have a different risk profile. Then you have systems that sit inside compliance regimes like PCI. You have to get these systems in order and secured before you can do anything else. So part of the challenge is always figuring out what is in scope of compliance and what is not.

Now let’s take PCI – what systems are involved in processing the issuance of your credit card? Those systems are in scope of PCI. A technology that helps you understand the relationships between application components lets you determine deterministically exactly which systems play a role. Now you can show an auditor: these are the systems and flows inside my processing that fall within the scope of PCI. That is what you secure to the PCI standard, and everything else is out of scope.
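The scoping logic Charles describes can be viewed as a reachability question over an application-relationship graph: any system with a data flow connecting it to the cardholder-data entry points is in scope. Here is a minimal sketch of that idea; the graph, system names, and `pci_scope` helper are all hypothetical illustrations, not part of any specific product:

```python
from collections import deque

# Hypothetical application-relationship graph: edges are observed data flows.
flows = {
    "card-gateway": ["tokenizer", "fraud-scoring"],
    "tokenizer": ["card-vault"],
    "fraud-scoring": [],
    "card-vault": [],
    "hr-portal": ["payroll-db"],  # no path to cardholder data
    "payroll-db": [],
}

def pci_scope(flows, entry_points):
    """Return every system reachable from the cardholder-data entry points."""
    in_scope, queue = set(entry_points), deque(entry_points)
    while queue:
        system = queue.popleft()
        for downstream in flows.get(system, []):
            if downstream not in in_scope:
                in_scope.add(downstream)
                queue.append(downstream)
    return in_scope

print(sorted(pci_scope(flows, ["card-gateway"])))
# ['card-gateway', 'card-vault', 'fraud-scoring', 'tokenizer']
# hr-portal and payroll-db never appear, so they stay out of scope.
```

A real relationship map would of course include protocols, ports, and observed behavior rather than bare edges, but the auditor-facing argument is the same: everything reachable from cardholder-data processing is secured to the PCI standard, and everything else is demonstrably out of scope.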

These ideas start to interconnect in a lot of different ways as you start to think more about business processes and the ability to have more transparency and then to figure out what’s in scope from a compliance perspective.

Kate: If you think about some of the breaches and events we’ve seen in the past year, the scope of what to focus on has really changed. How should we think about this brave new world of scope, particularly from an anomalous-behavior standpoint?

Charles: The main challenge this year was that our old baselines of normality changed so rapidly that anomaly detection became even more difficult. To overcome this, you have to think about the protect, detect, respond, and recover core functions of the NIST Framework. Over the last year, how we think about all of those functions has changed.

Let’s think about detection for a moment. The reality is that in large environments detection is very difficult because of the vast number of things you are required to monitor. If you could create a behavioral baseline of everything in your environment, and then understand when things start to behave differently, that would help you pick the areas to focus on in your security operations. The truth is you can’t secure your entire environment to the right standard without understanding your entire environment.
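The baselining idea Charles outlines can be sketched very simply: record which behaviors (here, pairs of communicating systems) are normal for the environment, then flag anything that falls outside that baseline. All names and data below are illustrative assumptions; a real deployment would baseline far richer behavior than connection pairs:

```python
# Behavioral baseline built from historically observed connections.
# Purely illustrative data, not from any real environment.
baseline_window = [
    ("app-server", "orders-db"),
    ("app-server", "cache"),
    ("monitor", "app-server"),
]
baseline = set(baseline_window)

def anomalies(observed, baseline):
    """Return connections never seen during the baseline window."""
    return [conn for conn in observed if conn not in baseline]

live_traffic = [
    ("app-server", "orders-db"),   # normal, matches the baseline
    ("monitor", "external-host"),  # never seen before: flag for review
]
print(anomalies(live_traffic, baseline))
# [('monitor', 'external-host')]
```

The flagged connection is not proof of compromise; as Charles puts it later, it says something odd is happening that requires a closer look, which is exactly what a baseline buys you in a large environment.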

Kate: So, what do you need to secure your entire environment to the highest standard and gain a complete understanding of your environment?

Charles: That is where transparency is required – the same transparency that operational resilience demands in understanding the set of systems that make up a business process. If you really understand how your environment works, using the same set of tools to create that transparency, and you monitor behavior on an ongoing basis, you can detect bad things. From a cyber defense posture, one way to detect bad things happening in your environment is to understand everything that makes it up, understand how it all behaves, and recognize when something behaves in an unusual fashion. This requires transparency and visibility, because you cannot protect what you cannot see. Transparency helps you understand where you need to manage risk and where you need to comply.

For example, if you had a baseline understanding of how the SolarWinds server behaved normally – what ran on it and how it interacted with your environment – the compromise would have appeared as an anomaly, and investigating that anomaly would likely have led to early detection of the bad actors in the environment.

Kate: Ok, so tell me, how do you achieve this transparency in your environment?

Charles: With a technology that understands the relationships between application components, so you can determine deterministically exactly which systems play a role. That is what Application Relationship Management provides, particularly with respect to detection. You need to detect everything, but you can’t without the appropriate tools. Application Relationship Management gives you a detection capability you didn’t necessarily have before: a much better understanding of what normal behavior patterns look like, so you can spot anomalous behavior. So, for example, you can use this data not just to understand your risk management and operational resilience scope, but also in real time as part of your event detection capabilities.

Kate: Going back to SolarWinds, how could Application Relationship Management have been leveraged, and what would have changed? 

Charles: If you were a SolarWinds customer and had implemented Application Relationship Management, it’s likely you would have seen the behavior pattern change when the compromised version was introduced. It wouldn’t have said explicitly that something bad was happening, but it would have said something odd was happening that required a closer look – and having looked, hopefully you would have found the compromise.

Kate: Many companies that we speak with are just starting to embark on the journey of enhanced transparency through Application Relationship Management for operational resilience. What advice would you like to leave them with?

Charles: This trend has been coming for several years, but the events of the past year have greatly accelerated the timeline for all organizations. With the rapid adoption of cloud, enterprise environments are more diverse, complex, and dynamic than ever. Managing operational resilience under these conditions is a difficult challenge, and Application Relationship Management is an excellent way to address it. You should strive to protect everything, but you can’t monitor every single thing happening in the environment on your own – the volume is simply too high. The ability to detect is therefore a fundamental requirement of any security strategy, and that is where an Application Relationship Management solution gives you a huge advantage.

I suggest following these key guidelines to achieve transparency in your environment for operational resilience: 

  1. Start from an operational-risk perspective with the environments running your most critical business processes. From a vendor risk management perspective, also focus on environments running third-party applications that require administrative privileges. Managing third-party risk is essential to operational resilience, given the growing dependence on third parties for specific functions of core business processes.
  2. As you map the environment, the map will grow naturally until you have a complete understanding. You can then establish a baseline of ‘normal’ activity and monitor in real time to spot systems behaving abnormally.
  3. With transparency and visibility, you can be smarter about ‘bad behavior.’ Implement an Application Relationship Management solution to achieve this transparency and visibility, and monitor how things behave. Application Relationship Management enables you to fully understand your environment and detect anomalous behavior before permanent damage is done.


To learn more about how to improve operational resilience with Application Relationship Management in your organization, watch the in-depth Fireside Chat featuring internationally recognized financial security veterans Boyd White, Director of Technical Solutions Engineering at Tanium, and Charles Blauner.

About Charles Blauner:
Charles Blauner is an internationally recognized expert and independent advisor on Cyber Resiliency, Information Security Risk Management, and Data Privacy. He has worked closely with banking regulators around the world (OCC, FRB, BoE, MAS, and HKMA) to help reduce the risk posed by cyber threats to the financial sector at large. Charles is a Partner and CISO in Residence at Team8 Ventures, President of Cyber Aegis Consulting, and a Strategic Advisor, Mentor, CISO village elder, and Advisory Board Member at vArmour.

Previously, Charles had a distinguished career in Information Security spanning more than 30 years, 25 of them in Financial Services, including serving as Chief Information Security Officer (CISO) at JP Morgan and Deutsche Bank, and most recently as Global Head of Information Security at Citi.
