Proponents of ERM frequently point to heatmaps as a primary deliverable, which may only make the situation worse. While heatmaps can be a good tool when used properly, they aren’t necessarily the end goal. Furthermore, when used improperly, they simply highlight risks that the organization already knows about. The article Five Benefits of Enterprise Risk Management summarizes what this can lead to:
“Many organizations struggle with implementing ERM and identifying how, and at what level, to integrate it into their organization. Managers often say they are already aware of the risks for their respective areas of the business. In these situations, what value does ERM provide, and how does it enable better perspectives and management of risks and risk data?”
In a North Carolina State University interview, Laurie Brooks, a member of the Board of Directors of Provident Financial Services, offers a definition of ERM. “... What ERM attempts to do is to collect the information about these activities, and to analyze it and to look for connections, correlations, concentrations of risk, to help senior management allocate resources and prioritize those resources for the management of the risks that could impact the company’s mission and strategy.”
This definition includes several key concepts that help to make ERM less nebulous. It defines ERM as a process rather than the system that supports the process. It also connects risk-related data directly to the decision-making process, which is critical. Despite the additional clarity of this definition, it does not identify how ERM mitigates risk — how it performs the “magic” that the skeptics want to see.
Three Conditions for ERM
Stripped down to its core concept, ERM can be explained in three points:
- A process where data tells a story leading to action
- The action is one the organization would not otherwise take
- The action is related to strategic objectives
The first element separates data gathering (the focus of many ERM programs) from the ability to use the data that is gathered to actually do something. Collecting a ton of data means nothing if it doesn’t lead to taking action.
The second element is where the “magic” actually takes place. If reviewing the collected data makes it clear that additional action is needed, and that action would not take place without the review, that is how ERM programs mitigate risk.
The last element ensures program relevance. Collecting data that triggers actions that do not move the needle on strategic objectives provides little value to stakeholders or the organization. Tying these actions directly to the most critical objectives, however, leads to better alignment with top priorities and makes the output highly pertinent.
Making it Familiar
Carol Williams, author of the ERM Insights by Carol blog, points out in the article 8 Ways Enterprise Risk Management is Different (... and Better) than Traditional Risk Management how commonplace this concept really is. “Whether they know it or not, everyone in an organization from the janitor to the CEO engages in ‘risk management’ of one sort or another on a daily basis,” she notes. She follows this with an example of a janitor putting out a “caution, wet floor” sign on a rainy day.
Her simple example of the caution sign meets all three conditions for ERM. Data tells a story (“It is raining outside, the lobby will be wet”). That leads to an action the organization would not otherwise take (put out the sign). That action is tied to strategic goals (prevent employee slip and falls). While this example may be less complex than the risks ERM programs typically manage, the underlying idea applies. This makes the concept much more relatable, and easier to explain to stakeholders. It’s not magic, it’s commonplace.
It’s all the SAME
Reducing the concept to the three conditions above can lead to another round of questions. “Sure, but how does that relate to cyber risk (or the coronavirus, or the Australian wildfires, or Brexit, or a host of other complex problems)?” While these types of risks initially seem way too complex to be addressed with this approach, they can all be made much more manageable by applying an easy-to-remember technique.
- S = Survey
- A = Assess
- M = Measure
- E = Execute
- Survey: Identify what’s out there (everyone can help identify risks; crowdsource)
- Assess: Figure out which factors to measure (frequency and severity are not the only factors)
- Measure: Use thresholds to figure out triggers (based on risk appetite/tolerance)
- Execute: Use context to make a better decision or initiate a plan to address any “next steps”
This technique applies to all types of risks. Where many programs struggle is by focusing only on the first three steps — building a data collection system with no clear idea of how to generate the last “E” element. That element is the extra action required to actually address these complex risks.
Unpacking an example shows how this technique works. Say an organization was trying to address cybersecurity risk. The process could work this way:
S = Survey – Survey IT staff to find out “What keeps you up at night?” and build out a register. One common response: unpatched/out-of-date devices.

A = Assess – Score this on the factors that matter most to the organization (severity, impact, velocity, preparedness, etc.). Score: High; this issue needs to be addressed.

M = Measure – Set thresholds for a warning trigger based on the strategic importance. Thresholds for unpatched rates: <5% acceptable; 5-10% medium concern; >10% high concern.

E = Execute – Decide what should happen (the extra action) if a threshold is reached: <5% take no action; 5-10% alert IT managers; >10% execute the “patch plan” and alert leadership.
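The Measure and Execute steps above amount to a simple threshold-to-action mapping, which can be sketched in a few lines of code. This is an illustrative sketch only: the function name, thresholds, and action strings are assumptions drawn from the hypothetical cybersecurity example, not part of any real ERM system.

```python
def execute_action(unpatched_rate: float) -> str:
    """Map an unpatched-device rate (as a percentage) to the
    hypothetical response actions from the example above."""
    if unpatched_rate < 5:
        # <5%: acceptable, within risk tolerance
        return "No action"
    elif unpatched_rate <= 10:
        # 5-10%: medium concern, warn the responsible managers
        return "Alert IT managers"
    else:
        # >10%: high concern, trigger the pre-planned response
        return "Execute patch plan and alert leadership"

print(execute_action(3))   # No action
print(execute_action(7))   # Alert IT managers
print(execute_action(12))  # Execute patch plan and alert leadership
```

In practice the thresholds would come from the organization’s documented risk appetite, and the “extra action” would be decided before the trigger fires, not after.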
This could apply to unpatched rates for the entire organization, or be broken down by business units, lines of business, or geographic regions — each of which could further adjust the thresholds as needed. While there might be dozens of additional similar risks that all roll up to the broader cyber risk threat (and the data needed to set those triggers may come from multiple sources), the process is always the SAME.
There are many additional concepts that can add to ERM’s effectiveness, such as residual risk, risk tolerance/appetite, and continuous monitoring. The underlying concept, however, remains clear and simple. Unpacking individual examples using the SAME technique, and explaining ERM in terms of the three conditions, can serve to demystify ERM and gain stakeholder buy-in when building up any ERM program.
** NOTE: If you are a member of RIMS, your chapter can request the delivery of the RIMS PERK Program-approved presentation, Demystifying ERM, at a chapter meeting. The presentation dives deeper into examples of how an ERM system delivers the three conditions, and how to avoid the four most common pitfalls when launching an ERM program.
To learn more about Origami's ERM solution, contact us.