
Measuring security operations capabilities and improving their maturity, efficiency, and effectiveness

To slightly paraphrase Peter Drucker’s famous quote, one can’t manage what one can’t measure. This – of course – holds true even for Computer Security Incident Response Teams (CSIRTs) and Security Operations Centers (SOCs). The only question is how we can “measure” what they do in a meaningful way. This is what we will discuss in this article, which is loosely based on a presentation called ‘How to measure Efficiency in Security Operations’ that I gave at the Open Cyber Security Conference (OCSC) in Tenerife in February 2024.

Why should we measure anything?

To my mind, the aforementioned quote says it all. If something (e.g., a SOC or a CSIRT that is being operated or used by our organization) is basically just a “black box” from which only a report or an alert sometimes emerges, how can we say whether that black box functions efficiently? Worse yet, how can we say whether it fully satisfies the needs of our organization?
For example, can we be certain that our security monitoring service truly does detect threats relevant to our organization, and does not depend only on generic detection capabilities that ignore our specific threat profile?

It should be clear that without “measuring” various aspects of CSIRT and SOC operations, there is very little we can be sure of… This is, of course, troubling if relevant security services are provided by an internal department of our own organization, but potentially even more so if the services are being delivered to us by an external MSSP.
It is therefore in the best interest of any organization that avails itself of security operations services – be they internally or externally provided – to periodically evaluate whether these services function effectively enough to fulfill the corresponding organizational needs.

What do we actually want to measure?

The vaguely defined terms “Blue Teaming” and “Security Operations”, which are commonly understood to be the purview of SOCs, CSIRTs and teams hidden behind various other acronyms, do – for obvious reasons – mean different things in different organizations. In order to have a reasonable starting point for our discussion, we therefore first have to specify which areas we actually want to measure.

For the sake of simplicity, we will consider “Security Operations” to mean the service areas covered by the FIRST Services Framework, i.e., Information Security Event Management, Information Security Incident Management, Vulnerability Management, Situational Awareness and Knowledge Transfer. Of course, the services provided by a specific SOC, CSIRT or any other “blue team” do not necessarily have to cover all of these areas; however, since the activities of some teams do encompass all of them, we will use the Services Framework as our starting point.

With that out of the way, the time has almost come for us to take a look at how to analyze and measure maturity, efficiency and effectiveness in the various areas that the aforementioned framework covers.
Before that, however, it’s important to emphasize that security operations rely not only on technology but also on processes and personnel – just like cybersecurity and information security as a whole. And while some organizations tend to see the “effectiveness”, “efficiency”, “quality” or “maturity” of their security operations programs mostly as a function of the number and variety of technical security solutions they have deployed, such a view is – for obvious reasons – unacceptably limiting (or “blatantly incorrect”, to put it in more straightforward terms).

As such, this techno-centric view would hardly lend itself to any reasonable “assessment” or “measurement” of real effectiveness of security operations. Therefore, although we will certainly not disregard technologies in our further discussion, we need to keep in mind the fact that technologies are only one part of the puzzle… And not necessarily always the most important one.

How can we actually “measure” security operations?

This is the key question.

One could define and use any number of different metrics, KPIs and SLAs for various areas of security operations (and if you are looking for ideas in this area, try looking at the SOC-CMM metrics suite). However, these probably wouldn’t be of much help if one wanted to measure any of the aforementioned Services Framework areas in a more complex or formal manner.

For this purpose, one might – of course – develop a custom methodology. However, it may be wiser not to reinvent the wheel if an effective methodology already exists. We will therefore take a look at several methodologies and frameworks for measuring or assessing different areas of security operations that are currently available.

It should be mentioned that most of these methodologies solve the issue of “how to measure” various aspects of security operations by using some (perhaps simplified or modified) version of the CMMI. They can therefore be said to measure the maturity of different areas, rather than their efficiency or effectiveness. Nevertheless, since maturity – as these models frame it – encompasses efficiency, effectiveness, repeatability and sustainability, these methodologies are well suited to our needs.

Below is a non-exhaustive list of freely available maturity models, methodologies and relevant frameworks, along with a brief description of their primary purpose, organized by the security operations area they cover. Each methodology or framework that is potentially suitable for more than one service area has been listed under the area where its use is likely to bring the most benefit.

Information Security Event Management

This service area of the FIRST Services Framework covers security monitoring and the detection and analysis of security events, which is usually the domain of Security Operations Centers.
Although there are various methodologies and frameworks that may be useful in this area (you may find some additional ones in the Information Security Incident Management section below, since that area and security event management are inextricably linked), two deserve a special mention.

SOC-CMM

SOC-CMM is undoubtedly the best-known and most commonly used maturity model for SOCs. As you can see from the following picture, it is quite comprehensive – in its current version (2.3), it covers 26 aspects of SOC operations split into 5 domains (Business, People, Process, Technology and Services). It enables one to evaluate various factors of each of these aspects using a 5-level maturity scale, and – for those aspects that fall into the Technology and Services domains – also using a 4-level capability scale.

SOC-CMM Model
Source: SOC-CMM

For practical application, the model is available in the form of a user-friendly Excel assessment tool – or, rather, two tools, a “Basic” and an “Advanced” one. However, the Basic version is likely the only one you’ll ever need.
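To give a sense of what such an assessment boils down to numerically, the following sketch aggregates per-aspect scores into per-domain maturity values. The domain names mirror the model, but the aspect names, scores and the plain unweighted averaging are illustrative assumptions – the official Excel tool implements its own scoring and weighting logic.

```python
# Illustrative sketch of aggregating SOC-CMM-style aspect scores into
# per-domain maturity values. Aspect names, scores and the unweighted
# averaging are assumptions for illustration only -- the official
# SOC-CMM Excel tool uses its own scoring and weighting logic.
from statistics import mean

# domain -> {aspect: self-assessed score on the 5-level maturity scale}
assessment = {
    "Business": {"Business drivers": 3.2, "Customers": 2.8, "Charter": 3.5},
    "People": {"Employees": 2.5, "Roles & hierarchy": 3.0, "Training": 2.2},
    "Process": {"SOC management": 3.1, "Reporting": 2.7},
}

for domain, aspects in assessment.items():
    print(f"{domain}: {mean(aspects.values()):.1f}")
```

Even a rough aggregation like this makes it easy to spot which domain lags behind the others and should be prioritized in an improvement roadmap.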

SOC-CMM is quite useful for informal internal evaluations, as well as formal assessments performed by third parties, and it can also be helpful when it comes to defining the optimal “target” state of operations and developing corresponding improvement roadmaps for SOCs.

In addition to this, a 3-level certification scheme based on SOC-CMM was introduced in the final months of 2024, which enables organizations to have their Security Operations Centers officially certified by an accredited certification body. This may be of particular interest to those who feel the need to assure their client base (be it internal or external) of the quality of service provided by their SOC.

MITRE ATT&CK

Given its long history and wide-ranging use in the cyber security community, the MITRE ATT&CK framework itself requires no introduction. Nevertheless, its potential as a tool for measuring the effectiveness of Security Operations Centers deserves a short explanation, since no formal methodology exists for this use of ATT&CK.

In the SOC space, the ATT&CK framework is commonly used for specifying the scope of detection use cases and analytics. Having all relevant detection analytics that a SOC uses mapped to ATT&CK can be quite helpful, since it gives one the ability to effectively measure which (sub-)techniques the SOC is capable of detecting, and which ones it most likely can’t.

This can be considered the simplest way to use MITRE ATT&CK in the context of a SOC. However, ATT&CK can also be used to measure security monitoring capabilities and their scope in a more complex way.

While larger detection coverage (i.e., the range of malicious activities that a SOC is capable of detecting) is generally better, no SOC in the world can effectively cover all (sub-)techniques that are listed in ATT&CK. Therefore, what any SOC should try to implement first and foremost are detections for those malicious activities (i.e., ATT&CK (sub-)techniques) that are most important to its client base.

Therefore, if one first identifies these activities through an appropriate threat modeling approach, one can then quite easily compare the list of ATT&CK (sub-)techniques that the SOC needs to cover – based on the needs of its clients – with the list of (sub-)techniques that it is actually capable of detecting. If coverage of the identified threat model is not close to complete, then the SOC is obviously not delivering a detection service as effective as its client base truly needs.

Using ATT&CK as a basis for an assessment (internal one or one performed by a third party) of detection capabilities and their alignment with client requirements can therefore certainly be helpful. And although – as we have already mentioned – there currently isn’t any formal methodology for this, there is at least a freely available tool named MITRE ATT&CK Navigator, which enables us to easily document such an assessment.
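As a minimal sketch of what such a comparison might look like in practice – with illustrative (sub-)technique IDs standing in for a real threat model and real detection mappings – one could do something like this:

```python
# Sketch: comparing the ATT&CK (sub-)techniques required by a threat
# model with those a SOC's detection analytics actually cover. The
# technique IDs and the Navigator layer fields below are illustrative
# assumptions, not output of any formal methodology.
import json

# (sub-)techniques identified as relevant through threat modeling
required = {"T1059.001", "T1566.001", "T1021.001", "T1003.001", "T1486"}

# (sub-)techniques mapped to the SOC's existing detection analytics
detected = {"T1059.001", "T1566.001", "T1110.003"}

covered = required & detected
gaps = sorted(required - detected)
coverage = len(covered) / len(required)

print(f"Threat-model coverage: {coverage:.0%}")
print("Missing detections:", ", ".join(gaps))

# A minimal ATT&CK-Navigator-style layer documenting the result
# (green = covered, red = gap; colors are a convention, not a standard)
layer = {
    "name": "Detection coverage vs. threat model",
    "domain": "enterprise-attack",
    "techniques": [
        {"techniqueID": t, "color": "#8ec843" if t in detected else "#ff6666"}
        for t in sorted(required)
    ],
}
print(json.dumps(layer, indent=2))
```

The resulting layer file can then be loaded into the Navigator to visualize the coverage gap on the familiar ATT&CK matrix.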

For completeness’s sake, it should be mentioned that alignment of detection capabilities with client needs based on their respective ATT&CK mappings is something that is – to a certain degree – also covered by SOC-CMM.

Information Security Incident Management

Security incident management is commonly considered to be the domain of CERTs and CSIRTs. And although one maturity model reigns supreme in this area, we will mention an additional one, since it brings a somewhat different – yet relevant – viewpoint to the table…

Security Incident Management Maturity Model (SIM3)

Globally, the most well-known methodology for evaluating CSIRTs and CERTs is undoubtedly the Security Incident Management Maturity Model, or SIM3, which is currently used by FIRST, TF-CSIRT and ENISA, just to name a few.

In its current version (SIM3 v2 interim), it consists of 45 “maturity parameters” split into 4 categories (Organization, Human, Tools and Processes) that cover most high-level aspects of security incident management. Evaluation of each parameter is performed using a 5-level maturity scale.

Probably the easiest way to use the model is with the help of the freely available online SIM3 self-assessment tool.

SIM3 Model

In practice, SIM3 is commonly used both for informal self-assessments and for formal audits that evaluate whether the maturity levels achieved by a specific team reach or exceed some predetermined “baseline” (e.g., the Trusted Introducer certification process is based on such a formal audit). Overall, the model is quite easy to use, and a quick, informal evaluation of a CSIRT with its help can be done in a few hours (formal assessments, of course, take significantly longer).
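A baseline comparison of this kind is trivially automatable. In the following sketch, both the parameter identifiers and the baseline levels are invented placeholders for illustration – the real baseline used in certifications such as Trusted Introducer’s is defined in the corresponding certification documentation.

```python
# Sketch: checking SIM3 self-assessment results against a predefined
# "baseline". Parameter IDs and required levels are invented
# placeholders, NOT the actual Trusted Introducer requirements.
baseline = {"O-1": 3, "H-2": 2, "T-1": 2, "P-5": 3}  # required minimum levels
assessed = {"O-1": 4, "H-2": 1, "T-1": 2, "P-5": 2}  # self-assessed levels

# collect every parameter whose assessed level falls below the baseline
shortfalls = {
    param: (assessed[param], required)
    for param, required in baseline.items()
    if assessed[param] < required
}

for param, (actual, required) in sorted(shortfalls.items()):
    print(f"{param}: assessed at level {actual}, baseline requires {required}")

print("Baseline met" if not shortfalls
      else f"{len(shortfalls)} parameter(s) below baseline")
```

The same pattern scales straightforwardly to all 45 parameters of the model.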

Although in its current form SIM3 is primarily designed for assessing CSIRTs, it is sometimes also used for the evaluation of Security Operations Centers and other types of security teams. And while, at the moment, it may not always be easy to map some aspects of SOC operations to the model, the situation is expected to change in the near future: the Open CSIRT Foundation is currently developing modifications of SIM3 (so-called “profiles”) intended for SOCs as well as PSIRTs and ISACs, which should significantly simplify application of the model (not just) within the SOC space.

CREST Cyber Security Incident Response Maturity Assessment

The Cyber Security Incident Response Maturity Assessment (or CSIR Maturity Assessment), which was developed by CREST, is another model/methodology useful for assessing incident response teams. However, unlike SIM3, SOC-CMM and most similar models, which evaluate maturity in various general areas that are important for effective SOC or CSIRT work (e.g., personnel situation, overall existence of processes, etc.), the CSIR Maturity Assessment evaluates the maturity of organizational capabilities at various stages of the incident response lifecycle.

CREST Cyber Security Incident Response Maturity Assessment
Source: CREST

Two Excel-based maturity assessment tools, both of which use a 5-level maturity scale, are available for practical application of the methodology. One of them is intended for quick, high-level evaluations and allows users to set a single maturity level for each of the 15 steps of the incident response lifecycle shown above. The second tool is much more detailed and (similarly to the SOC-CMM Excel files) includes multiple questions for each evaluated area.

For most organizations outside of the United Kingdom, this maturity model will probably be most interesting “only” as a mechanism for informal self-assessments. Nevertheless, it can certainly serve as a useful tool. This holds true even for those who already use SIM3 to assess their CSIRTs, since the CSIR Maturity Assessment is – in its more detailed version – much more granular than the aforementioned maturity model, and can therefore provide a more in-depth view into some areas.

Vulnerability management

While vulnerability management is sometimes the domain of specialized vulnerability management teams, in other cases the corresponding duties fall to a SOC, a CSIRT or a general IT operations department. In any case, evaluating how effectively vulnerability management is performed in the context of an organization can certainly be beneficial.

To this end, we will mention one maturity model that deals with this area and is probably the most interesting one in this space (that is, if one doesn’t want to go into the specifics of bug bounty programs and vulnerability report handling, since specialized maturity models exist for these areas as well).

SANS Vulnerability Management Maturity Model (VMMM)

The VMMM started its life as “only” a maturity model for vulnerability management programs, without any explicit methodology for its application. Nevertheless, a few years after its publication, one of its authors released an accompanying Self-Assessment Tool (VMMM-SAT) that can guide one in its practical use.

Overall, the model consists of 5 phases of the vulnerability management lifecycle (Prepare, Identify, Analyze, Communicate and Treat), which are split into 12 areas that can be measured using a 5-level CMMI-based scale.

SANS Vulnerability Management Maturity Model (VMMM)

The accompanying Self-Assessment Tool is available as an Excel document that enables one to evaluate the 12 areas of the model with the help of approximately 140 yes/no questions.
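As a simplified illustration of how yes/no answers can translate into a maturity level, consider the following sketch. The “a level counts only if all of its questions and all lower-level questions are answered yes” rule and the per-level question counts are assumptions made purely for this example – the actual scoring logic lives in the official Excel tool.

```python
# Simplified sketch of deriving a maturity level for one area from
# yes/no answers. The per-level question counts and the "all lower
# levels must pass" rule are illustrative assumptions; the real
# VMMM-SAT scoring is implemented in the official Excel tool.

def maturity_level(answers: dict[int, list[bool]]) -> int:
    """Return the highest level whose questions (and those of all
    lower levels) are all answered 'yes'."""
    achieved = 0
    for level in sorted(answers):
        if all(answers[level]):
            achieved = level
        else:
            break  # a failed level caps everything above it
    return achieved

# yes/no answers for one hypothetical area, grouped by maturity level
area_answers = {
    1: [True, True],
    2: [True, True, True],
    3: [True, False],  # one "no" blocks level 3 and above
    4: [False],
    5: [False],
}

print("Achieved maturity level:", maturity_level(area_answers))
```

The appeal of the yes/no format is exactly this kind of unambiguous, mechanical scoring, which leaves little room for “optimistic” self-grading.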

As the name of the tool suggests, it – and VMMM itself – is primarily intended/useful for self-assessments, though it can also be helpful in the development of improvement roadmaps for vulnerability management programs.

Situational Awareness

In terms of Security Operations, the topic of situational awareness can be said to be heavily intertwined with Cyber Threat Intelligence (CTI). Given this fact, there are two main maturity models/methodologies that lend themselves to use within this space.

CREST Cyber Threat Intelligence Maturity Assessment Tools

Cyber Threat Intelligence Maturity Assessment Tools are a set of 3 Excel documents that all implement the same methodology for assessing maturity of CTI programs.

As you may see in the following picture, the methodology is built around the assessment of four “stages” of an overall “CTI process” (Governance, Program Planning & Requirements, Threat Intelligence Operation and Functional Management), which are further split into 18 “steps”. Each of these areas is evaluated using a 5-level maturity scale.

CREST Cyber Threat Intelligence Maturity Assessment
Source: CREST

The reason why CREST published 3 tools for use with the same methodology is that each of the Excel files implements the methodology at a different level of detail. While the “Summary Level” tool only requires answering two or three questions per “step”, the “Intermediate Level” tool might require five or six, and the “Detailed Level” tool can go well over twenty questions in some cases. One can therefore always choose the right tool based on the required level of detail and the available time.

The assessment tools may be quite useful for performing self-assessments; however, they (especially the two more detailed ones) may also be interesting for conducting third-party assessments.

Cyber Threat Intelligence Capability Maturity Model (CTI-CMM)

The CTI-CMM is a relatively recent maturity model that is heavily influenced by the C2M2 framework and stresses the need for alignment of a CTI program with stakeholder/client needs.
In its current version (1.1), it is organized into 11 domains that are evaluated using a 4-level maturity scale.

For practical application of the model, a “Beta” assessment tool is currently available in the form of an Excel document that enables one to evaluate a CTI program by specifying the current maturity level in a total of 230 measured areas.

The model may be useful for performing self-assessment of a CTI team (or SOC/CSIRT that delivers CTI-related services) or for development of an improvement roadmap for a CTI program.

Knowledge Transfer

Although this area is part of the FIRST Services Framework, it is commonly the domain of security awareness and education specialists and exercise developers, rather than SOCs or CSIRTs. As such, it is somewhat outside of the scope we usually wish to evaluate when it comes to security operations efficiency or maturity. Nevertheless, should you require some basic model or methodology to assess how a certain organization/team is performing in at least some parts of this service area, the Security Culture Maturity Model or the SANS Security Awareness Maturity Model may be of use to you.

Where should we start?

With the number of methodologies shown above, one can almost feel spoiled for choice, and it can be quite difficult to identify the optimal one to start with.

The “right” methodology will – of course – depend on the specific service areas one wishes to assess. Nevertheless, an approach that has worked quite well for me in the past, whenever I needed to “somehow” assess a SOC or a CSIRT (or another team that performs at least some level of security monitoring and incident response), was to do a quick assessment using SIM3, followed by a more in-depth analysis with the help of SOC-CMM.

Therefore, if you don’t know where to start, feel free to use this approach. Though, as you can clearly see, it is far from the only one available to you…

And should you need any help with assessing your SOC or CSIRT, don’t hesitate to reach out – it is something we do for our clients regularly as part of our services at Nettles Consulting.
