What’s Better – Complete Coverage With Multiple Tools Or Partial Coverage With One Tool?

The debate between complete coverage with multiple tools and partial coverage with a single tool regularly pops up in discussions between security professionals. What we are really talking about is choosing between maximum functionality and simplicity. Having pursued both extremes over the course of my security career, I offer this post to share my perspective on how CISOs can navigate this classic tradeoff.

In Support Of All The Things

Let’s start with why you may want to pursue complete coverage by using multiple technologies and tools.

Heavily Regulated And High Risk Industries

First, heavily regulated and high-risk businesses may be required to demonstrate complete coverage of security requirements. These are industries like the financial sector or government and defense. (I would normally say healthcare here, but despite regulations like HIPAA the entire industry has lobbied against stronger security regulations, and this has proven disastrous via major incidents like the Change Healthcare ransomware attack.) The intent behind any regulation is to establish a minimum set of required security controls businesses need to meet in order to operate in that sector. It may not be possible to meet all of these regulatory requirements with a single technology, and therefore CISOs may need to evaluate and select multiple technologies to meet the requirements.

Defense In Depth

Another reason for selecting multiple tools is to provide defense in depth. The thought process is: multiple tools will provide overlap and small variances in how they meet various security controls. These minor differences can offer defenders an advantage because if one piece of technology is vulnerable to an exploit, another piece of technology may not be vulnerable. By layering these technologies throughout your organization you reduce the chances an attacker will be successful.

An example: your business is protected from the internet by a firewall made by Palo Alto Networks. Behind this firewall is a DMZ, and the DMZ is separated from your internal network by a firewall from Cisco. This layered defense makes it more difficult for attackers to get through the external firewall, the DMZ and the internal firewall into the LAN.
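
To make the layering concrete, here is a toy Python sketch. The rule logic, zone names and ports are illustrative assumptions, not real vendor configurations:

    # Toy model of vendor-diverse layered firewalls: every independently
    # implemented layer must permit its hop, so a bypass bug in one vendor's
    # rule engine alone does not open a path into the LAN.
    def external_fw(conn):
        # e.g. the Palo Alto Networks firewall at the internet edge
        return conn["dst"] == "dmz" and conn["port"] in {80, 443}

    def internal_fw(conn):
        # e.g. the Cisco firewall between the DMZ and the internal network
        return conn["src"] == "dmz-app" and conn["dst"] == "lan-db" and conn["port"] == 5432

    def path_allowed(hops):
        # defense in depth: all independent implementations must agree
        return all(layer(conn) for layer, conn in hops)

    path = [
        (external_fw, {"dst": "dmz", "port": 443}),
        (internal_fw, {"src": "dmz-app", "dst": "lan-db", "port": 5432}),
    ]
    print(path_allowed(path))  # True only if both vendors' rule sets permit the flow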

Downside Of All The Things

All the things may sound great, but unless you are required to meet that level of security there can be a lot of downsides.

First, multiple technologies introduce complexity into an environment. This can make it more difficult to troubleshoot or detect issues (including security events). It can also make it more difficult to operationally support these technologies because they may have different interfaces, APIs, protocols, configurations, etc. It may not be possible to centrally manage these technologies, or it may require the introduction of an additional technology to manage everything.

Second, all of these technologies can increase the number of people required to support them. People time is a hidden cost that can really add up and shouldn’t be dismissed lightly. It starts the second you begin discussing the requirements for a new technology and can include the following:

  • Proof of Concepts (PoCs)
  • Tradeoff & Gap Analysis
  • Requests for Information (RFI)
  • Requests for Proposal (RFP)
  • Requests for Quotes (RFQ)
  • Contract Negotiation
  • Installation
  • Integration
  • Operation & Support

Finally, multiple technologies can cause performance impacts, increased costs and waste. Performance impacts can happen due to differences in technologies, complexity, configuration errors or over-consumption of resources (such as agent sprawl). Waste can happen due to overlap and duplicated functionality, because not all of the functionality may get used despite the fact you are paying for it.

Advantages and Disadvantages Of A Single Tool

A single tool that covers the majority, but not all, of your requirements offers one advantage – simplicity. This may not sound like much, but after years of chasing perfection, technology simplicity can have benefits that may not be immediately obvious.

First, seeking out a single tool that meets the majority of requirements will force your security team to optimize their approach and select the tool that best manages risk while supporting the objectives of the business. Second, a single tool is easier to install, integrate, operate and support. There is also less demand on the rest of the business in terms of procurement, contract negotiation and vendor management. Lastly, a single tool requires fewer people to manage it, so you can run a smaller and more efficient organization.

The biggest disadvantage of a single tool is that it doesn’t provide defense in depth. Another disadvantage is that it won’t meet all of your security requirements, so the requirements that aren’t met should either fall within the risk tolerance of the business or be satisfied with other compensating controls.

Wrapping Up

There are a lot of advantages to meeting all of your requirements with multiple tools, but these advantages come with tradeoffs in terms of complexity, operational overhead, duplicated functionality and increased personnel requirements. If you operate a security program in a highly regulated or highly secure environment you may not have a choice, so it is important to be aware of these hidden costs. A single tool reduces complexity, operational overhead and personnel demands, but can leave additional risk unaddressed and fails to provide defense in depth. Generally, I favor simplicity where possible, but you should always balance the security controls against the risk tolerance and needs of the business.

If Data Is Our Most Valuable Asset, Why Aren’t We Treating It That Way?

There have been several high profile data breaches and ransomware attacks in the news lately, and the common theme between all of them has been the disclosure (or threat of disclosure) of customer data. The aftereffects of a data breach or ransomware attack are far reaching and typically include loss of customer trust, refunds or credits to customer accounts, class action lawsuits, increased cyber insurance premiums, loss of cyber insurance coverage, increased regulatory oversight and fines. The total cost of these aftereffects far outweighs the cost of implementing proactive security controls like proper business continuity planning and disaster recovery (BCP/DR) and data governance, which raises the question – if data is our most valuable asset, why aren’t we treating it that way?

The Landscape Has Shifted

Over two decades ago, the rise of free consumer cloud services, like the ones provided by Google and Microsoft, ushered in the era of mass data collection in exchange for free services. Fast forward to today: the volume of data and the value of that data have skyrocketed as companies have shifted to become digital-first or mine that data for advertising purposes and other business insights. The proliferation of AI has also ushered in a new data gold rush as companies strive to train their LLMs on bigger and bigger data sets. While the value of data has increased for companies, it has also become a lucrative target for threat actors in the form of data breaches or ransomware attacks.

The biggest problem with business models that monetize data is that security controls and data governance haven’t kept pace with the value of the data. If your company has been around for more than a few years, chances are you have a lot of data, but data governance and data security have been an afterthought. The biggest problem with bolting on security controls and data governance after the fact is that it is hard to close Pandora’s box once it is open. This is compounded by the fact that it is hard to put a quantitative value on data, and re-architecting data flows is seen as a cost with no tangible return. The rest of the business may find it difficult to understand the need to re-architect their entire IT operations when there isn’t an immediate and tangible business benefit.

Finally, increased global regulation is changing how data can be collected and governed. Data collection is shifting from requiring consumers to opt out to requiring them to explicitly opt in. This means consumers and users (and their associated data) will no longer be the presumptive product of these free services without their explicit consent. Typically, increased regulation also comes with specific requirements for data security, data governance and even data sovereignty. Companies that don’t have robust data security and data governance are already behind the curve.

False Sense Of Security

In addition to increased regulation and a shifting business landscape, the technology for protecting data really hasn’t changed in the past three decades, and few companies implement effective security controls on their data (as we continue to see in data breach notifications and ransomware attacks). The most common protections are encryption at rest and encryption in transit (TLS), but these technologies are insufficient to protect data from anything except physical theft and network snooping (man-in-the-middle attacks). Both provide a false sense of security related to data protection.

Furthermore, common regulatory compliance audits don’t sufficiently specify protection of data throughout the data lifecycle beyond encryption at rest, encryption in transit and access controls. Passing these compliance audits can give a company a false sense of security that they are sufficiently protecting their data, when the opposite is true.

Embrace Best Practices

Businesses can get ahead of this problem and make data breaches and ransomware attacks non-events by implementing effective data security controls and data governance, including BCP/DR. Here are some of my recommendations for protecting your most valuable asset:

Stop Storing and Working On Plain Text Data

Sounds simple, but this will require significant changes to business processes and technology. The premise is that the second data hits your control it should be encrypted and never, ever, stored or processed in plain text again. This means data will be protected even if an attacker accesses the data store, but it also means the business will need to figure out how to modify its operations to work on encrypted data. Newer technologies such as homomorphic encryption have been introduced to solve these challenges, but even simpler approaches like tokenizing the data can be effective. Businesses can go one step further and create a unique cryptographic key for every customer. This allows for simpler data governance, such as deletion of data: destroy the key and the data is effectively gone.
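
As a rough illustration, here is a minimal Python sketch of per-customer keys using the cryptography package’s Fernet recipe. The KeyVault class is a hypothetical stand-in for a real HSM or cloud KMS, not a production design:

    # Sketch: per-customer encryption keys ("crypto-shredding").
    from cryptography.fernet import Fernet

    class KeyVault:
        """Hypothetical key store; in production, use an HSM or cloud KMS."""
        def __init__(self):
            self._keys = {}

        def key_for(self, customer_id):
            # one unique key per customer enables per-customer deletion
            return self._keys.setdefault(customer_id, Fernet.generate_key())

        def destroy(self, customer_id):
            # destroying the key renders all of that customer's ciphertext unreadable
            self._keys.pop(customer_id, None)

    vault = KeyVault()

    def store(customer_id, plaintext):
        # encrypt the second data hits your control; persist only ciphertext
        return Fernet(vault.key_for(customer_id)).encrypt(plaintext)

    record = store("cust-42", b"jane@example.com")
    vault.destroy("cust-42")  # "deletes" the data, wherever copies of the ciphertext live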

Be Ruthless With Data Governance

Storage is cheap and it is easy to collect data. As a result, companies are becoming digital data hoarders. However, to truly protect your business you need to ruthlessly govern your data. Data governance policies need to be established and technically implemented before any production data touches the business. These policies need to be reviewed regularly, and data should be purged the second it is no longer needed. A comprehensive data inventory should be a fundamental part of your security and privacy program so you know where the data is, who owns it and where it is in the data lifecycle.
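
A minimal sketch of what a technically implemented retention policy could look like, assuming a hypothetical inventory schema (wire the print statement to your store’s actual deletion API):

    # Sketch: a retention sweep driven by a data inventory.
    from datetime import datetime, timedelta, timezone

    INVENTORY = [
        {"name": "web_logs",     "owner": "platform", "store": "s3://logs", "retention_days": 30},
        {"name": "chat_history", "owner": "support",  "store": "db.chats",  "retention_days": 365},
    ]

    def purge_expired(inventory):
        now = datetime.now(timezone.utc)
        for item in inventory:
            cutoff = now - timedelta(days=item["retention_days"])
            # replace with a real deletion call against item["store"]
            print(f"purge {item['name']} ({item['owner']}) older than {cutoff:%Y-%m-%d}")

    purge_expired(INVENTORY)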

Ruthlessly governing data has a number of benefits for the business. First, it will help control data storage costs. Second, it will limit the impact of a data breach or ransomware attack to the explicit time period for which you have kept data. Lastly, it can protect the business from liability and lawsuits by demonstrating the data is properly protected, governed and/or deleted. (You can’t disclose what doesn’t exist.)

Implement An Effective BCP/DR and BIA Program

Conducting a proper Business Impact Analysis (BIA) of your data should be table stakes for every business. Your BIA should include what data you have, where it is and, most importantly, what would happen if this data weren’t available. Building on top of the BIA should be a comprehensive BCP/DR plan that appropriately tiers and backs up data to support your uptime objectives. However, it seems like companies are still relying on untested BCP/DR plans or, worse, relying solely on a single cloud region for data availability.

Every BCP/DR plan should include a write once, read many (WORM) backup of critical data that is encrypted at the object or data layer. Create WORM backups to support your RTO and RPO and manage the backups according to your data governance plan. A WORM backup prevents a ransomware attack from encrypting the data, and if the backup is breached it is meaningless to the attacker because the data is already encrypted. BCP/DR plans should be regularly tested (up to full business failover), and security teams need to be involved in their creation to make sure the data will have confidentiality, integrity and availability when needed.
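
For example, on AWS this can be approximated with S3 Object Lock via boto3. A rough sketch, assuming a bucket already created with Object Lock enabled; the bucket, key and retention period are illustrative:

    # Sketch: writing a WORM backup with S3 Object Lock.
    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")
    retain_until = datetime.now(timezone.utc) + timedelta(days=90)  # align with your governance plan

    with open("db-2024-01-01.dump.enc", "rb") as backup:  # back up ciphertext, not plain text
        s3.put_object(
            Bucket="backups-worm",
            Key="db/2024-01-01.dump.enc",
            Body=backup,
            ObjectLockMode="COMPLIANCE",           # cannot be shortened or deleted, even by an admin
            ObjectLockRetainUntilDate=retain_until,
        )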

Don’t Rely On Regulatory Compliance Activities As Your Sole Benchmark

My last recommendation for any business is this: just because you passed your compliance audit doesn’t mean you are good to go from a data security and governance perspective. Compliance audits exist as standards for specific industries to establish a minimum bar for security. Compliance standards can be watered down due to industry feedback, lobbying or legal challenges, and a well designed security program should be more comprehensive than any compliance audit. Furthermore, compliance audits are typically tailored to specific products and services, have specific scopes and cover limited time frames. If you design your security program to properly manage the risks to the business, including data security and data governance, you should have no issues passing a compliance audit that assesses these aspects.

Wrapping Up

Every business needs proper data security and data governance as part of a comprehensive security program. Data should never be stored in plain text, and it should be ruthlessly governed so it is deleted the second it is no longer needed. BCP/DR plans should be regularly tested to simulate data loss, ransomware attacks or other impacts to data and, while compliance audits are necessary, they should not be the sole benchmark for how you measure the effectiveness of your security program. Proper data protection and governance can make ransomware and data breaches a thing of the past, but this will only happen if businesses stop treating data as a commodity and start treating it as their most valuable asset.

Security Considerations For M&A and Divestitures

I’ve been speaking to security startups over the last few weeks, and some of the discussions made me think about the non-technical aspects of security that CISOs need to worry about: things like mergers, acquisitions and divestitures and the different risks you will run into when executing them. There are a number of security issues that can materialize when combining or separating businesses, and in this post I’ll share some of the things you need to think about from a security perspective that may not be obvious at first glance.

What’s Going On Here?

There are a number of reasons for mergers & acquisitions (M&A) or divestitures. For the past two decades, the tech industry has used M&A to acquire smaller startup companies as a way to collect intellectual property, acquire specific talent or gain a competitive advantage. Divestitures may be the result of changing business priorities, separating business functions for regulatory reasons, eliminating redundancies or a way to sell part of the business to cover costs. Mergers, acquisitions and divestitures are similar in that you will want to review the same things from a security perspective, but it is probably easiest to think of a divestiture as the reverse of an M&A: you are separating a business instead of combining businesses. Divestitures are definitely less common than M&A in the tech space, but they aren’t unheard of. There are also differences in the security risks you need to think about depending on whether you are acquiring a business or separating one. My best advice is to work with the legal and finance teams performing the due diligence and have a set process (that you have contributed to) so you don’t forget anything. With that, let’s dive into a few different areas.

Physical Security

Physical security is something you will need to think about for both M&A and divestitures. For M&A you will want to perform a physical security assessment on the facilities you are acquiring to make sure they meet or exceed your standards. Reviewing physical security controls like badging systems, fencing, bollards, cameras, fire suppression, emergency lighting, TEMPEST controls (if required), safes and door locks will help make sure your new facilities are up to standard. If you aren’t sure how to perform this, hire a company that specializes in physical security assessments or physical red teaming.

While physical security for M&A may seem straightforward, there are a few gotchas when performing divestitures. The biggest gotcha is understanding and reviewing the existing access of the people that are part of the divestiture, because you will now need to consider them outsiders. All of your standard off-boarding processes apply here, such as terminating access to make sure someone doesn’t retain access to a system they are no longer authorized to use (like HR, finance, etc.).

Things can get complicated if parts of the business are divesting, but not fully. An example is when the business divests a smaller part but allows it to co-locate in existing facilities. This can complicate physical security requirements such as how to access and schedule common areas and conference rooms, how to separate wifi and network access, etc. In this arrangement, the larger company may act like a service provider to the divested part of the business, but there still need to be effective security controls in place between the two parts.

Personnel Security

I touched on this a bit already, but personnel security is something to consider when performing M&A or divestitures. With M&A the biggest issue will be how to smash the two IAM and HR systems together without punching huge holes in your network. Typically the two parts operate separately for a while and then consolidate to a single system, and the employees of the acquired business get new accounts and access.

For divestitures, particularly if they don’t result in a clean split, you will need to focus heavily on access control and insider threats. Think about how you will separate access to things like source code, financial systems, HR systems, etc. If the smaller company has physical access to your space then you need to build in proper physical and logical controls to limit what each business can do, particularly for confidentiality and competitive reasons.

What’s an example of where this can go wrong? Let’s say company A is going to divest a small part of its business (company B). The complete divestiture is going to take a while to finalize, so company A agrees to allow company B to continue to access its existing office space, including conference rooms. However, the legal team didn’t realize the conference rooms are tied to company A’s SSO and calendaring system, so company B has no way to schedule the conference rooms without retaining access to company A’s IAM system, creating a major security risk. Whoops!

Contracts

Contracts may not seem like a typical security issue, but they should be part of your review, particularly when performing M&A. Why? You are acquiring a business that is worth something, and that business will have existing contracts with customers. The terms of those contracts may not match the terms of the acquiring company, which creates risk if the differences are significant. Smaller companies are more agile, but they also usually have less negotiating power than large companies and as a result are more likely to agree to non-standard contract terms. What are some terms you need to think about?

  • Vulnerability Remediation Times – How quickly did the new company promise to fix vulnerabilities for their customers?
  • Incident & Breach Disclosure Time Frames – How quickly did the new company promise to notify customers of a breach or incident? I have seen very small time frames suggested in contracts, which are impossible to meet, so I definitely recommend reviewing these.
  • Disclosure of Security Postures – Does the new company have contractual terms promising to provide SBOMs or other security posture assessments to their customers on a regular basis?
  • Compliance Requirements – Has the new company agreed to be contractually obligated to maintain compliance certifications such as PCI-DSS, SOC 2, ISO 27001, etc.?
  • Penetration Testing & Audits – Has the new company contractually agreed to have their products or services penetration tested or have their security program audited? Have they agreed to provide these reports to their customers on a regular basis?
  • Privacy & Data Governance Terms – Is the new company required to comply with privacy regulations, such as allowing customers to have their data deleted, or mandating certain data governance requirements like DLP, encryption, data deletion, etc.?
  • BCP/DR and SLAs – Are there contractual uptime SLAs or response times and does the existing BCP/DR plan support these SLAs?

My advice is to set a timeline post-acquisition to review and standardize all of your contracts to a single set of standard clauses covering the above topics. This is usually done in a security addendum that the legal team can help you create. The biggest challenge with contracts will be to “re-paper” all of your customers to hopefully get them on the same standardized terms so your security program doesn’t have a bunch of different requirements it has to try to meet.

Accuracy Of M&As

One of the biggest risks of performing M&As is getting an accurate picture of the existing security posture of the company being acquired. Why is this so difficult? The company being acquired is trying to look as good as possible so they get top dollar. They can’t hide things, but they aren’t going to tell you where all the skeletons are buried either. The acquiring company usually doesn’t get a full picture of the existing security posture until after the deal is done and you start trying to integrate the two parts of the business. If you have a chance to interview the existing security team before the M&A closes, definitely ask to see their latest audit reports, compliance certifications, penetration testing reports, etc. Consider working with legal to set conditions for how old these reports can be (e.g. no older than 6 months) to get a more accurate picture, or require the acquired company to update them before the deal closes. Interview key members of the staff to ask how processes work, what their biggest pain points are, etc. Consider hiring an outside company to perform an assessment, or even talk to one of their largest customers to get an external viewpoint (if possible).

Wrapping Up

M&A and divestitures can be exciting and stressful at the same time. It is important for the security team to be integrated into both processes and to have documented steps to make sure risks are being assessed and addressed. I’ve listed a few key focus areas above, but most importantly, standardizing your M&A security review can help avoid “buyer’s remorse” or creating unnecessary risk for the acquiring business. Finally, having a documented divestiture process and reviewing the divestiture with legal can help avoid security risks after the fact.

Security Theater Is The Worst

We have all been there…we’ve had moments where we have had to “comply” or “just do it” to meet a security requirement that doesn’t make sense. We see this throughout our lives when we travel, in our communities and in our everyday jobs. While some people may think security theater has merit because it “checks a box” or provides a deterrent, in my opinion security theater does more harm than good and should be eradicated from security programs.

What Is Security Theater?

The term security theater was coined by Bruce Schneier and refers to the practice of implementing security measures, in the form of people, processes or technologies, that give the illusion of improved security. In practical terms, this means there is something happening, but what that something is and how it actually provides any protection is questionable at best.

Examples Of Security Theater

Real life examples of security theater can be seen all over the place, particularly when we travel. The biggest travel security theater is related to liquids. TSA has a requirement that you can’t bring liquids through security unless they are 3.4 ounces (100 ml) or smaller. However, you can bring a bottle of water through if it is fully frozen…what? Why does being frozen matter? And what happens if I bring one hundred 3.4-ounce shampoo bottles through security? I still end up with the same volume of liquid, and security has done nothing to prevent me from bringing it through. As for water, the only explanation that makes sense for why they haven’t relaxed this requirement is to prop up the businesses in the terminal that want to sell overpriced bottles of water to passengers. Complete theater.

Corporate security programs also have examples of security theater. This can come up when an auditor evaluating your security program against an audit requirement doesn’t understand the purpose of the requirement. For example, an auditor may insist you install antivirus on your systems to prevent viruses and malware when your business model is to provide Software as a Service (SaaS). With SaaS, your users consume the software without installing anything on their end user workstations, so there is little to no risk of malware spreading from your SaaS product to their workstations. Complete theater.

Another example of security theater is asking for an attestation that a team is meeting a security requirement instead of designing a process or security control that actually achieves the desired outcome. In this case, the attestation is nothing more than a facade designed to pass accountability from the security team, which should be designing and implementing effective controls, to the business team. It is masking ineffective processes and technologies. Complete theater.

Lastly, a classic example of security theater is security by obscurity. Otherwise known as hiding in plain sight. If your security program is relying on the hope that attackers won’t find something in your environment then prepare to be disappointed. Reconnaissance tools are highly effective and with enough time threat actors will find anything you are trying to hide. Hope is not a strategy. Complete theater.

What Is The Impact Of Security Theater?

Tangible And Intangible Costs

Everything we do in life has a cost, and this is certainly true with security theater. In the examples above there is a real cost in terms of time and money. People who travel are advised to get to the airport at least two hours early, and that time results in lost productivity, lost time with family and decreased self care.

In addition to tangible costs like those above, there are also intangible costs. If people don’t understand the “why” for your security control, they won’t be philosophically aligned to support it. The end result is security theater will erode confidence and trust in your organization, which will undermine your authority. This is never a place you want to be as a CISO.

Some people may argue that security theater is a deterrent because the show of doing “security things” will deter bad people from doing bad things. This sounds more like a hope than reality. People are smart. They understand when things make sense and if you are implementing controls that don’t make sense they will find ways around them or worse, ignore you when something important comes up.

With any effective security program the cost of a security control should never outweigh the cost of the risk, but security theater does exactly that.

Real Risks

The biggest problem with security theater is it can give a false sense of security to the organization that implements it. The mere act of doing “all the things” can make the security team think they are mitigating a risk when in reality they are creating the perfect scenario for a false negative.

How To Avoid Security Theater?

The easiest way to avoid security theater is to ground security controls in sound requirements and establish metrics to evaluate their effectiveness. Part of your evaluation should compare the cost of the control against the cost of the risk. If your control costs more than the risk, it doesn’t make sense and you shouldn’t do it.
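
One simple way to put numbers on that comparison is annualized loss expectancy (ALE). The figures below are illustrative assumptions, not benchmarks:

    # Sketch: annualized loss expectancy (ALE) versus annual control cost.
    single_loss_expectancy = 200_000   # estimated cost of one incident the control prevents
    annual_rate_of_occurrence = 0.1    # expected incidents per year
    ale = single_loss_expectancy * annual_rate_of_occurrence  # 20,000 per year

    control_cost_per_year = 50_000
    if control_cost_per_year > ale:
        print("control costs more than the risk it mitigates: candidate for theater")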

The other way to avoid security theater is to exercise integrity. Don’t just “check the box” and don’t ask the business you support to check the box either. Take the time to understand requirements from laws, regulations and auditors to determine what the real risk is. Figure out what an effective control will be to manage that risk and document your reasoning and decision.

The biggest way to avoid security theater is to explain the “why” behind a particular security control. If you can’t link it back to a risk or business objective and explain it in a way people will understand then it is security theater.

Can we stop with all the theater?

We Are Drowning In Patches (and what to do about it)

Last week I had an interesting discussion with some friends about how to prioritize patches using criticality and a risk-based approach. After the discussion I started thinking about how nice it would be if we could all just automatically patch everything and not have to worry about prioritization and the never ending backlog of patches, but unfortunately this isn’t a reality for the majority of organizations.

What’s the problem?

There are several issues that create a huge backlog of patches for organizations.

First, let’s talk about the patching landscape organizations need to deal with. This is largely split into two different areas. The first is operating system (OS) and service patches. These are patches released periodically for the operating systems used by the business to run applications or products. Common operating systems for production workloads are Windows and Linux, and both receive stability, security and new feature patches periodically.

Second, there are patches for software libraries that are included in the software and applications developed by your business. Typically these are lumped into the category of 3rd party libraries, which means your organization didn’t write them, but they are included in your software. 3rd party library vulnerabilities have become a huge issue over the last decade (but that’s a blog post for another day).

These two patch types, OS and 3rd party library patches, require different approaches to discover, manage and remediate, which is the first challenge for auto patching. When combined with the volume of new vulnerabilities being discovered, large heterogeneous environments and the need to keep business critical applications available, keeping your assets patched and up to date becomes a real challenge.

Why isn’t auto patching a thing?

Well it is, but…

There are a few challenges to overcome before you can auto-patch.

Stability and Functionality

First, both operating system and 3rd party library patches need to be tested for stability and functionality. Usually, patches fix some sort of issue or introduce new features, but this can cause issues in other areas such as stability or functionality. It can be a complex process to roll back patches and restore business critical applications to a stable version, which is why most businesses test their patches in a staging environment before rolling them out to production. Cash is king and businesses want to minimize any disruption to cash flow.

Investment and Maturity

It is possible to automate testing for stability and functionality, but this requires a level of maturity and investment that most organizations haven’t achieved. For example, assuming your staging environment is a mirror image of your production environment (it is, right?), you could auto-apply patches in staging, automatically check for stability and functionality over a set period of time and then roll those updates to production with minimal interaction. However, if your environment requires reboots or you have limited resources, patching may require downtime, which could impact making money.
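
A rough sketch of that staging-then-production flow; every helper here is a hypothetical stand-in for your own patching and monitoring tooling:

    # Sketch: auto-patch staging, soak, then promote to production.
    import time

    def apply_patches(env, patches): print(f"applying {patches} to {env}")
    def rollback(env):               print(f"rolling {env} back to last stable version")
    def promote(patches):            print(f"promoting {patches} to production")
    def healthy(env):                return True  # stability and functionality checks go here

    def auto_patch(patches, soak_seconds=86_400, check_every=300):
        apply_patches("staging", patches)
        deadline = time.time() + soak_seconds   # e.g. soak for 24 hours in staging
        while time.time() < deadline:
            if not healthy("staging"):
                rollback("staging")
                return False
            time.sleep(check_every)
        promote(patches)                        # minimal human interaction
        return True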

Building an environment that supports multiple versions, seamless cutover, proper load balancing, caching, etc. requires significant investment. Typically this investment is about keeping your products functioning and making money even if something goes wrong, but it can also be used to buffer maintenance activities such as patching without disruption.

Software Development Lifecycle

The last section assumes a level of software development maturity such as adoption of Agile development processes and CI/CD (continuous integration / continuous delivery). However, if your engineering group uses a different development process such as Incremental or Waterfall, then patching may become even more difficult because you are now competing with additional constraints and priorities.

What are some strategies to prioritize patching and reduce volume?

If your business runs products that aren’t mission critical, or you simply can’t justify the investment to operate an environment without down time, then auto patching probably isn’t a reality for you unless you are Austin Powers and like to live dangerously. For most organizations, you will need to come up with a strategy to prioritize patching and reduce the volume down to a manageable level.

Interestingly, this problem space has had a bunch of brain power dedicated to it over the years because it resembles a knapsack problem, a classic problem in mathematics, computer science and economics. Knapsack problems are problems where you have a finite amount of a resource (space, time, etc.) and you want to optimize the use of that resource to maximize some objective (like value). In the case of patching, this means applying the largest volume of the highest severity patches in a fixed time period to realize the maximum risk reduction possible.

Critical Assets First

Staying in the knapsack problem space, one strategy is to start with your most critical assets and apply the highest severity patches until you reach your threshold for risk tolerance. This requires your organization to have an up-to-date asset inventory and to have categorized your assets based on business criticality and risk. For example, let’s say you have two applications at your business. One is a mission critical application for customers and generates 80% of your annual revenue. The other provides non-mission critical functionality and accounts for the other 20% of revenue. Your risk tolerance, based on your company policies, is to apply all critical and high patches within 72 hours of release. In this example you would apply all critical and high patches to the mission critical application as quickly as possible (assuming other requirements are met, like availability).
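
Here is a minimal greedy sketch of that idea, ranking patches by risk reduction per hour of effort within a fixed maintenance window. The data and weighting are illustrative assumptions:

    # Sketch: greedy, knapsack-style patch prioritization.
    patches = [
        # (patch id, severity 0-10, asset criticality 0-1, hours to apply)
        ("CVE-A", 9.8, 1.0, 4),   # mission critical app (80% of revenue)
        ("CVE-B", 9.8, 0.2, 4),   # non-critical app (20% of revenue)
        ("CVE-C", 7.5, 1.0, 2),
        ("CVE-D", 5.3, 0.2, 1),
    ]
    window_hours = 6  # the finite "knapsack" resource

    # rank by risk reduction per hour of effort
    ranked = sorted(patches, key=lambda p: (p[1] * p[2]) / p[3], reverse=True)

    plan, used = [], 0
    for pid, severity, criticality, hours in ranked:
        if used + hours <= window_hours:
            plan.append(pid)
            used += hours

    print(plan)  # ['CVE-C', 'CVE-A'] – a greedy (not provably optimal) plan for the window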

Guard Rails and Gates

Another strategy for reducing volume is to have guard rails or gates as part of your software development lifecycle. This means your engineering teams are required to pass through these gates at different stages before being allowed to go to production. For example, your organization may have a policy that no critical vulnerabilities are allowed in production applications. The security organization creates a gate that scans for OS and 3rd party library vulnerabilities whenever an engineering team attempts to make changes to the production environment (like pushing new features). This way the engineering team needs to resolve any vulnerability findings and apply patches at regular intervals coinciding with changes to production.
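
A minimal sketch of such a gate as a pipeline step; the scan report format is an assumed example, not a specific scanner’s output:

    # Sketch: a CI gate that fails the pipeline when critical findings exist.
    import json, sys

    def load_findings(path):
        with open(path) as f:
            return json.load(f)  # e.g. [{"id": "CVE-...", "severity": "critical"}, ...]

    def gate(findings, blocked=("critical",)):
        violations = [f for f in findings if f["severity"].lower() in blocked]
        for v in violations:
            print(f"BLOCKED: {v['id']} ({v['severity']})")
        return 1 if violations else 0  # non-zero exit fails the pipeline stage

    if __name__ == "__main__":
        sys.exit(gate(load_findings(sys.argv[1])))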

Wrapping Up

With the proliferation of open source software, the speed of development and the growing ability of researchers and attackers to find new vulnerabilities, patching has become an overwhelming problem for a lot of organizations. In fact, it is such a big problem that CISA and the Executive Order on Improving the Nation’s Cybersecurity list software patches and vulnerabilities as a key national security issue. I’ve outlined a few strategies to prioritize and reduce the volume of patches if your organization can’t afford the investment required to patch without downtime or disruption. No matter which strategy you choose, all of them require strong fundamentals in asset inventory, asset categorization and defined risk tolerance. While these investments may seem tedious at first, the more disciplined you are about enforcing the security fundamentals (and engineering maturity), the less you will drown in patches and the closer your organization will come to the reality of auto-patching.

Using Exceptions As A Discovery Tool

Security exceptions should be used sparingly and reserved for truly exceptional circumstances, granted only after the business accepts a risk. In mature security programs the exceptions process is well defined and has clear criteria for what will and will not qualify. In mature programs exceptions are the exception, not the norm. However, in newer security programs exceptions can be a useful tool that provides discovery as well as risk acceptance.

Maturing A Security Program

One of the first things a new CISO needs to do is understand the business and how it functions. As part of this process the CISO will need to take an inventory of the current state of things so he or she can begin to form a strategy for how to best manage risk. As a new CISO, your security program may not have well defined security policies and standards. As you begin to define your program and roll out these policies, the exception process can be a valuable tool that gives the perception of choice while allowing the security team to uncover areas of the business that need security improvement. Over time, as the business and security program mature, the CISO can gradually deny requests to renew or extend these exceptions.

Rolling Out A New Security Process

Another area where an exceptions process is useful is the rollout of a new security process. For example, say you are rolling out a process that requires teams to perform SAST and DAST scanning of their code and fix vulnerabilities before going into production. Allowing security exceptions during the initial rollout gives teams more time to adapt their development processes to incorporate the new security process. Allowing exceptions can foster goodwill with the development team and give the security function visibility into the behavior and culture of the rest of the business. It also gives the security function and the development team an opportunity to collaborate, with the ultimate goal of removing the exceptions and following the process to reduce risk to the business.

Tackling Security Tech Debt or Shadow IT

A common maturity evolution for companies is the elimination of shadow IT. The security function can assist by creating an exception process and allowing an amnesty period where the business is allowed to continue to operate their shadow IT as long as it is declared. In reality you are giving the business the perception that they will be granted an exception when they are really giving the security function visibility into things it wouldn’t otherwise know about. This can be a useful tool to discover and eliminate policy exceptions as long as it is used sparingly and with good intent (not punitively).

Documentation Is Key

No matter how you choose to use exceptions within your security program there are a few best practices to follow.

  1. Exceptions should be truly exceptional. If you do grant one for discovery purposes make sure there is a plan to close the exception. Exceptions shouldn’t be the rule and they shouldn’t be expected. Sometimes the rest of the business just needs someone to tell them no.
  2. Time box the exception. Don’t just grant an exception without some sort of end date. The business needs to know an exception is temporary, and there should be a well defined plan to make improvements and close the exception. The security team should grant a reasonable amount of time to execute that plan, but it shouldn’t be a never ending story. (A minimal record format is sketched after this list.)
  3. Review often. Part of your security program should be reviewing open exceptions: which ones are ending, whether there are patterns of similar exceptions and whether certain teams request a high volume of exceptions. Reviewing exceptions gives you insight into how well security processes and controls are working. It also gives you insight into which parts of the business need help.
  4. Require the business owner to sign off. The reality of a well run security program is that the business ultimately owns the decision of whether to accept a risk. The CISO makes a recommendation, but they don’t own the business systems or processes. As a result, the security exception process should require the business owner to sign off on any exception. This ensures there is documentation that they were made aware of the risk, but it can also act as a visibility tool for the business owner into their own teams. I’ve often found a business leader is not always aware of what their teams are doing at the tactical level, and the exceptions process gives them the opportunity to check their team and correct behavior before it gets to the CISO.
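
To make the practices above concrete, here is a minimal sketch of what an exception record could capture; all field names are illustrative:

    # Sketch: a time-boxed, owner-signed exception record.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class SecurityException:
        risk: str               # which requirement is not met, and why
        remediation_plan: str   # the agreed plan to close the exception
        business_owner: str     # who signed off on accepting the risk
        granted: date
        expires: date           # always time boxed, never open ended

        def is_expired(self, today: Optional[date] = None) -> bool:
            return (today or date.today()) > self.expires

    exc = SecurityException(
        risk="Legacy app cannot enforce MFA",
        remediation_plan="Migrate to SSO by Q3",
        business_owner="VP Engineering",
        granted=date(2024, 1, 15),
        expires=date(2024, 7, 15),
    )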

Wrapping Up

The exception process can be a valuable tool for discovering hidden risk throughout the business. By offering an amnesty period and giving the perception of flexibility, the security team can foster goodwill with the business while gaining valuable visibility into areas that may be hidden. The exception process is also a valuable tool for documenting risk acceptance by the applicable business owner, and it can provide business owners visibility into how well their teams are meeting security requirements. Lastly, as the security program matures, the security team can gradually require the business to close down the exceptions by improving their security posture.

Will CVSS 4.0 Help Companies Manage Vulnerabilities Better?

About two weeks ago FIRST published version 4.0 of the Common Vulnerability Scoring System (CVSS), largely in response to feedback from the industry on the shortcomings of CVSS 3.1 and previous versions. The main complaint from industry with version 3.1 was that it didn’t offer any way to add additional context that could help determine and prioritize risk. This led companies to come up with their own processes to add context. In a previous blog about The Problem With Vulnerability Scanners I specifically highlighted how CVSS scores weren’t very useful on their own and needed additional business context to make a risk prioritization decision. With that in mind, CVSS 4.0 attempts to address these shortcomings. Let’s take a look at what changed and whether it will help.

What’s New?

Both CVSS 3.1 and CVSS 4.0 include ways to evaluate vulnerabilities using the intrinsic characteristics of the vulnerability (Base), how the vulnerability changes over time (Temporal in v3, Threat in v4) and how the vulnerability specifically applies to your environment (Environmental). New for v4 is a Supplemental section, which doesn’t impact the CVSS score but allows you to add additional context for the vulnerability.

Additionally, CVSS 4.0 promises the ability to add real time threat context by allowing teams to use threat intelligence as an input to the CVSS score for a vulnerability. The base metrics have also been reworked, with fields such as Attack Complexity, Attack Requirements and separate impact ratings for the Vulnerable System and Subsequent Systems. CVSS 4.0 also attempts to acknowledge unique environments by adding fields for things like safety and ICS systems. You can read the full CVSS 4.0 specification here.
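
For illustration, CVSS 4.0 vector strings look like the following. The metric names come from the specification, but the vulnerability and values are made up, and the official FIRST calculator computes the resulting scores. Base metrics only (CVSS-B), for a network-reachable, low-complexity flaw with high impact on the vulnerable system:

    CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

The same vulnerability enriched with threat and environmental context (CVSS-BTE), where E:U records that no exploitation has been reported and AR:L records a low availability requirement for this particular system; context like this generally moves the score down relative to the bare base vector:

    CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:U/CR:H/IR:H/AR:L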

Finally! A Way To Prioritize Vulnerabilities!

CVSS 4.0 definitely seems like a huge step towards allowing teams to provide additional context for a vulnerability, with the ultimate goal of influencing the score for better risk prioritization. The most common complaint I hear from engineering teams is that there are too many vulnerabilities with the same criticality and they are unsure where to start. This was also feedback provided by industry to FIRST, because vulnerabilities clustered towards the critical and high range after the changes from v2 to v3.

CVSS 4.0 definitely answers some of the previous shortcomings and allows teams to add context to help make better decisions about which vulnerabilities should be prioritized for remediation over others. It is fairly common for top priority to be given to external, publicly facing systems, but CVSS 3.1 didn’t really provide a good way to delineate between internal and external systems. So overall, the changes introduced in v4 are very welcome and should help teams focus on what matters.

Is More Choice A Good Thing?

While it may seem like a good thing to be able to adjust the CVSS score for a vulnerability, I can see this causing issues, particularly with external reporting. Security teams will need a robust, documented process for how they adjust the score of a vulnerability, and I can see situations in the future where companies are accused of subjectively adjusting their vulnerability scores down to paint a better picture than reality.

Additionally, more choice comes with less transparency. Over the past year I have seen the volume and complexity of security questionnaires increase. The top questions focus on vulnerability remediation SLAs, incident response times and software supply chain security. Adding complexity to the CVSS scoring process that allows companies to subjectively adjust a score up or down will be extremely difficult for customers and regulators to navigate. Think back to Log4j and imagine the reaction from your customers if you said you had Log4j vulnerabilities but weren’t prioritizing remediation because they were on internal systems only. This may be a reasonable risk response for the business, but the perception from your customers will be difficult to manage.

Time Will Tell

Overall, it seems like CVSS 4.0 is attempting to become more of an overall risk score rather than just a severity score. It is certainly welcome to be able to add context and take additional input to adjust the CVSS score as it applies to your environment and business. However, the new standard adds complexity and subjectivity that will make it difficult for customers and regulators to assess the true risk of a vulnerability in a common way across the industry. Security teams will need to be particularly diligent in documenting a robust process for how they adjust CVSS scores to avoid being accused of arbitrarily adjusting scores down to make their company look better.

The Most Powerful Word A CSO Can Say Is No

A Tale Of Two Extremes

A few weeks ago I was sitting across the dinner table from a CIO and a CISO who were discussing how security works within their specific businesses. The CIO was complaining that the security team had unreasonable processes that slowed down his ability to deliver new technology projects within his org and as a result he ignored a lot of their requests. This resulted in an engaging discussion about the best way to balance security requirements against the needs of the business. I found it interesting that some of my fellow CISOs were adamant about having teams meet security requirements without exception, regardless of the impact to the business.

After this discussion I spent some time thinking about my own stance on this issue and how I have tried to balance security requirements against the needs of the business over the course of my career. I pride myself on finding ways to partner with and support the business, instead of just telling them no all the time. However, I have also found that sometimes the most powerful word in my vocabulary is NO. Saying no is particularly useful during the rollout of new processes or security controls, where you are changing behavior more than you are implementing a new technology.

Tug Of War

Security requirements and business needs can often be in a perpetual tug of war. This isn’t necessarily a bad thing when you consider the purpose of a CISO organization is to help protect the business not only from attackers, but often from itself. However, it can be difficult when the tug of war is biased towards one side or the other. If the business simply steamrolls and ignores all security requirements, then clearly the CISO isn’t valued and the business isn’t interested in managing risk. However, if the CISO says no to everything, this can be a significant and costly drag on the business in terms of people, processes, technology and time. Lost productivity, lost revenue and the inability to deliver a product to market quickly can be difficult to measure, but they have real impact on the business. Worse, the business may just ignore you. It is important for CISOs to find an appropriate balance that allows the business to function while meeting the security objectives that protect it. I firmly believe that when done correctly, CISOs can avoid being a drag on the business and can actually enable it.

Just Say No

Despite my general inclination to find ways to keep the business moving forward, I’ve also found saying no to certain things can be extremely useful. When teams want an exception, I usually say no as my initial response. I have found that often teams just need to hear an exception isn’t an option, and this unblocks them to pursue an alternative solution to the problem. As a result the teams improve their security while also delivering their business objectives.

Sometimes the teams will ask for an exception a second time. In these cases, I usually reconsider, but often still tell them no. My expectation after telling them no a second time is to either get them to fix the issue or, if the issue can’t be fixed, to have them present a plan with different options along with their recommendation. When teams come back a third time it ends up being an actual business risk discussion instead of an exception discussion. The outcome usually ends up being some sort of compromise on both sides, which is exactly what you want. Taking a balanced approach develops an appropriate level of partnership between security and the rest of the business, where one side isn’t always sacrificing its objectives for the other.

Seriously, Just Say No

Next time a team comes to you with an exception request try saying no and see if they respond differently. You may or may not be surprised when they find an alternate solution that doesn’t require an exception. For me, it has become a powerful tool to nudge teams towards achieving my security goals, while still delivering on their business objectives.

Software Supply Chain Security Considerations

Over the past five years there has been increased scrutiny on the security of supply chains, and in particular software supply chains. The SolarWinds attack in 2020 brought this issue to the foreground as yet another requirement for a well rounded security program, and it has since been codified into several security guidelines, such as the Biden Administration Executive Order in 2021 and CISA software supply chain best practices. As businesses shift their software development practices to DevOps and cloud, CSOs need to make sure software supply chain security is one of the components that is measured, monitored and controlled as part of a well rounded security program.

How Did We Get Here?

The use of open source software has increased over the past two decades largely because it is cheaper and faster to use an existing piece of software rather than spend time developing it yourself. This allows software development teams quicker time to market because they don’t have to re-invent software to perform certain functions and can instead focus on developing intellectual property that creates distinct value for their company.

There is also allegedly an implicit advantage to using open source software: the idea that open source software has more eyes on it and is therefore less prone to having malicious functions built into it and less prone to security vulnerabilities. This concept may work for large open source projects with many contributors, but it falls short for smaller projects with fewer contributors and resources. Until the SolarWinds hack in 2020, however, the attitude that open source software is more secure was applied to all projects regardless of size and funding. As we have learned, this flawed reasoning does not hold up and has allowed attackers to target companies through their software supply chain.

The challenge with open source software is that it is supposed to be a community led project. This means people use the software but are also supposed to contribute back to it. However, as companies have embraced open source software, the two-way street has become biased towards taking and using the software rather than contributing back. If corporations contributed back in ways that were proportionate to their use of open source software, the maturity, security and quality of open source software would be drastically improved.

What Are the Risks?

There are several inherent risks involved in using open source software and they are as follows:

Can You Really Trust The Source?

How do you know the software you are pulling down from the internet doesn’t have a backdoor built into it? How do you know it is free from vulnerabilities? Is this piece of software developed and supported by an experienced group of contributors or is it maintained by one person in their basement during their spare time?

The point is, it is fairly easy to masquerade as a legitimate and well supported open source software project. Yet, it is difficult to actually validate the true source of the software.
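
One minimal first step is verifying a downloaded artifact against a published digest; stronger options include GPG or Sigstore signature verification. A sketch, with an illustrative digest and filename:

    # Sketch: verify a downloaded artifact against a published SHA-256 digest.
    import hashlib

    EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"  # placeholder

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("library-1.2.3.tar.gz") != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch: do not use this artifact")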

What Is The Cadence For Vulnerability Fixes And Software Updates?

The size and scope of an open source project dictates how well it is supported. As we saw during Log4j, some projects were able to push updates very quickly, but other smaller projects took time to resolve the full scope of the issue. Any company using open source software should consider how often a project is updated and set limits on the use of software that doesn’t have regular and timely updates.

May Actually Require An Increase In Resources

There are ways for companies to manage the risk of using open source software. Assuming you can trust the source, you can always pull down the source code and compile the software yourself. You can even fork the project to include fixes or other improvements to suit your specific application. However, this takes resources. It may not take the full number of resources that would be required if you wrote the software from scratch, but maintaining the builds, version control and the general software development lifecycle will need to be properly resourced and supported.

Can Complicate External Stakeholder Management

Another issue with using open source software in your supply chain is external stakeholder management. The Biden administration's 2021 Executive Order on cybersecurity (EO 14028) requires companies to provide a software bill of materials (SBOM) for software sold to the U.S. Government. This trend has also trickled down into third-party partner management between companies, where contractual terms are starting to ask for SBOMs, vulnerability disclosure timelines and other security practices related to software.
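For context, an SBOM is essentially a machine-readable parts list for your software. A minimal sketch of a single entry, modeled on the CycloneDX JSON format; in practice SBOMs are generated by build tooling rather than written by hand:

```python
import json

# A minimal, illustrative SBOM fragment modeled on the CycloneDX JSON
# format. Real SBOMs are emitted by build tooling and list every component.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```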

One major issue here is that software can be listed as vulnerable even when there is no way to exploit it. For example, a product may include an open source library with a known vulnerability that cannot actually be exploited in the context of that product. This causes friction with external stakeholders, regulators, auditors, etc. when they see the vulnerability listed: they may request that it be resolved, which draws resources away from other priorities that are higher risk.
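This is the gap that exploitability statements are meant to close: alongside the SBOM, the supplier records why a listed vulnerability does not apply to the product. A minimal sketch modeled on the analysis fields in CycloneDX's VEX support; the CVE, state and justification values are illustrative:

```python
import json

# Hedged, illustrative VEX-style statement modeled on CycloneDX's
# vulnerability analysis fields. The CVE and justification are placeholders.
vex_statement = {
    "id": "CVE-2021-44228",
    "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The vulnerable lookup feature is never invoked by this product.",
    },
}

print(json.dumps(vex_statement, indent=2))
```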

Standardization Is Hard

Finally, standardizing and controlling the use of open source software as part of the SDLC is advantageous from a security perspective, but exceptionally difficult from a practicality perspective. Unwinding the use of high-risk or unapproved open source software can take a long time depending on how critical the software is to the application, and if developers have to recreate and maintain the software internally, that takes development time away from new features or product updates. Similarly, getting teams to adopt standard practices, such as only allowing software to be a certain number of revisions out of date, only allowing software from approved sources and preventing vulnerable software from being used, takes time, but it pays dividends in the long run, especially for external stakeholder management and the creation of SBOMs.
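These practices stick better when they are encoded as an automated check in the build pipeline rather than left as written policy. A toy sketch of such a gate; the approved sources, version threshold and vulnerability list are all hypothetical stand-ins for your own inventory and feeds:

```python
# Illustrative policy gate for the practices described above. All names
# and data are hypothetical stand-ins for a real inventory and vuln feed.
APPROVED_SOURCES = {"pypi.org", "internal-mirror.example.com"}
MAX_VERSIONS_BEHIND = 2
KNOWN_VULNERABLE = {("acme-lib", "1.0.3")}

def check_dependency(name, version, source, versions_behind):
    """Return a list of policy violations for one dependency."""
    problems = []
    if source not in APPROVED_SOURCES:
        problems.append(f"unapproved source: {source}")
    if versions_behind > MAX_VERSIONS_BEHIND:
        problems.append(f"{versions_behind} versions out of date")
    if (name, version) in KNOWN_VULNERABLE:
        problems.append("known vulnerable version")
    return problems

for issue in check_dependency("acme-lib", "1.0.3", "random-mirror.net", 4):
    print("FAIL:", issue)
```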

Wrapping Up

Using open source software has distinct advantages, such as efficiency and time to market, but it carries real risks and can serve as an attack vector into a company or its customers. CSOs need to make software supply chain security one of the pillars of their security programs in order to identify, monitor and manage the risk of using open source software. Most importantly, a robust software supply chain security program should pursue the most effective ways to manage risk while balancing external stakeholder expectations and without inadvertently driving up resource requirements.

Can Risk Truly Be Measured?

As a CSO, everything you do needs to be evaluated in terms of risk to the business. When you build a security program, prioritize objectives or respond to incidents, all of your decisions take risk, and how to manage it effectively, into consideration. Which raises the question: Can Risk Truly Be Measured?

The Problem With Risk

The problem with risk is that it is subjective. Everyone views risk, and the behaviors that contribute to it, differently. Trying to measure a subjective concept invites inconsistency, fluctuation, and even conscious or subconscious influencing of the outcome. Even worse, as you engage in risky behavior, your risk tolerance adapts over time until you accept the risk as normal. In security, this is the equivalent of failing to implement a specific control yet avoiding a breach or incident anyway; the business may then rationalize that the control isn't needed because it hasn't suffered the consequences of a breach. How do you consistently and objectively measure risk for yourself, your team or your business if it is subjective?

Objectivity Leads to Consistency

Risk can be measured, but it requires you to apply an objective and consistent process to gain consensus and get repeatable results. There are several methods for obtaining consistent results when dealing with a subjective concept. In Operations Research there is a process known as the Analytic Hierarchy Process (AHP), which derives a mathematically grounded decision from subjective judgments (like deciding which house to buy). Similarly, when deciding between subjective options you can use simple mechanisms like pairwise comparisons or a triangular distribution to express the decision mathematically.

Here are two examples:

Pairwise comparison: Useful for weighing two things against each other to determine which is more important, and by how much. Example: Patching our publicly facing systems with critical vulnerabilities is ten times more important than patching our systems with critical vulnerabilities that are only internally accessible.

Triangular distribution: Useful for distilling a decision down to a single value when you may only know two data points, such as the extremes, or when two people offer different estimates and you need a defensible compromise. Example: Manager 1 says the impact of an event is 2 out of 5 (5 being highest); Manager 2 says 4 out of 5. Using a triangular distribution we settle on an impact of 3. (Both mechanisms are sketched in code below.)
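Both mechanisms reduce to small calculations. A minimal sketch of each in Python, matching the examples above; note that the assumption that the distribution's mode sits midway between the two estimates is mine, not a standard rule:

```python
def pairwise_weights(ratio):
    """Turn a 'ratio times more important' judgment into normalized weights."""
    return ratio / (ratio + 1), 1 / (ratio + 1)

def triangular_mean(low, high, mode=None):
    """Mean of a triangular distribution: (low + mode + high) / 3.
    If no mode is given, assume it sits midway between the extremes."""
    if mode is None:
        mode = (low + high) / 2
    return (low + mode + high) / 3

# External-facing patching judged ten times more important than internal.
external, internal = pairwise_weights(10)
print(round(external, 2), round(internal, 2))  # 0.91 0.09

# Manager 1 scores impact 2/5, Manager 2 scores 4/5 -> compromise of 3.0.
print(triangular_mean(2, 4))  # 3.0
```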

Either of these methods can be used to determine the likelihood and impact for a specific security control in an industry-standard security framework like NIST or CIS.

There are other objective methods for assessing and assigning risk. I've discussed the CIS Risk Assessment Method (RAM) in detail in my blog post about Building A Strategic Plan. The advantage of this method is that you can apply one consistent process to assign and measure risk for every single CIS control and sub-control, which allows you to measure how well you are managing risk over the lifetime of your security program in a repeatable way.
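CIS RAM defines its own worksheets and acceptance criteria, so the sketch below is only a simplified illustration of the underlying idea: score every control with the same likelihood-times-impact formula so results are comparable and repeatable across the program. All scores here are made up:

```python
# Simplified illustration of one consistent scoring formula applied to
# every control (risk = likelihood x impact on 1-5 scales). CIS RAM has
# its own worksheets and criteria; only the shape of the process is shown.
controls = {
    "CIS 1.1 Asset inventory":       {"likelihood": 4, "impact": 5},
    "CIS 7.1 Vulnerability process": {"likelihood": 3, "impact": 4},
    "CIS 11.1 Data recovery":        {"likelihood": 2, "impact": 5},
}
ACCEPTABLE_RISK = 10  # example acceptance threshold

for name, c in sorted(controls.items(),
                      key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                      reverse=True):
    risk = c["likelihood"] * c["impact"]
    flag = "ABOVE threshold" if risk > ACCEPTABLE_RISK else "acceptable"
    print(f"{name}: risk={risk} ({flag})")
```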

When Things Go Wrong

Following an objective process to measure risk requires you to take the time to apply it methodically to each decision or control you need to assess. This can be time consuming at first, but once the initial baseline is set it is easy to update and maintain. It is also easy to mess this process up. Here are a few things I don't recommend:

  1. Don’t make up your own security control framework – There are industry standard security frameworks for a reason. The standard frameworks are comprehensive and have clear criteria for how to apply and asses the effectiveness of a control. The worst “frameworks” I have seen are typically made up by someone in the security team that things they know better. These “frameworks” are subjective, inconsistent, contradictory, redundant or unclear about how to apply specific controls. On top of that these “frameworks” require someone to update them, maintain them and communicate them to the rest of the org. Why duplicate work when you don’t have to?
  2. Failing to apply a consistent process – An inconsistent process for measuring and assessing risk produces inconsistent, subjective results: your measurement of risk will be inaccurate, or it will fluctuate with each measurement and become useless. The same goes for allowing different teams within your company to use different processes for assessing risk; you lose the ability to consistently compare the risk of different security controls across the business, which can lead to gaps or overconfidence.
  3. Failing to validate the result – This may be obvious, but each of your measurements should be backed up by evidence. Don't assume your inventory is correct, don't assume your IDS is working and don't assume your DDoS protection will work properly. Test these things, audit them and spot check them periodically. Risk doesn't always move in one direction: controls can regress, postures can change and threats are constantly evolving. Validate your results on a regular basis to make sure you are getting the most accurate picture possible.

So Can You Really Measure Risk?

I believe you can measure risk and develop an accurate risk profile for your company. Despite the subjective nature of risk, you can reach an objective result by using simple mathematical principles and a consistent, objective process. Applied to an industry-standard control framework, these methods yield an objective result that accurately measures your risk for a single decision or for an entire control framework. Once you have an accurate risk profile, you can use it to make prioritized decisions that manage risk for your business.