Should Security Be An Approver For IT and Business Requests?

Over the course of my career I have consistently seen security in the approval chain for various IT operations and business requests, such as identity, network and even customer contracts. Having security in the approval chain may seem logical at first glance, but it can actually mask or exacerbate underlying operations issues. Having a second set of eyes on requests can make sense to provide assurance, but the question is – does it make sense for security to be an approver?

Understand The Scope of Your Security Program

First and foremost, the scope of your security program will ultimately dictate how and when security should be included in an approval process. For example, if security owns networking or identity, then it makes sense to staff an operations team to support those areas and to include security as an approver for requests related to those functions.

It may also make sense to include security in the approval chain as an evaluator of risk for functions security doesn’t own. For example, security won’t own the overall contract, finance or procurement processes, but they should be included as an approver to make sure contract terms and purchases align to security policies and are not opening up the business to unnecessary risk. They can also be included in large financial transfers as a second set of eyes to make sure the business isn’t being scammed out of money. In these examples, security is adding good friction, slowing critical processes down in a healthy way to make sure they make sense and using time as a defense mechanism.

Other Benefits Of Security As An Approver

Including security as an approver for general IT processes can have other benefits, but these need to be weighed carefully against the risks and overall function of the business. For example, security can help provide an audit trail for approving activities that may create risk for the company. This audit trail can be useful during incident investigations to determine the root cause of an incident. It can also help avoid compliance gaps for frameworks like FedRAMP, SOC, etc., where some business or IT changes need to be closely managed to maintain compliance. However, creating an audit trail is not unique to the security function and, if the process is properly designed, can be performed by other functions as well.

Another advantage of including security in the approval chain is separation of duties. For example, if one team owns identity and is requesting elevated privileges to something, it presents a conflict of interest if they approve their own access requests. Instead, security often acts as a secondary reviewer and approver to provide separation of duties, so the team that owns a function isn’t approving its own requests.

Where Including Security As An Approver Can Go Wrong

The biggest issue with having security in the approval chain for most things is that they typically are not the owner of those things. If approval processes are not designed properly (with other approvers besides security), then the processes can confuse ownership and give a false impression of security or compliance. For example, I typically see security as an approver for identity and access requests when security doesn’t own the identity function. At first glance, this seems to make sense because identity is a critical IT function that needs to be protected. However, if security doesn’t own the identity function (or the systems that need access approved), how do they know whether the request should be approved or not? Instead, what happens is almost all requests end up being approved (unless they are egregious) and the process serves no real purpose other than creating unnecessary friction and giving a false sense of security.

Another issue I have seen with including security in the approval chain is they effectively become “human software” where they are manually performing tasks that should be automated instead. Using security personnel as “middleware” masks the true pain and inefficiency of the process for the process owner. This takes critical human capital away from their intended purpose, is a costly solution to a problem and opens up the business to additional risk.

When Does It Make Sense For Security To Be An Approver?

I’ve listed a few examples where it makes sense for security to be an approver for things it doesn’t own – like large financial transactions, some procurement activities and security-specific contract terms. However, I argue security shouldn’t be included as an approver in most IT operations processes unless security actually owns the process or thing that needs a specific security approval. Instead, the business owner of the thing should be the ultimate approver, and processes should be designed to provide appropriate auditing and compliance without needing security personnel to perform those checks manually.

One of the few areas where it will always make sense to have security as an approver is security exceptions. First, exceptions should be truly exceptional and not used as a band-aid for broken or poorly designed processes. Second, exceptions should be grounded in business risk, while documenting the evaluation criteria, decisions, associated policies and duration. This is a core security activity because exceptions are ultimately about evaluating risk and deviation from policy. I’ve written other posts about how the exception process can have other benefits as well.

Wrapping Up

Don’t fall into the trap of using your security team as a reviewer and approver for IT operations requests if security doesn’t actually own the thing related to the request. This places the security team in an adversarial position, can be a costly waste of resources, masks process inefficiencies, gives a false sense of security and can open up the business to risk. Instead, be ruthlessly focused on how your security team is utilized so that when they are engaged, it is to perform a function at the core of their mission – protecting the business and managing risk.

Should Companies Be Held Liable For Software Flaws?

Following the CrowdStrike event two weeks ago, there has been an interesting exchange between Delta Airlines and CrowdStrike. In particular, Delta has threatened to sue CrowdStrike to pursue compensation for the estimated $500M of losses allegedly incurred during the outage. CrowdStrike has recently hit back at Delta claiming the airline’s recovery efforts took far longer than their peers and other companies impacted by the outage. This entire exchange prompts some interesting questions about whether a technology company should be held liable for flaws in their software and where the liability should start and end.

Strategic Technology Trends

Software quality, including defects that lead to vulnerabilities, has been identified as a strategic imperative by CISA and the White House in the 2023 National Cybersecurity Strategy. Specifically, the United States wants to “shift liability for software products and services to promote secure development practices,” and the CrowdStrike event would seem to fall squarely into this category of liability and secure software development practices.

In addition to strategic directives, I am also seeing companies prioritize speed to market over quality (and even security). In some respects it makes sense to prioritize speed, particularly when pushing updates for new detections. However, there is clearly a conflict in priorities when a company optimizes for speed over quality and a critical detection update causes a larger impact than if the update had not been pushed at all. Modern cloud infrastructure and software development practices prioritize speed to market over all else. Hyperscale cloud providers have built a giant easy button that allows developers to consume storage, network and compute resources without consideration for the downstream consequences. Attempts by the rest of the business to introduce friction, gates or restrictions on these development processes are met with derision and are usually followed by accusations of slowing down the business or impeding sales. Security often falls into this category of “bad friction” because they are seen as the “department of no,” but as the CrowdStrike event clearly shows, there needs to be a balance between speed and quality in order to effectively manage risk to the business.

One last trend is the reliance on “the cloud” as the only BCP / DR plan. While cloud companies certainly market themselves as globally available services, they are not without their own issues. Cloud environments still need to follow IT operations best practices by completing a business impact analysis and implementing a BCP / DR plan. At the very least, cloud environments should have a rollback option in order to revert to the last known good state.

What Can Companies Do Differently?

Companies that push software updates, new services or new products to their customers need to adopt best practices for quality control and quality assurance. This means rigorously testing your products before they hit production to make sure they are as free of defects as possible. CrowdStrike clearly failed to properly test their update due to a claimed flaw in their testing platform. While it is nice to know why the defect made it into production, CrowdStrike still has a responsibility to make sure their products are free from defects and should have had additional testing and observability in place.

Second, for critical updates (like detections), companies feel an imperative to push the update globally as quickly as possible. Instead, companies like CrowdStrike should prioritize customers in terms of industry risk and create a phased rollout plan that stages updates with a ramping schedule. By starting small, monitoring changes and then ramping up the rollout, CrowdStrike could have limited the impact to a handful of customers and avoided a global event.
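To make the idea concrete, here is a minimal sketch of a ramped rollout. The ring names, percentages, soak time and health threshold are illustrative assumptions on my part, not anything CrowdStrike actually does, and the telemetry call is stubbed out:

```python
import time

# Illustrative rollout rings, ordered from lowest to highest blast radius.
# The percentages and health threshold are assumptions for this sketch.
ROLLOUT_STAGES = [
    ("internal canary", 0.001),   # dogfood on the vendor's own fleet first
    ("early adopters", 0.01),
    ("low-risk industries", 0.10),
    ("general availability", 1.00),
]

HEALTH_THRESHOLD = 0.999  # minimum fraction of healthy endpoints to continue


def healthy_fraction(stage_name: str) -> float:
    """Placeholder for real telemetry: fraction of endpoints in this
    stage that are still checking in after receiving the update."""
    return 1.0  # stubbed out for the sketch


def rollout(update_id: str) -> bool:
    for stage, target_pct in ROLLOUT_STAGES:
        print(f"{update_id}: deploying to {stage} ({target_pct:.1%} of fleet)")
        time.sleep(1)  # in reality: a soak period of minutes or hours
        health = healthy_fraction(stage)
        if health < HEALTH_THRESHOLD:
            print(f"{update_id}: halting at {stage}, health={health:.3%}")
            return False  # stop the ramp and trigger rollback
    print(f"{update_id}: fully deployed")
    return True


if __name__ == "__main__":
    rollout("detection-update-2024-07")
```

The key design choice is that each ring must pass a health check before the next ring receives the update, so a bad update stalls early instead of reaching the whole fleet.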

Lastly, companies need to implement better monitoring and BCP / DR for their business. In the case of CrowdStrike, they should have had monitoring in place that immediately detected their products going offline, and they should have had the ability to roll back or revert to the last known good state. Going a step further, they could even change the behavior of their software so that instead of causing a kernel panic that crashes the system, the OS recovers gracefully and automatically rolls back to the last known good state. However, the reality is that sophisticated logic like this costs money to develop, and it is difficult for development teams to justify the investment unless the company has felt a financial penalty for their failures.
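A rough sketch of what “revert to last known good” logic could look like for a content update follows. The state file, boot-failure threshold and version names are hypothetical, purely to illustrate the behavior described above rather than how any specific agent works:

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")
MAX_FAILED_BOOTS = 2  # illustrative threshold before reverting


def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"active": "content-v1", "last_known_good": "content-v1", "failed_boots": 0}


def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))


def on_boot(clean_shutdown: bool) -> dict:
    """Decide whether to keep the active content version or revert."""
    state = load_state()
    if clean_shutdown:
        # The active version survived a full boot cycle: promote it.
        state["last_known_good"] = state["active"]
        state["failed_boots"] = 0
    else:
        state["failed_boots"] += 1
        if state["failed_boots"] >= MAX_FAILED_BOOTS:
            # Revert to the version that last booted cleanly.
            state["active"] = state["last_known_good"]
            state["failed_boots"] = 0
    save_state(state)
    return state
```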

Contracts & Liability

Speaking of financial penalties, the big question is whether or not CrowdStrike can be held liable for the global outage. My guess is this will depend on what their contracts say. Most contracts have a clause that limits liability for both sides, so CrowdStrike could certainly face damages within those limits (probably only a few million at most). It is more likely CrowdStrike will face losses from new customers and from existing customers that are up for contract renewal. Some customers will terminate their contracts. Others will negotiate better terms or expect larger discounts on renewal to make up for the outage. At most this will hit CrowdStrike for the next 3 to 5 years (depending on contract length) and then the pricing and terms will bounce back. It will be difficult for customers to exit CrowdStrike en masse because it is already a sunk cost and companies won’t want to spend the time or energy to deploy a new technology. Some of the largest customers may have the best terms and ability to extract concessions from CrowdStrike, but overall I don’t think this will impact them for very long and I don’t think they will be held legally liable in any material sense.

Delta Lags Industry Standard

If CrowdStrike isn’t going to be held legally liable, what happens to Delta and their claimed $500M in losses? Let’s look at some facts. First, as CrowdStrike has rightfully pointed out, Delta lagged the world in recovering from this event. They took about 20 times longer to get back to normal operations than other airlines and large companies. This points to clear underinvestment in identifying critical points of failure (their crew scheduling application) and in developing sufficient plans to back up and recover if critical parts of their operation failed.

Second, Delta clearly hasn’t designed their operations for ease of management or resiliency. They have also failed to perform an adequate Business Impact Analysis (BIA) or properly test their BCP / DR plans. I don’t know any specifics about their underlying IT operations, but a few recommendations come to mind such as implementing active / active instances for critical services and moving to thin clients or PXE boot for airport kiosks and terminals. Remove the need for a human to touch any of these systems physically, and instead implement processes to remotely identify, manage and recover these systems from a variety of different failure scenarios. Clearly Delta has a big gap in their IT Operations processes and their customers suffered as a result.

Wrapping Up

What the CrowdStrike event highlights is the need for companies to prioritize quality, resiliency and stability over speed to market. The National Cybersecurity Strategy has identified software defects as a strategic imperative because they lead to vulnerabilities, supply chain compromise and global outages. Companies with the size and reach of CrowdStrike can no longer afford to prioritize speed over all else and instead need to shift to a more mature and higher quality SDLC. In addition, companies that use popular software need to consider diversifying their supply chain, implementing IT operations best practices (like SRE) and implementing a mature BCP and DR plan on par with industry standards.

When it comes to holding companies liable for global outages, like the one two weeks ago, I think it will be difficult for this to play out in the courts without resorting to a legal tit-for-tat that no one wins. Instead, the market and customers need to weigh in and hold these companies accountable through share prices, contractual negotiation or even switching to a competitor. Given the complexity of modern software, I don’t think companies should be held liable for software flaws because it is impossible to eliminate all flaws. Additionally, modern SDLCs and CI/CD pipelines are exceptionally complex, and this complexity can often result in failure. This is why BCP/DR and SRE are so important, so you can recover quickly if needed. Yes, CrowdStrike could have done better, but clearly Delta wasn’t even meeting industry standards. Instead of questioning whether companies should be held liable for software flaws, a better question is: at what point does a company become so essential that it becomes, by default, critical infrastructure?

Navigating Hardware Supply Chain Security

Lately, I’ve been thinking a lot about hardware supply chain security and how the risks and controls differ from software supply chain security. As a CSO, one of your responsibilities is to ensure your supply chain is secure, yet the distributed nature of our global supply chain makes this a challenging endeavor. In this post I’ll explore how a CSO should think about the risks of hardware supply chain security, how they should think about governing this problem and some techniques for implementing security assurance within your hardware supply chain.

What Is Hardware Supply Chain?

Hardware supply chain relates to the manufacturing, assembly, distribution and logistics of physical systems. This includes the physical components and the underlying software that comes together to make a functioning system. A real world example could be something as complex as an entire server or something as simple as a USB drive. Your company can be at the start of the supply chain by sourcing and producing raw materials like copper and silicon, at the middle of the supply chain producing individual components like microchips, or at the end of the supply chain assembling and integrating components into an end product for customers.

What Are The Risks?

There are a lot of risks when it comes to the security of hardware supply chains. Hardware typically has longer lead times and a longer shelf life than software. This means compromises can be harder to detect (due to all the stops along the way) and can persist for a long time (e.g. decades in cases like industrial control systems). It can be extremely difficult or impossible to mitigate a compromise in hardware without replacing the entire system (or requiring downtime), which is costly to a business or deadly to a mission-critical system.

Physical or logical compromise can happen in two ways – seeding and interdiction. Both involve physically tampering with a hardware device, but they occur at different points in the supply chain. Seeding occurs during the physical manufacture of components and involves someone inserting something malicious (like a backdoor) into a design or component. Insertion early in the process means the compromise can persist for a long period of time if it is not detected before final assembly.

Interdiction happens later in the supply chain, when the finished product is being shipped from the manufacturer to the end customer. During interdiction the product is intercepted en route, opened, altered and then sent on to the end customer in a compromised state. The hope is that the recipient won’t notice the slight shipping delay or the tampered product, enabling anything from GPS location tracking to full remote access.

Governance

CSOs should take a comprehensive approach to manage the risks associated with hardware supply chain security that includes policies, processes, contractual language and technology.

Policies

CSOs should establish and maintain policies specifying the security requirements at every step of the hardware supply chain. This starts at the requirements gathering phase and includes design, sourcing, manufacturing, assembly and shipping. These policies should align to the objectives and risks of the overall business, with careful consideration for how to control risk at each step. For example, one policy could require independent validation and verification of your hardware design specification to make sure it doesn’t include malicious components or logic. Another could require that all personnel who physically manufacture components in your supply chain receive periodic background checks.

Processes

Designing and implementing secure processes can help manage the risks in your supply chain, and CSOs should be involved in the design and review of these processes. Processes can help detect compromises in your supply chain and can create or reduce friction where needed (depending on risk). For example, if your company is involved in national security programs you may establish processes that perform verification and validation of components prior to assembly. You may also want to establish robust processes and security controls around intellectual property (IP) and research and development (R&D). Controlling access to and dissemination of IP and R&D can make it more difficult to seed or interdict hardware components later on.

Contractual Language

One area CSOs should regularly review with their legal department is the contractual language used with the companies and suppliers in your supply chain. Contractual language can extend your security requirements to these third parties and even allow your security team to audit and review their manufacturing processes to make sure they are secure.

Technology

The last piece of governance CSOs should invest in is technology. These are the specific technology controls to ensure physical and logical security of the manufacturing and assembly facilities that your company operates. Technology can include badging systems, cameras, RFID tracking, GPS tracking, anti-tamper controls and even technology to help assess the security assurance of components and products. The technologies a CSO selects should complement and augment their entire security program in addition to normal security controls like physical security, network security, insider threat, RBAC, etc.

Detecting Compromises

One aspect of hardware supply chain security that is arguably more challenging than software supply chain security is detection of compromise. With the proliferation of open source software and technologies like sandboxing, it is possible to review and understand how a software program behaves. It is much more difficult to do this at the hardware layer. There are some techniques I have come across while researching this problem, and they all relate back to detecting whether a hardware component has been compromised or is not performing as expected.

Basic Techniques

One of the simpler techniques for detecting whether hardware has been modified is imaging. After the design and prototype are complete, you can image the finished product and then compare every product produced against this reference image. This can tell you if the product has had any unauthorized components added or removed, but it won’t tell you if the internal logic has been compromised.

Another technique for detecting compromised components is similar to unit testing in software and is known as functional verification. In functional verification, individual components have their logic and sub-logic tested against known inputs and outputs to verify they are functioning properly. This may be impractical to do with every component if they are manufactured at scale, so statistical sampling may be needed to probabilistically ensure all of the components in a batch are good. The assumption is that if all of your sampled components pass functional verification, then the overall batch (and system) has the appropriate level of integrity.
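As a rough illustration of what sampling buys you (the batch and defect counts below are made-up numbers), the chance of catching at least one tampered unit in a random sample follows directly from the hypergeometric distribution:

```python
from math import comb


def detection_probability(batch_size: int, tampered: int, sample_size: int) -> float:
    """Probability that a random sample of `sample_size` units from a batch
    containing `tampered` bad units includes at least one bad unit
    (1 minus the chance the whole sample is clean)."""
    all_clean = comb(batch_size - tampered, sample_size) / comb(batch_size, sample_size)
    return 1.0 - all_clean


# Illustrative numbers: a batch of 10,000 boards with 5 seeded units.
for n in (50, 200, 1000):
    p = detection_probability(10_000, 5, n)
    print(f"sample {n:>4} units -> {p:.1%} chance of catching a seeded board")
```

The takeaway is that modest sample sizes catch widespread tampering reliably, but a handful of seeded units in a large batch can easily slip through, which is why sampling complements rather than replaces the other controls described here.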

To detect interdiction or other logistics compromises, companies can implement logistics controls such as unique serial numbers (down to the component level), tamper-evident seals, anti-tamper technology that renders the system inoperable if tampered with (or makes it difficult to tamper with without destroying it), and shipping thresholds to detect abnormal delivery delays.
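The shipping-threshold idea is simple enough to sketch. The route, expected transit time and tolerance below are invented placeholders; real values would come from your logistics provider’s historical data:

```python
from datetime import datetime, timedelta

# Illustrative expected transit windows per route (placeholder values).
EXPECTED_TRANSIT = {
    ("Taipei", "Austin"): timedelta(days=5),
}
DELAY_TOLERANCE = timedelta(days=1)


def flag_delay(origin: str, destination: str,
               shipped: datetime, received: datetime) -> bool:
    """Flag shipments whose transit time exceeds the expected window,
    which may indicate the package was diverted en route."""
    expected = EXPECTED_TRANSIT[(origin, destination)]
    return (received - shipped) > expected + DELAY_TOLERANCE


# 8 days in transit against an expected 5 (+1 tolerance) -> flagged.
print(flag_delay("Taipei", "Austin",
                 datetime(2024, 3, 1), datetime(2024, 3, 9)))
```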

Advanced Techniques

More advanced techniques for detecting compromise include destructive testing. Similar to statistical sampling, destructive testing involves physically breaking apart a component to make sure nothing malicious has been inserted and to confirm the component was physically manufactured and assembled properly.

In addition to destructive testing, companies can create hardware signatures that capture expected patterns of behavior for how a system should physically behave. This is a more advanced form of functional testing where multiple components, or even finished products, are analyzed together against known patterns of behavior to make sure they are functioning as designed and have not been compromised. Technologies like Trusted Platform Modules (TPMs) can assist with this validation.

Continuing with functional behavior, a more advanced method of security assurance for hardware components is function masking and isolation. Function masking attempts to obscure a function so it is more difficult to reverse engineer the component. Isolation limits how components can interact with other components and usually has to be done at the design level, which effectively begins to sandbox components at the hardware level. Isolation could rely on a TPM to limit the functionality of components until the integrity of the system can be verified, or it could simply limit how one component interacts with another.

Lastly, one of the most advanced techniques for detecting compromise is second-order analysis and validation. Second-order analysis looks at the byproducts of a component while it is operating – things like power consumption, thermal signatures, electromagnetic emissions, acoustic properties and photonic (light) emissions. These second-order emissions can be analyzed to see whether they are within expected limits; if not, it could indicate the component is compromised.
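For instance, a power-draw check against a golden baseline might look like the sketch below. The wattage readings and three-sigma threshold are invented for illustration; real second-order analysis would use far richer signal processing than a simple mean comparison:

```python
from statistics import mean, stdev


def power_anomaly(baseline_watts: list[float], measured_watts: list[float],
                  sigma: float = 3.0) -> bool:
    """Flag the unit if its average power draw under a fixed test workload
    falls outside `sigma` standard deviations of the golden baseline."""
    mu, sd = mean(baseline_watts), stdev(baseline_watts)
    return abs(mean(measured_watts) - mu) > sigma * sd


# Illustrative readings from known-good boards vs. a unit under test.
golden = [12.1, 12.3, 11.9, 12.0, 12.2]
suspect = [13.6, 13.8, 13.5, 13.7]
print(power_anomaly(golden, suspect))  # True: draws noticeably more power
```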

Wrapping Up

Hardware supply chain security is a complex space given the distributed nature of hardware supply chains and the variety of attack vectors spanning physical and logical realms. A comprehensive security program needs to weigh the risks of supply chain compromise against the risks and objectives of the business. For companies that operate in highly secure environments, investing in advanced techniques ranging from individual component testing to logistics security is absolutely critical and can help ensure your security program is effectively managing the risks to your supply chain.

What’s Better – Complete Coverage With Multiple Tools Or Partial Coverage With One Tool?

The debate between complete coverage with multiple tools versus imperfect coverage with one tool regularly pops up in discussions between security professionals. What we are really talking about is attempting to choose between maximum functionality and simplicity. Having pursued both extremes over the course of my security career I offer this post to share my perspective on how CISOs can think about navigating this classic tradeoff.

In Support Of All The Things

Let’s start with why you may want to pursue complete coverage by using multiple technologies and tools.

Heavily Regulated And High Risk Industries

First, heavily regulated and high risk businesses may be required to demonstrate complete coverage of security requirements. These are industries like the financial sector or government and defense. (I would normally say healthcare here, but despite regulations like HIPAA the entire industry has lobbied against stronger security regulations and this has proven disastrous via major incidents like the Change Healthcare Ransomware Attack). The intent behind any regulation is to establish a minimum set of required security controls businesses need to meet in order to operate in that sector. It may not be possible to meet all of these regulatory requirements with a single technology and therefore, CISOs may need to evaluate and select multiple technologies to meet the requirements.

Defense In Depth

Another reason for selecting multiple tools is to provide defense in depth. The thought process is: multiple tools will provide overlap and small variances in how they meet various security controls. These minor differences can offer defenders an advantage because if one piece of technology is vulnerable to an exploit, another piece of technology may not be vulnerable. By layering these technologies throughout your organization you reduce the chances an attacker will be successful.

An example of this would be a business protected from the internet by a firewall made by Palo Alto. Behind this Palo Alto firewall is a DMZ, and the DMZ is separated from your internal network by a firewall from Cisco. This layered defense makes it more difficult for attackers to get through the external firewall, the DMZ and the internal firewall into the LAN.

Downside Of All The Things

All the things may sound great, but unless you are required to meet that level of security there can be a lot of downsides.

First, multiple technologies introduce complexity into an environment. This can make it more difficult to troubleshoot or detect issues (including security events). It can also make it more difficult to operationally support these technologies because they may have different interfaces, APIs, protocols, configurations, etc. It may not be possible to centrally manage these technologies, or it may require the introduction of an additional technology to manage everything.

Second, all of these technologies can increase the number of people required to support them. People time is a hidden cost that adds up quickly and shouldn’t be spent lightly. It starts the second you begin discussing the requirements for a new technology and can include the following:

  • Proof of Concepts (PoCs)
  • Tradeoff & Gap Analysis
  • Requests for Information (RFI)
  • Requests for Proposal (RFP)
  • Requests for Quotes (RFQ)
  • Contract Negotiation
  • Installation
  • Integration
  • Operation & Support

Finally, multiple technologies can cause performance impacts, increased costs and waste. Performance impacts can happen due to differences in technologies, complexity, configuration errors or over-consumption of resources (such as agent sprawl). Waste can happen due to overlap and duplicated functionality, because not all of the functionality gets used despite the fact you are paying for it.

Advantages and Disadvantages Of A Single Tool

A single tool that covers the majority, but not all, of your requirements offers one advantage – simplicity. This may not sound like much, but after years of chasing perfection, technology simplicity can have benefits that may not be immediately obvious.

First, seeking out a single tool that meets the majority of requirements will force your security team to optimize their approach and select the one that best manages risk while supporting the objectives of the business. Second, a single tool is easier to install, integrate, operate and support. There is also less demand on the rest of the business in terms of procurement, contract negotiation and vendor management. Lastly, a single tool requires fewer people to manage it, so you can run a smaller and more efficient organization.

The biggest disadvantage of a single tool is it doesn’t provide defense in depth. One other disadvantage is it won’t meet all of your security requirements and so the requirements that aren’t met should fall within the risk tolerance of the business or somehow get satisfied with other compensating controls.

Wrapping Up

There are a lot of advantages to meeting all of your requirements with multiple tools, but those advantages come with tradeoffs in terms of complexity, operational overhead, duplicated functionality and increased personnel requirements. If you operate a security program in a highly regulated or highly secure environment you may not have a choice, so it is important to be aware of these hidden costs. A single tool reduces complexity, operational overhead and personnel demands, but can leave additional risk unaddressed and fails to provide defense in depth. Generally, I favor simplicity where possible, but you should always balance the security controls against the risk tolerance and needs of the business.

Will CVSS 4.0 Help Companies Manage Vulnerabilities Better?

About two weeks ago FIRST published version 4.0 of the Common Vulnerability Scoring System (CVSS), largely in response to feedback from the industry on the shortcomings of CVSS 3.1 and previous versions. The main complaint from industry with version 3.1 was that it didn’t offer any way to add additional context that could help determine and prioritize risk. This led companies to come up with their own processes to add context. In a previous blog about The Problem With Vulnerability Scanners I specifically highlighted how CVSS scores weren’t very useful on their own and needed additional business context to make a risk prioritization decision. With that in mind, CVSS 4.0 attempts to address these shortcomings. Let’s take a look at what changed and whether it will help.

What’s New?

Both CVSS 3.1 and CVSS 4.0 include ways to evaluate vulnerabilities using the intrinsic characteristics of the vulnerability (Base), how the vulnerability changes over time (Temporal in v3, Threat in v4) and how the vulnerability specifically applies to your environment (Environmental). New for v4 is a Supplemental metric group, which doesn’t impact the CVSS score but allows you to add additional context for the vulnerability.

Additionally, CVSS 4.0 promises the ability to add real-time threat context by allowing teams to use threat intelligence as an input to the CVSS score for a vulnerability. Additional context can also be captured in metrics such as Attack Complexity, the new Attack Requirements metric, and the split of impact into Vulnerable System and Subsequent System. CVSS 4.0 also attempts to acknowledge unique environments by allowing additional fields for things like safety, ICS systems, etc. You can read the full CVSS 4.0 specification here.
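For readers who haven’t worked with the new metrics, a CVSS 4.0 vector string looks like the illustrative example below, and pulling it apart is straightforward. The specific vector, the internet_facing flag and the prioritization rule are assumptions for this sketch, not part of the specification:

```python
def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS vector string into metric/value pairs.
    Works for any version since the format is 'CVSS:x.y/METRIC:VALUE/...'."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    return dict(item.split(":", 1) for item in metrics.split("/"))


# Illustrative CVSS 4.0 base vector (network-reachable, low complexity,
# high impact on the vulnerable system, no subsequent-system impact).
vector = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
metrics = parse_cvss_vector(vector)

# Example of the kind of environmental context the post discusses:
# treat internet-facing systems differently from internal-only ones.
internet_facing = False
if metrics["AV"] == "N" and not internet_facing:
    print("network-exploitable but internal only: consider lowering priority")
else:
    print("prioritize for remediation:", metrics)
```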

Finally! A Way To Prioritize Vulnerabilities!

CVSS 4.0 definitely seems like a huge step towards allowing teams to provide additional context to a vulnerability with the ultimate goal of influencing the score for better risk prioritization. The most common complaint I hear from engineering teams is there are too many vulnerabilities with the same criticality and they are unsure where to start. This was also feedback provided by industry to FIRST because it seemed like vulnerabilities were clustered more towards the critical and high range after the changes from v2 to v3.

CVSS 4.0 definitely answers some of the previous shortcomings and allows teams to add additional context to help make better decisions about which vulnerabilities should be prioritized for remediation over others. I know it is fairly common for the top priority to be given to external, publicly facing systems. The problem was CVSS 3.0 didn’t really provide a way to delineate between internal and external systems very well. So overall, the changes introduced in v4 are very welcome and should help teams really focus on what matters.

Is More Choice A Good Thing?

While it may seem like a good thing to be able to adjust the CVSS score for a vulnerability I do see this causing issues, particularly with external reporting. Security teams will need to have a robust process documented for how they are adjusting the score of a vulnerability and I can see situations in the future where companies are accused of subjectively adjusting their vulnerability scores down to paint a better picture than the reality.

Additionally, more choice comes with less transparency. Over the past year I have seen the volume and complexity of security questionnaires increase. The top questions focus around vulnerability remediation SLAs, incident response times and software supply chain security. Adding additional complexity into the CVSS scoring process, that allows companies to subjectively adjust the score up or down, will be extremely difficult for customers and regulators to navigate. Think back to Log4j and the reaction from your customers if you said you had Log4j vulnerabilities, but weren’t prioritizing remediation because they were on internal systems only. This may be a reasonable risk response for the business, but the perception from your customers will be difficult to manage.

Time Will Tell

Overall, it seems like CVSS 4.0 is attempting to become more of an overall risk score rather than just a severity score. It is certainly welcome to be able to add additional context and take additional input to adjust the CVSS score as it applies to your environment and business. However, the new standard adds complexity and subjectivity that will make it difficult for customers and regulators to assess the true risk of a vulnerability in a common way across the industry. Security teams will need to be particularly diligent in documenting a robust process for how they adjust CVSS scores to avoid being accused of arbitrarily lowering them to make their company look better.

Software Supply Chain Security Considerations

Over the past five years there has been increased scrutiny on the security of supply chains, and in particular software supply chains. The SolarWinds attack in 2020 brought this issue to the foreground as yet another requirement for a well-rounded security program, and it has since been codified into several security guidelines, such as the Biden Administration Executive Order in 2021 and CISA software supply chain best practices. As businesses shift their software development practices to DevOps and cloud, CSOs need to make sure software supply chain security is one of the components that is measured, monitored and controlled as part of a well-rounded security program.

How Did We Get Here?

The use of open source software has increased over the past two decades largely because it is cheaper and faster to use an existing piece of software rather than spend time developing it yourself. This allows software development teams quicker time to market because they don’t have to re-invent software to perform certain functions and can instead focus on developing intellectual property that creates distinct value for their company.

There is also an alleged implicit advantage to using open source software. The idea is that open source software has more eyes on it and is therefore less prone to having malicious functions built into it and less prone to security vulnerabilities. This concept may work well for large open source projects with a large number of contributors, but it falls short for smaller projects with fewer contributors and resources. However, until the SolarWinds hack in 2020, the general attitude that open source software is more secure was applied to all projects regardless of size and funding. As we have learned, this flawed reasoning does not hold up and has allowed attackers to target companies through their software supply chain.

The challenge with open source software is that it is supposed to be a community-led effort. People use the software, but they are also supposed to contribute back to the project. However, as companies have embraced open source software, the two-way street has become biased towards taking and using the software rather than contributing back. If corporations contributed back in ways that were proportionate to their use of open source software, the maturity, security and quality of that software would be drastically improved.

What Are the Risks?

There are several inherent risks involved in using open source software and they are as follows:

Can You Really Trust The Source?

How do you know the software you are pulling down from the internet doesn’t have a backdoor built into it? How do you know it is free from vulnerabilities? Is this piece of software developed and supported by an experienced group of contributors or is it maintained by one person in their basement during their spare time?

The point is, it is fairly easy to masquerade as a legitimate and well supported open source software project. Yet, it is difficult to actually validate the true source of the software.

What Is The Cadence For Vulnerability Fixes And Software Updates?

The size and scope of the open source software project dictates how well it is supported. As we saw during Log4j some projects were able to push updates very quickly, but other smaller projects took time to resolve the full scope of the issue. Any company using open source software should consider how often a project is updated and set limits on the use of software that doesn’t have regular and timely updates.

May Actually Require An Increase In Resources

There are ways for companies to manage the risk of using open source software. Assuming you can trust the source, you can always pull down the source code and compile the software yourself. You can even fork the project to include fixes or other improvements to suit your specific application. However, this takes resources. It may not take the full number of resources that would be required if you wrote the software from scratch, but maintaining the builds, version control and the general Software Development Life Cycle will need to be properly resourced and supported.

Can Complicate External Stakeholder Management

Another issue with using open source software in your supply chain is external stakeholder management. The Biden EO in 2021 requires companies to provide a software bill of materials (SBOM) for software sold to the U.S. Government. This trend has also trickled down into 3rd party partner management between companies, where contractual terms are starting to ask for software bill of materials, vulnerability disclosure timelines and other security practices related to software.

One major issue with this is: it is possible for software to be listed as vulnerable even though there may be no way to exploit it. For example, a piece of software developed by a company may include an open source library that is vulnerable, but there is no way to actually exploit that vulnerability. This can cause an issue with external stakeholders, regulators, auditors, etc. when they see a vulnerability listed. These external stakeholders may request the vulnerability be resolved, which could draw resources away from other priorities that are higher risk.

Standardization Is Hard

Finally, standardizing and controlling the use of open source software as part of the SDLC is advantageous from a security perspective, but exceptionally difficult from a practicality perspective. Unwinding the use of high-risk or unapproved open source software can take a long time depending on how critical the software is to the application. If developers have to recreate and maintain the software internally, that takes development time away from new features or product updates. Similarly, getting teams to adopt standard practices – such as only allowing software that is a certain number of revisions out of date, only allowing software from approved sources and preventing vulnerable software from being used (sketched below) – takes time, but will pay dividends in the long run, especially for external stakeholder management and the creation of SBOMs.
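Here is a minimal sketch of what those standard practices could look like as an automated policy gate. The approved sources, staleness threshold and blocked CVE list are placeholder values for illustration, not a recommendation:

```python
from dataclasses import dataclass

# Illustrative policy values; real ones would come from your own standards.
APPROVED_SOURCES = {"repo1.maven.org", "pypi.org", "internal-mirror.example.com"}
MAX_VERSIONS_BEHIND = 2
BLOCKED_VULNS = {"CVE-2021-44228"}  # e.g. Log4Shell


@dataclass
class Dependency:
    name: str
    source: str
    versions_behind: int
    known_vulns: set[str]


def policy_violations(dep: Dependency) -> list[str]:
    """Return the list of policy checks this dependency fails."""
    problems = []
    if dep.source not in APPROVED_SOURCES:
        problems.append(f"{dep.name}: unapproved source {dep.source}")
    if dep.versions_behind > MAX_VERSIONS_BEHIND:
        problems.append(f"{dep.name}: {dep.versions_behind} releases out of date")
    if dep.known_vulns & BLOCKED_VULNS:
        problems.append(f"{dep.name}: contains blocked vulnerabilities")
    return problems


dep = Dependency("log4j-core", "repo1.maven.org", 5, {"CVE-2021-44228"})
for issue in policy_violations(dep):
    print(issue)
```

A gate like this would typically run in the CI pipeline so violations are caught before the dependency ever reaches a build, which is far cheaper than unwinding it later.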

Wrapping Up

Using open source software has distinct advantages such as efficiency and time to market, but it carries real risks that can be used as an attack vector into a company or their customers. CSOs need to include software supply chain security as one of the pillars of their security programs to identify, monitor and manage the risk of using open source software. Most importantly, a robust software supply chain security program should consider the most effective ways to manage risk while balancing external stakeholder expectations and without inadvertently driving up resource requirements.

Security Vendor Questionnaires: Too Much or Not Enough?

Over the past few years there has been an increasing trend of customers and partners asking security teams to fill out lengthy security questionnaires seeking specific details about the state of their security program. These requests often come as part of routine audits, regulatory requirements or contract negotiations. As someone who has both sent out questionnaires and been a recipient of them, I’m wondering whether the industry has gone too far down this path or hasn’t gone far enough. Let me explain…

As a CSO, I want to discover and manage as much risk as possible. This includes conducting business with partners, customers and other companies. I want to understand my supply chain and limit my exposure to any of their security weaknesses that could be used to attack my company. However, I also want to limit the amount of information I disclose about my security program because once I disclose it, I no longer control that information and it could eventually make its way to an adversary if the company I disclosed it to has a breach.

How do we balance these differing requirements and are security questionnaires really the best mechanism for understanding your supply chain?

How Did We Get Here?

Let’s take a step back and consider how we collectively arrived at the need for security questionnaires. There have been several high profile breaches that have set us down this path. The first was the Target breach in 2013 where Target had their point of sale systems compromised as a result of a third party HVAC vendor. The magnitude of this breach along with the realization that Target was compromised via a third party placed a spotlight on supply chain security for the entire industry.

The second high profile breach was the SolarWinds attack in 2020. This attack infiltrated the software supply chain of SolarWinds and placed a backdoor in the product. Given that SolarWinds was used by a huge number of companies, this effectively compromised the software supply chain of those companies as well. The attack increased the scrutiny on the supply chain, with additional emphasis on software supply chain security, even leading to some sectors (like the government) requiring disclosure of a Software Bill of Materials (SBOM).

Increased Regulatory Pressure

These notable attacks (among others) have led to an increase in regulations that force companies to disclose details around security breaches and to invest appropriately in security programs. Despite these investments and disclosures, companies can still face steep fines and costly lawsuits for security breaches. New regulations such as the SEC Cyber Risk Management rules, along with recent White House actions like the Executive Order on Improving the Nation’s Cybersecurity and the National Cybersecurity Strategy, have elevated awareness and focus on supply chain security to the national stage.

Cyber Insurance Isn’t Helping

As cybersecurity insurance premiums become more and more expensive, companies will continue to look for ways to decrease the cost while still maintaining coverage. One of the most effective ways to do this is to establish and document a mature security program that you review in detail with your insurer to explain your risks and how you are mitigating them with appropriate controls. Questionnaires are one part of a security program that can demonstrate how you are evaluating and managing supply chain risk and hopefully drive down your premiums (for now). The problem is this creates an incentive where it is every company for itself in an attempt to lower its own premiums.

Transparency Is Lacking

The biggest issue security questionnaires are attempting to address is a lack of transparency into the details of security programs. In general, large publicly traded companies (particularly cloud companies) and security product companies tend to be more transparent about security because it is built into their brand as a selling point to attract customers. However, details about technologies, program structure, response times, etc. generally lack specificity (for good reason), and the security questionnaire is an attempt to uncover those details to understand what risks exist when entering into a relationship with another company.

You might argue that companies should simply be more transparent with details about their security program, but this is not the solution. Companies should cover high level details with some specificity to demonstrate they have a security program and how it is structured. However, giving specific details about processes, response times, technologies, etc. will reveal details that can be used by an adversary for an attack. Additionally, do we really know what is happening with all this data from security questionnaires? It may be protected under Non-Disclosure Agreements (NDAs) and confidentiality agreements, but that doesn’t prevent the data from being leaked via an unprotected S3 bucket. It is extremely difficult to change a security program quickly and so it may be in the best interest of the responding company to refuse to answer the questionnaire and instead have an undocumented conversation (depending on your level of paranoia).

Yet More Audit and Regulatory Pressure

On top of all these issues, there are still very real requirements to respond to audits and questions from regulators, or to provide these responses to customers and partners that operate in heavily regulated industries (finance, healthcare, government, etc.). Responding to the questionnaires takes time, places a burden on your security team and still carries the risk that the information could be involuntarily disclosed to an adversary.

What’s Really Going On Here?

Responding to regulatory and audit requirements isn’t new for our industry, and answering security questionnaires has been the norm for quite some time. However, I think the security questionnaire has been hijacked by the industry as a catch-all way to accomplish a few things with respect to security:

  1. Assert your security requirements over another company (may work for small companies if they can fund it, but generally doesn’t work for large companies with mature programs).
  2. Minimize risk of doing business with and potentially pass on liability to the recipient company. I.e. “We asked them if they did this thing, but we got breached because of them so they clearly didn’t do that thing so it is their fault.”
  3. Create negotiating points as part of contract negotiations for concessions.

The problem with these is they are attempting to impose a solution or liability on a program they don’t control. As a CSO I can’t agree to these things because being contractually obligated to a specific security solution or SLAs removes my decision making power for how to best manage that risk within my security program. Security programs change and are constantly adapting to stay ahead of threats and risks to the business. Being boxed into a solution contractually can actually create a risk, where there wouldn’t normally be one.

Fatigue Is Real

I genuinely struggle with this problem because security questionnaires have their uses, but they are causing real fatigue across the industry. The questions fall into one of two categories: either they are largely the same across customers or they are completely bonkers and don’t justify a response. As both a sender and recipient of questionnaires I can definitely understand both sides of the issue. I want as much information as possible about my supply chain and customers, but I want to minimize the specifics I share outside of my control. I want lower cyber insurance premiums and I want to pass all my audits and regulatory inquiries. However, I think the industry has deviated from the original intent of the security questionnaire due to the real fear of being held liable for a failure in a security program, which includes the supply chain.

A CISO Primer On Navigating Build vs Buy Decisions

Every year CISOs propose and are allocated annual budgets to accomplish their goals for the upcoming year. Within these budgets are allocations for purchasing tooling or hiring new headcount. As part of this exercise CISOs and their respective security teams are asking: should we build this thing ourselves or should we just buy it? It may be tempting to simply buy a tool or to build it yourselves, but both options have advantages and disadvantages. Here are my thoughts on how CISOs should think about this classic business problem.

Strategic Considerations

The first question I ask myself and my team is – will building this thing ourselves become a strategic capability or differentiator for our business? If we build it can we use it ourselves and sell it to customers? Are we building a capability unique to the industry that could lead to patents or a competitive advantage? Most importantly, do we have the resources to develop, maintain and support this capability for the indefinite future? If the answer to these questions is yes, then you should consider building the capability yourself, but this also comes with a cost in terms of people resources.

Use Of People Resources

While building a capability can look attractive at first, it generally has long-term costs that can easily add up to more than the cost of simply purchasing a tool or capability. This is because CISOs will need to staff engineers or developers to build the thing. That means hiring (or borrowing) resources with coding skills, database skills, AI/big data skills and a bunch of other skills that aren’t typically core skills on a traditional security team.

Let’s say you will need to hire or borrow people to build your new thing. These people have salaries, benefits, bonuses, equipment costs, facilities costs and other expenses that can easily cost as much as (if not more than) the annual cost of purchasing a tool. Additionally, if you hired people, they can’t just move on once the thing is built. They will need to support it, maintain it, etc. If you borrowed resources then you will need to figure out who is going to handle ongoing operations and maintenance of the tool and you need to consider the opportunity cost of using these borrowed resources to build something for you instead of doing something else for the business that could have value.
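One way to make the hidden people cost concrete is a simple total-cost comparison over the life of the capability. Every figure below is a placeholder purely for illustration, not a benchmark:

```python
def total_cost_to_build(engineers: int, loaded_cost_per_engineer: float,
                        build_years: float, support_engineers: int,
                        horizon_years: int) -> float:
    """Rough build-side cost: full team during the build, a smaller team
    for ongoing operations and maintenance afterwards."""
    build = engineers * loaded_cost_per_engineer * build_years
    run = support_engineers * loaded_cost_per_engineer * (horizon_years - build_years)
    return build + run


def total_cost_to_buy(annual_license: float, integration_cost: float,
                      horizon_years: int) -> float:
    """Rough buy-side cost: recurring license plus one-time integration."""
    return annual_license * horizon_years + integration_cost


# Placeholder figures purely for illustration.
build = total_cost_to_build(engineers=3, loaded_cost_per_engineer=250_000,
                            build_years=1, support_engineers=1, horizon_years=5)
buy = total_cost_to_buy(annual_license=180_000, integration_cost=100_000,
                        horizon_years=5)
print(f"build over 5 years: ${build:,.0f}")   # $1,750,000
print(f"buy over 5 years:   ${buy:,.0f}")     # $1,000,000
```

In this illustration the ongoing support years, not the initial build, dominate the build-side cost, which is exactly the hidden expense described above.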

The point is people aren’t cheap and they tend to be the most valuable resources for a business. Using these resources wisely and in a cost effective way is an important consideration for every CISO.

Financial Allocation (CAPEX vs OPEX)

One other consideration for Build vs Buy is how your company allocates financial costs towards either CAPEX or OPEX. The reason this is something to consider is it may be easier to get OPEX budget than CAPEX (or vice versa). This can influence your decision to buy something over building it depending on how finance wants you to allocate the cost (or how easy it is to get budget in one of these buckets).

Time To Deploy

Another consideration for Build vs Buy is – when do you need the capability and how long will it take to build it vs how long it will take to buy something and deploy it? If you need the capability immediately it may make sense to buy the tool and deploy it rather than trying to hire resources, onboard them, build the thing, support it, etc.

Integration Costs

Similarly, integration costs can be a huge factor towards whether the capability is truly effective or not. For example, if you can stand up the new thing relatively quickly, but it takes six months or a year to integrate it into your existing tools then that could throw your overall timeline off and may sway your decision towards building it yourselves instead.

Security Considerations of SaaS / Cloud Products

Lastly, and most importantly, CISOs need to think about the security considerations of buying a product versus building it in house. Software supply chain security is a top risk for businesses, and CISOs need to evaluate new tooling to see whether it adheres to the security requirements set by the CISO. If the product is a SaaS or cloud product, then CISOs need to think about the risk of sending their data, source code or other information to a third party environment they don’t directly control. Similarly, if the CISO chooses to build the capability in house, then they need to make sure the team is making the new capability as secure as possible so the business and their customers aren’t exposed to unnecessary risk.

Wrapping Up

Choosing to build or buy a new capability isn’t an easy decision. Both decisions have explicit and hidden costs that can be difficult to navigate. Like any decision the CISO should weigh the risk of the decision and ultimately choose the option that supports the strategic direction of the business, meets financial and budgeting requirements and is sustainable by the security organization for the life of the capability.

Chip War Book Afterthoughts

I recently read Chip War by Chris Miller and found it to be a thought-provoking exploration of the global supply chain for semiconductors. Most interesting was the historical context and economic analysis of the complexities of the current semiconductor supply chain and how the United States has wielded this technology as an ambassador of democracy across the globe. The book was particularly interesting when considering the recent efforts by the U.S. administration to revitalize semiconductor manufacturing in the United States via the CHIPS Act. Even though the U.S. maintains control over this industry, that control is waning, which places the U.S. at risk of losing military and economic superiority.

The US Leads With Cutting Edge Design & Research

One advantage maintained by the U.S. is that it leads the way in the latest chip design and research. The latest chip architectures increase computing power by shrinking transistors to smaller and smaller sizes, roughly following Moore’s Law of doubling the number of transistors per chip every two years. In the late 1970s, the United States was quick to recognize the military and economic advantages provided by semiconductors. Almost overnight, bombs became more accurate and computing became more powerful, allowing decisions to be made more quickly and spawning an entirely new industry based on these chips. However, as the U.S. began to rely more and more on semiconductors, the cost needed to come down. This was achieved by outsourcing the labor to cheaper locations (mainly in Asia), which subsequently made those countries reliant on U.S. demand for chips. This allowed the United States to influence these countries to its advantage.

A Technology with Geo-Political Consequences

One side effect of outsourcing the manufacturing of semiconductors is that the supply chain quickly became dispersed across the globe. Leading research was conducted in the United States, specialized equipment was manufactured in Europe and cheap labor in Asia completed the package. Until recently, most of this supply chain was driven by the top chip companies such as AMD, Intel and Nvidia. However, other countries, such as China, have recognized the huge economic and military advantages offered by semiconductors and as a result have started chipping away (pun intended) at the United States’ control of the semiconductor supply chain.

The US Can’t Compete On Manufacturing Costs

Despite the passing of the CHIPS Act, the United States faces a significant battle to wrest chip manufacturing from the countries in Asia (mainly Taiwan). The cost of labor in the United States is significantly higher than in other countries. Additionally, countries such as Taiwan, South Korea, Japan, Vietnam and China have heavily subsidized chip manufacturing in order to maintain a foothold in the global supply chain. In order to compete, the United States will have to make an extreme effort to bring all aspects of manufacturing into the country, including heavy tax breaks and subsidies. This will effectively turn into economic warfare on a global scale as the top chip manufacturing countries attempt to drive down costs in order to be the most attractive location for manufacturing.

Supply Chain Choke Points Are Controlled by the US and its Allies (For Now)

However, driving down costs won’t be easy. The highly specialized equipment required to manufacture chips needs to be refreshed every time there is a new breakthrough. The costs are tremendous and make it difficult to break into the industry. Instead, the U.S. has been focusing on maintaining control of particular choke points in the supply chain and even blocking acquisitions of strategic companies by foreign entities. The United States also exerts pressure on the countries within this global supply chain to allow it to maintain an advantage. Yet, as new powers rise (China) and seek to control their own supply chains, these choke points will dwindle. Additionally, as countries that aren’t U.S. allies (frenemies?) gain market share in the chip supply chain, the U.S. and its allies need to consider the security of the chips they are receiving from those countries.

Final Thoughts

Chip War by Chris Miller is a fascinating look into the history and global supply chain of semiconductors. For the past 50 years the United States has maintained military and economic advantages over its rivals as a result of semiconductors. However, this advantage has been waning over the past two decades. The CHIPS Act is recognition that the United States must begin to claw back some of the globalization of the supply chain and bring critical parts of the industry back to the U.S. in order to maintain economic and military superiority in the future.