Are Traditional IT Roles Still Relevant In Today’s Modern Security Org?

As more and more businesses shift to the cloud and microservices, the scope of responsibility for security and operations gets pushed up the stack. As a result of this scope compression, teams no longer need to worry about maintaining physical infrastructure like deploying servers, provisioning storage systems or managing network devices. As that responsibility falls away, the question becomes – are traditional IT roles still relevant in today’s modern security org?

Cloud Service Models

First, let’s talk about the cloud service models most companies will consume, because this will determine what roles you need within your security organization. This post also assumes you are not working at a hyper-scale cloud provider like AWS, Azure, Google Cloud or Oracle, because those companies still deploy hardware as part of the services they consume internally and provide to their customers.

Infrastructure as a Service (IaaS)

Infrastructure as a Service (IaaS) is what you typically think of when you consume resources from a Cloud Service Provider (CSP). In IaaS, the CSP provides and manages the underlying infrastructure of network, storage and compute. The customer is responsible for managing how they consume these resources and any applications built on top of the underlying IaaS.

Platform as a Service (PaaS)

In Platform as a Service (PaaS), the cloud service provider manages the underlying infrastructure and provides a platform for customers to develop applications. All the customer needs to do is write and deploy an application onto the platform.

Software as a Service (SaaS)

With Software as a Service (SaaS), customers consume software provided by the cloud service provider. All the customer needs to worry about is bringing their own data or figuring out how to apply the SaaS to their business.

IaaS, PaaS & SaaS Cloud Service Provider Logical Model

As you can see from the above model, organizations that adopt cloud services will only have to manage security at certain layers in the stack (there is some nuance to this, but let’s keep it simple for now).

What Are Some Traditional IT Roles?

There are a variety of traditional information technology (IT) roles that exist when an organization manages its own hardware, network connections and data centers. Some or all of these roles will no longer apply as companies shift to the cloud. Here is a short list of those roles:

  • Hardware Engineer – Server and hardware selection, provisioning, maintenance and management (racking and stacking)
  • Data Center Engineer – Experience designing and managing data centers and physical facilities (heating, cooling, cabling, power)
  • Virtualization Administrator – Experience with hypervisors and virtualization technologies*
  • Storage Engineer – Experience designing, deploying and provisioning physical storage
  • Network Engineer – Experience with a variety of network technologies at OSI layer 2 and layer 3 such as BGP, OSPF, routing and switching

*May still be needed if organizations choose to deploy virtualization technologies on top of IaaS

Who Performs Traditional IT Roles In The Cloud?

Why don’t organizations need these traditional IT roles anymore? Because of the shared responsibility model that exists in the cloud. As a customer of a cloud service provider, you are paying that CSP to make it easy for you to consume these resources. As a result, you don’t have to worry about the capital expenditure of purchasing hardware or the financial accounting jujitsu needed to amortize or depreciate those assets.

In the shared responsibility model, the CSP maintains everything below the boundary of the service model you are consuming. For example, in the IaaS model, the CSP will provide you with the network, storage and compute resources you have requested. Behind the scenes they will make sure all of these things are up to date, patched, properly cooled, properly powered, accessible and reliable. As a CSP IaaS customer, you are responsible for maintaining anything you deploy into the cloud. This means you need to maintain and update the OS, platform, services and applications that you install or create on top of IaaS as part of your business model.

Everything Is Code

One advantage of moving to the cloud is that everything becomes “code”. In an IaaS model this means requesting storage, networking and compute, deploying the OS and building your application are all code. Because everything is code, you no longer need dedicated roles to provision or configure the underlying IaaS. Single teams of developers can now provision infrastructure and deploy applications on demand. This skillset shift resulted in an organizational shift that spawned the terms DevOps (development and operations) and continuous integration / continuous delivery (CI/CD). Now you have whole teams deploying and operating in a continuous model.
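
To make this concrete, here is a minimal sketch of what requesting infrastructure “as code” can look like, using Python and the AWS boto3 SDK (the region, AMI ID and tag names are placeholder assumptions; real deployments usually use a declarative tool like Terraform or CloudFormation):

```python
# Hypothetical sketch: request compute from an IaaS provider entirely in code.
# The region, AMI ID and tag names are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # a pre-hardened, patched OS image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "owner", "Value": "web-team"},
                 {"Key": "environment", "Value": "staging"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

Because the request is just code, it can be reviewed, versioned and repeated, which is what lets a single team provision infrastructure and deploy applications without a dedicated provisioning role.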

Shift From Dedicated Roles To Breadth Of Skills

Ok, but don’t we still need traditional IT skills in security? Yes, yes you do. You need the skills, but not a dedicated role.

Imagine a model where everyone at your company works remotely from home and your business model is cloud native, using PaaS to deploy your custom application. As the CISO of this organization, what roles do you need in your security team?

From a business standpoint, you still need to worry about data and how it flows, and about how your applications are used and can be abused, but your team will primarily be focused on making sure the code your business uses to deploy resources and applications in the cloud is secure. You also need to make sure your business is following applicable laws and regulations. However, you will no longer need dedicated people managing firewalls and routers or hardening servers.

What you will need are people with an understanding of technologies like identity, networking, storage and operating systems. These skills are necessary so your security team can validate that resources are being consumed securely. You will also need a lot of people who understand application security, and you will need compliance folks to make sure the services you are consuming are following best practices (like SOC 2 and SOC 3 reports).

What Do You Recommend For People Who Want To Get Into Security Or Are Deciding On A Career Path?

I want to wrap up this post by talking about skills I think people need to get into security. Security is a wonderful field because there are so many different specialization areas. Anyone with enough time and motivation can learn about the different areas of security. In fact, the U.S. Government is kind enough to publish a ton of frameworks and documents talking about all aspects of security if you have the time and motivation to read them. That being said, if I was just starting out in security I would advise people to first pick something that interests them.

  • Are you motivated by building things? Learn how to be a security engineer or application security engineer. Learn how to script, write code and be familiar with a variety of technologies.
  • Are you motivated by breaking things? Learn how to be a penetration tester, threat hunter or offensive security engineer.
  • Do you like legal topics, regulations and following the rules? Look into becoming an auditor or compliance specialist.
  • Do you like detective work, investigating problems and periodic excitement? Learn how to be an incident response or security operations analyst.

Ask Questions For Understanding

The above questions and recommendations are just the tip of the iceberg for security. My biggest piece of advice is, once you find an area that interests you, start asking a lot of questions. Don’t take it for granted that your CSP magically provides you with whatever resources you ask for. Figure out how that works. Don’t blindly accept a new regulation. Dissect it and understand the motivation behind it. Don’t blindly follow an incident response playbook. Understand why the steps exist and make suggestions to improve it. If a new vulnerability is released that impacts your product, understand how and why it is vulnerable. The point is, as a security professional, the more understanding you have of why things exist, how they work and what options you have for managing them, the more skills you will add to your resume and the more successful you will be in your career, especially as your security org collapses roles as a result of moving to the cloud.

We Are Drowning In Patches (and what to do about it)

Last week I had an interesting discussion with some friends about how to prioritize patches using criticality and a risk-based approach. After the discussion I started thinking about how nice it would be if we could all just automatically patch everything and not have to worry about prioritization and the never-ending backlog of patches, but unfortunately this isn’t a reality for the majority of organizations.

What’s the problem?

There are several issues that create a huge backlog of patches for organizations.

First, let’s talk about the patching landscape organizations need to deal with. This is largely split into two different areas. The first area is operating system (OS) and service patches. These are patches released periodically for the operating systems the business uses to run applications or products. Production workloads will commonly run either Windows or Linux, and both receive stability, security and new feature patches on a regular cadence.

Second, there are patches for software libraries that are included in the software and applications developed by your business. Typically these are lumped into the category of 3rd party libraries, which means your organization didn’t write these libraries, but they are included in your software. 3rd party library security vulnerabilities have become a huge issue over the last decade (but that’s a blog post for another day).

These two patch types, OS and 3rd party library patches, require different approaches to discover, manage and remediate, which is the first challenge for auto patching. When combined with the volume of new vulnerabilities being discovered, large heterogeneous environments and the need to keep business critical applications available, keeping your assets patched and up to date becomes a real challenge.

Why isn’t auto patching a thing?

Well it is, but…

There are a few challenges to overcome before you can auto-patch.

Stability and Functionality

First, both operating system and 3rd party library patches need to be tested for stability and functionality. Patches usually fix some sort of issue or introduce new features, but they can also cause regressions in other areas such as stability or functionality. Rolling back patches and restoring business critical applications to a stable version can be a complex process, which is why most businesses test their patches in a staging environment before rolling them out to production. Cash is king and businesses want to minimize any disruption to cash flow.

Investment and Maturity

It is possible to automate testing for stability and functionality, but this requires a level of maturity and investment that most organizations haven’t achieved. For example, assuming your staging environment is a mirror image of your production environment (it is, right?), you could automatically apply the patches in staging, automatically check for stability and functionality over a set period of time and then roll those updates to production with minimal interaction. However, if your environment requires reboots or you have limited resources, patching may require downtime, which could impact making money.

Having an environment that can support multiple versions, seamless cutover, proper load balancing, caching, etc. requires significant investment. Typically this investment is justified by keeping your products functioning and making money even if something goes wrong, but it can also be used to absorb maintenance activities such as patching without disruption.
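
As a rough sketch of that kind of automation, the flow below applies patches to staging, watches health checks over a soak period, and only then promotes to production. The helper functions passed in (apply_patches, staging_is_healthy, promote_to_production, roll_back) are hypothetical stand-ins for whatever tooling your organization actually uses:

```python
# Hypothetical auto-patching flow: patch staging, soak, then promote.
# All helper functions are placeholders for your own tooling/APIs.
import time

SOAK_PERIOD_HOURS = 24
CHECK_INTERVAL_SECONDS = 300

def auto_patch(patches, apply_patches, staging_is_healthy,
               promote_to_production, roll_back):
    apply_patches("staging", patches)

    deadline = time.time() + SOAK_PERIOD_HOURS * 3600
    while time.time() < deadline:
        if not staging_is_healthy():
            # Stability or functionality regression: stop and roll back staging.
            roll_back("staging")
            return False
        time.sleep(CHECK_INTERVAL_SECONDS)

    # Staging stayed healthy for the whole soak period; roll forward.
    promote_to_production(patches)
    return True
```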

Software Development Lifecycle

The last section assumes a level of software development maturity such as adoption of Agile development processes and CI/CD (continuous integration / continuous delivery). However, if your engineering group uses a different development process such as Incremental or Waterfall, then patching may become even more difficult because you are now competing with additional constraints and priorities.

What are some strategies to prioritize patching and reduce volume?

If your business runs products that aren’t mission critical, or you simply can’t justify the investment to operate an environment without down time, then auto patching probably isn’t a reality for you unless you are Austin Powers and like to live dangerously. For most organizations, you will need to come up with a strategy to prioritize patching and reduce the volume down to a manageable level.

Interestingly, this problem space has had a bunch of brain power dedicated to it over the years because it resembles a knapsack problem, a class of problems well studied in mathematics, computer science and economics. In a knapsack problem you have a finite amount of a resource (space, time, etc.) and you want to optimize the use of that resource to maximize some value. In the case of patching, this would mean applying the largest volume of the highest severity patches in a fixed time period to realize the maximum risk reduction possible.
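
To make the analogy concrete, here is a toy 0/1 knapsack sketch that picks the set of patches with the highest total risk reduction that fits inside a fixed maintenance window. The patch names, hours and risk scores are made up for the example:

```python
# Toy 0/1 knapsack: maximize risk reduction within a fixed maintenance window.
# Patch data (hours to apply, risk-reduction score) is illustrative only.
patches = [
    {"name": "openssl-critical", "hours": 2, "risk_reduction": 9},
    {"name": "kernel-high",      "hours": 4, "risk_reduction": 7},
    {"name": "nginx-medium",     "hours": 1, "risk_reduction": 4},
    {"name": "lib-xyz-low",      "hours": 3, "risk_reduction": 2},
]
window_hours = 6

# dp[t] = (best total risk reduction using at most t hours, patches chosen)
dp = [(0, [])] * (window_hours + 1)
for p in patches:
    for t in range(window_hours, p["hours"] - 1, -1):
        candidate = (dp[t - p["hours"]][0] + p["risk_reduction"],
                     dp[t - p["hours"]][1] + [p["name"]])
        if candidate[0] > dp[t][0]:
            dp[t] = candidate

print(dp[window_hours])  # (16, ['openssl-critical', 'kernel-high'])
```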

Critical Assets First

Staying in the knapsack problem space, one strategy is to start with your most critical assets and apply the highest severity patches until you reach your threshold for risk tolerance. This requires your organization to have an up to date asset inventory and have categorized your assets based on business criticality and risk. For example, let’s say you have two applications at your business. One is a mission critical application for customers and generates 80% of your annual revenue. The other application provides non-mission critical functionality and accounts for the other 20% of revenue. Your risk tolerance based on your company policies is to apply all critical and high patches within 72 hours of release. In this example you would apply all critical and high patches to the mission critical application as quickly as possible (assuming other requirements are met like availability, etc.).
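
A minimal sketch of that ordering might look like the following, where open findings are sorted by asset criticality and then patch severity, and compared against an assumed 72 hour SLA. The data model, criticality values and severity ranking are illustrative only:

```python
# Illustrative "critical assets first" ordering of open patch findings.
from datetime import datetime, timedelta, timezone

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}
SLA = timedelta(hours=72)  # example policy: critical/high patched within 72h

findings = [
    {"asset": "billing-api",   "criticality": 1, "severity": "high",
     "released": datetime(2023, 1, 2, tzinfo=timezone.utc)},
    {"asset": "internal-wiki", "criticality": 3, "severity": "critical",
     "released": datetime(2023, 1, 1, tzinfo=timezone.utc)},
]

# Most critical assets first, then most severe patches first.
queue = sorted(findings,
               key=lambda f: (f["criticality"], SEVERITY_RANK[f["severity"]]))

for f in queue:
    overdue = datetime.now(timezone.utc) - f["released"] > SLA
    print(f["asset"], f["severity"], "OVERDUE" if overdue else "within SLA")
```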

Guard Rails and Gates

Another strategy for reducing volume is to have guard rails or gates as part of your software development lifecycle. This means your engineering teams will be required to pass through these gates at different stages before being allowed to go to production. For example, your organization may have a policy that no critical vulnerabilities are allowed in production applications. The security organization creates a gate that scans for OS and 3rd party library vulnerabilities whenever an engineering team attempts to make changes to the production environment (like pushing new features). This way the engineering team needs to resolve any vulnerability findings and apply patches at regular intervals coinciding with changes to production.
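
In practice a gate like this is often just a small script in the pipeline that parses the scanner’s output and fails the build when policy is violated. The sketch below assumes a scanner has already written JSON findings with a severity field to scan-results.json; the file name, format and policy are hypothetical:

```python
# Hypothetical CI gate: fail the pipeline if blocked severities are present.
# Assumes a scanner has already written findings to scan-results.json.
import json
import sys

BLOCKING_SEVERITIES = {"critical"}  # example policy: no criticals in production

with open("scan-results.json") as fh:
    findings = json.load(fh)

blocked = [f for f in findings if f.get("severity") in BLOCKING_SEVERITIES]

if blocked:
    for f in blocked:
        print(f"BLOCKED: {f.get('id', 'unknown')} ({f['severity']})")
    sys.exit(1)  # non-zero exit fails the CI job and stops the deploy

print("Gate passed: no blocking vulnerabilities found.")
```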

Wrapping Up

With the proliferation of open source software, the speed of development and the growing ability of researchers and attackers to find new vulnerabilities, patching has become an overwhelming problem for a lot of organizations. In fact, it is such a big problem that CISA and the Executive Order On Improving The Nation’s Cybersecurity list software patches and vulnerabilities as a key national security issue. I’ve outlined a few strategies to prioritize and reduce the volume of patches if your organization can’t justify the investment needed to patch without downtime or disruption. However, no matter what strategy you choose, all of them require strong fundamentals in asset inventory, asset categorization and defined risk tolerance. While these investments may seem tedious at first, the more disciplined you are about enforcing the security fundamentals (and engineering maturity), the less you will drown in patches and the closer your organization will come to the reality of auto-patching.

How Security Evolves As Organizations Move From the Datacenter To The Cloud And Beyond

Despite cloud growth slowing in the past quarter, the momentum of existing and planned cloud adoption remains. As a new or existing CISO, your organization may be just starting to migrate to the cloud or may be looking to improve efficiency by adopting newer technologies like Kubernetes. Wherever you are in your cloud journey, security needs to be at the forefront, with careful consideration for how your security org, its governance and its controls will evolve along the way.

Avoid The Free For All

I’ve been through multiple cloud migrations, and before anyone in your organization begins to migrate, the IT, Security and Finance organizations need to come together to lay the appropriate foundation in the new environment. This means you need to set up the appropriate structure for mapping and controlling costs. You also need to map all of your existing IT and security policies and controls into the new cloud environment before people migrate, to avoid having to do clean up later. It doesn’t have to be perfect right away, but doing some preparation and implementing guard rails before teams migrate will pay dividends later.

Not Everything Is Easier

As organizations migrate to the cloud, security teams need to consider how the tools and processes they rely on may change. For example, if you currently rely heavily on netflow or packet captures to monitor your networks, the methods to get the same visibility may be different in the cloud. Similarly, transferring large amounts of data or security events can incur significant costs, so your logging and SIEM infrastructure may need to be re-architected to keep the events as close as possible to the environment while only shipping the most critical events to a centralized location.
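
One common pattern is to filter events where they are generated and only forward the highest-value ones to the central SIEM. The sketch below is a generic filter; the severity ordering, threshold and the store_locally / ship_to_siem callables are placeholders for whatever storage and transport you actually use:

```python
# Hypothetical edge filter: keep full logs close to the environment,
# ship only the most critical events to the centralized SIEM.
SEVERITY_ORDER = {"debug": 0, "info": 1, "warning": 2, "high": 3, "critical": 4}
SHIP_THRESHOLD = "high"  # tune to balance visibility against egress cost

def forward_events(events, store_locally, ship_to_siem):
    for event in events:
        store_locally(event)  # everything stays in cheap, local storage
        severity = SEVERITY_ORDER.get(event.get("severity"), 0)
        if severity >= SEVERITY_ORDER[SHIP_THRESHOLD]:
            ship_to_siem(event)  # only the most critical events leave the region
```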

Penetration tests are also different in the cloud. If you regularly penetration test your environment or have third parties conduct pentests for contractual, compliance or regulatory reasons, then these will need to be scheduled and coordinated with your cloud provider so you don’t accidentally disrupt another customer. When you move to the cloud you no longer “own” or control the network and so you have to operate within the terms laid out by your cloud provider. As a result, pentests may be less frequent or may need to have their scope adjusted as appropriate for the environment.

Asset inventory may also change. If you are used to assigning your own DHCP addresses and having those addresses remain relatively static in your inventory, this will change in the cloud. Your asset inventory will change based on how frequently your organization spins resources up and down, which could be every few hours or days. Your associated inventory, reporting, vulnerability scanning, etc. will all need to be adjusted to the frequency of resource utilization, and this can make tracing security events difficult if your inventory isn’t correct.
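
In practice this usually means rebuilding the inventory from the cloud provider’s APIs on a schedule rather than relying on static records. Here is a rough sketch using Python and boto3 (AWS is assumed, and the owner tag name is a placeholder):

```python
# Rough sketch: rebuild an instance inventory snapshot from the cloud API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
inventory = []

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            inventory.append({
                "instance_id": instance["InstanceId"],
                "private_ip": instance.get("PrivateIpAddress"),
                "launch_time": instance["LaunchTime"].isoformat(),
                "owner": tags.get("owner", "unknown"),  # placeholder tag name
            })

print(f"{len(inventory)} instances in the current snapshot")
```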

Processes aren’t the only thing that will need to be adapted to the cloud. Let’s consider how the scope of security changes as you move to the cloud.

In The Beginning

Consider a traditional technology stack where an organization has purchased and manages the storage, network, compute, OS and software running in the stack.

In this model the security organization is responsible for ensuring not only the security of the physical environment, but the security of all of the other technology layers as well. In some ways this environment offers simplicity because a production application maps directly to a network port, firewall rule, operating system, physical server and dedicated storage. However, this simplicity comes with the full scope of securing the entire environment and technology stack. The leading tech companies largely moved away from this model in the early 2000s because it is inefficient in terms of resource utilization, portability of applications and velocity of deploying new software at scale.

Enter Virtualization

Organizations looking for more efficiency and better utilization of their technology assets found it as virtualization came onto the scene. Now companies can run multiple operating systems (OS) and application stacks on a single set of physical hardware.

For security teams, virtualization increases the density of their asset inventory compared to physical assets. This means the asset inventory no longer has a 1:1 correlation with physical assets and the attack surface for the organization will shift towards the OS, Application and Network layers. In this model security teams still need to focus on the full scope of security, but it also allows the organization to begin taking advantage of modern IT infrastructure and deployment concepts.

One extremely important concept is the idea of immutable infrastructure. With immutable infrastructure the organization no longer makes changes to things in production. Instead, they update, patch or improve on their virtual machine images and production applications in their development or test environments and then push those into production. This means development teams can increase the velocity of the software development lifecycle (SDLC) by fixing once and deploying many times. It also means security teams can more tightly control the production environment, which is the highest area of risk for the business.
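
As a rough sketch of the idea, the “fix” is baked into a new image version that is rolled forward, rather than patching a running machine. The helper functions below are hypothetical stand-ins for your image and deployment pipeline (for example Packer builds, launch template updates and rolling replacements):

```python
# Hypothetical immutable-infrastructure flow: never patch in place, roll forward.
# build_image, test_image, update_launch_template and rolling_replace are
# placeholders for whatever image/deploy tooling your organization uses.
def release_patched_image(base_image, patches,
                          build_image, test_image,
                          update_launch_template, rolling_replace):
    new_image = build_image(base_image, patches)   # bake patches into a new image

    if not test_image(new_image):                  # validate in dev/test first
        raise RuntimeError("patched image failed validation; production untouched")

    update_launch_template(new_image)              # production now points at the fix
    rolling_replace()                              # old instances drained and replaced
    return new_image
```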

Moving To The Cloud

At some point your organization may make the decision to migrate to the cloud. Migrating to the cloud offers a number of benefits such as no longer having to purchase and manage depreciating assets, no longer having to staff people to physically manage hardware, no longer having to pay to protect and insure physical assets, increased development velocity and the ability to scale compute, storage and network as needed.

For the security organization, moving to the cloud means you no longer need to worry about physical assets such as network, storage or compute. Your cloud provider now takes care of those layers and so your team has reduced physical scope, but increased logical scope, which results in increased attack surface. Development teams can now deploy with increased velocity and so it is incredibly important to enforce good security hygiene. Shifting security as far left as possible within the CI/CD pipeline and automating the security checks are incredibly important. Similarly, putting guard rails in place to control the environment will be really important to avoid magnifying security issues at scale. Some things to think about are:

  • Tagging is required for security, finance, development, etc.; otherwise the deployment fails or the instance is shut down
  • Object storage private and encrypted by default (see the sketch after this list)
  • Only specific and required network ports allowed
  • NACLs, ACLs, WAF and/or proxies configured and deployed by default based on service or application
  • Applications are not allowed in production with critical or high vulnerabilities
  • Security logging at each layer sent to object storage, filtered and then sent to a centralized SIEM
  • Control software libraries to minimize software supply chain risks
  • OS images patched, hardened and loaded with required agents
  • Identifying and controlling the flow of data to avoid data leakage
  • Setting and enforcing data retention policies, not only to control costs but also to reduce the volume of data that needs to be protected
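
As one example of the object storage guard rail above, a periodic check can verify that buckets block public access and have default encryption enabled. This is a sketch using Python and boto3, assuming AWS S3; adapt the checks to your own CSP and policy:

```python
# Sketch: flag object storage buckets that are not private or not encrypted by default.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        is_private = all(block.values())      # all four public access blocks enabled
    except ClientError:
        is_private = False                    # no public access block configured
    try:
        s3.get_bucket_encryption(Bucket=name)
        is_encrypted = True
    except ClientError:
        is_encrypted = False                  # no default encryption configured
    if not (is_private and is_encrypted):
        print(f"GUARD RAIL VIOLATION: {name} (private={is_private}, encrypted={is_encrypted})")
```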

Moving to the cloud allows organizations to dramatically improve the velocity of development, and as a result security teams need to shift their controls left in order to improve security and visibility without impeding velocity.

Commoditizing The OS Layer

Lastly, once organizations are in the cloud they can begin to ask questions like – what if the OS didn’t matter? What if memory, compute, storage and everything below the application layer were taken care of automatically and all developers needed to worry about was the actual application? Enter containers and Kubernetes.

Containers and Kubernetes allow organizations to scale their applications with incredible speed. All developers need to do is package up their application in a container, deploy it into the cluster and let everything else happen automatically. This model presents both a challenge and an opportunity for security teams.

First, all of the security checks we discussed previously need to happen within the build process and deployment pipeline to make sure organizations aren’t amplifying a vulnerability across their applications.

Second, security teams will continue to make sure the underlying Kubernetes clusters meet their security requirements, but the main focus will be on the application layer. Controlling ingress and egress of network traffic going to the application, making sure software libraries are approved and free of vulnerabilities, and ensuring software security checks like SAST, DAST and even fuzzing of interfaces are performed before deploying to production will be incredibly important. It will also be important to maintain an inventory, but this won’t be a typical inventory of who owns an OS or compute instance. Instead, this inventory will map which team owns a particular application. This will be important for events like Log4j so the appropriate dev team can quickly identify and remediate vulnerable software libraries or flaws in their applications and then re-deploy. Remember, the environment should be immutable, so security teams will need to scan, monitor and respond to vulnerabilities detected in production quickly since the attack surface will be much larger in this model.
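
A sketch of that application-to-team inventory, and how it might be queried during a Log4j-style event, is below. The inventory contents and the simplified library lists are made up for the example:

```python
# Illustrative app-to-team inventory lookup during a vulnerable-library event.
inventory = {
    "payments-api":   {"team": "payments", "libraries": {"log4j-core": "2.14.1"}},
    "search-service": {"team": "platform", "libraries": {"lucene": "8.11.0"}},
}

def teams_to_notify(vulnerable_library):
    """Return which teams own applications that include the affected library."""
    return {
        app: meta["team"]
        for app, meta in inventory.items()
        if vulnerable_library in meta["libraries"]
    }

print(teams_to_notify("log4j-core"))  # -> {'payments-api': 'payments'}
```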

Wrapping Up

No matter where your organization is in its cloud journey, security teams need to identify their scope of responsibility and apply security best practices within their environment. Organizations still in data centers will require security teams to address the full scope of security, from the physical layer to the application layer and everything in between. As organizations adopt technologies like virtualization, development velocity will begin to increase and security teams will need to adapt. Moving to the cloud is a big step, but it will pay dividends to the organization in terms of increased velocity. Organizations no longer have to acquire or focus on physical hardware, so they can staff more software developers. Likewise, security teams will need to adjust their requirements and controls to focus on the OS layer and above. Lastly, organizations that have moved to container technologies or embraced Kubernetes will have tremendous velocity, and security teams will need to make sure the appropriate checks are integrated into the CI/CD pipeline so vulnerabilities aren’t magnified across the entire environment. To avoid this, security teams need to focus primarily on the application layer, and automation will be key.