
The Age of the Security Experience

Jason R. Weiss
May 11, 2023
5 min read

In software development, there are many languages: Java, Go, C#, Python, PHP, and so on. In business, however, there is only one language: risk. At a macro level, risk manifests itself in every aspect of business, from customer acquisition to customer retention, from capital investments to operational expense. At a micro level, risk manifests itself in a tweet perceived as off-color, or in marketing materials delayed for a major conference because the FDA intercepted the box when a small, undeclared bag of gummy bears was added to the handouts.

Software architects routinely deal with risk, although it isn’t often explicitly recognized as such. At a macro level, software architects recognize there is no perfect software architecture, only the least-bad set of design tradeoffs that fit the functional and non-functional requirements they’ve been given. Software developers, meanwhile, work with risk at the micro level, weighing algorithm speed or the maintainability of code structures during a peer code review.

Since early 2021, the topics and artifacts associated with software supply chain risk management have become mainstream conversation. Although threats like ransomware and phishing have existed for decades, combating these risks is now a sharply focused priority for governments, corporations, and citizens alike. Governments have grown increasingly concerned about the fragility of operations that are predominantly software defined. Corporations are dealing with cyber insurance premiums that are doubling year over year and nervous customers evaluating whether the organization can be trusted with their data. Citizens worry about losing access to their personal finances, family memories (photos), and home automation. Cybersecurity risk has become a whole-of-nation problem, and it feels like everyone is pointing at software development as the source of the risk.

The National Institute of Standards and Technology (NIST) defines risk as “a measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence.” NIST further expands the definition to state that in software, risks arise from the loss of confidentiality, integrity, or availability (CIA) of information.
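
In practice, that two-part definition is often reduced to a simple product of likelihood and impact. The Python sketch below illustrates that common simplification; the scales and scores are illustrative assumptions, not part of the NIST definition quoted above.

    # A minimal sketch of the common "risk = likelihood x impact" simplification.
    # The scales and thresholds below are illustrative assumptions, not part of
    # the NIST definition.
    LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}  # chance the event occurs
    IMPACT = {"low": 1, "moderate": 2, "high": 3}         # adverse impact if it does

    def risk_score(likelihood: str, impact: str) -> int:
        """Qualitative risk as a function of likelihood and adverse impact."""
        return LIKELIHOOD[likelihood] * IMPACT[impact]

    # A likely, high-impact event tops this scale; a rare, moderate one barely registers.
    print(risk_score("likely", "high"))    # 9
    print(risk_score("rare", "moderate"))  # 2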

In most organizations, developers have enjoyed unprecedented freedom for decades: administrator rights on their development machines, unfettered access to publicly hosted software repositories (both source code and binary), and the ability to adopt or reject a particular language, library, or framework after little more than a quick conversation with a couple of team members. These freedoms are likely to be curtailed over the next few years because they represent unacceptable business risk for both the software producer and the software acquirer.

Concepts like “security controls” (or just “controls”) and NIST standards have historically carried little meaning for software developers. Yet controls have been applied all around you, even if you didn’t recognize them as controls or as pieces of various NIST standards: corporate policies that require every endpoint to run antivirus, that laptops use encrypted hard drives, that passwords meet complexity rules, that a password reset be forced after n unsuccessful login attempts, and so on.
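
Many such controls ultimately reduce to a few lines of code. As a minimal sketch, here is what a password-complexity control might look like; the specific rules (length, character classes) are illustrative example policy choices, not a quotation of any particular standard.

    import re

    # Illustrative password-complexity control: at least 12 characters and at
    # least one lowercase letter, uppercase letter, digit, and symbol. These
    # thresholds are example policy choices, not a NIST requirement.
    def meets_complexity_policy(password: str) -> bool:
        return (
            len(password) >= 12
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None
        )

    print(meets_complexity_policy("hunter2"))         # False
    print(meets_complexity_policy("c0rrect-Horse!"))  # True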

This is changing; these types of controls will permeate the entire software development lifecycle (SDLC). The rollout timeline, the specific mechanisms, and the impact will not be felt or interpreted the same way everywhere. Why? No two software organizations view risk the same way, because differences in risk appetite manifest in how they define corporate policies and how they assemble their unique toolchains of languages, libraries, and frameworks.

The government is presently the dominant voice calling for changes to the SDLC, with insurance organizations, capital investors, and customers starting to line up behind it to demand change. For its part, the government has bifurcated software into two categories: EO Critical Software and All Other Software. NIST defines EO Critical Software as any software that has, or has direct software dependencies upon, one or more components with at least one of these attributes:

  • Is designed to run with elevated privilege or manage privileges.
  • Has direct or privileged access to networking or computing resources.
  • Is designed to control access to data or operational technology.
  • Performs a function critical to trust.
  • Operates outside of normal trust boundaries with privileged access.

More succinctly, EO Critical Software includes operating systems, identity and access management, hypervisors, container environments, web browsers, endpoint security systems, backup/recovery storage, remote anything, and network anything.

The very software that you design, develop, and deliver relies on all or most of these categories, but that doesn’t make every application EO Critical Software. The bifurcation of software into two categories doesn’t change the recommended standards for securing software development, although it does place added scrutiny on the procurement of EO Critical Software. Adopting these secure software development standards is where software developers will need to change their behaviors.

As a software professional, can you attest that you always confirm the checksum for each binary or source code tarball you pull from the internet? Do you run source code downloaded from GitHub through any scanning tools before you build and execute it locally on your endpoint? Do you review the build scripts or makefiles to understand what additional software they pull down from the internet and install on your machine? Do you always consider metrics like test coverage when deciding the suitability of a library or framework for your application, and is that metric universally defined and enforced across the entire development team? These types of risk mitigation activities are critical, and they are only the visible part of the proverbial iceberg in terms of changes coming to the SDLC.
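
To make the first of those questions concrete, here is a minimal sketch of checksum confirmation in Python: compute the SHA-256 digest of a downloaded artifact and compare it against the published value before using it. The file path and expected digest passed on the command line are illustrative assumptions.

    import hashlib
    import sys

    # Compare a downloaded artifact against its published SHA-256 digest.
    # Usage (illustrative): python verify.py some-library.tar.gz <expected-digest>
    def sha256_of(path: str) -> str:
        """Compute the SHA-256 hex digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        artifact, expected = sys.argv[1], sys.argv[2].lower()
        actual = sha256_of(artifact)
        if actual != expected:
            sys.exit(f"CHECKSUM MISMATCH for {artifact}: got {actual}")
        print(f"{artifact}: checksum verified")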

For decades, functional experience was the dominant measure of software experience. Around 2010, in the wake of the iPhone’s introduction, the second turning point of software experience, the user experience, became mainstream. Today, the third turning point of software experience is upon us: the security experience, as the risks of the software powering our software-defined world have become unacceptably high. In a forthcoming blog series, we will explore the intrinsic components of the security experience from the perspective of the software supply chain, including the external forces driving change in the SDLC, the types of security activities expected in an SDLC, and the impact of seemingly singular artifacts like the software bill of materials (SBOM) that, upon closer examination, require a cohesive generation strategy.
