Considerations for Securing your Applications

This article is based on my Microsoft Build 2022 session ‘Securing Applications’. I only had 15 minutes allocated to speak – this is what I squeezed in and delivered!

As organisations navigate digital transformation, there is no topic more important than defending against attack.

Security is a complex topic because it's wide-ranging and many-faceted, and therefore needs different skills and roles to be involved.

To be successful, organisations must eliminate the silos between the different teams and embed a security first culture into their processes and tooling.

Information Security

A black and white photo of a castle

Security is about protecting an organisation to ensure the resilience and continuity of its business operations.

A subset of security is Information Security (infosec). Infosec is about the protection of data and associated applications, and it’s so critical for the ongoing existence and success of an organisation.

There are three areas at the core of infosec – these are:

  • Confidentiality – making sure that data is protected from unauthorised access.
  • Integrity – making sure that data is kept accurate/consistent, and protected from
    unauthorised modification.
  • Availability – making sure that data is available when and where it is needed.

Whilst infosec is often associated with defending against malicious attackers, it also needs to consider other kinds of events that can cause loss, such as 'acts of god' – for example, a lightning storm that causes a power outage, bringing down systems and corrupting data.

It's also about making sure everyone does the right things and that those things are right. For example, there is no point in having a manual backup policy if people aren't performing backups in accordance with the defined schedule.
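The backup example can be checked mechanically rather than on trust: a small script can alert when the most recent backup falls outside the defined schedule. A minimal sketch, where the 24-hour schedule and the timestamps are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def backup_is_overdue(last_backup: datetime, max_age: timedelta) -> bool:
    """Return True if the most recent backup is older than the schedule allows."""
    return datetime.now(timezone.utc) - last_backup > max_age

# Illustrative policy: backups must run at least once every 24 hours.
SCHEDULE = timedelta(hours=24)

recent = datetime.now(timezone.utc) - timedelta(hours=2)
stale = datetime.now(timezone.utc) - timedelta(days=3)

print(backup_is_overdue(recent, SCHEDULE))  # False - within schedule
print(backup_is_overdue(stale, SCHEDULE))   # True - policy is not being followed
```

Wiring a check like this into monitoring turns "are people doing it?" into an alert instead of an assumption.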


A black and white photo of a person sitting behind a computer, wearing a mask

Cybercrime is a global threat and huge industry. Bad actors will attack organisations of any size, whatever their purpose. In most cases the motivation is financial gain, but some have political objectives and others just enjoy the challenge of causing mayhem and getting publicity.

Exploits are traded on the dark web at low cost, enabling less skilled people to be involved in malicious activity and swelling the number of attacks.

It’s often the case that exploiting just one vulnerability can open the door and provide a stepping stone into a network. An attacker can then move laterally through the network and systems to unleash further wide-ranging hostile actions, ultimately impacting the confidentiality, integrity, and availability of data.

Bad actors do bad things:

  • In many cases an attack will cause severe financial impact (for example – loss of customer trust, loss of intellectual property, heavy compliance fines, corrupted data).
  • In the worst cases, it will cause business ruin.
  • In the most catastrophic case, a malicious cyber-attack can cause loss of life.

The following is the Attacker’s Advantage and the Defender’s Dilemma:

  • Defender must defend all points – Attacker only needs one weak point.
  • Defender must defend against known attacks – Attacker can probe for unknown vulnerabilities.
  • Defender must be constantly vigilant – Attacker can strike at will.
  • Defender must play by the rules – Attacker can play dirty.

This problem can be compounded when there are huge numbers of attackers targeting the defender.

Risk Management

A black and white photo of a beach and a warning sign

A key part of information security is the practice of protecting systems/data by mitigating risks. The risk management process identifies risks, and for each risk the following is assessed:

  • The likelihood of that risk being exercised.
  • The impact that it will cause.

This process will result in a register of risks with a wide spectrum of ‘level of concern’. It’s then a business decision, based on their appetite for risk, to decide how to address each risk – the options are:

  • Avoid – resolve the risk so that it is completely eliminated.
  • Accept – acknowledge the risk and choose not to avoid, transfer or mitigate it. A business might do this if the assessed impact is small or the likelihood of it happening is remote.
  • Transfer – move the risk to a third party, perhaps by taking out insurance.
  • Mitigate – do something to reduce the likelihood or impact of the risk.
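The assessment above can be captured in a simple risk register that scores each risk by likelihood × impact and records the chosen response. A minimal sketch – the scoring scales and the example risks are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (remote) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    response: str     # "avoid" | "accept" | "transfer" | "mitigate"

    @property
    def score(self) -> int:
        # A crude 'level of concern' for prioritisation.
        return self.likelihood * self.impact

register = [
    Risk("Unpatched VM exploited", likelihood=4, impact=5, response="mitigate"),
    Risk("Lightning strike causes outage", likelihood=1, impact=4, response="transfer"),
    Risk("Dev environment defaced", likelihood=2, impact=1, response="accept"),
]

# Review the highest-concern risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} -> {risk.response}")
```

Because the process is iterative, re-scoring after each monitoring cycle keeps the register honest.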

This will be an iterative process, so that the results of ongoing monitoring are fed back into subsequent cycles of the process.

We can reduce risk by doing the right things, and this breaks down into four distinct categories:

  • Secure by Design.
  • Secure the Code.
  • Secure the Environment.
  • Secure the Operations.

Secure by Design

Four photos in circles representing 'design'

Security starts with the design before any code is written.

The design will transform user requirements into an architecture containing the platform components and software modules to be developed, and will define how they interact.

Threat modelling is done by people with a mindset that will think like an attacker rather than a defender. They will use a threat modelling methodology to tease out flaws in the design – which can then be addressed early, when they are relatively easy and cost-effective to resolve.

Zero Trust is a response to the way networks have changed. We used to have an internal network and an external network with a firewall between to keep the bad guys out. Today, your trusted people are working on untrusted networks, and most likely there will be untrusted people on your trusted networks. Today you have to protect resources and not network segments.

The core principle is that everything is a threat – don't trust anything or anyone at any time until identity has been fully verified and you have a high level of assurance that it is what it claims to be.

  • Verify explicitly – Always authenticate and authorise based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
  • Use least privileged access – Limit user access with just-in-time and just-enough-access (JIT/JEA), risk-based adaptive policies, and data protection to help secure both data and productivity.
  • Assume breach – Minimise blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defences.
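A hedged sketch of 'verify explicitly': an access decision that combines several signals rather than trusting network location alone. The signal names and the policy below are illustrative, not a real product's API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # e.g. MFA completed
    device_compliant: bool    # e.g. patched, disk encrypted
    location_risk: str        # "low" | "medium" | "high"
    data_classification: str  # "public" | "confidential" | "highly-confidential"

def authorise(req: AccessRequest) -> bool:
    """Deny by default; every signal must support granting access."""
    if not (req.identity_verified and req.device_compliant):
        return False
    # The more sensitive the data, the less location risk we tolerate.
    if req.data_classification == "highly-confidential":
        return req.location_risk == "low"
    if req.data_classification == "confidential":
        return req.location_risk in ("low", "medium")
    return True

print(authorise(AccessRequest(True, True, "medium", "highly-confidential")))  # False
```

The point is the shape of the decision – many data points, deny by default – not the specific thresholds.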

Identity is key and has become a popular attack vector. Do not build your own identity system – instead use industry standards and products/services from specialised organisations that have invested substantially in this area and have battle-proven offerings.

Data Classification is knowing your data – is it Highly Confidential, Confidential, Public, Business or Non-Business? Protect personally identifiable information (PII), because the data protection laws and regulations around it can impose heavy fines for any breach.
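Classification labels can be made machine-readable so that handling rules are enforced in code rather than in documents. A minimal sketch using the labels above – the "PII must be at least Confidential" rule is an illustrative policy, not a legal interpretation:

```python
from enum import IntEnum

class Classification(IntEnum):
    # Ordered so that higher values mean more sensitive data.
    PUBLIC = 0
    NON_BUSINESS = 1
    BUSINESS = 2
    CONFIDENTIAL = 3
    HIGHLY_CONFIDENTIAL = 4

def label_for(contains_pii: bool, default: Classification) -> Classification:
    """Illustrative rule: PII must be classified at least Confidential."""
    if contains_pii:
        return max(default, Classification.CONFIDENTIAL)
    return default

print(label_for(contains_pii=True, default=Classification.BUSINESS).name)
# CONFIDENTIAL
```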

Confidential data should always be protected with encryption – both in transit over the wire and at rest in storage.
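For encryption in transit, prefer platform defaults that enforce certificate verification rather than hand-rolled settings. A sketch using Python's standard library ssl module (encryption at rest is typically a job for a vetted library or the storage service itself, not custom code):

```python
import ssl

# create_default_context() enables certificate verification and hostname
# checking, and selects secure protocol versions. Resist the temptation to
# weaken these settings just to "make things work".
ctx = ssl.create_default_context()

print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
print(ctx.minimum_version)                    # the minimum TLS version allowed
```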

Secure The Code

Four photos in circles representing 'code'

This is about your code and external code – namely open source.

Use code scanning tools to ensure submitted code is high quality, safe, reliable and conformant with best practice. Automated code reviews can now perform checks that previously peers would have done manually.

Static code analysis tools help identify areas of the code under analysis that are suspect and may be compromised.

Secure your secrets and keep them out of the source code. Once application source code is loaded into source control, it can spread widely and potentially be read by many. Secrets – like API keys, security tokens, certificates and passwords – are extremely sensitive because they open doors, so they must not be embedded in source code or they will leak. Put such secrets in a safe storage service and have the application retrieve them at run time.
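One minimal pattern for the retrieve-at-runtime approach: never hard-code the secret, and fail fast if it is missing from the runtime configuration. Environment variables are the simplest case; a managed secret store (for example a cloud key vault) is the stronger option. The variable name and value here are illustrative:

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret from the process environment; fail fast if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Secret {name!r} is not configured - refusing to start")
    return value

# Illustrative only: in a real deployment this value is injected by the
# platform or fetched from a secret store - never committed to the repo.
os.environ["PAYMENTS_API_KEY"] = "example-value"

api_key = get_secret("PAYMENTS_API_KEY")
```

Failing at startup is deliberate: a missing secret should be loud, not discovered mid-request.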

Software Composition Analysis tools should be employed to analyse the dependency graph and keep an inventory of third-party components being used to build applications. More on this later when we discuss the software supply chain.

For private development, store source code in well-secured code repositories. Fully understand your branch management so you know what code is in dev/test/production, and have processes for hotfixes.

Secure The Environment

Four photos in circles representing 'environment'

When using cloud there are shared responsibilities for securing the environment: the cloud vendor handles physical security, but users must secure their own environments and resources. A castle is a useful analogy – it doesn't matter how high the walls are or how many crocodiles are in the moat; if you leave the drawbridge down, an intruder can simply walk in.

Access controls are used to impose rules on who can access what and what level of access they have. Dev environments may be more relaxed, but access to production environments should be strictly controlled and limited. Access controls need to combine with a strong identity foundation and use features such as conditional access, multi-factor authentication, just-in-time and just-enough-access (JIT/JEA).
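Just-in-time access can be thought of as time-boxed grants: elevation is requested, approved, and then expires automatically. A toy model of that idea – the user names, role name and durations are illustrative:

```python
from datetime import datetime, timedelta, timezone

class JitGrant:
    """A time-boxed elevation: access exists only inside the window."""

    def __init__(self, user: str, role: str, duration: timedelta):
        self.user = user
        self.role = role
        self.expires = datetime.now(timezone.utc) + duration

    def is_active(self) -> bool:
        return datetime.now(timezone.utc) < self.expires

# Illustrative: one hour of production access, then it lapses by itself.
grant = JitGrant("alice", "prod-operator", timedelta(hours=1))
print(grant.is_active())    # True - within the window

expired = JitGrant("bob", "prod-operator", timedelta(seconds=-1))
print(expired.is_active())  # False - window already closed
```

The design choice is that access decays by default – nobody has to remember to revoke it.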

Policy services are used to centrally set guardrails throughout your resources to help ensure cloud compliance, avoid misconfigurations, and practice consistent resource governance.

‘Infrastructure as Code’ is the practice of specifying infrastructure topology in scripts/templates stored in version control – in a similar fashion to the way developers manage code and deploy solutions. This enables consistency, quality, repeatability and accountability of the configuration.
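A core benefit of Infrastructure as Code is that the declared state is the source of truth, so drift between what was declared and what is live can be detected automatically. A minimal drift-check sketch – the resource names and settings are illustrative, not a real cloud API:

```python
# Desired state, as it would be declared in a version-controlled template.
desired = {
    "storage/logs": {"public_access": False, "encryption": "enabled"},
    "vm/web-01": {"open_ports": [443]},
}

# Actual state, as a management API might report it back.
actual = {
    "storage/logs": {"public_access": True, "encryption": "enabled"},
    "vm/web-01": {"open_ports": [443]},
}

def find_drift(desired: dict, actual: dict) -> dict:
    """Return resources whose live settings differ from the declared state."""
    return {
        name: {"desired": spec, "actual": actual.get(name)}
        for name, spec in desired.items()
        if actual.get(name) != spec
    }

drift = find_drift(desired, actual)
print(drift)  # storage/logs has been opened to public access - a misconfiguration
```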

Network security controls and devices may be implemented to mitigate against various known attack vectors. Application security often involves discussion around networking such as firewalls, gateways and load balancers – ensuring the infrastructure is locked down from certain types of network attacks.

Patching ensures all known vulnerabilities in virtual machines and operating system instances are resolved. If you’re using PaaS then this is handled automatically and transparently by the underlying platform. But it’s still a relevant topic for some cloud-native services – such as when using Kubernetes.

Secure The Operations

Operations are responsible for managing the live environments – their duties can be summarised as Protect | Detect | Respond. It’s important that response actions are scripted and so can be triggered as needed, as opposed to having to think and act on the fly/under the pressure of a live incident.

They must monitor everything that is happening, looking for unexpected events or failures, and when something does happen, invoke incident response protocols to take the appropriate preventative measures and contain any damage.
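Scripted responses can be modelled as a dispatch table: each known alert type maps to a pre-agreed, rehearsed action, so responders trigger tested steps rather than improvising under pressure. A toy sketch – the alert types and actions are illustrative:

```python
def isolate_host(alert): return f"isolated host {alert['host']}"
def rotate_keys(alert):  return f"rotated keys for {alert['service']}"
def page_on_call(alert): return f"paged on-call about {alert['type']}"

# Pre-agreed responses, written and tested calmly - before any incident.
PLAYBOOK = {
    "malware-detected": isolate_host,
    "credential-leak": rotate_keys,
}

def respond(alert: dict) -> str:
    # Unknown alert types escalate to a human rather than being dropped.
    action = PLAYBOOK.get(alert["type"], page_on_call)
    return action(alert)

print(respond({"type": "malware-detected", "host": "web-01"}))
print(respond({"type": "something-new"}))
```

Escalating unknowns by default keeps novel attacks from falling through the gaps in the playbook.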

Threat intelligence means knowing the latest security landscape and possible threats, which helps with planning responses in advance – the aim is to avoid surprises and the unexpected.

After any incident, forensics and root cause analysis should be done – in particular to determine if there is any compromise to the confidentiality, integrity and availability of the data and associated applications.

And finally there’s business continuity – having processes in place to keep the business running during major disruption or disaster, such as earthquake, power outage, fire, cyber attack, etc.


Four photos in circles representing 'automation'

There’s a lot here and this wasn’t an exhaustive list of all things to do, but it highlights the variety of skills needed. The only way to be effective and achieve success is to automate as much as possible.

Embracing ‘Everything as Code’ changes the focus from manual, repetitive tasks to workflows based on end-goals and desired states. Store things like configuration rules in version control – enabling the consistency, quality and accountability that DevOps offers.

Software Supply Chain

A black and white photo of an open sign

Open source is software made available with source code that anyone can inspect, modify, and enhance. It’s provided with a license that dictates how the software can be used, for example it might impose commercial restrictions or mandate that any modifications must also be shared back with the community.

It’s important that organisations understand and mitigate the risks of open source. When an open source library is imported/used, all the dependencies that library uses are also included, and there can be many levels of dependencies – resulting in the use of considerable amounts of software from unknown sources.

Infecting popular open source libraries with malware and vulnerabilities is on the rise – this is known as a software supply chain attack. It can wreak maximum havoc as the malware will be further distributed to all users of the software that includes the library code.

Software Composition Analysis tools should be employed to analyse the dependency graph and keep an inventory of third-party components being used to build applications. These can then provide ongoing monitoring to:

  • Report on known security vulnerabilities and software bugs.
  • Alert when updated versions are available.
  • Accurately track the open source licensing conditions to fulfil all the legal requirements helping to avoid any unfortunate surprises, such as jeopardising exclusive ownership over proprietary code.
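A first step towards such an inventory is simply enumerating the installed third-party distributions with their versions. A minimal sketch using the Python standard library – a real SCA tool would additionally walk the full dependency graph and match components against vulnerability and licence databases:

```python
from importlib import metadata

def installed_components() -> dict[str, str]:
    """Map each installed distribution name to its version."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip distributions with malformed metadata
    }

inventory = installed_components()
for name, version in sorted(inventory.items()):
    print(f"{name}=={version}")
```

Even this crude listing answers the first supply-chain question: "what are we actually running?"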

Such tools can help software vendors document their Software Bill of Materials (SBOM) which lists any components, libraries and tools used. There has been discussion that future legislation may force software companies to make SBOM declarations public.


A black and white photo of an F1 car

DevOps is the engine that drives innovation – and reduces the time to deliver value.

A DevOps approach enables organisations to develop, deploy and improve products at a faster pace than they can with traditional software development approaches.

But DevOps is not just a product – it requires a culture of collaboration, and that culture is critical for DevOps to be successful.

In DevOps, we often discuss the inner and outer loop. The inner loop is the iterative process a developer performs when they write, build and debug code. The outer loop covers build, deploy and monitoring, with the results driving the plan for subsequent development.


A black and white photo of an F1 car and a safety car

DevSecOps is the evolution of DevOps. It focusses on integrating security practices within the DevOps inner and outer loops to create a security-first culture.

Furthermore, it mandates a shift-left mentality – that is, addressing security in the earliest stages of the development lifecycle. So not only is the development team thinking about building a high quality product efficiently, but they are also implementing security as they go.

Addressing security earlier improves robustness, saves costs and accelerates delivery.

Learn more