Security: The Foundation of Everything
If your cloud environment cannot be made secure, there is little reason to invest more time or money. Security breaches not only impact individual users of your applications but can inflict reputational and monetary damage on your company. Many careers have been upended by failure to take security seriously.
Security has to be planned from the beginning and woven into all aspects of your cloud environment. It is critically important and hard to retrofit afterwards. Your applications can take advantage of all the latest technologies and techniques — but if they are not secure, they are not worth having.
For years security was based on a perimeter defense model. The basic idea is simple — a secure system should let authorized users pass through the perimeter while keeping malicious users outside it. If more security was needed, a company could make the perimeter stronger (stricter password requirements) or add extra perimeter boundaries (firewalls on both sides of a jump server). The strategy was either to make the perimeter harder to breach or to force an intruder to invest more time by breaching multiple perimeter layers before reaching valuable resources.
At the heart of the perimeter model is the assumption that a perimeter can be identified — a simple barrier between “us” and “them.” Such a perimeter may have been common in years past, but modern IT is no longer structured like that. Where is the perimeter when employees work from home, or connect to a corporate network with their personal devices? Where is the boundary when one company’s microservice system makes calls to another company’s API?
A changing world requires a new way of thinking about security. Kenzan recommends following a Zero Trust security philosophy. In traditional security models, once a user passes through the perimeter they are trusted to be who they say they are. In a Zero Trust environment there is no implicit trust — “verify then trust” is replaced by “always verify, trust no one.” No user is ever trusted, no matter who they are or how long they have been in the system.
Instead, trust is regenerated with every interaction with an endpoint, server, or IP address. This is accomplished through techniques like multifactor authentication, identity and access management (IAM), behavioral analytics, encryption, and micro-segmentation of virtual networks. All information is encrypted at rest and in transit. Only users who generate trust are able to decrypt that information.
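The “always verify” idea can be illustrated with a small sketch: every interaction presents a short-lived, signed token, and nothing carries over from earlier requests. The secret, user names, and 60-second lifetime below are illustrative assumptions, not a real implementation.

```python
import hmac
import hashlib
import time

SECRET = b"demo-secret-key"  # hypothetical; real keys come from a managed key store
TTL_SECONDS = 60             # trust expires quickly and must be re-earned

def issue_token(user: str, now: float) -> str:
    """Sign the user and a timestamp; valid only for TTL_SECONDS."""
    ts = str(int(now))
    sig = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{ts}:{sig}"

def verify_token(token: str, now: float) -> bool:
    """Re-verify on every call: signature must match and the token must be fresh."""
    try:
        user, ts, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user}:{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return (now - int(ts)) <= TTL_SECONDS

token = issue_token("alice", time.time())
print(verify_token(token, time.time()))         # a fresh token verifies
print(verify_token(token, time.time() + 3600))  # a stale token is rejected
```

The key point is that verification happens on every single call; a token that verified a moment ago earns no standing trust.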
Zero Trust embraces context-based security, which takes into account a user’s past behavior as well as authentication and authorization when generating trust. As an example, take a user who has access to sensitive documents but has never accessed them before. Suddenly that user starts accessing those documents. This change in behavior might trigger an additional generation of trust, such as a hardware card/token or a one-time code from an authenticator application.
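A minimal sketch of that context-based decision might look like the following. The resource names, the `AccessContext` structure, and the three-way allow/step-up outcome are all hypothetical simplifications of what a real policy engine would evaluate.

```python
from dataclasses import dataclass, field

# Hypothetical list of resources considered sensitive.
SENSITIVE = {"payroll", "credit-reports"}

@dataclass
class AccessContext:
    user: str
    history: set = field(default_factory=set)  # resources previously accessed

def decision(ctx: AccessContext, resource: str, mfa_passed: bool) -> str:
    """First-time access to a sensitive resource requires step-up auth."""
    first_touch = resource not in ctx.history
    if resource in SENSITIVE and first_touch and not mfa_passed:
        return "step-up"          # demand a token or authenticator code first
    ctx.history.add(resource)
    return "allow"

ctx = AccessContext("alice", history={"wiki"})
print(decision(ctx, "payroll", mfa_passed=False))  # "step-up": new behavior
print(decision(ctx, "payroll", mfa_passed=True))   # "allow": extra trust generated
```

Real behavioral analytics would score many more signals (time of day, device, volume of access), but the shape of the decision — anomaly triggers additional verification — is the same.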
Zero Trust relies on technologies like distributed and automated mutual Transport Layer Security (mTLS) between every service; entity authorization backed by encrypted JSON Web Tokens (JWE) with verifiable, rejectable nonces; and a robust automation suite that detects and responds to threats in a fine-grained fashion, using machine learning to track deviations from typical user or service behavior — all mediated by an identity-aware proxy, such as Istio.
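As a rough illustration of the mTLS piece, the sketch below builds a client-side TLS context with Python’s standard library that both verifies the peer and can present its own certificate. The certificate file paths are hypothetical placeholders; in practice a service mesh such as Istio issues and rotates these certificates automatically.

```python
import ssl

def make_mtls_context(cert_file=None, key_file=None):
    """Build a TLS context that verifies the peer; loading our own
    certificate (when provided) is what makes the session *mutual*."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED   # the peer must prove its identity
    ctx.check_hostname = True             # and its name must match its cert
    if cert_file and key_file:
        # Hypothetical paths; real deployments automate cert provisioning.
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx

ctx = make_mtls_context()  # no client cert loaded yet in this sketch
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The value of automating this across every service pair is that no connection — even between two “internal” services — is ever unauthenticated or unencrypted.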
Zero Trust is not an academic theory. Google has its own implementation of Zero Trust called BeyondCorp. It is considered state of the art and is shared publicly here: https://cloud.google.com/beyondcorp. The goal of BeyondCorp was to create a network so secure that employees can work from any location without the need for a traditional VPN.
Microsegmentation is a cornerstone of Zero Trust security. It is the practice of logically creating fine-grained network segments and completely controlling traffic within and between those segments. Unrestricted lateral traffic is avoided.
Virtual networking can be a complex topic, but the principles all center on reducing the attack surface accessible to an intruder. The attack surface is reduced by limiting internet access to internal resources, limiting internal resources’ access to the internet, and strictly controlling corporate resources’ access to each other. Any internal or external access is limited to what is specifically needed and nothing else.
A secure network topology often follows a hub-and-spoke model, which looks a little like a many-spoked bicycle wheel. All traffic enters through a central hub and, depending on the request, is funneled to one or more spokes. Communication between spokes is strictly limited. In this way any breach is restricted to one of the spokes and cannot spread.
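The hub-and-spoke reachability rule can be sketched in a few lines: the hub may talk to any spoke, but spokes may never talk directly to each other. The node names below are made up for illustration.

```python
# Hypothetical topology: one hub, three spokes.
HUB = "hub"
SPOKES = {"billing", "inventory", "reporting"}

def traffic_allowed(src: str, dst: str) -> bool:
    """Allow traffic only if one endpoint is the hub; deny everything else."""
    nodes = SPOKES | {HUB}
    if src not in nodes or dst not in nodes:
        return False              # unknown endpoints are denied by default
    return HUB in (src, dst)      # every permitted path passes through the hub

print(traffic_allowed("hub", "billing"))        # True
print(traffic_allowed("billing", "inventory"))  # False: no spoke-to-spoke traffic
```

Note the default-deny stance: anything not explicitly part of the topology is rejected, which is exactly the “limited to what is specifically needed” principle described above.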
Implementing point-to-point networking secured by mTLS also mitigates the lateral movement that hackers typically employ to push a breach deeper into a network. All of this secure topology is robustly and flexibly managed by network-as-code, enabling threats to be detected, mitigated, and dealt with in a dynamic, distributed, and automated fashion.
There are several network topologies that are less desirable from a security perspective, for example a mesh model in which every application has access to every other application. In some situations, such as a microservice architecture, this topology might be necessary. In that case vulnerabilities are reduced by restricting the services any one service can access and by requiring a new generation of trust at every interaction. Without these protections, once an intruder enters a true mesh topology they have access to a wide range of resources, which nullifies many of the benefits of adopting Zero Trust over the traditional perimeter defense model.
Microservices are an example of where identity-based microsegmentation takes everything a step further. Instead of allowing access based on IP address or port, access is granted depending on the workload context of the caller. In short, the caller’s identity determines access. For example, instead of using network addresses in a microservice system (which are often ephemeral), identity-based microsegmentation can be enforced using Kubernetes pod identity. This simplifies security by allowing unified security policies to protect VMs and containers across multi-cloud environments.
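The contrast with address-based rules can be sketched as follows. The service-account names and policy table are hypothetical; in Kubernetes the equivalent identity would typically come from a pod’s service account (for example via SPIFFE identities in an Istio mesh).

```python
# Hypothetical identity-based policy: access is decided by the caller's
# workload identity, never by its (ephemeral) IP address.
POLICY = {
    # caller identity -> services it may reach
    "sa:checkout": {"payments", "inventory"},
    "sa:frontend": {"checkout"},
}

def allowed(caller_identity: str, target_service: str) -> bool:
    """Unknown identities get an empty allow-set, i.e. default deny."""
    return target_service in POLICY.get(caller_identity, set())

# Pod IPs can change freely across restarts and rescheduling; the
# decision stays stable because it keys on identity, not address.
print(allowed("sa:checkout", "payments"))  # True
print(allowed("sa:frontend", "payments"))  # False
```

Because the policy names workloads rather than addresses, the same table can govern containers and VMs alike, which is what makes unified multi-cloud policies practical.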
To summarize, building the virtual network right the first time is important. Tweaks will always be made as the architecture evolves, applications added and removed, and improvements implemented. But a poor network structure invites attacks, and a substantial reconstruction or repair of a poorly constructed complex network is very difficult and expensive. Often these repairs take place after a breach has been discovered and the resulting pressure makes the work even more difficult.
Automating the creation and maintenance of infrastructure and provisioning of resources is not often discussed as part of a security practice. However, Kenzan believes automation is the cornerstone of an effective security posture. This doesn’t mean simply adding automated security tools, although these are important. Kenzan advocates automating your infrastructure and development processes as well as testing and security checks.
In a number of industries repetitive manual tasks end up being a security risk. Repetitive manual tasks are tedious and boring. When humans engage in tedious and boring tasks they tend to make mistakes. In the right scenario these mistakes end up being a security risk. This is why TSA luggage screeners are rotated at airports — looking at luggage is boring and repetitive and after a while screeners make mistakes and miss potentially dangerous items.
These benefits can be enhanced in synergistic ways with everything-as-code (EaC) by implementing techniques such as regular and automated network defense, key management, and regular mulching of systems. Automated network defense can destroy private keys stored in random access memory (RAM) when breaches are suspected; those keys can in turn be replaced by an automated key management system that renews and refreshes them. An additional layer of security is gained through EaC by mulching all server instances in the entire infrastructure on a regular basis to destroy persistent threats such as rootkits and other advanced persistent threats.
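The destroy-and-reissue key lifecycle can be sketched as below. The `KeyManager` class, the 24-hour rotation window, and the in-memory storage are illustrative assumptions; a production system would use a hardened key management service.

```python
import secrets
import time

ROTATION_SECONDS = 24 * 3600  # assumed rotation schedule

class KeyManager:
    """Hypothetical sketch: keys expire on a schedule, are destroyed
    immediately on suspected breach, and are reissued automatically."""

    def __init__(self):
        self._key = None
        self._issued_at = 0.0

    def current_key(self, now: float) -> bytes:
        # Rotate automatically when the key is missing or too old.
        if self._key is None or now - self._issued_at > ROTATION_SECONDS:
            self._key = secrets.token_bytes(32)
            self._issued_at = now
        return self._key

    def revoke(self):
        # Destroy the in-memory key on suspected breach; the next call
        # to current_key() mints a fresh one with no manual step.
        self._key = None

km = KeyManager()
k1 = km.current_key(time.time())
km.revoke()                        # breach suspected: key destroyed
k2 = km.current_key(time.time())   # renewed automatically
print(k1 != k2)                    # True
```

Because revocation and renewal are both code paths rather than manual runbooks, the response to a suspected breach is immediate and repeatable — the same property that regular mulching of server instances provides at the infrastructure level.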
The 2017 hack of Equifax is a good example of a manual practice becoming a security vulnerability. This breach impacted 148 million people worldwide and was caused by one low-level technician failing to manually update Apache Struts to fix a vulnerability found months before. As a result, hackers were able to rifle through the company’s systems by obtaining an unencrypted file of passwords on one server and using them to gain access to more than 48 databases containing unencrypted consumer credit data. There were layers of poor security practices at Equifax, but the fact remains that had updates to critical systems been automated, this hack may not have occurred at all.