Best Practices for Increasing Security with a Zero Trust Architecture in the Cloud Era

In recent years, the Zero Trust model has been gaining ground in the IT world: the idea that you remain skeptical about identities and constantly verify users. But how do you build such a system without driving users insane with extra codes to type in or additional security questions?

Zero Trust architecture came from the realization that perimeter security solutions such as edge firewalls are not sufficient to prevent data breaches. Lateral movement inside the network to scan for and obtain target data has been the approach in recent serious attacks. The idea of Zero Trust is to build walls inside the data center through network segmentation to prevent lateral movement, and to always authenticate and authorize users for all data access.

 “Never trust, always verify.”

The Zero Trust approach uses the guiding principle of ‘never trust, always verify’. No degree of reliability is assumed in advance, whether that concerns users, hosts, or data sets. Access to data is also limited: it is provided on a need-to-know basis.

Since the GDPR, more Dutch employees have been confronted with an important weapon in the fight against data theft: multi-factor authentication (MFA). Think of your Office 365 account, where after logging in with your password you may now also have to enter a code every two weeks that you receive on your phone. In many implementations this frustrates users because it is not a user-friendly solution: it takes more time than before, creates confusion, and generates more tickets for the help desk.

Understanding the big picture of zero-trust in a cloud-native world

Even though cloud-native computing spans traditional virtualization, containers, and serverless computing within the broader hybrid IT context, today Kubernetes is at the center of the storm. And where Kubernetes goes, so too goes cloud-native computing.

To understand how zero-trust networking must evolve, therefore, it’s essential to understand how best to secure Kubernetes. Containers’ dynamic, ephemeral nature, as well as other essential Kubernetes properties such as stateless processing and declarative, configuration-driven behavior, require a top-to-bottom rethink of how zero-trust works.
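To make that rethink concrete, consider network segmentation inside a cluster. The sketch below, using the official Kubernetes Python client, applies a default-deny ingress policy and then explicitly allows a single flow; the namespace ('payments') and the 'frontend'/'api' labels are hypothetical examples, not part of any particular product.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (e.g. ~/.kube/config).
    config.load_kube_config()
    net = client.NetworkingV1Api()

    # 1. Default-deny: an empty pod selector matches every pod in the namespace,
    #    and listing "Ingress" with no rules blocks all incoming traffic.
    default_deny = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),
            policy_types=["Ingress"],
        ),
    )

    # 2. Explicit allow: only pods labelled app=frontend may reach app=api on TCP 8443.
    allow_frontend_to_api = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-api"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8443)],
            )],
        ),
    )

    for policy in (default_deny, allow_frontend_to_api):
        net.create_namespaced_network_policy(namespace="payments", body=policy)

Note that NetworkPolicy only covers the network layer; workload identity, admission control, and secrets management still need their own zero-trust controls.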

But there’s even more at stake here. The best practices that cloud-native security vendors exemplify are rapidly becoming essential for cybersecurity in general. Enterprise IT security professionals can’t afford to continue pouring billions of dollars into cybersecurity solutions that leave their organizations vulnerable to attack. Cloud-native zero-trust is shining a light on the path.

Zero Trust principles

Zero Trust security is based on a set of principles whose main goal is to reduce the impact of cyberattacks. Here are some examples of these principles:

  1. All resources are accessed securely, regardless of location
  2. Least privilege: access control is on a ‘need-to-know’ basis and is strictly enforced (see the sketch after this list)
  3. Always verify and never trust
  4. Inspect and log all traffic
  5. The network is designed from the inside out
  6. Security by design
  7. Different business groups should have separate cloud accounts
  8. The more fine-grained the accounts, the closer you come to the goal of micro-segmentation
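As a small illustration of the least-privilege principle (point 2), here is a sketch using boto3; the bucket, policy, and role names are hypothetical, and in practice you would scope the statement to exactly the actions and resources a workload needs.

    import json
    import boto3

    iam = boto3.client("iam")

    # Least-privilege policy: read-only access to a single, narrowly scoped S3 prefix.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/finance/*",
        }],
    }

    policy = iam.create_policy(
        PolicyName="finance-reports-read-only",
        PolicyDocument=json.dumps(policy_document),
    )

    # Attach the policy to a role rather than to individual users, so access
    # can be reviewed, rotated, and revoked in one place.
    iam.attach_role_policy(
        RoleName="finance-analyst",
        PolicyArn=policy["Policy"]["Arn"],
    )

Granting access through roles instead of individual users also keeps identities consistent across fine-grained accounts, which supports the micro-segmentation goal in point 8.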

AWS Shared Responsibility Model

Enterprises are rapidly accelerating the pace at which they’re moving workloads to Amazon Web Services (AWS) for greater cost, scale, and speed advantages. And while AWS leads all others as the enterprise public cloud platform of choice, it and all Infrastructure-as-a-Service (IaaS) providers rely on a Shared Responsibility Model in which customers are responsible for securing operating systems, platforms, and data. In the case of AWS, Amazon takes responsibility for the security of the cloud itself, including the infrastructure, hardware, software, and facilities. The AWS version of the Shared Responsibility Model shown below illustrates how Amazon has defined securing the data itself, management of the platform, applications and how they’re accessed, and various configurations as the customer’s responsibility.

AWS version of the Shared Responsibility Model
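One practical consequence of the customer side of this model is that configuration checks are your job, not AWS’s. As a minimal example (the bucket names are hypothetical placeholders), the boto3 sketch below verifies that S3 Block Public Access is fully enabled on a list of buckets:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    buckets = ["example-reports-bucket", "example-logs-bucket"]  # hypothetical names

    for bucket in buckets:
        try:
            cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
            if all(cfg.values()):
                print(f"{bucket}: public access fully blocked")
            else:
                print(f"{bucket}: public access NOT fully blocked -> {cfg}")
        except ClientError as err:
            # NoSuchPublicAccessBlockConfiguration means no block has been configured at all.
            print(f"{bucket}: no public access block configured ({err.response['Error']['Code']})")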

AWS Management Console Use Case

To illustrate the principles more fully, let’s look at a concrete use case: enabling zero-trust access to AWS resources such as the Management Console.

Given the flexibility and scalability of cloud infrastructure, many companies are moving portions of their environment to the cloud. AWS often hosts critical components of a company’s infrastructure or codebase. Developers use the AWS Management Console to access, review, and build out their AWS environment. Another common resource is Amazon WorkSpaces, the AWS remote desktop service.

However, providing a simple second factor for access to AWS is often a challenge. Customers recreating users in AWS IAM lose out on the value of consistent corporate credentials and a multi-factor authentication (MFA) solution with coverage beyond AWS resources. For customers porting their corporate credentials via AWS Directory Service, AWS does not currently offer an MFA solution.
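To show what an MFA-gated access flow looks like at the API level, here is a minimal sketch with boto3 and AWS STS: temporary credentials are only issued when a valid one-time code from the user’s MFA device is supplied. The account ID, role, and MFA device ARN are hypothetical.

    import boto3

    sts = boto3.client("sts")

    # Hypothetical identifiers; substitute your own account, role, and MFA device.
    role_arn = "arn:aws:iam::123456789012:role/console-admin"
    mfa_serial = "arn:aws:iam::123456789012:mfa/alice"
    token_code = input("Enter the 6-digit code from your MFA device: ")

    # AssumeRole fails unless the role's trust policy requires MFA and the
    # supplied token code is valid and current.
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="zero-trust-console-session",
        SerialNumber=mfa_serial,
        TokenCode=token_code,
        DurationSeconds=3600,  # short-lived credentials limit the blast radius
    )

    creds = response["Credentials"]
    print("Temporary access key:", creds["AccessKeyId"])
    print("Session expires at:", creds["Expiration"])

Federating through your corporate identity provider instead of recreating standalone IAM users keeps the corporate credential and its MFA policy authoritative.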

Several vendors offer solutions that give AWS customers secure access to their AWS resources.

5 Ways To Increase Security in AWS

  1. Vault AWS Root Accounts and Federate Access for AWS Console
  2. Apply a Common Security Model and Consolidate Identities
  3. Enforce Least Privilege Access
  4. Audit Everything
  5. Apply Multi-Factor Authentication Everywhere (a small audit sketch covering points 4 and 5 follows this list)
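As a small combined illustration of “Audit Everything” and “Apply Multi-Factor Authentication Everywhere”, the boto3 sketch below flags IAM users without an MFA device and counts console sign-in events recorded by CloudTrail over the last day. It is a starting point, not a complete audit.

    from datetime import datetime, timedelta, timezone
    import boto3

    iam = boto3.client("iam")
    cloudtrail = boto3.client("cloudtrail")

    # 1. Flag every IAM user that has no MFA device registered.
    users_without_mfa = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            if not iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]:
                users_without_mfa.append(user["UserName"])
    print("Users without MFA:", ", ".join(users_without_mfa) or "none")

    # 2. Count console sign-in events from the last 24 hours.
    now = datetime.now(timezone.utc)
    sign_ins = 0
    for page in cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=now - timedelta(days=1),
        EndTime=now,
    ):
        sign_ins += len(page["Events"])
    print("Console sign-ins in the last 24 hours:", sign_ins)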

Hopefully you can put this blog and these insights to good use when setting up your own Zero Trust architecture.

 
