Unfortunately, we can’t simply deploy a container with a custom service account using kubectl; we have to add serviceAccountName in the deployment manifest file. Now we have the app pod running with a custom service account. The next step is to define this service account redis-auth-sa as the principal in the Istio authorization policy. Please note that the namespace is also considered as a factor in the authorization process. The authorization policy allows access only if the app containers meet both conditions:

- Running in the namespace namespace-a
- Running with the custom service account redis-auth-sa

Once deployed, only App A.1 is allowed to access the Redis server because it meets both conditions. App A.2 is not allowed because it is not running with an approved service account, although it is running in namespace-a. App B.1 and App B.2 are not allowed because they are not running in the namespace namespace-a.

Enable password authentication

By default, Redis doesn’t enforce any password authentication. Prior to Redis 6, it provides a tiny layer of authentication that is optionally turned on by either editing redis.conf or passing --requirepass when starting the server. When the authorization layer is enabled, Redis will refuse any query from unauthenticated clients. A client can authenticate itself by sending the AUTH command followed by the password. Since Redis 6, a new ACL system has been introduced which allows username + password authentication.

As recommended by the Redis security model, the password should be long enough to prevent brute-force attacks, for two reasons:

- Redis is very fast at serving queries, so many passwords per second can be tested by an external client.
- The Redis password is stored inside the redis.conf file and inside the client configuration, so it does not need to be remembered by the system administrator, and thus it can be very long.

Now, let’s look at how we extend the existing solution to enable password authentication.
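The requirepass / AUTH flow described above can be sketched with redis-cli. This is an illustration, not part of the post’s manifests, and the password value is a placeholder (a real one should be long and random):

```shell
# Pre-Redis-6 style: enable the authentication layer in redis.conf
#   requirepass my-very-long-randomly-generated-password
# or at server startup:
#   redis-server --requirepass my-very-long-randomly-generated-password

# Unauthenticated clients are refused:
redis-cli PING
# (error) NOAUTH Authentication required.

# Authenticate with AUTH, or pass the password with -a:
redis-cli -a my-very-long-randomly-generated-password PING
# PONG

# Since Redis 6, the ACL system also accepts a username + password:
redis-cli AUTH default my-very-long-randomly-generated-password
```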
All apps are unable to connect to the Redis server unless they are deployed in the same pod. Thanks to Istio, we don’t have to stick to this limited solution. The magic is to use the Istio Authorization Policy. There are 2 ways to apply the restriction:

- Apply on the namespace level. That means any pods can access the Redis server as long as they are running in the same namespace.
- Apply on the container level with a custom service account. In this case, we need to run the container with a custom service account (not the default one), and then define this service account as the source in the Istio authorization policy. That means only the containers running with an approved service account and in the same namespace can access the Redis server.

Let’s look at how we implement these 2 restrictions now.

In the first scenario, we can apply an Istio policy to allow only pods running in “Namespace A” to have access to port 6379, which is the default Redis server listening port. In the policy, we define namespace-a as the source and port 6379 as the destination. Please note that both App A.1 and App A.2 are running with the default service account. Once deployed, both App A.1 and App A.2 are allowed to access the Redis server because they are running in the same namespace namespace-a. App B.1 and App B.2 are not allowed to access it because they are not running in the namespace namespace-a.

In the second scenario, we first need to create a custom service account and then use this service account to run the containers.
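A minimal sketch of that custom service account, reconstructed from the redis-auth-sa and namespace-a names used in this post (the exact manifest is an assumption):

```yaml
# Custom service account for the apps that are allowed to reach Redis.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis-auth-sa      # name the authorization policy will reference
  namespace: namespace-a   # the namespace the policy also checks
```

Once applied with kubectl apply -f, deployments can reference it via serviceAccountName.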
The Redis general security model suggests that restriction at the network layer should be your №1 consideration. How can we restrict the apps’ access to the Redis server? The easiest solution is to bind the Redis server to a single interface by adding bind 127.0.0.1 to the redis.conf file. However, that blocks all external traffic. We could use a firewall to restrict access, but let’s look at an alternative approach that utilizes the power of the Istio service mesh. This diagram illustrates an example solution implemented on a Google Kubernetes Engine (GKE) cluster with Istio enabled and STRICT mode enforced in Istio peer authentication, which means workloads only accept mutual TLS (mTLS) traffic.
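The STRICT peer-authentication setting mentioned above is typically enforced mesh-wide with a PeerAuthentication resource like this sketch (the mesh-wide placement in istio-system is an assumption; the post only says STRICT mode is enforced):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # Istio root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT            # workloads accept only mutual TLS traffic
```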
However, Redis is designed to be accessed by trusted clients inside trusted environments. That means a vanilla installation of the Redis server can be directly accessed by untrusted clients via its TCP port or UNIX socket. This blog shares some ideas on how to secure a Redis server running in Kubernetes.
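For context, such a vanilla Redis server is often installed into a cluster with Helm; the Bitnami chart shown here is one popular option, not something this post prescribes:

```shell
# Add a chart repository and install a stock Redis release
# (repo, chart, and release names are illustrative).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --namespace redis --create-namespace
```

Out of the box, nothing stops other workloads in the cluster from opening TCP connections to port 6379 on this server.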
Redis is an open-source, in-memory data structure store, or key-value store, used as a database, cache, and message broker. It offers a rich set of features that make it effective for a wide range of use cases, and it is getting more and more popular in the Kubernetes ecosystem.