Is there a way with okctl and a cluster configuration file (YAML) to avoid storing passwords in the clear? Ideally, could we store them somewhere else?
The following configuration keys are considered sensitive:
For each of these we follow a specific procedure to determine the value:
- We first check whether the given value starts with a URI scheme, where we handle the following:
  - s3:// - this is assumed to be a path to a file in AWS S3
  - file:// - this is assumed to be a local path, that is, on the server where okctl is executed
  - adl:// - this is assumed to be a path to a file in Azure's ADLS Gen1
  - base64:// - this is assumed to be a Base64-encoded value
For the file-based schemes we use the vendor's API to retrieve the file content. This requires that the server be allowed to do so. For instance, for S3 there should be a bucket policy that allows the IAM role/user of the machine you are executing okctl on to read the file.
For Base64-encoded values we simply decode what comes after the base64:// prefix and set that as the configured secret.
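For illustration, a Base64-encoded value can be produced with the standard base64 utility in a POSIX shell (the password shown is just a placeholder):

```shell
# Encode the raw password; printf '%s' avoids including a trailing newline
password='LDAPpasswordSeCuRe@4878274'
encoded=$(printf '%s' "$password" | base64)

# Prefix the scheme to form the configuration value
echo "base64://${encoded}"
```

The printed value can then be used directly as the sensitive configuration value; okctl strips the base64:// prefix and decodes the remainder.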
If no supported scheme is given, we assume that the value is provided as-is, that is, we take it as a raw string and set it as the configured secret.
For example, the value for the LDAP service password could be set to the following:
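As a hypothetical sketch (the exact configuration key name depends on your cluster configuration schema), the value could point at a file stored in S3:

```yaml
# Hypothetical key name; the S3 path matches the upload example below
ldap_service_password: s3://examplecorp/odas_configs/dev/ldap_passwd.txt
```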
Before that, you would store the password in the protected bucket that only the current machine can read from. For instance, from an authorized machine, run the following command (changing the values to your needs, of course):
echo "LDAPpasswordSeCuRe@4878274" | aws s3 cp - s3://examplecorp/odas_configs/dev/ldap_passwd.txt
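As noted above, the machine running okctl also needs read access to that object. A minimal bucket policy sketch, assuming the bucket name from the example (the role ARN is hypothetical), might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOkctlConfigRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/okctl-machine-role" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplecorp/odas_configs/dev/*"
    }
  ]
}
```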
No matter how you set the sensitive value, once it is retrieved we store it as a Kubernetes Secret, which can be thought of as a protected key/value map. Our services are configured to mount the necessary secrets as volumes (as per the Kubernetes best practices) and read the content safely when the service is deployed.
Alternatively, advanced users running a Kubernetes service, such as AWS EKS or a self-managed one, can first follow the approach to deploy the services as outlined in our documentation. Then you may modify the provided Kubernetes YAML files to change the way the secrets are mapped into the containers. For example, our Planner Deployment file contains the following lines:
- mountPath: /etc/secrets
- name: secrets
This causes Kubernetes to mount the volume secrets into the container at the path /etc/secrets. If you decide to set up your own way to provide secrets, you can adjust the path accordingly. Note that the Secret named secrets is managed by okctl and should be left as-is.
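Putting those two lines in context, a sketch of the relevant Deployment sections could look like the following (the container name is illustrative; the volume references the okctl-managed Secret named secrets):

```yaml
# Sketch only; field names follow the standard Kubernetes Deployment schema
spec:
  template:
    spec:
      containers:
        - name: planner                # illustrative container name
          volumeMounts:
            - mountPath: /etc/secrets
              name: secrets
      volumes:
        - name: secrets
          secret:
            secretName: secrets        # the okctl-managed Secret
```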
For each configuration key that loads the value from the configured secret, we add, for instance, the following environment variable and value to the containers (via a referenced ConfigMap object):
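As a hypothetical illustration (the entry name is an assumption, while the secret key name is taken from the Secret listing below), such a ConfigMap entry could map a configuration key to the mounted secret file:

```yaml
# Hypothetical ConfigMap data entry pointing at a file in the mounted volume
data:
  USERS_FILE_LDAP: /etc/secrets/USERS_FILE_LDAP_0
```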
The following shows the Okera managed secrets object and its content for a particular cluster:
$ kubectl describe secret secrets
JWT_PRIVATE_KEY_0: 3242 bytes
JWT_PUBLIC_KEY_0: 807 bytes
SSL_CERTIFICATE_FILE_0: 3802 bytes
SSL_KEY_FILE_0: 1704 bytes
SYSTEM_TOKEN_0: 839 bytes
USERS_FILE_LDAP_0: 2456 bytes
The content varies depending on how the cluster is configured.
As a pattern to use your own Secret object, follow these steps:
- Create your own Secret object and store the value under a unique key name
- Configure the new Secret object to be mounted into the container as a volume at a specific base path (see volumeMounts). Note: Do not use /etc/secrets, but your own unique path instead.
- Create your own ConfigMap object and connect an Okera config name with the path to the newly created secret and its unique name
- Configure the Okera Deployment of your choice to include a reference to the newly created ConfigMap as the last entry in that list to give your own keys a higher priority (see