I found that the s3cmd driver, for AWS at least, appears to need an "encrypt" parameter set to true in order to encrypt bucket contents. This is not a critical fix, since the S3 bucket managed by wal-e is the only one that strictly needs encryption, but my department's policy will probably require that all three buckets be encrypted.
(In case somebody slips up and uploads an image or a builder artifact with a key embedded in it. The buckets will all be private, and the release artifacts private too, so we would not usually need to emergency-rotate these keys; but there is always the chance that someone accidentally commits a production key to a git repo or pulls one in through a Dockerfile ADD, and then the keys would have to be scheduled for rotation.)
To alleviate this, I want to enable encryption on all three buckets via s3-uploader. That way our InfoSec department can enforce full encryption by policy on all of our S3 buckets, and we will never accidentally store unencrypted keys.
Does anyone know what encryption support looks like on the other platforms, such as Azure, GCE, and Swift? We are on AWS/S3, and I know that encryption is an optional flag passed with the PUT. (If we can implement this in a way that supports all existing cloud storage options, I'd expect it to be a welcome change in trunk.)
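For reference, on S3 the "optional flag passed with the PUT" is the `ServerSideEncryption` parameter. A minimal sketch of what honoring an encrypt setting could look like, assuming boto3 is the client (the bucket/key names and the `put_kwargs` helper are made up for illustration):

```python
def put_kwargs(bucket, key, body, encrypt=True):
    """Build the keyword arguments for an S3 put_object call,
    adding the ServerSideEncryption parameter when encrypt is set."""
    kwargs = {"Bucket": bucket, "Key": key, "Body": body}
    if encrypt:
        # "AES256" selects SSE-S3 (S3-managed keys); "aws:kms" would select SSE-KMS.
        kwargs["ServerSideEncryption"] = "AES256"
    return kwargs
```

With boto3 these would be passed straight through, e.g. `boto3.client("s3").put_object(**put_kwargs("my-bucket", "some/key", data))`.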
If encryption is supported on Minio too, then we should use it, probably behind an optional global.encrypt flag in values.yaml.
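In values.yaml, the proposed flag might look something like this (global.encrypt is a suggested name, not an existing setting):

```yaml
global:
  # Request server-side encryption on all object storage buckets,
  # on platforms that support it (proposed setting, not yet implemented).
  encrypt: true
```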
If not, it's no big deal: we don't currently have a persistence story for Minio, and there's really no need to store things encrypted in memory.
It would also be good to check wal-e and see what the encryption story looks like across its supported platforms, since wal-e connects to S3 directly and passes the encrypt flag itself rather than going through s3-uploader.
So if the user passes global.encrypt, we ought to either find a way to honor it or balk at the setting on the target platform (balk and raise an error).
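That honor-or-balk behavior could be sketched as follows. The capability table and function name are assumptions for illustration, not the project's actual API; which platforms really support server-side encryption is exactly the open question above:

```python
# Hypothetical capability table: which storage backends support
# server-side encryption in this sketch (placeholders, to be confirmed).
SUPPORTS_ENCRYPTION = {
    "s3": True,       # SSE via the ServerSideEncryption parameter on PUT
    "minio": False,   # placeholder: to be confirmed upstream
    "azure": False,   # placeholder
    "gcs": False,     # placeholder
    "swift": False,   # placeholder
}

def check_encrypt_setting(platform, encrypt_requested):
    """Honor global.encrypt when the platform can, otherwise balk with an
    error rather than silently storing data unencrypted."""
    if not encrypt_requested:
        return False
    if not SUPPORTS_ENCRYPTION.get(platform, False):
        raise ValueError(
            "global.encrypt is set but platform %r does not support "
            "server-side encryption" % platform
        )
    return True
```

Erroring out (rather than warning and continuing) seems safer here: if InfoSec policy demands encryption, silently falling back to plaintext storage would defeat the point.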