The steady torrent of reports of “misconfigured” S3 buckets contributing to egregious breaches of customer data has become an epidemic, unfairly tarnishing the reputation of Amazon Web Services’ (AWS) outstanding object store. Worse, these breaches are completely avoidable through simple automated compliance enforcement. Tying my last two posts together, let’s take a quick look at how applying a DevOps mentality could save these companies the public embarrassment, and the expense of remediation, that result from human carelessness.
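As a concrete illustration of automated compliance enforcement, here is a minimal sketch of a scheduled check that flags buckets whose ACLs grant access to everyone. It assumes boto3 and configured AWS credentials; the helper names are mine, not part of any AWS tooling.

```python
# Sketch of an automated S3 compliance check. The function names here
# (is_public_grant, public_buckets) are illustrative, not an AWS API.

# Canned grantee URIs AWS uses for "everyone" and "any AWS account".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_public_grant(grant):
    """Return True if an ACL grant exposes the bucket to all users."""
    grantee = grant.get("Grantee", {})
    return (grantee.get("Type") == "Group"
            and grantee.get("URI") in PUBLIC_GRANTEES)

def public_buckets(s3_client):
    """Yield names of buckets whose ACLs contain a public grant."""
    for bucket in s3_client.list_buckets()["Buckets"]:
        acl = s3_client.get_bucket_acl(Bucket=bucket["Name"])
        if any(is_public_grant(g) for g in acl["Grants"]):
            yield bucket["Name"]

# Example usage (requires boto3 and AWS credentials):
#   import boto3
#   for name in public_buckets(boto3.client("s3")):
#       print("ALERT: bucket", name, "is publicly readable")
```

Run from a cron job or a scheduled Lambda, a check like this catches a careless ACL change long before it makes headlines.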
From a business perspective, cloud migrations are driven largely by a desire for flexibility and resilience. When we move systems to the cloud, we expect them to be both more adaptable and more reliable than on-premises solutions. These two objectives are somewhat at odds, however. The Jenga tower is most likely to fall when you are moving a piece. Adding flexibility naturally introduces change, which puts stability at risk.
With a number of recent high-profile leaks of personal data from the AWS S3 service, it seems like a good time to review the security mechanisms that govern the storage and sharing of data on AWS. If your organization uses S3 to store PHI, PII, or other sensitive data, you should be aware that failure to properly secure, restrict access to, and log access to this information can carry hefty fines and even jail time. Leaking users’ personal data can also be detrimental to your business.
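One of the simplest mechanisms for restricting access is S3’s Block Public Access settings, which override public ACLs and bucket policies. Below is a hedged sketch of applying them to a bucket with boto3; the bucket name and helper names are illustrative.

```python
# Sketch: lock down a bucket with S3 Block Public Access settings.
# "lock_down" and "public_access_block_config" are illustrative helpers,
# not part of boto3; the call itself uses the real
# put_public_access_block S3 client operation.

def public_access_block_config(enabled=True):
    """Build the configuration dict for put_public_access_block."""
    return {
        "BlockPublicAcls": enabled,       # reject new public ACLs
        "IgnorePublicAcls": enabled,      # ignore existing public ACLs
        "BlockPublicPolicy": enabled,     # reject public bucket policies
        "RestrictPublicBuckets": enabled, # limit public-policy access
    }

def lock_down(bucket_name, s3_client):
    """Apply all four public-access blocks to the given bucket."""
    s3_client.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=public_access_block_config(),
    )

# Example usage (requires boto3 and AWS credentials):
#   import boto3
#   lock_down("example-bucket", boto3.client("s3"))
```

Paired with server-side encryption and access logging, this closes off the most common path to an accidental public bucket.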
Serverless application architectures are still an emerging art form. Among the big challenges serverless architectures pose are traceability and debugging. Because we are dealing with many discrete entities performing as a single cohesive unit, we want to be able to see the impact of those entities on the system as a whole.
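One common way to stitch those discrete entities back together is to thread a correlation ID through every event, so logs from separate functions can be grouped by request. The sketch below assumes Lambda-style handlers; the `correlation_id` key and function names are my own conventions, not a standard.

```python
# Minimal sketch of propagating a correlation ID across serverless
# functions. The "correlation_id" event key is an assumed convention.
import uuid

def with_correlation_id(event):
    """Return a copy of the event carrying a correlation ID,
    generating a fresh one only if none is present."""
    event = dict(event)
    event.setdefault("correlation_id", str(uuid.uuid4()))
    return event

def handler(event, context=None):
    """Lambda-style handler: tag every log line with the request's ID."""
    event = with_correlation_id(event)
    # Each downstream function logs the same ID, so one request can be
    # traced across the whole system.
    print(f'[{event["correlation_id"]}] processing {event.get("action")}')
    return event
```

Because `with_correlation_id` preserves an existing ID, the first function in a chain mints it and every subsequent function simply passes it along.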