Amazon Adds Versioning Support for S3 Storage

Amazon Web Services has announced the launch of versioning support for its S3 cloud storage product. Versioning helps ensure that you (or your users) never accidentally delete or overwrite an object: S3 versioning lets you roll back to a safe version if needed. You can also use versioning for storage and archiving, since you can now just keep writing to the same file name and each new write will create a version.

Amazon notes, “Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.”
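To make those semantics concrete, here is a minimal, purely illustrative Python sketch of how a version-enabled bucket behaves — this is a toy model, not Amazon's implementation: each PUT appends a new version, a plain GET returns the most recent one, older versions stay retrievable by version id, and a DELETE only adds a marker.

```python
class VersionedBucket:
    """Toy model of an S3 bucket with versioning enabled (illustrative only)."""

    def __init__(self):
        # key -> list of (version_id, data); data is None for a delete marker
        self._versions = {}

    def put(self, key, data):
        history = self._versions.setdefault(key, [])
        version_id = len(history)  # real S3 uses opaque version ids
        history.append((version_id, data))
        return version_id

    def get(self, key, version_id=None):
        history = self._versions.get(key, [])
        if not history:
            raise KeyError(key)
        if version_id is None:
            vid, data = history[-1]  # plain GET returns the most recent version
        else:
            vid, data = history[version_id]
        if data is None:
            raise KeyError(f"{key} (delete marker)")
        return data

    def delete(self, key):
        # A DELETE just writes a delete marker; older versions are preserved.
        self.put(key, None)


bucket = VersionedBucket()
v0 = bucket.put("photo.jpg", b"original")
v1 = bucket.put("photo.jpg", b"edited")
print(bucket.get("photo.jpg"))        # latest write wins: b'edited'
print(bucket.get("photo.jpg", v0))    # older version still retrievable: b'original'
bucket.delete("photo.jpg")
print(bucket.get("photo.jpg", v1))    # survives the delete: b'edited'
```

Note that after the delete, a plain GET fails (it hits the delete marker) while version-specific GETs keep working — exactly the rollback safety net the announcement describes.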

S3 versioning is billed at the standard data storage rates, so if you store two versions of an image, you will be charged for the storage of both objects. Versioning is off by default: to use it, you MUST enable it on the bucket – otherwise the bucket behaves exactly as it did before.

Amazon Lowers Prices for Amazon EC2 Reserved Instances

Amazon’s Web Services division has announced a new pricing model for EC2 Reserved Instances today. The Reserved Instances offering, launched earlier this year, is basically a pre-paid option for the EC2 service: Amazon offers a lower rate in return for you locking into a one- or three-year contract.

From what I can tell, the pricing dropped quite a bit (some in the comments say 30%) – for example:

  • small instance: the one-year price dropped from $325 to $227.50 and the three-year price dropped from $500 to $350
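The 30% figure mentioned in the comments checks out against those example prices – a quick back-of-the-envelope calculation:

```python
# Small-instance Reserved Instance upfront fees, before and after the price cut
old_one_year, new_one_year = 325.00, 227.50
old_three_year, new_three_year = 500.00, 350.00

one_year_cut = (old_one_year - new_one_year) / old_one_year
three_year_cut = (old_three_year - new_three_year) / old_three_year

print(f"one-year cut:   {one_year_cut:.0%}")    # 30%
print(f"three-year cut: {three_year_cut:.0%}")  # 30%
```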

There’s no doubt that Amazon wants to own the web services market and continuing to provide more service for lower prices keeps the momentum moving forward. Check out my notes from the Amazon web services seminar earlier this year.

Amazon Cloud Computing Seminar Recap

This morning I attended Amazon’s Executive Cloud Computing Workshop. I was able to snap some photos and jot down some notes I’d like to share. The presenters included Werner Vogels – VP & CTO at Amazon – and Marten Mickos – Sun SVP. I very much enjoyed Werner’s discussion – he basically took us on a tour of the history of AWS (EC2, S3, etc.) and some examples of how customers are utilizing its cloud infrastructure services.

Werner explained that using Amazon web services (AWS) helps companies move from capital expenses to variable costs. The basic idea is that instead of buying enough hardware to make sure you can handle spikes, AWS can grow and shrink as needed.

Here you can see how fast AWS is growing and how, in mid-2007, AWS bandwidth passed the bandwidth used by Amazon itself. Werner said that if they showed 2008 on the chart, the Amazon line would be gone – the growth has been that big. In fact, he said that Amazon’s e-commerce sites combined are only a moderate customer of AWS.


Amazon Discusses The S3 Downtime

Amazon has posted an announcement regarding what happened last weekend with its S3 storage service and the downtime of more than 8 hours. Our sister site CenterNetworks covered the outage extensively. Overall the downtime ran from 8:40am Pacific Time to 5:00pm Pacific Time – over 8 hours. They call it an "availability event" – I need to add this to my list of synonyms for the words dead, down, outage and not working.

Here’s their final conclusion:

We’ve now determined that message corruption was the cause of the server-to-server communication problems. More specifically, we found that there were a handful of messages on Sunday morning that had a single bit corrupted such that the message was still intelligible, but the system state information was incorrect. We use MD5 checksums throughout the system, for example, to prevent, detect, and recover from corruption that can occur during receipt, storage, and retrieval of customers’ objects. However, we didn’t have the same protection in place to detect whether this particular internal state information had been corrupted. As a result, when the corruption occurred, we didn’t detect it and it spread throughout the system causing the symptoms described above. We hadn’t encountered server-to-server communication issues of this scale before and, as a result, it took some time during the event to diagnose and recover from it.
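Amazon’s point about checksums is easy to demonstrate: flipping even a single bit produces a completely different MD5 digest, so a checksum verified on receipt would have caught the corrupted message before the bad state spread. A minimal Python illustration (the message text here is made up; MD5 is used for corruption detection, as in Amazon’s quote, not for security):

```python
import hashlib

# A hypothetical internal state message passed between servers
message = bytearray(b"internal system state: server 42 is healthy")
original_digest = hashlib.md5(bytes(message)).hexdigest()

# Flip a single bit, as in the corrupted messages Amazon describes
message[0] ^= 0x01
corrupted_digest = hashlib.md5(bytes(message)).hexdigest()

# The digests no longer match, so the corruption is detectable
print(original_digest != corrupted_digest)  # True
```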

As cloud computing becomes more mainstream, will we see more downtime as more developers move to this type of hosting solution over more traditional options?