Amazon Web Services has announced versioning support for its S3 cloud storage service. With versioning enabled, you (or your users) can no longer permanently lose an object by accidentally overwriting or deleting it – S3 lets you roll back to an earlier version if needed. Versioning is also handy for storage and archiving, since you can keep writing to the same key and each new write creates a new version.
Amazon notes, “Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.”
S3 versioning is billed at the standard data storage rates, so if you store two versions of an image, you pay for the storage of both objects. Note that versioning is off by default – you must enable it on a bucket, otherwise the bucket behaves exactly as it did before.
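To make the flow concrete, here’s a hedged sketch using the boto3 SDK (which postdates this post – at the time you’d hit the REST API or an early S3 library). The bucket name and key are placeholders, and the code assumes AWS credentials are already configured.

```python
def demo_versioning(bucket: str = "my-example-bucket"):
    """Sketch: enable versioning, overwrite a key, read back an old version."""
    import boto3  # imported inside so the sketch can be defined without AWS set up

    s3 = boto3.client("s3")

    # Versioning is off by default and must be switched on per bucket.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Two PUTs to the same key now create two versions instead of one object.
    v1 = s3.put_object(Bucket=bucket, Key="report.txt", Body=b"draft")
    s3.put_object(Bucket=bucket, Key="report.txt", Body=b"final")

    # A plain GET returns the newest version...
    latest = s3.get_object(Bucket=bucket, Key="report.txt")["Body"].read()

    # ...while older versions stay retrievable by VersionId.
    old = s3.get_object(Bucket=bucket, Key="report.txt",
                        VersionId=v1["VersionId"])["Body"].read()
    return latest, old
```

Remember that both versions count toward your storage bill until you delete one.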
Amazon’s Web Services division announced a new pricing model for EC2 reserved instances today. The reserved instances offering launched earlier this year and is essentially a pre-paid option for EC2 – Amazon offers a lower rate in return for you locking into a one- or three-year contract.
From what I can tell, the pricing dropped quite a bit (some in the comments say 30%) – for example:
the small instance one-year price dropped from $325 to $227.50, and the three-year price dropped from $500 to $350.
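The commenters’ 30% figure checks out against both published prices – a quick sanity check:

```python
# Verify the reported cut on small-instance reserved pricing.
old_prices = {"one_year": 325.00, "three_year": 500.00}
new_prices = {"one_year": 227.50, "three_year": 350.00}

for term, old in old_prices.items():
    drop = (old - new_prices[term]) / old
    print(term, f"{drop:.0%}")  # both terms come out to exactly 30%
```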
There’s no doubt that Amazon wants to own the web services market, and continuing to provide more service for lower prices keeps the momentum moving forward. Check out my notes from the Amazon Web Services seminar earlier this year.
It seems nearly every startup I talk to is using Amazon Web Services (AWS) – whether S3 for storage, EC2 for compute, or one of the other options. Here at CN we use S3 to store nearly all of our static files.
Today Amazon announced its third annual “Startup Challenge,” which offers startups that are using AWS a chance to win a variety of prizes and service credits. Startups in the United States, United Kingdom, Germany, and Israel are eligible to enter, and the entry period ends on August 25th. Also, startups must have earned no more than $5 million in annual revenue and raised no more than $5 million in venture capital funding.
The top prize is $50,000 in cash, $50,000 in Amazon Web Services (AWS) credits, mentoring sessions from an AWS technical expert, and premium gold support for one year. There are a variety of other winners as well.
What’s great about the Startup Challenge is that, unlike many startup contests, Amazon doesn’t appear to be taking any equity for the prize amounts (of course you should check the rules to verify). My take is simple – if you use AWS in any form, submit your entry because, at a minimum, you get a $25 credit (which for CN is like six months of free service).
I added a 404 tracking database to my blogs so I can track what pages are broken. The database includes the IP address of the computer attempting to access the item. Most of the listings in the database are webpages and/or images that are broken.
But what I’ve also noticed is that there are a huge number of rows for the following item (with variations):
When I look up the IP address (174.129.123.x), it resolves to Amazon Web Services. I have an AWS account but have no idea why they would be trying to hit this file 100 times a day.
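For anyone wanting to do something similar, here’s a minimal, hypothetical sketch of tallying 404 hits per client IP from an Apache-style access log (the regex, log format, and sample lines are illustrative assumptions, not my actual setup):

```python
import re
from collections import Counter

# Common Log Format: ip ident user [timestamp] "request" status size
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) ')

def count_404s(lines):
    """Return a Counter of 404 responses keyed by client IP."""
    hits = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group(2) == "404":
            hits[m.group(1)] += 1
    return hits

sample = [
    '174.129.123.5 - - [01/Aug/2009:12:00:00 -0700] "GET /missing.gif HTTP/1.1" 404 209',
    '10.0.0.1 - - [01/Aug/2009:12:00:01 -0700] "GET /index.html HTTP/1.1" 200 5120',
]
print(count_404s(sample))  # only the AWS-range address shows a 404 hit
```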
Anyone out there have an idea on why or how to correct the issue? Thanks in advance.
Amazon has posted an announcement regarding what happened last weekend with their S3 storage service and the roughly eight hours of downtime. Our sister site CenterNetworks covered the outage extensively. Overall, the downtime ran more than eight hours, from 8:40am to 5:00pm Pacific Time. They call it an “availability event” – I need to add that to my list of synonyms for the words dead, down, outage, and not working.
Here’s their final conclusion:
We’ve now determined that message corruption was the cause of the server-to-server communication problems. More specifically, we found that there were a handful of messages on Sunday morning that had a single bit corrupted such that the message was still intelligible, but the system state information was incorrect. We use MD5 checksums throughout the system, for example, to prevent, detect, and recover from corruption that can occur during receipt, storage, and retrieval of customers’ objects. However, we didn’t have the same protection in place to detect whether this particular internal state information had been corrupted. As a result, when the corruption occurred, we didn’t detect it and it spread throughout the system causing the symptoms described above. We hadn’t encountered server-to-server communication issues of this scale before and, as a result, it took some time during the event to diagnose and recover from it.
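The fix Amazon describes – checksumming internal state messages the same way customer objects already were – looks roughly like this in spirit (a toy illustration of the technique, not Amazon’s actual code):

```python
import hashlib

def seal(payload: bytes) -> bytes:
    """Prefix a message with its MD5 digest before sending."""
    return hashlib.md5(payload).digest() + payload

def open_checked(message: bytes) -> bytes:
    """Verify the digest on receipt; reject corrupted messages."""
    digest, payload = message[:16], message[16:]
    if hashlib.md5(payload).digest() != digest:
        raise ValueError("message corrupted in transit")
    return payload

msg = seal(b"server-state: healthy")
assert open_checked(msg) == b"server-state: healthy"

# Flip a single bit -- the payload is still "intelligible" as text,
# but the checksum mismatch catches it before it can spread bad state.
corrupted = msg[:-1] + bytes([msg[-1] ^ 0x01])
```

Without that check, a one-bit flip in internal gossip passed silently through the system, which is exactly the failure mode in the postmortem.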
As cloud computing becomes more mainstream, will we see more downtime as more developers move to this type of hosting solution over more traditional options?