Always nice to see new options, but I wish Amazon hadn't dropped the ephemeral disk. Quite apart from the issue of surviving EBS failures, ephemeral disks are wonderful for swap space -- since the lifetime of swap exactly matches the lifetime of ephemeral disks. I even have a startup script in FreeBSD which autoconfigures swap space striped across a few GB of ephemeral disks.
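The idea behind that script can be sketched in a few lines. This is a hypothetical Python rendering of the logic (the real thing is an sh rc script); the device glob is an assumption — FreeBSD exposes Xen disks as /dev/xbd*, Linux as /dev/xvd* — so adjust for your platform:

```python
# Hypothetical sketch of the boot-time logic described above: find the
# instance's ephemeral disks and enable each one as swap.  With several
# swap devices enabled at equal priority, the kernel interleaves
# page-outs across them -- effectively striped swap.
import glob
import subprocess

EPHEMERAL_GLOB = "/dev/xvd[b-e]"  # assumed ephemeral device names

def enable_ephemeral_swap(dry_run=True):
    """Return (and optionally run) a swapon command per ephemeral disk."""
    commands = [["swapon", dev] for dev in sorted(glob.glob(EPHEMERAL_GLOB))]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands

print(enable_ephemeral_swap())  # dry run: just show what would be done
```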
It is quite possible that this is exactly why these new instance types can exist: since a lot of users don't use the ephemeral drives at all, even for swap, Amazon can now build a set of servers that are just CPUs and network cards, without the large disk drives. Some tradeoff has to be made to improve some of the specs without raising the price, so saying it's a shame they didn't include everything seems to miss the point.
> some tradeoff has to be made to increase some of the stats but not the price
I disagree. According to Wikipedia, it's been six years since EC2 launched. 2006-equivalent hardware is much cheaper in 2012 thanks to Moore's Law (or alternatively, the specs you can buy for a given number of dollars in 2012 are much better than in 2006), and the original EC2 machines have probably been in service long enough to generate sufficient revenue to recover their upfront investment and make a profit. So it wouldn't be unreasonable for Amazon to start retiring those old machines and use the better price/performance of new hardware as a business justification for giving customers some combination of price breaks and spec upgrades.
That would make sense across the board; in this case, though, you just have to compare prices between different current offerings. These new instance types are similarly priced yet have higher specs in some areas, and I'd argue that is a tradeoff against the disks.
If you feel Amazon should be cheaper overall due to Moore's Law (and remember: they have lowered their prices numerous times since launch, on everything from hardware to bandwidth), then the offerings we are comparing against would also be cheaper (as these are all just virtualized subsets of larger machines).
Specifically, the new instances (m3.*) are only available with EBS. There is probably a technical reason for this -- e.g., maybe they move these bigger instances around between servers, so they can't rely on local disk? Would love to know why...
I'm guessing that Amazon has wanted to phase out ephemeral disk on non-HPC instances for a long time, since it costs them money and most people don't use it; and the availability of EBS-optimized instances and provisioned-IOPS disks was what they needed to satisfy the people who insisted that ephemeral disk was the only way to get reliable I/O performance.
And I'm sympathetic to that argument where large amounts of disk are concerned... but I do wish we could have a small amount of ephemeral disk to be used for swap space.
I think there's a reliability argument to be made as well. Most (all?) of the EC2 outages and partial outages over the past year or so have had their root cause somewhere in EBS. It's also why I hesitate to touch ELBs or RDS, since they both rely on EBS.
I think it's moot. For heavy users, the use case for ephemeral disk is better served by their I/O instances. For light users, I think Amazon is betting that EBS is 'stable enough' by now. If you really don't like EBS, there's always the old instances, Amazon RDS, Amazon DynamoDB...
Yeah, application servers (read: web front-ends) can easily run entirely from memory, so it's annoying to have the rest of the OS dependent on EBS when it really needn't be. (Although, to be honest, I don't know how easy or hard it is to pull the disk out from under a booted instance without customizing the kernel.)
All of the disk performance benchmarks (IOPS, throughput, buffered vs. unbuffered) I ran on ephemeral disks exactly matched the performance of EBS. It could be a coincidence that my tests only ran on machines without local disk, but more likely ephemeral disk was really just EBS in most cases. So it always seemed like a silly or even misleading product to me.
Ephemeral is definitely not EBS, as its I/O does not contribute to saturating an instance's network connection (I've tested that). EBS has a ceiling for throughput (the instance's network throughput) and competes with your other network traffic, which is interesting for database workloads for obvious reasons. In addition, every single benchmark I've seen run reports ephemeral behaving very differently from EBS.
As ephemeral disappears when you stop an instance (presumably, when the system is given the opportunity to relocate your instance to another machine), I've always suspected that ephemeral storage was part of local disks on the virtual host's chassis -- as opposed to a SAN or other kind of network-attached storage. As such, you probably end up with the same unpredictable performance you find in all virtualized resources, since it is very unlikely that Amazon is giving you your own storage.
Before benchmarking ephemeral storage you have to pre-warm it, which might have contributed to your findings. Ephemeral is worlds better than EBS, particularly in outage scenarios; if I could convince everybody on planet Earth to stop using EBS, it would be a noble cause.
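For what it's worth, the pre-warming effect is visible with even a crude probe like this sketch. The path and sizes are assumptions — point TARGET at scratch space you can safely overwrite, and compare the second-pass numbers between volume types:

```python
# Crude write-throughput probe: do one full pass to pre-warm (blocks on
# EBS/ephemeral volumes can be lazily allocated, making the first pass
# slower), then time a second pass -- that is the number worth comparing.
import os
import time

TARGET = "/tmp/disk-probe.bin"   # assumption: any scratch file or device
SIZE_MB = 64
CHUNK = b"\0" * (1024 * 1024)

def timed_write_pass():
    """Write SIZE_MB of data with an fsync, returning MB/s."""
    t0 = time.perf_counter()
    with open(TARGET, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())
    return SIZE_MB / (time.perf_counter() - t0)

warmup = timed_write_pass()   # pre-warm pass
steady = timed_write_pass()   # steady-state pass
print(f"first pass:  {warmup:.0f} MB/s")
print(f"second pass: {steady:.0f} MB/s")
os.remove(TARGET)
```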
It's not a throttle, it's the "physical" capacity of the "interface". If you're on an instance with gigabit connectivity and you're doing 1 Gb/s of EBS I/O, other network chatter will suffer, probably fairly dramatically. That's why the high-I/O instances have 10 gigabit connectivity, as I understand it.
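The arithmetic on that ceiling is simple — EBS is network-attached, so its traffic can never exceed the link rate, and it shares that link with everything else:

```python
# Upper bound on EBS throughput imposed by the instance's NIC.
for link_gbps in (1, 10):
    ceiling_mb_s = link_gbps * 1000 / 8   # decimal units
    print(f"{link_gbps} Gb/s link -> at most {ceiling_mb_s:.0f} MB/s of EBS I/O")
# 1 Gb/s link -> at most 125 MB/s of EBS I/O
# 10 Gb/s link -> at most 1250 MB/s of EBS I/O
```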
Happy to be proven wrong but this is based on a year or so of experience dealing with EBS. You can't see the EBS traffic in your tools (at least that I've been able to find), which complicates things.
It's all kinds of different on VPC instances, so I suspect the network interface model -- and possibly the EBS connectivity -- is different there. So, who knows? I'd kill for Amazon to be more forthcoming here so I could understand the infrastructure running my fleet, but I can't and they aren't.
For some reason currently unknown to me (I haven't investigated), these instances are much better with Redis: they are able to fork in roughly 1/20th of the time needed before (a 4.4 GB process forks in 44 ms instead of 1 second).
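That pattern is consistent with how copy-on-write fork() behaves: the parent's page tables have to be duplicated, so fork time grows with resident memory. A quick illustration (Unix-only; absolute numbers vary wildly by host, so treat this as a sketch, not a benchmark):

```python
# Time a single fork(); on copy-on-write systems the cost is dominated
# by duplicating the parent's page tables, so it grows with resident
# memory -- the effect described above for a 4.4 GB Redis process.
import os
import time

def time_fork():
    t0 = time.perf_counter()
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child exits immediately
    os.waitpid(pid, 0)
    return time.perf_counter() - t0

small = time_fork()
ballast = bytearray(200 * 1024 * 1024)  # touch ~200 MB so pages are mapped
large = time_fork()
print(f"fork with small heap:   {small * 1e6:.0f} us")
print(f"fork with ~200 MB heap: {large * 1e6:.0f} us")
```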
Too bad they didn't reduce any of the reserved instance pricing. So effectively they've reduced the cost for things that scale up and down a lot, but not for permanent base load.
I'm not sure why this is a surprise. Real 'clouds' in my opinion are all about elasticity. If you can't benefit from that then you're wasting your money and should host elsewhere.
Because most users have a relatively fixed base load, and buying reserved instances for that base load lets you save lots of money. The ideal setup would look like: reserved instances for base load, on-demand for natural daily curves, and spot instances for batch or price sensitive work.
Having never used EC2, I wonder what an instance would cost me each month if I were to use it for something like hosting my website.
I'm very happy where I am now, but I'm interested in learning more about Amazon's offerings and moving my blog over would probably be a good first step.
I'm in exactly your position, but unfortunately the options aren't good for what you (and I) would use it for. Before this price drop a Micro instance would run you about $15 - $20 monthly -- about the same as I pay for my Linode 512. But I've heard horror stories about the CPU on the Micro not being able to handle any moderate amount of traffic for more than a few minutes at a time. Then there are the Small instances, which would be ideal for me, but when you do the math on a Small instance running 24/7 each month with just 10GB of EBS storage, you get past $60 a month. Okay, not bad -- pretty close to VPS pricing, actually. But $60/month is a lot for me to spend on hosting a blog, so it's not for me. I don't know about you; maybe your priorities and financial situation are different. I'm not hurting at all financially, I'm just cheap (which is why I'm not hurting financially).
Micro uses a different CPU provisioning model than all the other instance types. While most instances have a guaranteed CPU allotment, micro instances do not. They can burst up to two ECU, but will eventually be limited to something around 0.1 ECU if you exceed the burst period.
That's not to say they don't have their uses, but they will not handle much traffic unless it's heavily cached (and then you only have so much memory).
If you have a static blog that uses external JS components for things like comments (Disqus), you could use it in conjunction with CloudFront to handle quite a bit more traffic. Most traffic would be routed via CloudFront; the admin panel would be handled on a subdomain.
This is exactly the sort of thing reserved instances would be good for (when you know it is going to be in use 24/7).
A three-year reserved instance would be significantly less than $60 per month. The $300 upfront reservation spread over 36 months comes out to $8.33/month. Throw in an approximate cost of $14/month for compute time, EBS, and network traffic.
That brings your monthly cost to about $22.33, for a grand total of roughly $805 over the 36-month period.
You can run a small reserved instance for $11.71/month (with $195 prepayment for the year). That comes out to about $27.96/month, plus data transfer fees. If you prepay for 3 years, it averages out to about $17.85/month.
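Checking the 1-year figures above (prepayment and usage rate as quoted; data transfer excluded):

```python
# Amortized monthly cost of a 1-year reserved Small instance, using the
# numbers quoted above: $195 prepaid for the year, ~$11.71/month usage.
upfront = 195.00
usage_per_month = 11.71
monthly = upfront / 12 + usage_per_month
print(f"effective cost: ${monthly:.2f}/month")  # effective cost: $27.96/month
```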
Amazon EC2 Micro instances have very poor performance -- they only have 613 MB of memory, and shocking CPU performance on top of that. Not worth the price for small websites/blogs.
That would be a provisioning nightmare, trying to figure out which partially-used-up box to place a new instance into. Much easier to slice them into fixed sizes.
It definitely would be a nightmare! But they have a big enough pool of hardware and users that it might be possible. As it is, there are inefficiencies because of the lack of choice. For example, I had some Large instances because I wanted at least 4 ECUs and needed 4 GB of RAM; I had to take the extra memory even though I didn't need or use it. Now I am using High-CPU Medium instances, but would gladly pay a little more for an extra gig or two of memory.
I like that the new instance types have a more balanced memory-to-ECU ratio, but I wish they had included some smaller sizes. I'd like an m3.medium with 4 GB and 4 ECU (or an m3.large with 8 GB and 8 ECU).
It's great that EC2 prices are coming down, but they're really not keeping up with the general trend in computing prices.
If you're running a static set of computers on EC2 today, and not making use of other features like S3, ELB, etc, you're almost certainly overpaying by a significant amount compared to even high end dedicated server providers, let alone the cheap end of the market like OVH and Hetzner.
The 'E' in EC2 is the major benefit you get using EC2 over a more traditional hosting provider. If you're just using static instances and don't use other AWS features, you're probably better off just not using EC2. In my opinion, it's priced with the elasticity in mind.
(Yes, there's the option of reserved instances, but they still may or may not be the most cost-effective option for what you want to do in that case.)
So who are the competitors one should look at? Sincere question. When I compared EC2 to Linode or to Rackspace Cloud, Amazon EC2 still won, at least when I looked. I could be missing something.
Alternately, AWS was underpriced to start with. There is nothing wrong with charging what people will pay, and a lot of very smart people find AWS to be a huge value-add...
Reserved Amazon instances are not the cheapest but are competitive with some hosting providers. I've seen some comparisons where they're right in line with Rackspace's cloud offerings, with each winning on slightly different specs. And of course Amazon has different bandwidth pricing than almost anyone else (charging per-request as well as per-GB) so that might make the difference.
I think that depends what you're comparing. For CPU bound applications, I think you're right, $/cpu is fairly expensive on EC2. However, memory tends to be cheaper on EC2 than most other VPS providers:
EC2 Small Instance with 3-year reservation:
1.7 GB RAM, $18/month
Linode would be around $60/month for the same amount of RAM.
Of course, the EC2 reservation requires that you use the same instance for 3 years and amortize the upfront reservation cost, but there are a lot of reasonable applications that fit this mold.
No matter how small or how big a customer you are, it is a price decrease. The cost of these first-generation standard on-demand instances has been reduced by 18.75%.
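As a sanity check on that figure, an 18.75% cut corresponds to, for example, an hourly rate of $0.080 dropping to $0.065 (illustrative numbers, not taken from the price list):

```python
# An 18.75% reduction, illustrated with a hypothetical hourly rate.
old_rate, new_rate = 0.080, 0.065   # $/hour, illustrative figures
cut = (old_rate - new_rate) / old_rate
print(f"reduction: {cut:.2%}")       # reduction: 18.75%
```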