Amazon EC2 Instance Update – Faster Processors and More Memory (amazon.com)
131 points by nnx on July 17, 2018 | 47 comments



Anecdotal comment: it looks like Amazon is putting an audio player at the top of each press release to showcase their "Amazon Polly" deep learning voiceover. Starting at 3:18, the audio begins reading the entire table and (expectedly) screws it up, though not too badly.

However it's a great way of proving that we are not ready for computers to read to us yet.


The audio is generated by Amazon Polly; see https://aws.amazon.com/blogs/aws/give-your-wordpress-blog-a-... for more info!

I now use the audio version as a second form of proofreading, and find that it helps me to find places where my written transitions could be better.

I agree that there's room to make the contents of the table sound better, but I am not sure what direction this should go in. Suggestions are welcome!


Hi Jeff, here's a suggestion: When entering a table, maybe Polly can announce the table and read each data cell followed by the <th> descriptor for that cell.

Example:

"Table 1: Row 1: [Instance name: z1d.large] ... [vCPUs: 2] ... [Memory: 16 GB] ... [Local Storage: 1 x 75 GB NVMe SSD] ... [EBS-Optimized Bandwidth: Up to 2.333 Gbps] ... [Network Bandwidth: Up to 10 Gbps] ... [pause] ... Row 2: [...]"

This is how a human would read it, so Polly should do it that way too. What a human actually ends up doing though, is reference the preceding row for each subsequent row. Something like "z1d.xlarge has double the vCPUs, double the Memory, and double the Local Storage, with EBS-Optimized Bandwidth and Network Bandwidth the same." -- I don't think you are at that point yet with Polly ;)
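A minimal sketch of the first idea (flatten each data cell into "header: value" narration), assuming a well-formed HTML table; the class and function names here are made up for illustration:

```python
from html.parser import HTMLParser

# Hypothetical sketch: flatten an HTML table into the row-by-row,
# "header: value" narration suggested above, using only the stdlib.
class TableNarrator(HTMLParser):
    def __init__(self):
        super().__init__()
        self.headers = []   # text of <th> cells
        self.rows = []      # one list of <td> texts per data row
        self._cell = None

    def handle_starttag(self, tag, attrs):
        if tag in ("th", "td"):
            self._cell = []
        elif tag == "tr":
            self._row = []

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data.strip())

    def handle_endtag(self, tag):
        if tag in ("th", "td"):
            text = " ".join(t for t in self._cell if t)
            (self.headers if tag == "th" else self._row).append(text)
            self._cell = None
        elif tag == "tr" and getattr(self, "_row", None):
            self.rows.append(self._row)

def narrate(html):
    """Return the spoken form: 'Row 1: Header: value, ...'"""
    p = TableNarrator()
    p.feed(html)
    lines = []
    for i, row in enumerate(p.rows, 1):
        cells = ", ".join(f"{h}: {v}" for h, v in zip(p.headers, row))
        lines.append(f"Row {i}: {cells}.")
    return " ".join(lines)
```

The resulting string could then be fed to the speech synthesizer in place of the raw table markup.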


Hmmm - cool idea, but definitely ambitious and re:Invent is almost here.

The plugin is open source and we welcome PRs at https://github.com/awslabs/amazon-polly-wordpress-plugin . Feel free to code something up and give it a try :-)


Even better would be to replicate screen reader behavior—not only does it handle tables well, but it can interface with all semantic elements (buttons, navs, links, images, etc) in a well-defined manner.


Would be good if the player had an option to speed up, e.g. 2x.


We just gave Polly the ability to ensure that a given block of text is spoken within a specified period of time:

https://aws.amazon.com/blogs/aws/amazon-polly-update-time-dr...

I am messaging the team to see how we might use this for the plugin.
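For anyone curious how the speed-up might look in practice: the feature in the linked post works by wrapping text in SSML that bounds its duration. This is a sketch only; the amazon:max-duration attribute name is taken from that post, so verify it against the Polly docs before relying on it:

```python
# Sketch (not tested against the live API): wrap a block of text in
# the SSML used by Polly's time-driven prosody feature, bounding how
# long the block may take to speak.
def timed_ssml(text, max_ms):
    return (f'<speak><prosody amazon:max-duration="{max_ms}ms">'
            f'{text}</prosody></speak>')

ssml = timed_ssml("Up to 2.333 Gbps of EBS-optimized bandwidth.", 4000)
# A boto3 call would then look roughly like:
# polly.synthesize_speech(Text=ssml, TextType="ssml",
#                         VoiceId="Joanna", OutputFormat="mp3")
```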


Hi. I can get a physical (bare metal in newspeak) 8-HT Xeon with 64G RAM and 1TB RAID1 SSD for ~$100/month, <strike>10GB</strike> 1GB NIC uplink. How much is the equivalent z1d.2xlarge EC2 instance?


Where? Asking for a friend :-)


There are multiple providers. This is one of them; it's in Europe but has datacenters in the USA. It's not the cheapest and doesn't have the best hardware specs, but it's good enough. A few Google searches will give you alternatives if you care.

The Xeons are not the latest or fastest models on the market, but they are good enough; the difference from newer models with higher clock speeds will probably be 1-10% of processing power for most workloads, unless you are _very_ CPU bound.

https://www.arsys.net/servers/dedicated

Of course this is _not_ a 1:1 comparison with EC2. These are unmanaged bare metal servers (with good HTTP APIs and web panels for admin, though), but sometimes that is good enough and you can save a good chunk of money. You can bring a server up or down instantly (well, in minutes) and you won't be charged if you are not using it.


Ah, there's also hetzner.com which I've had a good experience with.


> to showcase their "Amazon Polly" deep learning voiceover

It may also be an attempt at making the content accessible to blind or otherwise visual-reading-impaired users.


Text is pretty accessible to visual or reading-impaired users. Screen reader tech is pretty good at handling articles like this. Anybody who needs it probably has a better solution than Polly.

It is a cool way to demo their tech though


I think the majority of the perceived screwed-upness comes from its failure to enunciate "vCPU" and "NVMe", so I think it'll sound a lot better once they fix this bug.


Very interesting observation, I waited 3 minutes for it to read the table :P

How would a human read a table, though? I think tables are mainly for the eyes; reading them aloud seems wasteful.


I think it sounds pretty good for normal text though.


"Sustained all-code Turbo Boost" sounds like a euphemism for "We want to say overclocked but Intel won't let us".


It means the turbo button is guaranteed to be pressed on all z1d and r5 instances.


Isn't turbo boost an Intel trademark? If so, that sounds more like a "factory overclock", which isn't really an overclock at all.


To elaborate on this: the processor is guaranteed to function correctly at turbo speeds (while overclocking has no such guarantee), but Intel doesn't guarantee that the turbo frequency will be reached all the time. The more intense your code, the lower the frequency you get. Fortunately most server code is integer-only and has low IPC, so it should hit 4.0 GHz.


> What is Intel® Turbo Boost Max Technology 3.0 and how does it work?

Intel® Turbo Boost Max Technology 3.0 uses a driver coupled with information stored in the CPU. It identifies and directs workloads to the fastest core on the die first. The driver also allows for custom configuration via a whitelist that enables end users to set priority to preferred applications. The driver MUST be present on the system and configured correctly, as current operating systems can't effectively route workloads to ordered cores.

https://www.intel.com/content/www/us/en/support/articles/000...


If my memory serves me well, Knight Rider had a Turbo Boost before Intel :-)


In the show, look for the one bush that isn't burning in the lava, or otherwise shouldn't be there. They hid a ramp behind it every time.


> Z1d instances use custom Intel® Xeon® Scalable Processors running at up to 4.0 GHz, powered by sustained all-core Turbo Boost.

Sounds rather impressive. Are similar Xeons available anywhere else?


Azure has the 2.7 GHz Intel Xeon® Platinum 8168 Skylake processor. It has clock speeds as high as 3.7 GHz with Intel Turbo Boost, but that's for a single core, not all cores, and also not sustained.

And on GCP the best CPU is 2.6 GHz Intel Xeon E5 (Sandy Bridge)

So when these Z1d instances are GA, they will be the fastest VMs available across the major clouds.


There is the E3-1270v6 with a 3.8 GHz base clock (Kaby Lake) over at Vultr. Packet also seems to have nice offerings, but in these configurations both are supplied as bare metal. It always depends on what you need.


So in other words, no, there is nothing comparable on the market if you want all core performance like this.


Z1d sounds very similar to a 2x Gold 6146 system which has 3.9 GHz all-core turbo.


Google Cloud instance types are so much simpler and easier to grok. I think AWS adds complexity and makes poor user-experience decisions on purpose sometimes (tongue in cheek).

AWS T2, M5, M5d, C5, C5d, X1e, X1, R4, H1, I3, D2 vs GCE standard, memory optimized, or compute optimized. Want a custom amount of cores and memory on GCE? No problem, just punch it in.

Then there is EC2 billing... Google Cloud sustained use and committed use discounts are superior yet again.


Amazon groups the types into standard, memory optimized, compute optimized, and storage optimized.


Still no way to get 16+ cores without huge amounts of RAM (and cost). My workloads are CPU bound but not particularly memory intensive.


c5.4xlarge

* 16 vCPU at 3.0 GHz/3.5 GHz Turbo

* 32 GB RAM

* $0.68/hr on demand, $0.43/hr reserved, ~$0.24/hr spot

You could argue that's too expensive, but the cost is coming from the compute, not the 32 GB of memory.
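To put the per-hour figures above in monthly terms (illustrative arithmetic only; a 730-hour month is assumed, and spot prices fluctuate):

```python
# Monthly cost of a c5.4xlarge at the three quoted rates,
# assuming a 730-hour month.
HOURS_PER_MONTH = 730
prices = {"on_demand": 0.68, "reserved": 0.43, "spot": 0.24}  # $/hr
monthly = {k: round(v * HOURS_PER_MONTH, 2) for k, v in prices.items()}
print(monthly)  # {'on_demand': 496.4, 'reserved': 313.9, 'spot': 175.2}
```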


Did you consider C5/C5d? They typically get to 3.5 GHz and have a smaller DRAM/vCPU ratio.


C4 spot instances work for me at the moment. The price increase for C5 wasn't worth it as I was only seeing a 5-10% performance increase. I need to re-benchmark though.


> The price increase for C5 wasn't worth

I don't know about spot pricing, but on-demand pricing for C5s is about 15% less than for C4s.

If you look at c5d's with ephemeral storage they are still a bit cheaper than c4's.


Z1d, R5 and R5d are publicly available now

https://aws.amazon.com/blogs/aws/now-available-r5-r5d-and-z1...


The z1d.6xlarge is a great match for what I was just searching for... I'm looking for a spot to park my 16-core, 192 GB SQL Servers, which have licensing for exactly 16 cores. I was looking at the 32-core, 244 GB r4.8xlarge instances and trying to figure out what to do with the unused cores!


You can actually take most EC2 instances and specify the number of cores it has for this type of licensing issue: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance...
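A hedged sketch of what that doc page describes: launching an r4.8xlarge but enabling only 16 physical cores with hyperthreading off, for per-core licensing. The parameter names follow the EC2 RunInstances API, and the AMI ID is a placeholder; verify both against the linked docs before use:

```python
# Request parameters for an r4.8xlarge limited to 16 cores,
# one thread per core (16 vCPUs instead of the default 32).
run_args = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "r4.8xlarge",
    "MinCount": 1,
    "MaxCount": 1,
    "CpuOptions": {"CoreCount": 16, "ThreadsPerCore": 1},
}
# The actual call would be roughly:
# ec2 = boto3.client("ec2")
# ec2.run_instances(**run_args)
```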


Note that this doesn't save any money because you're still paying for the disabled cores. IIRC GCE does not have that limitation.


That's true, but given that 32 cores of something like Oracle is something like $800k, I'm not sure it's all that relevant.


Would running two VMs on the 32-core server be too much overhead?


Lease them out to me for my build farm :)


Nice, the z1d instances might be perfect for interactive raytracing we do at $employer. The tracing itself is quite efficient and parallel, but scene manipulation has a lot of single core and IO bottlenecks. Plus the metal versions are useful to profile the whole thing.


I'd like to see nested virtualization on EC2.


The press release mentions a Z1d.metal bare metal version coming soon; that would save the need to run a hypervisor on a hypervisor for the largest instance.


I take it these vCPUs are hyperthreads? Or are they actual cores with HT turned off?


Yeah, vCPUs are hyperthreads.




