Anecdotal comment: it looks like Amazon is putting an audio player at the top of each press release to showcase their "Amazon Polly" deep-learning voiceover. Starting at 3:18, the audio begins reading the entire table and (expectedly) screws it up, though not too badly.
However, it's a great demonstration that we are not ready for computers to read to us yet.
Hi Jeff, here's a suggestion: when entering a table, maybe Polly could announce the table and then read each data cell preceded by the <th> descriptor for that column.
Example:
"Table 1: Row 1: [Instance name: z1d.large] ... [vCPUs: 2] ... [Memory: 16 GB] ... [Local Storage: 1 x 75 GB NVMe SSD] ... [EBS-Optimized Bandwidth: Up to 2.333 Gbps] ... [Network Bandwidth: Up to 10 Gbps] ... [pause] ... Row 2: [...]"
This is how a human would read it, so Polly should do it that way too. What a human actually ends up doing, though, is referencing the preceding row for each subsequent row, something like: "z1d.xlarge has double the vCPUs, double the Memory, and double the Local Storage, with EBS-Optimized Bandwidth and Network Bandwidth the same." -- I don't think you are at that point yet with Polly ;)
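The header-then-value reading suggested above is easy to sketch. Here is a minimal illustration in Python (this is not Polly's actual behavior; the `speak_table` helper and the abbreviated header list are mine, with the figures taken from the table in the post and the xlarge row doubled as described):

```python
# Sketch of the suggested reading order: announce the table, then read each
# row cell by cell, prefixing every value with its <th> column header.

def speak_table(title, headers, rows):
    """Linearize a table into the 'header: value' narration suggested above."""
    parts = [title + ":"]
    for i, row in enumerate(rows, start=1):
        cells = ", ".join(f"{h}: {v}" for h, v in zip(headers, row))
        parts.append(f"Row {i}: {cells}.")
    return " ".join(parts)

# Figures from the announcement table (xlarge = large doubled):
headers = ["Instance name", "vCPUs", "Memory", "Local Storage"]
rows = [
    ["z1d.large", "2", "16 GB", "1 x 75 GB NVMe SSD"],
    ["z1d.xlarge", "4", "32 GB", "1 x 150 GB NVMe SSD"],
]
print(speak_table("Table 1", headers, rows))
```

A real implementation would walk the parsed <thead>/<tbody> of the HTML table and hand the TTS engine one utterance per row, with a pause between rows.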
Even better would be to replicate screen reader behavior—not only does it handle tables well, but it can interface with all semantic elements (buttons, navs, links, images, etc.) in a well-defined manner.
Hi. I can get a physical (bare metal in newspeak) 8-HT Xeon with 64 GB RAM and 1 TB RAID1 SSD for ~$100/month, with a <strike>10 Gbps</strike> 1 Gbps NIC uplink. How much is the equivalent z1d.2xlarge EC2 instance?
There are multiple providers. This is one of them; it is in Europe but has datacenters in the USA. It is not the cheapest, nor does it have the best hardware specs, but it is good enough. A few Google searches will give you alternatives if you care.
The Xeons are not the latest or fastest models around, but they are good enough; the difference versus newer models with higher clock speeds will probably be 1-10% of processing power for most workloads, unless you are _very_ CPU bound.
Of course this is _not_ a 1:1 comparison with EC2. These are unmanaged bare-metal servers (though with good HTTP APIs and web panels for administration), but sometimes that is good enough and you can save a good chunk of money. You can bring a server up or down instantly (well, in minutes) and you won't be charged if you are not using it.
Text is pretty accessible to visually or reading-impaired users. Screen reader tech is pretty good at handling articles like this. Anybody who needs it probably has a better solution than Polly.
I think the majority of the perceived screwed-upness comes from its failure to enunciate "vCPU" and "NVMe", so I think it'll sound a lot better once they fix this bug.
To elaborate on this: the processor is guaranteed to function correctly at turbo speeds (while overclocking has no such guarantee), but Intel doesn't guarantee that the turbo frequency will be reached all the time. The more intense your code, the lower the frequency you get. Fortunately, most server code is integer-only and has low IPC, so it should hit 4.0 GHz.
> What is Intel® Turbo Boost Max Technology 3.0 and how does it work?
> Intel® Turbo Boost Max Technology 3.0 uses a driver coupled with information stored in the CPU. It identifies and directs workloads to the fastest core on the die first. The driver also allows for custom configuration via a whitelist that enables end users to set priority to preferred applications. The driver MUST be present on the system and configured correctly, as current operating systems can't effectively route workloads to ordered cores.
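As an aside, you don't strictly need Intel's driver just to see which cores are the "fastest on the die": on Linux with the intel_pstate driver, each core advertises its own maximum frequency through sysfs, and on Turbo Boost Max 3.0 parts the favored cores report a higher value. A rough sketch, assuming that sysfs layout (paths vary by kernel and platform):

```python
# Hedged sketch: rank cores by the per-core maximum frequency that Linux
# exposes under /sys/devices/system/cpu/cpuN/cpufreq/. On Turbo Boost Max 3.0
# parts the favored cores show a higher cpuinfo_max_freq. The base directory
# is a parameter so this can be pointed at a test tree.
from pathlib import Path

def cores_by_max_freq(sysfs_cpu_dir="/sys/devices/system/cpu"):
    """Return (cpu_number, max_freq_khz) pairs, fastest core first."""
    freqs = []
    for cpufreq in Path(sysfs_cpu_dir).glob("cpu[0-9]*/cpufreq"):
        khz = int((cpufreq / "cpuinfo_max_freq").read_text())
        cpu = int(cpufreq.parent.name[len("cpu"):])
        freqs.append((cpu, khz))
    return sorted(freqs, key=lambda pair: pair[1], reverse=True)
```

You could then pin a latency-sensitive thread to the top core with taskset or sched_setaffinity, which is roughly what the whitelist in Intel's driver automates.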
Azure has the 2.7 GHz Intel Xeon® Platinum 8168 (Skylake) processor. It has clock speeds as high as 3.7 GHz with Intel Turbo Boost, but that's for a single core, not all cores, and also not sustained.
And on GCP the best CPU is a 2.6 GHz Intel Xeon E5 (Sandy Bridge).
So when these z1d instances are GA, they will be the fastest VMs available across the major clouds.
There is the E3-1270v6 with a 3.8 GHz base clock (Kaby Lake) over at Vultr. Packet also seems to have nice offerings, but in these configurations both are supplied as bare metal. It always depends on what you need.
Google Cloud instance types are simpler and so much easier to grok. I think AWS sometimes adds complexity and makes poor user-experience decisions on purpose (tongue in cheek).
AWS has T2, M5, M5d, C5, C5d, X1e, X1, R4, H1, I3, and D2, versus GCE's standard, memory-optimized, or compute-optimized types. Want a custom number of cores and amount of memory on GCE? No problem, just punch it in.
Then there is EC2 billing... Google Cloud's sustained-use and committed-use discounts are superior yet again.
C4 spot instances work for me at the moment. The price increase for C5 wasn't worth it, as I was only seeing a 5-10% performance increase. I need to re-benchmark, though.
The z1d.6xlarge is a great match for what I was just searching for... I'm looking for a spot to park my 16-core, 192 GB SQL Servers, which are licensed for exactly 16 cores. I was looking at the 32-core, 244 GB r4.8xlarge instances and trying to figure out what to do with the unused cores!
Nice, the z1d instances might be perfect for interactive raytracing we do at $employer. The tracing itself is quite efficient and parallel, but scene manipulation has a lot of single core and IO bottlenecks. Plus the metal versions are useful to profile the whole thing.