A compelling product. The dashboard looks great. They even replaced the confusing term "user data" with "launch script", but they fall back into it later. SSH in-browser is great too and can be bookmarked/opened in a fullscreen tab. Uploading (instead of pasting) your SSH pubkey is a bit annoying.

The docs appear to say you can add these to a VPC but I don't see how to do it.

They don't say the SSD storage is local, so I'm sure it's not.

A few runs with `fio` confirm this is EBS GP2 or slower:

The bench:

  fio --name=randrw --ioengine=libaio --direct=1 --bs=4k --iodepth=64 \
      --size=4G --rw=randrw --rwmixread=75 --gtod_reduce=1

Lightsail $5:

  read : io=3071.7MB, bw=9199.7KB/s, iops=2299, runt=341902msec
  write: io=1024.4MB, bw=3067.1KB/s, iops=766, runt=341902msec

DigitalOcean $5:

  read : io=3071.7MB, bw=242700KB/s, iops=60674, runt= 12960msec
  write: io=1024.4MB, bw=80935KB/s, iops=20233, runt= 12960msec
More than an order of magnitude difference in the storage subsystems.

These appear to just be t2.nano instances (CPU perf is good, E5-2676 v3 @ 2.40 GHz, http://browser.primatelabs.com/geekbench3/8164581).

For advanced users, there isn't much compelling here to make up for the administration overhead. It's a little cheaper than a similar-spec t2.nano (roughly $4.75 on-demand + $3 for a 30GB GP2 SSD). The real win is egress cost; you can even transfer EC2->Lightsail for free. 1TB of egress would be nearly $90 on EC2, but is included in the $5 Lightsail plan.
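
Back-of-the-envelope, at the $0.09/GB tier EC2 charges below 10TB/month:

  1000 GB x $0.09/GB ≈ $90    (EC2 egress)
  1000 GB included in plan    (Lightsail, $5/mo total)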

In other news, EC2 egress pricing is obviously ridiculous.




The terrible IOPS performance on AWS is the biggest downside for me.

All competitors seem to outstrip AWS on this. Do they have some legacy infrastructure that is just too big to upgrade to something more modern, or is this "on purpose"?


As @STRML said, the difference is that Amazon is using network-attached (EBS) storage as the primary instance storage, instead of local SSD. This provides a ton of benefit to Amazon, and some benefit for the user as well: the CoW backend allows for nearly instant snapshots and clones, multi-server/rack redundancy with EC, the ability to scale up with provisioned IOPS easily, etc.

The downside is that the access method for blocks makes some operations more computationally and bandwidth-intensive, meaning you will get fewer IOPS and less sustained throughput without paying more money. In addition, there is always going to be a bit more latency going over the network versus a local SAS RAID card.

As with all things in life, it's a tradeoff. If you look at other large providers' network storage offerings (GCE and DO, at least), you'll also see a significant performance regression from local SSD.


> CoW backend allows for nearly instant snapshots and clones

LOL => An 80GB EBS SSD snapshot takes more than 1 hour.

Subsequent snapshots are incremental and will be less slow.

> multi server/rack redundancy with EC

You can move a drive manually after you stop an instance, if that's what you call redundancy.

> ability to scale up with provisioned IOPS

Nope. You need to unmount, clone the existing EBS volume to a new EBS volume with different specs, and mount the new volume. You can't change anything on the fly.
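
Roughly, the dance looks like this (all IDs, sizes, and device names below are made up):

  # snapshot the old volume, then create a bigger/faster one from it
  aws ec2 create-snapshot --volume-id vol-aaaa --description "pre-resize"
  aws ec2 create-volume --snapshot-id snap-bbbb --size 2048 \
      --volume-type io1 --iops 4000 --availability-zone us-east-1a
  # unmount on the host, then swap the attachments (downtime starts here)
  aws ec2 detach-volume --volume-id vol-aaaa
  aws ec2 attach-volume --volume-id vol-cccc --instance-id i-dddd --device /dev/xvdf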

The last time we had to change a 1TB drive for a database, we tried the procedure on a smaller volume first... then we announced that it would be a 14-hour-downtime maintenance operation (if we did it that way) :D


> Subsequent snapshots are incremental and will be less slow.

Depends on how much they've been written to since the last snapshot. With heavy writes it can be just as slow again.


Hence why I say "less slow" rather than "faster". There is nothing fast when it comes to EBS volumes :D


It doesn't appear that Lightsail actually allows hooking up data volumes. This is a surprising regression considering AWS basically invented them, and a major downside compared to DO.

So network-attached storage for Lightsail is upside for AWS, but all downside for the customer.


And they're still the same price as the others (with traffic more expensive). How come? Do they just have that much higher a margin, or are there zero economies of scale?


It's just an artifact of network-attached storage.

Many competitors just use local storage, which comes with its own serious downsides for the company and customer. DigitalOcean just recently launched its Volumes service, but it's very limited compared to EBS, and not nearly as fast as its local SSDs.

EBS is generally fine but I would really enjoy the option to have ~40GB local SSDs for caching (but I suppose you can always grab an r3/r4 and cache in memory if that's your bag).

The best cloud I/O perf I've seen, bar none, comes from Joyent's Triton containers. Beats even DO by 3-4x. Beyond that you need to go bare-metal.


That's truly terrible performance. I don't understand how they can advertise SSD storage that then isn't local; it's something they should state clearly. With that performance difference I don't see a reason to prefer them over DO/Vultr/Linode. Even if the CPU is better, disk IOPS will likely be the limiting factor here.


Thank you for the information, the most informative reply in this thread.


Re: VPC peering, the blog gets into more detail. I tried it out, worked great. Simple egress. https://aws.amazon.com/blogs/aws/amazon-lightsail-the-power-...
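
For reference, the CLI version appears to be a one-liner that peers the Lightsail VPC with your account's default VPC in the same region (region made up):

  aws lightsail peer-vpc --region us-east-1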


Anyone know what their web terminal is based on? I've been on networks so locked down that it'd be really useful... although maybe not $5/month useful.


You can always run an ssh bastion server on port 443. It's indistinguishable from https traffic without deep packet inspection or Great-Firewall-of-China-like pattern analysis.

Just have a bastion host with that and you'll have no trouble ssh -A'ing your way there and then on to the real box on whatever port it's on.
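
A minimal sketch of that setup (hostnames made up):

  # on the bastion, have sshd listen on 443 too (/etc/ssh/sshd_config):
  #   Port 22
  #   Port 443
  # then from the locked-down network, with agent forwarding:
  ssh -A -p 443 user@bastion.example.com   # hop 1: the bastion on 443
  ssh user@realbox.internal                # hop 2: run from the bastion shell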


Heh. I'm often on a network where traffic that doesn't start with HTTP CONNECT on port 443 is dropped, and they're not China.
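
For what it's worth, if the proxy does speak HTTP CONNECT, you can usually still wrap ssh in it with OpenBSD netcat as a ProxyCommand (proxy address made up):

  # ~/.ssh/config: tunnel ssh through the HTTP proxy, to a bastion on 443
  Host bastion.example.com
      Port 443
      ProxyCommand nc -X connect -x proxy.corp.local:3128 %h %p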


Well, for that money it's really simple to interconnect the three and still use all the cool AWS things.



