Linode NextGen: RAM Upgrade (linode.com)
395 points by TheSwordsman on April 9, 2013 | 230 comments



Linode is hard to beat these days. I've been a customer for a few years and adore the shit out of them.

Digital Ocean has become quite a competitor (and I have a box with them now for a staging environment) but the fact that they do not provide a private/internal network between your boxes is something that I can't live without. I like to keep app servers and db servers isolated from the real world by one box in front of all of them ... which is not something you can currently do with Digital Ocean.

I love Heroku's platform and think they've done some amazing things for our industry, but I don't agree with any of their recent PR moves. Linode, on the other hand, has never stirred up drama or bullshit since I started working with them. They're straight shooters and this gives me a great deal of confidence.

I had a personally hair-raising experience this weekend trying to migrate a smaller server that kept swapping over to a larger one without too much downtime. I was able to do this pretty painlessly, but man, I wish I had waited just a few days for this!


At $20/mo Digital Ocean offers twice as much memory, 1 TB more transfer and 4 GB less disk (but it is SSD...)

Digital Ocean's lack of a "private" network is silly; if you really need one, set up an encrypted tunnel yourself. I would not trust anything else anyway. Also, I highly doubt that traffic between your DO DB and app servers ever leaves the datacenter, but you could test this.
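
For example, a minimal sketch of what I mean (the hostname and user here are placeholders): tunnel the DB traffic over SSH between the two droplets.

  # on the app droplet: forward local port 5433 to Postgres on the db droplet, over SSH
  ssh -f -N -L 5433:localhost:5432 deploy@db.example.com
  # the app then talks to localhost:5433 instead of the db box's public IP

An IPsec or OpenVPN tunnel would do the same thing more permanently; either way you control the encryption.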


But their network is horrible. I moved my irc client/mumble server to DO when they were first announced on HN, and the intermittent lag made it impossible to even chat (mtr confirming multiple times that the issue was on DO's end). If I can't even irc from their servers it doesn't matter what price they charge.


Their network is unreliable in my experience as well.


IRC's the main thing I use my Digital Ocean VPS for - and it's been great.


Hmm, so far I haven't experienced anything like this. /me crosses fingers.

How long ago was this?


I think all in all I spent almost a month on DO, and I believe I left under 2 weeks ago. I have a friend that's still on DO (he likes the low price and needs the RAM to compile rust) and in his experience the network issues are still around but not as bad. Either they've improved things or droves of people like me tried it for a month and ditched.


We've been running many nodes (in the Dallas DC) in production on Linode for years and have only seen one significant network issue like you described, which we "fixed" by rebuilding the node before Linode support was able to narrow down the cause. So in our experience this isn't widespread.


I think he was referring to DigitalOcean, not Linode.


The entire point of Linode's private network is that you're not routing across the Internet to deliver traffic to a machine in the next rack. If you're not using RFC 1918 space that is properly configured, at least one router has to make a decision on whether to eject the packet onto the Internet or keep it inside, as you allude, which means you've added at least one hop to all private communications. The reasons you don't want to do that will be obvious once you scale a bit.

By all means, encrypt your traffic on the private network if you're so inclined, but encrypting across the public IP space and encrypting across RFC 1918 space do not accomplish the same goal, particularly not with the same latency or redundancy characteristics.


Routers route packets between networks, regardless of the address space being used.

Nothing about the use of public addresses forces an extra hop in the way you suggest.


I don't have


You only pay for outbound traffic on Linode, is that the same on Digital Ocean?


Since we don't have customer facing analytics for it at the moment, we're not charging :)


Yes.


If you can get your 20 bucks to them in the first place. Their payment processing doesn't seem to be quite state of the art...


But 4x less CPU. And support is a joke.


"But 4x less CPU." You are seriously naive if you think the number of logical cores the hypervisor presents to your VM is the sole determiner of CPU execution resources.

Here is a counterexample: imagine I have two VM host servers, each with 16 logical cores. On one I could pin each VM to 1 logical core; on the other I could run 300 VMs and give each VM 24 logical cores... The first one is going to perform much better.

Also, some hypervisors (for example VMware) only schedule a VM for execution when as many physical cores as the VM has vCPUs are free. So having many logical cores in your VM can negatively influence CPU scheduling.

"And support is a joke." Not in my experience


He's not naive; you're being overly cynical. This is like pointing out that an SSD could technically be programmed to work much more slowly than a 5400 RPM drive and thus be a worse value than Linode's spinning platters — unless you have a reasonable belief that somebody is actually doing that, it's just FUD.

Based on the benchmarks I've seen, it appears that Linode really does give the kind of concurrency you'd expect from four cores (i.e. if your problem is parallelizable, you can scale up on Linode better than Digital Ocean, whereas Digital Ocean will work much better if your program is serial and needs to hit the disk a lot).


Linode's support, like GitHub, is so good they should be tested for performance enhancing drugs.


Actually, hypervisor details aside, 8 VCPUs fully pegged at 100% user or system time will consume 4x the capacity of 2 VCPUs fully pegged at 100% user or system time in a domU, assuming comparable chips. Always and regardless of how Xen schedules the VCPUs onto actual nodes.

Your hand-waving about the hypervisor is unwarranted, since hypervisor interference under Xen shows up as its own time from the perspective of the domU (steal%) and nobody worth mentioning actually does hosting with VMware.
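
If you want to see it for yourself, steal time is right there in the standard tools (a quick sketch; the exact columns vary a bit by distro):

  # the "st" column is time stolen from this VM by the hypervisor
  vmstat 1 5
  # top shows the same figure as "%st" on its CPU summary line,
  # and mpstat -P ALL 1 (sysstat package) breaks it out per VCPU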


I was only comparing what each advertises.

You don't know if the first one will perform better or not. It depends on the workload of each VM. If most of the VMs are idle most of the time, when one of them needs CPU the second setup may even perform better.

And I doubt that Linode runs more VMs per host than DO. Judging by the pricing, it's probably the other way around.


> Also some hypervisors (for example VMware) only executes a VM when they have as many cores as the VM has cores available for execution. So having many logical cores in your VM can negatively influence CPU scheduling.

Good thing Linode and DigitalOcean don't use VMware.


No. Just because you get 8 vCPUs on Linode does not mean you have 4x the CPU. You share those 8 vCPUs with dozens or hundreds of other servers (depending on VM size). More small slices of a pie that is the same size is not really more.


On Linode it is never hundreds. "On average, a Linode 512 host has 40 Linodes on it. A Linode 1024 host has on average 20. Linode 2048 host: 10 Linodes; Linode 4096 host: 5;"

On the other hand, as far as I know DO doesn't say how many VMs share a host.


Support is awesome.


I've had a good experience with Digital Ocean, although they did completely lose one of my snapshots last month.


I would rate this as a bad experience. It just happened that this wasn't important to you, but it can be for others.


You can just use IPSec for securing your private network?


This is great, but it would be nice to have a plan < 1GB for < $20/month. I love Linode, I've been using it for years, and the pricing is great in general. But it'd be really nice to be able to spin up a $10/month, 512MB server for a new personal blog or other project.

$20/month is prohibitively expensive for one-off side project hosting and there's lots of utility to be had from a cheap low-RAM, low-CPU server. I'd buy 10!


I suspect that, in the general case, keeping away people/projects for whom $20/month is prohibitively expensive ends up being a virtue for caker and the Linode crew.

(Note that I'm not intending this to refer to you specifically, since you're already a subscriber anyway.)


The problem is that most projects make (possibly their only) hosting provider decision from a position of being unprofitable.

Thus when you segment the market by "people who can afford $20-100/month", you are also segmenting the market by the correlating variables "people who are mad at their current host" and "people who can spare several days to migrate their infrastructure".

As a member of that class who migrated to Linode, I can attest that the size of that market is nonzero. But I can also attest that a great many projects start and remain on AWS and lowendbox where the barrier to entry is much lower, and even as a satisfied Linode customer I continue to start and maintain many projects in those environments.


This is one of those arguments that seems to make sense when you're on the purchaser side of a SaaS product, but often breaks down when you're on the seller side.

One of the biggest costs of running a SaaS business is support. I'd guess that there's not a whole lot of difference for Linode in the cost of supporting a $10/month vs. $320/month box. In fact, there may be negative correlation -- I wouldn't be surprised if the cheaper customers are more expensive.

Now, your argument hinges on there being a high conversion rate from low-end plans to high-end plans. I'd guess that rate is quite low, in fact. Most accounts probably never upgrade and / or cancel after a few months. Now, the canceling in the first few months is a big problem as well, because for most SaaS businesses, your biggest support cost is going to be concentrated in those first months where the customer is getting set up. I wouldn't be surprised if it takes Linode a few months, on average (not median since support isn't evenly distributed), to get into the black on the low-end accounts.

The variables in that equation are really important in deciding if it's worth putting up with cheap, less knowledgeable, high-churn customers just in the hopes of upselling them over time, or if it's better to slice off the segment from the beginning that finds $20/month to be onerous.


Just a reminder that Linode is an unmanaged host.


Having deployed significant environments on both Linode and AWS I can safely say that the barrier to entry on AWS is nowhere near lower than Linode's, for all but the tiniest Wordpress blog.

The hourly, broken-up billing is a smokescreen; for exactly what you get out of a (now) Linode 1024, you'd pay $50+/month or so on AWS. Don't forget total bandwidth, an elastic IP, and so on.


The barrier-to-entry on AWS is in fact infinitely lower than on Linode:

http://aws.amazon.com/free/

The ongoing TCO of AWS is much higher than Linode, of course. But that's precisely my point: you want to capture customers at birth, incubate them into adulthood, and then charge them a lot of money now that they can afford it.


Yeah, do you know how awful a t1.micro is? It's a joke. I've never seen steal% that high before. There must be thousands on a single machine.

Although if you want to be pedantic, let's say "actually useful, potentially-production barrier to entry". There. Fixed.


A friend of mine is running an ASP.NET MVC/Mono/MongoDB ecommerce site on a micro with around a quarter million in annual sales. The site is very, very fast.


If he hasn't already, get him to do a write-up on that. I'd sure be interested, and I'm sure many others would be as well.


The high steal% is intentional, as the CPU is allowed to burst to take advantage of unused resources:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_...

I don't find my cheap t1.micro based IRC bouncer / VPN / reverse SSH endpoint / static-content generation box to be an awful joke at all. They're quite useful for many things.


You also want to avoid scammers and fly-by-night operations that don't value your business beyond finding cheap CPU and bandwidth. There is a set level of overhead for each new customer, and new customers are probably more likely to need additional support. The lower you go with pricing, the larger this portion of your costs eats into your profits.

There are $10 VPS solutions out there, but their poor reputations lend credence to the idea that $10 VPS solutions are not where the money is at.


There is Hetzner.de, which I like quite a bit as a company. I have been hosting with them for 3 years. They have a 512MB box for 6.5 EUR if you are exempt from VAT as a non-EU citizen.


The $10 VPS solutions I see out there are either in a European country, or are startup/smaller shops that don't have a proven track record.


Just don't try to use EFnet or QuakeNet from them.


> The hourly, broken-up billing is a smokescreen

It's not a smokescreen if you are using AWS for a cloud (e.g., dynamically provisioned according to need) server infrastructure rather than as a simple VPS.


A lot of people are on AWS. If you are migrating from AWS, it doesn't matter any more what AWS' barrier to entry is, and it does matter what Linode's barrier to entry is.


>The problem is that most projects make (possibly their only) hosting provider decision from a position of being unprofitable.

Being unprofitable and not being able to spend $240/year are two different issues.

The first is about the project being unprofitable. The second is about you being broke or not believing in it enough to spend even 1/150 of your income (I'm speaking of course of US/EU citizens/wages).

There are however other vendors that have even $5/month plans for VPS (and even offer SSD drives). Try Digital Ocean.


It would be neat if it was an option for people who already have Linodes. A tiny $10 server would be great as a build server or a puppet/salt master.


I agree, but can understand why Linode might want to stay away from the real lowend side of the scale.

Personally I've started using Digital Ocean for those smaller non-production servers (staging, dev etc). I still wouldn't trust DO for anything production.

The Linode service and support is second to none, and I'm happy paying the price (which now gives effectively twice the value!).


You're on to something here. I imagine it was people with < 1GB plans that churned the most (quit Linode for a better offering) as soon as folks like Digital Ocean showed up. I should know: I was on a 768MB plan, and as soon as DO showed up with better pricing and more RAM I was gone. So I imagine that they've eliminated this plan to bring more stability to their recurring revenue, which makes sound business sense.

The interesting thing is that linode started out as a cheaper alternative to slicehost, so it's ironic that they are now trying to differentiate themselves in much the same way slicehost did back then.


> The interesting thing is that linode started out as a cheaper alternative to slicehost

You have that backwards, given that Linode predated Slicehost by upwards of three years.


> The interesting thing is that linode started out as a cheaper alternative to slicehost

Linode have been around since 2003. I think Slicehost was founded later than this (in 2006 according to a quick search for their name.)


> Linode have been around since 2003. I think Slicehost was founded later than this

This is correct. I signed up with Linode in 2005 and there was no Slicehost then. I was moving from another VPS provider (Redwood Virtual) which had twice the resources for the same price as Linode but were really, really crummy (they don't even exist anymore). That experience really taught me the value of considering quality in addition to the price/feature ratio, which is why I'm not jumping ship to DigitalOcean. (I might if/when they establish a solid reputation, but no sooner.)


I use DigitalOcean and really like them:

- Provisioning is conceptually cheap, and this is their killer feature. You click the "Make a new VPS" button, you pick your distro, how much RAM you want, and an SSH key, and they spit out an IP address less than a minute later. `ssh [email protected]`

This is much nicer than eg. finding or downloading a box to mess with Vagrant or taking who knows how many hours to set up your own distribution in VirtualBox. Just a few minutes ago, I spun up a box, messed around with Gitlab, and destroyed it. Completely seamless; has all but replaced VBox for these "one-off VM experiments" that I sometimes try.

There's also an API for automatic provisioning if that's your thing.

- Pricing: Cheap, as you'd expect. 512MB for 0.7¢/hr or $5 a month. There's no difference between monthly and hourly pricing so if you want to pay a buck fifty to rent the 24-core 96GB monster for an hour, have at it.

The thing about DigitalOcean is that they're still such a new service that they haven't gotten everything all polished yet:

- No kernel upgrades. The kernel is kept outside of your virtual machine, which means that if you upgrade your kernel, then you'll get mismatched modules and your network interface won't come back. They've pinned kernel updates in ubuntu so this shouldn't be a problem, but you have to say "IgnorePkg = linux" in your /etc/pacman.conf if you're on arch. Security updates will be a pain though...

- You can't boot into a recovery image/liveCD. If the above happens to you, I assume you'll have to either restore from backup or file a ticket and ask them to pick things out of your VM's hard drive. (You do get raw console access provided by an HTML5 VNC client, which can be useful)

- Payments are weird because you can't see your VM or network usage, so it's unclear how much you'll be charged until they bill you at the beginning of your month. If you select to pay via paypal, you can't pay by a credit card linked to your paypal account (at least for the first transaction any way); I had to give them my credit card information directly. (I imagine they do this to cut down on fraud or spam)

- Services are still a bit barebones. You have to roll your own load balancing. There's no support for mounting an image into another VM to recover files. No internal networks; every VM has a public IP address, which makes me curious to see how they're going to handle IP address depletion if they ever get popular. You have to use your own firewall via iptables or similar.

They will manage your DNS for you though, if you like, and you can ask them to take a snapshot and automatically back up your VM every day or so.

My verdict? I think they're lovely and I really recommend them, especially for personal projects, but as of this writing (early April 2013), you should think about these things before you decide to use them in production.


> Provisioning is conceptually cheap, and this is their killer feature. You click the "Make a new VPS" button, you pick your distro, how much RAM you want, and an SSH key, and they spit out an IP address less than a minute later. `ssh [email protected]`

> There's also an API for automatic provisioning if that's your thing.

Oh, it is, and I wish these comparisons would consider the API more, since deploying a node via Linode's API and DigitalOcean's API are probably comparable. Declaring "my UI is faster!" as a sales point is disingenuous, since Linode is merely offering more choices during the provisioning workflow. Since you're saying "click", I assume you mean the UI too; at scale, absolutely nobody uses the UI any more. I manage a fleet in the thousands on AWS, and the last time I logged in to the AWS Web Console was about six months ago. It takes at least two minutes of clicking to spawn an instance in Amazon, but earlier today I launched 38 instances in about thirty seconds using the API.
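
For the curious, that kind of bulk launch is a one-liner with the classic EC2 command-line tools (roughly, from memory; the AMI, key name, and security group here are made up):

  # ask for 38 identical instances in a single API call
  ec2-run-instances ami-12345678 -n 38 -t m1.small -k mykey -g default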

> - No kernel upgrades. The kernel is kept outside of your virtual machine, which means that if you upgrade your kernel, then you'll get mismatched modules and your network interface won't come back. They've pinned kernel updates in ubuntu so this shouldn't be a problem, but you have to say "IgnorePkg = linux" in your /etc/pacman.conf if you're on arch. Security updates will be a pain though...

This isn't unique to DigitalOcean and is fundamental to the way Xen works (the kernel is loaded by the host, not the domU). However, Linode solved this problem better by providing kernels with most modules that you'd ever need built-in. You can upgrade all you want on your filesystem and not run the risk of hosing your machine, because your modules generally aren't considered at all. You can still add them by compiling against the upstream Linux sources, but the core modules aren't loaded from your filesystem (on my Linodes, my modules/ directory for the running kernel is empty). There's also a loader for your own kernel on the filesystem, PV-Grub, which DigitalOcean doesn't do.

> - You can't boot into a recovery image/liveCD. If the above happens to you, I assume you'll have to either restore from backup or file a ticket and ask them to pick things out of your VM's hard drive. (You do get raw console access provided by an HTML5 VNC client, which can be useful)

Only if you screwed networking. Lack of recovery = showstopper.


>This isn't unique to DigitalOcean and is fundamental to the way Xen works

Digital Ocean uses KVM, not Xen, per https://www.digitalocean.com/faq so I'm not sure why they would need to pin the kernel.


Thanks for the clarification!

Here's the docs for DO's API: https://www.digitalocean.com/api I agree that a UI isn't strictly necessary, but the reason I brought up the UI is that, as an "amateur" user, that's what I care about.


Using external kernels isn't in any way fundamental to the way Xen works. Most standard setups (even PV) have the kernel inside the VM, which allows for standard upgrades, etc.


Why are you mounting customer images on your host fleet? Put another way, what do you supply for "kernel=" in xen.conf? A file from your customer's filesystem?

Why are you doing that?


"""- Provisioning is conceptually cheap, and this is their killer feature. You click the "Make a new VPS" button, you pick your distro, how much RAM you want, and an SSH key, and they spit out an IP address less than a minute later. `ssh [email protected]`

This is much nicer than eg. finding or downloading a box to mess with Vagrant or taking who knows how many hours to set up your own distribution in VirtualBox."""

I don't get this; finding or creating a base box for each distribution you want to use is a one-time cost. There are many existing boxes listed at http://www.vagrantbox.es/.


For 512MB/1GB Droplets in their EU ___location (Amsterdam) they have run out of IPs. I was planning on switching to DigitalOcean but that is holding me back at the moment.

And since you mentioned it, I have been following their Twitter replies and apparently private networks are coming soon (https://twitter.com/digitalocean/status/321650732703023105).


(A bit of followup regarding payment tracking: you can click on "Billing" --> "View my current charges" to see a breakdown of just how much you owe. I presume it's updated daily if not hourly, but I didn't realize this when I composed my post.)


I understand you don't want to trust DO for anything production, yet. But several companies are already using them in production. JSFiddle, NewsBlur, AudioBox are the ones that I can remember.


Use one decent server for all your side projects. $20/month is not very much, unless it's just sitting there doing nothing.

Alternatively, Hetzner will give you a 512MB VM for about $10.34/month.


DigitalOcean VPSs go for 512MB, 20GB disk at $5 per month/0.7¢ per hour.

Prgmr is really great too: 64MB for $5/month (by comparison, 512MB for $12/month). Run by an HN fellow, so that's also a plus.

EDIT: goodness gracious, I didn't realize that literally everyone in this thread was mentioning DO when I wrote this post!


https://www.digitalocean.com/ will give you a 20GB SSD and 512MB of RAM for 5 bucks a month. Perfect for those little side projects.


Signing up with SSDTWEET nets you $11 of account credit too.


Saw that tweet this morning!


I think buying a few Raspberry Pis is competitive for a few projects over the course of a year.


Good luck with the upload bandwidth on residential cable. Also in some cases simultaneous up/down kills the download speed. I guess if you had google fibre, though.



That's what I do now, but it's a pain to serve multiple HTTPS sites and an extra pain (practically impossible with default HTTPS in browsers nowadays) to serve mixed HTTP/HTTPS. I'd much prefer sandboxed Linodes for all my little projects, but if I bought a Linode at $20/month for each of them I'd be poor pretty fast :)


Serving multiple HTTPS sites is a piece of cake with nginx. What does your stack consist of?
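
Roughly speaking it's just one server block per site, and SNI handles the rest for any reasonably modern client (domains and cert paths here are placeholders):

  # /etc/nginx/conf.d/ssl-sites.conf (illustrative)
  server {
      listen 443 ssl;
      server_name www.firstsite.com;
      ssl_certificate     /etc/ssl/firstsite.crt;
      ssl_certificate_key /etc/ssl/firstsite.key;
      root /srv/firstsite;
  }
  server {
      listen 443 ssl;
      server_name www.secondsite.com;
      ssl_certificate     /etc/ssl/secondsite.crt;
      ssl_certificate_key /etc/ssl/secondsite.key;
      root /srv/secondsite;
  }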


You can get an extra IP for $1/month to solve the SSL problem (or alternatively look at something like Cloudflare).


Which, unfortunately would cost another $20 a month (+ $5 a month extra for each additional site with SSL).


Not sure why you say that, why not just support SNI-based browsers (i.e., NOT IE) to allow multiple TLS connections? And what does the difficulty of mixed HTTP/HTTPS have to do with the server?

SNI: http://www.digicert.com/ssl-support/apache-multiple-ssl-cert...


You don't even need SNI if all you want is some SSL goodness for a side project. Just make your web server listen to a different port for each HTTPS site. Most of your users won't care about the port as long as there's a padlock somewhere, and the ones who know better will probably know that any port is as good as port 443.

  1st site: https://www.firstsite.com:12345/
  2nd site: https://www.secondsite.com:24328/
  3rd site: https://www.thirdsite.com:37712/
  etc.
You can have thousands of HTTPS websites on a single IP address while fully supporting every browser.


In Apache, one server can serve both HTTP and HTTPS (and multiple HTTPS with SNI). The problem occurs when you have both on the server with name-based virtual hosts and you request the HTTPS version of the HTTP site. If Apache can't find an HTTPS virtual host for the requested ___domain (because it doesn't exist, the site we're asking for only has an HTTP version), it will default to serving the first HTTPS site it finds instead. This means your ___domain will serve the wrong site and probably show an SSL warning to boot because the domains don't match.

Normally this isn't a problem, except that some browsers nowadays default to requesting the HTTPS version first. That means if you're serving an HTTP site and an unrelated HTTPS site, and a user accesses the HTTP site via an HTTPS connection, there's a good chance you'll end up serving the unrelated HTTPS site plus an SSL warning instead, because that's how Apache does things.


The problem is that you have not enabled an SSL version of that vhost.

How is that Apache HTTPD's problem? Nginx would do the same thing. When you don't define an SSL version of site.com for ip:443, you get whatever the default cert/vhost is for that ip:443. What would you have a webserver do differently?


I didn't say it was Apache's problem. For various reasons one project does not have HTTPS at all, and I understand how the web server handles things and the reasons why. My point was that a cheaper option from Linode would let me spin up a new VPS to host that small side project, instead of cramming it on a server that also hosts HTTPS; but $20/month for a not-for-profit side project just to edge around new browser defaults is steep for some of us.

Sure there are other options but I'd rather not have accounts all over the place, and I like Linode.


> Sure there are other options but I'd rather not have accounts all over the place, and I like Linode.

Linode's problem with $10 accounts is not with customers like you who are paying them decent money otherwise - it's with customers who would sign up for $10, break things and expect $50 a month worth of support on an ongoing basis.

Most customers looking for a cheap VPS (to learn on, to experiment etc) are not going to be profitable unless you provide virtually no support, so by slicing off that part of the market they can offer a uniform level of service without compromising their profitability.


Heroku is able to provide as many free low use dynos as you want and that's a managed system with presumably more support cost.


>What would you have a webserver do differently?

How about not reverting to a "default"? Just serve a redirect to the HTTP version if HTTPS is not available for that vhost.
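
Something like this catch-all vhost would do it (listed first so Apache treats it as the default; cert paths are placeholders). The browser will still warn about the mismatched certificate before following the redirect, which is the part no server config can fix:

  # default *:443 vhost: catches HTTPS requests for domains that have no SSL vhost
  <VirtualHost *:443>
      ServerName catchall.example.com
      SSLEngine on
      SSLCertificateFile    /etc/ssl/anycert.crt
      SSLCertificateKeyFile /etc/ssl/anycert.key
      RewriteEngine on
      RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=302,L]
  </VirtualHost>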


Sounds like the problem is with Apache then (I haven't used it in ages). As someone else suggested, have you thought about using nginx?


I'm running my micro-VPS on TransIP which offers 1GB mem, 50GB HD, 1TB traffic for €10/month.

It's not the same as $10 a month but it's less than $20 and their support is excellent (5+ year customer, but more for their excellent DNS management console.)


Didn't know TransIP, thanks for the tip!

Your package is only €5/month now - for €10 you get 4GB/150GB/5TB, which is only a quarter of the price of a roughly equivalent (no SSD, but more HD and traffic) DO instance.

https://www.transip.eu/vps/pricing-and-purchase/

Almost unbelievable.


That reduced price is for the first month only I'm afraid...

Still a great deal though :)


Unfortunately it's all in Dutch, otherwise I would buy one on the spot.

/e Apparently Google sent me to the Dutch one by default! Thanks Google!


> $20/month is prohibitively expensive

If you're capable of using a bare-bones Linux VPS but are unable to pay $20/mo for your side projects, it's time for you to find a better paying job.


Why? If side-projects are just that, hobby tasks, then it's a toss up of spending $X disposable income on things I/my family enjoy, vs $X - $20. $20 is an additional day out with the kids, or half a pair of shoes for one of them... or an extra $20 overpayment on the mortgage (worth so much more in the long-term).

Bottom line: it has nothing to do with income, but what people value.


With ARIN likely to be 100% out of contiguous space by the end of this year, everybody really wants to discourage things like simple blog hosting taking up a /32 on its own. I'm sure they'd be happy to cut you a good deal if you were willing to only use v6, but fair market value for an IPv4 address will arguably be > $5/month in the not too distant future.


http://www.nephoscale.com/nephoscale-cloud-servers

$7.30 / month for a 0.25G instance (on the members plan)

$11.68 / month for a 0.5G

$22.63 / month for a 1G

I've worked with them a lot, and they are excellent. The underlying hardware is really, really fast, their tech support guys are very competent, and they are highly responsive.


There are definitely providers that give you that.

Check out www.lowendtalk.com.

BuyVM.net is also very reliable and competitive for what you get.


Big up BuyVM!

http://www.doesbuyvmhavestock.com/

They actually have stock which only seems to happen a few times a year.


Hmm, I guess they could offer 2,4,10 packs to make the total cost reasonable.


512MB (burst), 20GB HDD VPS for $4.99/month

http://quickweb.co.nz/lowendbox.html

I've been using these guys for small projects for about a year now - never a problem.


They're going to have a certain amount of fixed overhead per instance, mostly in the form of support, so it may not be worthwhile to them to drop below $20 for any size.


For a simple side project, you don't really need 512mb dedicated. Voila: http://www.lowendbox.com/


I have a single Linode with several domains/projects on it. It's not too difficult to set up virtual domains.


This is long overdue. I've been a Linode customer for almost 5 years, and I moved to Digital Ocean just yesterday because their offering was just too compelling to pass up.

Looks like their (Digital Ocean's) pricing shook Linode's cage a little bit, and that's a good thing.


I too tried moving to DigitalOcean because their offering seemed very compelling. Turned out to be a huge mistake.

After waiting some weeks for Arch to be re-released, I finally booted an Arch image. Not a week had passed and some kernel upgrade had already made the system unavailable (no network on the VM). I forced nothing, just ran sudo pacman -Syu as I always did on Linode.

Support didn't care. For days I tried responding to the ticket that I opened, talking to them on IRC, via mail. Nothing. I tried to request help to at least recover the files. Nope, nothing they could do.

I did not understand the true value of good support. Now I do. Back to Linode.


They provide virtual hardware; how is it their fault that you can't manage your software on it? I'm glad I use Debian/Ubuntu. It also sounds like you never took a backup snapshot.


For two reasons: First, they created a "supported" image that had no IgnorePkg = linux as it should have, since they don't support upgrading the kernel. They did this on their Ubuntu image; they failed to do it with the Arch image. Second, you have no access to the kernel that is used to boot the system. Even after upgrading, the system kept booting the old kernel, but the network was still down. Nothing a user can do at this point.
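
For anyone else running Arch on a host that boots its own kernel, the workaround is a one-liner in pacman.conf:

  # /etc/pacman.conf -- keep pacman's hands off the kernel package,
  # since the host supplies the kernel anyway
  [options]
  IgnorePkg = linux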


You can access a recovery image on Linode; I'm guessing there is no such thing available for Digital Ocean, which is why he was asking for help recovering files (even prgmr offers a recovery image you can use to recover files).

The fact that he didn't get any help is a problem; that is what support is supposed to do. And if he expects a higher level of support than Digital Ocean provides, well, Linode provides it, so I see no problem.


No, Ocean provides backups, though they're currently 'in beta'.


I wasn't talking about backups; I was talking about a recovery option, which allows you to boot into Linux and mount your existing virtual drives to extract data or fix errors / look at logs.


Hmm, I still don't follow you. Does Linode have this? After moving I haven't found any difference in features. Perhaps I just wasn't using this on Linode? What does it let you do that restoring from a backup can't?

(The use case in this thread is backups as voidlogic mentioned.)


The use case I gave was looking up logs to see how something failed and to access files that aren't backed up but still exist on the virtual drive (this can happen when the OS fails to boot). You can also repair the virtual install if you accidentally messed up a configuration file or something like that.

And yes linode has this: http://www.linode.com/wiki/index.php/Finnix_LiveCD_Recovery_...


Ah, thanks.


Well, Arch is not that good for a server for exactly this reason. I love it as a developer on a desktop system, but for a server I'd definitely use Debian.


How is this Arch's fault? The user did the same thing he/she normally does on Linode. This is a case where the Linode ops really understand Arch and everything just works (the 'linux' package isn't installed on the image), but they still provide the latest kernels. On DO I also originally ran into this issue, but support never gave a clear reason as to why; from digging around it appears to be related to the 8139cp/8139too kernel modules (they don't load). It seems like some kind of version issue, but I haven't had the time to play around with this more.


This was not Arch's fault.

This was DigitalOcean's fault, for not supporting changed kernels without providing safeguards against kernel updates.


It was a development box/env. Anyway, it was not a problem with Arch. The same set of updates worked absolutely fine on Linode.


I moved over the weekend as well. Even after this update, Digital Ocean is cheaper: 1GB RAM + 30GB SSD for $10 vs 1GB RAM + 24GB disk for $20. Linode offers 8 CPUs compared to Ocean's 1, but the SSD seems to more than compensate in my experience.

The one aspect that has me considering going back: Linode's backups seem much more confidence-inspiring.


Yeah. I was paying $29.99 for a 768MB VPS, with backups, which made it about $40. For half of that I got a 2GB VPS with SSD and free daily backups. Total no-brainer.

Why do you say Linode's backups are more confidence inspiring though?


Backups on DO are free only for now (until June 1st). The price will be 20% of what you pay for your VPS. Plus, snapshots will be $0.02 per GB. Plus, DO backups don't happen every day; they are put on a randomizer (based on the load of the backup system). For me every 3rd day is skipped.


Wow ... that's good to know. I thought they'd be free forever.

Do you work for Digital Ocean? How did you get this info?


No, I don't work for DO. I'm just using their VPS to do some testing and to compare it to Linode (I use both). Source: https://www.digitalocean.com/blog_posts/snapshot-and-backup-...


Partly it's the UI. It's hard to tell how many snapshots they promise and at what frequency.

Partly it's observations over the past week. I noticed a few days after moving that I had two backups the day I created my account, and no backups for 3 days after. Contacted support and was told they have some 'randomization for load balancing'. Asked what the worst-case frequency could be, and was told it's "usually 24 hours". This is the kind of thing that can bite you when the shit hits the fan.

(And the experience definitely shows what other commenters said about Linode's support.)


"the SSD seems to more than compensate" that really depends. If you are using the server for a web app, disk IO should be minimized and CPU is more likely to be a bottleneck. This assertion largely depends on how well your database can cache its queries. For the web apps I host, I'd rather have CPU over faster disk IO.


Tbh, I actually moved for the RAM and not the SSD, but again if you're running stuff like Sphinx or Elasticsearch (search engines which have their entire index stored on disk and then read into memory as needed), then having an SSD helps.

Also, in the case where you're running with swap because you don't have enough memory, an SSD comes in handy too. MySQL is particularly badly behaved in this respect: it swaps too easily unless you load the entire DB into memory by setting innodb_buffer_pool_size equal to the size of the DB plus a little bit.

Long story short, an SSD can help out quite a bit more than you seem to suggest.
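
A minimal my.cnf sketch of that tuning (the numbers are made up; you'd size them from your actual data and the box's RAM):

  # /etc/mysql/my.cnf (illustrative values)
  [mysqld]
  # roughly the on-disk size of the InnoDB data, plus a little headroom
  innodb_buffer_pool_size = 800M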


If you are hitting swap often, you're doing it wrong. The second any machine I'm running is hitting swap I'm reconfiguring the machine or resizing the machine so that it does not hit swap. MySQL can be tuned to work with the memory you allocate to it such that it does not hit swap. With the doubled memory it will be easier to allocate for query cache hits.

There are definitely some things where disk IO is important, but swapping should not be in that equation unless you have a problem. I prefer better CPUs and Memory to disk IO on servers usually. The database server is the only server I care about having decent IO, and Linode does have decent IO for non-ssd drives (a lot better than AWS).


Depends on the characteristics of the webapp, and how much I/O it needs per request.

On the other hand multiple CPUs only matter if you have a ton of traffic. My side projects do not :) At least for hobbyists Digital Ocean is very compelling.


I'm also looking at moving and was down for an upcloud.com trial later this month, precisely because Linode had dropped the ball on the quality of their offering.

This does play catch-up, and the things that they have had in private beta are pretty nice too... but whether it will make a difference remains to be seen.

Hosting is far more a retention game than pure acquisition; these changes are long overdue... SSDs are also overdue.


Honestly I don't see this as a reaction to DigitalOcean. If you look at http://blog.linode.com/category/upgrades/ you can see that Linode has a history of passing on the value of faster/cheaper hardware over to their customers. It just seems to be part of their business model to continually upgrade their hardware/features as it becomes economically feasible to do so.


An interesting side to this is that they seem to be keeping various hidden/old levels going even with the upgrade. I went to upgrade a Linode 768 (no longer available) and it said it'd upgrade to a Linode 1536. But I don't need that much RAM... so I might first downgrade to a Linode 512 then "upgrade" to a 1024 for free... ;-) Update: Seems I can't, but I can upgrade directly to a 1024.

Random observation: The Linode 1024 is now the same price as prgmr.com's equivalent.


I was in the same situation. Downgraded a few days ago to 512MB and now will get 1GB.


If you have a Linode 512 plan with backup, they will automatically upgrade your backup from the Backup 512 version to Backup 1024, which is $10/month, where the old one was $5/month.

I'm sure it's a good deal all around, but it seems like a sleight of hand to me: Linode, now with more RAM, AND more expensive backups for the same disk space.


I've just checked with Linode - the backup price is shifting too, so the cost will not increase. They have not updated their FAQ yet.

---------------
Hello,

The backup prices are shifting just as the RAM prices shifted, so Linode 1GB backups are now $5, Linode 2GB is $10, and so on.

I hope this clears things up. Please let us know if we can be of further assistance.

Regards,
Jonathan
---------------


I just checked with Linode support, and their backup pricing will be adjusted to match with the old rates as well. So if you’ve been paying $5/month for backup service, your price will stay the same post-migration.


This RAM change is now reflected here. http://www.linode.com/backups/


"with the exception of Fremont which may take another week or two – we’ll be explaining more on Fremont in another post."

Interesting... the Fremont data center had some pretty serious issues a couple years ago, and though it has been better since then the bad memories remain. I wonder if they're finally switching to a different data center vendor.


Hrm. I don't know about a couple years ago, but I recently moved my server from Fremont to Dallas because I was having weird network issues from my California friends. (I host a small Mumble server.) Moving to Dallas fixed everything completely.


The Fremont issues have mostly been due to HE, AFAIK. That wouldn't really affect RAM upgrades, would it?


No, but if they're going to imminently switch to a new data center vendor they wouldn't bother rolling out the new hardware at their old data center, would they?


But they upgraded to the 8 core machines recently even in Fremont!


The old L5520 processors they are using have 4 cores with hyperthreading, so they appear as 8 to the OS - so they didn't have to change the host machines to present 8 cores to the VMs.


He's referring to the recent move to the Sandy Bridge E5-2670 processor. All Linode customers get this, but it requires a reboot to activate.


Does anyone have a suggestion/option for those of us with bigger disk space needs? I love Linode and have been running a privately-hosted email server on there for a while (yes, yes, I know, I know, Gmail, Gmail. I like running my own server.). But my resource constraint these days is disk - not memory or CPUs. Feels silly for me to get a bigger instance just for the disk space.

A while back I spoke to the Linode guys (I think it was at SXSW 2011) and they said they were going to be releasing an EBS-like product "soon". But I guess not that soon.

Perhaps I'm just not a good fit for Linode in this case? Any thoughts?


Dedicated servers, even the cheap ones, tend to offer a ton of disk space. In Europe, OVH and Hetzner have entry-level offers with 2 or 3 TB at 50€; you might find something at similar prices geographically closer to you.

Other than that, there are probably VPS providers who let you upgrade disk space independently from RAM (I'm using Europe-based providers, so I probably can't give you relevant names, unfortunately).


OVH has their offering in North America as well (located in Canada). $40 a month for a Core i3 3.4 Ghz, 8 GB of RAM, 2 x 1 TB drives.

And it has been fantastic when it comes to speed and whatnot.


Budget VM has really good plans if you need disk space. A warning though: the support sucks. I used the 2GB plan for a while; it has okay performance, but for 10 bucks it was perfect for the analysis I had to do.

http://www.budgetvm.com/linux-vps.php


This is my only complaint about Linode. We run our one stack that requires a lot of storage on EC2 and use EBS (and we might consider moving to just leasing real hardware in the future, since that is more straightforward & cheaper than EC2+EBS).


You should check out www.tilaa.com, there you can individually increase disk space (as well as cpu and ram). I'm running two servers there and they've been solid.

Update: The traffic is a bit expensive though.


Unfortunately there aren't many VPS providers that offer decent storage space. Go dedicated. I found a deal on a quad core Xeon, 24GB RAM, and 500GB disk for $50/m.


I use Transip.eu. You can buy the cheap plan and upgrade only disk space afterwards. I'm on my phone, so can't check the exact prices now though.


I can't comment on their VPS / disk space offering, but I've been a customer of TransIP for well over 5 years now and their support is stellar.

Disclaimer: They're Dutch and so am I, so you should factor national pride into this endorsement to some degree.


When you were talking to them, did they mention any possibility of just plain up adding extra disk space and paying X extra dollars per month?


They already offer this. Sadly though, it's really expensive: +12GB - $144 a year


And here I just finally finished migrating to RamNode for this very reason a week ago. Frick.


Never bet against Moore's Law.


You still benefit from SSDs...


Linode claims their HDDs are faster than el-cheapo SSDs... Still waiting for my migration to test that myself, though.


Yeah, such generalizations make no sense. Your mileage will depend on the characteristics of your workload.

I've been working on a HN clone. The HN codebase creates a ton of small files for storage (one per post or comment), which need to be loaded up en masse on startup. Switching to Digital Ocean from Linode reduced startup time by 50x.


That's interesting because cost is Linode's reason for not offering SSD. Digital Ocean actually uses very expensive enterprise SSD in the servers. I'm a current customer of Digital Ocean after being with Linode for over 2 years.


Where did you get this information?

When I looked at their site I didn’t see information about what kind of SSDs they are using and, based on the price, assumed they are MLC (consumer grade). I’d never want that on a server.


It was no surprise this was coming after the handouts at SXSW.

We did an interview with Linode last month about some of their new upgrades: http://blog.serverbear.com/interview/linode/

You can also see some benchmark data from the new E5 CPUs here:

Dallas - http://serverbear.com/benchmark/2013/04/07/FxVC0fV6q5GAMa1j

Fremont - http://serverbear.com/benchmark/2013/4/6/TX3w1Zx5dRK13MJi


"Your position is 1 out of 1 queued migrations in this Linode's datacenter." 8 minutes later, the system was copied and powered on. Also got a few upgrades that had been delayed due to the necessity to reboot.

Linode is very painless, but as always, that convenience comes with a cost. I am very sad that they do not allow for extra IPs anymore. I let my service lapse one month and lost my extra IP.

(Don't forget to increase swap sizes, change configurations to deal with the extra memory, etc.)
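
If you'd rather not reshuffle disk images in the Linode Manager to grow swap, a plain swap file works too (sizes are just an example):

  dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1 GB swap file
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  echo '/swapfile none swap sw 0 0' >> /etc/fstab  # make it survive reboots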


Really? I still see an "add extra IPs" option for my Linode.


Yep, but it's not automatic. I believe they still require you to file a ticket justifying why you need the extra IP address. I always found it weird needing to jump through those hoops if you are only trying to add one additional IP address (if you are adding a bunch then I can see it), when you can create a new node and have a new IP address instantly (obviously it would cost 20x more in this case).


That's coming from ARIN, not Linode. ARIN won't hand out more IP addresses unless ISPs can show they are being effectively utilized. I've had to fill out a form at Rackspace to justify additional IPs.


Oh, I guess the higher-ups had a bigger hand in this than I realized.


That may be due to their ISPs'/datacenters' rules. Did you try to file a ticket and get refused? I'd be really surprised. Whenever I needed their help, they were awesome every single time. One time I was traveling and I couldn't pay on time due to bank issues. They gave me extra time to pay rather than terminating on the 10th day!


No, I haven't had a problem with getting an additional address in the past, except for one case where I decided to get the process started before completing the purchase on GoDaddy. I just take the SNI route these days just to not be bothered. Their support responds quickly, but this is almost akin to having to wait for a hosting company to set up your VPS. I'm probably just spoiled by the instantaneity of everything else.


I believe the only justification for the process is a SSL certificate. In the past, it was "pay 1.00 and get 1 extra IP, automatic, no questions asked."

I understand why Linode follows that procedure. I just miss the old days. I have to be more careful about cross porting, cross data, cross domains, and hosts which can't access anything but port 80. I used to just follow the principle of website x, y, z on port 80 on IP1, website a, b on port 80 on IP2.

Luckily, Digital Ocean basically sells IPs for $5 a pop.


The old days are not coming back. You can have as many ipv6 addresses as you want though.


Yeah, if you tell them it's for another https site, they'll make you prove you've bought the cert. A bit of a hassle, but I can see why they don't want to just hand out IPv4 addresses and have them sit there unused.


Most providers ask you to justify extra IP allocations.


They probably should but there are many providers (e.g. Softlayer) who, on the more expensive plans, will give you your own /28 or /29 for free without even asking if you need it. (Comcast and AT&T are also guilty of this on business tier cable/DSL plans.)

Linode is definitely the strictest in my experience when it comes to enforcing ARIN's rules (if HTTPS is your justification, Linode actually checks to make sure you're serving up a valid certificate). This is a good thing for the Internet but somewhat frustrating when other providers are so much laxer.


Or, viewed another way, it is evidence of Linode's attention to detail, which will help you many more times than it will hurt you.


Verizon handed me a /24 for a T1 line. No questions asked but that was 10 years ago :-)


I needed to fill out a form and list the use of each IP address before Comcast would give me a /29, and that was ~5 years ago.


Wow - about 2 years ago we had to convince Comcast not to give us a /29! At least they could be convinced - AT&T couldn't.


Maybe it's just the luck of which rep sets up your account. I wouldn't presume that they ever look at the form or decline a request.


I wonder if Slicehost would have followed suit if they weren't owned by Rackspace now. Rackspace has never been about low prices, though.


I'm on Rackspace and I might switch now.


We probably would if we didn't use RackConnect between our cloud servers and our managed DB server, which is @ Rackspace.

Their cloud server management tools are a joke compared to Linode's.


Just as a warning to anyone else who clicks the upgrade button too quickly: you get entered straight into the queue, and it doesn't look like there is any way to cancel after you've committed.

My curiosity got the better of me and I would have preferred to plan a bit better for the downtime, but I am now 23 out of 28 in the queue and it's moving pretty fast...


They might be able to remove you via a ticket.


I don't have immediate need for the new capacity, so I'm actually going to wait a few weeks on upgrading. With any luck the early adopters (and probable heavy users) will have all upgraded and I won't have to share a machine with too many.


That, plus fine control over when my Linode will go down, and letting the early adopters catch any quirks (Linode doesn't half-ass stuff, but I'm in no hurry).


I found it hard to compare VPS offerings and ended up making pizza instead:

http://workforpizza.com/posts/2013-04-04-comparing-apples-an...


See, now all I got out of that is hunger and what looks like an awesome idea for pizza.


Awesome upgrade.

I did notice, though, that after the migration (512 -> 1024, Dallas) I'm now on a node with the older CPUs (model name: Intel(R) Xeon(R) CPU L5630 @ 2.13GHz) as opposed to the E5-2670s mentioned in the March 18th blog post.


I bought a 1024 in Dallas yesterday.

* You: Intel(R) Xeon(R) CPU L5630 @ 2.13GHz
* Me: Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz

(cat /proc/cpuinfo)


You don't even get an E5-2670; most got an E5-2650L, which is actually only 2.0GHz.


I work at a competitor, Atlantic.Net (www.atlantic.net/cloud - free trial; message me for extra free credits), but I think the market is segmenting between Digital Ocean, which is very aggressively priced, and Linode, which is trying to throw in more features and pack a lot of value at a higher price point.

Interestingly, the largest providers, AWS + Rackspace, provide no free bandwidth, while Linode not only included it but increased it significantly. Will these providers push the bigger guys to start offering free bandwidth, which is a significant cost of the product [well, if it's used]?


Your pricing actually looks _great_ for what I'm trying to do. With your pricing model I'll have to keep an eye on my bandwidth, but it's pretty competitively priced. ( I'd only run a Bulletin board and a Mumble server w/ ~8 concurrent users. So I should easily be in "tens of GB.")

I'm currently running 2x512mb instances at Rackspace. (It's cheaper than 1x1024 instance, sadly, and I don't need the extra bandwidth/disk space, just the RAM.)

I think I'm going to spin up a trial today. Though I tried to do the same w/ digital ocean and my CC is giving me nothing but headaches. Thanks for the link!


Any thoughts as to why nobody else seems to be competing with a spot market? Plenty of competitors look compelling on a per-hour basis, but if you don't require much bandwidth it's really hard to beat EC2's spot market pricing.


Rounding from x9.95 to x0 is a welcome change. Less game playing.


I like it, but I wonder what happens with the prepaid ones. I paid for 2 years to get the 15% discount...


Staff in their IRC channel indicated people will not be charged the new amount until their next invoice date.


Oops, my Linode keeps crashing before the kernel goes live. A newer Xen bug? (Ubuntu 12.04 with the stock kernel, configured with pv-grub-x86_32.)

Opened a ticket 6 minutes ago :o


In the past, they've flatly refused to support anything to do with pv-grub for me.

It was highly disappointing. I hope you have a better experience.


Looks like it was a host misconfiguration; they fixed it after I got in touch with one of them on IRC.


"The upgrade is not mandatory, so if you’re not down with the 5 cent increase you can keep your existing resources and pricing."


Right now I'm using Heroku, and a new app I'm working on will need a Postgres backend. Because the Heroku docs state that the use of many schemas causes performance degradation, I wonder if Linode or another host could provide the kind of reliable Postgres hosting that Heroku does?


If you're running any kind of intensive database I/O and your databases are larger than RAM, then look for a dedicated server. I tried switching between many VPS offerings, but there are always bottlenecks when you share the disk I/O with everyone else. For example, on one provider, I had the problem that everyone placed heavy cron jobs at midnight, which completely killed the disk I/O for 10-15 minutes.


OK. But my main fear is about backups and DB maintenance. Heroku does it, but I wonder who else does.


London now available too. "You are in the Migration Queue! Your position is 4 out of 4 queued migrations in this Linode's datacenter."

~1min in queue and 5min to migrate 512MB to 1024MB.

However, I'm still on the old CPU. :/ Intel(R) Xeon(R) CPU L5520 @ 2.27GHz, 8 cores, Memory: 1001 MB real


You did get 8 cores, which is what they promised.


Yes (3 weeks ago). However, when they wrote the post about the new hardware, it said that new Linodes would start to land there after a week. It would only be reasonable that, after 2 extra weeks and a migration to get extra RAM, a Linode would land on new hardware. However, it looks like it will take a while. I'm not so much after more GHz from the new CPUs as after the new hard drives, which should be faster than the old ones. Will wait. :)


Really great! Thanks Linode. I have been a Linode user for 3 years; I had one server crash which lost everything (that was before their backups), but otherwise they've been great and reliable. I need the 1GB of RAM for some Ruby stuff, which is DDR-hungry.


Does anyone know if others (e.g. Digital Ocean) follow the Safe Harbor like Linode?


Just after the Linode upgrade blog post was posted, we have people from all the other VPS providers flocking to Hacker News to spread the message about "how awesome they could be..."

Shall we stop and concentrate on Linode now, please?


  Your position is 160 out of 389 queued migrations in this Linode's datacenter.
I got mine to migrate ahead of the queue by accident. I just resized my disk, and it started migrating after the resize.


Yes! I've been considering switching to Linode for a few months now. Last month I made the leap when they introduced 8 cores. RAM was the only thing I found lacking for the price. This now completes it :-)


Newark and Atlanta are now able to be upgraded as per the blog.


Well Done Linode. As with most things, you get what you pay for, and I'm willing to pay a bit of a premium for the rock solid (virtual) hardware and awesome support.


Finally! Who said #3 isn't going to be RAM? :)


I did. Many times. But I'm glad to have been wrong!


Freaking Freemont.

I can't wait to hear their "upcoming explanation", and I hope it's along the lines of "we're nuking it from orbit".


Assuming you can stomach a little scheduled downtime, isn't it pretty easy to move a Linode to a different data center?


Fremont has to be Linode's ugly duckling of a datacentre. Stay away from it if you can.


Hopefully this puts pressure on other VPS vendors such as prgmr (nod) to upgrade their RAM offerings.


Oh well, another machine reboot. :) Still, need to wait, "Your position is 272 out of 293" :D


$0.05 more per month! Ludicrous! :) but alright, I'll just go along with the upgrade.


Looks like London and Tokyo are now available! (as per the blog)


The migration time seems to be largely dependent on how much disk space you're actually using. In London, I'm getting a 43.75 MB/s rate on my primary disk. It's using some kind of compression though, as an empty disk copied across instantly; it's been about 10 minutes for a 10gig drive.

Edit: Ah, just misread the console. It's still "copying" my empty drives, at the advertised rate. So dividing your total disk size by the 40-ish MB/s is a good estimate of migration time.
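
E.g. a Linode 1024's full 24 GB of disk images (used or empty) at that rate:

  24 GB * 1024 MB/GB / 43.75 MB/s ≈ 560 s, i.e. 9-10 minutes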


Now they need to introduce a lower-priced option for 512MB.


Finally! Now I can move back to Linode. :D


How does this compare to EC2?


Comparing an m1.medium EC2 instance w/ 96GB EBS volume to a 4GB Linode (ignoring the fact that you're probably getting at least 2x CPU with Linode), you're looking at ~$100/mo for EC2 and $80/mo for Linode. If you take full advantage of Linode's 8TB/mo of included transfer, AWS is going to cost you an extra ~$1000 (assuming 100% outbound).
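
Rough math on the bandwidth part, assuming the ~$0.12/GB AWS charges for the first tier of outbound transfer:

  8 TB ≈ 8000 GB outbound
  8000 GB * $0.12/GB ≈ $960/month on AWS, vs. $0 extra on Linode (it's included in the plan)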


Linode continues to win.


WAO!! THIS IS AWESOME!


awesome upgrade..


I wonder if they can skip the host migration after half of the Linodes have been migrated and let the people who are not in a hurry just reboot to double the RAM.


Probably not, because the migration also gets you onto new Sandy Bridge hosts:

http://blog.linode.com/2013/03/18/linode-nextgen-the-hardwar...


Not in London. I'm still on the Xeon L5630 after the migration.


Argh, you may be right - I haven't migrated yet and I was just assuming based on their previous blog post. Here's another report that this doesn't get you onto Sandy Bridge:

http://blog.linode.com/2013/04/09/linode-nextgen-ram-upgrade...

Are they really going to make all their customers go through two migrations? That would be disappointing.



