Linode NextGen: The Hardware (linode.com)
200 points by rk0567 on March 18, 2013 | 113 comments



According to the Linode FAQ [1], current Linode servers have roughly 20GB of RAM shared out. Does anyone know the spec of the new servers and what the max memory would be? The max memory for the CPU seems to be 750GB [2]. Presumably the servers carry far less than the theoretical max, but fingers crossed they'll be able to bump the memory at some point soon so that they're a bit more competitive with others.

I was really hoping that was going to be this announcement; however, the fact they've titled it 'The Hardware' hints that there might be nothing new on RAM in the next announcement, which would be a shame. Upgrades to CPU are nice, but it'd be nicer to see a RAM upgrade, as almost everyone is constrained on RAM rather than CPU or disk, particularly on newer hardware and with their new bandwidth limits.

I feel rather ungrateful now having said all that. Thanks anyway Linode!

[1] http://www.linode.com/faq.cfm#how-many-linodes-share-a-host [2] http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-...


Same here, but I'm still holding out hope. I recently wrote a blog post comparing my experiences with Digital Ocean and Linode, and both companies ended up commenting on it. Linode hinted that there'd be a few changes coming soon that'd please me. Considering I mentioned RAM and pricing, I want to believe they were hinting that they really were planning on bumping up the RAM for current customers. They didn't say anything we haven't heard them state publicly before, but even so, I'm still holding out hope. I think more RAM is probably the number one request, and they'd have to be living under a rock not to know it. So I'm hoping that maybe they're playing it close to the chest for now until all the other upgrades are complete, and then maybe they'll increase our RAM when it's all over. Who knows. We can always dream.


Just saw this link to a pic on Twitter lower down in this thread:

https://news.ycombinator.com/item?id=5396634

Not sure where that pic is from or whether it's legit, but if they double the RAM on all plans as it implies, I'll be really pleased... there's some intriguing text top right, too, which looks like it might be some sort of stats monitoring system (Longview?).


I can "confirm" that double the ram is coming, they were handing out flyers at SXSW stating as such. Here is a photo I took in high resolution: http://code.mglinski.com/X3ps/30zoh6Yf Of course it is all still plans, but after talking with them directly about it I feel very confident they will be doing your ram junkies sweet justice once they have had the chance to move everyone over to the new E5 boxes.

They had a demo of Longview on the show floor too. From what I gathered it is a package-installed daemon that monitors processes and the like, something like the server part of New Relic but sitting directly in their datacenters.


That's good news! They're just stringing the announcements out for more publicity. Thanks for the better pic.


Most dual E5 motherboards have 16-24 DIMM slots. 32GB DIMMs exist but are insanely expensive, so most likely they would use 16GB DIMMs, which puts them at 256-384GB of RAM per box. They could in theory be using some custom motherboard shenanigans to increase the capacity, but that is unlikely.
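Back-of-the-envelope on the RAM per box, assuming those slot counts and 16GB DIMMs:

    # 16GB DIMMs across 16-24 slots per dual-E5 box
    echo "$((16 * 16)) GB"   # 16 slots -> 256 GB
    echo "$((24 * 16)) GB"   # 24 slots -> 384 GB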

Keep in mind that the CPUs they mentioned are ~$1,600 each in lots of 1. I'm sure they are getting a good volume discount, but that is still a lot of money per box in CPUs; I would imagine they need to pack them full of VMs to offset their costs.


FWIW there was a big RAM increase some years ago: http://blog.linode.com/2010/06/16/linode-turns-7-big-ram-inc...

Given Linode's past behavior (a history of reinvesting profits into upgrading hardware/infrastructure and passing that value on to the customer rather than just milking higher margins), I'd expect them to upgrade RAM someday, when it becomes economically feasible.


RAM is the only thing you can't share, and hence the only thing that makes shared hosting providers any money.



> So what about SSDs? There’s no question SSDs are in Linode’s future, however enterprise-class SSDs (SLC or eMLC based) are prohibitively expensive. And although MLC-based drives are cheaper we just don’t feel right about using consumer grade laptop drives to power your Linodes. So we will wait until capacities for enterprise SSDs increase.

Well, that is disappointing, and arguably SSDs are the more important upgrade for many people. Not to be a Debbie Downer or anything.


One of the reasons I had to switch to Digital Ocean is exactly this. I need my DB to be extremely fast; I ran some tests and, unsurprisingly, their SSDs were sometimes 10-20x faster. If Linode had an expensive SSD plan, I would totally use it for my DB and keep the same HDD Linodes for the rest.


How big is your database? Why are you not keeping it in memory? Was this an actual, real bottleneck? How many servers do you currently rent?


SSDs are cheaper than RAM. I'm not keeping a 1TB database in memory.


How about SAP HANA? They keep 100TB in memory...

They have a startup program: http://www.saphana.com/community/learn/startups


I've used SAP HANA and it's a fantastic piece of software; arguably the best SQL database on the market today.

It also costs an absolute fortune (only approved hardware allowed) and even the AWS instance which costs $3.50/hour is not certified for production deployments. I don't know many startups who can afford $5000 a month for minimum redundancy.


That's not an option for me; my database is more than 20GB.


32GB of RAM is $220 on Newegg.



Preach!


A DB in RAM still has to write to disk, and the writes are often the bottleneck, not the reads.


That's surprising; when I benchmarked I/O on several DigitalOcean VPSes I was getting mechanical-disk performance. Which plan are you running? 10-20x faster is kind of high; are you sure you weren't hitting a cache? I usually see around a 5x speed increase on a good host with SSDs.


I had some benchmarks show odd results and others were the expected increase.

You might try building a new server on Digital Ocean with VirtIO enabled. It's an experimental option for new servers, and you can contact support to have it enabled on existing ones; according to some other folks in the support community, it should show further improvements.


My database is on their $20/month plan. I'm not sure; maybe I was just unlucky with Linode. Just to give you an example: I have an SQL query that was taking ~90 seconds on Linode, and after I moved it took 5 seconds. Again, YMMV.


We went from Linode's 8GB ($319) plan to Digital Ocean's 4GB ($40) plan and were seeing a 10-12% increase in performance for large queries.


That's my conundrum.

To get what I want from my database, I want SSDs. If I move my database to another provider, then I have to move everything else to keep the latency to the database low.

Not having an SSD option even when it's expensive leaves me shopping around for where to move all of my Linodes.


Enterprise-class SSDs are grotesquely expensive at the moment. A quick gander around shows that a 200GB SSD from HP runs you about $1,400. Since Linode needs ample space, the largest I found was 800GB, which runs you almost $7,500.

When you're looking at around $8/GB for storage of this type, it's no surprise that Linode didn't bother.
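Working the $/GB out from those two prices:

    # $/GB for the HP enterprise SSDs mentioned above
    echo "scale=2; 1400 / 200" | bc   # 7.00 $/GB for the 200 GB drive
    echo "scale=2; 7500 / 800" | bc   # 9.37 $/GB for the 800 GB drive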

Surprisingly, HP is offering the drives with 3-year warranties, which is a far cry from a while ago, when they were something like a few months.


Well, I would argue the customers should decide what's too expensive. It really depends on your use case; for some people even $8/GB might be worth it. Linode can at least have some offering, even if it involves waiting lists.

It might not be a profit center on its own, but it could land them customers who otherwise would have to skip them.

I would take this further and argue that consumer-grade SSDs shouldn't be discarded out of hand either, for similar reasons.


(Just for comparison) HP 600GB 15K SAS drives are $700 (direct from HP, which is marked up a bit). So even at that price, it's about $1.17/GB. So ya, SSD is definitely still around 8x the price.

[1] http://h30094.www3.hp.com/product/sku/10238268/mfg_partno/VM...


The Intel DC S3700 800 GB is $2,000.00 and has a 5-year warranty ("DC" stands for "data center"). Based on the stellar reviews, it will be interesting to see what effect this drive has on enterprise SSD pricing.


People often say SSDs are expensive. But they're assuming that the capability of a hard drive is some scalar value (storage capacity) that makes it easy to compare SSDs to spinning rust.

The fact is, if you care about IOPS, SSDs are cheaper than spinning disks. In other words, you need to stripe out across way more spindles (and thus consume a lot of power) to get the IOPS you get from a decent SSD.


Exactly; to say an SSD is 8x the cost of a hard drive, or even 2x the cost, is missing the point.

Even an expensive SSD is _cheaper_ than an HD on an IOPS-to-IOPS comparison, and IOPS are what matter to most people.
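Rough numbers, assuming ~175 random IOPS for a 15K SAS drive and ~40,000 random-write IOPS for an enterprise SSD like the S3700 (both ballpark assumptions, not measurements), at prices quoted elsewhere in this thread:

    # $/IOPS, ballpark
    echo "scale=2; 700 / 175" | bc      # ~4.00 $/IOPS for a $700 15K SAS disk
    echo "scale=3; 2000 / 40000" | bc   # ~0.05 $/IOPS for a $2000 enterprise SSD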


Intel's new DC SSD is about $2.50/GB, so only 2-3x the price of 15K disks.


Many newer enterprise RAID cards let you "supplement" the card's onboard cache (normally 512MB to 1GB) with an SSD. That price would be ugly for a large amount of space, but for an extra 200GB of read cache, that might not be a bad thing.


Exactly what's on my mind: an Intel DC SSD cache in between would be a much better option, or in fact any SSD-cached solution. HDDs just don't hold up any more, even if they are enterprise HDDs with 2x cache and 2x port speed.


Don't care. Do it. Will pay.


You did forget to quote this part though:

> We’ve also moved to the latest generation of the enterprise-grade hard disk drives — doubling their cache, increasing their port speed, and decreasing latency and access time

It would be nice to have SSD as an (expensive) option for those that can absorb the cost.


That is an interesting choice. I'm sure their reasoning was a bit more involved than that statement, but there's a gap between enterprise SSDs and consumer-grade laptop drives, and plenty of use cases fit in that gap. Anandtech, for example, recently transitioned from enterprise-grade spinning disks to SSDs for their site, but using Intel mainstream SSDs:

> "we instead decided to go with mainstream SSDs to lower the risk of a random mechanical failure. We didn't need the endurance of an enterprise drive in these machines since they weren't being written to all that frequently, but we needed reliable drives. Although quite old by today's standards, we settled on 160GB Intel X25-M G2s but partitioned the drives down to 120GB in order to ensure they'd have a very long lifespan." (http://anandtech.com/show/6824/inside-anandtech-2013-allssd-...)

In the comments, Anand mentions that he went with the X25-M since the upgrade was last year; if he were doing it now, it would be the Intel 710 or S3700. Also worth noting that reliability, not speed, was their primary motivator.

I wonder if there's a way that Linode could open up that as an option to subscribers? The disks have a shorter lifespan with high writes, but that's measurable, and could be worked into the cost.


What's the difference between an enterprise-class SSD and a regular SSD? Is it performance or reliability? Is an enterprise-class HDD even more performant and reliable than a regular SSD? I thought SSDs were more reliable because they have no moving parts to crash, and had better access times than HDDs because they don't need to seek.


The difference is between MLC (unreliable) and SLC (reliable) SSDs.

MLC (Multi-Level Cell) SSDs are slightly slower, a lot cheaper, and less reliable than SLC (Single Level Cell) SSDs.

Fundamentally, an MLC stores more data in each cell (hence the Multi), while an SLC stores 1 bit of data in each cell.

There are now all sorts of "eMLC" drives which are pushing the reliability of MLC up towards SLC levels, but they're not very common yet.

http://en.wikipedia.org/wiki/Multi-level_cell


Enterprise SSDs are supposed to have better reliability and endurance than consumer SSDs (and some of them actually do). Performance is kind of a toss-up.


Sure, but what about consumer SSD vs enterprise HDD?


It's no contest; SSDs beat HDDs.


Yes, very happy with more cores, and as usual extremely happy being a Linode customer.

Would be even happier with more / cheaper memory.

Also, perhaps a more elastic storage option to stretch/shrink your Linode disks and take snapshots (not assigned to a given Linode) would be great.


I'm not super up on OS-level virtualization providers, but I'm curious why exposing 8 cores is an advantage. Based on the RAM sizing it seems like they expect to run 100+ containers per processor, so surely they're restricting CPU time per instance somehow. If you use a bunch of CPU time, won't you just get starved out when you hit an invisible CPU wall? If you're only getting 1/10th or less of a CPU core, wouldn't it make more sense to pin each instance to just a couple of cores, minimizing context switches?


You can use as much spare CPU as is available. Upon contention, the CPU is (in theory) dished out fairly. Other schemes are of course possible (like Amazon's EC2 where you get a fixed CPU allotment).


EC2 doesn't actually reserve physical CPU time per vm, do they? I had assumed that the lack of burst just means they are keeping the statistical multiplexing windfall for themselves.


Excluding t1.micro instances you are getting a fixed, dedicated amount of CPU per instance, CPU is not a shared resource on EC2.


Why is it that you get such variable CPU performance on different instances of the same type then?

I was using a whole bunch of c1.medium instances and found the CPU performance varied by a factor of at least 2.


Could that be because of old vs new hardware?


So just to be clear: if I have a reserved instance I'm not using, there is a CPU sitting somewhere running HLT cycles?

Wow.


If you have a reserved instance that is in the running state but you aren't using it, then yes. If you have a reserved instance that isn't running then that slot is likely available on the spot market.


Versus needing CPU cycles and having to wait if someone else is using that shared CPU time? There are always tradeoffs.


I didn't mean to be critical; customers certainly get better latency on a fixed-time-slot implementation than on an interrupt-based QoS system. I'm just surprised from Amazon's perspective, especially because they are competitive on price.


They're relying on the fact that not everyone maxes out their CPU usage all the time.

Context switching is less of a problem when you consider that the L1 and L2 caches of the CPU(s) are also shared amongst the 100+ VMs running on it.

If you're running something that is CPU intensive, or sensitive to something else taking over the cache, then running in a VM can be painful especially when you're not in control of what else is running on that server.


Wow, first time in my life I was able to find a link to company.com in the header of blog.company.com! Good Linode.


I like my approach: http://www.instahero.com/blog/

There's a link to the site at the top, and a link to the blog below that.


Yes! I like this approach, rather than hitting the blog main page when I click on the header.


This also annoys me.


Just like a good showman would, Linode will be saving their RAM announcement for last. Network, CPU, disk again (maybe?), anything else (new services/options/load balancing/etc), and finally, memory.



Interesting that it mentions 4-core Xen instances.


If that is the case then pretty disappointing. It really needed to be 4x to compete with DigitalOcean and the like.

https://www.digitalocean.com/pricing


Digital Ocean may be cheap, but I honestly think Linode and DO are on even footing right now. DO has to be cheap to make up for what they lack: a weaker web UI, no IPv6, fewer CPU cores to work with, and just an all-around lack of established trust (they're super new, and I for one am slightly wary). Oh, and their network could use some work too. It can get a bit slow sometimes.

Linode is expensive but they give you lots of control in your account panel, you know they'll be around for a long time, excellent support, IPv6, and on and on. Not that DO is bad though. They've got SSDs which is awesome, they're cheap, free snapshots, and so on.

I'm a happy customer of both services and I don't think one is better than the other. I think both serve different types of customers. For me, personally, Linode is where I go to host my serious projects - the money making ones. Digital Ocean is where I go for a quick, cheap server to mess around with. I have one "main" server on both services but I trust my Linode more. Maybe that'll change as Digital Ocean matures.


I recently went DO for a VM because they use KVM, not Xen paravirt, so 64bit VMs have decent performance. I needed to run a software package that was 64bit only.


If RAM were the only consideration, perhaps. I doubt Linode wants to compete with Digital Ocean, given that DO's prices start at $5 a month. That's not enough to give decent support to any number of people, though I'm sure they promise it. If Linode doubles RAM I'll be very happy with how they compare against budget providers (who are great for some people/uses, of course).


Everyone is waiting for RAM, and Linode knows its customers well enough to know it - hope they make it happen soon; people will then come to Linode in hordes from Amazon, since disk performance is not so good with Amazon. Linode should have known better; with SSDs they could have had a bigger impact than they currently do. As for SSDs, they should be there in another 6 months or so, as Linode believes in quality over just shipping the damn thing - see what they did with the network routers and cores.


I've been thinking about this more and more. I don't think RAM is going to be step 3/3 in this series. The first step was "the network" and this step is clearly "the hardware" (which I assume would include RAM given that it includes HDD and CPU).

On top of this, they mentioned before this process started that they had something new coming that doesn't require you to have a Linode on your account.

Edit: though I certainly agree it has been long enough since the last RAM upgrade, and I would love to see one soon.


The third step would be "software", maybe an upgrade to the features of the Linode dashboard.


I wonder if they considered allowing 4 cores, but double the CPU allotment? 4x1000MHz CPU instead of 8x500MHz CPU, for example.

8 cores does give you the highest best-case performance, since you can use 100% of each core if the other Linodes on your host are not using them.

They show the 1024 Linode with 8 CPUs as an example. But they also state that there are on average 20 1024 Linodes on each host machine, making 20x8 = 160 vCPUs being handed out.

Marketing-wise, more cores sounds impressive. But I wonder if performance per physical host is reduced by splitting it into so many vCPUs.
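Rough math, assuming the dual E5-2670 hosts quoted elsewhere in the thread (2 sockets x 8 cores = 16 physical cores):

    echo "$((20 * 8)) vCPUs handed out"   # 20 Linodes x 8 vCPUs
    echo "$((160 / 16)) vCPUs per core"   # on 16 physical cores, a 10:1 ratio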


The E5 came out last March, is based on the previous Sandy Bridge architecture, and is hardly new. It was also practically a year late, launching at the same time as the Ivy Bridge-based E3 v2s. The 55xx series launched in Q1 2009 and the 56xx series in Q1 2010, so their existing hardware is 4 years old at this point. Yes, this is a major improvement, but in reality they are simply catching up to last year's hardware, not exactly being innovative. The claim that their average hardware will be less than a year old when the upgrade is complete comes off as deceitful PR spin. The E5s are already a year old now, so they must be basing the age of the hardware on when it was purchased, not when it was released to market.

The Ivy Bridge-based E5 v2 is also coming out in Q3 this year, supposedly with 12-core models, so this seems like a somewhat poorly timed upgrade on Linode's part. They should have either upgraded to the E5 v1s much sooner or just waited it out for the E5 v2s.


Exactly my thoughts. The Ivy Bridge E5 is just so much better in power/performance. I don't understand why they need to upgrade now, not earlier or later.


I must admit Linode I/O is pretty bad:

    :~# dd if=/dev/zero of=test bs=64k count=3k oflag=dsync && rm test
    3072+0 records in
    3072+0 records out
    201326592 bytes (201 MB) copied, 41.3478 s, 4.9 MB/s

This is a 512MB Linode, but still...

I hope they fix this with the new hard drives
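For anyone wanting to reproduce this, here's the same test repeated a few times (oflag=dsync forces synchronous writes, so the page cache doesn't flatter the numbers):

    # 3072 x 64KB = ~201 MB of synchronous writes, three runs
    for i in 1 2 3; do
        dd if=/dev/zero of=test bs=64k count=3k oflag=dsync 2>&1 | tail -n1
    done
    rm -f test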


    London 512:
    201326592 bytes (201 MB) copied, 4.93267 s, 40.8 MB/s
    201326592 bytes (201 MB) copied, 5.89248 s, 34.2 MB/s
    201326592 bytes (201 MB) copied, 5.97411 s, 33.7 MB/s

    London 2048:
    201326592 bytes (201 MB) copied, 4.50047 s, 44.7 MB/s
    201326592 bytes (201 MB) copied, 2.90073 s, 69.4 MB/s
    201326592 bytes (201 MB) copied, 3.30499 s, 60.9 MB/s

    another London 2048:
    201326592 bytes (201 MB) copied, 3.16025 s, 63.7 MB/s
    201326592 bytes (201 MB) copied, 2.86327 s, 70.3 MB/s
    201326592 bytes (201 MB) copied, 4.0267 s, 50.0 MB/s

    Newark 4096:
    201326592 bytes (201 MB) copied, 3.19571 s, 63.0 MB/s
    201326592 bytes (201 MB) copied, 4.17497 s, 48.2 MB/s
    201326592 bytes (201 MB) copied, 4.60173 s, 43.8 MB/s

    Atlanta 4096:
    201326592 bytes (201 MB) copied, 2.19477 s, 91.7 MB/s
    201326592 bytes (201 MB) copied, 2.63818 s, 76.3 MB/s
    201326592 bytes (201 MB) copied, 3.04094 s, 66.2 MB/s

    Dallas 2048:
    201326592 bytes (201 MB) copied, 2.65925 s, 75.7 MB/s
    201326592 bytes (201 MB) copied, 3.11197 s, 64.7 MB/s
    201326592 bytes (201 MB) copied, 3.45745 s, 58.2 MB/s


Another 512 data point (London DC, all 5 tests run within a ~30 second window):

    201326592 bytes (201 MB) copied, 2.61587 s, 77.0 MB/s
    201326592 bytes (201 MB) copied, 3.06237 s, 65.7 MB/s
    201326592 bytes (201 MB) copied, 5.70759 s, 35.3 MB/s
    201326592 bytes (201 MB) copied, 6.24756 s, 32.2 MB/s
    201326592 bytes (201 MB) copied, 3.98084 s, 50.6 MB/s

EDIT:

For comparison, here's the same test on a 4GB DO instance (Amsterdam DC):

    201326592 bytes (201 MB) copied, 6.16116 s, 32.7 MB/s
    201326592 bytes (201 MB) copied, 5.08132 s, 39.6 MB/s
    201326592 bytes (201 MB) copied, 7.73071 s, 26.0 MB/s
    201326592 bytes (201 MB) copied, 7.55657 s, 26.6 MB/s
    201326592 bytes (201 MB) copied, 7.9936 s, 25.2 MB/s


Linode512:

    201326592 bytes (201 MB) copied, 3.11716 s, 64.6 MB/s
    201326592 bytes (201 MB) copied, 2.70099 s, 74.5 MB/s
    201326592 bytes (201 MB) copied, 3.28993 s, 61.2 MB/s
    201326592 bytes (201 MB) copied, 2.98622 s, 67.4 MB/s
    201326592 bytes (201 MB) copied, 2.93612 s, 68.6 MB/s


Tried with 4 tests (512MB instance) and I got quite different results (though nothing like 4.9 MB/s):

    201326592 bytes (201 MB) copied, 4.08309 s, 49.3 MB/s
    201326592 bytes (201 MB) copied, 5.22498 s, 38.5 MB/s
    201326592 bytes (201 MB) copied, 5.85255 s, 34.4 MB/s
    201326592 bytes (201 MB) copied, 2.94301 s, 68.4 MB/s


Here's what I got:

    3072+0 records in
    3072+0 records out
    201326592 bytes (201 MB) copied, 5.52118 s, 36.5 MB/s


That's pretty slow. I get this:

    3072+0 records in
    3072+0 records out
    201326592 bytes (201 MB) copied, 2.31272 s, 87.1 MB/s


On my 512MB Linode, repeating this command 10 times:

74.5, 70.5, 55.4, 56.1, 84.8, 75.1, 73.7, 58.0, 67.5, 61.8

Average of 67.7 MB/s

Dallas


I got this:

  201326592 bytes (201 MB) copied, 3.05352 s, 65.9 MB/s
Atlanta, GA


another datapoint:

Linode 512: 3.96647 s, 50.8 MB/s

DigitalOcean 512: 5.87315 s, 34.3 MB/s

(though the digitalocean one feels much snappier)


Wow, what DC is your Linode in?


London


"We’re investing millions to make your Linodes faster. Crazy faster. "

Skeptical. I'm wondering if this is just advertising hyperbole because Linode doesn't seem to be large enough (and there is no evidence to indicate any "funding") to be able to "invest millions".

They operate out of a suite in an office park outside Atlantic City NJ.

(I think it's a great company, by the way; I just don't think they are investing millions. It doesn't make any sense given what I know about them.)


Linode has over 45,000 customers (as of 2011), paying a bare minimum of $20 USD/month. Realistically they're probably bigger than that now, and the average $/customer is higher. I'd say they're pretty good sized.

http://www.prweb.com/releases/2011/8/prweb8739680.htm


"Linode has grown to over 45,000 customers"

I'd like to point out, as someone who has made the Inc 500 list and knows the process very well, that there is no vetting of the numbers you give them. Back when I did it you simply needed a letterhead from your accountant, and Inc went with whatever you said. (Might have changed, but that's the way it was.) There is obviously no audit. And there is no question that people fudge to get on the list in various ways, because it's a good marketing tool.

Linode is not a public company releasing information. Once again, I'm not saying the info isn't correct or that they don't have that many customers. I'm simply pointing out that a press release saying they have 45,000 customers (along with the math you're using to assume $20/customer) isn't necessarily correct.


Linode has been around since 2003 and only made the list for the first time in 2011, which happens to be right after several years of massive growth in the industry. It seems quite plausible that they would legitimately make the Inc 500 list.

Another way to approximate their number of customers would be by IP address assignments:

http://bgp.he.net/search?search[search]=linode&commit=Se...

I count 176,640 IPv4 addresses, and that's only direct allocations - their older customers are still using IP addresses owned by Linode's datacenter vendors (SoftLayer, HE, etc.).

If what you say is true, I have no doubt that some companies fake it to get on the Inc 500 list, but given the facts about Linode it seems fairly unlikely that they faked it, and quite possible that they could be spending millions on infrastructure upgrades.


They do $23 million in revenue - http://www.inc.com/profile/linode


How far does a million dollars get you when you're buying redundant Cisco equipment for 6 datacenters?


Far, if it's all leases acquired with cheap money.

Who buys equipment upfront for cash at that scale?


In any event, they're deploying obsolete gear. The Nexus 7000 and Nexus 5000 switches they just deployed are basically end of life.


End of life does not equal non-functioning. Companies continue to use end of life gear in production environments (Cisco Pix anyone?).


Pfff. Try harder, Linode.

   # grep MHz /proc/cpuinfo
   cpu MHz		: 3922519.116
   cpu MHz		: 3922519.116
   cpu MHz		: 3922519.116
   cpu MHz		: 3922519.116
That's on a Rackspace small instance. 4000 GHz, baby!


That's only 4 cores. I'd rather have the cores.


They could flip a switch tonight and tell you you've got 128 cores. That doesn't mean those cores are going to do you any good.

Linode is the only VPS provider I've seen that gives a gross number of cores to everyone. Instead of allocating a CPU count correlated with plan tiering, they just give everyone a lot of CPUs (8), and then use tiers to determine priority: who gets to actually use those cores.

An entry-level plan has 1/16th the CPU priority of a top-tier user, pays 1/16th as much, and has 1/16th as much RAM. If you're not RAM bound, for a somewhat loaded system you'll see slightly better than linear scaling as you move up in tiers (since you aren't paying for as many context swaps).

I think it's a really bad policy to give entry-level users scheduling on so many cores: their workloads will never stick in a local cache! Their jobs are going to be bouncing around the system, getting evicted from everywhere and bouncing to the next place whose cache they can trash next. This is a marketing gimmick, an "our number is higher" trick, played on people like you, jedbblue!

Some time in the past: the Pentium 4 NetBurst core was going to hit 4GHz, then 5GHz. The MHz wars were on; MHz sold, they were what people saw first and foremost, and it's more the fact that the insanely deeply pipelined core didn't scale that scrapped the idea than people wising up. I'm not sure if Linode's scheme here is as bad as all that, but it seems unnatural to me, and I certainly would not consider 8 cores a boon in this circumstance.

Edit- this does change things a good bit! A quote from elsewhere in this thread I was not aware of-

  We limit the number of Linodes placed on each host 
  machine. We also only place one plan type on each host. In 
  the worst-case scenario, you're splitting CPU time evenly 
  with your fellow Linoders, but are still able to use the 
  full potential of the host if others are idle.
This is a little different circumstance than what I'd guessed at above as you're not going to be outclassed by higher tiers, you just have to play with peers, and pay to have less of them.


Yup, I've tried many over the years and Linode is the best I've found. :)


Does anyone know the clock speed of each virtual core? Isn't that relevant and important information?

For instance, if the previous virtual cores were all 1GHz and the new cores are all 500MHz, doubling the core count doesn't actually gain you anything.

I was critical of the discussion on Linode's last NextGen post, because I didn't think the upgrade was impressive compared to others, but this one is a nice bump. Memory is probably what most people will care about more, though. So maybe the next refresh is the memory?
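For what it's worth, the clock speed is easy to check from inside the VM, as others do below:

    grep -m1 "model name" /proc/cpuinfo   # CPU model and rated clock
    grep -m1 "cpu MHz" /proc/cpuinfo      # clock as the guest currently sees it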


Just rebooted and ran cat /proc/cpuinfo, and it's 2.27GHz. The last of the 8 CPUs is:

    processor       : 7
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 26
    model name      : Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz
    stepping        : 5
    microcode       : 0x11
    cpu MHz         : 2266.746
    cache size      : 8192 KB
    physical id     : 0
    siblings        : 8
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 1
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 11
    wp              : yes
    flags           : fpu de tsc msr pae cx8 cmov pat clflush mmx fxsr sse sse2 ss ht nx constant_tsc nonstop_tsc pni ssse3 sse4_1 sse4_2 popcnt hypervisor
    bogomips        : 4533.49
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 40 bits physical, 48 bits virtual
    power management:


I just rebooted one of my Linodes, and I got this:

    processor       : 7
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 45
    model name      : Intel(R) Xeon(R) CPU E5-2650L 0 @ 1.80GHz
    stepping        : 7
    microcode       : 0x70a
    cpu MHz         : 1800.059
    cache size      : 20480 KB
    physical id     : 0
    siblings        : 8
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 13
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 13
    wp              : yes
    flags           : fpu de tsc msr pae cx8 cmov pat clflush mmx fxsr sse sse2 ss ht nx constant_tsc nonstop_tsc pni pclmulqdq ssse3 sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes hypervisor ida arat epb pln pts dtherm
    bogomips        : 3600.11
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 46 bits physical, 48 bits virtual
    power management:
Edit: I should perhaps mention that this is a 512MB Linode, on the off chance that it makes a difference.


I rebooted mine as well and still got the 2.27GHz CPU, so I guess they probably haven't gotten to my server yet.


That is the old 4-core CPU they have been using. We will probably have to wait some weeks before this all takes effect.


Pretty sure just reading the post answers this, "Linodes will start landing on NextGen hardware in the next week or so".


The post is a bit confusing, though, because well before you get to the line that you quoted, the article states:

"And we’re upgrading all Linodes to 8 cores! Right now. As in all you need to do is reboot to double the computing power of your Linode. By the time the host refresh is completed the average Linode will be running on hardware that is less than 1 year old."

I, for one, thought that it was ready now after reading that. In fact, from the comments, it's clear that for some customers, this is in fact ready:

"BizzarTech: Rebooted my 1024 and moar cores!!! Thank you!!"


No, the move to new hardware and the increase in the number of available cores are separate. If you reboot right now you'll get access to more of the 'cores' on the host system (though I suspect they're really just "threads" in Intel terminology), but you're still running on the older L5520 hardware.


I rebooted and confirmed; indeed the older L5520 is still the processor being used on my particular Linode:

<snip>

    processor       : 7
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 26
    model name      : Intel(R) Xeon(R) CPU           L5520  @ 2.27GHz
    stepping        : 5
    microcode       : 0x11
    cpu MHz         : 2266.746
    cache size      : 8192 KB
    physical id     : 0
    siblings        : 8
</snip>


This is from a plan that is less than a week old:

  processor	: 7
  vendor_id	: GenuineIntel
  cpu family	: 6
  model		: 44
  model name	: Intel(R) Xeon(R) CPU           L5630  @ 2.13GHz
  stepping	: 2
  microcode	: 0x15
  cpu MHz		: 2133.460
  cache size	: 12288 KB
  physical id	: 0
  siblings	: 8


> Powering our NextGen hosts are two Intel Sandy Bridge E5-2670 processors. The E5-2670 is at the high end of the power-price-performance ratio. Each E5-2670 enjoys 20 MB of cache and has 8 cores running at 2.6 Ghz.

http://www.linode.com/faq.cfm#how-do-i-get-my-fair-share-of-...

> We limit the number of Linodes placed on each host machine. We also only place one plan type on each host. In the worst-case scenario, you're splitting CPU time evenly with your fellow Linoders, but are still able to use the full potential of the host if others are idle.


> So maybe next refresh is the memory?

Probably - according to their FAQ it looks like each host has about 20GB of memory, which is pretty small these days.


Hmm. Just rebooted and I'm on 8 cores but an L5520. Does this mean the CPU upgrade is skipping my Linode?


> Linodes will start landing on NextGen hardware in the next week or so. Linodes on servers that are being retired will be required to migrate onto newer hardware. For those affected, you’ll receive support tickets with the details and with plenty of lead time (weeks). You’ll also have the opportunity to perform the move early and at your leisure if you prefer.


I wonder if this means an upgrade to security procedures and host operating systems?


A problem with Linode is that they don't charge by the hour.


http://www.linode.com/faq.cfm#how-am-i-billed

"This will issue a pro-rated credit to your account"

It's not per-hour billing as such, but if you spin up Linodes and cancel after X hours/days, you do get the leftover fees prorated as a credit against your account.
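A minimal sketch of how that prorating works, with made-up numbers (the exact formula is Linode's, not this):

    # hypothetical: $20/mo plan cancelled 10 days into a 30-day month
    echo "scale=2; 20 * (30 - 10) / 30" | bc   # ~13.33 credited back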



