Hacker News
Samsung Foundry Announces 10nm SoC in Mass-Production (anandtech.com)
90 points by desdiv on Oct 17, 2016 | 26 comments



Intel's 14nm process uses 2 fins per gate for an SRAM cell size of 59nm x 59nm [1, corrected]. Samsung's 10nm process uses 3 fins per gate -- unknown what an apples-to-apples SRAM cell size would be on this node.

It will be interesting to see final benchmarks of performance, power consumption and price (PPP).

It's worth noting that Samsung announced 10nm DRAM back in April [2].

[1] http://www.tomshardware.com/reviews/intel-14nm-broadwell-y-c...

[2] https://news.samsung.com/global/samsung-starts-mass-producin...


Note that this is (59nm)^2, not 59nm^2; the latter would be smaller than the feature size of a 14nm process.

Tom's Hardware, of course, gets it wrong.
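The units point is easy to check with a few lines (a quick sketch using the 59 nm figure from the parent comment):

```python
import math

# (59 nm)^2 is a 59 nm x 59 nm cell, i.e. an area of 3481 nm^2.
cell_side_nm = 59
cell_area_nm2 = cell_side_nm ** 2

# "59 nm^2" read literally is an area whose square side is only ~7.7 nm,
# well below the feature size of a 14nm process.
misread_area_nm2 = 59
misread_side_nm = math.sqrt(misread_area_nm2)

print(cell_area_nm2)                # 3481
print(round(misread_side_nm, 2))    # 7.68
```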


Thanks, I updated my post with that correction.


Here is another metric that measures logic density [1]. At the 10nm process, Intel has a 21% advantage over Samsung using these numbers.

[1] https://www.semiwiki.com/forum/content/6160-2016-leading-edg...


Ok, using Intel's currently shipping 14nm numbers vs Samsung's currently shipping 10nm numbers from your source:

• Intel: 13.4nm "standard node value"

• Samsung: 12.0nm "standard node value"

• TSMC: 18.3nm "standard node value"

• Global Foundries: 16.6nm "standard node value"
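Assuming (my assumption, not something the semiwiki source states) that logic density scales as the inverse square of the "standard node value", the listed numbers work out to roughly:

```python
# Relative logic density vs. Intel, assuming density ~ 1 / (standard node value)^2.
# Node values are the "standard node value" figures quoted above.
baseline = 13.4  # Intel

nodes = {
    "Intel": 13.4,
    "Samsung": 12.0,
    "TSMC": 18.3,
    "Global Foundries": 16.6,
}

for name, node in nodes.items():
    rel_density = (baseline / node) ** 2
    print(f"{name}: {rel_density:.2f}x Intel's density")
# Samsung comes out ~1.25x, TSMC ~0.54x, Global Foundries ~0.65x under this assumption.
```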


The question is moot for the mobile market, since Intel doesn't fab a competitor to the Snapdragon 400/600/800 in the same power/performance categories.


AT should really be pointing this out. Samsung's announcement by itself is not much more than marketing propaganda.


It's like GHz. Pretty soon people will figure out that this number is not useful to compare different foundries, or even members of different families from the same company. You just have to test what it can do.


The company has an exclusive deal with Qualcomm to manufacture its Snapdragon 830 processors using the 10nm process, according to http://english.etnews.com/20161005200001


Being so fast (at such a complex node), it makes me wonder: was Moore's law, i.e. a doubling every 18 months, just an anti-competitive collaboration among the industry to slow down? Could they have improved the technology much faster?
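For reference, the cadence in question works out as follows (a quick sketch; 18 months is the figure quoted above):

```python
# Multiplicative growth in transistor count under "doubling every 18 months".
def moore_factor(months, doubling_period=18):
    """Growth factor after `months` months at the given doubling period."""
    return 2 ** (months / doubling_period)

print(moore_factor(18))               # 2.0  -> one doubling
print(moore_factor(36))               # 4.0
print(round(moore_factor(120), 1))    # ~101.6x over a decade
```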


It doesn't have to be an anti-competitive conspiracy; milking current technologies for all they've got makes sense in a lot of ways. Making your previous generation profitable by delaying your next generation a little means your business can last in the long term. Releasing things ASAP in every instance means, in many circumstances, competing with yourself.

This is a cost of capitalism, but it comes with the benefits of a self-sustaining system and more predictable futures. If politicians were funding processors, you can bet they wouldn't want to spend the resources to double processor parameters every 18 months.


If politicians were funding a processor, we would have processor operator licenses that every computer user would need to legally use a computer, and we would have a 12cm process with processors connected via landline to the Ministry of Truth. There would also be half processors, free for the poor.


Wait, so would half processors have gates half as wide, or just be half as fast?


I think that would be literally half of the processor ;)


If they could spend it in their district, it's a safe bet they would...


Considering that a lot of current semiconductor demand comes from games, politicians would be hard pressed to come up with that channel of funding in the first place.

Heck, I probably wouldn't think to tax games to fund chip foundries.


AMD has lost so much market share, there's no way they would collaborate with Intel.

If AMD had tech that Intel didn't, they'd use it.


Meh, I'll wait for Zen before I draw conclusions.


When Moore made his famous observation, there were many competitors in the chipmaking industry. Do you have a single sign from that era of anti-competitive behavior?


Not to support OP's point, but there is a decent example: Intel, for a while, kept AMD in the game and licensed them x86 simply to avoid anti-trust laws (basically, Microsoft had just gotten shafted, so Intel wanted to be able to point to AMD and say "we have competition"). IIRC they later actually invested in AMD to keep them solvent for the same reasons.


Not quite what happened: Intel itself got stuck in several anti-trust quagmires because of its attempts to rescind the x86 license that IBM forced Intel to grant AMD when the 8088 was selected for the original IBM PC.

Intel did everything it could to avoid keeping AMD in the game, including paying Dell off to not use AMD chips.

I don't recall ever hearing about Intel investing in AMD; maybe you're thinking of the $1.25 billion settlement Intel had to pay AMD over various lawsuits Intel was in danger of losing, with the potential of having to pay triple damages.


Thanks for the correction :)


It's not just anti-trust. The .mil is a lot happier if there are multiple sources for chips.


I think you may be confusing the Microsoft aspect you mention with the time in the '90s, IIRC, when Microsoft invested in Apple to prop up competition. AMD is not, and never has been, competition to Microsoft.


Calling it 10nm or 14nm or 22nm is just marketing. Every manufacturer can call their manufacturing process whatever they want. If you actually look at the physical metrics like minimum gate length, contacted gate pitch, and minimum metal pitch, then you will see that on the same "nm" process Intel's process is always better.
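One common way to turn those physical metrics into a density comparison is a standard-cell footprint proxy (a sketch with placeholder pitch values, not published figures for any foundry):

```python
# Standard-cell footprint proxy: cell width ~ gates x contacted poly pitch (CPP),
# cell height ~ tracks x minimum metal pitch (MMP). Pitches below are
# illustrative placeholders, not real process data.
def cell_area_um2(cpp_nm, mmp_nm, tracks=9, gates=10):
    """Approximate area of a 10-gate, 9-track standard cell in um^2."""
    return (gates * cpp_nm) * (tracks * mmp_nm) / 1e6

area_x = cell_area_um2(cpp_nm=70, mmp_nm=52)   # hypothetical process X
area_y = cell_area_um2(cpp_nm=64, mmp_nm=48)   # hypothetical process Y

# Smaller cell area means more cells per mm^2.
print(f"process Y fits {area_x / area_y:.2f}x as many cells per mm^2")  # 1.18x
```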


The power/performance advantage holds up. First or exclusive use of new chips might aid them in offsetting the fallout from the Note 7 disaster.



