As an outsider I would never have anticipated Intel being leapfrogged like that in such a short window of time. As a side effect it probably contributed greatly to the resurrection of AMD (coupled with the fact that with Ryzen they probably produced a good design at the right time). I would like to know if Intel just fumbled, or if TSMC stepped up their game, or, more likely, a little bit of both?
A little bit of both. Intel's 10nm is now officially 2 years late. If they had 7nm now they would still be in the lead, given that their (original) 7nm plans were roughly equivalent to TSMC's 5nm.
TSMC also gained lots of momentum from the smartphone revolution. All of a sudden you have a 1.3B-unit smartphone market, with SoCs, wireless basebands, and all sorts of other components fabbed at TSMC, compared to Intel's 250M-unit PC market. Now of course Intel makes many times higher margins, but considering TSMC has a diverse group of clients utilising its current and old fabs, compared to Intel doing it all by themselves, TSMC now has resources similar to Intel's with which to innovate.
And compared to Intel, which takes big leaps across generations, TSMC's approach was to iterate, so you get 16nm, 16nm+, 10nm, 7nm, 7nm EUV, 5nm EUV, each an iteration of the previous generation.
TSMC manufactures circuits based on customer designs; these could be CPUs, 5G modems, SoCs, GPUs, etcetera. TSMC is like a circuit-printing company. That makes it more difficult to compare TSMC's progress against Intel's. But also, because they print so much (and so diversely) nowadays, they get priority at all the semiconductor equipment companies.
>> Compared with TSMC’s 7nm process, its innovative scaling features deliver 1.8X logic density and 15% speed gain on an ARM® Cortex®-A72 core, along with superior SRAM and analog area reduction enabled by the process architecture.
Since process names have largely lost any meaning I just wanted to see if they compared density to their own previous node. I was not disappointed, though they don't mention the actual SRAM density change.
Don’t conflate “effective feature size” (the 5nm used in this article) with “real” feature size (or, more importantly, pitch), or with transistor size. Transistors are still huge compared to atoms: think hundreds of thousands, or millions, of atoms. The issues at this scale are all electrical. (Bullshit terms like “quantum tunneling” are bandied about; the scientific use isn’t wrong, but any journalist using the term probably is.)
There’s a lot of room down there, below the extended Moore’s law, just not always so CMOS-ey.
It's my personal way of politely saying "marketing bullshit". The real importance of 5nm vs. 7nm is the density of the transistors. A large portion of the recent gains in transistor density have come from reducing the pitch (the spacing between transistors) rather than shrinking the transistors themselves.
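A back-of-envelope sketch of that point: logic density scales roughly with the inverse of (contacted gate pitch × minimum metal pitch), so pitch shrinks alone move density a lot even if the transistor itself barely changes. The pitch numbers below are illustrative assumptions, not TSMC's published figures.

```python
# Toy density model: density ~ 1 / (gate pitch * metal pitch).
# Pitches below are made-up, illustrative values, NOT real TSMC data.

def relative_density(gate_pitch_nm, metal_pitch_nm):
    """Logic density in arbitrary units (inverse of a cell's footprint)."""
    return 1.0 / (gate_pitch_nm * metal_pitch_nm)

n7 = relative_density(57, 40)  # hypothetical "7nm"-class pitches
n5 = relative_density(48, 28)  # hypothetical "5nm"-class pitches

print(f"density gain from pitch shrink alone: {n5 / n7:.2f}x")
```

With these assumed pitches the shrink alone yields roughly 1.7x, in the same ballpark as the 1.8x logic-density claim quoted from the article; the remainder would come from cell-architecture changes rather than pitch.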
That's an ultra-qualified "yes". Hardware isn't magic. Once you've got a Turing-complete device, software can cover the rest. However, HW implementations of functionality are 'better' than software in the sense of being either faster or more power-efficient (or some trade-off). Our chips are better because we can throw more customized hardware at a problem to side-step (slow, power-hungry) software implementations. We have access to more customized HW because we have more transistors.
If you're asking long-run questions in terms of HW/SW stack performance, I strongly suspect we've got another 3-10 doublings with "just more HW". If any of the post-patterning mechanisms pan out we could, theoretically, get another 15-20 doublings by going all the way down to atoms. After that ... I dunno; sort of the realm of scifi at that point.
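The "15-20 doublings down to atoms" figure can be sanity-checked with a rough log2 calculation: density doublings in area scale as the square of the linear shrink. The starting feature scale and atom size below are assumptions for illustration only.

```python
import math

# Rough sanity check on "another 15-20 doublings down to atoms".
# Both numbers below are assumptions, just for a ballpark estimate.
feature_nm = 40.0  # assumed effective device pitch today
atom_nm = 0.2      # a silicon atom is on the order of 0.2 nm across

linear_shrink = feature_nm / atom_nm           # shrink in each dimension
area_doublings = math.log2(linear_shrink ** 2) # density doublings (area)

print(f"~{area_doublings:.0f} density doublings to atomic scale")
```

Under these assumptions you get about 15 doublings, which is why the atomic limit lands in roughly the range the comment quotes; tweaking the assumed starting pitch moves it a few doublings either way.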
Thanks, that all makes good sense. I was curious about "post-patterning mechanisms"? Is this a new area in fab process technology? Might you have some links? Another 15-20 doublings from where we are today would be pretty awesome.
Patterning is the use of multiple masks to get fine features smaller than the wavelength of light you're using, where in previous, larger-feature nodes one mask sufficed: https://en.wikipedia.org/wiki/Multiple_patterning
It's very expensive in terms of tooling, since you need a lot more masks, and in production time spent in lithography steps. See this comment in this discussion https://news.ycombinator.com/item?id=19570724 for a bit more.
Interestingly, the semiconductor industry has thought it was close to its theoretical limits since feature size was measured in microns 40 years ago. Back then, one of the big worries was that we were approaching the wavelength of light used in the photolithography process and that there was no clear path to getting below 1000 nm. I expect that it will always be difficult to see beyond a few years of innovation in this area. It certainly does seem that we will soon need to switch to other materials and forms of signal transport though.
- TSMC are the only company currently shipping real products at 7nm, and 5nm is a full generation beyond that - nobody else is anywhere close to 5nm.
- Samsung will have a 7nm later this year that is broadly equivalent to TSMC's current 7nm (data point: the Galaxy S10 ships with either the Exynos 9820 on Samsung's "8nm" (enhanced/rebranded 10nm) process, or the broadly-similar-performance-with-better-power-efficiency Snapdragon 855 on TSMC 7nm)
- Intel will have a 10nm later this year that is broadly equivalent to TSMC's current 7nm. They are still shipping 14nm as their leading node.
- Global Foundries have stopped further investment beyond 14nm.
>- Intel will have a 10nm later this year that is broadly equivalent to TSMC's current 7nm. They are still shipping 14nm as their leading node.
Just to add, by that time TSMC will have an improved 7nm based on EUV.
I wonder what happens after TSMC 3nm, that is, roughly 2022/2023. I am pretty sure we can do 2nm, but without another market expansion to further spread the cost per unit, I wonder who will be able to afford these leading nodes, with every generation being much more expensive than the previous one. Smartphone unit shipments are not growing; in fact, leading-node mobile SoC volumes are likely shrinking on a YoY basis due to slower replacement cycles.
We surely haven't reached the technical limit of semiconductors, but it looks to me like we have reached the market/economic limit.
>Equipment sales were actually going down for close to 3 years now. We are up for long winter in the industry.
I have heard some say this. Would it really be "winter", though? From an equipment manufacturer's perspective, I guess yes. For the industry as a whole, I guess it is just a longer cycle, with more cost reduction from a technology standpoint.
>A poorly held secret in the industry is that fabs want to use EUV not to make <7nm, but to make economical <40nm litho with single exposure.
I guess that is still many years out? Considering all the ASML EUV units are fully booked till 2021, and with increasing use of EUV by Intel, I guess that is likely to continue till 2023/2024.
It will be quite some time before we can make super-cheap 20nm components.
> I have heard some say this. Would it really be "winter", though?
Yes, serious economists hired by fab companies almost all think so. The fab ecosystem can't create demand by itself; it relies on clients selling new fancy things, and there are no new fancy things on the horizon, and even "megaclients" are scaling down new orders.
The one overt way fabs can stimulate consumption is by inventing ground-breaking new concepts and then giving them away, as with chip cameras (smartphones, optical mice), RF integration (think of every SoC with wireless today), MEMS, power on silicon...
Even on that front, there is little new coming.
> I guess that is still many years out? Considering all the ASML EUV units are fully booked till 2021, and with increasing use of EUV by Intel, I guess that is likely to continue till 2023/2024.
A single exposure on a planar EUV process can replace 10+ multi-patterning exposures. So even with dramatically lower throughput, it can slash process times and shoot up yields.
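The yield argument compounds quickly: if you treat each exposure as an independent step that succeeds with some probability, a feature needing n exposures survives with that probability to the nth power. A toy model with an assumed per-step yield (not real fab data) shows the gap.

```python
# Toy model, purely illustrative: assume each lithography exposure
# independently succeeds with probability per_step_yield; a feature
# built from n exposures then survives with probability y**n.

def pattern_yield(per_step_yield, n_exposures):
    return per_step_yield ** n_exposures

multi = pattern_yield(0.98, 10)  # 10-exposure multi-patterning flow
euv = pattern_yield(0.98, 1)     # the same feature in one EUV exposure

print(f"multi-patterning survival: {multi:.3f}")
print(f"single-exposure survival:  {euv:.3f}")
```

With an assumed 98% per-step yield, ten stacked exposures drop survival to roughly 82%, while the single exposure keeps the full 98%; real processes are messier, but this is the shape of why collapsing exposures helps yield as well as throughput.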
While smart cars are still nowhere near the horizon, they already contain all kinds of chips and 5G is just being laid out, so ... talk about mobile ;)
100M cars are sold every year, and the majority of them (until AVs arrive) don't require any more silicon than a smartphone. The 5G hype will hopefully move the market to a slightly faster replacement cycle, although I doubt it will happen.
The only possible lift for the market is SEA and India making major leaps and bounds in their economic growth. But that is unlikely to happen either.
At the current pace, Intel will have 10nm ready for a Q4 2022 release, excluding processors that would rather melt through the mainboard than run demanding tasks (or that double as space heaters).
The thing is, by 2022 they will likely have 7nm, which is a totally separate team and effort, and it's much more likely to succeed thanks to EUV. I have even seen late-2021 estimates for that, and it's not entirely impossible they can shave a few months off, arriving at a point where 10nm will have 2020 to itself and that's it. An effort that took more than seven years, and it's all going to waste.
You could still cast doubt on that, even if it is a completely separate team.
Without knowing the causes for why Intel are failing to deliver 10nm we can't say if it's an Intel problem, or a 10nm team problem.
And even then, would they not be building on the 10nm process to develop the 7nm process to some extent? Otherwise you're reinventing the wheel twice, and may as well jump straight to 5/3/2nm processes.
There have been a bunch of rumors leaking out of Intel on the reasons. The biggest one was apparently that the Contact Over Active Gate just didn't work. That was used extensively in the GPU section of the die but not the CPU part which is why the 10nm chips released had working CPUs but no graphics. They've cut that feature from the process for the new version of 10nm which should help.
The other big rumored problem they were having was with cobalt interconnects and I have no idea if they've made it work or abandoned it or what.
Of course we don't know for sure, but there's a little precedent.
While the Pentium 4 team was churning out ever more power-hungry space heaters, a separate team came up with Pentium M. It became the basis for the Core architecture that gave Intel a 10-year monopoly.
If anyone knows that having separate teams reinvent their own wheels can have a massive payoff, it's Intel.
Who would have an advantage building a 5nm processor: Intel, or some random company that doesn't build processors?
There is institutional knowledge built up over every new generation. Now, 7nm could be different from 10nm, but they've learned transferable knowledge from that effort, even if it's just how not to do it.
> Global Foundries have stopped further investment beyond 14nm.
That's an interesting approach, since they'll be able to refine the 14nm node itself, much like Intel has been doing for some time. That leaves them in a pretty good position, as the finer nodes (10nm, 7nm, etc.) will be inherently limited for a long time due to their unfavorable NRE costs; use will be possible only for the highest-profile products with the broadest feasible markets.
Unless NVidia skips 7nm altogether, AMD is more likely to take advantage since AMD is already releasing Navi on 7nm later this year.
It also remains to be seen what the yields are for 5nm. It might be the case that only small area chips like cell phone SoCs and laptop CPUs can be manufactured for a long time, as was the case with 12nm and 7nm. GPUs typically need as much area as they can get to fit more compute cores.
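The area-vs-yield point above can be made concrete with the classic Poisson die-yield model, Y = exp(-D0 * A), where D0 is defect density and A is die area. The defect density below is an assumed number for an immature node, not TSMC data.

```python
import math

# Classic Poisson die-yield model: yield = exp(-D0 * A), with D0 the
# defect density (defects per cm^2) and A the die area (cm^2).
# D0 below is an illustrative assumption, not a real 5nm figure.

def die_yield(defect_density, area_cm2):
    return math.exp(-defect_density * area_cm2)

d0 = 0.5  # assumed defects/cm^2 on a young node
print(f"small phone SoC (~1 cm^2): {die_yield(d0, 1.0):.0%}")
print(f"large GPU die  (~6 cm^2): {die_yield(d0, 6.0):.0%}")
```

With the same assumed defect density, the small SoC yields around 61% while the big GPU die drops to roughly 5%, which is exactly why small-area phone SoCs and laptop CPUs get a new node first and GPUs wait for defect density to fall.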