Intel, TSMC and other chipmakers weigh extreme ultraviolet lithography (ieee.org)
99 points by Lind5 on Oct 31, 2016 | 51 comments



ASML has been saying "we'll have EUV by next year!" for a whole bunch of years now. Most people around the company believe they'll pull it off, but maybe they won't. ASML has a culture of selling machines they can't produce (and then working like maniacs, 2000 engineers at the same time, to pull it off anyway before the delivery date 6 months from sell-date, and then shipping a half-working machine along with 5 engineers plus a remote force of 1000 engineers trying to get it to actually meet its requirements while it's being built up in the customer's fab).

The sales folks don't care - they just sell. History has shown that time and time again, if they promise to deliver, they do - through sheer force of will and money (turns out the Mythical Man Month isn't so mythical if you're really willing to put 30x the amount of cost/people on it as is technically necessary). So when the researchers said they thought they could pull EUV off, the sales guys went ahead and sold it.

And the customers bought it, as you can read in this article. But if ASML can't pull it off, then we'll see a bunch of really interesting changes in the semicon landscape I think. ASML isn't the only company who made the full-on bet on EUV. If ASML can't deliver, then Intel may have an even bigger problem than ASML. I'm really curious what would happen then, almost to the point of hoping EUV will catastrophically fail.

ASML has no believable competition in this space.


First paragraph reminded me of most software project stories in the old days. Fun.

There are a bunch of other roads that have been popping up recently IIUC.


The "light source" is a very hard problem. "Extreme ultraviolet" is really "soft X-rays." Until recently, it took a synchrotron to generate those. Now there's a complicated scheme where a laser vaporizes droplets of tin (at 200,000° C) and the plasma emits soft X-rays.[1] It's amazing that's usable in a manufacturing process at all. It's more like a physics experiment intended to run for short periods.

I had hope for e-beam lithography, which works fine and has been able to get down to similar resolutions for years, but is just too slow for production. No masks, just writing the wafer with a scanning beam under computer control. Writing is one pixel at a time, which is why it's slow.

[1] http://spie.org/newsroom/4493-making-extreme-uv-light-source...
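To put a number on "one pixel at a time is slow", here's a rough back-of-the-envelope sketch in Python. The pixel pitch and beam rate are assumed round numbers for illustration, not real tool specs:

    # Why single-beam e-beam direct write is slow: count the pixels on a wafer.
    # The pitch and pixel rate below are illustrative assumptions, not tool specs.
    import math

    wafer_diameter_mm = 300      # standard production wafer
    pixel_pitch_nm = 10          # assumed write grid for a ~10 nm-class process
    pixel_rate_hz = 1e9          # assumed (optimistic) 1 GHz beam blanking rate

    radius_nm = wafer_diameter_mm / 2 * 1e6
    pixels = math.pi * radius_nm ** 2 / pixel_pitch_nm ** 2
    hours_per_wafer = pixels / pixel_rate_hz / 3600

    print(f"pixels per wafer: {pixels:.1e}")           # ~7e14
    print(f"hours per wafer:  {hours_per_wafer:.0f}")  # ~200

Roughly 200 hours per wafer even at an optimistic 1 GHz pixel rate, versus on the order of a hundred wafers per hour for a production optical scanner. That gap is what the multi-beam schemes mentioned below are trying to close.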


Indeed, the difficulty of getting a reliable EUV source with enough power is arguably the reason this technology has been delayed so much. Cymer, the company that was developing the source, was having so much trouble getting it to work that ASML ended up simply buying them so it could focus all its resources on that one problem.

A fall-back scenario, should the tin-vaporizing method fail to deliver, was the use of a Free Electron Laser, which can produce a huge range of wavelengths directly. However, since compact FELs are not really feasible, this would mean a single source for an entire fab, with complex infrastructure required to distribute the EUV light to multiple scanners. Far from ideal.


individually tracked, double-pulsed droplets of tin...

I know the article you linked lists the many issues that arise, but to even try such a technique- I think it demonstrates just how hard the problem of further photolithography improvement is.


It also illustrates how desperate the industry is for another tick on the clock and how much money stands to be made for the company that cracks this in an economically viable way.


There have been recent advances in multi-column e-beam that might fix some of the throughput issues. I mostly see it used on the quality-control side for faster SEM imaging, but in a decade or two it might make its way into production.

http://spie.org/newsroom/4609-multiple-electron-beam-direct-...


I wonder what happened to that. Encouraging news in 2012-2013, then nothing. Lots of interest in E-beam for mask making, but not direct-writing ICs.


> "I had hope for e-beam lithography, which works fine and has been able to get down to similar resolutions for years, but is just too slow for production. No masks, just writing the wafer with a scanning beam under computer control. Writing is one pixel at a time, which is why it's slow."

Couldn't this process be sped up by multiplying the number of scanning beams in operation at any one time?


That would essentially mean more machines, so you are not improving the (capital expense)/(time per unit) ratio. Additionally, electron-beam optics are magnets, so driving multiple beams in the same space would be complex.


> "Additionally electron beam optics are magnets, so driving multiple beams in the same space would be complex."

Could you not use a Halbach array?

https://en.m.wikipedia.org/wiki/Halbach_array

Furthermore, I had in mind that you'd have synchronisation between the beams. Consider the use case of one e-beam per chip. As each wafer has multiple chips you would still have plenty of room for parallel beams working at any one time.

As for cost, do you have a ballpark figure for how much the devices cost?


How much of that 100 million euro per scanner is for the light source? Low-energy synchrotron sources can't be too far away from being competitive as an EUV source.


Some googling indicates that synchrotrons are not as desirable an EUV source as I had thought. This article from ~2000 lays it out:

http://digbib.ubka.uni-karlsruhe.de/volltexte/fzk/6606/6606....

Unsurprisingly, synchrotron sources produce too much out-of-band radiation, so the heat load on the optics is high, which is not so good for fragile EUV optics that need to last a long time.

As background: EUV lithography requires high power in a narrow bandpass. 250 watts of power at 13.5 nm (92 eV) with a <2% bandpass is a really tough task when you think about it. Synchrotrons fundamentally produce full-spectrum radiation, and the parameters of the storage ring are varied to get the desired energy distribution. Typically you then throw away all the photons other than the ones you are interested in (turning the waste photons into heat), and the inherent brightness of the synchrotron source still leaves you lots of photons in your bandpass of interest. Heat loads on front-end optics on insertion-device beamlines at high-energy storage rings like SPring-8 are hundreds of watts per square mm; cf. the power dissipation of your CPU, which is <100 watts spread over a die of a few hundred square mm.

See also: https://pure.tue.nl/ws/files/3872978/banine2014.pdf
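If you want to sanity-check the 13.5 nm / 92 eV figure and see what a 250 W in-band source implies, it falls out of a couple of lines of Python:

    # Photon energy at 13.5 nm and the photon flux a 250 W in-band source implies.
    h = 6.626e-34      # Planck constant, J*s
    c = 2.998e8        # speed of light, m/s
    eV = 1.602e-19     # joules per electronvolt

    wavelength_m = 13.5e-9
    photon_energy_J = h * c / wavelength_m

    print(f"photon energy: {photon_energy_J / eV:.1f} eV")       # ~91.9 eV
    print(f"photons/s for 250 W: {250 / photon_energy_J:.1e}")   # ~1.7e19

About 1.7e19 in-band photons per second, delivered into a <2% bandpass, which is why the source has been the sticking point.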


I'm guessing this produces more power than a diagnostic x-ray machine by many orders of magnitude? The article talks about a 250 Watt source needed.


You don't need that many photons for a diagnostic imaging machine since they typically use hard x-rays (20-70 keV) that are very penetrating.

Very soft x-rays/EUV are among the hardest regions of the spectrum to work in, as the photons are just the right energy to be strongly absorbed by most materials. So, as the article states, losses to the optics are significant, i.e. you lose ~half your photons at every mirror.


Man, this is a great follow-on to the article about Intel's business model that floated up to the front page a few weeks ago. It really illustrates how Intel is betting the farm, year after year, on reliable performance increases. Even with how many semiconductors I build into products, I can't help but boggle at the sheer scale of the semiconductor industry's investments. $1.38 billion committed to a research project that's not set to pay off for three years is a rare thing in the private sector.

On a less fanboy note - I'd have loved to see a more detailed description of how they ship the EUV tools. I can only imagine the logistical headache involved for Intel - they spec out all of their fabs to be identical for quality reasons. Shipping nine school-bus-sized, vacuum sealed containers to some far corner of the globe has to be a shitload of work. And then you have to set it up when it finally gets there! In vacuum! And then when you're done, you have to do it again for each Intel fab!

(Fanboy rant over now, I swear, I just get really excited about making all these miniscule things for some reason.)


Intel can afford to, and also has to, make that 'bet' because that is their competitive advantage.

Intel has 'the best' chips because they literally pay the price for being, at least on their headline grabbing products, a full process generation ahead of more or less the entire rest of the semiconductor industry.


Does Intel still have a competitive advantage? It seems like they've hit a wall and competitors like TSMC and Samsung have basically caught up.

I think they've acknowledged that they're slowing down with their new strategy [1].

[1] http://arstechnica.com/information-technology/2016/03/intel-...


I suggest reading the following article: http://www.anandtech.com/show/8367/intels-14nm-technology-in...

The plot 2/3 down the page compares Intel's process with competitors. It's a bit out of date in the sense that it only shows projections instead of the result after-the-fact, but the plot shows that Intel is still ahead.


Intel has been leading the world in semiconductor process technology almost since the beginning. Because they've been the world's largest semiconductor company for a long time, they can afford to keep R&D spending as a major expenditure.


Won't those competitors hit a wall as well? Seems like Intel doesn't need to be fast, just faster than the competition. I imagine the rate of change will slow for the whole industry.


You're right, they are all hitting the wall at the same time. The 7 nm node is going to be a problem node for all of the companies [1]. Popular opinion is that Intel always has enough tricks up its sleeve to effectively be a node ahead (even when on paper it isn't), but I've never been able to find a source on that. But honestly the industry has been hitting this wall for over a decade, and making less progress than it appears on paper (through no fault of their own). Moore's law has been dead for a long time now; "node size" has only a loose relation to the size of any structure on the chip and is really just a marketing term.

[1] http://semiengineering.com/7nm-market-heats-up/


I would suggest reading the following article to see a source for comparing Intel vs competitors: http://www.anandtech.com/show/8367/intels-14nm-technology-in...


If they aren't a generation ahead as they have been then I think it ceases to be a competitive advantage. Then they would have to start asking questions like "Why not fab Intel chips at TSMC or Samsung if they offer better pricing?"

They've already decided they're going to fab ARM chips, so I think they've already come to this conclusion.


Here's the trick: they already do. Only the biggest margin products get fabbed at Intel fabs.


Is there some x86 fabbed outside Intel?


Isn't the new tech that comes out of R&D patented?


Intel is only one of ASML's customers. ASML was founded 32 years ago, and before that it was part of Philips. It has been making the same bet for years and years.

The logistical headache is not Intel's problem but ASML's. Downtime costs around ten thousand euros per hour and can run into the millions. Distribution of spare parts is very important.


You can sort of look at Moore's law opposite the usual way -- as a commitment to make the investment required to make it happen.

Took a long time for physics to start to get in the way.


This isn't quite what you're looking for, but ASML, one of the biggest tool suppliers, has a YouTube channel: https://www.youtube.com/user/ASMLcompany

This video has a little bit of the tool assembly and support systems: https://www.youtube.com/watch?v=ttbaaI5xUcg


Yes! Excellent! Thanks for the link!


In California, drug makers have spent nearly $110M on campaigning against a single proposition. $1.38B spent over many years doesn't sound like much.


The oil and gas industry regularly spends that kind of money. Look into offshore drilling.


What if computer technology now is like space technology in the late 70s? After 40 years of continuous progress, suddenly the rate of improvement in technology completely flatlines and all the future predictions of going to the outer planets and such don't come true.

How will the future be different?


The semiconductor manufacturing processes might be flatlining a bit, but that's not entirely synonymous with "computer technology". One recent example we've already witnessed was in the GPU sector. They (nVidia and AMD) were stuck on 28nm for at least 5 years, and instead of just being able to rely on the "standard improvements" (by way of Moore's law) to get better performance at lower power, they had to rely on architectural improvements. nVidia fared significantly better, managing to double performance-per-watt on the same manufacturing node with Maxwell (GTX 980). Is such a feat easily repeatable? Very unlikely. But it also goes to show that when these companies can rely on improvements in the manufacturing process for their speed/efficiency boosts, they don't necessarily put as much effort into architectural improvements as they otherwise might.

If you haven't heard of the Mill CPU, it's one example of completely rethinking things: http://millcomputing.com/docs/belt/ Of course, the problem there is the missing software/OS toolchain. But it also points towards there being some huge inefficiencies present in "classic" von Neumann architectures, and to me, the possibility that there's still room for things to improve.


The next "total rethink" of GPUs is multiple dies on a single package. Not just for HBM: there's no reason you can't have the dies "hanging off" the interposer (supported by an inactive substrate), only overlapping the interposer in the regions necessary for chip bumps. Essentially each die becomes a large SMX engine or other hierarchical unit that is marshalled by some central memory controller or other controller that presents the illusion of a single GPU rather than SLI/Crossfire.

This is actually the direction AMD is going with Navi, not because of the performance gains per se but because it helps yields big time. You can pre-bin your chips, then stitch a bunch of small chips into medium chips while keeping your yields high.

In theory you can scale up for quite a while. In the long term, however, you will eventually be limited by clock degradation/signal propagation time.

The short-term problem is heat and power. This doesn't help efficiency gains per se. If you are stitching together four 600mm^2 dies then you are going to be pulling 1000-1200W and dumping that back out into your cooling system. For US consumers, their circuit breakers are a much more immediate limit. Most household circuits are 15A @ 120V, and that's an instantaneous limit. You are not supposed to continuously pull more than 80% of a circuit's instantaneous rating, so that's 12A (1440W at the wall). Factor in the losses from the PSU's 80% efficiency and you are now talking 1152W continuously. Again, that works out to about four 600mm^2 GPU dies, plus some power budget for the CPU and so on. And you'd probably need to take drastic measures to keep that cool - that's a lot of heat in a small surface area.
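Spelling out the wall-socket arithmetic above (the 80% derating and 80% PSU efficiency are the assumptions already stated in the paragraph):

    # Continuous power budget on a typical US household circuit.
    breaker_amps = 15
    line_volts = 120
    continuous_derating = 0.8   # pull no more than 80% of the breaker rating continuously
    psu_efficiency = 0.8        # assumed "80 Plus"-class supply

    wall_watts = breaker_amps * line_volts * continuous_derating
    dc_watts = wall_watts * psu_efficiency

    print(f"continuous wall power: {wall_watts:.0f} W")   # 1440 W
    print(f"usable DC power:       {dc_watts:.0f} W")     # 1152 W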


The Mill is far from proven. It's a big idea, but big ideas always look good initially - it's when you get down to the implementation details that most of your revolution in something tends to dry up and just become an evolutionary improvement in performance.


> the possibility that there's still room for things to improve.

Of course, but notice that your desktop computer is using an x86 ISA instead of a RISC ISA because in many cases backward compatibility and network effects trump performance improvements.


Interesting idea. One major difference though is that there is considerable economic incentive to keep making better chips, whereas the flatlining rocket technology was probably due in large part to lack of economic pressure to make it better (very high cost of entry for potential competitors and low volume).


Indeed. Moore's law is about economics rather than technology.


The difference here is that we're much more likely to see moderately short-term adoption of the technology, as we already have 'downscale' companies using much older, cheaper-to-buy second-hand equipment.

If a wall is hit, the companies that /make/ the equipment will still need to sell units to someone, so market equilibrium will push them closer and closer to the source cost for the (very often small-batch/one-off) parts. The race is then between the rate at which they need/want to produce units to stay in business and how long those in the market can defer purchasing new units.


The wall has been hit in some areas. CPU clock speeds maxed out around 3-4 GHz years ago. Getting rid of heat now dominates chip design. This is why 3D chips are limited to mostly-inactive structures such as flash memory. Feature sizes are already approaching atomic scale, which is why ICs now fail over time from electromigration.

There's no longer plenty of room at the bottom. Atoms are too big and the speed of light is too slow.
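For a sense of scale on the speed-of-light point (assuming free-space propagation, which real on-chip wires don't come close to, as the reply below notes):

    # Distance a signal could cover in one clock cycle at vacuum light speed.
    c = 2.998e8          # m/s
    clock_hz = 4e9       # roughly where CPU clocks plateaued

    print(f"light travel per cycle: {c / clock_hz * 1000:.0f} mm")   # ~75 mm

That's ~75 mm per cycle in vacuum; RC-limited on-chip interconnect is far slower.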


The speed of light is not really an issue in VLSI chip design. RC delay, on the other hand...


Most technology follows that curve. In the last century, coal power plants have gotten only a factor of two more efficient. Almost all the progress in the field happened from 1880 to 1940. Physics is a harsh mistress.


I suspect that's exactly what's going to happen unless we transition to a completely new architecture, like neural nets or The Diamond Age style nanomachinery.


There currently doesn't appear to be a Richard Nixon to say 'shut it down.'


"Extreme Ultraviolet Lithography..."

Because saying "We're using x-rays" just sounds too scary right now...


I doubt that has anything to do with it. It's probably due to the 13.5 nm wavelength they're using, with <10 nm being the typical cutoff for x-rays.



So one thing I wonder: if you lose 30% of the energy with every mirror, is it really necessary to use a dozen mirrors between the laser and the wafer?

Getting rid of two mirrors would double the energy at the wafer; removing six would increase it nearly tenfold.
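Checking that arithmetic, assuming ~70% reflectivity per mirror (i.e. the 30% loss figure above):

    # Gain in power at the wafer from removing N mirrors at 70% reflectivity each.
    reflectivity = 0.70

    for removed in (2, 6):
        gain = (1 / reflectivity) ** removed
        print(f"remove {removed} mirrors: ~{gain:.1f}x more power at the wafer")
    # remove 2 mirrors: ~2.0x, remove 6 mirrors: ~8.5x

So dropping two mirrors does roughly double the dose, and dropping six gets you about 8.5x. The catch, per the comment above about absorption, is that EUV can't pass through refractive lenses, so each of those mirrors is part of the imaging optics rather than optional plumbing.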


EUV has been talked about for so long (I heard about it when I was in college, in the 1990s) that I'm just amazed it's finally coming to fruition. Good to see that the technology has made great strides.



