Hacker News
Open Source CPU Core Advances (eetimes.com)
124 points by programmernews3 on July 14, 2015 | hide | past | favorite | 38 comments



The RISC-V activity is very interesting. I particularly love that they did a 1.4GHz core on 48nm SOI and are working at the 28nm node. This knocks out some of the early, hard work of getting competitive hardware at an advanced process node. I'd like to see two things in this work: a microprogrammed variant with an assembly or HLL to microcode compiler; and a tagged variant like the SAFE/CHERI processors, with an IOMMU that seamlessly adds & removes tags during DMA. That would be way better for security-critical applications than most of what's out there. Multi-core would help, too.
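To make the tagged-memory idea concrete, here's a toy Python sketch. It is not lowRISC's, SAFE's, or CHERI's actual scheme; all names and the one-tag-per-word layout are illustrative assumptions. The point is just the invariant: CPU loads/stores can check and set tags, while the DMA path only ever sees raw words and resets tags on incoming writes.

```python
# Toy model of tagged memory with a tag-stripping DMA path.
# Illustrative only -- real designs pack tags separately and
# enforce this in hardware, not in a Python class.

class TaggedMemory:
    def __init__(self, size, default_tag=0):
        self.words = [0] * size
        self.tags = [default_tag] * size
        self.default_tag = default_tag

    def store(self, addr, value, tag):
        # CPU store: data and tag travel together.
        self.words[addr] = value
        self.tags[addr] = tag

    def load(self, addr, expected_tag=None):
        # CPU load: optionally trap on a tag mismatch.
        if expected_tag is not None and self.tags[addr] != expected_tag:
            raise PermissionError(f"tag mismatch at address {addr}")
        return self.words[addr], self.tags[addr]

    def dma_read(self, addr, n):
        # Outgoing DMA: the device sees raw words, never tags.
        return self.words[addr:addr + n]

    def dma_write(self, addr, data):
        # Incoming DMA: data is untrusted, so tags reset to the default.
        for i, value in enumerate(data):
            self.words[addr + i] = value
            self.tags[addr + i] = self.default_tag

mem = TaggedMemory(16)
mem.store(0, 0xDEADBEEF, tag=1)   # CPU-tagged word
mem.dma_write(0, [0x1234])        # DMA overwrite clears the tag
assert mem.load(0) == (0x1234, 0)
```

The design choice worth noticing is the asymmetry: `dma_write` can only clear tags, never set them, so a malicious peripheral can destroy a capability but never forge one.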

Meanwhile, Gaisler's SPARC cores are commercial, open-source, customizable, support up to 4 cores in LEON4, are integrated with most necessary I.P., and can leverage the SPARC ecosystem. Anyone trying to do an open processor can get quite the head start with that. A few academic and commercial works are already using it. Plus, the SPARC architecture is open enough that you only pay around $100 for the right to use its name.

So, Gaisler's SPARC cores with eASIC's Nextreme seem to be the best way to rapidly get something going. The long-term bet is RISC-V, and they could do well copying Gaisler's easy customization strategy. They might be doing that already with their CPU generator, etc.: I've only read the Rocket paper so far. The solutions built with one can include a way to transition to the other over time.


You might be interested in our (lowRISC's) tagged memory implementation. We describe our planned approach in http://www.lowrisc.org/docs/memo-2014-001-tagged-memory-and-... My colleague Wei gave an update at the workshop a couple of weeks ago http://riscv.org/workshop-jun2015/riscv-tagged-mem-workshop-...


It would be great if you also implement ECC. Memory tagging is only as good as your ability to trust the memory you are tagging.
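The ECC point can be illustrated with the smallest possible example: a Hamming(7,4) code in Python. This is only a sketch of the principle; real DRAM ECC uses wider SECDED codes over 64-bit words (which also detect double-bit errors), and this toy version corrects exactly one flipped bit per 7-bit codeword.

```python
# Minimal Hamming(7,4) sketch: 4 data bits, 3 parity bits,
# single-bit error correction. Illustrative only.

def hamming74_encode(d):
    # d: four data bits [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                          # simulate a rowhammer-style bit flip
assert hamming74_decode(code) == data
```

Which is exactly the tie-in to the tagging discussion: without something like this underneath, a disturbed bit silently corrupts the tag along with the data.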


See: rowhammer. ;-)


eASIC only uses its eBeam direct-write technology (which is great for cheap prototype development) on 90nm chips. So maybe using 14nm FPGAs is better.


Nextreme 1 is 90nm, Nextreme 2 is 45nm and Nextreme 3 is 28nm. Their easicopy service mentions all three. So, am I missing something? Or can't you just put a Gaisler SPARC design on an arbitrary Nextreme for S-ASIC benefits and convert it to ASIC later when cash is available?

Note: Gaisler's LEON3 and LEON4 are listed in their I.P. library list for Nextreme devices.

Note 2: I'm avoiding FPGAs except in the early stage due to high unit costs and energy use. My favorites, though, are the Achronix Speedsters and their asynchronous circuits. Badass. They'll get bought out, though. (sighs)


While all eASICs are via-programmable, only the 90nm version is programmable by eBeam. The rest require a mask and minimum volumes, i.e. a large expense. Sorry, I'm on mobile so no link.

So maybe building something on an FPGA (the Gaisler core is a good idea), proving enough market value, and finding customers (not easy) should be the first part, and only then using eASIC or Fujitsu's similar service for mid-volume?


Oh OK. I thought all of them required masks lol. I didn't know they shortcut around that for 90nm. That's awesome! Thanks for the tip.

Far as FPGAs, that was my model: prove it with an FPGA, get to a certain volume, and then do an S-ASIC run. Important to note that I'm focused on security-critical apps. An FPGA's reprogrammability is a risk here. I thought there was a chance I might be able to skip the FPGA and go straight to S-ASIC if I sell customers on fixed logic being safer. The anti-fuse FPGAs are my backup plan on that, although I haven't thoroughly evaluated their actual security.


The best flash FPGA that would fit your goal is probably on 65nm. If that's the case, you're better off verifying on the FPGA and selling low volumes on 90nm eASIC, which will be far more efficient than a 65nm FPGA. Expect very expensive prices at low volume, but then you could scale.

BTW, if security is what you sell, isn't there any option to do it at the software-only level and reach bulletproof security?


I used to do that with separation kernels, security through diversity, and so on. I eventually published my framework (below) for free after the Snowden leaks because people weren't keeping up with attackers.

http://pastebin.com/y3PufJ0V

Thing is, though, the hardware kept getting in the way. I discovered too slowly that, as Snow said, the hardware inherently made safe or secure computing difficult. So, my counterpoint to Dan Geer (below) pointed that out, gave many examples, and showed it must be the first step. We need hardware like Burroughs, System/38, SAFE (crash-safe.org), or Cambridge's CHERI that makes our job inherently easy instead of FUBAR.

https://www.schneier.com/blog/archives/2014/04/dan_geer_on_h...

FPGAs might be modified by a clever software attack, along with being slow and expensive. ASICs with the right logic might be immune to software attack and fast, but are EXPENSIVE. So, I was exploring the S-ASIC option for non-modifiable logic, cheaper NRE than an ASIC, and (maybe) faster/cheaper than an FPGA. I was going to do anti-fuse if I had to do FPGAs. Not sure what their cost or tooling is like vs typical Altera or Xilinx FPGAs.

Anyway, that's what I was aiming at. Appreciate your tip. Hmm, Opterons and PowerPC G5's were done at 90nm, albeit custom. I can probably squeeze quite a bit more performance out of that eASIC than I thought. Could always go old school and do a 4-way SMP box. :)


BTW, I'm not even sure a structured ASIC will give you good security against physical attacks, because you probably need to design below the logic layer for that.


Intel bought Altera; they are or were doing fab work for Achronix, which they invested in as well.


The Intel and Altera combination is especially exciting. The first time I got to see the real potential of FPGAs in practice was when SGI added them to NUMA machines. The onboard memory and interconnect meant no huge slowdown from going through the PCI bottleneck. An FPGA fabric as a layer of a high-end Intel chip, with software & FPGA cooperation over the on-chip interconnect, would kill everything else in performance.

The only thing better might be one of the POWER chips with an FPGA. Maybe IBM should buy Xilinx or Achronix... mwahahaha. I'll probably be able to afford Intel's, though, unlike IBM's. ;)


There's good coverage of the June workshop at http://www.lowrisc.org/


The concept of 'open source hardware' is very interesting to me. However, it looks so different from software in many ways (e.g. much harder to understand, many external dependencies to build it yourself), but maybe that's just my perception due to my zero knowledge of so many things.


Software looks, to the layman, very hard to understand too. The infrastructure concerns of external dependencies are hard; I spent a full day configuring a build server the other day. We persevere despite these things. I doubt that the open source hardware community will ever be even a tenth the size of the software community, but if it can get large enough it can have amazing knock-on effects. Look at the outsized effect even simple things like the Raspberry Pi and BeagleBone have. They're reference boards slapped together out of commodity parts, but they represent a very real gateway to software and hardware hacking for many people.


I think it is just as "simple" as software, except that it is very different. Also, hardware has typically been the realm of patents and software the realm of copyright when it came to protecting IP. So you can always re-implement your take on a software idea if it is only copyrighted, but you can't make work-alike hardware if it is patented.

Fortunately, a lot of hardware patents are expiring, which makes the amount of useful hardware you can implement that much larger. Over the next 5 years a huge number of microprocessor patents are due to expire into the public ___domain, so I am looking forward to a lot of inexpensive hardware coming out of places like TSMC and China.


Not as simple at all. I can teach people how to write software pretty quick. Those same people might take years to learn to make working hardware. Without extensive tool support, it can take them a lot longer as process nodes they work on get smaller. And you don't know if it works until you shell out big money for an ASIC prototype and probably do several iterations of that.

The fact that you have to spend more money to test the design is the biggest difference. Eliminate all the rest with brilliant, well studied people and you still have that huge cost on a decent process node. Services such as MOSIS help but only so much. Open-source, gate-level testing and general verification tools would be a great investment in that they reduce this significantly. Well, a full suite of open source (or cheap) EDA tools in general would be nice but that's a whole different, pessimistic discussion. :(


Perhaps it's context? I certainly agree that teaching someone chip design is a long process; so is teaching them to design a compiler. For me, it is a level of complexity which cannot be abstracted away. So, for example, it is easy to teach middle school kids to make working digital circuits out of TTL gates, but much harder to teach them how to bias a transistor amplifier to operate in its linear region. Similarly, it's easy to teach them basic game mechanics with PyGame or something similar, but harder to teach them how to develop inverse kinematics and shader-based rendering in the Unity engine. All about scale.

For me the big difference is that you can see a program that does something and, if you're skilled enough, write your own version that does the same thing; even if the original was protected by copyright, you "own" that new code you wrote. But if you saw some hardware that did something, and were skilled enough to implement that same thing yourself, if it was protected by patent, you couldn't share your version. Sad really.

Even an open source RISC CPU couldn't be built with out-of-order execution until those patents[1] expired. But now a lot of those patents are expiring, so it's a great time to be in the bespoke hardware business.

[1] https://www.google.com/patents/US5666506


I think scale matters, but so does complexity. Do you really think someone building TTL circuits will be able to do a 48nm design without EDA that's useful for something? Stuff quickly gets out of control to the point that designers depend on tools to catch all the problems. I can use a method such as Cleanroom, basic abstraction, and a homebrew Oberon/BASIC/LISP to design software of about any size on the market's cheapest PC. The ASICs? I'm going to be begging for synthesis (RTL at the least) & place-and-route before the day is up on a non-toy project.

The patent vs copyright law issue is huge. It's one of the reasons I haven't tried to bring a solution to market: can't defend the lawsuit and don't want to take one down. I thought on it a long time to try to determine a way to reduce risk.

https://www.schneier.com/blog/archives/2014/03/friday_squid_...

There's still risks as Clive Robinson notes further down. Making sure the technology is produced outside the U.S. could help. As Clive noted, there was a hardware engineer both of us talked to whose company specifically avoided the American market due to all the patent nonsense. He said they did fine without it. Corruption in Congress on the issue means it's not changing anytime soon.


Are you sure about that last part? I thought whitebox engineering was completely acceptable in hardware. Additionally the patent says:

A method of issuing and executing instructions out-of-order in a pipelined microprocessor in a single processor cycle...

In other words, it's hardly a patent on OoO.


Most of the OoO patents have lapsed. This one is recent:

http://www.freepatentsonline.com/y2015/0026442.html

Who knows what its patent status is. One can always do an in-order design. That has the advantage of determinism and ease of covert channel analysis.


>> Well, a full suite of open source (or cheap) EDA tools in general would be nice but that's a whole different, pessimistic discussion. :(

Actually, at the leading edge, the cost of EDA tools is among the largest bottlenecks, possibly even stopping Moore's law by greatly reducing design starts and breaking the economics of the law.


That's one point in the pessimistic discussion. Worse, most of the tools hit NP-hard problems whose solutions leverage a combination of public knowledge, esoteric wisdom (often trade secret), and many PhD's worth of algorithms. It's going to be... really hard... to duplicate them plus their results, whether proprietary or FOSS.

Personally, I'd just suggest Bill Gates or another fat cat buy Mentor (cheapest) then dual-license their stuff and offer discounts for early years to startups. Or discount + royalty strategy. Results might get very interesting in terms of innovation.


Personally, I wouldn't interrupt Bill Gates. He's doing amazing stuff. But it could really appeal to other philanthropists who seek recognition and impact.

Another idea I had in that area: why don't great scientists collaboratively start an open DARPA, creating propositions for truly breakthrough science/engineering projects and letting philanthropists buy and make them happen?

Assuming philanthropists get at least one success from a few trials, it's guaranteed impact on a very large scale.


Thing is, what individual or small group of them could I convince to spend almost $2 billion? Bill Gates has plenty more than that and would understand the value of the purchase. He's also trying to give away most of his fortune. No other names popped into my mind.

Suggestions?


I think you're right in that there needs to be an easy way in. I have some FPGA hardware, I was an FPGA hardware engineer in the dim and distant past, and tried to follow a tutorial, but still didn't get very far trying to make a RISC-V machine. I did however manage to install a simple RISC-V virtual machine (on x86-64) [1]. But yes, it needs to be much easier.

[1] https://rwmj.wordpress.com/2015/06/11/booting-risc-v-linux-w...


Everything to play for - 64 bit ARM is not yet established, and is a complete change from 32 bit ARM.


So if these comparative benefits are objectively measurable and designs are freely available, why haven't they been commercialized yet?


Echoing the other sub-thread, see this lowRISC summary from the introduction of the workshop:

State of the RISC-V Nation: many companies are 'kicking the tires'. If you were thinking of designing your own RISC ISA for a project, then use RISC-V. If you need a complete working supported core today, then pay $M for an industry core.

If you need it in 6 months, then consider spending that $M on RISC-V development.


Given the estimates for time spent closing a deal with ARM, you could probably be actively engaged for those 6 months and get your core in the same time frame.


Is this one of the reasons MIPS is still getting design wins? How many years does it take to negotiate with ARM???


Is MIPS still getting new design wins? I know that existing MIPS SoCs are still being used all over the place (eg. wireless routers) but is anyone actually working to put a MIPS core on a 20nm or smaller chip?


There already are shipping, commercial RISC-V cores. The end-products are things like cameras though, so they're not very likely to make the tech blog rounds.


Source? Link?


Slide 35 from the RISC-V workshop (http://riscv.org/workshop-jun2015/riscv-intro-workshop-june2...). I've also personally talked with some of the engineers who have shipped RISC-V cores in their products.

The issue though is that nobody HAS to tell us they used RISC-V (that's part of the beauty of it). For small micro-controller type cores, the decision is made by some engineer to go with RISC-V instead of rolling his own ISA ($ and time are too tight to go external for a simple micro-controller), but that doesn't mean he has the power to publicly announce to the world what tools and IP his company uses in its products.

Anyways, we're trying to get more people to open up so we can help make a stronger case for RISC-V by using more success stories from industry.


The spec isn't even finished and it takes time to commercialize technology.


Indeed, it's way too soon to expect many, if any. The draft MMU/protected-mode specs weren't released when I looked at it earlier this year (they're out now, as I understand).



