I think the hard part of learning to design chips is really the other stuff - understanding where/how to use storage (flops) and combinatorial logic - and simultaneity: how to handle things that happen at the same time.
Initially you really need a strong understanding of digital logic (not a language), in particular pipelines.
Once you have that stuff in your head you can turn to verilog (or vhdl or whatever) and learn how to map these ideas into the language
In SystemVerilog it's easy; this is the only way to reliably make synthesisable flops:
bit a, b;
always @(posedge clk)
    a <= b;
And you can make combinatorial logic 2 ways:
wire c; bit d;
assign c = a&b;
always @(*)
    d = a&b;
They'll make the same gates; notice the use of = vs <=. You typically use the always form when you want something more complex
"always X" just means loop waiting for X, '*' means "anything important changes"
That's it - those are the most important concepts you have to get your head around; everything else is just this at scale. Also, please ignore the async reset in the "Verilog is Weird" example - async resets tend to be timing nightmares in the real world, use a synchronous reset instead.
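For contrast, a synchronous reset is just another input sampled on the clock edge, so it never appears in the sensitivity list - a minimal sketch (rst/d/q are made-up names for illustration):

bit q;
always @(posedge clk)
    if (rst)        // synchronous reset: only looked at on the clock edge
        q <= 1'b0;
    else
        q <= d;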
One more thing - Verilog is an early object oriented language - modules are objects - but they are static: the entire design can be elaborated at compile (or synthesis) time. Why? Because you can't new or malloc more gates on the fly on a real chip - almost everything in Verilog is static by design (it does support local variables in functions, but anything that's vaguely recursive won't make gates in synthesis)
> Once you have that stuff in your head you can turn to verilog (or vhdl or whatever) and learn how to map these ideas into the language
Yes, and per the other comment this is a huge limitation of them - you don't have Javascript authors saying "you need to work out what you want in this other paradigm and then translate it". You occasionally see people writing C this way though, by working out what they want from the assembly and working backwards, which tends to get wrecked by UB or the optimizer at some security-critical point.
If the main point of the language is to generate flops, flops should be an explicit primitive!
> Verilog is an early object oriented language - modules are objects
Maybe, but IMO a more useful place to apply object orientation would be interfaces, which are nets not modules. I should be able to instantiate "AXI bus" somewhere and connect things to it.
Thirdly, the whole reason that we have "synthesis" is to allow a much higher-level description that is not so explicitly "structural". It gets really unwieldy otherwise:
reg [127:0] out;
always_ff @(posedge clk)
    out <= in;
There - I just made 128 of those flip-flops, and told you it was a "128-bit register clocked on the positive edge". One can at once describe the meaning of the circuit, describe the circuit itself, and make an event-driven simulator of the circuit, for really large numbers of elements, in a terse but understandable format.
It ain't perfect, but it's doing a lot more than I think a lot of people realize and a lot more than a lot of similar looking procedural languages do.
> you don't have Javascript authors saying "you need to work out what you want in this other paradigm and then translate it"
Senior JavaScript developers will. They'll be thinking in terms of types and allocations, even if the language doesn't enforce them. Even junior JavaScript developers will usually have some concept of async IO execution.
> 'you don't have Javascript authors saying "you need to work out what you want in this other paradigm and then translate it"'
Well, React with the hooks API has basically become that. You need to imagine your code first as computational effects triggered by state dependencies in an implicit stack, and then you translate that into a language where all of that must be represented as either calling functions or passing functions around.
The main point of verilog is to simulate digital logic, of which flip flops are just one part.
As to the second point, you can do exactly what you said, instantiate a SystemVerilog “AXI bus” interface and connect things to it. With the interface being a single line item in the port map. This comes with its own set of headaches and I’m not always sure it’s worth it.
> The main point of verilog is to simulate digital logic
I disagree. A HDL, which verilog is an instance of, provides a precise description of a digital machine.
Once you have that description, many things can be done with it. For example you can simulate the machine which is to be produced.
The problem is HDL looks like an algorithm, because it is. But it’s not a precise description of a computation, it’s a description of an object.
Most people who I’ve observed program verilog badly do so because they confuse the two. Kinda like confusing giving directions with giving instructions to make a map. Directions and maps are highly related, but they’re not the same thing.
This article is really dated (2013?). Since it was written, "Verilog" has been merged into "SystemVerilog". Pre-2005 Verilog looks as crufty to me now as K&R C (pre-ANSI C89).
> I should be able to instantiate "AXI bus" somewhere and connect things to it
There's a powerful concept called Interfaces which is pretty much exactly what you just invented. You can create (and I have) an AXI Interface, even parameterized by address or data widths, etc. You can create these interfaces and connect them to modules very efficiently.
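A rough sketch of the idea (bus and signal names are invented here; a real AXI interface has many more signals and separate channels):

interface simple_bus #(parameter ADDR_W = 32, DATA_W = 64);
    logic              valid;
    logic              ready;
    logic [ADDR_W-1:0] addr;
    logic [DATA_W-1:0] data;

    // each side sees the same bundle from its own direction
    modport master (output valid, addr, data, input  ready);
    modport slave  (input  valid, addr, data, output ready);
endinterface

module producer (simple_bus.master bus);    // the whole bundle is one port
    // drive bus.valid, bus.addr, bus.data here
endmodule

module consumer (simple_bus.slave bus);
    // watch bus.valid, drive bus.ready here
endmodule

module top;
    simple_bus #(.ADDR_W(32), .DATA_W(64)) b();    // instantiate the bus once
    producer p (.bus(b));
    consumer c (.bus(b));
endmodule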
SystemVerilog/Verilog has grown into a very bulky, excessive language, but unless you are doing exclusively validation/simulation and want things like UVM, the "synthesizable subset" is pretty sensible and one can very productively stick to that.
> "you need to work out what you want in this other paradigm and then translate it"
Not really. You wouldn't expect good programs from someone who has no idea how a computer works. The language is the knobs and dials. You need to know what the knobs and dials do in the machine.
> Not really. You wouldn't expect good programs from someone who has no idea how a computer works.
I wouldn't expect them to write a good operating system, but for the vast majority of code that is being written you can get pretty far with "your cpu is faster than your memory" as the only bit of knowledge about how computers work.
I once read a blog entry from a developer who was hired to speed up some piece of software. He achieved a few-hundred-fold speedup. Not primarily because he is so good (though he is), but because the original developer had a large matrix of numbers stored as a 2d-array of strings and converted them back and forth whenever he had to do an operation on that matrix. You simply can't make up the shit that some people will do.
I think this is actually not right. There's not any kind of loop that is running, it's triggered by signal edges. That's the fundamental difference between hardware and software that makes hardware difficult for software people. Everything happens simultaneously.
A little background - I've written a couple of Verilog compilers. You can have always statements with multiple event controls, protocol state machines a bit like:
always @(posedge clk) begin
    // do some stuff
    if (condition) begin
        @(posedge clk);
        // do some more stuff
    end else begin
        @(posedge clk);
        // do some different stuff
    end
    ......
end
Synopsys will happily synthesise this, generating the implied state variable. There really is a loop in there. In fact 'always' is exactly 'initial #0 forever'.
Always implies an outer loop: when you reach the bottom of the loop you go back to the top and do the original event wait again, but not until you've waited for the intermediate event waits
In simulation this is easy to implement, but in synthesis the synthesis tool needs to rewrite it as some hidden flops for state and a top level case statement
No - the example I gave above with multiple @(posedge clk) in the single always statement is synthesisable (by synopsys at least, I don't know about others) and contains an embedded loop - it's arguably a higher level way to make logic than the traditional always/case/state state machine. Maybe a simpler example will do:
always begin
    @(posedge clk) v <= a+1;
    @(posedge clk) v <= a-1;
end
is identical to:
always @(posedge clk) begin
    v <= a+1;
    @(posedge clk)
    v <= a-1;
end
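Roughly what the synthesis tool has to turn that into is a hidden one-bit state flop plus a case statement - a sketch (the actual encoding is up to the tool):

bit state;                      // hidden state generated by synthesis
always @(posedge clk) begin
    case (state)
        1'b0: begin v <= a+1; state <= 1'b1; end
        1'b1: begin v <= a-1; state <= 1'b0; end
    endcase
end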
The shift register example you give would normally be synthesised to something with a synchronous clock and an async reset - during simulation the loop is waiting for either the clock edge or the reset edge
> during simulation the loop is waiting for either
Ok, then we agree. Of course a simulator will have an implicit loop, that's not contentious.
However you did not mention this in your initial statement, and I've read and been told explicitly not to think of it like this because there's no loop in hardware.
That’s not exactly the right way to think about it. In HDL, you’re writing an algorithm, but the algorithm produces a physical device, not a computation.
It’s like programming a machine to make a watch. At the end you have a watch. Would you say the gears execute in parallel while it measures time?
In some sense it’s right, but in another it’s missing the point.
I think the loop term is OK, it's just gated by the event sensitivity list for each iteration. If you have an always without a sensitivity list then it is a while(1) equivalent.
This is not exactly true, because the event you expect in software, the loop ticking over, can itself happen at any time. @(*) will trigger on _any_ change of the signals read in the always block, and those changes can happen at the same time, at slightly different times, or at very different times.
When digital designers talk to each other about state, they draw logic clouds and a box, with a clock input, that holds state. The logic clouds are disjoint - there's always a box between them.
Yes, there are two kinds of boxes (flip-flops and latches) and how the clock works varies (and there are often multiple clocks, and the clocks may be driven by logic).
More to the point, I'd argue that the hardware designers job is to manage state and logic so a language which makes them explicit is a good thing.
It's possible to design a language that works exactly like that. The simulator is easy to make very fast and is easily parallelizable. (The logic clouds are acyclic. Moreover, everything that needs to be evaluated at a given clock event or combination thereof can be determined AND partitioned statically.)
No, the problem with Verilog is that it tries to be a procedural language like C when digital circuits are inherently parallel and declarative (sequential circuits just being special cases with feedback). VHDL is better, but still pretty bad. Learning digital circuit design on FPGAs would be so much easier if there was a well supported hardware description language that isn't stuck in the 1980s.
Many FPGA vendors' toolchains include high-level-synthesis tools, using C/C++ or even Python for hardware description. I personally don't think they are superior to typical HDLs; the thing that makes the difference is thinking about the circuit you ought to describe - the language you use then is not as important IMHO.
It doesn't. Just because it shares some syntax with C does not make it 'want to be procedural'. I think that comes from the programmer's bias from first learning a procedural language.
I learnt C then Verilog, and I am fluent in Verilog. The mindset for designing digital logic is completely different to programming computers; it literally is a completely different skill. The problem is, people think that because they can type a (procedural) programming language they can program anything that is programmed with code, which is where people's initial distaste for Verilog and VHDL comes from. When you learn how to do it, I think designing logic is refreshing compared to writing software; there's so much baggage with software.
I do totally agree with the comment, and even upvoted it, but in some sense an HDL does 'want to be procedural'. For example:
Procedure to make brick wall:
    For brick in wheel-barrow do
        To brick, apply mortar, place it, pound it with trowel
(Apologies to all masons. I think you can tell my training is electrical engineering.)
I've thought for some time that the main problem with Verilog is that it looks really close, visually, to C and that gives people the wrong impression.
Things are happening simultaneously in time and space. The ripples are happening while the skips are happening. The skips go stable because the ripple goes stable or the carry will skip so the ripple has "extra time" to go stable.
This is the simplest adder faster than naive ripple carry. Note that it has no clocks. This is a purely combinatorial problem. And most "digital designers" will get it wrong.
Hardware design is NOT software design. Good digital designers are always thinking about how to interleave things in time and space like this.
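For concreteness, here's a rough sketch of one way the kind of adder being described might look - a 16-bit carry-skip adder built from 4-bit ripple blocks (module and signal names are invented, and the block size is arbitrary; real designs tune it):

module carry_skip_add16 (
    input  wire [15:0] a, b,
    input  wire        cin,
    output wire [15:0] sum,
    output wire        cout
);
    wire [4:0] c;                   // carries between the 4-bit blocks
    assign c[0] = cin;

    genvar i;
    generate
        for (i = 0; i < 4; i = i + 1) begin : blk
            wire [3:0] ab = a[4*i +: 4];
            wire [3:0] bb = b[4*i +: 4];
            // slow path: ripple through the block
            wire [4:0] ripple = {1'b0, ab} + {1'b0, bb} + c[i];
            assign sum[4*i +: 4] = ripple[3:0];
            // fast path: if every bit propagates, the block's carry-in
            // skips straight to the next block instead of waiting
            assign c[i+1] = (&(ab ^ bb)) ? c[i] : ripple[4];
        end
    endgenerate

    assign cout = c[4];
endmodule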
Any idea when this was written? It seems a bit outdated, and I'm especially surprised at the mention of hardware companies not using linters; that seems insane to me. Lint, Logic Equivalence, CDC, etc are all absolutely required sanity checks on hardware design, irrespective of the language. Also SystemVerilog is quite expressive. I definitely agree Verilog has many pitfalls you basically get used to and know to avoid, but I don't agree that knowing in your head the hardware that will be generated is a bad thing. Knowing how the hardware will be constructed from your code is key to getting it running at 4GHz instead of 100MHz. What your code will infer could be less or more area efficient, etc. Modern synthesis tools do help map sometimes inefficient code to the right hardware, but still knowing what it should create is a positive thing.
SystemVerilog is too expressive. It has more language features and syntax than anything else by an order of magnitude. So insanely complicated.
I can't see anything dethroning it without multivendor support though. Verilator is not enough - you need GUI tools, formal, coverage, etc. and that pretty much means you're stuck with SV.
And hugo is technically correct (the best kind of correct) in that parens are in the sub-delims reserved set of characters for URIs (https://www.ietf.org/rfc/rfc3986.txt section 2.2), and since they're being used in a context where the enclosing language considers parens to have special meaning, they should be percent-encoded in the hugo file.
(In practice, my experience with hugo is that it will generally get them right if you use them anyway, but it risks mis-parsing).
On the contrary, because parentheses are reserved, percent-encoding them is not necessarily safe:
> Percent-encoding a reserved character, or decoding a percent-encoded octet that corresponds to a reserved character, will change how the URI is interpreted by most applications.
That is, unfortunately, one of those chunks of English that is hard to interpret.
"Safe" here isn't really what the RFC is talking about; the RFC is saying that if you take a URI and add or remove percent encoding for those characters, you might change the way that the URI is interpreted (because those characters mean something in the enclosing context). In other words, you can't encode or decode them without risking a change to the interpretation, but whether encoding or decoding is "safe" is going to be context-specific (broadly speaking, the safest thing to do is to encode them and leave them encoded).
It happens that Wikipedia doesn't care whether parens are percent-encoded or not. So yes, you can use %28%29 instead of (), but that's despite them being sub-delims. For example, you definitely should change & (another sub-delim) to %26 in your URLs.
And, as I noted earlier, escaping parens wouldn't even help in this case, because it's an underscore that causes the mess.
It's remarkably strange that someone who appears to be blogging semi-professionally thinks it's reasonable to publish an article of this sort without a date attached.
Maybe Verilog doesn't change much, but statements like "Commercial vendors are mostly moving in the other direction" and "There have been a number of attempts [...] but they've all fizzled out" are date-sensitive.
In the old days there was a movement / trend / opinion / hype cycle (or whatever you want to call it) saying that publishing articles without a date was a way to make them evergreen.
Some academic types seem to believe that no paper should ever bear a date other than the date of formal publication. Not quite sure why that is, the date of a draft seems like incredibly helpful information to me.
Yes, especially when trying to reconstruct the timeline and context of related research, to help understand the methodology and implicit (lack of) reasoning. With time context one can often go from "why would they do it like that?" to, "given that X wasn't discovered at the time, it makes sense that they did it like that".
It's pretty obvious that Verilog and VHDL, modeled after C and Ada respectively, both imperative languages, follow a drastically mismatched paradigm for hardware design, where circuits are combined and "everything happens in parallel". It becomes even more obvious when you have tried a functional alternative, for example Clash (which is essentially a Haskell subset that compiles to Verilog/VHDL: https://clash-lang.org).
The problem is, it is hard, if not downright impossible, to get the industry to change. I have heard many times, in close to literally these words: "Why would I use any language that is not the industry standard". And that's a valid point given the current world. But even for people that are interested, it might just be hard to switch to something like Clash and not give up pretty quickly.
Unlike imperative languages, functional languages with a rich modern type system like Haskell are hard to wrap your head around. It's no news that Haskell can be very hard to get into for even experienced software engineers. In 2005, after already having more than a decade of programming experience in C, C++, Java, various assemblers, Python (obviously not all of these for the same length of time) and many other languages, I thought any new language would mostly be "picking up new syntax" at that point. Yet Haskell proved me very wrong on that, so much that it was almost like re-learning programming. The reward is immense, but you have to really want to learn it.
And to my surprise at the time, when I got heavily into FPGAs, the advantage proved to be even stronger when building sequential logic, because that paradigm just fits so much better. My Clash code is much smaller, but also much more readable and easier to understand than Verilog/VHDL code. And it's made up of reusable components, e.g. my AXI4 interfacing is not bespoke individual lines interspersed throughout the entire rest of the code. That's mainly because functional languages allow for abstraction that Verilog/VHDL don't, where often the only recourse is very awkward "generated" code (so much so that there is an actual "generate" statement that is an important part of Verilog, for example).
So by now, I have fully switched to using Clash for my projects, and only use Verilog and VHDL for simple glue logic (where the logic is trivial and the extra compilation step in the Verilog/VHDL-centric IDE would be awkward) or for modifying existing logic. But try to get Hardware Engineers who probably don't have any interest in learning a functional programming language to approach such an entirely different paradigm with an open mind. I've gotten so many bogus replies that just show that the engineer has no idea what higher order functional programming with advanced type system is on any level, and I don't blame them, but this makes discussions extremely tiring.
So that basically leaves the intersection of people that are both enthusiastic software engineers with an affection for e.g. Haskell, and also enthusiastic in building hardware. But outside of my own projects, it just leaves me longing for the world that could exist.
Logged in just to upvote this and largely agree with you. Verilog/VHDL are stuck at the 1980s coding paradigm level, as if the industry grabbed the first working solution for automated hardware development and has clung on to it.
> The problem is, it is hard, if not downright impossible, to get the industry to change.
Yes. I don't think it will until either, say, Intel does it by CEO fiat, like the Amazon memo, or a startup from outside somehow dominates the industry by using a different technology.
(I have worked both sides of this, a chip design startup that was bought by Cadence, and a medium size fabless semi company)
> I've gotten so many bogus replies that just show that the engineer has no idea what higher order functional programming with advanced type system is on any level, and I don't blame them, but this makes discussions extremely tiring.
"A monad is just a monoid in the category of endofunctors, what's the problem?"
(I'm joking, but this is an us problem and not a them problem, you can't evangelize things to people that they don't understand, and you have to reach them where they are. Yes, this is very hard work)
However one thing that us software types may not appreciate is that all the weird imperative stuff in Verilog that isn't synthesizable probably gets used in more lines of code than the synthesizable subset - because testbenches are absolutely critical to shipping hardware.
The hardware industry can't benefit from rapid iteration because every iteration costs a mask set.
> The hardware industry can't benefit from rapid iteration because every iteration costs a mask set.
Before tapeout there are steps which could benefit I suppose. But as a functional-programming n00b I don't see how that could be the solution. I would argue that improving place and route algorithms to the point that deploying to a (large) FPGA is almost as quick as compiling software would be a huge step forward.
One thing I don't quite understand about using Haskell for circuit design is that at a first glance it also seems like pure functional programming has an impedance mismatch with what the circuitry physically does. (For reference, I've programmed in Haskell extensively before.)
For example, much of Haskell directly or indirectly relies on recursion -- but this is nearly nonsense when talking about silicon! There's no call stack, for one! More importantly, any algorithm requiring repeated operations like this would be inherently inefficient and undesirable in a design space where latency matters.
I have a feeling that one reason some of these alternatives haven't "taken off" and swept away the legacy languages is because they're not really that ideal either. [1]
Perhaps there's an ideal "concurrent data processing" programming paradigm waiting to be discovered that is neither like procedural languages nor pure functional languages.
[1] This is purely my uninformed layman perspective, of course. I'd love to be corrected by people who've worked in the field.
Why do you feel there is a mismatch? Any digital circuit can be modeled with a pure function that transforms an input stream of values to an output stream of values. And that is actually precisely what Clash does.
Regarding recursion. I believe Clash does support a limited form of recursion. Namely if you prove every recursive call leads to the problem size strictly decreasing. e.g. recursion on a vector of size n must mean every recursive call is applied to a vector of smaller size.
How does (non-tail) recursion work there? It inherently requires a stack, unless you limit the recursion depth and give each recursion step its own circuit.
I could imagine limiting yourself to a specific subset of haskell keeps things under control, but the mis-match seems obvious. At least from the pov "just write haskell but compile to an fpga instead of a binary".
Sorry, I edited my comment and added a part about recursion.
So you are right that not all of Haskell is compileable using Clash. But that also isn't the goal. You are using Haskell to describe your digital circuit. That generally is very different from writing a normal Haskell program. It just so happens that Haskell is good at both.
Can this be because in digital circuits state (memory) plays an important role, while its treatment in a (pure) functional language is not straightforward?
It's straightforward to implement memory in a pure function. For example if you say a digital circuit is a function from an infinite list of inputs to an infinite list of outputs, you can create a "register" by simply adding a value to the start of the list. For example in Haskell:
delayCircuit xs = 0:xs
take 10 (delayCircuit [1..])
-- [0,1,2,3,4,5,6,7,8,9]
Whether a function is pure doesn't mean it can't have internal state. As another example it's perfectly fine to have a completely pure function that has internal state:
import Control.Monad.State   -- for get, put, runState

withInternalState a = let go = get >>= \s -> put (s + 1)
                      in snd (runState go a)   -- returns a + 1, computed via internal state
What is important for purity is that this internal state doesn't leak outside the function. Which it doesn't for circuits. Outside of things like cosmic rays, heating etc a circuit is completely pure in reality.
> Whether a function is pure doesn't mean it can't have internal state.
This seems to be in direct contradiction to what is written on Wikipedia[1], specifically that "the function return values are identical for identical arguments".
If it has non-trivial internal state, how can it return identical values for identical inputs?
Sadly your example eludes me as I don't know any Haskell so it reads like line noise.
It does have identical return values for identical arguments. That doesn't mean the function can't have internal state. Maybe this is more clear in pseudo code:
That function example is not allowed in Haskell. It's pure functional throughout, not just at "function boundaries".
I agree that conceptually a language can be made where only function boundaries are required to be side-effect free, and internally "anything goes" as long as it doesn't pollute the outside world. This might be a good model for circuit design, especially if using something like an IO monad to store persistent state across clocks, such as flip-flops or registers.
However, this is not what Haskell does, at all. There are no mutation functions like "fill(x)". You have to create the buffer filled with x right from the beginning.
More importantly, in Haskell that buffer is defined with a recursion, so it looks something like: (x,(x,(x)))
Effectively it is an immutable linked list. Any modification involves either rebuilding the whole thing from scratch (recursively!) or making some highly restricted changes such as dropping the prefix and attaching a new one, such as (y,(x,(x))).
Haskell is not LISP, F#, or Clojure. It's pure and lazy, which is very rare in functional programming. It's an entirely different beast, and I don't see how any variant of it is a good fit for circuit design, which is inherently highly stateful and in-place-mutable.
Such a function is perfectly allowed and even encouraged, see here where I allocate a mutable array with undefined values and write and read from it in a pure function. Basically replicating the pseudo imperative code I have written before:
Monadic computations are very much in the spirit of Haskell. I don't think the Haskell community view the ST monad as a hack. It's one of many tools in the box of any Haskell programmer.
What I was thinking of was more like a linear feedback shift register. It has zero inputs and outputs random bits, thanks to its non-trivial internal state.
Or as we were saying, a register. Could for example be a function which takes a read/write flag and a value, writes the value to the internal state if write flag is set, and returns the internal value.
edit: In my softcore I have a function to read/write registers, it modifies global state rather than internal state so would not be pure either.
edit2: I guess my point is, for me "state" is something that is non-trivial, and retained between invocations. Your example is trivial and not retained (it's completely overwritten always).
You are seeing a single invocation as running your circuit for a single clock cycle after an arbitrary number of clock cycles. That's the wrong approach. A single invocation of a circuit takes a stream of values and produces a stream of values. Every invocation starts from clock cycle 0.
So yes, if your circuit depends on external signals driven by registers living in a different circuit you need to pass those in as inputs. Essentially circuits are composable just like functions.
A linear feedback shift register always produces the same stream of values no matter how many times you run it. It's completely pure.
Still, I would be very interested to see how one would implement a shift register, lets say something like the 74HC165, so I can see how the pure functions interact with the register state.
Runnable using replit. To make it as simple as possible I didn't use Clash and used types available in prelude Haskell (Bool instead of bit, 64-bit integer as register state, etc.). Every single element in the list represents a value coming out of the register on the rising edge of a clock cycle. You can easily extend it if you want latches, enables etc. But that doesn't really change the core point of the code.
Much appreciated. I think I get the gist of it, even though it looks very complex compared to the Verilog counterpart. To be fair, my lack of Haskell knowledge doesn't help.
But it's really helpful to get a feel for the different approach.
I mean, Haskell is definitely an acquired taste if you haven't used other languages in the same style before. Also note that writing this in actual Clash would be a one-liner, because stuff like registers and shifts are available as part of the standard library.
I've tried to get into Haskell, and while I don't have a huge problem with the overall concepts most of the time, although a bit alien at times, my brain just can't seem to handle the syntax.
I've found myself thinking differently about code and using a lot more functional-ish concepts when writing my "normal" code though, so I do like the exposure.
So your point is you can have temporary variables in pure functions. That's fine.
However how do you implement registers? Do you have to pass around the "register file"? And if so, how does that work?
Like, how would a parallel-to-serial shift register look like? Ie an asynchronous latch updates the internal shift register and an independent clock shifts out the values.
Why not something declarative, like Prolog (bonus: sounds like it could be the advanced version of Verilog) or HCL (Hashicorp Configuration Language, as used for Terraform)?
I've been thinking of writing a tool to use the latter for EDA, so you could build libraries of composable reusable blocks, take application note type examples directly rather than re-draw, etc. and (what motivated me initially) store it all in git. FPGAs seem at least as good a fit.
This has been tested several times, and used to be a fairly common academic EDA research subject (as have other languages like Haskell and SML). I first saw Prolog used for HW design in 1995 or so. A bit newer example is this:
https://www.researchgate.net/publication/220760154_A_Prolog-...
One important area is digital systems testing. There, stuff like Prolog seems to me a better fit than for actual HW description. For test cases I want to efficiently describe the behaviour, esp inputs, expected outputs and changes to state.
For the design, I'm not only interested in getting the correct behaviour (functionally correct), but also in _how_ I get the behaviour. How many resources are needed, what does the routing look like, how many metal layers does it take to route, how much power will it consume, how fast can I clock the design.
I had a similar experience with HardCaml back in 2015/2016. The benefits were overwhelming: higher productivity, higher reusability, fewer intractable bugs, tighter TTM, and no more slippage in the deliverables timeline. The performance was comparable to Verilog-only projects (density, path length). In the end, the effort was shut down by management because the approach was "too complicated".
And commonly also because it's, currently at least, very hard to find people for it. Which is an entirely valid concern, but is still so frustrating. Because while C for system programming might not be ideal, it still makes sense, whereas Verilog (that was meant to look like C because people knew C for programming already) for hardware design is just so much more of a mismatch.
At least for me, OOP was the obviously better model for modelling HW than either the imperative or the functional paradigm. Modules and cores in HW have internal state and define interfaces. The internal state is updated due both to the internal state itself and to changes on the inputs.
SystemVerilog fixed several pain points in Verilog, one of the more important being the ability to encapsulate, bundle ports and protocol handling (logic and state) in interfaces that can be instantiated in modules. You can still shoot yourself in the foot by "programming in Verilog", that is, forgetting that you are in fact describing physical and electrical reality, but you can write code with fewer errors.
One thing I see SW people not understanding is that the toolchains for ASIC development are much bigger. You have simulators, compilers, linters, floorplanners, signal integrity analyzers, detailed Place & Route, several levels of formal verification tools, test insertion and test automation tools etc etc. All of these tools need to parse one or more source files. And all these tools need to parse the source files in the same way. This, in combination with the possibly HUGE cost (money and time to market) of taping out a chip that requires a respin, a change of one or more of the masks in the maskset, are big reasons why the industry is very conservative about radical changes in language. The subset of language features that are safe to use (i.e. correctly parsed and usable through the complete toolchain) is often surprisingly small. And to be honest, describing the HW you want is often the least hard part of chip design
Thinking that the chip industry and associated EDA industry simply don't know CS and modern tools, SW paradigms probably means missing what is actually hard when designing chips.
A final note on functional languages - as you state, "It's no news that Haskell can be very hard to get into for even experienced software engineers." This is also a possible reason for the lack of adoption. You basically reduce the set of potential engineers. And finding good digital, SoC and chip designers is already very hard. After seeing several attempts and cool ideas I'm quite convinced that the subset of engineers who grok and like functional languages and also grok and like electrical engineering, digital and chip design is too small to scale to meet industry needs.
Rant off. I'm accepting that yes I'm probably in the "has no idea what higher order functional programming with advanced type system is on any level" camp.
Higher order functional thinking is such a ridiculous superpower, but it's also really hard to do correctly without some programming tool helping you along and tons of experience.
In your case, any piece of hardware can effectively be abstracted by some top-level function that takes arguments of time, inputs, outputs, environmental variables, etc.
Mastering higher order function development is how one can properly address complexity in any ___domain.
SQL views are another good example of this. In fact, you could very likely build a competent digital circuit design system around SQL ideologies. SQL is the best tool I've ever found for modeling and applying constraints.
SV and VHDL are nasty languages, but as you said, they are "industry standard". Chisel and Haskell-based approaches are better but virtually nobody adopts them.
I tried to go in another direction, to make design code shorter by using Clojure syntax. The result is here: https://github.com/m1kal/charbel and works for simple modules. I don't expect wide adoption, but we need to look for new directions instead of sticking to the methods and languages from the 80s.
Writing performant DSP for an FPGA involves using intrinsics/structures specific to the device - a multiply-accumulate, for example. Can you easily take advantage of stuff like that in Clash? What about using the existing device-dependent IP libraries? I agree Verilog/VHDL are fairly dated, and not designed for how we use these devices today. An increasing number of people I talk to are generating their designs from HDL Coder or Model Composer type tools.
> So that basically leaves the intersection of people that are both enthusiastic software engineers with an affection for e.g. Haskell, and also enthusiastic in building hardware. But outside of my own projects, it just leaves me longing for the world that could exist.
It seems to me that a shop made exclusively of people with this background would be massively more productive, and quite an attractive place to work for anyone with the relevant skill sets.
What keeps such a shop from existing? The need for ___domain specific knowledge? Or does scale require too many people? Are hardware design shops usually in house and thus harder to out-source? General worries about communicating with customers expecting verilog? Worries about being able to on-board new people?
Honestly, despite all of the above, I think a Haskell-based FPGA quick-iteration hardware design shop sounds amazing. If I had any kind of experience with hardware design I would jump at the chance to join. Even without that experience this sounds worth a shot, but I fear that lack of experience might be why I think this is feasible.
The size of that intersection. And the need to at some point interface to the rest of the world, including the EDA world.
Unless you build your own products from system to physical chips in house (have your own fab) sooner or later you will need to interact, exchange source code or netlists in some form. Fabs expect you to use certain (golden) tools before accepting a design for manufacturing.
I'd be interested in trying one of these more modern languages, but every time I see a basic example it looks clunky: rather than write a new language for hardware, it's 'bolted on' to a language for SW and sometimes this looks inelegant. Maybe it's just my bias, but often looking at basic examples they look overly complex to my SystemVerilog-addled brain.
Also, how are multiple clock domains handled, complex types, fixed point arithmetic, are those better/safer?
If you have any examples I'd be eager to see them.
Verilog definitely sucks but I think the problem with new HDLs (Clash, Bluespec, Chisel etc) is they don't necessarily make the hard bits of hardware design easier, they just help with the tedious stuff that whilst annoying ultimately doesn't take up much of your time.
For example a good type system definitely makes module interfaces cleaner, saves you having to dig through warnings/lint reports to find stupid errors and in general makes it easier to rapidly build a new system out of IP. However for your typical hardware project you're not normally wanting to radically configure things or continuously build whole new systems. You have a fixed or slowly evolving spec, so when you first put things together and occasionally need to add or change blocks there's a bunch of tedious error-prone wiring to be done, but ultimately improving that process makes your job easier without opening up radical new ways to do it.
Significantly more powerful generation/parameterisation capabilities are another thing new HDLs can excel at. However for anything sufficiently complex (e.g. a cache) building something scalable that functions correctly and performs well across a broad parameter space is just incredibly hard. Perhaps a new HDL lets you build a wonderful crossbar for some interconnect protocol (e.g. AXI) where you can have arbitrary numbers of ports, arbitrary data widths for each, clock ___domain crossing etc etc, and it just handles whatever you throw at it, and you get some nice elegant code that generates it all too. Though depending upon the actual configuration you want, you'll want very different micro-architectures for it. The one-size-fits-all approach will be one-size-fits-this-corner-of-the-parameter-space in reality, and if you want a truly one-size-fits-all you'll find your initially nice elegant code ends up with a bunch of special cases all over it to produce optimal designs. Also in reality you don't need something super flexible for all cases in any given project; building the thing your specific use case requires works fine. Then for the next project you can adapt it. A tedious process you'd love to see improved? Sure. One which is holding you back from doing amazing things? That I'm less convinced of.
There's also the downside that it is important to have a reasonable idea of the circuit you actually produce, and this is generally where the hard stuff happens. You have tricky timing paths to deal with, power issues to sort out, area to reduce etc. If you're working on secure hardware (like I do) you've got side channel and fault injection attacks to detect and defeat. Doing this requires a deep understanding of how the HDL you're writing becomes standard cells on the chip.
With a new HDL mentally mapping some output of an implementation tool back to the original HDL can be very tricky and consequences of code changes can be surprising. This makes the hard stuff harder.
Ultimately hardware is not software, there's a different set of constraints you're working to and a rather different end product. There's plenty of good stuff to take from the software world to apply to hardware but it doesn't all just map across cleanly.
Of course some of the problems I talk about above can be solved by tooling, they're not inherent to the languages. Still that tooling needs to be created and may be very hard to build (how many times have you seen someone claim something will be amazing just as soon as the tools exist to make it useable?).
I think my perfect HDL right now would look rather like SystemVerilog but with a decent type system and restricted semantics so unsynthesisable (or synthesises but not into a circuit you'd ever actually want to build) code simply won't build along with improvements around parameterisation and generative capabilities.
Taking a step further from there is also perfectly possible but I think we need to do a lot of work around tooling and verification for more flexible designs first to understand how to do that well.
I do need to spend more time with new HDLs. The last serious project I did in one was a CPU (well, two CPUs, but both derived from the same code base) in Bluespec around 10 years ago for my PhD. Bluespec has an open-source compiler now, plus there are various languages to explore. Maybe I'll try building a RISC-V core in each and seeing how it goes.
I've tried pretty much every 'NeoHDL' out there, and so far Bluespec was the one that actually made me feel like it's a step in the right direction, instead of more of the same.
Its atomic transaction based modelling maps superbly well into hardware clock cycles, and after using it for a while you learn to visualize quite well what shape of RTL will be generated. Plus its standard library provides some really powerful _and practical_ interfaces/implementations that make building things fast and safe. Oh, and a lot of things you would generalize with fairly heavy-handed and bespoke abstraction in things like Chisel/Spinal/Migen/Amaranth just map directly into Bluespec's type system constructs, making interacting with existing codebases much less of a reverse engineering effort.
The Bluespec SystemVerilog syntax is a bit... quaint, but if you're used to (System)Verilog you know the deal. I personally want to spend some time soon learning the Bluespec Classic/Haskell instead, but that's mostly because I already know some Haskell...
Overall, highly recommend giving it another shot. It made me enjoy writing HDL code again after years of frustration with other offerings. It's one of the few languages out there that actually make me feel 10x as productive thanks to the strict type/runtime semantics.
Yes I did quite like bluespec, though after my time using it I drew two conclusions.
1. The extra power I had to build parameterisable/configurable designs didn't help as much as you'd hope as I said in my earlier post. Making something that can deal with a large parameter space still remains hard, the easier things look a lot nicer and are less fiddly to work with but on a practical level you're not gaining much.
2. One of Bluespec's big ideas is you write the rules, with conditions, and it works out the rest. The problems come when this maps to Verilog and you need to start fixing implementation issues. I ended up having to develop good intuition around exactly how the compiler would schedule the rules so I could avoid issues (e.g. combinational loops). It felt like Bluespec really pushed you into deep pipelining: a rule does one thing, the result gets flopped, and another rule takes over next cycle. Fine for some designs, but sometimes you've got a whole bunch of stuff you want to get done in a single cycle (e.g. within a CPU) and there I felt like the rules construct ultimately just made it harder to build anything.
I do remember discovering some 'hidden' language features. Knowing it all turned into Haskell underneath via term rewriting I tried some syntactic constructs that weren't documented but I felt should work and often found they did! This gave some fairly powerful stuff to play with though never good to rely on undocumented features.
As I said it was over 10 years ago so my memory is hazy, things may have improved since there as has my skill and experience. I should really take another look now the compiler is open.
Agree Bluespec is great. In college we went from 0 to a Bluespec multicore RISC-V processor running Linux on an FPGA in a single undergrad semester, every step along the way felt intuitive.
Do you have a pointer to Bluespec Classic (not the "modern" Bluespec)?
I would make a distinction between "generator HDLs" like Verilog, Lava, Chisel, MyHDL, nMigen, etc and languages that actually are interpreted as logic (Verilog, Bluespec, Clash, Silice, Handel-C, ...). Yes, Verilog can do both and is a ghastly language. The former class doesn't really raise the abstraction, it just makes generators easier to write.
As I've been saying for decades, I really want a language in which it is as easy to write an implementation as it is to write a [timing] software model; ideally I can use the same language for both! I know it can be done and I have been trying to scratch that itch a few times.
Wow, didn't know that Bluespec was open sourced. I used in my Computer Architecture class for assignments 6 years ago, and my experience was that it was much much better than Verilog (the type system was far better) but there was too little learning material/documentation out there on the Internet.
bsc, the bluespec compiler, has been open sourced under the MIT license. It's the real deal, the same bsc that has been used in the industry for years now.
I've been using it in my own personal projects for something like two years.
> Yet Haskell proved me very wrong on that, so much that it was almost like re-learning programming. The reward is immense, but you have to really want to learn it.
I've given it 3 attempts and each time I get closer to understanding how to use Haskell IRL, instead of toy things. Turning it into reality with all the different... styles or paradigms?.
Looking forward to my 4th attempt, I think I'll get it next time! Just need the spurt of motivation.
If anyone is interested in alternatives to Verilog, check out PipelineC. Great for software folks getting into hardware; it helps avoid issues like blocking vs non-blocking confusion, what's a register, etc. Lots of other features too.
This article starts out by assuming that understanding sequential programming is sufficient to understand hardware. Yes, Verilog is a simulation description language, and yes, it can be compiled to hardware. But it is a specific subset that translates to hardware, not the whole language. Large parts of Verilog/SystemVerilog exist to drive stimulus into the model hardware and to observe the results; those parts don't have any sensible mapping to hardware.
The problem with all these articles is that they start from wrong assumptions and keep going. In practice, industry has no problem using these languages to produce chips with billions of transistors that largely work the first time, with the expected performance and power. As others have mentioned, this industry is conservative, and if there isn't a problem to fix it will remain with the tried and tested.
(I'm one of the people named in the Verilog/SystemVerilog standards, worked with Phil Moorby and once upon a time was responsible for chunks of ncsim and vcs)
Counter-opinion: industry simply has no plausible alternative to Verilog/SystemVerilog and succeeds in producing chips only in spite of the language's glaring flaws.
I worked for several years at a world class company verifying CPU designs. Even into the late 2010s, designers were afraid to use basic features like structs because who knows what tool might not support it correctly. They emulated structures using piles of defines that set all the bit offsets. I wouldn't know how to begin to estimate the amount of senseless work that this led to during debugging. The lack of any type safety was appalling. If you were lucky, you could catch gross type confusion by having a linting tool notice a size mismatch. These tools found hundreds of real errors that should never have compiled in the first place. I could go on and on...
But there's no escaping it. There are so many amazing tools built around it. The vendors love the moat the complexity gives them. The IP developers have huge investments in legacy code bases. Compatibility is worth too much.
The industry is extremely conservative. Costs of designing and taping out a modern chip range from $100M to $1B; these are the type of costs that will kill most companies if it goes wrong and very few companies can afford to have it go wrong more than once.
Tried and true approaches, small incremental developments are the only way things get done in this space. And the technological moat has many sources, not just Verilog etc. Process technology is continuously evolving and tools need to adapt to all the new physical rules and constraints. What gates are most efficient depends on the problem and the process and even by what the team means by efficient (least power? least energy? most mips/sec? least time to solve a problem? ...)
Everyone loves to complain about the tools though, that is universal. They'll just never adopt a new tool that hasn't survived the test of time and had enough successful designs "by someone else" that it doesn't make them nervous about being the first.
I hate math. Not because "math is hard", but because the language of math is hard. It's unintuitive and minimalist and full of obscure runes and intonations and nothing is ever explained simply. Many of its properties are just conventions the first person came up with and nobody ever decided to consolidate all the quirks into a simpler form. It's unnecessarily hard to learn - especially for those with learning disabilities.
It turns out there are lots of languages like this. C and Verilog and Assembly, and ... Cantonese, Albanian, Russian, Arabic. Ask people why they're like that and there is always a story, and usually some good reasons (or at least useful qualities). But language is a part of culture, and sadly, culture is incredibly hard to change.
I wonder why building circuits doesn't use a language comparable to the nand2tetris course.
For those who haven't seen it: it gives building blocks like NAND, wires, and an abstraction method, allowing you to treat a group of wires and building blocks as a new building block. So you wire up a few gates into, say, an adder, then you can use adders in your circuit.
Things like circuit synthesis and timing analysis get trivial. I have the impression digital designers already tend to think in this way.
I don't have much experience, so read the next as a question instead of a suggestion: what's wrong with using this as a basis for a new design language?
I think you mean block based designs where you define blocks as inputs and outputs with the logic inside, and then you can piece together and group blocks with wires possibly in some sort of visual editor or similar format? This is already how Verilog and VHDL work, they are oriented around functional blocks that you wire together to build up more complex units. Often HDL environments ship libraries of pre-made blocks for various functions, many of which are very powerful and might be tuned by the vendor to exploit specific hardware features of your target (the more powerful ones are typically expensive and black box proprietary stuff that you just get input and outputs). They also often provide an actual graphical editor for wiring blocks together, or you can specify connections in a text format.
I'm not really professionally experienced with it, but I understand that a lot of the common work in practice is wiring together these pre-made block libraries from vendors more than writing them from scratch.
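As a tiny illustration of that block-wiring style, here's a gate-level full adder built from primitive gates, which can then be used as a block in a bigger adder (a minimal sketch, names invented):

module full_adder (input  wire a, b, cin,
                   output wire s, cout);
    wire axb, t1, t2;
    xor g1 (axb, a, b);        // built-in gate primitives wired together explicitly
    xor g2 (s, axb, cin);
    and g3 (t1, axb, cin);
    and g4 (t2, a, b);
    or  g5 (cout, t1, t2);
endmodule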
The difference would be that it is never possible to have an unsynthesisable design, as long as everything connects together. So the problem described in the article can't happen if you only have blocks and wires.
Maybe a rephrased question is: what causes Verilog or VHDL to be unsynthesisable if they are based on this style of block design?
There are lots of reasons why some HDL could be unsynthesizable:
- multi-driven nets where two or more signals are driving the same node
- the target not supporting certain features like multi-clocked FFs
- missing components or libraries (not uncommon if you're trying to use VHDL 2008, for example)
- timing analysis failing, meaning the HDL isn't fast enough / the critical path is too long for how fast the fpga is going to go
And while not necessarily being unsynthesizable, a lot of tools like Vivado will heavily optimize the HDL, meaning if you forget to connect a part of the HDL to the output or do something with it, it will be removed from the design.
Because the logic inside blocks can contain constructs that do stuff that isn't expressible as physical hardware, but is useful for other purposes / otherwise expressible in Verilog (some people think it's an anti-feature that such things are even possible to express, but it's open to debate). You still have to implement the content of the blocks somehow, be it grabbing it from a library or writing the logic yourself. I didn't mean to imply that Verilog/VHDL are purely about piecing together functional blocks. It's just one of the main abstractions that allows you to compartmentalize, like implementing functions that you can reuse in a traditional programming language. The other reply also goes into some good examples.
You could of course only build on top of very simple blocks that are trivially correct like say piecing together logic gates and other very simple circuit elements, but that kinda defeats the purpose of using an HDL and doesn't scale well. Some of the blocks you get included with your development tools might be quite complex, e.g. entire complex DSP operations that are designed to exploit specific hardware features of your target.
HDL (hardware description languages - Verilog and VHDL the most representative ones) do actually work like that.
They give you access to structural descriptions for bitwise control, but also and mostly behavioural descriptions, which allow hierarchies of modules (the blocks you described) and abstraction up to basically anywhere. This last part is generally referred to as RTL (register transfer level) design.
To make a circuit from this RTL description you usually need more steps: synthesis (where RTL gets "compiled" into the actual gates and flops to be used) and implementation (where the logic gates of the previous step are actually translated in the "fabric" you have available - FPGA logic blocks, a foundry's standard cell library ...).
VHDL springs off of ADA, so I wouldn't say it was specifically developed.
Anyway, ADA and VHDL are strongly typed, which incidentally works well with critical applications development such as hardware design.
VHDL springs off of Ada (not an acronym, no need to shout or to bring in the Americans with Disabilities Act into a discussion on programming) syntax, but not its semantics (or, its not a continuation of Ada semantics, it does borrow some of them).
PL/SQL (I've never used it) also borrows from Ada's syntax, but not its semantics in any comprehensive sense. This sort of thing was deliberate, similar to how Verilog borrows from C's syntax, or Java apes C's syntax, or JavaScript apes Java's (and C, indirectly). The goal was to extend something familiar with new semantics.
After using Verilog/VHDL for a while, I learned that trying to be "clever" in ANY way is just going to result in errors and frustration. So I write small dead-simple Verilog modules, test them in isolation (Icarus Verilog is great! Use it with a SystemVerilog testbench to poke values into your module, and use gtkwave to visually verify the output), and then compose them into a larger design.
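A minimal sketch of that workflow, assuming a trivial unit under test called and_gate with ports a, b and y (run with something like iverilog -g2012, then open the .vcd in gtkwave):

module tb;
    logic a, b;
    wire  y;

    and_gate dut (.a(a), .b(b), .y(y));    // the small module under test

    initial begin
        $dumpfile("tb.vcd");               // waveform for gtkwave
        $dumpvars(0, tb);
        a = 0; b = 0; #10;
        a = 1; b = 0; #10;
        a = 1; b = 1; #10;
        if (y !== 1'b1) $display("ERROR: expected y=1 when a=b=1");
        $finish;
    end
endmodule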
Title should say "hardware description languages are weird for software people". JS guys have it easier, but for the rest, it takes time to rewire (no pun intended) your brain to think in async terms.
A concise but not-completely-accurate way I explain it to a lot of pure software folks is that every line of HDL code executes "simultaneously". This can sometimes help them wrap their heads around Verilog/VHDL a bit better.
They have different flaws. Neither is actually very good.
But if you are coming from the perspective of a newbie asking which to learn, there is a solid answer: absolutely, definitely, 100% learn VHDL. Do not learn Verilog. (And ignore anyone who says otherwise.)
There are two reasons for this. First, if you learn VHDL, it is then easy to learn Verilog. The reverse is not true. So even if you need to use Verilog later, and because it's more popular you probably will, this will not be difficult for you. The second reason is that VHDL makes certain very bad habits impossible at the language level (including the compiler/synthesizer). VHDL's model for how things happen is better than Verilog's, and prohibits many strange cases that you never actually want to generate. Thus, by learning VHDL, you will naturally train yourself to write code that is structurally better, and this style continues to be very successful in Verilog. In this case Verilog is too permissive: it lets you write bad code that does not map effectively to what you need. This is why it's much easier to move from VHDL to Verilog: you will not have bad habits to unlearn.