Whenever I see "bare-metal" thrown around, I think about how Atari Pong was literally all hardware: the entire game was implemented directly on the circuit board as physical logic gates. With the advancements of modern technology, I wonder whether it's possible to put something far more complex, yet still completely functional as a standalone program, onto a circuit board.
The "easy" way is a configurable logic device like an FPGA or PLD. But that's still a programming effort more than logic circuit design.
Modern discrete logic chips can be tiny. A cursory browse through Digikey shows single NAND gates in a 1x1.5mm package. You can probably go smaller if you want, but even that is ridiculously tiny.
Given that, and the insane accuracy of modern PCB fabrication and pick-and-place assembly, you can design a very complex circuit in an extremely small footprint.
From there, the hard part is designing the logic. Today, you'd probably use something similar to an FPGA compiler that turns a program into a logic circuit. Which leaves the hardest part as laying out the circuit on the board.
So yes, you could do it, and do it extremely small. But there are good reasons this isn't generally done anymore.
I've been looking at the various very small microcontroller/SBC alternatives out there and it is extremely impressive what you can get for $50, $10, $5 or even $3 in some cases. Arduinos, ESP32s, the Raspberry Pis (including the Pico!) and the Teensys (a bit more expensive but very capable) all offer an amazing amount of computer for very little money.
Between those four there isn't a whole lot that you couldn't make. I'm interested in these because my kids are of an age where toying around with hardware is something they fancy, and this makes it all very much real-world. Seeing an LED blink, or a computer respond to a button press and move a servo, is a completely different experience from what's happening on the screens of their mobile computers and laptops. That half-way house between hard-core electronics and software is a nice step for them to get more familiar with what computers really are (besides entertainment and homework).
The only missing thing in NativeAOT (for my use cases) is EF Core. It's getting there. Once EF Core works well on NativeAOT, C# would be a fantastic alternative to Go.
Yeah, especially because you can do everything with the JIT during development and only go AOT when publishing, and it is still reasonable anyway.
Go isn't special, the world just happened to forget how fast AOT compiled languages with modules used to be, before C and C++ took over with their text file inclusion model.
Raspberry Pi (assuming 4?) supports UEFI via external firmware. bflat can target the AArch64 ISA and UEFI as a platform, producing compatible binaries that can be booted without any further post-processing.
It looks like the article was intended for x86 UEFI and then they changed the title to Raspberry Pi for the clicks. Nothing in the article applies to the RPi, nor even mentions it.
At a first browse, this looks incredibly cursed. Compiling C# to native code is almost antithetical to everything C# stands for.
I love it. Cursed programming abominations are one of my favorite things to poke at.
I'd be extremely interested to see how the output binary compares to that of an equivalent C++ program. If the performance and code size optimization is anywhere close, we're just a few steps away from C# on microcontrollers.
Whether or not that's a good idea remains to be seen. It sure would be cool though.
> If the performance and code size optimization is anywhere close, we're just a few steps away from C# on microcontrollers
My short foray into microcontrollers has shown that all the software development experience for them is absolute garbage.
The quality of IDEs is terrible, the quality of libraries is terrible, portability of libraries is terrible, community support is lacking and the odds of finding an existing library for some sensor are low, etc.
The actual programming language is only a small part of the problem.
So if we just replaced C++ with C# nothing would change.
But actually it would. Because the expectations of C#/Python/JS developers are different: they are not used to the C hell of managing individual registers. They want libraries that actually work, and their culture is that libraries should be portable.
So it would require a different attitude to work. It would force a different attitude.
Really all of your complaints apply equally to all domains of programming. Windows development is awful, Linux development is awful, web is awful, backend is awful. Bad libraries and bad APIs are universal and there is no escaping it no matter what ___domain you work in.
Overall you really seem to not have a clue what you're talking about. If you use real grownup tools, the experience of developing firmware is not any different from any other ___domain. With the obvious exception of debugging being more difficult. Arduino and everything involved with that ecosystem is trash.
Good libraries are available. Good drivers exist. If you simply leave the arena of arduino trash libraries you end up in a world that looks extremely similar to the rest of the OSS world. Quality varies of course, but you'd be a fool to think that no one in the entire industry writes good code.
Besides that, writing a driver is usually a very trivial thing to do. I always write my own, and in the majority of cases it's simply copying registers from the datasheet into a nice C structure. It takes like an hour.
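To make that concrete, here's a rough sketch of the same register-map idea, written in C# since that's this thread's topic; the peripheral, register names and offsets are invented for illustration, not taken from any real datasheet:

    // Hypothetical UART register map, copied "from the datasheet" into a struct.
    // On real hardware you'd point this at the peripheral's MMIO base address.
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Explicit)]
    struct UartRegisters
    {
        [FieldOffset(0x00)] public uint Data;      // DR  - transmit/receive data
        [FieldOffset(0x04)] public uint Status;    // SR  - status flags
        [FieldOffset(0x08)] public uint Control;   // CR  - enable, parity, etc.
        [FieldOffset(0x0C)] public uint BaudRate;  // BRR - baud rate divisor
    }

    unsafe class MmioDemo
    {
        static void Main()
        {
            // Stand-in for the real MMIO base address so the sketch stays runnable.
            var fake = new UartRegisters();
            UartRegisters* uart = &fake;

            uart->Control |= 1;        // set an "enable" bit
            uart->BaudRate = 115_200;  // write the divisor
        }
    }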
And your core argument is plainly wrong. If the language didn't matter, we'd still be writing straight C, if not direct assembly. Every embedded environment I've seen supports C++. Some even support the standard libraries.
Rust is also becoming quite popular for embedded. But clearly there's no reason or use for that, right?
I mean, if the language doesn't matter, why aren't we still writing COBOL? Why does any language higher than C exist at all?
And no, I don't think C# on a microcontroller would be practical. I want to see it because I think it'd be funny, precisely because it's so inappropriate.
>My short foray into microcontrollers has shown that all the software development experience for them is absolute garbage.
Read: I don't know anything about this industry, but I'm still gonna tell you it's all trash.
I'm not sure it's a very smart idea to attempt to dunk on something you admit you know very little about.
Hey, I was upfront that my comment could be wrong.
However, software is typically not portable between different brands, and the availability of libraries is not the same as in desktop/web development; it is significantly behind, no?
I find ASP.NET Core to be a fantastic dev platform. I'm running it on a $3/mo Hetzner ARM server. ASP.NET Core ships with its own highly optimized web server out of the box, and Microsoft also has their own Nginx competitor now too. Plus, in some scenarios you can compile down to native and run a very slim system.
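For what it's worth, the "ships with its own web server" part makes a complete app tiny. A minimal sketch (the port and route are arbitrary, and it assumes the Microsoft.NET.Sdk.Web project SDK), served directly by the built-in Kestrel server with nothing in front of it:

    // Program.cs - the whole app. Kestrel listens directly; no Nginx/Apache needed.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/", () => "Hello from Kestrel");

    app.Run("http://0.0.0.0:5000");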
What do you mean by that? I know what the term means but considering it's one of the most used programming stacks I don't understand what you mean when you use it.
I don't know what they're talking about. I've always viewed .NET, C# in particular, as not being very flashy and more targeted towards "getting stuff done". It does not shine particularly bright in any one specific area, but it is competent enough in most use cases to be a very safe choice for a wide array of applications.
I would think C# is meant to be a general purpose language based on that. Python too is a general purpose language but community libraries (which I think the support for C extensions helped with) enabled amazing things like scipy.
Similarly, I think C# excels at game development because of the many engines it is used in. Maybe it began because of XNA and then got popularised because of Unity, but I do not think it is due to any special features in C# itself that are not found in other languages. I am not sure about this so please correct me if I am wrong.
Hmm, out of all languages with GC, C# offers the lowest-level constructs (pointers, tracked references (refs), pure C structs and low/zero-overhead interop)*, which is something all other options either don't offer at all, or offer only partially or with higher overhead and worse UX. All of these are very useful for interoperating with graphics APIs directly and for managing memory manually in performance-sensitive code paths.
I don't think these were the reason behind C#'s rise in gamedev historically, but they do ensure it continues to be its mainstay.
*today you can also add powerful SIMD and BLAS-adjacent abstractions to this list, a good showcase of which would be Bepuphysics2, a heavily vectorized physics engine written in pure C# that is as fast as PhysX and faster than Jolt (another SIMD-using physics library written in C++)
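To show what those constructs look like in practice, here is a minimal sketch; the Particle type and the numbers are made up for illustration:

    using System;
    using System.Numerics;

    struct Particle               // a plain value type, laid out like a C struct
    {
        public float X, Y, Z;
        public float Velocity;
    }

    static class LowLevelDemo
    {
        static void Main()
        {
            // Stack allocation into a bounds-checked span - no GC involvement.
            Span<Particle> particles = stackalloc Particle[4];

            // A tracked interior reference into the span - no copy, no unsafe code.
            ref Particle first = ref particles[0];
            first.Velocity = 1.5f;

            // SIMD: one vectorized add across however many floats fit in a register.
            var a = new Vector<float>(1.0f);
            var b = new Vector<float>(2.0f);
            Console.WriteLine(a + b);
        }
    }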
By comparison, .NET's Blazor targets LLVM, and they either AOT or JIT; however, the client has to download a heavier runtime that includes at least a garbage collector (never mind the JIT), which is less than ideal. Basically, the original Wasm was designed for languages with linear memory and still makes a great target for C++ or Rust, but not for managed languages like Kotlin/Java. .NET's WASM is there only to support Blazor, which is a web framework, a sort of successor to Web Forms, and whose future is uncertain. Speaking of which, you're better off with MVC + HTMX, but I digress. So for more interesting use cases, Kotlin is actually ahead in their Wasm support.
For multi-platform support, the company behind it has a vested interest in targeting multiple platforms, and Kotlin Multiplatform also has Google backing. So, for one, you can share business logic on iOS, as you can integrate Kotlin libraries into Swift applications.
Over on the .NET side, the blessed solution by Microsoft for targeting iOS, Android, or the desktop is right now .NET MAUI. So, where's Xamarin? Where's Silverlight for that matter? That's right, Microsoft changes multi-platform solutions like they change socks, and I don't understand how anyone could trust them for a multi-platform solution.
WasmGC is a prototype that supports only the bare minimum, enough for languages with high-level constructs only, but not for something like C#, which has interior object pointers (ref) and uses them heavily for performance (spans are built on top of them).
I don’t understand how the logic of your post works, however: WASM in .NET is already used in production, versus something that is an early alpha? Also, on Kotlin and targeting something that is not Android, "Java Interop" is all I need to say.
Kotlin is really only relevant for Android, and because Google says so.
And even in spite of that, they had to backtrack on leaving Java behind, as Android was slowly becoming unable to consume modern Java libraries.
Kotlin on the browser lags behind Blazor, and Kotlin/Native was so badly implemented they had to redo the whole memory management concept, originally incompatible with JVM semantics.
Kotlin is used in many places where Java is or could be used. Some teams prefer Kotlin over Java for new Spring or Quarkus or Vert.x or Micronaut projects. Kotlin is relevant everywhere outside Google’s realm, too.
I’m unaware of that backtracking you’re mentioning, could you give some context?
How do you measure it lagging behind in the browser? According to Krause’s benchmarks Blazor’s WASM performance is steadily in the bottom 5% of tested frameworks, and devs generally tend to express that Blazor is suitable only for internal enterprise apps where the end users are lenient enough to accept such performance.
The limited GC in the initial Kotlin/Native implementation was reworked a long time ago.[1]
MAUI is rebranded Forms, and while MS is pushing it, it's far from the only way to build an Android UI. You can build it natively, sort of like since the beginning. While still supported, Xamarin transitioned quite nicely into .NET, where it can take all of the advantages. And you get a good reply about WasmGC in the other post.
Also .NET is ahead when it comes to cross platform UI compared to Kotlin - there are libraries such as Avalonia, Uno and others. And so on and so forth.
In my experience Uno and Avalonia UI are still quite experimental. And I totally dislike their looks. I spoke to Mike on the topic of theming; hope they introduce something more modern. To be fair, Avalonia UI is only now becoming multi-platform.
Uno has been very unstable, not production ready in my books.
As for Kotlin, it’s just a language running on a JVM or any other supported target. The interop with Java is stellar. You can use Swing or JavaFX, which now goes by OpenJFX, for the desktop. Vaadin and Hilla target V8. There’s also Kotlin/JS. There’s really a lot. Compose is much like SwiftUI, composable, similar to React in a way.
How have I missed Uno! I was familiar with Avalonia, but first time hearing about Uno. I'd love to see more UI frameworks for .NET that also work on Linux.
They might have meant Visual Studio Code, which due to Microsoft's continued commitment to naming things poorly, causes a good deal of confusion. VS Code doesn't have all the features of Visual Studio but it's turning into its own impressive ecosystem that even diehard Microsoft haters love.
Visual Studio for Mac is a thing and I find it pretty good. It doesn't feel as feature complete as the Windows version, but I've never found anything I couldn't work around.
Unfortunately it's being retired at the end of August next year. Thanks a bunch Microsoft.
Rider and VS Code are slowly killing Visual Studio for .NET development. Unless you're using .NET Framework (legacy framework with confusing name), you're likely better served using anything else.
For a long time, the only reason I ever needed to open VS was for WinForms and WPF visual editing. Earlier this year, Rider implemented a WYSIWYG editor for those frameworks. I think based on Avalonia.
Now I have no reason to use VS at all, and my life is better for it
> Microsoft already acknowledged VS Code will never provide feature parity with VS.
That's correct and misleading. The features that VS supports that aren't present in VS Code are mostly legacy features that need to exist for legacy support reasons and are not being utilized in modern development.
My only complaint with VS is that its optimal resource requirements always seem to be just beyond the limits of my work-issued machine. Functionally I have no issues with it and it's worlds better than a lot of other IDEs that I've used.
Could you share the specs of your machine? Visual Studio seems to run decently on my 4 GB laptop with an 8th generation i3 CPU. It does have an SSD though.
It does not run perfectly though. There is definitely a difference in input lag between VS and Sublime Text.
I have an i9-9880H with 16GB of RAM, I'm in the queue for an upgrade.
VS 2022 Enterprise takes a solid 78 seconds to load to the "Open recent" splash page and idles there at 5%. Opening a 16 project solution takes another 34 seconds and the CPU hovers around 48-67% CPU for a minute after opening the solution before dropping down to around 6%. Any coding causes CPU spikes with each keystroke up to 67%. Loading a 150 line class is pretty instantaneous but it takes another 15-20 seconds for the references, last changes, collapsible UI, and inherit/override dialogs to appear.
It always feels like I'm waiting, even if it's just 5 or 10 seconds, for what I need.
Yes, VS has been terribly slow all these years. Having 64 GB of RAM doesn’t always help much. Rider is fine on 16 GB, though the most recent release turned a bit sluggish after updating the SDK from RC2 to final, likely something misconfigured on my machine.
Huh. That seems odd to me but I only open 1 project solutions with not more than ~5 files open at a time, with no other applications other than my browser running. But those CPU spikes upon each keystroke seems very odd indeed.
If you're looking for a free alternative, VS Code with the most recent .NET/C# extensions is a very good experience. Good enough that I suspect it is the reason MS felt it no longer worth maintaining Visual Studio for Mac.
Why do you want that? It does not really win you performance once you start to care about it, yet it imposes severe requirements on the programmer to ensure that a manually deallocated object is not referenced anywhere.
All I want for Christmas this year is for Java Epsilon GC [0] to be made available in .NET.
Maybe I am missing something, but this doesn't seem like a very complicated ask. Granted, some developers would absolutely blow their feet off with this, but that's why you don't make it a default thing. Hide it behind a csproj flag and throw a runtime exception if the developer presses the GC.NapTime() red button without configuring the runtime appropriately.
There are a lot of use cases where you can strategically reorganize the problem and sidestep the need to meticulously clean your heaps. Running a supervisory process (which is allowed to GC itself) that periodically resets the principal ZGC process via business events or other heuristics is a pretty obvious path to me. Round-based multiplayer gaming seems well-aligned if you can keep allocations to a dull roar.
Given that the GC exists as a standalone .dll / .so when targeting JIT, you could theoretically replace it with your own implementation that only supports allocation. But realistically this may not be desirable, since allocation traffic in applications can be pretty high, and without reclaiming memory you'd need a lot of RAM to avoid quickly going OOM.
> the allocation traffic in applications can be pretty high
My use case is such that I have already done everything in my power to minimize allocations, but I have to use some parts of the framework which generate a small amount of ambient trash as a consequence of their operation. My desire to suppress GC is not so much about my own allocation story as it is keeping the gentle, gradual allocations from auxiliary framework items from triggering a nasty GC pause.
My biggest pain point - every time an HttpContext is used there is some degree of trash that accumulates. I have zero control over this unless I want to write my own HTTP+web socket implementation from zero.
The current solution for me is to GC as often as possible in workstation mode, but I would find a zero GC solution even better for this specific scenario. I suspect "rapid GC" is only sustainable while total working set remains small-ish. In the future, I may want to operate with working sets measured upwards of 10 gigabytes.
Unless you have a remarkably specific need, disabling GC is almost always the wrong path.
The "correct" way is to manipulate GC. You can suspend, manually start a GC run, pin memory, exempt an object from GC, and a few other things I've never played with.
AFAIK the only thing you can't do is immediately deallocate an object. But honestly I can't come up with a situation where an object must be deallocated immediately. Depending on your exact use case, there's ways around that if you really, really need to.
In general, the GC will almost always perform better than any manual manipulation you'd do to it. The main "acceptable" use is suspending GC inside very hot code and deferring the collection until your program is less busy. Like suspending GC while processing one frame in a game, and manually running GC in the space between frames. Even then, this is still discouraged.
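As a concrete illustration of that "suspend, then collect between frames" pattern, here's a minimal sketch using the standard GC APIs; the 16 MB budget and the FramePump/ProcessFrame names are arbitrary:

    using System;
    using System.Runtime;

    static class FramePump
    {
        static void ProcessFrame()
        {
            // ... hot, allocation-sensitive work for one frame ...
        }

        static void Main()
        {
            // Ask the runtime to hold off on collections while we stay under
            // a 16 MB allocation budget.
            if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
            {
                try
                {
                    ProcessFrame();
                }
                finally
                {
                    if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                        GC.EndNoGCRegion();
                }
            }

            // Quiet period between frames: collect on our own schedule instead.
            GC.Collect();
        }
    }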
Out of curiosity, what's your use case here? I really can't think of any situation that doesn't have a better solution.
It's been pretty straightforward to do this since the start, since C# has had structs and pointers the whole time. It's gotten a lot easier now that there are safe bounds-checked pointer abstractions (Span and Memory) so that you can work with raw memory anywhere you want without constantly having to manually do safety checks to avoid heap corruption. 'where T : unmanaged' means you can finally write generics around pointers too.
Now that you can return refs instead of just passing them around, that's also really nice for cases where you want to pass around an interior pointer into some sort of data structure - you can use refs whether a value is allocated via the GC or malloc or stackalloc.
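For example, a ref-returning indexer over raw memory looks something like this; NativeBuffer is a made-up type, just to show the 'where T : unmanaged' constraint and interior references working together (needs /unsafe):

    using System;

    unsafe struct NativeBuffer<T> where T : unmanaged
    {
        private T* _ptr;
        private int _length;

        public NativeBuffer(T* ptr, int length) { _ptr = ptr; _length = length; }

        // Hands out a ref into the underlying memory - the caller mutates in place.
        public ref T this[int index]
        {
            get
            {
                if ((uint)index >= (uint)_length) throw new IndexOutOfRangeException();
                return ref _ptr[index];
            }
        }
    }

    static class Demo
    {
        static unsafe void Main()
        {
            int* storage = stackalloc int[8];
            var buffer = new NativeBuffer<int>(storage, 8);

            ref int slot = ref buffer[3];  // interior pointer; works the same whether
            slot = 42;                     // the memory came from stackalloc, malloc,
                                           // or a pinned GC allocation
            Console.WriteLine(storage[3]); // 42
        }
    }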
Mojo’s model looks interesting here. No GC and no ref counts, but something akin to a reference-counting mechanism at compile time. Not sure how it works exactly, and the documentation (like the whole project) is very alpha.
It is like Xerox's and ETHZ's use of memory-safe systems languages for graphical workstations: monetary and human issues hinder adoption of great research ideas.
The problem with doing memory-model stuff as a compile-time flag is that it's all-or-nothing: it becomes very hard to transition a whole app to it, or even dream of making something like the whole standard library support it.
Typically you'd see a model that allows file-by-file transitions so that you can slowly clean up a codebase to make it 'safe' for a given model. This is (afaik) roughly how non-nullable references were added to C#.
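For reference, that per-file opt-in looks like this for nullable reference types today: a single directive flips the checks on for one file, so a codebase can migrate incrementally (the Customer type is just an example):

    #nullable enable

    public class Customer
    {
        public string Name { get; set; } = "";   // non-nullable: must be initialized
        public string? Nickname { get; set; }    // explicitly nullable
    }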
If you look at it from that perspective, I don't think you would end up wanting the compiler to help you with this. You'd introduce some sort of 'refcounted pointer' type, maybe called Rc<T>, which wraps a T* + a deallocator. I can't imagine C# ever letting you overload the -> operator though so the ergonomics would be bad.
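To illustrate why, here's a rough sketch of what such a hypothetical Rc<T> might look like; this is not an existing library type, just an illustration of the ergonomics problem - without an overloadable -> every access has to go through an explicit property:

    using System;
    using System.Runtime.InteropServices;

    unsafe struct Rc<T> : IDisposable where T : unmanaged
    {
        private T* _ptr;
        private int* _count;

        public static Rc<T> Allocate()
        {
            var rc = new Rc<T>
            {
                _ptr = (T*)NativeMemory.AllocZeroed((nuint)sizeof(T)),
                _count = (int*)NativeMemory.Alloc((nuint)sizeof(int))
            };
            *rc._count = 1;
            return rc;
        }

        public Rc<T> Clone()
        {
            (*_count)++;
            return this;
        }

        // No overloadable -> in C#, so callers write rc.Value everywhere.
        public ref T Value => ref *_ptr;

        public void Dispose()
        {
            if (--(*_count) == 0)
            {
                NativeMemory.Free(_ptr);
                NativeMemory.Free(_count);
            }
        }
    }

    static class RcDemo
    {
        static void Main()
        {
            var number = Rc<int>.Allocate();
            number.Value = 5;
            Console.WriteLine(number.Value);
            number.Dispose(); // last reference gone, native memory freed
        }
    }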
Programming the hardware ("metal") natively would be more bare-metal than implementing a hosted UEFI app that does stuff through UEFI APIs. In this case, e.g., programming the VideoCore hardware in the Pi.
I guess something like an OS that boots and then only plays the game, using a raw memory page for VGA and interrupts for keyboard input, not something that is hosted on UEFI.
Or controlling a DAC HAT to drive S-Video, or using LVDS. Microcontroller techniques make more sense than trying to use the GPU in a Pi, at least it seems that way to me. Is enough known about the GPU in any of the versions of the Pi to forgo the blob when making a video buffer?
Targeting UEFI is potentially practical - you could build something hard-realtime, like a controller that drives motors on a robot and does visual output for readouts at the same time.
I have a hard time imagining any practicality in going lower-level than that; it's really the wild west.
https://i.redd.it/kxks306cu9y81.jpg