First triangle ever rendered on an M1 Mac with a fully open-source driver (twitter.com/asahilinux)
307 points by pabs3 on June 5, 2022 | 60 comments



Neat. Getting a triangle to render is 99% of the work toward getting games running. Only the remaining 99% to go... :)



Interesting read. Thanks for the links! Explains the hardware rationale for tiled renderers really well.

I remember tiling techniques being used on desktop mostly for localising lighting passes. Being able to cache framebuffer tiles to get past memory bottlenecks is a more interesting insight.
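Roughly, the memory-traffic win looks like this (a toy CPU-side sketch; a real GPU bins triangles per tile up front instead of scanning every pixel, but the idea is the same):

    /* Toy sketch of tile-based rendering: rasterize into a small tile
       buffer (standing in for fast on-chip SRAM) and write each finished
       tile to the framebuffer (standing in for slow DRAM) exactly once. */
    #include <stdint.h>
    #include <string.h>

    #define TILE 32
    #define FB_W 256
    #define FB_H 256

    typedef struct { float x, y; } Vec2;

    static float edge(Vec2 a, Vec2 b, float px, float py) {
        return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
    }

    void draw_triangle(uint32_t *fb, Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color) {
        for (int ty = 0; ty < FB_H; ty += TILE) {
            for (int tx = 0; tx < FB_W; tx += TILE) {
                uint32_t tile[TILE * TILE] = {0};   /* "on-chip" tile memory */
                int touched = 0;
                for (int y = 0; y < TILE; y++)
                    for (int x = 0; x < TILE; x++) {
                        float px = tx + x + 0.5f, py = ty + y + 0.5f;
                        float e0 = edge(v0, v1, px, py);
                        float e1 = edge(v1, v2, px, py);
                        float e2 = edge(v2, v0, px, py);
                        /* inside if all edge functions share a sign */
                        if ((e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                            (e0 <= 0 && e1 <= 0 && e2 <= 0)) {
                            tile[y * TILE + x] = color;
                            touched = 1;
                        }
                    }
                if (touched)   /* one burst write to DRAM per dirty tile */
                    for (int y = 0; y < TILE; y++)
                        memcpy(&fb[(ty + y) * FB_W + tx], &tile[y * TILE],
                               TILE * sizeof(uint32_t));
            }
        }
    }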


> The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.[1] — Tom Cargill, Bell Labs

[1] https://en.wikipedia.org/wiki/Ninety–ninety_rule


This is extremely fascinating. I can’t even fathom how people can create open-source software for closed hardware. Any resources to learn the process?


Here's the main channel (everything but GPU) of reverse-engineering and writing open drivers for M1: https://www.youtube.com/c/marcan42/videos


There's also this other channel where the GPU work is happening. The author is the same.

https://www.youtube.com/c/AsahiLina/videos


Wait, that's not Marcan, is it?


Alyssa Rosenzweig is doing the heavy GPU engineering.

https://rosenzweig.io/


That's Marcan, all right.


See the blog posts linked above, especially the earliest one.


This is incredible work. An M1 MacBook running Linux would be a dream setup for me.


They already run Linux. Wi-Fi works. It's already quite usable; I'm just too happy with macOS to bother with it.


Is it normal that the first time you manage to render a triangle it has what looks like a pretty complex gradient colour? I’d have expected rendering a solid colour or without fill to be easier and to happen first, but that’s just a completely uninformed guess. Or is this person just way ahead of anyone else working on the same thing, and they would have had it working without that colour at some earlier point but didn’t show it off until now?


Since the color interpolation is done by the hardware from per-vertex attributes, it's no additional effort compared to a solid-color triangle (from a coder's point of view, at least).

But the simple triangle still means that most of the render pipeline already works. The next step in the "Hello Triangle" evolution would be to apply a texture to the triangle.
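For concreteness, the application-side vertex data for the classic RGB triangle is just one color per corner; the gradient in between is the hardware's interpolation (a C-style sketch, layout illustrative):

    /* The classic "hello triangle" vertex data: one color per corner.
       The gradient costs nothing extra; the rasterizer interpolates it
       in hardware. */
    typedef struct {
        float position[2];   /* x, y in clip space */
        float color[3];      /* r, g, b */
    } Vertex;

    static const Vertex triangle[3] = {
        { {  0.0f,  0.5f }, { 1.0f, 0.0f, 0.0f } },  /* top corner: red    */
        { { -0.5f, -0.5f }, { 0.0f, 1.0f, 0.0f } },  /* left corner: green */
        { {  0.5f, -0.5f }, { 0.0f, 0.0f, 1.0f } },  /* right corner: blue */
    };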


Ah that hardware interpolation is what I wasn’t aware of, thanks! :)


Does modern hardware still have this "default" behavior? I could imagine you would have to write a (basic) shader to get any kind of output, whereas back in the day the 3D accelerator knew how to do Phong shading etc. in hardware.


Modern GPUs don’t know Phong shading; they expect developers to supply a pixel shader for that. However, they still interpolate all these numbers in hardware.

For traditional rendering, a vertex shader outputs a float4 position and a bunch of other per-vertex attributes, which may contain normals, colors, texture coordinates, or anything else. The GPU hardware itself only uses the positions.

The hardware rasterizes triangles into a pixel grid, interpolates the rest of the per-vertex attributes over the triangle, and calls the pixel shader for each pixel covered by the triangle, passing it the interpolated attributes.
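In GL terms, the split looks like this (a minimal sketch; the interpolation of v_color between the two stages is the fixed-function part):

    /* Minimal GLSL pair (as C string literals): the vertex shader emits a
       per-vertex color, fixed-function hardware interpolates it across the
       triangle, and the fragment shader receives the already-blended value. */
    static const char *vertex_src =
        "#version 330 core\n"
        "layout(location = 0) in vec2 a_pos;\n"
        "layout(location = 1) in vec3 a_color;\n"
        "out vec3 v_color;\n"                  /* written once per vertex... */
        "void main() {\n"
        "    v_color = a_color;\n"
        "    gl_Position = vec4(a_pos, 0.0, 1.0);\n"
        "}\n";

    static const char *fragment_src =
        "#version 330 core\n"
        "in vec3 v_color;\n"                   /* ...arrives here interpolated per pixel */
        "out vec4 frag_color;\n"
        "void main() { frag_color = vec4(v_color, 1.0); }\n";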

It’s more complicated in reality: clipping, early depth rejection, pixel shaders needing partial derivatives (so pixels are grouped into 2x2 blocks), and some GPUs doing tiled rendering. But at a high level the workflow has stayed about the same since programmable shaders arrived in Direct3D 8 in 2001, alongside the GeForce 3.

The only major disruption so far is UE5, which rasterizes Nanite meshes with compute shaders instead of the hardware rasterizer.


Is there a particular book you recommend to approach learning this subject?


Not quite a book, but if you already know, at least to some extent, some 3D GPU API, I can recommend this series of blog posts: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...



Thanks!


You need both a vertex and a pixel shader, but AFAIK even on modern GPUs the vertex attribute interpolation happens as a fixed-function feature between the vertex and pixel stages (e.g. the vertex shader runs per vertex and outputs a different color for each vertex, and the pixel shader runs per pixel and gets the hardware-interpolated per-pixel color as input).


I think that's just how the hardware does it when you assign a different color to each vertex.


Hmm, I thought Linux already supported the ARM architecture. Is there something different about the M1? Is it not like other ARM chips?


You're right, but this is about the Apple GPU embedded in the M1 rather than the CPU. There is no existing open driver for this Apple GPU (for doing "accelerated" 2D/3D graphics).


There's more to a computer than just the CPU ISA.

While this particular thread is about the GPU, there is indeed "something different" about the M1 otherwise too. It is quite the non-standard SoC: unlike basically everything else (other than early Broadcom RPi crap) it even uses a custom Apple interrupt controller instead of the ARM GICv2/3. Also a custom IOMMU "DART" and a bunch of other peripherals. Very unusual: NVMe exposed as a plain hardcoded MMIO device (not enumerated over PCIe).


Could this ultimately wind up producing eGPU support for the M1?


Nope, this would be an open-source driver for the GPU inside the M1 chip, which has to be reverse engineered. But it’s great news for anyone hoping for Vulkan support on the M1's GPU cores!

For external graphics support we would, at a minimum, need an open-source Thunderbolt driver to expose the PCIe lanes to an external GPU, and then we would need to recompile the open-source GPU drivers for the Apple Silicon / ARM64 machine targets. The good news there is that the AMD open-source driver is very performant; the bad news is that it’ll require machine-specific patches, as I believe it has some low-level assembly in it.

I’m sure we’ll get there with time, especially with so many smart people working toward Linux on Apple Silicon chips.


For the uninitiated, could someone explain why this is important? Is rendering triangles really that difficult?


As Carl Sagan purportedly said, "In order to draw a triangle, you must first invent the universe". Once you have a thorough understanding of the proprietary hardware and its drivers, it's not a complicated thing to draw. But you need to do a lot of groundwork to construct that simple shape on less-than-friendly turf.


Not just purportedly - you can watch him say it in the original Cosmos, which I highly recommend.


Strange that the GP's comment is the only google result for the phrase...

edit: Ah, I think the original was "If you wish to make an apple pie from scratch"


It means that most of the rendering pipeline is working: getting data from the CPU side to the GPU side; compiling, uploading, and running shaders (a big deal!); fixed-function render states / pipeline state objects; rasterization to the framebuffer; and displaying the result. I'd wager that getting an untextured triangle on screen is probably 70% of the total work (I'm not a GPU driver coder, but that number is about right for writing a "user mode" 3D API wrapper).
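To make that list concrete, here is roughly what the host side has to do just for one triangle, where each call exercises a different chunk of the driver stack (sketch only: assumes a GL 3.3 context and function loader already exist, plus the Vertex/triangle data and shader source strings from the snippets upthread):

    #include <stddef.h> /* offsetof */

    static GLuint compile(GLenum stage, const char *src) {
        GLuint s = glCreateShader(stage);      /* driver's shader compiler */
        glShaderSource(s, 1, &src, NULL);
        glCompileShader(s);
        return s;                              /* error checks omitted */
    }

    void hello_triangle(void) {
        GLuint prog = glCreateProgram();       /* program linking */
        glAttachShader(prog, compile(GL_VERTEX_SHADER, vertex_src));
        glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fragment_src));
        glLinkProgram(prog);

        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);            /* fixed-function input state */
        glBindVertexArray(vao);
        glGenBuffers(1, &vbo);                 /* CPU -> GPU data path */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void *)offsetof(Vertex, position));
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void *)offsetof(Vertex, color));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);

        glUseProgram(prog);                    /* pipeline state */
        glDrawArrays(GL_TRIANGLES, 0, 3);      /* submission + rasterization */
    }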

Also: this is on hardware with no public information available, which has to be reverse engineered from scratch; that is the actually incredible part.


This is the first time that the Apple M1 GPU has been used outside of macOS. The earlier work (links elsewhere in thread) was all under macOS. So this means that there has been some reverse engineering of the macOS kernel side of the proprietary macOS driver, and we are one step closer to a Linux or BSD kernel driver, which the macOS open source userspace driver could be ported to.


Very hard, if you have to figure out most of the hardware along the way. This triangle was rendered with literally no other software running on the main CPU than what they wrote. So certainly no high level API to call...


Rendering a triangle usually means you have a working driver, meaning you can talk to the GPU and get it to render things via OpenGL / Vulkan / etc.

https://en.wikipedia.org/wiki/Rasterisation


https://rampantgames.com/blog/?p=7745:

“It was sometime in my first week, possibly my first or second day. In the main engineering room, there was a whoop and cry of success.

Our company financial controller and acting HR lady, Jen, came in to see what incredible things the engineers and artists had come up with. Everyone was staring at a television set hooked up to a development box for the Sony Playstation. There, on the screen, against a single-color background, was a black triangle.

“It’s a black triangle,” she said in an amused but sarcastic voice. One of the engine programmers tried to explain, but she shook her head and went back to her office. I could almost hear her thoughts… “We’ve got ten months to deliver two games to Sony, and they are cheering over a black triangle? THAT took them nearly a month to develop?”

What she later came to realize (and explain to others) was that the black triangle was a pioneer. It wasn’t just that we’d managed to get a triangle onto the screen. That could be done in about a day. It was the journey the triangle had taken to get up on the screen. It had passed through our new modeling tools, through two different intermediate converter programs, had been loaded up as a complete database, and been rendered through a fairly complex scene hierarchy, fully textured and lit (though there were no lights, so the triangle came out looking black). The black triangle demonstrated that the foundation was finally complete: the core of a fairly complex system was completed, and we were now ready to put it to work doing cool stuff. By the end of the day, we had complete models on the screen, manipulating them with the controllers. Within a week, we had an environment to move the model through.

Afterwards, we came to refer to certain types of accomplishments as “black triangles.” These are important accomplishments that take a lot of effort to achieve, but upon completion you don’t have much to show for it, only that more work can now proceed. It takes someone who really knows the guts of what you are doing to appreciate a black triangle.”

This is similar, but slightly different. Here there weren’t months of work writing tools to render scenes using Sony’s documentation, but months of work figuring out what Apple’s internal documentation would say about rendering a triangle.


It allows 3D acceleration on M1 Macs when running an OS other than macOS. Linux, specifically.


The tweet doesn't refer to a Linux driver, but to a driver for the Asahi m1n1 bootloader thing:

https://www.phoronix.com/scan.php?page=news_item&px=Asahi-Li...


Currently, yes. But that's just for development purposes. Nobody is expected to run m1n1 itself. One of the reasons for this development is to create a Linux driver doing the same.


It's not difficult as such (I mean a triangle is not hard, but getting anything out at all is), but it's the basic building block. If you can render one triangle, you can render it lots of times and create any scene (simplifying a bit, but still...).


All the models in 3D games are made up of triangles rendered on the GPU. This is the first time even one triangle has been rendered on an Apple M1 GPU without the macOS kernel driver, using fully reverse engineered open source code.


It's the rendering-with-an-undocumented-GPU part that's difficult; triangle meshes are just the geometry representation used on current GPUs.


Thank you Alyssa Rosenzweig for your hard work to get us here.


I imagine they will receive a cease and desist?

Did they reverse engineer something?


This is a clean-room open-source implementation. No proprietary blobs are used.

Additionally, Apple intentionally designed the M1 architecture to support additional OSes. They have even made changes to their code that fixed issues the Asahi project was having. So while there's no official support, this project is certainly known to Apple and has raised no concerns so far.


Could you explain what Apple changed in their code to help Asahi?

Didn't you just say no proprietary code is used?

I'm confused.


https://twitter.com/marcan42/status/1471799568807636994

[...] they *also* added a raw image mode that will never break again and doesn't require Mach-Os. And people said they wouldn't help. This is intended for us.

HN discussion about this tweet: https://news.ycombinator.com/item?id=29591578


One of the recent macOS updates added a new option that makes booting non-macOS kernels easier. Apple engineers do not need this option, so it was presumably added to help Asahi (as there are no other known efforts to run a non-macOS OS on these machines).


Perhaps releasing internal code or documentation involves a mile of red tape, so some forward-thinking people are doing what they can to support the project while avoiding the above-mentioned issues.


AFAIK, to reverse engineer, Asahi runs a virtual machine layer between macOS and the hardware. If Apple made changes in their code, then perhaps those helped make the instruction logs more obvious as to what the GPU was doing.



> Additionally Apple intentionally designed the M1 architecture to support additional OSes

How can you say that? Where is the documentation for M1 that can be used to implement support for other OSes?


See the section "Permissive Security Policy" at https://support.apple.com/en-ca/guide/security/sec7d92dc49f/...

If you're looking for specific hardware documentation, it doesn't really exist per se, but the xnu kernel is open source, so a lot of the basic architecture is out there if you want to look, I'm pretty sure.

That said, AFAIK Asahi is avoiding looking at the xnu source and is mostly using their own m1n1 bootloader/hypervisor to run macOS under and monitor hardware interaction, deriving their drivers from that.

(m1n1 is honestly some incredible work, and I actually used it to get a non-Linux, non-macOS OS to boot to a serial console a few weeks ago; it was fun)


Now that’s an interesting teaser! Care to share more, even if it was just a personal project?


can't right now but hopefully eventually :)


Not an attorney, but from what I understand, in the traditional view of copyright it is often legal to reverse-engineer a proprietary system for integration with third-party hardware/software, as long as nothing copyrighted from the original product (and not otherwise licensed to you) ends up in the final third-party product.

Some people make a big deal out of reverse-engineering blobs through disassembly, particularly for certain discrete components (e.g. DSPs or GPUs), but a) writing a set of clean-room specifications not containing any copyrighted information to use as a reference is a well-known process and often considered legal, and b) m1n1 in some cases makes it easier to understand how the hardware is being used than manual disassembly of blobs or drivers would.


Surprised there's not an NFT of it :P


Yesss, let the hate flow through you!



