AMD RDNA 4 – AMD Radeon RX 9000 Series Graphics Cards (amd.com)
105 points by pella 68 days ago | 182 comments



Is modern Radeon support on Linux still great? I've heard that Nvidia GeForce on Linux has always been kind of rough, but that it has massively improved over the past 3-4 years. Is Radeon still the way to go for Linux?

EDIT: My usecase is a Linux gaming PC. I was fortunate enough to score an RX 6800 during the pandemic, but moved from Windows 10 to Linux a month ago. Everything seems solid, but I'm looking to upgrade the whole PC soon-ish. (Still running a Ryzen 1800X, but Ryzen 9000 looks tempting.)


I have both a modern Radeon and Nvidia card in my PC.

Radeon overall offers a smoother desktop experience, and works fine for gaming, including Proton-enabled games on Steam. OpenCL via ROCm works decently in Darktable, with some occasional crashes. Support for AI frameworks like JAX, however, is very lacking; I spent many days struggling to get it to work well before giving up and buying an nvidia card. geohot's rants on X are indicative of my experience.

The nvidia card works great for AI-acceleration. However, there is micro-stuttering when using the desktop (gnome shell in my case) which I find extremely annoying. It's a long-known issue related to the power manager.

In the end, I use the nvidia card as a dedicated AI-accelerator and the Radeon for everything else. I think this is the optimal setup for Linux today.


I use KDE Plasma and experience no microstutter with an Nvidia graphics card. Is there an open GNOME issue tracking what you're seeing?


I think it has something to do with power management on the NVIDIA card. When actively using it, things are fine, but there's a stutter when opening Activities or switching workspaces.

I've tried the triple-buffering mutter patch[0] and was still experiencing issues. COSMIC is looking great, but I get some weird screen artifacts at random.

Hyprland is the only environment that hasn't had any performance issues with my 4070 Super.

[0] https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1441


I'm not sure, but I've found a couple of reports at nvidia's linux forum and their github issue tracker. This is with the open source kernel module & wayland. It's possible that X11 with the closed source module works better, but I prefer wayland.


Buy AMD or Intel. Not Nvidia.

Rationale:

Both AMD and Intel have provided well maintained open-source drivers for many years[1]. Nvidia hasn't. In practice you need to use closed-source drivers with Nvidia, which causes problems with kernel upgrades, assorted issues for which you don't get help (because the kernel is now marked as tainted), and problems with Wayland. The latter stem from the fact that Nvidia refused[2][3] to support Wayland for a long time. Blaming this on implicit sync being a disadvantage for Nvidia is not appropriate: they could have participated a long time ago and either adapted to implicit sync or helped add explicit sync years earlier.

Furthermore:

  * AMD gave us Vulkan as standard, which builds upon AMD's Mantle.
  * AMD gave us FreeSync i.e. VRR, which is usable by others.
  * AMD gave us FSR, which is usable by others.
What did Nvidia give us?

  * High prices.
  * Proprietary and expensive GSYNC.
  * Refused to publish documentation and open-source drivers for many years.
  * Own video-decoding libraries.
It isn't just the documentation, drivers and Wayland; Nvidia's complete track record is bad. Their only argument is some benchmark wins and press coverage. The most important feature for all users is reliability. And Linus Torvalds[4] said everything that needs to be said. If Nvidia changes its company politics and proves it over years of good work, it may be possible to reevaluate this. I don't see that happening within the next few years.

[1] Well. Decades?

[2] https://web.archive.org/web/20101112213158/http://www.nvnews...

[3] https://www.omgubuntu.co.uk/2010/11/nvidia-have-no-plans-to-...

[4] https://www.youtube.com/watch?v=tQIdxbWhHSM


> Both AMD and Intel provide well maintained open-source drivers for many years[1]. Nvidia doesn't. You need to use closed-source drivers with Nvidia in practice.

Is that what this is? https://github.com/NVIDIA/open-gpu-kernel-modules


Nvidia drivers are not open source.

Nvidia has open-sourced the kernel module "driver", which was only recently declared stable enough for general consumption.

But the kernel-mode driver is only a small part of what you would call the graphics card driver, and the bulk of it is still very much a proprietary blob.

The true open source driver for NV cards is Nouveau[1], which works OK with older cards but is slow to support the newest cards and features. Performance, power management and hardware acceleration are usually worse than with the official drivers, or don't work at all.

[1] https://nouveau.freedesktop.org/FeatureMatrix.html


Thanks. All of that is correct.


I see. Thanks for explaining.


https://developer.nvidia.com/blog/nvidia-transitions-fully-t...

It has only been a year since NVIDIA actually recommended using the open-source drivers, and that's only for Turing (NV160/TUXXX) and newer. It's easier to use my AMD integrated GPU on (Wayland) Linux than it is to deal with bugs from NVIDIA drivers.


AMD furthermore:

* ECC on plain unbuffered desktop RAM for 20 years now.


I helped a friend install Linux last year; they had an NVIDIA card, aaaand they had problems. Both in X and Wayland. I was on a 6000 series at the time and it worked great, as you know. I am on a 7000 series now, and it's as smooth as ice. AMD is the way to go for Linux.

There's that new NVIDIA driver cooking, but I'm not sure what the status is on it.

BTW, even Ryzen 3000 is a substantial upgrade over the 1000 series. Any X3D processor (I think the 5000 series is when they first appeared) is even more so. Ryzen 5000 is also less sensitive to RAM timings. I applaud you for avoiding latest-and-greatest syndrome, but even upgrading to an older CPU generation would be a significant boost.


> I applaud you for avoiding latest-and-greatest syndrome, but even upgrading to an older CPU generation would be a significant boost.

Being out of a job for a while makes me very loath to upgrade hardware that still works OK. That 1800X still does what I want it to do, and does it 'fast enough', though how far into the future that will last is unclear. Cyberpunk 2077 being the most demanding game that I've played probably helps. :D


I feel that.

It's difficult to justify any new hardware until I'm in a better place; while it'd be nice, I'm not suffering enough to /need/ a new system.

Until the beginning of 2020, during university, I was still on a 3930K from launch-day in ~2011 and GTX 680. Honestly, I'm not sure I would've bothered if it weren't also for the fact that I wanted to be able to test AVX2 implementations of some of my code without relying on an emulator or someone else's machine every time.

It probably helped that I mostly only care about Source games and RuneScape. But I haven't really played anything since my ex-girlfriend and I broke up in ~2022.

I took his RX 480 to have a display-out and gave him my 2070 Super so it wouldn't go to waste.


Oops, realized too late that I deleted a sentence by accident; I gave the GPU to my younger brother.


I run Arch on my home desktop, so I get the very latest stuff pretty quickly, and the latest drivers have improved the nvidia situation a LOT. I've found everything pretty stable for the most part over the past month. Multi-monitor VRR was still giving me problems, but I just installed a driver update that supposedly has bug fixes for it, so I'll be trying that out later tonight.


Which GPU are you running? I just got a 4080 SUPER and I'm thinking of trying Linux with it.


2080ti for me!


To be fair, I have an nvidia card, I had problems a year ago, but it's gotten a lot better in the past year.


The status is that NVIDIA latest mostly works on Wayland now, but if you are on Debian the timer just started for it to not suck by default.


I had a laptop with a mobile GeForce RTX 4060 - it was a pain to run Linux on, mainly because of Nvidia.

I got rid of it and built myself a PC (not just because of Nvidia, gaming laptops are just a disappointing experience). And went all in with Team Red - Ryzen and Radeon. It's a joy to run Linux on, I work on it and have played several games - 0 problems.

Nvidia may be the leader in performance, but if it means you're stuck with Windows - I'd say it's not worth it.


I built my current home PC the day the 4090 was released, and set up Fedora on it. It was usable at first, with some hiccups in certain updates, where I needed to stay on an older kernel for some time.

Wayland on Nvidia was a mess, though.

But as time goes by, things get better and better.


Older AMD cards have excellent Linux support across the board.

Brand new AMD cards have great support -- in mainline Linux and MESA. But if you're using a kernel and MESA from a distro, you're gonna have a hard time for the first ~8 months. You'll either have to run a rolling release distro or make your own kernel, linux-firmware and MESA packages. I had to do this when I got a 7900XT card a few months after launch while running Ubuntu.

This is the Linux driver model working as intended, and it sucks.

I suspect you may also encounter issues with Flatpaks, since, from my understanding, these use the MESA userspace from a runtime, so they won't work until the Flatpak runtime they use updates its MESA. I don't think it helps to have a working MESA on the host system. I'm not 100% certain about this though; would love some feedback from people with a better understanding of these things.
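For what it's worth, a rough way to check both sides of that (just a sketch, not definitive; glxinfo usually comes from a package called mesa-utils, and the Flatpak GL runtime name is an assumption on my part):

  # Host side: kernel and Mesa versions actually in use
  uname -r
  glxinfo -B | grep "OpenGL version"

  # Flatpak side: apps use the Mesa bundled in the freedesktop GL runtime,
  # not the host's, so check which runtime is installed and keep it updated
  flatpak list --runtime | grep Platform.GL
  flatpak update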


I use a 6800XT with Debian 12. As long as you have the `non-free-firmware` repo enabled in Debian 12 (it is by default), AMD cards should just work.
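For anyone setting that up by hand, this is roughly what it looks like on Debian 12 (a sketch; the mirror URL is just an example):

  # /etc/apt/sources.list -- note the non-free-firmware component,
  # which fresh bookworm installs enable by default
  deb http://deb.debian.org/debian bookworm main non-free-firmware

  # firmware blobs used by the amdgpu kernel driver
  sudo apt install firmware-amd-graphics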

I've played quite a few games on Linux. Performance is still better on Windows though.


I'm using Debian testing with an RX 7600 XT. Performance is the same or better on Linux. It helps a lot that the OS isn't a spam/spyware suite.


That isn't my experience at all. E.g. in Helldivers 2 I have to drop to medium settings, and Black Myth: Wukong I have to run at lower settings than on Windows. This is with Debian Testing.

It isn't just me who has observed this. I have a friend using Arch, and he noticed there is higher input latency and worse performance.

I’ve also noticed some weird mouse behaviour in Doom Eternal.

It is enough of a difference that I just reboot into Windows now to play games.


Like another poster, I tried to help a friend move to Linux recently and he had an nvidia card, and it was nothing but problems. He eventually went back to windows and I don't blame him.

But instead of random anecdotes, here's valves word on it: https://www.pcguide.com/news/nvidia-drivers-are-holding-back...


I use a Radeon 7900 GRE for gaming on Linux Mint without any problems, and have even tried some demoscene entries through Steam (they didn't work with plain Wine). I managed to run LM Studio and deepseek-llama-70B, but installing ROCm support was a little more involved than "just download this and run"; I had to do several steps and install some missing packages, which took me half an hour.


I resorted to scripts from smxi.org for installing the Nvidia drivers on Debian for my GTX 770, but the AMD RX 7800 XT is just fine once the firmware is enabled.

Ubuntu seems to be even better. I turned my system into a Frankendebian for a day to try out ROCm, the AMD install scripts work out of the box on newer Ubuntus.


As always, wait a year or two for the drivers to catch up.


A decade or two with Nvidia.


Nvidia has long been ahead of AMD. They seem to be close to parity right now thanks to Valve’s efforts, but that has historically not been the case. AMD was the one that needed a decade to catch up until Valve started helping.

I wonder if the Radeon driver has gotten reliable GPU restart support. The GPU becoming unrecoverable if a shader crashed was long a major issue on Radeon on Linux, and I am not sure if valve fixed it for AMD.


On Windows Nvidia has better drivers. On Linux, AMD is far ahead.


> On Linux, AMD is far ahead.

Depends on the metric. I found Nvidia drivers to be better on every platform. It's just that, because they're closed source, they're a PITA to install and update, but everything in between is great.

I've switched to AMD for religious reasons, though. Even the update story is worse on Windows for AMD, at least it was when I had an RX 6900 XT.


When I was using my 1080Ti on Debian with the official driver, I would frequently have problems on screen 2 where there seemed to be no video acceleration. I could fix this temporarily through some setting in the Nvidia drivers but it would stop working after a reboot.

I had several bad updates (this was when using Fedora) and was once left without working graphics 30 minutes before I had to start work. I ended up plugging in a really old AMD GPU to be able to work for the day and then spent several hours faffing about to get graphics back up.

I will only buy AMD/Intel cards now because it is plug and play on Linux. I've had no problems with the AMD card on Debian 12. On Debian 11 I had to enable a non-free repo to install the relevant firmware. The 1080 Ti, as awesome as it was at the time, only worked properly and reliably under Windows.

The other issue with Nvidia is when they stop supporting your card, their drivers will sooner or later not work with the kernel. I have an older machine that works quite well and I had to buy another GPU as the legacy Nvidia drivers do not work with new kernels. The hardware itself works fine.


Since it was Debian, I have to ask: was the driver already 10 years old when you used it? I also had a 1080 Ti (and a regular 1080); they worked flawlessly in Windows 10, FreeBSD and NixOS. I don't recall which versions I ran though.


As I said, there was an issue where one screen wouldn't be accelerated. This was on both Debian and Fedora, around 2020. I started using Debian around version 10, I think.


Did you report the issue?


It is mostly the same driver on both platforms. In any case, I use Nvidia’s 565.77 driver on Linux with KDE Plasma 6.2 and Wayland. It works well for me on my RTX 3090 Ti.


Most of the Linux Nvidia driver bugs seem to be related to the DRM/KMS layer, not the shared code. That's still a pretty big surface area.


Was never my experience over decades. Nvidia's proprietary drivers were a pain in the ass to install, but they were great to use.


Ditto. I was around to remember the fglrx days on the AMD side. Ugggggggly. And that situation persisted for over a decade.


Implying Windows Nvidia > Linux Nvidia?


Your info looks a decade old. AMD drivers on Linux are better than NVIDIA's. They simply work out of the box.


I find Nvidia has a great out-of-the-box experience too, or as close to out of the box as it can be when you run Gentoo like I do. Gentoo needs to have the drivers added to it. This applies to both AMD and Nvidia equally, since a basic Gentoo install has no graphics support.

Anyway, Valve has improved the AMD community driver on Linux, but I have yet to hear any news on RADV being able to recover from GPU hangs. Nvidia’s driver can. The last I looked a few years ago, the security guys were begging the AMD driver developers to look at static analyzer reports. Meanwhile, the open source driver that Nvidia released shows that Nvidia uses static analyzers given that it has preprocessor directives from Coverity. I really would not regard AMD as being at Nvidia’s level yet on Linux, even if it is coming close.


Claims about the Nvidia drivers on Linux being bad are largely a myth. Nvidia has long been the gold standard on Linux (for decades). The main reason for complaints is that Nvidia was slow to support xwayland acceleration (mostly due to Linux mainline making it harder for them - see dmabuf being GPL symbol exported), but that has since been fixed.


> support xwayland acceleration (mostly due to Linux mainline making it harder for them)

This is not true. There were many differences with how Nvidia had been doing things in their drivers (partially due to them reusing code on all platforms) and how Mesa/Wayland was already doing things, namely with explicit/implicit sync and GBM/EGL streams, but there was no intention to make things hard for Nvidia.

Mesa/Wayland ended up supporting explicit sync so Nvidia's driver could work.


DMA-buf was GPL symbol exported specifically to make things harder for Nvidia. This is well known and is not the first time mainline developers have intentionally sabotaged third party driver developers. Saying otherwise is contrary to reality.

Reusing code on all platforms is a reason why the Nvidia driver has long been so good. It deduplicated effort so that they could focus on making the driver better. AMD avoided this prior to vulkan and their drivers had long been a disaster until Valve started helping on Linux. AMD tried copying the unified driver approach for the vulkan portion of their driver with AMDVLK and had some minor success, although the community driver developed by valve is better and they would be better off porting that to Windows.


Because any code using it was considered a derivative work and thus should be under GPL too.

That's what happens when you try to shoehorn non-GPL code into the kernel.


It was a maintainer decision. The symbol could easily have been a regular export.


Alternatively, Nvidia could have respected the software license of the kernel.


The kernel’s graphics drivers are usually MIT licensed. This dates back to code being moved from userland into the kernel for DRM, which was Linux’s idea. The Linux kernel developers are the ones who rocked the boat here.


You're confusing the userland graphics libraries (Mesa) and the kernel drivers (nouveau etc). The kernel ones are all GPL.

The userland libraries are MIT because everything links to them, including propriety software.


Contrary to what other people are saying, I've had no significant issues with nvidia drivers on Linux either, but I use X not Wayland so that may be why. Play tons of games, even brand new releases, without issues.


I have used both. The experience on both of them is at parity right now.


They are not a myth. Those cards frequently have issues with newer hardware, plus a whole handful of issues with Wayland itself, such as screen tearing.


The thing is, if you don't need graphics output, they are rock solid. I can understand how someone who might use them primarily in headless servers would say the drivers are great.


Graphics output is largely issue free on Nvidia too.


this has never been the case for me, and evidently for the vast majority of others as well.


Those who have had or perceived issues tend to be the loudest.

I have had a good experience with Nvidia and when there is an issue, I report it and Nvidia fixes it within months. The only time I have seen a better turn around was Intel graphics, where they fixed things within 24 hours after I pointed out a bug in their kernel driver. AMD graphics on the other hand seems unresponsive to community reports or outright refuses to handle issues.

For example, they refused to implement VK_EXT_fragment_shader_interlock in AMDVLK despite implementing the equivalent for D3D12 on Windows:

https://github.com/GPUOpen-Drivers/AMDVLK/issues/108

The damage in that case is limited to Windows, as the community driver for Linux implemented it, but it made cross-platform support in emulators more difficult. If it were not for Valve working on the community driver, the Linux experience on AMD graphics would be nowhere near as good as it is on Nvidia graphics.


I am running Wayland on a 3090 Ti and there are zero tearing issues in KDE plasma 6.2.

I do not know what you mean by “issues with newer hardware”. It has always worked just as well on Linux as it worked on Windows as far as I know.


The 3090 Ti, while still an excellent card, is hardly "newer hardware" at this point in 2025.


That is irrelevant considering the quote was “those cards frequently have issues with newer hardware”. I was asking what this referenced newer hardware is.


In my experience, with KDE Plasma, Nvidia is quite awful. Animations are laggy, and the desktop slows down when a new window opens. As well, there are a bunch of small issues I’ve only seen when using Nvidia.


I run KDE Plasma. I have no issues with the animations. I am not sure why your experience would be different.

In the past, anyone experiencing that sort of thing had an MTRR issue from how the BIOS set things up, which could be fixed by setting NVreg_UsePageAttributeTable=1 on the Nvidia kernel module, but I have not heard of anyone in that situation for a few years now. I had been under the impression that the setting was obsolete, as Nvidia had made it the default behavior.
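For anyone who still hits it, that module option is normally set through modprobe (a sketch; the file name is arbitrary):

  # /etc/modprobe.d/nvidia-pat.conf
  options nvidia NVreg_UsePageAttributeTable=1

  # rebuild the initramfs so the option is applied at early boot, then reboot
  # (Debian/Ubuntu: sudo update-initramfs -u ; Fedora: sudo dracut --force)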


Dunno, I have an Nvidia card in my laptop, while it "works" on Linux (even Wayland), it also heats the laptop up so incredibly hot that the laptop's thermal settings just turn it off. No way to throttle it, increase fan speed, etc..., at least not without a bunch of hackery I don't want to deal with. All for performance that's in many cases worse than the integrated Intel card (since the integrated card has access to 16gb ram versus the GPU's 4gb).


You might want to look into Nvidia coolbits. Those give fan control and frequency control. Alternatively, you could set a power target using nvidia-smi.
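Roughly what that looks like, as a sketch (the numbers are illustrative, and Coolbits only applies under X11):

  # /etc/X11/xorg.conf.d/20-nvidia.conf -- "12" enables fan + clock control
  Section "Device"
      Identifier "Nvidia Card"
      Driver     "nvidia"
      Option     "Coolbits" "12"
  EndSection

  # or simply cap the board power from the command line
  sudo nvidia-smi -pm 1    # persistence mode
  sudo nvidia-smi -pl 60   # power limit in watts, within the card's allowed range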


nVidia drivers for laptops on Linux are still garbage tier. I either have a choice of no hardware acceleration but working external displays on nouveau (which toasts my CPU), or no external displays but working hardware acceleration on the nvidia drivers, which is useless for me.

I am definitely not buying an nVidia GPU ever again.


I have a friend who uses multiple displays on his 3080 using the Nvidia driver. They work for him.

It sounds like your issue is related to the dma buf issue that the kernel developers created. I thought that was fixed in recent versions of the open source Nvidia driver. What was the last driver you tried?


Even on Arch it was usually quite fine over the last 10 years or so. And if you go back further than that, you could certainly have a lot of fun with fglrx (I know I did).

In a commercial setting, with a supported distro, it's really very solid for desktop use.


Personally, I've seen way more Nvidia specific issues in the mpv bug tracker than AMD.

CUDA vs ROCm (Rusticl?) is probably another story, though.


> I've seen way more Nvidia specific issues in the mpv bug tracker than AMD

Could that maybe be because Nvidia now has almost an order of magnitude more customers? I think they closed 2024 with 90% market share.


mpv is still very much Linux majority and AMD is (relatively) more popular on that platform, I'd wager.


It's possible they have a higher percentage of AMD users than other communities, but I highly doubt given the market share that it's statistically possible for AMD to outnumber Nvidia.


It's just not stable at all. It seems to work fine in the moments when it does work stably.


It is largely a just works experience for me. I report when there are bugs (like the most recent driver causing black screens on my monitor when VRR is enabled) and Nvidia usually fixes them in a few months.


Agreed. I've been using Linux desktops with nvidia cards for like 25 years. Every now and then I think "Well, everyone insists that Radeons work better on Linux, I'll give one a try", and every single time it turns out to be worse and I end up switching back.

Yeah the drivers aren't open source but, like, they work?

Yeah I may have been held back from switching to Wayland for a while but... honestly I did not care.

With all that said, I would really like to see AMD compete particularly in the AI space where nvidia seems to be able to charge totally wild prices that don't seem justified...


No ROCm at launch? After they delayed it for months? What a joke. That's like not having CUDA available at launch.

https://www.phoronix.com/news/AMD-ROCm-RX-9070-Launch-Day


This card is marketed to consumers, not AI developers. Most people who will buy these cards don't even know what ROCm is.

Edit: it just doesn't matter for launch day. It'll likely be supported eventually.


Every Nvidia card has CUDA support from day one, regardless of who the market is for it. I wouldn't mind as much if all their slides weren't covered in AI, AI, AI, and it didn't ship with some stupid LLM chatbot to help you tune your GPU.


Why can't consumers dabble in AI?

When I explained to my non-technical friend, who is addicted to ChatGPT, that she could run models locally, her eyes lit up and now she wants to buy a graphics card, even though she doesn't do any gaming.


I think having CUDA available in consumer cards probably played a big part in it being the de facto "AI framework." AMD would be wise to do the same thing.


> not AI developers

Yet their slides show INT8 and INT8 with Sparsity performance improvements. As well as "Supercharged AI Performance".


Those features are probably used by FSR4. The 9070 isn't very attractive for LLMs though.


The slides mention Stable Diffusion and Flux as a benchmark.


A developer's introduction to a technical ecosystem is more than likely via the card they already own. And "It'll likely be supported eventually" just signals that they're not serious about investing in their software. AMD is selling enough CPUs to finance a few developers.


I mean that's one way to guarantee availability for gamers!


The ROCm libraries were built for RDNA 4 starting in ROCm 6.3 (the current ROCm release). I'm not sure whether that means it will be considered officially supported in ROCm 6.3 or if it's just experimental in that release.


I'm running ROCm OK on my 9070 XT. You can build it from source today if you have a card.

rocminfo:

  **** Agent 2 ****
  Name:              gfx1201
  Uuid:              GPU-cea119534ea1127a
  Marketing Name:    AMD Radeon Graphics
  Vendor Name:       AMD
  Feature:           KERNEL_DISPATCH
  Profile:           BASE_PROFILE
  Float Round Mode:  NEAR
  Max Queue Number:  128(0x80)
  Queue Min Size:    64(0x40)
  Queue Max Size:    131072(0x20000)

[32.624s](rocm-venv) a@Shark:~/github/TheRock$ ./build/dist/rocm/bin/rocm-smi

  ==================== ROCm System Management Interface ====================
  ============================== Concise Info ===============================
  Device  Node  IDs            Temp    Power  Partitions          SCLK  MCLK   Fan  Perf     PwrCap  VRAM%  GPU%
                (DID, GUID)    (Edge)  (Avg)  (Mem, Compute, ID)
  0       2     0x73a5, 59113  N/A     N/A    N/A, N/A, 0         N/A   N/A    0%   unknown  N/A     0%     0%
  1       1     0x7550, 24524  36.0°C  2.0W   N/A, N/A, 0         0Mhz  96Mhz  0%   auto     245.0W  4%     0%
  =========================== End of ROCm SMI Log ===========================


Can't Vulkan backends do the job? Not to defend AMD, but as long as perf/dollar stays above NVIDIA anyhow, isn't that more than the bare minimum effort for them?


I am not surprised, ROCm focuses on CDNA. Was RDNA in ROCm ever officially supported?


RX 7900 (RDNA 3 flagships) are officially supported. I think the 6900 (RDNA 2) was supported at one point as well, and in both cases other cards with the same architecture could usually be made to work as well.


Do you assume that it working means official support? I had been under the impression that RDNA had unofficial ROCm support.


https://rocm.docs.amd.com/projects/install-on-linux/en/lates...

Looks like on Linux the only GPUs which are currently officially supported are the 7900 series.

https://rocm.docs.amd.com/projects/install-on-windows/en/lat...

On Windows the official support covers most (all?) of the discrete desktop 7000 and 6000 series.


Thanks for the clarification.


On Linux, there is official support for a few Navi 21 (gfx1030), Navi 31 (gfx1100) and Navi 32 (gfx1101) cards [1]. As you mention, in practice, there's also quite a few cards that work but are not officially supported.

[1]: https://rocm.docs.amd.com/projects/install-on-linux/en/docs-...


Thanks for the link. Those are the professional cards. For consumer cards, the only supported models on Linux are the 7900 series and Radeon VII according to that. Interestingly, all of the supported cards aside from Radeon VII have 24GB of RAM or more as if high VRAM were a prerequisite for support.

Seeing this, I am not surprised I misremembered the state of RDNA support in ROCm. The support matrix is pitiful compared to Nvidia's CUDA support matrix.


The official support list is anemic, but in practice, any RDNA 2 or RDNA 3 GPUs will either work out of the box or can be made to work without much difficulty.

Debian has a particularly good compatibility story. Its ROCm packages carry patches to work on all discrete Vega, RDNA 1, RDNA 2 and RDNA 3 GPUs. That's not official support, but it is tested.


I've been using my 6750 XT with ROCm ever since I picked it up ~2 years ago. Idk if it's "officially" supported, but it works with SD and LLMs just by setting an environment variable.
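Presumably the variable meant here is HSA_OVERRIDE_GFX_VERSION (an assumption on my part); a minimal sketch for a 6750 XT:

  # The 6750 XT reports as gfx1031, which isn't on the official list;
  # overriding to gfx1030 makes ROCm use the kernels built for Navi 21
  export HSA_OVERRIDE_GFX_VERSION=10.3.0
  python -c "import torch; print(torch.cuda.is_available())"   # ROCm build of PyTorch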


My 6900 XT has had pretty good support in everything I've tried.


But it didn't launch with ROCm support; it's only supported now.


The hardware itself seems like it will be welcome in the market. But I find AMD GPU launches so frustrating: the branding is all over the place and it feels like there is never any consistent generational series. Instead you get these seemingly random individual product launches that slot somewhere in the market, but you never know what comes next.

In comparison nVidia has had pretty consistent naming since the 200 series, with every generation feeling at least somewhat complete. The only major exception was (mostly) skipping the 800 series. Not saying they are perfect by any means in this regard, but AMD just feels like a complete mess.

Checking wikipedia, Radeon has recently gone through (in order):

* HD 7000 series

* HD 8000 series

* 200 series

* 300 series

* 400/500 series

* RX Vega

* RX 5000 series

* RX 6000 series

* RX 7000 series

* RX 9000 series

Like what happened to 8000 series? And also isn't it confusing to partially change the naming scheme? Any guesses what the next generation will be called?


AMD seriously needs to fire its marketing team. Think how you would do it if you could introduce a naming schema and think of how to name technical features - it would be very easy to improve on what AMD is doing.


They've got a pretty amazing apu that just came out and competes with the M series macbooks based on early benchmarks. The name of this APU: AMD Ryzen AI MAX 300 "Strix Halo"


I thought it was the AMD Ryzen AI Max+ 395


The grandparent is referencing the "product line" and the "code name", while you're naming one specific model.


> Like what happened to 8000 series? And also isn't it confusing to partially change the naming scheme? Any guesses what the next generation will be called?

I read somewhere that Radeon jumped to 9000-something for this generation to align with the current Ryzen 9000 series CPUs (Ryzen 8000 wasn't a thing either, last I checked). Let's see if next-gen Radeons match next-gen Ryzens.

EDIT: confirmed, found slides from CES posted in articles like these:

https://www.pcworld.com/article/2568373/amds-radeon-rx-9070-...

https://www.techspot.com/news/106208-amd-reveals-rdna4-archi...

https://www.notebookcheck.net/AMD-previews-RDNA-4-with-new-R...


Ryzen 8000 was mostly mobile CPUs like the 8840U (there were a few desktop ones like the 8700G).

Ryzen 6000 series was all mobile.


Marketing department at AMD missed the targets. But honestly, I don’t care about that.

Price is very competitive, and as long as benchmarks and QC are on point, this is a massive win for AMD.

NVDA as a “market leader” dropped the ball with the 50 series. So many issues that have turned me away from their products.


The 8000 series are iGPUs, part of the AMD Strix Halo APUs (e.g. AMD Ryzen AI Max+ 395). [1]

I do agree it's confusing though, when they were already using a three digit naming convention for iGPUs (like the Radeon 880M - the M made it clear it was a mobile chip).

[1] https://www.notebookcheck.net/AMD-Radeon-RX-8040S-Benchmarks...



In this release, AMD has aligned with nVidia naming. It feels like an acknowledgement they've screwed this up and are giving up. So it feels like a genuine improvement. Hopefully they maintain this alignment, but I'm sure they won't.


You forgot a couple too, eg.

The Radeon VII which is really just a RX Vega variant made on TSMC rather than Samsung nodes but has entirely new branding for that model that doesn't mention Vega in the name.

The Radeon R9 Fury, Fury X, R9 Nano and Radeon Pro Duo which are really just 300 series but all of those have entirely new branding just for those models that don't mention the 300 series convention in their names. You also can't tell at a glance which are better.

It is a fucking mess. They cannot for the life of them stick to a convention.


Vega II = VII = Radeon VII

It's like a really dumb Easter egg.


Pretty sure the primary reason for the name was because it was moved to 7nm.


0xA070


So about 20% better value than Nvidia comparing MSRPs. Not terrible but that’s what they offered last generation and got 10% market share as a result. If the market price of Nvidia cards remains 20-50% above MSRP while AMD can hit their prices, then maybe we have a ballgame.


MSRP doesn't mean anything right now, it's just a made up number at this point. Let's see the real sticker prices once the cards hit the scalpers cough-cough I mean shelves.


I also hear that the stock is almost non-existent for the 5000 series. Ball is in AMDs court, let's see what happens...


The 9070 XT will likely be at $800-850 with my local/import tax, while the 5070 Ti is now at $1200-1300.


I'm pretty torn on self-hosting 70B AI models on a Ryzen AI Max with 128GB of RAM. The market seems to be evolving fast. Outside of Apple, this is the first product to really compete in the self-hosted AI category. So... I think a second generation will be significantly better than what's currently on offer today. Rationale below...

For a max spec processor with ram at $2,000, this seems like a decent deal given today's market. However, this might age very fast for three reasons.

Reason 1: LPDDR6 may debut in the next year or two, and this could bring massive improvements to memory bandwidth and capacity for soldered-on memory.

LPDDR6 vs LPDDR5:

  Data bus width:    24 bits vs 16 bits
  Burst length:      24 bits vs 15 bits
  Memory bandwidth:  up to 38.4 GB/s vs up to 6.7 GB/s

- CAMM RAM may or may not maintain signal integrity as memory bandwidth increases. Until I see it implemented for an AI use-case in a cost-effective manner, I am skeptical.

Reason 2: It's a laptop chip with limited PCIe lanes and a reduced power envelope. Theoretically, a desktop chip could have better performance, more lanes, and be socketable (although I don't think I've seen a socketed CPU with soldered RAM).

Reason 3: In addition, what does hardware look like being repurposed in the future compared to alternatives?

- Unlike desktop or server counterparts, which can have higher CPU core counts and PCIe/IO expansion, this processor and its motherboard are limited for repurposing later down the line as a server to self-host other software besides AI. I suppose it could be turned into an overkill NAS with ZFS and a single HBA controller card in a new case.

- Buying into the Framework desktop is pretty limited by the form factor. The next generation might be able to include a fully populated 16x slot and a 10G NIC. That seems about it if they're going to maintain the backward-compatibility philosophy, given the case form factor.


Rx 9070 seems perfect for compact builds on a power budget but I can't see a single two-slot, two-fan card from partners so far. They all look like massive, three slot cards.


So Framework launches with Ryzen AI Max+ 395 with Radeon 8060S Graphics which has RDNA 3.5.

RDNA 4 has a 2x performance gain over 3.5 (4x with Sparsity) at FP16.

It just makes it all harder (the picking and choosing). Let's see what Project DIGITS brings once it launches.


One has 16 GB VRAM and one has 96 GB. It's not really comparable.


The high-end Framework has 128GB, just like Project DIGITS. VRAM size has always been the problem, so I'm not comparing these APU SBCs to PCIe GPUs.

The comment was more about the timing of the announcements. Maybe Framework should have waited a year and then use an RDNA 4-based APU, developed and marketed in close coordination with AMD. The competitors are Apple and Nvidia, where both build their own chips and get to design their entire systems.


It seems very silly to me to change the naming scheme just for the 9000 series when they're going to have to change it again for the next series. Well I suppose they could pull an Intel and go with RX 10070 XT next time. I guess we can be thankful that they didn't call it the AMD Radeon AI Max+ RX 9070 XT.


RX 2025 7+


> AMD Radeon RX 9070 XT: 64 CUs, 16GB, 2.4 GHz game clock, up to 3.0 GHz boost, 256-bit, 64 MB Infinity Cache, 304W, $599

> AMD Radeon RX 9070: 56 CUs, 16GB, 2.1 GHz game clock, up to 2.5 GHz boost, 256-bit, 64 MB Infinity Cache, 220W, $549

Ignoring the awful naming scheme that marketing cooked up.

I really do love the price point here. Very competitive with the joke offerings by NVDA, the 5070 and 5080. Looking forward to the benchmarks.

Been itching to upgrade my gaming PC for quite awhile now. But the issues with NVDA (12VHPWR cable issues, non-competitive pricing, paper release, missing CUs, QC issues, …) have encouraged me to put it off until later.


Hopefully, finally on par with nvidia with hardware BVH for raytracing and not those horrible GLSL shaders (or at least published and provided optimized GPU machine code or similar).


An RDNA 4 card announcement without mentioning ROCm support?

When will these guys learn/catch up...


Their reveal video[0] from an hour ago.

[0] https://www.youtube.com/watch?v=GZfFPI8LJrc


My hot take is that the 9070 XT at $600 will do OK as long as they can ship at MSRP and nvidia can't, but would have been more impressive at $500 or even $550.

The 9070 non-XT seems DOA at $550.

The most I've ever spent on a GPU is about $300 and I don't really see that changing any time soon. (And that was for a 70-class card, so...)


Totally, the non-XT is the medium soda.


And like a medium soda, its only purpose is to make the large soda's price seem attractive. Seems to be working.


I'm really intrigued to get one, but where is the 32GB version for all the LLM goodness?


It’s a $600 card with 16GB of RAM. That’s a good deal.


It does seem like there is room for a $700-800 "9070 AI" with 32 GB of VRAM.


There's room but neither GPU vendor is willing to sell 32gb at that price


I get their point, but at the end of the day it's politics and marketing having it their own way.

With a 32GB card well below $1000, it would sell like candy to anybody doing anything AI-related that's not training (you can easily run inference and fine-tuning on such a card).

But it would massively eat in their data center sales which is what executives and investors want to see.

It's a tragedy because such a card would get a lot of love and support from amateurs to make it work great in the ML/AI context and thus improve their data center offerings long term.

So this is gonna end up the way AMD launches usually do: it will disappoint or be ignored by most gamers because it has less brand power and no DLSS, and AMD will still disappoint at the data center level.


I think it could work out with a weak GPU (or high TDP). You want the card to have a higher TCO for the data center, and if you make it a 3-slot card with a 400W TDP that's 2x slower than your server GPUs, I think it works out. Once you have $10k of server (CPU + RAM + networking), if your options are adding 2 of these 9070 AIs or 3 MI300-whatevers, the server GPUs would win for a server.


If you created a 32GB card that was great at AI workloads and cheap, it doesn't matter what you set the MSRP to. Street price would rise to the same level as other 32GB cards with similar performance.


The 4060 16GB was only about $440 a couple of months ago.


That is likely to be AMD’s workstation variant. Here is the workstation variant of the 7900 XTX that had double memory:

https://www.techpowerup.com/gpu-specs/radeon-pro-w7900.c4147


There were rumors that a 32GB SKU is coming.


And AMD already said there is not a 9070XT 32GB coming. Which I understand as "we're building a 32GB card with this chip, but it's not coming before christmas and will cost you a kidney".

I really would like to upgrade from my 2070 Super but I'm not getting a 16GB card now just to buy another one with 32GB later on.


I don’t believe that considering that there was a 48GB 7900 for the workstation market:

https://www.techpowerup.com/gpu-specs/radeon-pro-w7900.c4147

The denials are probably more saying that there will be no consumer targeted 32GB version than that there will be no 32GB version at all.


Yes, I agree. It will probably be a $3000 unit.


I'd rather have a $3000 one with 80GB


If I'm spending $3k, I'm probably getting a Nvidia project digits box with 128GB.

Then again, if the tokens/s of digits is comparable to a M4 Max 128GB, then I'm getting a MBP instead.


16GB? In 2025 that seems very small for anything other than gaming.


These are gaming cards. If you think AMD is lacking here with just 16GB I can only assume you never bought NVIDIA gaming cards in the last 10 years.


You can buy used nvidia tesla p40 with 24GB from 2016 and unlike AMD, they still have CUDA support. The only thing they are missing is support for the latest data types like fp8.


I don't know if I'm in a parallel universe or hallucinating, but these are graphics cards, designed and created for people to install in their desktop PCs and run games. Just like they have been doing for decades now.

What's with the P40, CUDA and fp8? Seriously people, chill. AI has been using graphics cards because that was the best available route at the time, not the other way around.

Otherwise I must question why you don't talk about DisplayPort 2.1, DirectX 12, FidelityFX... and a dozen other graphics-related features.


Because, unless you are a Big Corp, you can't afford to buy H100s. Being able to run AI on consumer hardware is essential for many AI hackers out there. And no, renting hardware on "AI clouds" is not reliable; it's still outrageously expensive, and the rug can be pulled out from under you at any moment.

My rig is 4x4090, and it did cost a fortune to me already (still less expensive than a single H100). I would have happily used cheaper AMD cards, but they are not available for reasons like this.

Last time I checked, this site was called "Hacker News", not "Bigtech Corporate Employee News".


> I don't know if I'm in a parallel universe or hallucinating, but these are graphics cards, designed and created for people to install in their desktop PCs and run games.

Graphics cards have been used for more than running games since even before they became the broad-purpose compute engines they've been ever since Nvidia radically reshaped the market, decades ago.

Heck, Nvidia has had multiple driver series for the same cards because of this, even before AI was a major driver.


> You can buy used nvidia tesla p40 with 24GB from 2016

Then do that instead if it fits your workload?

There's more to a video card than the amount of RAM, though.


Which games support fp8?


I'm actually happy about that. It specifically targets gamers who want to play video games.


NVIDIA offers one new consumer card with over 16gb.


And that one card is $2500 in practice and sold out (it only launched with something like 500 units of inventory across the USA). The 5090 is pure unobtainium right now unless you feel like paying scalper prices of $3k+.


> And that one card is $2500 in practice and sold out (only launched with like 500 inventory across USA).

And the power connector melts if you look at it funny.


These cards are $550-$600

16GB is great for what these are: Gaming cards.

Even on the nVidia side you have to spend $2000 (likely more) to get more RAM than that.

You could buy 3 of these for the price of a single nVidia 5090.


I suppose it's disappointing that the 6800 XT launched over 4 years ago at $649 with 16GB as well. Inflation can partly explain this, but still - we had been used to progress across the board each generation previously, at least with AMD.


On the Nvidia side every GPU released in the last ten years competes against what AMD is doing today. People who need more VRAM go with dual 3090.


For the low low price of $2.5k you can outcompete a $600 card. Wow.


They don't want to make cards that are good for AI at the price gamers will pay, since that would cut into their AI chip sales - and that annoys the gamers.

nVidia created the AI chip market on the backs of gamers, but that doesn't mean that trend will continue forever. The demand is there, so they can extract greater profits and margins on AI chips.


To be fair, you can buy ~3 of these for the price Nvidia charges for 24GB/32GB models.


If people want more VRAM the 24GB 7900xtx is right there and has been there for years.


Yes but you can't easily put 3 GPUs in 1 PC


High ram cards are just nice for their longer lifespans though.

Since Nvidia dominates the AI market anyway, AMD has the opportunity here to not try and help them protect it, and sell cards with more RAM. IMO that’d be a good move. They’d be keeping gamers happy by not hampering their product artificially. And, some poor grad students might use their cards to build the “next thing.”


It's a gaming card.


The problem is that AMD is nowhere close to DLSS performance/quality; AMD cards are good for rasterization, but it's not enough. The other thing is that most games don't implement FSR because 90%+ of cards are Nvidia.


https://www.amd.com/en/products/graphics/technologies/fideli...

Lots of games do and will support FSR because current gen consoles are all AMD (and console gamers outnumber PC gamers).


> and console gamers outnumber PC gamers

It's my understanding that there are 2-3x more PC gamers than console gamers, but the console gaming market generates more revenue when you include hardware. In fact there are probably more PC gamers today than total consoles ever sold.


A little too early to tell where AMD is when FSR 4 is not even out yet.


I would be shocked if it gets close to DLSS4 quality


> but it's not enough

How do you mean?


I’ve never used DLSS or ray tracing or whatever Nvidia cooks up. I don’t know a single person that cares about those features and I know a lot of gamers.


Nowadays a lot of people care, if you want decent fps without an expensive setup.


The reason people care about fps is that it used to be a fairly decent proxy for latency. Latency is what's actually important. DLSS makes latency worse.


Nobody wants fake frames.


I do. Besides, all frames are fake.



