As long as you've got enough RAM for a file cache for the active program binaries and header files, I've never noticed any significant difference between SD card, eMMC, USB3, or NVMe storage for software building on the SBCs I have. It might be different on a Pioneer :-)

I just checked the Linux kernel tree I was testing with. It's 7.2 GB, but 5.6 GB of that is `.git`, which isn't used by the build. So only 1.6 GB of actual source. And much of that isn't used by any given build. Not least the 150 MB of `arch` that isn't in `arch/riscv` (which is 27 MB). Over 1 GB is in `drivers`.

riscv-gnu-toolchain has 2.1 GB that isn't in `.git`. Binutils is 488 MB, gcc 1096 MB.
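
If anyone wants to repeat this on their own checkouts, something like the following is all it takes (directory names are just whatever my local clones happen to be called):

    $ du -sh --exclude=.git linux                        # source the build can actually see
    $ du -sh linux/.git                                  # git metadata, never read by the build
    $ du -sh linux/arch linux/arch/riscv linux/drivers   # where most of the bulk lives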

This is all small enough that on an 8 GB or 16 GB board there is going to be essentially zero disk traffic. Even if the disk cache doesn't start off hot, reading less than 2 GB of stuff into disk cache over the course of a 1 hour build? It's like 0.5 MB/s, about 1% of what even an SD card will do.
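
Spelling the arithmetic out, with ~50 MB/s as an assumed sequential read speed for a decent SD card:

    $ echo 'scale=2; 2048/3600' | bc      # MB/s needed to pull ~2 GB in over an hour
    .56
    $ echo 'scale=2; 100*.56/50' | bc     # as a percentage of an assumed 50 MB/s card
    1.12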

It simply doesn't matter.

Edit: checking SD card speed on the Linux kernel build directory on a VisionFive 2, with a totally cold disk cache just after a reboot.

    user@starfive:~$ time tar cf - linux/* | cat >/dev/null
    
    real    2m37.013s
    user    0m2.812s
    sys     0m27.398s
    user@starfive:~$ du -hs linux  
    7.3G    linux
    user@starfive:~$ du -hs linux/.git
    5.6G    linux/.git
    user@starfive:~$ time tar cf - linux/* | cat >/dev/null
    
    real    0m7.104s
    user    0m1.120s
    sys     0m8.939s
Yeah, so 2m37s to cache everything, vs 67m35s for a kernel build. Maximum possible difference between hot and cold disk cache: 3.9% of the build time. PROVIDED only that there is enough RAM that once something has been read it won't be evicted to make room for something else. But in reality it will be much less than that, and possibly unmeasurable. I think most likely what will actually show up is the ~30s of CPU time.
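
(If you want to repeat the cold-cache measurement without rebooting, dropping the page cache as root should give the same starting point:)

    $ sync
    $ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # drop page cache, dentries and inodes
    $ time tar cf - linux/* | cat >/dev/null           # cold-cache run
    $ time tar cf - linux/* | cat >/dev/null           # repeat for the warm-cache number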

I'm having trouble seeing how NVMe vs SATA can make any difference, when an SD card is already 25x faster than needed.
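
If you want actual numbers for the raw devices rather than my guesses, a quick sequential-read check is enough (device names are examples, adjust for your board):

    $ sudo hdparm -t /dev/mmcblk1    # SD card
    $ sudo hdparm -t /dev/nvme0n1    # NVMe, if fitted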

I'm not familiar with the grub build at all. Is it really big?




The build directory is 790M (vs 16 GB of RAM), but nevertheless the choice of underlying storage made a consistent difference in our tests. We ran them 3+ times each, so the cache should have been mostly warm.


Weird. It really seems like something strange is going on. Assuming you get close to 400 MB/s on the NVMe (which is what people get on the 1-lane M.2 slot on the VF2 etc.), it should take just a few seconds to read 790M.
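
One thing worth checking is whether the tree really does stay resident between runs. If vmtouch is packaged for your distro it will tell you directly (the directory name here is just a placeholder for your grub build dir):

    $ vmtouch grub    # reports how much of the tree is resident in the page cache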


"Thanks Andrea Bolognani for benchmarking the VF2 and P550 with NVMe"

omg. I didn't notice that before.

The two tests were run on DIFFERENT MACHINES by different people.

The NVMe result is 28.9% faster than the SATA result.

A 1.8 GHz EIC7700X (e.g. Milk-V Megrez) is 28.6% faster than a 1.4 GHz EIC7700X (HiFive Premier).
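
Sanity-checking the clock-ratio arithmetic:

    $ echo 'scale=3; 100*(1.8-1.4)/1.4' | bc    # % speedup going from 1.4 GHz to 1.8 GHz
    28.571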

Mystery explained?

Try it on an SD card ... I bet you won't see a significant difference.



