
If that were to be solved (if at all possible, and feasible / competitive) I can definitely see "LLM mining" being a historic milestone. Also much closer to the spirit of F@H in some sense, depending on how you look at it. Would there be a financial incentive? And how would it be distributed? Could you receive a stake in the LLM proportional to the contribution you made? Would that be similar in some sense to purchasing stock in an AI company, or mining tokens for a cryptocurrency? Potentially a lot of opportunity here.



This would require a revolution in the algorithms used to train a neural net. Currently, LLM training is at best distributed amongst GPUs in racks in the same datacenter, ideally nearby racks, and even that is a significant engineering challenge: each step needs to work from the step before, and each step updates all of the weights, so it's hard to parallelise. You can do it a little bit, because you can e.g. do a little bit of training with part of the dataset on one part of the cluster and another part elsewhere, but this doesn't scale linearly (i.e. you need more compute overall to get the model to converge to something useful), and you still need a lot of bandwidth between your nodes to synchronise the networks frequently.

All of this makes it very poorly suited to a collection of heterogeneous compute connected via the internet, which wants a large pool of mostly independent tasks with a high compute cost but relatively low bandwidth requirements.
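To make the synchronisation cost concrete, here's a minimal sketch (plain NumPy, workers simulated in one process, illustrative only rather than any real framework) of a data-parallel training step: each worker computes gradients on its own shard of data, but every step still has to average a gradient the size of the full model across all workers before anyone can apply an update.

    import numpy as np

    # Toy linear model; the "workers" are simulated here, but in real
    # data-parallel training each one is a GPU/machine and the gradient
    # averaging is an all-reduce over the interconnect.
    rng = np.random.default_rng(0)
    n_workers, n_params = 4, 1_000
    w = rng.normal(size=n_params)                       # every worker holds a full copy
    shards = [rng.normal(size=(256, n_params)) for _ in range(n_workers)]
    targets = [x @ np.ones(n_params) for x in shards]   # synthetic labels

    def local_gradient(w, x, y):
        # Mean-squared-error gradient for this worker's shard only.
        err = x @ w - y
        return 2 * x.T @ err / len(y)

    for step in range(10):
        grads = [local_gradient(w, x, y) for x, y in zip(shards, targets)]
        avg_grad = np.mean(grads, axis=0)   # the "all-reduce": touches every parameter
        w -= 1e-3 * avg_grad                # every worker applies the identical update

    # The sync volume per step scales with the parameter count:
    print(f"~{n_params * 4} bytes of gradient exchanged per worker per step (fp32)")

For a big model that per-step exchange is on the order of the full gradient size (140 GB at fp16 for a 70B-parameter model), every step, which is tolerable over datacenter interconnects but not over home broadband.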


The models are too large to fit in a desktop GPU's VRAM. Progress would either require smaller models (MoE might help here? not sure) or bigger VRAM. For example, training a 70 billion parameter model would require at least 140GB of VRAM in each system, whereas a large desktop GPU (4090) has only 24GB.
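Back-of-the-envelope on that figure (assuming 2 bytes per parameter for fp16/bf16 weights; this is just the weights, before gradients and optimizer state):

    params = 70e9            # 70B-parameter model
    bytes_per_param = 2      # fp16 / bf16 weights only
    weights_gb = params * bytes_per_param / 1e9
    print(f"weights alone: {weights_gb:.0f} GB")                      # 140 GB
    print(f"4090s needed just to hold them: {weights_gb / 24:.1f}")   # ~5.8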

You need enough memory to run the unquantized model for training, then stream the training data through - that part is what is done in parallel, farming out different bits of training data to each machine.
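And training needs a lot more than the weights. A rough estimate for the common mixed-precision Adam setup (fp16 weights and gradients plus fp32 master weights and two Adam moments, roughly 16 bytes per parameter, ignoring activations; this is the breakdown the ZeRO work linked elsewhere in the thread uses):

    params = 70e9
    fp16_weights = 2 * params    # bytes
    fp16_grads   = 2 * params
    fp32_master  = 4 * params    # fp32 copy of the weights kept by the optimizer
    adam_moments = 8 * params    # fp32 first + second moment
    total_gb = (fp16_weights + fp16_grads + fp32_master + adam_moments) / 1e9
    print(f"~{total_gb:.0f} GB of model + optimizer state, before activations")   # ~1120 GB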


Data parallel training is not the only approach. Sometimes the model itself needs to be distributed across multiple GPUs.

https://www.microsoft.com/en-us/research/blog/zero-deepspeed...

The communications overhead of doing this over the internet might be unworkable though.
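A minimal sketch of that idea (naive layer-wise model parallelism, simulated in one process; ZeRO itself shards optimizer state, gradients and parameters rather than splitting layers, so treat this purely as an illustration): each "GPU" holds only part of the network, and the activations have to cross the boundary on every forward pass, with gradients crossing back on every backward pass.

    import numpy as np

    rng = np.random.default_rng(0)
    batch, hidden = 32, 4096

    # "GPU 0" holds the first two layers, "GPU 1" the last two.
    gpu0_layers = [rng.normal(scale=0.02, size=(hidden, hidden)) for _ in range(2)]
    gpu1_layers = [rng.normal(scale=0.02, size=(hidden, hidden)) for _ in range(2)]

    def forward(layers, x):
        for w in layers:
            x = np.maximum(x @ w, 0)   # linear + ReLU
        return x

    x = rng.normal(size=(batch, hidden))
    h = forward(gpu0_layers, x)        # runs on "GPU 0"
    # The activation tensor h has to be shipped across the link every step:
    out = forward(gpu1_layers, h)      # runs on "GPU 1"

    print(f"{h.size * 4 / 1e6:.1f} MB of fp32 activations cross the boundary "
          f"per forward pass (per microbatch), plus gradients on the way back")

Inside a rack that traffic rides on NVLink or InfiniBand; over residential internet connections it would dominate the step time.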


or if internet fiber connections became significantly faster


A single GPU has memory bandwidth around 1000 GB/s ... that's a lot of fiber! (EDIT: although the PCIe interconnect isn't as fast, of course. NVLink is pretty fast though, which is the sort of thing you'd be using in a large system.)
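Some rough numbers for comparison (approximate peak figures, they vary by hardware generation):

    # Approximate peak bandwidths in GB/s (figures vary by generation).
    links = {
        "GPU HBM (on-card)":   1000.0,
        "NVLink (A100-class)":  600.0,   # newer generations are faster still
        "PCIe 4.0 x16":          32.0,
        "10 Gbit/s fiber":        1.25,
        "1 Gbit/s home fiber":    0.125,
    }
    for name, gb_per_s in links.items():
        print(f"{name:22s} {gb_per_s:10.3f} GB/s")
    # A gigabit home connection is roughly 8000x slower than on-card HBM.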


Latency still matters a lot…
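Rough illustration (assumed figures: ~50 ms WAN round trip vs ~10 µs inside a datacenter): even ignoring bandwidth entirely, one synchronisation round trip per optimizer step adds up over a long run, and in practice collectives need several round trips and the slowest participant gates every step.

    steps = 1_000_000           # big runs are on the order of 1e5-1e6 optimizer steps
    rtt_internet = 50e-3        # seconds, rough WAN round trip
    rtt_cluster  = 10e-6        # seconds, rough intra-datacenter round trip

    # Wall-clock time spent purely waiting on one sync round trip per step:
    print(f"over the internet: {steps * rtt_internet / 3600:.1f} hours")    # ~13.9 hours
    print(f"inside a cluster:  {steps * rtt_cluster  / 3600:.3f} hours")    # ~0.003 hours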



