Easier said than done: training usually happens on "big iron" datacenter GPUs, a cut above any hardware consumers have lying around, and the clusters run on multi-hundred-gigabit networks. Even if you scaled the workload down to gaming cards and gathered enough volunteers, the low bandwidth and high latency of the internet would still be a problem.
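To get a feel for the gap, here's a rough back-of-envelope sketch of how long one gradient exchange would take. The model size, gradient precision, and link speeds below are all illustrative assumptions, not measurements:

```python
# Back-of-envelope: time to send one full copy of a model's gradients
# over a given link. All numbers below are assumptions for illustration.

def sync_seconds(params: int, bytes_per_param: float, link_bits_per_sec: float) -> float:
    """Seconds to transmit one copy of the gradients over a link."""
    grad_bytes = params * bytes_per_param
    return grad_bytes / (link_bits_per_sec / 8)

PARAMS = 1_000_000_000   # assume a 1B-parameter model
FP16 = 2                 # assume fp16 gradients: 2 bytes each

home = sync_seconds(PARAMS, FP16, 100e6)        # ~100 Mbit/s home uplink
datacenter = sync_seconds(PARAMS, FP16, 400e9)  # ~400 Gbit/s cluster fabric

print(f"home internet: {home:.0f} s per sync")        # 160 s
print(f"datacenter:    {datacenter:.3f} s per sync")  # 0.040 s
print(f"ratio:         {home / datacenter:.0f}x")     # 4000x
```

Under these assumptions a single naive gradient sync takes minutes over a home connection versus milliseconds on a cluster fabric, and that's before counting latency, stragglers, or the fact that syncs happen every step.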