Actually I wish they hadn't reduced the number of slots to 4, because part of the "fun" is that with 7 nodes, SSH-ing in and managing each node individually is no way to run a cluster, so you're forced to treat it as a real cluster. With 4, I feel I might be tempted to manage each node by hand. But I also understand why changes in the Pi Compute Module 4 made this necessary: the CM4 is physically much larger than the CM3.
Edit: Actually my real wish is for a compute module with more I/O channels. I would love to build a hypercube-style supercomputer (like the Meiko Computing Surface), but that takes 5+ high-speed interconnects per node to build, say, a 32+ node cluster, since a hypercube of 2^d nodes needs d links per node. I wonder if PCIe offers a solution?
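For the curious, the hypercube arithmetic is simple: label the nodes in binary, and each node's d neighbors are the labels that differ from it in exactly one bit. A minimal sketch in Python (node labels are just illustrative):

    # Neighbors in a d-dimensional hypercube: node i connects to
    # every node whose label differs from i in exactly one bit.
    def hypercube_neighbors(node: int, dim: int) -> list[int]:
        return [node ^ (1 << k) for k in range(dim)]

    # A 32-node cluster is a 5-dimensional hypercube, so each node
    # needs 5 links -- hence the wish for 5+ interconnects.
    dim = 5
    print(hypercube_neighbors(0b00000, dim))  # [1, 2, 4, 8, 16]
    print(hypercube_neighbors(0b10110, dim))  # node 22's five neighbors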
I too would be interested in playing around with more processor-to-processor interconnects. Just for fun I built a 16-way SAMD21 board that used the chips' serial interfaces to make a hypercube arrangement, and it was very cool to play with.
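Routing on a board like that is pleasantly simple too: dimension-ordered ("e-cube") routing just fixes the differing address bits one at a time. A hedged sketch, assuming binary node labels as in the snippet above:

    # Dimension-ordered (e-cube) routing: at each hop, flip the
    # lowest-order bit in which the current node differs from the
    # destination. Deadlock-free on a hypercube.
    def ecube_route(src: int, dst: int) -> list[int]:
        path, node = [src], src
        while node != dst:
            diff = node ^ dst
            node ^= diff & -diff  # flip the lowest differing bit
            path.append(node)
        return path

    # On a 16-node (4-D) board: route from node 0b0000 to 0b1011.
    print(ecube_route(0b0000, 0b1011))  # [0, 1, 3, 11]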
It would be possible to build an interconnect over PCIe, but of course it might just be better to give each node a 10G Ethernet PCIe interface chip and build a network local to the PCB.