The amount of data that comes out of the different detectors on the LHC is staggering. It takes a fair amount of post-processing to get meaningful results from the collisions, and there can be millions of collisions in a run. There's some more information about how they process these results on the LHC computing grid page: http://wlcg-public.web.cern.ch/.
When I did my traineeship at ATLAS DAQ around 10 years ago, one issue was how to pre-process the data fast enough to identify invalid events and throw them away before they ever arrived at the cluster.
Regarding the ATLAS experiment: protons in the LHC collide 40 million times a second, but "only" around 100 events per second are stored for later analysis. The LHC grid only deals with those 100; three trigger levels (one hardware and two software) are used to select the useful 100 out of the 40 million, in real time.
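To give a rough feel for how a cascade like that works, here's a toy sketch in Python. It is not ATLAS trigger code: the event fields, thresholds, and rates are all made up, and the real Level-1 trigger runs in custom hardware rather than software. It only shows the idea of successively tighter filters, each one seeing only what the previous level accepted.

```python
# Toy multi-level trigger cascade (illustration only, not actual ATLAS software).
# All event fields and cut values below are hypothetical.
import random

def level1(event):
    # Level 1: coarse, very fast decision (hardware in the real system)
    return event["calorimeter_energy"] > 20.0  # hypothetical GeV threshold

def level2(event):
    # Level 2: partial reconstruction around regions of interest
    return event["has_isolated_lepton"]

def event_filter(event):
    # Final software level: near-full reconstruction before permanent storage
    return event["reconstructed_mass"] > 50.0  # hypothetical cut

def make_event():
    # Fake collision event with random properties, just to drive the example
    return {
        "calorimeter_energy": random.expovariate(1 / 10.0),
        "has_isolated_lepton": random.random() < 0.05,
        "reconstructed_mass": random.uniform(0.0, 200.0),
    }

total = 100_000  # stand-in for the 40 million collisions per second
accepted = 0
for _ in range(total):
    event = make_event()
    # An event is kept only if every level in the chain accepts it
    if level1(event) and level2(event) and event_filter(event):
        accepted += 1  # only these events would be written out to the grid

print(f"stored {accepted} of {total} events")
```

The point of the chain is that each level is cheaper than the next: the fast, crude cut runs on everything, while the expensive reconstruction only runs on the tiny fraction that survives.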