All of these techniques already exist, sometimes in more than one form. There are multiple published papers on efficiently sampling millions of light sources, mostly from offline rendering. Other approaches cluster far-away light sources; since what counts as "far away" differs per pixel, the solution is a global cluster hierarchy plus a ___location-specific cut through that tree (a lightcut).
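To make the cluster-hierarchy idea concrete, here is a minimal sketch in Python. All names and the refinement criterion are illustrative assumptions, not taken from any particular paper: lights are paired up into a binary tree (a real implementation would pair nearest clusters), and per shading point the tree is refined only where a cluster subtends too large a solid angle, so distant clusters are shaded as a single representative light.

```python
class LightNode:
    """A cluster of point lights: representative position, summed intensity,
    and an axis-aligned bound over all lights in the cluster."""
    def __init__(self, pos, intensity, left=None, right=None):
        self.pos, self.intensity = pos, intensity
        self.left, self.right = left, right
        if left is None:
            self.lo = self.hi = pos
        else:
            self.lo = tuple(min(a, b) for a, b in zip(left.lo, right.lo))
            self.hi = tuple(max(a, b) for a, b in zip(left.hi, right.hi))

def build_tree(lights):
    """Pair clusters bottom-up; sequential pairing for brevity."""
    nodes = [LightNode(p, i) for p, i in lights]
    while len(nodes) > 1:
        merged = []
        for a, b in zip(nodes[::2], nodes[1::2]):
            w = a.intensity + b.intensity
            rep = tuple((pa * a.intensity + pb * b.intensity) / w
                        for pa, pb in zip(a.pos, b.pos))
            merged.append(LightNode(rep, w, a, b))
        if len(nodes) % 2:          # odd node carries over to the next level
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0]

def shade(point, root, threshold=1.0):
    """Walk a per-point cut through the tree: refine a cluster only if its
    extent is large relative to its distance, else shade it as one light."""
    total, stack = 0.0, [root]
    while stack:
        n = stack.pop()
        d2 = sum((p - q) ** 2 for p, q in zip(point, n.pos))
        if n.left is not None:
            diag2 = sum((hi - lo) ** 2 for lo, hi in zip(n.lo, n.hi))
            if diag2 > threshold * threshold * d2:   # too coarse: refine
                stack.extend((n.left, n.right))
                continue
        total += n.intensity / max(d2, 1e-6)         # 1/d^2 falloff
    return total
```

With `threshold=0` every cluster is refined down to the leaves and the result matches a brute-force loop over all lights; larger thresholds trade accuracy for shallower cuts.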
Efficiently determining visibility, even approximate visibility, is what makes rendering such a tough problem in the first place. Naturally, many ideas have been explored in this space. Shadow maps act as caches of discretely sampled visibility information per light source. Their sampled nature makes them highly problematic in practice; many workarounds for their shortcomings have been found over the years, but it's like building a Jenga tower on a shaky foundation.
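The cache-of-sampled-visibility idea, and the discretization trouble that comes with it, can be sketched in a few lines. This is a deliberately minimal 1D analogue, assuming an orthographic light looking straight down at a heightfield; the names and the bias value are illustrative, not from any particular engine.

```python
def bake_shadow_map(heightfield, resolution):
    """Rasterize the scene from the light's view: cache the nearest depth
    (here: the highest surface) per texel. heightfield maps x in [0,1)
    to a surface height."""
    texel = 1.0 / resolution
    return [max(heightfield(i * texel), heightfield((i + 1) * texel - 1e-9))
            for i in range(resolution)]

def is_lit(shadow_map, x, height, bias=1e-3):
    """A point is lit if it is at least as close to the light as the cached
    depth. The bias hides self-shadowing 'acne' caused by the map's
    discrete sampling -- one of the classic workarounds mentioned above."""
    texel = min(int(x * len(shadow_map)), len(shadow_map) - 1)
    return height >= shadow_map[texel] - bias
```

Even this toy version shows the core trade-off: visibility is resolved only at texel granularity, so correctness depends on resolution and a hand-tuned bias.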
Similarly, you can reasonably estimate shadowing from objects in a local area around a pixel with screen-space tracing: walk the projected line toward the light source through the depth buffer and check whether any fragment along that line has a depth close enough to the ray at that pixel. Again, this works with sampled and incomplete information (any shadow caster that is occluded from view or off screen is missed), but the method is often layered on top of other techniques to keep contact shadows accurate.
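The marching loop just described can be sketched as follows, reduced to a 1D depth buffer for clarity. The per-step increments of the light direction in screen space (`to_light_px`, `to_light_depth`) and the `thickness` tolerance are assumed precomputed and are purely illustrative.

```python
def screen_space_shadow(depth, px, to_light_px, to_light_depth,
                        steps=16, thickness=0.05):
    """March from pixel `px` toward the light in screen space. At each step,
    compare the ray's interpolated depth against the depth buffer; a buffer
    sample slightly in front of the ray (within `thickness`) counts as an
    occluder. Smaller depth means closer to the camera."""
    x, z = float(px), depth[px]
    for _ in range(steps):
        x += to_light_px
        z += to_light_depth
        xi = int(round(x))
        if xi < 0 or xi >= len(depth):
            break               # ray left the screen: information is missing
        d = depth[xi]
        if d < z and z - d < thickness:
            return 0.0          # occluded
    return 1.0                  # no occluder found along the visible ray
```

The out-of-bounds `break` is exactly the incompleteness the text describes: anything off screen, or hidden behind a visible surface, simply cannot occlude here.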
There are many other approaches that try to precompute or cache visibility or lighting information in one way or another. The big problem they all run into is that the data needs to be sampled somehow, and the memory requirements explode as the sampling resolution increases.