>> the ghost of NaN past will visit you, just be careful.
LOL, solid advice. Early OpenGL only allowed 8 lights in a lighting pass, so if you had a bunch of lights, you'd have to find the closest ones and just use those. One technique I liked was light-indexed deferred rendering, which at the time could run on older hardware that didn't have floating-point buffers:
https://mynameismjp.wordpress.com/2012/03/31/light-indexed-d...
It’s not really software vs hardware, it’s the fixed-function pipeline API vs the shader API.
The fixed-function pipeline API only does one thing, one way, with a lot of limitations, while the shader pipeline can do basically anything (it doesn’t even have to be rendering a 3D scene) as long as it’s fast enough for your purpose.
It's not something I've played with before, but it seems to me it would make sense to include a concept of local intensity when selecting lights: something like luminosity / distance^2.
I also suspect that in most situations you can cache most results. Cast one shadow ray per pixel at a light chosen at random, with the selection weighted by intensity (so if half the light that could hit a pixel comes from one source, that light gets picked half the time), and keep a cache of the answers for the other lights. Sit still and it will be basically perfect; the more you move, the more error there will be, but there will also be less ability to notice that error.
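Something like this for the selection step, as a rough sketch (the light fields and function name are made up; the weight is luminosity over squared distance as suggested above):

    import random

    def pick_shadow_ray_target(lights, point):
        """Pick one light to cast the shadow ray at, with probability
        proportional to how much light it could contribute at `point`
        (estimated here as luminosity / squared distance)."""
        def weight(light):
            d2 = sum((a - b) ** 2 for a, b in zip(light["pos"], point))
            return light["luminosity"] / max(d2, 1e-6)
        weights = [weight(l) for l in lights]
        return random.choices(lights, weights=weights)[0]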
I've also been curious about using Z-buffer techniques to determine visibility: render a frame off screen, "color" each pixel with its distance from the camera. An object is visible if this frame shows it at the color corresponding to its distance.
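A toy version of that check, assuming you already have the off-screen depth buffer and the object's projected pixel position (all names hypothetical):

    def is_visible(depth_buffer, px, py, object_depth, epsilon=1e-3):
        """The object is visible at (px, py) if the stored depth there is
        not meaningfully closer than the object's own distance, i.e.
        nothing was rendered in front of it."""
        stored = depth_buffer[py][px]  # depth of whatever "won" that pixel
        return object_depth <= stored + epsilon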
All of these techniques already exist, sometimes in more than one form. There are multiple published papers on sampling millions of light sources efficiently, mainly in offline rendering. There are other approaches that, e.g., cluster far-away light sources. What counts as far away is different for each pixel, so the solution is a global cluster hierarchy and a ___location-specific cut through that tree (a "light cut").
Efficiently determining visibility, even approximate visibility, is what makes rendering such a tough problem in the first place. Naturally, many ideas have been gathered in this space. Shadow maps for light sources act as caches of discretely sampled visibility information per light source. Their sampled nature makes them highly problematic in practice. Many workarounds for the shortcomings have been found over the years, but it's like building a Jenga tower on a shaky foundation.
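The cache lookup itself is just a depth comparison in the light's view; a minimal sketch, assuming the shading point has already been projected into light space and mapped to texel coordinates (names are made up):

    def in_shadow(shadow_map, light_space_pos, bias=0.005):
        """light_space_pos = (u, v, depth) of the shading point in the
        light's view, with u, v already mapped to texel indices. The point
        is shadowed if something closer to the light was stored at that
        texel when the shadow map was rendered."""
        u, v, depth = light_space_pos
        stored_depth = shadow_map[v][u]
        return depth - bias > stored_depth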
Similarly, you can reasonably estimate shadowing from objects in a local area around a pixel by doing screen space tracing, that is, walking the projected line towards the light source in the depth buffer and seeing if any fragment along that line has a depth close enough to your ray at that pixel. Again, this is dealing with sampled and incomplete information (e.g. any shadow caster occluded from view or off screen is missed). But this method is often used on top of other methods to ensure that contact shadows are accurate.
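A rough sketch of that march (fixed step count, linear depth where smaller means closer to the camera, all names hypothetical):

    def screen_space_shadow(depth_buffer, start_px, end_px, start_depth,
                            end_depth, steps=16, thickness=0.05):
        """March from the shading pixel toward the light's on-screen
        position, interpolating the ray's expected depth. If the depth
        buffer is closer than the ray (within `thickness`), some fragment
        occludes the light."""
        (x0, y0), (x1, y1) = start_px, end_px
        for i in range(1, steps + 1):
            t = i / steps
            x = int(x0 + (x1 - x0) * t)
            y = int(y0 + (y1 - y0) * t)
            ray_depth = start_depth + (end_depth - start_depth) * t
            if not (0 <= y < len(depth_buffer) and 0 <= x < len(depth_buffer[0])):
                break  # left the screen: no information, assume unshadowed
            scene_depth = depth_buffer[y][x]
            if scene_depth < ray_depth - 1e-4 and ray_depth - scene_depth < thickness:
                return True  # a fragment sits between this pixel and the light
        return False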
There are many other approaches that try to precompute or cache visibility or lighting information in one way or another. The big problem they all run up against is that the data needs to be sampled somehow, and the memory requirements start to explode as the sampling resolution increases.
One thing I loved about raytracing was all the ways to play around with randomness. Like deciding the ray should go towards that light instead of into the ceiling, but accounting for the probability of this lucky draw. Or playing roulette at each recursion (continue or not?) instead of having a fixed depth threshold.
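The roulette itself is only a couple of lines; the important part is dividing by the survival probability so the estimate stays unbiased. A sketch (throughput here is assumed to be an RGB tuple):

    import random

    def roulette(throughput):
        """Decide whether to continue the path. Returns the weight to apply
        if we continue (1 / survival probability), or None to stop.
        Dividing by the survival probability keeps the estimator unbiased:
        the paths that survive carry the energy of the ones that were killed."""
        survive_p = min(1.0, max(throughput))  # e.g. max RGB component
        if random.random() > survive_p:
            return None                         # terminate the recursion here
        return 1.0 / survive_p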
Right now it would be pretty noisy even in slow-moving games, but it's a good start. I've never done graphics work professionally, so take this with a grain of salt, but I'd wager that fast-moving games can actually get away with tricks like this even more than slow-moving ones: you can add motion blur on top, and players will rarely have a chance to closely inspect the graphics, which lets you take more shortcuts.
When I was ray tracing 15+ years ago, it occurred to me that lights belong in an acceleration structure similar to geometry. It made querying the lights that may be relevant for a given x,y,z point in the scene fast and easy. This article covers a bit more than just that, but the concept is similar.
That's tricky, though -- if you just query the acceleration structure for the nearest lights, you might discard some that are important but far away, and that introduces bias into the results.
The unbiased thing would be to use all the lights, but with some kind of weighted random sampling where you pick nearby lights most of the time, but occasionally pick one that's far away and give its result a higher weight to counteract that it isn't chosen very often.
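In sketch form (the importance and shading callbacks are placeholders), the correction is just dividing the sampled light's contribution by the probability of having picked it:

    import random

    def estimate_direct_light(lights, importance, shade_with):
        """Pick one light with probability proportional to `importance(light)`
        (e.g. luminosity / distance^2), evaluate it fully with `shade_with`
        (which would include the shadow ray), and divide by the pick
        probability. Rarely-picked far-away lights then count with a larger
        weight, so the expected value matches looping over every light."""
        weights = [importance(l) for l in lights]
        total = sum(weights)
        if total == 0.0:
            return 0.0
        idx = random.choices(range(len(lights)), weights=weights)[0]
        pdf = weights[idx] / total
        return shade_with(lights[idx]) / pdf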
In a lot of contexts, though, I can agree that the pragmatic choice might be just to cull the number of lights you have to deal with by defining them to have a maximum range, and if you store them in an acceleration structure it'll be easy to query just the ones that matter.
I think "pragmatic choice" is the keyword here. When working on a general renderer for a general game engine, lighting is a much tougher problem than when working on a specialized renderer for a specialized game engine, where an acceleration structure on the lights easily solves most of the problem IME.
I guess the approach you're looking for to make sure that relevant light sources aren't ignored is covered by the ideas in the clustered deferred shading or clustered forward shading algorithms: you divide your view frustum into a 3D grid, and for each light source you rasterize its sphere of influence (up to an arbitrary cutoff) into that grid. For each grid cell, you keep a list of relevant lights. For each shading point, you look up its grid cell and only shade based on the lights that got rasterized into it. Naturally, you have to decide on an arbitrary cutoff radius for each light source, which introduces bias.
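A heavily simplified version of the assignment step (axis-aligned view box instead of a real frustum with depth slicing, all names made up):

    from collections import defaultdict

    def build_clusters(lights, grid_dims, view_min, view_max):
        """Treat the view volume as an axis-aligned box split into grid_dims
        cells and add each light to every cell that the bounding box of its
        cutoff-radius sphere overlaps. Shading then looks up the cell of the
        point and only uses that cell's light list."""
        def to_cell(p):
            return tuple(
                min(n - 1, max(0, int((p[i] - view_min[i]) /
                                      (view_max[i] - view_min[i]) * n)))
                for i, n in enumerate(grid_dims))
        clusters = defaultdict(list)
        for idx, light in enumerate(lights):
            lo = to_cell([light["pos"][i] - light["radius"] for i in range(3)])
            hi = to_cell([light["pos"][i] + light["radius"] for i in range(3)])
            for i in range(lo[0], hi[0] + 1):
                for j in range(lo[1], hi[1] + 1):
                    for k in range(lo[2], hi[2] + 1):
                        clusters[(i, j, k)].append(idx)
        return clusters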
>> That's tricky, though -- if you just query the acceleration structure for the nearest lights, you might discard some that are important but far away, and that introduces bias into the results.
Agreed. A wide area light can be handled, but my system would ignore 1000 far-away lights, even though they might actually be better modeled as ambient light. But hey, I was ray tracing a maze with hundreds of lights at 10-20 fps on a CPU ;-)
I wonder if it would be possible to store the lights in a 4D spatial data structure with the fourth dimension being intensity, and then find the k nearest neighbors as defined by something akin to the Minkowski metric?
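Reading "akin to the Minkowski metric" as giving the intensity axis the opposite sign, a brute-force sketch (light fields and the scale factor are made up) might look like:

    import heapq

    def k_relevant_lights(lights, point, k, alpha=1.0):
        """Find the k 'nearest' lights under a Minkowski-style signed metric:
        the intensity axis enters with a minus sign, so a bright light counts
        as closer than a dim one at the same distance. `alpha` trades off the
        two axes and would need tuning (or a proper 4D tree for speed)."""
        def score(light):
            d2 = sum((a - b) ** 2 for a, b in zip(light["pos"], point))
            return d2 - alpha * light["intensity"]
        return heapq.nsmallest(k, lights, key=score)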