> The original comment was - "When writing my first OpenGL code, I was stunned by the amount of boilerplate required."
Well, you have to consider what the minimum requirements are to draw something at all. Not just on a computer display, but also in the physical world.
You need a canvas to draw on (a window). You have to prepare your drawing tools (a GL context). You have to sketch out what you want to draw. You have to prepare your paint (either setting up the OpenGL fixed function pipeline state, or compiling shaders). And last but not least, you have to do the actual drawing.
Old-style OpenGL (fixed function pipeline with immediate mode vertex calls) and modern OpenGL (shader based) pretty much even out in the number of characters you have to type to draw a triangle (fixed function wins slightly if you use client-side vertex arrays). The main difference is that the fixed function pipeline gives you a default state that will show you something. But that's something only OpenGL does; you'll find it in no other graphics system (and I'm including all the offline renderers there).
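To make the comparison concrete, here is a rough sketch of both draw paths (assuming a window and GL context already exist and, for the modern path, that a program object and a VBO/VAO have been set up beforehand -- which is exactly where the extra boilerplate lives):

```c
/* Old style: fixed function pipeline, immediate mode.
 * Works out of the box thanks to OpenGL's default state. */
glBegin(GL_TRIANGLES);
glColor3f(1.f, 0.f, 0.f); glVertex2f(-0.5f, -0.5f);
glColor3f(0.f, 1.f, 0.f); glVertex2f( 0.5f, -0.5f);
glColor3f(0.f, 0.f, 1.f); glVertex2f( 0.0f,  0.5f);
glEnd();

/* Modern style: the draw call itself is just as short, but it
 * presumes 'program' (compiled + linked) and 'vao' (vertex data)
 * were created elsewhere -- there is no default state to fall
 * back on. */
glUseProgram(program);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
```

Note that the modern draw call is not longer; the difference is entirely in the setup that must happen before it.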
> It's clearly not minimal because we've outlined ways it could be reduced
But this reduction only works for the simplest, most basic images. As soon as you're drawing something slightly more complex than an RGB triangle, the shader based approach quickly wins out. The very first shader languages were created not for programmable hardware (that didn't exist yet), but to kill the dozens of lines of OpenGL code required to set up a register combiner, and to replace them with an API that let the programmer write down the intention at a high level, as mathematical expressions.
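As an illustration of that high-level style: a tinted texture lookup, which would take a page of NV_register_combiners setup calls, reduces to a single expression in a fragment shader. A minimal hypothetical example:

```glsl
#version 110
uniform sampler2D tex;
varying vec2 uv;
varying vec4 tint;

void main() {
    /* The whole "combiner program" is just this one
     * mathematical expression. */
    gl_FragColor = texture2D(tex, uv) * tint;
}
```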
Also, let's not forget that if you were really after simplifying things for newbies, the right approach would be to provide a library that does simple shader setup. Completely ignoring OpenGL for a moment: you'd hardly put a newbie through the paces of touching the native APIs directly to set up a GL context (well, okay, that is how I learnt it, because back then I couldn't make GLUT work with my compiler). So maybe provide something like a GLU-2 or GLU-3 for that; the GLU spec has not been maintained for two decades.
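Such a helper library could be as thin as a compile-and-link wrapper. A hypothetical sketch (the name `glu3_make_program` is made up; error handling trimmed, and it assumes a current GL context plus a function loader such as GLEW or glad):

```c
#include <stdio.h>
#include <GL/gl.h>

static GLuint compile(GLenum type, const char *src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    GLint ok;
    glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(s, sizeof log, NULL, log);
        fprintf(stderr, "shader error: %s\n", log);
    }
    return s;
}

/* The one call a newbie would ever see:
 * shader sources in, usable program out. */
GLuint glu3_make_program(const char *vs_src, const char *fs_src) {
    GLuint p = glCreateProgram();
    glAttachShader(p, compile(GL_VERTEX_SHADER, vs_src));
    glAttachShader(p, compile(GL_FRAGMENT_SHADER, fs_src));
    glLinkProgram(p);
    return p;
}
```

With a handful of helpers like this, the shader based path would be no more intimidating than the fixed function default state ever was.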
I feel this discussion is moot (regarding OpenGL) until everybody participating has implemented at least one complex OpenGL scene using just register combiners. I'll throw in my purely register-combiner-based water ripple effect, which I implemented some 16 years ago (I have the sources for that somewhere, but if I can't find them I can reimplement it), and compare that with the straightforwardness of the shader based approach.