I don't know what you are talking about. My production setup is dumber than you would think: a bunch of bash scripts in a git repository. You just git pull and run a single bash script. ./app build builds the entire thing, and ./app start runs it.
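The shape of that kind of single-entrypoint script is roughly the following. This is only a sketch: the subcommand bodies are placeholders, since the actual build and start steps of the poster's script are unknown.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a "./app" dispatcher; real build/start steps are assumed.

app() {
  case "${1:-}" in
    build) echo "building the app" ;;   # e.g. install deps, compile assets
    start) echo "starting the app" ;;   # e.g. exec the server process
    *)     echo "usage: ./app {build|start}" >&2; return 1 ;;
  esac
}

app build   # prints "building the app"
```

The appeal is that one file in the repo is both the documentation and the automation: whatever "build" means this week, ./app build does it.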
Yes, you do need to configure some environment variables, as with almost any other form of deployment.
Messing around with tar files or flat binaries doesn't work for me. I tried that. I built a Python app into an executable with Nuitka, but it generates a whole folder anyway. Fine, throw the folder in a tar file. Nope, it doesn't run on the server, because the developer machine's glibc is newer than the target's. Nice try. I have to build it on the target machine. Amazing.
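For anyone who hits the same wall: a binary linked against glibc records versioned symbols (GLIBC_2.xx), and it refuses to start if the target's libc is older than the newest symbol it demands. A quick way to compare the two sides, assuming a glibc system with binutils installed (app.bin is a placeholder name):

```shell
# glibc version available on this machine:
ldd --version | head -n1

# Highest glibc symbol version the built binary actually requires
# (run against the real build output; app.bin is a placeholder):
# objdump -T app.bin | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1
```

If the second number is higher than the first on the target, the binary won't run there, which is why building on the oldest machine you intend to support (or on the target itself) is the usual workaround.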
I feel the same about your post. I have no idea how deploying a flat binary or Python tarball is better than a container image. The entire system is effectively a dependency. Docker (or equivalent) lets me ship the entire system. Moreover I can take quite literally an image of that one system and deploy it to any number of machines identically, without any rebuilding. And I can host it interchangeably on any number of cloud providers with only small changes in configuration. The benefits go on and on. I would only ever bother with a "plain" deployment in the future if I had complete control over my deployment environment (+ the in-house expertise and time budget to manage it) and the performance overhead of the container was unacceptable.
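The "ship the entire system" flow amounts to something like the following. Every detail here is a placeholder (base image, paths, module name), not anyone's actual setup; it's just the shape of the pattern.

```dockerfile
# Placeholder Dockerfile: the whole userland ships inside the image,
# so the build host's glibc vs. the target's stops mattering.
FROM python:3.12-slim
WORKDIR /srv/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "-m", "myapp"]
```

You build the image once, push it to a registry, and every host pulls and runs the exact same bytes; no per-machine rebuild step exists at all.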
In general, without some qualification like that, deploying an image is far worse than deploying a flat binary, or a tarball for an interpreted language.