FWIW: There are a bunch of ways to "do" php execution, and a lot of them are wrong. That's not exactly PHP's fault, just that there's been a lot of blind-leading-the-blind.
Assuming that you're not spinning up and tearing down a container for every request, you want to be sure you're running php with a php-fpm configuration (preferably talking over a unix socket) -- this is the fastcgi process manager, which maintains a pool of "hot" php interpreters for you that are immediately ready to execute code for an inbound request. This is usually good enough for most applications without going into the weeds on things like opcode caching, but that's all available as options too.
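To make that concrete, here's a minimal sketch of what a php-fpm pool configured to listen on a unix socket might look like. The pool name, socket path, and process-manager numbers are all assumptions for illustration; the real file usually lives somewhere like /etc/php/8.x/fpm/pool.d/ and should be tuned to your workload.

```ini
; Hypothetical pool config -- paths and numbers are placeholders.
[www]
; Listen on a unix socket instead of TCP; your web server's
; fastcgi_pass / proxy config must point at the same path.
listen = /run/php/php-fpm.sock
listen.owner = www-data
listen.group = www-data

; Keep a pool of warm interpreters ready for inbound requests.
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
```

With this in place the web server hands each request to an already-running interpreter over the socket, instead of paying process startup cost per request.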
I'd be happy to help troubleshoot this with you if you're interested. I've also got a fully automated build script that works pretty well. You can find my contact info via the link in my profile. I promise it doesn't have to take anywhere near 300ms for php to reply to a request.
My experience is that Symfony, in its fat, batteries-included form, does take quite a long time to initialize on cold caches. The first request can take hundreds of milliseconds, but usually the very next request is in the normal 10ms range. This is especially noticeable on (cheap) shared hosting, which has always been a common place to run PHP.
I've never found it to be a problem on a VPS, but if you're developing on something slow like a Raspberry Pi I can see this happening regularly. If you're used to deploying stuff in containers and constantly throwing out the cache, the initialisation problem can happen very easily as well.
IIRC back in Symfony 1, part of it was because Symfony used the first request to write optimized versions of the code into a folder that I cannot remember the name of.
I think there was a way to do this up front in Symfony 1, and there might be some way to do this as part of a build and deploy step. Of course, on applications with less traffic and less strict requirements, just hooking something up to run a request immediately after deploy might work just as well. But if the app runs across n small pods, it's probably a much better idea to warm the cache on a beefy build machine than on n resource-constrained pods, where it would cause slow loading for n customers.
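For modern Symfony a warmup step does exist as a console command, so a build/deploy script can do this up front. Here's a rough sketch of both approaches mentioned above: warming on the build machine, or priming with a single request after deploy. The paths and the URL are assumptions, not anything from a real setup.

```shell
#!/bin/sh
# Hypothetical deploy-time warmup helpers -- paths/URLs are placeholders.
set -e

warm_cache() {
  # Build the Symfony cache during the build step, on the build machine,
  # so pods ship with a warm cache instead of paying the cost per pod.
  php bin/console cache:warmup --env=prod --no-debug
}

prime_first_request() {
  # Alternative for low-traffic apps: hit the app once right after
  # deploy so the first real visitor doesn't eat the cold-cache cost.
  curl --silent --fail --output /dev/null "https://app.example.com/" || true
}
```

The first option scales better across many pods; the second is a pragmatic hack when you control a single deployment target.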