That sounds positively dystopian. Is it really that hard to dump to private/non-vendor storage for local analysis using your own tools?
I do not do cloud or web development, so this is just totally alien. I generate multi-gigabyte logs with billions of events for just seconds of execution and get to slice them however I want when doing performance analysis. The inability to even process your own logs seems crazy.
You can absolutely dump the traces somewhere and analyze them yourself. The problem is that this falls apart at scale. You are maybe serving thousands of requests per second, your service has a ton of instances, and capturing all trace data for all requests from all services is just difficult. Where do you store all of it? How do you quickly find what you need? It gets very annoying very fast. When you pay a vendor, you pay them to deal with this.
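To make the scale point concrete, here is a back-of-envelope estimate of raw trace volume. All of the parameters are assumptions chosen for illustration (the thread only says "thousands of requests per second"), not real numbers from anyone's system:

```python
# Rough estimate of how fast raw trace data piles up.
# Every value below is an assumed, illustrative parameter.
requests_per_second = 2_000   # "thousands of requests per second"
spans_per_request = 50        # spans across all services one request touches
bytes_per_span = 500          # serialized span with IDs, timestamps, tags

seconds_per_day = 86_400
bytes_per_day = requests_per_second * spans_per_request * bytes_per_span * seconds_per_day

print(f"{bytes_per_day / 1e12:.1f} TB of raw span data per day")
```

Even with these modest assumptions you land in the terabytes-per-day range, which is why full-fidelity capture gets replaced by sampling or handed off to a vendor.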