I know you are sharing your experience, but others are as well. Let's not dismiss others' experiences just because they don't match our own; the truth is most likely somewhere in the middle, especially when so many people are saying they had pain using K8s.
The initial deployment for EKS requires multiple plugins to get to something that is "functional" for most production workloads. K8s fails in spectacular ways that require manual intervention (even using Argo; worse using Argo, TBH). Local disk support for certain types of workloads is severely lacking. Helm is terrible (templating YAML... 'nuff said). Security groups, IAM roles, and other cloud provider integrations require deep knowledge of both K8s and the cloud provider. Autoscaling with Karpenter is difficult to debug, and Karpenter doesn't gracefully handle spot instance pricing. The Helm and Karpenter points are sketched below.
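To make the Helm complaint concrete, here's a toy fragment of the kind of chart you end up writing. Everything here is hypothetical (the chart, values, and names are made up), but the pattern is the standard one: you're generating YAML with a text templater, so `toYaml`, `nindent`, and the `{{- }}` whitespace chomping all have to line up exactly or you render invalid YAML, and there's no type checking to catch it.

```yaml
# templates/deployment.yaml (hypothetical chart fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      annotations:
        # toYaml dumps the podAnnotations map from values.yaml; nindent
        # re-indents it 8 spaces and the leading "{{-" chomps the newline
        # before it. Get any of those wrong and the chart renders broken YAML.
        {{- toYaml .Values.podAnnotations | nindent 8 }}
```

On the Karpenter side, a minimal NodePool sketch (assuming the karpenter.sh/v1 API; the names are placeholders). Note there's no field that says "back off when spot pricing gets bad": you declare the allowed capacity types, and when something goes sideways you reverse-engineer the controller's launch decisions from its logs and events.

```yaml
# Hypothetical Karpenter NodePool allowing spot and on-demand capacity.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]  # Karpenter chooses; you don't
      nodeClassRef:                      # points at cloud-provider config,
        group: karpenter.k8s.aws         # which is where the IAM/SG knowledge
        kind: EC2NodeClass               # mentioned above comes in
        name: default
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```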
I could go on, but these are the things you will experience in the first couple of days of attempting to use K8s. Overall, if you have deep knowledge of K8s, go for it, but it is not the end-all solution to infra/container orchestration in my mind.
I fought with a workload for over a day alongside our K8s experts; it then took me an hour to deploy it to an EC2 ASG as a temporary release, with the plan of moving it back to K8s later. K8s IS difficult, and saying it's not has a lot of people questioning the space.
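For contrast, a rough sketch of the kind of thing that ASG deploy amounts to (hypothetical CloudFormation; the AMI, subnets, and app image are all placeholders, not what we actually ran). It's not apples-to-apples with everything K8s gives you, but it's roughly the whole artifact, which is why it takes an hour instead of a day:

```yaml
# Hypothetical CloudFormation sketch of a temporary EC2 ASG release.
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # placeholder AMI
        InstanceType: m5.large
        UserData:
          Fn::Base64: |
            #!/bin/bash
            # placeholder start command; assumes Docker is in the AMI
            docker run -d --restart=always -p 80:8080 myapp:latest
  AppASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "10"
      DesiredCapacity: "2"
      VPCZoneIdentifier:
        - subnet-aaaa1111   # placeholder subnets
        - subnet-bbbb2222
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
```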
The way I see it, it starts off easy and quickly ramps up to extreme complexity. That should not be the case.
I worked at a company that had its own deployment infra stack, and it was 1000x better than K8s. I believe that's the next step in the K8s space: it may use K8s under the covers, but the level of abstraction K8s exposes is all wrong IMO, and it is trying to do too much.