Let me start with a disclaimer. The right answer is "it depends", as with anything tech related, but I want to offer you a few tips from my long experience with containers, cloud, and Kubernetes.
Kubernetes is complex
I agree, but it solves complicated problems. Complexity is part of our job as engineers and developers; we deal with it every day. The right question is: "do we care about this particular complexity?" To answer it, look at your skills. Are you already familiar with containers? If containers are a good way to package your application, I think you should use them. They help you move your applications around, and nowadays you do that all the time. When doing continuous integration you ship your application and its system requirements to a runner that builds and runs it, and containers give you isolation for free. You ship your application to collaborators, to outside QA engineers, or to customers, and containers help you reproduce a program because it comes with all its dependencies well described. Containers are a solid standard nowadays; make them part of your build system.
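As a sketch of what "packaging your application with its dependencies" means in practice, a minimal Dockerfile can be enough (the base image, file names, and entrypoint here are just placeholders, assuming a Python application):

```
# Package the app together with its runtime and dependencies
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Ship the application code itself
COPY . .
CMD ["python", "main.py"]
```

The same image then runs identically on your laptop, on a CI runner, and on a customer's machine.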
I am ok with containers but we are here for Kubernetes
I know! And I am sorry, but building blocks are important, and if you are familiar with the small unit of work (the container) you are almost there, because now the question is: "how do I run my containers?" Docker Compose, systemd, dockerd, Kubernetes, EKS (Amazon Elastic Kubernetes Service), or other solutions like Rancher and VMware Tanzu; you can even get it as a service, since every cloud provider sells a Kubernetes environment nowadays. Any of those solutions can work for you. Some of them are low risk, familiar, and static, and here I am speaking about Compose, systemd, and dockerd. You configure your systemd service to run a container, or you type:
$ docker run -d --restart=always yourcontainer
And you have a long-running container that restarts automatically, with the possibility to get logs out of it, to manage rolling updates (manually or via configuration as code), and so on...
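For the systemd route, a minimal unit sketch could look like this (the unit and image names are my own placeholders, and it assumes Docker is installed on the host):

```
# /etc/systemd/system/yourcontainer.service
[Unit]
Description=Run yourcontainer as a long-running service
After=docker.service
Requires=docker.service

[Service]
# Remove any leftover container so a fresh one starts on every restart
ExecStartPre=-/usr/bin/docker rm -f yourcontainer
ExecStart=/usr/bin/docker run --rm --name yourcontainer yourcontainer
Restart=always

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now yourcontainer` gives you the same "restart forever" behavior as the `--restart=always` flag, and `journalctl -u yourcontainer` gets the logs out.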
The problem is, I don't think real life is that boring. Your environment is not static; it is dynamic, because you develop quickly and you want your users to benefit from your code as soon as possible. It is high risk: your customers will use your application in ways you didn't think about, and an update should be easy to track, roll back, and validate. A server can fail or disappear, the network can become unreliable, and you need more than a few nodes. You want a system that survives all of that, and an application that can handle such failures.
Having Kubernetes on board from the early days, if you can invest in something like that, sets the stage for developing an application that survives a bit better in a realistic high-load environment. I am not saying you should run it yourself, and you don't need to be an expert; you can treat it as a commodity, as a platform.
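Treating Kubernetes as a platform, the same long-running container from above becomes a Deployment; a minimal sketch (the names, tag, and replica count are placeholders):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapp
spec:
  replicas: 3            # spread over nodes, survives the loss of one
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
      - name: yourapp
        image: yourcontainer:1.0
```

Rolling updates, rollbacks (`kubectl rollout undo deployment/yourapp`), and rescheduling on node failure all come with it, which is exactly the dynamic, high-risk scenario described above.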
Is this the only answer?
Luckily for us, no! And I didn't write this blog post to sell Kubernetes licenses. Depending on your skill set, a container orchestrator is a tool that I think you should adopt as soon as possible. But what if you don't have the time to develop those skills? There is value in avoiding an extra layer of complexity (containers) while achieving the same goals when it comes to isolation, immutability, and security. VMs are the answer.
Do VMs work as well as Kubernetes?
Many people will say the opposite, but I think VMs still have a lot to offer, and what we have learned from Docker is crucial. This is something I do very often in tech: I jump into a new technology driven by social media, influencers, Twitter, and as soon as possible I try to distill those concepts into something I find valuable. Being able to build and push an application alongside all its dependencies changed the game for me. But that is something you can also do with HashiCorp Packer, with LinuxKit (spoiler: it uses containers), or with NixOS (a tool I love). You don't need to onboard containers to build and push an artifact that is a bit more sophisticated than a ten-year-old zip file containing your application. Just a reminder: a Docker image is a tarball as well.
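As a sketch of the Packer approach (the region, base image filter, and provisioning step are just placeholders I made up), an HCL template that bakes your application into a VM image could look like:

```
source "amazon-ebs" "app" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  ami_name      = "yourapp-{{timestamp}}"
  ssh_username  = "ubuntu"

  # Start from the latest official Ubuntu 22.04 AMI
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.app"]

  # Ship the application and its dependencies into the image
  provisioner "shell" {
    inline = ["sudo apt-get update", "sudo apt-get install -y yourapp"]
  }
}
```

The output is an immutable AMI with your application and its dependencies baked in: the VM equivalent of a Docker image.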
Being able to orchestrate such images, containers, or VMs is another story. If you have read this far you know that Kubernetes is a good solution, but you can use other hypervisors if you are so inclined. I am not an expert with hypervisors, or at least not with the traditional ones, but I know the AWS API very well: I have used EC2, autoscaling groups, VPCs, and so on, and I see those tools as a hypervisor. You can achieve great results building your own solution with those services. I had a chat with Gwen Shapira about this topic and my past experience developing a control plane on top of AWS EC2, VPCs, and so on during the SaaS Developer Community podcast: https://www.youtube.com/watch?v=b3ZE6KQtJ1c
Pros and cons: what should I do?
Are you more or less confused about this topic now? If you feel like I didn't give you a solid answer, you are right. Personally, I pick the right solution for the situation. When I was developing a SaaS where the core product was the orchestrator, being able to develop and control the full lifecycle was a plus, and a lot of fun as well. In the end, after a few years and thousands of EC2 instances, the company moved to Kubernetes; I can't tell how it is going, but I have mixed feelings about that decision. For a company with interoperability and scalability in mind, developing its operational experience on top of the Kubernetes API is a plus in my opinion. And as I said, there are solutions that do not require you to be a Kubernetes expert: you can use a cloud provider, or you can hire a consultant, for example.
I have a lot of friends concerned about remote work, because they feel disconnected and lonely. I always tell them that being at home all by yourself is not mandatory: you can visit your local bar, or you can work from a library. With Kubernetes it is the same. You don't need to run it on your own to be able to use it.