Interesting:
Docs is the result of a joint effort led by the French 🇫🇷🥖 (DINUM) and German 🇩🇪🥨 (ZenDiS) governments. We are always looking for new public partners (we are currently onboarding the Netherlands 🇳🇱🧀). Feel free to reach out if you are interested in using or contributing to Docs.
https://docs.numerique.gouv.fr/login/
#docs #opensource #eu #europe #france #germany #netherlands #collaboration #fosdem
Honestly, a lot of the time I don’t understand why so many businesses use k8s.
At my company especially, we know almost exactly what our traffic will look like from 9am to 5pm. We don’t really need flexible scaling, yet we still use it because the technology is hyped. It’s similar with the cloud: we certainly don’t need to be spending as much as we do, but since everyone else is on or migrating to the cloud, we are as well.
The “problem” with k8s is not that it’s abstract-y (it’s not inherently any more abstract than docker), it’s that it’s very complex and enterprise-y.
The need for such a complex orchestration layer is not necessarily immediately obvious until you’ve worked on a complex infra setup that wasn’t deployed with kubernetes. Believe me, once you’ve seen the depths of hell that are hundreds of separately configured customer setups using thousands of lines of ansible playbooks, all using ad-hoc systems for creating containers/VMs, with even more ad-hoc and hacked-together development and staging environments, k8s suddenly starts looking very appetizing. Instead of an abominable spaghetti of bash scripts, playbooks, and random documentation, you get one common (albeit complex) set of tools, understood by every professional, which manages your application deployment & configuration, redundancy, software upgrades, firewall configs, etc.
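To make that concrete, here is roughly what “one common set of tools” looks like in practice: the whole deployment (image, replica count for redundancy, upgrade path) is a single declared object against a single API. This is only a minimal sketch using the official kubernetes Python client; the name, image, and replica count are made up for illustration.

    # Minimal sketch: declaring an application deployment through the Kubernetes API
    # with the official `kubernetes` Python client. The name, image, and replica
    # count below are illustrative assumptions, not anything from the thread above.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config, same credentials as kubectl
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # redundancy: the cluster keeps 3 copies running across nodes
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="nginx:1.25",  # upgrading = changing this field and re-applying
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ]
                ),
            ),
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)

The same object carries the upgrade story too: bump the image field and re-apply, and the new version rolls out, instead of re-running a pile of per-customer playbooks.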
A small self-hosted production kubernetes cluster doesn’t have to be hard to operate or significantly more expensive than bare metal: you can buy 3U of rack space, plop in 3 semi-large servers (think 128 GB of RAM plus a few TB of SSD in RAID), install rancher and longhorn, and now you’ve got a prod cluster large enough for nearly every workload. If you ever need to upgrade beyond that, it means you have so many customers that hiring a dedicated k8s administrator will be a no-brainer.
Or you can buy minutes from AWS because CapEx is the absolute devil and instead you pay several times as much in OpEx to make it someone else’s problem. But if you’re doing that then you’re not comparing against “installing things the old-fashioned way”.
Thanks for the response!
I personally haven’t rolled a k8s or k3s cluster, so it’s always felt a bit abstract to me. I probably should, though, to demystify it for myself in my work environment.
Complexity is definitely what I’ve noticed when I see my devops team’s PRs into the ingress directories.
I guess the abstract issue I see, which ties in to the meme I shared above, is that sometimes around deploys we get blips of 503/504s and we seem unable to track them down. Is it the load balancer? Ingress? Kong? The fact that there are so many layers makes infra issues rough to debug.
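For what it’s worth, one low-tech way to narrow those blips down is to poll each layer separately during a deploy and watch which one starts returning 5xx first. A rough Python sketch; the three URLs are placeholders for an external load balancer, the Kong/ingress host, and a port-forwarded pod, and will obviously differ in any real setup.

    # Rough sketch: poll each layer during a rollout and log whichever one 5xxs first.
    # The URLs below are placeholders; substitute your own LB, ingress/Kong host,
    # and a pod reached via `kubectl port-forward`. Stop with Ctrl-C.
    import time
    import requests

    LAYERS = {
        "load-balancer": "https://app.example.com/healthz",
        "ingress/kong":  "https://kong.internal.example.com/healthz",
        "pod-direct":    "http://127.0.0.1:8080/healthz",  # via kubectl port-forward
    }

    while True:
        for name, url in LAYERS.items():
            try:
                status = requests.get(url, timeout=2).status_code
            except requests.RequestException as exc:
                status = f"error: {exc.__class__.__name__}"
            # Only report failures (HTTP 5xx or connection errors).
            if not (isinstance(status, int) and status < 500):
                print(f"{time.strftime('%H:%M:%S')} {name}: {status}")
        time.sleep(1)

If the pod stays healthy while the layers in front of it blip, the usual suspects are readiness probes and connection draining during the rollout rather than the application itself.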
Kubernetes is not really meant primarily for scaling. Even kubernetes clusters need node autoscaling groups or horizontal pod autoscalers bolted on to support it, and those are relatively minor features.
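To illustrate the “minor feature” point: a horizontal pod autoscaler is just another object you opt into and attach to an existing deployment, not something the cluster does by default. A minimal sketch with the official kubernetes Python client; the target name and thresholds are made up.

    # Sketch: attaching a HorizontalPodAutoscaler to an existing Deployment with
    # the official `kubernetes` Python client. The target name and thresholds are
    # illustrative; autoscaling is opt-in, not default cluster behaviour.
    from kubernetes import client, config

    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web",
            ),
            min_replicas=3,
            max_replicas=10,
            target_cpu_utilization_percentage=80,  # scale out above 80% average CPU
        ),
    )

    autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)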
The benefits are pooling computing resources and effectively creating a private cloud: easy replication of applications in case of hardware failure, and a single language for deploying applications, network controls, etc.