Re: Docker

From: Tim Daneliuk <tundra_at_tundraware.com>
Date: Mon, 17 Apr 2023 19:11:12 UTC
On 4/17/23 14:00, Paul Pathiakis wrote:
> 
> I was pretty crotchety when people started with containers and virtualization and... MICROSERVICES....  I was like:  you do understand what a scheduler and time sharing are all about, right?  You know about UNIX, right?  You do know about "nice"?  Why would I want to put limits on some simulated machine and expect it to work better than the job control and scheduler of the kernel?


I will say that, having now spent the better part of a decade
around microservices, Docker, K8s, et al., the current
state of the art allows for a lot of control over
resource consumption and scaling/performance.

K8s, in particular, gets away from the idea that
you have to tune the host and instead lets you
think about optimizing the containers that
run there.  At least for business applications, the
vast majority of the time, the throughput issues
are far more about I/O constraints than about
CPU or concurrency limits.
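
For instance, the per-container requests/limits model is where most
of that control lives.  A rough sketch using the official Python
kubernetes client (the pod name, image, and numbers here are
invented, purely for illustration):

    # Sketch only: create a pod whose container declares resource
    # requests (used by the scheduler for placement) and limits
    # (enforced on the node via cgroups).
    from kubernetes import client, config

    config.load_kube_config()  # assumes a working kubeconfig

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-pod"),   # hypothetical name
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="nginx:1.25",                   # placeholder image
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "250m", "memory": "256Mi"},
                        limits={"cpu": "1", "memory": "512Mi"},
                    ),
                )
            ]
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

The point is that the knobs hang off the container/pod spec, not off
the host it happens to land on.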

Optimizing schedulers, nice, pinning CPUs and all the rest
make sense when your point of tuning is a single, monolithic
kernel running a process model.  But with K8s, you can
think in terms of logically infinite capacity (at a price,
of course), where your optimization surface is the service
itself and the class of K8s pod you are running on.
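
And "logically infinite capacity" in practice just means scaling the
service horizontally instead of squeezing one box harder.  Another
sketch, same Python client, assuming a recent release with the
autoscaling/v2 API (the deployment name and thresholds are made up):

    # Sketch only: autoscale a hypothetical Deployment "demo-svc"
    # between 2 and 50 replicas, targeting ~70% average CPU use.
    from kubernetes import client, config

    config.load_kube_config()

    hpa = client.V2HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="demo-svc-hpa"),
        spec=client.V2HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V2CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="demo-svc",
            ),
            min_replicas=2,
            max_replicas=50,   # the "price" shows up on the cloud bill
            metrics=[
                client.V2MetricSpec(
                    type="Resource",
                    resource=client.V2ResourceMetricSource(
                        name="cpu",
                        target=client.V2MetricTarget(
                            type="Utilization", average_utilization=70,
                        ),
                    ),
                )
            ],
        ),
    )

    client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa,
    )

You stop asking "how do I make this machine faster?" and start asking
"what does one replica of this service need, and how many replicas do
I want to pay for?"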

This model is not useful for everything.   There have been
attempts, for example, to decompose the FreeBSD kernel itself into
a set of loosely coupled services coordinated via message
passing.  If memory serves, this was the intent of
DragonflyBSD, but I'm not certain of that.

"There are 20 ways to solve a problem. 3 of them will work well enough."