Free Republic


Linux is so grown up, it's ready for marriage with containers
The Register ^ | 07 April 2016 | Liam Proven

Posted on 04/07/2016 10:46:02 AM PDT by ShadowAce

Linux is all grown up. It has nothing left to prove. There's never been a year of the Linux desktop and there probably never will be, but it runs on the majority of the world's servers. It never took over the desktop, it did an end-run around it: there are more Linux-based client devices accessing those servers than there are Windows boxes.

Linux Foundation boss Jim Zemlin puts it this way: "It's in literally billions of devices. Linux is the native development platform for every SoC. Freescale, Qualcomm, Intel, MIPS: Linux is the immediate choice. It's the de facto platform. It's the client of the Internet."

Linux is big business, supported by pretty much everyone – even Microsoft. Open source has won, but it did it by finding the niches that fit it best – and the biggest of these is on the millions of servers that power the Web. Linux is what runs the cloud, and the cloud is big business now.

Which is why last year's LinuxCon Europe was full of smartly dressed professionals rather than beards and beer-guts, and also why every other talk seemed to be about containers.

One of the core technological enablers of the cloud is virtualisation: the fêted "web scale" is achieved by dividing tasks across multiple separate servers, and bringing those servers online as and when the load requires by starting and stopping VMs.

But VMs are expensive. Not in terms of money – although they can be – but in resources and complexity. Whole-system virtualisation is a special kind of emulator: under one host OS, you start another, guest one. Everything is duplicated – the whole OS, and the copy that does the work is running on virtual – in other words: pretend, emulated – hardware, with the performance overhead that implies. Plus, of course, the guest OS has to boot up like a normal one, so starting VMs takes time.

Which is what has led one wag to comment that: "Hypervisors are the living proof of operating system's incompetence."

Fighting words! What do they mean, incompetence? Well, here are a few examples.

The kernel of your operating system of choice doesn't scale well onto tens of cores or terabytes of NUMA RAM? No problem: partition the machine, run multiple copies in optimally sized VMs.

Your operating system isn't very reliable? Or you need multiple versions, or specific app versions on the operating system? No problem. VMs give you full remote management, because the hardware is virtual. You can run lots of copies in a failover cluster – and that applies to the host hardware, too. VMs on a failed host can be auto-migrated to another.

Even at the small end of the scale – a SOHO operation with one server – it still helps. Operating system needs specific drivers and config to boot on a particular model of machine? If the box dies, the backup can't just be restored onto a newer replacement – it won't boot. No problem: dedicate the box to run a single VM. This provides a standard hardware template, eliminating driver problems. You can move the installed OS from one machine to another with impunity, unlike a bare-metal install. It facilitates backup and restore, capacity planning and more.

Make no mistake, virtualisation is a fantastic tool that has enabled a revolution in IT. There are tons of excellent reasons for using it, which in particular fit extremely well in the world of long-lived VMs holding elaborately configured OSs which someone needs to maintain. It enables great features, like migrating a live running VM from one host to another. It facilitates software-defined networking, simplifying network design. If you have stateful servers, full of data and config, VMs are just what you need.

And in that world, proprietary code rules: Windows Server and VMware, and increasingly, Hyper-V.

But it's less ideal if you're an internet-centric business, and your main concern is quick, scalable farms of small, mostly-stateless servers holding microservices built out of FOSS tools and technologies. No licences to worry about – it's all free anyway. Spin up new instances as needed and destroy them when they're not.

Each instance is automatically configured with Puppet or Ansible, and they all run the same Linux distro – whatever your techies prefer, which probably means Ubuntu for most, Debian for the hardcore and CentOS for those committed to the RPM side of the fence.
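As a sketch of that workflow – with the host group, package and service names being illustrative assumptions, not anything from the article – an Ansible playbook for stamping out identical instances might look something like this:

```yaml
# Hypothetical playbook: make every instance in the "webservers"
# group identical. Group, package and service names are made up
# for illustration.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and starts at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Because the playbook is declarative and idempotent, it can be re-run against a freshly spun-up instance at any time, which is what makes the "spin up as needed, destroy when not" model practical.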

In this world, KVM and Xen are the big players, with stands and talks at events such as LinuxCon devoted to them. Free hypervisors for free operating systems – but the same drawbacks apply: running Linux under Linux means lots of duplication of the stack, lots of unnecessary virtualisation of hardware, inefficient resource-sharing between VMs, slow VM start-up times, and so on.

And the reason that everyone is talking about containers is they solve most of these issues. If your kernel scales well and all your workloads are on the same kernel anyway, then containers offer the isolation and scalability features of VMs without most of the overheads. We talked about how they work in 2011, but back then, Linux containers were still fairly new and crude.

Since then, though, one product has galvanised the development of Linux containers: Docker. Originally a wrapper adding some handy additional facilities to LXC, Docker has expanded to support multiple back-ends. A whole new section of the software industry is growing around Docker. New types of Linux distro are being built to host Docker containers, such as CoreOS and Red Hat's Project Atomic. CoreOS also has its own rival format to Docker, called Rocket. Docker isn't limited to Linux, either. Existing Docker containers can be run on Joyent's SmartOS, based on a fork of OpenSolaris, and a version of Docker will be available to manage the Windows containers of Windows Server 2016, too. Even Oracle is making interested noises.
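To make the container idea concrete, an image is typically described by a small Dockerfile. This is a minimal, hypothetical example – the base image and paths are illustrative assumptions, not taken from the article:

```dockerfile
# Hypothetical image definition: a static site served by nginx.
# Base image and file paths are illustrative.
FROM nginx:alpine

# Copy the site's files into the image at build time.
COPY ./site /usr/share/nginx/html

# Document the port the container listens on.
EXPOSE 80
```

Building with `docker build -t mysite .` and starting with `docker run -d -p 8080:80 mysite` brings the service up in a fraction of a second – no guest kernel has to boot, because the container shares the host's.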

Meanwhile, Canonical has a different take on the containers model with its own flavour, LXD.

None of this means the end of "traditional" virtualisation. Containers are great for microservices, but at least in their current incarnations, they're less ideal for existing complex server workloads. The current generation of management tools is also far weaker, and as such, most people are running their containerised workloads on top of a host OS inside a VM – even though there are performance penalties to doing so.

Plus, as containers pose a clear threat to existing hypervisor vendors, companies are scrambling to find ways to make VMs that behave more like containers.

Now that operating-system level virtualisation has finally arrived on the default Unix of the Web era, it is poised to radically transform the market – and Linux. And that means, too, lots of new code and lots of new buzzwords. ®


TOPICS: Computers/Internet
KEYWORDS: computers; computing; containers; linux; vm

1 posted on 04/07/2016 10:46:02 AM PDT by ShadowAce

To: rdb3; Calvinist_Dark_Lord; JosephW; Only1choice____Freedom; amigatec; Ernest_at_the_Beach; ...

2 posted on 04/07/2016 10:46:24 AM PDT by ShadowAce (Linux - The Ultimate Windows Service Pack)

To: ShadowAce
the whole OS, and the copy that does the work is running on virtual – in other words: pretend, emulated – hardware

Yeah, there is a penalty to be paid for that kind of emulation, but the fact that the hardware presented to the OS running in the VM is always the same is an awesome benefit. I'm surprised more appliance-type devices aren't actively marketed as a VM package, because it really simplifies support. You don't have to worry about what ethernet card is installed in the customer's hardware. You know what it is in the VM. The same goes for various other subsystems.

Let's go further, and suppose that the end customer is having an issue with a VM. Rather than asking for logs, config files, or other stuff, you could just ask for a copy of the VM (obviously, any confidential data in the image is a concern, but that can be worked out in a lot of cases). So, your support dept gets a copy of the VM. They can now test the actual software, exactly as it exists at the customer site, to attempt to replicate and troubleshoot the issue. I'd see that as an absolute godsend.

If the VMs you were offering as an appliance were constructed with foresight, upgrades would be a matter of just downloading the new VM, linking in a virtual disk that contains the customer-specific configs and data, and turning the thing on. No "upgrade" is actually performed. You always start with a clean image.

Containers are a cool tech, though, with a lot of applications. I haven't dealt with them yet, but from my reading, I suspect that some folks are trying to make containers too granular – i.e., each container does one thing, and only one thing. If you want additional capabilities, you have to add another container. Seems to me that, without proper controls in place, you're going to end up with configuration and, eventually, documentation headaches.

3 posted on 04/07/2016 11:06:56 AM PDT by zeugma (Vote Cruz!)

To: ShadowAce

Containers rock! I used to run several Linux instances on my dev server with Hyper-V that consumed 8 gigs of RAM, even though they were idling most of the time. Since moving them into containers running on the Proxmox distro, they rarely consume more than two gigs of RAM, and CPU utilization is significantly lower. Hard drive IO is a bit quicker, too, since IO is managed by a single Linux kernel. Also, I can spin up a new container in less than a minute.
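For context, a Proxmox LXC container is defined by a small config file under `/etc/pve/lxc/`. This is a hypothetical example – the VM ID, hostname and storage names are made up – showing the kind of hard memory cap that keeps idle containers so much lighter than idle VMs:

```ini
# Hypothetical Proxmox container config, e.g. /etc/pve/lxc/101.conf.
# All values are illustrative.
arch: amd64
cores: 1
hostname: web01
memory: 512
net0: name=eth0,bridge=vmbr0,ip=dhcp
ostype: debian
rootfs: local-lvm:vm-101-disk-0,size=8G
swap: 256
```

Unlike a VM's fixed RAM allocation, `memory` here is just a cgroup limit: an idle container holds only what its processes actually use, which is why several of them can share a couple of gigabytes.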

I have so much more overhead available, because containers consume so many fewer resources, that I won't need to upgrade my hardware for a long, long time.


4 posted on 04/07/2016 11:18:28 AM PDT by Sparticus (Tar and feathers for the next dumb@ss Republican that uses the word bipartisanship.)

To: ShadowAce
Why did Linux go mainstream? Two reasons:

1. IBM porting Red Hat Linux to IBM mainframe hardware made it viable to run Linux for mission-critical applications.

2. Android--the world's most prominent cellphone operating system--runs on the Linux kernel. (Note that Apple's iOS and Mac OS X run on the Mach kernel--one that was originally refined by NeXT from the late 1980s to the mid-1990s.)

5 posted on 04/07/2016 11:39:39 AM PDT by RayChuang88 (FairTax: America's Economic Cure)

To: RayChuang88; Swordmaker
(Note Apple's iOS and MacOS X runs on the Mach kernel--one that was originally refined by NeXT in the late 1980's to middle 1990's.)

Actually, I think OSX is a BSD derivative. I don't think it's Mach. Sword would know.

6 posted on 04/07/2016 11:41:30 AM PDT by zeugma (Vote Cruz!)

To: ShadowAce

The other (not mentioned) benefit of containers is that they are write once, run anywhere (or at least purport to be). You build your container on your laptop, do all your development and testing, then stick the container on the server and it "just works". In fact, this is sort of the idea behind Docker's name (and probably "containers" as well): you think of shipping containers, where you could have anything inside, but when it gets loaded onto the boat, everything "just works". It's a powerful idea. I've not done much of this myself, but I suspect it does live up to the hype.


7 posted on 04/07/2016 12:03:12 PM PDT by 2 Kool 2 Be 4-Gotten

To: zeugma

No reason it can’t be a BSD derivative with a Mach kernel.


8 posted on 04/07/2016 12:08:47 PM PDT by Darth Reardon (Would I lie to you?)

To: ShadowAce

If I could get Dreamweaver CS6 and CorelDRAW X3 to work, then I could switch. I would use LibreOffice.
What do you use for virtualisation on Linux – something like VirtualBox?


9 posted on 04/07/2016 3:19:25 PM PDT by minnesota_bound

To: minnesota_bound

I use KVM with QEMU.


10 posted on 04/07/2016 5:11:29 PM PDT by ShadowAce (Linux - The Ultimate Windows Service Pack)

To: zeugma; RayChuang88
Actually, I think OSX is a BSD derivative. I don't think it's Mach. Sword would know.

Apple's OS X, since Leopard, is one of the five operating systems certified as POSIX®-compliant UNIX® 03. It is registered with, and permitted to use, the UNIX® 03 trademark by The Open Group, the organization which administers UNIX® 03 registration.

So OS X is a BSD-based UNIX® 03 which uses a Mach microkernel.

11 posted on 04/07/2016 7:09:40 PM PDT by Swordmaker (This tag line is a Microsoft insult free zone... but if the insults to Mac users continue..)

To: minnesota_bound

Virtualbox: https://www.virtualbox.org/


12 posted on 04/07/2016 8:35:14 PM PDT by Bikkuri ((...))

To: Swordmaker

Thanks Sword. I knew you’d know.


13 posted on 04/08/2016 8:23:06 AM PDT by zeugma (Vote Cruz!)

To: Bikkuri

Thanks, 10 and 12.
I did not think to check whether VirtualBox worked on Linux...


14 posted on 04/08/2016 9:19:29 AM PDT by minnesota_bound


