Yeah, there is a penalty to be paid for that kind of emulation, but the fact that the hardware presented to the OS running in the VM is always the same is an awesome benefit. I'm surprised more appliance-type devices aren't actively marketed as VM packages, because it really simplifies support. You don't have to worry about which Ethernet card is installed in the customer's hardware; you know what it is in the VM. The same goes for various other subsystems.
Let's go further and suppose that the end customer is having an issue with a VM. Rather than asking for logs, config files, or other artifacts, you could just ask for a copy of the VM. (Obviously, any confidential data in the image is a concern, but that can be worked out in a lot of cases.) So, your support department gets a copy of the VM. They can now test the actual software, exactly as it exists at the customer site, to attempt to replicate and troubleshoot the issue. I'd see that as an absolute godsend.
If the VMs you were offering as an appliance were constructed with foresight, upgrades would be a matter of just downloading the new VM image, linking in a virtual disk that contains the customer-specific configs and data, and turning the thing on. No "upgrade" is actually performed; you always start with a clean image.
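For anyone curious, that "clean image plus persistent data disk" pattern can be sketched with QEMU/KVM roughly like this; the URL, file names, and sizes are made-up placeholders, not a vendor's actual procedure:

```shell
# Download the vendor's fresh appliance image. It replaces the old one
# outright; no in-place upgrade is ever performed.
wget https://example.com/appliance-v2.qcow2 -O appliance.qcow2  # hypothetical URL

# Customer-specific configs and data live on a separate virtual disk
# that survives across appliance versions. Create it once:
qemu-img create -f qcow2 customer-data.qcow2 20G

# Boot the clean image with the persistent data disk attached as a
# second drive; the appliance mounts it at startup.
qemu-system-x86_64 -m 2048 \
    -drive file=appliance.qcow2,if=virtio \
    -drive file=customer-data.qcow2,if=virtio
```

Upgrading then just means replacing appliance.qcow2 and booting again with the same data disk attached.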
Containers are a cool tech, though, with a lot of applications. I haven't dealt with them yet, but from my reading I suspect that some folks are trying to make their containers too granular, i.e., each container does one thing and only one thing, and if you want additional capabilities, you have to add another container. It seems to me that, without proper controls in place, you're going to end up with configuration and, eventually, documentation headaches.
Containers rock! I used to run several Linux instances on my dev server under Hyper-V that consumed 8 GB of RAM even though they were idling most of the time. Since moving them into containers running on the Proxmox distro, they rarely consume more than 2 GB of RAM, and CPU utilization is significantly lower. Hard drive I/O is a bit quicker, too, since I/O is managed by a single Linux kernel. Also, I can spin up a new container in less than a minute.
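To give a sense of how quick that spin-up is, here's roughly what creating an LXC container looks like with Proxmox's `pct` CLI; the container ID, template name, and resource numbers are just examples and will vary by Proxmox version:

```shell
# Refresh the template catalog and download a container template
# (the exact template file name differs between releases).
pveam update
pveam download local debian-12-standard_12.2-1_amd64.tar.zst

# Create and start a container with modest resources.
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname web01 --memory 512 --cores 1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```

Once the template is cached locally, the create-and-start steps take seconds rather than the minutes a full VM install would.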
With so much more headroom available, thanks to how many fewer resources containers consume, I won't need to upgrade my hardware for a long, long time.
1. IBM porting Red Hat Linux to IBM mainframe hardware made it viable to run Linux for mission-critical applications.
2. Android--the world's most prominent cellphone operating system--runs on the Linux kernel. (Note that Apple's iOS and Mac OS X run on the Mach kernel, one that was originally refined by NeXT from the late 1980s to the mid-1990s.)
The other (not mentioned) benefit of containers is that they are write once, run anywhere (or at least purport to be). You build your container on your laptop, do all your development and testing, and then stick the container on the server and it “just works”. In fact, this is sort of the idea behind Docker's name (and probably the name “container” as well): think of the containers on a ship, where a container can hold anything, but when it gets loaded onto the ship everything “just works”. It's a powerful idea. I've not done much of this myself, but I suspect it does live up to the hype.
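That laptop-to-server workflow boils down to a handful of Docker commands; the image name, port, and registry host below are placeholders for illustration:

```shell
# On the laptop: build the image from the project's Dockerfile and
# test it locally.
docker build -t myapp:1.0 .
docker run --rm -p 8080:8080 myapp:1.0

# Ship it: tag the image for a registry and push it.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0

# On the server: pull the exact same image and run it detached.
docker pull registry.example.com/myapp:1.0
docker run -d -p 8080:8080 registry.example.com/myapp:1.0
```

The image that runs on the server is byte-for-byte the one tested on the laptop, which is where the "just works" claim comes from.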
If I could get Dreamweaver CS6 and CorelDRAW X3 to work, then I could switch. I would use LibreOffice.
What do you use on Linux as a virtualization tool like VirtualBox?