Scott Dowdle's blog

OpenVZ: Past and Future

Kir posted the following this evening on the OpenVZ blog:

Looking forward to 2015, we have very exciting news to share on the future of OpenVZ. But first, let's take a quick look into OpenVZ history.

Linux containers are an ancient technology, going back to the last century. Indeed, it was 1999 when our engineers started adding bits and pieces of container technology to the Linux 2.2 kernel. Well, not exactly "containers", but rather "virtual environments" at that time -- as often happens with new technologies, the terminology was different (the term "container" was coined by Sun only five years later, in 2004).

Anyway, in 2000 we ported our experimental code to kernel 2.4.0test1, and by January 2002 we had already released Virtuozzo 2.0. From there it went on and on, with more releases, newer kernels, an improved feature set (such as live migration) and so on.

It was 2005 when we finally realized we had made a mistake by not employing the open source development model for the whole project from the very beginning. This is when OpenVZ was born as a separate entity to complement the commercial Virtuozzo (which was later renamed Parallels Cloud Server, or PCS for short).

Now it's time to admit -- over the years OpenVZ became just a little bit too separate, essentially becoming a fork (perhaps even a stepchild) of Parallels Cloud Server. While the kernel is the same between the two, the userspace tools (notably vzctl) differ. This results in slight incompatibilities between the configuration files, command line options, etc. Worse, userspace development effort has to be duplicated.

Better late than never; we are going to fix it now! We are going to merge OpenVZ and Parallels Cloud Server into a single common open source code base. The obvious benefit for OpenVZ users is, of course, more features and better-tested code. There will be other much-anticipated changes, rolled out in a few stages.

As a first step, we will open the git repository of the RHEL7-based Virtuozzo kernel early next year (2015, that is). This has become possible as we changed the internal development process to be more git-friendly (before that we relied on lists of patches a la quilt, but with a home-grown set of scripts). We have worked on this kernel for quite some time already, initially porting our patchset to kernel 3.6, then rebasing it to the RHEL7 beta, then to the final RHEL7. While it is still in development, we will publish it so anyone can follow the development process.

Our kernel development mailing list will also be made public. The big advantage for those who want to participate in the development process is that you'll see our proposed changes discussed on the mailing list before the maintainer adds them to the repository, not just months later when the code is published. We will also consider any patch sent to the mailing list. This should allow the community to become full participants in development rather than the mere bystanders they were previously.

Bug tracking systems have also diverged over time. Internally, we use JIRA (this is where all those PCLIN-xxxx and PSBM-xxxx codes come from), while OpenVZ relies on Bugzilla. For the new unified product, we are going to open up JIRA, which we find to be more usable than Bugzilla. Similar to what Red Hat and other major Linux vendors do, we will limit access to security-sensitive issues in order not to compromise our user base.

Last but not least, the name. We had a lot of discussions about naming, had a few good candidates, and finally unanimously agreed on this one:

Virtuozzo Core

Please stay tuned for more news (including a more formal press release from Parallels). Feel free to ask any questions, as we don't even have a FAQ yet.

Merry Christmas and a Happy New Year!

Since Russia has 10 days of holidays in January, I really don't expect anything to be released until late January or, more likely, February. One major change in the upcoming RHEL7-based Virtuozzo Core release is the move from the internal checkpointing code to CRIU (Checkpoint/Restore In Userspace). Although there are a lot more details and specifics to come, overall I see this as a very positive move.
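
For anyone who hasn't run into CRIU before, here is a minimal sketch of what a checkpoint/restore cycle looks like from userspace. The PID and image directory are made-up placeholders, and this only illustrates CRIU's basic dump/restore workflow -- it says nothing about how Virtuozzo Core will actually integrate it.

    #!/usr/bin/env python3
    """Illustrative sketch of a CRIU checkpoint/restore cycle (placeholder values)."""
    import os
    import subprocess

    IMAGES_DIR = "/var/tmp/criu-images"   # hypothetical directory to hold the dump images
    TARGET_PID = 12345                    # hypothetical PID of the process tree to checkpoint

    os.makedirs(IMAGES_DIR, exist_ok=True)

    # Checkpoint: freeze the process tree rooted at TARGET_PID and dump it to disk images.
    subprocess.run(
        ["criu", "dump", "-t", str(TARGET_PID), "-D", IMAGES_DIR, "--shell-job"],
        check=True,
    )

    # Restore: later (or on another host, after copying IMAGES_DIR over) bring it back.
    subprocess.run(
        ["criu", "restore", "-D", IMAGES_DIR, "--shell-job"],
        check=True,
    )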

Video: Yes, I'm Linux. Are You?

This was released by the Linux Foundation yesterday and I thought I'd share. Enjoy!

Video: Security Features in systemd

Lennart Poettering gave a presentation for NLUUG on Nov. 20th, 2014 entitled, "Security Features in systemd". I think veteran system admins would be interested in this stuff. :) Enjoy!

Direct download link: 5_Lennart_Poettering_-_Systemd.webm

Video: FreeBSD - The Next 10 Years

Jordan Hubbard... should need no introduction but if you don't know who he is, look him up... anyway, Mr. Hubbard spoke recently at the MeetBSD 2014 conference giving a presentation entitled, "FreeBSD: The next 10 years".

One thing I want to point out is his section on the init system. Here's a direct link to that section, since I couldn't figure out how to jump to it within the embedded video. Or, in the embedded video, feel free to manually move the play head to about 27 minutes and 32 seconds in if you don't want to watch the whole thing.

So FreeBSD may very well be moving to an init system modeled after Apple's Mac OS X launchd... and since systemd also borrowed some ideas from launchd (as well as from a few other systems)... systemd haters can move to FreeBSD... but how long before it also changes in ways they don't like? Oh, and I like the way Mr. Hubbard refers to systemd. :)

Here's some choice bullet points from one of his slides:

  • We need to be open to fundamentally new approaches and ruthlessly cull what is no longer demonstrably useful to the 99%
  • We need to be willing to shamelessly steal^H^H^H^H^H adopt things that are working well for others
  • We need to take on some big-picture challenges that will appeal to the next generation of hackers (where's the next mountain?)

Enjoy.

Opinion: The Masters of Click-Bait Misinformation

Argh, I am *SO* tired of seeing various sites linking to the inflammatory and factually incorrect articles by the following three guys:

  • Jim Lynch (ITworld)
  • Sam Varghese (iTWire)
  • Paul Venezia (InfoWorld)

I would give examples but I just want to forget about them. Don't feed the trolls.


Video: KvmGT - GPU Virtualization for KVM

Here is a video I've been waiting for by Jike Song from Intel. The KVM Forum 2014 was held in conjunction with the recent LinuxCon Europe and someone (from the Linux Foundation or the KVM Forum) has been processing and posting presentation videos to YouTube in a staggered fashion. About 13 hours ago this video appeared. When I noticed the topic on the KVM Forum schedule (along with the slide deck [PDF]) a week or two before the event, I was really looking forward to learning more.

The current implementation, as far as basic features go, seems to be fairly complete, but it is currently targeted specifically at the Intel Haswell architecture using the i915 video driver. The presenter says that the approach taken should be adaptable to other GPU architectures beyond Intel's. Their initial goal is to get the code released (it is under a dual GPL/MIT license), to work with the KVM development community to get it upstreamed and made part of KVM proper... and then to work on more advanced features. As it stands now the basic features are present: hardware-assisted GPU functionality for VMs in a shared fashion that offers 80-90% of native speed. Near the end of the presentation is a demo video that shows two Linux KVM VMs, each running GPU-intensive software (one a game, one a benchmark). As I understand it, when a GPU-driven application is displayed it is full-screen, and there isn't currently a windowed mode to show more than one VM at a time. I do wonder how well 3D-accelerated graphics would display over a remoting protocol like SPICE. Enjoy!

Video: Getting Ready for systemd (in RHEL7)

I found the link to this video (Getting Ready for systemd) on the systemd documentation page. It is a Red Hat "Customer Portal Exclusive" and "Not for Distribution", but it is OK for me to provide a picture that links to it... one that looks like a video ready to play. :) Enjoy.

Video: Systemd the Core OS (no coughing)

There has been so much negative stuff about systemd on teh Interwebs lately. It is so sad. Quite a few distros picked systemd because they liked a lot of the features it has. Why do the people who like systemd actually like it? Sure, if you look hard enough, you can find those answers... but I remembered a video where the man himself explains it.

The only problem with the original video on YouTube is that the volume is sort of low so you have to crank it up... and then there is coughing that blows your eardrums... so I took the time to edit out the coughing. The A/V sync isn't great and the sound leaves a bit to be desired... but it is still worthwhile viewing for anyone who wants to better understand why systemd. Enjoy!

If you want the coughing, you can find the original here.

Videos: LinuxCon Europe 2014

Here's Linus with Intel's Chief Linux and Open Source Technologist, Dirk Hohndel on the next 12 months of the Linux kernel:

Here's a kernel panel hosted by LWN's Jon Corbet featuring Grant Likely, Linaro; Borislav Petkov, SUSE; Thomas Gleixner, linutronix GmbH; Julia Lawall, Inria; Frédéric Weisbecker, Red Hat.

Enjoy!

MontanaLinux: Using Fedora 21 (pre-beta)

Screenshot: Fedora 21 pre-beta LightDM

I've been following the development of Fedora 21 since a little before the alpha release. Getting my MontanaLinux remix to build was actually quite easy, and the fact that rpmfusion has a rawhide repo means all of the multimedia codecs / applications were good to go as well. I've done a few dozen installs as KVM virtual machines and thought it was time to try physical hardware.

Hardware Problems?

First I installed it on my Acer netbook, which is 32-bit only and about 5 years old now. The battery in it is shot, and smartd has been telling me for over a year that the hard drive has been accumulating more and more bad sectors... which is a fairly good indicator that the drive is going bad. Doing the install from a LiveUSB took a while because the installer kept hitting some of the bad spots on the drive. During the install the progress bar immediately said 100%, which I knew was wrong... so I kept switching over to a text console to periodically do a df -h to see how much had been written to the hard drive. Oddly, whenever I'd switch over to the text console, the green illuminated power button would go amber and the screen would go blank... which to me meant it was suspending to RAM or something. At that point I'd have to hit a few keys on the keyboard and it would wake back up. It did this at least a dozen times during the install. I really wasn't expecting a good install given the flaws in my hardware and how they were manifesting themselves during the install process... but being patient paid off... the install was actually successful and the system seems to be working just fine post-install.

Installing it on my Optiplex 9010 desktop at work was also more complicated than I was expecting. For whatever reason (maybe a BIOS setting?) I could NOT get my machine to display the bootloader menu from a LiveUSB, although other Dell models at work seemed to work fine. So I burned a DVD with the burner in the Optiplex 9010. Oddly, the same drive that wrote the DVD seemed unable to read it about 19 times out of 20, so I couldn't get it to boot from the DVD either. I finally decided to try something different: I plugged an external USB optical drive into the machine, and with it the DVD was read successfully and the bootloader appeared. With a functioning bootloader I was able to boot the DVD; the live system worked great and the installer ran flawlessly.

Fedora 21 pre-beta actually seems quite stable. As you may recall, I have all of the desktop environments installed as part of my remix so I can check them all out... but I primarily use KDE. On both of my machines I have /home as a separate partition, so my personal data is retained across installs. I also back up /etc and /root to /home/backups/ so any of my previous configuration (stuff like ssh keys) can be retrieved and reused if desired.
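
I don't describe exactly how I make those backups, but a minimal sketch of the kind of script that could do it looks like the following. The archive naming and the use of Python's tarfile module are just assumptions for illustration; the /home/backups/ destination and the /etc and /root sources come from the post.

    #!/usr/bin/env python3
    """Minimal sketch: archive /etc and /root into /home/backups/ (run as root)."""
    import os
    import tarfile
    from datetime import date

    BACKUP_DIR = "/home/backups"     # destination mentioned in the post
    SOURCES = ["/etc", "/root"]      # directories to preserve across reinstalls

    os.makedirs(BACKUP_DIR, exist_ok=True)
    archive = os.path.join(BACKUP_DIR, f"config-backup-{date.today()}.tar.gz")

    # Write one dated, compressed archive containing both directory trees.
    with tarfile.open(archive, "w:gz") as tar:
        for src in SOURCES:
            tar.add(src)             # adds the directory recursively, full path preserved

    print(f"Wrote {archive}")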

Some Notes

I picked lightdm as the default login manager. In the past I've mainly used kdm, but KDE is in the process of transitioning to sddm, which still seems a bit buggy.

One of the main features I'm wanting to play with in Fedora 21 is actually provided by the rpmfusion repos... ffmpeg 2.3.3. I want to do some testing with the newer ffmpeg, which does a reasonable job at WebM encoding with VP9 and Opus. I'd also like to try out GNOME 3 under the Wayland display server... which is supposedly working fairly well in Fedora 21... but I haven't tried that yet.
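
As an example of the kind of encoding test I mean, here's a minimal sketch that drives ffmpeg to produce a VP9/Opus WebM file. The input filename and the bitrates are placeholders I picked for illustration, not settings from any actual test I've run.

    #!/usr/bin/env python3
    """Minimal sketch: encode a clip to WebM (VP9 video + Opus audio) with ffmpeg."""
    import subprocess

    INPUT = "clip.mp4"      # placeholder input file
    OUTPUT = "clip.webm"    # placeholder output file

    subprocess.run(
        [
            "ffmpeg", "-i", INPUT,
            "-c:v", "libvpx-vp9", "-b:v", "1M",   # VP9 video at a ~1 Mbit/s target bitrate
            "-c:a", "libopus", "-b:a", "128k",    # Opus audio at 128 kbit/s
            OUTPUT,
        ],
        check=True,
    )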

One weird glitch I ran into was with the Google-provided google-chrome-stable package. I'm not much of a Google Chrome user, but I do occasionally use it for (legacy) sites that require Adobe Flash. I use Firefox the vast majority of the time... but I've decided to no longer install the Adobe-provided flash-plugin package (at version 11.x). As you probably know, Google has taken over maintenance of newer Flash versions (currently 15.x) on Linux and includes it as part of Google Chrome. As a result, whenever there is a Flash update from Adobe, a Google Chrome update soon follows. Anyway, very early in the Fedora 21 development cycle (pre-alpha), the Google Chrome package refused to install because Fedora 21 had a much newer version of some library (I don't recall which one) and the package wanted the older version. A few Google Chrome package updates later, it is happy with regards to dependencies... but when installing it with rpm, it gets stuck on the post-install scriptlet and just sits there. I had to ^c rpm (which you generally don't want to do) because it wasn't going to finish... and just to be safe I did an rpm --rebuilddb, and everything seems fine. The google-chrome-stable package verifies just fine (rpm -V google-chrome-stable) and works as expected.

Conclusion

Overall, everything I've tried works fine. I like to get started with new Fedora releases as early as possible in the development cycle so I can help report any bugs I find (in Fedora-provided packages) and be up to speed with all of the new features on release day, so I can deploy to other machines immediately. I've been doing it that way for several releases now. I do really appreciate all of the work the Fedora developers put into each release.
