Scott Dowdle's blog
MontanaLinux: Please tell me about yourself... as much as you feel comfortable with... as open or as closed as you want to be... family, education, work, hobbies, religion, volunteering, diet, music, tv / movies, etc.
Chris Smart: I'm just another Linux geek. I love computers and Free and Open Source Software, having been involved in it for over a decade for both work and pleasure. Let me see... I'm an Australian bloke in my 30's, coming from a family of six (evenly split boys and girls), happily married with one child (and another on the way). I'm currently studying a part-time Masters of IT at the Australian National University and I work for a local firm full-time developing software that runs on Linux based products.
ML: How long have you been using Linux and how did you get into it?
CS: In 1999 I was working for a local Internet Service Provider and most of our systems were Linux based, so that was my initial introduction. I recall Linux Care was renting office space next door on the same floor and I got to know a bunch of great hackers, including Andrew (Tridge) Tridgell, Hugh Blemings, Paul Mackerras, Stephen Rothwell, Rusty Russell and others. I didn't understand much of what they talked about but I loved spending time with them, although I'm sure that I was quite annoying. I was very lucky to have such an introduction to open source software and in a way these guys became my open source heroes who I still look up to.
ML: What was your first Linux distro?
CS: If I recall correctly it was Red Hat 5.2, although I also dabbled in Debian and shortly thereafter Gentoo. The reason I switched to Gentoo was that I was hungry to learn more about Linux and that seemed like a great way – build your own operating system. Cool. I have to say that I learned more using Gentoo in 3 days than I did the others in 3 months. I loved it, and I was hooked.
ML: Do you use Linux in your job?
CS: Yep, exclusively.
ML: Do you listen to any Linux audiocasts and if so, what?
CS: I have in the past but I can't recall what they were now. From memory I found that the news was usually quite old for me given my massive daily RSS feed haul, so I stopped shortly thereafter.
ML: Where did the name Korora come from? Is it Kororaa or Korora?
CS: Kororā is the Māori word for the Little Blue Penguin (also known as the Fairy Penguin), which is native to Australia and New Zealand, so it seemed like the perfect name for an Australian-based distro when we started back in 2005. I guess I could have also called it Fairy Linux, but that didn't seem to have the same ring to it. Originally we spelled it with two a's on the end due to the accent on the letter; however, some years later that seemed unbalanced, and so I changed it from “Kororaa Linux” to “Korora Project” when we wanted to better reflect the Fedora Project.
ML: You started Korora in 2005 and it was originally based on Gentoo. How did that get started?
CS: I loved playing with Gentoo but copped a lot of criticism from others about “compile time” and “over optimising GCC flags”. I wanted to show that Gentoo could be more than just that, so made a binary install method for Gentoo. It started out as a script and then later became its own install media. One day I was playing with the new AIGLX/Compiz 3D desktop and thought, “I could turn this into an awesome live CD,” and so I did. That was pretty popular back in the day.
ML: In 2010, you switched to Fedora as the base for Korora. What led to the switch?
CS: I stopped work on Korora for a few years for a number of reasons, including the fact that it was just too much work for me. I started using other distros like Ubuntu and Arch, however I was always drawn to Fedora because I liked the principles of the project. As many who know me can testify, I eagerly tried every new Fedora release but quickly became frustrated because I just couldn't get it to do the things I wanted: config files were in different places, packages were different, etc. After using apt-get, yum was horribly slow and painful. I always gave up.
Then one day in 2009 I decided, “no, I'm going to force myself to use Fedora for 3 months, no matter what!” It was hard, but it worked and at the end of that 3 months, I was loving it. Although it's quite bleeding edge, I surprisingly found it much more stable than Ubuntu, which I had started to become very frustrated with. I think that's a testament to the awesome Fedora community as well as their dedication to working with upstream to get things fixed.
ML: What do you like about Fedora?
CS: I spell out many of the reasons why I like Fedora, and therefore why Korora is derived from it, on our website. Really it comes down to the project's core values: Freedom, Friends, Features, First.
A central goal for Fedora is advancing Free Software and content freedom, which benefits everyone. They don't support proprietary software, they create replacements for it (take the Nouveau driver for instance). New releases often showcase the latest in Free Software innovation; technologies that many other distros will also adopt.
I really admire Fedora's policy of working as closely with upstream as possible. Features and fixes are shared to the benefit of everyone, which helps to make existing projects better. That's the Free Software way. To me it creates a much better, more stable operating system.
ML: Has the Fedora community been friendly towards Korora?
CS: It is now, I think, although I doubt that many in the community have even heard of it. Originally some members were very vocal about not supporting us, particularly if anyone came onto the #fedora IRC channel and asked for help. This turned out to be because of a misunderstanding that Korora was not open source and that we didn't document the changes we make to the system. Once I cleared that up and wrote a new page on our website dedicated to explaining what's inside our remix, it seems that “Korora” isn't a dirty word any more. And that's great. It really makes me happy because at the end of the day Korora is about making Fedora more attractive to people like I used to be, when I kept trying it all those times but not sticking with it. Fortunately I didn't give up in the end, but I'm sure that many do and that's a shame. We hope to fix that and show what using Fedora can really be like.
ML: If you had a magic wand and could change something about Fedora, what would it be?
CS: I honestly don't know! I think if I could change something about Korora, it would be more great ideas and more contributors to help us make them become a reality.
ML: Obviously Korora is fun for you. What do you enjoy about it the most?
CS: I enjoy the fact that Korora is useful to others. It's definitely a lot of work and very time consuming, but I like making things and I like seeing those things enjoyed.
On Korora Development
ML: Korora 19 was released on the same day as Fedora 19. Is matching Fedora's release dates a goal of yours?
CS: I think that releasing 19 on the same day was actually a mistake, as Korora got drowned out in Fedora's deserved limelight. We do want to release as close as possible, so we are aiming for a release within two weeks of Fedora's beta and stable releases. Having said that, it also depends on third party repositories like RPMFusion, which can sometimes lag behind a Fedora release. We're now also releasing five desktops instead of just two (plus two architectures of each) so without help from the community this can blow out the time frame.
ML: What is "Korora Package" aka KP?
CS: Building Korora involves the use of a number of command line tools like git for revision control, mock to build packages, and livecd-creator to make the iso images. We wrote kp to manage that process which makes it easy for us to work collaboratively. Being a small project we don't have the resources for large build farms. It's not fancy, but it works well and means any user can rebuild Korora for themselves.
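For the curious, the flow those tools combine into looks roughly like the sketch below. This is only an illustration: the kickstart filename, mock config, and src.rpm name are made-up placeholders rather than Korora's actual layout, and the script prints the steps instead of executing them.

```shell
#!/bin/sh
# Rough sketch of the pipeline a tool like kp drives. File and config
# names are illustrative placeholders; the steps are printed, not run.
RELEASE=19
DESKTOP=kde

build_steps() {
  cat <<EOF
git pull --rebase                                        # sync the shared build config
mock -r fedora-${RELEASE}-x86_64 --rebuild foo.src.rpm   # rebuild a package in a clean chroot
livecd-creator --config=korora-${RELEASE}-${DESKTOP}.ks --fslabel=Korora-${RELEASE}-${DESKTOP}   # compose the iso
EOF
}

build_steps
```

The appeal of mock here is that it builds each package in a throwaway chroot, so a small project gets reproducible builds without a build farm.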
ML: Looking at your Team page (https://kororaproject.org/about/team/), three people are listed... you, Ian Firns and Jim Dean. With the death of Fuduntu and more recently, SolusOS... there seems to be a growing fear of giving smaller distributions a chance. What assurances can you offer potential Korora users that Korora isn't going away any time soon?
CS: Well, we all work full-time jobs and have families, so there are no guarantees. We don't have commercial backing and no-one pays us; in fact, like many small projects, it costs us our own hard-earned cash to keep Korora going.
Having said that, I don't think that it's a risk to anyone who wants to try Korora, I mean you just install it and away you go. Ideally, you upgrade every 6 months or a year. So even if we disappeared tomorrow I don't think it would hurt anyone who's a user of the system. If you are talking about contributors, well I'd like to think that we have a strong enough community to keep it going, even if some of the main developers couldn't continue, but we're not at that stage yet. I hope that what we're trying to achieve resonates with contributors out there who would be able to carry on.
Having said that, I don't know what the motivations for those other projects were, but we make Korora because we love Fedora. As long as Fedora is around and won't ship the software we are able to, we feel there's a place for Korora. We all use Korora ourselves because it does what we want and makes it simple.
Hopefully, as Korora gets more popular and more and more people support us, our motivation will keep growing.
ML: To what degree would end users be affected if you did get hit by a bus?
CS: I don't think they'd be affected at all immediately, as Firnsy can fill the gap. What it would mean is that resources are stretched a little more thin. It might change the direction of the project somewhat if someone else is running it, but I think if it's true to our goal it would be fine.
ML: Are there any particular development areas you are looking for help with?
CS: There is so much that could be done and much of it is really simple. The best is testing and feedback, just telling us what worked and what didn't is super helpful (even better with solutions!).
There are a number of packages that we ship (like some Mozilla plugins) which someone could easily manage, or even just notifying us of new versions.
It would be great to have people take over the configuration of certain desktops like KDE and GNOME. We have a new contributor who has done just that with Xfce and that was the key to making it possible at all. We may end up dropping some desktops if there isn't enough support or desire for them.
It would be great to be able to have a fantastic, unique look and feel for Korora, so anyone with artwork skills could help there.
Any of that would mean we have more time to concentrate on future direction and features we want to implement.
Finally, just spreading the word on Korora. If you like it, let everyone know. The more popular we get the more the community grows and the more we can achieve.
ML: Is there anything I neglected to ask about that you'd like to mention?
CS: Download Korora today! :-)
Thank you for taking the time to answer my questions.
I was lucky enough to be a guest on the Sunday Morning Linux Review episode 115 to talk about OpenVZ. In prep for the show I wanted to provide the hosts with some recent, updated videos that show off OpenVZ. I made the following videos which are in webm format... so you can play them in your browser or download and play with a media player:
- 1-openvz-2013-intro.webm (Slides 18min/11MB)
- 2-minimal-centos-install.webm (Optional - Basic CentOS Install 5min/7.5MB)
- 3-openvz-install-on-centos.webm (Installing OpenVZ on CentOS 14.5min/21MB)
- 4-openvz-demo.webm (OpenVZ demo 41min/51MB)
- 5-openvz-gui-container.webm (How to make a GUI container 25min/82MB)
Just noticed that the Linux Foundation posted this video 2 days ago. I recognize at least 5 of the voices in the video: Jim Zemlin, Richard Stallman, Eben Upton, Mark Shuttleworth and of course Linus Torvalds. A few other voices seem really familiar and I have some guesses but I'm not sure if I'm correct. Anyone else care to guess?
Oh, here's another by the Linux Foundation from about two weeks ago that basically merges clips from the recent Gabe Newell presentation and an interview with Linus Torvalds into a nice short, easily digested package.
I've been playing with / using x2go more lately and I sure do like it. I originally learned about it by reading the Fedora 20 ChangeSet and saw that it will be a new feature in the upcoming Fedora 20. I started using Fedora 20 shortly before the alpha release came out. Fedora 20 Beta was released on 2013-11-12... and I've been building my MontanaLinux remix about once a week. Anyway, I'm getting off track. Back to x2go.
I figured out how to make sound work. This FAQ entry did the trick for me.
I have used the x2godesktopsharing applet to connect to an existing X11 session and that works great too.
x2go has been making quite a bit of progress especially on Fedora / EL systems thanks to volunteer packager orionp (orionp on #x2go on the FreeNode IRC network). orionp is not only responsible for the Fedora 20 packages, but has also built them for Fedora 19 (currently in updates-testing) and EPEL6. I've been connecting to a number of different systems (physical, virtual KVM and OpenVZ containers) testing it all out. Who knew that an OpenVZ container could be a pretty good desktop system complete with sound? Sound does use a bit of network bandwidth so it is only appropriate for LAN and really fast WAN connections.
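If you want to try those packages yourself, installing is just a yum command or two. The package names below match the Fedora / EPEL builds, but note that for Fedora 19 and EL6 they were still in the testing repos when I wrote this, so the repo flags may differ for you by the time you read it. The snippet just prints the commands so you can copy the line that matches your system.

```shell
#!/bin/sh
# x2go install commands for the Fedora / EPEL repos; printed rather
# than run so you can pick the line matching your release.
SERVER_PKGS="x2goserver x2goserver-xsession"

print_install() {
  cat <<EOF
# Fedora 19 (still in updates-testing at the time of writing):
yum --enablerepo=updates-testing install ${SERVER_PKGS}
# EL6, with the EPEL repo configured:
yum --enablerepo=epel-testing install ${SERVER_PKGS}
# Client side on either:
yum install x2goclient
EOF
}

print_install
```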
I've tried several desktop environments and in general, they work well. Some seem to tolerate multiple users more than others. For example, I don't think KDE handles sound for multiple users very well, whereas with XFCE it seems to work fine. x2go seems to outperform SPICE from my anecdotal experiences thus far. Multimedia works quite well except when I go beyond 1920x1080.
I still haven't tried x2go folder sharing (which uses sshfs, something that has always worked well for me), USB device access, or printing. I haven't needed those features yet. The Microsoft Windows x2goclient works well.
There are a few bugs that orionp will be fixing ASAP. I reported this bug for the EPEL packages. Luckily there are easy work-arounds until the updates are done. I've learned a little about x2go troubleshooting as a result.
I also got a chance to install my Fedora 20 remix on a remote machine over x2godesktopsharing. I had to manually boot the target machine from LiveUSB which automatically logs in as user "liveuser". I enabled sshd, set a password for liveuser, enabled x2gocleansessions, ran x2godesktopsharing, enabled sharing... and then connected from my workstation and did the install. It went well. Thanks x2go developers!
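For anyone wanting to repeat that, the LiveUSB-side prep boiled down to the checklist below. The service names are from the x2go packaging as I remember them; the script prints the steps rather than running them, so treat it as a crib sheet, not an installer.

```shell
#!/bin/sh
# LiveUSB-side prep for a remote install over x2godesktopsharing,
# printed as a checklist rather than executed.
prep_steps() {
  cat <<'EOF'
systemctl start sshd.service          # let the remote x2go client connect in
passwd liveuser                       # the live image auto-logs-in as liveuser
systemctl start x2gocleansessions.service
x2godesktopsharing &                  # then enable sharing from its menu
EOF
}

prep_steps
```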
Update: (25 NOV 2013) The x2go packages in Fedora 19 have been moved from the testing repo into updates. An updated x2goserver package has been submitted to the EPEL6-testing repo. I'll test it ASAP and if good, give it some karma.
LinuxCon 2013 Europe was this week... and videos from it have started being published. Here's a video with our favorite Linux leader about the future of the Linux kernel. Enjoy!
Oh, and here is the Kernel Developer Panel
As you may know, Fedora totally redid their Anaconda installer starting with Fedora 18. There are many reasons for it and I'll not go into that here but one perception out there in Internet land is that the partitioning section of the newer Anaconda installer is a pain to use. I must admit that when I first started using it (installing Fedora 18 alpha and beta releases), I really did not like the changes. This dislike persisted for some time until I finally got used to it. Then time passed. Fedora 19 development started, ran its course, and then Fedora 19 was released. It offered some Anaconda refinements. Now Fedora 20 is approaching its beta release and there are yet more Anaconda refinements.
Since I build my own personal remix of Fedora with the stuff I want pre-installed, I do a lot of installs... to test stuff out. I've definitely gotten used to the newer Anaconda now and I actually like the partitioner. The last time I installed Fedora 17 (to test my last remix build, it has since gone EOL) I actually felt weird using the older Anaconda. I actually prefer the newer one now.
Few people do as many installs as me... and some are still stuck in the "not liking the newer Anaconda" stage. Their main gripe seems to be that the partitioner is very confusing and somewhat broken. I disagree with them and I've been doing some troubleshooting with a couple of problem installs users were having. Turns out their problems had less to do with Anaconda and more to do with having terrible pre-existing partition layouts on their hard drives. I thought it might be useful to examine two cases where I have the actual fdisk listings of their partition tables. I'll not mention the names of the users who provided them to spare them some negative attention.
Example one - Let's just jump right in. Here's an image that shows a really poor pre-existing partition table:
Here is a somewhat incomplete fdisk -l listing for it:
Device Boot      Start        End     Blocks  Id System
/dev/sda1           63      80324      40131  de Dell Utility
/dev/sda2        81920   20561919   10240000   7 HPFS/NTFS/exFAT
/dev/sda3 *   37459968  204937215   83738624   7 HPFS/NTFS/exFAT
/dev/sda4    307337216  312578047    2620416   f W95 Ext'd (LBA)
/dev/sda5    307339264  312578047    2619392  dd Unknown
That almost looks reasonable until one examines the details closely. There is a gap between the end of sda2 and the start of sda3. There is a gap between the end of sda3 and sda4. sda4 (the extended partition) is very small and as a result sda5 (a logical inside of the extended) is very small. What we have here is a bunch of free space but no way to get to it. One can NOT make any additional primary partitions. One can NOT make any additional logical partitions... and to the best of my knowledge... one can NOT make any more extended partitions. All of that free space (> 50GB) is in a virtual no-man's land.
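A bit of sector arithmetic makes that stranded space concrete. The numbers come straight from the listing above (512-byte sectors); something like `parted /dev/sda unit s print free` would show the same two regions as "Free Space" rows.

```shell
#!/bin/sh
# Sector arithmetic on the fdisk listing above (512-byte sectors).
GAP1=$((37459968 - 20561919 - 1))     # end of sda2 to start of sda3
GAP2=$((307337216 - 204937215 - 1))   # end of sda3 to start of sda4
TOTAL_MIB=$(((GAP1 + GAP2) * 512 / 1024 / 1024))
echo "gap1=${GAP1} sectors, gap2=${GAP2} sectors, total ~${TOTAL_MIB} MiB"
```

That works out to roughly 57 GB of unreachable space, which squares with the "> 50GB" figure above.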
How did this poor partition layout manifest itself in the Anaconda installer? The partitioner said there was plenty of space to install Fedora but whenever you went to actually create partitions / mount points it would give an error about there not being enough room to create said partition. Basically Anaconda was confused by the layout but really didn't have a way to communicate that the layout was unworkable. The end user is left with the impression that Anaconda is horribly broken when it was really a badly mangled pre-existing partition table that was to blame.
Example two - Here's another example of a really poor pre-existing partition layout, as witnessed by the complete fdisk -l output:
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008cbe0

Device Boot      Start        End     Blocks  Id System
/dev/sda1    476313598  625137344  74411873+   5 Extended
/dev/sda2        16065  157765631  78874783+  83 Linux
/dev/sda3    160312635  476295119 157991242+  83 Linux
/dev/sda5    602389368  625137344  11373988+  82 Linux swap / Solaris
/dev/sda6 *  476313600  528878053  26282227   83 Linux
/dev/sda7    528891993  602389304  36748656   83 Linux
I wish I had a screenshot of what that looks like inside of gparted but I don't. Just look at the start and end sectors for each partition. sda1 starts somewhere in the middle of the drive. sda2 is near the front. sda5 is after sda7. I don't think there is free space, and if an install is to be done, the user needs to reuse one or more existing partitions.
What did Anaconda do with this? The user reported that the install progress bar just hung at a very low single-digit number. After waiting entirely too long for an install that was never going to finish, the user ended up having to power cycle the machine. After rebooting, the installed system seemed to be functional, but what exactly made the installer get stuck is unknown. Anaconda gave no indication that there was an issue and did its best to work, but obviously got confused.
How did these partition tables get mangled? - For both partition tables, I don't think any sane partitioning program would allow a user to create those partitions on purpose so you really have to ask... how did they get that way? To the best of my knowledge, both users engage in the practice of distro hopping. Distro hopping is where one is interested in using or playing with a different Linux distribution with some regularity. One of those users might be the host of a popular Linux-related podcast who reviews one or more different Linux distros every week... or not. :) But seriously, those partition tables can only be the result of multiple Linux installers, using different methods, strong-arming their way. An odd partition operation to make this distro install... and another one later... and maybe more down the line... and you get a mangled layout. Or at least that is the best explanation I've been able to come up with.
How should Anaconda respond to pre-existing unusable or sub-optimal partition layouts? - It would be nice if Anaconda could play the part of partition therapist and recognize when a user has a really bad partition table that is virtually impossible to work with... and just inform the user in a kind but clear way that they need to fix that before Fedora will install. Historically Anaconda seems to just get confused and error out... thinking that it could do something with it but failing. Can this be fixed? I'm not sure but I hope so. I certainly don't expect Anaconda to figure out methods of fixing the bad partition layouts but they do exist and a small portion of users are going to run into trouble. Luckily I've yet to see a situation where Anaconda makes the situation worse by breaking existing OS installs.
But wait, there's more - It turns out that both users had additional partition related problems.
User one has a Dell laptop that offers a special feature named MediaDirect. Go ahead. Take a little time to read that wikipedia link, especially the Design Controversy portion of it. It turns out that the existence of that odd extended / logical partition combination might just have been the result of MediaDirect... and even if they had been able to install Linux, at some point later when Windows was booted again, MediaDirect would probably have regenerated the problematic sda4/sda5 combination, likely breaking any Linux install that was done. Now that takes "Made for Microsoft Windows" to a new and more scary level, doesn't it? :(
User two once used dd to back up the contents of one partition to another and as a result had two partitions with the same UUID. As you will recall, one of the U's in UUID stands for unique, but in this case it wasn't. Just what confusion might that lead to? They reported that on most boots of their computer things were normal, but on other boots the contents of their home directory would totally change... only to change back on the next boot. After they figured out that they had two partitions with the same UUID, it started to make more sense.
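If you suspect a dd-induced duplicate on your own machine, spotting it is a read-only one-liner, and fixing it on an ext2/3/4 filesystem is a single tune2fs call. The device name in the comment is just an example; adapt it to your layout.

```shell
#!/bin/sh
# Spot duplicate filesystem UUIDs (read-only, safe to run anywhere).
# To fix one on an *unmounted* ext2/3/4 filesystem:
#   tune2fs -U random /dev/sdXN
# ...then update any /etc/fstab lines that referenced the old UUID.
find_dupes() { sort | uniq -d; }

dupes=$(blkid -s UUID -o value 2>/dev/null | find_dupes)
if [ -n "$dupes" ]; then
  echo "duplicate UUID(s): $dupes"
else
  echo "no duplicate filesystem UUIDs found"
fi
```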
Conclusion - The point is that where there is smoke there is sometimes fire and no Linux distro installer can be a complete fire extinguisher in all situations. Some users have bad partition layouts and it would be a good idea to take that into account. Oh, how about a recommendation... boot your machine with some live media that includes gparted and get your partitions in ship-shape before installing. gparted is more battle tested for partitioning, resizing, etc than any distro's installer. Lastly, the newer Anaconda isn't so bad so get over it! :)
It was recently announced that the Raspberry Pi (RPi) has sold 1.75 million units in the 1.5 years they have been available. That is seen by most folks as a screaming success. In contrast, the One Laptop Per Child project (OLPC) has sold about twice as many OLPC units in about 6+ years... yet it is seen by some as a quasi-failure.
Target Audiences - Both projects have targeted the education market. OLPC is aimed at general education and early grades whereas the RPi is targeted at the same and older age groups specifically in computer science education.
The OLPC has been sold specifically to governments and school systems and while they dipped their toes in the consumer market with the Give 1 Get 1 programs, those efforts were not well executed and as a result only became a small fraction of their sales. The bulk of the RPi sales have been directly to the retail market. RPi is still working on educational materials and it is still unclear how well it will do within school systems. Retail sales may still help with the RPi goals but to what degree is unclear. It appears the bulk of their sales are going to hobby type projects and I'm not sure if those qualify.
Copy Cats - The hardware of both projects launched new mass-market product categories, which oddly scared the retail and wholesale mega-corps into introducing competing products. The netbook did quite well for a couple of years, but the pricing on more traditional laptops dropped low enough to practically eliminate it. There are a large number of ARM-based, RPi-like single-board computers (not counting the embedded market that pre-existed), but I'm not sure that any of those will be notably successful. RPi clones are somewhat overlapped by the mini-Android clone market.
Hardware and Pricing - Both projects have had unanticipated delays in their design and production phases but they made it through. OLPC ended up being more than double their target price, which appears to have been unrealistic... while the RPi, perhaps having a closer relationship to a component manufacturer, hit their targets. The OLPC is a more complete gadget whereas the RPi requires a significant amount of hardware extras before it is usable. While it is easy to overlook the extra costs associated with the RPi, they are definitely real.
Both projects have revisited their designs coming out with newer hardware releases. The OLPC is in their fourth iteration and they completely changed CPU arches yet the outsides look exactly the same. The second RPi release doubled the RAM without a price increase... awesome.
Software - One major difference is in software. The OLPC Project designed a custom user interface and several stock educational activities on top of Linux whereas Raspberry Pi hasn't had to do as much in the way of software. Both projects have successfully relied on the communities that sprung up around them to augment the software both provide but the RPi seems to have a ways yet to go with the software for their target audience.
Roots and Expectations - Given the similarities and differences between both projects, why is one more often perceived as a greater success than the other? I think a lot has to do with the roots and expectations. One had a big name and lots of media up front and was expected to "change the world" and the other, not so much. I think that is the main reason for the perceptions but there is certainly more to both projects than such a simplified type of judgement conveys.
Unanswered questions - Here are some of the questions that come to my mind. Did I leave any out?
How well will RPi do inside of school systems?
Will RPi rev their hardware like OLPC has with major changes? If so, how will it impact manufacturing costs and pricing?
Will any of the wanna-be products be able to significantly poach users away from RPi or will RPi do that themselves with a later model?
Does the current RPi model B have such a strong mindshare that a future model may be overshadowed by it?
What future plans does the OLPC project have?
How many more OLPCs could have been sold if they had added the consumer retail market as a target and executed as well in that space as RPi has?
OLPC has taken a small foray into the consumer market by allowing their name and look to be used by one budget Android tablet maker. How well will that do and will there be others?
How exactly do these OLPC licensed third-party projects benefit the OLPC project?
And lastly, will any of those questions ever be answered??! :)
Conclusion - In any event, I'm glad we have both projects and wish them continuing success and a long future. Keep up the great work everyone. Will there be any other projects that make it to this level? I certainly hope so. We definitely need more open hardware in the market.
I saw this mentioned on the Fedora Planet. Andy Grover gives a presentation on The Linux Way and how while it is based on The Unix Way, it has been updated for a new era. The real content starts about 4 minutes into it. Enjoy!
I use CentOS quite a bit myself and I know a lot of other CentOS users. Here is a video of one of the main developers (Karanbir Singh) within the CentOS Project explaining how the CentOS Project works and builds what it builds. Enjoy!
Anyone who has been following my computing adventures on this website for very long will probably already know I'm a big fan of remoting protocols. Ones I've used so far include SPICE, VNC, RDP, X11 redirection, X11 tunneled thru SSH, and NoMachine's NX 3.
There are two new developments in this area. The first is the recently released NoMachine NX4 and the second is x2go. I've had a chance to play with them both and what follows are my initial reactions.
NoMachine NX 4 - If you didn't notice, NoMachine finally released their NX 4 product line. I say finally because I think I saw the first "NX 4 is coming soon" blurb in the pages of Linux Journal magazine about 2.5 years ago. NoMachine has also totally redone their website for the new release. NX 4 is closed / proprietary software. They do offer a gratis download where you can have one user and you don't have to fill out any annoying forms to get to the downloads. If a single user meets your needs, you are good to go. If you want more users, pick out one of their products that meets your needs. Their commercial offerings allow for multiple users on a single host or a session broker to scale users across multiple servers.
What's new in this release? Besides moving to completely closed software (NX 3 had GPL'ed libraries), NX 4 offers a lot of new features. The NX 3 protocol was basically an extremely efficient rejiggering of the X11 protocol and as a result, NX 3 only ran on servers that offered X11... primarily Linux. The NX 4 protocol is completely new / different and as a result they have made it multi-platform. In the past they had NX 3 client applications for Windows and Mac but not the server side. With NX 4, they now support Linux, Microsoft Windows and Mac OS X servers. So far I've only tried NX 4 on Linux but I'm sure Mac users are going to be very happy because their remote access options have been very limited.
NX 4 has gone a long way to add features that people want in a modern remoting protocol. For example, it now supports multi-media quite well even at fairly modest bandwidth limits. That isn't to say that it is magic but it does amazingly well. It supports sound and video. It has client side session recording. Files can easily be copied between client and server. I also believe client-side USB devices are supported although I have yet to try that. NX 4 has a new, modern interface that is pleasing. It is easy to use especially when you learn the config hotkey in the client (ALT+CONTROL+0). NX 4 is very scalable and can dynamically adjust to changing bandwidth conditions.
Some things I don't like: The NX 4 protocol no longer runs through SSH on port 22, as NX 3 did. NX 4 runs on port 4000... and as a result I think it takes a little more effort to initially set up... mainly because anyone using a firewall will have to open up a new port. If you are the one with control of the firewall that is no problem, but if not, it might take some effort. In the past, NoMachine had separate packages for the server and the client. Now there is only one package that includes both client and server... so if you want only the client, you are going to end up installing the server as well. I hope in the future they split the package in two again. I wonder if the single package is a marketing ploy or a sign that they didn't spend as much time on the packaging as they should have. At least they offer the Linux version in rpm, deb, and tar.gz formats.
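If you do control the firewall, punching the hole for NX 4 is quick. The commands below are printed rather than run, so you can paste the pair that matches your setup (firewalld on Fedora 18 and later, plain iptables on something like EL6).

```shell
#!/bin/sh
# Commands to open NX 4's default port, printed for copy-and-paste.
PORT=4000   # NX 4's default listening port (NX 3 rode on ssh/22)

fw_steps() {
  cat <<EOF
# firewalld (Fedora 18+):
firewall-cmd --permanent --add-port=${PORT}/tcp
firewall-cmd --reload
# plain iptables (e.g. EL6):
iptables -I INPUT -p tcp --dport ${PORT} -j ACCEPT
service iptables save
EOF
}

fw_steps
```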
Update: (25 NOV 2013) I got an email from NoMachine's Gian Filippo, who mentioned that they do have separate client and server packages, but they are in the enterprise section. He also said that NX 4 can still use ssh for transport, but it takes extra work to set up, and since Windows and Mac machines won't have sshd by default, ssh isn't the default transport. Again, I think this is an option in the enterprise packages.
x2go - The Fedora Project released Fedora 20 Alpha this week. Being the big Fedora fanboi that I am, I was reading about the new features in the Alpha release. Many of the new features caught my eye but the one relevant to this blog post is x2go.
x2go is based on the GPL'ed NoMachine NX 3 libraries. x2go is NOT the first project based on those... as the more well known FreeNX came first. Unfortunately the development of FreeNX stalled and the last updates seem to be from 2008. x2go appears to have picked up the NX 3 ball and run with it. Not only have they modernized the code to run on contemporary systems but they have also added a number of new features. They supposedly have a session broker application that can scale connections across multiple hosts turning x2go into a more complete terminal services solution. I have only tried the simplest case of installing the server on a physical host as well as a KVM virtual machine and connecting to it from a single client. In the simple case, it works quite well. They supposedly have added sound, file sharing, and client-side USB... all running through SSH... but I haven't tried all of that just yet.
x2go is still based on X11 so the server side is not available for Microsoft Windows nor Mac OS X. They do have client applications for Windows and Mac. I tried the Windows client and it looked and worked great. x2go will be available in quite a few distributions via stock repos. Is x2go available for your preferred Linux distribution yet? I wish Fedora had x2go packages for Fedora 19 as it will be a few more months before Fedora 20 is released. I certainly hope the EPEL packagers will make x2go available for EL6 (and EL5 if possible) at some point and I expect it to be a stock package in EL7.
Overall, x2go works quite well... and over a LAN it easily rivals SPICE... except maybe for video. I haven't had a chance to try it in lower bandwidth situations, although I expect it to work quite well since NoMachine's NX 3 product line was excellent.
Conclusions - Which do I like best... NX 4 or x2go? Both have their pluses and minuses. I prefer FLOSS software, but yes, NX 4 has some additional capabilities that under certain conditions I might want. For example, if I wanted remote access to a Windows machine or a Mac, x2go is not an option. NX 4 is going to be a hard sell on Windows systems because Microsoft's RDP is there by default and well entrenched.
When Fedora 20 comes out and is my distro of choice, adding x2go will be a no-brainer. If it becomes available for EL, I'll use it there too. At this stage it is hard to say because I haven't tried out any of the more advanced use cases. How well does x2go scale? Which desktop environments are written well enough to accommodate multiple users? Once I've had more time with both of them I might have a more complete answer... but at this point I'm glad there are more options in remoting protocols. Which ones do you prefer?