Voice of the Masses: Are rolling releases the future of distros?
We were at SUSECon 2015 earlier in the month, where the company announced the release of openSUSE Leap 42.1. (We’ll have more on the event and a review of the distro in Linux Voice issue 23!) Richard Brown, Chair of the openSUSE board, made an interesting statement at the show: rolling releases are the future of distros. And not just hobbyist desktop distros, but enterprise ones as well (somewhere far down the line).
So for our next podcast, we want to hear from you: do you agree with this prediction? In the next few years, will regular, scheduled distro releases go out the door, leaving us all running rolling distros like Arch and openSUSE Tumbleweed? Can such distros be made sufficiently reliable that constant updates won’t break anything? Or will big businesses never take the risk, and still require “traditional” releases which barely change for years?
Let us know your thoughts in the comments below!
Rolling would be mostly fine for home use and would be welcome, but in the enterprise, where downtime on a manufacturing line results in losses in the millions, we would hesitate to adopt it. It’s hard enough to get SSL and Bash patches tested and installed, let alone a wave of libraries etc.
I always liked the concept of the rolling release.
Sometimes it gets tricky – like when you introduce systemd or the next fundamentally system-altering idea – but people often object with the argument that they need more solid software. There is no reason that rolling needs to mean bleeding-edge, Arch-style; there is really no reason that Red Hat, say, could not be a rolling release, and I think the openSUSE angle here is interesting.
When it comes to big system-altering changes, our ‘Factory Development Process’ (https://en.opensuse.org/openSUSE:Factory_development_model) gives us a way of ‘staging’ those big changes alongside the regular churn of the rest of Tumbleweed.
This has allowed us to do some pretty humongous overhauls of deep internals in Tumbleweed – the most recent I can think of is rebuilding the entire distribution with GCC 5.
It took us about two months and used everything OBS and openQA have to offer. We found lots of bugs, but throughout the whole process not a single Tumbleweed user was impacted, because we didn’t publish Tumbleweed-built-with-GCC5 until the whole distribution was ready.
Last I heard, Arch hadn’t even started migrating yet (not sure – I can’t find any documentation either way now).
I’m running Arch, gcc -v returns ‘gcc version 5.2.0 (GCC)’
I can’t speak for enterprises but my experience with rolling (Arch) was one of one thing breaking after another. Basically it wasn’t worth it. What I would like to know is if this is an intrinsic ‘feature’ of rolling or just of a distro with a limited userbase (and by extension a limited set of test subjects)?
The big difference between openSUSE Tumbleweed and Arch is that we test each set of updates to our rolling release before releasing them
Therefore, we believe we’ve managed to create a reliable rolling release that doesn’t break one thing after another – if we feel there is a risk of that, we put a hold on updates until openQA (our automated test tool) lets us know everything is okay
As we rely heavily on automation, this approach doesn’t require a huge userbase of testers. The more the merrier, but the idea is we really just need people to help write openQA test cases – once it’s automated, we can check for that potential failure or key functionality every single day
You make a strong case and I’m definitely gonna have to investigate it. But on my desktop one of the major breakdowns was an incompatibility between Gnome and the proprietary Nvidia driver – and the way I understand it, those proprietary drivers aren’t part of your testing setup? So gaming rig is out and server is… probably out as well, as stability is more important than anything else. But it sounds pretty tempting for a hybrid laptop that might stand to benefit from kernel and DE updates.
We’re looking at testing NVIDIA drivers – yes, it’s tricky and we’d be totally beholden to NVIDIA to fix issues, and they’re not exactly famed for their ability to work well or keep up with upstream kernel developments.
but, yes, I still want to have openQA testing them
We already have some real hardware testing in openQA (we use it internally at SUSE for testing SLE) but that is bound to IPMI BMCs, which is no good for testing video cards
I’m looking at buying, or finding donations for, a few of these for openqa.opensuse.org: http://www.adder.com/products/adderlink-ipeps. With a little bit of hacking, we should be able to get openQA to drive a machine using them quite comfortably, in which case we’ll be able to actually see the real video output of the real NVIDIA or ATI cards… assuming we can find someone to donate the cards 😉 By the way, did you know we have an email address for offering hardware donations (donations@opensuse.org)?
Once we have a solution like that in place, at the very least we’ll be able to keep a keen eye on what’s going on with the NVIDIA and ATI drivers before deciding whether or not to release a Tumbleweed update.
Thanks for the explanation – I must say I’m intrigued. Also, I would gladly ship you my GTX 560 if I hadn’t fried it a couple of months ago attempting to replace the power supply 🙂
For what it’s worth, three things have to align:
– atomic updates: either you get all the new packages or none (rollback – rollforward)
– heavily automated testing of updates (in the case of Arch, the users are the test)
– containerized apps (shiny new system libs don’t break my dated software)
and you have a perfectly acceptable, enterprisey rolling release.
IMHO this scenario has a lot of benefits so it just might be a future of all distros.
Oh, and did I mention that none of this is trivial? It requires a lot of changes in both upstream and downstream projects (read: a lot of breakage and contention) 😀
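For what it’s worth, the first two points together can be sketched in a few lines of Python – a toy model only, with made-up names rather than any real OBS/openQA API:

```python
# Toy model of atomic, test-gated publishing: a new package snapshot goes
# live only if every automated check passes; otherwise users keep the old
# snapshot, completely unchanged. All names here are illustrative.

def publish(current, candidate, checks):
    """Return the snapshot users see after this cycle: all new, or all old."""
    if all(check(candidate) for check in checks):
        return candidate   # publish everything together...
    return current         # ...or nothing at all

live = {"kernel": "4.1", "glibc": "2.21"}
staged = {"kernel": "4.3", "glibc": "2.22"}

passing = lambda snap: True    # stand-in for a green automated test run
failing = lambda snap: False   # stand-in for a failed test

print(publish(live, staged, [passing]))           # the staged snapshot goes live
print(publish(live, staged, [passing, failing]))  # nothing changes for users
```

The point of the sketch is that users never see a half-published set: the candidate either replaces everything or replaces nothing.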
– atomic update (all updates or none) – already done in Tumbleweed, we publish everything together, or nothing at all
– heavily automated tests – already done in Tumbleweed see http://openqa.opensuse.org
– containerized apps – hmm, not sure I agree, generally speaking; I think the first two points make this third one less important – after all, containerised apps just mean more libraries that could be vulnerable in the long run
If they are like Arch Linux, I don’t think so.
I’m sick of removing a bloody soft link to get an update to work. Every single new version of systemd means potential problems. Everything is fixable, of course, but I would love something that just works.
I have been using Arch Linux for about two years, so it’s not so bad, but you can get tired of so much fixing all the time.
I love Arch Linux, but the rudimentary way of dealing with *.pacnew and *.pacsave files lying around the system is still a mystery to me. Pacman should handle those in a much smarter and more user-friendly way.
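At least finding the strays is scriptable – pacdiff, from the pacman-contrib package, automates the review-and-merge step; the helper below is just a sketch of the search half:

```shell
# find_pacfiles: list leftover .pacnew/.pacsave files under a directory.
# pacdiff (from pacman-contrib) does this plus interactive merging; this
# sketch only does the search. Defaults to /etc, where most accumulate.
find_pacfiles() {
    dir="${1:-/etc}"
    find "$dir" \( -name '*.pacnew' -o -name '*.pacsave' \) 2>/dev/null
}

# Example: find_pacfiles /etc
```

Running `pacdiff` afterwards (it respects the DIFFPROG environment variable) lets you merge each leftover file against its live counterpart.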
I really like rolling releases, and in my home office I have never had a problem with them. The backup partition with an LTS distro has been unused for two years now.
There is no reason why it shouldn’t work. The sysadmin can turn any rolling release distro into a quarterly updated distro: simply run the package manager only every four months, or whenever you feel you have time to deal with some “after-update” problems.
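Taken literally, that schedule is just a cron entry – a sketch only: the zypper invocation assumes openSUSE, and a fully unattended dist-upgrade deserves more safeguards than this in practice:

```shell
# /etc/cron.d/quarterly-dup (sketch): run a full distribution upgrade
# at 04:00 on the 1st day of every fourth month (Jan, May, Sep), as root.
# 'zypper dup' is openSUSE's dist-upgrade; swap in your distro's equivalent.
0 4 1 */4 * root zypper --non-interactive dup
```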
Server vs. rolling release debate: running one server per service, I am always surprised to see that even after months without updating, all it takes to update a server is often fewer than a dozen upgradable packages. If you run a server which requires you to update 1,800 packages on a weekly basis, I guess you are doing something wrong. Go check out Docker… 😉
I see two major challenges for rolling releases: First, they might have issues with stability. But I am sure with enough (automated) testing and a well-designed release model, a distribution could get a rolling release pretty close to a fixed version distro, stability-wise. And openSUSE might just be an example of how seldom a rolling release breaks…
But here’s another thing: I work in an environment where we have millions of lines of code developed and maintained in-house, and any change to any part of our development chain (be it a new GCC, CMake, C++ version, …) can easily keep us busy for months. We simply cannot afford to change the systems we work on more often than every few years. If we developed on a rolling release, we would probably never create any new stuff, and only do maintenance.
And from my perspective, that’s what happens in a lot of enterprises. That’s why we still see COBOL mainframes, OS/2 systems and Lisp scripts. And IMHO that’s why rolling releases will be an unworkable strategy for a lot of enterprises.
One last word, because it has been mentioned: I agree that you could use a rolling release as a basis and some stabilized container/virtualisation on top. But, come on, that’s cheating.
I’ve been using Arch for a few years now, and it’s hands down the most stable distro I’ve ever used. I do a `--sysupgrade` approximately three to five times a *week*, on a laptop and desktop, and there are months between each time there’s even a slight problem. Compare to Ubuntu, where the machine was unbootable after almost every point upgrade.
As for servers, by now the team I’m on are used to doing upgrades & data migrations on staging machines and deploying a new server rather than updating existing servers, so a rolling release would work just fine (as long as the package repository keeps at least moderately old versions around for download).
Small changes often >> large changes every couple years. I thought that was the \*nix way?
I’ve been using Gentoo on my main laptops (used for all my professional work plus most of my leisure computing) since March 2010, and have no complaints. I did switch from the testing branch to the stable branch when I installed Gentoo on my new laptop this March, because I decided to forgo dual booting this time (i.e. no safety net), and I have no qualms about rolling it. The key has to be QC/QA; any rolling distribution that does not do that super-thoroughly will cause a lot of trouble for users. But openSUSE has the means to do it properly, so I look forward to trying Leap. Another key factor would be the ability to roll back reliably if an upgrade goes badly; in the Windows world that has saved me a lot of grief on a number of occasions.
I surely hope not but YMMV. As long as the packages for the specific distro version have the required functionality and stability, I’m happy with an LTS type release. Been very satisfied with Mint on the desktop and Debian or Ubuntu LTS running most server configs.
I think this question is a bit old as there will soon be a new way of doing things with containers.
With Snappy/Docker and the like, pushing out the latest apps will be a lot simpler and can be done on an app basis.
The biggest hurdle will be pushing out graphic drivers for the various hardware out there.
My employer runs EPOS, stock control and back-office software on top of CentOS 5, and I’m sure Tumbleweed as described by Richard Brown would work perfectly well for them and many others. But Philipp has got a point, and there are many cases where any change can only happen after months and months of preparation. Regardless, I can’t see my beloved Arch Linux becoming the gold standard enterprise distro any time soon 😀