I have wanted to write this for a long time. Sometimes I was not really sure how to put it. Those who know how I tick will also know that I am not the kind of guy who beats around the bush. I usually address the matters that bother me head-on. And here it is again.
I am tired of supporting legacy hardware
I am so tired of supporting legacy hardware because it takes way too much effort these days. Luckily for me, Arne is the one in this project who takes care of supporting almost all of the hardware. If he has got one of the devices, he will install IPFire on it. Every time. For every release that we push out. Unfortunately, this takes days and one patch after another to get them all working fine.
IPFire 2.x has come a long way. When we released the first version of this series, the world was quite different. There was almost no hardware available in a small form factor. Fanless and silent devices were extremely uncommon. People used to run the distribution on old computers. Even companies did. Hence there was not much diversity between the small systems that power a network with a couple of users and those which serve hundreds or thousands of users.
Those two extremes have drifted further and further apart over the years. While VIA was the only vendor of Mini-ITX form-factor hardware, with its C3 and later C7 processors, not many people ran IPFire on it because of the high price tag that came with the small size. But when Intel introduced the Atom processor, these devices became extremely popular within a short span of time. Today we have low-power systems with a SoC on a single board. Neat. But there are also disadvantages. On the other side of the spectrum we got huge multi-core machines with gigabytes of memory. Some software is still adapting to using multiple cores. This is not easy to implement in all cases, especially since we still have to scale from those small systems up to machines that hardly resemble them any more.
The area of ARM single-board computers (SBCs) is going completely bonkers, in my humble opinion. Although I have been a huge fan of these from very early on, they are becoming more and more of a headache for me and for other distributions as well. New boards rise and fall in no time. They are usually completely incompatible with any predecessors or with boards that are based on the same SoC. The quality of the code in the Linux kernel varies from release to release, and as soon as you have got it all right, the hardware is discontinued. Shortly after that, many drivers become buggy again because nobody cares about this hardware any more. The developers have moved on to new things.
Regular readers of this blog know about the ARM situation. But it is no longer just the ARM SBCs that have faster and faster product cycles. Since there is no real innovation any more, there is also no need to wait until the engineers have come up with something new. Only small things are changed and products are marketed as brand new. We have seen this with a whole bunch of networking chipsets as well. Most famously the WiFi modules that are based on the rt73usb chipset family, the RTL8111/69 network chipsets and many others. You won’t notice the difference at first. It is just that the driver has to adapt a little bit for each new revision.
SoCs are going to become the standard solution for small form-factor appliances on the x86 architecture, too. That means it is no longer only the smaller things that change a little bit from time to time (like the Ethernet MACs and PHYs). The entire SoC with all its components changes. Not in a fundamental way. But all the little things together can cause real pain as well.
So you can easily see that there are many reasons why supporting current hardware is a tough job. Not all bugs are hard to find. Some result in a one-line patch. But it is always extremely time-consuming. And now legacy hardware comes on top of all of that.
What is legacy hardware?
Legacy hardware is hardware from the older days. The days when there was only a single CPU core and 256 MB of memory. Machines that were engineered in the last century and have not adopted any of the recent developments. These machines raise new problems all of the time. In my opinion this is just pointless hardware, and we are going to stop supporting some of it.
IPFire is getting bigger and bigger
Some people regard this as a bad thing and simply ignore the facts that a) operating systems are becoming more and more complex and b) they bring more and more features that evidently need disk space and memory. We cannot do a lot about the complexity. Our networks are getting more and more complicated. We require more and more services to be available all of the time. We have many devices that need connectivity and we also want to protect those devices. Therefore there is more traffic, more packets, more bandwidth, more data in general. It is quite obvious to me that this cannot be managed by a device from the past decade.
That all parts of the operating system are becoming bigger and consuming more memory is in no way a bad thing. It is quite the opposite: a deliberate decision to make software faster. When you are writing plain C code you will probably think about this the most. Should I consume more RAM or more processor power? Every algorithm can be implemented in different ways, where one requires more memory and another requires more CPU cycles. Memory is cheap. Hence people opt for the memory-consuming option almost all of the time. A CPU cycle always costs time and energy. Both essentially cost money. Imagine you run a huge data centre. Putting more memory into it is way cheaper than buying high-end processors. We also all want Linux to run great on Intel Atom processors. That is the price we are paying: memory. I find this a great deal. It is only that legacy hardware does not have enough memory to make this trade-off work.
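To illustrate this trade-off, here is a minimal, hypothetical sketch in plain C (the function names are made up for illustration, not taken from any real code base): counting the set bits of a byte can be done with a loop that spends CPU cycles, or with a precomputed 256-byte table that spends memory so that every call becomes a single array lookup.

```c
#include <stdint.h>
#include <stdio.h>

/* CPU-heavy variant: count the set bits of a byte by looping over them. */
static unsigned popcount_loop(uint8_t x) {
    unsigned count = 0;
    while (x) {
        count += x & 1u;
        x >>= 1;
    }
    return count;
}

/* Memory-heavy variant: spend 256 bytes of RAM on a precomputed table,
 * so every call is reduced to a single array lookup. */
static uint8_t popcount_table[256];

static void init_popcount_table(void) {
    for (int i = 0; i < 256; i++)
        popcount_table[i] = (uint8_t)popcount_loop((uint8_t)i);
}

int main(void) {
    init_popcount_table();

    uint8_t value = 0xAF;
    printf("loop:  %u\n", popcount_loop(value));             /* prints 6 */
    printf("table: %u\n", (unsigned)popcount_table[value]);  /* prints 6 */
    return 0;
}
```

A few hundred bytes are nothing on a modern box, and the same pattern scales up: caches, lookup tables and preallocated buffers all trade RAM for speed, and that trade only works when the RAM is actually there.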
256 MB is just not enough any more and we are not going to support it forever
Last month we updated the recommended hardware specifications and are now requiring 1 GHz CPU clock speed and 1 GB of RAM. We also updated the minimum requirements to at least 512 MB of memory. This might sound like a huge step but really is not. Again: Memory is cheap. Get some more.
We do not want to stop anyone from using IPFire on a system that is below these hardware specifications. We just cannot guarantee any more that it will work.
Before you start throwing your old systems out of the window, hold on for a second. I would really appreciate it if you replaced your current systems with modern ones. Obviously this costs money and may be unnecessary. If you are wondering why I am writing all of this, then here is the message:
Do not buy legacy hardware any more
At the very least, do not buy any more legacy hardware. Lots of users are still buying legacy hardware, which I strongly disagree with. Two of the most common representatives are the PC Engines ALIX and the Raspberry Pi One. If you want a powerful IPFire system, these are not going to deliver that for you. You cannot use the web proxy server properly on them. Networking is slow. Updates take a long time. We are not willing to go to great lengths to ease any of these issues.
This is loosely linked to the fact that we do not get enough testing feedback: we do not feel that we can keep on doing what we are doing indefinitely. The world is moving on. Supporting newer hardware properly matters more to us than the hardware of the past decade. I am sure that you will agree, as you of course want the latest things to work very well, too.
There are many alternatives like the PC Engines APU and the Fountain Networks IPFire Prime Box. Both are certainly slightly more expensive than the former ones. They are also way more powerful.
The money you spend on old hardware is not really less when you consider the higher maintenance effort on the software side. That work is just done out of your sight, and maintenance and engineering cost money. We all have to accept that Linux is not an operating system that turns your old computer into a much faster machine to run your network.
It may work for a while, but it will most certainly not live up to my expectations of the year 2015. I thought we would have fast and permanent Internet access everywhere we need it by now.
We can have that.
We just need to invest a little from time to time.