Thursday, September 03, 2015

The Enigma of IT-Controlled Development Hosts

A friend of mine once said, "Software doesn't wear out." He said it to defend his position that a non-profit organization we were both involved in should keep using some MS-DOS-based software that took a lot of effort to use and maintain; after all, it was paid for, and the staff knew how to use it.

There is value in having a stable host platform for software development. Often, a company's processes become embedded in shell scripts that live and run on software development hosts. This kind of investment requires stability of the host software environment. Stability means you run old software, and IT departments worldwide actively encourage running old software.
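Those embedded processes are often nothing more than small scripts with host details baked in. Here is a minimal, hypothetical sketch (every path and name is invented for illustration) of the kind of script that ties a process to a specific host environment:

```shell
#!/bin/sh
# Hypothetical build wrapper with host-specific assumptions baked in.
# Scripts like this are the "embedded process" investment that makes a
# stable, unchanging host valuable to IT.
set -e

# Toolchain path as it existed when the script was written; if IT moves
# or upgrades the host, every script like this has to be revisited.
TOOLCHAIN=${TOOLCHAIN:-/opt/vendor-gcc/bin}
BUILD_DIR=${BUILD_DIR:-/tmp/nightly-build}

mkdir -p "$BUILD_DIR"
echo "building with toolchain $TOOLCHAIN into $BUILD_DIR"
```

Multiply one script like this by a few hundred, accumulated over years, and the reluctance to change anything on the host becomes easy to understand.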

What my friend and modern IT departments have in common is a misconception that software doesn't wear out. In reality, it does wear out, if you think about it a certain way.

The fallacy is relating the software world to the physical world when it comes to expectations of usability. In the physical world, a car's tires wear out from use. The tires were bought with a set of requirements - size, tread type, stickiness, cost - and once bought, those requirements are static with respect to the tires. We don't change our expectations of the tires simply because our environmental requirements change. We just replace the tires using new requirements.

The tires wear out. They change relative to a static set of expectations of the car's environment. The car doesn't change the size of tires needed. The car doesn't change the tread required or the stickiness of the rubber the driver wants. If the car's owner wants to operate the car in an environment that requires different tires, they replace the tires.

In the world of software, however, the roles are reversed: the software itself doesn't change, but its environment does. Unlike the tires, software never physically wears out; instead it suffers relative degradation with respect to the other software components around it. The effect is the same as the tires wearing out: eventually you have to upgrade or replace the software.

IT departments don't like change. They like static things, and I completely understand why. Often it is the IT department that is on the front line of keeping a company's operations going. The more change that is introduced into the system, the harder it is to keep things running smoothly. And if things don't go smoothly, they get yelled at.

So I understand where they are coming from.

Development hosts are usually a very different beast when compared to software systems used for production. The primary difference is that a development host is often used by a single developer. That developer will, by necessity, alter the software environment of their development host many times per year.

Such alterations include adding or creating new tools, updating development toolchains, changing applications to match changes in personal workflows, and making changes simply because the developer wants to. This doesn't mean there are no boundaries: each developer has build environment requirements that must be maintained so the source code still builds. But the principle remains that developers have very dynamic environments for very good reasons (and even "I want to" is a good reason).

Enter the problem of IT-controlled development hosts. Remember, IT wants static environments and for good reason. The developer wants a dynamic environment and for good reason. 

Just as it is not practical for the developer to tell the IT guy that he should arbitrarily update software on a production system, it is not practical for the IT guy to tell the developer he cannot update software on his or her development host.

Let's look at a real-world example. Consider a company that uses RHEL 6.x as its primary development host operating system. RHEL 6 was first released in 2010, with some components dating from 2009. It is now 2015, and some of those original components still date back to 2009. The company uses this older version because IT has invested a lot of effort in setting up the development environment. They want to preserve that investment, so they keep it as static as possible.
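The age of a host's components is easy to see firsthand. A minimal sketch (the tool list is illustrative, not taken from any particular host) that reports what versions a development host actually carries:

```shell
#!/bin/sh
# Illustrative toolchain audit: print the banner line of each common
# development component so its vintage is visible at a glance.
report="${TMPDIR:-/tmp}/toolchain-report.txt"
: > "$report"

for tool in gcc make git python; do
    if command -v "$tool" >/dev/null 2>&1; then
        # head -n1 keeps only the summary line of each tool's banner
        printf '%-8s %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n1)" >> "$report"
    else
        printf '%-8s (not installed)\n' "$tool" >> "$report"
    fi
done

cat "$report"
```

Run on a 2015-era RHEL 6 host, a report like this would show toolchain versions frozen around the 2009-2010 release window.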

But why are some of RHEL 6's components so old? It's not some conspiracy or ignorance on Red Hat's part. Since the software doesn't wear out, there was no need to replace it. Components that needed security patches or other major updates were updated or replaced; many others were not. Often those untouched components are exactly the ones software developers touch.

This enigma of maintaining stability by preserving the past is not unique to Red Hat or to any one company. Most are guilty of something similar.

If it ain't broke, don't fix it.

The problem with that phrase is the definition of "broke". To the IT staff, if some software has no security bugs, no significant feature bugs, and no known incompatibilities, they don't consider it broken.

However, to the software developer, if some software no longer meets his or her requirements, it's broken.

If it's broken, fix it.

Both of those positions are valid. Both should be respected. So when it comes down to it, as a software developer, I won't insist that IT update their production software out of respect for them knowing how to do their job. I will insist that IT not interfere when I update my development host software out of their respect for me knowing how to do my job.

These two sides of the issue do not have to be at odds. We can actually get along, provided we play in our own backyards.