Agile, DevOps, & US Fighter Pilots

Russian PAK FA (T-50) – Now Considered by Some to Be the World’s Greatest Fighter Jet


“Agile” & “DevOps”: Two Sides of the Same Coin

In spite of the promotion of the “Agile” development lifecycle, it is a highly flawed technique, which often degenerates into nothing more than what is known as “controlled chaos”. This is exactly the term that naval aviators use to describe landings on an aircraft carrier: a maneuver filled with so many dangers and variables that if even a single thing goes wrong, both pilot and aircraft could be lost. Modern computerized systems aboard such ships have lessened the dangers to some extent by taking remote control of the planes during the approach. However, if anything goes wrong with the software or the hardware, the pilot is left to land the craft on his or her own, making the reliance on such software an added danger in those circumstances. Using such software regularly, pilots lose their hard-earned skills in landing these aircraft, if modern training has taught them such hands-on skills in the first place.

“Agile” has no features or advantages that mature software engineering practices have not already devised and demonstrated successfully across a breadth of different types of software projects. It is merely a methodology that allows developers to escape the necessities of good project implementation, and the fact that so many “Agile” projects still incur some level of failure is a testament to this contention.

One of the biggest failings of “Agile”, however, is the idea that a developer can do it all. This notion followed the trends of the early 2000s, which began with businesses eliminating crucial departments that supported the software development process (e.g., Quality Control). Between the outsourcing and the reductions in staff, IT organizations were unfairly left to devise their own methods for keeping themselves afloat despite dwindling resources. This was the catalyst for concepts such as “Agile”.

However, it was already well known by this time that developers had more than enough on their plates without taking on ever-increasing technical responsibilities.

In the 1990s, before outsourcing was in full throes and the Internet had become the accepted medium for enterprise development, it was believed that developers could assume not only their own role but also that of the Local Area Network (LAN) specialist. This job mix was attempted and proved short-lived due to the massive knowledge base that LAN specialists had to contend with on their side of the fence.

Senior developers wrote about the folly of this idea and confirmed that no developer could adequately acquire and handle two different but enormous knowledge bases. As mentioned, the attempt was short-lived.

However, businesses were not done with their attempts to reduce costs by trimming down IT organizations, and the next idea to be floated was that developers “should know the business”. The question is, why? Developers are technical personnel; they are not there to “know the business” but to implement what the analysts, who are supposed to know the business, provide them. By now such a concept may seem alien to younger readers of this piece. But really, what is “knowing the business” going to do for a quality developer except enable him or her to act as a business or systems analyst? Yet that was the gist of the argument for cutting those levels of technical staffing.

Nonetheless, this idea took hold and allowed businesses to trim their business-analyst departments while eliminating systems analysts altogether, leaving IT organizations to absorb these losses on their own.

The result was a crop of emerging technical managers demanding that developers now have some in-depth knowledge of business processes. And the game was afoot, allowing “Agile” promoters to incorporate this delusion into their own proposals. What they seemed to have ignored is that all businesses operate along the same paradigms. It doesn’t matter whether they sell insurance or toys. True, the cultures may differ, but the underlying processes are all the same.

In terms of technical refinement, “Agile” promoters love to tout the idea that it is a replacement for the “Waterfall” approach to software development. However, as many senior engineers have written, “Waterfall” was but one approach among many. “Agile” promoters couldn’t get past this idea and promoted it to the point that many now believe it. Given this emphasis, one could easily accept “Agile” as a singular replacement for the “Waterfall” approach, but then the question becomes, “What is the replacement for all the other life-cycle models that have been used over the years?”

With “Agile” now overwhelming many IT organizations through its adoption, the technical evangelists began thinking of a new way to merge the rest of the IT organization into the software development environment. What resulted was a concept believed to be the answer to better “communication” between different personnel, as well as a way to increase the speed at which applications could be updated or released to production. This new paradigm is now known as “DevOps”, and the same marketing hype that surrounded “Agile” is now making its rounds for “DevOps”, with the combination of the two pitched as the way to solve all of the issues IT organizations have been known for. This has generated the chimera of the “full stack” developer, which simply means that a developer is now expected to do it all. What else could it mean?

One of the main concepts that both “Agile” and “DevOps” share is closer collaboration through communication. For software developers, communication used to mean developing project plans and doing quality requirements gathering. So what is it supposed to mean now, given that “Agile” doesn’t really encourage or emphasize these two critical aspects of software development?

“DevOps” appears to mean that you are now mixing operational processes such as Quality Control and “Release Control” (among the various other processes that operations sections handle) into the software development environment. This can only add to the pressures and stresses developers are already under from technical management’s constant delusion that everything must be sped up ever faster with each passing year.

To begin with, there are very sound reasons for the compartmentalization of various functions in the software development and implementation processes.

On the one hand, a publicly traded company must undergo auditing inspections on a regular basis, and unless someone has changed the SEC statutes, combining operations with software development organizations simply won’t get a company through such an audit. Instead, it would most likely get the company heavily fined for “commingling” critical processes. The reason such audits have these requirements is to formalize impartiality toward the production environment.

And for all the blather from “DevOps” promoters about developers having better access to production implementation processes, there are also regulations governing public companies that restrict this type of access for developers, for the same reasons just noted.

These restrictions, of course, do not apply to privately held companies, which startups and small businesses usually are. However, no matter the company type, there is a good reason why development staff should be kept separate from operations: mixing the two makes such environments more conducive to having developers take on additional responsibilities, allowing management to see yet another opportunity to reduce staff. This happens all the time.

With the software profession rapidly adopting the “Agile” ideology, adding this new ideology of “DevOps” is sure to make things even worse in development organizations already suffering under the general chaos that “Agile” promotes.

However, younger technical professionals seem not to understand this, since they have no experience of environments that worked successfully without “Agile” against which to compare the current avalanche of nonsense. Instead they see better access to their process needs as a seemingly good thing, without understanding the drawbacks.

“DevOps” promoters, like their “Agile” counterparts, tout the idea that one of the major foundations of this new paradigm is “communication” between the different types of specialists in such environments. So let’s understand this: with email, land-lines, and a raft of smart devices all available to developers in today’s technical environments, they now somehow need to be in close physical proximity to the people who actually implement software into production?

The other side of this coin is that with “DevOps” encouraging operations personnel to be closely aligned with the software development organization, developers will supposedly find it even easier to get their code into production through this additional access. The question is, how?

No matter how you slice it, operations personnel are still responsible for implementing completed software into production; at least until management figures out a way to get rid of these people as well. But here is the kicker: we are not just talking about software implementers but Quality Control personnel, “Production Control”, “Desktop Support”, and the management of the applications used to support all of these processes as well.

If you read between the lines of a lot of this hype, there is an underlying suggestion that software developers can do their own Quality Control, which has already been scientifically proven to be impossible.

Over the many years that our profession has existed, a lot of research has gone into the capacity of software developers to eliminate their own defects. The combined results of such studies have clearly shown that developers can, at a maximum, find approximately 60% of the defects that have entered their code. 60%! That’s it. And the simple reason for this is that developers are too close to their own work. It is not a criticism but a simple fact.

The new ideologies also hype the idea that with better tools, developers will be able to eliminate a greater number of defects on their own. I wouldn’t count on that. Just learning the complexities of all these new tools will introduce its own issues into the mix.

It seems that such ideological evangelists believe that tools and products are the way to make software development faster, more accurate, and more efficient. And yet can they be entirely blamed for this when US society in particular has become enamored with technology as the answer for everything? Have a requirement? We have an app for that. Using one’s brain for even simple tasks such as reading a map is becoming a lost skill.

Nowhere is it ever mentioned that software developers should do what they do best, which is the technical design of applications and systems, along with writing the code itself.

So let’s see how these types of ideologies have worked out in the real world, using an entirely different profession that implemented the same kind of short-sighted ideologies that “Agile” and “DevOps” both promote for software development.


US Fighter Pilots

The real-world example is United States fighter-pilot training and mission capability across the Air Force, Navy, and Marine Corps. The results of the similar ideologies that infested these areas of our Armed Forces have been horrendous.

Being a fighter pilot is a specialty unto itself, just as software engineering is. Though the professions have nothing in common with each other, the skills that both types of professionals require are at the same level of complexity: both fields have enormous knowledge bases to contend with, and both must deliver quality, whether to survive in air combat or to reduce the costs of ownership to business organizations while maintaining their computerized processes efficiently.

Though the delivered results are of course different for each profession, the underlying factor here is the quality of what is being delivered, which promotes either the longevity of a pilot or the longevity of a business organization.

The US Army Air Service, which became the US Army Air Corps in 1926, the US Army Air Forces in WWII, and finally the US Air Force in 1947, has never produced a good share of top combat pilots, while both Allied and enemy air forces have consistently demonstrated an ability to produce far more highly capable pilots during all periods of engagement. Similar results have been shown for US Navy and Marine Corps aviation.

For the United States Air Force in particular, there are two specific reasons for this failing. Starting with WWI, US combat aviation capability was always frowned upon by the US military bureaucracy. Prior to WWI there was very little interest in investing in air-power technologies. Once WWI began and the US became engaged in 1917, US Army cavalry officers became the foremost detractors of developing combat aviation, fearing that their own roles would be diminished, given that up to that time the cavalry had always been considered an elite arm.

This left the US unable to field a single combat aircraft of its own design. The only plane the US produced was the Curtiss JN “Jenny” two-seat trainer, which later became the common plane of barnstorming fame and of the US Air Mail Service in the 1920s. The US did receive a license during the war to build the two-seat “DH4” bomber from the British firm Airco, and these American-built variants did see combat duty in Europe. For fighter aircraft, the US was initially provided with Nieuport 28s, a rather ineffective fighter when placed against equal or better equipment, until later, when the Americans were provided with British SE5a aircraft and French Spads.

Thus, for all the innovation other nations achieved with air power during WWI, the US remained far behind and acquired little knowledge from its own allies with which to develop its combat aviation capabilities.

The neglect of the fledgling Air Service after the war was maintained by the US cavalry officer corps, as well as by the drawdown of funding for the US military in general, since it was US tradition to disband the extended units created for a conflict. Both of these situations remained in place up through the beginning of WWII, when it became evident that the US would have to develop its air-power capabilities and recruit pilots to build an aviation arm.

When the US entered the war, its top-of-the-line aircraft were the Army’s P-40 Warhawk of Flying Tiger fame and the US Navy’s Wildcat fighter, both capable planes in the hands of skilled pilots who knew how to use their best traits. The P-40 was so durable that it remained in production throughout the war. Lesser known, and derided by pilots for its difficult handling, was the emerging long-range fighter, the P-38 Lightning.

Again, however, the overall drag on US air-power development remained evident even after the initiation of hostilities. It was further demonstrated when General Curtis LeMay, one of the senior US bomber commanders in the European theater, allowed US bombers to enter combat in Europe without proper fighter escort, though an escort was available: the P-38. (The P-38 had the capability and the endurance to support such bomber missions, but it was a tough plane to fly, so pilots who weren’t provided in-depth training in it avoided flying it, leaving few pilots able to do so.) The result was the dramatic loss of US bomber aircraft and air crews up through 1943, until the famed P-51 finally entered service.

Again, few American fighter pilots emerged among the top echelon of pilots during the war when compared with Britain, Russia, and Germany. Though the US did produce quite a number of fighter squadrons with some excellent aircraft, their ace-making capabilities remained below average due to the ideologies fomented by the earlier US Army Cavalry, which encouraged the view that the Air Corps existed to service the Army.

Though this type of ideological issue could be found in all of the air forces that engaged in both world wars, it was only the US that maintained this short-sightedness, while the other air forces more quickly shed the limitations placed on them by such conservative thinkers, which allowed for better training of pilots.

Even Canada, the US’ northern neighbor, consistently produced better pilots in general and more aces.

Whatever bright moments the US Air Force had in WWII, it would once again be contained when, shortly after the war ended, General George Marshall imposed an “Agile”-like ideology on all of the US air arms. This culminated in the “up & out” culture throughout the services that is still pervasive today, whereby pilots are forced to manage their careers by fulfilling the requirements to move up instead of doing what they are supposed to be doing, which is flying fighters. Not doing so meant that pilots and other such specialists were forced out of their military careers altogether.

In the case of fighter pilots, Marshall wanted his pilots to acquire a plethora of military skills to prepare them for any and all eventualities that might arise during the Cold War with the Soviet Union, which was just beginning to heat up.

What did this edict do? Well, fighter pilots are there to train for and fly in combat or, as they get older, to apply their experience in air-operations positions, keeping their huge knowledge bases and competencies, the best of the best, within the military aviation community. Marshall’s plans put fighter pilots everywhere except where they were supposed to be. Fighter squadrons would simply have to make do with less capable resources.

Similarly, the results of US military aviation in the later Korean War turn out to have been fairly dubious at best once all of the statistical research is evaluated, since US pilots could not compete on an equal level with their counterparts in Allied air forces or with their opponents. Subsequent fighter exercises with other nations after the war demonstrated that US pilots simply did not have the fighter skills their foreign colleagues were provided with.

By the late 1960s, Navy air-operations officers were seeing the detrimental trends these policies had imposed on their pilots and decided to create the Navy Fighter Weapons School, more popularly known as “Top Gun”. The school’s program was designed around the same techniques that the US Air Force had devised for its own fighter weapons school in the 1950s.

The “Top Gun” pilots, however, were selected from the top one percent of naval aviators, excluding other good pilots from attending and thereby preventing a wider knowledge and competence base from forming within the naval aviation community. In addition, “Top Gun” pilots, like their Air Force counterparts, came under a new “ideology”: missile-only fighter training, since senior aviation commanders, weapons manufacturers, and analysts had concluded that the era of the “dogfight”, the very thing fighter pilots were supposed to learn, was over as a result of the more modern missile technologies then being implemented. Yet no other air force on the planet forgot the valuable lessons of that “era”; they maintained in-depth training in such fighting and used it consistently in combat operations.

The results of US military aviation in the Vietnam conflict were actually not much better than those of Korea, despite the new top-level training of the “Top Gun” pilots in the Navy and similar training in the Air Force. Soviet fighter pilots were actually found to be much better than their American opponents, owing to the in-depth training provided by the Soviet Air Force.

Since that time, US fighter pilots, no matter how advanced the aircraft they were flying, have never won an international competitive exercise against a foreign competitor, simply because ideology denied them the ability to maintain and extend the valuable lessons of experience obtained in actual combat.

With Marshall’s original edict still in play, US fighter pilots serve an average of four years before they must rotate out to a different type of position to fulfill their career goals. This leaves a paltry knowledge base for new fighter pilots coming into the services while diluting overall capabilities to a bare minimum.



Has anyone noticed the parallels here with the current state of the US software profession? Marshall’s “ideology” is in essence the same as that of “Agile” where developers must assume different roles in order to remain functional in such environments. Like their counterparts in US military aviation fighter squadrons, software developers today are asked to take on undue burdens that do not let them concentrate on what they are actually there to do, which is design and develop quality software. This was initiated with the business “ideology” of outsourcing, which began in the mid-1990s and led companies to expect more out of their remaining developers, while first the “Extreme Programming” adherents and then later those of “Agile” fostered this trend within the software profession itself.

Instead of resisting such pressures through proper negotiation with management, which could have produced better outcomes for development timelines while preserving necessary and vital functions across technical departments, the people who proposed “Agile” went along with their business counterparts. They fostered the breakdown of software development paradigms to fit their own narrowly focused agendas, allowing the trend to become the prevalent ideology it is today.

“DevOps”, in retrospect, is similar to the late addition of the “missile only” ideology in the Vietnam War, which unnecessarily caused many tragic deaths as well as the loss of expensive equipment. Here, however, “DevOps” hopes to “streamline” the production implementation process while quietly undoing compartmentalized operational procedures, as software developers along the way are encouraged to take on these roles as well. US fighter pilots were literally “streamlined” out of their effectiveness.

The result is a growing similarity to what has happened in US combat aviation: software developers are involved in so many different aspects of creating software that they are no longer regularly doing what they really should be doing.

“DevOps” won’t really streamline anything. Instead it will foster additional chaos within software development organizations as they all try to cope with this new “ideology” that promises them better implementations of production products. But will it?

In manufacturing, leading process-flow analysts refined existing processes to produce better results, and their techniques found their way into the US manufacturing sector. The US subsequently threw all of that away with outsourcing.

“DevOps” will simply encourage companies to eliminate operations departments, which house quite a number of areas all necessary to proper production implementation. We have all seen this already with the results of outsourcing and with the hyping and acceptance of the “Agile” paradigm. There is no reason to suspect that a similar process will not occur with “DevOps”.

“Quality Control” is the leading area that the literature mentions as something that “can” be merged into software organizations. However, there is also “Production Implementation” and “Production Control”, the latter of which is often tasked with verifying the accuracy of the output of various systems. There are also the hardware and network teams, which often come under the purview of operations. All of this is now expected to fall under the umbrella of “DevOps”. And exactly how are these various areas to be held accountable to the rest of the IT organization when they could just as easily lose the ticketing systems that control the reporting of issues to software organizations, along with the other systems that provide the statistical information necessary for accountability? They could also lose “Quality Control’s” ability to do proper initial and regression testing for production implementations, while “Production Control” would have to rely on software developers to verify the accuracy of their own results. Why bother with any of this when software developers can take on new responsibilities for testing and updates can be rushed into production very quickly?

In both ideologies, “better communication” between users and software developers, and between software developers and operations, is touted as a primary motivation. But is there really a problem here? Do operations departments have to be somehow melded with software development for “better communications”? This doesn’t seem plausible, given all the tools that vendors tout as the answer for everything. So why meld two types of departments into one to streamline existing processes? Can’t those processes be streamlined without disrupting the existing structures of the IT organization?

Part of the answer that “DevOps” promoters offer is the concept of “continuous integration”, the process by which software developers can more easily get their output into production environments. It is a nice idea, but does it really work in practice? Microsoft offered a software aspect of this when it upgraded ASP.NET and IIS to allow real-time module updates for .NET web applications, which included the dynamically compiled web-site component of the .NET infrastructure. This allowed operations personnel to merely copy a new C# or VB.NET source-code module to an IIS web site without having to take the server down.
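The mechanics are easier to see in miniature. The sketch below is a rough Python analogy, not the .NET mechanism itself: a source module is overwritten on disk and reloaded inside the running process, much as the dynamically compiled web-site model recompiled a changed source file without a server restart. The `pricing` module and its contents are invented purely for illustration.

```python
import importlib
import pathlib
import sys
import tempfile

# Stand up a throwaway "site" directory containing one source module.
site_dir = pathlib.Path(tempfile.mkdtemp())
(site_dir / "pricing.py").write_text("def quote():\n    return 'v1'\n")

sys.path.insert(0, str(site_dir))
import pricing

assert pricing.quote() == "v1"

# "Deploy" a fix by overwriting the source file in place, then reload
# it inside the running process: no restart of the host is required.
(site_dir / "pricing.py").write_text("def quote():\n    return 'v2-patched'\n")
importlib.reload(pricing)

assert pricing.quote() == "v2-patched"
```

The convenience is obvious, and so is the hazard the article describes: nothing in this flow forces the new version through any quality gate before it is live.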

This enhancement, which appeared sometime around 2005/2006, had its drawbacks, however. For one, source code is not as secure as the MSIL pseudo-code that compiled .NET applications produce (though MSIL has proven not so secure either, it can at least be obfuscated). Second, modules that had not yet been JIT-compiled into the cache needed to be compiled, or recompiled if removed. This did not present overly serious issues, but it did make web sites somewhat sloppier in smaller organizations where accountability was often overlooked, except when necessary.

The long-term outgrowth of this improvement is that entire web-based applications can now be easily modified on a regular basis, with hyper-defined schedules that allow for many releases of such modifications each day. What has this given rise to? More carelessness on the part of software developers and/or their technical managers, since if a defect is found it can be quickly modified and re-implemented. This in turn has lowered the bar for quality in such organizations, as software personnel now promote the idea that it no longer matters if defects enter production, since they can be corrected so quickly.

That is all well and good as long as a critical defect doesn’t get promoted into production and left there, as one was in the 1980s at a highly reputable hospital, where it seriously affected life-and-death analysis. At this hospital, a new diagnostic-results package was implemented through a third-party vendor. The vendor had not done its quality control properly in the area of data entry for the primary and secondary blood-test results that were to be entered into the system. These two blood tests were standard procedure in all hospitals, since the second test was used to confirm or dispute the first test’s results and so ensure a proper diagnosis.

The life-and-death nature of the issue was this: when the second blood test’s results were entered and successfully saved, the system did not record them as the secondary results but merely copied the first test’s results over the second set and displayed them to the doctors during patient analysis. This not only presented the information inaccurately but destroyed the vital results of the second set of tests. Thus, the second blood test ALWAYS confirmed the original diagnosis.

This very serious defect was never caught by anyone; not the vendor, not the implementers of the software on site, nor the nurses or doctors, who were probably looking at the tests from different angles when doing their research. And so the defect went into production and stayed there for six months, being processed on a daily basis, until the hospital’s senior analyst in charge of the data caught the issue quite by accident and was able to contact the software vendor to have it quickly corrected.

Luckily, very luckily, no one died as a result as this hospital would have been brought to its knees had such an issue ever been exposed publicly. Again, luckily, it did not.

This is what can happen even when proper IT structures are in place. Think of what can happen when they aren’t, as “DevOps” promoters propose. Does anyone really believe that a DevOps infrastructure would save the day when businesses see a new opportunity to cut IT staff further and software developers are under increasing pressure to deliver defect-free work far beyond the 60% level already noted? It is not very likely.

The final fly in this ointment is the actual “brain drain” resulting from both “Agile” and “DevOps”, which has become the norm in the professional software field, along with the near glee with which younger professionals appear to greet the pushing out of more experienced senior professionals.

If we go back to the fighter-pilot example, we find that military pilots currently have to move out of their positions after around four years of active flying or lose the ability to maintain their military careers. While on active duty, they receive far less than adequate training and fewer flying hours than their foreign counterparts, due to the constant re-allocation of military funds toward political adventurism in foreign conflicts, which has literally sapped the entire US economy, and to the consistent purchasing of highly expensive weapon systems that are proving to be little more than siphons into military contractors’ pockets, since none of these new systems work as required. There are so many documents on this situation regarding the new F-35, for example, that a book could be written from them.

In the software profession, vendors and software managers, along with their business counterparts, are increasingly looking for younger and younger professionals, who often have no desire to work with senior professionals. The result is much younger IT organizations with less and less of the mature experience needed to catch the many issues that creep into the software being developed.

To validate the truth behind age discrimination in the current software development industry, simply search on the words “age discrimination” and a bevy of recent documents will come up describing the seriousness of the issue.

The result of all this is that when a field allows a “brain drain” to occur, as has happened in the software profession, knowledge bases contract substantially, younger professionals lose the ability to absorb the mature disciplines and experience of older professionals, and quality seriously deteriorates. This has happened to US manufacturing, US military weapons development, US fighter pilots, and many other professions in the United States, now including the software profession, where very little of quality is produced any longer.

When you start stripping away basic assets and resources for the purposes of cost containment (merely a nice phrase for making business executives wealthy) and “streamlining” operations, it spreads through business organizations like a cancer. What you get then are things like “Agile” and “DevOps”, both of which attempt to accommodate the existing sociological deterioration instead of fighting it.

Both these paradigms continue to promote the deterioration in quality software output, and the younger professionals are not entirely to blame, since they do not have access to the experience that could show them anything different. And when the few older professionals left have tried, they have been shunned and pushed out of their positions by youth-oriented arrogance.

Though there are fewer and fewer senior professionals left in the software profession, it may behoove the younger crowd to begin supporting their retention, in order to be able to qualify what the younger crowd is proposing with its development paradigms. Who knows, maybe “Agile” and “DevOps” could be refined to the point where their implementation actually makes sense, using the best of both software engineering and the newer concepts being promoted. Given what is happening now, however, that scenario does not appear very likely…



What Is DevOps?
Is devops killing the developer?
How ‘DevOps’ is Killing the Developer
Reforming America’s Overhyped Airpower


  1. Hi,

    It’s an interesting article, and I wonder whether the author has just finished a tricky or failed Agile project, or has just read a book about Agile.

    There were a number of themes, some of which I’d like to challenge.

    Theme 1: Agile is “controlled chaos”.
    It is true that a poorly run Agile project is chaos, but any poorly run project is chaos. A well-run Agile project is controlled; it’s as simple as that.
    Sure, there is no fixed scope at the start of the project, which makes budgeting and architecture difficult. But once you accept the reality that scope is never truly fixed and will change, it becomes easier to live with, and the ability to adapt to that change is welcome.

    During development there must be discipline to produce technically excellent code; the last thing that is needed is discovering issues later on that slow progress. There must be a focus on efficiency and on automation. The team should be measuring, inspecting and adapting to ensure optimal performance. Agile encourages this, but really any project team should be doing it.

    There must be a feedback loop with the customer. The iterative nature of Agile means you can regularly present progress back to the customer; if the product isn’t turning out how the customer wants it, they can give feedback and it can be adjusted while it’s cheap to do so. Sure, scope can change during this, and that needs to be controlled; if it’s not, then as with any project: chaos.

    In my personal experience, I have had more success delivering Agile projects than any other sort. I don’t believe businesses are adopting Agile to lose headcount or save money; it’s the benefits of predictability, sustainability and transparency that are driving this change.

    Theme 2: Agile Developers have to do everything/know everything.
    I don’t think this is totally true, but I do see where you are coming from. There is a lot of talk about T-shaped people being able to pick up and do all kinds of work. In reality there’s a mix of specialists and also some generally skilled people who can put their minds to a mix of tasks. All teams need a mix; I don’t think you can or should get rid of specialists.

    Theme 3: DevOps
    This is an emerging discipline, but I feel it’s for the good. I’d rather have developers and ops people working together through a project than the “INCOMING” approach that a lot of teams have for releasing software. Ops people can develop the release scripts in collaboration with the developers; these scripts can be run automatically dozens of times during the development phase, testing the release against production-like environments, which increases the chances of success massively.
    Creating feedback loops to tell the developers what the impact of their changes has been is also vital. If they have introduced some code that has affected production performance, for example, they can quickly see this and prioritise. If the team has a rapid release cycle, the issue can be addressed quickly.
    I have seen DevOps used very effectively. It’s a challenge to set it up, a challenge to change the culture to get it to work, but once in place, its results are incredible.
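    A feedback loop of the kind described above can be sketched in a few lines. The following Python check is purely illustrative (the function name, the 20% threshold, and the latency-sample inputs are my own assumptions, not any particular DevOps tool’s API); it flags a release whose median request latency has regressed against a pre-release baseline so the team can see and prioritise the issue:

```python
import statistics

# Hypothetical post-release check: compare median request latency after a
# deploy against a pre-release baseline and flag a regression for the team.
# The 20% threshold and the function name are illustrative assumptions.

def latency_regressed(baseline_ms, current_ms, threshold=1.2):
    """Return True if median latency grew past threshold x the baseline."""
    return statistics.median(current_ms) > threshold * statistics.median(baseline_ms)
```

    In a rapid release cycle, a check like this would run automatically after every deploy to a production-like environment, feeding its verdict straight back to the developers.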

    Theme 4: Younger people
    As an “Old” myself, this does worry me. I’ve always tried for a mix in my teams: the older, steadier and more experienced chaps have been the backbone; the younger staff, frankly cheaper, but also keener to learn, faster to pick up new things and more willing to adapt to change.


    • Thank you for your valuable comments. They are appreciated.

      I would also like to make a few notes of my own in response…

      Yes, I have worked in an “Agile” environment where the project was quite successful.

      However, getting the project in on deadline required 14-hour days and some weekends. The faults of the project did not lie so much with the development cycle as with the technologies used. The poorly defined deadlines, however, which were somewhat an outgrowth of the project’s “Agile” proponents, were related to how proper user interactions were completely ignored.

      Though your experiences with “Agile” have overall been positive ones, I have had positive experiences with other styles of development that were more robust than “Agile”. That being said, such experiences were few and far between, because so many of the technical managers I have worked with simply had no interest in pursuing proper development life-cycles.

      My most successful project was with an assistant, when the both of us followed pure software engineering practices with a phased-in project target date. In this vein, following software engineering protocols, we not only did proper project scheduling analysis using Function Points but also insisted on using the initial estimate as it was meant to be used: as an estimate.

      As the project progressed, we kept the project manager informed of our progress, always providing him with an updated target date, which got increasingly closer to the original estimate due to the re-calibrations of our Function Point Analysis against the work done and the subsequent updates to our forecasting of the work left to be completed.

      We hit our project date with the deliverable in production within 4 days of the original estimate.
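      The re-calibration loop described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name and the simple hours-per-Function-Point productivity model are my own assumptions, standing in for the full Function Point Analysis procedure:

```python
# Hypothetical sketch of re-calibrating a Function Point estimate: the
# initial figure is treated strictly as an estimate, and the forecast for
# the remaining work is updated from the productivity observed so far.

def recalibrated_forecast(total_fp, completed_fp, hours_spent):
    """Forecast hours remaining from the productivity observed to date."""
    if completed_fp <= 0:
        raise ValueError("need at least some completed function points")
    hours_per_fp = hours_spent / completed_fp   # observed productivity
    remaining_fp = total_fp - completed_fp
    return remaining_fp * hours_per_fp
```

      Run at each reporting cycle, a forecast like this converges on a stable target date as productivity data accumulates, which is what lets the project manager receive an ever-more-reliable updated estimate.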

      Though “Agile” has been reported to have a good success rate, much of that reporting has been against small projects and maintenance tasks. When applied to enterprise-level projects and/or complex endeavors, the statistics are not as good.

      Does this mean that “Agile” cannot be adapted to such projects? No, not at all. It means that it still has to be refined. In doing so, however, it would return to adopting many of the techniques already part of the software engineering universe.

      Beyond technically based organizations such as start-ups or other companies whose business is technology, most businesses see “Agile” as a buzzword that allows them to reduce staff or keep existing staffs overly lean.

      In 42 years in the profession, in companies of all types and sizes, I can count on one hand the good project managers I served under who understood what developers were doing and what they needed. The rest were simply people who pushed the development staffs to get the products out the door, and if “Agile” or whatever else was in vogue at the time sounded good, they paid lip service to it.

      That being said, organizations today, staffed by younger personnel, are experiencing development in a different style, much of it promoted by ideological allegiances to one development paradigm or another. This is not necessarily bad, but no development paradigm can be all things to all projects, and I believe this is where “Agile” has failed most severely.

      The problem with “Agile” promotion is that its promoters see “Agile” as a hammer, so every project looks like a nail; that is what I have seen and read.

      And as I have mentioned numerous times in my writings on the matter, there was little need to develop any new development paradigms, since software engineering already had all the necessary practices in place for any type of project. One merely had to research the offerings and decide which one was most appropriate for the project at hand.

      Instead, “Agile” promoters narrowed their focus on the “Waterfall Approach”, which was perfectly suited to what it was designed for; but no one prior to the introduction of “Agile” ever sought to foist such an approach on all projects, and we rarely used it for small and medium-sized projects, for which we mostly used “incremental” approaches where we were allowed to.

      As to “DevOps”, as I also mentioned, production processes can be streamlined as necessary with such things as “Continuous Integration” and the like, but there is little need to meld Operations departments or their various sections with software development; it serves no purpose but to add further casualness to the process. And if companies can get away with it, they will use “DevOps” as another method to trim staff or to keep departments under-staffed.

      In addition, companies under SEC auspices as a result of being publicly traded would see such a melding fail their external audits, as IT organizations must maintain a separation of duties between development and operations to pass.

      In the end, one thing that should be understood is that the majority of corporations in the US, and increasingly in other countries, are “evil”. It is not necessarily the fault of the individuals involved but of what modern organizations are expected to produce, which is not products of quality but simply profits for their investors. This goal has been made worse by the emergence of finance capital over the last 20 to 30 years as the primary driver of US and European GNPs. In the US, finance capital now makes up approximately 47% of our GNP when manufacturing should hold that position. This is also why you see so much speculation in US markets.

      The idea that corporations are “evil” has been very well documented by the Canadian jurist Joel Bakan in his treatise, “The Corporation”, which is still popular and available at Amazon.

      If you would like to expand your viewpoints toward business in general, this is an excellent and rather frightening sociological study…


  2. May I suggest this book:

    Agile! The Good, the Hype and the Ugly
    Author: Bertrand Meyer
    The first exhaustive, unbiased review of agile principles, techniques and tools

    Are you attracted by the promises of agile methods but put off by the fanaticism of many agile texts? Would you like to know which agile techniques work, which ones do not matter much, and which ones will harm your projects? Then you need Agile!: the first exhaustive, objective review of agile principles, techniques and tools.
    Agile methods are one of the most important developments in software over the past decades, but also a surprising mix of the best and the worst. Until now every project and developer had to sort out the good ideas from the bad by themselves. This book spares you the pain. It offers both a thorough descriptive presentation of agile techniques and a perceptive analysis of their benefits and limitations.
    Agile! serves first as a primer on agile development: one chapter each introduces agile principles, roles, managerial practices, technical practices and artifacts. A separate chapter analyzes the four major agile methods: Extreme Programming, Lean Software, Scrum and Crystal.
    The accompanying critical analysis explains what you should retain and discard from agile ideas. It is based on Meyer’s thorough understanding of software engineering, and his extensive personal experience of programming and project management. He highlights the limitations of agile methods as well as their truly brilliant contributions — even those to which their own authors do not do full justice.
    Three important chapters precede the core discussion of agile ideas: an overview, serving as a concentrate of the entire book; a dissection of the intellectual devices used by agile authors; and a review of classical software engineering techniques, such as requirements analysis and lifecycle models, which agile methods criticize.
    The final chapters describe the precautions that a company should take during a transition to agile development and present an overall assessment of agile ideas.
    This is the first book to discuss agile methods, beyond the brouhaha, in the general context of modern software engineering. It is a key resource for projects that want to combine the best of established results and agile innovations.


  3. Still a great read 3 years later Steve; I tend to agree with the general tone of your thoughts;

    “most businesses see “Agile” as a buzzword to allow them to reduce staff or keep existing staffs overly lean.”

    I too have done Agile in various teams where no one had heard of the “Agile Manifesto”, yet we were supposed to be doing Agile. What a joke; the undertone was “let’s just keep changing our definition of what it means to suit the current situation”. Where are the specs? It’s Agile, we don’t need them.

    My overall impression is that “Agile” has become a buzzword to throw around when people (management, domain experts, analysts) can’t be bothered with a proper, well-defined process and/or specs up front. That would have been fine if the budget or deadlines were unlimited and flexible, but that’s never the case.

    Biggest annoyance: DAILY stand-up meetings. Stop asking me to report how I am doing or what I have done. If I told you yesterday this task is going to take 3 days, it is going to take 3 days. Don’t eat up my time.


  4. Glad you enjoyed the piece… 🙂

    To me, Agile was a natural follow-up to the corporate destruction of the various vital departments and/or divisions (ie: systems analysts) that once made up the IT profession.

    Now we have a mess, much of it based on “MeTooism” combined with the arcane JavaScript language, whose creator never intended it to be used the way it is being used now.

    In my view, the zenith of all web development was Microsoft’s ASP.NET WebForm construct, which made web development compartmentalized and open to the mass of developers who needed or wanted to be able to develop web applications.

    Sure, there were plenty of things wrong with the WebForm construct, but nothing so serious as to warrant an entire industry changing lanes to work with far greater complexity, which inherently lowered productivity.

    In the recruitment letters I still receive from IT hiring agencies, the specifications and requirements for a professional appear far worse and far more muddled with the accoutrements of web skills, mostly based on JavaScript, than they were several years ago, when a developer could accrue the necessary skills to develop web applications with the WebForm paradigm.

    Though the MVC paradigm was created in the 1970s, it was the Java community that popularized it for its own web development needs. Java, however, was originally developed for small appliances (euphemistically described as toasters :-)) and was only later put into commercial development environments for large-scale applications on the web. As a result, the MVC paradigm may have made sense for such large-scale development.

    However, the Java language itself was a product of computer scientists, and it was then eagerly supported by academics and the younger generations of developers entering the profession in the late 1990s.

    Fearing a loss of market share as a result, Microsoft thought that by offering a similar development construct to its own technical community it would be able to entice many Java developers over to the .NET platform; an endeavor that did not quite work out all that well. Java continued on, while Microsoft simply sowed a lot of confusion in its own ranks.

    Needless to say, Microsoft already had a .NET version of the MVC paradigm in the freely available, third-party “MonoRail” project from the Castle Project. And yet few were really interested in it, since many found WebForms to be a superior development construct and saw MVC as merely a throwback to “Classic ASP”.

    Yet it wasn’t until Microsoft literally copied the project for its own use that MVC eventually became popularized within the Microsoft development community. To do this, however, the entire community had to be upended, a process that actually began with the completely stupid concept of XP programming, which was based on an entirely failed project within the Chrysler Corporation. One year before that project collapsed under the weight of its own design stupidity, XP had already become popularized as the new way toward development.

    Out of this mess came Agile, which is still torturing serious and competent developers today.

    This is why the entire paradigm of software engineering started and stopped for me with Steve McConnell’s excellent 1996 treatise on the subject, “Rapid Development”, which is still in its first-edition printing.

    Everything you want and need to know about quality project development is in that book. However, the Function Point Analysis technique, which I used very successfully in a pilot project around 2006, has since been upgraded and refined into what I believe is now called the Dutch Model, which can be found in a subsequent McConnell book.

    As you have noted, the entire profession is a mess, with so many conflicting development paradigms, practically all of which show little common sense, that it is no wonder so many sites today are in production with many issues and idiosyncrasies.

    All of this has put client-server development, a far superior form of internal application development to the web, on a back burner; its wider implementation could have limited companies’ increasing exposure to data breaches on the web.

    But corporations just want to have fun…

