(This piece — written in collaboration with Bhushan Nigale — is the fifth in a series that explores the evolution of the software technology stack and software development methodologies over the last two decades. In this instalment, Bhushan and I examine the interplay between the old and the new worlds, and also look at how the stack and methodologies play together in this evolution)
The first four articles in this series examined the transition from the traditional stack and Waterfall methodology (common two decades ago) to the New Stack we see today in cloud-native products and the Agile/LEAN methodologies now common in software development. Those articles looked at the drivers that led to the changes and at their impact, and discussed some of the key challenges the new stack and methodologies brought with them.
Given all this, it’s fair to ask: where does this leave the traditional stack or the Waterfall methodology? Where are they used (or relevant) even today? Do they have a role to play in future? How do these co-exist with the new stack and methodologies?
This article explores the interplay between old and new, and also how the stack and methodologies play together.
The traditional stack today
The traditional stack, dominant two decades ago, is still widely in use today. It figures mostly in enterprise software products built around the 1990s and deployed ‘on-premise’. Some of these products have been rewritten for the cloud, others have followed the ‘lift and shift’ path to the cloud, but a majority — close to 60 percent [1] — remain where they were originally deployed: in the on-premise data centres maintained by the IT departments of enterprises.
These legacy enterprise products — and thus the traditional stack they are based on — can be expected to stay operational for decades. The reasons for this are many.
Firstly, the large investment (in both hardware and software) that has gone into these systems creates a lot of inertia. Having invested so much, enterprises are naturally inclined to keep these systems running for as long as possible.
Next there’s the tricky matter of switching costs — costs that include not just building or buying new software, but also migration, end-user training, and so on — which need to be justified: unless there’s a compelling business reason, such transformation projects do not get the budget.
Then there’s the question of skill. Enterprise IT departments are experienced in maintaining and operating the traditional stack, but they lack the skills the new stack demands. Unless there’s a demographic change — which can take decades — this factor will continue to play a role in decisions involving a move to a new architecture.
Ultimately, it is a matter of business priority. These enterprise products are also typically ‘systems of record’, which do not face the same demands — to change fast or to scale flexibly — as ‘systems of engagement’ (or, in the B2C world, consumer-facing apps) do. And while they may be mission-critical, these transactional systems are often not seen as strategic: so why touch them if most of the innovation is happening elsewhere anyway? As long as the data from these systems of record can be accessed quickly and used (for AI-related capabilities, for instance), there’s little business need to rebuild these solutions on the new stack.
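To make that last point concrete, here is a minimal sketch of the “leave the system of record alone, just read its data” pattern. It is purely illustrative: sqlite3 stands in for whatever database the legacy system actually runs on, and the table and column names are invented for the example.

```python
# Hypothetical read-only extraction from a legacy system of record.
# sqlite3 is a stand-in for the real database; the 'orders' table and its
# columns are invented for illustration.
import sqlite3

def extract_recent_orders(db_path: str, since: str) -> list[dict]:
    """Pull recent records out of the system of record so a downstream
    (reporting, analytics or AI) pipeline can use them, without changing
    the legacy application itself."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows we can turn into plain dicts
    try:
        rows = conn.execute(
            "SELECT order_id, customer_id, amount, created_at "
            "FROM orders WHERE created_at >= ?",
            (since,),
        ).fetchall()
        return [dict(row) for row in rows]
    finally:
        conn.close()

if __name__ == "__main__":
    # Hand the extracted records to whatever consumes them downstream:
    # a feature store, a reporting job, an ML pipeline, and so on.
    for record in extract_recent_orders("legacy_erp.db", "2024-01-01"):
        print(record)
```

Read-only access of this kind is often all the new stack needs from the old one, which is why rebuilding the transactional core rarely makes the priority list.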
So these legacy products built on the traditional stack will continue to be in use for the foreseeable future. One important consequence of this is the rise of Robotic Process Automation (RPA) tools in the software industry [2]. These tools make up for deficiencies in legacy software (such as missing APIs or fragmented toolsets) and add a layer that further reduces the need to modernize legacy solution landscapes among enterprise customers.
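As a rough illustration of what such a layer does (not of how any particular RPA product works), here is a hedged sketch: when the legacy application exposes no API, the automation drives its user interface instead. The create_invoice wrapper, the keyboard shortcut and the field order are all assumptions made up for this example, and pyautogui is used simply as a generic keyboard-and-mouse automation library.

```python
# Hypothetical RPA-style wrapper around a legacy GUI application that has no API.
# The shortcut, field order and create_invoice interface are invented for this
# example; pyautogui simply simulates keyboard input.
import pyautogui

def create_invoice(customer_id: str, amount: str) -> None:
    """Expose a function-call interface over a legacy GUI workflow:
    in effect, the API the legacy product never had."""
    pyautogui.hotkey("ctrl", "n")                 # open a new invoice form (assumed shortcut)
    pyautogui.write(customer_id, interval=0.05)   # fill the customer field
    pyautogui.press("tab")                        # move to the next form field (assumed layout)
    pyautogui.write(amount, interval=0.05)        # fill the amount field
    pyautogui.press("enter")                      # submit the form

if __name__ == "__main__":
    # Callers (or an orchestration tool) now see an ordinary function,
    # even though behind it a bot is typing into a decades-old screen.
    create_invoice("C-1042", "199.99")
```

The design point worth noting is that the integration problem moves up a layer: nothing in the legacy product changes, which is precisely why RPA reduces the pressure to modernize it.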