(This piece is the third in a series that explores the evolution of the software technology stack and software development methodologies in the last two decades. It examines the first- and second-order effects of the new stack and explores the challenges this stack has given rise to.)
The first article in this series began with an outline of the “traditional” technology stack that was common in the early 2000s. It then examined how the internet, mobile, and cloud revolutions exposed the limitations of this stack, deficiencies that led to the new stack we see today. The article outlined the key characteristics of the new stack, and we also saw how these traits solved problems this traditional stack could not.
The stack today looks very different from the one we saw two decades ago. It consists of small, loosely-coupled (and mostly open-source) pieces that are distributed over a network and communicate using APIs. These aspects — the breakdown of the stack into smaller components, the ubiquity of APIs, the widespread adoption of open-source, and the distributed architecture — have had a huge impact in the last decade or so. This article will look at these consequences, both positive and negative.
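To make that concrete, here is a deliberately minimal sketch in Python (standard library only; the "pricing" service, its port, and the SKU are made-up examples) of the pattern at the heart of the new stack: a small, single-purpose component exposing an API over the network, and another component consuming it.

import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A single-purpose "pricing" component: it does one thing and exposes it as an API.
        body = json.dumps({"sku": self.path.strip("/"), "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example output quiet

server = HTTPServer(("localhost", 8080), PriceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other component, possibly written in a different language and running on a
# different machine, depends only on the API contract, not on the implementation.
with urllib.request.urlopen("http://localhost:8080/ABC-123") as resp:
    print(json.load(resp))  # {'sku': 'ABC-123', 'price': 9.99}

server.shutdown()

The consumer never sees how the service is built; swap the implementation, or the vendor behind it, and the contract still holds. That substitutability is what makes the consequences described below possible.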
First-order effects
Perhaps the most important consequences of this shift from the traditional stack to the new one have been the creation of a software supply chain and an API economy.
With the traditional stack, it was common for vendors to build most parts of the stack themselves. Vertical integration was seen as a competitive advantage, and software companies like Oracle even acquired hardware vendors (like Sun Microsystems) to offer the full stack, from infrastructure to user interface. And it was common for enterprise consumers to go to a small set of vendors to meet their software needs.
What we see today — thanks to the new stack that leans towards single-purpose solutions — is a best-of-breed approach for constructing the stack. Vendors (or open-source projects) offer specialised solutions or frameworks across the stack and across different stages of the software lifecycle [1]. The entire supply chain of software — from planning, development, delivery, to operations — can now be composed of tools from niche vendors or open-source offerings [2]. This trend highlights the growing maturity of the software industry: we’ve gone from a model where most parts of the solution come from one vendor (or a few vendors) to a model where a rich ecosystem of vendors is powering the entire software supply chain.
(This piece is the first in a series — written in collaboration with Bhushan Nigale — that explores the evolution of the software technology stack and software development methodologies in the last two decades. It examines why the “traditional” stack could not meet the needs of a new class of applications that began to emerge in the late nineties, and outlines the characteristics of the “new” stack we see today.)
One of the privileges of working in the same industry for a couple of decades is that you can look back and reflect upon the changes you’ve seen there. But this isn’t something that comes easily to us. “Why are things the way they are in software?” is a question we don’t ponder enough. For youngsters entering the industry, current challenges may seem more relevant to study than past trials. And for veterans who’ve seen it all, the present carries a cloak of inevitability that makes looking at history seem like an academic exercise.
But it doesn’t have to be that way. Understanding the forces that led to the evolution in software we’ve seen in these last two decades can help us make better decisions today. And understanding the consequences of these changes can help us take the long view and shape things going forward. To see how, let’s begin with the technology stack that was common two decades ago.
The traditional stack
When I started working in the enterprise software industry back in the late nineties, the software we built was deployed on large physical servers that were located ‘on-premise’. The application was a monolith, and it used an SQL-based relational database. The fat-client user interface ran on PCs or laptops. Most of this stack was built on proprietary software. Put simply, this is what the stack looked like:
This was the state of the client-server computing model used in business applications in the nineties. At SAP, where I worked, the client was based on a proprietary framework called SAPGui; the application server was another proprietary piece of software that enabled thousands of users to work in parallel; the database layer was open (you could use options like Microsoft SQL Server, Oracle DB, or IBM DB2, among others); and the infrastructure beneath was an expensive server (like IBM AS/400 or Sun SPARC) that sat in the customer’s data center.
This architecture was optimized for the needs of business applications that evolved in the nineties, and such a stack — from SAP or other vendors in that era — is still used in a majority of on-premise installations. But in the second half of the nineties a different story was unfolding elsewhere.
Internet-based applications were gaining traction as the dot-com era blossomed, fell dramatically, then picked up again (no longer bearing the ‘dot com’ label). And for those applications, the traditional stack proved woefully inadequate. The reasons included cost, availability, performance, flexibility, reliability, and speed: key demands placed by the new types of applications being built on the internet.
The internet breaks the traditional stack
The internet ushered in a scale that was unimaginable in on-premise enterprise software. Websites like Google, eBay, and Amazon had to serve a large number of concurrent users and cope with wide variations in demand. With the traditional stack, adding more capacity to an existing server soon reached its limits, and adding new servers was both expensive and time-consuming. In the new business context, infrastructure costs could no longer grow linearly with user growth: applications needed an architecture that enabled close to zero marginal cost of adding a new user; the old way of adding expensive hardware was unviable.
The internet also placed a much higher demand on availability: these applications needed to be “always on”. Initially a requirement mainly for B2C applications, this demand spread to the B2B world as the consumerization of IT gained speed. Soon ‘continuous availability’ turned into a competitive differentiator for businesses that moved (partially or fully) to the web. Five nines (99.999%) or six nines (99.9999%) of availability became the benchmarks, and a new architecture was needed to achieve this level of availability without driving up costs. Again, the old way of installing expensive servers for failover was simply too expensive and inefficient.
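To put these figures in perspective, a quick back-of-the-envelope calculation (a small Python sketch, nothing more) shows how little downtime these targets actually allow:

# Downtime allowed per year for a given availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def annual_downtime_minutes(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

for label, target in [("five nines", 0.99999), ("six nines", 0.999999)]:
    print(f"{label}: about {annual_downtime_minutes(target):.1f} minutes per year")
# five nines: about 5.3 minutes per year
# six nines: about 0.5 minutes per year

A budget of a few minutes a year leaves no room for the slow, manual failover of the old model.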
The need to scale applications better also arose from the performance expectations placed on internet-based (and later mobile) applications. E-commerce applications saw peak usage in certain periods (like Christmas or Black Friday), while others faced ad hoc spikes in demand (like an ad campaign running for a few weeks). Meeting this unpredictable demand required a different level of flexibility in resource allocation, something that the traditional stack and its hardware-bound methods simply could not offer.
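What the new stack eventually made routine is the kind of elastic resource allocation sketched below. This is only an illustration: get_current_load() and set_instance_count() are hypothetical stand-ins for whatever metrics and provisioning interfaces a real platform would offer.

import random
import time

TARGET_LOAD_PER_INSTANCE = 100   # requests/second one instance can comfortably serve
MIN_INSTANCES, MAX_INSTANCES = 2, 50

def get_current_load() -> float:
    # Hypothetical stand-in for a real metrics API; here we just simulate demand.
    return random.uniform(0, 5000)

def set_instance_count(n: int) -> None:
    # Hypothetical stand-in for a real provisioning API.
    print(f"scaling to {n} instances")

def desired_instances(load: float) -> int:
    needed = -(-int(load) // TARGET_LOAD_PER_INSTANCE)  # ceiling division
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

for _ in range(3):   # a few iterations instead of an endless control loop
    set_instance_count(desired_instances(get_current_load()))
    time.sleep(1)

Capacity follows demand, minute by minute, instead of being fixed by whatever hardware was bought up front.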
Businesses that moved to the internet also had to evolve much faster than the systems of record (built on the traditional stack) that had dominated the previous era of business applications. Parts of the application that needed more frequent changes had to be deployed independently — and at a different pace — from other slow-moving parts. This was not possible with the monolithic applications built on the traditional stack: it required a new architecture that allowed teams to build and deploy smaller pieces at a faster pace. (It wasn’t just the technology stack that was inadequate — the traditional waterfall model could also not cope with this pace of change and the flexibility this new world demanded. This parallel evolution of development practices will be discussed in a separate article.)