The New Methodology: Impact

(This piece — written by Bhushan Nigale — is the fourth in a series that explores the evolution of the software technology stack and software development methodologies in the last two decades. In this instalment Bhushan examines the consequences of the widespread adoption of Agile and Lean.)

In the second article of this series I presented the forces that have driven the evolution of software development practices from the Waterfall model to Lean and Agile. We saw a variety of proximate causes: the increasing role software plays in all spheres of our lives, the massive changes in software architecture and the mainstreaming of Open Source software, the increasing consumerization of IT, and the changing demographics of the software industry.

This article examines the consequences of these changes. Mainly, it answers the question: did Agile and Lean deliver on their promise? When a species evolves to adapt to a new environment, manifest changes appear. Can we discern such changes in the industry, for instance in workplaces and in the roles played by practitioners? If we live in a post-Waterfall world, what are the obvious signposts these changes have ushered in?

In what follows, I provide an overview of how other industries have begun to adopt Agile, to what extent hierarchies still matter, the primacy of teams over individuals, and the rising importance of roles such as the Product Manager.

Agile delivers

The agile movement that arose from the Agile Manifesto is now so widespread that software development organizations consider it the de facto style for delivering innovation at scale. Software development and implementation projects are risky, failure-plagued endeavors: while statistics differ widely, reliable studies (such as the Standish Group’s annual CHAOS report) find that as many as two-thirds of technology projects end in partial or total failure.

With its emphasis on involving end-users as early as possible and then collaborating with them, on shorter release cycles, and on a clear articulation of user requirements, Agile addresses the most crucial reasons for these failures: users become stakeholders, vested in the success of the project, rather than recipients of a product ‘thrown over the fence’ to them (e.g. by IT departments). Transparency about progress improves trust and the health of interdepartmental relationships – truth is the best disinfectant.

Hierarchies matter less 

A counterintuitive, but welcome, change has been the gradual flattening of organizational hierarchies. Lean originated in manufacturing companies, traditionally hierarchical with a ‘command-and-control’ operational model, yet its fundamental principle of putting customer value first meant that employees had to be empowered for the principle to live in practice. Thus a product owner several levels below the unit head makes significant decisions and takes accountability for the success of the product: announcements of brand-new products and cloud services are increasingly made by Product Managers, not development department heads.

Continue reading “The New Methodology: Impact”

The New Stack: Impact

(This piece is the third in a series that explores the evolution of the software technology stack and software development methodologies in the last two decades. It examines the first and second order effects of the new stack and explores the challenges this stack has given rise to.)

The first article in this series began with an outline of the “traditional” technology stack that was common in the early 2000s. It then examined how the internet, mobile, and cloud revolutions exposed the limitations of this stack, deficiencies that led to the new stack we see today. The article outlined the key characteristics of the new stack, and we also saw how these traits solved problems this traditional stack could not.

The stack today looks very different from the one we saw two decades ago. It consists of small, loosely-coupled (and mostly open-source) pieces that are distributed over a network and communicate using APIs. These aspects — the breakdown of the stack into smaller components, the ubiquity of APIs, the widespread adoption of open-source, and the distributed architecture — have had a huge impact in the last decade or so. This article will look at these consequences, both positive and negative.
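
Before turning to those consequences, it is worth making the core idea concrete. The sketch below (a minimal illustration in Python, with a hypothetical service name and port) shows the pattern that underlies the new stack: two small, single-purpose components that know nothing of each other's internals and interact only through a network API.

```python
# A minimal sketch of loosely-coupled components talking over an API.
# The service name, data, and port are hypothetical, for illustration only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

class PricingService(BaseHTTPRequestHandler):
    """A tiny single-purpose service: all it knows is how to price items."""
    PRICES = {"widget": 9.99, "gadget": 24.50}  # stand-in for a real data store

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "price": self.PRICES.get(item)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # silence request logging for the demo
        pass

# Run the service in the background, as if it were a separate deployment.
server = HTTPServer(("localhost", 8081), PricingService)
Thread(target=server.serve_forever, daemon=True).start()

# A second component, say a checkout service, consumes it over the API.
# It depends only on the URL and the JSON contract, not on any internals.
with urlopen("http://localhost:8081/widget") as response:
    print(json.load(response))  # {'item': 'widget', 'price': 9.99}

server.shutdown()
```

Trivial as it is, the sketch captures the property that matters: either side can be rewritten, replaced, or scaled independently, in any language, as long as the API contract holds.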

First-order effects

Perhaps the most important consequences of this shift from the traditional stack to the new one have been the creation of a software supply chain and an API economy.

With the traditional stack, it was common for vendors to build most parts of the stack themselves. Vertical integration was seen as a competitive advantage, and software companies like Oracle even acquired hardware vendors (like Sun Microsystems) to offer the full stack, from infrastructure to user interface. And it was common for enterprise consumers to go to a small set of vendors to meet their software needs.

What we see today — thanks to the new stack that leans towards single-purpose solutions — is a best-of-breed approach to constructing the stack. Vendors (or open-source projects) offer specialised solutions or frameworks across the stack and across different stages of the software lifecycle [1]. The entire software supply chain — from planning and development to delivery and operations — can now be composed of tools from niche vendors or open-source offerings [2]. This trend highlights the growing maturity of the software industry: we’ve gone from a model where most parts of the solution come from one vendor (or a few vendors) to a model where a rich ecosystem of vendors powers the entire software supply chain.

Continue reading “The New Stack: Impact”

The New Methodology: Origins

(This piece — written by Bhushan Nigale — is the second in a series that explores the evolution of the software technology stack and software development methodologies in the last two decades. It examines the journey from the Waterfall model to Agile and Lean, outlining the main factors that drove this change.)

A benefit of spending over two decades in an industry is that one develops the perspective to separate hype from substance. This viewpoint is especially useful in an industry like software, where minor feature increments are hailed as innovation, and press releases, blogs and tweets tout routine upgrades as revolutionary. Having lived through several such hype cycles, most of which eventually go bust, one learns to exercise caution and to appreciate genuinely path-breaking innovations (the first article in this series — written by Manohar Sreekanth — lists the technology changes that have stayed).

Innovation in software development methodologies is even harder to achieve and sustain. A paradigm shift is rare – at least in the original sense of the term (Thomas Kuhn used it to denote a fundamental change in the basic experimental practices of a scientific discipline). Inertia is difficult to overcome, especially if established methodologies seem to be getting the job done.

I’ve been privileged to witness and experience firsthand such a paradigm shift in software development, namely the move from Waterfall to Agile. The shift has been so complete that new entrants to the industry have little – if any – familiarity with the older methodologies. Agile is simply their default mode now.

Examining and reviewing this shift is both useful and important, because the promises of any established order need to be constantly reexamined as flaws and digressions inevitably creep in. Over time, unless tended carefully, practices tend to return to older routines — regression towards the mean is an iron-clad statistical law. Understanding the older practices and the drivers that led to their evolution helps us better appreciate the advances and detect costly deviations. An appreciation of the historical developments helps practitioners not only to address flaws, but also to iterate on the methodology and adapt it to changing operational environments.

A variety of forces have led to this evolution from Waterfall to Agile: the increasing role software plays in all spheres of our lives, the massive changes in software architecture and the mainstreaming of Open Source software, the increasing consumerization of IT, and the changing demographics of the software industry. We examine these factors in this article, and treat the consequences of these changes in a subsequent one.

From Waterfall to Agile

The previous article in this series traced the fundamental change in the technology stack used to build software applications. A parallel evolution in the methodology of developing software has accompanied these mammoth technological shifts.

When I entered the industry in the late 1990s, Waterfall had none of the negative labels one finds associated with it today. Terms such as ‘Software Requirements Document’ and ‘Handover to Maintenance’ were ubiquitous and carried a certain respect – passing a Quality Assurance Gate was a big milestone that invited celebration. The software development process flowed from a high perch (hence ‘Waterfall’) of analysis and design to the plains of testing and release, where software was then finally delivered to customers.

But cracks had already started to appear. Disenchantment was rising, both with long delivery cycles and with the obsession over adherence to strict development processes. The internet – which broke the traditional stack, as we saw in the previous article – was triggering foundational changes in how software was consumed, and these consumption-driven pressures were now being transmitted to how software was built. Consumers wanted their software delivered faster and better, even as it began to occupy an increasingly central place in their lives.

Continue reading “The New Methodology: Origins”

The New Stack: Origins

(This piece is the first in a series — written in collaboration with Bhushan Nigale — that explores the evolution of the software technology stack and software development methodologies in the last two decades. It examines why the “traditional” stack could not meet the needs of a new class of applications that began to emerge in the late nineties, and outlines the characteristics of the “new” stack we see today.)

One of the privileges of working in the same industry for a couple of decades is that you can look back and reflect upon the changes you’ve seen there. But this isn’t something that comes easily to us. Why are things the way they are in software? is a question we don’t ponder enough. For youngsters entering the industry, current challenges may seem more relevant to study than past trials. And for veterans who’ve seen it all, the present carries a cloak of inevitability that makes looking at history seem like an academic exercise.

But it doesn’t have to be that way. Understanding the forces that led to the evolution in software we’ve seen in these last two decades can help us make better decisions today. And understanding the consequences of these changes can help us take the long view and shape things going forward. To see how, let’s begin with the technology stack that was common two decades ago.

The traditional stack

When I started working in the enterprise software industry back in the late nineties, the software we built was deployed on large physical servers that were located ‘on-premise’. The application was a monolith, and it used an SQL-based relational database. The fat-client user interface ran on PCs or laptops. Most of this stack was built on proprietary software. Put simply, the stack was a vertical arrangement of four layers: the client at the top, the application server below it, then the database, and the infrastructure at the bottom.

This was the state of the client-server computing model used in business applications in the nineties. At SAP, where I worked, the client was based on a proprietary framework called SAPGui; the application server was another proprietary piece of software that enabled thousands of users to work in parallel; the database layer was open (you could use options like Microsoft SQL Server, Oracle DB, or IBM DB2, among others); and the infrastructure beneath was an expensive server (like an IBM AS/400 or Sun SPARC) that sat in the customer’s data center.

This architecture was optimized for the needs of business applications that evolved in the nineties, and such a stack — from SAP or other vendors in that era — is still used in a majority of on-premise installations. But in the second half of the nineties a different story was unfolding elsewhere. 

Internet-based applications were gaining traction as the dot-com era blossomed, fell dramatically, then picked up again (no longer bearing the ‘dot com’ label). And for those applications, the traditional stack proved woefully inadequate. The reasons included cost, availability, performance, flexibility, reliability, and speed: key demands placed by the new types of applications being built on the internet.

The internet breaks the traditional stack

The internet ushered in a scale that was unimaginable in on-premise enterprise software. Websites like Google, eBay, and Amazon had to serve a large number of concurrent users and cope with wide variations in demand. With the traditional stack, adding more capacity to an existing server soon reached its limits, and adding new servers was both expensive and time-consuming. In the new business context, infrastructure costs could no longer grow linearly with user growth: applications needed an architecture that enabled close to zero marginal cost of adding a new user; the old way of adding expensive hardware was unviable.

The internet also placed a much higher demand on availability: these applications needed to be “always on”. Initially a requirement mainly of B2C applications, it caught up with the B2B world as the consumerization of IT gained speed. Soon ‘continuous availability’ turned into a competitive differentiator for businesses that moved (partially or fully) to the web. Five nines (99.999%) or even six nines (99.9999%) of availability became the benchmarks, and a new architecture was needed to achieve this level of availability without driving up costs. Again, the old approach of installing dedicated servers for failover was simply too expensive and inefficient.
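It helps to translate these benchmarks into a yearly downtime budget; the small illustrative snippet below is just back-of-the-envelope arithmetic, but it shows how demanding the targets are.

```python
# Convert an availability target into the downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

targets = {"three nines": 0.999, "five nines": 0.99999, "six nines": 0.999999}

for name, availability in targets.items():
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{name}: ~{downtime_minutes:.1f} minutes of downtime per year")

# three nines: ~525.6 minutes per year  (almost a full workday)
# five nines:  ~5.3 minutes per year
# six nines:   ~0.5 minutes per year    (about 32 seconds)
```

A failover server that takes even a few minutes to come online exhausts the entire five-nines budget in a single incident, which is why hardware redundancy alone could not meet the bar.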

The need to scale applications also arose from the performance expectations of internet-based (and later mobile) applications. E-commerce applications saw peak usage in some periods (like Christmas or Black Friday), and others faced ad hoc spikes in demand (like an ad campaign planned for a few weeks). Meeting this unpredictable demand required a different level of flexibility in resource allocation, something that the traditional stack — and hardware-based methods — simply could not offer.

Businesses that moved to the internet also had to evolve much faster than the systems of record (built on the traditional stack) that had dominated the previous era of business applications. Parts of the application that needed more frequent changes had to be deployed independently — and at a different pace — from other slow-moving parts. This was not possible with the monolithic applications built on the traditional stack: it required a new architecture that allowed teams to build and deploy smaller pieces at a faster pace. (It wasn’t just the technology stack that was inadequate — the traditional waterfall model could also not cope with this pace of change and the flexibility this new world demanded. This parallel evolution of development practices will be discussed in a separate article.)

Continue reading “The New Stack: Origins”