Jim's Journal

Why and How we went from SaaS to Cloud, Services to APIs – It’s just Business!

Just how do Amazon and Google scale to such a huge number of transactions and inquiries? How can these companies afford to serve so many customers every day, rolling out new services nonstop?

The economics behind the lift and shift from operating environments built on physical servers to virtual servers or containers are incredible. The change from monolithic applications on expensive fault-tolerant hardware to small applications in containers, running on nodes on internet infrastructure, is earth-shattering – all with the control and flexibility previously available only to those who invested in building their own infrastructure. In this journal entry I want to share my thoughts on:

  • The history behind this model
  • Why you should be making plans to shift to it
  • The value it brings

Note – this post appears as two articles on LinkedIn.

A good friend and industry expert, Brian Snyder, reminded me that all the innovation in technology has to be backed by the business seeing value in that technology. In my first post on microservices I talked about how we as salespeople, and the business people we sell to, have had to learn all sorts of new tech speak because the way software is written has changed so much over the years. The same goes for my post on the enterprise service bus, since managing integration and APIs to these services has become complex and inefficient. Well, the value isn’t just in the creative new technologies themselves – it’s the incredible impact they have made on the cost and efficiency of running and using these applications.

When you stop thinking about singular applications that operate in isolation… when you start to think of servers as containers, or simply as an API that you use when you need it… business models are turned upside down and profitability becomes exponential: capex moves to opex, and TCO becomes more about finding creative ways to drive new revenue than about worrying about long-term investment.

Services in the cloud are building blocks for future applications

How you consume microservices – or services in general, even if they aren’t microservices – really matters. You don’t have to use them all the time, just when you need them. And if you are thoughtful in how you analyze the business domains you are targeting for each, they can be reused across the enterprise. But what if you then want to share them with other companies, or tap into the microservices that other companies have built? And what if there are also legacy applications you need data from as well, like a weather app, identity information, credit scores, etc.? These business services are often available via APIs (Application Programming Interfaces) that you access securely for specific business purposes and pay for by the transaction. Think of them as a timeshare of Lego bricks to help you assemble new applications without investing in building each and every brick. Can you afford to invest in a new cloud-based infrastructure? Do you really want to implement a complex and extensive ESB? I’d say these kinds of investments will pay off in the long run; but you can also time-share servers today by running your applications in the cloud, and you can access other companies’ services the same way – pick them out of the cloud as you need them, as APIs. I’ll dig in on this later, but first – how did we get here?
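Before the history, a quick aside to make the “timeshare of Lego bricks” idea concrete. Here is a minimal sketch – in Python, using the common requests library – of what consuming one of those bricks looks like. The weather endpoint, parameters and API key below are hypothetical placeholders; every provider publishes its own contract. The pattern is the point: authenticate, ask for exactly the data you need, and pay only for the transactions you make.

```python
# A minimal sketch of consuming a pay-per-transaction business service via an API.
# The endpoint, parameters and key are hypothetical placeholders for whatever
# contract your provider actually publishes.
import requests

API_KEY = "your-api-key"  # issued by the (hypothetical) provider when you sign up


def get_forecast(city: str) -> dict:
    """Fetch a forecast for one city from a hypothetical weather service."""
    response = requests.get(
        "https://api.example-weather.com/v1/forecast",   # hypothetical endpoint
        params={"city": city},
        headers={"Authorization": f"Bearer {API_KEY}"},  # secure, per-customer access
        timeout=10,
    )
    response.raise_for_status()  # surface errors instead of silently continuing
    return response.json()


if __name__ == "__main__":
    print(get_forecast("Atlanta"))
```

No servers to buy and no brick to build – just a metered call to someone else’s service whenever your application needs it. Now, back to how we got here.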

One of the key challenges over the years in helping large organizations implement enterprise software that works in this manner has always come down to efficient software running on powerful hardware. I wrote about the challenges in P2P vs. traditional payments, and then faster payments, back in 2015. Back then, banks pushed back hard on network-based APIs and instead looked to run in-house or with traditional core providers. That’s because processing and applications are traditionally centralized into platforms in banking – thanks to the core provider mentality. Originally it all ran on one box – a mainframe that centralized processing, with dumb terminals throughout the organization. We all wrote code as efficiently as possible because we were time-sharing processing power with every other application. In the ’80s and early ’90s, when the PC came around, we distributed some of that computing to workstations – PCs running some of the application locally and leveraging the mainframe more as a database of record, or a centralized batch processing engine.

Things almost changed with the shift from mainframes to distributed computing: deploying a three-tier architecture that included a presentation layer, an application server set, and a database server set, plus the integration layer – sometimes another server that managed communications with other applications. These systems ran on less expensive hardware platforms, including PCs – reducing the cost of entry and ongoing operating costs. That approach also translated to early online applications – including early online banking solutions that supported terminals outside the organization, connecting via direct dial over dedicated private dialup or direct-connect networks. This involved a communication server with either a bank of modems or a partnership with a network provider. I was lucky to be a part of an early, innovative company providing commercial banking applications in this way. We eventually added PC software on the client side as well – again distributing some of the processing power to the workstation. Then came the Internet.

[Diagram: Magnet Communications’ architecture back in 1999 – web server, application server, and database server]

I was also privileged to join a team of founders at one of those technology companies that recognized the value of the internet back in 1996. This approach marked a shift back to more centralized processing with little to no dependency on the client side. Dan Myers and his team of engineers took a chance on this new tech, and I did my best to convince bankers that the Internet would stick. Our application was cash management (treasury management) – essentially commercial online banking. We added a web server to replace the presentation layer and still had a three-tier architecture: database –> application/integration –> web server. We weren’t alone: businesses ditched their communication servers for a whole new presentation layer that leveraged new technology on the client side in the form of web browsers. The industry for web servers boomed, and companies emerged serving up all kinds of powerful solutions that needed only a browser on the client side. Dot.Com Boom.

Processing became incredibly efficient, with horizontally scalable applications that simply required adding database or application servers, routers that managed traffic to multiple web servers, and a whole new industry of tools that created efficient ways to securely serve up these web-based applications to customers. There were so many web servers, and therefore so many pages, that companies were formed just to index them, then search them, then advertise on those index and search engines. (Yes, kids, there were others before Google.) I’m not even going to address email here – that’s a whole other animal; you can read up on Slack to see what that evolution involved. At the same time, the internet-based businesses got way out of hand – companies overextended themselves and there were simply too many slices of the economic pie. Dot.Com Bust.

It was also incredibly dangerous and pretty unstable – new threats and processing bottlenecks were introduced, and security shifted from locking down a single access point on a private network to fending off nefarious external attacks like nobody had seen before. Again, a new industry came about, with creative and powerful ways to support 24/7 operations and secure these web applications and connections. Unfortunately, it also made the hardware and networking platforms incredibly expensive. Non-stop, fault-tolerant hardware with redundant processors, storage and network connections became the norm for hyper-sensitive customer bases like the banking industry. Costs went up, and companies even began to consider moving back to the mainframe.

Microsoft, of course, continued to evolve – first deploying server technology to centralize documents and data, then eventually moving its applications to the cloud when faced with competition from companies like Google. It is pretty amazing how they rebuilt themselves as a cloud company – and now they are the most valuable company on the planet! At the same time, from the wreckage of the internet crash, consolidation happened, and powerful companies became even more efficient and innovative. Companies like Amazon changed the way they built applications and managed data. How did they do it? What technology powered these new and efficient ways of serving up technology? They focused on the internal challenges of dealing with growth, rearchitecting to become more efficient and funding it all with our data, eCommerce and advertising. They innovated, and then they shared that innovation with those who wanted to think differently.

When the economy tanked, what was happening in the banking industry while all this was going on? What about the organizations that had moved from the mainframe to distributed computing? They began to build closed, walled gardens of internal web applications – serving up private, locked-down applications that only employees could access, but with a thin-client approach, running in a web browser. COOs were doing anything they could to find efficiency, trying to replace every system tied to huge ERP or core processing data stores from Oracle, Sybase, SAP and others. Internet banking became restrictive too – the online banking systems I sold in the last decade were powerful but inflexible, black-box applications with cumbersome integration tools, or no tools at all – entirely outsourced as Software as a Service (SaaS) with a single connection to the back office. The in-house online banking and database/ERP systems became powerful but expensive – often one-off implementations with “forklift upgrades.” The SaaS systems were isolated to small banks that weren’t interested in unique solutions; they just needed to “get on the web” or “get a mobile app” with little to no differentiation.

While these traditional-economy companies made this incremental change, the new-economy companies planned for disruption. They turned their infrastructure into a new line of business, timesharing their own systems, and cloud platforms were born (we are back to the evolution from SOA to ESBs and finally to microservices). This new way of developing and using software in the enterprise is about the internet maturing – leveraging these programming and processing technologies to scale on virtual machines and run the back office even more efficiently.

We can thank Amazon, Google and many other companies for investing in these new ways to scale the internet: microservices, continuous deployment through DevOps, software-defined infrastructure, and “The Cloud.” When you stop thinking about singular applications that operate in isolation, or with cumbersome interfaces that treat real time as a batch of one; when you start to think of servers as containers, security as part of the infrastructure, and everything as an endpoint – or simply an API that you use when you need it – business models are turned upside down and profitability becomes exponential. Capex moves to opex, and TCO (total cost of ownership) becomes more about finding creative ways to drive new revenue than about worrying about long-term investment. That’s why I just took you on this journey: today’s cloud-based solutions are grounded in this evolution – and those that don’t accept it are doomed to struggle to keep up.
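To picture “everything as an endpoint,” here is a minimal sketch, assuming Python and the Flask framework, of a single-purpose microservice. The service, route and data below are hypothetical; the idea is that one small, independently deployable piece of business logic gets exposed as an API, dropped into a container, and scaled horizontally – instead of living as a module buried inside a monolith.

```python
# A minimal, hypothetical microservice: one small piece of business logic
# exposed as an API endpoint, packaged to run anywhere a container can run.
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory data; a real service would own its own datastore.
BALANCES = {"acct-1001": 2500.00, "acct-1002": 125.75}


@app.route("/balances/<account_id>", methods=["GET"])
def get_balance(account_id: str):
    """Return the balance for one account, or a 404 if it is unknown."""
    if account_id not in BALANCES:
        return jsonify({"error": "unknown account"}), 404
    return jsonify({"account": account_id, "balance": BALANCES[account_id]})


if __name__ == "__main__":
    # In production this would sit behind the platform's routing and security,
    # not Flask's development server.
    app.run(host="0.0.0.0", port=8080)
```

Because the service is this small, it can be containerized, deployed continuously, and billed and scaled on its own – which is exactly where the capex-to-opex shift comes from.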

In conclusion, the value of the cloud isn’t just the efficiency of moving to an even more distributed architecture and happy, creative programmers. It’s shopping for API vendors or bundles of microservices to get just what you need, and enjoying continuous deployment and innovation. It’s moving to flexible, containerized deployments on commodity hardware or, for those that don’t want to own and run the technology, shifting to cloud providers like Google Cloud, Amazon AWS and Microsoft Azure. These captains of industry have created an operating model that is incredibly secure and infinitely scalable. Not everyone is there yet, and that’s what’s happening now in my world.
