Introduction
When I was involved in my very first website, back before some readers perhaps even knew what a website was (the late 1990s!), the undertaking was massive ... I mean, really, really big. Having decided what the service was to do, we then put together a project plan and a budget. At the time, circa 1998, these were our project cost estimates -
- Purchase of Servers: $250,000
- Hosting of Servers for the first year: $75,000
- Engineers and other 'people' related costs: $300,000
In addition, to give the project every chance of success, we had to move to a new office nearer the digital exchange so we could get a FAST, GUARANTEED 128k <ahem> Internet connection ... it was a *big* project ...
Don't get me wrong, it wasn't something from the dark ages ... it didn't involve punch-cards or the like, but it was a big project at the time ... in today's terms, it was a basic website - back then, it was a career move!
So, for what in retrospect was a pretty basic website, it seemed extraordinarily complex and BIG, and it used correspondingly big resources. Yesterday, I had an idea for a new SaaS website ... and here's what I did ... in Visual Studio, 'File -> New -> Website (cloud publish)'.
That's pretty awesome ... gone are the days when I had to publish to a particular folder, FTP the DLLs and supporting files up to my server host, beg them to allow me to install some kind of non-standard DLL that I *really, really needed* for this particular website (so I could get paid) ... and then hope everything connected as it should and that my website would deploy. Only the gods themselves could help me when (not if) something went wrong, because there was very little support for logging or debugging on hosted machines at that stage ... and even now, some hosts are still in the dark ages in this regard.
Evolution of hosting and cloud offerings...
So, how did we get to this stage, and what exactly are serverless computing and cloud functions?
Let's step through the progression of things over the past couple of decades.
Local servers
15 years ago, it was commonplace to have a server in your office (or home) that was directly connected full-time to the web - by this, I mean you had a physical server with a fixed IP that physically sat in your office and was exposed to the web. Some places still have this kind of setup, but it's becoming increasingly rare, for obvious reasons. When we wanted to publish a website or expose a service on the net, it was really easy - primarily because we had *control* ... we had direct access to the physical machine and could do *what we liked* on that machine ... install this or that DLL, this or that service, or that exotic ActiveX plugin that your website simply could NOT live or work without ...
The problem with this setup was that it was wasteful, expensive, and left you managing the entire infrastructure, not just the website you built.
Co-hosted servers
Someone then came up with the idea of renting out 'rack space' in their data centers to people who had their own physical machines and wanted them professionally hosted. This meant that you still owned the physical machine, but you could send it to a dedicated space where people knew how to keep the 'lights on' for Internet-facing servers, and would also take care of backups, database connectivity, keeping servers patched, security, etc. ... stuff that, to be honest, website developers shouldn't need to worry about ...
The thing with co-hosting your server, however, is that it's still your server, and while someone else was managing the machine and its related overhead, it's not terribly efficient ... the machine was not being used to anywhere near its capacity, and perhaps 80%+ of its actual compute life cycle was spent just lying idle.
Shared servers
The next step in the evolution of things was shared servers. This is where a rack-space hosting company installed some fancy-pants software on *their own* hardware/operating system that allowed them to isolate certain folders/processes and share resources among a number of different website customers. This had the immediate benefit of putting the server to use for a lot more of the time, at the cost of flexibility for the customer.
One of the major problems with shared servers is that things are shared ... the operating system is shared, the machine's resources are shared ... and if, for example, one process goes 'rogue' and hogs 100% of the CPU for a space of time, well, any other websites/processes on that particular machine will suffer as a consequence ... not a great situation, really.
Virtual servers
Some clever guy or girl then came up with the idea of virtual machines. This is where we have a system that can very specifically isolate parts of a server's hardware and present them to an operating system, which sees these isolated parts of the server 'as the entire server' ... think about that - it's like we take a snapshot of an entire machine, break it into chunks, hand each chunk to a full operating system, and say 'this is all you get'. This was very cool, because we could now, for example, take a powerful machine with, say, one motherboard, 1TB of HD space, and 32GB of RAM, and virtually split it into, say, 8 virtual machines with 2GB of RAM and 100GB of HD space each ... more than enough for a modest instance of SQL Server, and still have enough carved out to run the underlying virtualization host itself.
Virtual servers worked great, and still do, but they still have a limitation - each one is a full-blown virtual operating environment ... and if you need to spin up a full OS for each service that needs one, that's kind of wasteful and not optimal on resources (wow, how far we've come already from the days of dedicated machines in your office!).
Containers & Docker
Right then, the next major step in this evolution was the move towards containerization. What container technology allows us to do is create an isolated block of resources *within a machine* (virtual or otherwise) and share the underlying machine's resources, without bleeding into other services that are using those same resources. So in effect, it's like a shared server, but without the side effect of one installed system having unintended impact on another due to bad resource management.
The other major benefit of a container is that you can specify particular versions of system-level dependencies. Let's say, for example, your system needed a particular version of a DLL or other installable ... but the problem is it's a custom or even older, out-of-date version, and it's not compatible with other processes that *share* that resource/dependency on the virtual machine.
With containers, you can isolate dependencies like this and keep them effectively fire-walled from one another. The added benefit of this is that if something *works on your machine*, then you can take a snapshot of that configuration and transport it to an online host (or another developer's machine), where it *will simply work*, no questions asked. This is an incredibly powerful feature of containers, and worth checking them out for this reason alone if nothing else.
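As a quick taste of that isolation, here's a minimal sketch using the Docker SDK for Python (this assumes you have a local Docker daemon running and the 'docker' package installed; the image choices are purely illustrative):

```python
# Two isolated environments with *different* versions of the same
# runtime, side by side on one host - neither can see or break the
# other's dependencies. (pip install docker; requires Docker running.)
import docker

client = docker.from_env()

for image in ("python:3.9-slim", "python:3.12-slim"):
    # Each run() spins up a container from a pinned, versioned image.
    output = client.containers.run(image, ["python", "--version"], remove=True)
    print(output.decode().strip())  # e.g. "Python 3.9.18" / "Python 3.12.1"
```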
Enter the function as a service
The container paradigm moved us into an area where we can have 'surgically sliced up' parts of the OS just for our own system. We can use containers to host simple, single services - for example, a SQL Server instance - or more complex arrangements of different services in combination. But sometimes we need something leaner again. Sometimes we need a simple little 'thing' ... a part of our system/architecture that, you think, would actually be better off just running in a shared instance ... doing its thing when we need it, but without the overhead of a virtual machine, or even of having to manage (and orchestrate) a container. Well, now you can have your cake and eat it ... enter the 'function as a service', or 'serverless computing'. Offered by Amazon AWS as 'Lambda' and by Microsoft Azure as 'Functions', you can now write a simple function, with no supporting website or container, and simply say 'run this when X happens'.
Functions as a service allow us to write a simple function/method that does something on the web and deploy it to run, without having to worry about the underlying infrastructure, without having to set up a container or virtual machine, and without all of the usual things we need to do just to get to the starting point. It is called 'serverless computing' because that is quite simply what it looks like ... the ability to write a function (or, in reality, a set of functions) and deploy it to what appears to us to be a server-free environment. The cloud provider worries about the deployment, the isolation, and, critically, auto-scaling where necessary.
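To make that concrete, here's a minimal sketch of what one of these functions can look like, written as a Python AWS Lambda handler (the event fields and greeting logic are my own illustrative assumptions):

```python
import json

def handler(event, context):
    # AWS Lambda invokes this function when its trigger fires; 'event'
    # carries the trigger payload (HTTP request, queue message, ...)
    # and 'context' carries runtime metadata. There is no web project
    # and no server setup - this function *is* the deployable unit.
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```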
Unlike virtual machines and Azure 'Platform as a Service' type offerings, Functions are not charged by the hour, but rather per execution of the function and per sliver of time it runs for. This raises a really interesting question ... in the past (and now, really), we look at our applications and think 'where is the bottleneck ... what is slowing things down?' ... well, with the introduction of serverless computing and charging *by the function*, we can now look at things from a 'what FUNCTION is costing the business the most money?' angle ... really, really interesting stuff. But I digress ...
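As a back-of-the-envelope illustration of that billing model (the rates below are placeholders I've picked for the sketch - check your provider's actual price sheet):

```python
# Illustrative consumption-plan maths: pay per call, plus per GB-second
# of memory-time used. Both rates are assumptions for this sketch only.
PRICE_PER_MILLION_CALLS = 0.20   # assumed request charge, in dollars
PRICE_PER_GB_SECOND = 0.0000166  # assumed compute charge, in dollars

def monthly_cost(calls, avg_duration_s, memory_gb):
    request_cost = calls / 1_000_000 * PRICE_PER_MILLION_CALLS
    compute_cost = calls * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# e.g. 5 million calls a month, 200ms each, with 512MB of memory:
print(f"${monthly_cost(5_000_000, 0.2, 0.5):.2f}")  # -> $9.30
```

Suddenly the cost of each individual function in your system is a line item you can read straight off the bill.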
Let's say our customer came to us and said 'hey, can we implement a simple thing where, when an image is uploaded, we check to see if it's within our specified dimensions and, if not, we edit/resize the image to make it fit our requirements?'. Before Functions, we would have had to add this new functionality to our existing web offering in code somewhere, integrate it, and upload the changes. With Functions, we can simply go into an online editor, define a trigger (in this instance, where to monitor for incoming image files), and write some code that will be run when the image lands. And that's it. No hosting setup - not even 'File | New | Project', for goodness sake! (Well, you can do that, but it's actually not needed.)
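Here's a sketch of that exact scenario as a Python AWS Lambda function fired by an S3 upload (the bucket name, the size limit, and the use of the Pillow library are my own assumptions for illustration):

```python
import io

import boto3             # AWS SDK, available in the Lambda runtime
from PIL import Image    # Pillow - bundled with the deployment package

s3 = boto3.client("s3")
MAX_SIZE = (1024, 1024)              # assumed maximum allowed dimensions
OUTPUT_BUCKET = "my-resized-images"  # hypothetical destination bucket

def handler(event, context):
    # S3 invokes this once per upload; each record describes one new
    # file landing in the watched bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        img = Image.open(io.BytesIO(body))

        if img.width > MAX_SIZE[0] or img.height > MAX_SIZE[1]:
            img.thumbnail(MAX_SIZE)  # shrink in place, keeping aspect ratio

        out = io.BytesIO()
        img.save(out, format=img.format or "PNG")
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=out.getvalue())
```

Writing the result to a separate bucket keeps the function from re-triggering itself on its own output.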
Now, Functions in Azure and AWS are not for everything, and like a lot of new tools, they can be abused for the wrong things. However, I truly believe that in the area of microservices, Functions hold great promise and are very much worth investigating.
To get sucked in (and you should, really!), go check out Amazon's Lambda offering and Azure Functions. You won't be sorry you did.
Happy serverless computing!