The Basics of Serverless and Microservice Architecture

Serverless is a cloud computing service model that offers functions as a service (FaaS). Some service providers bill users based on actual code execution time and the number of requests made to functions. A serverless architecture can be coupled with microservices, meaning developers can build small pieces of programs and integrate them through APIs to serve the end user. The microservice architecture advocates loose coupling of lightweight services. Applications built with microservice patterns are regarded as more scalable because scaling sub-systems in a containerized application can be more efficient than scaling a monolithic application – say a containerized e-commerce system comprises a number of sub-systems, including a shopping cart and a search bar. If a growing number of visitors are refreshing their carts, the shopping cart container can be scaled out horizontally. A container usually runs inside a pod and is managed by Kubernetes, which is where deployment and scaling take place.

https://kubernetes.io/docs/concepts/workloads/pods/
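To illustrate scaling one sub-system independently, a minimal Kubernetes Deployment for the shopping cart service might look like the following sketch (the image name, port and replica count are placeholder assumptions, not from a real system):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3                  # scale out horizontally by raising this number
  selector:
    matchLabels:
      app: shopping-cart
  template:
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
      - name: shopping-cart
        image: example.com/shopping-cart:latest   # hypothetical image
        ports:
        - containerPort: 8080
```

Scaling only this Deployment (e.g. `kubectl scale deployment shopping-cart --replicas=10`) leaves the other sub-systems, such as search, untouched.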

The serverless architecture has attracted growing discussion in recent years. With the introduction of Amazon Web Services (AWS) Lambda, S3 and DynamoDB, more options are now available to create serverless applications that do not require a dedicated back-end server to be up and running all the time. Serverless does not mean that there are no servers; rather, servers and operating systems are abstracted away from developers and maintained by cloud providers – developers do not have to take care of hardware scaling and maintenance routines such as OS updates because service providers are responsible for those tasks.

Use Cases

To get a better understanding of the serverless concept in action, let’s take a look at AWS’s Compute and Storage solutions and their use cases.

Compute: AWS Lambda

Lambda is an event-driven compute service offered by AWS. Lambda executes back-end code in response to events such as an update to a database table or a file upload to an object storage service such as an Amazon S3 bucket. You are charged per request and for the execution time. Creating Lambda functions is a way to make use of AWS’s cloud compute infrastructure without provisioning an instance (i.e. a virtual machine). Instead, you specify the runtime environment (e.g. Node.js), the amount of RAM your function may consume (e.g. 128 MB) and the timeout criteria (e.g. 3 sec). You can start by creating an event source mapping, which allows Lambda to identify the events to track (e.g. a file upload: an object is created in an S3 bucket). You also define the function to invoke should the event take place. Lambda will then run the function, passing the event as a parameter.

An event is a JSON-formatted document that contains data for a Lambda function to process

AWS Lambda Developer Guide

What makes the serverless architecture great is its potential for cost savings and high scalability. A Lambda function is only invoked in response to an event, so idle time does not incur charges. Let’s say an image resizing app is developed using a serverless architecture. An event can be pushed by S3 (PUT, POST, COPY and DELETE). A function is then invoked to resize the image stored in S3 and return the URI. If the upload action is performed infrequently, this may reduce your bill because you are not paying for a dedicated server running 24×7 waiting for uploads. On the other hand, if there is a surge in demand for image processing, you do not have to handle the provisioning of additional hardware resources.

Memory and timeout settings are specified for a Lambda function
A simple export function: take an event, output messages and return a value
A test event is defined to supply the value to be handled by the function
The execution result shows the response after running the Lambda function, as well as the console logs. Note that billed duration and RAM information are displayed.

Storage: Amazon Simple Storage Service (S3)

Amazon S3 is an object storage service offering a scalable storage infrastructure. There is no need to specify a volume size because, technically, the storage capacity of S3 is unlimited. It offers a web-based interface through which users can browse files in a Google Drive-like format. To start using S3, you first need to create a bucket and define its access policy – you can specify whether the bucket is publicly accessible. For security purposes, you may want to limit S3 bucket access to authenticated IAM users as defined in the principal element of your bucket policy. Files (objects) can then be uploaded to the bucket, programmatically if desired, and each object uploaded to S3 can be accessed via an S3 URI. S3 can therefore even be used to host a static website. Depending on the permission settings of your S3 bucket, the public can view your files either through an S3 URI or a CloudFront distribution (a CDN service). S3 is highly available and scalable, and Amazon handles the maintenance and availability of the underlying storage devices. Accordingly, it is possible to build serverless applications with Amazon S3 and Lambda.
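As an example of restricting access through the principal element, a bucket policy granting read and write access to a single IAM user might look like the following (the account ID, user name and bucket name are placeholders for illustration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpecificIAMUser",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/example-user" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```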

Images are stored in a folder within an S3 bucket.

Conclusion

Cloud native computing advocates the use of containers, microservices and serverless functions. With the serverless architecture, you can spend less time on back-end infrastructure maintenance because serverless offerings are often highly available and scalable. Serverless computing represents a shift of focus away from routine infrastructure maintenance towards application development (CI/CD). The push for microservice architecture and application containerization means that cloud native computing puts a heavy emphasis on efficiency and scalability.

In spite of the many benefits, it is worth mentioning that serverless and microservice architectures are not suitable for every use case. A service with a low-latency requirement may not run well in a serverless architecture due to the cold start time of a function. There is also a concern about vendor lock-in: implementing microservices with vendor-specific tools may make the application difficult to migrate to other platforms in the future. Containerizing a monolithic application is also a costly investment. Accordingly, whether it is worthwhile to adopt a serverless and microservice architecture is debatable and should be carefully examined on a case-by-case basis.