Serverless Applications Basics on Amazon Web Services

What do you know about serverless application architecture? This post covers a cloud technology that is actively gaining ground in the IT world.

Cloud technologies are becoming more and more popular, and the reasons are quite logical: easy accessibility, relatively low cost, and no need for initial capital, whether that means the knowledge required to deploy and maintain infrastructure or the money to buy it.

Serverless technology is also gaining favor, yet for some reason it receives quite poor coverage in the IT industry, unlike other cloud models such as IaaS, DBaaS, and PaaS.

Here we consider AWS (Amazon Web Services), arguably the largest and most mature of these services (based on Gartner's analysis for 2015).

Serverless AWS

All that we need is:

  • An AWS account (the Free Tier is enough for tests and minimal development)
  • A development platform (Fedora (Linux) is fine, but you can use any distribution that supports at least Node 4.3 and NPM)
  • The Serverless Framework 1.x beta (it is worth describing in a separate post later)

So, let’s start with the basics.

Serverless: what is the secret of popularity?

Serverless is a serverless application architecture. However, it is not really serverless: the basis of the architecture is microservices, or functions (lambdas), that each perform a specific task and run in logical containers hidden from view. The end user only gets an interface for uploading the function (service) code and the ability to connect event sources to that function.

Taking the Amazon service as an example, the event sources can be many of Amazon's own services:

  • S3 storage can generate events on almost any operation, such as adding, deleting, and editing files in buckets.
  • RDS and DynamoDB. DynamoDB in particular can generate events when data is added to or changed in a table.
  • CloudWatch, a cron-like system.
  • And, most interestingly, API Gateway. This service acts as an HTTP front end that translates incoming requests into events for a single microservice.

In reality, as soon as you upload the function code to Amazon, it is saved as a package on an internal file server (such as S3). When the first event arrives, Amazon automatically starts a mini-container with the appropriate interpreter (or virtual machine, in the case of Java) and runs the uploaded code, passing the generated event body as an argument.

As follows from the principles of microservices, each function cannot hold state (it is stateless), since there is no access to the container and its lifetime is not guaranteed. Thanks to this quality, microservices can easily scale horizontally, depending on the number of requests and the workload.

In fact, practice shows that Amazon balances resources fairly well, and a function replicates quickly even under abrupt increases in load.

Another advantage of such stateless launches is that payment for the service is, as a rule, based on the execution time of a particular function.

This pay-as-you-go model makes it possible to launch startups or other projects without initial capital.

After all, there is no need to pay for hosting just to host your code. Payment is proportional to the use of the service (which also lets you flexibly calculate the necessary monetization of your own service).
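To make the pay-as-you-go point concrete, here is a back-of-the-envelope cost sketch. The prices are illustrative (based on AWS's published Lambda rates per request and per GB-second); always check the current pricing page.

```javascript
// Rough Lambda cost estimate; prices are illustrative, check the AWS pricing page
const PRICE_PER_MILLION_REQUESTS = 0.20;   // USD
const PRICE_PER_GB_SECOND = 0.0000166667;  // USD

function monthlyCost(requests, avgDurationMs, memoryMb) {
  const requestCost = (requests / 1e6) * PRICE_PER_MILLION_REQUESTS;
  // Billed compute is duration (seconds) times allocated memory (GB)
  const gbSeconds = requests * (avgDurationMs / 1000) * (memoryMb / 1024);
  return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
}

// e.g. one million requests a month, 200 ms each, at 128 MB
console.log(monthlyCost(1e6, 200, 128).toFixed(2));
```

Even a million requests a month at a modest memory size comes out to well under a dollar, which is why the model suits early-stage projects so well.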

The pros of such an architecture are:

  • No hardware servers
  • No direct contact with, or administration of, the server side
  • Almost limitless horizontal growth for your project
  • Payment only for the CPU time used

The cons are:

  • Lack of clear control over the containers: you never know where and how they are launched or who has access, which can cause a certain paranoia.
  • Lack of “integrity” in the app: each function is an independent object, which often leads to application sprawl and difficulties in putting everything together.
  • The “cold” start of a container leaves much to be desired (at least on Amazon). The first launch of a container with a lambda function can often take 2-3 seconds, which is not always well received by users.

In general, the technology has its own segment of demand and its own consumer market. It looks well suited to the initial stage of startups, ranging from the simplest blogs to online games and beyond.

Of special note here are the independence from server infrastructure and the automatic, practically unlimited performance scaling.

Serverless framework

As mentioned above, one of the drawbacks of Serverless is the fragmentation of the app and the burden of managing all the necessary components, such as events, code, roles, and security policies.

Keeping all of these components in order is a real headache, and it often leads to services breaking with the next update.

To avoid this problem, a very useful utility with the same name, Serverless, was created. This framework was designed solely for use with the AWS infrastructure (and, although the 0.5 branch was aimed entirely at NodeJS, a big plus of the 1.x branch is its shift toward all AWS-supported languages).

The 1.x branch is described below, since its structure is more logical and flexible to use. Moreover, in version 1 most of the cruft was cleaned up, and Java and Python support was added.

If you have read the basic framework installation instructions and its configuration guide, you have probably already installed it; but in case there are beginners among our readers, let us list the necessary steps. Hopefully, you already have a Linux console open, so let's begin with the installation of NPM/Node.

Stage 1

NVM is good for Node version control.


Stage 2

Reload the profile as indicated at the end of the installation:

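The installer prints the exact lines it appends to your profile; under the usual defaults they amount to:

```shell
# Load NVM into the current shell session
# (the installer appends equivalent lines to ~/.bashrc or your shell profile)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
echo "NVM_DIR is $NVM_DIR"
```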

Stage 3

Now we install the Node/NPM stack:

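With nvm on the path, installing a Lambda-compatible Node (4.3 at the time of writing) might look like this:

```shell
# Install and activate a Node version supported by Lambda
nvm install 4.3
nvm use 4.3
node --version && npm --version
```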

Stage 4

After a successful installation, it's time to set up access to AWS (here we'll skip setting up a dedicated AWS account for development and its role; detailed instructions can be found in the framework's manual).

Stage 5

Usually, it's enough to add two environment variables holding an AWS key:

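For example (the key values below are placeholders; substitute the access key of your own IAM user):

```shell
# Placeholder credentials; substitute the access key of your IAM user
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMIK7MDENGbPxRfiCYEXAMPLEKEY
```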

Stage 6

Suppose the account is set up and configured. (Note that the SLS framework requires administrator access to AWS resources; otherwise, you can spend hours trying to figure out why things aren't working the way you want.)

Stage 7

Install Serverless in global mode:

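With NPM set up, the global install is one command:

```shell
# Install the 1.x beta globally; omitting @beta would pull the 0.5 branch
npm install -g serverless@beta
serverless --version
```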

Note that without specifying the beta version, you would probably have installed the 0.5 branch. Versions 0.5 and 1.0 are quite different, so these instructions for 1.0 will not work on 0.5.

Stage 8

Now we create a project directory. And, at this stage, a small digression about the architecture of the project.

The architecture of Serverless project

Now let's see how a lambda function can be loaded into Amazon. There are two ways:

  • Through the web console, with a simple copy-paste. The method is quite simple and convenient for a single-file function with the simplest code. Unfortunately, a function uploaded this way cannot include third-party libraries (the list of libraries available to lambda functions is in the Amazon documentation, but as a rule it is the language's standard distribution plus the AWS SDK).
  • As a function package uploaded through the AWS SDK. This is a regular zip archive containing all the necessary files and libraries (with a limit of 50 MB on the maximum archive size). Do not forget that a lambda is a microservice, and it makes no sense to stuff an entire software suite into one function. Since you pay for the function's execution time, do not forget to optimize.

In our case, Serverless uses the second method: it prepares the existing project and creates the necessary zip package from it. Below is an example of a project for NodeJS, but the same logic is easy to apply to other languages.

One important detail about a Serverless project should be noted here: you cannot include directories and files located higher in the directory tree than the project directory. In other words, ../lib will not work.
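As a sketch, a minimal serverless.yml for the 1.x branch might look like this (the service name, region, and HTTP event here are example values):

```yaml
service: hello-service      # example name

provider:
  name: aws
  runtime: nodejs4.3
  region: us-east-1         # pick your region

functions:
  hello:
    handler: handler.hello  # file handler.js, exported function hello
    events:
      - http:               # wire the function to API Gateway
          path: hello
          method: get
```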

Now that we have a configuration, let's move on to the function itself.

Stage 9

Now we create the project with the default configuration:


After this command, you will see the project structure.

Stage 10

The function itself lives in the file handler.js. The principles of writing it are described in the Amazon documentation, but in general terms the entry point is a function with three arguments:

  • Event is the event object. It contains all the info about the event that invoked the function. In the case of the AWS API Gateway, this object contains the HTTP request (in fact, Serverless installs a default HTTP request mapping in the API Gateway, so the user doesn't need to configure it himself, which is very convenient for most projects).
  • Context is an object describing the current state of the environment, such as the ARN of the current function and, sometimes, authorization information. Remember that with the new NodeJS 4.3 runtime of Amazon Lambda, the result should be returned via the callback rather than through the context (i.e. not via its done, succeed, or fail methods).
  • Callback is a callback function of the form (error, data) that returns the result of handling the event.

To illustrate, let's try to create the simplest Hello World function:


Stage 11

Time to upload it.


Usually, this command takes some time to package the project and prepare the functions and environment in AWS itself. In the end, Serverless returns the ARN and the endpoint where you can see the result.

Final thoughts

Although only the basics of the Serverless technology were covered here, in practice its range of application is almost limitless: from simple portals (built as static pages with React or Angular) with backend logic in lambda functions, to processing archives or files in S3 storage and quite complex mathematical workloads with distributed load.

In fact, the technology is still at the very beginning of its life and will certainly continue to evolve. So grab a keyboard and try it out (the Amazon Free Tier lets you do so completely free of charge at first).

2019-04-08T14:50:55+00:00