For many people, Amazon Web Services has always been associated with IaaS (Infrastructure as a Service): a set of building blocks on top of which everyone builds their own services and apps.
There are, however, services that aspire to the role of a Platform as a Service, for example Elastic Beanstalk and OpsWorks. Still, they can hardly be considered a true PaaS: you retain access to the underlying infrastructure, and with it the headache of administering it.
The main promises of PaaS are zero administration costs, ease of use and, as a result, the ability to focus on the application code while forgetting about deploying, integrating and maintaining it.
According to AWS representatives, Lambda lets you forget about the infrastructure and run code in the cloud, with built-in integration with other Amazon services, automatic scaling and low computing costs. All you need to do to get started is write a function and associate it with events. After that, Amazon will automatically execute the function for each new event.
You don't have to think about scaling and high availability: your function can process tens of thousands of requests per hour without any effort on your part and without a backend in the traditional sense.
The key unit of work is the Lambda function. A Lambda function is associated with a context:
- Environment: programming language, amount of RAM, access settings
- Resources whose changes need to be tracked
- Code: the function itself, which is executed when a resource-change message arrives
How does it work?
When a resource changes, a message is generated that activates the function.
The function, in turn, receives a JSON object containing all the necessary information about the change (or about another kind of message).
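For an S3 trigger, that JSON object looks roughly as follows. The field names follow the S3 event notification format, but treat the exact shape (and the bucket and key names, which are made up here) as illustrative:

```javascript
// Illustrative shape of an S3 event notification delivered to a function.
// The bucket name and object key below are hypothetical.
var sampleEvent = {
  Records: [
    {
      eventSource: "aws:s3",
      eventName: "ObjectCreated:Put",
      s3: {
        bucket: { name: "photos-inbox" },            // bucket that changed
        object: { key: "uploads/cat.jpg", size: 42 } // the new object
      }
    }
  ]
};

// The function can pull out everything it needs about the change:
var record = sampleEvent.Records[0];
console.log(record.s3.bucket.name + "/" + record.s3.object.key);
```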
For example, we can associate a function with an S3 bucket. Whenever a new object lands in it, the Lambda function will be launched with access to all the data about that object.
Suppose it is a new image for which we want to generate a set of thumbnails of different sizes. Our function will be launched for each new image uploaded to the bucket, and we can save the results to the same or to a separate bucket.
Keep in mind that the function is stateless: it retains nothing between invocations, so the results of its work must be saved to some external storage. In our example, that is an S3 bucket.
What about current limits and future plans?
First of all, users note:
- no ready-made CI/CD pipeline
- no integration with version control systems (Git, SVN)
The roadmap includes expanding the list of supported event sources (currently S3, DynamoDB and Amazon Kinesis) and increasing the number of supported programming languages.
The service is billed along two dimensions: the number of requests, and their total execution time weighted by the amount of memory allocated.
Number of requests
- The first million requests per month are free
- Above that limit, $0.20 per million requests ($0.0000002 per request)
The total execution time
- execution time is counted from the start of the function until it returns a result or is stopped by the timeout (configured per function)
- time is rounded up to the nearest multiple of 100 ms
- the cost of each second depends on the amount of allocated memory: $0.00001667 per gigabyte-second
As usual, AWS provides a free tier. Find out more about pricing here.
Here is an example:
If a function is allocated 512 MB of memory, runs for 1 second per invocation and is invoked 3 million times in a month, the compute charge after the free tier comes to $18.34, plus $0.40 for the 2 million billable requests.
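The arithmetic behind that bill can be sketched as follows, assuming 512 MB of allocated memory and AWS's published monthly free tier of 1 million requests plus 400,000 GB-seconds of compute (with that memory size, the compute portion alone comes to $18.34, and the request charge adds $0.40):

```javascript
// Rough Lambda cost estimate, using the rates quoted above and the
// 400,000 GB-second monthly free compute allowance.
function estimateMonthlyCost(invocations, durationMs, memoryMB) {
  // Execution time is billed in 100 ms increments, rounded up.
  var billedSeconds = Math.ceil(durationMs / 100) * 0.1;

  // Compute charge: GB-seconds beyond the free 400,000, at $0.00001667 each.
  var gbSeconds = invocations * billedSeconds * (memoryMB / 1024);
  var computeCharge = Math.max(0, gbSeconds - 400000) * 0.00001667;

  // Request charge: $0.20 per million beyond the first free million.
  var requestCharge = Math.max(0, invocations - 1e6) * 0.20 / 1e6;

  return computeCharge + requestCharge;
}

// 3 million invocations, 1 second each, at 512 MB:
//   compute:  (1,500,000 - 400,000) GB-s * $0.00001667 ≈ $18.34
//   requests: 2,000,000 * $0.20 / 1M = $0.40
console.log(estimateMonthlyCost(3e6, 1000, 512).toFixed(2)); // prints 18.74
```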
Final thoughts
AWS Lambda is currently in preview; to register and gain access, you need to fill out a request form. It is worth a try, especially given the generous free tier.