DevOps on Amazon AWS


The effective use of cloud services is one of the main current trends in the transformation of IT companies. Automating the path from code in Git to deployment in development and/or production environments, along with follow-up monitoring, incident response, and so on (which can and should be automated as well), significantly changes many universally accepted ITIL practices.

This article looks at DevOps processes from the Amazon AWS point of view: how they can be implemented on its services within the framework of the IaaC (Infrastructure as Code) concept.

This is a general overview.

The goal of this article is not to teach DevOps basics or to duplicate readily available materials and original sources. The main idea is to present the implementation of DevOps on Amazon AWS services in general form, from which everyone is free to choose the option they need.

The target audience is people who know that Amazon AWS exists and plan to use its capabilities in their work, transferring their processes there partially or completely. Amazon AWS services are multiplying at high speed, and their number has already passed the round mark of 100, so even those with solid AWS experience may find it useful to learn what new things have appeared that they simply missed, and put them to work.

Not (only) cubes


A few general but important words, especially for those who do not have much experience with Amazon AWS. The cubes depicted on the logo convey the basic idea well: Amazon AWS is a construction set that provides a collection of services from which everyone can assemble what they need.

However, this can put some people off when the desired cube (service) does not satisfy their needs. In that case, remember that the same task can be solved with the help of different services, and many services largely duplicate each other. A service that did not meet your needs does not rule out using Amazon AWS; it only prompts you to find a suitable alternative.

After all, Amazon AWS services appeared as a result of operating them primarily for Amazon's own needs. Amazon employs thousands of small teams (it calls them two-pizza teams), each free to choose its own way of working (language, OS, structure, protocols, etc.), so the services available on Amazon AWS must support the diversity of the zoo they already operate within Amazon itself.

This is especially true of the AWS Code services, which are not meant to dictate what to use. On the one hand, they integrate with your usual, existing, streamlined tools and processes; on the other, they offer their own implementations, which usually make the most effective use of Amazon AWS functionality.

DevOps Amazon AWS Constructor

Even if you understand each cube well on its own, it is not always clear what kind of house can be built from them. So let us assemble these services into one picture to get a general idea of what such a construction set can produce. If you take the Code services and map them onto the usual processes, you get something like this:

[Diagram: AWS Code services mapped onto the DevOps stages]

Let's briefly go through each of the Code services:

AWS CodeCommit


The most obvious and straightforward service: Amazon's Git implementation, which replicates it completely. There are no differences from Git in terms of everyday work and team interaction; what it adds is direct integration into the family of AWS services, including access control via IAM. The code itself is stored on S3.
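For example, once a repository exists it is cloned over HTTPS at a well-known URL; a tiny helper that builds that URL (the region and repository name below are hypothetical, used only for illustration):

```python
def codecommit_https_url(region: str, repo: str) -> str:
    """Build the HTTPS clone URL that CodeCommit exposes for a repository.

    The URL scheme is the documented one; "my-app" in the example below
    is a hypothetical repository name.
    """
    return f"https://git-codecommit.{region}.amazonaws.com/v1/repos/{repo}"

url = codecommit_https_url("eu-west-1", "my-app")
print(url)  # https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/my-app
```

After `git clone <url>` (with IAM credentials configured), work proceeds exactly as with any other Git remote.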

AWS CodeBuild


A build server for projects that require building before deployment, for example Java. By default it launches an Ubuntu-based container, but you can specify your own.
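CodeBuild takes its instructions from a buildspec.yml in the repository root. A minimal sketch for a hypothetical Java/Maven project (the buildspec format and phase names are the documented ones; the project layout is an assumption):

```yaml
# buildspec.yml -- hypothetical Java/Maven project, kept in the repo root
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11
  build:
    commands:
      - mvn package
  post_build:
    commands:
      - mvn test

artifacts:
  files:
    - target/*.jar
```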

[Screenshot: specifying a custom Docker image for the build environment]

Jenkins integration is supported via a plugin.

Besides building, tests can be run in the same way. Although that is not its main purpose, CodeBuild can be selected as a test provider.

[Screenshot: AWS CodeBuild selected as the test provider]

This is why AWS CodeBuild also appears at the "Test" stage.

AWS CodeDeploy

A service for deploying code that works through a pre-installed agent and flexible settings in any environment. Its distinctive feature is that the agent works not only with Amazon AWS virtual machines but also with "external" ones, which lets you centrally deploy the most disparate software, including on-premises.
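The agent is driven by an appspec.yml shipped with each revision. A minimal sketch for an EC2/on-premises deployment (the file format and hook names are the documented ones; the destination path and hook scripts are hypothetical):

```yaml
# appspec.yml -- hypothetical deployment for the EC2/on-premises agent
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app   # hypothetical target path
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```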

AWS CodePipeline


As the main diagram shows, it interacts with the previous three services, launching them in the right sequence and thereby providing the automation for DevOps processes.

It lets you organize branching processes, run third-party services (for example, for testing), create parallel branches, and request confirmation (Approval Actions) before the next stage starts. In general, it is the central tool for organizing DevOps on Amazon AWS.
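To give a feel for what such a pipeline looks like as data, here is a sketch of a three-stage declaration in the shape that boto3's `codepipeline.create_pipeline(pipeline=...)` accepts. All names, the role ARN, and the artifact bucket are hypothetical placeholders that would have to exist in a real account:

```python
# Sketch of a Source -> Build -> Deploy pipeline declaration.
# Every name below is a placeholder, not a working resource.

def make_pipeline(name: str, repo: str, build_project: str) -> dict:
    def stage(stage_name, category, provider, config):
        # One stage with a single AWS-owned action of the given type.
        return {
            "name": stage_name,
            "actions": [{
                "name": f"{stage_name}Action",
                "actionTypeId": {
                    "category": category,
                    "owner": "AWS",
                    "provider": provider,
                    "version": "1",
                },
                "configuration": config,
            }],
        }

    return {
        "name": name,
        "roleArn": "arn:aws:iam::123456789012:role/PipelineRole",  # placeholder
        "artifactStore": {"type": "S3", "location": "my-artifact-bucket"},
        "stages": [
            stage("Source", "Source", "CodeCommit",
                  {"RepositoryName": repo, "BranchName": "master"}),
            stage("Build", "Build", "CodeBuild",
                  {"ProjectName": build_project}),
            stage("Deploy", "Deploy", "CodeDeploy",
                  {"ApplicationName": "my-app",
                   "DeploymentGroupName": "production"}),
        ],
    }

pipeline = make_pipeline("demo-pipeline", "my-app", "my-app-build")
print([s["name"] for s in pipeline["stages"]])  # ['Source', 'Build', 'Deploy']
```

In a real account this dict would be passed to `create_pipeline`, with CodePipeline then driving the three Code services in order.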

AWS CodeStar


A service that essentially duplicates CodePipeline but is geared toward quick launch and easy customization, which it achieves with a wide range of ready-made templates (for application/language combinations), a genuinely convenient dashboard, some integration with the monitoring service (CloudWatch), and a plug-in for integration with Jira.


Amazon AWS IaaC Services

The Amazon AWS services implementing the "Infrastructure as Code" concept are:

  • Elastic Beanstalk
  • OpsWorks
  • CloudFormation

A common question is which one to choose and why, since all three services are designed for the same thing (infrastructure and application deployment) and sit at the Deploy stage in the main diagram.

In short: to do something quickly (which does not mean badly) with fairly standard functionality (for example, a simple site), Elastic Beanstalk is convenient. For a complex project with many nested elements and serious requirements for network settings, you cannot do without CloudFormation. OpsWorks, based on Chef, sits somewhere in between.

In reality, though, and precisely on complex projects, a combination of all three is commonly used: CloudFormation brings up the basic infrastructure (VPC, subnets, storage, the necessary IAM roles for access, etc.), then launches the OpsWorks stack, which can flexibly configure the internals of the running virtual machines.
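As a sketch of that base layer, a minimal CloudFormation template bringing up a VPC with one subnet might look like this (resource names and CIDR ranges are illustrative):

```yaml
# Minimal CloudFormation sketch of the base network layer described above.
AWSTemplateFormatVersion: '2010-09-09'
Description: Base network layer (illustrative)
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.0.0/24
Outputs:
  VpcId:
    Value: !Ref AppVpc
```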

And for the convenience of the development process, CloudFormation can also bring up a stack of Elastic Beanstalk components, so that developers, using .ebextensions, can themselves change some parameters of a running application (the number and type of virtual machines, use of a Load Balancer, etc.) by editing a simple configuration file in the folder with the code; the changes (including to the application infrastructure) are applied automatically after the commit.
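For illustration, a hypothetical .ebextensions file that a developer could commit next to the code to resize the environment (the option namespaces are the documented ones; the file name and values are assumptions):

```yaml
# .ebextensions/autoscaling.config -- hypothetical example; committing a
# change to this file resizes the running environment on the next deploy.
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 4
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
```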

AWS Lambda

It is worth mentioning the Lambda service, which implements the serverless architecture concept. On the one hand, it is similar to Elastic Beanstalk and can be wired into the DevOps process using the AWS Code services. On the other hand, AWS Lambda is an excellent automation tool for everything on Amazon AWS: any processes that need to interact with each other can be linked using Lambda.

It can process and respond to CloudWatch monitoring results, for example by restarting a service (or a virtual machine cluster) and sending a message about the problem to the administrator. It is also used within DevOps processes, for example to run custom build steps and tests with a subsequent hand-off to the Deploy stage. In general, AWS Lambda lets you implement even complex logic that is not yet covered by the current set of Amazon AWS services.
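As a minimal sketch of the first scenario, here is a hypothetical Lambda handler subscribed (via an SNS topic) to a CloudWatch alarm. The remediation function is a placeholder for a real action such as rebooting an instance with boto3:

```python
import json

# Hypothetical Lambda handler for CloudWatch alarm notifications
# delivered through SNS. The alarm fields used here (AlarmName,
# NewStateValue) are the standard ones in the alarm's SNS message;
# restart_service is a stand-in for the real remediation.

def restart_service(alarm_name: str) -> str:
    # Placeholder: a real handler would call e.g. ec2.reboot_instances()
    # and notify the administrator here.
    return f"restarted service for alarm {alarm_name}"

def handler(event, context):
    actions = []
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        if message.get("NewStateValue") == "ALARM":
            actions.append(restart_service(message["AlarmName"]))
    return actions

# Local smoke test with a fabricated SNS event
sample = {"Records": [{"Sns": {"Message": json.dumps(
    {"AlarmName": "cpu-high", "NewStateValue": "ALARM"})}}]}
print(handler(sample, None))  # ['restarted service for alarm cpu-high']
```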

In addition to these, other services may be involved in DevOps processes. So if you try to draw a general scheme of all the services used, the picture can turn out quite confusing (below is just a conditional example).

[Diagram: a conditional example of the full set of interacting AWS services]
Not everything can be conveyed visually, because global entities such as the AWS IAM access-control service permeate almost all components. All the other services run on the computing power of EC2. The S3 storage service is used to pass data between many other services. And high-level services like AWS Service Catalog can provide interaction even between different Amazon AWS accounts.

With time, the intricate and seemingly incomprehensible scheme resolves into a clear set of tools, from which everyone can choose the right one.

In general, the most popular bundle may look like CodePipeline/CodeCommit plus Elastic Beanstalk or OpsWorks for Deploy. And for a quick first look, CodeStar works well. True, AWS CodeStar is paid, but cost was deliberately left out here in order to first give a general idea of the choice, because each component can be swapped at will, including pulling in third-party CI/CD tools such as Jenkins through its plugins.