Server applications
This page describes the procedures for deploying and interacting with server applications. The instructions refer to deployments on AWS Elastic Container Service (ECS), using Fargate containers, and rely on the standard server infrastructure (TODO add link). Deployments are carried out using Bitbucket Pipelines.
Setup
In order to carry out the operations described below, the following software should be installed on the development machine:
- AWS CLI: used by all the other tools and required for every operation
- AWS Session Manager Plugin: must be installed after the CLI; required for interactive tasks
- awslogs: required for streaming CloudWatch logs in the terminal
- Jira and Bitbucket extension for VS Code: not required, but highly recommended (for instance, to monitor deployments from the editor)
In order to operate on a project, an AWS user needs to be configured. The recommended approach is to set up a separate named profile for each project. Eventually, ~/.aws/credentials should contain a list of projects, something like this (adding the region is highly recommended, so that there is no need to type it for each command):
[coolproject]
aws_access_key_id = XXXXXX
aws_secret_access_key = XXXXXX
region = eu-central-1
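To verify that a profile works before going further, the identity attached to the credentials can be queried (the coolproject profile name here is just the example above):
aws sts get-caller-identity --profile coolproject
If this prints the expected account ID, the profile can be used with all the commands below by appending --profile coolproject.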
Deployment
Deployment can be initiated by pushing to the environment branch (nightly, staging or production, as described here). For instance, from the develop branch, a nightly deployment can be initiated by typing the following, and the nightly pipeline will start:
git push origin develop:nightly
TIP: if you get Bitbucket errors, you might need to add --force to the push.
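For reference, the forced variant of the push above would look like this (only needed when the remote environment branch has diverged from what is being pushed):
git push --force origin develop:nightly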
The deployment can be monitored in real time from the Bitbucket website (in the Pipelines page of the relevant repository) or using the VS Code extension. What happens behind the scenes is completely described in bitbucket-pipelines.yml (in the project root).
Configuration
Docker tasks are configured using environment variables. Very often, dedicated .env files are used and are subsequently uploaded to an S3 bucket. These files should be placed in the aws folder of the project and SHOULD NOT be committed. Configuration changes can be applied by simply copying the new versions to the bucket with a command like this (for the nightly environment):
aws s3 cp aws/nightly.env s3://coolproject-nightly-server-storage/env/nightly.env --profile coolproject
Paths and filenames should be changed for the other environments. After uploading a new configuration file, a redeployment needs to take place for the changes to be picked up: new *.env files should thus be uploaded BEFORE starting a new pipeline.
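To double-check what is currently in the bucket before redeploying, the remote file can be listed or streamed to the terminal (same example bucket and profile names as above; aws s3 cp accepts - as a destination to print to stdout):
aws s3 ls s3://coolproject-nightly-server-storage/env/ --profile coolproject
aws s3 cp s3://coolproject-nightly-server-storage/env/nightly.env - --profile coolproject
This makes it easy to diff the remote configuration against the local aws/nightly.env before uploading a new version.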
Logs
Combined logs can be streamed for all containers in a task with this command (for the nightly environment):
awslogs get /ecs/coolproject-nightly-server ALL --watch -G --profile coolproject
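awslogs can also list the available log groups and fetch a bounded time window instead of tailing; for instance (same example group and profile names as above):
awslogs groups --profile coolproject
awslogs get /ecs/coolproject-nightly-server ALL --start='1h ago' --profile coolproject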
Interactive tasks
Sometimes it is necessary to run interactive tasks on the server (like seeding or a console). This is slightly more complex, because one needs to identify the ID of the task in execution so that a command can be run inside it (this is very similar to docker-compose exec). First of all, all tasks in the service should be listed (this is for nightly)…
aws ecs list-tasks --cluster infra-nightly-cluster --service-name coolproject-nightly-server-service --profile coolproject
…which returns something like this:
{
    "taskArns": [
        "arn:aws:ecs:eu-central-1:12345678:task/infra-nightly-cluster/fdsjfdksfnji2923kdsi239jew9ji432"
    ]
}
fdsjfdksfnji2923kdsi239jew9ji432 is the task ID. At this point, one can simply execute a command in the task; this is for an interactive Rails console:
aws ecs execute-command --cluster infra-nightly-cluster --task fdsjfdksfnji2923kdsi239jew9ji432 --container app --command "bundle exec rails console" --interactive --profile coolproject
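If a plain shell is needed instead of a Rails console, the same command should work with a shell as the command to run, assuming the image ships one (e.g. /bin/sh):
aws ecs execute-command --cluster infra-nightly-cluster --task fdsjfdksfnji2923kdsi239jew9ji432 --container app --command "/bin/sh" --interactive --profile coolproject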
IMPORTANT: pay attention to the --cluster and --service-name arguments; when in doubt, look them up in the project credentials.
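The two steps above can also be chained so that the task ID does not need to be copied by hand. A minimal sketch, using the same example cluster, service and profile names (--task accepts the full ARN, so there is no need to strip the prefix):
# Grab the ARN of the first task in the service
TASK_ARN=$(aws ecs list-tasks --cluster infra-nightly-cluster --service-name coolproject-nightly-server-service --query 'taskArns[0]' --output text --profile coolproject)
# Open an interactive Rails console inside the app container of that task
aws ecs execute-command --cluster infra-nightly-cluster --task "$TASK_ARN" --container app --command "bundle exec rails console" --interactive --profile coolproject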