Your workflow, your way: Leveraging the Lagoon API for custom CI/CD pipelines & development workflows

When Lagoon was first conceived back in 2016, the fundamental aim was “to make developers’ lives easier.” Since its inception, Lagoon has grown into a Swiss Army knife for application delivery onto Kubernetes clusters, removing the need to understand how to use Kubernetes itself. Lagoon sits between your git repository and a Kubernetes cluster to build, deploy, and manage your applications entirely from pre-defined YAML configuration (‘.lagoon.yml’). Although Lagoon takes a lot of the infrastructure hassle away, as an open-source tool it still gives developers the freedom to build solutions that integrate with the technology stacks they work with.

One of our customers, Synetic, showcases this integration wonderfully in the implementation of their development CI/CD system. Due to Lagoon’s versatility, they were able to build a system that works for their business case and their own development workflow.

Lagoon’s versatility empowers development teams to build the workflows and tools they need.

Synetic

Synetic is a digital agency based in The Netherlands that has been building fantastic digital solutions since 2004. They recently demoed for us their impressive automated continuous deployment system, built inside GitLab.

As they use Lagoon to build and deploy their websites, they wanted a way to control this process automatically through GitLab pipelines. The intention was a solution that would keep their pipeline jobs consistently reliable at every step of their CI/CD workflow.

Image: Lagoon Build & Deploy Controller

Building, testing, and deploying Lagoon environments

GitLab pipelines

GitLab makes use of runners that launch executors (in this case, Docker containers) to perform Synetic’s configured pipeline of build, test, teardown, and deployment jobs. The pipeline manages Lagoon environments based on the results of those jobs. For example, when a PR (pull request) is successfully tested and merged into its upstream, a subsequent teardown process removes the corresponding Lagoon environment after a set period of time. Automating this teardown of redundant environments not only tidies up messy projects, it is also more economical, as it dramatically reduces your platform footprint and therefore your costs.

During the build job, GitLab job template files are generated and populated with data retrieved from the Lagoon GraphQL API. Synetic extracts only the project and environment data they need upfront from Lagoon; this information is then used at later stages in the pipeline.

Image: Example GraphQL query made to the Lagoon API to retrieve project data
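The query itself appears above as an image; as a rough illustration, a minimal Python sketch of this kind of request might look like the following. The endpoint URL, the LAGOON_TOKEN environment variable, the lagoon_request() helper, and the ‘my-site’ project name are our own illustrative assumptions, not Synetic’s actual tooling.

import os
import requests

# Assumption: the hosted amazee.io endpoint; point this at your own Lagoon API.
LAGOON_API = "https://api.lagoon.amazeeio.cloud/graphql"

def lagoon_request(query: str, variables: dict) -> dict:
    """POST a GraphQL document to the Lagoon API and return its data payload."""
    response = requests.post(
        LAGOON_API,
        json={"query": query, "variables": variables},
        headers={"Authorization": f"Bearer {os.environ['LAGOON_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]

# Fetch only the project and environment fields needed later in the pipeline.
PROJECT_QUERY = """
query projectInfo($name: String!) {
  projectByName(name: $name) {
    id
    name
    environments { name environmentType route }
  }
}
"""

project = lagoon_request(PROJECT_QUERY, {"name": "my-site"})["projectByName"]

The same lagoon_request() helper is reused in the sketches below.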

Stages

Image: Pipeline stages example showing upstream push

Deployment

Triggers a deployment job that sends a request to Lagoon to deploy a new environment based on a merge request. (Pretty neat.)
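As a hedged sketch of what that request can look like via the API (reusing the lagoon_request() helper from the query example above): the deployEnvironmentBranch mutation deploys a named branch, and Lagoon also exposes a deployEnvironmentPullrequest mutation for merge-request-driven deploys.

# Ask Lagoon to build and deploy an environment for the given branch.
DEPLOY_MUTATION = """
mutation deploy($project: String!, $branch: String!) {
  deployEnvironmentBranch(input: {
    project: { name: $project }
    branchName: $branch
  })
}
"""

def deploy_branch(project: str, branch: str) -> str:
    # Returns the name of the build that Lagoon kicked off.
    return lagoon_request(
        DEPLOY_MUTATION, {"project": project, "branch": branch}
    )["deployEnvironmentBranch"]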

Teardown

Triggers a teardown job that sends a request to destroy a Lagoon environment (see the destroy_env() command below).

Lighthouse

Runs a Lighthouse audit once a deployment has completed successfully.
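The audit setup itself is Synetic’s own; a generic sketch that shells out to the Lighthouse CLI (assuming it is installed in the job image) could look like this, auditing the route extracted by wait_for_env_ready() below.

import subprocess

def run_lighthouse(route: str, report_path: str = "lighthouse-report.json") -> None:
    """Audit the freshly deployed environment route with Lighthouse."""
    subprocess.run(
        [
            "lighthouse", route,
            "--output=json",
            f"--output-path={report_path}",
            "--chrome-flags=--headless",
        ],
        check=True,
    )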

Inside these individual stages, a lightweight CLI deployment tool provides some nifty Python commands that integrate with the Lagoon API.

Deployment commands

Image: Example production deployment and environment teardown

Retrieving logs (get_logs_by_remote_id(remote_id))

  • Will wait until the logs are available and then fetch them.
  • Not all of Synetic’s developers have access to Lagoon, so they cannot see the build logs that appear in the UI. Synetic wanted to centralise the visibility of the build logs so everything was manageable and readable via GitLab. To do this, they run this command, which waits for a deployment to finish and then pulls the logs down via the API.
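A sketch of the idea behind this command, reusing lagoon_request(); it assumes the deploymentByRemoteId query and buildLog field exposed by current Lagoon schemas, and the polling interval is arbitrary.

import time

LOGS_QUERY = """
query logs($remoteId: String!) {
  deploymentByRemoteId(id: $remoteId) {
    status
    buildLog
  }
}
"""

def get_logs_by_remote_id(remote_id: str, poll_seconds: int = 15) -> str:
    """Wait until the build log is available, then fetch and return it."""
    while True:
        deployment = lagoon_request(LOGS_QUERY, {"remoteId": remote_id})[
            "deploymentByRemoteId"
        ]
        if deployment and deployment.get("buildLog"):
            return deployment["buildLog"]
        time.sleep(poll_seconds)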

Wait (wait_for_env_ready())

  • This command is really handy. It polls to check that the last deployment is ready and hasn’t failed, then sets the environment route and attempts to retrieve the build logs (logs are also retrieved on deployment failures). The extracted route is later used for Lighthouse auditing and Diffy visual regression testing jobs.
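A sketch of that polling logic, again reusing lagoon_request(). Note that environmentByName takes the numeric project ID rather than the project name, and the status strings (‘complete’, ‘failed’, ‘cancelled’) are the standard Lagoon deployment states.

import time

ENV_QUERY = """
query envStatus($projectId: Int!, $env: String!) {
  environmentByName(project: $projectId, name: $env) {
    route
    deployments { name status }
  }
}
"""

def wait_for_env_ready(project_id: int, env: str, poll_seconds: int = 30) -> str:
    """Poll the latest deployment and return the environment route once ready."""
    while True:
        environment = lagoon_request(
            ENV_QUERY, {"projectId": project_id, "env": env}
        )["environmentByName"]
        latest = environment["deployments"][-1]  # assumes chronological ordering
        if latest["status"] == "complete":
            return environment["route"]
        if latest["status"] in ("failed", "cancelled"):
            raise RuntimeError(f"Deployment {latest['name']} {latest['status']}")
        time.sleep(poll_seconds)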

Update environments (promote_env_toprod())

  • When a release has been tested and successfully merged, this command can be executed to promote that environment to production.
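Under the hood this maps onto Lagoon’s promote-style deployment (‘lagoon deploy promote’ in the CLI). A sketch assuming the deployEnvironmentPromote mutation; check your schema for the exact input shape.

PROMOTE_MUTATION = """
mutation promote($project: String!, $source: String!, $destination: String!) {
  deployEnvironmentPromote(input: {
    project: { name: $project }
    sourceEnvironment: { name: $source, project: { name: $project } }
    destinationEnvironment: $destination
  })
}
"""

def promote_env_toprod(project: str, source: str, destination: str = "production") -> str:
    return lagoon_request(
        PROMOTE_MUTATION,
        {"project": project, "source": source, "destination": destination},
    )["deployEnvironmentPromote"]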

Delete environments (destroy_env())

  • After an environment is finished with, this command begins the teardown process: given a project name and an environment name, the environment teardown is executed, as sketched below.
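A sketch of that teardown request, using Lagoon’s deleteEnvironment mutation; we assume here that the execute flag performs the delete immediately rather than as a dry run.

DELETE_MUTATION = """
mutation teardown($project: String!, $env: String!) {
  deleteEnvironment(input: {
    project: $project
    name: $env
    execute: true
  })
}
"""

def destroy_env(project: str, env: str) -> str:
    return lagoon_request(
        DELETE_MUTATION, {"project": project, "env": env}
    )["deleteEnvironment"]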

These commands allow Synetic to run GitLab CI jobs that interact with Lagoon seamlessly.

Dynamic CI/CD tooling

Synetic uses polysites to deploy a single repository to multiple Lagoon projects. A Python script generates dynamic GitLab CI/CD jobs; those jobs are then triggered by a second job that executes the generated YAML as a normal GitLab CI pipeline.

Inside the UI, you will see the jobs run one after another. After a successful deploy job for master, the production deploy can be triggered manually.

Image: Dynamic CI/CD tooling

Generate

The generate job runs for every branch a pipeline can be created for; in this case it runs for master. The job saves the generated pipeline as an artifact. This pipeline contains three stages (see the sketch at the end of this section):

  • Deploy
  • Deploy_prod
  • Teardown

Image: amazee-generate-jobs

Trigger

The trigger job executes the pipeline for the branch it was generated for: GitLab simply pulls the file from the artifacts and runs it as a normal pipeline, as sketched below.

Image: trigger-amazee-deploy-master
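Neither script is reproduced here, so the following is a minimal sketch of the generate/trigger pattern under stated assumptions: the project list, the deploy-tool command names, and the file names are all illustrative, not Synetic’s actual code.

from pathlib import Path

PROJECTS = ["site-a", "site-b"]  # one Lagoon project per polysite target

JOB_TEMPLATE = """
deploy:{project}:
  stage: deploy
  script:
    - deploy-tool deploy --project {project} --branch $CI_COMMIT_REF_NAME

deploy_prod:{project}:
  stage: deploy_prod
  when: manual
  script:
    - deploy-tool promote --project {project}

teardown:{project}:
  stage: teardown
  when: manual
  script:
    - deploy-tool destroy --project {project} --environment $CI_COMMIT_REF_NAME
"""

def main() -> None:
    # Render one deploy/deploy_prod/teardown job trio per polysite project.
    pipeline = "stages: [deploy, deploy_prod, teardown]\n"
    pipeline += "".join(JOB_TEMPLATE.format(project=name) for name in PROJECTS)
    Path("generated-pipeline.yml").write_text(pipeline)

if __name__ == "__main__":
    main()

In the parent .gitlab-ci.yml, the generate job saves generated-pipeline.yml as an artifact, and the trigger job runs it as a child pipeline using GitLab’s trigger: include: syntax with artifact: generated-pipeline.yml and job: generate.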

Other Lagoon tooling

Using the Lagoon API, Synetic was able to both query for Lagoon environment information and fire various GraphQL mutations that run deployment and teardown tasks. This could also have been implemented using the Lagoon CLI. For example, we can make use of the ‘lagoon deploy’ and ‘lagoon get’ commands to deploy, promote branches, and extract project data directly from Lagoon. On top of this, we could also run Lagoon tasks against an environment with the ‘lagoon run’ command. These tasks could include database syncs, backups, file syncing, or clearing caches, for example. Lagoon also allows you to define custom tasks that will run inside a shell on one of the containers in your environment.

To install the Lagoon CLI, you can use Homebrew:

brew tap amazeeio/lagoon-cli
brew install lagoon

For a list of available Lagoon CLI commands, see here.

Summary

A core aim of Lagoon is to loosen technical constraints and empower developers with the functionality they need to build the processes that work best for them. It’s great to see users such as Synetic utilise this to produce a really slick CI process.

Want to do something similar, or have another innovative idea you’re ready to try? We’re here for you! We can offer support and help you integrate your tools and processes with Lagoon. Reach out to us on amazeeio.rocket.chat. Or, get in touch with us and we’ll set up a demo.

To find out more about upcoming Lagoon features, check out our Roadmap.

