Migrating from legacy to container-based infrastructure


This document describes the switch in our default infrastructure in 2015 and may contain outdated information.


Not using sudo? Containers sound cool? Add sudo: false to .travis.yml and you’re set.

For more details check out the awesome information below.

Why migrate to container-based infrastructure?

Builds start in seconds

The new infrastructure makes it much easier for us to scale CPU capacity, which means your builds start in less than 10 seconds.

More available resources

The new containers have 2 dedicated cores and a maximum of 4GB of memory, vs 1.5 shared cores and 3GB on our legacy infrastructure. CPU resources are now guaranteed, which means less impact from ‘noisy neighbors’ on the same host machine and more consistent build times throughout the day.

Better network capacity, availability and throughput

Our container-based infrastructure is running on EC2, which means much faster network access to most services, especially those also hosted on EC2. Access to S3 is also much faster than on our legacy infrastructure.

Caching available for open source projects

The best news for open source projects is that our build caching is now available for them too. That means faster build speeds by caching dependencies. Make sure to read the docs on caching before trying it out.

For Ruby projects, it’s as simple as adding cache: bundler to your .travis.yml.
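For example, a minimal .travis.yml for a Ruby project with dependency caching enabled could look like this (the language key is shown only for context):

```yaml
language: ruby
cache: bundler
```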

How can I use container-based infrastructure?

If you see This job is running on container-based infrastructure in your build log, you are already running builds on our new container-based infrastructure.

If you don’t, add the following line to your .travis.yml to use the new infrastructure:

    sudo: false

What are the restrictions?

Using sudo isn’t possible (right now)

Our new container infrastructure uses Docker under the hood. This has a lot of benefits like faster boot times and better utilization of resources. But it also comes with some restrictions. At this point, it’s not possible to use any command requiring sudo in your builds.

If you require sudo, for instance to install Ubuntu packages, a workaround is to use precompiled binaries: upload them to S3, download them as part of your build, and install them into a non-root directory.
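A sketch of what this could look like in .travis.yml; the bucket name, archive, and tool here are hypothetical placeholders, and before_install is one of several build phases you could use:

```yaml
before_install:
  # "my-bucket" and "mytool" are placeholders for your own S3 bucket and binary
  - wget https://my-bucket.s3.amazonaws.com/mytool.tar.gz -O /tmp/mytool.tar.gz
  - tar -xzf /tmp/mytool.tar.gz -C $HOME
  - export PATH=$HOME/mytool/bin:$PATH
```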

Databases don’t run off a memory disk

On our legacy infrastructure, both MySQL and PostgreSQL run off a memory disk to increase transaction and query speed. On the container-based infrastructure they run off a regular disk, which can impact projects making heavy use of transactions or fixtures.

How do I install APT sources and packages?

As you can’t use sudo on the new container-based infrastructure, use the addons.apt.packages and addons.apt.sources keys to install packages and add package sources.

Adding APT Sources

To add APT sources before your custom build steps, use the addons.apt.sources key, e.g.:

    addons:
      apt:
        sources:
        - deadsnakes
        - ubuntu-toolchain-r-test

The aliases for the allowed sources (such as deadsnakes above) are managed in a whitelist. If you need additional sources you must use sudo: required.

Adding APT Packages

To install packages before your custom build steps, use the addons.apt.packages key, e.g.:

    addons:
      apt:
        packages:
        - cmake
        - time

The allowed packages are managed in a whitelist, and any attempts to install disallowed packages will result in a log message detailing the package approval process.
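Putting the two together, here is a sketch that adds a source and installs a package from it, assuming gcc-4.8 is on the package whitelist:

```yaml
addons:
  apt:
    sources:
    - ubuntu-toolchain-r-test
    packages:
    - gcc-4.8
```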

How Do I Install Custom Software?

Some dependencies can only be installed from a source package. The build may require a more recent version of a tool, or a library that’s not available as an Ubuntu package.

Install custom software by running a script to handle the installation process. Here is an example that installs CasperJS from a binary package:

    - wget https://github.com/n1k0/casperjs/archive/1.0.2.tar.gz -O /tmp/casper.tar.gz
    - tar -xvf /tmp/casper.tar.gz
    - export PATH=$PATH:$PWD/casperjs-1.0.2/bin/

To install custom software from source, you can follow similar steps. Here’s an example that downloads, compiles, and installs the protobuf library:

    - wget https://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz
    - tar -xzvf protobuf-2.4.1.tar.gz
    - cd protobuf-2.4.1 && ./configure --prefix=$HOME/protobuf && make && make install

These three commands can be extracted into a shell script; let’s name it install-protobuf.sh:

    set -e
    wget https://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz
    tar -xzvf protobuf-2.4.1.tar.gz
    cd protobuf-2.4.1 && ./configure --prefix=$HOME/protobuf && make && make install

Note that the $PATH update from the first example can’t be done inside a shell script, as it would only update the variable for the sub-process that runs the script.
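A quick way to convince yourself of this (a standalone demonstration, not part of any build):

```shell
# Write a small script that exports a variable...
echo 'export MARKER=set-in-child' > /tmp/set-marker.sh
# ...run it as a child process...
bash /tmp/set-marker.sh
# ...and observe that the parent shell never sees the change:
echo "MARKER in parent: '$MARKER'"
```

The export only lives as long as the `bash` sub-process, so the final echo shows an empty value.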

Once you have added the script to your repository, you can run it from your .travis.yml:

    - bash install-protobuf.sh

We can also add a script command to list the content of the protobuf folder to make sure it is installed:

    - ls -R $HOME/protobuf
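Put together, the relevant .travis.yml section might look like the following; the install phase key is an assumption here, so adjust it to whichever phase your build uses:

```yaml
install:
  - bash install-protobuf.sh
script:
  - ls -R $HOME/protobuf
```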

How Do I Cache Dependencies and Directories?

To avoid downloading and compiling the protobuf library from the previous example each time a build runs, cache the directory it was installed into.

Add the following to your .travis.yml:

    cache:
      directories:
      - $HOME/protobuf

And then change the shell script to only compile and install if the cached directory is empty:

    set -e
    # check to see if protobuf folder is empty
    if [ ! -d "$HOME/protobuf/lib" ]; then
      wget https://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz;
      tar -xzvf protobuf-2.4.1.tar.gz;
      cd protobuf-2.4.1 && ./configure --prefix=$HOME/protobuf && make && make install;
    else
      echo 'Using cached directory.';
    fi
See here for a working example of compiling, installing, and caching protobuf.

More information about caching can be found in our Caching Directories and Dependencies doc.

Need Help?

Please feel free to contact us via our support email address, or create a GitHub issue.