Customizing Enterprise Worker Configuration
- Credentials for Connecting to the Platform
- Setting Timeouts
- Configuring the Number of Concurrent Jobs
- Changing the Worker Hostname
- Disable SSL Verification Messages
- Enabling S3 Dependency Caching
- Configuring Jobs’ Allowed Memory Usage
- Setting Maximum Log Length
- Mounting Volumes across Worker Jobs on Enterprise
- Worker behind an HTTP(S) Proxy
- How to set LXD worker specifics
- Contact Enterprise Support
Credentials for Connecting to the Platform #
With Ubuntu 16.04 as the host operating system #
The configuration for connecting to the Travis CI Enterprise platform can be found in
If you need to change the hostname the Worker should connect to, or the
RabbitMQ password, you can do so by updating:
export AMQP_URI="amqp://travis:<rabbitmq password>@<your-travis-ci-enterprise-domain>/travis"
With Ubuntu 14.04 as the host operating system #
The configuration for connecting to the Travis CI Enterprise Platform,
including the RabbitMQ password, can be found in
If you need to change the hostname the Worker should connect to, or the RabbitMQ password, you can do so by updating:
export TRAVIS_ENTERPRISE_HOST="<your-travis-ci-enterprise-domain>"
export TRAVIS_ENTERPRISE_SECURITY_TOKEN="super-secret-password"
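After editing the file, restart the Worker so the new credentials take effect. On Ubuntu 14.04 this is an upstart job; the command below assumes a standard install:

```shell
# Restart the worker to pick up the new credentials (Ubuntu 14.04 / upstart)
sudo restart travis-worker
```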
Setting Timeouts #
The following options can be customized in
It is recommended to have all Workers use the same config.
By default, jobs can run for a maximum of 50 minutes. You can increase or decrease this using the following setting:
If no log output has been received for more than 10 minutes, the job is cancelled, as it is assumed to have stalled. You can customize this timeout using the following setting:
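The exact settings were lost from this page. Assuming the standard travis-worker environment variables (`TRAVIS_WORKER_HARD_TIMEOUT` for the overall job limit and `TRAVIS_WORKER_LOG_TIMEOUT` for the log-inactivity limit), a sketch might look like:

```shell
# Raise the overall job limit to 120 minutes and the no-log-output
# timeout to 20 minutes (variable names assumed - verify against your
# installed travis-worker version).
export TRAVIS_WORKER_HARD_TIMEOUT=120m
export TRAVIS_WORKER_LOG_TIMEOUT=20m
```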
Configuring the Number of Concurrent Jobs #
The number of concurrent jobs run by the worker and the number of CPUs
allowed for a job to use are configured with the
variables, respectively. Each job requires a minimum of 2 CPUs, and by
default, each Worker runs 2 jobs. The product of
TRAVIS_WORKER_POOL_SIZE * TRAVIS_WORKER_DOCKER_CPUS cannot exceed the
number of CPUs the worker machine has; otherwise, jobs will error.
To change the number of concurrent jobs allowed for a worker to use, please update the following setting:
To change the number of CPUs a job is allowed to use, please change the following setting:
To disable this limit entirely, set the value to 0. Resources will then be used as needed, which means, for example, that a single job can use all CPU cores.
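As a quick sanity check of the constraint described above, you can compare the planned pool size and per-job CPU count against the CPUs available on the host. A minimal sketch (the two planned values are illustrative):

```shell
# Verify that pool_size * cpus_per_job does not exceed the host CPU count.
POOL_SIZE=2       # planned number of concurrent jobs
CPUS_PER_JOB=2    # planned CPUs per job
HOST_CPUS="$(nproc)"

if [ "$((POOL_SIZE * CPUS_PER_JOB))" -le "$HOST_CPUS" ]; then
  echo "ok: ${POOL_SIZE} jobs x ${CPUS_PER_JOB} CPUs fit into ${HOST_CPUS} host CPUs"
else
  echo "oversubscribed: reduce the pool size or CPUs per job" >&2
fi
```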
Changing the Worker Hostname #
Each Worker should have a unique hostname, making it easier to determine
where jobs ran. By default, this is set to the hostname of the host the Worker is running on.
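To override the default, the hostname can be set explicitly in the Worker's environment file. Assuming the standard `TRAVIS_WORKER_HOSTNAME` variable, this might look like:

```shell
# Give each worker a unique, descriptive hostname
# (variable name assumed; the value is a placeholder).
export TRAVIS_WORKER_HOSTNAME="enterprise-worker-01.example.com"
```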
Disable SSL Verification Messages #
The Platform comes set up with a self-signed SSL certificate. This option allows the Worker to talk to the Platform via SSL but ignore the verification warnings.
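Assuming the standard travis-worker skip-verify option, the setting looks like:

```shell
# Skip TLS certificate verification for the self-signed platform certificate
# (variable name assumed; only use this with the default self-signed setup).
export TRAVIS_WORKER_BUILD_API_INSECURE_SKIP_VERIFY='true'
```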
Enabling S3 Dependency Caching #
If you would like to set up S3 dependency caching for your builds, you can use the following example config:
export TRAVIS_WORKER_BUILD_CACHE_FETCH_TIMEOUT="10m"
export TRAVIS_WORKER_BUILD_CACHE_PUSH_TIMEOUT="60m"
export TRAVIS_WORKER_BUILD_CACHE_S3_ACCESS_KEY_ID=""
export TRAVIS_WORKER_BUILD_CACHE_S3_SECRET_ACCESS_KEY=""
export TRAVIS_WORKER_BUILD_CACHE_S3_BUCKET=""
export TRAVIS_WORKER_BUILD_CACHE_S3_REGION="us-east-1"
export TRAVIS_WORKER_BUILD_CACHE_S3_SCHEME="https"
export TRAVIS_WORKER_BUILD_CACHE_TYPE="s3"
Configuring Jobs’ Allowed Memory Usage #
The Worker limits each job's RAM to 4G by default. If you want to change this, you can add the following. To disable the limit entirely, set the value to 0.
export TRAVIS_WORKER_DOCKER_MEMORY=4G
# OR
export TRAVIS_WORKER_DOCKER_MEMORY=0
Setting Maximum Log Length #
The Worker comes configured with defaultMaxLogLength = 4500000, which is 4.5 MB. The setting is measured in bytes, so to get 40 MB you need 40000000.
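Assuming the `TRAVIS_WORKER_MAX_LOG_LENGTH` environment variable, a 40 MB limit would be set like this:

```shell
# Raise the maximum build log length to 40 MB = 40000000 bytes
# (variable name assumed - verify against your installed worker version).
export TRAVIS_WORKER_MAX_LOG_LENGTH=40000000
```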
Mounting Volumes across Worker Jobs on Enterprise #
You can use Docker bind mounts when the worker launches the container of a job. This lets you share files or directories across all jobs run by a worker. Multiple binds can be provided as space-separated strings.
For example, the setting below shows how to share the /tmp directory in read/write mode, as well as the /var/log directory in read-only mode (:ro is the default):
export TRAVIS_WORKER_DOCKER_BINDS="/tmp:/tmp:rw /var/log"
A full list of options and mount modes is listed in the official Docker documentation.
Worker behind an HTTP(S) Proxy #
If you’re using Travis CI Enterprise behind an HTTP(S) proxy, we’ve got you covered. Since travis-worker 4.6 it is possible to run builds behind a proxy.
How do I find out if I have the correct travis-worker version installed? #
Ubuntu 16.04+ #
Connect to your worker machine via SSH and run:
$ sudo docker images | grep worker
travisci/worker v4.6.1 ef7a3419050c 17 hours ago 44.7MB
Ubuntu 14.04 #
Connect to your worker machine via SSH and run:
$ travis-worker -v
travis-worker v=v4.6.1 rev=73392421d0ca807b83d4d459ad3dd484820fd181 d=2018-10-30T16:13:39+0000 go=go1.11.1
Upgrade travis-worker #
If you need to install a newer version of travis-worker, please follow the instructions in our Updating your Travis CI Worker docs.
Configuring an HTTP Proxy #
On the worker machine, please open
/etc/default/travis-worker in your editor and add the two lines from the example below. The value for
TRAVIS_WORKER_DOCKER_API_VERSION depends on the installed Docker version.
export TRAVIS_WORKER_DOCKER_HTTP_PROXY="<YOUR PROXY URL>"
export TRAVIS_WORKER_DOCKER_API_VERSION=1.35
In this example we’ve used Docker-CE 17.12. According to the API mismatch table, we need to choose 1.35.
Below you can find the full list of available environment variables and how they’re accessible during the build:
|Environment variable||Available as:|
Please note that all apt-get commands respect TRAVIS_WORKER_DOCKER_HTTPS_PROXY by default, which means that all package installs will go through the HTTP(S) proxy as well. If you don't want this to happen, whitelist your apt package mirror by adding it to TRAVIS_WORKER_DOCKER_NO_PROXY like this:
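The original example was lost from this page; a hedged sketch, where the mirror hostname is a placeholder:

```shell
# Exclude the apt package mirror from proxying
# (hostname is a placeholder - substitute your own mirror).
export TRAVIS_WORKER_DOCKER_NO_PROXY="apt-mirror.example.com"
```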
How to set LXD worker specifics #
After running the lxd_install.sh script, the LXD worker configuration is stored in
All parameters mentioned on this page, with the exception of the Docker parameters, apply to LXD. Below you can find a list of the available parameters:
||defines how many CPUs can be used by LXD; the default is
||defines whether all CPUs can be used by LXD if not already in use; the default is
||LXD disk size limit, the default is
||overrides the architecture defined in the job configuration, not present by default.|
||defines the memory available for each container, the default is
||defines the network bandwidth, the default is
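The parameter names were lost from the table above. As an illustration only, LXD limits of this kind are typically set via environment variables in the worker's configuration file; the names and values below are assumptions, so verify them against your installed worker before use:

```shell
# Hypothetical examples - variable names and values are assumed,
# check your worker's configuration reference for the exact names.
export TRAVIS_WORKER_LXD_CPUS=2            # CPUs per container
export TRAVIS_WORKER_LXD_CPUS_BURST=true   # allow bursting onto idle CPUs
export TRAVIS_WORKER_LXD_DISK=10GB         # disk size limit
export TRAVIS_WORKER_LXD_MEMORY=4GB        # memory per container
export TRAVIS_WORKER_LXD_NETWORK=500Mbit   # network bandwidth limit
```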
Contact Enterprise Support #
To get in touch with us, please write a message to firstname.lastname@example.org. If possible, please include as much of the following as you can:
- Description of the problem - what are you observing?
- Which steps did you try already?
- A support bundle (see table below on how to obtain it)
- Log files from all workers (they can be found at /var/log/upstart/travis-worker.log; please include as many as you can retrieve)
- If a build failed or errored, a text file of the build log
|TCI Enterprise version||Support bundle|
Instructions for generating a support bundle are available in the ‘troubleshoot’ menu or directly at:
A command for generating the support bundle will appear after selecting:
|2.x+||You can get it from
Have you made any customizations to your setup? While we may be able to see some information (such as hostname, IaaS provider, and license expiration), there are many other things we cannot see which could lead to something not working. Therefore, we would like to ask you to also answer the questions below in your support request (if applicable):
- How many machines are you using / what is your Kubernetes cluster setup?
- Do you use configuration management tools (Chef, Puppet)?
- Which other services interface with Travis CI Enterprise?
- Which Version Control system (VCS) do you use together with Travis CI Enterprise (e.g. github.com, GitHub Enterprise, or BitBucket Cloud)?
- If you are using GitHub Enterprise, which version of it?
We look forward to helping!