Pipeline YAML
This page describes the formal pipeline YAML specification for Semaphore.
Overview
Semaphore uses YAML to define pipelines. Every Semaphore project requires at least one pipeline to work. If you don't want to write pipelines by hand, you can use the visual workflow editor.
Execution order
You cannot assume that jobs in the same task run in any particular order. They run in parallel on a resource availability basis.
To force execution order, you must use block dependencies. Semaphore only starts a block when all of its dependencies are completed.
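To see how this fits together, here is a minimal sketch (the block and job names are illustrative) in which the Test block only starts after the Build block has completed:
blocks:
  - name: Build
    dependencies: []
    task:
      jobs:
        - name: Build job
          commands:
            - echo Building
  - name: Test
    dependencies: ["Build"]
    task:
      jobs:
        - name: Test job
          commands:
            - echo Testing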
Comments
Lines beginning with # are considered comments and are ignored by the YAML parser.
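For example (the comment text is illustrative):
# this pipeline uses specification v1.0
version: v1.0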
version
The version of the pipeline YAML specification to be used. The only supported value is v1.0.
version: v1.0
name
A Unicode string for the pipeline name. It is strongly recommended that you give descriptive names to your Semaphore pipelines.
name: The name of the Semaphore pipeline
agent
Defines the global agent's machine type and os_image to run jobs. See agents to learn more.
The agent can contain the following properties:
- machine: VM machine type to run the jobs
- containers: optional Docker containers to run the jobs
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
The default agent can be overridden inside tasks.
machine
Part of the agent definition. It defines the global VM machine type to run the jobs.
It requires two properties: type and os_image.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
type
Part of the agent definition. It selects the hardware or self-hosted agent type that runs the jobs.
By default, Semaphore uses the built-in s1-kubernetes agent, which runs your jobs in a pod on the same cluster where the Semaphore server is running.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
os_image
Part of the agent definition. This is an optional property to specify the Operating System image to mount on the machine. The value is not used when running Docker-based environments or Kubernetes self-hosted agents.
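This property has no dedicated example above, so here is a minimal sketch; it assumes the e1-standard-2 machine type and ubuntu2004 image used in later examples on this page:
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004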
containers
An optional part of agent. Defines an array of Docker containers to run jobs. The containers property is required when using Docker-based environments or Kubernetes self-hosted agents.
The first container in the list runs the jobs. You may optionally add more items that run as separate containers. All containers can reference each other via their names, which are mapped to hostnames using DNS records.
Each container entry can have:
- name: the name of the container
- image: the image for the container
- env_vars: optional list of key-value pairs to define environment variables
- secrets: optional list of secrets to import into the container
- user: optional user to run the container as
- command: optional override for the Docker CMD
- entrypoint: optional override for the Docker ENTRYPOINT
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
- name: db
image: 'registry.semaphoreci.com/postgres:9.6'
name
Defines the unique name of the container. The name is mapped to the container hostname and can be used to communicate with other containers.
agent:
machine:
type: e1-standard-2
containers:
- name: main
image: 'registry.semaphoreci.com/ruby:2.6'
image
Defines the Docker image to run inside the container.
agent:
machine:
type: e1-standard-2
containers:
- name: main
image: 'registry.semaphoreci.com/ruby:2.6'
env_vars
An optional array of key-value pairs. The keys are exported as environment variables when the container starts.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
- name: db
image: 'registry.semaphoreci.com/postgres:9.6'
env_vars:
- name: POSTGRES_PASSWORD
value: keyboard-cat
secrets
An optional array of secrets to import into the container. Only environment variables defined in the secret are imported. Any files in the secret are ignored.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
secrets:
- name: mysecret
user
An optional property that specifies the active user inside the container.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
- name: db
image: 'registry.semaphoreci.com/postgres:9.6'
user: postgres
command
An optional property that overrides the Docker image's CMD command.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
command: ["bundle", "exec", "rails", "server"]
entrypoint
An optional property that overrides the Docker image's ENTRYPOINT.
agent:
machine:
type: s1-kubernetes
os_image: ''
containers:
- name: main
image: 'registry.semaphoreci.com/ubuntu:22.04'
entrypoint: ["/bin/sh", "-c"]
execution_time_limit
Defines an optional time limit for executing the pipeline. Jobs are forcibly terminated once the time limit is reached. The default value is 1 hour.
The execution_time_limit property accepts one of two options:
- hours: time limit expressed in hours. The maximum value is 24
- minutes: time limit expressed in minutes. The maximum value is 1440

You can set either hours or minutes, but not both.
This property is also available on blocks and jobs.
version: v1.0
name: Using execution_time_limit
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
execution_time_limit:
hours: 3
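Because the property is also available on blocks, the following sketch (the block and job names are illustrative) limits a single block to 30 minutes while the rest of the pipeline keeps the default limit:
version: v1.0
name: Using execution_time_limit on a block
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
blocks:
  - name: Tests
    execution_time_limit:
      minutes: 30
    task:
      jobs:
        - name: Unit tests job
          commands:
            - echo Running unit test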
fail_fast
This optional property defines what happens when a job fails. It accepts the following properties:
- stop: stops all running jobs as soon as one job fails
- cancel: cancels all non-started jobs as soon as one job fails

If both are set, stop is evaluated first. If fail_fast is not defined, jobs continue running following declared dependencies when a job fails.
stop
The stop property causes all running jobs to stop as soon as one job fails. It requires a when property that defines a condition according to the Conditions DSL.
In the following configuration, blocks A and B run in parallel. Block C runs after Block B is finished. If Block A fails and the workflow was initiated from a non-master branch, all running jobs stop immediately.
version: v1.0
name: Setting fail fast stop policy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
fail_fast:
stop:
when: "branch != 'master'"
blocks:
- name: Block A
dependencies: []
task:
jobs:
- name: Job A
commands:
- sleep 10
- failing command
- name: Block B
dependencies: []
task:
jobs:
- name: Job B
commands:
- sleep 60
- name: Block C
dependencies: ["Block B"]
task:
jobs:
- name: Job C
commands:
- sleep 60
cancel
The cancel property causes all non-started jobs to be canceled as soon as one job fails. Already-running jobs are allowed to finish. This property requires a when property that defines a condition according to the Conditions DSL.
In the following configuration, blocks A and B run in parallel. Block C runs after Block B is finished. If Block A fails in a workflow that was initiated from a non-master branch:
- Block B is allowed to finish
- Block C is canceled, i.e. it never starts
version: v1.0
name: Setting fail fast cancel policy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
fail_fast:
cancel:
when: "branch != 'master'"
blocks:
- name: Block A
dependencies: []
task:
jobs:
- name: Job A
commands:
- sleep 10
- failing command
- name: Block B
dependencies: []
task:
jobs:
- name: Job B
commands:
- sleep 60
- name: Block C
dependencies: ["Block B"]
task:
jobs:
- name: Job C
commands:
- sleep 60
queue
The optional queue property enables you to assign pipelines to custom execution queues, or to configure how pipelines are processed when queuing happens.
There are two queueing strategies:
- Direct assignment: assigns all pipelines from the current pipeline file to a shared queue
- Conditional assignment: defines assignment rules based on conditions
See Pipeline Queues for more information.
Direct assignment
This option allows you to use the name, scope, and processing properties as direct sub-properties of the queue property.
The following rules apply:
- either the name or the processing property is required
- scope can only be set if name is defined
- name should hold a string that uniquely identifies the desired queue within the configured scope
- you can omit name if you only wish to set the processing property; in that case, the name is autogenerated from the Git commit details
- scope can have one of two values: project or organization. The default is project
When scope: project, queues with the same name value in different projects are not queued together.
When scope: organization, pipelines from the queue are queued together with pipelines from other projects within the server that have a queue configuration with the same name and scope values.
The processing property can have two values:
- serialized: pipelines in the queue are queued and executed one by one in ascending order, according to creation time. This is the default
- parallel: all pipelines in the queue are executed as soon as they are created, without any queuing
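As a sketch of direct assignment (the queue name is illustrative), the following configuration puts every run of this pipeline into a project-scoped queue that executes pipelines one by one:
version: v1.0
name: Deployment pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu2004
queue:
  name: deployment-queue
  scope: project
  processing: serialized
blocks:
  - name: Deploy
    task:
      jobs:
        - name: Deploy job
          commands:
            - echo Deploying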
Conditional assignment
In this option, you define an array of queue configurations as a sub-property of the queue property. Each array item can have the same properties as in direct assignment, i.e. name, scope, and processing.
In addition, each item needs a when property using the Conditions DSL. When the queue configuration is evaluated in this approach, the when conditions from the items in the array are evaluated one by one, starting with the first item in the array.
The evaluation stops as soon as one of the when conditions evaluates to true, and the rest of the properties from the same array item are used to configure the queue for the given pipeline.
This means that the order of the items in the array is important: items should be ordered so that those with the most specific conditions are defined first, followed by those with more general conditions (e.g. items with conditions such as branch = 'develop' should be ordered before those with branch != 'master').
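Here is a sketch of conditional assignment using the conditions mentioned above (the queue names are illustrative); the first item whose when condition evaluates to true configures the queue:
queue:
  - name: develop-queue
    scope: project
    processing: serialized
    when: "branch = 'develop'"
  - name: non-master-queue
    processing: parallel
    when: "branch != 'master'"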
auto_cancel
Sets a strategy for auto-canceling pipelines in a queue when a new pipeline appears. Two values are supported:
- running: cancels queued and running pipelines
- queued: cancels only queued pipelines, allowing running pipelines to finish

At least one of them is required. If both are set, running is evaluated first.
running
When this property is set, queued and running pipelines are canceled as soon as a new workflow is triggered. This property requires a when property that defines a condition according to the Conditions DSL.
In the following configuration, all pipelines initiated from a non-master branch will run immediately after auto-stopping everything else in front of them in the queue.
version: v1.0
name: Setting auto-cancel running strategy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
auto_cancel:
running:
when: "branch != 'master'"
blocks:
- name: Unit tests
task:
jobs:
- name: Unit tests job
commands:
- echo Running unit test
queued
When this property is set, only queued pipelines are canceled as soon as a new workflow is triggered. Already-running pipelines are allowed to finish. This property requires a when property that defines a condition according to the Conditions DSL.
In the following configuration, all pipelines initiated from a non-master branch will cancel any queued pipelines and wait for the one that is running to finish before starting.
version: v1.0
name: Setting auto-cancel queued strategy
agent:
machine:
type: e1-standard-2
os_image: ubuntu2004
auto_cancel:
queued:
when: "branch != 'master'"
blocks:
- name: Unit tests
task:
jobs:
- name: Unit tests job
commands:
- echo Running unit test