Reference for the gitflic-ci.yaml file
This documentation lists the keywords available for configuring your gitflic-ci.yaml file.
Keywords
Keyword | Global | For Job | Description
---|---|---|---
stages | + | | Names and execution order of pipeline stages
include | + | + | Including external .yaml files into the configuration
image | + | + | Docker image
cache | + | + | List of files and directories to cache between jobs
variables | + | + | Declaring variables and a list of predefined variables available for use
parallel:matrix | | + | Creating parallel jobs using a matrix of variables
stage | | + | Defining the stage for a job
scripts | | + | List of shell scripts to be executed by the agent
before_script | + | + | List of shell scripts to be executed by the agent before the job
after_script | + | + | List of shell scripts to be executed by the agent after the job
artifacts | | + | List of files and directories to be attached to the job on success
needs | | + | Array of job names whose successful execution is required to start the current job
when | | + | Defining job execution conditions
rules | | + | List of conditions to influence field behavior
tags | | + | Job tags for the agent
allow_failure | | + | Rule allowing the pipeline to continue in case of job failure
except | | + | Branch names for which the job will not be created
only | | + | Branch names for which the job will be created
trigger | | + | Rule allowing one pipeline to trigger another
extends | | + | Rule allowing configuration inheritance from other jobs or templates
environment | | + | Defines the environment in which deployment from the related job will be performed
services | | + | Additional Docker containers started together with the main job container
stages
Keyword type: Global keyword.
Use stages to define the list of pipeline execution stages.
If stages are not defined in the gitflic-ci.yaml file, the following predefined stages are available by default:
- .pre
- build
- test
- deploy
- .post
The order of stage elements determines the order of job execution:
- Jobs in the same stage run in parallel.
- Jobs in the next stage start after successful completion of jobs in the previous stage.
- If the pipeline contains only jobs in the .pre or .post stages, it will not run. There must be at least one job in another stage. .pre and .post stages can be used in the required pipeline configuration to define matching jobs that should run before or after project pipeline jobs.
Example:
stages:
- build
- test
- deploy
- All stages are executed sequentially. If any stage fails, the next one will not start and the pipeline will end with an error.
- If all stages are successfully completed, the pipeline is considered successfully executed.
include
Keyword type: Global keyword.
You can use include to add additional YAML files to the configuration.
include:local
Use include:local to add files located locally in the repository.
Example:
include:
- local:
- "gitflic-1.yaml"
- "gitflic-2.yaml"
include:project
Use include:project to add files located in other repositories.
Example:
include:
- project:
project_path: 'my-group/my-project'
ref: v1.0.0
file:
- 'gitflic-ci.yaml'
include:remote
Use include:remote to add files located at remote URLs.
This feature is available in the self-hosted version of the service.
Example:
include:
- remote:
- "https://external/link/file1.yaml"
- "https://external/link/file2.yaml"
Using variables in include
You can use the following variables in the include section:
- Variables declared in the project
- Variables declared in the configuration YAML file
- Predefined variables
- Variables passed via trigger
- Variables declared in the pipeline scheduler
Example:
variables:
project_path: "adminuser/include"
file_path: "file1.yaml"
local_include: "gitflic-1.yaml"
include:
- project:
project_path: '$project_path'
ref: '$CI_COMMIT_REF_NAME'
file:
- 'gitflic-ci.yaml'
- remote:
- "https://external/link/${file_path}"
- local:
- "$local_include"
image
Keyword type: Global keyword. Can also be used as a job keyword.
Use image to specify the Docker image in which the pipeline or job runs.
Global example:
image: maven:3.8.5-openjdk-11-slim
Job example:
job:
stage: build
image: maven:3.8.5-openjdk-11-slim
image:name
Use image:name to specify the name of the Docker image for the job. Equivalent to using the image keyword.
Keyword type: Job keyword.
Supported values: Image name, including registry path if needed, in the following format:
<image-name>:<tag>
Example:
job:
stage: build
image:
name: maven:3.8.5-openjdk-11-slim
image:entrypoint
Use image:entrypoint to specify the command or script to use as the container entrypoint.
Keyword type: Job keyword.
Supported values: String or array of strings.
Example:
job:
image:
name: maven:3.8.5-openjdk-11-slim
entrypoint: [""]
image:docker
Use image:docker to pass additional options to a GitFlic Runner of the Docker type.
Keyword type: Job keyword.
Supported values: Set of additional agent options, which may include:
- platform: Selects the image architecture to download. If not specified, defaults to the host architecture.
- user: Specifies the username or UID to use when running the container.
Example:
job:
image:
docker:
platform: arm64
user: gitflic
Additional information:
- image:docker:platform is similar to the --platform option of the docker pull command.
- image:docker:user is similar to the --user option of the docker run command.
image:pull_policy
Use image:pull_policy to define the Docker image pull policy.
Keyword type: Job keyword.
Supported values:
- always - always pull the image
- if-not-present - pull the image only if it is not present
- never - never pull the image
Example:
job:
image:
name: maven:3.8.5-openjdk-11-slim
pull_policy: if-not-present
cache
Keyword type: Global keyword. Can also be used as a job keyword.
The cache keyword allows you to specify a list of files and directories to cache between jobs and pipelines.
The cache is a shared resource at the repository level and is distributed between pipelines and jobs. Note that cache data is restored earlier than artifacts.
cache: []
Use cache: [] to disable the cache for a specific job.
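For example (a minimal sketch assuming a global cache is already defined), a job can opt out of caching like this:
cache:
  paths:
    - .m2/repository/

job_without_cache:
  stage: build
  scripts:
    - echo "This job does not use the global cache"
  cache: []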
cache:paths
To select files or directories to cache, use the paths parameter. This parameter only supports relative paths.
Example:
cache:
paths:
- .m2/repository/
- core/target/
- desktop/target/
cache:key
The key parameter allows you to assign a unique identifier to the cache. All jobs using the same cache key will share the same cache.
If the key is not explicitly specified, the default value is default. This means all jobs with the cache keyword but without an explicit key will use the shared cache with the default key.
Note: The key parameter must be used together with the paths parameter. If paths are not specified, caching will not be performed.
Example:
cache:
key:
- test
paths:
- desktop/target/
Cache usage notes
- The cache is available between different pipelines and jobs. This allows you to reuse data, such as dependencies or compiled files, which can significantly speed up job execution.
- Cache data is restored before artifacts. This ensures cached files are available at early stages of job execution.
- Caching is only supported for files in the project's working directory. You cannot cache files or directories outside the working directory.
variables
Keyword type: Global keyword. Can also be used as a job keyword.
Use variables to declare additional CI/CD variables for the job.
Supported values:
- The variable name can contain only digits, Latin letters, and underscores (_).
- The variable value must be a string (enclosed in single ' or double " quotes).
Example:
job_with_variables:
variables:
VAR: "/variable"
scripts:
- echo $VAR
You can nest variable values inside each other. This allows you to create new variables based on existing ones, combining them as needed.
Example:
job_with_variables:
variables:
VAR1: "This_is_my_"
VAR2: "new_var"
scripts:
- VARS=$VAR1$VAR2
- echo $VARS
Notes
- Variables defined in the YAML file are publicly visible, so it is unsafe to define sensitive information in them. Declare variables whose values should not be public via the CI/CD tab in the project settings.
- Variables can be wrapped in curly braces to clearly define the boundaries of the variable.
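For example, braces make it unambiguous where a variable name ends when it is adjacent to other text (a minimal illustration):
job_with_braces:
  variables:
    DIR: "build"
  scripts:
    - echo "${DIR}_output"  # expands DIR and appends _output; $DIR_output would instead look up a variable named DIR_output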
CI/CD variables
Currently, you can use variables from the following sources in the pipeline configuration (and its jobs):
- Declared via UI in CI/CD settings for the project, team, or company. Such variables can be masked to prevent their values from appearing in logs, increasing security.
- Variables declared in the pipeline scheduler
- Variables declared via UI when creating the pipeline
- Predefined variables.
- Global variables declared in the variables field for the pipeline. These variables are available to all jobs in the pipeline.
- Variables declared in the variables field for a job. These variables are available only to the job in which they are declared.
Notes:
- If a variable with the same name is declared in different places, its value will be overwritten according to the CI/CD variable override priority.
Predefined CI/CD variables
Variable name | Description
---|---
CI_PROJECT_URL | Project URL for which the pipeline is created (e.g. https://gitflic.ru/project/someuser/some-project-name).
CI_PROJECT_TITLE | Project name displayed in the UI (e.g. Some Project Name).
CI_PROJECT_NAME | Project directory name (e.g. some-project-name).
CI_PROJECT_VISIBILITY | String private or public, depending on project visibility.
CI_PROJECT_DIR | Path where the repository is cloned and where the job runs (e.g. /builds/ownername/projectname).
CI_PROJECT_NAMESPACE | Alias of the project owner (user, team, or company) where the pipeline is running (e.g. adminuser).
CI_DEFAULT_BRANCH | Default branch of the project (e.g. master).
GITFLIC_USER_EMAIL | Email of the pipeline initiator.
GITFLIC_USER_LOGIN | Username of the pipeline initiator.
CI_COMMIT_REF_NAME | Branch or tag name on which the pipeline runs (e.g. feature/rules).
CI_COMMIT_SHA | Full hash of the commit on which the pipeline runs.
CI_COMMIT_TAG | Tag name on which the pipeline runs. Returns an empty string if the pipeline is not run on a tag.
CI_COMMIT_MESSAGE | Commit message.
CI_PIPELINE_ID | Pipeline UUID.
CI_PIPELINE_IID | Project-local pipeline ID, unique at the project level.
CI_PIPELINE_TYPE | Pipeline type. Possible values are listed below.
CI_PIPELINE_SOURCE | Indicates the source that triggered the current pipeline. Possible values are listed below.
CI_REGISTRY | Container registry server address, in the format <host>[:<port>] (e.g. registry.gitflic.ru).
CI_REGISTRY_IMAGE | Base container registry address, in the format <host>[:<port>]/project/<full project path> (e.g. registry.gitflic.ru/project/my_company/my_project).
CI_REGISTRY_PASSWORD | Password for container registry authentication.
CI_REGISTRY_USER | Username for container registry authentication.
CI_JOB_TOKEN | Token for accessing artifact/package/release resources via API (used as: --header 'Authorization: token $CI_JOB_TOKEN').
CI_ENVIRONMENT_NAME | Environment name in the current job. Available if environment:name is defined in the job.
CI_ENVIRONMENT_SLUG | Simplified environment name suitable for DNS, URLs, Kubernetes labels, etc. Available if environment:name is defined. CI_ENVIRONMENT_SLUG is truncated to 24 characters. An uppercase environment name is automatically given a random suffix.
CI_ENVIRONMENT_URL | Environment URL in the current job. Available if environment:url is defined in the job.
CI_ENVIRONMENT_ACTION | Value of environment:action defined for the job.
CI_ENVIRONMENT_TIER | Deployment environment tier. Value of environment:deployment_tier defined for the job.
KUBECONFIG | Set by GitFlic in the CI/CD job runtime environment for Docker-type agents if a Kubernetes agent is connected to the project. A user-defined value takes precedence over the value set by GitFlic.
Predefined CI/CD variables for merge requests
These predefined variables are available in merge pipelines and merge result pipelines, which are created as part of working with merge requests.
Variable name | Description
---|---
CI_MERGE_REQUEST_ID | Internal UUID of the merge request.
CI_MERGE_REQUEST_PROJECT_ID | UUID of the project where the merge request is created.
CI_MERGE_REQUEST_SOURCE_PROJECT_ID | UUID of the source project of the merge request. Differs from CI_MERGE_REQUEST_PROJECT_ID if the request is created from a fork.
CI_MERGE_REQUEST_PROJECT_PATH | Path of the project where the merge request is created. Format: project/{ownerAlias}/{projectAlias}.
CI_MERGE_REQUEST_SOURCE_PROJECT_PATH | Path of the source project of the merge request. Format: project/{ownerAlias}/{projectAlias}. Differs from CI_MERGE_REQUEST_PROJECT_PATH if the request is created from a fork.
CI_MERGE_REQUEST_PROJECT_URL | URL of the project where the merge request is created. Format: http(s)://{Gitflic_domain}/project/{ownerAlias}/{projectAlias}.
CI_MERGE_REQUEST_SOURCE_PROJECT_URL | URL of the source project of the merge request. Format: http(s)://{Gitflic_domain}/project/{ownerAlias}/{projectAlias}. Differs from CI_MERGE_REQUEST_PROJECT_URL if the request is created from a fork.
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME | Source branch name of the merge request.
CI_MERGE_REQUEST_TARGET_BRANCH_NAME | Target branch name of the merge request.
CI_MERGE_REQUEST_APPROVED | Returns true if all merge request approval rules are met. Otherwise returns an empty string.
Possible values for the CI_PIPELINE_SOURCE variable:
- push - Pipeline was triggered after pushing code to the repository.
- parent_pipeline - Pipeline was triggered using the trigger keyword.
- schedule - Pipeline was triggered by schedule.
- web - Pipeline was triggered manually via the web interface.
- api - Pipeline was triggered via API.
- merge_request_event - Pipeline was triggered by a merge request event.
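For illustration, these values are typically checked in rules:if expressions (described in the rules section below); a minimal sketch:
merge_request_job:
  stage: test
  scripts:
    - echo "Runs only for pipelines triggered by a merge request event"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"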
Possible values for the CI_PIPELINE_TYPE variable:
- branch_pipeline - Pipeline created on a branch
- tag_pipeline - Pipeline created on a tag
- merge_request_pipeline - Merge pipeline
- merge_result_pipeline - Merge result pipeline
- train_car_pipeline - Merge train pipeline
Prefilled variables
When starting a pipeline via the web interface, you can specify prefilled variables. These variables can have unique values for each pipeline. The keywords description, value, and options allow you to prefill the key and value of such variables:
- description - Required field. Variable description, where you can specify all necessary information. This keyword is required for a standard CI/CD variable to become prefilled.
- value - Optional field. Allows you to specify a default value for the variable, which can be overridden when starting the pipeline.
- options - Optional field. Set of values available for the variable. If this keyword is present, it is not possible to override the variable value. When using this keyword, you must specify one of the possible values in the value keyword.
Prefilled variables are only available at the configuration file level and cannot be defined at the job level.
Example
variables:
VAR1:
description: "Variable without default value"
VAR2:
description: "Variable with default value"
value: "test-value"
VAR3:
description: "Variable with selectable values"
value: "1-value"
options:
- "1-value"
- "2-value"
CI/CD variable override priority
The override priority for different variable sources is as follows (in descending order):
- Variables declared when creating the pipeline or in the pipeline scheduler.
- Variables declared for the project via UI.
- Variables declared in the job when using the parallel:matrix keyword in the .yaml configuration file.
- Variables declared for the job in the .yaml configuration file.
- Global variables declared for the pipeline in the .yaml configuration file.
- Predefined variables.
Variables with equal priority override each other.
Example of working with different variable types
- A masked variable POPULAR_VAR with the value Popular from project is declared for the project via UI.
- In the variables field for the pipeline, the variable POPULAR_VAR: "Popular from pipeline" is declared, as well as the variable OBSCURE_VAR: "Obscure from pipeline".
- In the variables field for the job job 0, the variables are declared as follows: POPULAR_VAR: "Popular from job 0", OBSCURE_VAR: "Obscure from job 0".
- As a result, for the job job 0 the variable values will be: POPULAR_VAR will have the value Popular from project and will be masked, and OBSCURE_VAR will have the value Obscure from job 0.
- The value for POPULAR_VAR from the job or pipeline did not override the value declared via UI for the project. The value for the variable OBSCURE_VAR declared in the pipeline was overridden for the job job 0.
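A minimal YAML rendition of this scenario (the UI-declared project variable does not appear in the file and is shown only as a comment):
# POPULAR_VAR is also declared for the project via UI
# with the masked value "Popular from project"
variables:
  POPULAR_VAR: "Popular from pipeline"
  OBSCURE_VAR: "Obscure from pipeline"

job 0:
  variables:
    POPULAR_VAR: "Popular from job 0"
    OBSCURE_VAR: "Obscure from job 0"
  scripts:
    - echo $POPULAR_VAR  # Popular from project (masked in logs)
    - echo $OBSCURE_VAR  # Obscure from job 0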
Using CI/CD variables
All CI/CD variables are set as environment variables in the pipeline runtime environment. To access environment variables, use the syntax of your shell.
Bash
To access environment variables in Bash, sh, and similar shells, add the $ prefix to the CI/CD variable name:
variables:
MY_VAR: "Hello, World!"
my_job:
script:
- echo "$MY_VAR"
PowerShell
To access environment variables in Windows PowerShell, add the $env: prefix to the variable name:
variables:
MY_VAR: "Hello, World!"
my_job:
script:
- 'echo $env:MY_VAR'
parallel:matrix
This keyword allows you to define a matrix of variables to create multiple instances of the same job with different values of the specified variables within a single pipeline.
Keyword type: Job keyword.
When using this keyword, consider the following requirements:
- The matrix is limited to 200 created jobs.
- In the needs keyword, you must pass the full name of the matrix job with brackets and variable values, or use the needs:parallel:matrix job keyword.
Example
matrix_jobs:
stage: deploy
script:
- echo "here some script"
parallel:
matrix:
- PROVIDER: name1
STACK:
- value1
- value2
- value3
- PROVIDER: name2
STACK: [value4, value5, value6]
- PROVIDER: [name3, name4]
STACK: [value7, value8]
The example generates 10 parallel matrix_jobs instances, each with a different pair of PROVIDER and STACK values:
matrix_jobs: [name1, value1]
matrix_jobs: [name1, value2]
matrix_jobs: [name1, value3]
matrix_jobs: [name2, value4]
matrix_jobs: [name2, value5]
matrix_jobs: [name2, value6]
matrix_jobs: [name3, value7]
matrix_jobs: [name3, value8]
matrix_jobs: [name4, value7]
matrix_jobs: [name4, value8]
stage
Use stage to define the stage for a job.
Keyword type: Job keyword.
Example:
stages:
- build
- deploy
job:
stage: build
scripts
Use scripts to specify commands to be executed by the agent.
Keyword type: Job keyword.
Supported values: Array of strings.
job1:
scripts: apt-get update
job2:
scripts:
- apt-get -y install maven
- apt-get -y install git
before_script
Use before_script to specify commands to be executed before the main commands of each job, after artifacts are restored.
When using before_script as a global keyword, the commands will be executed before each job.
Keyword type: Job keyword. Can also be used as a global keyword.
Supported values: Array of strings.
Example:
job1:
scripts: apt-get update
before_script:
- apt-get -y install maven
- apt-get -y install git
If any command in the before_script block fails, then:
- all commands in the scripts block will be skipped
- commands in the after_script block will still be executed
after_script
The after_script keyword allows you to specify commands to be executed after the main job commands.
- If after_script is defined globally, the specified commands are executed after each job, regardless of its status.
- If after_script is defined inside a specific job, the commands are executed only after that job.
Keyword type: Job keyword. Can also be used as a global keyword.
Supported values: Array of strings.
Example:
job1:
scripts: apt-get update
after_script:
- apt-get update
- apt-get -y install maven
- apt-get -y install git
Additional information:
If the job times out or is canceled, after_script commands are not executed.
artifacts
Use artifacts to specify which files to save as job artifacts. Job artifacts are a list of files and directories attached to the job upon execution.
By default, jobs automatically download all artifacts created by previous jobs within the same stage.
When using the needs keyword, jobs can only download artifacts from the jobs defined in the needs configuration.
Keyword type: Job keyword.
Supported values: Array of file paths relative to the project directory.
artifacts:paths
Paths are relative to the project directory and must be specified in relative format. Absolute paths or references to files outside the repository directory are not supported.
Example:
artifacts:
paths:
- bin/usr/
- bin/path
- frontend/saw
artifacts:name
Use artifacts:name to set the name of the artifact created as a result of the job.
Example:
artifacts:
name: job_artifacts
paths:
- bin/usr/
- bin/path
- frontend/saw
artifacts:reports
Use artifacts:reports to upload SAST/DAST reports and JUnit reports.
JUnit test report results are displayed on the job's Tests tab.
Keyword type: Job keyword. You can only use it as part of a job.
Supported values: Array of file paths relative to the project directory.
Example:
artifacts:
reports:
sast:
paths:
- sast_report.json
dast:
paths:
- dast_report.json
- dast_report_2.json
junit:
paths:
- target/surefire-reports/*
artifacts:expire_in
Use artifacts:expire_in to specify the artifact retention period. After the specified time expires, the artifact will be deleted.
By default, the retention time is specified in seconds. To specify time in other units, use the syntax from the examples:
'55'
55 seconds
20 mins 30 sec
2 hrs 30 min
2h30min
2 weeks and 5 days
9 mos 10 day
10 yrs 3 mos and 10d
never
List of artifacts that are not subject to deletion:
- Artifacts from the last pipeline. This behavior can be changed in the service settings.
- Locked artifacts. These artifacts will not be deleted until they are unlocked.
- Artifacts with expire_in: never set.
Example:
job:
artifacts:
expire_in: 1 week
needs
Use needs to specify dependencies between jobs in the pipeline. The needs keyword allows you to explicitly define the execution order of jobs.
Jobs can only download artifacts from jobs defined in needs. Jobs in later stages automatically download all artifacts created in earlier stages if they are specified in needs. When specifying an array of jobs, the job will only run after all specified jobs have completed successfully.
Keyword type: Job keyword.
Possible values: Array of job names.
Example:
stages:
- build
- test
build_job:
stage: build
script:
- echo "Building..."
test_job:
stage: test
script:
- echo "Testing..."
needs: [ build_job ]
needs:parallel:matrix
GitFlic supports specifying jobs that work with a matrix.
Jobs can use parallel:matrix to run a job multiple times in parallel within a single pipeline, but with different variable values for each job.
Keyword type: Job keyword.
Example
# Creating a job with a matrix of parameters
job-1:
stage: test
script:
- echo "$PROVIDER"
- echo "$STACK"
parallel:
matrix:
- PROVIDER: ["qqq", "www", "eee"]
STACK: ["111", "222", "333"]
# Job specifying a single matrix job in needs
job-2:
stage: build
needs:
- job: job-1
parallel:
matrix:
- PROVIDER: qqq
STACK: 111
script:
- echo "Building"
# Job specifying multiple matrix jobs in needs
job-3:
stage: build
needs:
- job: job-1
parallel:
matrix:
- PROVIDER: [qqq, www, eee]
STACK: [111, 222, 333]
script:
- echo "Building"
when
Use when to configure job execution conditions. If not defined in the job, the default value is when: on_success.
Keyword type: Job keyword.
Possible values:
- on_success (default): the job will run only if no job in earlier stages has failed (failed jobs with allow_failure: true do not count).
- manual: the job will run only when manually triggered.
Example:
stages:
- build
- test
- deploy
build_job:
stage: build
script:
- echo "Building the project..."
test_job:
stage: test
script:
- echo "Running tests..."
when: on_success
deploy_job:
stage: deploy
script:
- echo "Deploying the project..."
when: manual
only:
- main
rules
Keyword type: Job keyword.
Use rules to control job creation and change job field values based on logical expressions.
rules are evaluated when the pipeline is created. Evaluation is sequential and stops at the first true rule. When a true rule is found, the job is either included in the pipeline or excluded from it, and its attributes are changed according to the rule.
rules are an alternative to the only/except keywords and cannot be used together with them. If a job specifies both rules and only and/or except, pipeline processing will fail with an error.
rules is an array of rules, each consisting of any combination of the following keywords: if, changes, when, allow_failure, and variables (described below).
A rule from rules is true if the if field is either missing or evaluates to a true logical expression.
A job will be created if:
- rules is not declared or is an empty array.
- The first true rule has when not equal to never.
A job will not be created if:
- None of the rules in rules is true.
- The first true rule has when: never.
rules:if
Use rules:if to define the conditions under which a rule is true.
Possible values: An expression with CI/CD variables, namely:
- Comparing a variable to a string.
- Comparing two variables.
- Checking if a variable exists.
- Comparing a variable to a regex pattern.
- Any combination of the previous expressions using the logical operators && or ||.
- Any of the previous expressions wrapped in ( and ).
Comparing a variable to a string
You can use the equality operators == and != to compare a variable to a string. Both single ' and double " quotes are allowed.
The order of operands does not matter, so the variable can be first, or the string can be first. For example:
if: $VAR == "string"
if: $VAR != "string"
if: "string" == $VAR
You can compare the values of two variables. For example:
if: $VAR1 == $VAR2
if: $VAR1 != $VAR2
You can compare a variable's value to null:
if: $VAR == null
if: $VAR != null
You can compare a variable's value to an empty string:
if: $VAR == ""
if: $VAR != ""
You can check if a variable exists:
if: $VAR
This expression is true only if the variable is defined and its value is not an empty string.
Comparing a variable to a regex pattern
You can match regular expressions against a variable's value using the operators =~ and !~.
For regular expressions, the RE2 syntax is used.
The regular expression must be wrapped in slashes /.
The match is true if:
- Operator =~: at least one substring fully matches the regular expression.
- Operator !~: no substring fully matches the regular expression.
Examples:
if: $VAR =~ /^feature/
if: $VAR !~ /^feature/
The first expression will be true for the variable value "feature/rules/if" and false for "base/feature".
The second expression will be false for the first value and true for the second.
Note: single-character regular expressions (such as /./) are not supported and will cause an error.
Another variable can be used as the right operand; its value will be interpreted as a regular expression. For example:
if: $VAR =~ $REGEX_VAR
if: $VAR !~ $REGEX_VAR
Note: if the variable value is not wrapped in /, it will be interpreted as if it were (for example, REGEX_VAR: "^feature" gives the same result as REGEX_VAR: "/^feature/").
Two expressions can be joined using logical operators:
$VAR1 =~ /^feature/ && $VAR2 == "two"
$VAR1 || $VAR2 != "three" && $VAR3 =~ /main$/
$VAR1 AND $VAR2
$VAR1 OR $VAR2
Expressions can be grouped using ( and ):
$VAR1 && ($VAR2 == "something" || $VAR3 == "something else")
Logical operators and parentheses have the following precedence:
- Expression in parentheses.
- Conjunction of expressions: && or AND.
- Disjunction of expressions: || or OR.
Example:
job:
scripts: echo "This job uses rules!"
variables:
VAR1: "one"
rules:
- if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
when: never
- if: $CI_COMMIT_REF_NAME =~ /^feature/ && $VAR1 == "one"
allow_failure: true
variables:
VAR1: "one from rule"
VAR2: "two from rule"
- if: $CI_COMMIT_TAG
- when: manual
Four rules are defined for this job:
- The first rule is true if the pipeline is created on the default branch of the project (e.g. main or master).
- The second rule is true if the pipeline is created on a branch whose name starts with feature and the variable VAR1 contains the string one.
- The third rule is true if the variable CI_COMMIT_TAG is declared.
- The fourth rule is always true.
If the first rule is true, the job will not be created because of when: never.
If the second rule is true and the first is false, the job will be created. The job's allow_failure field will become true, the variable VAR1 will be overwritten, the variable VAR2 will be added to the job, and the job's when field will not be changed.
If the third rule is true and the previous ones are false, the job will be created and its fields will not be changed.
If none of the previous rules is true, the fourth rule will change the job's when field to manual.
rules:changes
Use rules:changes to determine when to add a job to the pipeline, depending on changes in specific files or directories.
If compare_to is not used, the check for changes is performed between different git objects depending on the pipeline type:
- Merge pipelines and merge result pipelines: rules:changes compares changes between the source and target branches of the merge request.
- Branch pipelines and tag pipelines: rules:changes compares changes between the commit on which the pipeline runs and its parent commit.
rules:changes:paths
Use rules:changes:paths to specify file paths whose changes are required for the rule to be true.
Keyword type: Job keyword.
The array containing file paths supports the following options:
- Path to a file, which can include CI/CD variables
- Wildcard templates:
  - Single directory, e.g. path/to/directory/*
  - Directory and all its subdirectories, e.g. path/to/directory/**/*
- Wildcard templates with extensions, e.g. path/to/directory/*.md or path/to/directory/*.{java,py,sh}
Example:
job-build:
script: docker build -t my-image:$CI_COMMIT_REF_NAME .
rules:
- changes:
paths:
- Dockerfile
- build/*
In this example, the job is included in the pipeline if the changes affect the Dockerfile or any file in the build directory.
rules:changes:compare_to
Use rules:changes:compare_to to specify which git objects should be compared when searching for changes in the files listed in rules:changes:paths.
Keyword type: Job keyword. Can only be used together with rules:changes:paths.
Supported values:
- Branch name in short (master) or long (refs/heads/master) form
- Tag name in short (v.4.0.0) or long (refs/tags/v.4.0.0) form
- Commit hash, e.g. 1a2b3c4
CI/CD variables are supported.
Example:
job-build:
script: docker build -t my-image:$CI_COMMIT_REF_NAME .
rules:
- changes:
paths:
- Dockerfile
compare_to: "develop"
In this example, the job is included in the pipeline if the Dockerfile was changed compared to the develop branch.
rules:allow_failure
If the rule is true and this field is defined, it overwrites the value of the job's allow_failure field.
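For example (a minimal sketch based on the combined rules example above), a rule can make failure non-blocking only on feature branches:
job:
  scripts:
    - echo "May fail on feature branches without stopping the pipeline"
  rules:
    - if: $CI_COMMIT_REF_NAME =~ /^feature/
      allow_failure: true
    - when: on_success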
rules:variables
If the rule is true and this field is defined, it adds variables to the job, potentially overwriting already declared ones.
Possible values: Set of fields in the format VARIABLE_NAME: "variable value".
Example:
job:
variables:
ARGUMENT: "default"
rules:
- if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
variables: # Overwrite the value
ARGUMENT: "important" # of an existing variable
- if: $CI_COMMIT_REF_NAME =~ /feature/
variables:
IS_A_FEATURE: "true" # Define a new variable
scripts:
- echo "Run script with $ARGUMENT as an argument"
- echo "Run another script if $IS_A_FEATURE exists"
Certain variable values change the agent's behavior:
Variable name | Value | Result
---|---|---
GIT_STRATEGY | none | Disables repository cloning
GIT_STRATEGY | Any value except none | git fetch is applied
ARTIFACT_DOWNLOAD_STRATEGY | none | Disables artifact downloading
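As a hedged illustration, these values can be combined with rules:variables, for example to skip cloning and artifact downloads on tag pipelines:
job:
  rules:
    - if: $CI_COMMIT_TAG
      variables:
        GIT_STRATEGY: "none"                 # skip repository cloning
        ARTIFACT_DOWNLOAD_STRATEGY: "none"   # skip artifact downloading
    - when: on_success
  scripts:
    - echo "Runs without a cloned repository on tag pipelines"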
rules:when
If the rule is true and this field is defined, it overwrites the job's when field value.
Possible values:
- never - the job will not be added to the pipeline.
- manual - the job is executed only when manually triggered from the UI.
- on_success - the job is executed only if all jobs with allow_failure: false completed successfully.
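A minimal sketch combining these values: the job is skipped on tags, manual on the default branch, and automatic elsewhere:
job:
  scripts:
    - echo "Deploy step"
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      when: manual
    - when: on_success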
tags
Use tags to select the agent that will run the job.
You can specify tags for a particular agent in its settings.
Keyword type: Job keyword.
Possible values: Array of tag names.
Example:
job:
tags:
- test
- build
allow_failure
Use allow_failure to determine whether the pipeline should continue if the job fails.
- To allow the pipeline to continue with subsequent jobs, use allow_failure: true.
- To prevent the pipeline from running subsequent jobs, use allow_failure: false.
Keyword type: Job keyword.
Possible values:
- true
- false (default)
For manual jobs, which are executed only when manually triggered from the UI, the default value is true.
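Example (a minimal sketch; run-linter.sh is a hypothetical script):
lint_job:
  stage: test
  scripts:
    - ./run-linter.sh  # hypothetical linter; its failure will not stop the pipeline
  allow_failure: true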
except
Use except to specify branches on which the job will not be created.
Keyword type: Job keyword.
Possible values: Array of branch names.
Example:
job:
except:
- master
- deploy
only
Use only to specify branches on which the job will be created.
Keyword type: Job keyword.
Possible values: Array of branch names.
Example:
job:
only:
- master
trigger
Use trigger to configure pipeline triggering from another pipeline.
For cross-project triggers to work properly, the user initiating the pipeline must have at least the developer role in the child project.
A single job cannot contain both the script and trigger keywords simultaneously.
Keyword type: Job keyword.
trigger:project
trigger:project - Specifies the project where the pipeline should be triggered.
trigger:branch - Specifies the branch for which the pipeline will be triggered in the project specified in trigger:project.
trigger:strategy - Defines the relationship between the job and the child pipeline. If omitted, the trigger job completes immediately after creating the child pipeline, without waiting for its successful execution.
Possible values:
- depend: Requires successful completion of the child pipeline for the trigger job to succeed.
trigger:forward
trigger:forward - Specifies which variables should be passed to the triggered project.
Possible values:
- yaml_variables: true (default) or false. When true, variables defined in the triggering pipeline (both global and job-level) are passed to child pipelines.
- pipeline_variables: true or false (default). When true, variables defined in the pipeline scheduler that created the triggering pipeline are passed to child pipelines.
Example:
trigger-project:
trigger:
project: adminuser/cicd-child
branch: master
strategy: depend
forward:
yaml_variables: true
pipeline_variables: true
trigger:include
The trigger:include keyword family allows creating child pipelines within the same project as the parent pipeline. Multiple trigger:include types can be used in a single trigger job: all specified configuration files will be merged into one to create the child pipeline. These keywords function similarly to include.
trigger:include:local
trigger:include:local - Creates a child pipeline based on repository files specified by absolute paths.
Example:
trigger-child:
trigger:
include:
- local:
- "gitflic-1.yaml"
strategy: depend
forward:
yaml_variables: true
pipeline_variables: true
trigger:include:project
trigger:include:project - Creates a child pipeline based on files from another repository. Required fields:
- project_path - Project path in {ownerAlias}/{projectAlias} format:
Path Variable | Description
---|---
ownerAlias | Project owner alias
projectAlias | Project alias
- ref - Git reference (branch or tag) where the file should be located
- file - Absolute path to the configuration file for the child pipeline
Example:
trigger-child:
trigger:
include:
- project:
project_path: 'adminuser/cicd-child'
ref: master
file:
- 'gitflic-1.yaml'
strategy: depend
forward:
yaml_variables: true
pipeline_variables: true
trigger:include:remote
Available in self-hosted service versions
trigger:include:remote - Creates a child pipeline based on remotely hosted files specified by URL.
trigger-child:
trigger:
include:
- remote:
- https://gitflic.ru/project/adminuser/cicd-child/blob/raw?file=gitflic-ci.yaml
strategy: depend
forward:
yaml_variables: true
pipeline_variables: true
trigger:include:artifact
trigger:include:artifact - Implements dynamic pipeline functionality: creates child pipelines based on configuration files generated in other CI/CD jobs. Specifies the artifact filename to use for child pipeline creation. The file must be within an artifact generated by the job specified in trigger:include:job.
trigger:include:job - Specifies the job that generates the artifact containing the child pipeline configuration file.
Peculiarities when jobs are in the same stage:
If both the artifact-generating and trigger jobs are in the same stage, the trigger job must use the needs keyword referencing the artifact-generating job. For jobs in different stages, needs is optional.
stages:
- artifact
- trigger
job-parent:
stage: artifact
scripts:
- echo -e 'from-artifact-job:\n scripts:\n - echo "$CI_PIPELINE_SOURCE"' > gitflic-1.yaml
artifacts:
paths:
- gitflic-1.yaml
job-trigger:
stage: trigger
trigger:
include:
- artifact: gitflic-1.yaml
job: job-parent
strategy: depend
forward:
yaml_variables: true
pipeline_variables: true
extends
Use extends to inherit job configuration from other jobs or templates. If the same keywords exist in both the template/job and the main job, the keywords from the main job are used.
A template is a job whose name starts with a dot (.). Template jobs do not appear in the pipeline and are not executed separately from the main job.
Keyword type: Job keyword.
The main job must specify the stage keyword; otherwise, the job will be placed in the default test stage.
Example:
.default_template:
before_script:
- echo "Executing before_script"
script:
- echo "Executing script"
build_job:
stage: build
extends: .default_template
script:
- echo "Building the project"
- make build
environment
Use environment to define the environment in which deployment from the related job will be performed.
Keyword type: Job keyword.
environment:name
Use environment:name to define the environment name.
Keyword type: Job keyword.
Possible values: The environment name may contain only letters, digits, "-", "_", "/", "$", "{", "}", ".", and spaces, but cannot start or end with "/". It may contain CI/CD variables, including predefined variables, project and company/team-level variables, and variables declared in gitflic-ci.yaml.
Example:
deploy to production:
stage: deploy
script:
- make deploy
environment:
name: production
Note: If the environment specified in environment:name does not exist, it will be created when the job is executed.
environment:url
Use environment:url to define the environment URL.
Keyword type: Job keyword.
Possible values: A valid URL, e.g. https://prod.example.com. May contain CI/CD variables, including predefined variables, project and company/team-level variables, and variables declared in gitflic-ci.yaml.
Example:
deploy to production:
stage: deploy
script:
- make deploy
environment:
name: production
url: https://prod.example.com
environment:on_stop
Stopping the environment can be performed using the on_stop keyword defined in the environment. It declares another job that is executed to stop the environment.
Keyword type: Job keyword.
Possible values: Name of an existing job.
Example:
deploy to production:
stage: deploy
script:
- make deploy
environment:
name: production
url: https://prod.example.com
on_stop: stop_production
stop_production:
stage: deploy
script:
- make delete-app
when: manual
environment:
name: production
action: stop
environment:action
Use environment:action to describe how the job will interact with the environment.
Keyword type: Job keyword.
Possible values: One of the following keywords:
- start - Default value. Indicates that the job creates the environment and deployment.
- prepare - Indicates that the job only prepares the environment. Deployment is not created.
- stop - Indicates that the job stops the environment.
- verify - Indicates that the job only verifies the environment. Deployment is not created.
- access - Indicates that the job only accesses the environment. Deployment is not created.
Example:
stop_production:
stage: deploy
script:
- make delete-app
when: manual
environment:
name: production
action: stop
environment:deployment_tier
Use deployment_tier to define the environment deployment tier.
Keyword type: Job keyword.
Possible values: One of the following options:
- production
- staging
- testing
- development
- other
Example:
deploy:
stage: deploy
script:
- make deploy
environment:
name: customer-portal
deployment_tier: production
Note: The default value is other. It will be set for the environment if environment:deployment_tier is missing.
services
Keyword type: Job keyword.
Use the services keyword to start additional Docker containers together with the main job container, for example, to run a database or other auxiliary services.
Services defined in the job are started in the same isolated network as the main container and are only available within that job.
Example:
job:
stage: test
image: ubuntu:latest
services:
- postgres:latest
You can use services in extended syntax with additional parameters. In this case, the name directive must be specified for each service. When using extended syntax, all directives of the image keyword are supported.
services:name
Use services:name to specify the image for the additional container.
services:
- name: postgres:latest
- name: mysql:latest
services:command
Use services:command to specify the command used to run the additional container.
services:
- name: docker:dind
command: ["--tls=false"]
services:alias
Use services:alias to specify the name by which the container will be available on the network.
services:
- name: docker:dind
command: ["--tls=false"]
alias: docker
If alias is not specified, the container will be available by automatically generated names.
In the example:
services:
- tutum/wordpress:latest
Available names:
- tutum__wordpress
- tutum-wordpress
If both names are already taken, the job will fail at runtime.
Docker-in-Docker support
To correctly run an additional container with the docker:dind image, configure the agent to run containers in privileged mode:
docker.privileged=true
Note that using an additional container with the docker:dind image while the docker.didEnable=true parameter is enabled may lead to incorrect behavior.