Hello,

I am facing an issue combining the allow_failures feature with deployment. Here is the relevant snippet of the .travis.yml file we use:
jobs:
  allow_failures:
    - name: "Ubuntu 18.04.sudo.mpich [Job failure permitted]"
      if: type = cron
      script:
        - docker build -f precice/Dockerfile.Ubuntu1804.sudo.mpich -t precice/precice-ubuntu1804.sudo.mpich-develop .
      deploy:
        skip_cleanup: true
        provider: script
        on:
          all_branches: true
        script: >-
          echo "$DOCKER_PASSWORD" | docker login -u precice --password-stdin &&
          docker push precice/precice-ubuntu1804.sudo.mpich-develop:latest
  include:
    - stage: Building preCICE
      name: "Ubuntu 18.04.sudo.mpich [Job failure permitted]"
      if: type = cron
      script:
        - docker build -f precice/Dockerfile.Ubuntu1804.sudo.mpich -t precice/precice-ubuntu1804.sudo.mpich-develop .
      deploy:
        skip_cleanup: true
        provider: script
        on:
          all_branches: true
        script: >-
          echo "$DOCKER_PASSWORD" | docker login -u precice --password-stdin &&
          docker push precice/precice-ubuntu1804.sudo.mpich-develop:latest
To give some background: our Travis build conducts a series of individual installations and uses script deployments to upload Docker images to DockerHub if the installation was successful. This specific job is known to fail, so we deliberately marked it as an allowed failure.
Prior to using script deployments we successfully used after_success. However, we find that with deploy the job is no longer allowed to fail, as shown in this build log.
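Roughly, that earlier variant looked like the sketch below (written down from memory, the exact commands may have differed; the build and push lines are assumed to be the same ones shown above):

- stage: Building preCICE
  name: "Ubuntu 18.04.sudo.mpich [Job failure permitted]"
  if: type = cron
  script:
    - docker build -f precice/Dockerfile.Ubuntu1804.sudo.mpich -t precice/precice-ubuntu1804.sudo.mpich-develop .
  # Push only runs if the script phase succeeded; no deploy section involved.
  after_success:
    - echo "$DOCKER_PASSWORD" | docker login -u precice --password-stdin
    - docker push precice/precice-ubuntu1804.sudo.mpich-develop:latest

With this setup, the job was correctly treated as an allowed failure.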
The Travis docs specify that the keys have to match exactly, which should be the case here as well. Am I missing something else here?
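For reference, my reading of the docs is that the matcher could also list only the job name instead of repeating the whole job definition (deploy section included), roughly like the sketch below; I have not tested whether this changes the behaviour:

jobs:
  allow_failures:
    # Match the allowed-failure job by its name alone (untested sketch).
    - name: "Ubuntu 18.04.sudo.mpich [Job failure permitted]"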