Deployment from Pull Requests


We have an issue related to deployments from pull requests. It seems other people have also had some issues with this.

One of our projects has a three-stage build process. We want artifacts created in the first stage to be available to the later stages, so we're putting them on S3 using S3 Deployment. They are set as publicly readable, so we can download them again in the other stages without authenticating. The problem comes when we submit a PR: the deployment blocks don't run at all. While I can understand that you wouldn't want an application deployment to be triggered from a PR, it would be fine for us if the PR used some of our S3 storage for artifact management.
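To make the setup concrete, here is a minimal sketch of the relevant parts of our .travis.yml (the bucket name, paths, and stage names are placeholders, not our real configuration):

```yaml
# Sketch only: bucket name, paths, and stage names are placeholders.
jobs:
  include:
    - stage: build
      script: make artifacts
      deploy:
        provider: s3
        access_key_id: $AWS_ACCESS_KEY_ID
        secret_access_key: $AWS_SECRET_ACCESS_KEY
        bucket: my-artifact-bucket
        local_dir: build/artifacts
        acl: public_read
        skip_cleanup: true
    - stage: test
      script:
        # The artifacts are public-read, so no credentials are needed here.
        - curl -fsSLO https://my-artifact-bucket.s3.amazonaws.com/app.tar.gz
        - make test
```

On branch builds this works as expected; on PR builds the deploy block is skipped entirely.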

The alternative would be to use the AWS CLI to upload the artifacts, but the problem there is authentication, since encrypted environment variables also aren't made available to PRs.
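That approach would look roughly like this (a sketch with illustrative names); the secure variables are exactly what PR builds don't receive:

```yaml
# Sketch of the AWS CLI alternative. On PR builds the secure
# variables below are not decrypted, so the upload fails.
env:
  global:
    - secure: "<encrypted AWS_ACCESS_KEY_ID>"
    - secure: "<encrypted AWS_SECRET_ACCESS_KEY>"
script:
  - make artifacts
  - aws s3 cp build/artifacts/app.tar.gz s3://my-artifact-bucket/
```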

There is an example in the documentation demonstrating the use of S3, but it appears to assume a bucket with public write permissions, which is definitely not a feasible solution for us.

One thing I tried was using the Travis cache for artifact management, but sadly I didn't find it to be a completely reliable mechanism. Sometimes when I went to retrieve artifacts from the cache, they were missing.
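The caching setup we tried was along these lines (paths illustrative):

```yaml
# Cache the artifact directory between jobs; this is the part
# that turned out to be unreliable for us.
cache:
  directories:
    - build/artifacts
```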

I’m wondering if anyone else has had this problem and if there is any workaround someone could suggest?




Not being able to store artefacts somewhere like S3 in the described scenario seems like a pretty huge limitation!
It will be interesting to hear the community's thoughts on this one…

When you think about Pull Requests in general, it is paramount to keep in mind that PRs execute arbitrary code and create arbitrary build artifacts that you have little control over. You never know what the PR author will try to upload; even if we allowed deployment from PRs and somehow found a more restricted way to do it, the bucket would effectively be publicly writable.

Hi @BanzaiMan ,

Thank you for the response.

It is unfortunate that Travis isn't allowing users to make their own decision on this and accept the consequences. Feel free to offer us advice on best practices, but offering no choice at all seems quite restrictive to me.

One thing I'm curious about, then: what is the intended usage of S3 in this scenario? Do you just have to set up a publicly writable bucket and that's it? Or do you just accept that it won't work for PRs?



Hi @BanzaiMan

I would really appreciate your input on something. Someone else on our team discovered that there is a CACHE_NAME variable you can set for jobs. We thought this might be a solution and would allow us to use the Travis cache rather than S3 to pass around the artifacts. I set CACHE_NAME on the build stage and then use the same name on the test stage. However, even with that, it still doesn’t work correctly. When I try to retrieve the items I’ve cached, they’re not there.
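What we configured is roughly this (simplified from our real file; the cache name itself is arbitrary):

```yaml
jobs:
  include:
    - stage: build
      env: CACHE_NAME=shared-artifacts   # same name on both stages,
      script: make artifacts             # hoping they share one cache
    - stage: test
      env: CACHE_NAME=shared-artifacts
      script: make test
```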

I can see using the CLI that there are a bunch of files that look like the correct size:

On branch docker_travis_build:
cache-linux-trusty-79ae31d3a183aac6559e08bb737d57a8f3e939b0cfd8d4a059aa9fd288f4a583--cargo-1.29.1.tgz  last modified: 2019-02-07 14:53:14  size: 83.43 MiB
cache-linux-trusty-8191c84abbc0cbc8b4cb1dbbd0167402b96fe828a4b1e639fbe3eb54ff76af07--cargo-1.29.1.tgz  last modified: 2019-02-07 14:28:34  size: 4.15 MiB
cache-linux-trusty-fc38d4bb16937e8159b3a92ca95efb609e4f4227bed8a48fab88cc6e0ee689c7--cargo-1.29.1.tgz  last modified: 2019-02-07 14:49:34  size: 88.41 MiB
cache-linux-trusty-ff4573320af6d6ac9691e9f709834390855b22abe0ddc82750f8164d6661d584--cargo-1.29.1.tgz  last modified: 2019-02-07 14:28:28  size: 4.15 MiB
cache-osx-7f10584bb3303947afc46873707ca0e9ef953384eec7316aba6f2f4b8f202211--cargo-1.29.1.tgz           last modified: 2019-02-07 15:09:26  size: 133.21 MiB
cache-osx-870158efe2b22e98caa770582b83885d283e27291062b4b36d7ca47af6a3eafb--cargo-1.29.1.tgz           last modified: 2019-02-07 15:10:27  size: 133.21 MiB
cache-osx-e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855--cargo-1.29.1.tgz           last modified: 2019-02-07 15:41:06  size: 275.47 MiB

Being able to download these and inspect their contents might have given us a clue, but the CLI doesn't seem to allow that.

The travis file is here. I would really appreciate if you could shed any light on this, as we are really reluctant to just throw this work away.



CACHE_NAME appears in the documentation in this context:

If these characteristics are shared by more than one job in a build matrix, they will share the same URL on the network. This could corrupt the cache, or the cache may contain files that are not usable in all jobs using it. In this case, we advise you to add a public environment variable name to each job to create a unique cache entry:


In other words, it is a way to differentiate the caches that are otherwise identical and can corrupt each other.

It is not a way to share them.
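For example, the intended usage from the documentation is along these lines (job contents illustrative):

```yaml
# Two otherwise-identical jobs get distinct cache entries,
# so they cannot corrupt each other's cache.
jobs:
  include:
    - env: CACHE_NAME=JOB1
      script: ./run-variant-1.sh
    - env: CACHE_NAME=JOB2
      script: ./run-variant-2.sh
```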

Ah ok, I see, thanks. @BanzaiMan

Do you have anything else to suggest as to how we could make this work? Is the publicly writable bucket the only solution?

Btw, I just want to clarify something. The concern is not that the artifacts would be publicly available; that's fine. The issue is that the S3 bucket would be open to abuse.

Another team member has suggested this service: That may work. The issue with it, though, is that there's an element of randomness in the URL that's generated, and I'm not sure how I can share that URL with the other stages.