However, this does not fit my use case. I publish my Docker images to Docker Hub for others to use, so I don’t want to push an image during the build/test stage of my Travis run: if I push before the tests have passed, I might publish a broken build. I don’t really understand the article, because I’m not sure why anyone would want to push an image before running their tests. Also, what if the image you pushed to Docker Hub somehow gets updated between the build/test stage and the final push stage? The approach shown in the article seems to have a lot of problems.
Is there any way to share the local Docker image between stages? I run my tests inside the image, which is why I’d like to share it; otherwise, I have to rebuild the image for each stage. Ideally, I would build the image once, run the tests inside it, and, if they pass, move on to a deployment stage that pushes that same image to Docker Hub.
Is there a better way to do what I’m trying to do?
You won’t have concurrency problems if you use a tag that’s unique to the build (e.g. built from the Travis-provided environment variables carrying build metadata). Likewise, since you push to a special tag, your users won’t get this image unless they specifically request that tag.
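For example, the test stage could push under a throwaway, build-specific tag and the deploy stage could pull and retag that exact image. A minimal sketch, where `myuser/myimage` and `./run-tests.sh` are placeholders and Docker Hub credentials are assumed to be available as the encrypted variables `DOCKER_USERNAME` and `DOCKER_PASSWORD`:

```yaml
# .travis.yml (sketch): push under a build-unique tag, promote it on deploy
jobs:
  include:
    - stage: test
      script:
        - docker build -t myuser/myimage:ci-$TRAVIS_BUILD_NUMBER .
        - docker run --rm myuser/myimage:ci-$TRAVIS_BUILD_NUMBER ./run-tests.sh
        - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
        - docker push myuser/myimage:ci-$TRAVIS_BUILD_NUMBER
    - stage: deploy
      script:
        # pull the exact image the tests ran against and retag it for release
        - docker pull myuser/myimage:ci-$TRAVIS_BUILD_NUMBER
        - docker tag myuser/myimage:ci-$TRAVIS_BUILD_NUMBER myuser/myimage:latest
        - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
        - docker push myuser/myimage:latest
```

Since `$TRAVIS_BUILD_NUMBER` is unique per build, concurrent builds never step on each other’s tags, and the deploy stage promotes the exact bytes that were tested.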
You can also use an entirely different image name. After all, if the image’s payload and purpose are significantly different from what you normally upload, it’s effectively a different product, so why should it use the same name?
The documentation’s example doesn’t consider any of this because it’s just a proof of concept. It doesn’t advocate any specific usage scenario; it just shows the principle and leaves the rest up to users to adapt to their specific needs.
But the problem with that is that I would have to push to a unique tag and then clean up that tag after the CI run, which seems quite unnecessary.
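From what I can tell, that cleanup would have to go through the Docker Hub API, something like the sketch below (I’m assuming the v2 `users/login` and repository-tag endpoints here, plus `jq` being available on the build image, and the same placeholder image name as before), which is exactly the kind of extra machinery I’d rather avoid:

```yaml
# sketch: delete the throwaway tag once the build is done
after_script:
  - |
    # log in to Docker Hub's API and grab a JWT for the delete call
    TOKEN=$(curl -s -H "Content-Type: application/json" \
        -d '{"username": "'"$DOCKER_USERNAME"'", "password": "'"$DOCKER_PASSWORD"'"}' \
        https://hub.docker.com/v2/users/login/ | jq -r .token)
    # remove the build-unique tag so it doesn't pile up in the repository
    curl -s -X DELETE -H "Authorization: JWT $TOKEN" \
        "https://hub.docker.com/v2/repositories/myuser/myimage/tags/ci-$TRAVIS_BUILD_NUMBER/"
```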
The other problem (if I’m understanding this correctly) is that jobs for PRs from forks can’t use secrets, so the push to Docker Hub would fail for forked PRs and take the whole build down with it. I’d have to find some way to detect whether the job is running on a fork and skip the upload to Docker Hub. That’s why I was running that step in a deploy stage in the first place: deploy stages handle the fork case for you.
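I suppose I could gate the push on `TRAVIS_SECURE_ENV_VARS`, which Travis sets to `false` whenever secrets are withheld. A rough sketch, with the same placeholder image name as above:

```yaml
script:
  - docker build -t myuser/myimage:ci-$TRAVIS_BUILD_NUMBER .
  - docker run --rm myuser/myimage:ci-$TRAVIS_BUILD_NUMBER ./run-tests.sh
  # secrets are withheld on PRs from forks, so only push when they exist
  - |
    if [ "$TRAVIS_SECURE_ENV_VARS" = "true" ]; then
      echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
      docker push myuser/myimage:ci-$TRAVIS_BUILD_NUMBER
    else
      echo "No Docker Hub credentials on this job (fork PR?); skipping push"
    fi
```

But that’s more conditional logic in my build script for something the deploy stage already handles for free.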