I have a lot of projects that use matrices and build stages to build across different Python versions, plus a stage that checks whether coverage remains at 100%. I need this to work on branches, such as master, and on PRs, including those from forks.
I had this almost working using coveralls.io, but because encrypted environment variables do not work with PRs from forks, I can’t tell Coveralls that all the parallel builds are done.
The advice seems to be to use S3, but it looks like that has the same problem: encrypted environment variables do not work with PRs from forks.
I’ve tried to (ab)use build caches to get this information across, but the build stage that checks coverage only sees cache files for the Python version used to run it.
I can’t see any way through this; what should I do?
I see a fundamental conceptual problem here:
If a PR build were somehow able to push to a third party, all the credentials needed for that push would essentially be public (that’s why the Travis docs call builds like PR builds untrusted).
I would suggest the following (though I’m not sure how well this integrates with coveralls.io in particular): create a second account (or repository, or whatever) and make all untrusted builds push there using unencrypted credentials. This way you can still see the results of PRs, you will have them in your main account (after you merge them), and your main account is shielded from untrusted builds.
I’m not following I’m afraid.
Fundamentally, I’m looking to aggregate some files from the build matrix into a later build stage. I only really want to share those within the Travis job like the build cache does.
It’s unclear whether that works at all, but even so, how would I build this into a job such as this one, which uses both a matrix and build stages? I want the notification to fire once the matrix is done, but before I check for 100% coverage.
Here’s a convoluted workaround that uses the webhook in combination with GitHub’s protected branches for pull requests, and the coveralls.io API for the master branch:
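In outline, the PR half is the standard Coveralls parallel-builds setup (a sketch only; the exact keys are per the Coveralls docs, and the rest of the .travis.yml is omitted):

```yaml
env:
  global:
    - COVERALLS_PARALLEL=true              # each job uploads partial coverage
notifications:
  webhooks: https://coveralls.io/webhook   # fires once the whole build finishes, marking the parallel jobs done
```

The GitHub side is then a protected branch that requires the Coveralls status check before a PR can be merged. Direct pushes to master have no such merge gate, so a later stage has to query the Coveralls API itself to confirm the combined coverage.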
I think that, given the credentials restrictions on PR builds, your best bet to achieve what you want is to use our caching feature. Have you tried defining the cache’s name to ensure that every job uses the same cache?
e.g. CACHE_NAME=JOB1
You would need to set this in every job of your matrix, and also ensure that the jobs don’t overwrite the files you need.
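A hypothetical sketch of what that could look like (the Python versions and the cached directory are placeholders):

```yaml
matrix:
  include:
    - python: "2.7"
      env: CACHE_NAME=JOB1   # same name in every job so they all resolve to one shared cache
    - python: "3.6"
      env: CACHE_NAME=JOB1
cache:
  directories:
    - coverage-data          # placeholder: the directory the jobs would share without overwriting each other’s files
```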
@cjw296 Thanks for updating the thread with everything you’ve tried so far. It’s really giving me a better understanding of what you are trying to achieve.
First, I don’t think that the CACHE_NAME suggestion will work in this case. Neither will the webhook: recipe because it’s only called at the end of a build and you want it at the end of a job.
I’ve reviewed Coveralls’ documentation and I think it could work by calling their API directly via cURL.
You would need to add this command to your .travis.yml file, e.g. in the after_script: of your jobs.
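Something along these lines (the endpoint and payload follow Coveralls’ parallel-builds documentation; COVERALLS_REPO_TOKEN is the repo token, which is exactly what an untrusted PR build can’t be given):

```yaml
after_script:
  # Notify Coveralls that this build’s parallel jobs are done, so it can
  # compute the combined coverage. Requires the repo token.
  - >
    curl -s "https://coveralls.io/webhook?repo_token=$COVERALLS_REPO_TOKEN"
    -d "payload[build_num]=$TRAVIS_BUILD_NUMBER&payload[status]=done"
```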
I’m not quite sure but it’s possible that your contributors would need to register on https://www.coveralls.io/. Maybe this could be an acceptable requirement?
Please let me know what you think, and the results you get if you try this out.
I haven’t tried the CACHE_NAME thing, so thanks for saving me some time. The webhook: thing, which was the coveralls.io suggestion, can be made to work for PRs when combined with a GitHub branch protection preventing a PR from being merged without that Coveralls check posting back.
@dominic: posting to that webhook is exactly what I do on master branches, as I explained above; that’s what I wrapped up in the coveralls-check helper I wrote. BUT it needs credentials plugging in, which Travis prevents on PRs.
Contributors needing to register on coveralls.io is not an acceptable requirement, and it wouldn’t help anyway. How would they plug their credentials into the Travis CI job that processes their PR?
It would be really useful if all the jobs within a build, matrix and build stages included, could share some (limited) disk space between them: stored in the same way as caches, but explicitly available to every job in the build, and only for the lifetime of that build.
Until either that happens (where should I file the feature request?) or you do something different with secrets (which I’m not sure would be a good idea), the kludge I have above, with PRs and master having to be handled in different ways, will have to suffice.
@dominic - as I’ve said a few times now, the hacky solution I currently have does use the notifications/webhook stuff, but it can only be used for PRs, not when I push directly to master: the notifications section of a .travis.yml only runs at the end of a build, not at the end of a stage, so I have no way to tell Coveralls that my parallel builds are done before checking for a result.
Coveralls.io is completely non-functional at the moment. So, back to looking at ways to share a few small coverage files between build stages. How can I do this? Why won’t the cache hack work?
coveralls continues to be flaky, and codecov will only do anything once the whole build is finished.
I don’t really need either of them, to be fair, since all my stuff considers <100% coverage a failure.
What I do need is a way to share some small coverage files between all jobs in a build for both master and PR builds. How can I achieve this?
You can use build caches, but you’ll need a separate coverage job for each configuration. This is logical since each configuration of your project is, strictly speaking, a separate piece of software.
Not sure that’ll help: I need to combine coverage information from all the jobs in the “test” stage to form a complete picture of code coverage; no individual job will hit 100%, but combined they will do.
That’s why I mentioned that “each configuration of your project is, strictly speaking, a separate piece of software.”
Since each of your configurations works independently of the others, it needs to be considered a separate product, and its test coverage needs to be considered separately. The “aggregate score” that you have in mind will show whether each piece of code is tested in some configuration, but it will say nothing about how well each individual configuration is covered.
Imagine, e.g., that you run one half of the tests in one configuration and the other half in another. The sum is 100%, but each product is only half covered.
Of course, that means you need to exclude code from coverage measurement that will only be executed in some configurations. One way is to physically cut it from the sources (e.g. with a preprocessor), since in those configurations it’s effectively dead code. Another is to instruct coverage.py to exclude it from measurement.
@native-api - I strongly disagree, sorry, but your approach is not one I wish to consider. I want to combine coverage from the jobs in the test stage and check that the combined coverage is 100%. I know how to do this on the project side using Python’s coverage tools; I do it happily for Jenkins-based CI with my commercial stuff. I’d love to do it using Travis for all my open source stuff. So, respectfully, if you cannot contribute to a solution to the problem I’m asking for help with, I’d ask that you refrain from commenting further.
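For the record, the project-side part is straightforward; a rough sketch of the shape I’m after, assuming the .coverage.* data files could somehow be made visible to the final stage (which is exactly the bit that’s missing on Travis):

```yaml
jobs:
  include:
    - stage: test
      python: "2.7"
      script: coverage run --parallel-mode -m pytest   # writes a .coverage.* data file
    - stage: test
      python: "3.6"
      script: coverage run --parallel-mode -m pytest
    - stage: coverage
      python: "3.6"
      script:
        - coverage combine                    # merge the data files from all the test jobs
        - coverage report --fail-under=100    # fail if the combined coverage is below 100%
```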
Using any third-party service is impossible due to conflicting requirements:
You require build agents to be authenticated to be able to contribute data, but you refuse to provide them with any means to authenticate themselves because they are untrusted.
The only idea I have is to use the travis console utility to extract information about the current build, locate the appropriate jobs within it and extract the necessary data from their build logs (a build log is the only build result that Travis retains).
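Very roughly, and assuming a public repository so the travis CLI can read logs without authenticating (the grep pattern also assumes the test jobs print their coverage figures in a recognisable form):

```yaml
jobs:
  include:
    - stage: coverage
      install: gem install travis --no-document
      script:
        # List the jobs in this build, then pull a test job’s log and extract
        # whatever coverage lines that job printed earlier.
        - travis show "$TRAVIS_BUILD_NUMBER" -r "$TRAVIS_REPO_SLUG"
        - travis logs "$TRAVIS_BUILD_NUMBER.1" -r "$TRAVIS_REPO_SLUG" | grep '^TOTAL'
```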
It’s just frustrating that there’s no build-wide shared space; with that, I wouldn’t need any third-party service or any hackery. The cache is potentially huge, and I’m talking about a few KB of coverage information here…
What happens is that our cache feature is language-version dependent, e.g. a Python 2.6 job can’t use a cache from a Python 2.7 job.
This behaviour can be overridden with the CACHE_NAME feature, but then packages from all language versions would overwrite each other, possibly leading to incompatible caches.
A possible way to use CACHE_NAME would be to ensure your dependencies are installed in a location specific to the Python version + Django version of each job. You could then also specify a directory shared by every job.
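For example (a sketch; the DJANGO variable and the directory names are placeholders):

```yaml
env:
  global:
    - CACHE_NAME=shared                        # same cache archive for every job in the matrix
cache:
  directories:
    - deps/$TRAVIS_PYTHON_VERSION-$DJANGO      # per-configuration dependency install location
    - shared-data                              # directory every job reads and writes
```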