Some builds are successful (e.g., https://travis-ci.org/caronc/apprise-api/jobs/632762260), so I tend to think this is some sort of timing issue. If you restart them, do they all fail? I’m happy to enable the debug feature for this repository so you can troubleshoot further.
I’d definitely appreciate it if you could enable debugging if at all possible. If it’s a timing issue, do I just need to add a `sleep 5s` at the end of the .travis.yml and/or tox.ini file for the Python 3.6 calls? Something like the sketch below is what I have in mind.
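(This assumes the tests are driven by tox with an environment named py36; that part is just my guess.)

```sh
# Sketch: pause briefly before the Python 3.6 test run, purely as an
# experiment to rule out a timing/startup race.
sleep 5s
tox -e py36
```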
It seems weird that this worked perfectly before (2 weeks ago). Also, Python 3.5, 3.7, and 3.8 all build without any problems. Here is the master build I just tried to restart (and it failed again). Maybe enable debugging on that one?
Nothing about this setup (xenial, Python 3.6) has changed in recent weeks. There might have been some underlying changes to your dependencies, so I’d suggest examining those as well. (If, for example, you restart a previously successful build and it now fails, then changes in your dependencies are very likely to blame.)
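One way to check is to have the build print its resolved dependency versions so the log from a passing run can be diffed against a failing one (a generic sketch, not something the images do automatically):

```sh
# Sketch: record the resolved Python dependency versions so that logs from a
# passing and a failing build can be compared afterwards.
pip freeze | sort | tee installed-versions.txt
```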
When I get home, I’ll explicitly create a branch forcing coverage to version 5.0.1 and see what happens; the pin I have in mind is sketched below. Thanks again for all your help so far! I really hope you’re right and the problem is that easy!
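Concretely, I’m thinking of something along these lines in that branch (the exact file the project installs coverage from may differ; this just shows the pin itself):

```sh
# Sketch: force the stable coverage release instead of whatever alpha is
# currently being pulled in, then confirm the resolved version in the log.
pip install "coverage==5.0.1"
pip show coverage
```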
@BanzaiMan: Just out of curiosity, to help with the open GitHub ticket against coverage, are the containers you host internally available to us? The developer isn’t able to reproduce the issue. Alternatively, can you attach an strace to the call?
You can use https://hub.docker.com/repository/docker/travisci/ci-sardonyx. I’ve reproduced the issue with travisci/ci-sardonyx:packer-1542104228-d128723 in particular. (This image may not be exactly the same as the one in use, but it should be similar enough for troubleshooting.)
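For example, something along these lines should give you a shell in that image to poke around in:

```sh
# Pull the image mentioned above and open a login shell in it.
docker pull travisci/ci-sardonyx:packer-1542104228-d128723
docker run -it travisci/ci-sardonyx:packer-1542104228-d128723 bash -l
# Inside the container, the build normally runs as the "travis" user,
# so switching to it gets closer to the real build environment:
#   su - travis
```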
He also observed that it is tied to the alpha version of coverage (5.0a2) that is installed, which stores its data in SQLite. That rather hints that the container has changed at some point between then and now. Is there any way we can roll back from the alpha build and use the previous stable version instead?
Alternatively, strace should show the OS-level error (though you’ll need to filter it out from a huge number of system calls, and given the trace size you’ll probably have to upload it somewhere for examination). A rough invocation is sketched below.
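Something like the following, assuming the tests are driven by tox with a py36 environment (adjust to however the suite is actually started):

```sh
# Sketch: trace only file-related syscalls from the test run and its children,
# writing one trace file per process so the failing open() is easier to find.
strace -f -ff -e trace=file -o /tmp/cov-trace tox -e py36
# Then look for the journal file and the errno it failed with:
grep -n "db-journal" /tmp/cov-trace.*
```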
Stack Overflow questions with this error suggest it can happen if you call commit() in a tight loop (it should instead be called once outside the loop) or if something else is using the .db-journal file at the same time (unlikely on Travis, but auditing should show this).
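To illustrate the commit() pattern those questions describe, here is a generic sqlite3 sketch (not code from coverage itself):

```python
import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER PRIMARY KEY, value TEXT)")

rows = [("a",), ("b",), ("c",)]

# Problematic pattern: committing inside a tight loop creates and removes the
# -journal file on every iteration, which is where a race on that file can bite.
# for row in rows:
#     conn.execute("INSERT INTO results (value) VALUES (?)", row)
#     conn.commit()

# Preferred pattern: do all the work in one transaction and commit once.
for row in rows:
    conn.execute("INSERT INTO results (value) VALUES (?)", row)
conn.commit()
conn.close()
```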