Default behaviour of git clone breaks pronto usage


#1

Hello!

I hope this is the correct place to start issues now.

My recent runs of Pronto have been failing on Travis CI. The same issue does NOT occur locally or in other environments.

The error message itself is

Rugged::OdbError: object not found - no match for id (00ab260b26265995739a5e6fa60fcac7c095b97c)

occurring during the following commands:

    git remote add upstream https://github.com/publicrepopath.git
    git fetch upstream master
    export PRONTO_PULL_REQUEST_ID=${TRAVIS_PULL_REQUEST} && bundle exec pronto run -c upstream/master --exit-code

The failure occurs on the last command (when running Pronto).

The interesting part is that the git object ID in the error message (the one "not being found") is neither of the commits involved in the merge: it is not the head of upstream/master, nor the head of the fork submitting the PR. Rather, it is an object that used to be the head of upstream/master a few months ago.

(See how none of the IDs is the one mentioned in the error message above.)
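One quick way to confirm this diagnosis is to ask git directly whether the reported object exists in the local object database. This is a sketch, assuming it is run inside the Travis checkout; the SHA is the one from the error message above:

```shell
# Hedged sketch: check whether the object id from the Rugged error actually
# exists in the local object database of the CI checkout.
sha=00ab260b26265995739a5e6fa60fcac7c095b97c   # id from the error message
if git cat-file -e "$sha" 2>/dev/null; then
  echo "object $sha is present locally"
else
  echo "object $sha is missing from this clone"
fi
```

`git cat-file -e` exits non-zero when the object is absent, which is exactly the condition that makes Rugged raise `Rugged::OdbError`.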

Suspecting this was a cache problem, I cleared all of the caches and re-pushed the git tree with a different commit hash, but still no dice.

I have seen people running into similar problems online, but all of those tickets were closed and none of the suggested solutions work here.


#2

This is an example of the failing job.

https://travis-ci.org/kubamahnert/gooddata-ruby/jobs/474382895


#3

I have figured out the problem.

It lies in Travis's default call to git clone (https://docs.travis-ci.com/user/customizing-the-build/#git-clone-depth), which includes the --depth switch. The object that is "not found" by Rugged is actually NOT present in the tree it receives, because it is cut off by the default depth setting.

There is obviously some more magic to it, as it does not happen quite deterministically based on my investigation, but it is quite clear that using Travis to run Pronto is NOT POSSIBLE in the default state.
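The effect is easy to reproduce locally without Travis. This is a minimal sketch using a throwaway repository (all names and paths below are hypothetical): a `--depth 1` clone keeps only the newest commit, so any older object, like the one Rugged complains about, is simply absent.

```shell
set -e
# Build a repo with two commits.
git init -q full && cd full
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "old commit"
old=$(git rev-parse HEAD)                  # this will be the "missing" object
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "new commit"
cd ..
# --depth 1 keeps only the tip, analogous to the Travis default (depth 50).
git clone -q --depth 1 "file://$PWD/full" shallow
cd shallow
git cat-file -e "$old" 2>/dev/null && echo "old commit present" || echo "old commit missing"
```

This prints "old commit missing": the shallow clone never received the older commit object, which is the same situation Rugged runs into on Travis.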

It seems to me that this use case is both valid and quite frequent. What's the deal with this? Do you think this behaviour is correct?


#5

Based on your description, I tend to think that the object may or may not be reachable from the clone, depending on the configuration of the build. You can try disabling the --depth flag (as explained in the docs) or running git fetch --unshallow.
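For reference, the `git fetch --unshallow` suggestion can be sketched as follows (to run inside the Travis checkout, e.g. in `before_script`; the `.travis.yml` alternative of disabling the depth flag is described in the Travis docs linked earlier):

```shell
# Deepen the existing shallow clone to the full history, making previously
# missing objects visible to Rugged.
git fetch --unshallow
# Sanity check: should now print "false" (requires git >= 2.15).
git rev-parse --is-shallow-repository
```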


#6

Thanks! That’s exactly it actually, just as I described in my earlier comment.

My question, though, is: is this a good default behaviour? I believe having a Pronto review job is almost universal across Ruby CI environments.

Why does it not work out of the box? Maybe a mention in the documentation would also help.


#7

The default is a shallow clone to save time.

If you have ideas for improving documentation, you can do so by clicking on the button on the upper right corner. Thanks!