Mark a build "passed with warning" when "allow_failures" jobs fail

Hello Travis Community Forum,

I’m new to this forum, so I tried to do my homework and find an existing feature request for this, but I couldn’t. That surprised me, because what I would like to ask is not new, and is an open issue on GitHub.

Basically, I think that “allow_failures” does not mean “should fail”, so I believe there should be a third state, “passed with warning”, to show, back in the PR, that the Travis jobs didn’t run “perfectly” and could use a closer look.

I don’t think this requires a precise example. But one could still have a look here to see how I would like to use it for “optional checks”.
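For readers who haven’t used it, here is a minimal sketch of how allow_failures is declared today (the job entries are made up):

```yaml
# Minimal sketch of the current behaviour (job entries are made up):
# the "nightly" job may fail without failing the build, but its failure
# is only visible by opening the build in the Travis UI.
language: python
python:
  - "3.6"
  - "nightly"
matrix:
  allow_failures:
    - python: "nightly"
```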


Please describe your intended use case. How would this status weave into your business process?

There’s no such thing as “passed with warnings” in software testing theory – it’s either shippable (pass) or not (fail). It’s deliberate that there’s no grey zone, because that would be a waste of time and nerves: “it’s shippable, but…” – is that “but” bad enough or not? Who decides that, and how? If there’s a need to decide manually each time, automation goes south; if there is an actual algorithm to decide, it can instead be woven into the test logic, converting the “buts” into a pass/fail.

it’s either shippable (pass) or not (fail)

Well, it seems to me that this statement is quite “theoretical”.
I am tempted to answer with a question: What is the point of “allow_failures” then?

I’m not fond of the fact that I can have unnoticed failures, even allowed ones. Your point is that you want a no-brainer “pass” or “fail”, but if there are hidden allowed failures, to my mind that’s hidden information, and there is no point in running those tests since they don’t count.

“Passed with warning” would still be a “passed” state, just like before, in automation context. In review context, which is intrinsically manual, it could prove very useful nonetheless. See the second use case below.

Use cases:

  • On some projects, we are testing compilation against a wide variety of compilers. Some are not a priority concern, but if the test exists, it is there to remind the team that this is still not supported. Besides, this is typically only run in the “extensive testing” case, not for every PR.
  • On the linked example, the use case is fairly different. We want to analyze the branch history at PR time and make sure that it doesn’t introduce problematic content, like a binary file or a large text file. We want to check every commit, since the final state of the code is not enough to describe the history. This is automated in a Travis job, but sometimes a large file is legitimate. So we want to point the reviewer to the “grey zone” so that they can decide whether it is a valid situation or not.
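The history check described above can be sketched with plain git: `git diff --numstat` prints “-” in the added/deleted columns for binary files, which makes them easy to flag commit by commit. A hedged sketch (the demo builds a throwaway repo; in a real job you would run the loop against the PR’s base branch instead):

```shell
# Sketch: flag binary files introduced on a branch, commit by commit.
# The demo repo below is throwaway scaffolding for illustration.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git config user.email ci@example.com && git config user.name ci
echo hello > a.txt && git add . && git commit -qm "init"
base=$(git symbolic-ref --short HEAD)    # master or main, depending on git
git checkout -qb feature
printf '\000\001\002' > blob.bin && git add . && git commit -qm "add binary"

# The actual check: walk every commit the branch adds over $base.
# "git diff-tree --numstat" prints "-  -  <file>" for binary files.
for sha in $(git rev-list "$base"..HEAD); do
  git diff-tree --no-commit-id --numstat -r "$sha" |
    awk -v sha="$sha" '$1 == "-" { print "binary file in " sha ": " $3 }'
done
```

The same awk filter can be extended with a size threshold (e.g. `$1 > 5000`) to catch large text files as well.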

I would like to add that I’m not asking for this feature to be imposed on every project.

It could be activated via a setting, “report_warning: true”, at job level, so that if the job ends up in the “allow_failures” category, its failure would trigger a “passed with warning” state.
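To be clear, report_warning is not an existing Travis key – this is only a sketch of what the proposed setting could look like:

```yaml
# Hypothetical sketch -- "report_warning" is NOT an existing Travis key,
# it is only the setting proposed above.
matrix:
  allow_failures:
    - env: OPTIONAL_HISTORY_CHECK=true   # made-up job selector
      report_warning: true   # proposed: a failure here would yield "passed with warning"
```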

This sounds like it should be an additional piece of information, independent from build status.

I guess there could be e.g.

  • An additional icon next to the build status in the Web UI if not just the build passed, but the allow_failures: jobs also passed
    • I’m wary of adding anything whatsoever to builds where these fail – since that would give the impression that “something is not right” – while for the purpose of making decisions based on build status, those failures are 100% fine.

But this raises further questions:

  • Should we report if only some of them failed, or different ones failed?
  • For the analytics that you described, you will likely need more fine-grained info than “all allow_failures: jobs passed”/“not all allow_failures: jobs passed”, making this feature inherently limited.
  • You will not be able to see this information in the GitHub UI next to the corresponding commits (because it only supports the error, failure, pending and success states). Thus, e.g., you won’t be able to see trends.

As such, you are likely to be better off with custom PR checks/analytics for your task.
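For completeness, a custom check ultimately boils down to a POST against GitHub’s commit-status API, which indeed only knows the four states mentioned above. A hedged sketch (OWNER, REPO, SHA and the token are placeholders; the context name is made up):

```shell
# Sketch of a custom PR check via the GitHub Statuses API.
# OWNER, REPO and SHA are placeholders; the "context" name is made up.
# Note: "state" must be one of error, failure, pending, success --
# there is no "warning" state, hence the idea of a separate context.
owner=OWNER; repo=REPO; sha=SHA
payload=$(cat <<EOF
{
  "state": "success",
  "context": "history-check/large-files",
  "description": "large file on this branch: please review"
}
EOF
)
echo "$payload"
# To actually send it (requires a token with repo:status scope):
# curl -s -X POST -H "Authorization: token \$GITHUB_TOKEN" \
#   "https://api.github.com/repos/$owner/$repo/statuses/$sha" -d "$payload"
```

A separate “context” like this shows up as its own line on the PR page, next to the Travis build status.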

See e.g. for an example of custom PR checks for additional scrutiny.

It is true that GitHub does not allow any “intermediate status”. So you are right, I would not have the feedback on the PR page in the end; I would have to go into Travis to know about the existence of warnings, which is not what I want.
Concerning the global status, as I suggested above, maybe only a subset of jobs could be involved; as long as one of these fails, the warning would be raised.

But again, you are right: since what I really need is to have this feedback in GitHub, the limitation is not necessarily on the Travis side.

I have to admit I don’t get what you mean by pointing me to this PR though, could you explain it?

Another related question: would it be possible to have two (or more) lines of feedback in GitHub from a single pipeline in Travis, maybe one per stage, or configurable? This way I could, for example, get independent feedback for my “optional” stage.

If you expand “View details” near the end of the PR page, you’ll see that that repo has many additional PR checks in addition to the Travis build that signal various things. I guess it may be possible to create likewise custom checks to show the various things about the commit that you want to keep track of.

I don’t know the answer. Asked separately at Multiple Travis PR checks with different configurations.


OK, I get you. In fact, I could create a GitHub App, register it on my project with a token, and use it to add such checks. But I would have to host it somewhere. Maybe that sounds silly, but that’s basically what prevents me from doing it :).