I got a few responses on Twitter, and one in particular made me rethink whether this might be a good idea:
It could easily lead to an incredible amount of noise if we raise bugs against every failed test. Also, 1 failed test != 1 bug: if 3 automated tests fail on CI, we don't necessarily need 3 bugs. It could also (as my reply states) devalue what a bug actually is, meaning that if a tester raises a bug, it might get ignored or lost in all the noise.
I also asked our internal QA Slack channel; the responses were informative and again helped steer me away from this potentially noisy and crazy idea.
Both of these points are extremely valid. If fixing a broken test is the number 1 priority, why bother creating a bug? Surely the visibility of a red cross next to a test run is enough to get people working on it? Which is very true.
Another factor to consider is the type of test. If a unit test fails, do we need a bug? Most definitely not. There is more of an argument if an acceptance test fails, though after today's discussion I don't think there is one!
So what started out as a blog post about CI and bugs has provided me with 3 insightful thoughts:
1 - The focus of this blog post: a failed test on CI most definitely does not mean a new bug needs to be raised (whether the process of creating said bug is manual or automated). It would ultimately devalue bugs, it would mean people wouldn't necessarily talk about the underlying issue, and 1 failed test != 1 new bug.
2 - Asking people for feedback is invaluable; it can help shape your opinion and give you a quick sanity check before you go off down a rabbit hole!
3 - Discussions like this are the exact reason I wanted an internal QA community: so we can talk things through and get feedback.