Commit 069c863b authored by eileen
Kinda just experimenting with this - but also looking at how the
people who do the bulk of the work on CiviCRM product maintenance
co-ordinate that.
What is product maintenance?
I feel collaboration is going really well, but there isn't a way for
collaborators to work together on prioritisation and focus and to
onboard others with an interest in active collaboration.
The ultimate goal is to ship a product that is free of bugs and regressions while ensuring code maintainability is improving and supporting the efforts of contributors to improve the product.
Note I think a key part of this project space is that it is for active
collaborators to co-ordinate their efforts - not for other people to attempt
to influence their priorities.
In real terms we only have limited resources so the intent is basically to:
- Identify and fix recent regressions
- Identify and fix critical issues
- Identify and review review-ready pull requests
- Provide input & mentorship to people trying to create a review-ready pull request
In very practical terms that means keeping the first 3 columns on this board empty and keeping this list as short as possible (still working on an achievable ongoing goal for that).
There are a bunch of other tasks that are done by the same people outside of this definition of product maintenance. For example, a fundamental function of the core team is to maintain all the infrastructure for the release process, for testing and for CiviCRM in general. This is absolutely necessary for product maintenance but I'm not including it in this definition. I'm also leaving security work under the security team hat, and the significant core time that goes into improving compatibility (Drupal 8, WordPress enhancements, accessibility) to live under some other category.
Who is the product maintenance group?
It is an informal voluntary group organised through the product-maintenance channel on chat. It basically consists of the core team, a group of key people (Jitendra (Fuzion), Monish (JMA), Seamus and myself) who put in regular substantive time to this work outside of any customer work or 'scratching their own itch' work, and a number of people who do the same on a more ad hoc basis (there are too many to fairly mention, but I usually make a point of calling out someone who has been contributing notably recently, and Michael McAndrew stands out right now).
A few years back CiviCRM partners were presented with a decision: find a way to fund a full core team, or accept it being cut back to a skeleton team and step up to make that work. Partners were generally clear they preferred to give time rather than money, and we wound up attempting the latter. We found ourselves in a situation where partners & community members were submitting bug fixes that were not getting reviewed due to lack of reviewer resources, long-standing data integrity bugs were being left open while people fixed some very niche bugs, reviewer time was being squandered on issues that were really not review-ready or where the submitter was not responding to reviewer input, and we didn't have a way to prioritise the time people were prepared to put into product maintenance.
It was also apparent that there were some people who were collaborating very well on addressing bugs, code quality and testing issues, and on reviewing, and the best return on effort was to try to improve co-ordination of these people before trying to cast the net wider. I proposed some initial goals, which were basically 'get critical issues down to 0 and keep them there', 'get open PRs under 155', 'get to the point where we have no PRs more than one year old' and 'figure out how to get on top of regressions'.
I feel like we have achieved our initial goals.
We got bugs identified as critical down to zero and kept them there, even after we started to get more aggressive in our classification (in fact I had Stoob protest that he didn't think a bug he reported was actually critical after we treated it as such & turned it around quickly).
We have got PRs down substantially.
We are now aggressively triaging to find regressions, analysing them and putting out fixes in patch releases.
Where next?
I think the goals are basically to keep the first 3 columns here empty and focus on getting the PRs down and I have quite a lot of thoughts on the latter.
My specific goal is that the PR queue should be usable as a 'to-do list' for would-be reviewers, and that someone wanting to do review is able to quickly find a PR that is reviewable. This means we need to figure out how to triage it.
Basically I think there are 2 types of things that should be in the review queue:
1. PRs that are review-ready. I.e. these are PRs where the fix is in good shape with no obvious blockers, there is no doubt as to whether it is a good idea, it has unit tests (unless there is reason to think they might not be required) and the submitter is prepared to respond to feedback and adjust their patch or test a reviewer's proposed alternative.
2. Active work in progress. It's valid to put something into the review queue for feedback or testing while it is still being actively worked on. If it has not had updates or comments for more than 7 days it is not really active.
Anything else I believe should be closed and re-opened when it fits into one of the 2 above categories. It's important to note that NOTHING is lost by closing a PR. The work remains in it. The comments remain there. It can still be a place for further discussion. It can be tracked through an issue tracker - usually gitlab but sometimes your own internal one.
On the other hand there is a real cost to having non-reviewable inactive PRs in the queue. They waste that precious resource - reviewers' time - meaning less review overall is done and reviewers are less motivated.
Some more detailed thoughts on the review queue are over here
What do I do if I want to help?
Jump on the product maintenance channel on chat - our main to-do lists are the first 3 columns of
So, does this mean you will fix any regressions I hit or bugs I see as critical?
Erm maybe. The product maintenance group (like all volunteer workers) works on what I call an 'exploitation contract'. They agree to work for free & be exploited within limits that they probably can't and won't clearly articulate but which they will quickly identify a breach of. So we need to be very mindful of respecting volunteer time. Hence....
The focus is very much on recent regressions. If you upgrade to a release in the first few days or test an rc and identify a regression, we are going to bust a gut to get a new patch release out as soon as humanly possible. If you upgrade after it has been out a couple of weeks we will still put in a major effort, but may target the rc rather than issuing a patch release. If you are on a three-monthly upgrade cycle and you find something that has been broken for the last 2 releases, we will probably aim for a fix to be merged into master rather than targeting an rc. If you are only upgrading every 6 months and find an older regression, we will prioritise reviewing any fix you submit. If you have only just found a regression that has been in the 4.7 or 5.x series for longer than 6 months, then that won't really hit our to-do lists.
Likewise with critical bugs - over the past few months there have been a couple of times where we have expended significant amounts of time because an obscure bug was classified as critical. However, these were bugs that had been in CiviCRM for a long time without being reported, and the submitter did not show a commitment to testing proposed fixes. So we wound up feeling bullied and exploited. This is something we need to avoid. I recall Jon G defining a minor bug as 'a bug one person thinks is critical'. So long-standing bugs may not be classified as critical unless the submitter is also showing positive engagement.