Wikipedia talk:WikiProject Articles for creation/July 2021 Backlog Drive

AfC submissions
~7 weeks
1,217 pending submissions

Scoring system


My proposal for the scoring and re-review system:

Scoring:

  • Reviews get 1 point.
    • Reviews of drafts older than the median draft instead get 1.5 points.
    • Reviews that fail a re-review instead count as -1 point.
    • Reviews that miss a copyvio instead count as -5 points. That is, for a draft that was a severe copyright violation, any action besides a decline + speedy deletion nomination is -5 points. (Manually flagged.)
  • Conducting a re-review gets 1 point.
  • Bonus of 3 points for improving a draft that would have clearly been declined and then accepting it. (Manually listed.)

Re-reviews:

  • If the original reviewer disagrees, go to the backlog drive talk page.
  • Each participant must have had at least 10% of their reviews (or 3 reviews, whichever is more) re-reviewed for the drive to end and the awards to be distributed.
  • Each participant must have conducted a number of re-reviews greater than 10% of their number of reviews. Otherwise, each participant's score will be capped at 10 times their number of re-reviews.
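To make the arithmetic behind the proposed rules concrete, here is a minimal sketch of how a participant's score could be computed. All function and flag names are hypothetical; the actual bot implementation is not described in this discussion.

```python
# Rough sketch of the proposed scoring rules, assuming per-review flags
# (older than median, failed re-review, missed copyvio) have already been
# determined. Names are illustrative, not the drive bot's actual code.

def review_points(older_than_median, failed_rereview, missed_copyvio):
    """Points for a single review under the proposed rules."""
    if missed_copyvio:
        return -5    # any action besides decline + speedy nomination on a severe copyvio
    if failed_rereview:
        return -1    # the review failed a re-review
    return 1.5 if older_than_median else 1.0

def drive_score(reviews, rereviews_conducted, improved_accepts):
    """Total score: review points, plus 1 point per re-review conducted,
    plus a 3-point bonus per manually listed improved-and-accepted draft.
    If fewer than 10% of the participant's own reviews were matched by
    re-reviews they conducted, the score is capped at 10x that count."""
    total = sum(review_points(*r) for r in reviews)
    total += rereviews_conducted
    total += 3 * improved_accepts
    if rereviews_conducted < 0.1 * len(reviews):
        total = min(total, 10 * rereviews_conducted)
    return total
```

For example, 20 ordinary reviews with only 1 re-review conducted would be capped at 10 points under this reading of the rules.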

Thoughts requested; please express either a preference for this system or the previous system we used (or a mix of both - I'd really like at least the bonus for reviewing older drafts), which I will copy-paste here for reference:

  • 1 point is earned per review.
  • 1 point is earned per re-review on other users' drive pages.
  • A bonus point is earned when you decline a submission as a copyright violation (WP:COPYVIO), and the page is then deleted by an admin.
  • 2 points are lost when two users fail a user's review. As the review has already earned one point, this counts as a loss of one point.

(The implementation of the scoring system isn't held up on this, but it would still be nice to get it established early.)

Thank you! Enterprisey (talk!) 06:09, 1 July 2021 (UTC)

I think the proposed scoring is fine. About the re-review requirements, though, how exactly are re-reviews performed and tracked? Monitor the AfC recent accepted articles showcase and then re-review? How would one find a recently declined draft to re-review? And is everything tracked here on this talk page, whether disagreed with or endorsed? -2pou (talk) 03:05, 4 July 2021 (UTC)
Enterprisey, how will drafts that are held up by existing redirects pending deletion under CSD G6, or by histmerge requests, be counted? – robertsky (talk) 08:39, 4 July 2021 (UTC)
First/new scoring system proposed sounds good! — Bilorv (talk) 23:02, 7 July 2021 (UTC)
Oh hang on, Enterprisey, "10% of their reviews (or 3 reviews, whichever is less)" should be "10% of their reviews (or 3 reviews, whichever is more)", right? — Bilorv (talk) 23:25, 8 July 2021 (UTC)
@Bilorv This could just be to prevent the drive continuing for too long. ―Qwerfjkltalk 19:15, 10 July 2021 (UTC)
Thanks for the catch; fixed. Enterprisey (talk!) 08:50, 15 July 2021 (UTC)
@Enterprisey I like your proposals. As I've said below in response to those seeking different barnstar trigger points, I think none of us need have concerns if we do not get it right this time. There has been so much learning happening that teething troubles are inevitable, expected. If something cannot be tweaked for this drive we can use it as a learning experience and make it better for the next drive.
I'm simply impressed with your skill and that of those working with you, and grateful for the fact that you are insane enough to take this project on. FiddleTimtrent FaddleTalk to me 11:51, 8 July 2021 (UTC)

Is the WP:QUERY, bot, or whatever is being used to tally the reviews able to detect reviews of drafts that ended up deleted? (G11, G12, etc.) –Novem Linguae (talk) 23:11, 7 July 2021 (UTC)

Rewards are insufficient :)


I thought I'd "earn" a barnstar by reviewing a few articles, but the "cheapest" BS is 50 reviews? Sorry, I think I'll go back to my usual hunts (writing new articles/reviewing them for WikiProjects I am involved in/AfDs). This is just a note that IMHO if you wanted to incentivize people with "barnstars", you put your threshold way too high to attract most (i.e. you are not going to get a long tail effect here, you are asking people to be super active from the start). Piotr Konieczny aka Prokonsul Piotrus| reply here 04:52, 8 July 2021 (UTC)

@Piotrus While you have a point, and I agree we should lower the bar, 50 is only just under a couple a day.
This is the first drive since 2014. There will be teething troubles. Why on earth would we get it right first time? Please try not to be negative. Your contribution is positive, even if the way you told us what you feel is required wasn't quite.
What level of reviews would you wish for as the first award qualifier? FiddleTimtrent FaddleTalk to me 07:31, 8 July 2021 (UTC)
Timtrent, I wish you good luck, sorry if my tone was a bit negative. You are doing a good job. What I'd suggest is to add awards at the 1, 2, 5, 10, and 25 reviewed-articles marks, to motivate the long tail contributions. Piotr Konieczny aka Prokonsul Piotrus| reply here 07:39, 8 July 2021 (UTC)
@Piotrus It's a good suggestion. Even if it can't be achieved for this drive I'm certain folk will wish to consider it for future ones. We're having to re-learn everything the previous folk already learned, so forgive the teething troubles, keep making good suggestions, and join in for the pure fun of it. Who knows what you may find! What if you topped the leaderboard? FiddleTimtrent FaddleTalk to me 09:48, 8 July 2021 (UTC)
As someone who has been considering helping at AfC for a few months now but so far am inexperienced in the matter, and thus may be a target demographic for this drive, I broadly agree with Piotrus. It would be useful to incentivise dropping in for just a day or two. The current ongoing GAN Backlog Drive for example has its first barnstar at 3 reviews, which is small enough that a new reviewer might see it as an achievable goal to try and help out. 50 on the other hand is a somewhat imposing number, and even 15 feels like a commitment. It may actually not be that much of a commitment timewise, but someone inexperienced wouldn't know that (I suspect it's easier than a GAN but I don't know what the conversion rate might be). There may not be a need for awards at 1, 2, 5 reviews, etc., but I would say at least one award, perhaps the brownie, should be for a single-digit contribution. CMD (talk) 10:48, 8 July 2021 (UTC)
@Chipmunkdavis definitely excellent suggestions. I know we will strive to improve this drive, the first for many years, but I also know you will understand if not all improvements can be made until the next one.
You are most assuredly part of the target demographic. Old hands, too. FiddleTimtrent FaddleTalk to me 11:39, 8 July 2021 (UTC)
The GA backlog drive barnstar is 3 points, now there's bonuses, and used to be 2 reviews. Here, 15 points might be 9 normal reviews (some old) and a re-review and improving one draft so it can be accepted. An average GAR would take me 3 hours to conduct, and an average AFC review takes me 60–120 seconds. I suppose someone for whom it's not routine might average 5–10 minutes on a first batch of reviews, after some time reading the rules. But of course, it depends what types of articles/drafts you choose to look at. I've spent 15 hours+ on one GA review, 15 minutes+ on deciding whether to accept/decline certain drafts, and an hour+ improving drafts so that they can be accepted.
I would support something small (are strawberries lesser than brownies? How about a goat?) at 3 points, as something very achievable just for taking a look and dropping in. — Bilorv (talk) 23:25, 8 July 2021 (UTC)
In theory it should be pretty easy to just edit the page and reduce some of the requirements. If we can get a consensus for that, I would support it. It may also be worth looking into replacing the brownie with some kind of barnstar, since apparently non-barnstars sometimes aren't as motivating. I see little downside and mostly upside to handing out more bling. –Novem Linguae (talk) 04:45, 9 July 2021 (UTC)
Also, are the awards meant to be based on "Number of submissions reviewed" or "number of points" (the table uses the former)? Pahunkat (talk) 08:23, 9 July 2021 (UTC)
Number of points; table is wrong. Enterprisey (talk!) 06:01, 20 July 2021 (UTC)
The current table is definitely wrong from the start, since it may not have counted deleted drafts. It is meant to be a temporary gauge anyway until a scoring chart is up. – robertsky (talk) 14:44, 20 July 2021 (UTC)


Proposals for Awards


Let's create some proposals, and have a very brief consensus-forming discussion. I've just created the heading. Pinging known interested parties @Pahunkat, Novem Linguae, Bilorv, Chipmunkdavis, and Piotrus: for formal input in what they would like to see, either individually or collectively, and consensus forming. FiddleTimtrent FaddleTalk to me 10:02, 9 July 2021 (UTC)

I note that we have points and we have articles reviewed. I will make a proposal based on that, below. FiddleTimtrent FaddleTalk to me 10:39, 9 July 2021 (UTC)
I did some analysis. If we take the list of points of everybody signed up, and divide it into 6 equal groups, with the bottom group not getting any award, then our point thresholds become as follows: 5, 15, 50, 100, 200. This would be the most even, balanced way to distribute our 5 awards. And it also lowers the barrier to entry for the first 2 awards. That is my new suggestion. Although if there is not strong consensus for any one option, I think leaving it as is with no change is also reasonable. –Novem Linguae (talk) 22:27, 30 July 2021 (UTC)
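The sextile analysis described above (sort everyone's point totals, split into 6 equal groups, use the group boundaries as thresholds) can be sketched as follows. The function and sample data are illustrative; the real thresholds (5, 15, 50, 100, 200) came from the actual sign-up data, which is not reproduced here.

```python
# Sketch of the threshold analysis: divide the ranked point totals into
# `groups` equal-sized groups and read off the lower boundary of each
# group above the bottom one. Sample data is made up for illustration.

def award_thresholds(points, groups=6):
    """Return the lower boundary value of each group above the lowest."""
    ranked = sorted(points)
    n = len(ranked)
    return [ranked[(n * g) // groups] for g in range(1, groups)]
```

With 5 awards and 6 groups, the bottom group earns nothing and the five boundary values become the award thresholds.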

Implementation


If anyone wants to go ahead and update the awards table with the results of this discussion, or even if they think it would be an improvement, please feel free. The current leaderboard will use whatever awards system is in the table. Enterprisey (talk!) 03:02, 28 July 2021 (UTC)

It looks like the teamwork barnstar was previously awarded at 25 points. ―Qwerfjkltalk 06:28, 28 July 2021 (UTC)
The modest barnstar might be useful too. ―Qwerfjkltalk 08:14, 28 July 2021 (UTC)

Re-reviews


Please can you explain how we are supposed to re-review others' reviews? I've reviewed over 300 drafts since July 1st but have no idea how to re-review other users' reviews, so will be penalised. Theroadislong (talk) 11:03, 9 July 2021 (UTC)

My understanding is that this is a work in progress and will be something we're doing in some centralised fashion closer to the end of the drive. — Bilorv (talk) 11:51, 9 July 2021 (UTC)
I have a vague memory of the prior model, in 2014. The prior scheme may be found here. Or at least it is there to the extent that it is visible! FiddleTimtrent FaddleTalk to me 17:10, 9 July 2021 (UTC)
Somewhere was a table of who said what about which. FiddleTimtrent FaddleTalk to me 17:12, 9 July 2021 (UTC)
And how do we find the recently reviewed drafts in order to re-review them? It all seems so tiresome and reduces the number of actual reviews I will take on. Where do we place the AFCDriveQC templates? Theroadislong (talk) 17:17, 9 July 2021 (UTC)
@Theroadislong We are in the teething troubles phase. I think we need to await a technical answer. FiddleTimtrent FaddleTalk to me 17:24, 9 July 2021 (UTC)

Same topic, different point (obvious as it may be): I think the re-review feature is important for maintaining the standard of reviews, and this should be sufficiently reflected when it comes to dishing out the bling and glory. A quantitative target may otherwise encourage shoddy reviews just to hit the numbers. I think this was pretty well built into at least the first scoring proposal I saw; just hoping it survives the discussion. --DoubleGrazing (talk) 11:10, 10 July 2021 (UTC)

At the moment I can't see anywhere to record a re-review. Should we create a template that can be placed on an article's talk page, or perhaps each user who wants to participate should create a subpage either of this page or in their userspace where they can list them? Pi (Talk to me!) 23:57, 13 July 2021 (UTC)

See #Re-reviews are open for business. Enterprisey (talk!) 08:52, 15 July 2021 (UTC)

Oops


I didn't see that I had to sign up. Are my 55 reviews not going to qualify? Clarityfiend (talk) 06:33, 13 July 2021 (UTC)

@Clarityfiend I think it was said somewhere that they will count all your reviews during the drive, when you sign up (including reviews before you signed up). ―Qwerfjkltalk 06:41, 13 July 2021 (UTC)
No, it says if you sign up in the first week, they count. Looks like I'm SOL. Clarityfiend (talk) 11:34, 13 July 2021 (UTC)
Clarityfiend, just sign up first. People are still adding themselves to the list at this point in time. – robertsky (talk) 15:15, 13 July 2021 (UTC)
Clarityfiend, we had a discussion somewhere about this. I don't see it here so maybe it was at WT:AFCR. I think we were leaning in the direction of auto-counting all reviews for the month of July. –Novem Linguae (talk) 05:33, 14 July 2021 (UTC)
All of July sounds good to me, unless anyone objects. Enterprisey (talk!) 08:52, 15 July 2021 (UTC)

I can confirm that the scoring script doesn't currently check when you signed up. Stuartyeates (talk) 23:57, 15 July 2021 (UTC)

@Clarityfiend: all your July reviews should count, defo, even the pre-sign-up ones. Unless, of course, you're very close to beating me to some bling, in which case they defo shouldn't. Hope that answers your question. --DoubleGrazing (talk) 06:26, 17 July 2021 (UTC)

No chance of that. Thanks. Clarityfiend (talk) 06:33, 17 July 2021 (UTC)

Leaderboard


The Drive Page notes that there is an automated scoring system being discussed/created here, but I don't see any discussion on that - If it's not too technically difficult, I think there ought to be a leaderboard put up pretty soon (we're nearly halfway through the month). Even if the scoring system isn't 100% solidified, it should at least be possible to have a leaderboard tracking the number of reviews (as opposed to points) each reviewer has amassed. AviationFreak💬 02:01, 14 July 2021 (UTC)

AviationFreak, if you're familiar with Quarry, you can just fork something like this and then hit "Submit Query" to refresh it. Credit to KylieTastic and Robertsky and whoever else I forked this from. –Novem Linguae (talk) 05:33, 14 July 2021 (UTC)
P.S. I decided to boldly add a leaderboard to this page just now. Hope it helps. –Novem Linguae (talk) 05:40, 14 July 2021 (UTC)
Thanks! Nearly done with my stuff, but we should keep that leaderboard up. Enterprisey (talk!) 06:12, 14 July 2021 (UTC)
Thanks for adding the leaderboard! :) Just to clarify, the review tally is exactly that; doesn't include the scoring yet? And looking at the figures, I can't help remarking on the range of percentages — acceptance rates go from 0% to 100%! (Are reviewers selecting what they feel comfortable with? Or erring on one side or the other? Or applying wildly differing criteria? Just wondering.) --DoubleGrazing (talk) 06:34, 14 July 2021 (UTC)
Yes, no scoring yet, the leaderboard I added is just AFC review counts. As to the wide spread of percentages... Front of queue patrolling can lead to more declines. So can focusing only on easy drafts or drafts with few references. I think I computed the average the other day, and it averages out to 21% accept rate, 76% decline rate, 3% reject rate. –Novem Linguae (talk) 08:23, 14 July 2021 (UTC)
The participants list contains 81 names, but the leaderboard has 180 entries, so it looks like those are stats for all reviewers and not just those who have decided to join the drive. Please fix this so that those of us who have not signed up are not included in the contest leaderboard. Thanks! --bonadea contributions talk 08:00, 14 July 2021 (UTC)
Bonadea, updated. – robertsky (talk) 20:28, 17 July 2021 (UTC)

Points and reviewing


Are we hellbent on doing points and checking other people's work? Might be simpler and more efficient to just do straight up # of reviews. I imagine anybody doing a bad job of AFC reviewing will be discovered naturally through other means, and could be disqualified on a case-by-case basis. –Novem Linguae (talk) 08:27, 14 July 2021 (UTC)

I'm very unconvinced that any bad AFC reviewing will be discovered naturally: some types would be (accepting obviously bad drafts) and others likely wouldn't (applying a much too harsh standard, missing copyvio checks etc.). And even if you discover a bad accept/decline, determining that it is a bad reviewer (not just an acceptable level of human error) requires co-ordination. This is particularly the case given the overwhelming NPP backlog (less scrutiny on accepts). Given the large influx of new participants and an incentive for rapid-scale reviewing, we definitely need the re-reviewing. — Bilorv (talk) 13:21, 14 July 2021 (UTC)

Re-reviews are open for business


See Wikipedia:WikiProject Articles for creation/July 2021 Backlog Drive#Reviewing Reviews. (05:56, 20 July 2021 (UTC): Instructions used to be in this talk page post, moved them there.)

I'll have the bot count the re-reviews later.

Thanks, and please let me know if you have any comments. Apologies for the delays. Enterprisey (talk!) 08:30, 15 July 2021 (UTC)

Just a question about re-reviews - do we have to pick random entries from the logs or can we choose which ones to re-review? Pahunkat (talk) 09:00, 15 July 2021 (UTC)
Pahunkat, you can choose, although try to evenly distribute them over the participants; see the scoring system at the top of this talk page (which I still need to copy over). Enterprisey (talk!) 09:10, 15 July 2021 (UTC)
The subpages of the Participants page are only showing 30 participants? Theroadislong (talk) 10:44, 15 July 2021 (UTC)
Theroadislong Looks like Enterprisey is manually creating the pages in alphabetical order, but not all are up yet. Pahunkat (talk) 14:15, 15 July 2021 (UTC)
When should we be failing a re-review? There's a boundary of flexibility based on inclusionism/deletionism and where each person's threshold "percentage chance of surviving AFD needed to accept" is. Am I failing it if there would have been a different outcome if I had been the one reviewing, or if I think that no reasonable reviewer could have given that outcome? Am I fine to comment directly on the re-review page if I want to say "I would have done this differently, but I think this could be reasonable", or "Can the reviewer give me some more explanation on which sources they thought count towards notability"? And can the same review be re-reviewed more than once (like someone who takes issue with someone else re-reviewing something a fail)? — Bilorv (talk) 20:21, 15 July 2021 (UTC)
I'm still not clear about re-reviews; do I place the templates on the draft's talk page like this [1] for example? You also state that "Each participant must have conducted a number of re-reviews greater than 10% of their number of reviews", so I have to re-review at least 70 other users' reviews, which seems rather onerous and could take a few days and puts me off reviewing more. Theroadislong (talk) 21:08, 15 July 2021 (UTC)
@Theroadislong In part the burden of re-reviews slows the big hitters down a tad to allow the less productive folk to catch up. But you get points for a re-review. Now, points may not motivate you, but I bet quality does. As a big hitter, your re-reviews are likely to be more incisive, and help the overall project FiddleTimtrent FaddleTalk to me 22:56, 15 July 2021 (UTC)
@Theroadislong Wikipedia:WikiProject Articles for creation/July 2021 Backlog Drive/Participants/15 is a good example of placing a re-review. FiddleTimtrent FaddleTalk to me 22:58, 15 July 2021 (UTC)
@Timtrent Ahhh, OK thanks for that I've been placing the re-review on the draft talk pages, I find all these instructions very vague and complicated! Theroadislong (talk) 07:30, 16 July 2021 (UTC)
@Theroadislong We are very much in the teething troubles phase. But the drive is doing its job. FiddleTimtrent FaddleTalk to me 17:51, 17 July 2021 (UTC)
@Bilorv I go back to the June 2014 re-reviews where we commented to justify a fail, but, broadly, accepted a pass. If we are all assumed to review to the same standard, then our "different outcome" and our view on the chances of failing an immediate deletion process ought to be broadly congruent.
Nothing wrong with two different opinions on a re-review. That builds consensus. FiddleTimtrent FaddleTalk to me 22:53, 15 July 2021 (UTC)
@Enterprisey I don't think you have every reviewer in the pages listing their reviews (more participants in the drive than prefixed pages). Tech glitch? FiddleTimtrent FaddleTalk to me 07:16, 16 July 2021 (UTC)
I'm uploading them slowly because the process I have for creating them is pretty bad. I'll automate it soon. Enterprisey (talk!) 07:45, 16 July 2021 (UTC)
I had a feeling that might be the case. More power to your elbow. FiddleTimtrent FaddleTalk to me 08:20, 16 July 2021 (UTC)
Enterprisey, fyi, typo in one page title: Wikipedia:WikiProject Articles for creation/July 2021 Backlog Drive/Participants/Antan0. I have moved it to Wikipedia:WikiProject Articles for creation/July 2021 Backlog Drive/Participants/AntanO cuz it appeared as a red link in the leaderboard table. – robertsky (talk) 20:38, 17 July 2021 (UTC)
Thanks! Enterprisey (talk!) 06:02, 20 July 2021 (UTC)
Update: I'm doing another run of all the participants. I'll get it down to daily updates after this. For my fellow BAG members, if anyone wants to take a look at my BRFA, that'll save me some laptop electricity :) Enterprisey (talk!) 06:04, 20 July 2021 (UTC)

Duplicate reviews


Any way to flag duplicate reviews? (Such as by JSFarman here ("Pass/Fail/Duplicate"?)) Bogger (talk) 15:28, 17 July 2021 (UTC)

I introduced Invalid to flag such a situation (sorry about the bug); instructions updated. Thanks for pointing that out! Enterprisey (talk!) 05:57, 20 July 2021 (UTC)

Leaderboard


I've updated the leaderboard, tweaked to include columns for comments. I'm not sure whether these should be included in the scoring. Stuartyeates (talk) 20:36, 15 July 2021 (UTC)

Why did everyone's edit count go down with this edit [2]? Theroadislong (talk) 21:49, 15 July 2021 (UTC)
Beats me. I just followed the instructions on updating it. FiddleTimtrent FaddleTalk to me 22:50, 15 July 2021 (UTC)
It's because in my version, everyone's counts included drafts you'd commented on, and I forgot to publish my query. It's https://quarry.wmflabs.org/query/56812 BTW. Stuartyeates (talk) 22:52, 15 July 2021 (UTC)
I've updated the leaderboard again based on https://quarry.wmflabs.org/query/56812 The tot_reviews column is accept + decline + reject + comment. I've got the query pretty much figured out, so if anyone wants any other kinds of counts, let me know. Stuartyeates (talk) 20:41, 16 July 2021 (UTC)
I've updated the leaderboard again and this time it has wikilinks to the participants. Stuartyeates (talk) 21:02, 16 July 2021 (UTC)
I've put a considerably more radical update on User:Stuartyeates/sandbox, https://quarry.wmflabs.org/query/56841 if anyone is interested. It probably needs to be limited to the last dozen or so links, but I've not yet worked out how to do that. Stuartyeates (talk) 21:30, 16 July 2021 (UTC)
It is an honor to be among the top concat("'s. Clarityfiend (talk) 22:47, 16 July 2021 (UTC)
Clarityfiend: fixed. Stuartyeates (talk) 23:30, 16 July 2021 (UTC)
Stuartyeates, Hmm... how about just a link to the re-review pages? – robertsky (talk) 18:18, 17 July 2021 (UTC)
I agree. ―Qwerfjkltalk 18:44, 17 July 2021 (UTC)
I've updated with the re-reviews and also deleted the tot-reviews column. Stuartyeates (talk) 20:13, 17 July 2021 (UTC)
Stuartyeates, I have also updated the list of participants in the query: my fork. – robertsky (talk) 20:29, 17 July 2021 (UTC)
Updated again using User:Robertsky's excellent fork. Stuartyeates (talk) 04:20, 18 July 2021 (UTC)

Review quality


Ok, none of us will get anywhere near 100% re-reviews, but this drive is providing higher quality reviews than the June 2014 drive. YMMV.

I have re-reviewed a good number. I'm trying to review an accept and a decline for the same reviewer, not always managing it. I've found some borderline decisions, but have not yet found any I would fail. The borderline ones have made me think hard.

Back in 2014 we had quite a swathe of failures in the reviews. Regrettably one editor was responsible for many of those, and that may be why we lost our appetite for drives. Just over halfway through, and I think this one is showing all the right signs. FiddleTimtrent FaddleTalk to me 17:48, 17 July 2021 (UTC)

  • I agree, I've not done much general checking but what I have has turned up no issues; also I've looked at the AfD and PROD candidates and there are no major worries (just a couple of accepts I disagree with). Also I was thinking this backlog may dump on the NPP, but there is no noticeable change in trend in their stats. In general this has exceeded my expectations by a lot, and without some key people who are busy, we may even 'clear' the damn backlog (which I would take as under 1000). KylieTastic (talk) 18:51, 17 July 2021 (UTC)
    We started at 4000, and got to 2000 by the middle of the month. We're on pace to clear the backlog entirely. That would be super cool :) –Novem Linguae (talk) 20:00, 17 July 2021 (UTC)
    My hope is that we clear the time backlog, rather than the numeric backlog. I would like to see turnaround time to be sub one month, ideally a few days at the most.
    Even a questionable accept decision at review only becomes a true failure if more than one of us re-reviews it as such.
    I don't mind if material is sent to AfD. All we are asked to do is to use our knowledge and instinct to accept drafts with a better than 50% chance of survival. In other words we should be making borderline acceptances, just not ones that are a gamble. FiddleTimtrent FaddleTalk to me 20:59, 17 July 2021 (UTC)
    I think the practical acceptance threshold at AfC is too high. A reject can feel like the path of least resistance—it feels like the default action. I only have to establish one reason why it should be rejected, rather than establish every condition for acceptance. I'm accountable mostly to the draft creator (generally a newbie or paid editor where I can write almost a boilerplate response to any question asked) rather than the experienced editors at NPP and AfD. There's no follow-up to a reject but I'm on the hook for follow-ups to the accept (like AfD) indefinitely. And sometimes with accepts I have all this G6, round robin or histmerge stuff to request. None of these are good reasons to reject, but they are non-trivial subconscious biases that can change how people act. I try to be aware that I have these biases and force myself to ask, "if I didn't have the pain in the arse of actually having to implement this action, which would I choose?" Same is true of people doing copyvio checks and follow-up properly. — Bilorv (talk) 22:51, 17 July 2021 (UTC)
    @Bilorv I'm going to argue with being on the hook should something we accept go to AfD. I care enough to learn from the outcome, not the nomination, but I choose a steadfast neutral stance in the discussion if I contribute to it at all. I am not an advocate for the article, just a reviewer working as well as I am able. I don't view it that I got it wrong. I view it that my assessment of a better than 50% chance of surviving an immediate deletion discussion was assessed differently by the nominator, and I accept their right to think differently from me. FiddleTimtrent FaddleTalk to me 17:17, 18 July 2021 (UTC)
    I don't know if this is what @Bilorv: meant also, but for me reject isn't the path of least resistance; that would be decline, which can sometimes have a whiff of passing-the-buck. A clear accept is also easy, and quite satisfying, even. The one I find hardest is to reject: you're pretty much killing that draft, and that (possibly noob) editor's hopes and dreams, by saying "no way, not now and not ever". That's why (rightly or wrongly) I try to save my rejects for the ones that seem beyond saving (hence my pathetic reject %!). --DoubleGrazing (talk) 13:51, 19 July 2021 (UTC)
    Reject was added for troublesome re-submitters. Personally I believe it is WP:BITEY and should not be used on first submissions. Also for a decline you can just say it has not yet been shown to be acceptable from what is presented, for a reject you need to go check that even the worst written submission isn't actually notable. So I see no need to BITE or waste time doing a WP:BEFORE when the majority of junk posts are not resubmitted. Some do choose to do more checks and reject, that's fine. Cheers KylieTastic (talk) 14:30, 19 July 2021 (UTC)
    @DoubleGrazing: all of my comment was meant to refer to declines, rather than rejects. I forgot completely that there were three options rather than two and was just using "reject" as the literal English word meaning "the opposite of accept". — Bilorv (talk) 15:09, 19 July 2021 (UTC)

Re-review quality


Oddly I've come across more re-review issues than review issues, i.e. fails that as far as I can tell are not backed by policy. Such as: there is no requirement to ever reject rather than just decline (and I have a general dislike for people rejecting first submissions as WP:BITEY). Also another fail based on procedural order and preference rather than any policy fail. Frankly I think a fail should require a remedy action to be taken, i.e. AfD/PROD a bad accept or accept a bad decline/reject. Nitpicking things done not quite as you would have done but requiring no action should not be fails; maybe don't mark them as passed, but we do have a wide range of styles that are still acceptable per guidelines. KylieTastic (talk) 20:55, 17 July 2021 (UTC)

@KylieTastic While I see your point regarding a desire to take action if we deem a review to have been an incorrect acceptance, I also understand why that may not be being done. Policy-based or not, it feels rude. We need to get over ourselves on that issue, but I think human nature renders that unlikely. FiddleTimtrent FaddleTalk to me 21:01, 17 July 2021 (UTC)
I also got failed for declining something instead of rejecting something (a draft in another language, for which there is a perfect decline reason but no perfect reject reason). It stings a bit, but hearing other people's feedback is a good thing in the long run, I guess. –Novem Linguae (talk) 01:58, 18 July 2021 (UTC)
See, I don't think this should be a fail? Am I wrong? Enterprisey (talk!) 03:54, 19 July 2021 (UTC)
This is really not a fail from the information provided here. The good news is that this means that the re-reviewer also gets educated, and we improve. FiddleTimtrent FaddleTalk to me 08:02, 19 July 2021 (UTC)
Having looked at this in more detail I have left my own re-review, one that I hope also helps in educating folk. I can see why a rejection might be argued on this one, but it would not be my own choice. I do not view a decline vs a rejection to be a failure. FiddleTimtrent FaddleTalk to me 08:12, 19 July 2021 (UTC)
Agree with @KylieTastic: I'm also finding myself disagreeing with reviews more than I was expecting (although looking at the huge variance in outcome %ages from one editor to the other, I guess I shouldn't have been expecting what I was). But what's more, after a couple of 'fails' I realise I started gravitating towards reviews that I could agree with, so that I don't become a complete party pooper. Which pretty much defeats the purpose of the whole re-reviewing exercise, or at least doesn't make the most of its quality-control potential. --DoubleGrazing (talk) 16:28, 18 July 2021 (UTC)
Maybe someone could develop a clever script that allocates a random review for you to re-review, and you couldn't move on before completing that. And better yet, make it so that it requires a re-review after every ten reviews, to ensure that 10% re-review rate. :) --DoubleGrazing (talk) 16:31, 18 July 2021 (UTC)
I like your idea here, very much
I see no harm in disagreeing with reviews or even with re-reviews. I see you and I have disagreed on a re-review in one area, for example, but neither you nor I will fall out over it. Having divergent opinions and interpretations is absolutely fine. Sufficient opinions, even divergent ones, create eventual consensus FiddleTimtrent FaddleTalk to me 17:12, 18 July 2021 (UTC)Reply
You can accomplish this yourself with a random number generator (which I've been using to pick what reviews to re-review). I am skipping ones I think I'm not qualified to do (like a history topic), but not difficult decisions. — Bilorv (talk) 00:19, 19 July 2021 (UTC)Reply
I'm gonna make one of those once I get the points-based leaderboard going. Enterprisey (talk!) 03:54, 19 July 2021 (UTC)Reply
DoubleGrazing, fyi, I don't mind having fails on my reviews. :3. I am a relatively new reviewer, and sometimes find it difficult to moderate my stance for approving/declining rationale. The re-reviews will at least validate/invalidate some of my decisions. To me the re-reviews is an exercise to gauge where we are relative of each other, and a learning experience. – robertsky (talk) 21:50, 18 July 2021 (UTC)Reply
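The random-selection approach suggested above (pick reviews to re-review with a random number generator) can be sketched in a few lines of Python. The list of reviews here is invented purely for illustration; in practice it would come from a participant's drive log page.

```python
import random

# Hypothetical list of reviews eligible for re-review (in practice,
# scraped or copied from a participant's drive log page).
reviews = ["Draft:Example A", "Draft:Example B", "Draft:Example C",
           "Draft:Example D", "Draft:Example E"]

pick = random.choice(reviews)       # one random review to re-review
batch = random.sample(reviews, 2)   # or a small batch, without repeats
```

Using `random.sample` rather than repeated `random.choice` avoids re-reviewing the same item twice in one sitting; skipping topics one isn't qualified to judge (as described above) can be done by simply drawing again.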

It feels harder to make an inroad

We're in the final third of the drive and it is starting to feel a bit like heavy lifting with the oldest submissions. We've gotten a load of the easier ones, or is that just the way I feel at present? Getting down to, and below, a 1,000-submission queue feels like hard work.

Well done, and not just the big hitters. Those of you who've only dipped your toes into the water, it's really good to have you here. FiddleTimtrent FaddleTalk to me 15:41, 20 July 2021 (UTC)Reply

Yup, feels like we hit the wall. I guess some are getting review fatigue, but we've also had more submissions, with over 300 yesterday (inc. deletions). Also, I worked on some that took an hour or two to finish and then accept, while the submitter just kept submitting more, so no backlog gain. I've updated my expectations from "maybe we can clear it" to "let's hope we can get sub-1000 and clear 3 months"! KylieTastic (talk) 16:07, 20 July 2021 (UTC)Reply
Yes I agree, it feels like wading through treacle, with the oldest ones usually being the most conflicted and difficult, I had a day off today and went to the beach and have other work to do tomorrow but will get back to the grindstone soon. Theroadislong (talk) 16:09, 20 July 2021 (UTC)Reply
I am sorta taking a break for this week, because of:
  1. there was this attack in a local school that had one student killed on Monday morning. kinda broke some psyche nationwide. increasing covid case count doesn't help in cheering up.
  2. public holiday Hari Raya Haji on Tuesday, cheered up a bit. news of rolling back to semi-lockdown over covid cases doesn't help in cheering up.
  3. taking time off from work in general on Thu and Fri. start of semi-lockdown on Thu definitely will not help in cheering up. – robertsky (talk) 16:21, 20 July 2021 (UTC)Reply
  • It would have been good if we could have closed submissions for two weeks. The progress would have been quicker and thus more motivational. Without 200+ submissions a day we would have cleared completely by now. KylieTastic (talk) 16:44, 20 July 2021 (UTC)Reply
Yes it feels like submissions have increased exponentially the past week or so. Theroadislong (talk) 16:56, 20 July 2021 (UTC)Reply
It always does feel like this. I really pointed it out as one of the veterans from 2014 so that we don't get discouraged. I think we need a round of applause for every net 100 we knock off the total now and a HUGE round of applause when we clear the ever refilling 3 month age category FiddleTimtrent FaddleTalk to me 20:46, 20 July 2021 (UTC)Reply
Reviewing lots of drafts possibly correlates with submissions going up, due to resubmits. That could explain why it's hard to get ahead of it. Hats off to the reviewers tackling hard drafts. CITEBOMB and foreign language sources are no walk in the park. –Novem Linguae (talk) 22:28, 20 July 2021 (UTC)Reply
Is there a way to get notified when a draft I review gets resubmitted? Many resubmissions seem to be trivial updates and once I've invested the grey matter understanding a draft, re-review should be easier, right? Stuartyeates (talk) 10:45, 21 July 2021 (UTC)Reply
Stuartyeates not that I have ever heard of, so the only options are the watchlist, or adding a comment asking the author to let you know when they have updated and resubmitted. Regards KylieTastic (talk) 11:16, 21 July 2021 (UTC)Reply
I’ve been slowly gnawing away at the 3 month section, only 30 to go; it could be clear by the weekend perhaps? Theroadislong (talk) 21:39, 22 July 2021 (UTC)Reply

Clearing the backlog completely

I did some (not so quick) math (since I forgot all my algebra, lol). Long story short, here's the equation to figure out how many drafts are left on a certain day:  . At this rate, we will clear the backlog completely on day 31. Woot.

If we don't hit 0 drafts on day 31, I think it might make sense to extend the drive a few days. 0 drafts would be an awesome goal to achieve. –Novem Linguae (talk) 12:12, 23 July 2021 (UTC)Reply
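The linear projection described above can be sketched like this. The equation itself did not survive into this copy, so the starting count and daily net reduction below are illustrative assumptions, not the drive's actual figures; the structure (backlog minus a constant net reduction per day) is what matters.

```python
import math

# Assumed figures, for illustration only:
START = 1630        # pending submissions at the start of day 1
NET_PER_DAY = 53    # average net reduction per day (reviews minus new submissions)

def drafts_left(day: int) -> int:
    """Pending submissions at the start of a given drive day,
    under a simple linear model, floored at zero."""
    return max(0, START - NET_PER_DAY * (day - 1))

# First day on which the modelled backlog reaches zero:
clear_day = 1 + math.ceil(START / NET_PER_DAY)
```

As the replies below point out, the model ignores resubmissions and the daily inflow of new drafts, so the real curve flattens near the end.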

That depends on how many difficult drafts we have left that keep getting resubmitted. I think I would consider 100 or fewer and a max queue length of a week to be 'clear'. With the peaks and troughs of submissions and reviews it will be hard to get to actual zero. KylieTastic (talk) 12:51, 23 July 2021 (UTC)Reply
Is zero even possible (other than fleetingly), let alone realistic? Surely it's like bailing water out of a leaking boat in heavy rain, you never get that last drop. (Genuine question, not trying to be clever.) --DoubleGrazing (talk) 14:32, 23 July 2021 (UTC)Reply
Correct, there are 200 or so new submissions a day I would guess, many declined/accepted within seconds, but others that are less immediately notable/non-notable. Theroadislong (talk) 14:39, 23 July 2021 (UTC)Reply
I think 0 drafts older than 0 days old is the best estimation of "completely cleared". — Bilorv (talk) 10:51, 24 July 2021 (UTC)Reply
There are currently 146 pending submissions that need the attention of experienced AfC reviewers to clear the backlog. Pinging Theroadislong, KylieTastic, Novem Linguae, Timtrent, DoubleGrazing, Bilorv. TheBirdsShedTears (talk) 05:21, 30 July 2021 (UTC)Reply
I was travelling 95% of yesterday. I'll see what I can achieve today, but reviewing when tired is susceptible to errors FiddleTimtrent FaddleTalk to me 07:02, 30 July 2021 (UTC)Reply
I've been at it since 6am, and when I just purged the counter the damned number actually went up! :( Somebody really ought to block the feeder end of the pipeline for a couple of days... --DoubleGrazing (talk) 07:58, 30 July 2021 (UTC)Reply

Diff not found

On Wikipedia:WikiProject Articles for creation/July 2021 Backlog Drive/Participants/TheWikiholic, the first draft gives a 'diff not found' error. ―Qwerfjkltalk 11:16, 24 July 2021 (UTC)Reply

Five and a half days to go

If we thought it was tough before, the last days of a drive are always tough. The real question is "How low can we go?" FiddleTimtrent FaddleTalk to me 09:06, 26 July 2021 (UTC)Reply

Leaderboard - first draft

I've only done spot checks, so these numbers may be REALLY screwy. Please let me know if anything seems off. Note that I ran these by giving a 1.5x bonus to all reviews done on drafts 30 days or older. That's a very different threshold from what I specified up above (the median), but it's easier to calculate (I don't even know if we have median-age data). Seeing as it changed the order around a bit, I'm open to suggestions on whether we should use the "very old" threshold (which is a higher number that I don't remember) or something else. Anyway, enough from me. Without further ado:
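For reference, my reading of the figures in the list below suggests the scoring works out to roughly floor(normal + 1.5 × old + re-reviews − failed). That formula is inferred from spot-checking rows, not an official statement of the rules, and the re-review cap is not applied here.

```python
import math

def drive_score(normal: int, old: int, failed: int, rereviews: int) -> int:
    """Inferred drive score: 1 point per normal review, 1.5 per review
    of an old (30+ day) draft, -1 per failed review, 1 per re-review,
    with fractional totals rounded down. The exact rounding and
    failed-review handling are assumptions from the listed numbers."""
    return math.floor(normal + 1.5 * old + rereviews - failed)

# Spot-checks against two rows of the leaderboard:
# 700 normal, 102 old, 1 failed, 76 re-reviews -> 928
# 543 normal, 17 old, 0 failed, 6 re-reviews -> 574
```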

Old leaderboard
  1. TheBirdsShedTears, 940 (normal: 400, old: 361, failed: 1, re-reviews done: 0)
  2. Theroadislong, 928 (normal: 700, old: 102, failed: 1, re-reviews done: 76)
  3. KylieTastic, 574 (normal: 543, old: 17, failed: 0, re-reviews done: 6)
  4. DoubleGrazing, 424 (normal: 72, old: 211, failed: 0, re-reviews done: 36)
  5. Robert McClenon, 260 (normal: 76, old: 123, failed: 0, re-reviews done: 0)
  6. Nearlyevil665, 235 (normal: 201, old: 7, failed: 1, re-reviews done: 25)
  7. Calliopejen1, 234 (normal: 170, old: 43, failed: 0, re-reviews done: 0)
  8. Locomotive207, 228 (normal: 77, old: 101, failed: 0, re-reviews done: 0)
  9. Clearfrienda, 227 (normal: 118, old: 73, failed: 0, re-reviews done: 0)
  10. Robertsky, 193 (normal: 78, old: 62, failed: 0, re-reviews done: 22)
  11. Timtrent, 191 (normal: 43, old: 73, failed: 0, re-reviews done: 39)
  12. Doric Loon, 168 (normal: 37, old: 88, failed: 1, re-reviews done: 0)
  13. Clarityfiend, 165 (normal: 2, old: 103, failed: 0, re-reviews done: 9)
  14. Qwerfjkl, 161 (normal: 62, old: 53, failed: 0, re-reviews done: 20)
  15. Eternal Shadow, 158 (normal: 94, old: 43, failed: 0, re-reviews done: 0)
  16. Devonian Wombat, 144 (normal: 102, old: 18, failed: 0, re-reviews done: 15)
  17. Laplorfill, 135 (normal: 128, old: 5, failed: 0, re-reviews done: 0)
  18. MurielMary, 135 (normal: 53, old: 55, failed: 0, re-reviews done: 0)
  19. Goldsztajn, 121 (normal: 43, old: 52, failed: 0, re-reviews done: 0)
  20. AntanO, 120 (normal: 51, old: 48, failed: 3, re-reviews done: 0)
  21. Novem Linguae, 119 (normal: 118, old: 1, failed: 0, re-reviews done: 0)
  22. Sionk, 100 (normal: 4, old: 64, failed: 0, re-reviews done: 0)
  23. Bogger, 92 (normal: 10, old: 38, failed: 1, re-reviews done: 26)
  24. Chris troutman, 90 (normal: 0, old: 59, failed: 0, re-reviews done: 2)
  25. HitroMilanese, 89 (normal: 49, old: 27, failed: 0, re-reviews done: 0)
  26. Bilorv, 87 (normal: 50, old: 14, failed: 0, re-reviews done: 16)
  27. HighKing, 83 (normal: 10, old: 49, failed: 0, re-reviews done: 0)
  28. Curbon7, 82 (normal: 58, old: 16, failed: 0, re-reviews done: 0)
  29. Stuartyeates, 82 (normal: 24, old: 39, failed: 0, re-reviews done: 0)
  30. Curb Safe Charmer, 75 (normal: 51, old: 16, failed: 0, re-reviews done: 0)
  31. Deb, 74 (normal: 11, old: 43, failed: 1, re-reviews done: 0)
  32. ProClasher97, 73 (normal: 51, old: 15, failed: 0, re-reviews done: 0)
  33. Pahunkat, 71 (normal: 61, old: 5, failed: 0, re-reviews done: 3)
  34. 2pou, 67 (normal: 4, old: 37, failed: 0, re-reviews done: 8)
  35. Modussiccandi, 66 (normal: 2, old: 43, failed: 0, re-reviews done: 0)
  36. Nightenbelle, 63 (normal: 26, old: 24, failed: 0, re-reviews done: 1)
  37. Kvng, 55 (normal: 0, old: 37, failed: 0, re-reviews done: 0)
  38. Davisonio, 46 (normal: 6, old: 27, failed: 0, re-reviews done: 0)
  39. K.e.coffman, 45 (normal: 5, old: 27, failed: 0, re-reviews done: 0)
  40. Tol, 42 (normal: 41, old: 1, failed: 0, re-reviews done: 0)
  41. Worldbruce, 37 (normal: 1, old: 22, failed: 0, re-reviews done: 3)
  42. 78.26, 37 (normal: 9, old: 10, failed: 0, re-reviews done: 13)
  43. Yeeno, 34 (normal: 15, old: 10, failed: 0, re-reviews done: 4)
  44. JavaHurricane, 34 (normal: 34, old: 0, failed: 0, re-reviews done: 0)
  45. M-Mustapha, 33 (normal: 3, old: 20, failed: 0, re-reviews done: 0)
  46. Etzedek24, 32 (normal: 6, old: 18, failed: 1, re-reviews done: 0)
  47. Innisfree987, 31 (normal: 1, old: 20, failed: 0, re-reviews done: 0)
  48. DanCherek, 30 (normal: 29, old: 0, failed: 0, re-reviews done: 1)
  49. JSFarman, 26 (normal: 17, old: 7, failed: 1, re-reviews done: 0)
  50. Hatchens, 25 (normal: 25, old: 0, failed: 0, re-reviews done: 0)
  51. Scorpions13256, 23 (normal: 5, old: 12, failed: 0, re-reviews done: 0)
  52. AviationFreak, 16 (normal: 12, old: 3, failed: 0, re-reviews done: 0)
  53. Chess, 14 (normal: 2, old: 8, failed: 0, re-reviews done: 0)
  54. Umakant Bhalerao, 14 (normal: 14, old: 0, failed: 0, re-reviews done: 0)
  55. Usedtobecool, 13 (normal: 6, old: 5, failed: 0, re-reviews done: 0)
  56. Mcguy15, 12 (normal: 6, old: 4, failed: 0, re-reviews done: 0)
  57. Pbrks, 12 (normal: 3, old: 6, failed: 0, re-reviews done: 0)
  58. Berrely, 12 (normal: 2, old: 7, failed: 0, re-reviews done: 0)
  59. Bradv, 12 (normal: 5, old: 5, failed: 0, re-reviews done: 0)
  60. Grand'mere Eugene, 11 (normal: 1, old: 7, failed: 0, re-reviews done: 0)
  61. Yitzilitt, 11 (normal: 2, old: 6, failed: 0, re-reviews done: 0)
  62. TheWikiholic, 10 (normal: 10, old: 0, failed: 0, re-reviews done: 0)
  63. LittlePuppers, 10 (normal: 6, old: 3, failed: 0, re-reviews done: 0)
  64. Extraordinary Writ, 10 (normal: 10, old: 0, failed: 0, re-reviews done: 0)
  65. Jack Frost, 9 (normal: 8, old: 1, failed: 0, re-reviews done: 0)
  66. PK650, 8 (normal: 4, old: 3, failed: 0, re-reviews done: 0)
  67. Kaizenify, 8 (normal: 1, old: 5, failed: 0, re-reviews done: 0)
  68. 333-blue, 8 (normal: 5, old: 2, failed: 0, re-reviews done: 0)
  69. Zeromonk, 6 (normal: 0, old: 4, failed: 0, re-reviews done: 0)
  70. Z1720, 5 (normal: 1, old: 3, failed: 0, re-reviews done: 0)
  71. Sam Sailor, 4 (normal: 4, old: 0, failed: 0, re-reviews done: 0)
  72. 15, 3 (normal: 0, old: 2, failed: 0, re-reviews done: 0)
  73. EDG 543, 3 (normal: 3, old: 0, failed: 0, re-reviews done: 0)
  74. Plandu, 3 (normal: 0, old: 2, failed: 0, re-reviews done: 0)
  75. CaptainEek, 2 (normal: 1, old: 1, failed: 0, re-reviews done: 0)
  76. Dreamy Jazz, 2 (normal: 2, old: 0, failed: 0, re-reviews done: 0)
  77. Chenzw, 1 (normal: 0, old: 1, failed: 0, re-reviews done: 0)
  78. Nizil Shah, 1 (normal: 1, old: 0, failed: 0, re-reviews done: 0)
  79. Trillfendi, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  80. Daask, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  81. Pi, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  82. Nyanardsan, 0 (normal: 0, old: 1, failed: 1, re-reviews done: 0)
  83. UnitedStatesian, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  84. Enterprisey, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  85. AppleBsTime, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  86. G. Moore, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  87. GRuban, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  88. Godsy, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  89. ToBeFree, 0 (normal: 0, old: 0, failed: 0, re-reviews done: 0)
  90. Modern Major General, -1 (normal: 0, old: 0, failed: 1, re-reviews done: 0)

Also, list of unresolved conflicting re-reviews:

  • Bogger: review is '#:Pass I don't have a problem with this, the RD request process works well and with the copy vio removed it's of very little harm. I've done similar a couple of times (but always keep on my watch list to make sure RD is completed) KylieTastic (talk) 20:34, 17 July 2021 (UTC)', disagreement is Fail vs PassReply
  • Modern Major General: review is '#:Pass (meaning, I agree with Modern Major General's declination — just for clarity!); I would have also declined this, albeit on the basis of being insufficiently sourced: there are several paragraphs without a single citation, which IMO isn't okay in a BLP. I realise I'm being quite strict about this, the policy only says that contentious material must be directly supported by a citation: I'm not saying that missing a cite would cause a published article to fail, but I am saying there's no reason to accept a draft that has major issues with citing even non-contentious statements (esp. when what is or isn't contentious is not necessarily always clear-cut). --DoubleGrazing (talk) 09:44, 18 July 2021 (UTC)', disagreement is Fail vs PassReply
  • Theroadislong: review is '#:Pass Passes WP:NPROF. All else can be solved in main space. FiddleTimtrent FaddleTalk to me 20:34, 17 July 2021 (UTC)', disagreement is Fail vs PassReply

Going to do another round of review log updates soon, been very busy IRL. Awesome job so far, everyone, and looking forward to seeing how far we can make it in the final few days! Enterprisey (talk!) 10:30, 26 July 2021 (UTC)Reply

@Enterprisey I think with re-reviews where one says pass and the other says fail, the review is considered to be a pass. It takes two fails to cancel a pass. Or so I recall from the preliminary discussions. FiddleTimtrent FaddleTalk to me 10:42, 26 July 2021 (UTC)Reply
That was how the older scoring system worked. I was sort of hoping for this one that the re-reviewers would reach a consensus, maybe here, maybe on the log page. Enterprisey (talk!) 10:57, 26 July 2021 (UTC)Reply
As I'm involved in one of those 'unresolved' cases, just to say that I'm happy to do whatever is needed to sort out this matter. However, given that the article in question has now been accepted, I'm not quite sure how consensus is to be reached, other than perhaps by me striking out my re-review (which I can do, in the interests of community harmony and all that!). :) --DoubleGrazing (talk) 11:25, 26 July 2021 (UTC) (PS: Using 30 days as opposed to median as the old/new boundary seems entirely sensible to me, FWIW.)Reply
@Enterprisey where there are two opinions that are opposed a consensus is unlikely. May I suggest we run with the scheme as devised originally and consider for the next drive how to improve it? FiddleTimtrent FaddleTalk to me 07:22, 27 July 2021 (UTC)Reply
Right now it goes with whichever re-review appears first in the text. I'd prefer a single failing re-review to fail the whole review. Then we just need to decide if a pass + a fail has the effect of a pass or a fail. Or maybe the review just doesn't count for any points? Enterprisey (talk!) 08:23, 28 July 2021 (UTC)Reply
@Enterprisey the numbers looked very low so I did a dig. Checking mine, you have 543+17 = 560 reviews, whereas my checks show I have 712 with around 240 deleted ~ 951. So I'm assuming you're not counting deleted, and comparing Wikipedia:WikiProject Articles for creation/July 2021 Backlog Drive/Participants/KylieTastic with my list, it looks like you're only counting the draft namespace (NS: 118), not user-space reviews (NS: 2). However, cutting out user-space I still got 617; I then twigged that if I make articles distinct I get down to 580, so just 1 review per article (which may have been on purpose). Cheers KylieTastic (talk) 11:44, 26 July 2021 (UTC)Reply
Yeah, I should've said - I haven't updated the review logs (the subpages with the review lists) in a few days, and the leaderboard above was calculated from the current (i.e. outdated) state of the review logs. I'll update the review logs again soon (probably not long after midnight tonight in UTC), and then run the leaderboard again. Thank you for checking and paying attention. Enterprisey (talk!) 17:50, 26 July 2021 (UTC)Reply
Oh, schist! You're 100% right, I forgot to count userspace. Whoooops. And I'll have to look into whether I made articles distinct - the code just goes through every user contribution, but maybe there's something else screwing that up. I also don't count deleted/oversighted/suppressed revisions yet. Enterprisey (talk!) 17:53, 26 July 2021 (UTC)Reply
@Enterprisey Happy to dig into details and give feedback, and whatever you have the time to fix, great, but the only thing I care about is that we have got the backlog down to a reasonable level. Real life is more important, and don't feel any pressure just because we charged into this without really enough prep time. The backlog is down, and apart from one very angry ranty idiot I think most others are very happy with how this has gone. Cheers KylieTastic (talk) 19:41, 26 July 2021 (UTC)Reply
@Enterprisey I agree with KT. The backlog is the thing. We know and accept all the teething troubles. Please do not get stressed over balancing this and your actual life FiddleTimtrent FaddleTalk to me 19:45, 26 July 2021 (UTC)Reply
Userspace is being counted now; two reviews on the same page have fortunately been counted. Running into some difficulties using the API to get deleted revisions; probably later. Enterprisey (talk!) 07:56, 27 July 2021 (UTC)Reply
Nevermind, Majavah fixed it so now we can get deleted revisions too. (Thanks!) Enterprisey (talk!) 08:15, 27 July 2021 (UTC)Reply
And KylieTastic your review log now has 951 on the dot. So that worked out. Enterprisey (talk!) 08:29, 27 July 2021 (UTC)Reply
It's always pleasing to see that maths still works :) KylieTastic (talk) 08:35, 27 July 2021 (UTC)Reply
Enterprisey, fyi, I have scraped through all of the review pages, and it gives me 8756 reviews. (There may be some missing rows since I am using Octoparse with one pass to do the scraping.) Removing the reviews that don't have "(X days)" gives me 8450 reviews. And the median of those 8450 reviews is at... 0 days. If we remove the reviews that were done at 0 days (given that the aim is to clear the backlog, removing 0-day reviews from the median calculation should be acceptable), the median is 68 days. – robertsky (talk) 09:21, 27 July 2021 (UTC)Reply
Cool, thanks for looking into it. I was envisioning "median" being the median days-waited at the time when the review was done. 68 is sorta high. I wonder if anyone else has an opinion. Enterprisey (talk!) 03:10, 28 July 2021 (UTC)Reply
Isn't the problem with the median that it changes each time a review is done, each time a new draft is added to the pool, and also with each day that passes? Seems very sophisticated, but also awfully complicated to me. As I said earlier, using an 'arbitrary' figure is reasonable IMO, and I think 30 days seems as good as any. --DoubleGrazing (talk) 05:05, 28 July 2021 (UTC)Reply
Yeah, so I was gonna calculate the median once a day and use that median for every review done that day. But that was taking too long, so I prioritized getting the visible parts (review logs and leaderboard) working first. Enterprisey (talk!) 06:13, 28 July 2021 (UTC)Reply
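The median calculation discussed above can be reproduced along these lines with the standard library. The day counts here are made-up stand-ins for the scraped "(X days)" values, chosen only to show the effect of excluding 0-day reviews.

```python
import statistics

# Made-up stand-ins for the "(X days)" ages scraped from review logs.
ages = [0, 0, 0, 1, 3, 45, 68, 70, 91, 120]

median_all = statistics.median(ages)          # includes same-day reviews

nonzero = [a for a in ages if a > 0]
median_backlog = statistics.median(nonzero)   # excludes 0-day reviews
```

Because so many reviews happen on the day of submission, the two medians can differ dramatically, which is why the thread above debates excluding the 0-day reviews (or just using a fixed 30-day threshold instead).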
The points-based leaderboard is now transcluded on the drive page. Enterprisey (talk!) 03:10, 28 July 2021 (UTC)Reply
@Enterprisey It would be great if you could convert the lists (points and re-reviews) to table format, and merge the points with the current review leaderboard. Thanks! ―Qwerfjkltalk 12:40, 28 July 2021 (UTC)Reply
I considered that, but the resulting table would've been very wide. Not sure how to mitigate that. Enterprisey (talk!) 18:17, 28 July 2021 (UTC)Reply
@Enterprisey You could abbreviate the accept, decline etc. to A, D, R, remove the comments column, and remove either the percentage or count sections.
(Giving Rank, Points, Username, Re-Reviews, A, D, R.) ―Qwerfjkltalk 18:38, 28 July 2021 (UTC)Reply
(And you could abbreviate the Old, Normal, Failed columns similarly - might need a key.) ―Qwerfjkltalk 18:42, 28 July 2021 (UTC)Reply
That would be ten columns (rank, username, points, re-reviews, accepts, declines, rejects, normal, old, failed), which in my opinion would be pretty hard to read. I also like the percentages, as they add valuable information; that would be another two or three columns. The current design, although vertically inefficient (we could still just remove people without any reviews), at least doesn't have too much information per row. Enterprisey (talk!) 19:42, 28 July 2021 (UTC)Reply
@Enterprisey Can you convert the lists into tables, then? Also, I don't seem to have a Teamwork Barnstar in the Re-review leaderboard :(. Sorry for the late reply.Qwerfjkltalk 08:58, 29 July 2021 (UTC)Reply
I think the primary use-case of the points leaderboard is to quickly scan down, then find your name, then read across the row to find your points; having to check back at the top of the table for the column headers would interfere with that. I would prefer keeping it as a list for that reason. If anyone else has an opinion, feel free to post it. Enterprisey (talk!) 05:15, 30 July 2021 (UTC)Reply
@Enterprisey Demo hereQwerfjkltalk 16:53, 3 August 2021 (UTC)Reply

Bonus script bug

@Enterprisey: I just wanted to point out an error in how EnterpriseyBot added a new review once I created a Level-2 Bonus section here. It looks like I'm the only one that has tried to add a Bonus, so I'm not sure how crucial fixing it with a day left is, but I wanted to point it out in case this is used again in the future. Basically, the bot added a new review I had performed into my bonus section, presumably since it is at the bottom of the page, and on the scoreboard, it credited me with an extra bonus. I have since moved the review to the proper section. -2pou (talk) 15:33, 30 July 2021 (UTC)Reply

Ah, sorry about that! Thanks for the report. I think I've fixed it. Enterprisey (talk!) 07:03, 31 July 2021 (UTC)Reply
@Enterprisey This affected KylieTastic as well, here. ―Qwerfjkltalk 08:57, 31 July 2021 (UTC)Reply

A Proposal regarding our wonderful technical champion

While those of us who are reviewing conduct reviews, we have an unsung hero, the technical guru who is making all the behind the scenes work happen. I could take a unilateral decision, but I'd like very much for us, not me, us to offer a vote of thanks to Enterprisey and to award them with the Gold Wiki Award alongside the declared winner of the reviewing challenge.

I know, or I think I know, that others have been helping behind the scenes. I think Enterprisey will be happy to share the glory with those who have weighed in.

Please let us all pile in and say a big Thank You, and to vote our individual thanks below; I'll set the ball rolling with my own. FiddleTimtrent FaddleTalk to me 19:16, 26 July 2021 (UTC)Reply

Deadline?

Is there a deadline by when the re-reviews (and whatever else wash-up follows this drive) will need to be done? Only asking as several people are currently short of their 10% re-review requirement, and could see their scores drastically cut, which obviously would be a pity. --DoubleGrazing (talk) 06:59, 28 July 2021 (UTC)Reply

As review logs were posted very late, we should certainly wait at least two or so weeks after the end of July. Enterprisey (talk!) 07:03, 28 July 2021 (UTC)Reply
TheBirdsShedTears would get 0 points currently... ―Qwerfjkltalk 07:06, 28 July 2021 (UTC)Reply
I am not sure what "re-reviews done" is all about. Could someone please help me understand this? And why is the leaderboard showing "re-reviews done 0"? TheBirdsShedTears (talk) 07:18, 28 July 2021 (UTC)Reply
See #Re-reviews are open for business above. ―Qwerfjkltalk 07:20, 28 July 2021 (UTC)Reply
Pinging @TheBirdsShedTearsQwerfjkltalk 07:21, 28 July 2021 (UTC)Reply
Qwerfjkl, Thank you. TheBirdsShedTears (talk) 07:33, 28 July 2021 (UTC)Reply

Are we planning to disqualify people if they don't do re-reviews or don't do enough re-reviews? Currently the page says To ensure quality, users should re-review other user's reviews. Any user may re-review any other user's reviews if they wish. and does not mention disqualification. –Novem Linguae (talk) 07:24, 28 July 2021 (UTC)Reply

The issue is, again, teething troubles. This drive will always be imperfect because we are reinventing the rules. What we are learning is what the rules need to be for the next one.
This time around may I suggest that we encourage everyone whose review-given tally shows a shortfall to make those re-reviews, and next time consider how we might make it a requirement.
The issue, as always, is being seen to work for quality as well as quantity. We achieve quality instinctively, almost all of us, but the re-reviews demonstrate that we also care about it. The low number of failures in reviews shows that we are doing the right job (probably), though we can argue the toss over some of the fails. What is important is to have re-reviewed sufficiently for our individual consciences this time around. We also get points for every re-review! Ok, points are not a lower backlog, but they are fun to get!
Hmm. I'm rambling
tl;dr: Adopt leniency this time, and impose tighter rules next time FiddleTimtrent FaddleTalk to me 07:39, 28 July 2021 (UTC)Reply
The original proposal by Enterprisey was Each participant must have conducted a number of re-reviews greater than 10% of their number of reviews. Otherwise, each participant's score will be capped at 10 times their number of re-reviews.Qwerfjkltalk 07:41, 28 July 2021 (UTC)Reply
No, I don't think there are any plans to do that. Even the score penalty (i.e. the "capping" system) I find to be a little harsh now that I think about it. Maybe we could only require 5% of one's number of reviews, or just require a fixed number like 5 or 10. Although there are lots and lots of participants who haven't done any. Maybe a mass talk page message might help with that? Enterprisey (talk!) 07:43, 28 July 2021 (UTC)Reply
@Enterprisey I am in full agreement with encouraging rather than mandating, certainly for this iteration of the drive process. A mass talk page message would be helpful, I think.
We need, then, to propose and discuss rules going forward based upon what we have learned this time around. Consensus for any form of mandating work to retain points is important FiddleTimtrent FaddleTalk to me 08:19, 28 July 2021 (UTC)Reply
I've set up the start of a discussion on the main talk page. This deals with drives after this one FiddleTimtrent FaddleTalk to me 09:05, 28 July 2021 (UTC)Reply
Also I would mention that for my entry in the re-reviews leaderboard, it says that I need to receive another re-review. What happens if you have done enough re-reviews, but other editors have not re-reviewed your reviews enough? Dreamy Jazz talk to me | my contributions 09:33, 1 August 2021 (UTC)Reply

Women in Red

When this is all done, could someone do a count of "Biographies of Men" vs "Biographies of Women/Non-Male Humans" which were approved in this drive? I think it would be worth noting, but I don't have the querying expertise. Bogger (talk) 10:09, 29 July 2021 (UTC)Reply

Could look for WikiProject Women and other similar project tags. These probably populate into a category of some kind. Aren't all female biographies supposed to receive WikiProject Women tags during page curation? Strangely this is not an option in the AFCH WikiProject combo box, but it is an option in Rater. –Novem Linguae (talk) 19:58, 29 July 2021 (UTC)Reply

        • Aren't all female biographies supposed to receive WikiProject Women tags during page curation? WHAT?? Where is that documented? --bonadea contributions talk 23:19, 10 August 2021 (UTC)Reply
          Bonadea, seems obvious. If someone is a politician, I add them to WikiProject Politics. If someone is an American, I add them to WikiProject United States (or their state). If someone is a woman, I add them to WikiProject Women (different from Women In Red, which is of course more specific). I believe this is a reasonable conclusion to reach. I'd be happy to change my ways if I see documentation at WikiProject Women that says that their scope is much narrower than this though, such as only including feminists or something. –Novem Linguae (talk) 00:48, 11 August 2021 (UTC)Reply
          • Many feminists are not women, so hopefully the wikiproject women participants would not make that kind of blunder. I'd better not comment more on this as I am not really qualified to have an opinion, but it does seem very weird (and frankly a bit offensive) to me to automatically add a project like that, which marks a person as a woman rather than a person, and to do it regardless of whether we know if a person's gender is relevant to her. Thank goodness I will never become Wikipedia notable! :-) Women in red is, as you say, a different thing which (to me) is easier to see the point of. --bonadea contributions talk 09:19, 11 August 2021 (UTC)Reply
  • I may look at using ORES later, but tonight I'm exhausted so just dumping the basics: 218 bios (by talk-page tag); with only " he ": 144; only " she ": 54; with both or none: 20
List of actual articles...
— Preceding unsigned comment added by KylieTastic (talkcontribs)
Oh wow. Non-men recently hit 17% of the bios, whereas from this analysis, this list is c. 25%. Good times! Bogger (talk) 22:14, 29 July 2021 (UTC)Reply
Indeed—and I manually counted the remaining 20, and I must have missed one but found 9 “he” bios, 9 “she”, and one that wasn’t actually a bio. So that’s 29%.
With infinite time/resources, it would be great to know the same stats for the declined drafts, which could tell us if gender is correlated to rate of acceptance. But since I don’t know how to do it myself I don’t want to be greedy! Thank you KylieTastic for pulling this data for us. Innisfree987 (talk) 22:23, 29 July 2021 (UTC)Reply
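For reference, the quick pronoun heuristic described above can be sketched in a few lines. This is a plain-Python reconstruction of the " he " / " she " counting rule as described in the thread, not KylieTastic's actual script:

```python
def classify_bio(text):
    """Crude heuristic from the thread above: an article counts as
    'he' if it contains the bare pronoun " he " but not " she ",
    as 'she' for the reverse, and 'both-or-none' otherwise.
    (Space-delimited matching misses sentence-initial "He"/"She",
    which is one reason some articles land in the ambiguous bucket.)"""
    lower = text.lower()
    has_he = " he " in lower
    has_she = " she " in lower
    if has_he and not has_she:
        return "he"
    if has_she and not has_he:
        return "she"
    return "both-or-none"

def tally(texts):
    """Tally a batch of article texts the way the counts above were produced."""
    counts = {"he": 0, "she": 0, "both-or-none": 0}
    for t in texts:
        counts[classify_bio(t)] += 1
    return counts
```

The ambiguous bucket (20 of 218 above) then needs the kind of manual pass Innisfree987 describes.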

Final update

Bogger, after a day of work, final results...

  • Using ORES + Manual categorization: Women: 267, Other: 800 - 25% women
  • Using "He"/"She" + Manual categorization: Women: 267, Other: 799, Unknown: 1
  • Manual categorization was for Women: 90, Other (Men): 90
List of actual articles...

Conclusions:

  • Some reviewers did not add project tags, or sometimes they did but left off the biography tag
  • ORES is OK but has a lot of issues, including some seriously weird mistakes
Other data

ORES fail so manual men selection "Jonas Misiūnas", "Monk from Kupinovo", "Austin Barrow", "Deng Zhonghua", "John Edward Delane", "Tong Zeng", "An Min", "John Biggar (mountaineer)", "George English (politician)", "Otto Bulow", "Bajram Begaj", "Robert J. Evans", "Lorenzo Córdova Vianello", "Thomas L. Brunell", "Devin Stone", "Philipp Holliger", "Nick Lyons", "Anthony Rope", "Isardas", "Ronnie Ong", "Terry Williams (sociologist)", "Kenneth Möllersten", "Abdul Waseh Basit", "Gaetano Ciancio", "Mark Molloy", "Haruichi Furudate", "Keita Morimoto", "Thomas Nelson Winter", "Naosaku Takahashi", "Bruce A. Wright", "Tim Gorski", "Hunter Reese Peña", "Bériou", "Lyman B. Sperry", "WAVEBOY", "Sylvester Sivertsen", "Norman Zamcheck", "Santhosh Damodharan", "Adam Frampton", "Joe Planansky", "Perry Fitzpatrick", "Stuart Crow", "Uglješa Bogunović", "Fox Fisher", "Elgin Gates", "Omari Salisbury", "Giovan Battista Nicolosi", "Rodney Scott (author)", "Wayne Oquin", "Robert Lukins", "Irfan Amin Malik", "Masaaki Kijima", "Callum Daniel", "Alexey Okulov", "Zach Everson", "Petar Đorđević Džoda", "Brad Walls", "Trevor Wignall", "Alpheus Ellis", "Les Brown (journalist)", "Luis Miguel Pinho", "Shin-ichi Sakamoto", "Kurgoqo Atajuq", "Greger Larson", "Vikram Dev", "Thomas Masterson (American Revolution)", "Max Lewkowicz", "Alexey G. Ryazanov", "Lorenzo Crasso", "Robert Castagnon", "Pradeep Nair", "Andreas Wil Gerdes", "Charles A. Appel", "Giuseppe de Samuele Cagna", "Sampson Edwards", "Edward Chang (neurosurgeon)", "Danny Balin", "Velimir Mihailo Teodorović", "Andy O'Sullivan (Irish Republican)", "Tim Heatley", "Elliott Hasler", "Mohammad Elsanour", "Tony Clay", "Edwin Kemp Attrill", "Andrei Doroshin", "Benjamin Van Mooy", "Sebastian Prodanovich", "Harry Palmer (animator)", "Nikola Karamarković", "Max Liboiron", "Akshit Sukhija", "Himanshu Soni", "Franz Lipp", "Daniil and David Liberman", "John Ludwig Wees", "Trevance Salmon", "Josip Šišković", "Gary Pemberton", "Qaplan I Giray", "Ramses Ramos", "Michael Kubovy", "Bikram Singh Jaryal", "Thomas Awah", "Vaughn Spann", "Madhaba Nanda Behera", "Sarjun KM", "Richmond Jay Bartlett", "Virendera Singh Pathania", "Robert James Patterson", "Big Foot (Potawatomi leader)", "David Edward Reichle", "Toby Prince Brigham", "Jose Perez (actor)", "Christopher Tappen", "Haris Doukas", "Sunil Dutti", "Brody Brown", "Jolyon Petch", "Samuel Bampfield", "David Provoost", "Maneaux", "Dorieus (Rhodian athlete and naval commander)", "Alfred Diston", "Duke Xuan of Wey", "Giuseppe de Samuele Cagnazzi", "Akintunde Akinleye"

ORES fail so manual Women selection "Kokomi Naruse", "Juanita Head Walton", "Elisapee Ootoova", "Paula Berwanger", "Patricia Downes Chomley", "Tatiana Andrianova (organist)", "Juanita Jaramillo Lavadie", "Amna Mufti", "Ludwika Sosnowska", "Maame Esi Acquah Taylor", "Mary E. Peabody", "Katja Keitaanniemi", "Vilawan Mangklatanakul", "Amy Seiwert", "Dawon (singer)", "Alev Erisir", "Marcia Cebulska", "Alexa Conradi", "Leota Toombs", "Mary T. Reiley", "Danika Littlechild", "Heather Morrison", "Annie Cattrell", "Rose Glass", "Elizabeth ""Betty"" Hayes", "Jazz Thornton", "Gertrude Warden", "Diana Malivani", "Arooj Aftab", "Kate Pahl", "Baby Kumari", "Josie James", "Maria da Graça Samo", "Katie Hetland", "Cathryn Mittelheuser", "Maude Dickinson", "Laurin Talese", "Lisa Wilhoit (actress)", "Temi Otedola", "Robin Rue Simmons", "Tasha-Nicole Terani", "Marianna Muntianu", "Anna Pestalozzi-Schulthess", "Guddi Devi", "Disappearance of Mekayla Bali", "Shelley Moore (educator)", "LØLØ", "Geneva Gay", "Rasha Kelej", "Irsa Ghazal", "Silvia Rivas", "Ambera Wellmann", "Lucy Mensing", "Yasmine El-Mehairy", "Roberta Marinelli", "Judith Grassle", "Polina Kanis", "Mary Elizabeth Livingston", "Leigh Janiak", "Melissanthi Mahut", "Daniele Moyal-Sharrock", "Hélène Gassin", "Rachel Moore (arts administrator)", "Ada-Rhodes Short", "Elizabeth Thomson (linguist)", "Song Mingqiong", "Sue Roffey", "Deborah LaVine", "Misa Tamagawa", "Chen Li (singer)", "Aina Dumlao", "Bernie de Le Cuona", "Emma DeSouza", "Vinnie Bagwell", "Pauline Stuart", "Pamela Ferrell", "Chen Arieli", "Jang Wonyoung", "Elyse Walker", "Yisel Tejeda", "Mary Taylor Bryan", "Brynley Stent", "Najia Ashar", "Ava Easton", "Barbie Kyagulanyi", "Clare Reimers", "Mara Karlin", "Shobha Chauhan", "Pailin Wedel", "Dorothy Eaton"

ORES fail false positives - exclude non-bios "Stephanie, Cindy, Christy, Tatjana, Naomi, Hollywood", "Republic of Somaliland Representative Office in Taiwan", "2021 Pakistan Super League squads", "Topeka Plaindealer", "Vice Chair of the Federal Reserve", "Queens (American TV series)", "Minden High School (Minden, Louisiana)", "Better Days (Hedley Song)", "Dunedin Collective for Woman", "Spacehey", "Bishop School (Detroit)", "List of Melbourne Football Club seasons", "Annunciation (Artemisia Gentileschi)", "Precious Heritage Art Gallery Museum", "1988 Swatch Open – Singles", "Brazilian Syncretic Religions", "Françoise d'Eaubonne et l'écoféminisme", "Wolf Durmashkin Composition Award", "Hendrikov Family", "Dr. Cyril O. Spann Medical Office", "Howard School of International Relations", "Taisha Ryu", "Penderecki String Quartet", "Revolt of Hasan Khan Salar", "Racially motivated emergency calls", "1988 Bordeaux Open – Singles", "Orbit Culture", "2021 Women's Softball European Championship", "Ring Shout", "Grounded with Louis Theroux", "Rostam and Shaghad", "One World Family Commune", "2021 Superrace Championship", "Bourbon di Sorbello", "Lewis High School (Macon, Georgia)", "The Oyster Man", "First Baptist Church of Greater Cleveland", "All Things Go Fall Classic", "Witness Collection", "Al-Ayoubi family", "History of communication by presidents of the United States", "Portugal in the Junior Eurovision Song Contest 2021", "Women of the early East L. A. punk scene", "Light in Babylon", "KOKOKO!", "The Delightful Sausage", "Shinobu (band)", "Gualterio family", "Satellite Mode", "The Rumba Kings", "Malojian"

Cheers KylieTastic (talk) 20:07, 1 August 2021 (UTC)Reply

KylieTastic, thank you so much! This is immensely helpful: I'm going to share it with Women in Red now. Really appreciate your work. Innisfree987 (talk) 20:21, 1 August 2021 (UTC)Reply

WikiProject tagging for biographies

@Novem Linguae: really? I didn't know that. I've created 30+ WiR articles (not as part of any drive etc., just picking off the red list) and never tagged them in any particular way, nor do they seem to have been subsequently tagged by anyone else. The rater only offers various meetup/initiative options, which I've not chosen since, as I mentioned, I've not been party to any meetups or similar. I guess I need to look into this more. --DoubleGrazing (talk) 04:41, 30 July 2021 (UTC)Reply
DoubleGrazing, yeah, I've never done any meetup WikiProjects. For biographies, I do WikiProject Biography, WikiProject Women if it's a woman, then their country/state and occupation. Here's an example: Talk:Joan A. Lambert. I'm happy to hear how others do WikiProject tagging too. –Novem Linguae (talk) 05:10, 30 July 2021 (UTC)Reply
@Novem Linguae: interesting; I'm not sure I've ever used the 'Women' project, although I can't be sure. I just looked up a random one I've made: Talk:Hanna Granfelt. I seem to have tagged it for 'Women in music' and 'Women's history' (the latter perhaps incorrectly; they have quite specific criteria, which this may not meet). The reason why I say "can't be sure" and "seem to have" is that somehow I always have to rethink the project ratings from scratch, hence I'm probably not being very consistent about it. Which is pretty poor, TBH. --DoubleGrazing (talk) 05:27, 30 July 2021 (UTC)Reply
  • I am working on an update with ORES - but I have found two issues.
  1. Large numbers of articles with no {{WikiProject Biography...}} tag - so I am adding those manually
  2. ORES has a lot of false positives! For example, 1988 Swatch Open – Singles is rated 80% likely to be a bio. KylieTastic (talk) 14:54, 1 August 2021 (UTC)Reply
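For anyone wanting to script the ORES check discussed here, a minimal sketch of building a scores request and reading out a biography probability. The v3 endpoint shape, the drafttopic model, and the Culture.Biography.Biography* topic label are my assumptions about how ORES would be queried, and the `sample` response below is hand-made for illustration, not real ORES output:

```python
def ores_url(revids, model="drafttopic", wiki="enwiki"):
    """Build an (assumed) ORES v3 scores URL for a batch of revision IDs."""
    ids = "|".join(str(r) for r in revids)
    return f"https://ores.wikimedia.org/v3/scores/{wiki}/?models={model}&revids={ids}"

# Hand-made response in the shape ORES v3 is assumed to return, for illustration.
sample = {
    "enwiki": {"scores": {"1001": {"drafttopic": {"score": {
        "prediction": ["Culture.Biography.Biography*"],
        "probability": {
            "Culture.Biography.Biography*": 0.81,
            "Culture.Biography.Women": 0.12,
        },
    }}}}}
}

def bio_probability(resp, revid, wiki="enwiki", model="drafttopic"):
    """Pull the biography-topic probability out of an ORES-style response."""
    score = resp[wiki]["scores"][str(revid)][model]["score"]
    return score["probability"].get("Culture.Biography.Biography*", 0.0)
```

A false positive like 1988 Swatch Open – Singles scoring 80% would sail past any simple probability threshold, which is why the manual pass described above was still needed.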

The Kilo Klub

Just wanted to acknowledge the 1,000+ tally of both Theroadislong and TheBirdsShedTears — between them, they've cleared nearly half the backlog. Flippin' 'eck, that's some massive effort! --DoubleGrazing (talk) 09:16, 30 July 2021 (UTC)Reply

My wife might leave me if I do much more...better get some hoovering done! Theroadislong (talk) 09:22, 30 July 2021 (UTC)Reply
DoubleGrazing, thank you, but this is the result of teamwork to clear the backlog. Everyone who participated in this drive has done great work. Now, I have stopped reviewing old AfC submissions as they seem difficult and complicated. I am actively keeping an eye on new problematic AfC submissions to keep this backlog under 140. TheBirdsShedTears (talk) 09:27, 30 July 2021 (UTC)Reply

Zombie draft

Anyone know what's happening with this Draft:Tabla Maestro Pt. Anil Palit? It kind of looks like it's been submitted, but then again not quite. (Or have I missed something?) Would hate it if this stopped us reaching zero... :) --DoubleGrazing (talk) 09:20, 31 July 2021 (UTC)Reply

@DoubleGrazing solved it. Submitted on behalf of the creating editor and declined it FiddleTimtrent FaddleTalk to me 09:23, 31 July 2021 (UTC)Reply

Finishing the drive

The drive is done and so is the backlog. Good job all! I haven't been keeping up with the re-reviews, so I just did some of those. Should re-reviews be extended to, say, a week after the drive's conclusion? Tol (talk | contribs) @ 01:16, 1 August 2021 (UTC)Reply

We came shockingly close to getting the backlog down to zero! At one point we had just 3 or 4 drafts remaining. Phenomenal job by all, whether you made 1, 100, or 1,000 reviews, or anything in between. With regards to the re-reviews, I think they should be extended for at least a few days.--🌀Locomotive207-talk🌀 01:26, 1 August 2021 (UTC)Reply

@Locomotive207: I'm pretty sure we did get it down to zero at times! Tol (talk | contribs) @ 02:21, 1 August 2021 (UTC)Reply
Yes, we hit zero, see screenshots at WT:AFC. I also believe based on some conversations I've read that re-reviews will be given another 1-2 weeks, and that they will not be doing a "no re-review" penalty for this particular drive. Although feel free to correct me if I'm wrong / if consensus changed. –Novem Linguae (talk) 02:26, 1 August 2021 (UTC)Reply
Yes, it did go down to zero, I saw it with my very own eyes. :) But I guess that does raise a nit-picking point (for next time): is the target met when the counter hits zero (i) at any time during the drive, or (ii) does it have to be zero when the drive ends, or (iii) both? Option (i) would make it possible to have a drive which goes down to zero, say, three weeks in, then everyone up sticks and goes home, and the counter does a 'hockey stick' and finishes with hundreds of drafts in the queue. I wouldn't call that 'mission accomplished', exactly. So maybe option (iii)? (Sorry if this has been discussed and decided already.) --DoubleGrazing (talk) 06:21, 1 August 2021 (UTC)Reply
Yes, I figure we could keep re-reviews open for another week or two, and then end it and distribute awards. There shouldn't be penalties for lack of re-reviews (and I won't be implementing any, unless there's consensus to): they weren't publicized enough, no scripts or processes were in place to make them less tedious, and the review logs and leaderboard went up only halfway through the month. Enterprisey (talk!) 05:26, 2 August 2021 (UTC)Reply
Enterprisey, if an editor has reviewed 100 AfC submissions, how many re-reviews do they need to do? TheBirdsShedTears (talk) 16:23, 2 August 2021 (UTC)Reply
TheBirdsShedTears, at least 10 re-reviews. – robertsky (talk) 16:30, 2 August 2021 (UTC)Reply
I'm not sure what the bot should count a re-review of meh as, so I'm not going to count it. Note, for everyone else, that anything besides exactly pass, fail, or invalid as the bolded text might not be counted. Enterprisey (talk!) 09:45, 6 August 2021 (UTC)Reply
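Since the bot's matching rules aren't spelled out on this page, here is one way a strict verdict parser consistent with Enterprisey's description could look. This is a hypothetical sketch, not the bot's actual code:

```python
import re

VALID_VERDICTS = {"pass", "fail", "invalid"}

def extract_verdict(wikitext):
    """Return the first bolded verdict ('''...''') from a re-review
    comment if it is exactly pass/fail/invalid (case-insensitive);
    anything else, e.g. '''meh''', yields None and is not counted."""
    m = re.search(r"'''([^']+)'''", wikitext)
    if not m:
        return None
    verdict = m.group(1).strip().lower()
    return verdict if verdict in VALID_VERDICTS else None
```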
Chris troutman, what counting method did you use for this leaderboard edit that significantly changed the numbers? Enterprisey (talk!) 05:19, 7 August 2021 (UTC)Reply
@Enterprisey: I followed the posted instructions on how to update the table. The history tab indicated it hadn't been updated in a few days. I suppose I must have made an error and I see my edit was since reverted. Chris Troutman (talk) 10:02, 7 August 2021 (UTC)Reply
Hey Chris troutman, those instructions should have been removed (and now have been). Although the script is locked to July, the table it uses, recentchanges_userindex, only covers the last 31 days, so running it yesterday meant it would not have contained the first 5-6 days of data. Cheers KylieTastic (talk) 12:11, 7 August 2021 (UTC)Reply
Ah, thanks KylieTastic for the explanation, and thanks Chris for taking the effort to update the table (even though it turned out that the query had that issue). Enterprisey (talk!) 00:48, 9 August 2021 (UTC)Reply
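KylieTastic's point can be made concrete with a little date arithmetic: if the table only retains roughly the last 31 days, a query run in early August simply cannot see the start of July. A sketch (the 31-day figure is taken from the comment above; actual retention may differ):

```python
from datetime import date, timedelta

RC_RETENTION_DAYS = 31  # approximate recentchanges retention, per the comment above

def visible_window(drive_start, drive_end, run_date):
    """Return the part of [drive_start, drive_end] still present in a
    recentchanges-style table that only keeps the last RC_RETENTION_DAYS
    days before run_date, or None if the whole drive has aged out."""
    oldest_available = run_date - timedelta(days=RC_RETENTION_DAYS)
    start = max(drive_start, oldest_available)
    if start > drive_end:
        return None
    return (start, drive_end)

# Running the July leaderboard query on 6 August misses 1-5 July:
print(visible_window(date(2021, 7, 1), date(2021, 7, 31), date(2021, 8, 6)))
# → (datetime.date(2021, 7, 6), datetime.date(2021, 7, 31))
```

This is why rerunning the query after the month ended silently changed the numbers.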
@Tol @Chris troutman @DoubleGrazing @Enterprisey @KylieTastic @Locomotive207 @Novem Linguae @Robertsky @TheBirdsShedTears It has now been two weeks. Should we finish the drive and award prizes now? ― Qwerfjkltalk 21:08, 15 August 2021 (UTC)Reply
@Qwerfjkl: I reckon so. This is fast fading into memory. And with ~500 drafts already in the review queue, would be good to make a bit of noise to raise awareness again. Cheers, --DoubleGrazing (talk) 21:27, 15 August 2021 (UTC)Reply
As I participated in the drive it's not for me to say. The procedures for this drive should have been laid out prior to the start. Chris Troutman (talk) 22:01, 15 August 2021 (UTC)Reply
Qwerfjkl, I think this drive should be closed. I don't see any good reason to keep this drive unfinished. TheBirdsShedTears (talk) 04:57, 16 August 2021 (UTC)Reply
Sounds like a good idea. Tol (talk | contribs) @ 15:50, 16 August 2021 (UTC)Reply
Yeah, re-reviews can probably stop. I'll still count the late ones (posted after two weeks were up), as nobody said what would happen to them. I'm working on a script to post the barnstars. Enterprisey (talk!) 08:31, 17 August 2021 (UTC)Reply
I didn't get around to finishing it, and I'm on vacation next week. But I'll get to them after, don't worry. (If someone wants to do it manually, I have no objections, but it's a lot of work!) Enterprisey (talk!) 08:35, 20 August 2021 (UTC)Reply
A script seems better than manual work. It would be quite a difficult task to distribute the awards manually. TheBirdsShedTears (talk) 05:26, 21 August 2021 (UTC)Reply

Leaderboard code published

I got around to publishing the code I used to create the review logs and leaderboard: https://git.sr.ht/~enterprisey/enwp-afc-backlog-drive/tree. It's written in Rust and uses both Magnus's excellent mediawiki crate and a new MediaWiki client library that I haven't published yet. Enterprisey (talk!) 00:50, 9 August 2021 (UTC)Reply

Seems not insane. The couple of issues I thought I found just showed me that the drive didn't work the way I thought it worked. For the next drive we need to make it clearer that we're running on UTC (some of us live in GMT +12 / +13). Stuartyeates (talk) 10:32, 9 August 2021 (UTC)Reply

Despondent fatigue

Having been on a very relaxing holiday in the wilds of Wales with very little internet access, I have now returned and see that the back log has already leapt up from zero to 272, I rather enjoyed my time away from the stresses of WP:AFC and don’t feel in any rush to dive back in, maybe I’ll take a back seat for a while. Theroadislong (talk) 15:42, 9 August 2021 (UTC)Reply

@Theroadislong A natural reaction. I think everyone is feeling the same. But the overall outcome is that the backlog is at a manageable level for the moment. FiddleTimtrent FaddleTalk to me 16:20, 9 August 2021 (UTC)Reply
Definitely don't overwork yourself—glad to hear you were able to take a break. Could not be more well-deserved. I'm not sure where I can see how many drafts are coming in a day, but the figure in my head I saw somewhere was 300 a day, so 272 means we've still done 90% of August submissions so far. — Bilorv (talk) 13:55, 10 August 2021 (UTC)Reply

Awards distributed...

...But has anyone given the Gold Wiki Award we agreed on for Enterprisey? Pahunkat (talk) 16:47, 11 September 2021 (UTC)Reply

Thank you for the reminder. I think we forgot. Pahunkat, please go ahead and give Enterprisey the Gold Wiki Award. TheBirdsShedTears (talk) 17:04, 11 September 2021 (UTC)Reply
I've gone ahead and done that. Hopefully it looks alright, anyone who wants to do so can edit the message I left. Pahunkat (talk) 10:47, 12 September 2021 (UTC)Reply
Very good Enterprisey, I think the script skipped my new username as I was renamed prior to the distributions. Thanks The Living love talk 06:50, 2 December 2021 (UTC)Reply