Knowledge (XXG) Signpost – 1 August 2011
The Huggle Experiment: interview with the research team

By Skomorokh and Tony1

The Summer of Research 2011 team working at Wikimedia Foundation's San Francisco headquarters

As part of the 2011 Summer of Research, the Wikimedia Foundation's Community Department has announced an experiment to investigate potential improvements to the new editors' experience of their first contact with patrollers, using the Huggle anti-vandalism tool. The Summer of Research is a three-month intensive project to study aspects of participation in Knowledge (XXG) that may have a significant effect on editor retention. It brings together a group of researchers, mostly PhD candidates, who have experience in both computer science and the social sciences, to give us a more well-rounded understanding of participation in the projects. (See also earlier Signpost coverage: "Wikimedia Summer of Research: Three topics covered so far" and "WMF Community Department announces 'Summer of Research' participants".)

The Signpost interviewed researchers R Stuart Geiger (who uses the Staeiou account for non-research editing), Aaron Halfaker, and Wikimedia Foundation Fellow Steven Walling to find out more. Steven has been a volunteer editor on the English Knowledge (XXG) since 2006, and before taking up the Foundation Fellowship was a professional writer and blogger, mostly for technology publications and companies. Stuart has been a Knowledge (XXG) editor since late 2004, and has been studying the project as an academic since his undergraduate senior thesis in 2006. Since then, he's been gathering from a number of fields the conceptual, theoretical, and methodological tools necessary to study something as complex as Knowledge (XXG). "At present, I'm a doctoral student from the School of Information at the University of California, Berkeley, and I have a keen interest in both the digital humanities and social statistics movements." Aaron is a computer science graduate student from the University of Minnesota. He's been an editor since 2008 and has published academic research on Knowledge (XXG) since WikiSym 2009. He specializes in statistical data mining and designs user-scripts for Knowledge (XXG) to understand/improve editor interactions.

How did the Summer of Research project come about, and what questions will it investigate? According to Steven, the experiment aims to test "warning templates that are explicitly more personalized and set out to teach new editors more directly, rather than simply pointing them to policy and asking them not to do something". Steven says he personally got involved because, as a Fellow at the Foundation, research has been part of his job. "I currently share the responsibility for leading the project team with Diederik van Liere and Maryana Pinchuk. Diederik has experience with the technical side of this project, Maryana is a qualitative researcher with an academic background, and I lend community experience to round out the leadership team. We built an enormous, multi-part question list publicly on Meta. But it turns out that was just a beginning guide. We've been structuring the summer as a series of weekly sprints, and to get a feel for the research topics that have been and are currently being explored, I'd check out the public list on our Meta page. Because the team has a wide variety of skills, we've looked at many different aspects of Knowledge (XXG) as a community so far."

The spaces where en.wiki newbies asked for help in the project. Fewer than half of the newbies investigated received a response from a real person during their first 30 days.

Aaron said they decided to experiment with Huggle's standardised warning system because the project goal is to understand the decline in new editors, so it seemed logical to focus on new editors' experience in the community. "Team-member Dr. Melanie Kill suspected that welcome messages might have an effect on how new editors perceive the community. So because Hugglers send out the most messages to new editors, we wanted to see if we could improve conversion (from damage) and other retention rates by just changing the wording of the message."

We wondered how the Foundation's sometimes lofty strategic goals, like "Support the recruitment and acculturation of newer contributors", are translated into practical initiatives such as this. Steven points to the Board resolution on Openness and the Foundation's Annual Plan for 2011–12. "Recruiting and retaining editors for Knowledge (XXG) is now one of our top priorities, and Zack Exley, our Chief Community Officer, designed the summer to really dig deeper into the exact areas of English Knowledge (XXG) and other projects that have the largest effect on new editors, and whether those editors stick around. The Editor trends study gave us a high-level understanding of the trends in participation, but it didn't tell us with certainty what internal community factors most have an impact. We need to have more data we're confident in if we're going to make good decisions, thus the Huggle experiment, which is clarifying that automated editing tools have a huge impact on new editors. The project was in the sweet spot of being able to gather a statistically significant sample quickly and with minimal impact on the normal functioning of the community."

Anon edits have declined as a proportion of logged-in edits, but are still running at 20%, representing a significant potential source of new Wikipedians.

Article deletion is a major part of the newbie experience. Newbies are receiving more notifications that their articles are being deleted, but are participating less in deletion discussions.

Stuart's background seems ideally matched to an experiment that seeks to understand social phenomena using technical methodologies. "I'm an adherent of the sociotechnical systems approach, which thinks in terms of how social and technical phenomena are inherently intertwined, especially when we study processes in communities as technologically mediated as Knowledge (XXG). Our motto, 'the free encyclopedia that anyone can edit', speaks to this principle that the Knowledge (XXG) community can't be fully understood without taking into account the code on which it runs – and vice versa. Huggle is a great example of this: scripts, tools, and bots like Huggle, Twinkle, and User:ClueBot have become the predominant way in which new users are introduced into Knowledge (XXG). In fact, here's a statistic that is hot off the research press: almost 75% of newbies have their first talk page message sent to them from one of those semi- or full-automated software systems."

How were the parameters of the experiment decided on – for example, the number of warnings delivered, the proportion of changed warnings? Aaron says they settled on three variables for testing in the experiment: personalized, teaching-oriented, and image. "Dr. Kill, a professor of rhetoric, produced personalized and teaching-oriented versions of the default warning template for Huggle; Stuart and Aaron then expanded these templates with image/no-image versions and prepared a random template generator. Our requirement for the number of experimental welcomes/warnings is based on a bit of statistical algebra that allows us to predict how many observations we'll need to find statistically significant differences between the variables."

The Huggle experiment is not the first to investigate the interactions of patrollers and new page creators. In the 2009 community-led Newbie treatment at Criteria for speedy deletion experiment (Signpost coverage), experienced editors (one interviewer included) posed as inexperienced article creators to look into how new contributors are treated in the patrolling process. The experiment attracted significant controversy due to ethical concerns surrounding the lack of informed consent of the participants. Steven says that before the experiment they posted a public notice at the Village Pump. "We also spoke directly with the main Huggle developers over email, IRC, and on-wiki (Addshore, Gurch, and other volunteer developers deserve a lot of credit here; we couldn't have done this without their help and consent beforehand). I should probably point out that we felt pretty confident about this experiment because Stuart is a prolific Huggler himself. Even if we had no volunteer editing experience as a team, I think the key difference between this and the treatment experiment you referred to is that we've been transparent about our actions before going forward with it."

Aaron points out that Huggle users come across hundreds of potential editors every day, and a surprising proportion of these editors are testing whether they can, in fact, edit Knowledge (XXG) by damaging an article. "We suspect that the reaction these potential editors receive affects whether they'll register an account and try contributing productively. We hypothesized that the tone of the welcome/warning message could be an important factor in this decision. We have Hugglers testing a few variations of the 1st-level warning message to find out if we're right."

So can we expect more experiments like this in the future? Steven says he could probably do an entire Signpost report just on this topic. "But let me give it a quick shot by saying that the recently released Annual Plan (see link above) will give you a very good idea of what direction we're focusing on, as well as activity on mediawiki.org, the tech portion of blog.wikimedia.org, and the impending software deployments page. We try to make sure to push a message locally here when new experiments with features or anything else is happening, but those three places are where to look if you're interested in these topics in the future."

Discuss this story

"Fewer than half of the newbies investigated received a response from a real person during their first 30 days". I think we really dropped the ball here. Interaction is a major way to recruit newbies and hopefully turn them into "regulars". OhanaUnited 05:18, 2 August 2011 (UTC)

I agree: personal mentoring is the key, but to allocate such resources requires the identification of the most likely newbies. How to do that? Perhaps some more focused research questions? I wonder whether the research project will involve the gathering and coding of qualitative data from newbies/anons. Tony (talk) 08:43, 2 August 2011 (UTC)

One does have to be careful, though; I fully hope that we drive the vandals and SEO upstarts away, which I'm guessing is probably over half of all new users at this point. I suspect the percentage will be dramatically better once we implement a requirement to become autoconfirmed to create articles; there's no possible way we can leave customized messages for all of the people we encounter. The Blade of the Northern Lights (話して下さい) 19:09, 2 August 2011 (UTC)

Hi, thanks for the comments on this topic! We are definitely going to be qualitatively analyzing the edits which all of these users make after receiving each of the different warnings. This is actually our primary way of evaluating the success of each of the templates – a simple 'do they continue to edit' isn't good, because we don't want persistent vandals and spammers to keep editing. Then we will be able to run a bunch of interesting analyses on how different kinds of new users react to these different messages. It will be interesting to see if, for example, the more personalized warnings drive away vandals but not link spammers, or if the warnings with teaching messages are better at "converting" users who make test edits into good content contributors. As to the time needed to personally interact with new users, this is a definite problem that we are very interested in, and we are working on trying to model which new users are more likely to become good contributors in the future. We are thinking of a new user welcoming suite like Huggle, but where you can look at a newbie's first few edits and then leave one of a dozen or so targeted welcome messages. So if you see a user fixing a lot of spelling errors in articles about Canada, you'd be able to thank them for copyediting and invite them to join WikiProject Canada in just a few clicks. And if you have any other comments, questions, or suggestions, I'd be happy to hear them. StuGeiger (talk) 20:20, 2 August 2011 (UTC)

Thanks, Stu. You said, "we are working on trying to model which new users are more likely to become good contributors in the future" – this is the most important thing I've heard in this discussion. I think we'll be hoping you can find sufficiently distinctive patterning as early as possible in the editing history of the newbie-pluses (the ones we want to keep) and the newbie-minuses (vandals, link-spammers, and paid political/corporate operators). I suppose it will be a combination of factors such as (i) the linguistic patterns, (ii) the locational distribution of the edits (which pages are edited), and (iii) the temporal distribution of the edits. How these three aspects interact could do with some heavy-duty stats analysis, and of them, the linguistic is likely to be the most challenging and deepest (a research delimitation is required, I think). Perhaps two critical concerns will govern the efficiency with which the problem can be addressed: (i) how long into a newbie's edit-history the patterns become clear, and (ii) the extent to which they can be identified by a bot (including whether a bot could do the initial "easy" filtering and pass a minority on to human eyes for higher-level sorting to identify the promising newbie-pluses for human interaction – a three-tiered filtering, as it were). Of particular interest might be the grey area of newbies – not those who will clearly stay and those who clearly won't (or who we clearly do or don't want to stay), but those where the final stage, human interaction, has a reasonable likelihood of making the difference, of bringing them over the line. Finding the best bot/human mechanism for rationing the supply of "newbie mentors" to this prioritised editorial demographic, IMO, is the challenge. After that, a future project could work on developing guidelines for the best ways in which to interact with newbie-pluses. Tony (talk) 02:41, 3 August 2011 (UTC)

Text is available under the Creative Commons Attribution-ShareAlike License. Additional terms may apply.
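Tony's suggestion – a bot doing the "easy" first-pass filtering on linguistic, locational, and temporal signals, with humans handling the grey area – maps onto a standard scored-triage setup. Here is a deliberately simplified sketch of that three-tier idea; the feature names and weights are invented for illustration and are not learned from any Wikimedia data:

```python
import math

# Hypothetical per-editor features summarising early edit history:
# typo_rate (linguistic), distinct_pages (locational),
# edits_per_day (temporal). Weights are made up for illustration.
WEIGHTS = {"typo_rate": -2.0, "distinct_pages": 0.3, "edits_per_day": 0.1}
BIAS = -1.0

def keep_probability(features: dict) -> float:
    """Logistic score: a toy estimate of the probability that a
    newcomer is a 'newbie-plus' worth a human mentor's time."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def triage(editors: dict, bot_threshold=0.2, mentor_threshold=0.6):
    """Three-tier filter: the bot discards clear newbie-minuses,
    humans review the grey area, and clear pluses go to mentors."""
    tiers = {"discard": [], "review": [], "mentor": []}
    for name, feats in editors.items():
        p = keep_probability(feats)
        if p < bot_threshold:
            tiers["discard"].append(name)
        elif p < mentor_threshold:
            tiers["review"].append(name)
        else:
            tiers["mentor"].append(name)
    return tiers
```

In a real study the weights would be fitted to labelled edit histories, and the two thresholds would be tuned to ration the limited supply of human mentors – exactly the trade-off the comment raises.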
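The per-template evaluation Stuart describes – seeing which warning templates "convert" which kinds of users – reduces, in its simplest quantitative form, to a grouped conversion rate. A minimal sketch with toy data and invented field names, not the project's actual pipeline:

```python
from collections import defaultdict

def conversion_by_template(events):
    """events: iterable of (template, made_good_edit_after_warning)
    pairs, one per warned user. Returns template -> fraction of
    warned users who went on to make a constructive edit."""
    totals = defaultdict(int)
    good = defaultdict(int)
    for template, converted in events:
        totals[template] += 1
        good[template] += bool(converted)
    return {t: good[t] / totals[t] for t in totals}

# Toy illustration only:
sample = [
    ("default", False), ("default", True),
    ("personalized", True), ("personalized", True),
    ("teaching", True), ("teaching", False),
]
rates = conversion_by_template(sample)
```

The interesting analyses in the comment come from slicing `events` further by user type (vandal, link spammer, test editor) before computing the rates.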
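The "statistical algebra" Aaron mentions for sizing the experiment is standard power analysis. As an illustrative sketch (not the team's actual calculation), the usual normal-approximation formula gives the per-template sample size needed to detect a difference between two retention proportions; the example proportions below are invented:

```python
import math

def sample_size_two_proportions(p1: float, p2: float,
                                z_alpha: float = 1.959964,
                                z_beta: float = 0.841621) -> int:
    """Per-group n to detect p1 vs p2 with a two-sided z-test.
    Defaults correspond to alpha = 0.05 and 80% power."""
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# e.g. to detect a rise in retention from a hypothetical 5% to 8%:
n = sample_size_two_proportions(0.05, 0.08)  # ~1059 users per template arm
```

Small expected differences between templates drive the required number of warnings up quickly, which is why such experiments need thousands of observations.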
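The targeted-welcome suite sketched here (look at a newbie's first edits, pick one of a dozen tailored messages) could be prototyped as a simple rule table. The rule predicates and template names below are hypothetical, not Huggle's or the project's actual ones:

```python
# Hypothetical rules mapping observed early-edit patterns to a
# targeted welcome template -- invented examples for illustration.
RULES = [
    (lambda e: e["category"] == "spelling", "thanks-copyedit"),
    (lambda e: e["category"] == "references", "thanks-sourcing"),
]
DEFAULT_TEMPLATE = "generic-welcome"

def pick_welcome(first_edits):
    """Return the first targeted template matching any of a newbie's
    early edits, falling back to a generic welcome."""
    for edit in first_edits:
        for matches, template in RULES:
            if matches(edit):
                return template
    return DEFAULT_TEMPLATE
```

A real tool would add rules keyed on article topic as well, so a run of spelling fixes to Canada articles could trigger both a copyediting thank-you and a WikiProject invitation.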