Knowledge:Requests for comment/English Knowledge readership survey 2013


largest sample that I can see being useful is about 4,000, and that shouldn't be too difficult. The margin of error would be about 2%, which is more precise than needed to decide any real questions. I'll suggest that readers' opinions, given their presumed lack of knowledge of many details of interesting questions, shouldn't enter any decision-making process here unless there is more than a 10% difference between readers favoring one option over another. But I think it would be a real call to action if readers were split 90%-10% on a question that editors were split 50-50 on. I'll also suggest that answers to any specific question be taken twice before being considered as a cause for action. The reason is not statistical, but more related to current events and news. Opinions may change over time because of an election, terrorist bombing, or other news event, but we'd like to make sure the opinions are fairly stable. Thus twice-a-year surveys (say April and October) would give us results that we can use in a reasonable amount of time.
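The "about 2%" figure above can be sanity-checked with the standard margin-of-error formula for a simple random sample. A minimal sketch (the sample sizes come from this discussion; the function name and worst-case p = 0.5 assumption are mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a surveyed proportion,
    assuming a simple random sample of size n (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 4,000 gives roughly a 1.5% margin, consistent with "about 2%";
# the 630-person omnibus survey mentioned below would be closer to 4%.
print(f"{margin_of_error(4000):.1%}")
```

Note this assumes simple random sampling; a self-selected banner sample would have additional, unquantified bias on top of this purely statistical margin.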
have an account (and besides, cookies are required for you to stay logged in as you go from page to page). Are these surveys only intended to be displayed to logged-in users, or to everyone? Speaking personally, I hate surveys on web sites. If we're considering bothering readers with surveys, such surveys should be a last resort, and should only be used for the most absolutely important Knowledge issues (in my opinion); issues that cannot otherwise be decided by editors using the usual methods of discussions and RfCs, issues that cannot otherwise be decided without input from non-editors, and issues for which gathering survey input from non-editors has a very strong justification. I currently cannot think of any issues that rise to that level of importance.
to exclude dynamic IPs, too, if that's possible (I'm an IT moron), to reduce the possibility of gaming the results. So, this might have to be a survey not of all readers, but of one reader per static IP ... which would still tell us something about the general readership. I would oppose limiting the survey to readers of the main page, that would unnecessarily vastly reduce the sample size, and so the strength of the results and the franchise, for no apparent gain. It could be that the sample accessing en.WP via the main page is somehow qualitatively different from that accessing articles via a search engine, too. --
there is a consensus that having input from non-editors would be a useful/necessary data point? Until that situation comes up, I don't see the point in discussing this proposal. WMF apparently already has the technology to run a survey (since they've done it once in the past), so there is really nothing to discuss until there is actually a situation that requires a survey. I don't see the point in having an annual survey where people try to come up with things to survey about. I think it would be much better to wait until the need for a survey arises organically, and then deal with it at that point.
), and even the higher-cost implementation phase is not too bad - especially if we can borrow WMF's tech for the survey. Furthermore, there's an inevitable Catch-22 in what you're asking: failing to bring specific topics now brings accusations of "solution in search of a problem" (I cannot adequately convey how much I hate that phrase BTW); but I can only imagine that bringing specific topics and questions at this point would result in at least some responses of "aha, you want solution X to problem Y and reckon a survey might help! No, that's just a clever form of

(an Australian pollster). For three questions in their "omnibus" (multi-client, shared cost) phone survey of 630 people 14 years old and over, matching the Australian sex/age demographic, they charge $4590, and $935 per extra question. Presumably, this is negotiable, and presumably much cheaper per question if we commission our own large survey. I would hope we can achieve what we want for less than $15,000 in each of those three countries. We can consult the Foundation on this, and learn from their experience with

; this was run by the Wikimedia Foundation and focussed on technical issues. The proposal is to use a similar approach for issues of concern to English Knowledge, where the community thinks specific questions can usefully be asked of readers. A survey would not exclude active users, but would seek to identify them (eg by self-declaration of a rough editcount and rough account age) so that statistical analysis can be done to identify any systematic differences between readers and active editors.

, since having multiple, competing banners up at the same time is likely to result in the worse response rates for both.
There might be an advantage to a 13-month cycle (or any number not easily divisible by 12), or to spamming only a small fraction of users on a rolling cycle, because you could mitigate some of the seasonal aspects (e.g., all Germans in Spain for August, or all American schools closed, etc.)
in Australia, UK and USA, we can compare the results with identical sex and age cohorts from those countries in the online survey. The sample limiters and factors affecting response in phone surveys are different from those affecting an online survey, so if the answers are similar from the different methods, you can have more confidence in the representativeness of each. Not perfect, of course. Nothing is. --
(A) Questions where bias is less of a concern include: Were you able to find the information you were looking for? Which navigation aids did you find helpful: (a) the search box; (b) in-sentence blue-linking to other pages; (c) the "See Also" section of links at the bottom of the page; (d) Category pages; (e) Link summary boxes (templates); (f) an external search engine such as Google.
would appear as the same single user, even though it would be quite unlikely that all 3000 people would have the same preferences regarding surveys. Or, what about people who access Knowledge from dynamic IP addresses that change continuously? Their preferences would not be saved. I don't mean to dwell on the technical details, but I am dwelling on it because I
(B) Questions where bias is concerning would include: "Should Knowledge have a "safe search" option, so readers can selectively filter violent, sexual, or other potentially offensive images?" This is a question on which some existing Wikipedians have strong opinions, and it is unrealistic to ignore the possibility of these editors stacking the survey responses.
majority didn't want us to delete so many articles that we think are "not notable" then it would be difficult to go against that. A commercial organisation can consult its customers, find out their views on say pricing, and then make the best decision for their shareholders, but they don't have to publicise their survey and disclose how they have used that data.
important resource is the time of our volunteer editors, and the risk of a survey is that it would be used to get readers to give their views on matters which require substantial extra time from volunteers. Not only do most readers not edit, but I suspect most have no idea as to the size and health of our editing community.
this idea on the grounds that it won't achieve much. Knowledge functions perfectly fine, with consensus derived from editors. I really do not see how reader input helps us in any great way. I really wouldn't want to see matters put to readership polls instead of RfCs, allowing community consensus to
One of the issues is that once such a survey is done once, it may be tempting to do it again - and since there are substantial setup costs to making it work once, this does make sense. So it may be best to consider the issue of frequency now. I would suggest that such a survey be no more than annual;
Nonsense. Surveying is not rocket science. The worst way to run a survey is to spam millions, annoying most of them, and then using the few responses that come back. That's what this sounds like. Busy people don't respond to non-targeted surveys. Enwiki editors come from many countries, and from diverse backgrounds.
Scottywong: I agree it should be used sparingly, and I agree with Rd232 that annual seems about right. Regarding the ability of shared IP users to see/dismiss the sitenotice (or for that matter vote), we may have to live with some readers being excluded, if there is no easy solution. We probably need
Re shared IP dismissal: yes, I thought of that: once one person has dismissed the message, it'll be saved in that IP's subpage, and no-one else on that IP will see the message. That's not ideal, but not easy to fix; it might just be something we have to live with. But it can certainly be handled
postulate a situation where all users can effectively dismiss the notice and not have it return against their wishes. I'm not sure that such a situation exists. And if that's the case, then we can conclude that any survey is, by definition, going to aggravate some percentage of our readers, and the
I agree it should be annual, I don't agree with 1st Jan. For an awful lot of people that is the day after New Year and something of a non-standard day, it also means that the first one is a long way off. My suggestion would be for something like the first Wednesday in October. That's close enough to
I can't see the survey being needed for issues where a swift resolution is needed ("quick, let's have a survey!"), it should really be for long-term issues, priority-setting, etc. And drafting sensible questions and the community approving the results of that drafting process will take time anyway.
That said I'm not against doing a readership survey where the questions are set and agreed by the community, and the options put to readers are all ones that we could live with. For example I can't see us standardising on one variant of English without losing a large proportion of editors, but if the
compromise on CE/BCE v AD/BC, our plethora of weights and measures and even currencies, and the way we encompass multiple written variants of English, shifting from decisions designed to keep as many editors as possible to ones based on readership preferences could lead to a radically different pedia.
Many of the most difficult discussions here have been resolved by compromises that were designed to keep both sides of the debate still editing on the pedia. But surveys, and their slippery-slope sister referenda, are rarely designed to hammer out compromises. Whether it comes to our diverse citation
No, not if it is conducted by invitation on mainspace pages. It would be ripe for extreme non-response bias. The questions so far suggested touch Knowledge policy, and without bias control it invites gaming. Surveys need to be driven by a need for information. I see no information need driving this,
for what's currently known about reader demographics. Generally speaking, they are less male, less young, and less white than our editors. This survey covered people in the 16 countries that represent 70% of all Knowledge page views. I don't know whether it's possible to disaggregate the data to
Regarding response bias, it is inevitable in all surveys, but it doesn't make surveys valueless. One thing we can do to determine the extent of bias in this method is to run a concurrent survey using a different method, such as a general population telephone survey. If the phone survey is conducted
Fair points. Yes, questions would need to be hammered out through a good discussion process. I'm not ruling out this will prove impossible in practice, and/or that the community would not approve the resulting questions to go live - so no survey would actually happen. As to the issue of what weight
Currently the bulk of decisions are made by the editing community, with occasional override from the Foundation and/or the devs. If we start surveying the readership then theoretically power passes to them, but more realistically, as with countries that use referenda, power accrues to those who set
Re content - yes, absolutely it should be used sparingly. My gut right now suggests that it might work well as an annual thing, where let's say every 1st January there is a reader survey, if in the preceding months potentially suitable issues have been raised and questions have been drafted, and if
reader survey on a website which the survey is about is "spam". I would also question whether an appropriate notice would cause more than very, very mild irritation. Most people are happy to be given the opportunity to voice their opinion about things they care about, and so even the very many who
So you've gone from arguing that we should do a hugely expensive offline survey, to arguing that we should pretend en.wp editors are representative of readers as long as we account for demographic differences... say what? And as part of that argument you've gratuitously asserted that advertising a
You're suggesting that we track anonymous users' dismissals by IP. How would we deal with people who access Knowledge from shared connections? For instance, there are 3000 people on a college campus, and all of their internet connections go through the same public IP address. To Knowledge, they
is a nonsensical criterion: everything can be decided by editors (even if it's by lack of consensus meaning the status quo wins), and ultimately will be even if the reader survey becomes a thing. The question is whether some of the time the decisions of the community of editors will be informed by
The last set of questions would have to be very carefully selected of course - most likely by our usual consensus process, but focusing on issues where we don't have a consensus among editors - to make the results useful. Even a 60-40 split in the readership would change the dynamic of many RfCs.
One of the commonest ways to rig discussions via referenda is to divorce projects from their costs and implications. "rebuild school x" and "raise tax Y by 2% in order to rebuild school x" are likely to get very different results, especially if you ask taxpayers. In the case of Knowledge the most
I agree with you about the content of the survey, but I would still stress that any survey needs to have a very strong justification for bothering our readers. There should always be, in my opinion, a strongly demonstrated need for input from the readers, not just a vague "hmm, I wonder what our
The main page averages about 10 million hits per day. If 5% of users going to the main page have cookies disabled, then we're inconveniencing half a million people per day. The dismissal could be recorded for each account even if cookies are disabled, but that only works for users that actually
I'll also suggest just sampling non-logged in folks ("readers"). These IPs don't have much of a say in what happens on Knowledge, but I propose that they are as important as the editors. What the editors do, doesn't mean a thing without readers, and what readers do doesn't mean a thing without
Readership by day-of-the-week and time-of-day is likely related to religious and geographical groups, so we can make the effort to even out these effects by sampling say in 4 time-periods each day for 7 days. Readership proportions (page views) by d-o-w and t-o-d should be readily available.
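As a sketch of how that evening-out could work, here is proportional allocation of a fixed sample across the 7 x 4 day/time strata. The page-view counts below are invented placeholders (the real ones would come from the page-view statistics mentioned above), and the function is mine, not part of any proposal:

```python
# Hypothetical page-view counts per (day-of-week, time-period) stratum;
# real numbers would come from published page-view statistics.
pageviews = {(day, period): 250_000 + 10_000 * day + 5_000 * period
             for day in range(7) for period in range(4)}

def allocate(total_sample, weights):
    """Split a fixed sample size across strata in proportion to their weights."""
    total = sum(weights.values())
    return {s: round(total_sample * w / total) for s, w in weights.items()}

quotas = allocate(4000, pageviews)
print(sum(quotas.values()))  # close to 4000, up to per-stratum rounding
```

Proportional allocation keeps each stratum's share of invitations matched to its share of page views, which is what evens out the day-of-week and time-of-day effects.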
I agree that this proposal seems like a solution in search of a problem. It seems like this proposal is about enabling the ability to create surveys, and then once we've agreed on that, we go out and start looking for things to survey about. Are there any current discussions on Knowledge where
I'll suggest that we strongly focus on getting a sample rather than trying to get "everybody's opinion". We can't get a census of everybody, and it just causes problems trying to do it. Rather we should recognize that we'll be taking a sample and try to figure out the best way to do this. The
I participated in that survey and read the results. It's a stretch to call the "responder demographics" equivalent to "reader demographics". I am personally aware of two significant groups of readers that seem under-represented among the responders: primary school children, and 40-70 year old
Well, yes, the drafting process needs to be sensibly managed, and especially in the final stages appropriately focussed on keeping the survey short and the questions structured to produce information likely to be useful. I would also stress again that the drafting/discussion process could be a
Ensuring that such a survey didn't introduce its own bias would require using the services of a professional outside body. And it would have to be limited to one or a few countries, rather than all the countries Knowledge is used in. And it would still be heart-stoppingly expensive. I don't think
sort of response to opt out. I have a very hard time imagining an issue so fundamental to Knowledge that the Knowledge project -- one based on the idea that anyone can opt in and contribute -- should try to coerce (even in a minimal way) comment from those who choose to retain their status as
Now it could be argued that we could conduct a readership survey and treat it purely as a consultative exercise. But in practice that would be difficult to do with a public and published survey. If the press knew that for example 52% of readers preferred American English spellings or that the
Aside: the definition of 'fundraising season' is changing. If we have a banner campaign season, many of the campaigns may not be requests for funds, as it seems we can get the funds we need effectively through better year-round requests and more efficient use of our current donor lists.
This RfC starts by saying that "It is proposed...". This sounds like it's a done deal but who has proposed this and where is the draft proposal? If it isn't a done deal and this is just kite-flying, then please say so. At the moment, it is hard to comment on something which is so vague.
the survey has - well if the results are significantly different from what editors think should happen, then it ought to be possible to justify that disagreement on some rational basis. At any rate, I think it can only contribute to a healthy debate to know more about what readers think.
the questions. Surveying and referenda are not consensus based decision making methods, instead they tend to focus outcomes on a narrow set of predefined answers. So if we implement a readership survey it is crucial that the setting of questions is done consensually by the community.
professionals. It seems to me that the "responder demographics" better reflect the frequency of idle access by readers. I'm concerned that simple surveys may refocus efforts to serve the most frequent users, biased to idle users, to the detriment of our prime objective. --
I like the idea of doing something like this regularly. There are many reasons to make surveys easier to run, and likewise reasons to test such tools on en:wp where there are a lot of statisticians and scriptwriters in the audience to manipulate and chart any results.
One of the advantages of a regular reader survey is that it might work as a new editor engagement tool as well. In exposing some of the issues and choices faced, there's clearly an opportunity to invite readers to become editors in order to engage with specific issues.
of asking readers' input, and it's not realistic to expect them to start considering the possibility in a specific situation until there's some semblance of a mechanism for actually getting input. Establishing the principle and a discussion framework is pretty low-cost
I never meant that we should do an expensive survey. I'm concerned that self-selecting respondents might be very non-representative. Asking respondents who they are and why they were reading today is good. Do we know the demographics of our readers? I'm guessing not.
There is a limit to the number of questions we can ask. What are relevant inclusion criteria? Create a subpage (Knowledge:Requests for comment/English Knowledge readership survey 2013/A criterion) for discussion of each proposed criterion and add a link here.
readers would say about this". Nor should the survey be used as a means to overturn something that already has consensus on Knowledge (i.e. using surveys as a form of forum shopping). It's just my opinion that it should be used exceedingly sparingly.
Similarly we could find out our readers' views on the contentious issue of whether to have an image filter. But in my view we should only ask the question if we are willing to act on the answer, (disclosure, I'm the principal author of
be viable for this year without being overly distant. As far as I know it is outside most religious festivals and pretty close to the Northern hemisphere Autumn equinox. So a pretty close approximation to a boring standard day.
I think "readers" can be readily divided into types: browsers; fact finders; those seeking introductory material on a specific subject; those reading for pleasure; those seeking to copy or paraphrase some description of an already known subject.
The core idea is to get input on major issues concerning the future of Knowledge from readers, to supplement the usual model of discussions dominated by highly active Wikipedians. A model for this approach is the
Fair enough, I can agree with that. I just hope that that's what this proposal is used for, as opposed to a situation where everyone says "oh, we have surveys now, let's find fun questions to ask our readers!"
readers, at least). Second, the alternative to a survey like this is not some perfect census; it's just the status quo, relying solely on editors and not even attempting to get input from a wider audience.
I don't think this can be reasonably called a "Readership survey" with the more-than-likely massive non-response to response ratio. I know many readers, using Knowledge frequently, who will not respond.
is worth thinking about, yes. First, we can collect some basic demographic data, and compare it to population statistics (this would give us some understanding of differences between responders and
(C) would only need reasonable care, such as including questions about who the responder is and why they are seeking the information; questions that, when analysed, reveal faked submissions.
The "real aim" is simply to create a mechanism for getting wider input into discussions about shaping the direction of Knowledge. Solely relying on current editors is not the optimal way to run
With Knowledge usage being so common (everybody I know has used Knowledge), I think a representative survey of readers should be done by approaching random people in the real world. More at
about readers' needs and perhaps more holistically about the future of Knowledge than "this would be nice. that's broken. Wish more people would do that..." which is what we mostly have now.
(C) Questions of borderline bias concern might include: "Did Knowledge have sufficient information to meet your need?" (this may touch inclusionism/notability/advocacy/fringe issues).
What are you on about? Every IP has a talk page, and can have a subpage of the talkpage to record a dismissal. But this is technical detail: postulate the condition that users
Well, if that's an insurmountable technical issue (in theory, the dismissal could be recorded for the account, eg in a usertalk subpage), we can have it just on the Main Page.
Dismissable for all users whose browsers are configured to accept cookies, right? Anyone whose browser doesn't like cookies will be forced to dismiss it on every article?
Bias may or may not be a problem. I think it depends on whether responders have a conflict of interest between providing honest helpful information, and pushing a barrow.
With targeted groups, we could access sufficient numbers of each. The questions are: What sort of people are we interested in, and what do we want to know from them. --
I'm not suggesting we do the offline survey every time we do an online survey but I do think it would be prudent to do one at the outset, and again from time to time. --
(A) These issues should be regularly, even continuously, surveyed online. In fact, I think this question should be invited by link on every unsuccessful search result.
editors. Surveying readers only would just make up for a clear bias in our decision making processes. Editor-only surveys could be conducted separately if needed.
Create a subpage (Knowledge:Requests for comment/English Knowledge readership survey 2013/Your question) for discussion of your question and add a link here.
(D) An important question that is ignored by an online Knowledge invitation to respond is: "What things prevent you from being able to access Knowledge?"
Can you be more specific about what the real question or aim is? Broad surveys are usually bad surveys. Targeted surveys are easier to do properly. --
readers strongly preferred it we could make English v American English a user preference in the same way that the different versions of Chinese are.
I don't see the harm in doing a survey as long as the limitations of the survey are kept in mind when making it and when analyzing the responses.--
This page is primarily about the principle and mechanics of a survey. Specific questions or topics should be discussed on a dedicated subpage.
My preference for this sort of survey is the same as for political ones. Only consult about options that you are willing and able to deliver.
Fair enough that 1 Jan may not be the best (also because of Christmas in the latter preparation stages). It's just the most obvious date.
Should Knowledge have a "safe search" option, so readers can selectively filter violent, sexual, or other potentially offensive images?
(D) Requires an offsite survey, or maybe it should be called "investigation", and certainly does not require professional services.
It is hereby proposed that we conduct a survey of our readers, with a dismissible notification on every English Knowledge article (
In fact it might actually work well as an explicitly annual thing, where let's say every 1st January there is a reader survey, if
What proportion of readers are aware that they can edit? Of the readers who are aware that they can edit, but choose not to, why?
who conducted the 2011 reader survey. The total sample size for that study was 4000 with a sample of 250 in each country.
Well yes, some "why did you come here today" sort of question might well be helpful to understand responders better.
and dismissable (in the sense that dismissing once would dismiss it on all articles). Should be OK if done right.
and a max of 5 for questions related to policy (would you prefer more photos, more video, safe searching, etc.)
the only image filter proposal where the workload falls on the filterers rather than the existing community
in the preceding months potentially suitable issues have been raised and questions have been drafted, and
(B) A high profile contentious issue like safe-searching may require near-professional care in surveying.
Are readers aware of the presence of a community of editors and discussion forums (e.g. Reference desk)?
My point is that possible biased responses need to be considered while proposing the question. --
Fair enough. I said above that in judging responder demographics we should compare them with
knowledge of what readers (people who don't contribute to discussions) actually want.
if it's agreed by sometime in December that the result is worth putting to readers.
326:
it's agreed by sometime in December that the result is worth putting to readers.
Clearly the number of questions would need to be limited sharply, say max 15 -
1111: 1078: 1011: 970: 939: 900: 865: 699:
Regarding the cost of offline surveys: I've just spoken with Bruce Packard of
645: 612: 570: 544: 517: 477: 327: 285: 271: 239: 214: 165: 1278:
Do IPA symbols help you with word pronunciation? E.g.: [dyʃɛn] / /duʊˈʃɛn/
Agreed that a conflict with fundraising season is probably best avoided.
effectively dismiss the notice, and leave the rest for implementation.
in a way that ensures no-one is ever aggravated by non-dismissability.
the potential to significantly improve our engagement with our reader
article? That would get supremely annoying after about 15 seconds.
and it looks pointless. Knowledge should not waste readers' time. --
Oof. Don't you see the chicken-and-egg problem? Editors never even
the potential to improve the conversion rate of readers to editors
! Debate it on the merits!" or something along those lines.
How often do readers not find what they are looking for?
For ease of processing, questions must be suitable for
valuable exercise in itself in encouraging editors to
meta:Research:Knowledge Readership Survey 2011/Results
usage of surveys needs to take that fact into account.
What's the demographic distribution of our readers?
[Discussion body not recovered from extraction; surviving section headings of the RfC:]

- Should we conduct a survey?
- Discuss the idea
- Publicising the survey
- Overriding the community?
- Non response bias
- Sampling not census, twice a year, readers only
- Only ask options that are viable
- Resolve Market Research
- Survey frequency - annual?
- What's the question?
- Inclusion criteria
- Suggest a question
- Spelling variants
- AD/BC v CE/BCE
- Main Page expectations
- What is the point?

See also: m:Research:Knowledge Readership Survey 2011
