Wikipedia:Wikipedia Signpost/2012-04-23/Investigative report
Spin doctors spin Jimmy's "bright line"
It began on 17 April with a misleadingly titled report on the Newswise site, "Survey finds most Wikipedia entries contain factual errors". This was reprinted the same day by the online research news site Science Daily ("Most Wikipedia entries about companies contain factual errors, study finds"). Within days it had gone viral on internet news sites all over the world. The story was picked up by the American ABC news blog ("Wikipedia: public relations people, editors differ over entries"), The Telegraph in the UK ("Six out of 10 Wikipedia business entries contain factual errors"), the Indian edition of NYDailyNews ("Wikipedia entries full of factual errors, says researcher"), and The Register, a British technology news and opinion website ("Let promoters edit clients' Wikipedia entries"). One outlet, Business2Community, went so far as to announce that "a new study published in the Public Relations Journal shows that a stunning 60 percent of articles about specific companies contained factual errors."

At the centre of the hubbub is a set of research results whose author, Pennsylvania State University's Marcia W. DiStaso, claims they "will help establish a baseline of understanding for how public relations professionals work with Wikipedia editors to achieve accuracy in their clients' entries". The study surveyed nearly 1300 public relations and communications professionals to analyse how they work with the English Wikipedia. Funded by Penn State's Arthur W. Page Center for Integrity in Public Communication and recently published in Public Relations Journal, the paper goes by the title "Measuring public relations Wikipedia engagement: how bright is the rule?", a play on Jimmy Wales's "bright line" – the boundary he argues people with a conflict of interest in a topic should never cross by editing its articles directly.
The results, which have cast a shadow over the English Wikipedia's company articles, have relevance to the ongoing debate about whether paid editing should be officially permitted on Wikipedia. "Public relations professionals have their hands tied," DiStaso told ABC. "They can only make comments on discussion pages suggesting corrections, and wait for the public to reply." She believes that while waiting for a reply, a company may be caught in a crisis of public image: "In today's fast-paced society, five days is a long time."
Scrutinising the 60% claim
The claim that 60% of Wikipedia's company articles contain factual inaccuracies, however, doesn't stand up to basic scrutiny. We have bulleted the quotation for ease of reading:
- "When asked if there are currently factual errors on their company or client’s Wikipedia articles,
- 32% said that there were (n=406),
- 25% said that they don’t know (n=310),
- 22% said no (n=273), and
- 22% said that their company or client does not have a Wikipedia article (n=271).
- In other words, 60% of the Wikipedia articles for respondents who were familiar with their company or recent client's article contained factual errors." (our emphasis)
The problem is hidden in the emphasised clause – "for respondents who were familiar with their company or recent client's article". This cleverly allows DiStaso to exclude from the sample the 25% of respondents who ticked "don't know". DiStaso told The Signpost, "if all respondents were familiar with their articles this could go either way – more 'yes' answers would make it a higher % and more 'no' answers would make it lower." But although it is valid to exclude respondents whose companies have no article (the last bullet), excluding the don't knows – which boosts the factual error rate to 60% (406 / [406 + 273]) – raises difficult issues. Including the don't knows would yield 41% (406 / [406 + 310 + 273]). This problematic calculation was independently pointed out by Tilman Bayer (HaeB) of Wikimedia's communications team, who has postgraduate degrees in mathematics and co-edits the Wikimedia Research Newsletter. The true percentage is almost certainly not 60.
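For readers who want to check the arithmetic, here is a minimal sketch in Python reproducing both figures from the response counts published in the paper:

```python
# Response counts from the published survey question (1,260 of the 1,284
# respondents ticked a box; 24 left the question blank).
yes_errors = 406   # said their company's or client's article contains errors
dont_know  = 310   # said they don't know
no_errors  = 273   # said there are no errors
no_article = 271   # said their company or client has no Wikipedia article

# DiStaso's 60%: the "don't know" responses are dropped from the denominator.
rate_excluding_dk = yes_errors / (yes_errors + no_errors)
print(f"{rate_excluding_dk:.1%}")  # 59.8%, rounded up to "60%"

# Keeping the "don't know" responses in the denominator gives the lower figure.
rate_including_dk = yes_errors / (yes_errors + dont_know + no_errors)
print(f"{rate_including_dk:.1%}")  # 41.1%
```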
One of the problems with the way the findings have been disseminated is the omission of the critical clause from most news reports. Most journalists have apparently not grasped that the "60%" claim rests on a narrow statistical artefact. They can hardly be blamed, when the press release by Dick Jones Communications – the PR company that represents DiStaso's college – started the ball rolling ("Survey finds majority of Wikipedia entries contain factual errors").
"I think it's a mistake to give their nonsense any attention whatsoever. Wikipedia is not for sale."
Jimmy Wales
In addition, respondents were presented with a list of categories of errors that mixed the political and subjective "Criticisms" with more objective criteria such as dates, board membership, and even spelling in the article text. Indeed, more than one in five respondents ticked "incorrect spelling" as a category of inaccuracy, suggesting the possible scenario in which a single US spelling in an article about a British company might be enough to classify a whole article as inaccurate. DiStaso told us, "I suppose it is possible but probably unlikely that [such a spelling inconsistency] would be considered a 'factual error' [but] there could be other words that if misspelled could be considered factual errors such as the misspelling of a product name." We wonder whether this distinction was clear to respondents.
Another potential flaw in the methodology is that respondents were not asked to read the article on their company or client and identify the errors as they saw them; this would have enabled reliable verification of what the perceived errors were and the extent to which they could genuinely be considered errors.
Further results
"The survey was self selected. It wasn't sampled research. This does pull into question the results. I'm a Founding Fellow of the Society for New Communications Research, and our Research Head, recommends all fellows don't only conduct online surveys."
John Cass, CREWE Facebook page discussion
A key claim in interpreting the data is that public relations professionals find responses to their talk-page requests for changes to articles either slow or non-existent (nearly a quarter reported no response at all, and 12% said it took "weeks" to get a response). However, the analysis appears to disregard the size, age, quality, and hit-rates of the articles on which these experiences were based.
The data also led to the claim that PR professionals have little understanding of Wikipedia's rules for editing and the protocol for contacting editors to have facts altered – but just how a lack of understanding interacts with political considerations was not made clear: "Only 35 percent of respondents were able to engage with Wikipedia, either by using its "Talk" pages to converse with editors or through direct editing of a client's entry. Respondents indicated this figure is low partly because some fear media backlash over making edits to clients' entries. ... Twenty-nine percent said their interactions with Wikipedia editors were 'never productive'."
This lack of understanding by PR professionals of how to use Wikipedia's infrastructure, some of it explicitly set up for them, sits oddly with the article's strident argument that Wikipedia's policies need to be changed. Unusually for a scholarly journal, the key "findings" of the paper are displayed on the download site in a larger-than-life ad-like infographic, which is now appearing elsewhere on the internet.
Facebook lobby group
"It's exactly this disregard for the facts and the advocacy for erroneous conclusions that give Wikipedians (or any logical thinker) pause about how public relations coexists with the public interest. ... Those who created the erroneous headlines need to be held accountable for it, not the news outlets that repeated them."
Andrew Lih, Wikimedian author and journalism academic
There have been claims that Wales is or has been a member of CREWE. He told The Signpost in no uncertain terms: "(1) I am not a member of CREWE. I do not approve of their attempt to forcibly change Wikipedia policy by off-site coordination of paid advocates in a facebook group; and (2) I think it's a mistake to give their nonsense any attention whatsoever. Wikipedia is not for sale."
Phil Gomes, who launched CREWE and has played a major part in recruiting its members, responded to Wales's remarks: "CREWE is about exploring where company communicators and Wikipedians can work together towards the mutual objective of accurate entries. The primary outputs of this group have looked at the best ways that we can educate PR people to do right by Wikipedia. ... To characterize this as an 'attempt to forcibly change Wikipedia policy by off-site coordination of paid advocates in a Facebook group' is inaccurate and a bit exaggerated. No one is forcing anyone to do anything. Our most passionate contributors even describe themselves as Wikipedians, not PR people. Evidently, the 'public shaming' approach to past bad PR behavior is not a deterrent. We're trying to be proactive by educating instead. If someone considers any of this to be 'nonsense', then that's a shame."
Andrew Lih, author of The Wikipedia Revolution: How a Bunch of Nobodies Created the World's Greatest Encyclopedia and an associate professor at the University of Southern California's Annenberg School for Communication and Journalism, has taken DiStaso's work to task on the CREWE page itself. "I will state the question again, which has been avoided. Can you in good conscience and as a good academic use that report to stand by the words '60% of Wikipedia articles had factual errors' (these are DiStaso's own words to accompany the announcement of the report). Or stand by the PRSA headline: 'Survey finds majority of Wikipedia entries contain factual errors' (PRSA's words to announce the report). These two statements must be strongly rejected, or there is no chance to see eye to eye on having PR folks edit Wikipedia."
Science Daily reports DiStaso as saying that "the status quo can't continue. A high amount of factual errors doesn't work for anyone, especially the public, which relies on Wikipedia for accurate, balanced information. ... If errors are found or if public relations professionals believe content needs to be added or changed, they should refer to the [CREWE] Wikipedia Engagement Flowchart, available on Wikimedia Commons, for guidance on requesting edits." The flowchart, first posted to Commons on 2 April, is due to be finalised by the end of June. The Signpost notes that among other advice, the flowchart says that if an issue raised at the COI noticeboard has not been addressed within 48 hours, "you are now entitled to complain about Wikipedia in any forum you want."
ABC reports Jay Walsh, director of communications for the Wikimedia Foundation, as conceding that DiStaso has a point about the slow responses by Wikipedians to requests for corrections to company articles.
Discuss this story
I'm sorry, but this draft article itself looks like PR spin. It's also a very one-sided reflection of the discussions at CREWE ... and there is nothing about how best to address such errors as Wikipedia does contain, as we all know, whether it's in 60% of company articles or 40%. Overly defensive, and transparently so. --JN466 15:55, 23 April 2012 (UTC)
Bright line
Re: "a reference to the boundary that people with a conflict of interest in a topic should not cross under site policy." Jimmy Wales' rule for 'no direct editing' is not policy. COI only prohibits edits that promote an editor's interests above Wikipedia's; otherwise, it permits at least uncontroversial changes such as to spelling, grammar, statistics, etc. Also, COI is not a policy, it's a guideline. Ocaasi t | c 16:05, 23 April 2012 (UTC)[reply]
2010 study reference
You question Marcia DiStaso's credibility as a researcher by writing the following: "The Signpost notes that in a 2010 article in the same scholarly journal, the starting point contain hints that companies are doing it hard on the English Wikipedia: ..." I would like to point out that Marcia DiStaso's and my 2010 study (she did not write it by herself by the way) is wrongly used here to undermine her credibility. Please take a look at our conclusions in which we told PR pros to pretty much keep their hands off negative Wikipedia content and engage in conversations with Wikipedians, except for editing minor mistakes (which we thought was the policy when the paper was written). The main advice we gave back then was to keep an eye on the Wikipedia articles. That's standard PR procedure for other media formats. I'm not sure why that's controversial and would be used to undermine her credibility. Can you please clarify. Socialmediaprofessor (talk) 18:57, 23 April 2012 (UTC)
Quick analysis
Hi! I'm a bit concerned that some of the problems identified in the article aren't the best choices, and distract from what I think are the more significant problems with some of the findings. I have three main concerns with the paper.
First, my concern with the 60% figure isn't the same as that which others have expressed in the article. Fundamentally, my concern is that the figure could not be derived from the collected data as presented. According to the paper, respondents were asked whether there are currently factual errors on "their company or client's Wikipedia articles".
The problem is the use of a plural for "their company or client's WP articles". That means that the figure the question collects is the percentage of people who were able to identify errors in one or more of the articles they looked at, not the percentage of articles that contained errors. To use an alternative example, let's say that I identified 10 articles, only two of which contained errors. Then I asked 100 people to look at those articles and tell me if they found any errors, with 90% of respondents reporting that they found at least one error in the articles. I can reasonably state "90% of respondents found at least one error", but I cannot say "90% of articles contained an error". To do that I would have to ask a) how many WP articles each respondent looked at (presumably 10), and b) how many of those contained errors. But the paper doesn't state that the survey tool asked those additional questions. So either the question as expressed in the paper is incorrect, other data identifying particular articles was collected but not reported, or the "60% of wikipedia pages about companies contain errors" claim could not have been derived from the data.
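To make the distinction concrete, here is a small Python simulation using invented numbers (ten hypothetical articles, two of which contain errors – not data from the study):

```python
import random

random.seed(1)  # make the illustration reproducible

# Hypothetical setup: ten articles, only two of which contain an error.
num_articles = 10
articles_with_errors = {0, 1}

num_respondents = 100
yes_answers = 0
for _ in range(num_respondents):
    # Each respondent reviews five articles chosen at random, then answers
    # a single yes/no question: "did you find an error in any of them?"
    reviewed = random.sample(range(num_articles), 5)
    yes_answers += any(a in articles_with_errors for a in reviewed)

# With 100 respondents, the count is already a percentage.
print(f"respondents reporting an error: {yes_answers}%")  # ~78% in expectation
print(f"articles containing an error:   {100 * len(articles_with_errors) // num_articles}%")  # 20%
```

The respondent-level "yes" rate lands far above the article-level error rate, which is exactly the gap between "60% of respondents reported an error" and "60% of articles contain errors".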
My other concerns are easier to express. The second is sample bias. This is always a problem with online surveys, but the problem here was the use of CREWE. CREWE has a clear bias, yet its members can be expected to have answered the questions and were involved in finding respondents – the paper's acknowledgements confirm as much.
Given the sample size (1284) and the number of people who are part of CREWE (294, not all of whom are PR professionals), this may constitute a significant bias that isn't addressed in the paper.
Finally, and most fundamentally, the paper starts with the claim "This study found that the 'bright line' rule, as co-founder Jimmy Wales has called it, is not working." The problem is that there is no bright line rule. Jimmy has advocated for one, but it doesn't exist. So to write a paper exploring the effectiveness of a non-existent rule, based on the assumption that the rule exists, is a significant error.
I think there is real value in the paper's findings, but the errors – in particular the 60% claim and the assumption that the Bright Line rule is currently a policy or guideline on WP – tend to hide the valuable figures. - Bilby (talk) 00:31, 24 April 2012 (UTC)
NPOV
Could you at least make an attempt to reflect discussion at CREWE neutrally? The opinions you have stuck into your call-out boxes are all from one side of the debate that was had at CREWE. --JN466 02:35, 24 April 2012 (UTC)
60%
In my view (and that of several other editors who advised on the preparation of the article), the 60% claim is way out of line. All that Dr DiStaso was entitled to say was that 406 of the 989 respondents whose client or company had an article felt that there was at least one factual error. This is 41%. She might have pointed out that a proportion of the don't knows might also have perceived an error, had they been required to actually read the article they claimed to be associated with. A note that 24 respondents didn't tick any box for this question might also have been included.
Research is necessarily conservative and cautious. You set out to test the null hypothesis rather than your own hypothesis, and only if the data force you to reject the null can you claim support, in that inverted way, for what you set out to show. Researchers are likely to be reviewed negatively where they give the green light to a press release that makes scientifically false claims based on a methodology that needed questioning before the trials took place. It's most unfortunate that respondents were not asked (in confidence) to specify the article, so that Dr DiStaso's research assistants might have verified and expressed in precise qualitative terms what the "factual errors" were in each case: how many, of what type, and how serious. The generalised multiple-choice question on type of error was never going to provide convincing support for the findings trumpeted in the headlines, since the error categories were never tied to specific articles.
I can only agree with Bilby's points about the sample bias introduced by encouraging members of CREWE to participate, in an information environment highly likely to contaminate responses. ITN has already pointed out that selection bias was likely to be significant just from the "call to action" environments in which invitations to participate were made. The very existence of CREWE, indeed, meant that some effort needed to be put into countering selection (and self-selection) bias.
This is not a credible study, although it does provide some interesting and possibly useful things for the movement to look at, with caution. Tony (talk) 03:29, 24 April 2012 (UTC)
NPOV, please
This looks like a POV essay. Try reporting the STORY instead of writing an opinion essay. There is no Wikipedia policy or guideline called "bright line" with respect to COI editing; it is a creation from the mind of Jimmy Wales, without a corresponding connection to WP's current doctrine. We've all got opinions; that's his. I think he's wrong. That's not the story — the story is the publication of the survey itself and its conclusions — and there may well be room for critique of the methodology. But do bear in mind that with 1200 respondents, even if this is an unscientific accumulation of anecdotal evidence and personal opinions of PR pros, it has value even as that.
The main criticism is with the misinterpretation and erroneous headlines used by a few of the bimbos in the commercial press, not with the study itself. The only criticism I have of the study itself is that its author placed far, far too much credence in Mr. Wales's opinions, elevating these to some sort of actually applicable guideline at WP in the last few pages of commentary. That's wrong, and publishing it was disinformative. Carrite (talk) 04:08, 24 April 2012 (UTC)
FOR THE RECORD: There is no "membership" in CREWE per se. Anyone can join that Facebook group which is (nominally) a dialogue between Wikipedians and PR professionals. So to clearly state my reason for being there: I'm a Wikipedian in good standing since 2003, sysop, academic and author who is HIGHLY CRITICAL of PR professionals editing Wikipedia in any direct way. David Gerard (another longtime Wikipedian) and I are both prominent active voices of the "Wikipedia side" in that CREWE group. -- Fuzheado | Talk 18:00, 24 April 2012 (UTC)
Only 60%
Only 60% of our articles contain factual errors? This is excellent news and definitely something to celebrate. Much better than the 100% of PR press releases that contain factual errors. An excellent publication reinforcing our policy against COI editing. Now if only the community will get behind enforcing it. --Doc James (talk · contribs · email) 12:01, 24 April 2012 (UTC)
How to lie with statistics
The part that is conspicuously missing from this story is that when you ask a flack who has seventeen clients whether there are errors in any of their clients' articles, and they say "yes", they may only mean that one of the seventeen articles has errors. But this is reported as a yes, instead of as one-seventeenth of a yes; and the press picks up on this falsity and runs with it. It is hard to escape the suspicion that this bias was built into the study; I would love to be proven wrong. --Orange Mike | Talk 13:32, 24 April 2012 (UTC)
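The weighting problem is easy to demonstrate with invented numbers (a hypothetical sketch, not data from the survey):

```python
# Three hypothetical PR respondents: how many client articles each handles,
# and how many of those articles actually contain an error.
respondents = [
    {"articles": 17, "with_errors": 1},
    {"articles": 1,  "with_errors": 1},
    {"articles": 5,  "with_errors": 0},
]

# Survey-style counting: any respondent with at least one erroneous
# article counts as a full "yes".
yes_rate = sum(r["with_errors"] > 0 for r in respondents) / len(respondents)

# Article-level counting: weight each respondent by their article count.
article_rate = (sum(r["with_errors"] for r in respondents)
                / sum(r["articles"] for r in respondents))

print(f"respondent-level 'yes' rate: {yes_rate:.0%}")      # 67%
print(f"article-level error rate:    {article_rate:.0%}")  # 9%
```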
No real data
The real trouble here is that there's no real data here—we know only that a certain percentage of paid shills said they found an inaccuracy. Firstly, do they even know the information is inaccurate? Well, no one knows. What we'd need, if we wanted to study this, is something that asks not only "Do you think there's an inaccuracy?" but rather "What do you assert is inaccurate and why?", so that each claimed inaccuracy could be independently checked and classified.
That would be real data. These are just meaningless numbers in a self-selected survey from a non-neutral source. Seraphimblade Talk to me 15:27, 24 April 2012 (UTC)
You should not try to fight POV-pushing by POV-pushing
I tend to enjoy the high standards of objectivity and neutrality most Signpost articles live up to, and this one falls disappointingly short. It feels like a piece of PR spin itself. Even if the article is a blatant attempt at influencing policy discussions (I don't have an opinion on that, as the link to it does not work, but the infographic certainly feels that way), that is no justification for giving up the norms of journalism and encyclopedic neutrality and responding in kind. A few examples of what is wrong with the piece:
One can of course always pull hypotheses about possible biases out of thin air; just to prove the point, here are two which suggest the real number is above 60%: 1) larger companies have more PR staff, so there is a larger chance that one of them has time to read their Wikipedia entry; smaller, less notable companies are probably overrepresented among the don't knows, and it is well known that Wikipedia articles about less notable subjects tend to have more errors than those about more notable subjects; 2) those PR professionals who found factual errors in their articles probably warned the editors, and (unless one has very unfavorable views of Wikipedia editors) it is reasonable to assume that most of those errors were corrected – so the error rate should be much lower for the articles that have been examined by the respondents than for the articles belonging to the don't knows.
The point is, you can make any number of claims about why some subgroup of the respondents (or even the whole group) is biased this way or that, and thus how the 60% figure should be adjusted up or down, but those adjustments will actually be less natural than not assuming biases (and arriving at 60%), not more. One could make a compelling argument that treating the survey results as unbiased is not justified and that the results should be treated as unreliable (amongst other things, the sample was self-selected, and most participants probably have an interest in a result which is unfavorable for Wikipedia); instead, the Signpost article tries to counter with assumptions which are even more unjustified, while at the same time crying foul play.
--Tgr (talk) 17:04, 24 April 2012 (UTC)
RE: the comment "Anyone who suggests the 41% as a better estimation to 60% is either uneducated in statistics ... or dishonest." I am neither uneducated in statistics nor dishonest. User:HaeB, who was quoted in the article, is certainly educated in statistics, and I must assume that he is honest (since I've never run into him before). This is not about credentials, however. Anybody who has had more than one university-level stats course (or equivalent) should recognize that, for many reasons, the 60% number is simply meaningless. I believe that nobody with a knowledge of stats will take the results of this study seriously. Smallbones (talk) 18:04, 24 April 2012 (UTC)
Independent research needed
The frustrating thing is that we don't seem to have a recent reputable study into the accuracy of the pedia. Without that, we are vulnerable to this sort of exercise. A more open response would be for the WMF to commission a trustworthy third party to quality-check a random set of facts and articles and produce a report on it. If this were done as an annual or even biannual exercise, the press would have something to check against, we would have an interesting benchmark, and if and when "studies" like this emerged the press could ask the researchers who did the benchmark study to comment on the competing study. ϢereSpielChequers 17:16, 24 April 2012 (UTC)
PRSA has changed/updated their headline
Arthur Yann of PRSA said in the CREWE Facebook group:
WAS: "Survey finds majority of Wikipedia entries contain factual errors"
NOW:
-- Fuzheado | Talk 19:15, 24 April 2012 (UTC)
Thank you!
There are lots of people in the corporate and PR community, lobbyists and others, who would like to use Wikipedia for promotion. I cannot imagine a stronger COI than a person who is paid to make their client look good on the internet (or a person or company editing an article about him, her or itself). For an academic to put together a biased opinion poll of these people with strong COIs, and then to publish their poll answers as if what they said were somehow objective, is astonishing. Thanks, Signpost, for alerting us to this travesty. The proof of the pudding is in the eating: what this "study" led to is asinine headlines like this: NYDailyNews: "Wikipedia entries full of factual errors". No, what the "study" found is that 60% of paid shills who were asked ambiguous questions, if they had any opinion at all, felt that one or more articles on their clients had an error, including, possibly, a spelling error. -- Ssilvers (talk) 01:23, 25 April 2012 (UTC)
What the report tells me
The report is intended to convince Wikipedians to openly allow PR people to edit, but the report itself seems to demonstrate a laundry list of reasons NOT to. It's not in an encyclopedic tone, doesn't represent all majority and minority viewpoints, uses misinformation to support an agenda, and so on. It even demonstrates an ability to corrupt the balance of trusted sources from the real-world equivalent of the Talk page and create one-sided stories in independent sources through sheer availability of resources.
In other words, editors like Smallbones weren't given a voice in these media articles, because they don't have PR people pitching them to the media. Data to support the report's POV was presented, but what about data like this[3], showing the edit histories associated with the top ten PR agencies by revenue? If the same behavior and dynamics we see with the report were brought to Wikipedia, it would certainly be a bad thing for the pedia, more so than factual errors.
While I don't believe this to actually be the case, the report seems to communicate a need to outright ban PR people. Additionally, I find it difficult for anyone who cares about Wikipedia to consider open collaboration with a group that publicly assaults the website's credibility in such a manner. All I can do is invite PRSA/IPR/etc. to humble themselves and commit to learning how to meet Wikipedia's content needs and collaboration style, but I don't expect such an invitation to be met. User:King4057 (COI Disclosure on User Page) 01:51, 25 April 2012 (UTC)
A practical response to PR complaints
Let's move on from the hyped and flaky 60% claim. Whether the current low-tolerance policy remains—which looks likely for the time being—or is loosened, it's hard to ignore the perceptions among PR and communications professionals of long waits or no response at all to open requests for changes to articles on companies. These are the good guys, the ones who do the right thing by asking for editorial mediation; yet the message is that they're routinely discouraged. Perhaps this is a collision between the volunteer culture on the foundation's sites ("there's no deadline") and the rigours of turbo-charged capitalism. Here I tend to agree with DiStaso's point that five days is a long time for professionals and their clients to sit in silence ("is anyone at home?"). Yet volunteers appear to have done reasonably well in managing serious and complex issues such as quick action on copyright and BLP problems: we've shown that dynamic management is possible, and isn't it part of the cost of doing business on a big, powerful wiki?
Personally, I've found it difficult and time-consuming to navigate through the maze of CoI-related pages on the English WP. Some are tired, moribund, or confused, bloat abounds, and there seems to be no centre of gravity. No wonder a lot of PR professionals and company reps throw up their hands and edit under the radar, when the radar resembles a low-wattage flickering street light in bad weather.
The head of communications, Jay Walsh, sees the response problem and has acknowledged it publicly, as reported at the end of the story. So leadership is in order from the foundation—the cross-wiki implications alone suggest that it's a matter in which the foundation should take a more active, practical role: god knows what tangled webs or straight-out neglect are the norm on the other 280 WPs (including the smaller, outlying language WPs, largely impenetrable to the movement).
If it's good enough for the foundation to create a summer fellowship to revamp our help pages (see the Signpost report this week), it's good enough to consider employing a fellow to work with the community to improve the speed and efficiency with which we respond to PR requests and queries—to see things from the perspective of incoming PR professionals and to create an easy system that tempts them away from subterfuge. Good openers would be a template for posting on company-article talk pages with a link to a convenient, one-stop noticeboard, and working out how to attract volunteers into a response team that offers personal stimulation and social reward. And there's the possibility of sending proactive messages to the PR/communications/corporate community about working with them to ensure balance and neutrality; that would be good for the movement's image, wouldn't it? Tony (talk) 04:14, 25 April 2012 (UTC)
Lose the drama, read the study
The DiStaso paper is publicly available: http://www.prsa.org/intelligence/prjournal
It is not hard reading.
There are certainly a couple of structural problems with the survey, nicely pointed out above: (1) there is no quantification of the magnitude of error – minor errors and major catastrophes are considered the same; (2) respondents were not asked to answer about a single client, so some may be venting about one client and being counted for it, while having no problems with other pages and not having those "good" pages tallied; (3) the paper pretends there is something called a "bright line" policy about paid COI editing and spends a lot of time studying respondents' understanding of this incorrect interpretation of actual WP policy.
There were also tactical errors: (1) it was a mistake to try to come up with a sensational high error number and make that the hook of the piece. The takeaway should have been: "Most PR people who deal with clients that have Wikipedia pages feel that there are significant errors on those pages, and they are confused about Wikipedia's practices for getting those corrected." Instead we've got a bunch of people yelling about whether 41% or 60% is the more accurate quantification of the problem; (2) it was a very big mistake to take the results to the press and try to make a news story out of it, rather than quietly bringing the findings to WP directly. Bad blood resulted.
We've just had an RfC on COI editing, now running out of gas. As one might have predicted, opinions vary widely and there is no consensus for any approach to clarifying the matter. What's pretty clear is that as long as there are pages about large corporations on Wikipedia, there will be paid PR people with a professional interest in making sure that those pages are fair, neutral, and error-free. That does not describe the current state of many of these pages, I think we all can agree — whether 41% are screwed up, or 21%, or 60%, or some other figure, is absolutely irrelevant. The fact is that there is a problem of some magnitude. How it is resolved is ultimately up to us as a community.
I am very disappointed in this piece; my comments above were written about a late draft, which changed little. It is not journalism, it is an opinion piece disguised as journalism, and a very one-sided and shrill piece of work. Done's done. The issue isn't going to go away. I just urge people to actually read the report and see what it says and does not say before they fly off the handle, all too sure about how to resolve a complex problem. Carrite (talk) 04:14, 25 April 2012 (UTC)
Utter nonsense
It is impossible to give a sensible critique of, or response to, something that is utter nonsense to begin with.
While this piece is a noble effort to put the survey in some perspective, its flaw is treating the survey as though it means anything in the first place. —Finell 18:35, 28 April 2012 (UTC)