Hi,
Wikipedia is a great system for organizing a set of articles, and it has a
large help system as well. But newcomers and intermediate editors alike
have many similar questions about the system, specific guidelines, or
techniques for writing articles.
StackExchange <http://en.wikipedia.org/wiki/StackExchange>, a free
question-and-answer network of websites, will start a site dedicated
to Wikipedia and wiki questions if the community supports the
proposal by voting for it. The site would offer a set of features
quite unlike the way Wikipedia handles questions today. On Wikipedia,
questions are added to pages, much like a forum; on StackExchange,
questions go into a searchable database where each question can be
voted on by the community.
* Database of questions, listed by vote or by newest/oldest
* User accounts, with a per-user ranking based on helpfulness
* Answers are voted on by users, and the best answers appear on top
* Tagging and searching for questions by tag (e.g. 'syntax',
'images', 'audio')
I'm writing here to call for the support of Wikipedians around the
world, simply for our own benefit. If we vote for this site and
visit it regularly to answer questions, Wikipedia could grow much
faster, since newcomers would have an intelligent, easy-to-use
platform for their questions and troubles.
1. Please start on the proposal page
<http://area51.stackexchange.com/proposals/13716/wikipedia-and-wikis>.
2. You'll need to log in (link at the top).
3. Click the "Follow" button (or "Commit", if available).
4. When the site launches, you will get a link to it on the same page.
Thank you!
Tomjenkins52 <http://meta.wikimedia.org/wiki/User:Tomjenkins52>
Few organizations track Wikipedia usage. Pew has carried out a couple
of surveys of American adults in recent years, listed below:
2007 http://www.pewinternet.org/Reports/2007/Wikipedia-users.aspx
"36% of online American adults consult Wikipedia"
Pew found that in America Wikipedia was more popular with wealthy
people, white people and English-speaking Hispanics, men, adults under
30, college graduates, and home broadband users (obviously some of
those factors correlate).
Please note that Pew doesn't survey under-18s.
Wikipedia was the most popular education and reference website by
almost an order of magnitude.
"Over 70% of the visits to Wikipedia in the week ending March 17 came
from search engines, according to Hitwise data."
But the web and the way people use it have continued to evolve.
2010 http://pewinternet.org/Reports/2011/Wikipedia.aspx "53 percent of
online Americans use Wikipedia"
'In the "scope of general online activities, using Wikipedia is more
popular than sending instant messages (done by 47 percent of Internet
users) or rating a product, service, or person (32 percent), but is
less popular than using social network sites (61 percent) or watching
videos on sites like YouTube (66 percent)."'
Hi all,
I'm still on vacation, but I saw "the WYSIWYG discussion, reloaded",
and was bored, so...
As far as I could deduce, the goal is to use a run-of-the-mill HTML
WYSIWYG editor, with minimal modifications, to edit MediaWiki text.
Since rebuilding the perfect parser has Failed Repeatedly (TM), any
parser substitute should fall back gracefully, that is, only parse
wikitext into some HTML structure when it is very sure it is doing it
right, and otherwise leave it alone and just show it as old, ugly
wikitext.
I took two hours to write a pure JavaScript demo that can render
(note, not parse!) wikitext as HTML, so that a WYSIWYG HTML editor
could use it. It converts some elements, like headings, blank lines,
and templates, into a pseudo-parsed structure, using classes to
indicate where each element came from. I believe that, basically,
the original wikitext could be reconstructed from the rendered HTML
(not checked, though), and that changes in plain ol' HTML (read:
WYSIWYG edits) could be integrated likewise.
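To give a rough idea of what I mean by "render, not parse", here is a
sketch of the heading case. This is not the demo's actual code; the
class and attribute names are made up for illustration:

// Sketch only, not the demo's code: render a heading line as HTML,
// but only when we are sure it really is a heading; otherwise return
// null and the caller just shows the raw wikitext.
function renderHeading(line) {
  var m = line.match(/^(={2,6})\s*(.*?)\s*\1\s*$/);
  if (!m) return null;                      // not sure: leave it as wikitext
  var h = document.createElement('h' + m[1].length);
  h.className = 'wikitext-heading';         // class marks where this came from
  h.setAttribute('data-wikitext', line);    // keep the original for round-tripping
  h.textContent = m[2];
  return h;
}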
My demo is rudimentary: no checking for HTML comments or <nowiki>, no
bold or italics, no <ref> or [[link]] handling, and tables and lists
are ignored as well. But even so, the output remains readable and
recognisable as wikitext, and it should be quite clear how the
original wikitext could be regenerated from it.
The main feature right now is template collapsing. Template code is
surrounded by a green border, and the template name is green. Long
templates hide their parameters, which can be shown by double-clicking
the template name. Depending on context, either a <div> or a <span>
is used, so short inline templates stay inline. It is not always
pretty, but IMHO it demonstrates the concept.
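For the curious, the div/span decision looks roughly like this. Again
a sketch, not the actual code; the class names and the length
threshold are invented:

// Sketch only: wrap a template invocation, choosing <div> for block
// context and <span> for inline context so short templates stay inline.
function wrapTemplate(templateText, isBlock) {
  var tag = isBlock ? 'div' : 'span';
  var wrapper = document.createElement(tag);
  wrapper.className = 'wikitext-template';          // green border via CSS
  var name = document.createElement(tag);
  name.className = 'wikitext-template-name';        // green template name
  name.textContent = templateText.replace(/^\{\{\s*/, '').split(/[|}]/)[0];
  var params = document.createElement(tag);
  params.className = 'wikitext-template-params';
  params.textContent = templateText;                // raw wikitext, recoverable later
  if (templateText.length > 80) {                   // long template: collapse params
    params.style.display = 'none';
  }
  name.ondblclick = function () {                   // double-click toggles them
    params.style.display = params.style.display === 'none' ? '' : 'none';
  };
  wrapper.appendChild(name);
  wrapper.appendChild(params);
  return wrapper;
}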
The JavaScript seems reasonably quick. Yes, some wikitext will be hard
to render; but frankly, we can just ignore it for the time being.
Better something that works quickly and reliably in most cases and
fails gracefully than something that would be perfect but never gets
done, I say!
Again, quick hack demo warning. If you're brave enough to try it, my
test article (only runs in article namespace ATM) is the article of
the day, [[Lince (tank)]].
JavaScript at http://toolserver.org/~magnus/wysiwtf/wysiwtf.js
CSS at http://toolserver.org/~magnus/wysiwtf/wysiwtf.css
To test, edit your vector.js, and copy this:
document.write('<script type="text/javascript" src="http://toolserver.org/~magnus/wysiwtf/wysiwtf.js"><\/script>');
document.write('<link rel="stylesheet" type="text/css" href="http://toolserver.org/~magnus/wysiwtf/wysiwtf.css"><\/link>');
Force-reload, go to an article, and you'll see a new "WYSIWTF" tab (I
trust you can decipher the acronym ;-)
Enjoy! ;-)
Magnus
Hi, I was trying to extract some information from the protein target
infobox on protein target pages (e.g.
http://en.wikipedia.org/wiki/Calreticulin or
http://en.wikipedia.org/wiki/Hsp90).
However, when I export the page via
http://en.wikipedia.org/w/api.php?action=query&pageids=7120&export=&exportn…
the XML does not seem to contain the information that I can see
when viewing the page in the browser. For example, the XML export for
Calreticulin does not contain the links to the rendering of the
structure, the PDB identifiers, and so on.
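In case it helps, this is essentially what I am doing (a quick
JavaScript sketch of the same request; fetch is used just for brevity,
and the comments reflect my understanding of what export returns):

// Sketch of the request above. As far as I can tell, action=query
// with export returns the page's raw wikitext wrapped in the
// Special:Export XML format, so content an article pulls in through
// template transclusion is not expanded in the output.
var url = 'http://en.wikipedia.org/w/api.php' +
          '?action=query&pageids=7120&export';   // plus the remaining parameters from the URL above
fetch(url)
  .then(function (response) { return response.text(); })
  .then(function (xml) {
    console.log(xml);   // the <text> element holds the unexpanded wikitext
  });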
Is my export URL wrong? Or is there a reason that the infobox
information is not exported, and if so, is there a way to access it
via export?
Thanks,
--
Rajarshi Guha
NIH Chemical Genomics Center
Ever since Wikipedia grew and became more ambitious in its scope, there
have been predictions of its downfall, many of them giving an estimate
for the timescale of its demise. If you hunt around you may find a
prediction by me that Wikipedia was unlikely to survive much beyond
2010, because I thought it would decline in popularity. Since then
Wikipedia has cemented itself into the fabric of modern culture and
become particularly useful in academia, where its strengths and
limitations are now well understood.
Reading the references in Joseph Reagle's book, I encountered this:
http://blog.ericgoldman.org/archives/2006/12/wikipedia_will_1.htm
Wikipedia, it appears, was destined to die within four years, by
December 5, 2010, because it would be involved in an unwinnable war
with marketers.
Since it's Christmas, the new year is coming, and we'll soon be
bouncing out of that into a celebration of Wikipedia's first decade,
perhaps now is the time to look back at the predictions of Wikipedia's
demise.
What are your favorite predictions of Wikideath?