Moderating internet forums: What's smart, not what's new

I am interested in finding out if it is a lot of work to moderate this public forum. How many people moderate the forum? Who moderates the forum?

Do people have experience moderating web forums? Are there any known procedures for doing so? What is the best way to moderate a forum so that information seems accurate and truthful? At what point does moderating a forum become a misrepresentation of the information sent?

Your feedback is appreciated.

-- Web Designer (email)


Our experiences in moderating a forum: What's best, not what's new

Moderating a forum is fairly straightforward: knowing what you want, deleting entire threads that aren't going anywhere, correcting the spelling of the word "it's," fixing URLs, deleting individual contributions that fail to advance the thread. It helps to have experience writing and editing (and reading student papers, refereeing journal articles, reviewing manuscripts and grant proposals).

As clearly indicated to potential contributors, we do a lot of deleting--only about half of all submitted contributions survive for more than a month. This doubtless hurts a few feelings but substantially raises the quality of the board. Very few published contributions are edited at all, other than silently to correct spelling, update a URL, or to delete a sour note in an otherwise good answer. Our view is that every contribution to Ask E.T. should advance the analytical quality of the thread. We particularly seek to avoid the chronic internet disease of "All Opinions, All the Time." The idea is to have an interesting and excellent board on analytical design that serves the content and the readers, not a board logging every attempt at publication. We also are ruthless in deleting contributions with incivilities, rants, taunts, and personal commentary on other contributors.

The forum is named "Ask E.T.," and my interests are reflected in my contributions, selection of topics, and editorial decisions. Sometimes I don't answer a question because our contributors have already produced good answers, or I don't have any idea of the answer or don't have anything to say just now.

A lot of implicit editing now comes from potential contributors themselves, as they see the style of this board and what good contributions look like. This is a very important point, for as more and more good contributions have accumulated over time, less and less editing has been needed because contributors can tell what it takes to get published here.

We are fortunate in having several star authors, who make smart and interesting contributions. They are occasionally identified as a "Kindly Contributor" to acknowledge publicly my gratitude for their writing.

For some boards, a bozo filter may prove useful by automatically deflecting certain trigger words. My friend Philip Greenspun constructed a filter at photo.net which bounced all those who misspelled the word "aperture," on the grounds that they did not know much about photography.
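
A filter like the photo.net one can be sketched in a few lines. This is a toy version under my own assumptions; the misspelling list and the bounce rule are illustrative, not the actual photo.net implementation:

```python
# Toy "bozo filter": bounce submissions containing trigger words,
# here common misspellings of "aperture" (list is invented).
MISSPELLINGS = {"apeture", "aperature", "apperture"}

def bozo_filter(text):
    """Return True if the submission should be bounced."""
    words = {w.strip(".,!?;:\"'").lower() for w in text.split()}
    return not words.isdisjoint(MISSPELLINGS)
```

The same shape works for any list of trigger words; the only design question is how aggressive to make the list.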

It may be a good idea to post some notes about your editorial policies near the place where potential contributors submit their efforts.

Five people are involved in the part-time (very part-time) management of this forum: Dariane Hunt (web designer), Elaine Morse (my design assistant), David Rodriguez (programmer), a very wise anonymous external reviewer, and myself.

So moderating a forum is fairly straightforward and not all that time-consuming--if you have excellent contributors, some editorial skills, a clear idea about what you want, and the ability to make quick final decisions. It takes much more effort to answer questions, start new threads, and add new material to the website.

-- Edward Tufte


Design methods for reducing the recency bias in forums

At this forum, in setting up categories and order of threads within categories, I was concerned with reducing the extreme recency bias of the internet. The idea is that threads have their own history and stand on their own, rather than being embedded in an over-arching and arbitrary time sequence.

Thus it is good to see old threads, when a new response comes in, resurface at the top of the New Responses category. This effect has increased since we stopped accepting any new questions and people have to look through the old threads to find a place for their new contribution or question.

Perhaps there are other design methods to reduce the recency bias of internet presentations. The bias is overwhelming. Here is an example: when this board is mentioned at a major site (slashdot, Arts & Letters Daily, Kottke, NYTimes), our logs will show, say, 2000 visits from that site the first day, 1000 the second, and then, as the mention slips into a screen position reachable only by scrolling, 100 on the third day, and by 5 days, nothing. The quality of the thread didn't change in 5 days, only its relative position on the referring site.

What then are some ideas on how to present material at a website in a way that sorts material by merit and by ideas, not by recency? This includes techniques of forum moderating such as re-opening or re-surfacing older items and bringing them to the top of the relevant page or list.

-- Edward Tufte


Necessity for screening forum contributions: too many spammers and idiots

From Philip Greenspun about forum moderating:

"ALL USER-CONTRIBUTED WEB CONTENT NEEDS PRE-MODERATION

In the mid-1990s when I started building online communities I didn't understand why publishers like Amazon pre-moderated all user-contributed content such as comments. The vast majority of users were intelligent and well-meaning and only a small fraction of material had to be deleted. It seemed like it wasn't worth interrupting the flow of conversation and exchange to ensure that an off-topic posting never saw the light of day. It would be intercepted within a day or so and deleted in any case.

The Manila software that Harvard runs behind these blogs shows the foolishness of my point of view. More than 90 percent of the comments posted to this blog are link spammers trying to increase their Google rank by adding comments to old and forgotten postings. Manila makes it impossible to delete this spam except one comment at a time, each one requiring a several-page process of confirmation. In the old ArsDigita Community System we had a "delete all from this user" and "delete all from this IP address" option that made it a lot easier. But in the Age of Spam what we really need is pre-moderation. Maybe there should be an option, for a vibrant interactive discussion, where content goes live for 24 hours without being approved, but otherwise, given the small percentage of useful non-spam content, it seems that the only answer is that nothing goes public without approval.

Another reason to program in pre-approval only is that eventually the moderators of every online forum find other things to do with their lives. The server doesn't realize this and soldiers on processing postings. Spammers discover a happy home and the database fills up with crud. Software should be robust to the moderator disappearing and in an Internet that is mostly spam that means approval-required-before-going-live."

http://blogs.law.harvard.edu/philg/2005/08/03#a9970
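
Greenspun's prescription, nothing goes public without approval, amounts to a holding queue between submission and publication. A minimal sketch, with invented class and method names rather than any real system's API:

```python
# Toy pre-moderation queue: submissions are held for review and become
# public only when a moderator approves them.
class ModerationQueue:
    def __init__(self):
        self.pending = []   # submissions held for review
        self.public = []    # approved, visible contributions

    def submit(self, author, text):
        self.pending.append((author, text))   # never live immediately

    def approve(self, i):
        self.public.append(self.pending.pop(i))

    def reject(self, i):
        self.pending.pop(i)                   # silently discarded
```

The robustness point in the quote falls out naturally: if the moderators disappear, the queue simply fills up and nothing spammy ever reaches readers.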

-- Edward Tufte


Movable Type guide on comment spam

Six Apart, which produces the unequaled, open source Movable Type blogging software, publishes the authoritative Guide to Combatting Comment Spam. Alas, like all security (antiterrorism, insurance, police, copyright), there isn't a one-sentence answer except to say every site must make, and continuously review, its own complex cost-benefit assessment and maintain a layered defense.

-- Niels Olson (email)


What's best, not what's new

In Robert Pirsig's Zen and the Art of Motorcycle Maintenance, he frames his discussion as a focus not on "What's New" but rather on "What's Best."

It is subjective, and it would require more intervention by the moderator(s), but it would remove the recency bias and the spam avalanche.

-- Tchad (email)


An excellent comment moderation policy

Here is Philip Greenspun's recent "Comment moderation policy," posted at

http://blogs.law.harvard.edu/philg/comment-moderation-policy/

"Comments on this Weblog are moderated by Ray Fraser, a kind-hearted volunteer, according to the following policy.

The most valued comments are alternative perspectives. If the posting is about a trip to Kugluktuk, Nunavut, a great comment would start "I took a trip there 10 years ago and my experience was ...".

The least valued comments are reviews of the posting, good or bad. The reader has just read the entire posting. He or she doesn't need someone else's opinion that "this was great" or "this was bad." Reviews make sense in the off-line world where consuming the book or movie happens after reading the review and takes a lot more time and effort. In the online world, the comments are usually read after the item being reviewed has been consumed.

Comments that attack another person's motivation, intelligence, or character are bad because they degrade the quality of the discussion and discourage thoughtful comments by others. For some reason, human beings often are confident that they can discern the hidden motivation for another person doing or saying something. Trained psychiatrists and psychologists, however, do very poorly at this task, so what hope is there for a lay person?

Cute/clever comments that are off-topic should only be published if they are very cute and clever indeed. Off-topic content breeds more off-topic content.

Attributed/real-name content is preferred to anonymous content. Due to the fact that 90 percent of the comments posted on the former version of this blog were link spam, all comments are reviewed by the moderator before going live."

-- Edward Tufte


Previewing contributions before publicly posting

See the essay by Alan Jacobs, "Goodbye, blog: The friend of information but the enemy of thought" here

Several, although not all, of the problems mentioned can be eliminated by pre-approving comments before they go live and by regular purging of contributions that turn out not to advance a thread. Thus the focus is always on the quality of the thread for the reader, not on the opportunity for anyone to publish whatever they want. Lost is the spontaneous continuity of back-and-forth conversation; gained is a smarter, coherent, and civil thread over the long run--if there are some good contributors along with a firm editorial board. Thus the model for bloggers is that people are publishing on your board and that therefore you exert editorial control of what is published. This is especially the case on the internet, where there are innumerable opportunities to get anything up on some blog somewhere, or indeed to start one's own blog.

The director of Princeton University Press, Herb Bailey, once lectured the Editorial Board of the Press that our job was not to be fair but rather to publish good books. Of course a fair process might help to cause good manuscripts to show up to be published, but he had a very strong point, especially since there were many more manuscripts than could ever be published (the acceptance rate of the Press was about 5%-10%).

-- Edward Tufte


How to ask questions the smart way

Compare ET's kind, even gentle publication policies to this hacker classic by Eric Steven Raymond, How to Ask Questions the Smart Way. Here's a brief excerpt from the disclaimer:

If you're reading this document because you need help, and you walk away with the impression you can get it directly from the authors, you are one of the idiots in question. Don't ask us questions. We'll just ignore you. We are here to show you how to get help from people who actually know about the software or hardware you're dealing with, but 99% of the time that will not be us. Unless you know for certain that one of the authors is an expert on what you're dealing with, leave us alone and everybody will be happier.

-- Niels Olson (email)


Managing rejection: You are coming to a sad realization--cancel or allow?

A rejection slip from a Chinese economics journal supposedly went this way (quoted in the Financial Times): "We have read your manuscript with boundless delight. If we were to publish your paper, it would be impossible for us to publish any work of lower standard. And as it is unthinkable that in the next thousand years we shall see its equal, we are, to our regret, compelled to return your divine composition, and to beg you a thousand times to overlook our short sight and timidity."

From long experience on this board, we have learned never ever to get into discussions with sulky rejected contributors (some become maliciously angry) or even to reply with a bizarrely gracious rejection slip. Our policy is identical to that of nearly all edited forums, beginning with the "letters to the editor" section of The New York Times, Science, Nature, and on down. Publishers have no obligation to publish, acknowledge, or reply to the thousands of unsolicited contributions that they receive. Life's too short.

Every day we receive partially meritorious contributions that are not published; even some of the contributions of our Kindly Contributors are not published. There are billions of other places on the internet where rejected contributions can be published, unreviewed and unedited. But not here.

-- Edward Tufte


Which corporations, government agencies, and other network owners are editing Wikipedia? Find out with wikiscanner. Via the New York Times article Seeing Corporate Fingerprints in Wikipedia Edits, by Katie Hafner.

-- Niels Olson (email)


The wikiscanner that Niels Olson mentioned in a previous post is an interesting and clever application of happenstance data: who changed the wikipage? I think I have two thoughts on this:

1. People seem either amazed, or entertained, or alarmed that someone with a vested interest in information posted on a public site with open editing privilege would actually change the content. I just checked to make sure that there wasn't a 'Rafe Donahue' entry on wikipedia. There is no entry (yet?) but you can bet that if someone said something about me that was untrue or unbecoming, I would likely be changing that entry as fast as I could (as long as that change was in my best interests; if the lie made me look better, I might be less likely to change it...). Or maybe I wouldn't care; after all, as my Grandma Emma used to say, "Consider the source." What would you do?

What am I supposed to conclude if I find that someone has changed information about themselves? Does this make them evil? I suppose at an absolute minimum I need to find out what was changed. Are corporations and government agencies and other network owners assumed to always be running the spin machine in an unethical manner?

And if I don't know who changed it, does that make it better? What if the change is done without being able to track the IP address? How can the value or validity of the information (wikinformation?) be dependent not upon who changed it but only upon whether or not we know who changed it?

So, I know that I do not know what to think about all of this.

2. The result of the wikiscanner will not be that people will no longer change their own entries; it will be that they will be smarter about it. In the end all that it will show is that there are still some people who don't understand that others are watching and that there are other people who have gamed the system.

Should we be amazed or entertained or alarmed?

(Goodness. I need to learn how to make shorter posts.)

Rafe

-- rafe donahue (email)


Hi -

I actually moderate an internet forum on WatchUSeek, one of the larger watch forums on the Internet (we've got almost 15,000 registered members with 79,000+ threads and close to 475,000 posts, averaging some 20,000+ hits per day). I moderate the vintage watch forum there. And yes, all we do is discuss watches and other timepieces. Surprisingly, we're not the biggest: there are other sites that are even busier than us, but we're nicer. :-)

The owner of the forum chooses people to be moderators based on maturity, technical knowledge (you've never known a geek until you meet a watch geek!) and willingness to be there and answer dumb questions: he vets those folks with existing moderators. Usually people are tapped to be mods when they have been helpful in the forums, have cooled down flame wars and fights by bringing things back on-topic, and have generally acted like mature adults rather than loons.

WatchUSeek is fairly stringently moderated: there are no political discussions allowed, nor posting of any photos of weapons or the like. There are clear rules posted and everyone becoming a member agrees to them (at least pro forma). The key is that you have there a team of moderators, working world-wide (literally: we have mods from Australia to Canada and back again, in almost every time zone) who are committed to maintaining two things: high quality of discussion and the ruthless suppression of flame wars, trolling and the like. There are a number of supermods who can zap anyone anywhere; I can do that in my forum, but not in anyone else's forum.

It works well. Sure, we've had our share of problems, but given that you have to be a registered member in order to post anything, you can get rid of any troublemakers by simply banning them (and in the case of static IPs, their IP as well). We deal with spammers the same way: immediate ban on them and their IP. Dynamic IPs won't be banned, but we've had, by and large, few problems with folks with dynamic IPs. There are bans, there are warnings, there are time-limited bans, etc.: we have a number of options for punishment available.

It works because people want to be there and are largely willing to accept the rules to do so.

Clear rules and a clear hierarchy of responsibilities help. We shut down political discussions ASAP and will delete the threads when doing so; we ruthlessly prune and ban trolls and troublemakers, and the mods all know each other via the forums and respect the decisions that other mods make, but there is also a moderator forum that is open and visible only to the mods for discussions about problem cases and the like. Abusive and threatening members don't last very long.

John

-- John F. Opie (email)


Clear statement about moderating forums

Doug Reeves, who runs Van's Air Force, a big forum about home-constructed airplanes, gives potential contributors an earful about VAF posting policies:

" My House, My Rules. I've noticed a sharp increase in people injecting talk of global warming, politics, economics, religion, gun control and other off-limit topics in the forums, acting like it's perfectly appropriate to do so. It's not. Posting 'sorry if this isn't appropriate' along with your text doesn't make it OK, either. I get it, lots of people have differing views on stuff. When you post something that doesn't abide by the rules, all that happens is you spend a lot of time composing a long, well thought out letter...then I or one of the many moderators delete it. Then you send me an email crying 'Censorship!!!', which I also delete.

Like it or not I am the Captain of this ship, and it's ultimately my responsibility to see that the rules are followed. Try to get (3) of your friends to agree on something - chances are you probably won't be able to. Now imagine a room with 7,100 people in it - that's the VAF Forums. They simply do not provide value to the RV [note added: a homebrew airplane design] community without clear-cut, defined rules of conduct. Period. My vision is one of a RV-only spot on the internet, completely civil and focused on the topic of RVs with laser-like precision. If you can't accept that, leave. I'm OK with it, I really am. There are PLENTY of spots on the web that are perfectly OK with no-holds-barred discussions, literally tens of thousands, but this is not one of them. The rules for posting are very simple and laid out clearly.

I'll leave you with my favorite rule of all....'if a moderator doesn't like it it's toast'. With the Cowboys losing Sunday, my tolerance threshold is pretty low also (so fair warning). I really shouldn't type after the Cowboys lose."


His response to rejected contributors who proclaim themselves victims of censorship is particularly helpful in thinking about forum moderation.

Doug's full statement of forum posting policies is here.

-- Edward Tufte


Online Communities Rot Without Daily Tending By Human Hands

Xeni Jardin (Tech Culture Journalist; Co-editor, Boing Boing; Commentator, NPR; Host, Boing Boing tv) wrote a wonderful essay about forum moderation in her contribution to the Edge World Question Center collection of fascinating answers to the question "What have you changed your mind about?" The complete set of answers may be seen here.

Here is her answer to the question "What have you changed your mind about?":

Online Communities Rot Without Daily Tending By Human Hands

"I changed my mind about online community this year.

"I co-edit a blog that attracts a large number of daily visitors, many of whom have something to say back to us about whatever we write or produce in video. When our audience was small in the early days, interacting was simple: we tacked a little href tag to an open comments thread at the end of each post: Link, Discuss. No moderation, no complication, come as you are, anonymity's fine. Every once in a while, a thread accumulated more noise than signal, but the balance mostly worked.

"But then, the audience grew. Fast. And with that, grew the number of antisocial actors, "drive-by trolls," people for whom dialogue wasn't the point. It doesn't take many of them to ruin the experience for much larger numbers of participants acting in good faith.

"Some of the more grotesque attacks were pointed at me, and the new experience of being on the receiving end of that much personally-directed nastiness was upsetting. I dreaded hitting the "publish" button on posts, because I knew what would now follow.

"The noise on the blog grew, the interaction ceased to be fun for anyone, and with much regret, we removed the comments feature entirely.

"I grew to believe that the easier it is to post a drive-by comment, and the easier it is to remain faceless, reputation- less, and real-world-less while doing so, the greater the volume of antisocial behavior that follows. I decided that no online community could remain civil after it grew too large, and gave up on that aspect of internet life.

"My co-editors and I debated, we brainstormed, we observed other big sites that included some kind of community forum or comments feature. Some relied on voting systems to "score" whether a comment is of value -- this felt clinical, cold, like grading what a friend says to you in conversation. Dialogue shouldn't be a beauty contest. Other sites used other automated systems to rank the relevance of a speech thread. None of this felt natural to us, or an effective way to prevent the toxic sludge buildup. So we stalled for years, and our blog remained more monologue than dialogue. That felt unnatural, too.

"Finally, this year, we resurrected comments on the blog, with the one thing that did feel natural. Human hands.

"We hired a community manager, and equipped our comments system with a secret weapon: the "disemvoweller." If someone's misbehaving, she can remove all the vowels from their screed with one click. The dialogue stays, but the misanthrope looks ridiculous, and the emotional sting is neutralized.

"Now, once again, the balance mostly works. I still believe that there is no fully automated system capable of managing the complexities of online human interaction -- no software fix I know of. But I'd underestimated the power of dedicated human attention.

"Plucking one early weed from a bed of germinating seeds changes everything. Small actions by focused participants change the tone of the whole. It is possible to maintain big healthy gardens online. The solution isn't cheap, or easy, or hands-free. Few things of value are."

For our board, the solution was to eliminate direct posting; every potential contribution, before it goes live, must pass through a non-public approval queue that is reviewed by our editors. IP blocking of spammers, weeds, and trolls has also proved useful.

-- Edward Tufte


More on moderating comments at BoingBoing. Teresa Nielsen Hayden, a moderator at BoingBoing, posted about BoingBoing's moderation policy.

-- Ed Manlove (email)


Paul Graham's essay How to Disagree.

-- Niels Olson (email)


Language Log forum moderation policies

Mark Liberman, "A comment about comments" writes:

"Earlier today, someone calling himself (?) Baishui submitted this comment on Victor Mair's post Burlesque Matinee at the Max Planck Gesellschaft:

'My comments were deleted twice here. Apparently, someone is offended by me saying 'this incident shows how ignorant the West (and its academics) are of the non-Western world'. What pettiness!'

I deleted this comment, just as I had deleted the same individual's first attempt, which consisted only of a one-sentence indictment of Western academics, and the second attempt, which added the accusation of censorship.

In an attempt to maintain a reasonable signal-to-noise ratio in the comments section, I'll continue to be skeptical of comments that lack specific and relevant content. To put this skepticism into context, though, you need to understand something about how comments work in a standard WordPress blog."

Go to the link above for much more.

-- Edward Tufte


Paul Graham's Hacker News is a forum for start-up entrepreneurs. If there was ever a forum with a cultural consciousness of its own forum culture, and how to maintain it, Hacker News is it. These are all people trying to build user groups with a sustainable culture. Here are some of pg's recent thoughts on implementing forum controls:

Rapid banning of trolls seems to be an effective troll guard. It also works well with spam.

Trolling and spam are both self-perpetuating problems. Users are ruder on sites where everyone else is rude, and spammers are more likely to submit links to sites they get traffic from. So you can prevent both problems by never letting them get a foothold.

Deletion doesn't have to be manual, especially in the case of spam. Spammers smart enough to measure the traffic they get from HN quickly give up. And the dumb ones obligingly continue to post from banned accounts and IP addresses. So currently 80-90% of spam is killed by software rather than humans.

Flagging turns out to be a feature that saves a lot of work. So does rate-limiting submissions from newly created accounts (and, obviously, the IP addresses they use).

One general approach I've found very useful is not to protect against a certain type of abuse till it arises. Aside from obvious things like not letting people vote more than once, you don't need much protection when you first launch.
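
The rate-limiting control pg mentions for newly created accounts can be sketched roughly as follows; the thresholds and names are my own, invented for illustration:

```python
# Sketch of rate-limiting submissions from new accounts. An account
# younger than a day may post at most once every five minutes;
# established accounts are not limited here. Thresholds are made up.
NEW_ACCOUNT_AGE = 24 * 3600   # seconds: accounts younger than this are "new"
MIN_INTERVAL = 300            # seconds between posts for a new account

def may_submit(account_created, last_submission, now):
    """All arguments are Unix timestamps; last_submission may be None."""
    if now - account_created >= NEW_ACCOUNT_AGE:
        return True                    # established account: no limit here
    if last_submission is None:
        return True                    # first post from a new account
    return now - last_submission >= MIN_INTERVAL
```

In practice the same check would also be keyed to the submitting IP address, as pg notes.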

-- Niels Olson (email)


There seems to be an almost bizarre difference between the worst of the net and the best: the best forums almost universally have members who use their real names. The worst, of course, don't.

It doesn't appear to be related to the quality of the carbon-based life form at the keyboard either. Yale Law is aboil with accusations of slander on AutoAdmit.com, a forum for law students (eg, Google for Brittan Heller). Our experience at Tulane Med convinced us, as students, to pull the plug on our own forum, even though it proved to be our best source of information after the storm. People weren't using their real names.

On our new site, TMedWeb, we have a secure site for reviews of residency programs. Everything is encrypted and password protected (using the university's directory), but it's still remarkably sensitive information: these are students who are commenting on the residency programs they apply to, and we enforce real names by publishing their university account aliases (eg, Niels Olson is nolson). The reviewers still don't shy away from commenting on the quality of programs, but the students provide well composed essays instead of the filth that some of the same people wrote on the old, more public, but pseudo-anonymous forums. The utility of the information is yet to be fully realized. For example, the University of Chicago hospital got some rather negative press recently, due to layoffs, and we had a well written review from a student who had recently spent four weeks there, which provided balance for students considering whether or not to keep Chicago on their Match list for residency.

Ask ET is actually more heavily moderated, but is nevertheless open to the public. It is easily the highest quality public forum I know of.

Paul Graham's Hacker News allows anonymity but every comment goes live first, and the system relies very explicitly on the community to reward good behavior and punish bad behavior. And, as it happens, a lot of people use their real names, even people who might have good reason not to hang their reputation out on a public forum:

  • pg: Paul Graham, inventor of what was arguably the first e-commerce site builder, which he sold to Yahoo for gobs of money. The ups and downs of Yahoo's advertising revenue may provide a lot of material for the news, but the Yahoo Store operation continues to provide an underlying cash flow.
  • paul: Paul Buchheit, inventor of GMail.
  • dhh: David Heinemeier Hansson, creator of Ruby on Rails
  • aaronsw: Aaron Swartz. Helped write the RDF specification while in middle school and wrote the RSS specification before college.

I think there is a general sense on the Internet that young people in particular favor anonymity and are also more prone to the worst public behavior when they go anonymous. I wonder to what extent this is at least partially a function of self-preservation: the young know they're young.

It would be interesting to find out how, exactly, young people behave differently on secure forums. Will they use such forums at all? What if we get into a realm of speech-to-text? A 1st grader can read but probably doesn't type much. We might find out some interesting patterns. High-integrity authentication systems, like LDAP and Kerberos, are becoming more understood and more widely deployed in progressively smaller institutions, so it's something to keep an eye on. Certainly our recent experience at Tulane Med suggests the upshot can be richly rewarding.

-- Niels Olson (email)


Eliezer Yudkowsky has a good essay encouraging moderators to actively defend their communities: Well-Kept Gardens Die By Pacifism.

-- Niels Olson (email)


How to write an incendiary blog post

A brilliant essay by Chris Clarke:

http://faultline.org/index.php/site/item/incendiary/

And the accompanying comments are often delightful.

-- Edward Tufte


And, for a thoroughly different, though not inconsistent, view, here is reddit:

"Posting personal information is the Internet version of vandalism and abuse and will not be tolerated. If you see it, report it..."

http://blog.reddit.com/2011/05/reddit-we-need-to-talk.html

-- Niels Olson (email)


Jeff Atwood of the highly popular Stack Exchange (SE) sites has some recent comments on dealing with troublesome accounts:

Our method of dealing with disruptive or destructive community members is simple: their accounts are placed in timed suspension. Initial suspension periods range from 1 to 7 days, and increase exponentially with each subsequent suspension. We prefer the term "timed suspension" to "ban" to emphasize that we do want users to come back to their accounts, if they can learn to refrain from engaging in those disruptive or problematic behaviors.

The above is fine for minor infractions, but for more disruptive individuals it may be necessary to escalate tactics; the first of the three strategies mentioned is:

A hellbanned user is invisible to all other users, but crucially, not himself. From their perspective, they are participating normally in the community but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost. It's a clever way of enforcing the "don't feed the troll" rule in the community.
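
A hellban is, mechanically, just a visibility filter applied at read time. A toy sketch, with invented data shapes:

```python
# Sketch of a hellban filter: a hellbanned author's posts are rendered
# only for that author, who sees a normal-looking thread.
def visible_posts(posts, viewer):
    """posts: list of (author, text, hellbanned) tuples."""
    return [(author, text)
            for author, text, hellbanned in posts
            if not hellbanned or author == viewer]
```

Everyone else's view silently omits the banned user, which is what makes the "don't feed the troll" rule self-enforcing.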

His post has a few other tidbits:

http://www.codinghorror.com/blog/2011/06/suspension-ban-or-hellban.html
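
The timed-suspension schedule quoted earlier (initial periods of 1 to 7 days, increasing exponentially with each subsequent suspension) might be computed like this; the doubling rule and the 7-day base are my assumptions, since the post gives only the initial range:

```python
# Sketch of exponentially increasing suspension lengths: each prior
# suspension doubles the next one. Exact constants are assumptions.
def suspension_days(prior_suspensions, base_days=7):
    """Length in days of the next timed suspension."""
    return base_days * 2 ** prior_suspensions
```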

The folks at SE have really thought a lot about how to keep their community well run:

http://meta.stackoverflow.com/questions/23385/

They've tried to consolidate the rules of everything that's come before (Usenet, Slashdot, web forums, etc.) into their software, eliminating what doesn't work and keeping what does.

-- David Magda (email)


Interesting critique of the real-world consequences of online anonymity

http://blog.urbantag.com/post/6359235485/still-trust-reviews

-- Niels Olson (email)


If journals are the original forums, this may be the most significant way to elevate discourse on the internet: elevate the quality of accessible content. Princeton bans authors from yielding all copyright to publishers as part of open access policy

-- Niels Olson (email)


Further encouragement for authors to at least make their work available online

http://rationalconspiracy.com/2012/06/20/why-academic-papers-are-a-terrible-discussion-forum/

-- Niels Olson (email)



