Looney Labs Rabbits Mailing list Archive

Re: [Rabbits] Re: Wiki Under Spambot Assault

  • From: Brian Campbell <lambda@xxxxxxx>
  • Date: Sat, 27 Oct 2007 16:34:29 -0400
(crossposted to the Geeks list; if we continue this thread, we should probably continue it there)

On Oct 27, 2007, at 4:10 PM, Edward Lorden wrote:

A website for which I am responsible has had similar attacks. While investigating CAPTCHA alternatives, I found several articles on the drawbacks of the technology. While it can help with some spambot attacks, there are others that can crack many of these methods without too much trouble. One of the most insidious is called the Man in the Middle scenario. In this scenario, the attacker sets up a free porn website and uses your CAPTCHA image as part of the sign-up criteria. In this way, a human being provides the solution to the CAPTCHA puzzle, which is then used to gain access to your site. I provide this information more as a warning than anything else. I’ve gone ahead and implemented a CAPTCHA puzzle but have continued to look for other ways to verify that it is a human that is actually accessing the site. If you wish, I can provide more details as they become available.
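
(For illustration, the kind of CAPTCHA check described above boils down to something like the sketch below: the server issues a challenge, remembers the expected answer, and verifies the response on submit. The arithmetic challenge and in-memory store are placeholder details, not how Edward's site actually does it.)

    import random
    import secrets

    # Hypothetical in-memory store: challenge token -> expected answer.
    _pending_challenges = {}

    def issue_challenge():
        """Create an arithmetic challenge and remember its answer server-side."""
        a, b = random.randint(1, 9), random.randint(1, 9)
        token = secrets.token_hex(16)
        _pending_challenges[token] = str(a + b)
        return token, "What is %d + %d?" % (a, b)

    def verify_response(token, response):
        """Check the response; each token can only be used once."""
        expected = _pending_challenges.pop(token, None)
        return expected is not None and response.strip() == expected

(Note that nothing here stops the relay attack Edward describes: a human on the attacker's site can answer the challenge just as easily as a legitimate visitor can.)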


I am well aware of the deficiencies of CAPTCHAs. I have seen references to the mechanical turk approach to defeating CAPTCHAs, though that approach greatly increases the cost of defeating a CAPTCHA (bandwidth, advertising costs to get users to the site, etc.). I have not seen any widespread abuse of CAPTCHA systems using that method, however; there are much easier ways of getting around such systems.

Much easier is to simply hire a human to post spam to blogs and wikis. That is what many of those spams you see about "make money by just surfing the internet" are all about; they do actually pay people (very small amounts) to post wiki and blog spam. There will never be any sort of automated system to detect these users (unless some sort of realtime blacklist system is set up that is run by trustworthy administrators), so if that becomes a problem, we'll have to take new approaches.
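
(For concreteness, a realtime blacklist of that sort would work roughly like the DNS-based blocklists used against e-mail spam: the poster's address is reversed and looked up in a zone maintained by trusted administrators. This is only a sketch; the zone name is a placeholder, not a real service.)

    import socket

    def ip_is_blacklisted(ip, zone="blacklist.example.org"):
        """Return True if the IPv4 address is listed in the given DNSBL-style zone."""
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname("%s.%s" % (reversed_ip, zone))
            return True          # any A record means the address is listed
        except socket.gaierror:
            return False         # NXDOMAIN: not listed (or lookup failed)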

My approach to securing the wiki against vandalism and spam is to make it as lenient as possible while preventing any large-scale abuses that are far beyond what we can reasonably handle by manually reverting and blocking users. A manual revert once a month isn't all that bad; several per day is just too frustrating to deal with, and will bring the productivity of people who work on the wiki way down.

So far, we've only seen vandalbot attacks, as far as I can tell, so blocking those with a CAPTCHA should be sufficient. If we start seeing human attacks, we may need to block URL posting for untrusted users, or require admin approval before people can edit, or possibly only allow certain keywords to be input by untrusted users; it all depends on what we see.
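
(To make that concrete, the per-edit policy would look roughly like the sketch below. The trust flag, CAPTCHA flag, and moderation step are hypothetical names for illustration, not anything wired into the wiki today.)

    import re

    URL_PATTERN = re.compile(r"https?://", re.IGNORECASE)

    def allow_edit(user_is_trusted, passed_captcha, text):
        """Decide whether an edit is accepted, held for review, or rejected."""
        if user_is_trusted:
            return "accept"
        if not passed_captcha:
            return "reject"                    # vandalbots stop here
        if URL_PATTERN.search(text):
            return "hold_for_admin_approval"   # untrusted users can't post links
        return "accept"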
