We store all the submissions first in an intermediate table.
Spam bots usually fill out every field of the form. My algorithm then checks certain key fields for data-type consistency and only processes the records that pass. Neither the user nor the bot knows which fields are checked, and those fields are not marked as required.
This may not be as reliable as Google's reCAPTCHA at distinguishing humans from machines, but our records show only two false positives in the last ten years (users who did not fill out the form as requested), while all bot submissions were blocked.
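A minimal sketch of that idea, with made-up field names: each key field gets a strict server-side consistency check, plus a honeypot field that humans leave empty but bots tend to fill. None of this reflects the poster's actual fields or rules.

```python
import re

# Hypothetical checks on "key fields". A bot that stuffs every field with
# junk trips the type checks; a human entering sane values passes.
CHECKS = {
    "zip_code": lambda v: re.fullmatch(r"\d{5}", v) is not None,  # must be exactly 5 digits
    "website": lambda v: v == "",                                 # honeypot: humans leave it empty
    "year_of_birth": lambda v: v.isdigit() and 1900 <= int(v) <= 2025,
}

def looks_human(form: dict) -> bool:
    """Return True only if every key field passes its consistency check."""
    return all(check(form.get(field, "")) for field, check in CHECKS.items())

bot = {"zip_code": "buy pills", "website": "http://spam.example", "year_of_birth": "xx"}
human = {"zip_code": "90210", "website": "", "year_of_birth": "1984"}
print(looks_human(bot))    # False
print(looks_human(human))  # True
```

Records that pass would then be moved out of the intermediate table for further processing.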
Another option is to send a confirmation mail with an activation link that expires after a deadline (e.g. 15 minutes). The record is activated only if the link is followed:
- if the mail bounces, it was probably an invalid e-mail address supplied by a bot
- if the mail arrives and the link is still followed by a bot (which is possible), you at least have a valid e-mail address to track the spammer
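One way to sketch such an expiring activation link is a self-verifying token: the record id plus an expiry timestamp, signed with a server-side secret so it can't be forged. The secret, TTL, and record id here are illustrative assumptions, not the poster's implementation.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # assumption: kept private on the server
TTL_SECONDS = 15 * 60            # link expires after 15 minutes

def make_activation_token(record_id: str, now=None) -> str:
    """Build a token: record id, expiry timestamp, and an HMAC signature."""
    expires = int((now or time.time()) + TTL_SECONDS)
    payload = f"{record_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_activation_token(token: str, now=None):
    """Return the record id if the token is genuine and unexpired, else None."""
    try:
        record_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = f"{record_id}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                  # tampered link
    if (now or time.time()) > int(expires):
        return None                  # past the deadline: do not activate
    return record_id

token = make_activation_token("rec-42")
print(verify_activation_token(token))                          # "rec-42"
print(verify_activation_token(token, now=time.time() + 3600))  # None (expired)
```

The token would be appended to the activation URL in the confirmation mail; the activation endpoint verifies it and flips the record's status.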
I have moved all my sites to Cloudflare and have noticed a dramatic decrease in bot activity.
The basic service is free, but additional features can be added for a fee.
Hey, Drew! There's an interesting article on that:
I haven't tried this, and the article seems to cover a bit of all the methods (or at least gives you an idea of what others have done).
I use (note the "and", because I use all of them on each site):
1. HTML5 validation (upon submit), although not all browsers comply
2. server-side validation before adding to the database.
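The second layer above can be sketched as a server-side check guarding a parameterized insert, so nothing reaches the database without passing validation again. The table and validation rules here are illustrative assumptions.

```python
import sqlite3

def save_submission(conn, name: str, email: str) -> bool:
    """Validate once more server-side, then insert with a parameterized query."""
    if not name.strip() or "@" not in email:
        return False                 # reject before touching the database
    conn.execute(
        "INSERT INTO submissions (name, email) VALUES (?, ?)",
        (name.strip(), email.strip()),
    )
    conn.commit()
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE submissions (name TEXT, email TEXT)")
print(save_submission(conn, "Ada", "ada@example.org"))  # True
print(save_submission(conn, "", "not-an-email"))        # False
```

Even with HTML5 validation in the browser, this last check matters because bots post directly to the endpoint and never run the client-side checks.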
I've tried CAPTCHA on various sites, but found that it runs into the problems listed in the article above.
If you can capture the HTTP_USER_AGENT and perhaps the HTTP_REFERER headers, you can filter out much of this. That's what some of the spam blockers use.
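A rough sketch of that header-based filtering: reject requests whose User-Agent is empty or matches known bot signatures, or whose Referer doesn't point back at our own form page. The signature list and domain are illustrative assumptions, and both headers are trivially spoofable, so this only catches the lazy bots.

```python
# Hypothetical bot signatures and form URL; real lists are larger and maintained.
BOT_SIGNATURES = ("curl", "python-requests", "libwww", "scrapy")
FORM_PAGE = "https://example.com/contact"

def allow_request(user_agent: str, referer: str) -> bool:
    """Accept a form POST only if the headers look like a real browser on our page."""
    ua = (user_agent or "").lower()
    if not ua or any(sig in ua for sig in BOT_SIGNATURES):
        return False                 # empty or tool-like User-Agent
    # A genuine submission should arrive from our own form page.
    return referer.startswith(FORM_PAGE)

print(allow_request("Mozilla/5.0 (Windows NT 10.0)", "https://example.com/contact"))  # True
print(allow_request("curl/8.4.0", ""))                                                # False
```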
Very cool, Beverly. I like this a lot! I get complaints about the difficulty of reading the CAPTCHA images, but hadn't thought of a good alternative until this popped up. Thanks!