An almost infinite array of automated tools exists to spider and
mirror application content, extract confidential material, brute-force
guess authentication credentials, discover code-injection flaws, fuzz
application variables for exploitable overflows, scan for common files
or vulnerable CGIs, and generally attack or exploit web-based
application flaws. While of great value to security professionals, the
use of these tools by attackers represents a clear and present danger
to all organizations.
These automated tools have become increasingly popular for attackers
seeking to compromise the integrity of online applications, and are
used during most phases of an attack. Whilst there are a number of
defense techniques which, when incorporated into a web-based
application, are capable of stopping even the latest generation of
tools, unfortunately most organizations have failed to adopt them.
This whitepaper examines techniques capable of defending applications
against these tools, providing advice on their particular strengths and
weaknesses and proposing solutions capable of stopping the next
generation of automated attack tools.
By Gunter Ollmann. Get the PDF at Infosecwriters.com.
This is a good read and has some suggestions I had not thought of
before. I strongly suggest looking at intrusion prevention if you
have public web servers. Here's a peek inside the PDF (a rough sketch
of a few of these defences follows the list):
The 10 most frequently utilised defences are:
- Concealing the identity of the server hosting software,
- Blocking HEAD requests for content information,
- Use of the REFERER field to evaluate previous link information,
- Manipulation of Content-Type to “break” file downloads,
- Client-side redirects to the real content location,
- HTTP status codes to hide informational errors,
- Request thresholds and timeouts to prevent repetitive content requests,
- One-time links to ensure users stick to a single navigation path,
- Honeypot links to identify non-human requests,
- Turing tests to block non-human content requests.
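
To make a few of those concrete, here is a minimal sketch of my own (in Go,
not taken from the whitepaper) showing what blocking HEAD requests, checking
the REFERER field, and enforcing per-client request thresholds might look
like in front of a content handler. The allowedReferer prefix and the
maxPerMinute limit are illustrative assumptions; the remaining defences
(redirects, one-time links, honeypots, Turing tests) are not shown.

package main

import (
	"log"
	"net"
	"net/http"
	"strings"
	"sync"
	"time"
)

const (
	allowedReferer = "https://www.example.com/" // assumed: prefix of our own site
	maxPerMinute   = 60                         // assumed: per-IP request threshold
)

var (
	mu     sync.Mutex
	visits = map[string][]time.Time{} // recent request times per client IP
)

// underThreshold records a request from ip and reports whether that client
// is still below the per-minute limit.
func underThreshold(ip string) bool {
	mu.Lock()
	defer mu.Unlock()
	cutoff := time.Now().Add(-time.Minute)
	kept := visits[ip][:0]
	for _, t := range visits[ip] {
		if t.After(cutoff) {
			kept = append(kept, t)
		}
	}
	kept = append(kept, time.Now())
	visits[ip] = kept
	return len(kept) <= maxPerMinute
}

// defend wraps a handler with three of the listed defences.
func defend(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Blocking HEAD requests for content information.
		if r.Method == http.MethodHead {
			http.Error(w, "Forbidden", http.StatusForbidden)
			return
		}
		// Use of the REFERER field: deep links must arrive from our own pages.
		if r.URL.Path != "/" && !strings.HasPrefix(r.Referer(), allowedReferer) {
			http.Error(w, "Forbidden", http.StatusForbidden)
			return
		}
		// Request thresholds to prevent repetitive content requests per client.
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		if !underThreshold(ip) {
			http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	content := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("protected content\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", defend(content)))
}

A real deployment would keep the counters somewhere shared rather than in one
process's memory and would combine these checks with the paper's remaining
defences, but the shape of the middleware is the point.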