>>:>> Any suggestions on how to filter URLs?
>>:>> I know I can filter via IP address, but I need to be able to filter a
>>:>> complete URL, and the list will be 2-4 MB worth of URLs.
>>:>> e.g. if http://www.whatever.com/~users/badstuff must be filtered, but
>>:>> http://www.whatever.com/~users/goodstuff shouldn't be: this requires the
>>:>> packets to be reassembled to determine the complete URL on which to base
>>:>> an accept or deny, so ipfw doesn't work (I think).
>>:>This sounds like something that might be better accomplished with a
>>:>Squid or other HTTP proxy. You must then force all of your users to use
>>:I would like to implement it without touching client desktops.
>>You forgot to complete your sentence:
>>"I would like to implement it without touching client desktops, so I
>>will use NAT to redirect all HTTP requests to a Squid server, where
>>I can implement logging and policy based access controls."
>Does an HTTP request directed through a proxy server look any
>different to the browser than a normal request?
The RFC for HTTP 1.1 defines the correct way for browsers to
communicate with proxy servers. So if you're doing any kind
of proxying, you will need a browser that is HTTP 1.1
compatible, i.e. Netscape Navigator 3.0 and above, and IE 4.0.
AFAIK, IE 3.0 is not HTTP 1.1 compatible.
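To illustrate the difference (a sketch; the host and path are made
up), a browser sends a relative request line to an origin server but
a full URL to an explicitly configured proxy:

```
# Request sent directly to the origin server:
GET /index.html HTTP/1.1
Host: www.example.com

# Request sent to an explicitly configured proxy:
GET http://www.example.com/index.html HTTP/1.1
Host: www.example.com
```

This is also why a transparent proxy must rebuild the full URL from
the Host header: the intercepted request only carries the relative
form.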
The long answer is that you will need to set up squid, plus NAT or
packet forwarding (via ipfw or ipfilter), and stick your FreeBSD box
in the path of your outgoing HTTP requests. This is known as
transparent proxying. The disadvantage is that your FreeBSD box is
now in your outbound data path.
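A minimal sketch of the pieces involved (the LAN subnet, squid port
3128, and rule number are assumptions; the squid directives shown are
the transparent-mode knobs from squid 1.x/2.x and vary by version):

```shell
# On the FreeBSD box in the data path: divert outbound port-80
# traffic from the LAN to the local squid listener (assumed on 3128).
ipfw add 100 fwd 127.0.0.1,3128 tcp from 192.168.1.0/24 to any 80

# Corresponding squid.conf fragments for transparent operation:
#   http_port 3128
#   httpd_accel_host virtual
#   httpd_accel_port 80
#   httpd_accel_with_proxy on
#   httpd_accel_uses_host_header on
```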
Another way of doing this is to set up your router to redirect all
outgoing HTTP requests to your squid box through access lists. If
access lists on a router disturb you, you might consider a layer-4
switch such as Alteon Networks' ACEDirector.
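On a Cisco IOS router, for example, the access-list approach is
usually done with policy routing; a rough sketch (the next-hop
address standing in for the squid box, the interface name, and the
list number are all assumptions for illustration):

```
! Match outbound HTTP from the LAN
access-list 110 permit tcp any any eq www
! Send matching packets to the squid box instead of the default route
route-map web-redirect permit 10
 match ip address 110
 set ip next-hop 192.168.1.2
! Apply on the LAN-facing interface
interface Ethernet0
 ip policy route-map web-redirect
```

Note the squid box still needs a local fwd/NAT rule of its own, since
the diverted packets are addressed to the origin servers, not to it.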
Disclaimer: I work for Alteon Networks.