Frequently Asked Questions

How does whitetrash protect a user against browser exploitation?

The Ghost in the Browser paper showed that practically all web malware is served from a domain different from the page that references it. This makes sense from a malware-distribution point of view and is unlikely to change. What this means is that even when a legitimate website has been compromised and is serving malware, the malware itself is served from a different domain using an iframe or a small JavaScript snippet. Because it is coming from a different domain, the malware domain will not be in the whitelist and hence the attack will be blocked by whitetrash.
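The core check can be sketched as follows. This is a minimal illustration of the idea, not whitetrash's actual code; the function and domain names are hypothetical.

```python
# Hypothetical sketch of a whitelisting proxy's per-request decision.
# The compromised-but-legitimate page is whitelisted; the second domain
# that the injected iframe pulls malware from is not.

whitelist = {"example.com", "news.example.org"}  # domains the user has approved

def is_allowed(request_domain, whitelist):
    """Allow the request only if its domain is in the whitelist."""
    return request_domain in whitelist

assert is_allowed("example.com", whitelist)            # the page itself loads
assert not is_allowed("evil-malware.test", whitelist)  # the iframe's payload is blocked
```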

How does whitetrash protect a user against malware in general?

Whitetrash makes things hard for malware because the domains malware wants to use will not be in the whitelist. This means any beaconing, downloading of tools, command and control, and exfiltration of data won't work.

Couldn't the malware just submit the form and add itself to the whitelist?

Yes. However, this would require very advanced malware functionality and is highly unlikely at present. A CAPTCHA system has been integrated to combat this threat.

Don't users just whitelist absolutely everything, negating the effectiveness of the whitelist?

Users whitelist everything that they can see, i.e. anything that takes up screen real estate. There are no pop-up prompts in whitetrash to whitelist domains that do not provide visible content. This means that whitetrash will block malicious sites that try to serve you malware through invisible iframes, 1x1-pixel media files, and nasty bits of JavaScript - none of which advertise their presence by taking up screen real estate, and hence none of which will be whitelisted by the user.

Won't whitetrash break sites that serve content off different domains? E.g. Google's webpage www.google.com serves all its images off images.google.com.

Whitetrash wildcards the first label of all "www" domains. This means that if you add www.google.com, you automatically get *.google.com. With a small number of exceptions, if a website is serving content off a different domain (i.e. not a subdomain), it is either malware or an ad - neither of which you want.
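The wildcard rule can be sketched as below. This is an illustrative reimplementation, not whitetrash's actual code; the function names are made up for the example.

```python
# Hypothetical sketch of the "www" wildcard rule: adding www.google.com
# yields the pattern *.google.com, which matches any subdomain of google.com.

def wildcard_for(domain):
    """Turn a www domain into a wildcard covering all its subdomains."""
    if domain.startswith("www."):
        return "*." + domain[len("www."):]
    return domain

def matches(pattern, domain):
    """Check a requested domain against a whitelist pattern."""
    if pattern.startswith("*."):
        suffix = pattern[1:]           # ".google.com"
        return domain.endswith(suffix) or domain == pattern[2:]
    return domain == pattern

pattern = wildcard_for("www.google.com")   # "*.google.com"
assert matches(pattern, "images.google.com")   # subdomain content still loads
assert matches(pattern, "google.com")          # bare domain is covered too
assert not matches(pattern, "ads.tracker.example")  # unrelated domain stays blocked
```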

What about content delivery networks? Won't that break whitetrash, since the content is coming from a completely different domain, i.e. not a subdomain as above?

This approach is gaining in popularity with large sites (Slashdot serves its images off fsdn.com, Facebook uses fbcdn.net), but does not yet present a big problem. A Firefox plugin has been developed to address this issue, and an IE toolbar is being considered as well. As a last resort users can read the HTML source to find domains that need to be whitelisted, but this is probably too much to expect of less tech-savvy users. Organisations with established support areas may use those resources to investigate problematic sites on behalf of users.

The Firefox plugin gives users a quick way to whitelist domains that are required to make the page work. Isn't that dangerous, since you are giving the user the ability to whitelist all the nasty domains that could be in that list?

There is an element of risk, but the user is only expected to resort to the plugin when the page is broken. The page is refreshed after each domain is whitelisted, so users should stop once the page 'works', hopefully leaving the advertising/malware/tracking domains unwhitelisted. One way of approaching this would be to provide the browser plugin only to helpdesk staff, so they can quickly investigate problematic sites for users. The whitelist is published on the whitetrash server for all users, so there is also an element of 'peer review' that may make users more cautious about the domains they whitelist.

Can I still content-filter the whitelisted webpages?

Of course, content filtering is still a good idea. This could be done with a content filter chained with squid (e.g. DansGuardian) or a commercial appliance inline.

I have an existing blacklist, how would this fit in with whitetrash?

You can apply a blacklist before whitetrash using a squid configuration line so that users cannot whitelist a blacklisted domain, or you could do it with an appliance before the request reaches whitetrash.
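One possible squid.conf fragment for the first option is shown below, placed before the directives that hand requests to whitetrash. The ACL name and file path are illustrative, not part of whitetrash.

```
# Deny blacklisted domains before whitetrash ever sees the request.
# /etc/squid/blacklist.txt contains one domain per line, e.g. .badsite.example
acl blacklisted_domains dstdomain "/etc/squid/blacklist.txt"
http_access deny blacklisted_domains
```

Because the deny rule fires first, a blacklisted domain can never reach the whitetrash form, so users cannot whitelist it.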

Whitetrash allows users to delete from the whitelist, so couldn't I just add a malicious domain, use it, then remove it from the whitelist?

Yes. However, allowing users to delete is configurable, i.e. you can restrict deletion to designated admins if this is a problem. All accesses are still logged as usual by squid.

Can I use whitetrash to filter out classes of domains, e.g. pornographic or violent sites?

Bear in mind that this tool is designed to mitigate initial exploitation and malware communication/exfiltration, not to enforce a ban on porn. However, the whitelist is published on the whitetrash webserver, so assuming you are using authentication, having your username against one of these sites where everyone in the organisation can view it might be enough disincentive. Alternatively, you can still use squid blacklisting, or perform blacklisting and content filtering through an upstream appliance.

Could you have a workflow where users register domains they want to visit in a similar way, but the domain is not whitelisted until it is reviewed and approved by the corporate helpdesk/outsourcer?

You could, although I think this is of fairly low value in preventing initial exploitation, exfil, and C&C. If a user actually wants to visit a domain, it is unlikely that that domain will be serving malware (see The Ghost in the Browser). The exception is phishing attacks, where the user believes they want to visit a malicious site. For these sites the Google Safe Browsing service used by whitetrash is a far superior evaluation to any that could be performed by helpdesk staff.

How will the helpdesk staff evaluate whether a site is 'safe'? Given that evaluating the 'safeness' of a complex website is extremely difficult (read: practically impossible) and time-consuming for even a skilled security researcher, what chance does a helpdesk employee have of making a meaningful assessment in a timeframe that will be useful to the user?

This strategy is in a similar vein to the available commercial peer-review web categorisation services, and is better at blocking porn than malware (see previous question).

Why do I need to install the whitetrash SSL Certificate Authority (CA) certificate?

Whitetrash has its own certificate authority built-in that is used to create certificates for whitelisting SSL sites. When you request an SSL site (e.g. mail.google.com) that isn't in the whitelist, whitetrash creates an SSL certificate for that site (*.google.com). It then redirects you to the whitetrash 'addentry' form using the new certificate for the domain you requested. In effect whitetrash pretends to be mail.google.com for the purpose of delivering you the form to add the domain. Having whitetrash as a trusted CA means that the user will not get SSL warnings about bad certificates each time they try to whitelist an SSL site.

Once the SSL site has been whitelisted, the SSL certificate exchange is simply proxied to the actual site by squid without any involvement from whitetrash. Whitetrash does not intercept or decrypt SSL sessions with whitelisted sites - users can verify this by checking the certificate in their browser.

How do I install the whitetrash SSL Certificate Authority (CA) certificate in my browser?

Go to http://whitetrash, right-click on 'Download CA', choose 'Save link as', and save the file on your computer. In Firefox, go to Edit -> Preferences (on Linux) or Tools -> Options... (on Windows). Select Advanced -> Encryption -> View Certificates -> Authorities -> Import..., choose the saved certificate, and select 'Trust this CA to identify web sites'.

For Microsoft Internet Explorer, download the certificate as above, then go to Tools -> Internet Options -> Content -> Certificates, select the Trusted Root Certification Authorities tab, and import the saved certificate.