The Surveillance Self-Defense Project
The Electronic Frontier Foundation (EFF) has created this Surveillance Self-Defense site to educate the
American public about the law and technology of government surveillance in the United States,
providing the information and tools necessary to evaluate the threat of surveillance and take
appropriate steps to defend against it.

  Surveillance Self-Defense (SSD) exists to answer two main questions: What can the

  government legally do to spy on your computer data and communications? And what can you

  legally do to protect yourself against such spying?



  After an introductory discussion of how you should think about making security decisions — it's

  all about risk management — we'll be answering those two questions for three types of data:



  First, we're going to talk about the threat to the data stored on your computer posed by

  searches and seizures by law enforcement, as well as subpoenas demanding your records.



  Second, we're going to talk about the threat to your data on the wire — that is, your data as

  it's being transmitted — posed by wiretapping and other real-time surveillance of your

  telephone and Internet communications by law enforcement.



  Third, we're going to describe the information about you that is stored by third parties like

  your phone company and your Internet service provider, and how law enforcement officials can

  get it.



  In each of these three sections, we're going to give you practical advice about how to protect

  your private data against law enforcement agents.



  In a fourth section, we'll also provide some basic information about the U.S. government's

  expanded legal authority when it comes to foreign intelligence and terrorism investigations.



  Finally, we've collected several articles about specific defensive technologies that you can use

  to protect your privacy, which are linked to from the other sections or can be accessed

  individually. So, for example, if you're only looking for information about how to securely

  delete your files, or how to use encryption to protect the privacy of your emails or instant

  messages, you can just directly visit that article.



  Legal disclaimer: This guide is for informational purposes only and does not constitute legal

  advice. EFF's aim is to provide a general description of the legal and technical issues
  surrounding your or your organization's computer and communications security, and different

  factual situations and different legal jurisdictions will result in different answers to a number of

  questions. Therefore, please do not act on this legal information alone; if you have any specific

  legal problems, issues, or questions, seek a complete review of your situation with a lawyer

  licensed to practice in your jurisdiction.


Risk Management
Security Means Making Trade-Offs to Manage Risks

  Security isn't having the strongest lock or the best anti-virus software — security is about

  making trade-offs to manage risk, something we do in many contexts throughout the day.

  When you consider crossing the street in the middle of the block rather than at a cross-walk,

  you are making a security trade-off: you consider the threat of getting run over versus the

  trouble of walking to the corner, and assess the risk of that threat happening by looking for

  oncoming cars. Your bodily safety is the asset you're trying to protect. How high is the risk of

  getting run over and are you in such a rush that you're willing to tolerate it, even though the

  threat is to your most valuable asset?



  That's a security decision. Not so hard, is it? It's just the language that takes getting used to.

  Security professionals use four distinct but interrelated concepts when considering security

  decisions: assets, threats, risks and adversaries.


Assets
What You Are Protecting

  An asset is something you value and want to protect. Anything of value can be an asset, but in

  the context of this discussion most of the assets in question are information. Examples are you

  or your organization's emails, instant messages, data files and web site, as well as the

  computers holding all of that information.


Threats
What You Are Protecting Against

  A threat is something bad that can happen to an asset. Security professionals divide the

  various ways threats can hurt your data assets into six sub-areas that must be balanced

  against each other:


    •   Confidentiality is keeping assets or knowledge about assets away from unauthorized
        parties.

    •   Integrity is keeping assets undamaged and unaltered.
    •   Availability is the assurance that assets are available to parties authorized to use them.

    •   Consistency is when assets behave and work as expected, all the time.

    •   Control is the regulation of access to assets.

    •   Audit is the ability to verify that assets are secure.


  Threats can be classified based on which types of security they threaten. For example,

  someone trying to read your email (the asset) without permission threatens its confidentiality

  and your control over it. If, on the other hand, an adversary wants to destroy your email or

  prevent you from getting it, the adversary is threatening the email's integrity and availability.

  Using encryption, as described later in this guide, you can protect against several of these

  threats. Encryption not only protects the confidentiality of your email by scrambling it into a

  form that only you or your intended recipient can descramble, but also allows you to audit the

  emails — that is, check and see that the person claiming to be the sender is actually that

  person, or confirm that the email wasn't changed between the sender and you to ensure that

  you've maintained the email's integrity and your control over it.
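For readers comfortable with code, the integrity-and-audit idea can be made concrete in a few lines. The sketch below uses Python's standard hmac module with a shared secret key; it is a toy illustration of message authentication — verifying that a message came from someone holding the key and wasn't altered — not the public-key encryption (e.g. PGP) that actually protects email. The key and messages are invented for the example.

```python
import hmac
import hashlib

def sign(message: bytes, key: bytes) -> str:
    """Compute an authentication tag over the message using a shared secret key."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, key: bytes, tag: str) -> bool:
    """Check that the message was produced by someone holding the key and was
    not altered in transit — the 'integrity' and 'audit' properties above."""
    return hmac.compare_digest(sign(message, key), tag)

key = b"shared-secret-key"   # hypothetical key known only to sender and recipient
original = b"Meet at noon."

tag = sign(original, key)
print(verify(original, key, tag))         # True: message intact and authentic
print(verify(b"Meet at ten.", key, tag))  # False: tampering is detected
```

Note that this scheme requires the sender and recipient to share a secret key in advance; real email protection tools solve that distribution problem with public-key cryptography.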


Risk
The Likelihood of a Threat Actually Occurring

  Risk is the likelihood that a particular threat against a particular asset will actually come to

  pass, combined with how badly the asset would be damaged if it did. There is a crucial distinction between threats and

  risks: threats are the bad things that can happen to assets, but risk is the likelihood that

  specific threats will occur. For instance, there is a threat that your building will collapse, but

  the risk that it will really happen is far greater in San Francisco (where earthquakes are

  common) than in Minneapolis (where they are not).



  People often over-estimate and thus over-react to the risk of unlikely threats because they are

  rare enough that the worst incidents are well publicized or interesting in their unusualness.

  Similarly, they under-estimate and under-react to more common risks. The most clichéd

  example is driving versus flying. Another example: when we talk to individuals about

  government privacy intrusions, they are often concerned about wiretapping or searches, but

  most people are much more at risk from less dramatic measures, like subpoenas demanding

  records from you or your email provider. That is why we so strongly recommend good data

  practices — if it's private, don't give it to others to hold and don't store it, but if you do store

  it, protect it — while also covering more unusual circumstances, like what to do when the

  police show up at your door or seize your laptop.
  Evaluating risk is necessarily a subjective process; not everyone has the same priorities or

  views threats in the same way. Many people find certain threats unacceptable no matter what

  the risk, because the mere presence of the threat at any likelihood is not worth the cost. In

  other cases, people disregard high risks because they don't view the threat as a problem. In a

  military context, for example, it might be preferable for an asset to be destroyed than for it to

  fall into enemy hands. Conversely, in many civilian contexts, it's more important for an asset

  such as email service to be available than confidential.



  In his book Beyond Fear, security expert Bruce Schneier identifies five critical questions about

  risk that you should ask when assessing proposed security solutions:


    •   What assets are you trying to protect?

    •   What are the risks to those assets?

    •   How well does the security solution mitigate those risks?

    •   What other risks does the security solution cause?

    •   What costs and trade-offs does the security solution impose?


  Security is the art of balancing the value of the asset you are trying to protect against the

  costs of providing protection against particular risks. Practical security requires you to

  realistically judge the actual risk of a threat in order to decide which security precautions may

  be worth using to protect an asset, and which precautions are absolutely necessary.



  In this sense, protecting your security is a game of tradeoffs. Consider the lock on your front

  door. What kind of lock — or locks — should you invest in, or should you lock the door at all?

  The assets are invaluable — the privacy of your home and control over the things inside. The

  threat level is very high — you could be financially wiped out, and all of your most valuable

  and private information exposed, if someone broke in. The critical question then becomes: how

  serious is the risk of someone breaking in? If the risk is low, you probably won't want to invest

  much money in a lock; if the risk is high, you'll want to get the best locks that you can.


Adversaries
Who Poses a Threat?

  A critical part of assessing risk and deciding on security solutions is knowing who or what your

  adversary is. An adversary, in security-speak, is any person or entity that poses a threat
  against an asset. Different adversaries pose different threats to different assets with different

  risks; different adversaries will demand different solutions.



  For example, if you want to protect your house from a random burglar, your lock just needs to

  be better than your neighbors', or your porch better lit, so that the burglar will choose the

  other house. If your adversary is the government, though, money spent on a better lock than

  your neighbors' would be wasted — if the government is investigating you and wants to search

  your house, it won't matter how well your security compares to your neighbors'. You would

  instead be better off spending your time and money on other security measures, like

  encrypting your valuable information so that if it's seized, the government can't read it.



  Here are some examples of the kinds of adversaries that may pose a threat to your digital

  privacy and security:


    •   U.S. government agents that follow laws which limit their activities

    •   U.S. government agents that are willing and able to operate without legal restrictions

    •   Foreign governments

    •   Civil litigants who have filed or intend to file a lawsuit against you

    •   Companies that store or otherwise have access to your data

    •   Individual employees who work for those companies

    •   Hackers or organized criminals who randomly break into your computer, or the computers of
        companies that store your data

    •   Hackers or organized criminals that specifically target your computer or the computers of the
        companies that store your data

    •   Stalkers, private investigators or other private parties who want to eavesdrop on your
        communications or obtain access to your machines


  This guide focuses on defending against threats from the first adversary — government agents

  that follow the law — but the information herein should also provide some help in defending

  against the others.


Putting it All Together
Which Threats from Which Adversaries Pose the Highest Risk to Your Assets?

  Putting these concepts together, you need to evaluate which threats to your assets from which

  adversaries pose the most risk, and then decide how to manage the risk. Intelligently trading
 off risks and costs is the essence of security. How much is it worth to you to manage the risk?

 For example, you may recognize that government adversaries pose a threat to your webmail

 account, because of their ability to secretly subpoena its contents. If you consider that threat

 from that adversary to be a high risk, you may choose not to store your email messages with

 the webmail company, and instead store it on your own computer. If you consider it a low risk,

 you may decide to leave your email with the webmail company — trading security for the

 convenience of being able to access your email from any internet-connected computer. Or, if

 you think it’s an intermediate risk, you may leave your email with the webmail company but

 tolerate the inconvenience of using encryption to protect the confidentiality of your most

 sensitive emails. In the end, it’s up to you to decide which trade-offs you are willing to make to

 help secure your assets.
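For readers who like to see the trade-off made concrete, here is a small, purely illustrative sketch of the webmail decision described above. The 1-to-5 scales, thresholds, and function names are our own assumptions for the example, not anything this guide prescribes; as noted above, evaluating risk is ultimately a subjective process.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk as likelihood x impact; both inputs on a 1-5 scale,
    where higher means more likely / more damaging (illustrative only)."""
    return likelihood * impact

def webmail_decision(likelihood: int, impact: int) -> str:
    """Map a risk score to one of the three trade-offs in the webmail example."""
    score = risk_score(likelihood, impact)
    if score >= 15:  # high risk: don't leave the asset with a third party
        return "store email locally"
    if score >= 8:   # intermediate risk: keep the convenience, add encryption
        return "keep webmail, encrypt sensitive messages"
    return "keep webmail as-is"  # low risk: trade security for convenience

print(webmail_decision(5, 4))  # high risk
print(webmail_decision(3, 3))  # intermediate risk
print(webmail_decision(1, 2))  # low risk
```

The point of the sketch is the structure, not the numbers: you supply your own estimates of likelihood and impact, and the thresholds encode how much inconvenience you are willing to accept.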


A Few Parting Lessons

 Now that we've covered the critical concepts, here are a few more basic lessons in security-

 think that you should consider before reading the rest of this guide:



 Knowledge is Power. Good security decisions can't be made without good information. Your

 security tradeoffs are only as good as the information you have about the value of your assets,

 the severity of the threats from different adversaries to those assets, and the risk of those

 attacks actually happening. We're going to try to give you the knowledge you need to identify

 the threats to your computer and communications security that are posed by the government,

 and judge the risk against possible security measures.



 The Weakest Link. Think about assets as components of the system in which they are used.

 The security of the asset depends on the strength of all the components in the system. The old

 adage that "a chain is only as strong as its weakest link" applies to security, too: The system

 as a whole is only as strong as the weakest component. For example, the best door lock is of

 no use if you have cheap window latches. Encrypting your email so it won't get intercepted in

 transit won't protect the confidentiality of that email if you store an unencrypted copy on your

 laptop and your laptop is stolen.



 Simpler is Safer and Easier. It is generally most cost-effective and most important to

 protect the weakest component of the system in which an asset is used. Since the weak

 components are much easier to identify and understand in simple systems, you should strive

 to reduce the number and complexity of components in your information systems. A small
  number of components will also serve to reduce the number of interactions between

  components, which is another source of complexity, cost, and risk.



  More Expensive Doesn't Mean More Secure. Don't assume that the most expensive

  security solution is the best, especially if it takes away resources needed elsewhere. Low-cost

  measures like shredding trash before leaving it on the curb can give you lots of bang for your

  security buck.



  There is No Perfect Security — It's Always a Trade-Off. Set security policies that are

  reasonable for your organization, for the risks you face, and for the implementation steps your

  group can and will take. A perfect security policy on paper won't work if it's too difficult to

  follow day-to-day.



  What's Secure Today May Not Be Secure Tomorrow. It is also crucially important to

  continually re-evaluate the security of your assets. Just because they were secure last year or

  last week doesn't mean they're still secure!


Data Stored on Your Computer
Search, Seizure and Subpoenas

  In this section, you'll learn about how the law protects — or doesn't protect — the data that

  you store on your own computer, and under what circumstances law enforcement agents can

  search or seize your computer or use a subpoena to demand that you turn over your data.

  You'll also learn how to protect yourself in case the government does attempt to search, seize,

  or subpoena your data, with a focus on learning how to minimize the data that you store and

  use encryption to protect what you do store.


What Can the Government Do?

  Before you can think about security against the government, you need to know law

  enforcement’s capabilities and limitations. The government has extraordinary abilities — it’s the

  best-funded adversary you’ll ever face. But the government does have limits. It must decide

  whether it is cost-effective to deploy its resources against you. Further, law enforcement

  officers have to follow the law, and most often will try to do so, even if only because there are

  penalties associated with violating it. The first and most important law for our purposes is the

  Fourth Amendment to the United States Constitution.


The Fourth Amendment
Protecting People From Unreasonable Government Searches and Seizures

  The Fourth Amendment says, "[t]he right of the people to be secure in their persons, houses,

  papers, and effects, against unreasonable searches and seizures, shall not be violated, and no

  Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and

  particularly describing the place to be searched, and the persons or things to be seized."



  A seizure occurs when the government takes possession of items or detains people.



  A search is any intrusion by the government into something in which one has a reasonable

  expectation of privacy.



  Some examples of searches include: reaching into your pockets or searching through your

  purse; entering into your house, apartment, office, hotel room, or mobile home; and

  examining the contents of your backpack or luggage. Depending on the facts, eavesdropping

  on your conversations or wiretapping of your communications can also constitute a search and

  seizure under the Fourth Amendment.



  The Fourth Amendment requires searches and seizures to be "reasonable", which generally

  means that police must get a search warrant if they want to conduct a legal search or seizure,

  although there are exceptions to this general rule. If a search or seizure is "unreasonable" and

  thus illegal, then police cannot use the evidence obtained through that search or seizure in a

  criminal trial. This is called the exclusionary rule and it is the primary incentive against

  government agents violating your Fourth Amendment rights.



  A few important things to remember:


    •   The Fourth Amendment protects you from unreasonable searches whether or not you are a
        citizen. In particular, the exclusionary rule applies to all criminal defendants, including non-
        citizens. However, the exclusionary rule does not apply in immigration hearings, meaning that
        the government may introduce evidence from an illegal search or seizure in those
        proceedings.

    •   The Fourth Amendment applies whenever the government — whether local, state or federal
        — conducts a search or seizure. It protects you from an unreasonable search or seizure by
        any government official or agent, not just the police.

    •   The Fourth Amendment does not protect you from privacy invasions by people other than the
        government, even if they later hand over what they found to the government — unless the
        government directed them to search your things in the first place.
•   Your Fourth Amendment rights against unreasonable searches and seizures cannot be
    suspended — even during a state of emergency or wartime — and they have not been
    suspended by the USA PATRIOT Act or any other post-9/11 legislation.

•   If you are ever searched or served with any kind of government order, contact a lawyer
    immediately to discuss your rights. Contact a lawyer any time you are searched, threatened
    with a search, or served with any kind of legal papers from the government or anyone else. If
    you do not have a lawyer, pro bono legal organizations such as EFF are available to help you
    or assist in finding other lawyers who will.

Reasonable Expectation of Privacy

  The Fourth Amendment only protects you against searches that violate your reasonable
  expectation of privacy. A reasonable expectation of privacy exists if 1) you actually expect
  privacy, and 2) your expectation is one that society as a whole would think is legitimate.

  This rule comes from a decision by the United States Supreme Court in 1967, Katz v. United
  States, holding that when a person enters a telephone booth, shuts the door, and makes a
  call, the government cannot record what that person says on the phone without a warrant.
  Even though the recording device was stuck to the outside of the phone booth glass and did
  not physically invade Katz’s private space, the Supreme Court decided that when Katz shut
  the phone booth’s door, he justifiably expected that no one would hear his conversation, and
  that it was this expectation — rather than the inside of the phone booth itself — that was
  protected from government intrusion by the Fourth Amendment. This idea is generally
  phrased as "the Fourth Amendment protects people, not places."

  A big question in determining whether your expectation of privacy is "reasonable" and
  protected by the Fourth Amendment arises when you have "knowingly exposed" something
  to another person or to the public at large. Although Katz did have a reasonable expectation
  of privacy in the sound of his conversation, would he have had a reasonable expectation of
  privacy in his appearance or actions while inside the glass phone booth? Probably not.

  Thus, some Supreme Court cases have held that you have no reasonable expectation of
  privacy in information you have "knowingly exposed" to a third party — for example, bank
  records or records of telephone numbers you have dialed — even if you intended for that
  third party to keep the information secret. In other words, by engaging in transactions with
  your bank or communicating phone numbers to your phone company for the purpose of
  connecting a call, you’ve "assumed the risk" that they will share that information with the
  government.

  You may "knowingly expose" a lot more than you really know or intend. Most information a
  third party collects — such as your insurance records, credit records, bank records, travel
  records, library records, phone records and even the records your grocery store keeps when
  you use your "loyalty" card to get discounts — was given freely to them by you, and is
  probably not protected by the Fourth Amendment under current law. There may be privacy
  statutes that protect against the sharing of information about you — some communications
  records receive special legal protection, for example — but there is likely no constitutional
  protection, and it is often very easy for the government to get hold of these third-party
  records without your ever being notified.

  Here are some more details on how the Fourth Amendment will — or won't — protect you in
  certain circumstances:
•   Residences. Everyone has a reasonable expectation of privacy in their home. This is not

    just a house as it says in the Fourth Amendment, but anywhere you live, be it an

    apartment, a hotel or motel room, or a mobile home.

•   However, even things in your home might be knowingly exposed to the public and lose

    their Fourth Amendment protection. For example, you have no reasonable expectation of

    privacy in conversations or other sounds inside your home that a person outside could

    hear, or odors that a passerby could smell (although the Supreme Court has held that

    more invasive technological means of obtaining information about the inside of your

    home, like thermal imaging technology to detect heat sources, is a Fourth Amendment

    search requiring a warrant). Similarly, if you open your house to the public for a party, a

    political meeting, or some other public event, police officers could walk in posing as

    guests and look at or listen to whatever any of the other guests could, without having to

    get a warrant.
•   Business premises. You have a reasonable expectation of privacy in your office, so

    long as it’s not open to the public. But if there is a part of your office where the public is

    allowed, like a reception area in the front, and if a police officer enters that part of the

    office as any other member of the public is allowed to, it is not a search for the officer to

    look at objects in plain view or listen to conversations there. That’s because you’ve

    knowingly exposed that part of your office to the public. However, if the officer does not

    stay in that portion of the premises that is open to the public — if he starts opening file

    cabinets or tries to go to private offices in the back without an invitation — then his

    conduct becomes a search requiring a search warrant.
•   Trash. The things you leave outside your home at the edge of your property are

    unprotected by the Fourth Amendment. For example, once you carry your trash out of

    your house or office and put it on the curb or in the dumpster for collection, you have

    given up any expectation of privacy in the contents of that trash. You should always

    keep this in mind when you are disposing of sensitive documents or anything else that
    you want to keep private. You may want to shred all paper documents and destroy all

    electronic media. You could also try to put the trash out (or unlock your trashcan) right

    before it’s picked up, rather than leaving it out overnight without a lock.
•   Public places. It may sound obvious, but you have little to no privacy when you are in

    public. When you are in a public place — whether walking down the sidewalk, shopping

    in a store, sitting in a restaurant or in the park — your actions, movements, and

    conversations are knowingly exposed to the public. That means the police can follow you

    around in public and observe your activities, see what you are carrying or to whom you

    are talking, sit next to you or behind you and listen to your conversations — all without a

    warrant. You cannot necessarily expect Fourth Amendment protection when you’re in a

    public place, even if you think you are alone. Fourth Amendment challenges have been

    unsuccessfully brought against police officers using monitoring beepers to track a

    suspect’s location in a public place, but it is unclear how those cases might apply to

    more pervasive remote monitoring, like using GPS or other cell phone location

    information to track a suspect’s physical location.
•   Infiltrators and undercover agents. Public meetings of community and political

    organizations, just like any other public places, are not private. If the government

    considers you a potential criminal or terrorist threat, or even if they just have an

    unfounded suspicion that your organization might be up to something, undercover police

    or police informants could come to your public meetings and attempt to infiltrate your

    organization. They may even wear hidden microphones and record every word that’s

    said. Investigators can lie about their identities and never admit that they’re cops —

    even if asked directly. By infiltrating your organization, the police can identify any of

    your supporters, learn about your plans and tactics, and could even get involved in the

    politics of the group and influence organizational decisions. You may want to save the

    open-to-the-public meetings for public education and other non-sensitive matters and

    only discuss sensitive matters in meetings limited to the most trusted, long-time staff

    and constituents.

•   Importantly, the threat of infiltrators exists in the virtual world as well as the physical

    world: for example, a police officer may pose as an online "friend" in order to access your

    private social network profile.
•   Records stored by others. As the Supreme Court has stated, "The Fourth Amendment

    does not prohibit the obtaining of information revealed to a third party and conveyed by

    him to Government authorities, even if the information is revealed on the assumption

    that it will be used only for a limited purpose and the confidence placed in the third party

    will not be betrayed." This means that you will often have no Fourth Amendment
    protection in the records that others keep about you, because most information that a

    third party will have about you was either given freely to them by you, thus knowingly

    exposed, or was collected from other, public sources. It doesn’t necessarily matter if you

    thought you were handing over the information in confidence, or if you thought the

    information was only going to be used for a particular purpose.

•   Therefore it is important to pay close attention to the kinds of information about you and

    your organization’s activities that you reveal to third parties, and work to reduce the

    amount of private information you leave behind when you go about your daily business.
•   Opaque containers and packages. Even when you are in public, you have a

    reasonable expectation of privacy in the contents of any opaque (not see-through)

    clothes or containers. So, unless the police have a warrant or qualify for one of the

    warrantless search exceptions discussed below, they can’t go digging in your pockets or

    rummaging through your bags.

•   Laptops, pagers, cell phones and other electronic devices are also protected. Courts

    have generally treated electronic devices that hold data as if they were opaque

    containers.

•   However, always keep in mind that whatever you expose to the public isn’t protected.

    So, if you’re in a coffee shop using your laptop and an FBI agent sitting at the next table

    sees what you are writing in an email, or if you open your backpack and the FBI agent

    can see what’s inside, the Fourth Amendment won’t protect you.
•   Postal mail. The mail that you send through the U.S. Postal Service is protected by the

    Fourth Amendment, and police have to get a warrant to open it in most cases.

•   If you’re using the U.S. Postal Service, send your package using First Class mail or

    above. Postal inspectors don’t need a search warrant to open discount (media) rate mail

    because it isn’t supposed to be used for personal correspondence.

•   Keep in mind that although you have privacy in the contents of your mail and packages,

    you don’t have any privacy in the "to" and "from" addresses printed on them. That

    means the police can ask the post office to report the name and address of every person

    you send mail to or receive mail from — this is called a "mail cover" — without getting a

    warrant. Mail covers are a low-tech form of "traffic analysis," which we’ll discuss in the

    section dealing with electronic surveillance.

•   You don’t have any privacy in what you write on a postcard, either. By not putting your

    correspondence in an envelope, you’ve knowingly exposed it, and the government can

    read it without a warrant.
•   Police at the door: Police in your home or office when it’s open to the public?
    •   The police may be able to come into your home or office if you have opened those

        places to the public — but you can also ask them to leave, just as if they were any

        other members of the public. If they don’t have a warrant, or don’t qualify for any of

        the warrant exceptions, they have no more right to stay once you’ve asked them to

        leave than any other trespasser. However, undercover agents or officers need not

        announce their true identities, so asking all cops to leave the room before a meeting

        is not going to provide any protection.

Search Warrants
Search Warrants Are Generally Required For Most Searches and Seizures

  The Fourth Amendment requires that any search or seizure be reasonable. The general rule is

  that warrantless searches or seizures are automatically unreasonable, though there are many

  exceptions.



  To get a warrant, investigators must go to a neutral and detached magistrate and swear to

  facts demonstrating that they have probable cause to conduct the search or seizure. There is

  probable cause to search when a truthful affidavit establishes that evidence of a crime will

  probably be found in the particular place to be searched. Police suspicions or hunches aren't

  enough — probable cause must be based on actual facts that would lead a reasonable person

  to believe that the police will find evidence of a crime.



  In addition to satisfying the Fourth Amendment's probable cause requirement, search warrants

  must satisfy the particularity requirement. This means that in order to get a search warrant,

  the police have to give the judge details about where they are going to search and what kind

  of evidence they are searching for. If the judge issues the search warrant, it will only authorize

  the police to search those particular places for those particular things.


Police at the door: Search warrants

   What should you do if a police officer comes to your home or office with a search warrant?

   Be polite. Do not get in the officers' way, do not get into an argument with them or

   complain, even if you think your rights are being violated. Never insult a police officer. But

   you should say "I do not consent to this search." If they are properly authorized, they will

   search anyway. But if they are not, then you have reserved your right to challenge the

   search later.

   Ask to see the warrant. You have a right to examine the warrant. The warrant must describe

   in detail the places to be searched and the people or things to be seized, and may limit
   what time of day the police can search. A valid warrant must have a recent date (usually

   not more than a couple of weeks), the correct address, and a judge's or magistrate's

   signature. If the warrant appears incomplete, indicates a different address, or otherwise

   seems mistaken, politely point this out to the police.

   Clearly state that you do not consent to the search. The police don't need your

   consent if they have a warrant, but clearly saying "I do not consent to this search" will limit

   them to searching only where the warrant authorizes. If possible, have witnesses around when

   you say it.

   Do not resist, even if you think the search is illegal, or else you may be arrested. Keep

   your hands where the police can see them, and never touch a police officer. Do not try to

   leave if the police tell you to stay — a valid warrant gives them the right to detain any

   people that are on the premises while the search is conducted. You are allowed to observe

   and take notes of what the officers do, though they may tell you to sit in one place while

   they are conducting the search.

   Don't answer any questions. The Fifth Amendment guarantees your right not to answer

   questions from the police, even if they have a warrant. Remember that anything you say

   might be used against you later. If they ask you anything other than your name and

   address, you should tell them "I choose to remain silent, and will not answer any questions

   without a lawyer." If you say this, they are legally required to stop asking you questions

   until you have a lawyer with you.

   Take notes. Write down the police officers' names and badge numbers, as well as the

   names and contact information of any witnesses. Write down, as best you can remember,

   everything that the police say and everything you say to them. Ask if you can watch the

   search, and if they say yes, write down everything that you see them search and/or seize

   (you may also try to tape or take pictures, but realize that this may escalate the situation).

   If it appears they are going beyond what is authorized by the warrant, politely point this

   out.

   Ask for an inventory. At the conclusion of the search, the police should typically provide

   an inventory of what has been seized; if not, request a copy but do not sign any statement

   that the inventory is accurate or complete.

   Call a lawyer as soon as possible. If you don't have a lawyer, you can call EFF and we'll try

   to find you one.

Police at the door: Computer searches and seizures
If the police believe a computer is itself evidence of a crime — for example, if it is stolen or

was used to commit a crime — they will usually seize it and then search its contents later.

However, if the evidence is just stored on the computer — for example, you have computer

records that contain information about the person they are investigating — instead of

seizing the whole machine, the police may choose to:

  •   Search the computer and print out a hard copy of the particular files they are looking for
      (this is rarely done)

  •   Search the computer and make an electronic copy of the particular files

  •   Create a duplicate electronic copy of all of the computer's contents (this is called "imaging"
      or creating a "bitstream copy" of the computer hard drive) and then search for the
      particular files later
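
"Imaging" just means reading every byte of the drive and producing an exact duplicate, which examiners then verify with a cryptographic hash before searching the copy. As a rough illustration of that idea — using ordinary files and hypothetical file names rather than a raw disk device — in Python:

```python
import hashlib
import shutil

def image_and_verify(source, image):
    """Copy `source` byte-for-byte to `image`, then confirm the duplicate
    is exact by comparing SHA-256 hashes, as forensic imaging tools do.
    (Real imaging reads the raw disk device, so free space and deleted
    files are captured too; this sketch uses ordinary files.)"""
    shutil.copyfile(source, image)

    def sha256(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    return sha256(source) == sha256(image)
```

Because a bitstream copy is made at the device level, data you deleted but never overwrote comes along with it — which is why secure deletion, discussed later, matters.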

"Sneak and Peek" Search Warrants
"Sneak and Peek" Search Warrants Are Easier to Obtain Than They Used to Be

  •   Generally, police officers serving a warrant must "knock and announce" — that is, give

      you notice that they are the police and are serving a warrant (although they might not

      do this if they reasonably suspect that they will be put in danger, or that evidence will

      be destroyed, if they give such notice). If they have a warrant, they can enter and

      search even if you aren't home — but they still have to leave a copy of the warrant and

      an inventory of what they seized, so you'll know that your place was searched.

  •   However, thanks to the USA PATRIOT Act, it is much easier for law enforcement to get

      permission from the court to delay notice rather than immediately inform the person

      whose premises are searched, if agents claim that giving notice would disrupt the

      investigation. Since the goal is not to tip the suspect off, these orders usually don't

      authorize the government to actually seize any property — but that won't stop them

      from poking around your computers.
  •   The delay of notice in criminal cases can last months. The average delay is 30 to 90

      days. In the case of super-secret foreign intelligence surveillance to be discussed later,

      the delay lasts forever — no one is ever notified, unless and until evidence from the

      search is introduced in open court.

  •   The risk of being targeted with such a "sneak-and-peek" warrant is very low, although

      rising quickly. Law enforcement made 47 sneak-and-peek searches nationwide from

      September 2001 to April 2003 and an additional 108 through January 2005, averaging

      about fifty per year, mostly in drug cases. We don't know how many foreign

      intelligence searches there are per year — it's secret, of course — but we'd guess that

      it's much more common than secret searches by regular law enforcement.
Privacy tip: Sneak and peek searches, key-loggers and government spyware
•   Secret searches can be used to install eavesdropping and wiretapping devices.

    Secret searches may also be used to install a key-logging device on your computer.

    A key-logger records all of the keystrokes that you make on the computer's

    keyboard, for later retrieval by the police who installed it. So if you are concerned

    about government surveillance, you should check your office computers for new

    added hardware that you don't recognize — especially anything installed between

    the keyboard and the computer — and remove it. A hardware key-logger often

    looks like a little dongle in between the keyboard plug and computer itself.

    Keyghost is an example of a hardware key-logger.
•   However, the government also has the capability to remotely install software key-

    loggers on your computer — or search the contents of your hard drive, or install

    surveillance capability on your computer — using its own spyware. There were

    rumors of such capability a few years ago in news reports about a government

    software program code-named "Magic Lantern" that could be secretly installed and

    monitored over the Internet, without the police ever having to enter your house or

    office. More recently, news reports revealed that the government had in one case

    been able to hack into a computer remotely and install software code-named

    "CIPAV" (the "Computer and Internet Protocol Address Verifier"), which gave the

    government the IP addresses with which the infected computer communicated.
•   In response to a survey, all of the major anti-spyware companies claimed that their

    products would treat government spyware like any other spyware programs, so you

    should definitely use some anti-spyware product to monitor your computer for such

    programs. It's possible that an anti-spyware company may receive a court order requiring

    it not to alert you to the presence of government spyware (several of the

    companies that were surveyed declined to say whether they had received such

    orders), but you should still use anti-spyware software if only to protect yourself

    against garden-variety spyware deployed by identity thieves and commercial data

    harvesters.
Warrantless Searches
There Are Many Fourth Amendment Exceptions to the General Rule of Warrants

•   In some cases, a search can be reasonable — and thus allowed under the Fourth

    Amendment — even if the police don't have a warrant. There are several key

    exceptions to the warrant requirement that you should be aware of.
•   Consent. The police can conduct a warrantless search if you voluntarily consent to the

    search — that is, if you say it's OK. In fact, any person who the police reasonably think

    has a right to use or occupy the property, like a roommate or guest in your home, or a
    coworker at your office, can consent to the search. You can make clear to the people

    you share a home or office with that they do not have your permission to consent to a

    search and that if police ask, they should say no.
Privacy tip: Don't accidentally consent!

•   If the police show up at your door without a warrant, step outside, then close and

    lock the door behind you — if you don't, they might just walk in, and later argue

    that you implied an invitation by leaving the door open. If they ask to come in, tell

    them "I do not consent to a search." Tell roommates, guests, coworkers and renters

    that they cannot consent on your behalf.
•   Administrative searches. In some cases, the government can conduct administrative

    searches. These are searches done for purposes other than law enforcement; for

    example, for a fire inspection. Court authorization is required for involuntary

    administrative searches, although the standards are lower. The only time the

    government doesn't need a warrant for an administrative search is when they are

    searching businesses in highly regulated industries such as liquor, guns, strip mining,

    waste management, nuclear power, etc. This exception to the warrant requirement

    clearly does not apply to the average homeowner, activist organization or community

    group.
Privacy tip: Just because they're "inspectors" doesn't mean you have to let them in!

•   If someone shows up at your home or office claiming to be a fire inspector, building

    code inspector, or some other non-law enforcement government employee who

    wants to inspect the premises, you can tell them to come back with a warrant. You

    don't have to let them in without a warrant!
•   Exigent circumstances. Exigent circumstances are emergency situations where it

    would be unreasonable for the police to wait to get a warrant, like if a person is calling

    for help from inside your house, if the police are chasing a criminal suspect who runs

    into an office or home, or if evidence will be destroyed if the police do not act

    immediately.
Privacy tip: Don't get tricked into consenting!

•   Police could try to get your consent by pressuring you, or making you think that

    you have to let them in. For example, they may show up at your door claiming that

    your neighbor saw someone breaking into your home or office, saw a criminal

    suspect entering the premises, or heard calls for help, and that they need to take a

    look around. You should never physically interfere if they demand to come in (which

    they will do if there are indeed exigent circumstances), but no matter what they say

    or do, keep saying the magic words: "I do not consent to a search."
•   Plain view. The police can make a warrantless search or seizure if they are lawfully in

    a position to see and access the evidence, so long as that evidence is obviously

    incriminating. For example, if the police enter a house with a valid search warrant to

    search for and seize some stolen electronics and then see a bag of drugs in plain view

    on the coffee table, they can seize the drugs too, even though the warrant didn't

    specifically authorize that seizure. Similarly, the police could seize the drugs without a

    warrant, or look at any other documents or things left in plain view in the house, if

    there were exigent circumstances that led the police into the house — for example, if a

    suspect they were chasing ran into the house, or if they heard gunshots from inside.

    Even a law-abiding citizen who does not have any contraband or evidence that the

    police would want to seize may still have sensitive documents in plain view that one

    would not want the authorities to see.

•   The plain view exception alone does not allow the police to enter your home or office

    without a warrant. So, for example, even if the police see evidence through your

    window, they cannot enter and seize it. However, plain view can combine with other

    exceptions to allow searches that might otherwise require a warrant. For example, if

    the person with the bag of drugs in the previous example saw the police looking

    through his window, then grabbed the bag and ran towards the bathroom as if he was

    about to flush the evidence down the toilet, that would be an exigent circumstance and

    the police could enter without a warrant to stop him.
•   Automobiles. Since cars and other vehicles are mobile, and therefore might not be

    around later if the police need to go get a warrant, the police can search them without

    one. They still need probable cause, though, because you do have a privacy interest in

    your vehicle.

•   If the police have probable cause, they can search the entire vehicle (including the

    trunk) and all containers in the vehicle that might contain the object for which they are

    searching. For example, if the police have probable cause to believe that drugs are in

    the vehicle, they can search almost any container, but if they have probable cause to

    believe that a murder suspect is hiding inside the vehicle, they must limit their search

    to areas where a person can hide.

•   Also, it's important to know that the "plain view" exception is often applied to cars.

    That means that the police aren't conducting a search just by looking through your car

    windows, or even by shining a flashlight in your car. And if they see evidence inside

    your car, that can then give them probable cause to search the rest of the vehicle

    under the automobile exception.
Police at the (car) door: What if I get pulled over?
•   If you are pulled over by a police officer, you may choose to stop somewhere you

    feel safe, both from traffic and from the officer herself. In other words, you can pull

    into a lighted gas station, or in front of someone's home or somewhere there are

    other people present, rather than stopping on a dark road, so long as you indicate

    to the officer by your driving that you are in fact stopping. You are required to show

    the officer your license, insurance and registration. Keep your hands where the

    officer can see them at all times. For example, you can wait to get your

    documentation out when the officer is standing near your car so that she can watch

    what you are doing and have no cause to fear that you are going into the glove box

    for a weapon. Be polite and courteous.
•   Airport searches. As you certainly know if you've flown recently, the government is

    allowed to search you and all your luggage for bombs and weapons before you are

    allowed to board a plane, without a warrant. Always assume that the government will

    look in your bags when you fly, and pack accordingly.
•   Border searches. The government has the right to warrantlessly search travelers at

    the border, including international airports, as part of its traditional power to control

    the flow of items into and out of the country. The case law distinguishes between

    "routine" searches, which require no cause, and "non-routine" searches, which require

    reasonable suspicion, but no warrant. "Non-routine" searches include strip searches,

    cavity searches, involuntary X-rays and other particularly invasive investigative

    techniques. Several courts have found that searching the contents of your laptop or

    other electronic devices is "routine" and doesn't require a warrant or even reasonable

    suspicion.

•   One solution to this problem is to bring a blank "traveling" laptop and leave your

    personal information at home. You could then access the information that you left at

    home over the Internet by using a VPN or other secure method to connect to a server

    where you've stored the information.
•   However, bringing a clean laptop means more than simply dragging files into the trash.

    Deleting files will not remove them from your hard drive. See our software and

    technology article on secure deletion for details.
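
The core idea behind secure deletion is to overwrite a file's contents before removing it, so the bytes can't simply be read back off the disk. A minimal sketch of the concept in Python — an illustration only, not a substitute for a dedicated secure-deletion tool, since journaling filesystems, backups, and SSD wear-leveling can all preserve copies this approach misses:

```python
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite the file's bytes with random data several times, forcing
    each pass to disk, then unlink it. Journaling filesystems, backups,
    and SSD wear-leveling can still retain copies, which is why dedicated
    secure-deletion tools do much more than this."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite past OS caches
    os.remove(path)
```

The key contrast with ordinary deletion is the overwrite-and-sync step: merely unlinking a file leaves its data on disk until something else happens to reuse that space.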
•   Another solution is to use password-based disk encryption to prevent border agents

    from being able to read your files. The consequences of refusing to disclose a password

    under those circumstances are difficult to predict with certainty, but non-citizens would

    face a significant risk of being refused entry to the country. Citizens cannot be refused

    entry, but could be detained until the border agents decide what to do, which may

    include seizing your computer.
•   Stop and frisk searches. The police can stop you on the street and perform a limited

    "pat-down" search or "frisk" — this means they can feel around your outer clothing for

    concealed weapons.
•   The police don't need probable cause to stop and frisk you, but they do at least need to

    have a reasonable suspicion of criminal activity based on specific facts. This is a very

    low standard, though, and the courts usually give the police a lot of leeway. For

    example, if a police officer is suspicious that you're carrying a concealed weapon based

    on the shape of a lump under your jacket or the funny way that you're walking, that's

    usually enough.

•   If, while patting you down, a police officer feels something that he reasonably believes

    is a weapon or an illegal item, the officer can reach into your clothes and seize that

    item.
Search Incident to Lawful Arrest
•   Search Incident to Arrest (SITA) doctrine is an exception to the general requirement

    that police obtain a warrant before conducting a search. The purpose of this exception

    is to protect the officer by locating and seizing any weapons the person has and to

    prevent the destruction of any evidence on the person. According to the SITA doctrine,

    if an arrest is valid, officers may conduct a warrantless search of the arrestee and the

    area and objects in close proximity — i.e. the "grab area" — at about the same time as

    the arrest.

•   Officers may also perform inventory searches of the arrested person at the time of the

    arrest or upon arrival at the jail or other place of detention.

•   So, the police are allowed to search your clothing and your personal belongings after

    they've arrested you. They can also search any area nearby where you might conceal a

    weapon or hide evidence. If you are arrested inside a building, this usually means they

    can search the room they found you in but not the entire building. If you are arrested

    while driving, this means they can search inside the car, but not the trunk. But if they

    impound the car, then they can search the trunk as part of an inventory search. This is

    another example of the way that multiple exceptions to the warrant requirement can

    combine to allow the police a lot of leeway to search without going to a judge first.

•   When searches are delayed until some time after the arrest, courts generally have

    allowed warrantless searches of the person, including containers the arrestee carries,

    while rejecting searches of possessions that were within an arrestee's control. These no

    longer present any danger to the officer or risk of destruction because the arrestee is

    now in custody.
•   The question remains whether the SITA doctrine authorizes warrantless searches of the

    data on cell phones and computers carried by or located near the arrestee. There are

    very few cases addressing this question. In one case in Kansas, for example, the

    arresting officer downloaded the memory from the arrestee's cellphone for subsequent

    search. The court found that this seizure did not violate the Fourth Amendment

    because the officer only downloaded the dialed and incoming numbers, and because it

    was imperative to preserve the evidence given the volatile, easily destroyed, nature of

    cell phone memory.
•   In contrast, in another case in California, the court held that a cellphone search was

    not justified by the SITA doctrine because it was conducted for investigatory reasons

    rather than out of a concern for officer safety, or to prevent the concealment or

    destruction of evidence. The officers could seize the phone, and then go obtain a

    warrant to do any searching of it. The decision rejected the idea that the data searched

    was not private, in light of the nature and amount of information usually stored on cell

    phones and laptops.
Police at the door: Arrest warrants

•   If the police arrive at your home or office with an arrest warrant, go outside, lock

    the door, and give yourself up. Otherwise, they'll just force their way in and arrest

    you anyway, and then be able to search nearby. It is better to just go peacefully

    without giving them an excuse to search inside.
Police at the door: Searches of electronic devices incident to arrest

•   If you are arrested, the officers are going to seize all the property on your person

    before you are taken to jail. If you have a cell phone or a laptop, they will take that

    too. If you are sitting near a cell phone or laptop, they may take those as well. The

    SITA doctrine may allow police to search the data. It may also allow copying for

    later search, though this is well beyond what the SITA doctrine's original

    justification would allow.
•   You can and should password protect your devices to prevent this potentially

    unconstitutional privacy invasion. But for much stronger protection, consider

    protecting your data with file and disk encryption.

•   Prudent arresting officers will simply secure the devices while they get a warrant.

    There's nothing you can do to prevent that. Do not try to convince the officers to

    leave your phone or laptop behind by disavowing ownership. Lying to a police

    officer can be a crime. Also, prosecutors may use your statements against you later

    to argue that you do not have the right to challenge even an illegal search or
          seizure of the device, while still being able to introduce information stored on the

          device against you.

Subpoenas
Another Powerful Investigative Tool

  In addition to search warrants, the government has another very powerful legal tool for getting

  evidence — the subpoena. Subpoenas are legal documents that demand that someone produce

  specific documents or appear in court to testify. The subpoena can be directed at you to

  produce evidence you have about yourself or someone else, or at a third party to produce

  evidence they have collected about you.


    •   Subpoenas demand that you produce the requested evidence, or appear in court to testify, at
        some future time. Search warrants, on the other hand, are served and executed immediately
        by law enforcement with or without your cooperation.

    •   Subpoenas, unlike search warrants, can be challenged in court before compliance. If you are
        the recipient of the subpoena, you can challenge it on the grounds that it is too broad or that
        it would be unduly burdensome to comply with it. If a judge agrees, then the court may
        quash the subpoena so you don't have to produce the requested evidence. You may also be
        able to quash the subpoena if it is seeking legally privileged material, or information that is
        protected by the First Amendment, such as a political organization's membership list or
        information to identify an anonymous speaker. If the subpoena is directed to a third party
        that holds information about you, and you find out about it before compliance, then you can
        make a motion to quash the subpoena on the grounds of privilege or constitutional rights
        regardless of whether the third party decides it would otherwise comply. However, you have
        to do so before the compliance date. Subpoenas that are used to get records about you from
        third parties sometimes require that you be notified, but usually do not.

    •   Subpoenas are issued under a much lower standard than the probable cause standard used
        for search warrants. A subpoena can be used so long as there is any reasonable possibility
        that the materials or testimony sought will produce information relevant to the general
        subject of the investigation.

    •   Subpoenas can be issued in civil or criminal cases and on behalf of government prosecutors
        or private litigants; often, subpoenas are merely signed by a government employee, a court
        clerk, or even a private attorney. In contrast, only the government can get a search warrant.

Police at the door: Subpoenas

    What should you do if a government agent (or anyone else) shows up with a subpoena?

    NOTHING.

    Subpoenas are demands that you produce evidence at some time in the future. A subpoena

    does not give anyone the right to enter or search your home or office, nor does it require

    you to hand over anything immediately. Even a "subpoena forthwith", which asks for

    immediate compliance, cannot be enforced without first going to a judge.
    So, if someone shows up with a subpoena, don't answer any questions, don't invite them

    in, and don't consent to a search — just take the subpoena, say thank you, close the door

    and call a lawyer as soon as possible!

What Can I Do To Protect Myself?

  You can’t stop or prevent a seizure of your computers, and your best defense against a

  subpoena is a lawyer, but there are still steps you can take to prevent a search of your

  computers without your cooperation, and minimize what information the government can get

  its hands on.


Develop a Data Retention and Destruction Policy
If You Don't Have It, They Can't Get It

  The best defense against a search or a subpoena is to minimize the amount of information that

  it can reach. Every organization should have a clear policy on how long to keep particular types

  of information, for three key reasons:


    •   It’s a pain and an expense to keep everything.

    •   It’s a pain and an expense to have to produce everything in response to subpoenas.

    •   It’s a real pain if any of it is used against you in court — just ask Bill Gates. His internal
        emails about crushing Netscape were not very helpful at Microsoft’s antitrust trial.


  Think about it — how far back does your email archive go? Do you really need to keep every

  email? Imagine you got a subpoena tomorrow — what will you wish you’d destroyed?



  Establish a retention policy. Your organization should review all of the types of documents,

  computer files, communications records, and other information that it collects and then

  develop a policy defining whether and when different types of data should be destroyed. For

  example, you may choose to destroy case files six months after cases are closed, immediately

  destroy Internet logs showing who visited your website, or delete emails after one week.

  This is called a "document retention policy," and it’s your best defense against a subpoena —

  they can’t get it if you don’t have it. And the only way to make sure you don’t have it is to

  establish a policy that everyone follows. Set a clear written policy for the length of time

  documents are kept (both electronic and paper documents). Having a written policy and

  following it will help you if you are accused of destroying documents to hide evidence.
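
A written retention policy is much easier to follow consistently if routine destruction is automated rather than left to memory. As a hypothetical sketch — the directory layout and age threshold are made-up examples, and remember that ordinary deletion is not secure deletion:

```python
import os
import time

def purge_old_files(directory, max_age_days):
    """Delete every file under `directory` whose last-modification time is
    older than `max_age_days`, returning the paths removed so the run can
    be logged. Note: os.remove is ordinary deletion; purged files should
    also be securely erased per your policy."""
    cutoff = time.time() - max_age_days * 86400  # seconds per day
    removed = []
    for root, _dirs, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

Run on a schedule (and suspended the moment anything is subpoenaed), a script like this turns the written policy into a practice that is actually followed — which is what protects you if you're later accused of destroying documents selectively.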



  Do not destroy evidence. You should never destroy anything after it has been subpoenaed

  or if you have reason to believe you are under investigation and it is about to be subpoenaed
— destruction of evidence and obstruction of justice are serious crimes that carry steep fines

and possible jail time, even if you didn’t commit the original crime. Nor should you selectively

destroy documents — for example, destroying some intake files or emails but not others —

unless it’s part of your policy. Otherwise, it may look like you were trying to hide evidence, and

again might make you vulnerable to criminal charges. Just stick to your policy.



Destroying paper documents. Remember, your trash is fair game under the Fourth

Amendment, so just tossing your old membership rolls in the garbage is not the way to go.



If you are concerned about the privacy of the documents that you throw away (and you should

be!), you should destroy them before they go in the trash. At the very least you should run

documents through a "cross-cut" paper shredder that will cut them in two directions and turn

them into confetti, and then mix up the shreds from different documents to make them harder

to put back together (documents cut in one direction by "strip-cut" shredders are very easy to

put back together). If you have evidence giving you reason to believe that your trash is being

or is about to be searched, you should also completely burn all of the shreds. Even if you’re

not particularly worried about someone searching your trash, you should still destroy or

thoroughly erase any computer equipment or media that you throw out.



If you destroy any of your papers and disks before throwing them out, you should try to

destroy all of them, even the ones you don’t need to keep private. If you don’t destroy

everything, anyone with access to your trash can will be able to quickly isolate the shreds of

your private documents and focus on reconstructing them. Both government investigators and

identity thieves often have the manpower and time necessary to reconstruct your shredded

documents — even the burned ones, in some crime labs.



Your web browser's watching you, so you have to watch your browser. In a recent

trial, government forensics experts were able to retrieve web pages of Google search results

that the suspect downloaded years ago — his web browser had "cached" copies of the pages.

It was a murder trial, and the suspect had Googled for information about breaking necks and

the depth of the local lake, where he ended up dumping the body. The suspect was convicted.



Hopefully, you have much more innocent things you’d like to keep private, but the point is that

your browser is a security hole that needs to be plugged. You need to take regular steps to

clear out all the stuff it’s been storing, such as a history of the web sites you’ve visited and the

files you’ve downloaded, cached copies of web pages, and cookies from the web sites you visit
(which we will talk more about later). In particular, it’s a bad idea to have the browser save

your passwords for web sites, and it’s a bad idea to have it save the data you’ve entered into

web forms. If your computer is seized or stolen, that information will be compromised. So

consider turning these features off completely. Not having these features is less convenient —

but that’s the security trade-off. Are you worried enough about your computer’s security that

you’re willing to type a few extra times each day to enter a password or a web address?



Visit our Defensive Technology article on web browsers for help with browser hygiene and

other recommendations to improve security.



Your instant messenger software is probably watching you too. Many instant messaging

(IM) clients are set by default to log all of your IM conversations. You should check the

software's preferences so you know what it's doing, and figure out how these logs fit into your

retention policy. Will you clean them out every month? Every week? Or will you take the simple

route and just set the preferences so that your IM client doesn't log any messages at all? The

choice is up to you, but because people often treat IM like an in-person conversation and often

say things they normally wouldn't in an email, you should consider such logs very sensitive. If

you do insist on logging your IMs, all the more reason to make sure they are protected by

encryption. For more information, check out our Defensive Technology article about instant

messaging.



Minimize computer logging. If you run a network, an email server or a web server, you

should consider reducing or eliminating logging for those computer and network services, to

protect the privacy of your colleagues and your clients. For more information, refer to EFF’s

"Best Data Practices for Online Service Providers."

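
As one illustration of reduced logging, Python's standard logging module lets you strip identifying details before a record is ever written to disk. The IPv4-redaction filter here is only a sketch of the idea, not a complete anonymization scheme:

```python
import logging
import re

IP_PATTERN = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")  # matches IPv4 addresses

class RedactIPFilter(logging.Filter):
    """Replace IPv4 addresses in log messages so they are never stored."""
    def filter(self, record):
        record.msg = IP_PATTERN.sub("[redacted]", str(record.msg))
        return True  # keep the record, but with the identifying data removed

logger = logging.getLogger("webserver")
logger.addFilter(RedactIPFilter())
```

Records logged through this logger then reach your log files with the address already removed, so there is nothing identifying to subpoena later.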


When you delete computer files, really delete them. When you put a file in your

computer’s trash folder and empty the trash, you may think you’ve deleted that file — but you

really haven’t. Instead, the computer has just made the file invisible to the user, and marked

the part of the disk drive that it is stored on as "empty" — meaning, it can be overwritten with

new data. But it may be weeks, months, or even years before that data is overwritten, and the

government’s computer technicians can often retrieve data that has been overwritten by newer

files. Indeed, no data is ever truly deleted; it is only overwritten over time, again and again.



The best way to keep those "deleted" files hidden, then, is to make sure they get overwritten

immediately, and many times. Your operating system probably already includes software that
 will do this, and overwrite all of the "empty" space on your disk with gibberish, dozens or

 hundreds of times, and thereby protect the confidentiality of deleted data. Visit the secure

 deletion article to learn more about how to do this in various operating systems.
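
  To make the idea concrete, here is a simplified Python sketch of overwrite-before-delete. It is an illustration only: journaling filesystems and SSD wear-leveling can keep copies of your data that a file-level overwrite never touches, which is why the dedicated tools discussed in the secure deletion article are the better choice in practice:

```python
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents with random bytes several times, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # fill the file's blocks with gibberish
            f.flush()
            os.fsync(f.fileno())       # push the overwrite out to the physical disk
    os.remove(path)
```

  The number of passes is a judgment call; more passes take longer but leave less for a forensic lab to work with.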



 In addition to using a secure deletion tool, you should consider using encrypted storage. Visit

 the disk encryption article for more information.



 Destroying hardware and electronic media. When it comes to CD-ROMs, you should do

 the same thing you do with paper — shred 'em. There are inexpensive shredders that will chew

 up CD-ROMs. Never just toss a CD-ROM out in the garbage unless you’re absolutely sure

 there’s nothing sensitive on it.



  If you want to throw a piece of hardware away or sell it on eBay, you’ll want to make sure no

 one can retrieve your data from it. So, before selling or recycling a computer, be sure to

 overwrite its storage media with gibberish first. Darik's Boot and Nuke is an excellent free tool

 for this purpose.



 Make data hygiene a regular habit, like flossing. The easiest way to keep this all straight

 is to do it regularly. If you think you face a high risk of government seizure, or carry a laptop

 around with you and therefore face a high risk of theft or loss, perhaps you should do it at the

 end of each day. If not, you might want to do it once a week.



 For example, at the end of each week you could:


   •   Shred any paper documents or electronic media that are scheduled for destruction under your
       policy.

   •   Delete any emails or other documents that are scheduled for deletion under your policy.

   •   Clear your browser of all logs.

   •   Run your secure-deletion software to overwrite all of the newly deleted stuff.


 Have your organization put this weekly ritual or something like it in its written policy. You’ll be

 glad you did.


Master the Basics of Data Protection

 We're not going to lecture you on how to physically secure your office, because as we've said

 before, if the government has permission from a court to bust in, they are going to bust in
 regardless of what you do. We're more concerned here about what they can do to your

 computers once they are inside. Here are some steps to ensure that just because someone has

 physical access to your machine it doesn't mean they'll be able to get at all the data inside of

 it:



 Require logins! Operating systems can be set to automatically log into a user account when

 the machine boots. Disable this feature! Require that the user provide a username and

 password before the machine will allow access to a user account.



 Require screensaver logins too! Set the screensaver on your system to start automatically

 after a short time (such as 2 or 5 minutes) and to require that the user supply their password

 again before the screensaver will unlock. All operating systems support a feature like this, and

 it makes no sense not to use it.



 Access controls are only as strong as your authentication mechanism. In other words,

 if your password is "12345" or your dog's name, or if you keep your password in a drawer next

 to your computer, your files may be accessible to anyone who has access to your computer

 and has a couple minutes to guess some passwords or look through your desk. Follow the next

 section's advice to generate and manage strong passwords effectively.



 Choose your sysadmin wisely. In mainstream operating systems, the systems administrator

 must be "trusted" – that is, he or she is always able to circumvent access controls. Therefore,

 your organization's management must take care when selecting and training systems

 administrators, to ensure that he or she is worthy of trust. Trustworthy administrators will

 adhere to a code of professional ethics such as that published by the Systems Administrators

 Guild.



  Guest accounts. If you want to give computer access to people who don't have their own

  accounts, create a guest account for general use, and make sure that it cannot modify the operating system or

 cause other damage to the system. Ensure that the guest account does not have the privilege

 to read or modify sensitive files.


Learn How to Use Passwords Properly
Choosing a Password
Longer and more complex passwords are more secure. If the government seizes your computer

it can quickly guess simple passwords by automatically trying large lists of words from a

dictionary. Automated dictionary attacks use lists of regular words as well as proper names and

common variations of these (e.g., adding a number to a dictionary word, or replacing letters

with similar-looking numbers, such as replacing o with 0).



So, if it's human-readable, it's computer-breakable. Don't use names, song titles, random

words or any dictionary words at all, whether alone, in combination with numbers, or with

letters replaced by numbers – the government can and will break it. For stronger password

security, use a lengthy passphrase that includes upper- and lower-case letters, one or more

numerical digits and special characters (e.g., #, $, or &), and change it frequently.
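
A computer is also better than a human at picking such a password. This Python sketch uses the standard secrets module to draw from all of the character classes mentioned above; the 20-character length and the particular set of special characters are arbitrary choices:

```python
import secrets
import string

SPECIALS = "#$&!@%"  # an arbitrary set of special characters; vary to taste

def make_password(length=20):
    """Generate a random password containing lower-case, upper-case, digit, and special characters."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # retry until every required character class is present
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SPECIALS for c in pw)):
            return pw
```

Because the result is random rather than human-readable, a dictionary attack gains nothing; an attacker is reduced to brute force over the full character set.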



New computer hardware usually comes with default passwords, such as "password" or "default"

or the name of the technology vendor. Always change the default passwords immediately!


Password Management
When it comes to passwords, the only truly secure password is the one that's only in your head.

Written-down passwords can be seized or subpoenaed. But there's a tough trade-off — the

better your password, the harder it'll be to remember. And if you forget the password and don't

have it recorded somewhere, you could lose access to a critical asset at just the wrong time —

perhaps even forever.



Although we recommend memorizing your passwords, we recognize you probably won't. So,

here are a few other options to consider:



Use a password safe. There are a number of software tools available that will keep all of your

passwords for you on your computer, in an encrypted virtual safe, which you access with one

master password. Just remember to never write down the password to your password safe —

that piece of paper can become a single point of failure for all of your password-secured assets.

This brings another drawback, of course — if you forget that master password, you've lost all of

your other passwords forever.



Carry your passwords on paper, in your pocket. This is a somewhat controversial solution

promoted by security expert Bruce Schneier — even though he wrote the digital password

management program Password Safe. Schneier advocates that people keep their passwords in
 their wallets. What you sacrifice in security, the argument goes, is made up for by the

 convenience — with easy access to your passwords, you're more likely to use very strong ones

 that you couldn't remember otherwise, plus you can access your passwords even when you're

 away from your computer. An added benefit is that when your passwords are in your wallet,

 you'll find out very quickly if they've been lost or stolen.



 However, to mitigate the risk of a loss, add a certain number of dummy characters before and

 after the real passwords to make it harder to identify them, and use simple code-words to

 indicate what asset they protect, rather than saying "Chase Manhattan Bank" or "Work

 Computer."



 Don't use the same password to protect multiple assets. Sure, it's OK to use the same

 password to log into the New York Times web site that you use for the Washington Post,

 because those aren't valuable assets. But when it comes to the important stuff, use unique

 passwords. That way, even if one asset is compromised, the others are still safe.



 Never keep a password in the same physical location as the asset it protects, unless

 it's encrypted. This is the biggest password boo-boo, and it's an object lesson in security

 planning: if a security measure is too inconvenient for day-to-day use, people won't use it

 correctly. Your password is worse than useless if it's on a sticky note next to your computer,

 and probably useless against secret searches if it is anywhere in the same office. Again, this is

 why Bruce Schneier recommends keeping your passwords in your pocket — you'll have stronger

 passwords, and you won't leave them lying around.



 Change passwords regularly. A password may have already been compromised and you just

 don't know it. You should change passwords every week, every month, or every year — it all

 depends on the threat, the risk, and the value of the asset, traded against usability and

 convenience.


Encrypt Your Data

 Requiring a strong password to log onto accounts on your computer is a good security step.

 But when the government is your attacker, it's not nearly enough. If the government seizes

  your computer, all it has to do is take the hard drive out and stick it into another computer

  to get around your password protection. Similarly, if you

 were subject to a sneak-and-peek search, the government could sneak in with their own
 hardware, take your hard drive out and copy it, and then replace it without you ever knowing.

 Your best and only protection against this is to encrypt the data that's on your computer so the

 government can't read it. If you're not familiar with encryption, how it works, and what it does,

 check out our technology article about encryption basics.



 You should also find out more about how to choose and use file and disk encryption software.



 So I used file encryption and the government seized my computer — now what? Well,

 first off, don't give them your password during the search — you have the right to remain

 silent, so use it. Since they can't search your encrypted files without your help, you've got

  leverage that most search targets never have. Once you've done all you can, it's

 time to call a lawyer. (Anyway, you should have called as soon as the computer was seized,

 right?)



 A lawyer may be able to get your property back if the warrant was improper, negotiate a deal

 with the government's attorneys to limit the search or get important files back, or convince the

 court to strictly limit the search so that they won't search files that are legally privileged (like

 confidential legal or medical records), protected by the First Amendment (like private

 membership lists), or irrelevant to the case.



 Alternatively, a prosecutor may ask a judge to order you to turn over your password. The law

 is unclear on whether such an order would be valid, but that is a matter to face with the

 assistance of counsel. No one other than a judge can force you to reveal your password.


Protect Yourself Against Malware

 Although it's been confirmed that the government has used remotely-installed spyware in at

 least one criminal investigation, and probably many more, the risk of Internet-based attack

 from the government is still hard to judge. However, there is definitely a high risk from just

 about every other bad guy on the net. Network-based threats to computers include denial of

 service (e.g., flooding the network or causing the computers to crash) and software and/or

 data theft or destruction ("hacking"). In addition, malicious users could hijack your computers

 so they can be used to attack other computers and networks. The risk that this threat will

 materialize for any computer connected to the Internet is a near-certainty. For example, a

  recent report concludes that 80 percent of Windows computers in homes have been

 compromised by one or more viruses, worms, or other malicious software.
Since this guide is about the government and not hackers, and since there are plenty of other

resources about fighting viruses and the like, we’ll only share some basic thoughts on how to

secure yourself against Internet-based attacks. Several of these steps will help protect you

from any hacker, be it a government agent or an identity thief:



For maximum security, create an "air gap" between sensitive data and the Internet.

To protect confidentiality and integrity, do not connect computers that store sensitive

information to the Internet or other public networks. Any computer connected to the Internet

is exposed and possibly vulnerable to a huge number of attacks.



Avoid Microsoft products where possible. Computers using the Microsoft Windows

platform are especially vulnerable as of this writing (although no operating system is immune

to all potential attacks). Consider using a non-Microsoft operating system if possible. However,

if you have to use Microsoft Windows and you are connecting to the Internet, your best bet is

to minimize the number of Microsoft Internet applications you use – for example, use Firefox

as a browser or Thunderbird as a mail client. Microsoft’s Internet Explorer and its email

programs Outlook and Outlook Express are very difficult for even professionals to secure.

Furthermore, adversaries tend to attack more popular platforms and applications.



Keep your software updated. Use the latest stable version of your operating system. As of

this writing, Windows 95, 98, and ME are utterly obsolete. You should be using at least

Windows Server 2003 for servers and Windows XP for clients, with all patches and service

packs applied. For Macintosh computers, use OS X 10.4 or greater, with all patches applied.

For Linux and Unix, get whatever version is the most recent stable release, and follow all

updates. It is especially important not to let server software versions lag behind, since servers

are always on and always connected.



Maintain your firewalls. Firewalls are software or hardware components that protect your

computer or network from the Internet, blocking traffic based on network-related parameters

like IP addresses and port numbers. Firewalls can protect against those who want to access

your computer without permission. Configuring network firewalls is pretty tough for the

layperson and beyond the scope of this guide, but you should learn how to use the personal

firewall software that’s included in most recent operating systems.
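
One simple way to check your firewall's work is to test, from another machine, whether a port that should be blocked actually accepts connections. This Python sketch performs that test; the host and port in the example below are placeholders:

```python
import socket

def port_is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, False if it is refused or filtered."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, timeouts, and unreachable hosts
        return False
```

For example, if port_is_reachable("192.0.2.10", 3389) returns True for a service your firewall is supposed to block, the rule isn't doing its job (192.0.2.10 is a documentation-reserved placeholder address).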



For more detailed information about malware, check out the Malware article in the Defensive

Technology section.
Summing Up
If You Don't Keep It, They Can't Get It; If You Do Keep It, Encrypt It!

  In a nutshell: if you don't want the government to see it, encrypt it or don't keep it.



  Subpoenas are less threatening than search warrants, but pose a much greater risk. Only a

  good lawyer can help you avoid having to respond to a subpoena, and oftentimes even a good

  lawyer will fail, and you’ll have to turn the information over or face contempt charges. The best

  defense against a subpoena is to not have what they are looking for.



  Not having what they’re looking for is also your best defense against a search warrant, which

  is a much higher threat but lower risk. After that your best bet is encryption. You may not be

  able to stop the government from seizing your computers, but by using encryption you might

  be able to stop them from searching the data on those computers.


Data on the Wire
Electronic Surveillance and Communications Privacy

  In this section, you'll learn about what the government can do — technically and legally —

  when it wants to conduct real-time surveillance of your communications, whether by planting a

  "bug" to eavesdrop on your face-to-face conversations, "wiretapping" the content of your

  phone calls and Internet communications, or using "pen registers" and "trap and trace devices"

  to track who you communicate with and when. We'll also discuss what steps you can take to

  defend against this kind of surveillance, with a focus on how to use encryption to protect the

  privacy of your communications.


What Can the Government Do?

  When the government wants to record or monitor your private communications as they

  happen, it has three basic options, all of which we'll cover in-depth: it can install a hidden

  microphone or "bug" to eavesdrop on your conversation; it can install a "wiretap" to capture

  the content of your phone or Internet communications as they happen; or it can install a "pen

  register" and a "trap and trace device" to capture dialing and routing information indicating

  who you communicate with and when. In this section, we'll lay out the legal rules for when the

  government can conduct these types of surveillance, and look at some statistics to help you

  gauge the risk of having your communications targeted.


Wiretapping
Wiretapping By The Government is Strictly Regulated
When it comes to secretly eavesdropping on your conversations — whether you're talking in

private or public, on the phone or face to face, by email or by instant messenger — no one's

got better funding, equipment or experience than the government. They are capable of

"bugging" you by using tiny hidden microphones that they've installed in your home, office, or

anywhere else that you have private conversations. They can also bug you from long distances

or through windows using high-powered microphones, or even laser microphones that can hear

what you say by sensing the vibrations of your voice on the window's glass. They can put a

"wire" or a small hidden microphone on an informant or undercover police officer to record

their conversations with other people. Or they can conduct a "wiretap," where they tap into

your phone or computer communications.



Use of these investigative techniques is regulated by very strong laws that protect the privacy

of your communications against any eavesdropper, including law enforcement, and we'll

describe those below. (Another set of laws regulating surveillance for foreign intelligence and

national security purposes will be discussed later.)



However, it's important to note at the outset that the government has been known to break

these laws and spy on communications without going to a judge first, usually in the name of

national security. Indeed, as was first revealed in December 2005, since 9/11 the National

Security Agency (NSA) has been conducting a massive and illegal program to wiretap the

phone calls and emails of millions of ordinary Americans without warrants, hoping to discover

terrorists by sifting through the mounds of data using computers (for more details, see EFF's

NSA Spying page and the Beyond FISA section of this guide).



One might hope that the information collected as part of the NSA's dragnet surveillance will

only be used against real terrorists, but there's no guarantee, particularly when there's no

court oversight. And we don't have any hard data about how the NSA actually uses that

information, with whom it is shared, or how long it is stored. So, although communications that

have been illegally wiretapped by the NSA are unlikely to be used against you in a criminal trial

— the Fourth Amendment's exclusionary rule would likely disallow it — there's no knowing

whether it might be used against you in the future in some other way.



Therefore, regardless of the strengths of the laws described below, you should consider

wiretapping to be a high risk, unless and until the NSA program is stopped by Congressional

action or a successful lawsuit. EFF is currently suing the government and the individual officials

responsible for the NSA program (see http://www.eff.org/cases/jewel), as well as AT&T, one of
  the companies assisting in the illegal surveillance (see http://www.eff.org/nsa/hepting), to try

  and stop the surveillance.


Wiretapping Law Protections
Wiretapping Law Protects "Oral," "Wire," and "Electronic" Communications Against "Interception"

  Before 1967, the Fourth Amendment didn't require police to get a warrant to tap conversations

  occurring over phone company lines. But that year, in two key decisions (including the Katz

  case), the Supreme Court made clear that eavesdropping — bugging private conversations or

  wiretapping phone lines — counted as a search that required a warrant. Congress and the

  states took the hint and passed updated laws reflecting the court's decision and providing

  procedures for getting a warrant for eavesdropping.



  The federal wiretap statute, originally passed in 1968 and sometimes called "Title III" or the

  Wiretap Act, requires the police to get a wiretap order — often called a "super-warrant"

  because it is even harder to get than a regular search warrant — before they monitor or record

  your communications. One reason the Fourth Amendment and the statute give us more

  protection against government eavesdropping than against physical searches is that

  eavesdropping violates not only the targets' privacy, but the privacy of every other person that

  they communicate with.



  The Supreme Court has also said that since eavesdropping violates so many individuals'

  privacy, the police should only be allowed to bug or wiretap when investigating very serious

  crimes. So, the Wiretap Act contains enumerated offenses — that is, a list of crimes — that are

  the only ones that can be investigated with a wiretap order. Unfortunately, Congress has

  added so many crimes to that list in the past 30 years that now practically any federal felony

  can justify a wiretap order.



  The Wiretap Act requires the police to get a wiretap order whenever they want to "intercept"

  an "oral communication," an "electronic communication," or a "wire communication."

  Interception of those communications is commonly called electronic surveillance.



  An oral communication is your typical face-to-face, in-person talking. A communication

  qualifies as an oral communication that is protected by the statute (and the Fourth

  Amendment) if it is uttered when you have a reasonable expectation that your conversation

  won't be recorded. So, if the police want to install a microphone or a "bug" in your house or

  office (or stick one outside of a closed phone booth, like in the Katz case), they have to get a
  wiretap order. The government may also attempt to use your own microphones against you —

  for example, by obtaining your phone company's cooperation to turn on your cell phone's

  microphone and eavesdrop on nearby conversations.



  A wire communication is any voice communication that is transmitted, whether over the

  phone company's wires, a cellular network, or the Internet. You don't need to have a

  reasonable expectation of privacy for the statute to protect you, although radio broadcasts and

  other communications that can be received by the public are not protected. If the government

  wants to tap any of your phone calls — landline, cellphone, or Internet-based — it has to get a

  wiretap order.



  An electronic communication is any transmitted communication that isn't a voice

  communication. So, that includes all of your non-voice Internet and cellular phone activities

  like email, instant messaging, texting and websurfing. It also covers faxes and messages sent

  with digital pagers. Like with wire communications, you don't need to have a reasonable

  expectation of privacy in your electronic communications for them to be protected by the

  statute.


Privacy tip: Voice communications have more legal protection.

   Under the Wiretap Act, although a wiretap order is needed to intercept your email and

   other electronic communications, only your oral and wire communications — that is, voice

   communications — are covered by the statute's exclusionary rule. So, for example, if your

   phone calls are illegally intercepted, that evidence can't be introduced against you in a

   criminal trial, but the statute won't prevent the introduction of illegally intercepted emails

   and text messages.

  An interception is any acquisition of the contents of any oral, wire, or electronic

  communication using any mechanical or electronic device — for example, using a microphone

  or a tape recorder to intercept your oral communications, or using computer software or

  hardware to monitor your Internet and phone communications. Wiretap law does not protect

  you from government eavesdroppers that are just using their ears.



  Although the government may get a super-warrant to "intercept" your communications, it is

  not allowed to prevent your communications from occurring. For example, the government

  can't prevent your calls from being connected, block your emails and their attachments, or

  otherwise interfere with your communications based on an intercept order. In fact, if their goal
  is to gather intelligence on you by tapping your communications, it will not be in their best

  interest to interfere in your communications and possibly tip you off to their surveillance, which

  might prompt you to use another communications method that may be more difficult to tap.



  According to the Wiretap Act, it's a crime for anyone that is not a party to a communication —

  anyone that isn't one of the people talking, listening, writing, reading, or otherwise

  participating in the communication — to intercept the communication, unless at least one of

  the parties to the communication has previously consented to (agreed to) the interception.

  Many state wiretap laws require all parties to consent, but those laws control state and local

  police, not the feds. If the police want to intercept an oral, wire, or electronic communication

  to which they are not a party and for which they have no consent, they have to get a wiretap

  order. Of course, an undercover police officer or informant that is talking to you while wearing

  a wire is a party to the conversation and has consented to the interception.


Privacy tip: Wiretapping and public websites, newsletters, and message boards

    The police do not need to get a wiretap order to read your organization's website, sign up

    for your email newsletter, visit your public MySpace or Facebook profile or pose as a

    member in an Internet chat room. Since those are all open to the public, you're allowing

    the police to become a party to those communications.

Getting a Court Order Authorizing a Wiretap
It Isn't Easy

  The requirements for getting a wiretap order from a judge are very strict. The Wiretap Act

  (and similar state statutes) requires law enforcement to submit a lengthy application that

  contains a full and complete statement of facts about (1) the crime that has been, is being, or

  is about to be committed and (2) the place, like your house or office, and/or the

  communications facilities, like those of your phone company or ISP, from which the

  communications are to be intercepted. The government must also submit a particular

  description of (3) the communications sought to be intercepted and (4) the identity of the

  persons committing the crime (if known) and of the persons whose communications are to be

  intercepted. Finally, the government must offer (5) a full and complete statement of whether

  other investigative procedures have been tried and have failed or why they appear unlikely to

  succeed or are too dangerous, (6) a full and complete statement of the period of time for

  which the interception is to be maintained, and (7) a full and complete statement about all

  previous wiretap applications concerning any of the same persons, facilities, or places.
  The court can then issue the wiretap order only if it finds probable cause to believe that (1) a

  person is committing an enumerated offense (one of the crimes listed in the Wiretap Act); (2)

  communications concerning that crime will be obtained through the interception; and (3) the

  facilities from which the communications are to be intercepted are being used in connection

  with the commission of the offense. The court must also find that normal investigative

  techniques have failed, appear unlikely to succeed, or would be too dangerous.



  The wiretap order, if issued, will almost always require the cooperation of some other person

  for it to be carried out. For example, the police can make your landlord let them into your

  apartment to install a bug, or, more often, force your ISP or phone company to help them

  intercept your phone or Internet communications. The wiretap order will include a "gag order"

  prohibiting anyone who cooperated with the police from telling you — or anyone else — about

  the wiretap.



  It's important to note that when it comes to tapping your Internet or phone communications,

  third parties like your ISP or your phone company can act as an important check on police

  abuse. In general, the police need their cooperation, and most will not cooperate unless there

  is a valid wiretap order requiring them to (otherwise, they could be violating the law

  themselves). However, as AT&T and other companies' cooperation in the NSA's illegal

  wiretapping shows, these companies can never be a perfect check against government abuse,

  particularly when the government cites national security as its goal.



  Although law enforcement can intercept your communications without your knowledge, they

  generally have to tell you about it when they are done. A wiretap order initially lasts for 30

  days, and investigators can obtain additional 30-day renewals from the court if they need more

  time. But after the interception is completed and the wiretap order expires, an inventory must

  be issued to the person(s) named in the wiretap order and, as the judge may require, to other

  persons whose communications were intercepted.


Wiretap Statistics
How Big is The Risk?

  A wiretap is an incredibly powerful surveillance tool. A single wiretap can invade the privacy of

  dozens or even hundreds of people. Fortunately, wiretaps in criminal investigations are pretty

  rare. Here are some numbers to keep in mind when calculating the risk of government

  wiretaps to you or your organization, according to the 2007 Wiretap Report to Congress from

  the Administrative Office of U.S. Courts:

• In 2007, according to the report, 2,208 applications for wiretap orders were submitted to
    state and federal courts. 457 were in federal cases, the rest state. The courts granted every
    application, and of the 2,208 authorized wiretaps, 2,119 of them were installed.




•   Although it may appear that the number of federal wiretaps has been steadily dropping since
    2004, in contrast to the sharp rise in state wiretaps, the truth is much more troubling.
    According to the latest report, the U.S. Department of Justice has in recent years declined to
    provide information about all of its wiretap activity for the report, in order to protect
    "sensitive and/or sealed" information. The Department of Justice admits that if it did provide
    all of that information, however, the 2007 report "would not reflect any decrease in the use of
    court-approved electronic surveillance" by U.S. agencies. So, the feds aren't wiretapping any
    less — they're just being even more secretive about it — and presumably the number of
    federal wiretaps is growing at the same rate as the state number.

•   On average, according to the report, each installed wiretap intercepted over 3,000 separate
    communications.

•   On average, according to the report, each installed wiretap intercepted the communications
    of 94 different people. In other words, the 2,119 installed wiretaps reported in 2007
    intercepted the communications of nearly two hundred thousand people!




•   "Roving" wiretap orders are especially powerful. Instead of being limited to particular phone
    lines or Internet accounts, these orders allow the police to tap any phone or computer that
    the suspect uses, even if it isn't specified in the order itself. In 2007, 21 roving wiretap orders
    were reported by state authorities, mostly in narcotics cases. The federal authorities didn't
    report any roving wiretaps, but that doesn't mean they didn't use them; the Department of
    Justice likely thinks all of its roving wiretaps were in cases too "sensitive" to warrant
    reporting.

•   Over 80% of all reported wiretap orders in 2007 were issued in drug investigations.


    [Chart: Wiretap orders by crime]

•   Nearly 95% of the 2,119 wiretap installations reported in 2007 were for the interception of
    wire communications — that is, taps on phones — rather than for interception of electronic
    communications. It's doubtful that the federal authorities have been fully forthcoming on this
    point — they reported only one (!) wiretap of electronic communications and only three
    wiretaps that collected a combination of wire and electronic communications — but it's clear
    that telephone wiretaps are still much more prevalent than Internet wiretaps. One major
    reason for this is that the government has another way of getting at your Internet
    communications, under less strict legal requirements: by obtaining stored copies of your
    communications from your ISP or your email provider, as described in the next section,
    Information Stored By Third Parties. Oral intercepts — through the bugging of your home or
    car or office, for example — are also quite rare. You're more likely to have your oral
    conversations intercepted by an undercover agent or informant wearing a hidden microphone,
    since such conduct does not require a wiretap order.

    [Chart: Wiretaps by type of communication intercepted]




  In conclusion, although the annual Wiretap Report is no longer as useful a gauge as it once

  was due to the Department of Justice's recent withholding of information, it's still clear that

  unless you're suspected of dealing drugs (or targeted for foreign intelligence surveillance), the

  chances of you or your organization's phone lines being tapped are fairly low, and the chances

  of your Internet communications being tapped are even lower. But remember, you don't have

  to be a suspect to end up having your communications intercepted. So, for example, if your

  organization serves a client population arguably connected to criminal activity, or if you

  personally associate with "shady characters," your risk goes up.


"Pen Registers" and "Trap and Trace Devices"
Less Powerful Than a Wiretap But With Much Weaker Privacy Safeguards
There's a particular type of communications surveillance that we haven't discussed yet and

that's not included in the above numbers: surveillance using pen registers and/or trap & trace

devices ("pen/trap taps"). Pen registers record the phone numbers that you call, while trap &

trace devices record the numbers that call you. The Supreme Court decided in 1979, in the

case of Smith v. Maryland, that because you knowingly expose phone numbers to the phone

company when you dial them (you are voluntarily handing over the number so the phone

company will connect you, and you know that the numbers you call may be monitored for

billing purposes), the Fourth Amendment doesn't protect the privacy of those numbers against

pen/trap surveillance by the government. The contents of your telephone conversation are

protected, but not the dialing information.



Luckily, Congress decided to give us a little more privacy than the Supreme Court did — but

not much more — by passing the Pen Register Statute (enacted as part of the Electronic

Communications Privacy Act of 1986) to regulate the use of "pen/trap" devices. Under that

statute, the police do have to go to court for permission to conduct a pen/trap tap and get

your dialing information, but the standard for getting a pen/trap order is much lower than the

probable cause standard used for normal wiretaps. The police don't even have to state any

facts as part of the application — they just need to certify to the court that they think the

dialing information would be relevant to

their investigation. If they do so, the judge must issue the pen/trap order (which lasts for sixty

days rather than a wiretap order's thirty days). Also, unlike normal wiretaps, the police aren't

required to report back to the court about what they intercepted, and aren't required to notify

the targets of the surveillance when it has ended.



With a pen/trap tap on your phone, the police can intercept:


  •   The phone numbers you call

  •   The phone numbers that call you

  •   The time each call is made

  •   Whether the call was connected, or went to voicemail

  •   The length of each call

  •   Most worrisome, we've heard some reports of the government using pen/trap taps to
      intercept content that should require a wiretap order: specifically, the content of SMS text
      messages, as well as "post-cut-through dialed digits" (digits you dial after your call is
      connected, like your banking PIN, your prescription refill numbers, or your vote for
      American Idol).

That information is revealing enough on its own. But pen/traps aren't just for phones anymore

— thanks to the USA PATRIOT Act, the government can now use pen/trap orders to intercept

information about your Internet communications as well. By serving a pen/trap order on your

ISP or email provider, the police can get:


  •   All email header information other than the subject line, including the email addresses of the
      people to whom you send email, the email addresses of people that send to you, the time
      each email is sent or received, and the size of each email that is sent or received.

  •   Your IP (Internet Protocol) address and the IP addresses of the other computers on the
      Internet that you exchange information with, along with timestamp and size information.

  •   The communications ports and protocols used, which can be used to determine what types of
      communications you are sending using what types of applications.

  •   Although we don't think the statute allows it, the police might also use pen/trap taps to get
      the URLs (web addresses) of every website you visit, allowing them to track what you are
      reading when you surf the web. The Department of Justice's apparent policy on this score is
      to collect information about what site you are visiting — e.g., "www.eff.org" — using pen/trap
      taps, but to obtain a wiretap order before collecting information about what particular page or
      file you are visiting — e.g., "www.eff.org/nsa". However, there's no way to confirm that
      federal authorities actually follow this policy in all cases, and serious doubt as to whether
      state authorities do.
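The site-versus-page distinction in the reported Department of Justice policy can be made concrete with a few lines of code. This is an illustrative sketch only — the URL is the same hypothetical example used above, and nothing here reflects how government collection systems actually work:

```python
from urllib.parse import urlparse

# Hypothetical URL, used purely for illustration.
url = "https://www.eff.org/nsa"
parsed = urlparse(url)

# The host names the site you visited — the part the DOJ policy described
# above would reportedly collect under a mere pen/trap order...
site = parsed.hostname   # 'www.eff.org'

# ...while the path names the particular page within the site — the far more
# revealing part, which should require a full wiretap order to collect.
page = parsed.path       # '/nsa'

print(site, page)
```

The split shows why the distinction matters: the host alone reveals only which organization's server you contacted, while the path can reveal exactly what you were reading.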


(If you are confused by terms like "IP addresses" and "communications ports and protocols",

you may want to take a quick look at our very basic explanation of how the Internet works.)



Pen/trap taps enable what the security experts call traffic analysis. That's when an attacker

tries to discover information about an asset by analyzing how it moves. For example, if your

organization is working with another organization and you need to keep the relationship

confidential, traffic analysis of your Internet communications could reveal the connection and

show who you emailed, who you instant messaged with, what web sites you visited, and what

online forums you posted to. It could also show when those communications occurred and how

big they were.
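To see how much a pen/trap tap can reveal without capturing any content, consider this toy sketch of traffic analysis. All of the flow records and endpoint names below are made up for illustration:

```python
from collections import Counter

# Hypothetical pen/trap-style records: (timestamp, remote endpoint, bytes).
# Note that no message content appears anywhere in this data.
flows = [
    ("2008-03-01 09:02", "mail.partner-org.example", 4200),
    ("2008-03-01 09:15", "www.news.example", 900),
    ("2008-03-02 09:05", "mail.partner-org.example", 5100),
    ("2008-03-03 09:01", "mail.partner-org.example", 4800),
]

# Who is contacted most often, and how much data moves to them?
contacts = Counter(endpoint for _, endpoint, _ in flows)
volume = sum(size for _, endpoint, size in flows
             if endpoint == "mail.partner-org.example")

top_endpoint, hits = contacts.most_common(1)[0]
print(top_endpoint, hits, volume)  # mail.partner-org.example 3 14100
```

A daily, same-hour, multi-kilobyte exchange with one organization's mail server strongly suggests an ongoing working relationship — exactly the kind of confidential connection traffic analysis can expose.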



For the government, the usual goal of a pen/trap tap is to identify who you are communicating

with and when. In particular, individuals can often be identified based on the IP address

assigned to their computer. IP addresses are generally allotted in batches, semi-permanently,

to institutions such as universities, Internet service providers (ISPs), and businesses.

  Depending on how the institution distributes its IP address allotment, it may be more or less

difficult to link specific computers, and users, to certain IP addresses. It is often surprisingly

easy. ISPs often keep detailed logs about IP address allotment, and as we'll discuss later,

those logs are easy for the government to get using a subpoena. Similarly, if the government
 is collecting email addresses with a pen/trap, it's easy for them to go to the email provider and

 subpoena the identity of the person who registered that address.



 Another purpose of pen/trap taps is to access information about your cell phone's location in

 real-time. When your handset is powered on, it connects to nearby cell towers to signal its

 proximity, so that the towers can rapidly route a call when it comes through. Law enforcement

 can use pen/trap devices to monitor these connections, or "pings", to pinpoint the physical

 location of the handset, sometimes within a few meters. And although Congress has made

 clear that pen/trap orders alone cannot be used to authorize this sort of location surveillance,

 it hasn't yet clarified what type of court order would suffice. So, although many courts have

 chosen to require warrants for location tracking, others have not, and the government has

 routinely been able to get court authorization for such tracking without probable cause.



 As already noted, court authorization for a pen/trap tap is much easier to get than a wiretap

 order. We don't know how many pen/trap orders get issued every year — unfortunately, there

 is no annual report on pen/trap surveillance like there is for wiretapping — but we have heard

 unofficial numbers that reach into the many tens of thousands. Therefore, the risk of being

 subjected to pen/trap surveillance is higher than the risk of being wiretapped.


What Can I Do To Protect Myself?

 In the last section, you learned that wiretapping and pen-trap tapping are powerful and routine

 government surveillance techniques, and got an idea of how often those techniques are legally

 used. In this section, you'll learn how to defend yourself against such real-time

 communications surveillance. As we'll describe in detail below, unless you take specific

 technical measures to protect your communications against wiretapping or traffic analysis —

 such as using encryption to scramble your messages — your best defense is to use the

 communications methods that possess the strongest and clearest legal protections: postal mail

 and landline telephones.


Electronic Eavesdropping is Legally Hard for the Government, But
Technically Easy

 As you learned in the last section, wiretapping is legally difficult for the government: it must

 obtain a hard-to-get intercept order or "super-warrant" from a court, subject to strict oversight

  and a variety of strong privacy protections. However, wiretapping is typically very technically

 easy for the government. For example, practically anyone within range of your laptop's

 wireless signal, including the government, can intercept your wireless Internet
  communications. Similarly, practically anyone within range of your cell phone's radio signal,

  including the government, can — with a few hundred bucks to buy the right equipment —

  eavesdrop on your cell phone conversations.



  As for communications that travel over telecommunications companies' cables and wires

  rather than (or in addition to) traveling over the air, the government has very sophisticated

  wiretapping capabilities. For example, using a nationwide surveillance system called "DCSNet"

  ("DCS" stands for "Digital Collection System") that is tied into key telecommunications

  switches across the country, FBI agents can from the comfort of their field offices "go up" on a

  particular phone line and start intercepting or pen-trap tapping wireline phone calls, cellular

  phone calls, SMS text messages and push-to-talk communications, or start tracking a cell

  phone's location, at a moment's notice. The government is believed to have similar capabilities

  when it comes to Internet communications. The extensive and powerful capabilities of the

  DCSNet, first uncovered in government documents that EFF obtained in a Freedom of

  Information Act lawsuit (details at http://www.eff.org/issues/foia/061708CKK), are well-

  summarized in the Wired.com article "Point, Click...Eavesdrop: How the FBI Wiretap Net

  Operates".



  Using "bugs" to eavesdrop on your oral conversations has also gotten much easier for the

  government with changes in technology. Most notably, the government now has the technical

  capability, with the cooperation of your cell phone provider, to convert the microphone on

  some cell phones or the cell phone in your car's emergency services system into a bug. The

  government likely also has the ability, with your phone company's help, to open the line on

  your landline phone and use its microphone as a bug, although we've yet to see any specific

  cases where such landline phone-based bugging has been used. Finally, the government may

  even have the capability, using remotely-installed government malware, to turn on the

  microphone or camera on your computer.


Choosing a Communication Method
Old Ways are Often the Best Ways

  Considering the government's broad capability to wiretap communications, there isn't much

  difference in the technical risk that wiretapping poses to your phone calls versus your emails

  versus your SMS text messages. However, as described in the last section, there are

  differences in the legal protections for these modes of communication, and as will be described

  later in this section, there may be technical steps that you can take — such as encrypting your
 communications — that may be easier or harder depending on which communications method

 you choose.



 So, when thinking about securing your communications against eavesdropping and

 wiretapping, your first choice — whether to meet in person, call on the telephone, write an

 email, or tap out an SMS text or IM message — is also your most important choice. As you'll

 see below, the least technically sophisticated modes of communication like face-to-face

 conversations and landline telephone conversations are often the most secure against

 unwanted eavesdropping, unless you and those you communicate with have mastered how to

 encrypt your Internet communications.


Face-to-Face Conversations Are the Safest Bet

 As shown in the last section, government eavesdropping of your "oral communications" or

 face-to-face conversations using "bugs" or hidden microphones is very rare: only 20 court

 orders authorizing oral intercepts were reported in the 2007 wiretap report, compared to 1,998

 orders authorizing wiretapping of "wire communications" or voice communications. In other

 words, you are 100 times more likely to have your phone conversations tapped than to have

 your face-to-face conversations "bugged".



 Not only are your oral conversations at less risk than your phone conversations, but they also

 receive the same strong legal protections as your phone conversations. Like your phone calls

 and unlike your non-voice Internet communications, oral communications that are intercepted

 in violation of the Wiretap Act are subject to that statute's exclusionary rule, and cannot be

 used against you as evidence in a criminal trial.



 Therefore, face-to-face conversations in private are the most secure method of

 communicating. Deciding whether to talk face-to-face rather than send an email or make a

 telephone call becomes a traditional security trade-off: is the inconvenience of having to meet

 face-to-face worth the security gain? Depending on whom you want to talk to and where they

 are, that inconvenience could be trivial or it could mean a cross-country trip. If the person you

 want to communicate with is in the same office or just next door, you may want to choose a

 private conversation even for communications that aren't particularly sensitive. When it comes

 to your very most sensitive data, though, that cross-country flight might be worth the trade-

 off.

  Just because the risk of oral interception is very low doesn't mean you shouldn't take technical

 precautions to reduce that risk, particularly when it comes to very sensitive conversations.

 Therefore, depending on how convenient it is and how sensitive the conversation is — again,

 it's a trade-off — you may want to have your conversation in a room that does not contain a

 landline telephone or a computer with a built-in or attached microphone or camera, and either

 not carry your cell phone or remove its battery (the microphone on some phones can be

 activated even when the phone is powered down, unless you remove the battery). Even if your

 conversation isn't especially sensitive, it doesn't hurt to detach external microphones and

 cameras from your laptop or cover the lens of attached cameras with a small piece of tape

 when they aren't in use. It's easy to do, and ensures that remote activation of those mics and

 cameras is one less thing to worry about.


Using the Telephone is Still the Second Safest Bet

 If having an oral conversation is simply too great an inconvenience, the second most secure

 option — unless you've mastered how to encrypt your internet communications — is to use the

 phone. Even though your phone is statistically more likely to be wiretapped than your Internet

 communications, the phone is still less risky than unencrypted Internet communications.



 This is true for several reasons. First and most important, your phone calls don't generate

 copies of your communications — once your call is over, the communication disappears

 forever. Internet communications, on the other hand and as discussed more below, generate

 copies that make it easier and more likely that someone can find out what you said. The risk of

 subpoenas to get these copies is much higher than the risk of a phone wiretap. Also, many

 more potential adversaries have or can gain access to your Internet traffic than to your phone

 lines.



 Also, remember that "wire communications" — that is, voice communications — get more legal

 protection. If your voice communications are wiretapped in violation of the Wiretap Act, they

 won't be allowed as evidence; illegally wiretapped Internet communications may still end up in

 court. That means that investigators have less reason to avoid stretching the law when it

 comes to your electronic communications.



 Speaking generally, just as phone conversations are a safer bet than unencrypted Internet

 communications, telephone conversations between landline telephones are a safer bet than

 telephone conversations that involve a cellular telephone.

  Most obviously, conversations that involve cellular telephones are technically much easier to

  tap than your landline phone conversations — anyone who is in range of a cell phone's radio

  signal can listen in using a few hundred dollars worth of specialized cell phone interception

  equipment (for more discussion of the security threats posed to mobile devices like cell

  phones, see the article on mobile devices). If you are concerned that government agents may

  ignore the law and choose to intercept your phone conversations without a wiretap order,

  intercepting your cell phone's radio signals would be an effective way for them to secretly do

  so, particularly considering that they do not need to get the assistance of the cell phone

  provider and that their radio-based interception wouldn't leave any physical trace.



  Cell phone conversations may also be more vulnerable legally — some courts have held that

  communications using cordless telephones are not protected by the Fourth Amendment,

  finding that there is no reasonable expectation of privacy in the radio signal sent between the

  cordless handset and the base station. The government may similarly consider the radio signal

  sent between your cell phone and the cell phone company's cell tower to be unprotected by the

  Fourth Amendment.


Privacy tip: Avoiding phone tap paranoia

   Contrary to popular belief, modern phone wiretaps used by the government don't make any

   noise — no clicks, no hisses, no static, nothing. Don't worry that the government is

   monitoring you if you happen to hear some unexplained noise on the phone line. You

   wouldn't believe how often we're told, "I think I'm being wiretapped — I keep hearing

   clicks!"

What About Phone Calls Using the Internet?

  Your "wire communications" or voice communications are subject to stronger legal protections

  than your other communications, regardless of what communications medium you use. So, for

  example, whether government agents intercept your landline telephone call, your cellular

  telephone call, or a telephone call made over the Internet, the Wiretap Act's exclusionary rule

  will prevent them from using that information as evidence against you in a criminal trial if they

  didn't get a wiretap order first. In contrast, the statute wouldn't prevent the government from

  using illegally intercepted "electronic communications" like text messages or emails as

  evidence.



  Therefore, you may want to consider using Voice-over-IP (VoIP) services, which allow you to

  send live voice communications — basically, phone calls — over the Internet. VoIP may be
 more private than regular calls for one big reason: it's easier to encrypt your conversation, as

 encrypting regular phone calls is very difficult and expensive. Unfortunately, there isn't any

 obviously effective and trustworthy option for encrypted VoIP that we can recommend at the

  moment. See our article on VoIP for further details.


Avoid SMS Text Messages If You Can

 Text messaging over your cell phone using SMS can be an incredibly quick and convenient way

 of communicating short messages, but from a privacy perspective, it poses some serious

 problems.



 First, just like your cell phone conversations, SMS text messages sent to and from your cell

 phone can easily be intercepted over radio with minimal equipment and without any

 cooperation from the cell phone provider.



 Second, just like with your cell phone conversations, it's unclear whether the Fourth

 Amendment protects the radio signals that carry your SMS messages against interception. This

 uncertainty increases the possibility that the government may intercept such communications

 without a probable cause warrant.



 Third, and unlike your cell phone calls, SMS messages are "electronic communications" rather

 than "wire communications," and therefore aren't protected by the Wiretap Act's exclusionary

 rule. That means the statute would allow the government to use your messages against you in

 a criminal case, even if they were intercepted without a wiretap order in violation of the

 statute.



 Finally, although the Wiretap Act clearly does require the government to obtain a wiretap order

 before intercepting SMS messages, just as with any other "electronic communication," we have

 heard anecdotal reports of the government intercepting SMS messages without wiretap orders,

 instead using the much-easier-to-obtain pen/trap orders. These reports are bolstered by

 known cases where the government has obtained the content of stored SMS messages under

 the lesser standards reserved for non-content communications records.



 Putting all these factors together, we currently consider SMS messages to be highly vulnerable

 to government wiretapping, and recommend reserving that mode of communication for only

 the most trivial of communications, if you use it at all. The only exception is if you use
 encryption to protect your SMS messages. For now, SMS encryption software for cell phones is

 still quite rare, though you can find information about such software for Java-enabled phones

 here.


Learn to Encrypt Your Internet Communications

 Always remember that anyone with access to a wire or a computer carrying your

 communications, or within range of your wireless signal, can intercept your Internet

 communications with cheap and readily available equipment and software. Lawyers call this

 wiretapping, while Internet techies call it "packet sniffing" or "traffic sniffing". The only way to

 protect your Internet communications against wiretapping by the government or anyone else is

 by using encryption. Of course, it is true that most encryption systems can be broken with

 enough effort. However, breaking modern encryption systems usually requires that an

 adversary find a mistake in the way that the encryption was engineered or used. This often

 requires large amounts of effort and expense, and means that encryption is usually a critically

 significant defensive measure even when it isn't totally impregnable.
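  Back-of-the-envelope arithmetic shows why "enough effort" is rarely practical against modern ciphers. The guess rate below — one trillion keys per second — is an assumed, deliberately generous figure, not a claim about any real attacker:

```python
# Rough estimate of exhaustively searching a 128-bit symmetric key space.
key_space = 2 ** 128                 # about 3.4e38 possible keys
guesses_per_second = 10 ** 12        # assumed attacker speed (generous)
seconds_per_year = 60 * 60 * 24 * 365

years_for_full_search = key_space / (guesses_per_second * seconds_per_year)
print(f"{years_for_full_search:.2e} years")  # on the order of 1e19 years
```

  Since brute force takes vastly longer than the age of the universe at that rate, real-world attacks target engineering and usage mistakes instead — which is why well-implemented encryption remains a critically significant defense.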



 Encryption, unfortunately, isn't always easy to use, so as in other cases, your decision of

 whether to use it will pose a trade-off: is the inconvenience of using the encryption worth the

 security benefit?



 The occasional inconvenience posed by some encryption systems is counter-balanced by the

 fact that encryption will protect you against much more than overzealous law enforcement

 agents. Your Internet communications are vulnerable to a wide range of governmental and

 private adversaries in addition to law enforcement, whether it's the National Security Agency

 or a hacker trying to intercept your credit card number, and encryption will help you defend

 against those adversaries as well.



 Also, as described in later sections, encrypting your communications not only protects against

 wiretapping but can also protect your communications while they are stored with your

 communications provider. So, for example, even if the government is able to seize your emails

 from your provider, it won't be able to read them.



 Considering all the benefits of encryption, we think that it's usually worth the trade-off,

  although as always, your mileage may vary depending on your tolerance for inconvenience and

 on how serious you judge the threat of wiretapping to be. In some cases, using encryption
 may not be inconvenient at all. For example, the OTR encryption system for IM is extremely

 easy to set up and use; there's little reason not to give it a try. Check out the following articles

 to learn more about how you can use encryption to protect your internet communications

 against wiretapping, as well as against traffic analysis using pen-trap taps.



 Wi-Fi. Using encryption is especially critical when transmitting your Internet communications

  over the air using Wi-Fi, since pretty much anyone else in the area who has a wireless-enabled

 laptop can easily intercept your radio signals. This article will explain how to encrypt the radio

 signals sent between your laptop and a wireless access point.



 Virtual Private Networks (VPNs). Virtual Private Networks or "VPNs" are a potent

 encryption tool allowing you to "tunnel" communications securely over the Internet.



 Web browsers. Some of your web communications can be encrypted to protect against traffic

 sniffing. Take a look at this article to learn more about HTTPS, the most common web

 encryption standard, as well as other browser security and privacy tips.



 Email and IM. There are a number of powerful tools available for encrypting your emails and

 your IM messages; take a look at these articles to learn more.



 Tor. Tor is free, powerful, encryption-based anonymizing software that offers one of the few

 methods of defending yourself against traffic analysis using pen-trap taps, and also provides

 some protection against wiretapping. Visit this article for all the details.


Defend Yourself Against Cell Phone Tracking

 As described earlier, the government can use information transmitted by your cellular

 telephone to track its location in real-time, whether based on what cell phone towers your cell

 phone is communicating with, or by using the GPS chip included in most cell phones.



 Many courts have required the government to obtain a warrant before conducting this type of

 surveillance, often thanks to briefing by EFF. (For more information on our work in this area,

 visit EFF's cell tracking page.) However, many other courts have been happy to routinely

 authorize cell phone tracking without probable cause.
  Even more worrisome, the government has the capability to track cell phones without the cell

  phone provider's assistance using a mobile tracking technology code-named "triggerfish". This

  technology raises the possibility that the government might bypass the courts altogether. Even

  if the government does seek a court order before using "triggerfish," though, it will only need

  to get an easy-to-get pen-trap order rather than a wiretap order based on probable cause.



  Put simply, cell phone location tracking is an incredibly powerful surveillance technology that is

  currently subject to weak technical and legal protections.



  Unfortunately, if you want to use your cell phone at all, avoiding the threat of this kind of real-

  time tracking is nearly impossible. That's because the government can track your cell phone

  whenever it's on, even if you aren't making a call. The government can even track some cell

  phones when they are powered down, unless you have also removed the battery. So, once

  again, there is a security trade-off: the only way to eliminate the risk of location tracking is to

  leave the cell phone at home, or remove the battery.



  For more information about the privacy risks posed by cell phones, take a look at our article on

  mobile devices. You may also want to take a look at the advice offered by MobileActive.org in

  its Primer on Mobile Surveillance.




Mobile Surveillance Primer
Posted by MelissaLoudon on Jun 10, 2009
Author:
Melissa Loudon
Mobile Surveillance Basics
Mobiles can be useful tools for collecting, planning, coordinating and recording activities of NGO staff and
activists. But did you know that whenever your phone is on, your location is known to the network
operator? Or that each phone and SIM card transmits a unique identifying code, which, unless you are
very careful about how you acquire the phone and SIM, can be traced uniquely to you?

With cameras, GPS, and mobile Internet come ever more dangerous surveillance possibilities, allowing an
observer, once they have succeeded in gaining control of the phone, to turn it into a sophisticated
recording device. However, even a simple phone can be tracked whenever it is on the network, and calls
and text messages are far from private. Where surveillance is undertaken in collusion with the network
operator, both the content of the communication and the identities of the parties involved can be
discovered, sometimes even retrospectively. It is also possible to surreptitiously install software on
phones over the network, potentially gaining access to any records stored on the phone.

This is understandably disquieting to activists involved in sensitive work.

Obviously, the most secure way to use a phone is not to use one at all. Even so, most organisations,
even if they understand the risks involved, find that phones are too useful to discard completely. The best
approach then becomes one of harm reduction: identifying and understanding the risks involved, and
taking appropriate steps to limit exposure. In this article, we try to identify these risks, and to offer some
suggestions for securing your mobile communications.

Information transmitted by phones to the network
For every phone currently on the network (receiving a signal, regardless of whether the phone has been
used to call or send messages) the network operator has the following information:

   •     The IMEI number – a number that uniquely identifies the phone hardware
   •     The IMSI number – a number that uniquely identifies the SIM card
   •     The TMSI number, a temporary number that is re-assigned regularly according to location or
         coverage changes but can be tracked by commercially available eavesdropping systems
   •     The network cell in which the phone is currently located. Cells can cover any area from a few
         meters to several kilometers, with much smaller cells in urban areas and even small cells in
         buildings that use a repeater aerial to improve signal indoors.
   •     The location of the subscriber within that cell, determined by triangulating the signal from nearby
         masts. Again, location accuracy depends on the size of the cell - the more masts in the area, the
         more accurate the positioning.
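As a rough sketch of how triangulation works (our own illustration; real systems estimate distances from signal timing and strength rather than measuring them exactly), a position can be recovered from three tower locations and distance estimates. Subtracting the circle equations pairwise turns the problem into a small linear system:

```python
import math

def trilaterate(towers, distances):
    """Estimate a handset's (x, y) position from three tower positions
    and estimated distances. Subtracting the circle equations pairwise
    gives a 2x2 linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2 + d1**2 - d2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2 + d1**2 - d3**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("towers are collinear; position is ambiguous")
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

towers = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]   # hypothetical mast positions
phone = (1.0, 2.0)
distances = [math.dist(phone, t) for t in towers]
print(trilaterate(towers, distances))  # approximately (1.0, 2.0)
```

With noisy real-world distance estimates the three circles do not intersect in a single point, which is why accuracy improves as more masts (more equations) are available.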

In addition, phones in the United States and Canada are designed to broadcast their position via the
so-called E911 service for emergency use. The service uses signal triangulation as well as the phone's
built-in GPS (if available), and is enabled by default on all new phones. In fact, the E911 service may
track the phone even when it appears to be switched off. Although this is unlikely, there is nothing to stop
the phone software from simulating being powered down while in fact remaining on the network. To be sure
your phone is actually offline, you need to remove the battery or use a signal-blocking bag - more on this
later.
Of course, location determination services like E911 do place a restriction on who can access location
information for a particular phone. For E911, emergency services are automatically granted access, but
law enforcement agencies need to obtain a court order to track a phone. In theory, third parties can only
access location information with the permission of the owner of the phone, but mechanisms for granting
such permission are often simplistic - replying to a tracking opt-in SMS from a target's unattended phone
can be enough to allow you to track them continuously. Essentially, you need to decide whether you trust
your network operator to keep your location information secret. If not, leave the phone at home, take the
battery out, or read on for tips on how you can use a phone anonymously to avoid its location being linked
to your identity.

Information stored on the phone
All mobile phones have a small amount of storage space on the SIM card. This is used to store:

   •   Your phone book - contact names and telephone numbers
   •   Your call history - who you called, who called you, and what time the call was placed
   •   SMS you have sent or received
   •   Data from any applications you use, such as a calendar or to-do list
   •   Photos you have taken using the phone camera, if available. Most phones store the time the photo
       was taken, and may also include location information.

For phones that allow web browsing, you should also consider how much of your browsing history is
stored on the phone. Also, does your browser store passwords for sites you have accessed from the
phone? Emails are a further potential danger should an attacker obtain access to the SIM card or phone
memory.

Like the hard drive in a computer, the SIM memory of your mobile phone keeps all data ever saved on it
until it is full, at which point old data gets written over. This means that even deleted SMS, call records
and contacts can potentially be recovered from the SIM. (There is a free application to do this using a
smartcard reader.) The same applies to phones that have additional memory, either built into the phone or
on a memory card. As a rule, the more storage a phone has, the longer deleted items will remain retrievable.
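This "deleted but not erased" behaviour can be modelled in a few lines of Python. The sketch below is a toy of our own, not real SIM firmware: deleting a record only removes its index entry, and the underlying data survives until the slot is reused.

```python
class SimStore:
    """Toy model of SIM/phone memory: a fixed set of storage slots plus
    an index. 'Deleting' removes only the index entry; the raw data
    remains until the slot is needed for something new."""

    def __init__(self, slots=4):
        self.slots = [None] * slots   # raw storage cells
        self.index = {}               # record name -> slot number

    def save(self, name, data):
        # Prefer never-written slots; only when full, reuse a slot whose
        # index entry has been deleted (i.e. overwrite "deleted" data).
        for i, cell in enumerate(self.slots):
            if cell is None:
                break
        else:
            live = set(self.index.values())
            reusable = [i for i in range(len(self.slots)) if i not in live]
            if not reusable:
                raise MemoryError("store full")
            i = reusable[0]
        self.slots[i] = data
        self.index[name] = i

    def delete(self, name):
        del self.index[name]          # the data itself is NOT erased

    def forensic_dump(self):
        return [cell for cell in self.slots if cell is not None]

store = SimStore(slots=2)
store.save("sms1", "meet at the station at 9")
store.delete("sms1")
print(store.forensic_dump())  # the "deleted" SMS is still recoverable
```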

Can SMS be intercepted?
Standard SMS are sent in plain text, include location, phone and SIM card identifiers, and are visible to
the network operator. Interception by third parties without the assistance of the operator is less likely, but
over the last few years there have been various loopholes related to phone cloning that have allowed
more than one phone to register to the same number and receive SMS sent to that number. So, even
though some networks assure that SMS are not stored beyond the time needed to deliver them (and
many make no such guarantees), it is worth thinking about who might be able to access messages during
that time.

According to a recent paper discussing the role of SMS as a carrier for political jokes in China, the
Chinese government has an established SMS monitoring programme:
"As deviant ideas spread rapidly through SMS, the government started to establish mechanisms to
monitor and censor textual messages, with 2,800 SMS surveillance centers around the country. In June
2004, a Chinese firm, Venus Info Tech Ltd., announced that it had received the Ministry of Public
Security's first permit to sell a real-time content monitoring and filtering system for SMS. The system uses
the Chinese Academy of Science's tests of information content as the basis of its filtering algorithm, which
covers a wide range of "politically sensitive" combination of characters."

In addition to searching for keywords in message content, monitoring systems may also flag messages
based on suspicion attached to the sender or receiver. Plain text SMS should not be considered secure,
particularly when it is possible that the sender or receiver of the message may have been identified for
surveillance.

Can mobile calls be eavesdropped?
The idea of someone 'listening in' to sensitive phone calls is a familiar legacy of the era of analogue
phone systems. Eavesdropping in digital systems is technically complicated, although entirely possible
using commercially-available equipment. However, a much simpler way to listen in to a phone call is to
capture the conversation on the phone, and transmit a recording to the eavesdropper. This can be done
with small software applications surreptitiously installed, or with a bugging device attached to the phone.

It has also emerged that the network operator is able to remotely activate a phone as a recording device,
independent of calls or SMS or even of whether the phone is switched on. This report from
wombles.org.uk confirms that remote surveillance functionality is available on US networks.

Essentially, network operators have always been able to send undetectable control messages to phones
on the network. This capability exists to allow operators to update software stored on the SIM card, but
could also be used to transform the phone into a tracking device, or to access messages or contact
information stored on the SIM card. Once surveillance is being conducted with the co-operation of the
network operator, there is very little that can be done to prevent this.

Mobile Viruses, spyware and keyloggers
Modern high-end phones are essentially scaled-down PCs, with the processing power, memory and
connectivity to run all kinds of applications - including malicious ones. Thankfully, mobile viruses are still
rare, and affect mostly high-end phones. However, security experts predict that malicious mobile
applications will become increasingly common. This includes not only viruses, but also mobile spyware
and keyloggers, which could secretly spy on your mobile activities to glean passwords and other sensitive
data.

To protect yourself, consider buying a very simple phone, perhaps even one that does not allow third-
party applications to be installed. If you do need to use a smartphone, install only trusted applications -
many viruses masquerade as useful little applications, but wreak havoc when installed (or worse,
compromise the data stored on your phone). Bluetooth in particular has proven a popular way for mobile
viruses to spread, and should be turned off when not in use. You could also consider installing antivirus
software on your phone. Most of the major antivirus vendors, including Norton, Kaspersky and F-Secure,
have recently brought out mobile security products.

Aggregate data can be dangerous too
So far, surveillance has been discussed in terms of individuals. But with sophisticated network analysis
software, it is also possible to detect patterns from large quantities of cellular network data - call records,
messaging records, message content - potentially leading to the identification of dissident groups. An
ICT4Peace post quotes Daniel Soar in the London Review of Books describing how this might work:

"[..] companies like ThorpeGlen (and VASTech and Kommlabs and Aqsacom) sell systems that carry out
‘passive probing’, analysing vast quantities of communications data to detect subjects of potential interest
to security services, thereby doing their expensive legwork for them. ThorpeGlen’s VP of sales and
marketing showed off one of these tools in a ‘Webinar’ broadcast to the ISS community on 13 May. He
used as an example the data from ‘a mobile network we have access to’ – since he chose not to obscure
the numbers we know it’s Indonesia-based – and explained that calls from the entire network of 50 million
subscribers had been processed, over a period of two weeks, to produce a database of eight billion or so
‘events’. Everyone on a network, he said, is part of a group; most groups talk to other groups, creating a
spider’s web of interactions."
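At its simplest, the group detection described above is connected-component analysis of a call graph. The Python sketch below is our own illustration and is far cruder than commercial "passive probing" systems, but it shows the principle: anyone linked by a chain of calls ends up in the same group.

```python
from collections import defaultdict

def call_groups(call_records):
    """Partition subscribers into groups: two people belong to the same
    group if they are linked by any chain of calls (i.e. the connected
    components of the call graph)."""
    adj = defaultdict(set)
    for caller, callee in call_records:
        adj[caller].add(callee)
        adj[callee].add(caller)
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:                      # depth-first walk of one group
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adj[node] - component)
        seen |= component
        groups.append(component)
    return groups

records = [("A", "B"), ("B", "C"), ("D", "E")]
print(call_groups(records))  # two groups: A-B-C and D-E
```

Run over billions of call records, the same idea (plus timing and location data) is what lets an analyst surface clusters of "subjects of potential interest" without ever reading a message.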

Tools and precautions
First, you need to decide what risks are unacceptable. Is your main concern being identified as the
sender or receiver of calls and messages, or is there sensitive information stored on your phone that you
need to protect? You may also require encryption for your SMS messages, or for emails sent from the
phone.

We've published some general suggestions on phone security for nonprofits before. To recap:

  •    Use a pre-paid SIM card
  •    Buy a SIM card just for the specific project and dispose of it afterwards.
  •    Make it routine to delete the information on your phone. Check the settings on the phone to see if
       it can be set not to store call logs and outgoing SMS.
  •    If your conversation is sensitive, don't discuss it on the phone, and consider taking the battery out
       of any phones in your vicinity.
  •    Consider turning the phone off at certain times in your journey. Have the phone moved to places
       where it can be established you are not, so that activity on the phone is not linked to you.
  •    If you suspect that messages are monitored, use agreed innocuous words in your messages.
Avoiding being identified by your phone
If you are particularly worried about being identified as the owner of a specific phone or SIM card
(remember, both have a unique identifier that is transmitted when the phone is on the network) you need
to be careful not to be identified when buying or using the phone. FreeB.E.A.G.L.E.S has some
suggestions for safe purchasing of phone and credit, and we've added some to their list.
  •    Make your purchase in a shop away from where you live, where the seller cannot identify you;
       consider the trail you leave, and don't use a credit card or a traceable email address.
  •    Avoid places that are likely to have CCTV - town centres, malls and larger chain stores are obvious
       examples.
  •    Do not give your real details if asked. Many shops do ask for your details, but not proof of ID.
       Check whether you have a legal obligation to provide any details at all.
  •    Get the simplest phone you need, avoiding extra features unless necessary. For calls and SMS
       only, get a bottom of the range phone. If you want to use additional encryption software though,
       consider a Java-enabled phone, which is able to run encryption software provided by third parties.
  •    Do not buy a phone in a deal that locks you into a contract with a particular operator. Always ask
       for the phone on pay-as-you-go, even if this is much more expensive.
  •    Do not register the phone – in many countries there is no legal obligation to do so (though some
       countries require SIM registration or track identification when you buy a SIM card)
  •    Buy top-up vouchers to load credit onto the phone. When buying the vouchers, follow the same
       rules as for buying the phone: avoid CCTV, pay cash, don't give out your details. Do not buy a
       top-up card, if available - this allows all the top-ups to be traced to a single user.

In addition, it's a good idea to change the SIM card and phone regularly. Also, recognise that the location
of each call or message you send or receive is known, as is the phone and SIM card involved. If your
location gives away your identity (for example, if you call or message from your home or place of work, or
are captured on CCTV making or receiving a call, or sending or receiving a message) then you should
dispose of the phone and SIM card you have used and start again with a new phone and SIM.

Preventing location tracking
If your phone is receiving a signal, your location is known to the network operator. This information can be
used to track you in real time. If an observer is tracking one or several people over a period of time, it
could also be analysed to identify meeting places or regular routes. If your location is sensitive, you
should avoid using a phone at all - Wombles.org.uk suggests taking the battery out during sensitive
meetings, as well as while travelling there and back. You could also try making a signal-blocking bag for
your phone out of three layers of foil, which should prevent the phone from registering on the network.

The E911 service in the US is a particularly powerful location tracking system. While all phones sold in
the US must be E911 enabled, some models allow the service to be turned off (incomplete lists of such
models are available online). You may also be able to find information on the Internet for specific
models - try searching for 'E911' and the phone's make and model.

Finally, remember that your phone may still be registering on the network when it is switched off - 'off' is in
fact only a software mode. Location tracking, as well as the E911 service, can potentially be activated on
a phone that is switched off. To be sure, take the battery out or use a signal blocking bag. You should
also make sure your phone cannot be linked to your identity by following the advice above, taking
particular care to change your SIM card and handset often.

Preventing bugging
Bugging of a phone, especially when undertaken with the cooperation of the network operator,
may be very difficult to detect, particularly if software is installed on the phone using the command mode.
The same general advice applies - change both your handset and SIM card often, and consider using a
new, 'clean' phone and SIM for sensitive operations. Recognize that all the information stored on your
SIM or in phone memory can be remotely accessed by the network operator, or by a third party if they
have managed to install bugging software on your phone. Do not store sensitive contacts, and consider
using encrypted SMS, MMS and email messages both to prevent unauthorised viewing of messages in
transit, and to stop saved messages from being readable should someone gain access to the information
stored on your phone or SIM.

SMS Encryption applications for Java-enabled phones
Most phones available today (excepting some very basic models) allow users to download and install
Java applications written by third parties. In general, you should avoid installing anything additional on a
phone that is used for sensitive communications. The one exception to this may be encryption
applications, which allow for securely encrypted communication once installed on the phone of both
sender and receiver. SMS encryption applications allow short encrypted messages to be sent and
received, using either a standard SMS (which is charged as normal but is scrambled so as to be
indecipherable in the records of the network operator) or a data service such as GPRS as the message
bearer.

Most SMS encryption applications work by asking the sender to choose a password (or 'key') which is
used to encrypt the message. The sender discloses the password to the desired recipient, who then
uses it to decrypt the message. The obvious weakness in this system is that anyone who manages to
acquire the password (perhaps by eavesdropping on the conversation in which it is disclosed) is able to
decrypt the message. A more secure form of encryption, called public key encryption, uses separate key
pairs for the sender and receiver. You can read more about this type of encryption on this
Wikipedia page.
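The shared-password scheme can be sketched in a few lines of Python. The code below is a deliberately simplified illustration of our own (it derives a keystream from the password; real applications should use a vetted cipher such as AES): anyone who learns the password can decrypt everything, which is exactly the weakness described above.

```python
import hashlib
import hmac
import os

def keystream(password, salt, length):
    """Derive a keystream from a shared password: PBKDF2 stretches the
    password into a key, then HMAC in counter mode expands it. Toy
    construction, for illustration only."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(4, "big"), "sha256").digest()
        counter += 1
    return out[:length]

def encrypt(password, plaintext):
    salt = os.urandom(16)                 # random salt, sent with the message
    ks = keystream(password, salt, len(plaintext))
    return salt + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(password, blob):
    salt, ciphertext = blob[:16], blob[16:]
    ks = keystream(password, salt, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

blob = encrypt("shared secret", b"meet at the station at 9")
print(decrypt("shared secret", blob))   # anyone holding the password can read it
```

Public key encryption removes the need to agree a decryption secret in advance: the sender encrypts with the recipient's public key, and only the recipient's private key can decrypt.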

There are many commercially developed SMS encryption applications, but some offer a free trial version.
There is also an open source SMS encryption application, CryptoSMS, which carries no license fees.
The advantage of using an open source application is that all source code is freely available, and can be
scrutinised for security flaws. In commercial proprietary applications, the source code is not made
available, potentially allowing malicious code to be included undetected.

This isn't a full review of SMS encryption applications but here are some Java-based encryption
applications that you could look into:

  •    CryptoSMS offers public key encryption and, as an Open Source application, does not require the
       purchase of a license. It does not anonymise the message sender and receiver of the encrypted
       message.
  •    SMS 007 is a commercial application that uses password encryption, but has the added advantage
       of anonymising the sender and receiver of encrypted messages by creating its own encrypted
       contacts list. It can be bought online for around €35.
  •    Kryptext is another commercial product, available online at £5.99 per month for up to 10 licences
       according to the product webpage. It uses password encryption, but does not advertise additional
       features such as contact list encryption.

Although it does not advertise encryption, Feedelix is a Java SMS application that allows users with the
FeedelSMS software installed on GPRS and Java phones to send cheap text messages. It has versions
for Hindi and Ethiopic. It supports bulk messaging and does not use the network's SMS service, making it
useful for information dissemination in situations where this may have been disabled.

Applications for Smartphones (Symbian and Windows Mobile)
Smartphones, many of which use the Windows Mobile or Symbian operating systems, have more
memory, more processing power, better peripheral devices such as cameras and more connectivity
options. Many can run scaled-down mobile versions of PC communication applications, such as email,
Voice-over-IP (VOIP) calling and instant messaging (IM). But, given that more sophisticated phones
inevitably offer more sophisticated surveillance possibilities, which of the applications can be used safely
to communicate sensitive information?

Email, VOIP and instant messaging applications for mobile phones could all be designed to provide
secure communication. Several applications already claim to do this, including Skype, the popular VOIP
and instant messaging system. Skype is peer-to-peer, making it harder to intercept communication as
there is no central server. It also uses several layers of encryption. The weakness of the system is that it
is closed source and proprietary, meaning that no-one is allowed to review the source code of the Skype
client for potential security vulnerabilities. More worrying is the discovery of surveillance code in a
Chinese Skype client, which logged the content of messages containing specific keywords, presumably
for use by Chinese authorities. This kind of vulnerability (called a 'backdoor') could be built into any
closed-source application, even one that promises secure communication. Open source software solves
this problem, but as far as we can tell there are no mature open source mobile VOIP or IM applications
that also offer encryption. This is definitely something to look out for, but for now both are probably best
avoided for sensitive work.

Mobile email can be encrypted either by using a peer-to-peer system that requires generation and sharing
of a key or keys, as described for SMS encryption, or by making use of an encrypted email service where
this is managed on a central server. In the first case, the responsibility for key generation and sharing of
keys is entirely with the sender and receiver of the message, involving no third party. In the second, a
third party email encryption service handles part of the process, for example, key management or storage
of encrypted messages on a secure server. Hushmail is one email encryption service that has released
a mobile client. As demonstrated by a case not long ago involving the service, however, it is important
to remember that encryption services are still subject to the laws of the country in which they are based.
This means that they could be forced to hand over information to law enforcement. As with VOIP and IM,
you should also consider whether you are willing to trust that proprietary email clients are not inadvertently
or secretly passing on information through a software backdoor.
Managing your own email encryption, for example by using an OpenPGP or S/MIME capable email client,
avoids reliance on any third party. This can be technically challenging, but if you do have expertise and/or
time to try things out, a peer-to-peer email encryption system can be a good way to communicate
securely from a sophisticated phone. All the same, given the benefits of changing your handset regularly
and the high cost of smartphones, it is worth considering whether the few secure communication options
available are not better executed from a basic, low-spec PC.

Conclusion
This article has highlighted some of the potential surveillance risks posed by the use of mobiles,
particularly for activists working in sensitive areas or under hostile regimes. It is the nature of mobile
cellular systems that the network operator knows the approximate location of all phones currently on the
network, as well as maintaining extensive call and messaging records. Once this information is available,
there is always a risk of it falling into the wrong hands. However, taking basic precautions about the
phone you choose and how you use it can help to reduce your risk of surveillance, and encryption
applications can make the phone an effective tool for secure communication.

Summing Up
What You Need to Know

  •    Due to a combination of legal and technical factors, face-to-face conversations and

       conversations using landline telephones are more secure against government wiretapping

       than cell phone or Internet communications. Cell phone conversations are more vulnerable

       both technically and legally, while SMS text messaging appears for now to be very insecure

       both technically and legally. Cell phones also create the risk of location tracking, and the

       only way to eliminate that risk entirely is to not carry a cell phone or to remove the

       battery.

  •    When it comes to Internet communications, using encryption is the only way to defend

       against wiretapping, whether by the government or anyone else.
  •    When it comes to pen/trap taps, on the other hand, most encryption products won't

       protect the types of information that the government can get. That information needs to
      be transmitted in the clear so computers can direct it to the proper recipient. Only

      anonymizing tools like Tor will protect you from traffic analysis via pen/trap tap.
Information Stored By Third Parties
  •   Third parties — like your phone company, your Internet service provider, the web sites you

      visit and interact with or the search engine that you use — regularly collect a great deal of

      sensitive information about how you use the phone system and the Internet, such as

      information about who you're calling, who's emailing or IMing you, what web pages you're

      reading, what you're searching for online, and more. In addition to those records being

      compiled about you, there's also data that you choose to store with third parties, like the

voicemails you store with your cell phone company or the emails you store with your email

      provider. In this section, we'll talk about the legal rules that govern when and how law

      enforcement agents can obtain this kind of information stored by and with third parties.

      We'll then outline steps that you can take to reduce that risk, by learning how to reduce

      the amount of information collected about you by third parties, minimize the amount of

      data you choose to store with third parties, or replace plainly readable data with encrypted

      versions for storage with third parties.
What Can the Government Do?
  •   In addition to being able to use wiretaps to intercept your communications while they are

      being transmitted, the government has a variety of ways of getting (1) records about your

      communications and (2) the content of communications that you have stored with a third

      party. In particular, the government can get all of the records that your ISP, phone

      company, or other communications service providers have on you, and the SMS messages,

      instant messages, emails or voice-mails you've stored with them. However, unlike regular

      third-party records discussed above, which can be subpoenaed without any notice to you,

      the records of your communications providers are given some extra protection by the

      "Stored Communications Act" portion of the "Electronic Communications Privacy Act", or

      ECPA.

So what can the government get?


Some Records Only Require a Subpoena
Basic Subscriber Information Held by Your Communications Providers Is Available With Just a
Subpoena

  With a subpoena, the government can obtain from your communications providers what is

  often called "basic subscriber information." Sometimes, the subpoena will specifically name a

  person whose information is being sought; other times the government will ask for information

  regarding a particular phone number, Internet username, email address, or IP address. With

  such a subpoena, the government can (only) get your:
    •     Name.

    •     Address.

    •     The length of time you've used that phone or Internet company, along with service start date
          and the types of services you use.

    •     Phone records. They can get your telephone number, as well as local and long distance
          telephone connection records — those are records identifying all the phone numbers you've
          called or have called you, and the time and length of each call.

    •     Internet records. They can get the times you signed on and off of the service, the length of
          each session, and the IP address that the ISP assigned to you for each session.

    •     Information on how you pay your bill, including any credit card or bank account number the
          ISP or phone company has on file.


  The government can get this information with no notice to you at all, and can also get a court

  order forcing your service provider not to tell you or anyone else.


Other Records Require a Court Order
Other Communications Records Held by Your Communications Providers Require a Court Order

  In order to get a communications provider to turn over other records beyond basic subscriber

  information, the government either has to get a search warrant or a special court order.

  Sometimes called "D" orders, since they are authorized in subsection (d) of section 2703 of the

  Stored Communications Act, these court orders are much easier to get than search warrants

  but harder to get than subpoenas. The government can get this information with no notice to

  you at all, and can also get a court order forcing your service provider not to tell you or anyone

  else.



  In addition to basic subscriber information, your ISP or email provider may maintain records or

  "logs" of:


    •     The email addresses of people you send emails to and receive emails from, the time each
          email is sent and received, and the size of each email

    •     The IP addresses of other computers on the Internet that you communicate with, when you
          communicated with them, and how much data was exchanged

    •     The web addresses of the web pages that you visit


  Which, if any, of the above are logged varies, depending on your particular ISP or email

  provider's privacy policies and resources. However, just about every ISP will log IP addresses

  and log-on/off times, and keep those logs for at least a few months.
  Cellular phone companies may also keep records of which cell tower your phone communicated

  with when you were making calls. These cell site tower records can help pinpoint your physical

  location at points in the past, and are increasingly the target of law enforcement

  investigations. And although some courts have required the government to obtain a warrant

  based on probable cause before obtaining these records, the government's usual practice is to

  get such records based on the much lower "D" Order standard.


Not All Records are Protected
Records Collected by Search Engines and Other Web Sites May Not Be Protected

  In addition to the logs kept by your communications providers, there are also logs kept by the

  Web sites that you visit. For example, the Apache web server is currently the most widely used

  web server on the Internet. In its default configuration, it logs the following information about

  each request it receives from a web browser:


    •   requesting host name/IP address

    •   username of requester (rarely present)

    •   time of request

    •   first line of request (indicating requested page, plus some parameters)

    •   success or failure of request

    •   size of response in bytes

    •   the previous page viewed by requester (if any)

    •   the name and version of the web browser used


  However, the server could potentially be configured to log anything you or your browser tells

  it, in addition to the above.
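To make concrete how much a single default log line reveals, here is a short sketch that pulls apart an entry in Apache's "combined" log format. The regular expression and the sample line (with an invented IP address and page names) are illustrative, not taken from any real server:

```python
import re

# Matches Apache's default "combined" log format. The named groups mirror
# the fields listed above; referer and user-agent are what distinguish the
# combined format from the shorter "common" format.
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_log_line(line):
    """Return a dict of the logged fields, or None if the line doesn't match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

# A fabricated example entry:
sample = ('203.0.113.7 - - [10/Oct/2008:13:55:36 -0700] '
          '"GET /index.html HTTP/1.1" 200 2326 '
          '"http://www.example.com/start.html" "Mozilla/5.0"')
fields = parse_log_line(sample)
```

Even this one invented line ties an IP address to a specific page, a precise time, the page visited just before, and a browser version.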



  The Stored Communications Act clearly protects records held by companies that offer the

  public the ability to send and receive communications — phone companies, ISPs, webmail

  providers, IM providers, bulletin board sites, etc. However, it does not necessarily protect logs

  held by web sites that don't offer communications service, which is most of them.



  This is particularly worrisome when it comes to search engines. The government's position is

  that logs kept by search engines are not protected by the Stored Communications Act at all.

  Considering that these logs can often be linked back to you — either by your IP address or

  "cookies," or, if you've registered with other services offered by the search engine, by the
  information you entered when registering — this potential gap in legal protection represents a

  serious privacy threat.


Some Content Receives Stronger Protection
Emails, Voicemails, and Other Communications Content Stored by Your Communications Providers
Receive Stronger Protection

  Compared to the relatively weak protection for non-content records, the law gives some extra

  protection to communications content that you have stored with (or that is otherwise stored

  by) communications service providers like your phone company, your ISP, or an email provider

  like Gmail or Hotmail. Your communications providers cannot disclose your stored

  communications to the government unless the government satisfies the requirements

  described below; nor can they disclose your stored communications to anyone other than the

  government without your permission. There is one notable exception, though, for serious

  emergencies: if the provider believes in good faith that not immediately disclosing the

  communications could lead to someone’s death or serious injury, they can give them to the

  government.



  Note, however, that these restrictions on the disclosure of your communications only apply to

  communications providers that offer their services to the public. Even more worrisome, the

  government doesn’t consider businesses or schools and universities that offer their employees

  and students service to be offering services to the public, and therefore considers them

  unprotected by the Stored Communications Act. That means they could get communications

  from those entities with only a subpoena, and maybe even just a polite request if your

  employment agreement or your school's privacy policy allows it.


Privacy tip: Use communications providers that serve the public!

    Don’t let some friend with a mail server in his basement handle your email service unless

    he is very trustworthy — unlike a regular ISP or public web-mail service, there are no legal

    restrictions on who your friend shares your emails with.

  The Stored Communications Act strongly protects communications that have been in 'electronic

  storage' for 180 days or less, but the government has a very narrow reading of what

  'electronic storage' means in the statute. The government doesn't consider already-read or

  opened incoming communications to be in electronic storage (for example, emails in your

  inbox that you've already looked at, or voicemails in your voicemail account that you've saved

  after listening). Nor does the government consider messages in your sent box or messages in
  your drafts box to be in 'electronic storage.' Under the government's view, here's how your

  communications are treated under the law:



  New unopened communications: If the email or voice-mail messages are unopened or

  unlistened to, and have been in storage for 180 days or less, the police must get a search

  warrant. However, you are not notified of the search.



  Opened or old communications: If you have opened the stored email or voice-mail

  messages, or they are unopened and have been stored for more than 180 days, the

  government can use a special court order — the same “D” orders discussed — or a subpoena

  to demand your communications. Either way, the government has to give you notice

  (although, like with sneak & peek search warrants, that notice can sometimes be delayed for a

  substantial time, and as far as we can tell almost always is delayed). However, the police may

  still choose to use a search warrant instead of a D order or subpoena, so they don’t have to

  give you notice at all.



  Notably, the Ninth Circuit Court of Appeals has disagreed with the government's reading of the

  law, finding that communications are in electronic storage even after they are opened —

  meaning that the government needs a warrant to obtain opened messages in storage for 180

  days or less.


Privacy tip: Use communications providers based in California

    Communications providers in states that are in the Ninth Circuit, such as California, are

    bound by Ninth Circuit law and therefore are very resistant to providing the government

    with opened emails that are 180 days old or less without a warrant.

  In sum, although the law sometimes requires the government to get a warrant before

  accessing communications you’ve stored with your communication provider, it doesn’t always.

  For this reason, storing your communications on your own computer is preferable — the

  government will almost always need a warrant if it wants to seize and search the files on your

  computer.


What Can I Do To Protect Myself?

  When we were talking about how to defend yourself against subpoenas and search warrants,

  we said, "If you don't have it, they can't get it."
Of course, that's only partially true: if you don't have it, they can't get it from you. But that

doesn't mean they might not be able to get copies of your communications or detailed records

about them from someone else, such as your communications service providers or the people

and services that you communicate with. Indeed, as we outlined in the last section, it's much

easier as a legal matter for the government to obtain information from these third parties —

often without probable cause or any notice to you.



So, you also need to remember this lesson: "If someone else has stored it, they can get

it." If you let a third party store your voicemail or email, store your calendar and contacts,

back up your computer, or log your communications traffic, that information will be relatively

easy for the government to secretly obtain, especially compared to trying to get it from you

directly. So, we'll discuss in this section how to minimize the content that you store with third

parties.



We've also asked you to "encrypt, encrypt, encrypt!" in the previous sections about

protecting data on your computer and while you are communicating. The same holds true

when protecting against the government getting your information from other people. Although

ideally you will avoid storing sensitive information with third parties, using encryption to

protect the data that you do store — such as the emails you store with your provider, or the

files you back up online — can provide a strong line of defense. We'll talk in this section about

how to do that.



Communications content that you've chosen to store with a service provider isn't the only

issue, though. There are also the records that those third parties are creating about your

interactions with their services. Practically everything you do online will create records, as will

your phone calls. So your best defense is to think before you communicate:


  •   Do you really want the phone company to have a record of this call — who you called, when,
      and how long you talked?

  •   Do you really want a copy of this email floating around in the recipient's inbox, or on your or
      his email provider's system?

  •   Do you really want your cell phone provider to have a copy of that embarrassing SMS text
      message?

  •   Do you really want Google to know that you're searching for that?


It may be that the communication is so trivial or the convenience so great that you decide that

the risk is worth it. But think about it — seriously consider the security trade-offs and make a
  decision — before you press "send". We'll give you information in this section that should help

  you make those decisions.



  Another option for minimizing the information that's recorded about you — short of avoiding

  using a service altogether — is to protect your anonymity using encryption and

  anonymous communication tools. If you want to search Google or browse Amazon without

  them being able to log information that the government could use to identify you, you'll need

  to use software such as Tor to hide your IP address, as well as carefully manage your

  browser's privacy settings. This section will give you the information you need to do that.


Getting Started
Learn What Your Service Providers Store

  Most communications service providers and commercial web sites have privacy policies. Read

  them to find out:


    •   What information do they collect? It may be more than you think. If anyone you do
        business with doesn't have a privacy policy (or their policy is unclear), you should contact
        them and ask about what they collect.

    •   With whom do they share it? Most companies will share your information with other
        companies in their corporate family and with marketers; many companies will sell your data
        to anyone who wants it. Check to see if they'll let you "opt-out" of sharing your information
        with other companies.

    •   What about the government? Look in the privacy policy to see under what circumstances
        they'll hand your information over to the government. Try to do business with companies that
        will not give your information to the government unless required by law to do so. Also find
        out whether they will notify you if the government asks for your files, and do business with
        companies who will always notify you unless prohibited by law from doing so. That way, you
        can call a lawyer and try to stop the disclosure before it happens.


  Consider using activist-friendly, privacy-respecting communications providers that offer free

  services. The Online Policy Group, for example, offers free web hosting and email list hosting,

  while Rise Up offers free email (including web-mail), web hosting, and email list hosting. These

  services have strong privacy policies and will notify you of any governmental or other attempt

  to seek customer information unless prevented by law. Cable companies that offer Internet

  access usually also have a policy of notifying you unless they've been gagged — in fact,

  because of a quirky imbalance in the law, they actually have to notify you if they can, unlike

  non-cable providers. So, if you're especially worried about the communications records held by

  your ISP, consider using a cable broadband provider.


Choosing a Communications Method
Again, Telephone Calls are Your Safest Bet

  When it comes to protecting the privacy of communications content stored by your provider,

  the safest choice is to avoid storing any content with the provider at all. Therefore, just as

  when we were discussing wiretapping, regular old telephone calls have a distinct advantage

  over other communications methods: putting aside voicemail, which we'll discuss on the next

  page, telephone calls don't create copies. That means, unless the government goes to the

  technical and legal trouble of directly wiretapping you (a very low risk, compared to the

  government trying to obtain stored copies of your communications), or the person you are

  talking to is so untrustworthy that they would record your conversation without telling you (a

  rarity, but it does sometimes occur), your telephone call will be safe from prying ears.



  As you'll see on the following pages, telephone calls are far preferable to SMS text messages,

  which providers apparently store for long periods of time, and which are very difficult to

  encrypt. IM and VOIP are better alternatives, as we'll also discuss, since they can be more

  easily encrypted, and since instant messages and VOIP call contents are typically not logged by

  providers. Email is a harder case, since it necessarily creates a range of copies — with

  providers and with recipients — but as you'll see later, there are a number of steps you can

  take to make that mode of communication safer, too.


Protecting Your Voicemail

  As we explained previously, copies of your communications stored by your phone company

  such as your voicemail receive very weak legal protection compared to copies of your

  communications stored in your own home. In particular, after a communication has been

  stored more than 180 days — or, according to the government's reading of the law, after

  you've first accessed that stored communication — the government no longer needs to get a

  warrant before obtaining that communication, and can instead use only a subpoena to the

  company (usually with no notice to you).



  When it comes to your voicemail, this means two things:


    •   Where possible, use your own answering machine or voicemail system, not the phone
        company's.

    •   Where it's not possible to use your own answering machine or voicemail system, such as with
        your cell phone, you should always delete your voicemails as soon as you listen to them!

Protecting Your Voice Over IP Communications

  As best we can tell, providers of Voice Over IP telephone service such as Skype do not
  record your calls as a matter of routine. So, short of using encryption to protect the
  confidentiality of your calls, there are no special steps that you need to take to ensure
  that the government can't obtain stored copies of your conversations. Notably, Skype
  uses encryption by default. However, as discussed in our VoIP article, the security of
  Skype's encryption system is still in question. And, as with your regular phone calls,
  there is always going to be some risk that the person at the end of the line is recording
  the conversation.


Protecting Your Email Inbox
(and Sent folder, and Drafts folder, and…)

  The Stored Communications Act requires the government to obtain a warrant before seizing

  emails that are in "electronic storage" with your communications provider and are less than

  181 days old. However, under the government's interpretation of the term "electronic storage",

  the emails that arrive in your inbox lose warrant protection under the Stored Communications

  Act, and are obtainable with nothing more than a subpoena (often with no notice to you) as

  soon as you've downloaded, opened, or otherwise viewed them. Similarly, the government

  believes that it can obtain the sent emails and draft emails that you store with your provider

  with only a subpoena, again often without notice to you; the government doesn't think those

  sent or draft emails are in "electronic storage" as defined by the statute, either.



  EFF is doing its best to prove the government's interpretation wrong in court, and some courts

  have already disagreed with the government. Yet as far as we can tell, those court decisions

  haven't significantly changed the government's behavior and it still routinely obtains opened

  emails (and sent emails and draft emails) without warrants, regardless of how old they are.



  Because of the government's aggressive position, you need to be just as aggressive when it

  comes to defending your email privacy. As described on the next few pages, the most critical

  things you can do are:


    •   Delete emails from your provider's server as soon as you first access the messages,
        and store your sent and draft emails locally in your email client software, rather than with
        your provider.

    •   In order to minimize the number of emails stored with your provider — be they received,
        sent, or draft — avoid using webmail if at all possible, or, if you do use a webmail
        account, avoid the web interface and instead configure your email client software to send and
        receive emails directly via POP.

    •   Encrypt your emails whenever possible.
Protecting Email: Download and Delete!

  The single most powerful step you can take to protect the privacy of your email is to not
  store it with your email provider. Rather than leave email on your provider's server, you
  should configure your email software to immediately delete incoming emails from your
  provider's server as you download those messages to your computer — and also make
  sure that your email software is configured to store your draft and sent email on your
  computer rather than with the provider.

  Of course, this is a serious security/convenience trade-off — by fetching your email
  using the "POP" email protocol and storing all your mail locally, you won't have access to
  your email from multiple devices like you would if you were using the IMAP protocol or a
  webmail interface, both of which store all of your mail with the provider. We realize that
  for some people, particularly those without their own computer, using POP and storing
  everything locally may not be an option. But if it is an option, and you can effectively
  function without storing your emails with your provider, we highly recommend doing so.
  For more, check out our email article.
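Real email clients expose this behavior as a "remove from server after downloading" POP setting, but the underlying exchange is simple enough to sketch in a few lines of Python with the standard-library poplib module. The host, username, and password are placeholders; your provider's actual settings will differ:

```python
import poplib

def download_and_delete(host, user, password):
    """Fetch every message over an SSL-protected POP3 connection, then
    mark each one for deletion so no copy stays on the provider's server."""
    conn = poplib.POP3_SSL(host)            # encrypted client-to-server link
    conn.user(user)
    conn.pass_(password)
    messages = []
    count, _total_size = conn.stat()        # number of messages waiting
    for i in range(1, count + 1):
        _resp, lines, _octets = conn.retr(i)        # download message i
        messages.append(b"\r\n".join(lines))        # keep a local copy
        conn.dele(i)                                # flag server copy for deletion
    conn.quit()                             # deletions take effect on QUIT
    return messages

# Usage (placeholder values): download_and_delete("pop.example.com",
#                                                 "alice", "hunter2")
```

Note that using POP3_SSL rather than plain POP3 also gives you the encrypted client-to-server connection discussed later in this guide.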
Don't Use Webmail if You Don't Need It - or POP It

  Webmail poses a serious security trade-off for those concerned about a government
  adversary. Webmail is usually free, very easy to use, and super-convenient, especially if
  you want the ability to access your email from several different computers or mobile
  devices. However, deleting your email from your provider's servers as soon as you've
  downloaded it — a critical step to protecting your email's privacy against the government
  — is hard if not impossible to do when you use a webmail service like Gmail or Yahoo!
  Mail, especially if you want to maintain access to a copy of that email. Since you view
  your email in your browser rather than downloading it to email client software, the only
  conveniently accessible copy of your email is going to be the one you store with your
  provider.

  If you take the idea of a government adversary seriously, webmail is a very bad risk.
  The government is hundreds if not thousands of times more likely to try to obtain your
  stored email than to wiretap it. Indeed, the reason that the number of wiretaps on
  electronic communications is so low is because it's so easy to obtain the same
  information from the provider's storage.

  So, if you think that government adversaries may pose a threat to your privacy, we
  strongly recommend that you not use webmail for any unencrypted sensitive
  communications, unless you simply can't live your life or do your job without an
  easy-to-access-anywhere inbox. If you really don't need that kind of access and usually
  access your mail from the same computer, the convenience of webmail probably isn't
  worth the risk.

  If you do use a webmail account, though, one way of mitigating the risk is to avoid using
  the web interface and instead download your emails directly to your email client
  software using POP and immediately delete them from the provider's server. This option
  may not be available from all webmail providers, but it is offered by major providers
  such as Gmail, Microsoft and Yahoo!. You'll lose the convenient access to past messages
  via the web, and it might not be free, but you'll still have cheap and reliable email
  service.
Protecting Email: Use Email Encryption When You Can

  Using email encryption is a good idea even if you are storing all your email locally, if only
  to counter the wiretapping threat. But using encryption becomes all the more important
  if you are storing your email content with your email provider. If the government comes
  calling on your provider with a subpoena for your stored emails, you'll wish you had
  learned how to protect those messages with encryption, so visit our email article and
  learn now!


Email

 The act of using email stores data on your machines, transmits data over the network, and

 stores data on third party machines.


Locally Stored Data
 The usual measures apply to managing the copies of emails (both sent and received) that are

 kept on your own machines. Encrypt your drives and decide upon and follow an appropriate

 data deletion policy.
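As one building block of such a deletion policy, a file can be overwritten before it is unlinked. The sketch below shows the idea; keep in mind that on journaling filesystems, SSDs, and other flash media an overwrite is not a guarantee of unrecoverability, only an improvement over a plain delete, and dedicated secure-deletion tools are preferable where available:

```python
import os

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's contents with random bytes, then unlink it.

    Caveat: journaling filesystems and flash wear-leveling may preserve
    old copies of the data elsewhere, so this only raises the bar for a
    forensic examiner; it is not a guarantee."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))   # replace contents with noise
            f.flush()
            os.fsync(f.fileno())          # push the overwrite to disk
    os.remove(path)
```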


Data on the Wire
 Email usually travels through a number of separate hops between the sender and receiver. This

 diagram illustrates the typical steps messages might travel through, the transmission protocols

 used for those steps, and the available types of encryption for those steps.
End-to-End Encryption of Specific Emails

 Encrypting emails all the way from the sender to the receiver has historically been difficult,

 although the tools for achieving this kind of end-to-end encryption are getting better and easier

 to use. Pretty Good Privacy (PGP) and its free cousin GNU Privacy Guard (GnuPG) are the

 standard tools for doing this. Both of these programs can provide protection for your email in

 transit and also protect your stored data. Major email clients such as Microsoft Outlook and

 Mozilla Thunderbird can be configured to work smoothly with encryption software, making it a

 simple matter of clicking a button to sign, verify, encrypt and decrypt email messages.



 The great thing about end-to-end encryption is that it ensures that the contents of your emails

 will be protected not only against interception on the wire, but also against some of the threats

 to the contents of copies of your emails stored on your machine or third party machines.



 There are two catches with GnuPG/PGP. The first is that they only work if the other parties you

 are corresponding with also use them. Inevitably, many of the people you exchange email with

 will not use GPG/PGP, though it can be deployed amongst your friends or within an

 organization.



 The second catch is that you need to find and verify public keys for the people you are sending

 email to, to ensure that eavesdroppers cannot trick you into using the wrong key. This trickery

 is known as a "man in the middle" attack.
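The standard defense against such trickery is to compare the key fingerprint your software displays against one obtained out-of-band, for example in person or over the phone. The comparison itself is trivial; this helper (with made-up fingerprint values) just normalizes spacing and case before comparing:

```python
def fingerprints_match(displayed, trusted):
    """Compare a fingerprint shown by your software against one obtained
    out-of-band, ignoring whitespace and letter case."""
    def norm(fp):
        return "".join(fp.split()).upper()
    return norm(displayed) == norm(trusted)

# Hypothetical fingerprints, for illustration only:
shown  = "ABCD 1234 EF56 7890 ABCD 1234 EF56 7890 ABCD 1234"
phoned = "abcd1234ef567890abcd1234ef567890abcd1234"
```

If the two fingerprints differ in even one position, do not use the key; a mismatch is exactly what a man-in-the-middle attack would produce.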
 Probably the easiest way to start using GnuPG is to use Mozilla Thunderbird with the Enigmail

 plugin. You can find the quick start guide for installing and configuring Enigmail here.


Server-to-Server Encrypted Transit

 After you press "send", emails are typically relayed along a chain of SMTP mail servers before

 reaching their destination. You can use your mail client to look at the headers of any email

 you've received to see the chain of servers the message traveled along. In most cases,

 messages are passed between mail servers without encryption. But there is a standard called

 SMTP over TLS which allows encryption when the sending and receiving servers for a given hop

 of the chain support it.
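The chain of relays described above can also be read programmatically with Python's standard email module. The raw message below is a fabricated two-hop example; each relay prepends its own Received header, so reading the headers bottom-up traces the path from sender to recipient:

```python
from email import message_from_string

# Fabricated raw message; real headers come from "view source" /
# "show original" in your mail client.
raw = """\
Received: from mx.example.org (mx.example.org [198.51.100.9])
    by mail.example.com with ESMTPS (TLSv1.2); Mon, 6 Oct 2008 10:14:01 -0700
Received: from sender-pc (cpe-203-0-113-7.example.net [203.0.113.7])
    by mx.example.org with ESMTP; Mon, 6 Oct 2008 10:13:58 -0700
From: alice@example.org
To: bob@example.com
Subject: hop inspection

(body)
"""

msg = message_from_string(raw)
# Each relay prepends a Received header, so reversing the list yields
# sender-to-recipient order.
hops = list(reversed(msg.get_all("Received")))
```

In this invented example, the first hop shows no encryption ("with ESMTP"), while the second advertises TLS ("with ESMTPS (TLSv1.2)"); real headers vary by server software, but this is the kind of information they expose.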



 If you or your organization operates a mail server, you should ensure that it supports TLS

 encryption when talking to other mail servers. Consult the documentation for your SMTP server

 software to find out how to enable TLS.


Client-to-Mail Server Encryption

 If you use POP or IMAP to fetch your email, make sure it is encrypted POP or IMAP. If your mail

 server doesn't support the encrypted version of that protocol, get your service provider or

 systems administrator to fix that.



 If you use a webmail service, ensure that you only access it using HTTPS rather than HTTP.

 Hushmail.com is a webmail service provider that always uses HTTPS, and also offers some end-

 to-end encryption facilities (though they are not immune to warrants).



 Many webmail service providers only use HTTPS for the login page, and then revert to HTTP.

 This isn't secure. Look for an account configuration option (or a browser plugin) to ensure that

 your webmail account always uses HTTPS. In Gmail, for instance, you can find this option in the

 "general" tab of the settings page:




 If you can't find a way to ensure that you only see your webmail through https, switch to a

 different web mail provider.
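A habit worth automating, for scripts or bookmarks that open your webmail, is refusing any address that isn't HTTPS. This minimal sketch (with a hypothetical webmail URL) simply checks the scheme before use:

```python
from urllib.parse import urlparse

def require_https(url):
    """Raise an error rather than touch a webmail URL over plain HTTP."""
    if urlparse(url).scheme != "https":
        raise ValueError("refusing insecure webmail URL: " + url)
    return url

# Hypothetical address: passes only because the scheme is https.
inbox = require_https("https://mail.example.com/inbox")
```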
Data Stored on Second- and Third-Party Machines
 There are two main reasons why your emails will be stored on computers controlled by third

 parties.


Storage by your Service Provider

 If you don't run your own mail server, then there is a third party who obtains (and may store)

 copies of all of your emails. This would commonly be an ISP, an employer, or a webmail

 provider. Copies of messages will also be scattered across computers controlled by the ISPs,

 employers and webmail hosts of those you correspond with.



 Make sure your email software is configured so that it deletes messages off of your ISP's mail

 server after it downloads them. This is the most common arrangement if you're using POP to

 fetch your email, but it is common for people to use IMAP or webmail to leave copies of

 messages on the server.



 If you use webmail or IMAP, make sure you delete messages immediately after you read them.

 Keep in mind that with major webmail services, it may be a long time – maybe a matter of

 months – before the message is really deleted, regardless of whether you still have access to it

 or not. With smaller IMAP or webmail servers, it is possible that forensically accessible copies of

 messages could be subpoenaed years after the user deleted them.



 The content of PGP/GnuPG encrypted emails will not be accessible through these third parties,

 although the email headers (such as the To: and Subject: lines) will be.



 Running your own mail server with an encrypted drive, or using end-to-end encryption for

 sensitive communications, are the best ways of mitigating these risks.


Storage by Those You Correspond With

 Most people and organizations save all of the email they send and receive. Therefore, almost

 every email you send and receive will be stored in at least one other place, regardless of the

 practices and procedures you follow. In addition to the personal machine of the person you

 sent/received the message to/from, copies might be made on their ISP or firm's mail or backup

 servers. You should take these copies into consideration, and if the threat model you have for

 sensitive communications includes an adversary that might gain access to those copies, then

 you should either use PGP to encrypt those messages, or send them by some means other than
 email. Be aware that even if you use PGP, those you communicate with could be subject to

 subpoenas or requests from law enforcement to decrypt your correspondence.


End-to-End Email Encryption
 Email encryption is a topic that could fill a book, and has: see Bruce Schneier's book Email

 Security: How to Keep Your Electronic Messages Private. While this book is somewhat out of

 date (it refers to old versions of software), the concepts it introduces are essential.


Protecting Instant Messaging

  Major IM service providers like AOL, Yahoo!, and Microsoft say that they don't store your IM

  messages after they are transmitted. We think they are telling the truth, but even so, you

  should use encryption when IMing, if only because it is so easy to do (see our IM article to find

  out how).



  Gmail's chat, on the other hand, logs all of your IMs by default as a feature and stores them

  online in your Google account for you to access later. If you use Google Talk or Gmail's chat

  service, we strongly recommend turning off this feature by going "Off the Record" or "OTR", as

  Google calls it — so that you aren't storing those transcripts with Google.



  If you really need access to past transcripts, log them on your own computer using your IM

  software's settings (subject, of course, to the data retention policy you established after

  reading our section on protecting data stored on your computer). However, also keep in mind

  that many if not most of the people you chat with will be keeping their own logs on their own

  computer (or in their Google account if using Gchat, unless you've gone "Off the Record").


Protecting SMS
Avoid Texting Sensitive Communications

  Major cell phone providers claim that they don't log your SMS text messages except for a very

  short period of time to ensure delivery (see, e.g., statements from providers in this news story

  entitled "Most Text Messages Are Saved Only Briefly", or another article containing similar

  claims). However, there is reason to doubt these claims: we've seen several cases where SMS

  messages were disclosed by a provider months or even years after they were originally sent.

  For example, as USA Today recounts, text messages were subpoenaed in the Kobe Bryant rape

  case four months after they were sent, despite AT&T Wireless' claims that customers' text

  messages are deleted within 72 hours. According to that story, "How messages in the Bryant

  case would be available four months later isn't known; most likely they were retrieved from an
  archival storage system." Considering such incidents, provider-side logging of your SMS text

  messages must be considered a high risk.



  Furthermore, although we think that the Stored Communications Act and the Fourth

  Amendment require the government in most cases to get a warrant before obtaining your

  pager or SMS messages from your provider, there are several known cases where it has

  obtained such messages without warrants under the lower legal standards reserved for

  non-content records, using only subpoenas.



  Not only is there the threat of your provider logging your messages and the government

  subpoenaing them, but also the near certainty that the phones of the people you are

  communicating with are logging those messages, adding yet another point of vulnerability.

  That's in addition to the logs on your own phone, which you should delete regularly based on

  the data retention policy you developed after reading about "Data Stored on Your Computer."

  However, keep in mind that with the right forensic tools, investigators will likely be able to

  recover even those deleted messages if they ever get a hold of your phone, and the secure

  deletion options for mobile devices are still quite limited.



  Finally, although there have been some efforts at coming up with encryption solutions that

  work for SMS (as described in our mobile devices article), none of those techniques are easily

  or widely used.



  Therefore, given the possibility that your SMS texts are logged by your provider, that the

  government may be able to obtain those messages from your provider without warrants and

  without notice to you, and that such messages are hard if not impossible to encrypt, along with

  the certainty that they will be logged on your phone and the phones of the people you

  communicate with, we strongly recommend against using SMS for any sensitive

  communications.


Online Storage of Your Private Data

  There's a lot of talk these days about how convenient it is to store your data in the internet

  "cloud." Why store your calendar or contacts list or critical documents on one computer, or buy

  a hard drive to back up your files at home, when you can store them "in the cloud" and access

  them from anywhere using services like Google Calendar, or Google Docs, or remote backup
 services that will store copies of all your files for you? Well, here's a reason: the government

 can easily subpoena that data from those providers, with no notice to you.



 As we already described in the "What Can The Government Do?" section, the communications

 stored by your communications service providers are very weakly protected compared to those

 you store yourself: after 180 days (or after you've downloaded a copy, according to the DOJ),

 the government can get those communications with only a subpoena and usually with no

 notice to you. But the situation is even worse when it comes to data that you store with

 someone other than your communications provider — so-called "remote computing services"

 (RCSs). Under the Stored Communications Act, the government can obtain data that you send

 to an RCS for storage or processing with only a subpoena regardless of how old it is, and

 although the government is supposed to notify you before they do, the law makes it very easy

 for investigators to delay that notice until after they've gotten your data.



 Therefore, storing all that data yourself, on your own computers — without relying on RCSs —

 is the most legally secure way to handle your private information. If you do choose to store

 copies of your files online, though, we strongly recommend encrypting those files yourself

 before you do (visit our article on disk and file encryption to learn how), or using services like

 IDrive or MozyPro that give you the option of encrypting your files using your own private

 encryption key.
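The encrypt-before-upload workflow can be sketched with nothing but the Python standard library. The example below uses a one-time pad (a random key as long as the file) only because that is the one provably strong cipher expressible in a few stdlib lines; in practice you would use a vetted tool like GnuPG or a service supporting client-side keys, as discussed above. The filename is invented for illustration.

```python
# Sketch of client-side encryption before online storage: the provider
# (and anyone who subpoenas it) receives only opaque ciphertext, while
# the key never leaves your own machine. One-time pad used purely for
# stdlib-only illustration; use a real tool (e.g. GnuPG) in practice.
import os

def encrypt_file(plaintext):
    """Return (ciphertext, key). Upload only the ciphertext."""
    key = os.urandom(len(plaintext))                  # secret stays local
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt_file(ciphertext, key):
    return bytes(c ^ k for c, k in zip(ciphertext, key))

if __name__ == "__main__":
    secret = b"contents of tax-records.pdf"          # hypothetical file
    ct, key = encrypt_file(secret)
    assert ct != secret                              # provider sees noise
    assert decrypt_file(ct, key) == secret           # you can still recover it
    print("round trip ok")
```

The design point is the same regardless of cipher: as long as encryption happens on your machine and the key stays there, a subpoena to the storage provider yields nothing readable.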


Protecting Your Search Privacy and Your Web Browsing Activity

 The search history you generate when using search engines like Google or Yahoo! reveals

 incredibly sensitive data about what you look at — or even think of looking at — on the web.

 These logs may be tied to your identity based on your IP address, the cookie files that the

 search engine places on your computer, or your account information if you've registered to use

 the search engine or other services offered by the provider. And as discussed earlier in the

 "What Can the Government Do?" section, these logs are subject to uncertain legal protections.
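To make concrete just how much a single server-side log entry reveals, here is a sketch that parses a hypothetical web-server log line of the common format. The IP address, timestamp, and query are all invented for illustration; real providers' log formats vary.

```python
# A hypothetical search-engine access-log line, showing how a provider's
# logs tie your IP address and a timestamp to your exact search terms.
# Address, time, and query below are invented for illustration.
from urllib.parse import urlparse, parse_qs

log_line = ('203.0.113.7 - [12/Mar/2009:10:14:02] '
            '"GET /search?q=hiv+test+anonymous HTTP/1.1"')

ip = log_line.split()[0]                          # who asked
request_path = log_line.split('"')[1].split()[1]  # what they asked for
query = parse_qs(urlparse(request_path).query)["q"][0]

print(ip, "searched for:", query)  # → 203.0.113.7 searched for: hiv test anonymous
```

Combine entries like this across months, add a login cookie or account, and the log becomes a detailed profile of one identifiable person's thoughts, which is exactly why the techniques below for masking your IP address and managing cookies matter.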



 Considering the sensitivity of search logs and the questions surrounding their legal status, we

 highly recommend that you exercise great care to ensure that your identity cannot be linked to

 your search queries. For an in-depth discussion of how to do that, read EFF's "Six Tips to

 Protect Your Search Privacy". You should also take a look at our article on browsers to learn

 more about cookie management and on the anonymizing software Tor to learn more about

 how to mask your IP address. These same techniques can be used to protect you against
  logging by any web site you visit, not just search engines, and we recommend that you do use

  them whenever you visit a web site and don't want that site to log personally-identifying

  information about you and the pages that you read.



  Finally, we recommend avoiding using one online portal for multiple services — e.g., try to

  avoid using Yahoo! Search and Yahoo! Mail, or Google Search and Google Reader. Not only are

  you making it easier for the search provider to identify you by virtue of linking all of your

  activity to your personalized account, but you are also offering the government a convenient

  "one-stop shop" opportunity to access a wide range of your personal information at once.

  Using these "mega-portals" to manage all aspects of your online life might be convenient, but

  it also creates a single point of failure that raises a serious security risk.


TMI on the Web
Do You Really Want to Publish that Blog Post, Flickr that Picture, or Broadcast that Facebook Status?

  The web is a powerful engine of personal expression, giving you a wide variety of online

  venues to speak your mind and communicate with friends or the public. But before you publish

  that blog post on MySpace or Blogger, post a picture to a picture-sharing site like Flickr or

  Picasa, or broadcast your status on Facebook or using Twitter, think, "Is this really information

  that I want to expose on the web?" Even if you do now, think about years from now: will

  you want evidence of this youthful indiscretion or that personal opinion floating around on the

  web in the future? Remember, you don't have any expectation of privacy in information that

  you post to the public web, and information that you post now but delete later may still

  persist, whether on the pages of the friends you communicated with (like your Wall Posts to a

  friend on Facebook), or in Google's cache of old web pages, or the Internet Archive's library of

  public web pages.



  One way of limiting the risks of posting information about yourself on the web is to use the

  privacy settings offered by social sharing sites like Flickr or Facebook, with which you can avoid

  publishing your information to the public web and can define which of your "friends" on the

  same service are allowed access to your information. However, these settings can sometimes

  be confusing and difficult to configure correctly, and it's unclear how robust such privacy

  protections would be against the attacks of a dedicated hacker. There's also the possibility that

  an adversary may try to "friend" you using fake information to pose as someone you know or

  would want to know. (A good rule of thumb is to only become "friends" with people that you

  know personally, after verifying with them via another means of communication — for

  example, by emailing them or calling them — to ensure that they are the ones that actually
  made the request.) Then there's the additional threat of adversaries gaining access to your

  account information by convincing you to use their "app." Finally, of course, there's always the

  risk that one of your "friends" will republish to others the information that you thought you had

  posted privately. So, even if you think you've strictly controlled access to your Facebook profile

  or Flickr page, you should recognize the significant risk that what you post there might leak

  out, and act accordingly.



  Another option, if you're more interested in sharing information and opinion than in socializing,

  is to communicate anonymously, without tying your posts to your real identity. For an

  extended discussion of how to do that safely and effectively, take a look at our guide on "How

  to Blog Safely (About Work or Anything Else)."


Protecting Your Location Information
More on Cell Phone Tracking

  We described earlier how the government can enlist your phone company's help in tracking the

  location of your phone in real time. However, that's not the only location privacy threat posed

  by your cell phone: your provider also keeps records of where your cell phone was each time

  you made or received a phone call.



  In particular, phone companies typically log the cell phone tower you were closest to when you

  called someone or someone called you, as well as which "sector" of the tower's coverage area

  your phone was in. Particularly in urban environments where there are lots of cell towers, such

  records can locate you with a fairly high degree of precision, sometimes to within a city block

  or even within a particular building. The government routinely obtains these kinds of location

  records with only subpoenas and with no notice to the target, although EFF is working hard to

  ensure that such data can only be obtained with a search warrant.
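A back-of-the-envelope calculation shows why tower-plus-sector records are so revealing. A typical cell site splits its coverage into three 120-degree sectors; the tower densities below are illustrative assumptions, not carrier data.

```python
# Rough sketch: how small an area a "tower + sector" record pins you to.
# Assumes three 120-degree sectors per tower; radii below are invented,
# illustrative figures for dense urban vs. suburban tower spacing.
import math

def sector_area_km2(radius_km, sectors=3):
    """Area of one sector of a circular coverage zone, in km^2."""
    return math.pi * radius_km ** 2 / sectors

urban = sector_area_km2(0.2)      # downtown: coverage radius ~200 m
suburban = sector_area_km2(2.0)   # suburban: coverage radius ~2 km

print("urban sector ~%.3f km^2, suburban ~%.1f km^2" % (urban, suburban))
```

Under these assumptions an urban sector covers roughly 0.04 square kilometers, on the order of a city block, which matches the precision described above.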



  Unfortunately, there's nothing you can do to prevent these records from being created short of

  not making phone calls, and turning your phone off to ensure that no one calls you. Indeed,

  turning your phone off might be your only recourse — particularly since some experts have

  advised us that the phone companies not only log the location of your phone when a call is

  made but also log the closest cell tower whenever your phone is turned on, as your phone

  continuously registers itself with the cell network.



  Therefore, as is true with every communications device that you use, your best defense is to

  think before you use your cell phone. Do you really want your phone company to have a log
 reflecting that you were in that part of town at that time? If not, then you should turn the cell

 phone off.



 Another potential solution is to anonymously purchase a prepaid cell phone using cash. The

 phone company will still have the same location data, but it won't be as easily linked to your

 identity. Keep in mind, however, that even if the phone company doesn't have subscriber

 information like your name and address, investigators might be able to quickly associate you

 with the phone based on the people you communicate with, or based on security camera

 footage from the store where you bought the phone.



 For more information about the privacy risks posed by cell phones, take a look at our article on

 mobile devices. You may also want to take a look at the advice offered by MobileActive.org in

 its Primer on Mobile Surveillance.


Summing Up

 Whenever you use technology to communicate, you will necessarily leave traces of your

 activity with third parties like your phone company, your ISP, or your search engine provider.

 If a third party has it, the government can get it, often under weak legal standards and without

 any notice to you. So remember:


   •   Think before you communicate. Do you really want there to be a record of this?

   •   Choose to make a telephone call when you can, rather than using SMS or the Internet,
       unless your communications are encrypted. Otherwise, there may be a record of the
       content of your communication on some third party's server or in an archival database.

   •   Avoid storing your data with third parties when you can. The records you store with
       others receive much less legal protection than those you store yourself.

   •   Use file encryption where possible if you do choose to store data with an online service.

   •   If you are using email or voicemail, delete the copies stored by your communications
       provider as soon as you download or listen to them.

   •   Learn how to hide your identity online and minimize the information that online services
       log about you by learning how to configure your browser and use anonymizing technologies
       like Tor.


 Powerful new communications technologies carry with them powerful risks to the privacy and

 security of your communications. Learn to defend yourself!


Foreign Intelligence and Terrorism Investigations
 All of the government surveillance tactics and standards discussed in previous sections relate

 to law enforcement investigations — that is, investigations for the purpose of gathering

 evidence for criminal prosecution. However, the government also engages in surveillance in

 order to combat foreign threats to national security. When it comes to foreign spies and

 terrorists, the government uses essentially the same tools — searches, wiretaps, pen/traps,

 subpoenas — but operates under much lower legal standards and in much greater secrecy. It's

 important that you understand these foreign intelligence surveillance authorities, such as the

 government's access to records using National Security Letters and its wiretapping powers

 under the Foreign Intelligence Surveillance Act (FISA), so that you can evaluate the risk of such

 surveillance to you or your organization and defend against it.


National Security Letters

 Imagine if the FBI could, with only a piece of paper signed by the special agent in charge of

 your local FBI office, demand detailed information about your private Internet communications

 directly from your ISP, webmail service, or other communications provider. Imagine that it

 could do this:


   •   without court review or approval

   •   without you being suspected of a crime

   •   without ever having to tell you that it happened


 Further imagine that with this piece of paper, the FBI could see a wide range of private details,

 including:


   •   your basic subscriber records, including your true identity and payment information

   •   your Internet Protocol address and the IP address of every Web server you communicate with

   •   the identity of anyone using a particular IP address, username, or email address

   •   the email address or username of everyone you email or IM, or who emails or IMs you

   •   the time, size in bytes, and duration of each of your communications, and possibly even the
       web address of every website you visit


 Finally, imagine that the FBI could use the same piece of paper to gain access to your private

 credit and financial information — and that your ISP, bank, and any other business from which

 the FBI gathers your private records is barred by law from notifying you.
  Now, stop imagining: the FBI already has this authority, in the form of National Security

  Letters. These are essentially secret subpoenas that are issued directly by the FBI without any

  court involvement. Thanks to the USA PATRIOT Act, the only requirement the government

  must meet to issue an NSL is that the FBI must certify in the letter that the information it is

  seeking is relevant to an authorized investigation to protect against international terrorism or

  clandestine intelligence activities.



  The number of National Security Letters used each year is classified, but the Washington Post

  has reported that by late 2005, the government had on average issued 30,000 National

  Security Letters each year since the PATRIOT Act passed in 2001. That’s a hundredfold

  increase over the pre-PATRIOT numbers.



  Further revelations by the Justice Department's Inspector General in 2007 showed that in many cases, the FBI

  had failed even to meet the weak post-PATRIOT National Security Letter standards, illegally

  issuing so-called "exigent letters" to communications providers asking for the same information

  National Security Letters are used to obtain, but without meeting the minimal requirement that

  the requested information be relevant to an authorized terrorism or espionage investigation.

  EFF has since sued the Department of Justice to learn more about how the government has

  been abusing its National Security Letter authority.


The FBI's Secret Scrutiny
The FBI came calling in Windsor, Conn., this summer with a document marked for delivery by
hand. On Matianuk Avenue, across from the tennis courts, two special agents found their man.
They gave George Christian the letter, which warned him to tell no one, ever, what it said.
Under the shield and stars of the FBI crest, the letter directed Christian to surrender "all
subscriber information, billing information and access logs of any person" who used a specific
computer at a library branch some distance away. Christian, who manages digital records for
three dozen Connecticut libraries, said in an affidavit that he configures his system for privacy.
But the vendors of the software he operates said their databases can reveal the Web sites that
visitors browse, the e-mail accounts they open and the books they borrow.
Christian refused to hand over those records, and his employer, Library Connection Inc., filed
suit for the right to protest the FBI demand in public. The Washington Post established their
identities -- still under seal in the U.S. Court of Appeals for the 2nd Circuit -- by comparing
unsealed portions of the file with public records and information gleaned from people who had
no knowledge of the FBI demand.
The Connecticut case affords a rare glimpse of an exponentially growing practice of domestic
surveillance under the USA Patriot Act, which marked its fourth anniversary on Oct. 26.
"National security letters," created in the 1970s for espionage and terrorism investigations,
originated as narrow exceptions in consumer privacy law, enabling the FBI to review in secret
the customer records of suspected foreign agents. The Patriot Act, and Bush administration
guidelines for its use, transformed those letters by permitting clandestine scrutiny of U.S.
residents and visitors who are not alleged to be terrorists or spies.
The FBI now issues more than 30,000 national security letters a year, according to government
sources, a hundredfold increase over historic norms. The letters -- one of which can be used to
sweep up the records of many people -- are extending the bureau's reach as never before into the
telephone calls, correspondence and financial lives of ordinary Americans.
Issued by FBI field supervisors, national security letters do not need the imprimatur of a
prosecutor, grand jury or judge. They receive no review after the fact by the Justice Department
or Congress. The executive branch maintains only statistics, which are incomplete and confined
to classified reports. The Bush administration defeated legislation and a lawsuit to require a
public accounting, and has offered no example in which the use of a national security letter
helped disrupt a terrorist plot.
The burgeoning use of national security letters coincides with an unannounced decision to
deposit all the information they yield into government data banks -- and to share those private
records widely, in the federal government and beyond. In late 2003, the Bush administration
reversed a long-standing policy requiring agents to destroy their files on innocent American
citizens, companies and residents when investigations closed. Late last month, President Bush
signed Executive Order 13388, expanding access to those files for "state, local and tribal"
governments and for "appropriate private sector entities," which are not defined.
National security letters offer a case study of the impact of the Patriot Act outside the spotlight of
political debate. Drafted in haste after the Sept. 11, 2001, attacks, the law's 132 pages wrought
scores of changes in the landscape of intelligence and law enforcement. Many received far more
attention than the amendments to a seemingly pedestrian power to review "transactional
records." But few if any other provisions touch as many ordinary Americans without their
knowledge.
Senior FBI officials acknowledged in interviews that the proliferation of national security letters
results primarily from the bureau's new authority to collect intimate facts about people who are
not suspected of any wrongdoing. Criticized for failure to detect the Sept. 11 plot, the bureau
now casts a much wider net, using national security letters to generate leads as well as to pursue
them. Casual or unwitting contact with a suspect -- a single telephone call, for example -- may
attract the attention of investigators and subject a person to scrutiny about which he never learns.
A national security letter cannot be used to authorize eavesdropping or to read the contents of
e-mail. But it does permit investigators to trace revealing paths through the private affairs of a
modern digital citizen. The records it yields describe where a person makes and spends money,
with whom he lives and lived before, how much he gambles, what he buys online, what he
pawns and borrows, where he travels, how he invests, what he searches for and reads on the
Web, and who telephones or e-mails him at home and at work.
As it wrote the Patriot Act four years ago, Congress bought time and leverage for oversight by
placing an expiration date on 16 provisions. The changes involving national security letters were
not among them. In fact, as the Dec. 31 deadline approaches and Congress prepares to renew or
make permanent the expiring provisions, House and Senate conferees are poised again to amplify
the FBI's power to compel the secret surrender of private records.
The House and Senate have voted to make noncompliance with a national security letter a
criminal offense. The House would also impose a prison term for breach of secrecy.
Like many Patriot Act provisions, the ones involving national security letters have been debated
in largely abstract terms. The Justice Department has offered Congress no concrete information,
even in classified form, save for a partial count of the number of letters delivered. The statistics
do not cover all forms of national security letters or all U.S. agencies making use of them.
"The beef with the NSLs is that they don't have even a pretense of judicial or impartial scrutiny,"
said former representative Robert L. Barr Jr. (Ga.), who finds himself allied with the American
Civil Liberties Union after a career as prosecutor, CIA analyst and conservative GOP stalwart.
"There's no checks and balances whatever on them. It is simply some bureaucrat's decision that
they want information, and they can basically just go and get it."
'A Routine Tool'

Career investigators and Bush administration officials emphasized, in congressional testimony
and interviews for this story, that national security letters are for hunting terrorists, not fishing
through the private lives of the innocent. The distinction is not as clear in practice.
Under the old legal test, the FBI had to have "specific and articulable" reasons to believe the
records it gathered in secret belonged to a terrorist or a spy. Now the bureau needs only to certify
that the records are "sought for" or "relevant to" an investigation "to protect against international
terrorism or clandestine intelligence activities."
That standard enables investigators to look for conspirators by sifting the records of nearly
anyone who crosses a suspect's path.
"If you have a list of, say, 20 telephone numbers that have come up . . . on a bad guy's
telephone," said Valerie E. Caproni, the FBI's general counsel, "you want to find out who he's in
contact with." Investigators will say, " 'Okay, phone company, give us subscriber information
and toll records on these 20 telephone numbers,' and that can easily be 100."
Bush administration officials compare national security letters to grand jury subpoenas, which
are also based on "relevance" to an inquiry. There are differences. Grand juries tend to have a
narrower focus because they investigate past conduct, not the speculative threat of unknown
future attacks. Recipients of grand jury subpoenas are generally free to discuss the subpoenas
publicly. And there are strict limits on sharing grand jury information with government agencies.
Since the Patriot Act, the FBI has dispersed the authority to sign national security letters to more
than five dozen supervisors -- the special agents in charge of field offices, the deputies in New
York, Los Angeles and Washington, and a few senior headquarters officials. FBI rules
established after the Patriot Act allow the letters to be issued long before a case is judged
substantial enough for a "full field investigation." Agents commonly use the letters now in
"preliminary investigations" and in the "threat assessments" that precede a decision whether to
launch an investigation.
"Congress has given us this tool to obtain basic telephone data, basic banking data, basic credit
reports," said Caproni, who is among the officials with signature authority. "The fact that a
national security letter is a routine tool used, that doesn't bother me."
If agents had to wait for grounds to suspect a person of ill intent, said Joseph Billy Jr., the FBI's
deputy assistant director for counterterrorism, they would already know what they want to find
out with a national security letter. "It's all chicken and egg," he said. "We're trying to determine
if someone warrants scrutiny or doesn't."
Billy said he understands that "merely being in a government or FBI database . . . gives
everybody, you know, neck hair standing up." Innocent Americans, he said, "should take comfort
at least knowing that it is done under a great deal of investigative care, oversight, within the
parameters of the law."
He added: "That's not going to satisfy a majority of people, but . . . I've had people say, you
know, 'Hey, I don't care, I've done nothing to be concerned about. You can have me in your files
and that's that.' Some people take that approach."
'Don't Go Overboard'

In Room 7975 of the J. Edgar Hoover Building, around two corners from the director's suite, the
chief of the FBI's national security law unit sat down at his keyboard about a month after the
Patriot Act became law. Michael J. Woods had helped devise the FBI wish list for surveillance
powers. Now he offered a caution.
"NSLs are powerful investigative tools, in that they can compel the production of substantial
amounts of relevant information," he wrote in a Nov. 28, 2001, "electronic communication" to
the FBI's 56 field offices. "However, they must be used judiciously." Standing guidelines, he
wrote, "require that the FBI accomplish its investigations through the 'least intrusive' means. . . .
The greater availability of NSLs does not mean that they should be used in every case."
Woods, who left government service in 2002, added a practical consideration. Legislators
granted the new authority and could as easily take it back. When making that decision, he wrote,
"Congress certainly will examine the manner in which the FBI exercised it."
Looking back last month, Woods was struck by how starkly he misjudged the climate. The FBI
disregarded his warning, and no one noticed.
"This is not something that should be automatically done because it's easy," he said. "We need to
be sure . . . we don't go overboard."
One thing Woods did not anticipate was then-Attorney General John D. Ashcroft's revision of
Justice Department guidelines. On May 30, 2002, and Oct. 31, 2003, Ashcroft rewrote the
playbooks for investigations of terrorist crimes and national security threats. He gave overriding
priority to preventing attacks by any means available.
Ashcroft remained bound by Executive Order 12333, which requires the use of the "least
intrusive means" in domestic intelligence investigations. But his new interpretation came close to
upending the mandate. Three times in the new guidelines, Ashcroft wrote that the FBI "should
consider . . . less intrusive means" but "should not hesitate to use any lawful techniques . . . even
if intrusive" when investigators believe them to be more timely. "This point," he added, "is to be
particularly observed in investigations relating to terrorist activities."
'Why Do You Want to Know?'

As the Justice Department prepared congressional testimony this year, FBI headquarters
searched for examples that would show how expanded surveillance powers made a difference.
Michael Mason, who runs the Washington field office and has the rank of assistant FBI director,
found no ready answer.
"I'd love to have a made-for-Hollywood story, but I don't have one," Mason said. "I am not even
sure such an example exists."
What national security letters give his agents, Mason said, is speed.
"I have 675 terrorism cases," he said. "Every one of these is a potential threat. And anything I
can do to get to the bottom of any one of them more quickly gets me closer to neutralizing a
potential threat."
Because recipients are permanently barred from disclosing the letters, outsiders can make no
assessment of their relevance to Mason's task.

Woods, the former FBI lawyer, said secrecy is essential when an investigation begins because "it
would defeat the whole purpose" to tip off a suspected terrorist or spy, but national security
seldom requires that the secret be kept forever. Even mobster "John Gotti finds out eventually
that he was wiretapped" in a criminal probe, said Peter Swire, the federal government's chief
privacy counselor until 2001. "Anyone caught up in an NSL investigation never gets notice."
To establish the "relevance" of the information they seek, agents face a test so basic it is hard to
come up with a plausible way to fail. A model request for a supervisor's signature, according to
internal FBI guidelines, offers this one-sentence suggestion: "This subscriber information is
being requested to determine the individuals or entities that the subject has been in contact with
during the past six months."
Edward L. Williams, the chief division counsel in Mason's office, said that supervisors, in
practice, "aren't afraid to ask . . . 'Why do you want to know?' " He would not say how many
requests, if any, are rejected.
'The Abuse Is in the Power Itself'

Those who favor the new rules maintain -- as Sen. Pat Roberts (R-Kan.), chairman of the Senate
Select Committee on Intelligence, put it in a prepared statement -- that "there has not been one
substantiated allegation of abuse of these lawful intelligence tools."
What the Bush administration means by abuse is unauthorized use of surveillance data -- for
example, to blackmail an enemy or track an estranged spouse. Critics are focused elsewhere.
What troubles them is not unofficial abuse but the official and routine intrusion into private lives.
To Jeffrey Breinholt, deputy chief of the Justice Department's counterterrorism section, the civil
liberties objections "are eccentric." Data collection on the innocent, he said, does no harm unless
"someone [decides] to act on the information, put you on a no-fly list or something." Only a
serious error, he said, could lead the government, based on nothing more than someone's bank or
phone records, "to freeze your assets or go after you criminally and you suffer consequences that
are irreparable." He added: "It's a pretty small chance."
"I don't necessarily want somebody knowing what videos I rent or the fact that I like cartoons,"
said Mason, the Washington field office chief. But if those records "are never used against a
person, if they're never used to put him in jail, or deprive him of a vote, et cetera, then what is the
argument?"
Barr, the former congressman, said that "the abuse is in the power itself."
"As a conservative," he said, "I really resent an administration that calls itself conservative taking
the position that the burden is on the citizen to show the government has abused power, and
otherwise shut up and comply."
At the ACLU, staff attorney Jameel Jaffer spoke of "the profound chilling effect" of this kind of
surveillance: "If the government monitors the Web sites that people visit and the books that they
read, people will stop visiting disfavored Web sites and stop reading disfavored books. The FBI
should not have unchecked authority to keep track of who visits [al-Jazeera's Web site] or who
visits the Web site of the Federalist Society."
Links in a Chain

Ready access to national security letters allows investigators to employ them routinely for
"contact chaining."
"Starting with your bad guy and his telephone number and looking at who he's calling, and [then]
who they're calling," the number of people surveilled "goes up exponentially," acknowledged
Caproni, the FBI's general counsel.

But Caproni said it would not be rational for the bureau to follow the chain too far. "Everybody's
connected" if investigators keep tracing calls "far enough away from your targeted bad guy," she
said. "What's the point of that?"
One point is to fill government data banks for another investigative technique. That one is called
"link analysis," a practice Caproni would neither confirm nor deny.
Two years ago, Ashcroft rescinded a 1995 guideline directing that information obtained through
a national security letter about a U.S. citizen or resident "shall be destroyed by the FBI and not
further disseminated" if it proves "not relevant to the purposes for which it was collected."
Ashcroft's new order was that "the FBI shall retain" all records it collects and "may disseminate"
them freely among federal agencies.
The same order directed the FBI to develop "data mining" technology to probe for hidden links
among the people in its growing cache of electronic files. According to an FBI status report, the
bureau's office of intelligence began operating in January 2004 a new Investigative Data
Warehouse, based on the same Oracle technology used by the CIA. The CIA is generally
forbidden to keep such files on Americans.
Data mining intensifies the impact of national security letters, because anyone's personal files
can be scrutinized again and again without a fresh need to establish relevance.
"The composite picture of a person which emerges from transactional information is more telling
than the direct content of your speech," said Woods, the former FBI lawyer. "That's certainly not
been lost on the intelligence community and the FBI."
Ashcroft's new guidelines allowed the FBI for the first time to add to government files consumer
data from commercial providers such as LexisNexis and ChoicePoint Inc. Previous attorneys
general had decided that such a move would violate the Privacy Act. In many field offices,
agents said, they now have access to ChoicePoint in their squad rooms.
What national security letters add to government data banks is information that no commercial
service can lawfully possess. Strict privacy laws, for example, govern financial and
communications records. National security letters -- along with the more powerful but much less
frequently used secret subpoenas from the Foreign Intelligence Surveillance Court -- override
them.
'What Happens in Vegas'

The bureau displayed its ambition for data mining in an emergency operation at the end of 2003.
The Department of Homeland Security declared an orange alert on Dec. 21 of that year, in part
because of intelligence that hinted at a New Year's Eve attack in Las Vegas. The identities of the
plotters were unknown.
The FBI sent Gurvais Grigg, chief of the bureau's little-known Proactive Data Exploitation Unit,
in an audacious effort to assemble a real-time census of every visitor in the nation's most-visited
city. An average of about 300,000 tourists a day stayed an average of four days each, presenting
Grigg's team with close to a million potential suspects in the ensuing two weeks.
A former stockbroker with a degree in biochemistry, Grigg declined to be interviewed.
Government and private sector sources who followed the operation described epic efforts to
vacuum up information.
An interagency task force began pulling together the records of every hotel guest, everyone who
rented a car or truck, every lease on a storage space, and every airplane passenger who landed in
the city. Grigg's unit filtered that population for leads. Any link to the known terrorist universe --
a shared address or utility account, a check deposited, a telephone call -- could give investigators
a start.
"It was basically a manhunt, and in circumstances where there is a manhunt, the most effective
way of doing that was to scoop up a lot of third party data and compare it to other data we were
getting," Breinholt said.
Investigators began with emergency requests for help from the city's sprawling hospitality
industry. "A lot of it was done voluntary at first," said Billy, the deputy assistant FBI director.
According to others directly involved, investigators turned to national security letters and grand
jury subpoenas when friendly persuasion did not work.
Early in the operation, according to participants, the FBI gathered casino executives and asked
for guest lists. The MGM Mirage company, followed by others, balked.
"Some casinos were saying no to consent [and said], 'You have to produce a piece of paper,' "
said Jeff Jonas, chief scientist at IBM Entity Analytics, who previously built data management
systems for casino surveillance. "They don't just market 'What happens in Vegas stays in Vegas.'
They want it to be true."
The operation remained secret for about a week. Then casino sources told Rod Smith, gaming
editor of the Las Vegas Review-Journal, that the FBI had served national security letters on
them. In an interview for this article, one former casino executive confirmed the use of a national
security letter. Details remain elusive. Some law enforcement officials, speaking on the condition
of anonymity because they had not been authorized to divulge particulars, said they relied
primarily on grand jury subpoenas. One said in an interview that national security letters may
eventually have been withdrawn. Agents encouraged voluntary disclosures, he said, by raising
the prospect that the FBI would use the letters to gather something more sensitive: the gambling
profiles of casino guests. Caproni declined to confirm or deny that account.
What happened in Vegas stayed in federal data banks. Under Ashcroft's revised policy, none of
the information has been purged. For every visitor, Breinholt said, "the record of the Las Vegas
hotel room would still exist."
Grigg's operation found no suspect, and the orange alert ended on Jan. 10, 2004. "The whole
thing washed out," one participant said.
'Of Interest to President Bush'
At around the time the FBI found George Christian in Connecticut, agents from the bureau's
Charlotte field office paid an urgent call on the chemical engineering department at North
Carolina State University in Raleigh. They were looking for information about a former student
named Magdy Nashar, then suspected in the July 7 London subway bombing but since cleared of
suspicion.
University officials said in interviews late last month that the FBI tried to use a national security
letter to demand much more information than the law allows.
David T. Drooz, the university's senior associate counsel, said special authority is required for
the surrender of records protected by educational and medical privacy. The FBI's first request, a
July 14 grand jury subpoena, did not appear to supply that authority, Drooz said, and the
university did not honor it. Referring to notes he took that day, Drooz said Eric Davis, the FBI's
top lawyer in Charlotte, "was focused very much on the urgency" and "he even indicated the case
was of interest to President Bush."
The next day, July 15, FBI agents arrived with a national security letter. Drooz said it demanded
all records of Nashar's admission, housing, emergency contacts, use of health services and
extracurricular activities. University lawyers "looked up what law we could on the fly," he said.
They discovered that the FBI was demanding files that national security letters have no power to
obtain. The statute the FBI cited that day covers only telephone and Internet records.
"We're very eager to comply with the authorities in this regard, but we needed to have what we
felt was a legally valid procedure," said Larry A. Neilsen, the university provost.
Soon afterward, the FBI returned with a new subpoena. It was the same as the first one, Drooz
said, and the university still had doubts about its legal sufficiency. This time, however, it came
from New York and summoned Drooz to appear personally. The tactic was "a bit heavy-
handed," Drooz said, "the implication being you're subject to contempt of court." Drooz
surrendered the records.
The FBI's Charlotte office referred questions to headquarters. A high-ranking FBI official, who
spoke on the condition of anonymity, acknowledged that the field office erred in attempting to
use a national security letter. Investigators, he said, "were in a big hurry for obvious reasons" and
did not approach the university "in the exact right way."
'Unreasonable' or 'Oppressive'

The electronic docket in the Connecticut case, as the New York Times first reported, briefly
titled the lawsuit Library Connection Inc. v. Gonzales . Because identifying details were not
supposed to be left in the public file, the court soon replaced the plaintiff's name with "John
Doe."
George Christian, Library Connection's executive director, is identified in his affidavit as "John
Doe 2." In that sworn statement, he said people often come to libraries for information that is
"highly sensitive, embarrassing or personal." He wanted to fight the FBI but feared calling a
lawyer because the letter said he could not disclose its existence to "any person." He consulted
Peter Chase, vice president of Library Connection and chairman of a state intellectual freedom
committee. Chase -- "John Doe 1" in his affidavit -- advised Christian to call the ACLU.
Reached by telephone at their homes, both men declined to be interviewed.
U.S. District Judge Janet C. Hall ruled in September that the FBI gag order violates Christian's,
and Library Connection's, First Amendment rights. A three-judge panel heard oral argument on
Wednesday in the government's appeal.
The central facts remain opaque, even to the judges, because the FBI is not obliged to describe
what it is looking for, or why. During oral argument in open court on Aug. 31, Hall said one
government explanation was so vague that "if I were to say it out loud, I would get quite a laugh
here." After the government elaborated in a classified brief delivered for her eyes only, she wrote
in her decision that it offered "nothing specific."
The Justice Department tried to conceal the existence of the first and only other known lawsuit
against a national security letter, also brought by the ACLU's Jaffer and Ann Beeson.
Government lawyers opposed its entry into the public docket of a New York federal judge. They
have since tried to censor nearly all the contents of the exhibits and briefs. They asked the judge,
for example, to black out every line of the affidavit that describes the delivery of the national
security letter to a New York Internet company, including, "I am a Special Agent of the Federal
Bureau of Investigation ('FBI')."
U.S. District Judge Victor Marrero, in a ruling that is under appeal, held that the law authorizing
national security letters violates the First and Fourth Amendments.
Resistance to national security letters is rare. Most of them are served on large companies in
highly regulated industries, with business interests that favor cooperation. The in-house lawyers
who handle such cases, said Jim Dempsey, executive director of the Center for Democracy and
Technology, "are often former prosecutors -- instinctively pro-government but also instinctively
by-the-books." National security letters give them a shield against liability to their customers.
Kenneth M. Breen, a partner at the New York law firm Fulbright & Jaworski, held a seminar for
corporate lawyers one recent evening to explain the "significant risks for the non-compliant" in
government counterterrorism investigations. A former federal prosecutor, Breen said failure to
provide the required information could create "the perception that your company didn't live up to
its duty to fight terrorism" and could invite class-action lawsuits from the families of terrorism
victims. In extreme cases, he said, a business could face criminal prosecution, "a 'death sentence'
for certain kinds of companies."
The volume of government information demands, even so, has provoked a backlash. Several
major business groups, including the National Association of Manufacturers and the U.S.
Chamber of Commerce, complained in an Oct. 4 letter to senators that customer records can "too
easily be obtained and disseminated" around the government. National security letters, they
wrote, have begun to impose an "expensive and time-consuming burden" on business.
The House and Senate bills renewing the Patriot Act do not tighten privacy protections, but they
offer a concession to business interests. In both bills, a judge may modify a national security
letter if it imposes an "unreasonable" or "oppressive" burden on the company that is asked for
information.
'A Legitimate Question'

As national security letters have grown in number and importance, oversight has not kept up. In
each house of Congress, jurisdiction is divided between the judiciary and intelligence
committees. None of the four Republican chairmen agreed to be interviewed.
Roberts, the Senate intelligence chairman, said in a statement issued through his staff that "the
committee is well aware of the intelligence value of the information that is lawfully collected
under these national security letter authorities," which he described as "non-intrusive" and
"crucial to tracking terrorist networks and detecting clandestine intelligence activities." Senators
receive "valuable reporting by the FBI," he said, in "semi-annual reports [that] provide the
committee with the information necessary to conduct effective oversight."
Roberts was referring to the Justice Department's classified statistics, which in fact have been
delivered three times in four years. They include the following information: how many times the
FBI issued national security letters; whether the letters sought financial, credit or
communications records; and how many of the targets were "U.S. persons." The statistics omit
one whole category of FBI national security letters and also do not count letters issued by the
Defense Department and other agencies.
Committee members have occasionally asked to see a sampling of national security letters, a
description of their fruits or examples of their contribution to a particular case. The Justice
Department has not obliged.
In 2004, the conference report attached to the intelligence authorization bill asked the attorney
general to "include in his next semiannual report" a description of "the scope of such letters" and
the "process and standards for approving" them. More than a year has passed without a Justice
Department reply.
"The committee chairman has the power to issue subpoenas" for information from the executive
branch, said Rep. Zoe Lofgren (D-Calif.), a House Judiciary Committee member. "The minority
has no power to compel, and . . . Republicans are not going to push for oversight of the
Republicans. That's the story of this Congress."
In the executive branch, no FBI or Justice Department official audits the use of national security
letters to assess whether they are appropriately targeted, lawfully applied or contribute important
facts to an investigation.
Justice Department officials noted frequently this year that Inspector General Glenn A. Fine
reports twice a year on abuses of the Patriot Act and has yet to substantiate any complaint. (One
investigation is pending.) Fine advertises his role, but there is a puzzle built into the mandate.
Under what scenario could a person protest a search of his personal records if he is never
notified?
"We do rely upon complaints coming in," Fine said in House testimony in May. He added: "To
the extent that people do not know of anything happening to them, there is an issue about
whether they can complain. So, I think that's a legitimate question."
Asked more recently whether Fine's office has conducted an independent examination of national
security letters, Deputy Inspector General Paul K. Martin said in an interview: "We have not
initiated a broad-based review that examines the use of specific provisions of the Patriot Act."
At the FBI, senior officials said the most important check on their power is that Congress is
watching.
"People have to depend on their elected representatives to do the job of oversight they were
elected to do," Caproni said. "And we think they do a fine job of it."
Researcher Julie Tate and research editor Lucy Shackelford contributed to this report.




Abuse of Authority Revelations
Sunday, March 11, 2007
THE EXPANSION of law enforcement powers approved by Congress after Sept. 11 and
contained in the USA Patriot Act was conditioned on the notion that these new authorities would
be carefully used and closely monitored. An infuriating report released Friday by the Justice
Department's inspector general, Glenn A. Fine, demonstrates that the Federal Bureau of
Investigation treated its new powers with anything but that kind of restraint. The report depicts
an FBI cavalierly using its expanded power to issue "national security letters" without adequate
oversight or justification.
National security letters are used to obtain information such as credit and financial data and
telephone or e-mail subscriber records (but not the content of messages) without having to secure
a court order. The Patriot Act made it far easier for the FBI to use this tool. Now, the information
needs to be only "relevant" to a terrorism or espionage case -- involving any individual swept up
in the case, not just the target -- and the heads of FBI field offices can approve the search.
Having obtained these far-reaching new powers, according to the report, the FBI proceeded to
"seriously misuse" them. It didn't establish clear guidelines for using national security letters,
didn't institute an adequate system for approving requests and didn't put in place procedures to
purge information if the investigation fizzled. Although the FBI itself reported to a review board
a mere 26 instances in which information was improperly obtained, the real number appears to
be much higher. Of just 77 files reviewed by the inspector general, 17 -- 22 percent -- revealed
one or more instances in which information may have been obtained in violation of the law.
Indeed, the FBI's procedures were so slipshod, the report concludes, that it didn't even keep
proper count of how many such letters were issued. The use of these letters ballooned from 8,500
in 2000 to 47,000 in 2005 -- but that "significantly understated" the real numbers, the report
found.
Beyond that -- and perhaps the most disturbing revelation in a disturbing document -- the FBI
came up with a category of demands called exigent letters, in which agents got around even the
minimal requirements of national security letters. These exigent letters -- signed by FBI
counterterrorism personnel not authorized to sign national security letters -- assured telephone
companies on the receiving end that investigators faced an emergency situation and that
subpoenas or national security letters would follow. In fact, according to the account of the more
than 700 such letters, many times there were no urgent circumstances, and many times the
promised follow-up authorization never happened. This lawless practice was so egregious it was
stopped last May, FBI Director Robert S. Mueller III announced.
Mr. Mueller deserves credit for taking responsibility for the debacle. "I am the person
responsible, I am the person accountable, and I am committed to ensuring that we correct these
deficiencies and live up to these responsibilities," Mr. Mueller said at a news conference. Added
Attorney General Alberto R. Gonzales, "To say that I'm concerned about what has been revealed
in this report would be an enormous understatement." Those words, however heartfelt, do not
excuse the director or the attorney general from the travesty that occurred on their watch and for
failing, inexplicably, to put the proper controls in place. President Bush expressed confidence in
the two officials yesterday but asked at a news conference during his trip to Uruguay, "What are
you going to do to solve the problem, and how fast can you get it solved?" His expression of
confidence might be premature, but the questions are good ones.


Surveillance Under the Foreign Intelligence Surveillance Act (FISA)
The History of FISA
As stated above, the government was free to wiretap whenever it wanted to in law enforcement investigations until the Supreme Court addressed the issue in 1967, and Congress passed the Wiretap Act in 1968. Similarly, the legality of warrantless searches and wiretaps in national security investigations, as opposed to law enforcement investigations, wasn't settled until the seventies.

In 1972, the Supreme Court ruled on the use of wiretaps in national security cases. In that case, a group of Americans protesting the Vietnam War tried to blow up their local CIA recruiting office. Investigators collected evidence against them with a wiretap but without getting a wiretap order, and argued in court that since the investigation was for national security, the president had the authority to authorize surveillance without having to go through the courts.

The Supreme Court held that the government didn't have unlimited power to conduct surveillance without the approval of a judge just by claiming the investigation was for national security, at least when investigating domestic threats to national security (that is, threats from U.S. citizens and legal residents). It left open whether or not such warrantless surveillance was allowed when investigating foreign threats.

After this decision, and after revelations throughout the seventies that the government had been engaging in an enormous amount of unauthorized spying during the 1960s and early 1970s, Congress decided to provide a legal framework to rein in foreign intelligence investigations. The Foreign Intelligence Surveillance Act of 1978 (or "FISA"), along with later amendments to that act, created a warrant procedure for foreign intelligence investigations so that there would no longer be any foreign intelligence surveillance without court oversight.

FISA in Action
FISA requires the government to get search warrants and wiretap orders from a court even when it is investigating foreign threats to national security. However, the FISA process is different from the law enforcement processes described in earlier sections.

First, all government requests for foreign intelligence surveillance authorization are made to a secret court: the FISA court. In order to get authorization, a significant purpose of the surveillance must be to gather foreign intelligence information — information about foreign spies, foreign terrorists, and other foreign threats — instead of evidence of a crime.

Most importantly, the probable cause standard is very different. Instead of having to show probable cause that a crime is being, has been, or will be committed, the government must show that the target of the surveillance is a foreign power or an agent of a foreign power.

Also unlike law enforcement surveillance, the target is never told by the government that he/she was spied on, and every person that is served with a FISA search warrant, wiretap or pen/trap order, or subpoena is also served with a gag order forbidding them from ever telling anyone about it except their lawyer.

Foreign Powers and Their Agents. So, what exactly qualifies as a foreign power or agent of a foreign power when it comes to FISA surveillance? It's a bit unclear. The FISA law defines those terms only vaguely, and without any access to the decisions of the secret FISA court, there's no way of telling how broadly or narrowly the definitions are being interpreted.

According to FISA, a Foreign Power is defined to include:


  •   Any foreign government or component of a foreign government, whether or not officially
      recognized by the United States

  •   Any "faction" of a foreign nation or nations, or any foreign-based political organization, that
      isn't "substantially" composed of United States persons ("faction" and "substantially" aren't
      defined; a U.S. person is a citizen or a legal resident of the U.S.)

  •   Any entity, like a political organization or a business, that is directed or controlled by a foreign
      government

  •   Any group engaged in, or preparing to engage in, "international terrorism." ("International
      terrorism" is broadly defined as activities that (1) involve violent acts or acts dangerous to
      human life that are a violation of U.S. criminal laws or would be a violation if committed in the
      U.S., (2) appear to be intended to intimidate or coerce a civilian population, to influence the
      policy of a government by intimidation or coercion, or to affect the conduct of a government
      by assassination or kidnapping, and (3) occur totally outside the U.S., or transcend national
      boundaries in terms of how they are accomplished, the people they are intended to coerce or
      intimidate, or the place where the terrorists operate)


According to FISA, an Agent of a Foreign Power is defined to include:


  •   Anyone who is not a U.S. person and is an officer or employee of a foreign power

  •   Anyone who is not a U.S. person and engages in "clandestine intelligence activities" (spying)
      in the U.S. on behalf of a foreign power, or any U.S. person who does the same and may be
      violating the law. So, if you're not a U.S. person, you don't have to be suspected of a crime;
      and even if you are a U.S. person, that suspicion doesn't have to meet traditional probable
      cause standards

  •   Anyone, whether a U.S. person or not, who engages in or prepares for acts of international
      terrorism or sabotage
If you think that all sounds like very vague gobbledygook, you're right. No one really knows what these terms mean other than the FISA court, which won't release its decisions.

And it's even worse for FISA subpoenas, which can be used to force anyone to hand over anything in complete secrecy, and which were greatly strengthened by Section 215 of the USA PATRIOT Act. The government doesn't have to show probable cause that the target is a foreign power or agent — only that they are seeking the requested records "for" an intelligence or terrorism investigation. Once the government makes this assertion, the court must issue the subpoena.

Police at the door: FISA Orders and National Security Letters

If federal agents serve you with a FISA warrant or subpoena, or a National Security Letter, the advice given for regular warrants and subpoenas applies. However, FISA orders and National Security Letters will also come with a gag order that forbids you from discussing them. Do NOT violate the gag order. Only speak to members of your organization whose participation is necessary to comply with the order, and your lawyer. The constitutionality of FISA orders and especially National Security Letters is a matter of great dispute — in particular, several courts have found that the gag order that comes with a National Security Letter violates the First Amendment — and you may be able to successfully challenge the government's demand in court. If you do decide to seek counsel and do not have a lawyer of your own, you can call the lawyers at EFF.

FISA Wiretap Statistics
Like law enforcement wiretaps, FISA surveillance is relatively rare. Also like law enforcement wiretaps, however, FISA surveillance probably sweeps in the communications of a great many people. Because the information released about FISA surveillance is so limited, though, it's impossible to gauge just how many people are affected and how many communications are intercepted. The only public data available on FISA are the numbers of applications made to, and approved by, the FISA court. And those numbers have steadily increased through the years, to the point where FISA orders now outnumber all federal and state wiretap orders combined! For example, in 2007, 2,370 applications for FISA wiretaps were granted by the FISA court, compared to 2,208 state and federal wiretaps reported in the same year. And each application can contain a request for more than one type of surveillance — for example, a wiretap, a secret search, and secret subpoenas.

Like with law enforcement wiretaps, your FISA wiretap risk is very low, as is the risk of being subjected to a secret physical search under FISA. The risk of having records about you secretly subpoenaed under FISA is much higher, but if it's your communications records the government is after, they're more likely to use a National Security Letter.

Privacy tip: Foreign Intelligence Surveillance

If your organization deals with lots of non-U.S. persons or any foreign governments or foreign-based organizations, you will likely face a higher risk of foreign intelligence surveillance, and should factor that risk into your security decision-making.

Beyond FISA
The NSA Surveillance Program, the Protect America Act and the FISA Amendments Act


FISA is a dangerously weak restraint on the government's power to secretly spy on Americans without probable cause of a crime, particularly since passage of the USA PATRIOT Act in 2001. Yet just as the Bush Administration was successfully lobbying Congress to expand its FISA surveillance authority through the USA PATRIOT Act, it was already building a new surveillance program at the National Security Agency (NSA) that would secretly ignore FISA's limitations and spy on Americans without first going to the FISA court.

The NSA's Surveillance Program Revealed

In a story published on December 16, 2005, the New York Times first revealed to the country that since 9/11, the NSA had regularly targeted Americans in the U.S. for electronic surveillance without first obtaining the required court orders from the FISA court. The president and his representatives quickly admitted that the Bush administration had chosen to bypass FISA as part of its "Terrorist Surveillance Program" or "TSP." The administration claimed that the TSP was narrowly targeted at international communications — i.e., communications into and out of the country — where at least one of the parties had known links to terrorist organizations. The president made the frighteningly broad claim that because of his inherent power under the Constitution to combat foreign threats as Commander-in-Chief, he had the authority to order such warrantless surveillance regardless of FISA's dictates or the Fourth Amendment.

 However, the warrantless surveillance proved to be much broader than the "narrow and

 targeted" program that the president described. Further reporting by the Times and other

 papers made clear that the NSA's surveillance program went far beyond the admitted "TSP."

 Those news stories, along with whistleblower evidence [PDF], demonstrated that the NSA

 program amounted to an untargeted dragnet of millions of ordinary Americans' domestic
 communications and communications records. With the cooperation of the country's major

 telecommunications companies such as AT&T, the NSA had illegally gained backdoor access to

 critical telecommunications switching facilities and communications records databases around

 the nation. With that illegal access, the government was vacuuming up all of the data passing

 through those facilities — not only records of who communicated with whom and when but also

 the content of nearly every American's private communications — as part of a vast data-mining

 program. In response to the mounting evidence of a dragnet surveillance program (view a

 summary of all of that evidence [PDF]), EFF brought suit against AT&T in 2006 — and later, in

 2008, against the government itself — on behalf of ordinary AT&T customers seeking to stop

 the warrantless surveillance of their telephone and Internet communications. You can find out

 more about the progress of those lawsuits, Hepting v. AT&T and Jewel v. NSA, at our NSA

 Multi-District Litigation page.


The Protect America Act of 2007, the FISA Amendments Act of 2008, and the
Future of the NSA's Surveillance Program

 One might expect that the revelation of a massive and illegal spying program would lead to

 broad bipartisan condemnation from Congress and an effort to pass legislation to provide

 additional protections against unbridled Executive spying. Unfortunately, that's not what

 happened. Instead, the Bush administration was able to use fear of terrorism to convince

 Congress to pass bills authorizing surveillance programs even broader than the admitted "TSP."



 Claiming that critical intelligence about potential terrorist attacks would be lost unless FISA was

 immediately "modernized," the White House succeeded in convincing Congress to pass two

 laws. First was the temporary Protect America Act ("PAA") of 2007, which expired after one

 year. Next was the second and more-permanent FISA Amendments Act ("FAA") of 2008. Both

 allowed the Executive Branch to target the communications of people outside of the U.S. for

 surveillance without prior FISA court approval and without demonstrating any link to terrorism.

 Interpreted aggressively, these statutes arguably authorized the programmatic, non-

 particularized dragnet surveillance of any American's international communications, opening the

 door to virtually unchecked executive power to intercept your international emails and

 telephone calls.



 In the meantime, although we don't think that the PAA or the FAA authorizes it, there's been no

 indication that the domestic dragnet, revealed by news reports and whistleblower evidence and

 alleged in EFF's lawsuits, has ended. As far as we know, the NSA is still plugged into key
 telecommunications facilities across the country and acquiring copies of all of the

 communications content that flows through them, while also obtaining records detailing the

 communications activity of millions of ordinary Americans, in violation of FISA and the Fourth

 Amendment.



 Considering the latest changes to the law, we strongly recommend encrypting all of your

 international communications traffic. As for protecting the privacy of your domestic

 communications, the best way to combat the NSA's unchecked access to the nation's

 communications infrastructure — short of encrypting every single communication or avoiding

 using telecommunications at all — is to support EFF in its litigation and lobbying efforts to stop

 the spying for good.


Summing Up
What You Need to Know

  To sum up, the steps you'd take to combat FISA surveillance or national security letters are

  the same ones you'd take in the law enforcement context:


    •   If you don't keep it, they can't get it — destroy unnecessary records.

    •   If you do keep it, protect it with file encryption and strong passwords.

    •   Encrypt your Internet communications to prevent wiretapping.

    •   Use anonymizing tools like Tor when you're online.

    •   Always delete your providers' copies of emails and voicemails as soon as you can access
        them.

Defensive Technology

  If you are looking for basic technical information on how to protect the privacy of your data —

  whether it's on your own computer, on the wire, or in the hands of a third party — you've

  come to the right place. Although we hope you'll have the time to review all of the information

  in the SSD guide, if you're in a hurry to get to the technical details, this is where you can read

  articles that will explain:


    •   the basics of the relevant technologies, such as the Internet Basics and Encryption Basics
        articles

    •   how to improve the security of different communication applications, such as your web
        browsers, email systems and IM clients

    •   how to protect your privacy by using defensive technologies such as secure deletion software,
        file and disk encryption software, and virtual private networks
   •   the overarching security threat posed by malware, how to evaluate that threat, and how to
       reduce it


 Just remember: technology changes quickly. We'll be doing our best to keep these articles

 updated to reflect current developments, but in the meantime, you should take the time to

 review information from multiple sources before making any serious security decisions.


Internet Basics

 The Internet is a global network of many individual computer networks, all speaking the same

 computer language, the Internet Protocol (IP). Every computer connected to the Internet has

 an IP address, a unique numeric identifier that can be "static", i.e. unchanging, or

 "dynamically" assigned by your ISP, so that your computer’s address changes with each new

 Internet session.
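To make the idea concrete, here is a short sketch using Python's standard ipaddress module. It shows that a dotted IP address is really just a single number, and that some ranges (like 10.x.x.x) are reserved for private networks; the specific address is only an example.

```python
import ipaddress

# An IPv4 address is just a 32-bit number, usually written in dotted form.
addr = ipaddress.ip_address("10.0.0.1")

print(int(addr))        # 167772161 -- the same address as one plain integer
print(addr.is_private)  # True -- 10.x.x.x is a private (non-routable) range
```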



 More sophisticated networking protocols may be "layered" on top of the IP protocol, enabling

 different types of Internet communications. For instance, World Wide Web (Web)

 communications are transmitted via the HyperText Transfer Protocol (HTTP) and e-mails via

 the Simple Mail Transport Protocol (SMTP).



 These additional protocols use their own types of addresses, apart from IP addresses. For

 example, to download a Web page, you need its Web address, known as a Uniform Resource

 Locator (URL) (e.g., http://www.eff.org). To exchange e-mails, both the sender and recipient

 need e-mail addresses (e.g., user@emailprovider.com).



 Computers that offer files for download over the Internet are called servers or hosts. For
 example, a computer that offers Web pages for download is called an HTTP server or Web

 host. Any computer may be a server, a client, or both, depending on the communication. The

 amount of data in an Internet communication is measured in bytes.



 Communications to and from an Internet-connected computer occur through any of 65,536

 numbered software "ports." Many networking protocols have been assigned to particular port

 numbers by the Internet Engineering Task Force. For example, HTTP (Web) is assigned to port

 80 and SMTP (e-mail) is assigned to port 25. However, any port can be used for any

 application, and these are only conventions.
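These port conventions are recorded in the operating system's services database, which you can query yourself. A small sketch using Python's standard socket module (the results assume a typical Unix-like system's services file):

```python
import socket

# Look up the conventional port for a protocol name...
print(socket.getservbyname("http", "tcp"))   # 80
print(socket.getservbyname("smtp", "tcp"))   # 25

# ...or go the other way, from port number to protocol name.
print(socket.getservbyport(443, "tcp"))      # https
```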
 If you want to learn more, the website How Stuff Works publishes a popular series of "Internet

 Basics" articles that answer questions about the nuts and bolts of the Internet.


Encryption Basics

 Encryption is a technique that uses math to transform information in a way that makes it

 unreadable to anyone except those with special knowledge, usually referred to as a "key."

 There are many applications of encryption, but some of the most important uses help protect

 the security and privacy of files on your computer, information passing over the Internet, or left

 sitting in a file on someone else's computer. If encryption is used properly, the information

 should only be readable by you and people that receive the key from you. Encryption provides a

 very strong technical protection against many kinds of threats — and this protection is often

 easy to obtain.


How Does Encryption Work?
 What do you need to know about how encryption works? Surprisingly little. Encryption is

 conceptually similar to the "secret codes" that children learn about and use to communicate. If

 you’ve ever spoken in pig Latin or used a decoder ring, you've used very simple encryption

 techniques on a message. Again, the idea is to take a normal human-readable message (often

 called the plaintext message) and transform it into an incomprehensible format that can only

 become comprehensible again to someone with secret knowledge:



 Plaintext message + Encryption algorithm + Key = Scrambled message



 Decryption algorithm + Key + Scrambled message = Plaintext Message



 Your Little Brother’s Cryptography. A simple encryption system would be to change each

 letter in your message to a set number of letters later in the alphabet. The specific number of

 spaces you move down the alphabet for each letter is the secret key. If the key is two, A

 becomes C, B becomes D, C becomes E, etc. Using that encryption system, the plaintext

 message "INSECURE" would become "KPUGEWTG."


How is Encryption Applied?
 Although the mechanics of encryption can be explained by the "decoder ring" analogy, the

 modern practice of using encryption has been accurately described as using a very resilient

 envelope for your messages. Most unencrypted data transmitted online is accessible to the

 servers relaying the information. By contrast, using encryption puts your online
 communications in a "steel envelope" — they can't be read in the course of delivering the

 message to the recipient and are extremely resistant to tampering.



 Modern encryption is very difficult to break, using very complex mathematics to scramble

 information and ensure that only people possessing the right key can unscramble it. In many

 cases you can get major security benefits from encryption without a detailed understanding of

 how it works. Some software implements very convenient, fully automated encryption features

 which may simply require that you turn them on.



 For instance, when a website is configured properly, web browsers can use SSL encryption to

 protect the privacy of information you send to or receive from a web server. This is most often

 used to protect log-in passwords and financial data. Using a browser's SSL encryption can be as

 simple as accessing a site with the https scheme instead of the http scheme (for instance,

 https://www.eff.org/ instead of http://www.eff.org/); the browser typically takes care of all the

 details behind the scenes.
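Software can make the same scheme check before sending anything sensitive. A minimal sketch using Python's standard urllib.parse module; the is_https helper is our own invented name, not a standard function:

```python
from urllib.parse import urlparse

# A hypothetical helper: only treat a URL as safe for sensitive data
# if it uses the encrypted https scheme.
def is_https(url: str) -> bool:
    return urlparse(url).scheme == "https"

print(is_https("https://www.eff.org/"))  # True
print(is_https("http://www.eff.org/"))   # False
```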


Why Is Encryption Important?
 Encryption plays an important role in mitigating risk related to the many threats listed in this

 guide. If sensitive information stored on your computer is encrypted, it will take a secret key to

 decode it. If sensitive information en route to others is encrypted, only someone that knows the

 secret key can read what it says. When you encrypt sensitive information and it ends up logged

 by others in the course of communicating online, encryption keeps those without the secret key

 from knowing the contents of the message.



 Most of the Defensive Technology articles in this guide will cover practical ways to apply

 encryption to particular communications (like email) or particular applications (like web

 browsers).



 Encryption is absolutely essential to maintaining information security. Moreover, modern

 computers are powerful enough that we can aim to make encryption of our communications and

 data routine, not just reserving encryption for special occasions or particularly sensitive

 information.
For More About Encryption
 Many encryption tools can be used successfully without much beyond a conceptual

 understanding. We explain how to use many of these well-developed tools in other parts of this

 guide.



 However, be aware that while encryption is a powerful tool and is critical to information

 security, it has limitations — particularly if it is not being used correctly. Learning more about

 encryption and its limitations can help ensure that you're using it properly and getting

 protection against as many kinds of attacks as possible.


Web Browsers

 Web browsers are software on your machine that communicate with servers or hosts on the

 Internet. Using a web browser causes data to be stored on your computer and logs to be stored

 on the web servers you visit, and frequently transmits unencrypted information.



 Until you have understood the mechanisms by which this occurs — and taken steps to prevent

 them — it is best to assume that anything you do with a web browser could be recorded by

 your own machine, by the web servers you're communicating with, or by any adversary that is

 able to monitor your network connection.


Controlling and Limiting the Logs Kept by Your Browser
 Web browsers often retain a large amount of information about the way they are used. A

 browser typically keeps a history of the web pages it visits. Browsers also often retain cached

 copies of the pages you've visited, information about which accounts you log into on web

 servers, names and other data you enter into web forms, and cookies that record preferences

 and link your browser to records on third party web servers. Fortunately, browsers also include

 features for managing these records. In general, the features are getting better, so it's getting

 easier to control browser records.



 For example, here are the stored data privacy settings pages for Firefox, the free web browser:
For each type of information your browser stores, you can either set it to not collect it at all, set

it to delete within a certain span of days, set it to delete when you quit the browser, or press

"clear" to manually erase the data. Or you can "clear all" of the info — all the data your

browser’s been keeping on you.



Apple’s Safari browser also has an easy one-click option to clear everything. Just select "Reset

Safari" from the "Safari" pull-down menu and you’ll get this option:
Controlling and Limiting the Logs Kept By Web Servers
 Web servers usually see and retain a large amount of information about what you do when you

 surf to them. For instance, if you type any information into a form on a web page (such as a

 search engine), the server will record not only what you sent it, but also information that might

 identify you: your IP address, the browser and operating system you are using, whether you

 followed a link from another web page to get to the page, what that previous site/page was,

 your account if you are logged in to the site, and cookies that were created when you

 previously looked at pages on the site.


Web Privacy is Hard

 If you use a particular website a lot, the chances are that it is going to end up retaining a huge

 amount of information about you. To get a sense of the kinds of information, and what needs to

 be done to prevent them from being aggregated, read our white paper on search privacy.

 Although that document primarily discusses search engines, the issues to consider for other

 kinds of sites are similar.
Cookies

 Cookies are pieces of information that a web site can send to your browser. If your browser

 "accepts" them, they will be sent back to the site every time the browser accepts a page, image

 or script from the site. A cookie set by the page/site you're visiting is a "first party" cookie. A

 cookie set by another site that's just providing an image or script (an advertiser, for instance),

 is called a "third party" cookie.
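To see what a cookie actually looks like under the hood, here is a short sketch using Python's standard http.cookies module to parse a made-up Set-Cookie header the way a browser would (the cookie name and values are invented):

```python
from http.cookies import SimpleCookie

# Parse a hypothetical Set-Cookie header as received from a web server.
cookie = SimpleCookie()
cookie.load('session_id=abc123; Path=/; Max-Age=3600')

morsel = cookie["session_id"]
print(morsel.value)       # abc123 -- sent back to the site on later requests
print(morsel["path"])     # /      -- which paths on the site get the cookie
print(morsel["max-age"])  # 3600   -- lifetime in seconds; absent = session only
```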



 Cookies are the most common mechanisms used to record the fact that a particular visitor has

 logged in to an account on a site, and to track the state of a multi-step transaction such as a

 reservation or shopping cart purchase. As a result, it is not possible to block all cookies without

 losing the ability to log into many sites and perform transactions with others.



 Unfortunately, cookies are also used for other purposes that are less clearly in users' interests,

 such as recording their usage of a site over a long period of time, or even tracking and

 correlating their visits to many separate sites (via cookies associated with advertisements, for

 instance).



 With recent browsers, the cookie setting that offers users the most pragmatic tradeoff between

 cookie-dependent functionality and privacy is to only allow cookies to persist until the user quits

 the browser (also known as only allowing "session cookies").



 You can enable this in the "Privacy" tab of Firefox 3's "Preferences" pane:
Unfortunately, if you only quit your browser entirely once every week or two, web sites will still

collect a huge amount of information about your habits, such as the IP addresses you use at

home, at work, at friends' houses and at Internet cafes. However, the "Incognito" mode offered

by Google's Chrome browser and the "InPrivate" mode offered by Internet Explorer 8 are signs

 that, in the future, browsers may offer more convenient ways to limit cookie tracking.



Sophisticated users can configure their browser to manually decide whether each site they visit

is allowed to set cookies. This may have good privacy outcomes, such as allowing session

cookies for sites the user logs in to or purchases things from, but not any other sites. But it

requires a lot of work. A certain amount of debugging may also be required for situations where

sites are poorly designed and fail to function without certain third-party cookies.
Recent Cookie-Like "Features" in Web Browsers

 In addition to the regular cookies that web browsers send and receive, and which users have

 begun to be aware of and manage for privacy, companies have continued to implement new

 "features" which behave like cookies but which aren't managed in the same way. Adobe has

 created "Local Stored Objects" (also known as "Flash Cookies") as a part of its Flash plug-ins;

 Mozilla has incorporated a feature called "DOM storage" in recent versions of Firefox. Web sites

 could use either or both of these in addition to cookies to track visitors. We recommend that

 users take steps to prevent this.



 Managing Mozilla/Firefox DOM Storage Privacy. If you use a Mozilla browser, you can

 disable DOM Storage pseudo-cookies by typing about:config into the URL bar. That will bring

 up an extensive list of internal browser configuration options. Type "storage" into the filter box,

 and press return. You should see an option called dom.storage.enabled. Change it to "false".




 Managing Adobe Flash Privacy. Adobe lists advice on how to disable Flash cookies here.

 There are some problems with the options Adobe offers (for instance, there is no "session only"

 option), so it's probably best to globally set Local Stored Object space to 0 and only change that

 for sites which you're willing to have tracking you. On the Linux version of Adobe's Flash plugin

 there doesn't seem to be a way to set the limit to 0 for all sites — consider donating or

 contributing to the Gnash project to give users an alternative to Adobe's privacy-unfriendly

 design decisions.



 Aside from being an annoying medium for advertising, Flash poses other kinds of privacy and

 security risks. Some people choose not to use Flash at all (using other tools like youtube-dl for

 watching Youtube videos). Others install a Flash management browser plugin like FlashBlocker.

 Unfortunately, while FlashBlocker makes surfing the web a more peaceful experience, it does

 not protect you from being tracked by Flash cookies or from exposure to other flash-based

 security risks.


IP Addresses

 Whenever your browser fetches a page, image or script from a website, you should expect the

 website to record the IP address of the computer you're using. Your ISP, or anybody with the
 power to subpoena your ISP, could tie those records to the Internet account subscription you

 are connected through. Use Tor (or a proxy server, which is faster but less secure) if you wish

 to prevent these records from being created.


Privacy on the wire

HTTPS

 Most sites on the web are accessed using the unencrypted HTTP protocol. HTTP is susceptible to

 eavesdropping, and even to intermediaries that might set out to modify the pages a browser is

 fetching.



 HTTPS is a more secure alternative to HTTP. HTTPS encrypts pages, and attempts to ensure

 three things: (1) that third parties cannot see the contents of the page; (2) that the page

 cannot be modified by third parties; (3) that the page was really sent by the web server listed

 in the URL bar.
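Guarantee (3) comes from certificate verification. Browsers handle this automatically; for a sense of what is involved, Python's standard ssl module applies the same checks by default (a sketch of the verification settings, not a browser implementation):

```python
import ssl

# The default context verifies the server's certificate chain and checks
# that the certificate actually matches the hostname in the URL.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
```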



 Unfortunately, a web server must be configured to support HTTPS properly before you can use

 it. If there is a site you were planning to send sensitive information to, ensure that you are

 using HTTPS. If a site doesn't support HTTPS, don't send sensitive information to it.


Some Notes on Using HTTPS

 Check three indicators to ensure that you're at an HTTPS page: (1) the URL begins with
 https://; (2) there is a lock icon in the corner of the browser; and (3) the URL/location bar is

 colored.


 If you receive a warning about certificates, or see a broken lock icon, you should assume that

 any of the security properties of the page could be broken. Contact the site's webmaster and

 have them fix the problem before sending any sensitive information to the site.


Blocking Javascript for Browser Security and Privacy?
 Javascript is a simple programming language which is part of modern web browsers. Unlike

 HTML, javascript allows a page to make the browser perform complicated and conditional

 calculations in determining what a page will look like and how it will function.
 Javascript has many uses. Sometimes it is simply used to make webpages look flashier by

 having them respond as the mouse moves around or change themselves continually. In other

 cases, javascript adds significantly to a page's functionality, allowing it to respond to user

 interactions without the need to click on a "submit" button and wait for the web server to send

 back a new page in response.



 Unfortunately, javascript also contributes to many security and privacy problems with the web.

 If a malicious party can find a way to have their javascript included in a page, they can use it

 for all kinds of evil: making links change as the user clicks them; sending usernames and

 passwords to the wrong places; reporting lots of information about the user's browser back to a

 site. Javascript is frequently a part of schemes to track people across the web, or worse, to

 install malware on people's computers.



 For this reason, sophisticated users with strict security and privacy requirements may wish to

 consider selectively blocking javascript in their browser. There is a Mozilla/Firefox plugin called

 NoScript, which is very useful for this purpose. NoScript (1) allows you to see the sources of any

 javascript in a page (many pages include javascript from third parties); (2) blocks javascript by

 default; and (3) allows javascript from particular sources to be temporarily or permanently

 reenabled. Surfing the web with NoScript is more work (because when you visit new sites, you

 may have to enable some javascript sources to make them work properly), but surfing the web

 with NoScript is also much more secure.


Email

 The act of using email stores data on your machines, transmits data over the network, and

 stores data on third party machines.


Locally Stored Data
 The usual measures apply to managing the copies of emails (both sent and received) that are

 kept on your own machines. Encrypt your drives and decide upon and follow an appropriate

 data deletion policy.


Data on the Wire
 Email usually travels through a number of separate hops between the sender and receiver. This

 diagram illustrates the typical steps messages might travel through, the transmission protocols

 used for those steps, and the available types of encryption for those steps.
End-to-End Encryption of Specific Emails

 Encrypting emails all the way from the sender to the receiver has historically been difficult,

 although the tools for achieving this kind of end-to-end encryption are getting better and easier

 to use. Pretty Good Privacy (PGP) and its free cousin GNU Privacy Guard (GnuPG) are the

 standard tools for doing this. Both of these programs can provide protection for your email in

 transit and also protect your stored data. Major email clients such as Microsoft Outlook and

 Mozilla Thunderbird can be configured to work smoothly with encryption software, making it a

 simple matter of clicking a button to sign, verify, encrypt and decrypt email messages.



 The great thing about end-to-end encryption is that it ensures that the contents of your emails

 will be protected not only against interception on the wire, but also against some of the threats

 to the contents of copies of your emails stored on your machine or third party machines.



 There are two catches with GnuPG/PGP. The first is that they only work if the other parties you

 are corresponding with also use them. Inevitably, many of the people you exchange email with

 will not use GPG/PGP, though it can be deployed amongst your friends or within an

 organization.



 The second catch is that you need to find and verify public keys for the people you are sending

 email to, to ensure that eavesdroppers cannot trick you into using the wrong key. This trickery

 is known as a "man in the middle" attack.
 Probably the easiest way to start using GnuPG is to use Mozilla Thunderbird with the Enigmail

 plugin. You can find the quick start guide for installing and configuring Enigmail here.


Server-to-Server Encrypted Transit

 After you press "send", emails are typically relayed along a chain of SMTP mail servers before

 reaching their destination. You can use your mail client to look at the headers of any email

 you've received to see the chain of servers the message traveled along. In most cases,

 messages are passed between mail servers without encryption. But there is a standard called

 SMTP over TLS which allows encryption when the sending and receiving servers for a given hop

 of the chain support it.
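You can inspect that chain programmatically, too. Here is a sketch using Python's standard email module on a made-up message; the server names are invented, and real Received headers are longer and messier. Each relay prepends its own header, so the list reads newest-first:

```python
from email import message_from_string

# A hypothetical raw message with three relay hops recorded in its headers.
raw = """\
Received: by mail.recipient-isp.example; Mon, 2 Feb 2009 10:00:02 -0800
Received: from mx.sender-isp.example by mx.recipient-isp.example; Mon, 2 Feb 2009 10:00:01 -0800
Received: from senders-laptop by mx.sender-isp.example; Mon, 2 Feb 2009 10:00:00 -0800
From: alice@sender-isp.example
To: bob@recipient-isp.example
Subject: hello

Hi Bob!
"""

msg = message_from_string(raw)
# The last Received header is the first hop: the sender's own machine.
for hop in msg.get_all("Received"):
    print(hop)
```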



 If you or your organization operates a mail server, you should ensure that it supports TLS

 encryption when talking to other mail servers. Consult the documentation for your SMTP server

 software to find out how to enable TLS.
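As one illustration only: if your server happens to run Postfix, opportunistic TLS can be enabled with a few lines in main.cf. The certificate paths below are placeholders, so check your own distribution's documentation before relying on this sketch.

```
# /etc/postfix/main.cf -- a hypothetical opportunistic-TLS setup.
# Offer TLS to servers that connect to deliver mail to us...
smtpd_tls_security_level = may
smtpd_tls_cert_file = /etc/ssl/certs/mailserver.pem
smtpd_tls_key_file = /etc/ssl/private/mailserver.key
# ...and use TLS when delivering to remote servers that support it.
smtp_tls_security_level = may
```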


Client-to-Mail Server Encryption

 If you use POP or IMAP to fetch your email, make sure it is encrypted POP or IMAP. If your mail

 server doesn't support the encrypted version of that protocol, get your service provider or

 systems administrator to fix that.
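In Python's standard library, for instance, the encrypted variants are separate classes with their own conventional ports. A short sketch; the connection lines in the comment are illustrative only, and the host name is made up:

```python
import poplib
import imaplib

# The SSL/TLS-wrapped protocols use different ports than the plaintext ones.
print(poplib.POP3_SSL_PORT)    # 995 (plaintext POP3 uses 110)
print(imaplib.IMAP4_SSL_PORT)  # 993 (plaintext IMAP uses 143)

# A hypothetical encrypted IMAP connection would look like:
#   conn = imaplib.IMAP4_SSL("imap.example.com")
#   conn.login("user", "password")
```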



 If you use a webmail service, ensure that you only access it using HTTPS rather than HTTP.

 Hushmail.com is a webmail service provider that always uses HTTPS, and also offers some end-

 to-end encryption facilities (though they are not immune to warrants).



 Many webmail service providers only use HTTPS for the login page, and then revert to HTTP.

 This isn't secure. Look for an account configuration option (or a browser plugin) to ensure that

 your webmail account always uses HTTPS. In Gmail, for instance, you can find this option in the

 "general" tab of the settings page:




 If you can't find a way to ensure that you only see your webmail through https, switch to a

 different web mail provider.
Data Stored on Second- and Third-Party Machines
 There are two main reasons why your emails will be stored on computers controlled by third

 parties.


Storage by your Service Provider

 If you don't run your own mail server, then there is a third party who obtains (and may store)

 copies of all of your emails. This would commonly be an ISP, an employer, or a webmail

 provider. Copies of messages will also be scattered across computers controlled by the ISPs,

 employers and webmail hosts of those you correspond with.



 Make sure your email software is configured so that it deletes messages off of your ISP's mail

 server after it downloads them. This is the most common arrangement if you're using POP to

 fetch your email, but it is common for people to use IMAP or webmail to leave copies of

 messages on the server.



 If you use webmail or IMAP, make sure you delete messages immediately after you read them.

 Keep in mind that with major webmail services, it may be a long time – maybe a matter of

 months – before the message is really deleted, regardless of whether you still have access to it

 or not. With smaller IMAP or webmail servers, it is possible that forensically accessible copies of

 messages could be subpoenaed years after the user deleted them.



 The content of PGP/GnuPG encrypted emails will not be accessible through these third parties,

 although the email headers (such as the To: and Subject: lines) will be.



 Running your own mail server with an encrypted drive and using end-to-end encryption for

 sensitive communications, are the best ways of mitigating these risks.


Storage by Those You Correspond With

 Most people and organizations save all of the email they send and receive. Therefore, almost

 every email you send and receive will be stored in at least one other place, regardless of the

 practices and procedures you follow. In addition to the personal machine of the person you

 sent/received the message to/from, copies might be made on their ISP or firm's mail or backup

 servers. You should take these copies into consideration, and if the threat model you have for

 sensitive communications includes an adversary that might gain access to those copies, then

 you should either use PGP to encrypt those messages, or send them by some means other than
 email. Be aware that even if you use PGP, those you communicate with could be subject to

 subpoenas or requests from law enforcement to decrypt your correspondence.


End-to-End Email Encryption
 Email encryption is a topic that could fill a book, and has: see Bruce Schneier's book Email

 Security: How to Keep Your Electronic Messages Private. While this book is somewhat out of

 date (it refers to old versions of software), the concepts it introduces are essential.


Instant Messaging (IM)

 Instant messaging is a convenient way to communicate with people online. In privacy

 terms, it's a bit better and easier to secure than email, but in some situations a

 telephone call will offer you better privacy.



 Instant messaging software creates data stored on your computer (logs of your

 communications), transmits communications over the network (the messages traveling back

 and forth), and leaves communications stored on other computers (logs kept by the people you

 talk to, and sometimes logs kept by the IM provider).



 If you use IM without taking special precautions, you can assume that all of these records will

 be available to adversaries. The easiest way for an adversary to obtain the contents of your

 communications is from you, your correspondent, or your service provider, if any of those

 parties logs (stores) the messages. The more difficult way is to intercept the messages as they

 travel over the network.


Encrypt Your Instant Messaging Conversations as They Travel
 To protect messages from interception as they travel over the network, you need to use

 encryption. Fortunately, there is an excellent instant messaging encryption system called OTR

 (Off The Record). Confusingly, Google has a different instant messaging privacy feature which

 is also called "Off The Record". To disambiguate them, this page will talk about "OTR encryption"

 and "Google OTR". It's actually possible to be using them both at the same time.



 If you and the person you are talking to both use OTR encryption, you have excellent protection

 for communications on the network, and you will prevent your IM provider from storing the

 content of your communications (though they may still keep records of who you talk to).
      The easiest way to use OTR encryption is to use Pidgin or Adium X for your IMs (Pidgin is a

      program that will talk to your friends over the MSN, Yahoo!, Google, Jabber, and AIM networks;

      Adium X is a similar program specifically for Mac OS X). If you're using Pidgin, install the OTR

      encryption plugin for that client. Adium X comes with OTR built in.



      With OTR encryption installed, you still need to do a few things for network privacy:


1. Read and understand OTR encryption's documentation.
2.   Make sure the people you are talking to also use OTR encryption, and make sure it's active. (In
     Pidgin, check for OTR:private or OTR:unverified in the bottom right corner.)
3. Follow OTR encryption's instructions to "Confirm" any person you need to have sensitive
     conversations with. This reduces the risk of an interloper (including the government with a warrant)
     being able to trick you into talking to them instead of the person you meant to talk to. Recent
     versions of OTR encryption allow you to do this just by agreeing on a shared secret word that you
     both have to type ("what was the name of the friend who introduced us?"). Older versions required
     that both users check that their client reported the right fingerprint for the other client.
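The shared-secret confirmation step can be sketched in miniature. Real OTR uses the Socialist Millionaires' Protocol, which lets both sides check the secret without ever transmitting it; this simplified Python sketch (the names and session value are invented for illustration) only shows the underlying idea of comparing keyed commitments rather than the secret itself.

```python
# Toy sketch of shared-secret verification; real OTR's protocol is
# stronger and never reveals the secret to an eavesdropper.
import hashlib
import hmac

def commitment(shared_secret: str, session_id: bytes) -> bytes:
    # Mix the agreed secret with a per-conversation identifier so an
    # old answer can't be replayed in a new conversation.
    return hmac.new(shared_secret.encode(), session_id, hashlib.sha256).digest()

session = b"session-id-from-the-otr-handshake"   # illustrative value
alice = commitment("the friend who introduced us: sam", session)
bob = commitment("the friend who introduced us: sam", session)
eve = commitment("wrong guess", session)

assert hmac.compare_digest(alice, bob)       # secrets match: verified
assert not hmac.compare_digest(alice, eve)   # an impostor fails the check
```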

     Configure Your IM Client to use SSL/TLS
      This step is complementary to using OTR encryption. It will prevent someone watching the

      network from seeing who you are chatting to, and will offer partial protection of your chats even

      if the other party isn't using OTR.


      If you are using Pidgin, you can ensure SSL is enabled by going to Manage Accounts, selecting

      Modify for an account, selecting the Advanced tab, and ticking Require SSL/TLS.


     Understand and Control IM Logging on Your Machine
      To protect the privacy of your IM conversations, you will need to decide what to do about logs

      kept on your computer. You have three choices:


        •   Configure your IM client to not keep logs

        •   Encrypt your hard disk

        •   Accept the risk that anyone who has access to your computer can read your old messages


      If at some point you decide to configure your IM client not to keep logs, you may want to go

      back and delete previous logs using secure deletion software.
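As a sketch of what secure deletion software does, the Python function below overwrites a file's contents with random bytes before unlinking it. This is an illustration, not a replacement for a dedicated tool: real utilities handle multiple passes, and journaling filesystems, backups, and SSD wear-leveling can all preserve copies this approach never touches. The "chat.log" file is an invented example.

```python
import os

def secure_delete(path: str) -> None:
    """Overwrite a file with random bytes, flush to disk, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # replace the old contents in place
        f.flush()
        os.fsync(f.fileno())        # force the overwrite out of OS caches
    os.remove(path)

# Example: scrub an old IM log file.
with open("chat.log", "w") as f:
    f.write("alice: meet at noon")
secure_delete("chat.log")
assert not os.path.exists("chat.log")
```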


     Be Aware of Logging on Others' Machines
      As noted above, using OTR encryption will ensure that your IM service provider should be

      unable to log the contents of your communications. They will, however, be in a position to
 record who you talk to, and possibly record the timing and length of the messages you

 exchange.



 OTR encryption does not stop the people you are talking to from logging your conversations.

 Unless you trust that they have disabled logging in their client or that they encrypt their hard

 disk and will not turn over its contents, you should assume that an adversary could obtain

 records of your conversations from the other party, either voluntarily or through subpoena or

 search.


Google OTR

 Google OTR is a feature of the Google instant messaging service that allows you to request that

 neither Google nor the people you talk to should be able to log your conversations.

 Unfortunately, there is no plausible enforcement mechanism for this feature. The people you

 talk to could be using a different IM client (like Pidgin or Adium) that can log regardless of

 whether Google OTR is enabled — or they could take screenshots of your conversations. Your

 client might be able to tell you whether they are using a client that follows the OTR rules (such

 as Gmail or Gchat), but that won't tell you whether they are taking screenshots. The bottom

 line is that Google OTR is nice in theory but insecure in practice. Turn it on, but don't expect it

 to work if the other party uses a non-Google client or actively wants to record the conversation.


Wi-Fi

 Wireless networking is now a ubiquitous means of connecting computers to each other and to

 the Internet. The primary privacy concern with Wi-Fi is the interception of the communications

 you send over the air. In some cases, wireless routers might also store a small amount of

 information about your computer, such as its name and the unique number assigned to its

 networking card (MAC address).
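You can inspect the identifier in question yourself: Python's standard library exposes the machine's hardware (MAC) address as an integer. Note that uuid.getnode() falls back to a randomly generated value on systems where the real address can't be read.

```python
import uuid

node = uuid.getnode()   # 48-bit hardware address as an integer
# Format it in the familiar colon-separated notation,
# most significant byte first.
mac = ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
print(mac)              # e.g. a4:5e:60:c2:19:7f
```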



 Wireless networks are particularly vulnerable to eavesdropping — in the end, "wireless" just

 means "broadcasting your messages over the radio," and anyone can intercept your wireless

 signal unless you use encryption. Listening in on unencrypted Wi-Fi communications is easy:

 almost any computer can do it with simple packet-sniffing software. Special expertise or

 equipment isn't necessary.



 Even worse, the legal protections for unencrypted wireless communications are unclear. Law

 enforcement may be able to argue that it does not need a wiretap order to intercept
 unencrypted Wi-Fi communications because there is an exception to the rules requiring such

 orders when the messages that are being intercepted are "readily accessible to the public."

 Basically, any communication over the radio spectrum that isn't transmitted by your phone

 company and isn't scrambled or encrypted poses a privacy risk.


Encrypting a Wireless Network
 If you want to protect your wireless communications from the government or anyone else, you

 must use encryption! Almost all wireless Internet access points come with WEP (Wired

 Equivalent Privacy) or WPA (Wi-Fi Protected Access) encryption software installed to encrypt the

 messages between your computer and the access point, but you have to read the manual and

 figure out how to use it. WEP is not great encryption (and we recommend strong, end-to-end

 encryption for sensitive communications regardless of the transmission medium), and practiced

 hackers can defeat it very quickly, but it's worth the trouble to ensure that your

 communications will be entitled to the legal protections of the Wiretap Act. WPA is much

 stronger than WEP, but it still only covers the first step your packets will take across the

 Internet.


When Using Open Wi-Fi
 If you're using someone else's "open" — unencrypted — wireless access point, like the one at

 the coffee shop, you will have to take care of your own encryption using the tools and methods

 described in other sections. Tor is especially useful for protecting your wireless transmissions.

 Whether or not you use Tor, you should always use application-level encryption over open

 wireless, so no one can sniff your passwords.



 Because of the threat of password sniffing, it is crucially important that you do not use the

 same password for all your accounts! For example, http://www.nytimes.com/ requires a

 username and password to log in, but the site does not use encryption. However, web sites for

 banks, like https://www.wellsfargo.com/, always use encryption due to the sensitive nature of

 the transactions people make with banks. If you use the same passwords for the two sites, an

 eavesdropper could see your unencrypted password traveling to the newspaper site, and guess

 that you were using the same password for your bank account.
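One way to avoid reuse is to derive a distinct password per site from a single master secret, so a password sniffed at one site tells an eavesdropper nothing about the others. A dedicated password manager is the more practical tool; this Python sketch (master phrase chosen for illustration) just shows the idea, using the two sites mentioned above.

```python
# Sketch: derive unrelated per-site passwords from one master secret.
import base64
import hashlib

def site_password(master: str, site: str) -> str:
    # PBKDF2 with the site name as salt yields an independent-looking
    # value for each site; truncate to a usable password length.
    digest = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 200_000)
    return base64.urlsafe_b64encode(digest)[:16].decode()

news = site_password("correct horse battery staple", "www.nytimes.com")
bank = site_password("correct horse battery staple", "www.wellsfargo.com")
assert news != bank   # sniffing one reveals nothing about the other
```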


Tor

 Tor is another encryption tool that can help you protect the confidentiality of your

 communications. Tor is a free, relatively easy to use tool primarily designed to protect your
anonymity online. But it also has the side benefit of encrypting your communications for some

of their journey across the Internet.


How Tor Works
Using Tor can help you anonymize web browsing and publishing, instant messaging, IRC, SSH,

and many other applications. The information you transmit is safer when you use Tor, because

communications are bounced around a distributed network of servers, called onion routers. This

provides anonymity, since the computer you’re communicating with will never see your IP

address — only the IP address of the last Tor router that your communications traveled

through.



Tor helps to defend against traffic analysis by encrypting your communications multiple times

and then routing them through a randomly selected set of intermediaries. Thus, unless an

eavesdropper can observe all traffic to and from both parties, it will be very hard to determine

your IP address. The idea is similar to using a twisty, hard-to-follow route in order to throw off

somebody who is tailing you, and then periodically erasing your footprints.



To create a private network pathway with Tor, Alice’s Tor client first queries a global directory

to discover where on the Internet all the Tor servers are. Then it incrementally builds a circuit

of encrypted connections through servers on the network. The circuit is extended one hop at a

time, and each server along the way knows only which server gave it data and which server it is

giving data to. No individual server ever knows the complete path that a data packet has taken.

The Tor software on your machine negotiates a separate set of encryption keys for each hop

along the circuit to ensure that each hop can't trace these connections as they pass through.
Due to the way Alice’s Tor client encrypted her data, each node in the circuit can only know the

IP addresses of the nodes immediately adjacent to it. For example, the first Tor server in the

circuit knows that Alice’s Tor client sent it some data, and that it should pass that data on to

the second Tor server. Similarly, Bob knows only that it received data from the last Tor server

in the circuit — Bob has no knowledge of the true Alice.
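The layered scheme described above can be modeled in a few lines. This toy Python sketch substitutes one-time XOR keystreams for Tor's real negotiated ciphers (an intentional simplification), but it shows the essential property: the client applies one layer of encryption per relay, and each relay can strip only its own.

```python
# Toy model of onion-layered encryption; real Tor uses negotiated
# symmetric keys and authenticated crypto, not raw XOR.
import os

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"hello from alice"
hop_keys = [os.urandom(len(message)) for _ in range(3)]  # one key per relay

# The client wraps the message in three layers, innermost layer first:
cell = message
for key in reversed(hop_keys):
    cell = xor(cell, key)

# Each relay peels exactly one layer as the cell moves along the circuit;
# intermediate relays see only still-encrypted data.
for key in hop_keys:
    cell = xor(cell, key)

assert cell == message  # plaintext emerges only after the last hop
```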



For efficiency, the Tor software uses the same circuit for connections that happen within the

same ten-minute period. Later requests are given a new circuit, to keep people from linking

your earlier actions to the new ones.



Tor’s primary purpose is to frustrate traffic analysis, but as a by-product of how it works, Tor's

encryption provides strong protection for the confidentiality of the content of messages as well.

If an eavesdropper wiretaps Alice’s network link, all she’ll see is encrypted traffic between Alice

and her first Tor server — a great feature. If the eavesdropper wiretaps Bob’s network link, she

can see the unencrypted content Alice sent to Bob — but it may be very hard indeed for her to

link the content to Alice!



You can learn about Tor, find easy installation instructions, and download the software at

http://www.torproject.org. There you will also find instructions on how to easily "Torify" all

kinds of different applications, including web browsers and instant messaging clients.
     What Tor Won't Defend You Against
      Tor won't defend you against malware. If your adversary can run programs on your computer,

      it's likely that they can see where you are and what you're doing with Tor.



      If you've installed Tor on your computer but are using applications that don't understand how to

      use it, or aren't set up to use it, you won't receive protection while using those applications.



      Tor may not defend you against extremely resourceful and determined opponents. Tor is

      believed to work quite well at defeating surveillance from one or a handful of locations, such as

      surveillance by someone on your wireless network or surveillance by your ISP. But it may not

      work if someone can surveil a great many places around the Internet and look for patterns

      across them.



      If you aren't using encryption with the actual servers you're communicating with (for instance,

      if you're using HTTP rather than HTTPS), the operator of an "exit node" (the last Tor node in

      your path) could read all your communications, just the way your own ISP can if you don't use

      Tor. Since Tor chooses your path through the Tor network randomly, targeted attacks may still

      be difficult, but researchers have demonstrated that a malicious Tor exit node operator can

      capture a large amount of sensitive unencrypted traffic. Tor node operators are volunteers and

      there is no technical guarantee that individual exit node operators won't spy on users; anyone

      can set up a Tor exit node.



      These and related issues are discussed in more detail at https://trac.torproject.org/projects/tor/wiki/doc/TorFAQ.




     https://trac.torproject.org/projects/tor/wiki/doc/TorFAQTop of Form
     doc/TorFAQ

     Table of Contents
1.   Running Tor
1.   How can I share files anonymously through Tor?
2.   I'm supposed to "edit my torrc". What does that mean?
3.   How do I set up logging, or see Tor's logs?
4.   What log level should I use?
5. Do I have to open all these outbound ports on my firewall?
6. My Tor keeps crashing.
7. I installed Tor and Polipo but it's not working.
8. How can I tell if Tor is working, and that my connections …
9. How do I use my browser for ftp with Tor?
10. Will Torbutton be available for other …
11. Does Tor remove personal information from the data my …
12. I want to run my Tor client on a different computer than my applications.
13. How often does Tor change its paths?
14. Why does netstat show these outbound connections?
15. Tor uses hundreds of bytes for every IRC line. I can't afford that!
16. Can I control what nodes I use for entry/exit, or what country the nodes …
17. Google makes me solve a Captcha or tells me I have spyware installed.
18. Gmail warns me that my account may have been compromised.
19. Why does Google show up in foreign languages?
20. How do I access Tor hidden services?
21. My Internet connection requires an HTTP or SOCKS proxy.
22. My firewall only allows a few outgoing ports.
23. Is there a list of default exit ports?
24. What should I do if I can't use an http proxy with my application?
25. I keep seeing these warnings about SOCKS and DNS and information leaks. …
26. How do I check if my application that uses SOCKS is leaking …
27. Tor/Vidalia prompts for a password at start
28. Why do we need Polipo or Privoxy with Tor? Which is …
29. Vidalia doesn't work in Windows 2000?
2. Tor Browser Bundle
1. There is no Flash in TBB!
2. I'm on OSX or Linux and I want to run another application through the Tor …
3. I need an HTTP proxy.
4. I want to leave Tor Browser Bundle running but close the browser.
5. I want to use a different browser with Tor.
6. I want to install my favorite extension in TBB. How do I do it?
7. Do I have to reinstall my extensions every time I upgrade TBB?
3. Running a Tor relay
1. How do I decide if I should run a relay?
2. Why isn't my relay being used more?
3. How can I get Tor to fully make use of my high capacity connection?
4. I'd run a relay, but I don't want to deal with abuse issues.
5. Do I get better anonymity if I run a relay?
6. Why doesn't my Windows (or other OS) Tor relay run well?
7. So I can just configure a nickname and ORPort and join the network?
8. I want to upgrade/move my relay. How do I keep the same key?
9. How do I run my Tor relay as an NT service?
10. Can I run a Tor relay from my virtual server account?
11. I want to run more than one relay.
12. My relay is picking the wrong IP address.
13. I don't have a static IP.
14. I'm behind a NAT/Firewall
15. My cable/dsl modem keeps crashing. What's going on?
16. Why do I get portscanned more often when I run a Tor relay?
17. I have more than one CPU. Does this help?
18. Why is my Tor relay using so much memory?
19. What bandwidth shaping options are available to Tor relays?
20. Does BandwidthRate really work?
21. How can I limit the total amount of bandwidth used by my Tor relay?
22. Why does my relay write more bytes onto the network than it reads?
23. Why can I not browse anymore after limiting bandwidth on my Tor relay?
24. How can I make my relay accessible to people stuck behind restrictive …
25. Bridge related questions
26. Can I install Tor on a central server, and have my clients connect to it?
27. How do I provide a hidden service?
28. What is the BadExit flag?
29. I got the BadExit flag. Why did that happen?
30. I'm facing legal trouble. How do I prove that my server was a Tor relay at …
31. I'm still having issues. Where can I get help?
4. Development
1. Who is responsible for Tor?
2. What do these weird version numbers mean?
3. How do I set up my own private Tor network?
4. How can I make my Java program use the Tor Network?
5. What is libevent?
6. What do I need to do to get a new feature into Tor?
5. Anonymity and Security
1. What protections does Tor provide?
2. Can exit nodes eavesdrop on communications? Isn't that bad?
3. What is Exit Enclaving?
4. So I'm totally anonymous if I use Tor?
5. Please explain Tor's public key infrastructure.
6. Where can I learn more about anonymity?
7. What's this about entry guard (formerly known as "helper") nodes?
8. What about powerful blocking mechanisms?
9. What attacks remain against onion routing?
10. Does Tor resist "remote physical device fingerprinting"?
6. Alternate designs that we don't do (yet)
1. You should send padding so it's more secure.
2. You should make every Tor user be a relay.
3. You should transport all IP packets, not just TCP packets.
4. You should hide the list of Tor relays, so people can't block the exits.
5. You should let people choose their path length.
6. You should split each connection over many paths.
7. You should migrate application streams across circuits.
8. You should let the network pick the path, not the client.
9. You should use steganography to hide Tor traffic.
10. Your default exit policy should block unallocated net blocks too.
11. Exit policies should be able to block websites, not just IP addresses
12. You should change Tor to prevent users from posting certain content.
13. Tor should support IPv6.
7. Abuse
1. Doesn't Tor enable criminals to do bad things?
2. How do I respond to my ISP about my exit relay?
3.   Info to help with police or lawyers questions about exit relays

     This FAQ is being migrated to General FAQ. The answers in this FAQ may be old, incorrect,
     or obsolete.
         •   Copyright 2003-2006 Roger Dingledine
         •   Copyright 2004-2005 Nick Mathewson
         •   Copyright 2004 Douglas F. Calvert
         •   Copyright 2004-2006 Peter Palfrader
         •   Copyright 2005-2009 Andrew Lewman
         •   Copyright 2007 Matt D. Harris
         •   Copyright 2010 The Tor Project, Inc.




     Distributed under the MIT license, see Legal Stuff for a full text.




     Running Tor¶
     How can I share files anonymously through Tor?¶
     File sharing (peer-to-peer/P2P) is widely unwanted in the Tor network and exit nodes are
     configured to block file sharing traffic by default. Tor is not really designed for it and file
     sharing through Tor excessively wastes everyone's bandwidth (slows down browsing) and
     due to current BitTorrent clients is NOT anonymous!

     I'm supposed to "edit my torrc". What does that mean?¶
     Answer moved to our new FAQ page

     How do I set up logging, or see Tor's logs?¶
     Answer moved to our new FAQ page

     What log level should I use?¶
     There are five log levels (also called "log severities") you might see in Tor's logs:
         •   "err": something bad just happened, and we can't recover. Tor will exit.
         •   "warn": something bad happened, but we're still running. The bad thing might be a
             bug in the code, some other Tor process doing something unexpected, etc. The
             operator should examine the message and try to correct the problem.
         •   "notice": something the operator will want to know about.
         •   "info": something happened (maybe bad, maybe ok), but there's nothing you need
             to (or can) do about it.
         •   "debug": for everything louder than info. It is quite loud indeed.
Alas, some of the warn messages are hard for ordinary users to correct -- the developers
are slowly making progress at making Tor automatically react correctly for each situation.
We recommend running at the default, which is "notice". You will hear about important
things, and you won't hear about unimportant things.
Tor relays in particular should avoid logging at info or debug in normal operation, since they
might end up recording sensitive information in their logs.

Do I have to open all these outbound ports on my firewall?¶
Tor may attempt to connect to any port that is advertised in the directory as an !ORPort (for
making Tor connections) or a DirPort (for fetching updates to the directory). There are a
variety of these ports, but many of them are running on 80, 443, 9001, and 9030.
So as a client, you could probably get away with opening only those four ports. Since Tor
does all its connections in the background, it will retry ones that fail, and hopefully you'll
never have to know that it failed, as long as it finds a working one often enough. However,
to get the most diversity in your entry nodes -- and thus the most security -- as well as the
most robustness in your connectivity, you'll want to let it connect to all of them.
If you really need to connect to only a small set of ports, see the FAQ entry on firewalled
ports.
Note that if you're running as a Tor relay, you must allow outgoing connections to every
other relay, and to anywhere your exit policy advertises that you allow. The cleanest way to
do that is to simply allow all outgoing connections at your firewall. If you don't, clients will
try to use these connections and things won't work.

My Tor keeps crashing.¶
We want to hear from you! There are supposed to be zero crash bugs in Tor. This FAQ entry
describes the best way for you to be helpful to us. But even if you can't work out all the
details, we still want to hear about it, so we can help you track it down.
First, make sure you're using the latest version of Tor (either the latest stable or the latest
development version).
Second, make sure your version of libevent is new enough. We recommend at least libevent
1.3a.
Third, see if there's already an entry for your bug in the Tor bugtracker. If so, check if there
are any new details that you can add.
Fourth, is the crash repeatable? Can you cause the crash? Can you isolate some of the
circumstances or config options that make it happen? How quickly or often does the bug
show up? Can you check if it happens with other versions of Tor, for example the latest
stable release?
Fifth, what sort of crash do you get?
   •   Does your Tor log include an "assert failure"? If so, please tell us that line, since it
       helps us figure out what's going on. Tell us the previous couple of log messages as
       well, especially if they seem important.
   •   If it says "Segmentation fault - core dumped" then you need to do a bit more to
       track it down. Look for a file like "core" or "tor.core" or "core.12345" in your current
       directory, or in your Data Directory. If it's there, run "gdb tor core" and then "bt",
       and include the output. If you can't find a core, run "ulimit -c unlimited", restart Tor,
       and try to make it crash again. (This core thing will only work on Unix -- alas,
       tracking down bugs on Windows is harder. If you're on Windows, can you get
       somebody to duplicate your bug on Unix?)
   •   If Tor simply vanishes mysteriously, it probably is a segmentation fault but you're
       running Tor in the background (as a daemon) so you won't notice. Go look at the
       end of your log file, and look for a core file as above. If you don't find any good
       hints, you should consider running Tor in the foreground (from a shell) so you can
       see how it dies. Warning: if you switch to running Tor in the foreground, you might
       start using a different torrc file, with a different default Data Directory; see the relay-
       upgrade FAQ entry for details.
   •   If it's still vanishing mysteriously, perhaps something else is killing it? Do you have
       resource limits (ulimits) configured that kill off processes sometimes? (This is
       especially common on !OpenBSD.) On Linux, try running "dmesg" to see if the out-
       of-memory killer removed your process. (Tor will exit cleanly if it notices that it's run
       out of memory, but in some cases it might not have time to notice.) In very rare
       circumstances, hardware problems could also be the culprit.
Sixth, if the above ideas don't point out the bug, consider increasing your log level to
"loglevel debug". You can look at the log-configuration FAQ entry for instructions on what to
put in your torrc file. If it usually takes a long time for the crash to show up, you will want
to reserve a whole lot of disk space for the debug log. Alternatively, you could just send
debug-level logs to the screen (it's called "stdout" in the torrc), and then when it crashes
you'll see the last couple of log lines it had printed. (Note that running with verbose logging
like this will slow Tor down considerably, and note also that it's generally not a good idea
security-wise to keep logs like this sitting around.)

I installed Tor and Polipo but it's not working.¶
Answer moved to our new FAQ page

How can I tell if Tor is working, and that my connections really are
anonymized? Are there external servers that will test my connection?¶
Once you've set up your browser to point to Polipo, and (if necessary) your Polipo to point
to Tor, there are sites you can visit that will tell you if you appear to be coming through the
Tor network. Try the Tor Check site and see whether it thinks you are using Tor or not.
If that site is down, you can still test, but it will involve more effort: http://ipid.shat.net
and http://www.showmyip.com/ will tell you what your IP address appears to be, but you'll
need to know your current IP address so you can compare and decide whether you're using
Tor correctly.
To learn your IP address on OS X, Linux, BSD, etc, run "ifconfig". On Windows, go to the
Start menu, click Run and enter "cmd". At the command prompt, enter "ipconfig /a".
If you are behind a NAT or firewall, though, your IP address will show up as something like
192.168.1.1 or 10.10.10.10, and this isn't your public IP address. In this case, you should
1) configure your browser to connect directly (that is, stop using Polipo), 2) check your IP
address with one of the sites above, 3) point your browser back to Polipo, and 4) see
whether your IP address has changed.

How do I use my browser for ftp with Tor?¶
The short answer is to use Firefox 3.0 or above with Torbutton. With this configuration,
accessing ftp:// links should be safe for you: your Firefox will safely use Tor directly as a
socks proxy when accessing these links.
Versions of Firefox older than 1.5 don't know how to use a socks proxy without broadcasting
your DNS queries to the local network, so in those cases you should avoid ftp:// links.
Torbutton will automatically configure your browser in this case to point all protocols to
Polipo: this means that ftp connections will fail, but at least they won't be dangerous.
If you're using a different browser, we wish you luck. Most of them don't support doing
socks requests without leaking the DNS resolve, so you will want to set as many proxy
lines as you can. Internet Explorer users beware --- there is a known bug that causes
Explorer to directly send FTP requests without going through the specified proxy. You should
at least disable Folder View in Internet Explorer if using Tor with Polipo, and you may need
to take other steps as well.
If you want a separate application for an ftp client, we've heard good things about FileZilla
for Windows. You can configure it to point to Tor as a "socks4a" proxy on "localhost" port
"9050".

Will Torbutton be available for other browsers?¶
We don't support IE, Opera or Safari and never plan to. There are too many ways that your
privacy can go wrong with those browsers, and because of their closed design it is really
hard for us to do anything to change these privacy problems.
We are working with the Chrome people to modify Chrome's internals so that we can
eventually support it. But for now, Firefox is the only safe choice.

Does Tor remove personal information from the data my application
sends?¶
No, it doesn't. You need to use a separate program that understands your application and
protocol and knows how to clean or "scrub" the data it sends. Privoxy is an example of this
for web browsing. But note that even Privoxy won't protect you completely: you may still
fall victim to viruses, Java Script attacks, etc; and Privoxy can't do anything about text that
you type into forms. Be careful and be smart.

I want to run my Tor client on a different computer than my applications.¶
By default, your Tor client only listens for applications that connect from localhost.
Connections from other computers are refused. If you want to torify applications on
different computers than the Tor client, you should edit your torrc to define
SocksListenAddress 0.0.0.0 g and then restart (or hup) Tor. If you want to get more
advanced, you can configure your Tor client on a firewall to bind to your internal IP but not
your external IP. (For a complete example of this, see Tor through SSH tunnel using a web
browser on Debian to connect to a Tor client running on !OpenBSD. The data is transferred
between the computers using an SSH tunnel.)
If you are using tor through Polipo, or using the Firefox Torbutton plugin with Polipo (the
default arrangement) you will need to edit your Polipo config file so that your config
contains:
socksParentProxy = "192.168.1.2:9050" socksProxyType = socks5
Where 192.168.1.2 is the address on your local network where your tor relay is running.
For more information on setting up a central tor server, see Can I install Tor on a central
server, and have my clients connect to it?
How often does Tor change its paths?¶
Tor will reuse the same circuit for new TCP streams for 10 minutes, as long as the circuit is
working fine. (If the circuit fails, Tor will switch to a new circuit immediately.)
But note that a single TCP stream (e.g. a long IRC connection) will stay on the same circuit
forever -- we don't rotate individual streams from one circuit to the next. Otherwise an
adversary with a partial view of the network would be given many chances over time to link
you to your destination, rather than just one chance.

Why does netstat show these outbound connections?¶
Because that's how Tor works. It holds open a handful of connections so there will be one
available when you need one.

Tor uses hundreds of bytes for every IRC line. I can't afford that!¶
Tor sends data in chunks of 512 bytes (called "cells"), to make it harder for intermediaries
to guess exactly how many bytes you're communicating at each step. This is unlikely to
change in the near future -- if this increased bandwidth use is prohibitive for you, I'm afraid
Tor is not useful for you right now.
The actual content of these fixed size cells is documented in the main Tor spec, section 3.
We have been considering one day adding two classes of cells -- maybe a 64 byte cell and a
1024 byte cell. This would allow less overhead for interactive streams while still allowing
good throughput for bulk streams. But since we want to do a lot of work on quality-of-
service and better queuing approaches first, you shouldn't expect this change anytime soon
(if ever). However if you are keen, there are a couple of Research Ideas that may involve
changing the cell size.
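As a rough back-of-the-envelope illustration (this is not Tor's actual code, and the 498-byte usable-payload figure is an assumption based on the relay cell header described in the spec), you can estimate the on-wire cost of a short message like this:

```python
CELL_SIZE = 512          # fixed Tor cell size, in bytes
PAYLOAD_PER_CELL = 498   # assumed usable payload per relay data cell

def cells_needed(message_len: int) -> int:
    """Number of fixed-size cells needed to carry a message."""
    return -(-message_len // PAYLOAD_PER_CELL)  # ceiling division

def bytes_on_wire(message_len: int) -> int:
    """Total bytes sent at each hop for the message payload."""
    return cells_needed(message_len) * CELL_SIZE

# A 60-byte IRC line still occupies one full 512-byte cell:
print(bytes_on_wire(60))    # 512
# A 1000-byte message needs three cells:
print(bytes_on_wire(1000))  # 1536
```

This is why short interactive lines pay a large relative overhead: every message is rounded up to a whole number of 512-byte cells.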

Can I control what nodes I use for entry/exit, or what country the nodes
are in?¶
Answer moved to our new FAQ page

Google makes me solve a Captcha or tells me I have spyware installed.¶
Answer moved to our new FAQ page

Gmail warns me that my account may have been compromised.¶
Answer moved to our new FAQ page

Why does Google show up in foreign languages?¶
Google uses "geolocation" to determine where in the world you are, so it can give you a
personalized experience. This includes using the language it thinks you prefer, and it also
includes giving you different results on your queries.
If you really want to see Google in English you can click the link that provides that. But we
consider this a feature with Tor, not a bug -- the Internet is not flat, and it in fact does look
different depending on where you are. This feature reminds people of this fact. The easy
way to avoid this "feature" is to use http://google.com/ncr.
Note that Google search URLs take name/value pairs as arguments and one of those names
is "hl". If you set "hl" to "en" then Google will return search results in English regardless of
what Google server you have been sent to. On a query this looks like:
 http://google.com/search?q=...&hl=en&...
In Firefox you can search for the google.src file and add the line <input name="hl"
value="en"> to it. Then restart Firefox and it will automatically add the "hl=en"
name/value pair to all queries made from the search bar, so you will get English results
regardless of which Google server you have been sent to. Note that this file is actually
'hidden' as part of the application container on Macs. To get to this file on a Mac you have to
right click on the Firefox application icon and select "Show Package Contents" then navigate
to Contents/MacOS/searchplugins.
Another method is to simply use your country code for accessing Google. This can be
google.be, google.de, google.us and so on. You can also set your language by first selecting
it in the Language Tools section and searching for something simple, then extracting the
language code from the URL. In this example, we'll choose Hebrew:
http://www.google.com/search?lr=lang_iw. Next, use that string in the URL:
http://google.com/intl/iw/. This can obviously be set as your homepage or bookmarked if
necessary.

How do I access Tor hidden services?¶
Tor hidden services are named with a special top-level domain (TLD) name in DNS: .onion.
Since the .onion TLD is not recognized by the official root DNS servers on the Internet, your
application will not get the response it needs to locate the service. Currently, the Tor
directory server provides this look-up service; and thus the look-up request must get to the
Tor network.
Therefore, your application needs to pass the .onion hostname to Tor directly. You can't try
to resolve it to an IP address, since there is no corresponding IP address: the server is
hidden, after all!
So, how do you make your application pass the hostname directly to Tor? You can't use
SOCKS 4, since SOCKS 4 proxies require an IP from the client (a web browser is an example
of a SOCKS client). Even though SOCKS 5 can accept either an IP or a hostname, most
applications supporting SOCKS 5 try to resolve the name before passing it to the SOCKS
proxy. SOCKS 4a, however, always accepts a hostname: You'll need to use SOCKS 4a.
Some applications, such as the browsers Mozilla Firefox and Apple's Safari, support sending
DNS queries to Tor's SOCKS 5 proxy. Most web browsers don't support SOCKS 4a very well,
though. The workaround is to point your web browser at an HTTP proxy, and tell the HTTP
proxy to speak to Tor with SOCKS 4a. We recommend Polipo as your HTTP proxy.
For applications that do not support an HTTP proxy, and so cannot use Polipo, FreeCap is an
alternative. When using FreeCap set proxy protocol to SOCKS 5 and under settings set DNS
name resolving to remote. This will allow you to use almost any program with Tor without
leaking DNS lookups and allow those same programs to access hidden services.
See also the question on DNS.

My Internet connection requires an HTTP or SOCKS proxy.¶
Check out the HttpProxy and HttpsProxy config options in the man page. You will need an
HTTP proxy for doing GET requests to fetch the Tor directory, and you will need an HTTPS
proxy for doing CONNECT requests to get to Tor relays. (It's fine if they're the same proxy.)
Also check out HttpProxyAuthenticator and HttpsProxyAuthenticator if your proxy requires
auth. We only support basic auth currently, but if you need NTLM authentication, check out
 this post in the archives.
Tor can use any proxy to get access to the unfiltered Internet. Use the Socks4Proxy or
Socks5Proxy options.
If your proxies only allow you to connect to certain ports, look at the entry below on
Firewalled clients for how to restrict what ports your Tor will try to access.
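As a sketch, a torrc for a client behind such a proxy might contain lines like the following (the host, port, and credentials here are placeholders; substitute your own):

```
# Hypothetical values -- replace with your proxy's address and credentials.
HttpProxy 10.0.0.1:8080
HttpsProxy 10.0.0.1:8080
HttpProxyAuthenticator myuser:mypass
HttpsProxyAuthenticator myuser:mypass
```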

My firewall only allows a few outgoing ports.¶
Answer moved to our new FAQ page

Is there a list of default exit ports?¶
The default open ports are listed below, but keep in mind that any port or ports can be
opened by the relay operator by configuring it in torrc or modifying the source code. The
default, according to tor.1.in from the source code release tor-0.1.0.8-rc, is:

       reject 0.0.0.0/8          # Reject non-routable IP requests
       reject 169.254.0.0/16     # Reject non-routable IP requests
       reject 127.0.0.0/8        # Reject non-routable IP requests
       reject 192.168.0.0/16     # Reject non-routable IP requests
       reject 10.0.0.0/8         # Reject non-routable IP requests
       reject 172.16.0.0/12      # Reject non-routable IP requests
       reject *:25               # Reject SMTP for anti-spam purposes
       reject *:119              # Reject NNTP (Network News Transfer Protocol)
       reject *:135-139          # Reject NetBIOS (file sharing for older versions of Windows)
       reject *:445              # Reject Microsoft-DS (a.k.a. NetBIOS for newer NT versions)
       reject *:1214             # Reject Kazaa
       reject *:4661-4666        # Reject eDonkey network
       reject *:6346-6429        # Reject Gnutella networks
       reject *:6699             # Reject Napster
       reject *:6881-6999        # Reject BitTorrent network
       accept *:*                # Accept all remaining ports

Thanks to http://www.seifried.org for port references.
The default exit policy also blocks the node's own IP address for security reasons (see
ExitEnclave). It does not, however, block unallocated net blocks (see UnallocatedNetBlocks).

What should I do if I can't use an http proxy with my application?¶
On Unix, you might try tsocks, though it doesn't seem to work so well on FreeBSD (we'd be
happy to hear about alternatives). You might also try socat; it might not be as seamless as
tsocks, but it has worked where tsocks hasn't. There is also proxychains, but I can't get it
to play nicely with Tor at the moment.
For FreeBSD and OpenBSD, you can try dante instead of tsocks; both have a port and
package for dante. Instead of running torify irssi, you would run socksify irssi after
properly setting up dante. See Tor chrooted in OpenBSD for an example dante configuration
that works with Tor.
On Windows, look at SocksCap, or maybe FreeCap if you prefer free software.

I keep seeing these warnings about SOCKS and DNS and information
leaks. Should I worry?¶
The warning is:
Your application (using socks5 on port %d) is giving Tor only an IP address. Applications
that do DNS resolves themselves may leak information. Consider using Socks4A (e.g. via
Polipo or socat) instead.
If you are running Tor to get anonymity, and you are worried about an attacker who is even
slightly clever, then yes, you should worry. Here's why.
The Problem. When your applications connect to servers on the Internet, they need to
resolve hostnames that you can read (like www.torproject.org) into IP addresses that the
Internet can use (like 209.237.230.66). To do this, your application sends a request to a
DNS server, telling it the hostname it wants to resolve. The DNS server replies by telling
your application the IP address.
Clearly, this is a bad idea if you plan to connect to the remote host anonymously: when
your application sends the request to the DNS server, the DNS server (and anybody else
who might be watching) can see what hostname you are asking for. Even if your application
then uses Tor to connect to the IP anonymously, it will be pretty obvious that the user
making the anonymous connection is probably the same person who made the DNS
request.
Where SOCKS comes in. Your application uses the SOCKS protocol to connect to your
local Tor client. There are 3 versions of SOCKS you are likely to run into: SOCKS 4 (which
only uses IP addresses), SOCKS 5 (which usually uses IP addresses in practice), and SOCKS
4a (which uses hostnames).
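To make the difference concrete, here is a minimal sketch (not taken from Tor or any client codebase) of what a SOCKS 4a CONNECT request looks like on the wire. The deliberately invalid destination IP 0.0.0.x is the marker that tells the proxy a hostname follows, so name resolution happens at the proxy (here, Tor) rather than locally:

```python
import struct

def socks4a_connect(hostname: str, port: int, userid: bytes = b"") -> bytes:
    """Build a SOCKS 4a CONNECT request that carries a hostname."""
    return (
        struct.pack(">BBH", 4, 1, port)      # version 4, command 1 (CONNECT), port
        + bytes([0, 0, 0, 1])                # invalid IP 0.0.0.1: "hostname follows"
        + userid + b"\x00"                   # null-terminated user ID
        + hostname.encode("ascii") + b"\x00" # the hostname the proxy will resolve
    )

req = socks4a_connect("www.torproject.org", 443)
print(req[:2])  # b'\x04\x01' -- version and command bytes
```

Because the hostname travels inside the proxy request, no DNS query ever leaves your machine.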
When your application uses SOCKS 4 or SOCKS 5 to give Tor an IP address, Tor guesses
that it 'probably' got the IP address non-anonymously from a DNS server. That's why it
gives you a warning message: you probably aren't as anonymous as you think.
So what can I do? We describe a few solutions below.
   •    If your application speaks SOCKS 4a, use it.
   •   For HTTP (web browsing), either configure your browser to perform remote DNS
        lookups (see the Torify HOWTO for how to do this for some versions of Firefox) or use a
       socks4a-capable HTTP proxy, such as Polipo. See the Tor documentation for more
       information. For instant messaging or IRC, use Gaim or XChat. For other programs,
       consider using freecap (on Win32) or dsocks (on BSD).
   •   If you only need one or two hosts, or you are good at programming, you may be
        able to get a socks-based port-forwarder like socat to work for you; see the Torify
       HOWTO for examples.
   •   Tor ships with a program called tor-resolve that can use the Tor network to look up
       hostnames remotely; if you resolve hostnames to IPs with tor-resolve, then pass the
       IPs to your applications, you'll be fine. (Tor will still give the warning, but now you
       know what it means.)
   •   You can use TorDNS as a local DNS server to rectify the DNS leakage. See the Torify
       HOWTO for info on how to run particular applications anonymously.
If you think that you applied one of the solutions properly but still experience DNS leaks,
please verify there is no third-party application using DNS independently of Tor. Please see
the FAQ entry on whether you're really absolutely anonymous using Tor for some examples.

How do I check if my application that uses SOCKS is leaking DNS requests?¶
There are two steps you need to take here. The first is to make sure that it's using the
correct variant of the SOCKS protocol, and the second is to make sure that there aren't
other leaks.
Step one: add "TestSocks 1" to your torrc file, and then watch your logs as you use your
application. Tor will then log, for each SOCKS connection, whether it was using a 'good'
variant or a 'bad' one. (If you want to automatically disable all 'bad' variants, set
"SafeSocks 1" in your torrc file.)
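As a torrc sketch, the two options from the steps above look like this:

```
# Log, per SOCKS connection, whether the application used a safe variant:
TestSocks 1
# Optionally refuse the unsafe (IP-only) variants outright:
SafeSocks 1
```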
Step two: even if your application is using the correct variant of the SOCKS protocol, there
is still a risk that it could be leaking DNS queries. This problem happens most commonly in
Firefox extensions that resolve the destination hostname themselves, for example to show
you its IP address, what country it's in, etc. These applications may use a safe SOCKS
variant when actually making connections, but they still do DNS resolves locally. If you
suspect your application might behave like this, you should use a network sniffer like
Wireshark and look for suspicious outbound DNS requests. I'm afraid the details of how to
look for these problems are beyond the scope of a FAQ entry though -- find a friend to help
if you have problems.
If your application doesn't behave safely, or you're not sure, you may find it simpler to use
a Tor package that sets Tor up as a transparent proxy. On Windows these include JanusVM
or TorVM; on Linux and BSD you can set this up with these instructions.

Tor/Vidalia prompts for a password at start¶
Answer moved to our new FAQ page

Why do we need Polipo or Privoxy with Tor? Which is better?¶
The first question is, "why an HTTP proxy at all?"
The answer is: because Firefox's SOCKS layer has hard-coded timeouts, among other issues;
see https://bugzilla.mozilla.org/show_bug.cgi?id=280661. Personally, I don't use an HTTP
proxy; I simply let my browser talk to Tor via SOCKS directly. The user experience suffers,
because you'll receive untold numbers of "The connection has timed out" warnings, since
Firefox won't wait for Tor to build a circuit.
Once Firefox fixes bug 280661, we won't need an HTTP proxy at all.
The second question is, "why switch from Privoxy to Polipo?"
Privoxy is fine filtering software that works well for what it is intended to do. However, its
user experience is lacking because it is missing a few features, namely HTTP 1.1 pipelining
and caching of the most-requested objects, and because it needs to see an entire page in
order to parse it before sending it on to the browser. These three shortcomings are the
reason we switched from Privoxy to Polipo.
We've received plenty of feedback that browsing with Polipo in place of Privoxy "feels
faster". The feedback indicates that because Polipo streams the content to the browser for
rendering nearly as fast as it receives it from Tor, the user understands what's going on and
will start to read the web page as it loads. Privoxy, necessarily, will load the entire page,
parse it for items to be filtered, and then send the page on to the browser. The user
experience, especially on a slow circuit, is that nothing happens, the browser activity icon
spins forever, and suddenly a page appears many, many seconds later.
If Tor were vastly faster, Privoxy's mode of operation wouldn't matter. We're working on
making Tor faster. However, inadvertently showing the user how slow Tor can be with
Privoxy was a huge point of complaint, and not what we intended to do.
Does Polipo have some bugs? Sure. Chrisd primarily, among others, is working on fixing
them. At the current rate of progress on Firefox bug 280661, we'll have Polipo fixed before
Mozilla releases the SOCKS layer fix. Chrisd even wrote Mozilla a patch and submitted it on
the bug.
The final point is that this is all free software. You are in control. If you don't like Polipo, but
do like Privoxy, then don't install Polipo and use Privoxy. The power of choice is yours.
See this or-talk posting for further discussion and comparisons. See Andrew's response for
the source for this answer.

Vidalia doesn't work in Windows 2000?¶
No. Vidalia doesn't work in Win2k because of a winsock DLL bug in Win2K. The explanation
for why is here: http://msdn.microsoft.com/en-us/library/ms737931. If you don't want to
recompile Vidalia yourself, this site offers a replacement DLL (with source) that appears to
work: http://codemagnet.blogspot.com/2007/10/winsock2-replacement.html and
 http://martin.brenner.de/files/winsock2_getaddrinfo.rar

Tor Browser Bundle¶
There is no Flash in TBB!¶
Flash is currently not safe to use for a variety of reasons. We are working on making it
possible to use Flash via TBB (#3547), but it isn't ready yet.

I'm on OSX or Linux and I want to run another application through the Tor
launched by Tor Browser Bundle. How do I predict my SOCKS port?¶
In Vidalia, go to Settings->Advanced and uncheck the box that says 'Configure ControlPort
automatically'. Your SOCKS port will then be on 9050.
I need an HTTP proxy.¶
Feel free to install Privoxy or Polipo. Our Polipo config file is here:
 https://gitweb.torproject.org/torbrowser.git/blob_plain/HEAD:/build-
scripts/config/polipo.conf. However, please realize that this approach is unsupported and
you should instead use the SOCKS port directly.

I want to leave Tor Browser Bundle running but close the browser.¶
We're working on a way to make this possible on all platforms. Please be patient.

I want to use a different browser with Tor.¶
This is unsupported and we strongly recommend that you not do this. For more information
about how Torbutton protects you, please read the Torbutton design document:
 https://www.torproject.org/torbutton/en/design/

I want to install my favorite extension in TBB. How do I do it?¶
You can install extensions in TBB the same way you install them in a normal Firefox.

Do I have to reinstall my extensions every time I upgrade TBB?¶
If you are extracting a new TBB over the old TBB directory, assuming there are no version
conflicts between a new Firefox and your old extensions, it should work. If it doesn't, please
let us know by filing a bug.

Running a Tor relay¶
How do I decide if I should run a relay?¶
We're looking for people with reasonably reliable Internet connections that have at least 20
kilobytes/s of bandwidth each way. If that's you, please consider helping out.

Why isn't my relay being used more?¶
If your relay is relatively new, give it time. Tor decides which relays it uses heuristically,
based on reports from Bandwidth Authorities. These authorities take measurements of your
relay's capacity and, over time, direct more traffic there until it reaches an optimal load.
If you've been running a relay for a while and are still having issues, try asking on the tor-
relays list.

How can I get Tor to fully make use of my high capacity connection?¶
See this tor-relays thread.

I'd run a relay, but I don't want to deal with abuse issues.¶
Answer moved to our new FAQ page

Do I get better anonymity if I run a relay?¶
Yes, you do get better anonymity against some attacks.
The simplest example is an attacker who owns a small number of Tor relays. He will see a
connection from you, but he won't be able to know whether the connection originated at
your computer or was relayed from somebody else.
There are some cases where it doesn't seem to help: if an attacker can watch all of your
incoming and outgoing traffic, then it's easy for him to learn which connections were relayed
and which started at you. (In this case he still doesn't know your destinations unless he is
watching them too, but you're no better off than if you were an ordinary client.)
There are also some downsides to running a Tor relay. First, while we only have a few
hundred relays, the fact that you're running one might signal to an attacker that you place a
high value on your anonymity. Second, there are some more esoteric attacks that are not
as well-understood or well-tested that involve making use of the knowledge that you're
running a relay -- for example, an attacker may be able to "observe" whether you're
sending traffic even if he can't actually watch your network, by relaying traffic through your
Tor relay and noticing changes in traffic timing.
It is an open research question whether the benefits outweigh the risks. A lot of that
depends on the attacks you are most worried about. For most users, we think it's a smart
move.

Why doesn't my Windows (or other OS) Tor relay run well?¶
Tor relays work best on Linux, FreeBSD 5.x+, OS X Tiger or later, and Windows Server
2003.
You can probably get it working just fine on other operating systems too, but note the
following caveats:
   •   Versions of Windows without the word "server" in their name sometimes have
       problems. This is especially the case for Win98, but it also happens in some cases for
        XP, especially if you don't have much memory. The problem is that we don't use the
        networking system calls in a very Windows-like way, so we run out of space in a
        fixed-size memory area known as the non-paged pool, and then everything goes
        bad. The symptom is an assert error with the message "No buffer space available
        [WSAENOBUFS] [10055]". You can read more here.
   •   Most developers who contribute to Tor work with Unix-like operating systems. It
       would be great if more people with Windows experience helped out, so we can
       improve Tor's usability and stability on Windows.
   •   More esoteric or archaic operating systems, like SunOS 5.9 or Irix64, may have
       problems with some libevent methods (devpoll, etc), probably due to bugs in
       libevent. If you experience crashes, try setting the EVENT_NODEVPOLL or equivalent
       environment variable.

So I can just configure a nickname and ORPort and join the network?¶
Yes. You can join the network and be a useful relay just by configuring your Tor to be a
relay and making sure it's reachable from the outside.
30 Seconds to a Tor Relay:
   •   Configure a Nickname:
Nickname ididnteditheconfig
   •   Configure ORPort:
ORPort 9001
   •   Configure ContactInfo:
ContactInfo human@…
     •    Start Tor. Watch the log file for a log entry that states:
[notice] router_orport_found_reachable(): Self-testing indicates your ORPort is reachable
from the outside. Excellent. Publishing server descriptor.
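Put together, the whole relay configuration from the steps above is just a few torrc lines (the nickname and contact address are the placeholder examples used above):

```
Nickname ididnteditheconfig
ORPort 9001
ContactInfo human@…
```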

I want to upgrade/move my relay. How do I keep the same key?¶
When upgrading your Tor relay, or running it on a different computer, the important part is
to keep the same nickname (defined in your torrc file) and the same identity key (stored in
"keys/secret_id_key" in your DataDirectory).
This means that if you're upgrading your Tor relay and you keep the same torrc and the
same DataDirectory, then the upgrade should just work and your relay will keep using the
same key. If you need to pick a new DataDirectory, be sure to copy your old
keys/secret_id_key over.

How do I run my Tor relay as an NT service?¶
You can run Tor as a service on all versions of Windows except Windows 95/98/ME. This
way you can run a Tor relay without needing to always have Vidalia running.
If you've already configured your Tor to be a relay, please note that when you enable Tor as
a service, it will use a different DataDirectory, and thus will generate a different key. If you
want to keep using the old key, see the Upgrading your Tor relay FAQ entry for how to
restore the old identity key.
To install Tor as a service, you can simply run:

         tor -install

A service called Tor Win32 Service will be installed and started. This service will also
automatically start every time Windows boots, unless you change the Start-up type. An
easy way to check the status of Tor, start or stop the service, and change the start-up type
is by running services.msc and finding the Tor service in the list of currently installed
services.
Optionally, you can specify additional options for the Tor service using the -options
argument. For example, if you want Tor to use C:\tor\torrc, instead of the default torrc, and
open a control port on port 9051, you would run:

         tor -install -options -f C:\tor\torrc ControlPort 9051

If you are running Tor 0.1.1.x, you will need to move your torrc file from "\Documents and
Settings\user name\Application Data\Tor" to the same folder as your executable before
installing the Tor service.
If you have Tor 0.1.0.12 or later, you can also start or stop the Tor service from the
command line by typing:

          tor -service start

or

          tor -service stop
To remove the Tor service, you can run the following command:

     tor -remove

If you are running Tor as a service and you want to uninstall Tor entirely, be sure to run the
service removal command (shown above) first before running the uninstaller from
"Add/Remove Programs". The uninstaller is currently not capable of removing the active
service.

Can I run a Tor relay from my virtual server account?¶
Some ISPs are selling "vserver" accounts that provide what they call a virtual server -- you
can't actually interact with the hardware, and they can artificially limit certain resources
such as the number of file descriptors you can open at once. Competent vserver admins are
able to configure your server to not hit these limits. For example, in SWSoft's Virtuozzo,
investigate /proc/user_beancounters. Look for "failcnt" in tcpsndbuf, tcprcvbuf,
numothersock, and othersockbuf, and ask for these to be increased accordingly. Some users
have seen the following settings work well:
resource       held     maxheld   barrier   limit     failcnt
tcpsndbuf      46620    48840     3440640   5406720   0
tcprcvbuf      0        2220      3440640   5406720   0
othersockbuf   243516   260072    2252160   4194304   0
numothersock   151      153       720       720       0
Xen, VirtualBox, and VMware virtual servers normally have no such limits.
If the vserver admin will not increase system limits another option is to reduce the memory
allocated to the send and receive buffers on TCP connections Tor uses. An experimental
feature to constrain socket buffers has recently been added. If your version of Tor supports
it, set "ConstrainedSockets 1" in your configuration. See the tor man page for additional
details about this option.
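A hedged torrc sketch of that option (the buffer-size line is an assumption on my part; check your Tor version's man page before relying on either option):

```
# Shrink per-socket TCP buffers to fit within vserver memory limits.
ConstrainedSockets 1
# Optional buffer size in bytes (assumed option -- verify in the man page):
ConstrainedSockSize 8192
```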
Unfortunately, since Tor currently requires you to be able to connect to all the other Tor
relays, we need you to be able to use at least 1024 file descriptors. This means we can't
make use of Tor relays that are crippled in this way.
We hope to fix this in the future, once we know how to build a Tor network with restricted
topologies -- that is, where each node connects to only a few other nodes. But this is still a
long way off.

I want to run more than one relay.¶
Answer moved to our new FAQ page

My relay is picking the wrong IP address.¶
Tor guesses its IP address by asking the computer for its hostname, and then resolving that
hostname. Often people have old entries in their /etc/hosts file that point to old IP
addresses.
If that doesn't fix it, you should use the "Address" config option to specify the IP you want it
to pick. If your computer is behind a NAT and it only has an internal IP address, see the
following FAQ entry on dynamic IP addresses.
Also, if you have many addresses, you might also want to set "OutboundBindAddress" so
external connections come from the IP you intend to present to the world.
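As a sketch (203.0.113.5 is a documentation-range placeholder; use your real public IP):

```
# Force both the advertised address and the source address of outbound traffic:
Address 203.0.113.5
OutboundBindAddress 203.0.113.5
```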
I don't have a static IP.¶
Tor can handle relays with dynamic IP addresses just fine. Just leave the "Address" line in
your torrc blank, and Tor will guess.
Alas, there are bugs with this feature every so often, so if it's not working for you and you
can demonstrate it, please help us improve it. You may find the 0.2.0.x version of Tor to be
better at guessing than the earlier versions.

I'm behind a NAT/Firewall¶
See http://portforward.com/ for directions on how to port forward with your NAT/router
device.
If your relay is running on an internal network, you need to set up port forwarding.
Forwarding TCP connections is system dependent, but the firewalled-clients FAQ entry offers
some examples on how to do this.
Also, here's an example of how you would do this on GNU/Linux if you're using iptables:
/sbin/iptables -A INPUT -i eth0 -p tcp --destination-port 9001 -j ACCEPT
You may have to change "eth0" if you have a different external interface (the one connected
to the Internet). Chances are you have only one (except the loopback) so it shouldn't be too
hard to figure out.

My cable/dsl modem keeps crashing. What's going on?¶
Tor relays hold many connections open at once. This is more intensive use than your cable
modem (or other home router) would ever get normally. So if there are any bugs or
instabilities, they might show up now.
If your router/etc keeps crashing, you've got two options. First, you should try to upgrade
its firmware. If you need tips on how to do this, ask Google or your cable / router provider,
or try the Tor IRC channel.
Usually the firmware upgrade will fix it. If it doesn't, you will probably want to get a new
(better) router.

Why do I get portscanned more often when I run a Tor relay?¶
If you allow exit connections, some services that people connect to from your relay will
connect back to collect more information about you. For example, some IRC servers connect
back to your identd port to record which user made the connection. (This doesn't really
work for them, because Tor doesn't know this information, but they try anyway.) Also, users
exiting from you might attract the attention of other users on the IRC server, website, etc.
who want to know more about the host they're relaying through.
Another reason is that groups who scan for open proxies on the Internet have learned that
Tor relays sometimes expose their SOCKS port to the world. We recommend that you bind
your SOCKS port to local networks only.
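A torrc sketch of that recommendation:

```
# Accept SOCKS connections only from this machine, not from the Internet:
SocksListenAddress 127.0.0.1
```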
In any case, you need to keep up to date with your security. See this article on operational
security for Tor relays for more suggestions.

I have more than one CPU. Does this help?¶
Yes. You can set your NumCpus config option in torrc to the number of CPUs you have, and
Tor will spawn this many cpuworkers to deal with public key operations in parallel.
This option has no effect for clients.
Why is my Tor relay using so much memory?¶
Answer moved to our new FAQ page

What bandwidth shaping options are available to Tor relays?¶
There are two options you can add to your torrc file:
   •   BandwidthRate is the maximum long-term bandwidth allowed (bytes per second).
       For example, you might want to choose "BandwidthRate 2 MB" for 2 megabytes per
       second (a fast connection), or "BandwidthRate 50 KB" for 50 kilobytes per second (a
       medium-speed cable connection). The minimum BandwidthRate is 20 kilobytes per
       second.
   •   BandwidthBurst is a pool of bytes used to fulfill requests during short periods of
       traffic above BandwidthRate, while still keeping the average over a long period to
       BandwidthRate. A low Rate but a high Burst enforces a long-term average while still
       allowing more traffic during peak times if the average hasn't been reached lately. For
       example, if you choose "BandwidthBurst 50 KB" and also use that for your
       BandwidthRate, then you will never use more than 50 kilobytes per second; but if
       you choose a higher BandwidthBurst (like 1 MB), it will allow more bytes through
       until the pool is empty.
If you have an asymmetric connection (upload less than download), such as a cable modem,
you should set BandwidthRate to less than your smaller bandwidth (usually that's the
upload bandwidth). (Otherwise, you could drop many packets during periods of maximum
bandwidth usage -- you may need to experiment with which values make your connection
comfortable.) Then set BandwidthBurst to the same as BandwidthRate.
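For instance, a hypothetical torrc for a cable line with roughly 80 KB/s of measured upload capacity (the numbers are illustrative, not a recommendation):

```
# Stay safely below the measured upload capacity; no extra burst:
BandwidthRate 60 KB
BandwidthBurst 60 KB
```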
Linux-based Tor nodes have another option at their disposal: they can prioritize Tor traffic
below other traffic on their machine, so that their own personal traffic is not impacted by
Tor load. A script to do this can be found in the Tor source distribution's contrib directory.
Additionally, there are hibernation options where you can tell Tor to only serve a certain
amount of bandwidth per time period (such as 100 GB per month). These are covered in the
hibernation entry below.

Does BandwidthRate really work?¶
Yes, it really works. Reread the above entry on limiting the required bandwidth. Note well
this point:
   •   It is in Bytes, not Bits.
(Of course it's always possible that there is a bug. If you are certain you found one, please
let us know on the talk mailing list.)

How can I limit the total amount of bandwidth used by my Tor relay?¶
The accounting options in the torrc file allow you to specify the maximum amount of bytes
your relay uses for a time period.

       AccountingStart day week month [day] HH:MM

This specifies when the accounting should reset. For instance, to set up a total amount of
bytes served for a week (that resets every Wednesday at 10:00am), you would use:

       AccountingStart week 3 10:00

       AccountingMax N bytes KB MB GB TB

This specifies the maximum amount of data your relay will send during an accounting
period, and the maximum amount of data your relay will receive during an accounting
period. When the accounting period resets (from AccountingStart), the counters for
AccountingMax are reset to 0.
Example. Let's say you want to allow 1 GB of traffic every day in each direction and the
accounting should reset at noon each day:

      AccountingStart day 12:00

      AccountingMax 1 GB

Note that your relay won't wake up exactly at the beginning of each accounting period. It
will keep track of how quickly it used its quota in the last period, and choose a random point
in the new interval to wake up. This way we avoid having hundreds of relays working at the
beginning of each month but none still up by the end.
If you have only a small amount of bandwidth to donate compared to your connection
speed, we recommend you use daily accounting, so you don't end up using your entire
monthly quota in the first day. Just divide your monthly amount by 30. You might also
consider rate limiting to spread your usefulness over more of the day: if you want to offer X
GB in each direction, you could set your BandwidthRate to 20*X. For example, if you have
10 GB to offer each way, you might set your BandwidthRate to 200 KB: this way your relay
will always be useful for at least half of each day.
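Putting the 10 GB example above into a torrc, the daily-accounting and rate-limiting advice might look like this sketch (the reset time is an illustrative choice):

```
## Reset accounting at midnight every day
AccountingStart day 00:00
## Offer 10 GB per day in each direction
AccountingMax 10 GB
## 20*X with X = 10: keeps the relay useful for at least half of each day
BandwidthRate 200 KB
BandwidthBurst 200 KB
```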

Why does my relay write more bytes onto the network than it reads?¶
You're right, for the most part a byte into your Tor relay means a byte out, and vice versa.
But there are a few exceptions:
If you open your DirPort, then Tor clients will ask you for a copy of the directory. The
request they make (an HTTP GET) is quite small, and the response is sometimes quite large.
This probably accounts for most of the difference between your "write" byte count and your
"read" byte count.
Note that in Tor 0.1.1.8-alpha and later, your relay is more intelligent about deciding
whether to advertise its DirPort. The main change is to not advertise it if we're running at
capacity and either a) we could hibernate or b) our capacity is under 50kB and we're using a
DirPort above 1024.
Another minor exception shows up when you operate as an exit node, and you read a few
bytes from an exit connection (for example, an instant messaging or ssh connection) and
wrap it up into an entire 512 byte cell for transport through the Tor network.

Why can I not browse anymore after limiting bandwidth on my Tor relay?¶
The parameters assigned in AccountingMax and BandwidthRate apply to both the client and
relay functions of the Tor process. Thus you may find that you are unable to browse as soon
as your Tor goes into hibernation, signaled by this entry in the log:

       localhost Tor: consider_hibernation(): Bandwidth soft limit reached; commencing hibernation.
The solution is to run two Tor processes - one relay and one client, each with its own config.
One way to do this (if you are starting from a working relay setup) is as follows:
   •    In the relay's torrc file, simply set the SocksPort to 0.
   •    Create a new client torrc file from the torrc.sample and ensure it uses a different log
        file from the relay. One naming convention is torrc.client and torrc.relay.
   •    Modify the Tor client and relay startup scripts to include -f /path/to/correct/torrc,
        pointing at the relay or client file as appropriate.
   •    On Linux/BSD/OSX, renaming the startup scripts to Tor.client and Tor.relay may
        make separation of the configs easier.
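As a sketch, the two config files might differ only in lines like these; the ports and log paths are illustrative assumptions:

```
## torrc.relay -- relay only, no local SOCKS listener
SocksPort 0
ORPort 9001
Log notice file /var/log/tor/relay.log

## torrc.client -- client only, no relay
SocksPort 9050
Log notice file /var/log/tor/client.log
```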

How can I make my relay accessible to people stuck behind restrictive
firewalls?¶
Expose your Tor relay on port 443 (HTTPS) so that people whose firewalls restrict them to
HTTPS can still get to it. Also, you should expose your directory mirror on port 80 (that even
works if Apache is already listening there).
If you're using the version of Tor packaged for Debian (or Debian-based distributions
like Ubuntu) then you can do this by setting ORPort to 443 and DirPort to 80 in your relay's
torrc.
However, if you aren't using Tor's deb package then this will take some more work. Binding
to ports under 1024 usually requires you to run as root, and running Tor as root is not
recommended (in case there are unknown exploitable bugs). Instead, you should configure
Tor to advertise its orport as 443, but really bind to another port (such as 9001). Then, set
up your computer to forward incoming connections from port 443 to port 9001.
The Tor side is pretty easy: just set "ORPort 443" and "ORListenAddress 0.0.0.0:9001" in your
torrc file. This will make your Tor relay listen for connections to any of its IPs on port 9001,
but tell the world that it's listening on port 443 instead. Similarly, "DirPort 80" and
"DirListenAddress 0.0.0.0:9030" will bind to port 9030 locally but advertise port 80.
If your relay has multiple IP addresses and you want to advertise a port on an IP address
that isn't your default IP, you can do this with Tor's "Address" config option.
Forwarding TCP connections is system dependent, however. Here are some possibilities (you
can put them in your rc.local so they execute at boot):
   •    On Linux 2.4 or 2.6 (with iptables):

            iptables -t nat -A PREROUTING -p tcp -d $IP --dport 443 \
                -j DNAT --to-destination $IP:9001

        Assuming you have a simple, consumer-level NAT gateway/firewall that is
        configured to forward TCP requests on port 443 of your external (WAN) IP to port
        443 of your Tor relay, then "$IP", in the command above, refers to the internal
        (LAN) IP address of your Tor relay. Often (but not always), this will begin with
        192.168....
   •    If you want to make this redirection work from localhost, add the following rule as
        well:

            iptables -t nat -A OUTPUT -p tcp -d $external_IP --dport 443 \
                -j DNAT --to-destination $internal_IP:9001

        Here, "$internal_IP" is the same as "$IP" in the previous example, but
        "$external_IP" refers to the WAN IP of your gateway/firewall.
   •    When using shorewall (version 2.2.3) you may find it helpful to add something
        like this (inside /etc/shorewall/rules):

            # DirListenAddress $IP:9091
            DNAT      net       $FW:$IP:9091     tcp       80        -         $IP
            ACCEPT    $FW:$IP          net       tcp       9091
            # ORListenAddress $IP:9090
            DNAT      net       $FW:$IP:9090     tcp       443       -         $IP
            ACCEPT    $FW:$IP          net       tcp       9090

        Don't forget to tune your default policy (/etc/shorewall/policy) so that it doesn't log
        those rules when they're triggered.
   •    With ssh (do not use in conjunction with DirPolicy):

            ssh -fNL 443:localhost:9001 localhost

        Note: if you get an error message "channel 2: open failed: connect failed:
        Connection refused", try replacing "localhost" with "127.0.0.1" in the ssh command.
   •    To offer your directory mirror on port 80, where Apache is already listening, add this
        to your Apache config:

            <IfModule mod_proxy.c>
                ProxyPass /tor/ http://localhost:9030/tor/
                ProxyPassReverse /tor/ http://localhost:9030/tor/
            </IfModule>

        Ideally you wouldn't log those requests. That's not very hard either: remove your
        normal AccessLog, and use a CustomLog:

            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%
         {User-Agent}i\"" combined

            ...

            SetEnvIf Request_URI "^/tor/" request_is_for_tor=yes

            CustomLog /var/log/apache/combined.log combined env=!
         request_is_for_tor
                CustomLog /dev/null common env=request_is_for_tor

        . Refer to the Apache documentation for why this works:
         http://httpd.apache.org/docs/mod/mod_log_config.html#customlog and
         http://httpd.apache.org/docs/mod/mod_setenvif.html
   •    To offer your directory on port 80 when Apache (or anything else) is not listening, use a
        port redirection for the DirPort, as per the ORPort method described earlier in this section.
   •    On Linux 2.4 or 2.6 (with iptables):

            iptables -t nat -A PREROUTING -p tcp -d $IP --dport 80 \
                -j DNAT --to-destination $IP:9030
   •    On OpenBSD/FreeBSD/NetBSD with PF (Tutorial). Assume you have a 3com 905b
        card connected to an Internet gateway:

            # Redirect traffic coming in on xl0 from any to $IP:443 to localhost:9001
            rdr on xl0 proto tcp from any to $IP port 443 -> $IP port 9001
   •    On Mac OS X (tested on Leopard, might work on Panther/Tiger as well):

            sudo ipfw add fwd 127.0.0.1,9030 tcp from any to me 80 in
            sudo ipfw add fwd 127.0.0.1,9001 tcp from any to me 443 in

   •    If you just use an external NAT router as your firewall, you only need to do the port
        forwarding through that.
Volunteers: please add advice for other platforms if you know how they work.

Bridge related questions¶
   •   See the Bridge manual for details on setting up, publicizing, understanding and
       troubleshooting your bridge.
   •   How long until a new bridge gets some traffic? Hard to answer. We're working on
       better feedback mechanisms for bridge operators.

Can I install Tor on a central server, and have my clients connect to it?¶
Yes. Tor can be configured as a client or a relay on another machine, and can allow other
machines to connect to it for anonymity. This is most useful in an environment
where many computers want a gateway of anonymity to the rest of the world. However, be
forewarned that with this configuration, anyone within your private network (existing
between you and the Tor client/relay) can see what traffic you are sending in clear text. The
anonymity doesn't start until you get to the Tor relay. Because of this, if you are the
controller of your domain and you know everything's locked down, you will be OK, but this
configuration may not be suitable for large private networks where security is key all
around.
Configuration is simple, editing your torrc file's SocksListenAddress according to the
following examples:

       SocksListenAddress 127.0.0.1 #Local interface access only; needs SocksPort greater than 0

       SocksListenAddress 192.168.x.x:9100 #Access to Tor on a specified interface

       SocksListenAddress 0.0.0.0:9100 #Accept from all interfaces

You can state multiple listen addresses, in the case that you are part of several networks or
subnets.

       SocksListenAddress 192.168.x.x:9100 #eth0

       SocksListenAddress 10.x.x.x:9100 #eth1

After this, your clients on their respective networks/subnets would specify a socks proxy
with the address and port you specified SocksListenAddress to be. This is a direct
connection to the Tor relay not running through Polipo or other programs, and may be
susceptible to DNS leaks - check the configuration of each application you allow to run on
the client computers. For more information on setting up clients with a central server, see I
want to run my Tor client on a different computer than my applications.
Please note that the SocksPort configuration option gives the port ONLY for localhost
(127.0.0.1). When setting up your SocksListenAddress(es), you need to give the port with
the address, as shown above.
If you are interested in forcing all outgoing data through the central Tor client/relay, instead
of the server only being an optional proxy, you may find the program iptables (for *nix)
useful.
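One common approach is a sketch like the following, assuming the central machine acts as the LAN's gateway and its torrc enables Tor's TransPort and DNSPort options (the interface name eth1 and the ports 9040/5353 are illustrative assumptions; adapt them to your network). These rules could go in the gateway's firewall script:

```
## Assumed torrc lines on the gateway:
##   TransPort 9040
##   DNSPort 5353

# Redirect DNS queries arriving from the LAN interface into Tor's DNSPort
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 -j REDIRECT --to-ports 5353
# Redirect new TCP connections from the LAN into Tor's transparent proxy port
iptables -t nat -A PREROUTING -i eth1 -p tcp --syn -j REDIRECT --to-ports 9040
```

Note that this forces traffic through Tor at the network layer; applications on the clients no longer need SOCKS configuration, but UDP traffic other than DNS will simply be dropped.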

How do I provide a hidden service?¶
See the official hidden service configuration instructions

What is the BadExit flag?¶
When an exit is misconfigured or malicious it's assigned the BadExit flag. This tells Tor to
avoid exiting through that relay so, in effect, relays with this flag become non-exits.

I got the BadExit flag. Why did that happen?¶
If you got this flag then we either discovered a problem or suspicious activity coming from
your exit and weren't able to contact you. The reasons for most flaggings are documented on
the bad relays wiki. Please contact us so we can sort out the issue!

I'm facing legal trouble. How do I prove that my server was a Tor relay at a
given time?¶
To check if you were a relay at a given time go here. If you need a signed letter to this
effect then let us know.

I'm still having issues. Where can I get help?¶
For help try the #tor channel on oftc and the tor-relays list. Both have Tor developers and
veteran relay operators who are usually more than happy to help Tor newcomers.

Development¶
Who is responsible for Tor?¶
 Roger Dingledine and Nick Mathewson are the main developers of Tor. You can read more
at Tor's People page.

What do these weird version numbers mean?¶
Versions of Tor before 0.1.0 used a strange and hard-to-explain version scheme. Let's
forget about those.
Starting with 0.1.0, versions all look like this: MAJOR.MINOR.MICRO(.PATCHLEVEL)(-TAG).
The stuff in parentheses is optional. MAJOR, MINOR, MICRO, and PATCHLEVEL are all
numbers. Only one release is ever made with any given set of these version numbers. The
TAG lets you know how stable we think the release is: "alpha" is pretty unstable; "rc" is a
release candidate; and no tag at all means that we have a final release. If the tag ends with
"-cvs", you're looking at a development snapshot that came after a given release.
So for example, we might start a development branch with (say) 0.1.1.1-alpha. The
patchlevel increments consistently as the status tag changes, for example, as in: 0.1.1.2-
alpha, 0.1.1.3-alpha, 0.1.1.4-rc, 0.1.1.5-rc, etc. Eventually, we would release 0.1.1.6. The
next stable release would be 0.1.1.7.
Why do we do it like this? Because every release has a unique version number, it is easy for
tools like package managers to tell which release is newer than another. The tag makes it
easy for users to tell how stable the release is likely to be.
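Because the numeric fields compare componentwise, even a generic version sort orders these releases correctly. For instance, GNU coreutils' sort -V (an illustrative tool, not something Tor itself uses) on the example numbers from the development branch above:

```shell
# Sort the example release numbers from the development branch above.
# GNU sort -V compares version fields numerically, so 0.1.1.2 < 0.1.1.4 < 0.1.1.6.
printf '0.1.1.6\n0.1.1.2-alpha\n0.1.1.4-rc\n' | sort -V
# prints: 0.1.1.2-alpha, 0.1.1.4-rc, 0.1.1.6 (one per line)
```

Note that a generic version sort does not compare suffix tags semantically (it would place 0.1.1.6 before a hypothetical 0.1.1.6-rc), which is one reason every Tor release gets a fresh patchlevel.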

How do I set up my own private Tor network?¶
If you want to experiment locally with your own network, or you're cut off from the Internet
and want to be able to mess with Tor still, then you may want to set up your own separate
Tor network.
To set up your own Tor network, you need to run your own authoritative directory servers,
and you need to configure each client and relay so it knows about your directory servers
rather than the default public ones.
   1. Grab the latest release.
   2. Make one directory for each Tor node you intend to run (including clients). This
      directory will be called the node's data directory. Now make a temporary torrc file
      with this content:

          DirServer test 127.0.0.1:5000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
          ORPort 5000

   3. For each directory authority, run:

          tor --list-fingerprint --DataDirectory /path/to/datadir/ -f tmp.torrc

      This will give you the nodes' fingerprints. You will need the fingerprints of your
      directory authorities in the next step, so note them down. The format is "hostname
      FINGERPRINT"; you just need the FINGERPRINT part.
   4. For all the directory authorities that you plan to use (using just one is enough, but
      you can run as many as you want), you have to create the v3 directory identity keys
      before you can continue. Run this command:

          tor-gencert --create-identity-key

      Pick a password you like, then move the created authority_certificate and
      authority_signing_key into the keys directory inside your first authority's data
      directory. Look inside the authority_certificate file. Note the fingerprint line: the 40
      characters are this authority's v3ident. You will need this later. Repeat for all other
      authorities you plan to set up.
   5. You will need to set the following options on all your nodes, so start a new empty
      torrc file. We will use this as the basis torrc. Put the following options in:
          a. Make them aware that they're part of a private Tor network:

                 TestingTorNetwork 1

          b. Tell them about all the directory authorities in your network. For each of your
             directory authorities, add one line:

                 DirServer NODENAME v3ident=IDENTITY orport=ORPORT IP:DIRPORT FINGERPRINT

             Example:

                 DirServer myCoolAuthority
                 v3ident=E2A2AF570166665D738736323258169CC61D8A8B orport=9001
                 127.34.23.1:9031 FFCB 46DB 1339 DA84 674C DDDD FFFF 6434 C437 0441

          c. Each Tor node needs its own data directory:

                 DataDirectory .

      The file is now suitable to be used for all your clients. Copy it into the data
      directories. If you want to use more than one node with an open SocksPort, make
      sure to pick a different one for each node and add it to that node's configuration:

          SocksPort PORT

   6. Now it is time to set up the relay configuration. Each relay needs at least an ORPort
      to work. Make sure you pick a different one for each relay!
          a. For each relay, add this to its torrc:

                 ORPort PORT

          b. For this example, we also want to disable the SocksPort for all our relays and
             authorities:

                 SocksPort 0

          c. To avoid problems with IP address resolution, set the local IP address:

                 Address IP

   7. Additional setup for authorities:
          a. Each authority needs an ORPort. Set them up as above, and make sure that
             the ORPort matches the one you put into the sample torrc. Also, set up a
             DirPort that matches:

                 DirPort PORT

          b. Also, set them up as authorities:

                 V3AuthoritativeDirectory 1
                 V2AuthoritativeDirectory 1
                 AuthoritativeDirectory 1

          c. We need contact information for authorities:

                 ContactInfo test@test.test

   8. Starting the network: in each data directory, start a new Tor process:

          tor -f torrc-file.torrc

   9. Modify the torrc files to meet your needs. For example, if you plan to play with
      Hidden Services, setting MinUptimeHidServDirectoryV2 0 is a good idea.

How can I make my Java program use the Tor Network?¶
The newest versions of Java now have SOCKS4/5 support built in. Unfortunately, the
SOCKS interface is not very well documented and may still leak your DNS lookups. The best
way to use Tor is to interface with the SOCKS protocol directly or go through an application-level
proxy that speaks SOCKS4a. For an example and libraries that implement the SOCKS4a
connection, see Joe Foley's TorLib in the TinFoil Project at
http://web.mit.edu/foley/www/TinFoil/.

What is libevent?¶
When you want to deal with a bunch of net connections at once, you have a few options:
One is multithreading: you have a separate micro-program inside the main program for
each net connection that reads and writes to the connection as needed. This, performance-
wise, sucks.
Another is asynchronous network programming: you have a single main program that finds
out when various net connections are ready to read/write, and acts accordingly.
The problem is that the oldest ways to find out when net connections are ready to
read/write suck. And the newest ways are finally fast, but are not available on all
platforms.
This is where Libevent comes in: it wraps all these ways to find out whether net
connections are ready to read/write, so that Tor (and other programs) can use the fastest
one that your platform supports, while still working on older platforms (these methods are
all different depending on the platform). So Libevent presents a consistent and fast interface
to select, poll, kqueue, epoll, /dev/poll, and Windows select.
However, on the Win32 platform the only good way to do fast IO with hundreds of sockets
is using overlapped IO, which is grossly unlike every other BSD sockets interface.
Website: http://www.monkey.org/~provos/libevent/

What do I need to do to get a new feature into Tor?¶
For a new feature to go into Tor, it needs to be designed (explain what you think Tor should
do), argued to be secure (explain why it's better or at least as good as what Tor does now),
specified (explained at the byte level at approximately the level of detail in tor-spec.txt),
and implemented (done in software).
You probably shouldn't count on other people doing all of these steps for you: people who
are skilled enough to do this stuff generally have their own favorite feature requests.

Anonymity and Security¶
What protections does Tor provide?¶
Internet communication is based on a store-and-forward model that can be understood in
analogy to postal mail: Data is transmitted in blocks called IP datagrams or packets. Every
packet includes a source IP address (of the sender) and a destination IP address (of the
receiver), just as ordinary letters contain postal addresses of sender and receiver. The way
from sender to receiver involves multiple hops of routers, where each router inspects the
destination IP address and forwards the packet closer to its destination. Thus, every router
between sender and receiver learns that the sender is communicating with the receiver. In
particular, your local ISP is in the position to build a complete profile of your Internet usage.
In addition, every server in the Internet that can see any of the packets can profile your
behaviour.
The aim of Tor is to improve your privacy by sending your traffic through a series of proxies.
Your communication is encrypted in multiple layers and routed via multiple hops through
the Tor network to the final receiver. More details on this process can be found in the Tor
overview. Note that all your local ISP can observe now is that you are communicating with
Tor nodes. Similarly, servers in the Internet just see that they are being contacted by Tor
nodes.
Generally speaking, Tor aims to solve three privacy problems:
First, Tor prevents websites and other services from learning your location, which they can
use to build databases about your habits and interests. With Tor, your Internet connections
don't give you away by default -- you now have the ability to choose, for each
connection, how much information to reveal.
Second, Tor prevents people watching your traffic locally (such as your ISP) from learning
what information you're fetching and where you're fetching it from. It also stops them from
deciding what you're allowed to learn and publish -- if you can get to any part of the Tor
network, you can reach any site on the Internet.
Third, Tor routes your connection through more than one Tor relay so no single relay can
learn what you're up to. Because these relays are run by different individuals or
organizations, distributing trust provides more security than the old one hop proxy
approach.
Note, however, that there are situations where Tor fails to solve these privacy problems
entirely: see the entry below on remaining attacks.

Can exit nodes eavesdrop on communications? Isn't that bad?¶
Yes, the guy running the exit node can read the bytes that come in and out there. Tor
anonymizes the origin of your traffic, and it makes sure to encrypt everything inside the Tor
network, but it does not magically encrypt all traffic throughout the Internet.
This is why you should always use end-to-end encryption such as SSL for sensitive Internet
connections. (The corollary to this answer is that if you are worried about somebody
intercepting your traffic and you're *not* using end-to-end encryption at the application
layer, then something has already gone wrong and you shouldn't be thinking that Tor is the
problem.)
Tor does provide a partial solution in a very specific situation, though. When you make a
connection to a destination that also runs a Tor relay, Tor will automatically extend your
circuit so you exit from that relay. So for example if Indymedia ran a Tor relay on the
same IP address as their website, people using Tor to get to the Indymedia website would
automatically exit from their Tor relay, thus getting *better* encryption and authentication
properties than just browsing there the normal way.
We'd like to make it still work even if the service is nearby the Tor relay but not on the
same IP address. But there are a variety of technical problems we need to overcome first
(the main one being "how does the Tor client learn which relays are associated with which
websites in a decentralized yet non-gamable way?").

What is Exit Enclaving?¶
When a machine that runs a Tor relay also runs a public service, such as a webserver, you
can configure Tor to offer Exit Enclaving to that service. Running an Exit Enclave for all of
your services you wish to be accessible via Tor provides your users the assurance that they
will exit through your server, rather than exiting from a randomly selected exit node that
could be watched. Normally, a tor circuit would end at an exit node and then that node
would make a connection to your service. Anyone watching that exit node could see the
connection to your service, and be able to snoop on the contents if it were an unencrypted
connection. If you run an Exit Enclave for your service, then the exit from the Tor network
happens on the machine that runs your service, rather than on an untrusted random node.
This works when Tor clients wishing to connect to this public service extend their
circuit to exit from the Tor relay running on that same host. For example, if the server at
1.2.3.4 runs a web server on port 80 and also acts as a Tor relay configured for Exit
Enclaving, then Tor clients wishing to connect to the webserver will extend their circuit a
fourth hop to exit to port 80 on the Tor relay running on 1.2.3.4.
Exit Enclaving is disabled by default to prevent attackers from exploiting trust relationships
with locally bound services. For example, often 127.0.0.1 will run services that are not
designed to be shared with the entire world. Sometimes these services will also be bound to
the public IP address, but will only allow connections if the source address is something
trusted, such as 127.0.0.1.
As a result of possible trust issues, relay operators must configure their exit policy to allow
connections to themselves, but they should do so only when they are certain that this is a
feature that they would like. Once certain, turning off the ExitPolicyRejectPrivate option will
enable Exit Enclaving. An example configuration would be as follows:

      ExitPolicy accept 1.2.3.4:80

      ExitPolicy reject 127.0.0.1/8

      ExitPolicyRejectPrivate 0

This option should be used with care as it may expose internal network blocks that are not
meant to be accessible from the outside world or the Tor network. Please tailor your
ExitPolicy to reflect all netblocks to which you want to prohibit access.
Although Exit Enclaving provides benefits, there is a situation where it could allow a rogue
exit node to control where a client may exit. To protect against this, your services should
provide proper SSL authentication to the clients, and then things will work as expected. How
this works is that a Tor client picks an arbitrary circuit to resolve hosts (e.g. example.com).
A rogue exit node could spoof DNS responses for example.com to be the IP address of the
rogue node, rather than the correct node where the service actually runs. The Tor client
would then attempt to use that rogue node as an Exit Enclave. This is only possible for the
first access attempt for example.com; after the first attempt a circuit is established with the
Exit Enclave IP address directly.
While useful, this behavior may go away in the future because it is imperfect. A great idea
but not such a great implementation.

So I'm totally anonymous if I use Tor?¶
No.
First, Tor protects the network communications. It separates where you are from where you
are going on the Internet. What content and data you transmit over Tor is controlled by
you. If you login to Google or Facebook via Tor, the local ISP or network provider doesn't
know you are visiting Google or Facebook. Google and Facebook don't know where you are
in the world. However, since you have logged into their sites, they know who you are. If you
don't want to share information, you are in control.
Second, active content, such as Java, Javascript, Adobe Flash, Adobe Shockwave,
QuickTime, RealAudio, ActiveX controls, and VBScript, are binary applications. These binary
applications run as your user account with your permissions in your operating system. This
means these applications can access anything that your user account can access. Some of
these technologies, such as Java and Adobe Flash for instance, run in what is known as a
virtual machine. This virtual machine may have the ability to ignore your configured proxy
settings, and therefore bypass Tor and share information directly to other sites on the
Internet. The virtual machine may be able to store data, such as cookies, completely
separate from your browser or operating system data stores. Therefore, we recommend
disabling these technologies in your browser to improve the situation.
We produce two pieces of software to help you control the risks to your privacy and
anonymity while using the Internet:
   1.    Torbutton attempts to mitigate many of the anonymity risks when browsing the
        Internet via Tor.
   2.    The Tor Browser Bundle is a pre-configured set of applications to allow you to
        anonymously browse the Internet.
Alternatively, you may find a Live CD or USB operating system more to your liking. Now you
have an entire bootable operating system configured for anonymity and privacy on the
Internet.
Tor is a work in progress. There is still plenty of work left to do for a strong, secure, and
complete solution.

Please explain Tor's public key infrastructure.¶
Answer moved to our new FAQ page

Where can I learn more about anonymity?¶
 Read these papers (especially the ones in boxes) to get up to speed on anonymous
communication systems.

What's this about entry guard (formerly known as "helper") nodes?¶
Answer moved to our new FAQ page

What about powerful blocking mechanisms?¶
An adversary with a great deal of manpower and money, and severe real-world penalties to
discourage people from trying to evade detection, is a difficult test for an anonymity and
anti-censorship system.
The original Tor design was easy to block if the attacker controls Alice's connection to the
Tor network --- by blocking the directory authorities, by blocking all the relay IP addresses
in the directory, or by filtering based on the fingerprint of the Tor TLS handshake. Some
government-level firewalls could easily launch this type of attack, which would make the
whole Tor network no longer usable for the people behind the firewalls. This is one part of
what is commonly known as the "China problem" or the "Chinese firewall problem."
We've made quite a bit of progress on this problem lately. You can read more details in our
 blocking-resistance design document, watch the video of Roger's talk at 24C3, or read the
specifics of our bridge design and implementation.

What attacks remain against onion routing?¶
As mentioned above, it is possible for an observer who can view both you and either the
destination website or your Tor exit node to correlate timings of your traffic as it enters the
Tor network and also as it exits. Tor does not defend against such a threat model.
In a more limited sense, note that if a censor or law enforcement agency has the ability to
obtain specific observation of parts of the network, it is possible for them to verify a
suspicion that you talk regularly to your friend by observing traffic at both ends and
correlating the timing of only that traffic. Again, this is only useful to verify that parties
already suspected of communicating with one another are doing so. In most countries, the
suspicion required to obtain a warrant already carries more weight than timing correlation
would provide.
Furthermore, since Tor reuses circuits for multiple TCP connections, it is possible to
associate non-anonymous and anonymous traffic at a given exit node, so be careful about
what applications you run concurrently over Tor. Perhaps even run separate Tor clients for
these applications.
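As an illustrative sketch of that last suggestion (the file paths and the second port number here are arbitrary examples, not defaults), you could run two independent Tor clients by giving each its own SocksPort and DataDirectory in separate torrc files, then pointing each application at its own SOCKS port:

```
# torrc-web: a Tor client used only by the browser
SocksPort 9050
DataDirectory /home/alice/.tor-web

# torrc-other: a second, independent Tor client for other applications
SocksPort 9052
DataDirectory /home/alice/.tor-other
```

Starting each instance with `tor -f <torrc-file>` gives the two applications disjoint sets of circuits, so an exit node never sees their traffic share a circuit.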

Does Tor resist "remote physical device fingerprinting"?¶
Yes, we resist all of these attacks as far as we know.
These attacks come from examining characteristics of the IP headers or TCP headers and
looking for information leaks based on individual hardware signatures. One example is the
 Oakland 2005 paper that lets you learn if two packet streams originated from the same
hardware, but only if you can see the original TCP timestamps.
Tor transports TCP streams, not IP packets, so we end up automatically scrubbing a lot of
the potential information leaks. Because Tor relays use their own (new) IP and TCP headers
at each hop, this information isn't relayed from hop to hop. Of course, this also means that
we're limited in the protocols we can transport (only correctly-formed TCP, not all IP like
ZKS's Freedom network could) -- but maybe that's a good thing at this stage.

Alternate designs that we don't do (yet)¶
You should send padding so it's more secure.¶
Like all anonymous communication networks that are fast enough for web browsing, Tor is
vulnerable to statistical "traffic confirmation" attacks, where the adversary watches traffic at
both ends of a circuit and confirms his guess that they're communicating. It would be really
nice if we could use cover traffic to confuse this attack. But there are three problems here:
   •   Cover traffic is really expensive. And *every* user needs to be doing it. This adds up
       to a lot of extra bandwidth cost for our volunteer operators, and they're already
       pushed to the limit.
   •   You'd need to always be sending traffic, meaning you'd need to always be online.
       Otherwise, you'd need to be sending end-to-end cover traffic -- not just to the first
       hop, but all the way to your final destination -- to prevent the adversary from
       correlating presence of traffic at the destination to times when you're online. What
       does it mean to send cover traffic to -- and from -- a web server? That is not
       supported in most protocols.
   •   Even if you *could* send full end-to-end padding between all users and all
       destinations all the time, you're *still* vulnerable to active attacks that block the
       padding for a short time at one end and look for patterns later in the path.
In short, for a system like Tor that aims to be fast, we don't see any use for padding, and it
would definitely be a serious usability problem. We hope that one day somebody will prove
us wrong, but we are not optimistic.

You should make every Tor user be a relay.¶
Answer moved to our new FAQ page

You should transport all IP packets, not just TCP packets.¶
Answer moved to our new FAQ page

You should hide the list of Tor relays, so people can't block the exits.¶
There are a few reasons we don't:
   1. We can't help but make the information available, since Tor clients need to use it to
      pick their paths. So if the "blockers" want it, they can get it anyway. Further, even if
      we didn't tell clients about the list of relays directly, somebody could still make a lot
      of connections through Tor to a test site and build a list of the addresses they see.
   2. If people want to block us, we believe that they should be allowed to do so.
      Obviously, we would prefer for everybody to allow Tor users to connect to them, but
      people have the right to decide who their services should allow connections from,
      and if they want to block anonymous users, they can.
   3. Being blockable also has tactical advantages: it may be a persuasive response to
       website maintainers who feel threatened by Tor. Giving them the option may inspire
       them to stop and think about whether they really want to eliminate private access to
       their system, and if not, what other options they might have. The time they might
       otherwise have spent blocking Tor, they may instead spend rethinking their overall
       approach to privacy and anonymity.

You should let people choose their path length.¶
Right now the path length is hard-coded at 3 plus the number of nodes in your path that are
sensitive. That is, in normal cases it's 3, but for example if you're accessing a hidden service
or a ".exit" address it could be 4.
We don't want to encourage people to use paths longer than this -- it increases load on the
network without (as far as we can tell) providing any more security. Remember that the
best way to attack Tor is to attack the endpoints and ignore the middle of the path.
And we don't want to encourage people to use paths of length 1 either. Currently there is no
reason to suspect that investigating a single relay will yield user-destination pairs, but if
many people are using only a single hop, we make it more likely that attackers will seize or
break into relays in hopes of tracing users.
Now, there is a good argument for making the number of hops in a path unpredictable. For
example, somebody who happens to control the last two hops in your path still doesn't
know who you are, but they know for sure which entry node you used. Choosing path length
from, say, a geometric distribution will turn this into a statistical attack, which seems to be
an improvement. On the other hand, a longer path length is bad for usability. We're not
sure of the right trade-offs here. Please write a research paper that tells us what to do.
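As a hedged sketch of the geometric idea mentioned above (this is not how Tor behaves today; the floor of 3 hops and the cap of 8 are arbitrary illustration parameters):

```python
import random

def sample_path_length(p=0.5, minimum=3, maximum=8):
    """Sample an unpredictable circuit length: start at `minimum` hops, then
    keep adding a hop with probability (1 - p) until the coin flip stops us
    or we hit `maximum`. Length is geometrically distributed above the floor."""
    length = minimum
    while length < maximum and random.random() > p:
        length += 1
    return length

# An adversary controlling the last two hops can no longer be certain which
# hop is the entry node, because total length varies from circuit to circuit.
lengths = [sample_path_length() for _ in range(10_000)]
assert all(3 <= n <= 8 for n in lengths)
```

With p = 0.5, roughly half of all circuits stay at the 3-hop minimum, illustrating the usability trade-off: longer average paths cost latency for a purely statistical gain.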

You should split each connection over many paths.¶
We don't currently think this is a good idea. You see, the attacks we're worried about are at
the endpoints: the adversary watches Alice (or the first hop in the path) and Bob (or the
last hop in the path) and learns that they are communicating.
If we make the assumption that timing attacks work well on even a few packets end-to-end,
then having *more* possible ways for the adversary to observe the connection seems to
hurt anonymity, not help it.
Now, it's possible that we could make ourselves more resistant to end-to-end attacks with a
little bit of padding and by making each circuit send and receive a fixed number of cells.
This approach is more well-understood in the context of high-latency systems. See e.g.
 Message Splitting Against the Partial Adversary by Andrei Serjantov and Steven J. Murdoch.
But since we don't currently understand what network and padding parameters, if any,
could provide increased end-to-end security, our current strategy is to minimize the number
of places that the adversary could possibly see.
You should migrate application streams across circuits.¶
This would be great for two reasons. First, if a circuit breaks, we would be able to shift its
active streams onto a new circuit, so they don't have to break. Second, it is conceivable that
we could get increased security against certain attacks by migrating streams periodically,
since leaving a stream on a given circuit for many hours might make it more vulnerable to
certain adversaries.
There are two problems though. First, Tor would need a much more bulky protocol. Right
now each end of the Tor circuit just sends the cells, and lets TCP provide the in-order
guaranteed delivery. If we can move streams across circuits, though, we would need to add
queues at each end of the circuit, add sequence numbers so we can send and receive
acknowledgements for cells, and so forth. These changes would increase the complexity of
the Tor protocol considerably. Which leads to the second problem: if the exit node goes
away, there's nothing we can do to save the TCP connection. Circuits are typically three
hops long, so in about a third of the cases we just lose.
Thus our current answer is that since we can only improve things by at best 2/3, it's not
worth the added code and complexity. If somebody writes a protocol specification for it and
it turns out to be pretty simple, we'd love to add it.
But there are still some approaches we can take to improve the reliability of streams. The
main approach we have now is to specify that streams using certain application ports prefer
circuits to be made up of stable nodes. These ports are specified in the "LongLivedPorts"
torrc option, and they default to 21,22,706,1863,5050,5190,5222,5223,6667,6697,8300. The
definition of "stable" is an open research question, since we can only guess future stability
based on past performance. Right now we judge that a node is stable if it advertises that it
has been up for more than a day. Down the road we plan to refine this so it takes into
account the average stability of the other nodes in the Tor network.
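The option can be set explicitly in torrc; this sketch simply restates the defaults quoted above rather than recommending a particular list:

```
# Prefer circuits built from stable nodes for long-lived interactive
# protocols (FTP, SSH, SILC, MSN, Yahoo, AIM/ICQ, XMPP, IRC, etc.)
LongLivedPorts 21,22,706,1863,5050,5190,5222,5223,6667,6697,8300
```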
* It's not just a 2/3 improvement: stream migration is necessary to truly anonymize hosts
on dynamic-IP connections, as many consumer ISPs provide. Without the ability to migrate
streams, an attacker can observe which long-lived connections end at the moment the
observed person gets a new IP address. With stream migration, the connection can persist
as if nothing had happened. This would make Tor a tool for more than anonymity, since it
improves networking in general. It may not even be that hard to implement, and could be
phased into the protocol gradually: the first step would be to send sequencing information
with the data stream, and future versions could then investigate ways of picking the
connections back up. Security should not be a problem, since we already use strong
cryptography, which lets us authenticate the stream's owner.

You should let the network pick the path, not the client.¶
No. You cannot trust the network to pick the path, because malicious relays could collude
and route you through their colluding friends. This would give an adversary the ability to
watch all of your traffic end to end.

You should use steganography to hide Tor traffic.¶
Many people suggest that we should use steganography to make it hard to notice Tor
connections on the Internet. There are a few problems with this idea though:
First, in the current network topology, the Tor relays list is public and can be accessed by
attackers. An attacker who wants to detect or block anonymous users could always just
notice any connection to or from a Tor relay's IP address.
Second, effective steganography is an extremely hard problem. The protocols we know that
are currently considered secure require incredible effort and bandwidth overhead.
That said, Tor has already started a little bit down this path: Tor clients speak only HTTP
(for the directory) and HTTPS (for Tor connections), so simply looking at the protocol is not
sufficient to identify Tor traffic.

Your default exit policy should block unallocated net blocks too.¶
No, it shouldn't. The default exit policy blocks certain private net blocks, like 10.0.0.0/8,
because they might actively be in use by Tor relays and we don't want to cause any
surprises by bridging to internal networks. Some overzealous firewall configs suggest that
you also block all the parts of the Internet that IANA has not currently allocated. First, this
turns into a problem for them when those addresses *are* allocated. Second, why should
we default-reject something that might one day be useful?
Tor's default exit policy is chosen to be flexible and useful in the future: we allow everything
except the specific addresses and ports that we anticipate will lead to problems.
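As a concrete illustration, a relay operator can state such a policy explicitly in torrc. The sketch below follows the shape described here (reject specific private netblocks, accept the rest); it is not the full default policy, which also rejects certain abuse-prone ports:

```
# Refuse exits into private, internal address space...
ExitPolicy reject 10.0.0.0/8:*
ExitPolicy reject 172.16.0.0/12:*
ExitPolicy reject 192.168.0.0/16:*
# ...but stay flexible about everything else, including
# address space that IANA has not yet allocated.
ExitPolicy accept *:*
```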

Exit policies should be able to block websites, not just IP addresses¶
It would be nice to let relay operators say things like "reject www.slashdot.org" in their exit
policies, rather than requiring them to learn all the IP address space that could be covered
by the site (and then also blocking other sites at those IP addresses).
There are two problems, though. First, users could still get around these blocks. For
example, they could request the IP address rather than the hostname when they exit from
the Tor network. This means operators would still need to learn all the IP addresses for the
destinations in question.
The second problem is that it would allow remote attackers to censor arbitrary sites. For
example, if a Tor operator blocks www1.slashdot.org, and then some attacker poisons the
Tor relay's DNS or otherwise changes that hostname to resolve to the IP address for a
major news site, then suddenly that Tor relay is blocking the news site.

You should change Tor to prevent users from posting certain content.¶
Tor only transports data; it does not inspect the contents of the connections which are sent
over it. In general it's a very hard problem for a computer to determine what is
objectionable content with good true positive/false positive rates, and we are not interested
in addressing this problem.
Further, and more importantly, which definition of "certain content" could we use? Every
choice would lead to a quagmire of conflicting personal morals. The only solution is to have
no opinion.

Tor should support IPv6.¶
That's a great idea! There are two aspects for IPv6 support that Tor needs. First, Tor needs
to support exit to hosts that only have IPv6 addresses. Second, Tor needs to support Tor
relays that only have IPv6 addresses.
The first is far easier: the protocol changes are relatively simple and isolated. It would be
like another kind of exit policy.
The second is a little harder: right now, we assume that (mostly) every Tor relay can
connect to every other. This has problems of its own, and adding IPv6-address-only relays
adds problems too: it means that only relays with IPv6 abilities can connect to IPv6-
address-only relays. This makes it possible for the attacker to make some inferences about
client paths that it would not be able to make otherwise.
There is an IPv6 exit proposal to address the first step for anonymous access to IPv6
resources on the Internet.
Full IPv6 support is definitely on our "someday" list; it will come along faster if somebody
who wants it does some of the work.

Abuse¶
Doesn't Tor enable criminals to do bad things?¶
For the answer to this question and others, please see our new Tor Abuse FAQ.

How do I respond to my ISP about my exit relay?¶
A collection of templates for successfully responding to ISPs is collected here.

Info to help with police or lawyers questions about exit relays¶
Please read the legal FAQ written by EFF lawyers. There's a growing legal directory of
people who may be able to help you.
If you need to check that a Tor exit node was running at a certain date and time on a given
IP address, you can look in the release archive for signed, time-stamped lists of nodes.
Malware

Malware is a catch-all term referring to software that runs on a computer and operates
against the interests of the computer's owner. Computer viruses, worms, trojan horses,
"spyware", rootkits and key loggers are often cited as subcategories of malware. Note that
some programs may belong to more than one of those categories.


How Does Malware Get Onto a Computer?
Some malware is spread by exploiting vulnerabilities in operating systems or application
software. These vulnerabilities are design or programming errors that can allow a clever
programmer to trick the defective software into giving someone else control. Unfortunately,
such vulnerabilities have been found in a wide variety of mainstream software, and more are
detected all the time, both by those trying to fix the vulnerabilities and by those trying to
exploit them.

Another common vector by which malware spreads is to trick the computer user into running
a software program that does something the user wouldn't have wanted. Tricking the user is
a powerful way to take over a computer, because the attacker doesn't have to find a serious
weakness in mainstream software. It is especially difficult to be sure that computers shared
by several users, or a computer in a public place such as a library or Internet café, are not
compromised. If a single user is tricked into running a malware installer, every subsequent
user, no matter how cautious, could be at risk. Malware written by sophisticated
programmers generally leaves no immediately visible signs of its presence.
What is Malware Capable of?
Malware is extremely bad news from a security and privacy perspective. Malware may be
capable of stealing account details and passwords, reading the documents on a computer
(including encrypted documents, if the user has typed in the password), defeating attempts
to access the Internet anonymously, taking screenshots of your desktop, and hiding itself
from other programs. Malware is even capable of using your computer's microphone,
webcam, or other peripherals against you.

The chief limitation on malware's capability is that the author needs to (1) have anticipated
the need for the malware to do something, (2) have spent a substantial amount of effort
programming the malicious feature and testing that it works robustly on numerous different
versions of an operating system, and (3) be free of legal or other restrictions preventing the
implementation of the feature.

Unfortunately, a black market has appeared in recent years that sells malware customized
for various purposes. This has reduced the obstacle listed in item (2) above.

The most alarming feature of malware is that, once installed, it can potentially nullify the
benefits of other security precautions. For example, malware can be used to bypass the
protections of encryption software even if that software is otherwise used properly. On the
other hand, the majority of malware is designed to do other things, like popping up
advertisements or hijacking a computer to send spam.


Is Malware Infection Likely?
Nobody knows how many computers are infected with malware, but informed estimates
range from 40% to almost 90% of computers running Windows operating systems. Infection
rates are lower for MacOS and Linux systems, but this is not necessarily because Windows is
an easier target. Indeed, recent versions of Windows are much improved in security. Rather,
more malware authors target Windows machines because an effective attack will give them
control of more computers.

The risk that any given computer is infected with malware is therefore quite high unless
skilled computer security specialists are putting a substantial amount of effort into securing
the system. With time, any machine on which security updates are not installed promptly is
virtually guaranteed to become infected. It is, however, overwhelmingly likely that the
malware in question will be working on obtaining credit card numbers, eBay account
passwords, or online banking passwords, sending spam, or launching denial of service
attacks, rather than spying on specific individuals or organizations.

Infection by malware run by U.S. law enforcement or other governmental agencies is also
possible, though vastly less likely. There have been a handful of cases in which it is known
that warrants were obtained to install malware to identify a suspect or record their
communications (see the section on CIPAV below). It is unlikely that U.S. government
agencies would use malware except as part of significant and expensive investigations.


How Can You Reduce the Risk of Malware Infection?
Currently, running a minority operating system significantly diminishes the risk of infection,
because fewer malware applications have been targeted at those platforms. (The
overwhelming majority of existing malware targets only a single particular operating
system.)

Vulnerabilities due to software defects are difficult to mitigate. Installing software updates
promptly and regularly can ensure that at least known defects are repaired.

Not installing (or running) any software of unknown provenance is an important precaution
against being tricked into installing malware. This includes, for example, software
applications advertised by banner ads or pop-ups, or distributed by e-mail (even if disguised
as something other than a computer program). Recent operating systems attempt to warn
users about running software from an unknown source; these security warnings serve an
important purpose and should not be casually ignored. Strictly limiting the number of users
of a computer containing sensitive information can also be helpful. Notably, some malware
targets children by bundling malicious code with downloadable video games. (Of course,
computer users of any age can be tricked into installing malware!)

On Windows, regularly running antivirus and antispyware software can remove a large
proportion of common malware. However, this software is not effective against all malware,
and it must be regularly updated. Since anti-malware software is created by researching
malware discovered "in the wild," it is also probably ineffective against uncommon,
specially-targeted malware applications that aim to infect only a few specific computers
rather than a large population on the Internet.
CIPAV: An Example of Malware Use for Law Enforcement
CIPAV is an FBI acronym standing for Computer and Internet Protocol Address Verifier.
CIPAVs are a type of malware intended to identify people who are hiding their identity using
proxy servers, botnets, compromised computers or anonymity networks like Tor. Only a
small amount is known about them, based on published documents from cases in which
they were used. CIPAVs may use browser exploits to run software on a computer regardless
of how many steps of indirection lie between the attacking server and the user.


Malware Risk Assessment
Ubiquitous malware poses a threat to all computer users, but the seriousness of the threat
varies greatly. For some users, it is sufficient to install operating system updates regularly
and to exercise caution in running software found on the web. Organizations that face a
high risk of being specifically targeted by a malware author should find computer security
experts to defend their computers, or better yet, simply avoid using networked computers
for their most sensitive activities.


Mobile Devices

This article discusses the privacy implications of cell phones and other devices that
communicate with large-scale wireless voice and data networks.

This page doesn't discuss Wi-Fi. If you have a mobile device that uses Wi-Fi but not GSM,
CDMA2000, or any of the other cellular networks, you should follow the same steps that you
would for a laptop with Wi-Fi. If you have a cell phone that also connects to Wi-Fi networks,
you should read the Wi-Fi article as well as the material below.


Problems with Cellular Device Privacy
Cell phones pose several privacy problems.

No Anonymity. Every cell phone has several unique identifying numbers. For a GSM phone
these include the IMEI number for the handset itself and the IMSI in the SIM card. Unless
you have purchased your handset and account anonymously, these will be linked to your
real identity. Even if you have an anonymous handset and account, the typical use pattern
of a phone is almost always enough to link it to your identity.

Location tracking. Cell phones communicate with transmission towers. The strength of the
signal received by these towers from a phone is a measure of distance, and this allows the
phone network to know where its users are. Many if not all networks log approximate
location on a regular basis. These records may be subject to subpoena. If your adversary is
law enforcement and has probable cause for a warrant, it could receive continuous
triangulated location surveillance data from the network.

Easy interception. Cell phone communications are sent through the air like communications
from a walkie-talkie, and encryption is usually inadequate or absent. Although there are
substantial legal protections for the privacy of cell phone calls, it is technologically
straightforward to intercept calls on many cell networks without the cooperation of the
carrier, and the technology to do so is only getting cheaper. Such interception without legal
process would be a serious violation of privacy laws, but would also be immensely difficult
to detect. U.S. and foreign intelligence agencies have the technical capacity to intercept
unencrypted and weakly encrypted cell phone calls on a routine basis.



Lack of user control. Cell phones tend to run proprietary operating systems, and the
operating systems on different devices tend to differ from each other. This means, for
instance, that on most cell phones:

   •   it's impossible to guarantee that the phone is using secure encryption for its transmissions, or
       to determine whether it's using encryption at all
   •   it's very difficult for the user to gain access to and control over the data recorded by the
       phone's operating system

However, because cell phone calls do not create stored records of the contents of your
communications, telephonic communication has certain privacy advantages over modes of
communication, like email, instant messaging or text messaging, which do create such
records.


Data Stored by Your Phone
Your phone will store the contents of the text messages you send and receive, the times
and numbers of the calls you make and receive, and possibly other information such as
location-related data. Secure deletion of this data poses a challenge. On most mobile
devices your best strategy is to manually delete these records using the phone's user
interface, and then hope that new records will overwrite them. If you have deleted all your
text messages and calls, and waited long enough for the phone's memory to fill, there is a
chance that later forensic investigation would not find the original data.

There are a couple of drive encryption programs available for devices that run the Windows
Mobile operating system. Proprietary drive encryption that has not been audited by the
computer security community should always be treated with caution; it is probably better
than no protection at all, although even that is not guaranteed.

We are hopeful that the arrival of open Linux-based phones (notably OpenMoko and those
using the Google Android code) will offer users better control over stored data in the future.

The undeleted data could be accessible to anyone who takes physical possession of the
phone, including thieves or an arresting officer.


Transmitted Data
The control data and actual voice conversations sent by cellular devices may be encrypted
using various standard encryption protocols. There is no guarantee that this will occur:
phones do not usually offer users a way to refuse to operate in unencrypted mode, and
many don't indicate whether they are using encryption. As a result, it is largely up to the
network operator to decide whether its users will receive any cryptographic defense against
eavesdropping.

Carrier-provided encryption can be good protection against eavesdropping by third parties.
However, if it is the carrier that wants to listen in, or the government with a warrant
ordering the carrier to allow wiretapping access to your calls, then that encryption will not
protect you, because the carrier has the means to decrypt.

Even if your cell phone is communicating in an encrypted fashion, it turns out that most of
the standard cryptography used in cell networks has been broken. This means that an
adversary that is motivated and able to intercept radio communications and cryptanalyze
them will be able to listen to your phone calls.

It would be technologically possible to use strong end-to-end encryption with voice calls,
but this technology is not yet widely available. The German company GSMK sells a
GSM-based "CryptoPhone"; as with computer encryption, both parties would need to be
using the technology in order for it to work. Some third parties have produced software to
encrypt SMS text messages; here, again, both the sender and recipient of a message would
need to use compatible software.
Data Stored by Other Parties
A great deal of data pertaining to your use of your phone will be stored by the telephone
company or companies that provide you with service. A more diffuse set of records will also
be stored by the phones of the people you communicate with.

Expect your telephone company to keep a record of: who you talk to and when; who you
exchange messages with and when; what data you send and receive using wireless data
services; information revealing your physical location at any time when your phone is on;
and whether your phone is on or off.

The text messages exchanged by your phone, as well as summary information for the calls
you make and receive from other cell phones, are likely to be stored by those other cell
phones. As anyone who follows celebrity gossip should know, the people you are
communicating with can disclose the contents of your communications. Other adversaries
may use subpoenas or other legal process to obtain text messages or call information.


Malware for Phones
If you face a determined adversary such as federal law enforcement with a warrant, assume
that your phone could be reprogrammed with malware to assist in their investigations;
there are reports of the FBI doing this.

Under these extreme circumstances, it is possible for your phone to be turned into a remote
bugging device. It is possible for a phone to remain on even when you press the "off"
button, but not if you remove the battery.

If you have a pair of speakers that crackle when your phone is nearby, you can use them to
check that the phone is actually off and not transmitting: place the phone near the speakers
and listen for interference.


Secure Deletion

 Secure deletion involves the use of special software to ensure that when you delete a file,

 there really is no way to get it back again.



 When you "delete" a file — for instance, by putting the file in your computer's trash folder and

 emptying the trash — you may think you've deleted that file. But you really haven't. Instead,

 the computer has just made the file invisible to the user, and marked the part of the disk drive

 that it is stored on as "empty," meaning that it can be overwritten with new data. But it may be
 weeks, months, or even years before that data is overwritten, and computer forensics

 experts can often retrieve data even after it has been overwritten by newer files. Indeed,

 computers normally don't "delete" data; they just allow it to be overwritten over time, and

 overwritten again.



 The best way to keep those "deleted" files hidden, then, is to make sure they get overwritten

 immediately. Your operating system probably already includes software that can do this for you,

 and overwrite all of the "empty" space on your disk with gibberish (optionally multiple times),

 and thereby protect the confidentiality of deleted data. Examples include GNU Shred (Linux),

 Secure Delete (Mac OS X), and cipher.exe (Windows XP Pro and later).
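The overwrite-then-delete idea can be sketched in a few shell commands (the file name is a throwaway example, and this single pass is only an illustration; dedicated tools such as those named above do the job more carefully):

```shell
# Illustrative sketch: replace a file's bytes with random data in
# place, flush to disk, then remove it. Real tools make multiple passes.
f="/tmp/overwrite-demo.txt"
printf 'sensitive data' > "$f"
size=$(wc -c < "$f")
# conv=notrunc overwrites the existing bytes without truncating the file
dd if=/dev/urandom of="$f" bs=1 count="$size" conv=notrunc 2>/dev/null
sync
rm -f "$f"
```

Note that this only scrubs the file's current blocks; it does nothing about older copies the filesystem may have left elsewhere, which is why overwriting free space also matters.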


Windows Secure Deletion

Without Installing New Software: Use Cipher.exe

Update: Cipher.exe is no longer recommended


 We previously discussed using a program called Cipher.exe to clear free space on Windows

 systems, without having to install any new software on the machine. However, people have

 written in to inform us about a grievous design flaw in Cipher.exe that could cause unintended

 deletion of entire drives of data.

 We recommend using Eraser instead.

A Better Option: Install Eraser

 Eraser is a free/open source secure deletion tool for Windows, and is much more sophisticated

 than the built-in cipher.exe. It can be used to quickly and easily target individual files for secure

 deletion, or to implement periodic secure deletion policies. You can get a copy of Eraser here

 and some tips on how to use it here.


Secure Deletion on Mac OS X

Secure Deletion of Individual Files

 On OS X 10.4 and above, you can securely delete files by moving them to the Trash, and then

 selecting Finder > Secure Empty Trash.


Ensuring Previously Deleted Data Cannot be Recovered
 Apple's advice on preventing forensic undeletion on Mac OS X is as follows:


To prevent the recovery of files you deleted previously, open Disk Utility (in Applications/Utilities),
choose Help > Disk Utility Help, and search for help on erasing free disk space.

Secure Deletion on *nix Operating Systems

Secure Deletion of Individual Files

 Linux, FreeBSD and many other UNIX systems have a command line tool called shred installed

 on them. Shred works quite differently from the Windows cipher.exe program; rather than trying

 to prevent previously deleted files from being recoverable, it singles out specified files and

 repeatedly overwrites them and their names with random data.


 If you are comfortable using a terminal or command line, secure deletion of files with shred is

 simple. Just run the following command:


 shred -u filename
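For example (the path here is just a placeholder), creating a throwaway file and destroying it with three overwrite passes:

```shell
# Overwrite /tmp/shred-demo.txt 3 times (-n 3), then truncate and
# remove it (-u). Requires GNU coreutils shred.
f="/tmp/shred-demo.txt"
echo "sensitive" > "$f"
shred -u -n 3 "$f"
```

Shred's own documentation warns that it is less effective on journaling or copy-on-write filesystems, which may keep copies of the data elsewhere on the disk.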


Ensuring Previously Deleted Data Cannot be Recovered

 Unfortunately we are not aware of any standard Linux/UNIX tools for overwriting previously

 deleted files to prevent undeletion.



 A hack solution that may work is to write zeroes or random data to a file on the drive until it

 fills up all of the available space, then delete it. Unfortunately, this will fill up the filesystem and

 may therefore make the system unstable while it is in progress. Caveat emptor.



 On Linux systems, you could try to achieve this by running the following command as root:


 dd if=/dev/zero of=/directory/junkfile ; rm /directory/junkfile



 Replace /directory/ with a directory that is within the mounted partition within which you wish
 to ensure that forensic undeletion is impossible. The dd command will take a long time to run

 and will finish with an error saying the disk is full; the rm will then delete the huge junk file.
 Replacing /dev/zero with /dev/urandom uses random data instead of zeroes; that will result in

 slightly more secure erasure, but can take much longer.
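The same pattern, bounded to one megabyte so it is safe to try (drop the count= argument, and point the path at the filesystem you want to scrub, to actually fill the free space):

```shell
# Write 1 MB of zeroes to a junk file, flush it, then delete it.
# Without count=1024 this would run until the filesystem is full.
dir="/tmp/freespace-demo"
mkdir -p "$dir"
dd if=/dev/zero of="$dir/junkfile" bs=1024 count=1024 2>/dev/null
sync
rm "$dir/junkfile"
```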


A Warning About the Limitations of Secure Deletion Tools
 Even if you follow the advice above, there is a chance that certain traces of deleted files may

 persist on your computer, not because the files themselves haven't been properly deleted, but

 because some part of the operating system or some other program keeps a deliberate record of

 them.



 There are many ways in which this could occur, but two examples should suffice to convey the

 possibility. On Windows, a copy of Microsoft Office may retain a reference to the name of a file

 in the "Recent Documents" menu, even if the file has been deleted (Office might sometimes

 even keep temporary files containing the contents of the file). On a Linux or other *nix system,

 a user's shell history file may contain commands that include the file's name, even though the

 file has been securely deleted. And OpenOffice may keep just as many records as Microsoft Office.

 In practice, there may be dozens of programs that behave like this.



 It's hard to know how to respond to this problem. It is safe to assume that even if a file has

 been securely deleted, its name will probably continue to exist for some time on your computer.

 Overwriting the entire disk is the only way to be 100% sure the name is gone. Some of you

 may be wondering, "Could I search the raw data on the disk to see if there are any copies of

 the data anywhere?" The answer is yes and no. Searching the disk (e.g., with a command like
 grep -ab <pattern> <device> on Linux) will tell you if the data is present in plaintext, but it won't tell you if

 some program has compressed or otherwise coded references to it. Also be careful that the

 search itself does not leave a record! The probability that the file's contents may persist is

 lower, but not impossible. Overwriting the entire disk and installing a fresh operating system is

 the only way to be 100% certain that records of a file have been erased.
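You can try the search technique itself safely on an ordinary file first; on a real system you would run it as root against the raw device (for instance /dev/sda):

```shell
# grep -a treats binary data as text, -b prints the byte offset, and
# -o prints only the match. File name and token are illustrative.
printf 'padding padding secret-token more data' > /tmp/rawsearch-demo
grep -abo "secret-token" /tmp/rawsearch-demo
# prints: 16:secret-token
```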


Secure Deletion When Discarding Old Hardware
 If you want to finally throw a piece of hardware away or sell it on eBay, you'll want to make

 sure no one can retrieve your data from it. (Studies have repeatedly found that computer

 owners usually fail to do this — and hard drives are resold chock-full of highly sensitive

 information.) So, before selling or recycling a computer, be sure to overwrite its storage media

 with gibberish first. (Even if you're not getting rid of it right away, if you have a computer that's

 reached the end of its useful life and is no longer in use, it's also safer to wipe the hard drive
 before stashing the machine in a corner or a closet.) Darik's Boot and Nuke is an excellent free

 tool for this purpose.



 Some full-disk encryption software has the ability to destroy the master key, rendering a hard

 drive's encrypted contents permanently incomprehensible. Since the key is a tiny amount of

 data and can be destroyed almost instantaneously, this represents a much faster alternative to

 overwriting with software like Darik's Boot and Nuke, which can be quite time-consuming for

 larger drives. However, this option is only feasible if the hard drive was always encrypted. If

 you weren't using full-disk encryption ahead of time, you'll need to overwrite the whole drive

 before getting rid of it.


Discarding CD-ROMS

 When it comes to CD-ROMs, you should do the same thing you do with paper — shred 'em.

 There are inexpensive shredders that will chew up CD-ROMs. Never just toss a CD-ROM out in

 the garbage unless you're absolutely sure there's nothing sensitive on it.


File and Disk Encryption

 Modern operating systems allow you to use a system of accounts and passwords to limit access

 to data on a computer. This may be useful when adversaries have casual passing access to your

 machine, but those accounts and passwords will not protect your data if your computer is stolen

 or seized — or if the adversaries have more than a minute or two alone with your computer.

 There are many ways (such as plugging your hard disk into another computer, or booting

 another operating system using a CD or USB key) that would allow files to be read off the disk.

 Even deleted files may be recoverable.



 The theft or seizure threats can be mitigated by encrypting the data on the disk. Some sort of

 mitigation is especially important for laptops, which are at high risk of being lost or stolen, but

 the same measures can be useful for improving the security of any client or workstation-type

 computer.



 Full-disk encryption is meant to protect stored data against this sort of exposure, if the

 computer is stolen or seized when it is powered off. If the computer is seized while running,

 there are tricks that sophisticated adversaries could use to read the data regardless of

 encryption.
 File encryption is disk encryption that only applies to certain specific files on your computer. It

 may be easier to deploy but is vulnerable to several threats that do not apply to full disk

 encryption.
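As a sketch of what file encryption looks like in practice (the file names and passphrase are illustrative, and in real use you would let the tool prompt for the passphrase rather than putting it on the command line), OpenSSL's enc command can encrypt a single file with a passphrase-derived key:

```shell
# Encrypt one file with AES-256 and a PBKDF2-derived key, then decrypt
# it to confirm the round trip. All names here are placeholders.
echo "private notes" > /tmp/notes.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:example-passphrase \
    -in /tmp/notes.txt -out /tmp/notes.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-passphrase \
    -in /tmp/notes.enc -out /tmp/notes.dec
cmp /tmp/notes.txt /tmp/notes.dec && echo "round trip OK"
```

Remember that the plaintext original still needs secure deletion afterwards, or the encryption gains you little.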



 Hard disk passwords are a feature offered by many laptop manufacturers. These can be

 enabled within the BIOS of your computer. Hard disk passwords don't encrypt any data on your

 drive, they just prevent the drive from cooperating with the computer until the password is

 supplied. There are numerous commercial services which will disable these passwords for about

 $100 per drive. So a hard disk password is useful against a casual thief, but of no use against

 law enforcement or other non-casual adversaries.


Should I Encrypt My Drive?
 Everybody should use either disk encryption or a hard disk password (possibly augmented with

 file encryption) on their laptops. If your laptop has personal data but you would not regard any

 of it as sensitive, a hard disk password may be quick and easy, and sufficient protection in case

 of theft.



 If your computer contains a very small and easily quantified set of somewhat sensitive

 documents, it may be sufficient to use file encryption for those documents, alongside a hard

 disk password.



 If your computer contains a larger (or harder to quantify) set of sensitive documents, or any

 documents which might be considered highly sensitive, it is best to use full disk encryption. In

 such cases the threat posed by malware should also be taken into account.


Disk Encryption Is Of Little Use in Civil Lawsuits

 It is extremely important to note that disk encryption is unlikely to offer much protection

 against civil litigation. Many of the procedural obstacles which might apply to law enforcement

 attempts to obtain encrypted data during a criminal investigation would not apply in a civil

 case. If an adversary in a civil case persuades a judge to issue a subpoena for your data, a

 failure to decrypt and disclose the data would be held against you in the case.



 If your threat model involves civil litigation, it is essential to simply not have the data on a

 computer in the first place, or to have secure deletion practices in place long before any lawsuit

 is filed. Once a lawsuit is filed, you will be obliged to preserve any pertinent documents, and the
 presence of forensic evidence that you deleted data after a suit was filed would have dire

 consequences.


Choosing Disk Encryption Software
 There are many full-disk encryption tools. Using a mainstream one is probably safer than an

 obscure one, since mainstream disk encryption products have usually received more expert

 review. Leading disk encryption programs include BitLocker, PGPDisk, FileVault, TrueCrypt, and

 dm-crypt (LUKS); some of these come with the operating system, while others are third-party

 add-ons. You can read a detailed comparison of these and many other disk encryption products

 at Wikipedia. That comparison may help you select a disk encryption product

 to meet your needs, but any of these systems can protect your data better than having no disk

 encryption at all.


Things To Know When Using Disk Encryption
 Generally, disk encryption software will require you to enter a separate disk password when you

 turn the computer on or start using the disk (some systems can use a smartcard instead of or

 in addition to a password). To be effective, this password must be resistant to all forms of

 automated guessing. Remember that the disk encryption is fully effective at preventing access

 to the disk when the computer is turned off (or the encrypted disk is entirely unmounted or

 removed from use); to get the full benefit, you should unmount the encrypted disk or turn the

 computer off in any situation where the risk of compromise is especially high, such as a

 computer left unattended overnight or a laptop being carried from place to place. (Using disk

 encryption without following this precaution scrupulously will still provide more protection

 against some attackers than not using disk encryption.)
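One simple way to get a password that resists automated guessing is to draw it from the operating system's randomness source; a sketch (the length and character set are arbitrary choices, not a standard):

```shell
# Emit a random 20-character alphanumeric passphrase from /dev/urandom.
# LC_ALL=C keeps tr byte-oriented regardless of locale.
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20
echo
```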



 Finally, full-disk encryption can also be used on servers, providing some protection against

 seizure of the servers. However, even servers with encrypted hard drives could be vulnerable to

 attackers with specialized techniques if they're seized while they're operating. Proper use of

 disk encryption on servers can also be a nuisance because the server can't do a fully

 unattended automatic reboot. (It's not safe to store the password for the disk on the server

 itself, so an administrator will have to enter the disk password whenever the computer is

 restarted.)
Plausible Deniability
 One interesting property which some disk encryption developers are working towards is

 plausible deniability. The goal of these efforts is to offer users a way to not only encrypt their

 files, but to prevent an attacker from being able to even deduce the existence of some of the

 encrypted files. The user will have a way to "plausibly deny" that the files exist.



 One example of this concept is TrueCrypt's ability to have an encrypted partition (which can be

 hidden as any file on your hard drive) and within that partition hide another partition. One

 password will reveal the outer partition and another separate password will reveal the inner

 one. Because of the way TrueCrypt encrypts the partition table itself, an observer cannot detect

 a hidden partition even if she has access to the "regular" encrypted share. The idea is to give

 the user something to decrypt if a law enforcement officer or Customs official asks, while

 keeping the rest of their information secure.



 In practice, TrueCrypt's first attempt to implement this feature was shown to be ineffective

 because operating systems and applications leave so many traces of the files they work with,

 that a forensic investigator would have many avenues by which to determine that the inner

 partition existed. The TrueCrypt developers have responded to this research by offering a way

 to install and boot from an entire separate operating system within the inner partition. It is too

 soon to know whether their new approach will turn out to offer secure plausible deniability.



 Technical issues aside, remember that lying to a federal law enforcement officer about material

 facts is a crime, so if a person chose to answer a question about whether there were additional

 encrypted partitions on a computer, they would be legally obligated to answer truthfully.


Virtual Private Networks (VPN)

 Virtual Private Networks (VPNs) are a very powerful and general tool that can be used to

 encrypt all of the communications between participating computers. VPNs can be used to

 improve the privacy and security of protocols that are not encrypted (or not securely

 encrypted) by default.



 The biggest catch with VPNs is that all of the computers participating in them must be running

 the same VPN software, and must be correctly configured to communicate with each other. In

 general, this means that deploying a VPN is a non-trivial task requiring significant systems

 administration time.
  Organizations that need to arrange secure access to intranet web servers, file servers, print

  servers and similar facilities should deploy VPNs.
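To give a concrete sense of what such a deployment involves, here is a minimal OpenVPN client configuration; every value in it (host name, port, file names) is a placeholder, not a recommendation:

```text
# Minimal OpenVPN client config (all values are illustrative)
client
dev tun
proto udp
remote vpn.example.com 1194   # your organization's VPN server
nobind
persist-key
persist-tun
ca ca.crt                     # CA certificate issued by your admin
cert client.crt               # this client's certificate
key client.key                # this client's private key
verb 3
```

The matching server needs its own configuration, a certificate authority, and per-client keys; that setup is the systems administration effort mentioned above.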



  More information about different VPN architectures and software can be found at Wikipedia.



Virtual private network
From Wikipedia, the free encyclopedia






[Figure: VPN connectivity overview]

A virtual private network (VPN) is a network that uses primarily public telecommunication
infrastructure, such as the Internet, to provide remote offices or traveling users access to a central
organizational network.
VPNs typically require remote users of the network to be authenticated, and often secure data
with encryption technologies to prevent disclosure of private information to unauthorized parties.
VPNs may serve any network functionality that is found on any network, such as sharing of data
and access to network resources, printers, databases, websites, etc. A VPN user typically
experiences the central network in a manner that is identical to being connected directly to the
central network. VPN technology via the public Internet has replaced the need to requisition and
maintain expensive dedicated leased-line telecommunication circuits once typical in wide-area
network installations.

History
Until the end of the 1990s, networked computers were connected through expensive leased lines
and/or dial-up phone lines.
Virtual Private Networks reduce network costs because they avoid a need for physical leased
lines that individually connect remote offices (or remote users) to a private Intranet (internal
network). Users can exchange private data securely, making the expensive leased lines
unnecessary.[1]
VPN systems can be classified by:
•   The protocols used to tunnel the traffic
•   The tunnel's termination point, i.e., customer edge or network provider edge
•   Whether they offer site-to-site or remote access connectivity
•   The levels of security provided
•   The OSI layer they present to the connecting network, such as Layer 2 circuits or
    Layer 3 network connectivity
Some classification schemes are discussed in the following sections.
Security mechanisms
Secure VPNs use cryptographic tunneling protocols to provide confidentiality (blocking
intercepts and packet sniffing), sender authentication (blocking identity spoofing), and
message integrity (preventing message alteration).
Secure VPN protocols include the following:
•   IPsec (Internet Protocol Security) was developed by the Internet Engineering
    Task Force (IETF), initially for IPv6, which requires it. This standards-based
    security protocol is also widely used with IPv4. Layer 2 Tunneling Protocol
    frequently runs over IPsec. Its design meets most security goals:
    authentication, integrity, and confidentiality. IPsec functions by
    encapsulating an IP packet inside a second IP packet and encrypting the
    result.
•   Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic, as it
    does in the OpenVPN project, or secure an individual connection. A number of
    vendors provide remote access VPN capabilities through SSL. An SSL VPN can
    connect from locations where IPsec runs into trouble with Network Address
    Translation and firewall rules.
•   Datagram Transport Layer Security (DTLS) is used in Cisco's next-generation
    VPN product, Cisco AnyConnect VPN, to solve the issues SSL/TLS has with
    tunneling over UDP.
•   Microsoft Point-to-Point Encryption (MPPE) works with their Point-to-Point
    Tunneling Protocol and in several compatible implementations on other
    platforms.
•   Microsoft introduced Secure Socket Tunneling Protocol (SSTP) in Windows Server
    2008 and Windows Vista Service Pack 1. SSTP tunnels Point-to-Point Protocol
    (PPP) or Layer 2 Tunneling Protocol traffic through an SSL 3.0 channel.
•   MPVPN (Multi Path Virtual Private Network). Ragula Systems Development
    Company owns the registered trademark "MPVPN".[2]
•   Secure Shell (SSH) VPN -- OpenSSH offers VPN tunneling to secure remote
    connections to a network or inter-network links. This should not be confused
    with port forwarding. OpenSSH server provides a limited number of concurrent
    tunnels and the VPN feature itself does not support personal authentication.[3][4][5]

Authentication
Tunnel endpoints must authenticate before secure VPN tunnels can be established.
User-created remote access VPNs may use passwords, biometrics, two-factor authentication or
other cryptographic methods.
Network-to-network tunnels often use passwords or digital certificates, as they permanently store
the key to allow the tunnel to establish automatically and without intervention from the user.
Routing
Tunneling protocols can be used in a point-to-point topology that would theoretically not be
considered a VPN, because a VPN by definition is expected to support arbitrary and changing
sets of network nodes. But since most router implementations support a software-defined tunnel
interface, customer-provisioned VPNs often are simply defined tunnels running conventional
routing protocols.
PPVPN Building blocks
Depending on whether the PPVPN (Provider Provisioned VPN) runs in layer 2 or layer 3, the
building blocks described below may be L2 only, L3 only, or combine them both. Multiprotocol
Label Switching (MPLS) functionality blurs the L2-L3 identity.
RFC 4026 generalized the following terms to cover L2 and L3 VPNs, but they were introduced
in RFC 2547.[6]
Customer edge device (CE)

A device at the customer premises that provides access to the PPVPN. Sometimes it's just a
demarcation point between provider and customer responsibility. Other providers allow
customers to configure it.
Provider edge device (PE)

A PE is a device, or set of devices, at the edge of the provider network, that presents the
provider's view of the customer site. PEs are aware of the VPNs that connect through them, and
maintain VPN state.
Provider device (P)

A P device operates inside the provider's core network, and does not directly interface to any
customer endpoint. It might, for example, provide routing for many provider-operated tunnels
that belong to different customers' PPVPNs. While the P device is a key part of implementing
PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is
allowing the service provider to scale its PPVPN offerings, as, for example, by acting as an
aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity
optical links between the provider's major locations.
User-visible PPVPN services

This section deals with the types of VPN considered in the IETF; some historical names were
replaced by these terms.
OSI Layer 1 services

Virtual private wire and private line services (VPWS and VPLS)

In both of these services, the service provider does not offer a full routed or bridged network, but
provides components to build customer-administered networks. VPWS are point-to-point while
VPLS can be point-to-multipoint. They can be Layer 1 emulated circuits with no data link layer.
The customer determines the overall customer VPN service, which also can involve routing,
bridging, or host network elements.
An unfortunate acronym confusion can occur between Virtual Private Line Service and Virtual
Private LAN Service; the context should make it clear whether "VPLS" means the layer 1 virtual
private line or the layer 2 virtual private LAN.
OSI Layer 2 services
Virtual LAN

A Layer 2 technique that allows for the coexistence of multiple LAN broadcast domains,
interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols
have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10
(originally a security protocol but a subset was introduced for trunking), and ATM LAN
Emulation (LANE).
Virtual private LAN service (VPLS)

Developed by the IEEE, VLANs allow multiple tagged LANs to share common trunking. VLANs
frequently comprise only customer-owned facilities. Whereas the VPLS described above (under
OSI Layer 1 services) supports emulation of both point-to-point and point-to-multipoint
topologies, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q
LAN trunking to run over transports such as Metro Ethernet.
As used in this context, a VPLS is a Layer 2 PPVPN, rather than a private line, emulating the full
functionality of a traditional local area network (LAN). From a user standpoint, a VPLS makes it
possible to interconnect several LAN segments over a packet-switched, or optical, provider core;
a core transparent to the user, making the remote LAN segments behave as one single LAN.[7]
In a VPLS, the provider network emulates a learning bridge, which optionally may include
VLAN service.
Pseudo wire (PW)

PW is similar to VPWS, but it can provide different L2 protocols at both ends. Typically, its
interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast,
when aiming to provide the appearance of a LAN contiguous between two or more locations, the
Virtual Private LAN service or IPLS would be appropriate.
IP-only LAN-like service (IPLS)

IPLS is a subset of VPLS in which the CE devices must have L3 capabilities; it presents packets rather
than frames. It may support IPv4 or IPv6.
OSI Layer 3 PPVPN architectures
This section discusses the main architectures for PPVPNs, one where the PE disambiguates
duplicate addresses in a single routing instance, and the other, virtual router, in which the PE
contains a virtual router instance per VPN. The former approach, and its variants, have gained
the most attention.
One of the challenges of PPVPNs involves different customers using the same address space,
especially the IPv4 private address space.[8] The provider must be able to disambiguate
overlapping addresses in the multiple customers' PPVPNs.
BGP/MPLS PPVPN

In the method defined by RFC 2547, BGP extensions advertise routes in the IPv4 VPN address
family, which are of the form of 12-byte strings, beginning with an 8-byte Route Distinguisher
(RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in
the same PE.
PEs understand the topology of each VPN; the VPNs are interconnected with MPLS tunnels, either
directly or via P routers. In MPLS terminology, the P routers are Label Switch Routers without
awareness of VPNs.
Virtual router PPVPN

The Virtual Router architecture,[9][10] as opposed to BGP/MPLS techniques, requires no
modification to existing routing protocols such as BGP. By the provisioning of logically
independent routing domains, the customer operating a VPN is completely responsible for the
address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their
label, but do not need routing distinguishers.
Virtual router architectures do not need to disambiguate addresses, because rather than a PE
router having awareness of all the PPVPNs, the PE contains multiple virtual router instances,
which belong to one and only one VPN.
Plaintext tunnels
Main article: Tunneling protocol

Some virtual networks may not use encryption to protect the data contents. While VPNs often
provide security, an unencrypted overlay network does not neatly fit within the secure or trusted
categorization. For example, a tunnel set up between two hosts that used Generic Routing
Encapsulation (GRE) would in fact be a virtual private network, but neither secure nor trusted.
Besides the GRE example above, native plaintext tunneling protocols include Layer 2 Tunneling
Protocol (L2TP) when it is set up without IPsec and Point-to-Point Tunneling Protocol (PPTP) or
Microsoft Point-to-Point Encryption (MPPE).
Trusted delivery networks
Trusted VPNs do not use cryptographic tunneling, and instead rely on the security of a single
provider's network to protect the traffic.
•   Multi-Protocol Label Switching (MPLS) is often used to overlay VPNs, often with
    quality-of-service control over a trusted delivery network.
•   Layer 2 Tunneling Protocol (L2TP),[11] a standards-based replacement for, and a
    compromise taking the good features from, two proprietary VPN protocols:
    Cisco's Layer 2 Forwarding (L2F)[12] (obsolete as of 2009) and Microsoft's
    Point-to-Point Tunneling Protocol (PPTP).[13]
From the security standpoint, VPNs either trust the underlying delivery network, or must enforce
security with mechanisms in the VPN itself. Unless the trusted delivery network runs among
physically secure sites only, both trusted and secure models need an authentication mechanism
for users to gain access to the VPN.
VPNs in mobile environments
Main article: Mobile virtual private network

Mobile VPNs are used in a setting where an endpoint of the VPN is not fixed to a single IP
address, but instead roams across various networks such as data networks from cellular carriers
or between multiple Wi-Fi access points.[14] Mobile VPNs have been widely used in public
safety, where they give law enforcement officers access to mission-critical applications, such as
computer-assisted dispatch and criminal databases, while they travel between different subnets of
a mobile network.[15] They are also used in field service management and by healthcare
organizations,[16] among other industries.
Increasingly, mobile VPNs are being adopted by mobile professionals and white-collar workers
who need reliable connections.[16] They are used for roaming seamlessly across networks and in
and out of wireless-coverage areas without losing application sessions or dropping the secure
VPN session. A conventional VPN cannot survive such events because the network tunnel is
disrupted, causing applications to disconnect, time out,[14] or fail, or even causing the computing
device itself to crash.[16]
Instead of logically tying the endpoint of the network tunnel to the physical IP address, each
tunnel is bound to a permanently associated IP address at the device. The mobile VPN software
handles the necessary network authentication and maintains the network sessions in a manner
transparent to the application and the user.[14] The Host Identity Protocol (HIP), under study by
the Internet Engineering Task Force, is designed to support mobility of hosts by separating the
role of IP addresses for host identification from their locator functionality in an IP network. With
HIP a mobile host maintains its logical connections established via the host identity identifier
while associating with different IP addresses when roaming between access networks.
See also
•   Opportunistic encryption
•   Split tunneling
•   Mediated VPN
•   OpenVPN
•   UT-VPN
•   Tinc (protocol)
•   DMVPN (Dynamic Multipoint VPN)
•   Virtual Private LAN Service over MPLS
•   Ethernet Virtual Private LAN (EVP-LAN or E-LAN) defined by MEF
References
    1. ^ Feilner, Markus. "Chapter 1 - VPN—Virtual Private Network". OpenVPN: Building
       and Integrating Virtual Private Networks: Learn How to Build Secure VPNs Using this
       Powerful Open Source Application. Packt Publishing.
    2. ^ Trademark Applications and Registrations Retrieval (TARR)
    3. ^ OpenBSD ssh manual page, VPN section
    4. ^ Unix Toolbox section on SSH VPN
    5. ^ Ubuntu SSH VPN how-to
    6. ^ E. Rosen & Y. Rekhter (March 1999). "BGP/MPLS VPNs", RFC 2547, Internet
       Engineering Task Force (IETF). http://www.ietf.org/rfc/rfc2547.txt.
    7. ^ Ethernet Bridging (OpenVPN), http://openvpn.net/index.php/access-server/howto-
       openvpn-as/214-how-to-setup-layer-2-ethernet-bridging.html
    8. ^ Y. Rekhter et al. (February 1996). "Address Allocation for Private Internets", RFC 1918.
    9. ^ "A Core MPLS IP VPN Architecture", RFC 2917.
    10. ^ E. Chen (September 2000). RFC 2918.
    11. ^ W. Townsley et al. (August 1999). "Layer Two Tunneling Protocol 'L2TP'", RFC 2661.
    12. ^ A. Valencia et al. (May 1998). "IP Based Virtual Private Networks", RFC 2341.
    13. ^ K. Hamzeh et al. (July 1999). "Point-to-Point Tunneling Protocol (PPTP)", RFC 2637.
    14. ^ a b c Phifer, Lisa. "Mobile VPN: Closing the Gap", SearchMobileComputing.com, July
       16, 2006.
    15. ^ Willett, Andy. "Solving the Computing Challenges of Mobile Officers",
       www.officer.com, May 2006.
    16. ^ a b c Cheng, Roger. "Lost Connections", The Wall Street Journal, December 11,
       2007.

Further reading
•   Kelly, Sean (August 2001). "Necessity is the mother of VPN invention".
    Communication News: 26–28. ISSN 0010-3632.
    http://web.archive.org/web/20011217153420/http://www.comnews.com/cgi-
    bin/arttop.asp?Page=c0801necessity.htm.
•   "VPN Buyers Guide". Communication News: 34–38. August 2001. ISSN 0010-
    3632.
External links
•   JANET UK "Different Flavours of VPN: Technology and Applications"
•   List of all VPNs
•   Virtual Private Network Consortium - a trade association for VPN vendors
•   CShip VPN-Wiki/List
•   Virtual Private Networks on Microsoft TechNet
•   Creating VPNs with IPsec and SSL/TLS, a Linux Journal article by Rami Rosen
•   curvetun, a lightweight curve25519-based multiuser IP tunnel / VPN
•   Using VPN to bypass internet censorship in How to Bypass Internet Censorship, a
    FLOSS Manual, 10 March 2011, 240 pp

Voice over Internet Protocol (VoIP)
Coming soon!

   Our VoIP section is currently being updated. Please check back soon.


				
Document info: posted 12/6/2011 · 174 pages · English · 413 views (via Docstoc)