Security through obscurity

See also: Security | Obscurity | Backdoor | Steganography | Security through openness

Passively or actively using obscurity as a method of staying secure. While this can be a useful and effective deterrent, the term often carries a negative connotation, suggesting feigned ignorance, akin to burying one’s head in the sand when frightened.

Also, any single security method, whether physical security or data security, may not be enough on its own.

The most familiar example is camouflage: disguising something to make it look similar to its surroundings.

However, camouflage alone would never win a war; other offensive and defensive measures are also required.

Against a strong enough foe, obscurity may be the only option.

On the Internet, security through obscurity is at once easy and difficult:

  • It is easy to be just another face in the crowd, but difficult to be totally anonymous.

Hiding Software Bugs

The most common usage of the term refers to not publicizing software problems, or bugs. The information is kept secret to avoid its exploitation, often to give those resolving a problem more time to produce a repair. The fear is that, if the problems are publicized, rogues can take advantage of the information before the problems can be corrected.

Two major sides of the argument exist in this situation:

  • Open-source advocates often believe in end-to-end openness. They argue that, for both closed- and open-source projects, openness encourages activity, prevents harm by allowing administrators to disconnect vulnerable systems, and that there is no incentive to fix security failures unless they are first revealed. Moreover, at least for open projects, greater resources (knowledge, tools, skill) are available to the open-source team than to a rogue team, and fixes are often produced quickly in the Open Source community.
  • The opposing side believes it is important not to make assumptions about the resources available to a rogue or rogues who would abuse openly publicized problems. Moreover, it is the prerogative of a software project or company to determine threats and the method of repair, especially since public disclosure can become a volatile, damaging public-relations issue of a kind that open-source projects do not experience.

Both extremes of the argument have drawbacks: many bugs are flatly ignored by major companies, and vulnerabilities made public have hurt computer users in both camps.

One solution has been the emergence of a middle ground: when a bug appears, a grace period of roughly a week is allowed for a fix to be produced, but the bug’s existence must then be made public to ensure honesty.

Hiding Servers

Hiding a server from outside contact is one approach. Firewalls, NAT routers, and dynamic IP addresses (such as those assigned via DHCP) make a computer much harder to find on a network. More advanced users can configure exactly which ports are allowed to communicate with the Internet.

For instance, an FTP server only needs to accept incoming connections on port 21, the FTP control channel.
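As a rough illustration, the Python sketch below probes a handful of TCP ports on a host and reports which ones accept connections; the host address and port list are placeholders, and such probes should only be aimed at machines you administer. A server exposing only port 21 gives an outsider far less to discover than one answering on many ports.

  import socket

  def open_ports(host, ports, timeout=1.0):
      """Return the subset of `ports` on `host` that accept a TCP connection."""
      reachable = []
      for port in ports:
          try:
              # create_connection raises OSError if the port is closed,
              # filtered by a firewall, or the host cannot be reached.
              with socket.create_connection((host, port), timeout=timeout):
                  reachable.append(port)
          except OSError:
              pass
      return reachable

  if __name__ == "__main__":
      # 192.0.2.10 is a documentation-only address used here as a placeholder.
      print(open_ports("192.0.2.10", [21, 22, 80, 443]))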

No method of obfuscation is an excuse for failing to apply patches and security updates to your computer.

One of the best methods of Security Through Obscurity is disconnection, which allows a facility to focus on physical security rather than Internet security. Many police stations and government institutions provide no connection between their computer systems and the Internet, instead running a Private Network, which prevents their computers from being accessible outside an individual building or selected area. This counters the myth, told to the judge in the Kevin Mitnick case and featured in the movie “War Games,” that a nuclear facility could one day be hacked over the Internet. Not all city services follow this model, however: critical services such as power and emergency-vehicle dispatch have been affected by viruses and worms.

Hiding Data

Steganography is a prime example of hiding data rather than relying on other secure methods, although some steganography also includes the use of encryption. (More)
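As a rough sketch of the idea, the Python example below hides a short message in the least-significant bits of a byte sequence standing in for raw image pixel data; the cover bytes and message are invented for illustration, and real steganographic tools usually combine this kind of embedding with encryption.

  def hide(cover, message):
      """Store each bit of `message` in the lowest bit of successive cover bytes."""
      bits = [(byte >> i) & 1 for byte in message for i in range(8)]
      if len(bits) > len(cover):
          raise ValueError("cover is too small to hold the message")
      stego = bytearray(cover)
      for index, bit in enumerate(bits):
          stego[index] = (stego[index] & 0xFE) | bit
      return stego

  def reveal(stego, length):
      """Recover `length` bytes previously hidden with hide()."""
      message = bytearray()
      for byte_index in range(length):
          value = 0
          for bit_index in range(8):
              value |= (stego[byte_index * 8 + bit_index] & 1) << bit_index
          message.append(value)
      return bytes(message)

  if __name__ == "__main__":
      cover = bytearray(range(256))   # stand-in for raw pixel data
      secret = b"meet at noon"
      stego = hide(cover, secret)
      assert reveal(stego, len(secret)) == secret
      print("message recovered; cover changed only in its lowest bits")

Because only the lowest bit of each byte changes, the altered data looks almost identical to the original, which is exactly why casual inspection rarely notices it.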
