Thursday, July 31, 2008

Security through obscurity - is it useless?

For a few weeks now I've been thinking about security through obscurity (STO). The common wisdom is that it's a bad way to build the security of anything. But this doesn't necessarily have to be true, as I'll explain in a moment. What made me write this post is a similar comment about the usefulness of STO in Matt Bishop's article in the IEEE Security & Privacy journal (About Penetration Testing, November/December 2007, pp. 84-87). He notes that:

Contrary to widespread opinion, this defense [STO] is valid, providing that it’s used with several defensive mechanisms (“defense in depth”). In this way, the attacker must still overcome other defenses after discovering the information. That said, the conventional wisdom is correct in that hiding information should never be the only defensive mechanism.
His note goes right to the point. To explain it, I'll first describe what STO is and why it is problematic. Then I'll explain what security actually is, and finally how, in this context, STO can actually be useful.

STO is the principle that you are secure as long as the attacker doesn't know how you protect yourself. For example, if you invent a new crypto algorithm and don't tell anyone how it works, you, as the inventor, believe it's more secure because of that. Instead of a crypto algorithm you could take almost anything; a communication protocol would be another good example. Now, the problem with this approach is that such secret crypto algorithms and protocols were usually very poorly designed, so the moment someone reverse engineered them, he was able to break in! But think for a moment: what if the secret algorithm is actually AES? Would the discovery of the algorithm mean that STO is bad? I suppose not, and so should you, but let us first see what security is.

Security is a complex topic, and I believe we could discuss it for days without reaching its true definition. But one key point about security is that there is no such thing as perfect security: in any real-world situation you are always vulnerable. So, to be secure actually means that it is too hard for the attacker to break in. And an attacker doesn't attack out of some void; he has to have some information. The more information the attacker has about his target, the more likely he is to succeed.

Now, how does this relate to STO? Imagine two implementations, completely identical, apart from the first one being secret. In the first case the attacker has to find information about the implementation before he can try an attack, while in the second case he can start attacking immediately.

So, STO can make security better, but with precautions. First, it must not be the only layer of protection, i.e. it must not be used to hide a bad algorithm, protocol or implementation. Second, you have to be certain that sooner or later someone will reverse engineer your secret; how soon depends on how popular your implementation is.
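As a toy illustration of obscurity as just one layer of a defense in depth, here is a minimal Python sketch; the "knock" header, the key and all the names are my own made-up assumptions, not a recommendation. Reverse engineering the obscure part still leaves the attacker facing a real cryptographic check:

    import hashlib
    import hmac

    # Hypothetical layered check: the "obscure" knock is only the outermost layer.
    SECRET_KNOCK = "x-obscure-knock-value"   # the obscurity layer (assumed example)
    SERVER_KEY = b"server-side-secret-key"   # the real cryptographic layer

    def authenticate(request_headers, user, signature):
        """Return True only if every layer passes (defense in depth)."""
        # Layer 1: obscurity, an undocumented header an opportunistic scanner won't send.
        if request_headers.get("X-Knock") != SECRET_KNOCK:
            return False
        # Layer 2: real authentication, an HMAC over the user name with a server key.
        expected = hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    # Even after an attacker reverse engineers the knock, layer 2 still stands.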

To conclude, STO can help make security better, but only if used with caution. What you can be almost certain of is that if you set out to invent a new crypto algorithm, a new protocol, or something similar, you'll make an error that leaves the design, as well as the implementation, very weak! Thus, this way of using STO might be useful only for the biggest players with plenty of resources and skills, e.g. the NSA. :)

Sunday, July 13, 2008

The critique of dshield, honeypots, network telescopes and such...

To start, it's not that those mechanisms are bad, but what they present is only a part of the whole picture. Namely, they give a picture of opportunistic attacks. In other words, they monitor the behavior of automated tools, script kiddies and similar attackers, those that do not target specific victims. The result is that not much sophistication is necessary in such cases: if you cannot compromise one target, you do not waste your time but move on to the next potential victim.

Why do they only analyse opportunistic attacks? Simple: targeted attacks go after victims with some value, while honeypots/honeynets and network telescopes work by using unallocated IP address space, and there is no value in those addresses. What would be interesting is to see attacks on high-profile targets and on the surrounding addresses!

As for dshield, which might collect logs from some high-profile sites, the data collected is too simple to make any judgement about the attackers' sophistication. What's more, because of the anonymization of the data, this information is lost! Honeypots, on the other hand, do allow such analysis, but their data is not collected from high-profile sites.

In conclusion, it would be useful to analyse data on attacks against popular sites, or from honeypots placed in the same address range as those interesting sites. Maybe even a combination of those two approaches would be interesting for analysis.

That's it. Here are some links:

dshield
honeynets

Sunday, July 6, 2008

Reputations for ISP protection

Several serious problems this year made me think about the security of the Internet as a whole. Those particular problems were caused by misconfigurations in the BGP routers of different Internet providers. The real problem is that there are too many players on the Internet that are treated equally even though they are not equal. This causes all sorts of problems, and it is hard to expect that they will be solved any time soon.

The Internet, at the level of autonomous systems, is a kind of peer-to-peer network, and similar problems in those networks are solved using reputations. So it's natural to try to apply a similar concept to the Internet. And indeed, there are a few papers discussing the use of reputations on the Internet. Still, there are at least two problems with them. The first is that they require at least several players to deploy them, even more if they are going to be useful at all. The second is that they are usually restricted in scope, e.g. they try to solve only some subset of the BGP security problems.

The solution I envision assumes that ISPs differ in quality and that each ISP's quality can be determined by measuring its behavior. Based on those measurements, all the ISPs are ranked. Finally, this ranking is used to penalize misbehaving ISPs. The penalization is done by using DiffServ to lower the priority of their traffic, so that when some router's queues start filling up, packets are dropped, but first those of the worst ISPs. This can be expanded further, as every decision made can take the trustworthiness of the ISP in question into account. E.g., when calculating BGP paths, the trustworthiness of an AS path can be determined and taken into account when setting up routes. Furthermore, all IDSs and firewalls can have special sets of rules and/or lower thresholds for more problematic traffic. I believe the possibilities are endless. It should be noted that the system is envisioned to be deployed by a single ISP in some kind of a trust server, and that this ISP will monitor other ISPs and appropriately modulate traffic entering its network!
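To make the idea a bit more concrete, here is a minimal Python sketch; the metrics, their weights, the thresholds and the choice of AF codepoints are purely my illustrative assumptions, not a worked-out design:

    # Hypothetical mapping from measured ISP behaviour to a DiffServ marking.
    AF_CLASS_1 = {          # AF1x: same queue, increasing drop precedence
        "low_drop": 10,     # AF11
        "medium_drop": 12,  # AF12
        "high_drop": 14,    # AF13
    }

    def reputation(spam_reports, spoofed_prefixes, bogus_bgp_updates):
        """Toy reputation in [0, 1]; 1.0 means no observed misbehaviour."""
        penalty = 0.001 * spam_reports + 0.05 * spoofed_prefixes + 0.1 * bogus_bgp_updates
        return max(0.0, 1.0 - penalty)

    def dscp_for(rep):
        """Map a reputation score to a drop precedence within one AF class."""
        if rep > 0.8:
            return AF_CLASS_1["low_drop"]
        if rep > 0.5:
            return AF_CLASS_1["medium_drop"]
        return AF_CLASS_1["high_drop"]

    # A neighbour caught spoofing a few prefixes gets a higher drop precedence,
    # so its packets are dropped earlier when queues start filling up.
    print(dscp_for(reputation(spam_reports=200, spoofed_prefixes=3, bogus_bgp_updates=1)))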

In time, when this system is deployed by more and more ISPs (well, I should better say if :)), there will be additional benefits. First, communication between the trust servers of different ISPs could be established in order to exchange recommendations (as is already proposed in one paper). But the biggest benefit could be the incentive for ISPs to start thinking about the security of the Internet, their own security, and the security of their customers. If they don't, their traffic and their services will have lower priority on the Internet, and thus their service will be worse than that of their competitors, which will reflect on their income!

Of course, it's not as easy as it might seem at first glance. There are a number of problems that have to be solved, starting with the first and most basic one: how practical/useful is this really for network operators? Then there is the problem of how exactly to calculate the reputation. And once the reputation is determined, how will routers mark the packets? They would have to match each packet by its source address in order to determine the DS codepoint, but routers are already overloaded and this could prove unfeasible.
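For the marking question specifically, one possible sketch (assuming, hypothetically, that reputations are kept per announced source prefix; the prefixes and codepoints below are made up) is a longest-prefix match on the source address, the same kind of operation routers already do for destinations:

    import ipaddress

    # Hypothetical table: reputation-derived DSCP per announced source prefix.
    PREFIX_DSCP = {
        ipaddress.ip_network("192.0.2.0/24"): 14,     # poorly behaved ISP -> AF13
        ipaddress.ip_network("198.51.100.0/22"): 10,  # well behaved ISP -> AF11
    }

    def mark(source_ip, default_dscp=0):
        """Longest-prefix match on the *source* address to pick a codepoint."""
        addr = ipaddress.ip_address(source_ip)
        matches = [net for net in PREFIX_DSCP if addr in net]
        if not matches:
            return default_dscp  # unknown source: leave best-effort marking
        best = max(matches, key=lambda net: net.prefixlen)
        return PREFIX_DSCP[best]

    print(mark("192.0.2.77"))  # -> 14, i.e. remarked to the highest drop precedence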

I started writing a paper that I planned to submit to HotNets08, but I'm not certain I'm going to make it before the deadline, as I have some other, higher-priority work to do. The primary reason for submitting the paper is to get the feedback necessary to continue developing this idea. But maybe I'll get some feedback from this post, who knows? :)

Update, December 29, 2008:
I missed the deadline because of an omission, but the paper is available on my homepage, in the work-in-progress section of the publications page. Maybe I'll work on it a bit more and send it to some relevant conference next year. Are there any suggestions or opinions about that?

Tuesday, February 19, 2008

Student matters...

I'm always glad to read a student discussion about the meaning of life. You can truly hear all sorts of things there; some truth can be found, but, just as with our journalists, there are also quite a few twisted, or better said "adapted", statements serving the needs of the ongoing discussion. It's not without reason that our journalism is in the state it is in...

But let me return to the discussion about the meaning of life. Just as in any other such discussion, things are being discussed about which only a fraction of the facts is known, while the rest remains hidden. What exactly do I mean by that sentence? Well, it's best illustrated by the following little picture:
The smallest circle represents the area that concerns students. The next circle represents the domain of an individual department, then comes the faculty and, finally, the university. The picture isn't perfect, but it will serve as an illustration. So, the discussion students lead about courses and teaching is a discussion about the intersection of the first two circles. As can be seen, it covers a limited area, since the smaller circle has no view of everything the larger circle contains. Honestly, when I personally discuss the university (or even the faculty), I always keep two facts in mind:
  1. I am aware that I'm discussing my own personal problems, that is, what bothers me, but that doesn't mean these are everyone's problems, and
  2. The university is a large and complex system about which I know nothing, apart from a little bit from my own experience.
To conclude, in any discussion we should be aware that we don't see the whole truth, and therefore we must not be categorical in our statements, and especially must not hand out advice that is unworkable or simply has nothing to do with the truth. There are problems on all sides, from primary and secondary schools, through faculties and universities, all the way to the ministry and the state. It is an extremely complex circle whose solution requires time (I think it can be measured in decades), money, planning and will. Unfortunately, there is hardly any of those. Therefore, discussing small pieces of the whole story, which are mostly consequences, won't change anything, because the causes remain.

Friday, February 8, 2008

New Internet architecture, my take at it no. 1

Reading all those papers about a new Internet architecture simply doesn't give me peace. What is the solution? Probably it is simple in concept, though, as always, the devil is in the details. Look at the Internet now: when packet switching was first proposed it looked like a lunatic's idea, and now it's so normal that we don't even think about it and take it for granted. So it's a strange feeling that I'm probably looking at, and thinking about, the solution without being aware of it.

So, here is try number one!

What about building the Internet in an onion-layered style? The innermost layer, the 0th layer, forms the core and is the most trusted and protected part of the network. It is not possible for outer layers to access anything inside the inner layers (maybe we could take inspiration from Bell-LaPadula and similar models here?). The infrastructure of the Tier 1 NSPs could form this 0th layer, and each inner layer would offer transport services to the layer just outside it. This model would protect the inner layers from the outer ones, as the outer layers would have no access to them. Something similar is already done with MPLS, but MPLS is deployed inside an autonomous system, not as a global concept.
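A toy model of that rule might look like the following sketch; the layer numbers, the class names and the "only one layer inward" restriction are my own assumptions for illustration, loosely in the spirit of Bell-LaPadula levels:

    # Toy model of the onion idea: a node at layer k may use the services of layer
    # k-1, but can never reach further inward.

    class LayerViolation(Exception):
        pass

    class Node:
        def __init__(self, name, layer):
            self.name = name
            self.layer = layer  # 0 = core (Tier 1), higher numbers are further out

        def request_transport(self, carrier):
            # An outer node may only hand traffic to the layer directly beneath it.
            if carrier.layer != self.layer - 1:
                raise LayerViolation(f"{self.name} cannot reach {carrier.name}")
            return f"{carrier.name} carries traffic for {self.name}"

    core = Node("tier1-core", 0)
    regional = Node("tier2-regional", 1)
    access = Node("tier3-access", 2)

    print(access.request_transport(regional))  # allowed: one layer inward
    print(regional.request_transport(core))    # allowed
    # access.request_transport(core) would raise LayerViolation: no skipping inward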

There could be several layers corresponding to the current Tier 1, 2 and 3 ISPs, each layer with more and more participants and, accordingly, less and less trustworthy. Inner layers could form a kind of isolation layer between all the participants and thus protect them from configuration errors, or from malicious attacks. Note that this could be problematic, as it means that the inner layers would not only encapsulate the outer layers' traffic but also inspect it, or assemble and disassemble it. This could be hard to do, so it's questionable whether and how it is achievable.

Each layer could use its own communication protocol, best suited to the purpose and environment it works in. For example, the core layer needs fast switching, since huge speeds with extremely low loss rates can be expected in the years to come, so packet formats best adjusted to that purpose should be used. The outer, user-facing layers would probably need more features, for example quality of service, access decisions and the like. Furthermore, a lossy network might be used there, e.g. a wireless network, so some additional features would be necessary.

Requests to the lower layers could be communicated within the format of the packets themselves, as ATM did: its cells had a different format when entering the network than inside the network, the so-called UNI and NNI formats.
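A sketch of such a boundary translation, with entirely hypothetical header fields and label table, could look like this:

    from dataclasses import dataclass

    # Hypothetical header formats: a richer edge header is rewritten into a leaner
    # core header when a packet crosses into an inner layer, much as ATM cells
    # change format between the UNI and the NNI.

    @dataclass
    class EdgeHeader:       # what the user-facing layer sees
        src: str
        dst: str
        qos_class: str      # edge-only feature, e.g. a requested service class
        payload: bytes

    @dataclass
    class CoreHeader:       # what the fast inner layer switches on
        label: int          # a short label instead of full addresses
        payload: bytes

    LABELS = {("hostA", "hostB"): 42}  # assumed label table at the layer boundary

    def ingress(pkt):
        """Boundary router: strip edge-only fields, map addresses to a core label."""
        return CoreHeader(label=LABELS[(pkt.src, pkt.dst)], payload=pkt.payload)

    core_pkt = ingress(EdgeHeader("hostA", "hostB", qos_class="video", payload=b"..."))
    print(core_pkt.label)  # 42: the core forwards on the label alone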

We could further envision the (N-1)th layer of the onion being used for content distribution. This layer's task could be to distribute content using services from the (N-2)th layer. Content could be anything you can think of, e.g. different documents (openoffice, pdf), video, audio, Web pages, mail, even keystrokes and events for remote work and gaming. Those are very different in nature, with probably many more kinds yet to be invented, so this layer should be extensible. It could also take care of access decisions and the like. Note that the content layer doesn't work with parts of objects, but with whole ones. So, if a user requests a movie, the movie is completely transferred to the content network responsible for the user at his current location.

This could make servers less susceptible to attacks as they wouldn't be directly visible to the users!

Finally, the Nth layer could be the user layer. In this layer the user connects to the network and requests or sends content addressed in a variety of ways. For example, someone could request a particular newspaper article from a particular date; the content network would search for the nearest copy of this content and use the core network to transfer the object to the user. Someone else could request a particular film, and the content network would search for it and present it to the user.
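A toy version of such a by-name request, with made-up content names, cache names and a hop-count metric standing in for "nearest", might look like this:

    # Toy content layer: the user names the object, not a server, and the content
    # network returns the nearest replica.

    REPLICAS = {
        "newspaper/2008-02-08/article-17": {"cache-zagreb": 5, "cache-vienna": 12},
        "movie/some-film": {"cache-frankfurt": 20},
    }

    def fetch(content_name):
        """Resolve a content name to the replica with the lowest hop count."""
        copies = REPLICAS.get(content_name)
        if not copies:
            return None  # here the content layer would go searching further
        nearest = min(copies, key=copies.get)
        return f"transfer '{content_name}' from {nearest}"

    print(fetch("newspaper/2008-02-08/article-17"))  # served from the nearest cache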

Just as a note, I watched VJ's lecture at Google, and this is along the lines of what he proposes.

Tuesday, February 5, 2008

DDoS attacks, Internet, new Internet and POTS...

I was just thinking about the many initiatives (e.g. GENI) to design the Internet from scratch! This certainly requires us to break out of the current way of thinking, which has been with us for about 40 years now, and to find and propose something new. A good example of such a breakthrough was the Internet itself, i.e. the concept of a packet-switched network. As a side note, Van Jacobson has an idea of what this new thing might look like, and I recommend the reader find the lecture he gave at Google on Google Video.

While thinking about what this "new" thing might be, I took DDoS attacks as an example. There are no DDoS attacks in POTS, and they are a big problem for the Internet. So, how should this new mechanism work in order to prevent DDoS attacks? The key point of a DDoS attack (or, more generally, a DoS attack) is that there are finite resources that are consumed by the attacker, and thus regular users cannot access those resources; they are denied service.

And while I was thinking about it, I actually realised that a DDoS attack is possible in POTS too, as there are also finite resources there. Ok, ok, I know, I managed to reinvent the wheel, but hey, I'm happy with it. :) So, if it is possible, why are there no DoS attacks in telephony? The key point is that end devices in POTS are dumb and thus not remotely controllable. If they were remotely controllable, the attacker would be able to gain access to them and use a huge number of those devices to mount an attack on a selected victim. Maybe such an attack would be even more effective than the one on the Internet, since the resources allocated to end devices are not shared even when the end devices don't use them.

It turns out that the DDoS attack is actually a consequence of giving more power to the user via more capable end devices. Furthermore, because those end devices are complex systems, it's inevitable that there will be many ways of breaking into and controlling them.

Of course, someone might argue that the problem is the ease with which IP packets can be spoofed. But this is actually easily solvable, at least in theory, if each ISP would check its access network for spoofed addresses. The more serious problem is actually a DoS attack carried out with legitimate IP packets. It is traceable if it comes from a single source, or a small number of sources, but the real problem is a network of compromised hosts (a botnet). There is no defence against those networks, as they look like legitimate users.
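As an aside, a sketch of that per-ISP check at the access network, with made-up interface names and customer prefixes, in the spirit of ingress filtering:

    import ipaddress

    # Access router check: drop packets whose source address does not belong to
    # the customer prefixes behind the port they arrived on.

    CUSTOMER_PREFIXES = {
        "eth0": [ipaddress.ip_network("203.0.113.0/25")],
        "eth1": [ipaddress.ip_network("203.0.113.128/25")],
    }

    def accept(interface, source_ip):
        """Accept a packet only if its source fits the prefixes assigned to that port."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in net for net in CUSTOMER_PREFIXES.get(interface, []))

    print(accept("eth0", "203.0.113.10"))  # True: legitimate customer source
    print(accept("eth0", "198.51.100.9"))  # False: spoofed source, dropped at the edge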

So, because we are limited by the real world and will always have only finite resources at our disposal, it turns out that the only way to get rid of DDoS is to restrict end devices, which by itself is impossible. Now, this is thinking within the current framework. But what if we could make a finite resource appear infinite, or somehow restrict end devices... This is something for further thinking...

About Me

scientist, consultant, security specialist, networking guy, system administrator, philosopher ;)
