On MakeTheClientPay, there is much discussion of denial of service attacks, and a scheme is proposed for thwarting some types thereof. This page is an attempt at a proof that such schemes cannot in general work. Please BeConstructive with your edits to this page.
----
OK, here is a proof, perhaps in outline only, of why "Denial of Service" attacks cannot be thwarted by a "Client Must Pay" scheme. If you spot a hole, please be specific and constructive.

* A "Denial of Service Attack" is, by definition, a coherent set of actions taken by someone, an attacker, '''A''', that prevents a server, '''S''', from providing their service to a legitimate client, '''L'''.
* Consider requiring a client, '''C''', to prove that they have made a payment, '''P''' (standing both for proof and payment - there should be no confusion).
* '''C''' must send their request for service, '''R''', to '''S''', including '''P'''.
* '''S''' must receive '''R'''.
** This requires resources.
* '''S''' must confirm that '''P''' is valid.
** This requires resources.
* '''A''' can send an arbitrary number of '''R'''s to '''S''', whether valid or not.
* '''A''' can thereby force '''S''' to expend an arbitrary amount of resources confirming that various '''P'''s are valid.
* '''A''' can thereby degrade the service provided by '''S''' to '''L'''.
* In the extreme, '''A''' (which may be distributed, and therefore not recognized as a single attacker) can prevent '''S''' from servicing '''L'''.
* Therefore '''A''' has successfully executed a DoS attack on '''S'''.

This seems independent of the mechanism of '''P''', and to prove that a DoS attack can succeed by definition. (A minimal sketch of this cost model, in code, follows the definitions below.)
----
However, this proves a somewhat useless truth: if '''A''' has enough resources, he can always simply overwhelm '''S''', should he wish to do so. Against brute force there is no defense, still less so if the attack is carried out with essentially stolen resources, in the form of a Distributed DoS attack with a botnet.

Traditionally, only attacks where the attacker spends considerably fewer resources himself than he is able to tie up at the server were called DoS attacks. The classic example is SynFlooding, where '''A''' only sends a few small packets and is rewarded with a completely clogged server. To some extent this is still possible with every web server, which can be coaxed into forking a thread and probably doing lots of calculation by a single small request. Against attacks of this type, the scheme described on MakeTheClientPay may well help.
----
Since the definition of DoS was contested, here are a few definitions taken from the 'net:

* ... a type of attack on a network that is designed to bring the network to its knees by flooding it with useless traffic.
** http://www.webopedia.com/TERM/D/DoS_attack.html
* A denial-of-service attack (also, DoS attack) is an attack on a computer system or network that causes a loss of service to users, typically the loss of network connectivity and services by consuming the bandwidth of the victim network or overloading the computational resources of the victim system.
** http://en.wikipedia.org/wiki/Denial_of_service
* A "denial-of-service" attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service.
** http://www.cert.org/tech_tips/denial_of_service.html

If your definition differs substantially from these, it would appear you are talking about something completely different. If it doesn't, the original reasoning seems to stand.
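Here is the promised sketch, in Python, of the cost model the proof relies on. It is a toy: the cost constants, the verify_proof check, and the budget are all invented assumptions, chosen only to show that '''S''' pays something per request whether or not '''P''' is valid:

 # Toy cost model: the server pays to receive and to verify every request,
 # even when the attached proof-of-payment is garbage.
 RECEIVE_COST = 1   # hypothetical units of server resource spent receiving a request
 VERIFY_COST = 2    # hypothetical units spent checking a proof-of-payment

 def verify_proof(proof):
     # Stand-in for whatever validity check P requires.
     return proof == "valid"

 def serve(requests, budget):
     served = 0
     for proof in requests:
         budget -= RECEIVE_COST        # S must receive R: this requires resources
         if budget <= 0:
             break                     # resources exhausted; L is denied service
         budget -= VERIFY_COST         # S must confirm P: this requires resources
         if budget <= 0:
             break
         if verify_proof(proof):
             served += 1
     return served

 flood = ["garbage"] * 10_000 + ["valid"]   # A floods S; L's one valid request is last
 print(serve(flood, budget=100))            # prints 0: L never gets served

The sketch makes exactly the point of the proof: the per-request costs are nonzero no matter how cheap verification is made, so a sufficiently large flood exhausts the budget before '''L''' is ever served.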
----
I removed this line of reasoning from MakeTheClientPay because I didn't want to muddy the waters over there; I wanted a clear discussion of a single line of reasoning. However, to answer some of the points you raised over there:

* The DoS attack is also radically asymmetric. It can use a huge number of computers to overwhelm a single service.
** "Number of computers" isn't a meaningful term. To overwhelm a service, it takes as much bandwidth or other resource as the service has available. When it takes less than that, you have a real problem. When it takes much more, you're safe. The number of computers necessary for this to happen means nothing.
* Under these circumstances even trivial amounts of computing required to verify individual payments become significant.
** For some meanings of "significant" which just so happen to be insignificant. The only meaning that's relevant is economic significance. And it's just not economically significant.
* You are right when you say that bandwidth is the bottleneck. However, the above line of reasoning suggests that it doesn't matter what the packets contain; DoS is possible simply by flooding the network asking for the service.
** And that too is irrelevant. It is theoretically possible, using extensive brainwashing, to turn me into a raving right-winger. Nobody is too concerned about this possibility because it's so improbable as to be absurd.

It ''may'' be a question of economics, you ''may'' be absolutely right, but that doesn't show where the line of reasoning fails.

* If you allow the definition, where is the logic wrong?
* If you don't allow the definition, what definition would you use?
----
The flaw in your proof is here:

* '''A''' can send an arbitrary number of '''R'''s to '''S''', whether valid or not.

RK hasn't really been clear on when and where this proposal applies. '''It does not work against DDoS attacks'''. The assumption in HashCash and similar algorithms is that the client cannot marshal significantly more computing resources than the aggregate of legitimate users. The type of DoS attack addressed by HashCash etc. is the one where a rogue client uses an abundant resource (CPU time) to take up a scarce resource (TCP connection state, user attention). It's based on forcing a client to pay exponentially more time to establish a connection than a server pays to service it. If the client sends an invalid certificate, then the server can just drop the connection, and that takes up fewer computational resources than servicing a legitimate client.

It does absolutely nothing about trojans that can commandeer thousands of computers to ''look'' like legitimate users. There's no defense against these - all they have to do is flood the network pipe until nobody else can get through. But it can be a significant deterrent against the single guy in Russia with a spam-mailer, a DSL connection, and a few thousand e-mail addresses. And one of the proposals is to create a "beacon" - a prefix that must be included in the hashed string - and change it every week. Under this proposal, your zombie DDoS machines would need to call home to their target every week, exposing themselves and making it obvious that a DDoS attack is in the making. -- JonathanTang
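To make the mechanism concrete, here is a simplified sketch of the HashCash idea in Python. It is not the real hashcash v1 stamp format: the stamp layout, the difficulty setting, and the leading_zero_bits helper are illustrative assumptions.

 import hashlib, itertools

 BITS = 16   # difficulty; kept small here so the example runs quickly

 def leading_zero_bits(digest):
     # Count the leading zero bits of a hash digest.
     bits = 0
     for byte in digest:
         if byte == 0:
             bits += 8
         else:
             bits += 8 - byte.bit_length()
             break
     return bits

 def mint(resource, beacon=""):
     # Client side: brute-force a nonce. Expected cost is about 2**BITS hashes.
     for nonce in itertools.count():
         stamp = "%s:%s:%d" % (beacon, resource, nonce)
         if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= BITS:
             return stamp

 def verify(stamp):
     # Server side: a single hash, then accept or drop. Cheap either way.
     return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= BITS

 stamp = mint("alice@example.com")   # slow: the client pays
 assert verify(stamp)                # fast: the server checks
 assert not verify("forged junk")    # junk is rejected at the cost of one hash

The asymmetry described above is visible directly: mint loops on the order of 2**BITS times, verify hashes exactly once, and an invalid stamp costs the server no more to reject than a valid one costs to accept.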
Well, that's not quite true. Let's put everything in terms of HashCash, because it makes things clearer and less controversial. Even without the beacon, hashcash makes spamming more difficult for botnet operators.

If hashcash increases the cost of sending an email by a factor of 100x, then the number of bots you need in your botnet to produce the exact same result is multiplied a hundredfold. That's a big factor, and in the real world it matters. It matters because it means that the total number of spam attacks that ''can'' be performed in the world is suddenly decreased a hundredfold.

Now, there's another thing about botnets which is extremely important. It is that bots are ''stolen property''. That doesn't seem to matter from a technical point of view, but it sure matters a lot from a politico-economic point of view. Because if clients are stolen (bots), then putting pressure on the bots until they are unusable to their legitimate owners will force those owners to secure their computers and pull them out of the botnet. Now, botnet operators still have the option of operating invisibly, but only at the cost of increasing the size of their botnet by whatever ''arbitrary'' factor the hashcash user decides to price their service at. Botnet operators aren't very smart people, and so it's inevitable that they will screw up their stealth operations, revealing the zombie state of many of their bots and making botnets less viable. -- RK

Right, but that's not the type of DDoS attack being assumed above. That attack is to send a ''knowingly false'' certificate to force servers to spend CPU resources verifying a bunch of random bits. Instead of the price ratio being (cost of generating certificate) / (cost of verifying certificate + cost of servicing request), it's (cost of generating a string of 0s) / (cost of rejecting certificate). The latter is still in the attacker's favor, but by a much smaller margin. A server's still way better off employing HashCash or a similar scheme than not, but it doesn't completely eliminate the threat.

You can prevent attention theft from spam: the limiting resource will be CPU, so the mail server will overload long before the messages reach recipients. You can also prevent the SYN-flood attack that you mentioned on the original page: the bandwidth pipe overloads long before the server runs out of buffer space for connection state. But you can't prevent ''all'' DDoS attacks, and it's the trivial cases like bandwidth overload that Eric etc. are harping on. This is inherent as long as a) it is possible to gain control of a large number of machines and b) it takes a non-zero amount of resources to reject a connection. Simply throw enough machines into pinging a site until it overloads. If Google wanted to take down your home PC, for whatever reason, there's no way you could stop them, because they have 100,000 machines and you have 1. Even if you set your firewall to block all traffic from Google, it'd just saturate your incoming pipe and router. The only way to prevent this is to prevent attackers from gaining control of a large number of machines, which ''is'' a security issue. -- JonathanTang
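The two price ratios above can be put into toy numbers. Every figure here is a made-up assumption, chosen only to show the shape of the argument:

 # Hypothetical per-request costs, in arbitrary units.
 GEN_VALID  = 1_000_000   # client mints a valid certificate (~2**20 hash attempts)
 VERIFY     = 1           # server verifies a certificate (one hash)
 SERVICE    = 100         # server actually services the request
 GEN_FORGED = 0.01        # attacker emits a string of 0s (a memset and a send)
 REJECT     = 1           # server hashes the junk, then drops the connection

 honest_ratio = GEN_VALID / (VERIFY + SERVICE)   # ~9900: heavily anti-attacker
 forged_ratio = GEN_FORGED / REJECT              # 0.01: still pro-attacker
 print(honest_ratio, forged_ratio)

Under these invented numbers, a client who plays by the rules pays roughly four orders of magnitude more than the server, while a forger still enjoys a 100:1 advantage - in the attacker's favor, but a far smaller margin than a SynFlooding attacker gets against an unprotected server.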
----
From a security point of view, the ''only'' thing wrong with D/DoS floods is the fact that the zombies are stolen. If the only solution is to prevent their being stolen, then this is literally ''not a problem''. At this point I can wash my hands of the whole thing.

''And at this point, Richard has moved the goalposts. His initial claim to stop D/DoS attacks hinges on his idiosyncratic definition of what DoS means, and now that he defines it as something other than what the rest of the world does, he can claim success.''
----
The fact that '''S''' must allocate resources to receiving '''R''' is the core of the problem. '''S''' remains vulnerable to exhaustion of those resources no matter how efficiently it evaluates '''P'''. At every protocol level, a server is vulnerable to DoS attack as long as receiving information consumes finite resources. -- EricHodges

Does it take more resources for an attacker to send a falsified payment than it takes for the victim to verify that the payment is false? -- AnonymousDonor

A falsified payment is no more than a memset (plus a network send); verification means computing the hash of that falsified value. Hash functions are usually pretty fast, but there's no way one will be faster than a memset. (Actually... conceivably it's possible, if memory reads are significantly faster than memory writes - enough to make up for the arithmetic of the hash function. I don't know any architectures where this is the case.) -- JonathanTang

The most efficient verification would be a compare (assuming that a given window can be assigned a single valid value). That's the best possible case - where network bandwidth is the limiting factor. -- EH
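A rough micro-benchmark of the point being argued: forging costs a memset, rejecting a hash-based '''P''' costs a hash, and rejecting a single-valued '''P''' costs only a compare. The absolute numbers are machine-dependent and the 64-byte payload size is an arbitrary assumption; only the ordering is of interest:

 import hashlib, timeit

 FORGED = bytes(64)     # attacker's "payment": 64 zero bytes
 EXPECTED = bytes(64)

 forge   = lambda: bytes(64)                      # attacker: allocate and zero a buffer (a memset)
 reject  = lambda: hashlib.sha1(FORGED).digest()  # server verifying a hash-based P
 compare = lambda: FORGED == EXPECTED             # server verifying a single-valued P

 for name, fn in [("forge", forge), ("reject", reject), ("compare", compare)]:
     print(name, timeit.timeit(fn, number=200_000))

The network send on the attacker's side is not counted here, which is the usual caveat: on the wire, both sides also pay bandwidth, and that is the resource EricHodges points to as the ultimate limit.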
----
A few points:

* Several times in the above, Eric and I were lumped together. I think this is inaccurate. I haven't edited it because it would require substantial, although trivial, changes to the surrounding text.
* Please note, however, that Eric has been advancing arguments other than, and in addition to, mine. My arguments were specific and limited. Anything not signed by me was not said by me, and should not be construed as agreed with by me.
* That's not to say I don't agree with them; I just haven't checked them all in detail. I don't want to be side-tracked by other people's reasoning and examples.

Jonathan: why is it not true that

* '''A''' can send an arbitrary number of '''R'''s to '''S''', whether valid or not.

Which part do you disagree with? If I have access to a fat pipe and a really, really, really fast machine, I can make essentially arbitrarily many requests. Are you concerned about the limit on the communication capacity? CPU capacity? Do you want me to change the word "arbitrary" to tighten the statement?

* ''Arbitrary means unbounded, pretty much. And both Jonathan and I are talking about economics. What do you think?''
* The IP network behaves precisely in this way: everything a sender sends is transmitted to its destination; the sender's and receiver's bandwidth is used up in equal amounts; the receiver has no choice in what it receives. If you want to invert the first and last conditions, so that the receiver chooses what it receives, then you've got TCP, not IP. The only other option is to deny the second condition; good luck with that one. The second condition is what makes network bandwidth a shared resource, exactly like storage and computing power inside a single computer. And as a shared resource, it is a power resource; that is, it can be hoarded to exert power over someone else using the same resource. This is ''correct behaviour''. -- RK

You wrote:

* Eric ''et al'' are going from the position "oh, look, all the D/DoS attacks look exactly identical on the server side, therefore they must be identical" when in fact they're radically different on the client side.

I'm not saying that all attacks are identical. I am saying that loss of service is the result of insufficient server resources to meet the demands being placed on them. The causes may be different, but the line of reasoning I was trying to give is independent of that.

*** I think our discussion has, so far, been very fruitful. In particular, I think I'm starting to understand your premises. They're a long way from what I thought they were, hence the exchange has taught me about your domain of discourse. Not helpful to you, of course, except that I'm starting to understand what you're saying.
*** I do care about using a correct definition. I'm trying to understand your definition, so I can see where my definition differs, where my definition might be wrong. I'm sorry if it seemed otherwise. If I said something that made it seem like I don't care, then I've mis-spoken, or you've misunderstood. Which it is is irrelevant; consider it unsaid.

I'm trying to clarify the situations under which the scheme described on MakeTheClientPay does or does not work.

If you have access to a fast machine and a fat pipe, bigger than that of the server you're attacking, you will ''always'' be able to overwhelm it through sheer network traffic. But that's outside the scope of what Richard's talking about; he acknowledges that, then glosses over it as unimportant (I'm not certain I agree, but I don't want to debate the relative importance of various types of attacks, so I let it slide). You're all talking past each other. Can we frame it like this (a toy model of the two cases follows below)?

* In the case where one side has significantly larger resources than the other, the more powerful side will ''always'' be able to overwhelm the lesser with a (D)DoS attack.
* In the case where one side has fewer total resources than the other, yet manages to trade an abundant resource for a scarce one (CPU time for user attention in spam, or for connection state in SYN floods), a system such as MakeTheClientPay will prevent the attack by using up more of the abundant resource.

Your assumption that I quoted holds in case 1, but your proof isn't being disputed in that case (I'm not sure we're making this clear). In case 2, it's ''not'' the case that one side can marshal arbitrary resources, by definition. -- JonathanTang
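Here is the toy model of the two cases. All parameter values are hypothetical, and the model deliberately prices every request in a single abundant resource:

 # Toy model: with a client-pays scheme, each request costs the attacker
 # price_per_request units of the abundant resource. The attack works only
 # if the attacker can still generate requests faster than the server's
 # capacity to absorb them.
 def attack_succeeds(attacker_resource, server_capacity, price_per_request):
     requests = attacker_resource / price_per_request
     return requests > server_capacity

 # Case 1: raw power. The attacker is simply outgunned... er, outguns the server.
 print(attack_succeeds(attacker_resource=1e9, server_capacity=1e4, price_per_request=1))     # True

 # Case 2: resource trading. Raising the price closes the asymmetry.
 print(attack_succeeds(attacker_resource=1e6, server_capacity=1e4, price_per_request=1))     # True without payment
 print(attack_succeeds(attacker_resource=1e6, server_capacity=1e4, price_per_request=1000))  # False with payment

Raising price_per_request (the HashCash difficulty) flips case 2 from True to False. In case 1 the attacker's resources dwarf any price the server could realistically set - and for pure bandwidth floods the server pays to receive '''R''' before it can charge anything at all - which is why the proof at the top of the page still holds there.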
I don't gloss over it as unimportant. It's very important; it's just not my concern, because I claim that this is ''not a failure''. I claim that you ''should'' be able to DoS a server through sheer power. The correct solution to some nobody denying a popular service is not to whine about it and try to prevent such attacks using some sort of totalitarian scheme (and that is exactly what would be required). The correct solution is to distribute the service across every computer that would have it. That too is just basic politico-economics. If the web and DNS architectures don't allow such massive distribution, then the failure doesn't lie in IP but in the web and DNS architectures. Which, incidentally, we know to be broken for entirely different reasons. So let's fix ''those'' and we'll be done with DoS, at least for any services we care about.

If you ever fix the web and DNS (replace them with something radically different), then something downright interesting happens. What happens is that DDoS no longer applies to the vast majority of services, but becomes a problem only for a very specific type of service: corporate services. A corporation can't afford to hand over even partial control of its data and processes to third parties, because that would inevitably cut into its revenues. Nor can it afford to arbitrarily add more computing power, because that would increase its costs. Everyone else is safe, but corporate services aren't, and that is something downright interesting. If DDoS is a corporate problem, why should anyone care about it? -- RK

Wait a second, let's make this explicit. You're claiming that your solution to DoS attacks doesn't prevent some DoS attacks (those that consume all available network bandwidth) because that kind of DoS attack shouldn't be prevented? Do I understand you correctly? -- EH
----
CategorySecurity CategoryInternet CategoryDistributed