HTTP over both TCP and UDP

Discussion in 'Linux Networking' started by karthikbalaguru, Apr 25, 2009.

  1. Hi,
    I understand that HTTP uses TCP rather than UDP,
    and most existing implementations of HTTP are
    based on TCP, since reliability is critical for
    Web pages with text.

    But while TCP is good for transferring long files,
    it is not well suited to small sessions.
    So, can HTTP switch over between TCP and UDP
    rather than using TCP alone?

    By supporting both TCP and UDP, we would be able
    to utilize the benefits of each.
    Is there any open source implementation available for this?

    Thanks in advance,
    Karthik Balaguru
     
    karthikbalaguru, Apr 25, 2009
    #1

  2. No, just 2 packets (like on a Telnet session): the byte encapsulated in
    a packet and an ACK in the opposite direction, assuming that both
    packets actually arrive at their destinations.

    Robert
     
    Robert Harris, Apr 25, 2009
    #2

  3. I don't see why it would be good. TCP has been optimized by experts
    over many, many years and is very, very good at what it does. The only
    time you should prefer UDP is when TCP provides some expensive
    feature that you don't need. But we need everything TCP does.
    Specifically, we need the following things UDP does not provide:

    - Slow start, exponential backoff, transmit pacing, congestion control
    - Retransmissions, acknowledgements
    - Reordering, duplicate packet rejection, lost packet detection, RTT
      estimation
    - Path MTU determination, segmentation
    - Session management

    So the question is, why would you think that you can implement those
    things better than TCP does? In other words, every feature (pretty
    much) that TCP provides that UDP doesn't is one you need. So it's
    madness to use UDP.

    DS
     
    David Schwartz, Apr 26, 2009
    #3
  4. Before you even get to the technical issues (which David Schwartz
    outlined, and which might be summarized as saying HTTP was designed to
    run on top of TCP), you'd need a server and browser that both used
    UDP. And nobody is going to create one of them until the other is in
    wide use.
     
    Joe Pfeiffer, Apr 26, 2009
    #4
  5. I don't think that's true, so long as there was no net harm to the
    capabilities for clients/servers that didn't support it. If there were
    a standard that provided genuine advantages if supported on both ends
    and had effectively no cost if it wasn't supported, and someone
    contributed patches to Apache and Firefox to implement it, I can't
    think of any reason those patches would be rejected.

    You would need to have a standards document explaining how it works in
    sufficient detail to determine if implementations were correct or not.
    Ideally, it would be submitted as an IETF draft. The general opinion
    would have to be that it's, at least, not harmful. The implementation
    would have to be of comparable quality to the existing code in those
    projects.

    The catch though is that the implementation would have to make sense.
    And there's a critical reason why it likely won't -- TCP-like
    protocols don't generally coexist well with TCP on the open Internet.
    And if you go to enough effort to make it interoperate
    well, you'll wind up suffering from the fact that the TCP
    implementation is tuned for the platform and tightly integrated with
    the network driver architecture and your user-space implementation
    can't be. For example, exactly when you send your packets will be
    critical to sharing bandwidth fairly, and precision timing in user-
    space is difficult to impossible.

    DS
     
    David Schwartz, Apr 27, 2009
    #5
  6. How do you figure? A UDP request can easily be thrown away by the HTTP
    server. With TCP, a connection will be established by the kernel
    whether or not the server wants it to be.

    Again, how do you figure? TCP is vulnerable to all the same denial-of-
    service attacks UDP is, plus TCP-specific attacks such as SYN floods.

    DS
     
    David Schwartz, Apr 27, 2009
    #6
  7. Right, but any algorithm proposed would also have congestion control,
    so it couldn't be used to overwhelm outbound bandwidth. As for inbound
    bandwidth, an attacker is not going to follow the congestion control
    algorithms anyway, so there's no difference there.

    Right, and in fact the client creates fabricated packets.

    Right, but I can attack a server with UDP packets whether or not it's
    listening for them.

    I don't see how. You can bombard the server with UDP packets and DDoS
    it whether or not it is listening for those packets.

    I'm not sure I follow your argument. Are you saying that they would
    overload the server with a bunch of requests and it would be saturated
    trying to process those requests? Or are you saying the network or the
    stack would be overloaded? If the latter, that could occur whether or
    not the server is listening. If the former, that is solvable in the
    design of the protocol (for example, by requiring a token before the
    request is processed and issuing the token over TCP).
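
    As a rough sketch of that token scheme (the names, the token budget,
    and the "<token> <request>" wire format are all made up for
    illustration, not taken from any spec), the server side might look
    like:

        # Sketch only: tokens issued over TCP, checked on each UDP request.
        import os

        TOKENS = {}      # token (bytes) -> number of requests served so far
        MAX_USES = 100   # budget before a token counts as "abused"

        def issue_token():
            """Would be called from the TCP handler; returns a fresh token."""
            token = os.urandom(8).hex().encode()
            TOKENS[token] = 0
            return token

        def udp_loop(sock):
            while True:
                data, addr = sock.recvfrom(2048)
                token, _, request = data.partition(b" ")
                uses = TOKENS.get(token)
                if uses is None or uses >= MAX_USES:
                    TOKENS.pop(token, None)  # unknown or abused: forget it,
                    continue                 # drop silently -- no reply, no new state
                TOKENS[token] = uses + 1
                sock.sendto(b"response to " + request, addr)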

    DS
     
    David Schwartz, Apr 28, 2009
    #7
  8. You would pretty much have to do what TCP does: slow start,
    exponential backoff, transmit pacing, and so on. That's largely why
    this is pointless -- to do it, you'd have to replicate TCP.

    Congestion of outbound bandwidth is. If I try to congest your outbound
    bandwidth, I'll open a bunch of HTTP sessions and request a large
    file. Even if I don't ACK your data packets, you'll waste a lot more
    of your outbound bandwidth than I'll waste of mine getting you to do
    it.

    You prevent an attacker from overwhelming your outbound bandwidth by
    controlling how much outbound data you send.
    It doesn't matter where the congestion occurs; if it's due to server-
    to-client traffic, you can solve it by controlling how much data the
    server sends. This is exactly the same issue with both TCP and UDP.

    Yes, congestion control, just like what TCP does.

    Right, but that applies equally to TCP and UDP, so no difference
    there.

    Right, but what does that have to do with anything? You can queue up a
    million UDP packets and flood me with them whether or not I'm running
    any UDP protocol.

    Right, but so what? You think it's easier to overwhelm a web server
    than the network it's on?

    Perhaps you're thinking that the server would get overwhelmed by the
    requests if they were TCP. Sure, because the server has to create
    state and has to send teardown packets and so on. But they're UDP.
    The server just has to inspect and drop them. It's quick and
    efficient.

    So what? The server OS would have to process the packet even if the
    server weren't listening. And the server's processing of the packet
    can consist simply of dropping it. In an attack scenario, the net
    cost of a UDP packet is way less than the net cost of a TCP packet.

    But a server isn't immune even if it's not listening for UDP. Just
    hit it with UDP packets anyway, say on its DNS port. The relative
    cost of a listened-to UDP packet and a user-space-dropped UDP packet
    is just a bit of server CPU, and the server CPU typically vastly
    outpaces the network connection.

    Actually, that's not quite true. Routers can tell UDP senders to slow
    down by delaying their packets. And in practice, routers don't
    mutilate TCP packets. Even though ECN is available, it's not very
    widely supported and there's no evidence it's a critical part of
    protecting against attacks. In any event, UDP can use ECN as well as
    TCP can.

    If the packets were hurting the server, it could simply drop them. If
    it did that to TCP, it would have to tear down the connection. The
    benefits of UDP being connectionless easily make up for the slight
    extra cost of having to drop the packet in user space.

    Any sensible server would have a fast path to drop suspicious packets
    during an attack. The protocol could specify, for example, that you
    send one UDP packet and if you get no reply, you must use a TCP
    request to obtain a validation token. Then each subsequent UDP request
    can contain that token. A packet with an invalid or abused token can
    be dropped immediately.
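
    And a client following that one-UDP-try rule (the host, port, timeout,
    and one-line request format here are placeholders, not a real
    protocol) might be sketched as:

        import socket

        def fetch(host, port=8080, udp_timeout=0.5):
            # One UDP attempt only, per the rule above.
            u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            u.settimeout(udp_timeout)
            try:
                u.sendto(b"GET /index.html", (host, port))
                reply, _ = u.recvfrom(65535)
                return reply              # answered over UDP: done
            except socket.timeout:
                pass                      # dropped (perhaps attack mode): fall back
            finally:
                u.close()
            # Fall back to TCP; in the scheme above, this is also where the
            # client would request a validation token for later UDP use.
            t = socket.create_connection((host, port), timeout=5)
            try:
                t.sendall(b"GET /index.html\r\n")
                return t.recv(65535)
            finally:
                t.close()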

    You're just imagining a poor implementation and then blaming the
    protocol for the poor implementation.

    A UDP request with an invalid or abused token can be immediately
    dropped by the server. The overhead of setting up and tearing down a
    TCP session is avoided. No kernel state need be kept. An attack on
    such a protocol would be harder than a TCP SYN flood.

    Sure, and we instantly drop all but the first two because the token
    has been abused. That's *perfect*, since we don't have to go to the
    aggravation of keeping TCP state.

    There is no congestion control on any attack; that's what makes it an
    attack. The beauty of UDP is that we don't have the unavoidable
    penalty of session establishment and teardown.

    DS
     
    David Schwartz, Apr 29, 2009
    #8
  9. Right, but we're not talking about using raw UDP; we're talking about
    using a protocol layered on top of UDP. In general, routers don't
    handle TCP packets any differently from UDP packets, and to the extent
    they do, I know of no difference that's critical to denial-of-service
    attack mitigation.

    Yes, it can. It can delay or drop the UDP packets, just as it does
    with TCP packets.

    So can a UDP attack, the same way.

    Neither would a TCP-based attack. I don't know what automatic router
    blocking technique you're thinking of, but I don't think it actually
    exists. Service providers don't treat TCP packets any differently from
    UDP packets, and once you get to the server, it's too late to stop an
    attack that congests inbound bandwidth.

    If you don't know the proposal, how can you conclude that it will be
    vulnerable to denial-of-service attacks?

    And the server can simply drop the UDP packets. That's much cheaper
    than TCP, which requires session establishment and teardown.

    Then, duh, you DON'T RESPOND TO THE QUERY. See how simple it is?

    Whereas with TCP, it's too late. Once you've established a connection,
    you have to tear it down.

    So what?

    Simple: you put a sequence number in the UDP packet. If the sequence
    number is not valid, the server drops it. Dropping a packet in user
    space is cheap.

    You put a token at the beginning of the packet. If the token is
    invalid or abused, you drop the UDP packet. It's really that simple.

    We're not talking about a web server, we're talking about *THIS*
    protocol. If this protocol says "drop duplicate requests", then a
    proper implementation will drop duplicate requests.
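
    One way a "drop duplicate requests" rule could be implemented (the
    request-id table and 30-second window are illustrative only):

        import time

        SEEN = {}        # request id -> time first seen
        WINDOW = 30.0    # how long a request id stays "live"

        def is_duplicate(request_id: bytes) -> bool:
            now = time.time()
            for rid in [r for r, t in SEEN.items() if now - t > WINDOW]:
                del SEEN[rid]            # evict old ids; the table stays bounded
            if request_id in SEEN:
                return True              # duplicate: the caller drops the packet
            SEEN[request_id] = now
            return False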

    You are, again, assuming a completely broken specification and then
    complaining that the specification is completely broken. Try assuming
    a non-idiotic specification.
    How? The network treats them the same. Once you get to the server
    machine, it's a non-issue. You're not going to overwhelm the machine.
    At least with UDP, you can spare the network and the server kernel the
    overhead of session setup and teardown.

    Right, that's because we don't have the protocol specification yet.
    You are claiming it's not possible to develop such a specification
    because of inherent problems with UDP. But all of your complaints are
    trivially solvable.

    The solution to this one is simple: if the server detects it is under
    attack, it switches to a mode in which each UDP query must contain a
    valid, unabused token. Queries without one are simply dropped.
    Legitimate clients are programmed to send one UDP request, and if it's
    not replied to, they send a TCP request that includes a request for a
    token.

    Dropping a UDP packet with no valid token is no more expensive than
    dropping a TCP packet. There are advantages and disadvantages on both
    sides. With TCP, if the server drops it, the kernel has to tear down
    the TCP session. With UDP, it doesn't. But with UDP, the drop has to
    take place in user space. More or less, it's a wash.
    How do you figure?

    Right. If you assume an attacker will be nice, then this will work for
    both TCP and UDP. If you don't, it won't work for either. And, again,
    someone can flood you with UDP whether or not you are listening for
    it.

    For both TCP and UDP. So this is not an issue.

    That's a negligible difference, since compromised clients will find it
    plenty easy and uncompromised clients can still flood you with UDP
    even if you're not listening for it. Once the packet reaches the
    server, the damage is already done.

    Except it won't overload the server. A typical server will barely
    break a sweat dropping a full pipe of bogus UDP requests. So long as
    the application provides a way to easily detect bogus requests (which
    is trivial, as I've explained), it doesn't matter.
    This is *NO* different from the following scenario:

    1) An evil client issues a web request using TCP.
    2) The client then floods the server with 10,000 UDP packets to a
    random port.
    3) Repeat for 10,000 evil clients.

    You think the server CPU won't be able to drop the UDP packets?!
    Seriously?
    So what? A CPU is way faster than the Internet.

    It looks at the token. If it's valid and unabused, it answers the
    request if it wants to. It can always drop the packet if it wants;
    the client will try again using TCP if it's legitimate.

    The same way TCP does. Honestly, this is really not complicated.
    Layering reliable protocols with special characteristics on top of
    UDP is really not all that complicated.

    You are the one arguing that no implementation is possible that
    doesn't have a defect. You have the burden of proof.

    Check the token in the table. If not issued, or abused, drop the
    packet. That simple.

    Exactly. The difference is, with the UDP state, you can avoid the
    session overhead if you can tell a request is illegitimate. This stops
    an attacker from consuming as much outbound bandwidth. This will work
    so well that attackers will attack you with TCP instead, because most
    web servers have more inbound bandwidth to spare than outbound, and
    even with TCP SYN cookies, they can consume about as much outbound
    bandwidth as their flood of inbound packets.

    Because a million identical packets have no legitimate purpose. Duh.

    The same way TCP handles this without retaining state. If I get, say,
    a TCP RST segment, I check it against my table. If it matches no TCP
    session I know of, I can detect this and ignore the packet. Note that
    I didn't need to retain *ANY* state about *THIS* packet. This attack
    packet required no state to be silently discarded. Same way here.
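
    That stateless check, sketched in code (the session table and the
    Connection class are toys for illustration):

        class Connection:
            def reset(self):
                pass   # tear down this connection

        SESSIONS = {}  # (src_ip, src_port, dst_ip, dst_port) -> Connection

        def on_rst(src_ip, src_port, dst_ip, dst_port):
            conn = SESSIONS.get((src_ip, src_port, dst_ip, dst_port))
            if conn is None:
                return       # matches no known session: silently discarded,
                             # and no state was created to do it
            conn.reset()     # belongs to a live session: act on it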

    DS
     
    David Schwartz, Apr 29, 2009
    #9
  10. You do if it doesn't hurt you to do so. If it does, then you don't
    have to. The client cannot compel you to retain state, but you can do
    so if it benefits you.

    The difference is, an attacker cannot make you keep state with UDP. He
    can with TCP -- that is, if you wish to continue to accept legitimate
    requests and nothing before the user-space web server application can
    tell the two apart.

    You don't. The server would only switch to that mode during an attack.
    Under normal conditions, your initial UDP request would simply be
    responded to.

    The server can either honor those requests or not. If it chooses not
    to, it doesn't have to maintain any state to do so. The absence of the
    token would do the job. So this is no more difficult than if we
    weren't listening for the requests in the first place. Not matching a
    token is not significantly more complex than not matching a listening
    port.

    DS
     
    David Schwartz, Apr 29, 2009
    #10
  11. Quite the opposite. You can refuse to retain state. If the UDP
    protocol specifies that a UDP packet that doesn't match a retained
    state is silently discarded, you don't need to maintain any state
    about the attack to quickly drop all the attack packets.

    In contrast, with TCP, you usually can't recognize the attack packets
    until you've completed the TCP handshake. Tearing down the connections
    will waste outbound bandwidth.

    So in the classic case, a UDP attack will not consume precious
    outbound bandwidth; a TCP attack will.

    You cannot affect the rate at which an attacker makes TCP connections
    to you any more than you can slow down the rate at which an attacker
    blasts UDP packets at you.

    The difference is, you can drop the UDP packets with no reply while
    maintaining service. It is very difficult to do that with TCP
    connections, because you must complete the handshake to find out what
    the request is all about.

    No, you switch to requiring a token when the UDP port is under attack.

    Nothing, except that you can then drop those requests because they'll
    have either an abused token or an invalid one.

    Then you have to serve them. Same problem with TCP.

    If someone can bomb you with so many UDP packets that your network is
    overwhelmed, you're screwed whether you are listening for those
    packets or not.

    How do you figure? Remember, the specification says you send a
    request packet by UDP only once, and then you use TCP if there's no
    reply. So dropping the UDP packets has minimal effect on legitimate
    clients.

    DS
     
    David Schwartz, May 1, 2009
    #11
  12. It's really simple. If it doesn't match a retained state, and you
    drop it, then you have no state associated with the attack. Since it
    didn't match a state, you had no state for it before. Since you
    dropped it without creating a state, you have no state for it
    afterwards. No state on the attack traffic is required to resist an
    attack.

    This is much the same as with TCP. You normally do keep state. But
    during, say, a SYN flood, you don't keep state on the attack traffic
    with a technique like SYN cookies.

    The difference is, to retain normal service under a TCP attack, you
    must reply to each SYN with a SYN-ACK, even though you don't keep
    state. To retain normal service under a UDP attack, you need not reply
    at all. So it's a win for UDP -- no outbound traffic can be consumed
    by an attack.
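
    For reference, the SYN cookie trick works roughly like this (a
    simplified sketch; real implementations also encode an MSS index and
    handle the ISN+1 offset in the handshake's final ACK):

        import hashlib
        import time

        SECRET = b"per-boot-secret"

        def _cookie(src, sport, dst, dport, slot):
            msg = f"{src}:{sport}-{dst}:{dport}-{slot}".encode()
            digest = hashlib.sha256(SECRET + msg).digest()
            return int.from_bytes(digest[:4], "big")   # 32-bit ISN value

        def syn_cookie(src, sport, dst, dport):
            """ISN to put in the SYN-ACK; nothing is stored per SYN."""
            return _cookie(src, sport, dst, dport, int(time.time()) >> 6)

        def check_cookie(value, src, sport, dst, dport):
            """Validate the value echoed back later in the handshake."""
            slot = int(time.time()) >> 6               # 64-second time slots
            return value in (_cookie(src, sport, dst, dport, slot),
                             _cookie(src, sport, dst, dport, slot - 1))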
    1) Are we under attack? If not, assume it's valid.

    2) Check the state table, does this match an entry? If no, drop it.

    3) Check the entry for suspicious load we'd prefer not to serve. If
    so, delete the state entry and drop the packet.

    4) Serve the request.
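
    Those four steps, written out as a sketch (ATTACK_MODE, LIMIT, and the
    "<token> <request>" wire format are illustrative assumptions):

        STATE = {}            # token -> requests served so far
        ATTACK_MODE = False   # would be flipped by load monitoring
        LIMIT = 100           # "suspicious load" threshold

        def serve(request: bytes) -> bytes:
            return b"response to " + request   # stand-in for real work

        def handle(packet: bytes):
            token, _, request = packet.partition(b" ")
            if not ATTACK_MODE:
                return serve(request)    # 1) not under attack: just answer
            uses = STATE.get(token)
            if uses is None:
                return None              # 2) no matching entry: drop it
            if uses >= LIMIT:
                del STATE[token]         # 3) suspicious: delete entry, drop
                return None
            STATE[token] = uses + 1
            return serve(request)        # 4) serve the request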
    The thing is, you must reply to each SYN flood packet with a SYN-ACK,
    or normal service is lost. UDP is precisely the same, except that you
    don't need to reply to the attack packets. If you implement SYN flood
    defenses, the way SYN floods still hurt you is by overwhelming your
    precious outbound bandwidth. That's impossible with UDP, since you
    don't need to reply to the attack packets to retain normal service.

    So if this technique would resist a TCP attack, it will resist a
    comparable UDP attack even better.

    Neither can TCP servers. If they get bombed with incoming UDP floods,
    there's nothing they can do to slow them down. So this affects TCP and
    UDP servers the same.

    DS
     
    David Schwartz, May 3, 2009
    #12
  13. Correct, so this gives UDP an advantage over TCP. With UDP, you
    don't need to maintain state on invalid packets in order to accept
    valid packets. Otherwise, it's the same.

    You watch the load level. If you can easily handle it, assume you are
    not under attack. If you are struggling to handle it (and empirical
    data suggests that attack mode will help), then switch to attack mode.
    This is what servers do already; I'm not suggesting anything unusual
    here. For example, this is the same way TCP deals with SYN floods.
    (And I've designed protocols layered over UDP that resist such attacks
    using just these techniques many times.)

    Well, you don't have to, as I explained. You can treat all packets the
    same with this protocol, since packets that don't match an existing
    state are stateless in this protocol. But the text right below here
    explains how you can do this: by looking up the token.

    No, it doesn't. Checking a table does not require you to retain state.
    Only adding an entry to a table does. This step does not involve
    adding an entry to a table, so it doesn't require you to retain state.

    So what?
    So what?
    So what?

    You are complaining about completely trivial stuff.

    Usually with a simple cutoff. For example, you may allow, say, 3
    requests per session initially and then increase the amount over time.
    You may leave them unlimited unless you have reason to think you are
    under attack. There are a variety of ways to do this.
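
    One such cutoff (the numbers are arbitrary, just to show the shape):

        import time

        class TokenBudget:
            """3 requests up front, one more per second of good standing."""
            def __init__(self):
                self.issued = time.time()
                self.used = 0

            def allowance(self):
                return 3 + int(time.time() - self.issued)

            def permit(self):
                if self.used >= self.allowance():
                    return False   # over budget: the caller drops the packet
                self.used += 1
                return True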

    Remember, you are the one arguing this cannot be done, so you have the
    burden of proof, not me.
    And then you can consider the requests invalid as soon as too many
    requests rack up. There will be no effect on legitimate clients (as I
    explained). And the beauty is, as soon as you decide to invalidate the
    token, you no longer need to keep any state for any of these millions
    of requests, and *no* outbound bandwidth is consumed by them. It's a
    *HUGE* win over TCP.

    In fact, this attack would do so little that nobody would bother with
    it and would instead use more sophisticated attacks that can consume
    outbound bandwidth. (Unless it killed you by purely overwhelming your
    inbound bandwidth, which would work whether or not you're listening
    for this traffic.)

    Well, that's the idea of SYN cookies. They allow you to weather SYN
    floods without keeping state on the attack packets. That's exactly
    what I'm suggesting you do over UDP. The difference is, you *must*
    reply to the SYNs in the flood to retain normal service. You wouldn't
    need to reply to the UDP packets in the flood to retain normal
    service, so that's, again, a huge win for UDP.

    First, you don't have to. You can drop all UDP packets and still
    retain normal service. As I explained, legitimate clients will send
    one UDP packet and then switch to TCP. So the UDP flood won't really
    hurt the server unless it overwhelms its inbound bandwidth. (It can
    even close the UDP socket, I guess, though I can't imagine that would
    be needed.)

    Second, it's trivial. You issue tokens using TCP if, and only if, it
    won't hurt you to do so. If a packet comes in with a token that's not
    in the table, or with a token that's seen too heavy use, you simply
    drop the packet (and delete the token, if any). You do not need to
    keep any state about the attack packets, because it's the absence of
    a valid token that clues you in. You need only keep state about
    tokens you issue, and you can precisely control how many tokens you
    issue. So you can trivially prevent an attacker from consuming
    unbounded state.
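
    Controlling how many tokens you issue amounts to a one-line check at
    issue time (the cap here is an arbitrary example):

        import os

        TOKENS = {}          # issued tokens -> use counts
        MAX_TOKENS = 10000   # an attacker can never force more state than this

        def maybe_issue_token():
            """Called from the TCP side; refuses rather than grow unbounded."""
            if len(TOKENS) >= MAX_TOKENS:
                return None   # issuing another would hurt us, so we don't
            token = os.urandom(8).hex()
            TOKENS[token] = 0
            return token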

    This is really so ridiculously simple I'm genuinely baffled why you
    haven't gotten it yet.

    DS
     
    David Schwartz, May 4, 2009
    #13
  14. No, I am not saying you actually do so. I am saying you can do so if
    it's necessary to protect the TCP service. So your argument that the
    UDP service will necessarily compromise the TCP service is bogus.

    No, it's not. For example, suppose it reduces average latency from,
    say, 40 ms to 15 ms. That would be a great thing, even if you had to
    disable it during an attack. Why? Because most of the time you're not
    going to be under attack.

    Right, because it's impossible for the application to do that for TCP.
    The replies to the TCP SYNs are generated whether or not the
    application server needs them, resulting in uncontrollable consumption
    of outbound bandwidth.

    They were all done for employers, and none of them are open.

    You seriously think I was talking about resisting denial-of-service
    attacks on secure networks?!

    It does. You don't have to. If it doesn't match an existing state,
    simply drop it.

    Yes, but it retains *no* state about attack packets. It only retains
    state about connections you have chosen to accept, in a context where
    not accepting a connection does not cause a loss of service. So an
    attacker cannot force you to retain state to preserve normal service.

    Right, but an attacker can force you to keep TCP state to retain
    normal service. That cannot happen with this UDP protocol, since you
    can always refuse to retain state and maintain normal service.

    Yes, but it's state only for non-attack traffic. As I said, once you
    believe you are under attack, you stop adding new state.
    No, I am not proposing a new protocol at all. Where did you get the
    idea that I was? Someone else suggested that a new protocol be
    designed; you argued that it was not possible to make such a protocol
    secure.

    It is a burden you foolishly chose. I agree, what you are doing is
    ridiculous. You claimed: "It would, for instance, be much easier to
    bring a server to it's knees if it allowed UDP-based requests. It
    also makes the server much more vulnerable to a denial of service
    attempt." You didn't make these claims in response to any specific
    proposal.

    I already explained that.

    You obviously are not reading what I'm writing, since I never
    suggested matching the port or address.

    Who cares? You *DON'T* compare ports or addresses, *ever*. I never
    suggested doing so. You suggested it, and now you are criticizing it.
    Feel free to argue with yourself if you like.

    If assuming it is valid will hurt you, drop it. Again, legitimate
    clients are not hurt by dropped packets.

    A legitimate client sends one packet, and if it's not replied to, it
    switches to TCP. So dropping a suspicious packet will not hurt a
    legitimate client.

    TCP can use SYN cookies to avoid holding state for SYN flood packets.
    However, it must reply to each such packet to maintain normal
    service. Because a UDP server doesn't have to reply to each packet to
    maintain normal service, an attacker would likely much prefer to
    attack the TCP port rather than the UDP port.

    DS
     
    David Schwartz, May 8, 2009
    #14
  15. Hi everyone! I need a little help with NS2. I am working on a new
    AQM technique which treats TCP packets differently from UDP packets.
    But I don't know how to differentiate TCP from UDP at routers. I am
    using NS2 for the simulations. Your help will be much appreciated and
    acknowledged.
    my ID is
     
    zawarzh, Feb 21, 2015
    #15
  16. Time to turn in homework at Pakistani universities?

    1. There are Wiki pages for Network Simulator 2 (NS2). I assume that
    is what you are using; you should have spelled the name out in full.
    Google for the address of the wiki pages.

    2. Please learn the basics of TCP/IP packets. There are different
    transport-protocol codes in the IP header for TCP (6) and UDP (17).

    3. Please do not continue a thread which has nothing to do with
    your question. Now the message header is misleading.

    4. There is no point in posting your email address. This is a
    public forum, with the responses here, not in mail.
     
    Tauno Voipio, Feb 21, 2015
    #16
  17. Tauno Voipio wrote:
    [lots of text]

    Maybe I'm completely dense, but HTTP _over_ UDP?
    It occurs to me the server would never be able to tell
    when it has 'served' a page unless some ACKs are
    implemented, which in turn is just TCP.

    Enlighten me, someone, on the use of HTTP over UDP.

    [I.e., why not just send some <whatever> data
    over UDP and not care whether it arrives,
    instead of dealing with headers like
    Connection: keep-alive,
    which is impossible anyway, et al.]
     
    Ralph Spitzner, Feb 23, 2015
    #17
  18. That was not the only thing wrong in the message.
    The post did not say anything about HTTP over UDP.

    It seems to me that the OP is in over his head with
    the IP protocol stack -- there is a strong homework
    scent about the whole thing.
     
    Tauno Voipio, Feb 23, 2015
    #18
