A throttle needed for UDP sendto()???

Discussion in 'Linux Networking' started by guuwwe@hotmail.com, Mar 8, 2007.

  1. Guest

    I use UDP to transfer 64K datagrams between two local PCs over a
    100Mbps Ethernet connection. It works fine under normal conditions.
    However, when my sender code calls sendto() at maximum speed, datagrams
    are lost. sendto() does not return any error code. The send buffer is
    set to 50MB, but that makes no difference, because sendto() with large
    64K datagrams blocks the caller until all the bits are sent. The
    receiving PC never sees the lost datagrams; it only sees the datagrams
    that get through.

    Apparently, the datagrams are being dropped somewhere along the
    physical connection. I tried a direct connection with a crossover
    cable, and a connection through a switch, and the datagrams are lost
    in either case.

    When I put a very small delay/sleep between each call to sendto(),
    all the datagrams get through. Does this mean that sendto() requires
    a throttle?
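
    For reference, my sending side is essentially a loop like the sketch
    below (simplified; the address, port, and payload here are
    placeholders, and error handling is trimmed):

        /* Simplified sender sketch - address, port and payload are
         * placeholders, most error handling omitted. */
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            int sndbuf = 50 * 1024 * 1024;      /* the 50MB send buffer */
            setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

            struct sockaddr_in dst;
            memset(&dst, 0, sizeof(dst));
            dst.sin_family = AF_INET;
            dst.sin_port = htons(9000);                       /* placeholder */
            inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr); /* placeholder */

            static char buf[65000];             /* one ~64K datagram */
            for (int i = 0; i < 1000; i++) {
                sendto(s, buf, sizeof(buf), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
                /* usleep(5000); */  /* the small delay that stops the loss */
            }
            close(s);
            return 0;
        }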
     
    , Mar 8, 2007
    #1

  2. UDP is a best effort protocol. Datagrams can always be lost.
    Actually, they're most likely being discarded by the sender. But UDP
    datagrams can be lost anywhere. The sender can opt not to send them,
    the network can lose, reorder, or duplicate them, and the receiver can
    throw them away. That's how UDP works.
    Again, that is the nature of UDP.
    UDP requires the application to do transmit pacing if the traffic is
    very bursty or a very large amount of data needs to be sent. I
    strongly recommend you use TCP if you need its features, rather than
    trying to re-implement TCP yourself.

    DS
     
    David Schwartz, Mar 8, 2007
    #2

  3. Rick Jones Guest

    You need to know that unless you are running on a very rare network
    type (i.e., one with a 64KB+ MTU), those 64K messages you hand to UDP
    will be handed to IP and _fragmented_ into MTU-sized IP datagram
    fragments. With a standard 1500-byte Ethernet MTU, one 64K datagram
    becomes roughly 45 fragments. If _any_ of the fragments of the IP
    datagram carrying the UDP datagram carrying your message is lost, the
    entire datagram is useless - it cannot be reassembled and will be
    dropped.
    You might want to check the link-level stats with ethtool on both
    sides, as well as the UDP and IP stats on the receiver with netstat.
    Take some "snapshots" from before and after your transfer and run them
    through beforeafter:

    ftp://ftp.cup.hp.com/dist/networking/tools/

    That will show you the deltas between the snapshots without your
    having to do the math yourself.
    Not sendto() so much as UDP. (The sendto() call might be used with
    other transport protocols with semantics different from UDP's)

    While Linux provides intra-stack flow control when sending UDP
    datagrams (notice that a netperf UDP_STREAM test doesn't report
    greater than link rate on the sending side), I suspect that gives
    some people a false sense of security and lets them forget that there
    is no end-to-end flow control (and no recovery from datagram loss) in
    UDP. UDP is but a thin veneer on top of IP.

    Not only does your application require a throttle, it requires a
    mechanism to recover from lost datagrams, because even if you throttle
    to a given rate nominally achievable by the link(s) you can still have
    packet loss.
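
    To make that concrete with a little arithmetic: a 64K datagram is
    about 520,000 bits, so at 100 Mbit/s it occupies the wire for a bit
    over 5 ms, and a sender that wants to stay below link rate has to
    space its sendto() calls at least that far apart. A sketch of one way
    an application might pace itself (the target rate is an assumption,
    and this is not something the stack does for you):

        /* Sketch of application-level pacing: space UDP sends so the
         * average rate stays below a target bit rate.  The target rate is
         * an assumption; real code would leave headroom for IP/UDP and
         * Ethernet framing overhead and for scheduling jitter. */
        #include <stddef.h>
        #include <stdint.h>
        #include <sys/socket.h>
        #include <time.h>

        #define TARGET_BITS_PER_SEC 90000000ULL  /* stay under 100 Mbit/s */

        static void paced_sendto(int s, const void *buf, size_t len,
                                 const struct sockaddr *dst, socklen_t dlen)
        {
            /* Wire time for this datagram at the target rate, in ns. */
            uint64_t gap_ns =
                (uint64_t)len * 8ULL * 1000000000ULL / TARGET_BITS_PER_SEC;

            sendto(s, buf, len, 0, dst, dlen);

            struct timespec gap = {
                .tv_sec  = (time_t)(gap_ns / 1000000000ULL),
                .tv_nsec = (long)(gap_ns % 1000000000ULL),
            };
            nanosleep(&gap, NULL); /* crude throttle: sleep out the wire time */
        }

    Sleep granularity and scheduling jitter will push the real rate below
    the target, which for a throttle is the safe direction to err - but as
    said above it still does nothing about the losses that happen anyway.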

    rick jones
     
    Rick Jones, Mar 8, 2007
    #3
  4. Guest

    This is not a question about alternative protocols or about UDP in
    general. It is only about a workaround/solution to a specific problem
    that will exist for any protocol that uses Ethernet.

    Can you expand on this transmit pacing? Is it formalized somewhere?
     
    , Mar 8, 2007
    #4
  5. I don't see any reason to believe that you are in a better position
    than I am to set the scope of the question and discussion. It's quite
    possible that your choice to use UDP was simply a bad one and
    switching to TCP is the optimal workaround/solution.

    If you're interested in solutions for problems that exist for any
    protocol that uses Ethernet, why not just read up on TCP and see how
    it solves these same problems?
    Google "UDP" and "transmit pacing". Basically, if you aren't going to
    take advantage of TCP (which provides loss detection, slow start,
    exponential backoff, and the like) and you need those features, then
    you have to implement them yourself.

    The network stack has no way to know what is happening to the packets
    or whether network links downstream from it are overloaded, so you are
    responsible for all of that. The advantage is that you can send as
    much data as you want, whenever you want. The disadvantage is that the
    network will drop packets if it is overloaded.
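
    As a small example of "implementing it yourself": a cheap first step
    is tagging every datagram with a sequence number so the receiver can
    at least see how much it is losing. The header layout and counters
    below are made up for the sketch, and it assumes in-order delivery,
    which is reasonable back-to-back but not in general:

        /* Sketch: prefix each datagram with a sequence number so the
         * receiver can detect gaps.  Layout and counters are invented for
         * illustration; real code would use htonl()/ntohl() on the field. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        struct dgram_hdr {
            uint32_t seq;   /* sender increments this by one per datagram */
        };

        static uint32_t expected_seq = 0;   /* receiver-side state */
        static uint64_t lost = 0;

        static void on_datagram(const char *buf, size_t len)
        {
            struct dgram_hdr hdr;
            if (len < sizeof(hdr))
                return;                         /* runt datagram; ignore */
            memcpy(&hdr, buf, sizeof(hdr));

            if (hdr.seq != expected_seq) {
                lost += hdr.seq - expected_seq; /* datagrams that never arrived */
                fprintf(stderr, "lost %u datagrams before seq %u (total %llu)\n",
                        hdr.seq - expected_seq, hdr.seq,
                        (unsigned long long)lost);
            }
            expected_seq = hdr.seq + 1;
        }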

    DS
     
    David Schwartz, Mar 9, 2007
    #5
  6. Guest

    Who said anything about *choosing* a protocol? Where in my post was
    there any intent to find a better protocol?

    This is a brute-force approach - which means that when the app is too
    fast, the network layer code will get into a mindless loop of losses
    and retransmissions. I was looking for a throttle that would minimize
    this situation. Nobody asked for a perfect solution here, only a
    solution/workaround of the kind that is *typical* in protocols like
    TCP. Also, I know that I can google, but you should know that you do
    not have to reply if you do not like my post or do not want to reply
    to it *directly*.
     
    , Mar 9, 2007
    #6
  7. Rick Jones Guest

    The network layer is IP. IP doesn't concern itself with losses and
    retransmissions. It leaves that up to other layers.
    The solution that is typical in TCP is to have end-to-end flow
    control, positive ACKnowledgement and window update, and a
    sender-calculated congestion window.

    An application using UDP could rate limit itself. It is indeed not a
    perfect solution (no solution ever really is). The rate the
    application picks needs to be good for everything between it and the
    remote end. Rate limiting alone may be "good enough" for two systems
    connected back-to-back, but once there starts to be more "stuff" in
    between, the prospect of packet loss for reasons other than congestion
    continues to increase, which makes the pacing workaround less and less
    effective.
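
    For the positive-acknowledgement part, the simplest (and slowest)
    scheme an application can bolt onto UDP is stop-and-wait: send one
    datagram, wait for a small ACK, retransmit on timeout. The ACK
    format, timeout and retry count below are assumptions for the sketch;
    real code would match ACKs to sequence numbers and keep more than one
    datagram in flight:

        /* Sketch: stop-and-wait over UDP.  Send one datagram, wait for a
         * one-byte ACK with a timeout, retransmit a few times, then give
         * up.  Timeout, retry count and ACK format are assumptions. */
        #include <stddef.h>
        #include <sys/socket.h>
        #include <sys/time.h>

        static int send_with_ack(int s, const void *buf, size_t len,
                                 const struct sockaddr *dst, socklen_t dlen)
        {
            struct timeval tmo = { .tv_sec = 0, .tv_usec = 200000 }; /* 200ms */
            setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tmo, sizeof(tmo));

            for (int attempt = 0; attempt < 5; attempt++) {
                char ack;
                if (sendto(s, buf, len, 0, dst, dlen) < 0)
                    return -1;
                /* Wait for the receiver to send back a one-byte ACK. */
                if (recv(s, &ack, 1, 0) == 1)
                    return 0;               /* acknowledged */
                /* timed out (or error): fall through and retransmit */
            }
            return -1;                      /* gave up */
        }

    Stop-and-wait caps throughput at one datagram per round trip, which is
    exactly why TCP grew windows and congestion control - but it shows the
    end-to-end piece that pacing alone does not give you.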

    rick jones
     
    Rick Jones, Mar 9, 2007
    #7
