Collecting resources on network design issues.

See AboutThesePages.

Contents

Notes

Resources

See ResourceRecommendations. Feel free to add resources to these lists, but please follow the guidelines on ResourceRecommendations, including ResourceRecommendations#Guidelines_for_Rating_Resources.

Recommended

  • (rhk) [[][]] --

Recommended for Specific Needs

  • (rhk) Why Gnutella Can't Scale. No, Really.; Jordan Ritter; July 2001 -- a mathematical analysis of how much traffic Gnutella generates in response to a single request, and why Gnutella can't scale without causing major network problems. -- I haven't read it carefully enough to decide whether I agree with the analysis, but it wouldn't surprise me if it were true. If it is, what are the workarounds (I know there are some)? Perhaps central servers that maintain lists of resources, so searches go there instead (I know that violates one of the early premises of things like Gnutella -- that everything would be anonymous -- but it's just a first guess at a workaround).
    • (weh) The analysis neglects to take into account the aggregate bandwidth of the Gnutella Net participants. Assuming that each user has even a 256 kbps bidirectional connection (all DSL or better participants will), in the case where n=8 and t=8 there are 7,686,400 participants, and the aggregate bandwidth is 491,801,600,000 bytes per second. This is calculated from 32 kBps (256 kbps) * 2 (bidirectional) * 7,686,400. Considering again only the n=8, t=8 case, the author argues that a whopping 1.2 gigabytes (1,275,942,400 bytes to be exact) is consumed by a request -- the i() function. This represents 0.259% of the aggregate bandwidth for one second. The h() and k() functions, which compute all traffic associated with a request, including replies, consume 6,331,440,000 bytes (6.3 gigabytes) of bandwidth in the n=8, t=8 case. This represents about 1.3% of the aggregate bandwidth for one second. The author also states that 10 requests per second is considered heavy use; multiplying the bandwidth consumed per request by 10 requests per second yields roughly 13% of the aggregate bandwidth for "heavy" usage. I think most Gnutella participants would be willing to contribute that much of their bandwidth. While the author suggests that the amount of data transferred limits scaling, I think the problem stems more from having an unbalanced network, and from routers within the Internet that can't handle the aggregate data stream. It is true that the numbers involved are staggering, but it is also important to realize that one of the points of peer networks is to aggregate bandwidth, not just CPU or storage, or to avoid centralized servers. -- WilliamHertling - 05 Nov 2002
William: Thanks for your comments -- makes sense to me! (A quick sketch reproducing this arithmetic follows.) -- RandyKramer - 08 Nov 2002
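
A minimal sketch (Python; the helper name and variables are just for illustration, not taken from Ritter's paper) that reproduces the arithmetic in the comment above, assuming Ritter's model of a network in which each node keeps n connections and requests are forwarded with a TTL of t: the number of users reachable from one node, the aggregate bandwidth under the assumed 256 kbps bidirectional links, and the share of that bandwidth consumed by the quoted per-request traffic figure.

<verbatim>
# Back-of-the-envelope check of the Gnutella numbers discussed above,
# assuming Ritter's model: n connections per node, requests forwarded with TTL t.

def reachable_users(n, t):
    """Users reachable from one node: n*(n-1)**(i-1) new users at hop i."""
    return sum(n * (n - 1) ** (i - 1) for i in range(1, t + 1))

n, t = 8, 8
users = reachable_users(n, t)               # 7,686,400 for n=8, t=8

link_bytes_per_sec = 256_000 // 8           # 256 kbps ~= 32,000 bytes/s per user
aggregate = users * link_bytes_per_sec * 2  # bidirectional; ~4.9e11 bytes/s

request_traffic = 6_331_440_000             # h()+k() figure quoted above, in bytes
share = request_traffic / aggregate         # ~1.3% of one second's aggregate

print(f"participants:        {users:,}")
print(f"aggregate bandwidth: {aggregate:,} bytes/s")
print(f"one request:         {share:.1%} of the aggregate for one second")
print(f"10 requests/s:       {10 * share:.1%}")
</verbatim>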

  • (rhk) It's the Latency, Stupid; Stuart Cheshire; May 1996 -- explains:
    • why latency is a bigger problem than bandwidth
    • that modems have a latency on the order of 100 ms
    • that, as a result, a share of a lower-latency link (ISDN, Ethernet, ...) is "faster" than an entire higher-latency link (a 56 kbps modem) -- see the rough worked example after this list
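
A rough illustration (Python; the link parameters are assumed round figures, not taken from Cheshire's article) of why latency dominates for small transfers: the total time is roughly one round trip of latency plus the serialization delay, so a 56 kbps share of a low-latency Ethernet segment beats a full 56 kbps modem link with ~100 ms latency.

<verbatim>
# Rough illustration of latency vs. bandwidth for a small transfer.
# The link parameters below are assumed round numbers for illustration.

def transfer_time(payload_bytes, latency_s, bandwidth_bps):
    """One request/response: a round trip of latency plus serialization delay."""
    return 2 * latency_s + payload_bytes * 8 / bandwidth_bps

payload = 512  # bytes -- e.g. a small query or acknowledgement

# Full 56 kbps modem link, ~100 ms one-way latency
modem = transfer_time(payload, 0.100, 56_000)            # ~273 ms

# A 56 kbps *share* of an Ethernet segment, ~1 ms one-way latency
ethernet_share = transfer_time(payload, 0.001, 56_000)   # ~75 ms

print(f"modem:          {modem * 1000:.0f} ms")
print(f"ethernet share: {ethernet_share * 1000:.0f} ms")
</verbatim>

With the bandwidth held equal, the only difference is the latency term, which is why the modem case comes out almost four times slower for a payload this small.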

Recommended by Others

  • (rhk) [[][]] --

No Recommendation

  • (rhk) [[][]] --

Not Recommended

  • (rhk) [[][]] --

Contributors

  • (rhk) RandyKramer - 05 Nov 2002
  • (weh) WilliamHertling - 05 Nov 2002
  • <If you edit this page: add your name here; move this to the next line; and include your comment marker (initials), if you have created one, in parenthesis before your WikiName.>

Page Ratings
