Must Reads

CRTC Closes Net Neutrality Complaint Against Rogers

The CRTC has closed the net neutrality complaint against Rogers, concluding that it is satisfied with the ISP’s response and disclosure practices.

77 Comments

  1. CRTC OK’s BS by incumbent telecom company…
    … Film at 11.

    Seriously, did we expect anything else?

  2. CRTC keeps proving it’s not for the people, and it really either needs to go or needs to be investigated for corruption.

  3. Lost Faith in the CRTC says:

    wow, they are really looking out for Canada, aren’t they? Let a monopoly steamroll over us, why don’t you.

    At any point did the CRTC cause or foster innovation in internet services? No, they did not. The CRTC seems to think that the internet is like cable TV, a consumable product.

  4. Sigh
    It’s like they want to prove that they’re an anti-consumer dinosaur.

  5. Par for the course

  6. Rogers’ response would appear, at first glance, to be in compliance with Telecom Regulatory Policy 2009-657. Whether or not you agree with the policy is a different issue.

    @Lost Faith: There are two ways that they could go about causing or fostering innovation. The one you appear to refer to (my apologies if I am interpreting you incorrectly) is to actively engage with the industry, possibly by way of regulation. The other is to get the devil out of the way and allow the free market to do its thing. In 1999 they chose the latter. Is that decision still valid? That is the question for today.

  7. @Anon-K
    I fear this sets a dangerous precedent for software developers. Rogers’ response seems to suggest that it is the software developers who take responsibility for what is or is not throttled or affected by ITMP, which varies from ISP to ISP. There’s no way this response should be acceptable.

    Furthermore, there needs to be a far more technical explanation involved here, so that software developers can properly develop apps around ITMP to ensure they are not affected by this technology, if that’s the card Rogers wants to play here. Simply blaming the software doesn’t seem to be all that believable. I’ve done a lot of tests on my Rogers lines with respect to detecting ITMP on upload speeds. The tests came back inconclusive due to interference on the lines with respect to download streams when nothing else was running on those lines. Upload throttling was detected, though. A lot more seems to be going on here.

    But all of this can be traced to one major factor: the lack of digital policy in Canada, and a CRTC that clearly has problems understanding technical information. If I were working on the file for the CRTC here, I would have asked for a much more detailed explanation.

    The gaming industry uses P2P technology for its multiplayer platforms. How does this affect a multi-billion-dollar industry? Are software developers now also at the whim of the large telcos?

  8. Larry Searle says:

    The CRTC is nothing but a committee of people who are in principle an extension of the large telecom companies. They tell us what we must watch, and which telecom companies we have to get gouged by to watch these shows. Then they allow these companies to restrict our internet access so we cannot get our content elsewhere. Sounds more like the Soviet Union than a free country like Canada. We must be allowed to watch what we want, when we want. Bring in foreign competition.


  9. @Jason: “The gaming industry uses P2P technology for its multiplayer platforms. How does this affect a multi-billion-dollar industry? Are software developers now also at the whim of the large telcos?”

    Interesting question. However, let’s take a look at the developers too, as they’re not entirely blameless either.

    Years ago, multiplayer games were run on dedicated servers to which players connected with their machines running the “client” side of the game. It was rather easy to identify the network traffic, as all the gamers’ connections were made to a pool of known IP addresses (the game servers). So in theory an ISP could set a rule: “pass UDP traffic to GameCompany’s pool of servers unthrottled”.

    But then the game publishers became greedy and wanted to avoid paying for those servers and the related bandwidth. Since PCs and consoles had become more powerful, you could run the “server” part of the game on one of the players’ machines.

    Try the latest Call of Duty, “Black Ops”, on a PS3 to see how that works. It pisses everybody off, since the player hosting the game may disconnect mid-game: the game pauses, then another player becomes the host, so there’s now more traffic to his machine, so he starts to lag, so he disconnects, and the cycle repeats. So in the end you may get the game interrupted several times and everyone grumbling. What for? More money for the developer.

    Now, from the ISP’s point of view, how can you ensure that the traffic for such a game doesn’t get throttled along with BitTorrent? There’s not much you can do, since the traffic is now between random customers, looking pretty much like P2P.

    Nap.

  10. P2P
    And now I’ll give everyone here a reason to flame me to no end. I’ll just state that right now P2P as a distribution medium sucks big time and should be either improved or forgotten.

    The no. 1 reason is that the signal-to-noise ratio is extremely low. A couple of weeks ago I read a study that found that about 1/3 of the content is plainly misleading, 1/3 has what it says but also contains trojans/viruses, and only 1/3 is clean. Of that last third, more than 90% is “pirate” material.

    So we’re pushing all these terabytes of crap through the lines for just some 3% legit, useful purpose.

    The no. 2 reason is that this kind of distribution lends itself to “fire and forget”, round-the-clock bulk loading. As in: “let’s put all these files in the queue, then next Sunday we’ll check which of them are any good”.

    Now someone please let me know how this is “modern” “digital economy”.

    Nap.

  11. CBC
    @Mr. Moore:

    Dear Mr. Moore. How much “digital” CBC could the youngsters watch or listen to with a monthly 2GB cap and some throttling on top of it?

    OTOH maybe we should note that the youngsters are attracted by *interactive* stuff, be it digital or not. Watching TV is not that interactive and would please just us the older couch potatoes, while the kids are playing “Angry Birds”, hacking their computer or playing street hockey.

    So the better question would be: how can CBC create some interactive activities for the young?

    Nap.

  12. re: P2P
    Uh, Napalm, you’re confusing the delivery with the message. P2P works wonderfully, hence its being used so damn much. As for the amount of crap being sent over it, well, that’s no different from the internet as a whole.

    And as far as it being “modern” and a “digital economy”: it’s not an economy because the legitimate producers/distributors have not bothered to figure out how to use P2P to their advantage. That’s why it’s still more convenient for most people to use it the illegitimate way. Services like Netflix, while not being P2P, are changing the digital economy in terms of distribution. As for it being modern… well, it beats driving to the local video store to rent or buy a disc.


  13. Jaysin: “As for it being modern… well it beats driving to the local video store to rent/buy a disc. ”

    Netflix, Playstation Store, Cable On Demand do exactly the same thing, with instant delivery and no nasty surprises.

    So tell me what exactly do you need to publish/distribute that only P2P will do?

    You have an interesting home video to share? YouTube works
    Photos? Flickr for sure
    Software? SourceForge does great
    Text? Blogger will do

    Anything else? Let me know, maybe I’m missing something.

    Nap.

  14. Re: P2P
    I work for a hypothetical Netflix competitor. If we want to use P2P or BT as a technology for content delivery, in order to save massive amounts of bandwidth, we’re screwed, because ISPs throttle these methods down to unacceptable levels of performance, thereby shutting us out of being able to compete.

    Re: Napalm

    97% of email is spam. Only 3% of it serves some legit useful purpose. Maybe we should just forget about it too.

  15. @Nap
    I’ve actually exclusively reported on the stuff that’s happening with Black Ops on the PS3. I broke a story on that last month.

    http://bit.ly/gy17bb

    The issue with BO was the fact it was released in beta. There were a huge number of problems with the connectivity of the game, most likely because beta testing was openly admitted to have been skipped on both the PS3 and PC platforms. It’s a different issue, but one that software developers can exploit by releasing unfinished product: they can pretty much blame ITMP for their own misconduct.

    That being said, ITMP can also be a contributing factor in the game’s overall performance. This was something I studied when I was doing my investigation of the Black Ops issue, but the game had far too many problems due to the skipped beta testing for me to come to any solid conclusion on just how much ITMP was affecting gameplay. I’m convinced it does have an impact, but that shouldn’t be enough to let Activision and Treyarch off the hook for putting out a product that was in beta!

    With respect to this current case with the CRTC, there are a tremendous number of unanswered questions about Rogers’ response. As CNOC has stated: “If the Commission does not ask the right questions, it cannot possibly hope to get the right answers.”

    I’ve filed a request to intervene in this case, and also forwarded it to CIPPIC. This issue is far from closed, trust me. I’ve posted my request to intervene in this case here:

    http://bit.ly/hwyGOY

    It just so happens that I did do some tests on my own connection when I was investigating the Black Ops issues. Sucks to be Rogers! We also need qualified people from the IT field in the higher ranks at the CRTC. It’s not just telecom issues that come before it now. An understanding of IT should be a requirement for those in high positions at the CRTC. This has to be fixed before the CRTC can start making proper decisions with respect to our digital economy.

  16. @Nap
    Another interesting angle on Black Ops is the fact that I e-mailed Tony Clement, at his request, about the issues this game was having. If you look at the conversation I posted a link to in my first blog post, I was very concerned about this issue. Clement refused to get involved. Currently the UK government is investigating any legal issues that might have been involved in releasing a game in beta.

    As for digital policy surrounding ITMP, should we not then have digital policy forbidding the use of P2P protocols in multiplayer platforms and/or retail software, and instead insist that developers set up dedicated servers to serve consumers because of ITMP? That’s something that’s both costly and unnecessary for software developers.

    ISPs should be required to post exactly which apps are affected by ITMP in their FAQs, IMO. It can affect more than just downloading some movie off the P2P networks. More conversation with the CRTC is needed, along with input from the software development community.

  17. @Jaysin
    “And as far as it being “modern” and “digital economy”. It’s not an economy because the legitimate producers/distributors have not bothered to figure out how to use P2P to their advantage.”

    That’s actually not true. There are several producers/distributors that do take advantage of the P2P networks. As both Nap and I have been saying, even the gaming community is using P2P protocols to keep down the costs of running multiplayer platforms, and right now that’s at risk due to our digital policies.

    File sharing is not public anymore; it has moved to darknets and exclusive communities, which was predicted close to 5 years ago when this whole copyright deal hit Canada. File sharing will continue to exist, but because of the industry’s response to it, it becomes harder to track and shut down. At least with public networks they could track the number of downloads a production got. That’s not the case anymore, and billions if not trillions in advertising revenue have been lost due to the reluctance of industry as a whole to accept P2P as a distribution channel.

    We can’t even move to legal streaming of multimedia in Canada due to usage-based billing policies. I gave up being an innovator in new media in Canada myself, because it’s not a viable market here. Too much BS and not enough direction and leadership on digital policy to develop plans that ensure a return on investment, and it’s in large part due to the major ISPs screwing with consumers’ connections and the CRTC dictating policy, not the government. It’s simply not worth investing in Canadian new media right now until this country has its act together, and because of this uncertainty I can’t attract investors either!

  18. gaming
    @Napalm:

    I thought P2P has always been the norm for console games? I’m not a big console gamer, but I always figured they just provided matchmaking and not any actual game servers, since no game company has ever really done that without a monthly fee (see MMORPGs).

    On PC games, the dedicated servers would be operated by fans of the game or clans or gaming media or whoever, not the developers. It gives gamers a lot more control to be able to maintain their own servers, but it doesn’t really work with consoles since you have a generally less tech-savvy crowd, no PC client to attract people that might have PC game servers, and insufficient control to ensure a consistent experience.

  19. Canadian Consumer says:

    Some of you guys don’t understand P2P
    P2P is now successfully used all over Europe by media providers (mainly networks that want to provide shows to their consumers). CNN uses it to stream its channel on the net. This technology has relevance because it allows users to pool the bandwidth they pay for to cooperatively facilitate the stream. The technology is too advanced to go away simply because some use it for piracy.

    P2P is a fundamental concept to the internet. All it means is that there is no centralized server. So if you want to make a 3 way call, you do not need to have an intermediary. DESTROYING this principle cripples a crucial communication principle.

    (I know they want to paint P2P as the big bad wolf. It means “peer to peer”; scratch your head and realize that this IS THE INTERNET. It is the foundation of it. It’s like them telling you that you can’t gather in a group. They are crossing the line with your liberties, as always. Destroy P2P and you destroy your freedom on the internet. This is not about piracy but about control. Kind of like saying they need to read your mind because they may be worried about you being a child molester.)
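The “no centralized server” idea Canadian Consumer describes can be shown in miniature: two peers exchange data directly over TCP, with no third party relaying the bytes. A sketch (loopback addresses and the payload are stand-ins for real peers):

```python
import queue
import socket
import threading

q: "queue.Queue[int]" = queue.Queue()
received: list = []

def peer_listen() -> None:
    # One peer listens; no intermediary server is involved.
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))       # OS picks a free port
        srv.listen(1)
        q.put(srv.getsockname()[1])      # tell the other peer where we are
        conn, _ = srv.accept()
        with conn:
            received.append(conn.recv(1024))

t = threading.Thread(target=peer_listen)
t.start()
# The second peer connects straight to the first and sends data.
with socket.create_connection(("127.0.0.1", q.get())) as c:
    c.sendall(b"hello, peer")
t.join()
print(received[0].decode())  # hello, peer
```

Whether the bytes are a 3-way call, a game session, or a file chunk, the transport pattern is the same: peer to peer, no middleman.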


  20. @Robert: “Re: P2P. I work for a hypothetical Netflix competitor. If we want to use P2P or BT as a technology for content delivery, in order to save massive amounts of bandwidth, we’re screwed because ISPs throttle these methods down to unacceptable levels of performance, thereby shutting us out of being able to compete.”

    So you want to save a buck and pass the “massive bandwidth” along at my expense? Is this the gist of it? Thank you, but I’m not impressed.

    Nap.


  21. Jason: “I’m convinced it does have an impact, but that shouldn’t be enough to let Activision and Treyarch off the hook for putting out a product that was in beta!”

    I’ve been a hardcore player of all the PS3 versions since the original MW. With each iteration the network play was degraded. I’m not sure I’ll buy the next one anymore.

    “An understanding of IT should be a requirement for those in high positions at the CRTC. This has to be fixed before the CRTC can start making proper decisions with respect our digital economy. ”

    The proper thing would be to disband the CRTC. These “independent” agencies are not accountable. We should bring back the Minister of Communications, so that when things go bad we can straighten him out at election time.

    Nap.


  22. @Jason: “As for digital policy surrounding ITMP, should we not then have digital policy forbidding the use of P2P protocols in multiplayer platforms and/or retail software, and instead insist that developers set up dedicated servers because of ITMP? That’s something that’s both costly and unnecessary for software developers.”

    We shouldn’t forbid anything. Let them do it if they’re cheapskates.

    But exactly as we want the ISPs to disclose their policies, we should have the developer of a software product clearly disclose the network technologies it uses on the game box. So when I go to the store I can make an informed choice and purchase. I don’t like software that does unknown stuff on the network on my behalf. Otherwise, eventually we could have them include a hidden pr0n/warez server in the game for “additional revenue streams”. Or some other stuff like that.

    So my take is that both developers and ISPs should come forward and tell us what they really do. Yes, it’s their choice what they want to do. But it’s my money, so I want to know what it is that I paid for.

    Nap.


  23. @Robert: “Re: Napalm 97% of email is spam. Only 3% of it serves some legit useful purpose. Maybe we should just forget about it too.”

    There are laws about spam now, and you get fined for doing it.

    http://a11news.com/2504/spam-king-fined-711-million/

    Nap.

  24. ….
    @Jason:

    I just went through your article at:

    http://bit.ly/hwyGOY

    What really concerns me is not the P2P issue, but the death of point-to-point stuff like personal web, FTP and e-mail servers.

    Before 2000 I used to run my own mail server and had a private SFTP server with family photos and stuff. My relatives could download them from there, without my having to upload them to a third-party server with shady privacy practices. This was the internet as it was designed to be.

    This is all dead now, for various reasons, including the filtering of SMTP ports and the widespread use of NAT firewalls/routers.

    This is what we should really be concerned about.

    Nap.


  25. @Canadian Consumer: “P2P is a fundamental concept to the internet. All it means is that there is no centralized server. So if you want to make a 3 way call, you do not need to have an intermediary. DESTROYING this principle cripples a crucial communication principle. ”

    See my reply to Jason above.

    P2P has the fundamental flaw of being open to any anonymous party that wants to join. This makes pretty much everything public both in upload and download. That’s why you get only 3% legit content on it.

    I personally want to go back to the internet as it started, where I could set up my own private server at home.

    Nap.

  26. Canadian Consumer says:

    You missed the point.
    “So you want to save a buck and pass the “massive bandwidth” along at my expense? Is this the gist of it? Thank you, but I’m not impressed.”

    No one in P2P is forced to contribute their bandwidth without permission. If you want to stream CNN, you can watch it, but you need to pass on some bits and bytes to other users. It’s part of fair use.

    How can you want content, but be against using your own bandwidth to receive it? In a P2P model the typical user shares 1:1. The costs saved are passed on to the consumer. This is why a lot of content providers on P2P do not charge for it.

    So you can cry about your bandwidth, but without it you will not get anything. P2P is an unfairly slagged concept that is fundamental to the network.

  27. Canadian Consumer says:

    Damned if you do and damned if you don’t…
    Someone has to pay for your bits and bytes. You can ask for massive servers to be installed that serve billions, but Bell will still complain that it is problematic for them as well (Netflix). So how can anyone win at serving bits and bytes?

    You can’t use your bandwidth for P2P, and you can’t set up independent services… hmmm, is there anything we can do other than pay our ISP’s bill on time?

  28. Canadian Consumer says:

    one more time…
    In P2P, the way you pay for your download is by agreeing to let another user download directly from you. There is nothing wrong with this concept. You pay for your internet and your traffic; you in turn use some of that resource to serve another.

    What is concerning is government stepping in and telling you that you can’t do this! Nothing illegal is being done in the exchange above. I did not mention which files were being transferred; in fact, it’s none of your business.

    This is similar to saying we no longer want people meeting in cafes because organized crime meets in cafes and they are a source of the criminal element. The concept of sharing bits and bytes directly CANNOT BE DISALLOWED. This is what the internet is based on. THIS IS A VERY IMPORTANT POINT. (Sorry about the caps.)

    If we lose the understanding of the above, then we lose the internet. P2P is your freedom to meet on the net with anyone you want. They have laws for illegal activity; they can use those. To stop the meetings is lunacy.

  29. P2P
    Consumer/Napalm: I enjoy both of your points.

    Consumer, I think you might be missing (or blurring) that a 1:1 sharing ratio on P2P content actually doubles the bandwidth you consume, filling upstream as well as downstream. I don’t think it fits universally within a ‘fair use’ doctrine.

    Nap, I think you might be confusing P2P distribution models with P2P content providing. I can easily concoct a P2P distribution mesh in which I am the only original content provider (short of hacked clients, of course).

    Ultimately I believe that consumer costs should reflect producer costs. I am against the concept of demand pricing–it seems like the very thing that market competition eliminates in the healthy case. A P2P movie site is going to be cheaper to run, and therefore should be cheaper to buy movies on. Part of my EULA would say that I agree to match bandwidth 1:1 or something. There would probably be in general a reduced degree of reliable throughput, the occasional poisoned stream due to hostile hacked peers (easily filtered but reducing overall QoS), and I would need to be able to support the additional bandwidth requirements.

    As a consumer then, I could choose between Netflix and the hypothetical P2PMovie. Netflix might cost $7, while P2PMovie might cost $5. I could use Netflix with minimal upload bandwidth and half the overall bandwidth consumption for the same quality of movie, so for me, the choice might depend in part on what plans my ISP offered (UBB, bandwidth caps and all that).

    As a software developer, the choice between P2P and central content delivery hinges on infrastructure costs, replication costs, and so forth. The P2P model carries some advantages and some disadvantages; the extra hosting weight of central models partially offsets the reliability and technical challenges of the P2P model.

    Hell, as an ISP I can see good reasons why a P2P model might fit my needs better in some cases. Any content replication that happens behind my equipment consumes no pipe at that level; if my last mile tree is strong and my upstream is weak, P2P applications can be lighter weight for me than centralized distribution apps. If my last mile sucks then I would rather see CDNs.
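Mark’s point that 1:1 sharing doubles consumption is easy to put in numbers. The cap and file size below are hypothetical, chosen only to illustrate the accounting:

```python
# Hypothetical numbers: a 60 GB monthly cap and 2 GB per movie. With a 1:1
# upload ratio, every gigabyte watched also costs a gigabyte of upload, so
# the same cap buys half as many movies.

CAP_GB = 60
MOVIE_GB = 2.0

def movies_under_cap(cap_gb: float, movie_gb: float, p2p: bool) -> int:
    """How many movies fit under the cap, counting upload for 1:1 P2P."""
    per_movie = movie_gb * (2 if p2p else 1)   # P2P adds the matched upload
    return int(cap_gb // per_movie)

print(movies_under_cap(CAP_GB, MOVIE_GB, p2p=False))  # 30 via plain download
print(movies_under_cap(CAP_GB, MOVIE_GB, p2p=True))   # 15 via 1:1 P2P
```

Under usage-based billing, that halving is exactly the trade-off a consumer would weigh between the hypothetical Netflix and P2PMovie services above.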

  30. Canadian Consumer says:

    @Mark
    Yes, a 1:1 reatio would mean doubling your usage. Not a big price to pay in the over all scheme of things. For one byte in you agree to send one byte out.

    I don’t know about you guys but I would rather pay in bits and bytes than in physical cash, I hardly ever max my internet connection.

    What worries me, is that ISP’s will limit who we can call our “peers”, this to me is unacceptable. It’s like banning video, because a lot of it is used to stream sexual material. We need to examine the spirit of the law. Once I have my internet, I should be able to peer with who ever I please.

  31. @Canadian
    I agree from a moral perspective that certain limitations in internet activity will constitute critical barriers to freedom. On this list I include blocking or banning encryption, significant outgoing port- or application-level filters, banning particular forms or styles of media, and any other activity that has the same effect de facto (for example, throttling the upband so severely that running services becomes intractable).

    I believe the government should make it mandatory for ISPs to provide (for example) an affordable option for significant upband, under circumstances when competitive forces are insufficient to provide for it. As it stands in my area there is no option that I am remotely able to afford that would provide me with more than 1 Mbps up. That means I can’t host streaming video at all, although I can host streaming audio on a good day. Is it part of my right or freedom to be able to host streaming video? Not intrinsically, but I do think that it’s my right to be able to make use of technology that is already deployed and available for use, at a fair price.

    Now, I would argue that there is a deeper level at which it is my right not to have my capabilities unfairly limited. For example I don’t think I should be charged for the right to run an SMTP service of some variety at home, so long as my doing so doesn’t interfere with my ISP’s ability to run their business. At the moment incoming port 25 is blocked, though 443 is open. I can’t decide if I feel this is an acceptable level of control exerted by the ISP–I know their reasons for doing so and I generally think it’s a good idea, although with my 128k/s I’m certainly not going to be funneling a lot of spam into the wild.
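The port-25-versus-443 observation above is easy to check empirically. A minimal probe sketch (run it against your own server from an outside connection; the hostname in the trailing comment is a placeholder):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. compare port_reachable("myhomeserver.example.net", 25)
#      with port_reachable("myhomeserver.example.net", 443)
```

A refused or timed-out connection can mean the ISP’s filter, a local firewall, or simply no service listening, so a False result is a hint about blocking, not proof.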

  32. @Jason K
    I’ve been in the software development business for about 20 years now, specializing in control systems and network communications systems (this is just to put my response here in context). My take on what Rogers indicated to the CRTC is that they have admitted that, while what they said was technically correct, its effects may make it appear to the normal user that the downlink is throttled. Here is why:

    For protocols which depend on two-way communication between the peers, throughput is a function of both the uplink and the downlink. If the uplink gets throttled, then there is a distinct possibility that the downlink will be throttled too. For instance, a standard TCP header carries acknowledgements of previous packets; because of flow control, the sender must wait for acknowledgements before it can move too far ahead. Where the acknowledgements are throttled, this has the same impact as throttling the downlink.

    Now, what does this mean? It means that Rogers is in fact correct in that they only throttle the uplink; they can claim this, with a straight face, to the CRTC and to the general public and not be making any false claims that they could be hit with in court. However, the impact is that they can use this to effectively throttle the downlink without actively interfering with the downlink.
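Anon-K’s ACK-clocking point can be put in back-of-envelope numbers. The constants below are textbook TCP defaults, not measured Rogers values: if an ITMP box rate-limits a flow’s upstream ACKs, the downstream transfer slows down with them, even though no downstream packet is ever touched.

```python
MSS = 1460            # typical TCP maximum segment size, bytes
ACK_SIZE = 40         # minimal TCP/IP ACK packet, bytes
SEGMENTS_PER_ACK = 2  # delayed ACKs: one ACK per two full segments

def max_download_bps(uplink_budget_bps: float) -> float:
    """Download ceiling implied by an upstream budget spent only on ACKs."""
    acks_per_sec = uplink_budget_bps / (ACK_SIZE * 8)
    return acks_per_sec * SEGMENTS_PER_ACK * MSS * 8

for budget_bps in (700_000, 30_000):
    ceiling_mbps = max_download_bps(budget_bps) / 1e6
    print(f"{budget_bps / 1000:>5.0f} kbps of upstream ACKs -> "
          f"~{ceiling_mbps:.1f} Mbps downlink ceiling")
```

With a full 700 kbps uplink free for ACKs, the implied ceiling is far above a 6 Mbps line; squeeze the flow’s upstream share to 30 kbps and the same arithmetic caps the downlink near 2 Mbps. “We only throttle upload”, with downloads slowing anyway.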

  33. @Anon-K
    “For protocols which depend on two-way communication between the peers, throughput is a function of both the uplink and the downlink. If the uplink gets throttled, then there is a distinct possibility that the downlink will be throttled too.”

    The tests I ran prior to Rogers’ upgrade of their ITMP suggest that while the uplink was throttled, there was no huge effect on downlink speeds. That had virtually been the case since Rogers started to throttle the uplink. It changed approximately 6 months ago, when Rogers upgraded their ITMP software. Just as it is possible for a throttled uplink to affect downlink speeds, it is also possible to throttle the uplink without any massive drop in downlink speeds. This is something that thousands of Rogers customers can say, and prove, with a straight face as well.

    “However, the impact is that they can use this to effectively throttle the downlink without actively interfering with the downlink.”

    To that I have no doubt, which is why this needs to be challenged with the CRTC. A clearer answer from Rogers IMO is needed.

  34. ….
    @Canadian: “How can you want content, but be against using your own bandwidth to receive it? In a P2P model the typical user shares 1:1. The costs saved are passed on to the consumer.”

    Not really.

    1. P2P peddlers usually fail to disclose what exactly their software does and what this ratio will actually be (it’s not always 1:1).

    2. Residential internet over DSL or cable is asymmetric, and this is a technological limit of the equipment, independent of the ISP’s or the consumer’s will. My DSL modem can do 6 Mbps download and 0.7 Mbps upload on my existing copper line. So how exactly is it efficient to put such a big load on the upload channel? If the 1:1 ratio you mentioned were true, then I would be using my link at 0.7 down / 0.7 up. On a client/server model I could download at the full 6 Mbps and have the upload channel only lightly used (for ACK packets). Which model seems more efficient to you?

    3. The cost savings are *not* passed to the consumer. He gets upsold to more expensive plans/technologies that promise more upload speed, although in a client/server model his download speed would be plentiful and he wouldn’t need any upgrade. Also, bandwidth is more expensive at the consumer end than in the data center. So instead of buying some colocated servers and some bandwidth at wholesale prices, these cheapskates want me to buy it at retail prices. Thank you, I’m not impressed.

    So how is P2P an efficient use of current residential technology? As long as it’s asymmetric, it looks kinda stupid to me to try to force delivery through the slowest channel.

    And all this has nothing to do with content. We’re talking technology only.

    And if we also bring in the questionable 97% of content that you have to peddle through your link in order to get access to the 3% useful portion, the picture is rather grim.

    Nap.
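Nap’s asymmetry argument can be illustrated with the speeds from his comment (6 Mbps down, 0.7 Mbps up) and a made-up 700 MB file; the swarm model is deliberately simplistic, assuming identical DSL peers sharing strictly 1:1.

```python
FILE_MB = 700
DOWN_MBPS, UP_MBPS = 6.0, 0.7

def transfer_hours(file_mb: float, rate_mbps: float) -> float:
    """Hours to move file_mb megabytes at rate_mbps megabits per second."""
    return (file_mb * 8) / rate_mbps / 3600

# Client/server: the bottleneck is each customer's 6 Mbps downlink.
server_h = transfer_hours(FILE_MB, DOWN_MBPS)

# Symmetric 1:1 swarm: in steady state, data spreads only as fast as peers
# can upload it, so the effective per-peer rate collapses toward 0.7 Mbps.
swarm_h = transfer_hours(FILE_MB, UP_MBPS)

print(f"client/server: {server_h:.2f} h, 1:1 DSL swarm: {swarm_h:.2f} h")
```

Under these assumptions the asymmetric link makes the swarm roughly 8.5x slower per peer, which is the “force delivery through the slowest channel” complaint in concrete terms. Seeders with fat uplinks or hybrid peer-assisted CDNs change the picture, but they are outside Nap’s pure-residential scenario.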

  35. Canadian Consumer says:

    @Nap
    1) The burden of the software you are running is still on you. The torrent software I use allows me to see how many bytes I upload for any given number of bytes downloaded. I can change it, stop it, or even avoid using it if it’s dishonest.

    Stopping P2P simply because some choose to be deceptive is like stopping software development because some write viruses.

    2) There is a reason that your connection is set up in such a way: it keeps you from serving others. This ensures that you will be a consumer of bits and bytes, not a provider. The upload speed sucks even for me, but considering most of the things I download are ISO images for Linux and other goodies, it’s not a big deal. However, I agree, your connection is rigged so that you can only consume.

    3) Yes, that is how the savings work. You as a consumer are responsible for the pipes you have in your house, and hopefully you are allowed to use them. In fact, the internet has always worked this way. In the ’90s it grew because of porn and music. That may not be your pursuit, but it was the reason a lot of average Joes were willing to pay 50 bucks a month to Rogers or Bell high-speed. The content drove users to want faster and larger bandwidth. If it were just about email, we would still be on dialup.

    To suddenly double-charge the user for both the monthly connection he is willing to pay for and the content that comes down the pipes is a mirage that the large companies are trying to foist on you.

    On the net, you vote for which sites get bandwidth with your traffic. That is how the net works. YouTube and Facebook grew big because people USED those sites. At no point did ISPs say, “wait a sec, we want money for your traffic”. This is a new scam. The traffic on their networks is not caused by Netflix, but by the users WHO REQUEST Netflix content. (The content itself is suitable for those who claim to have a high-speed connection.)

  36. Canadian Consumer says:

    P2P has become a target to take away MUCH BIGGER RIGHTS
    BTW guys, don’t fall into the dangerous trap of hating P2P, these guys are using the piracy issue on P2P to strip you of your rights.

    YOU should be allowed to connect to ANY PEER you want at reasonable speeds that are paid for as long as both peers have the speed.

    …they are using this issue to trick you into being controlled, so they can tell you who you can connect with and who you cannot. Don’t fall for it. I am not asking anyone to like piracy, or torrents that transfer copyrighted material. I agree the net has its problems in this regard. However, we need to be careful here, because what they are asking of us is deceitful.

    The same rights you have to privacy and security and the ability to invite any friend you want into your house have to also be fought for on the net. Please look at the bigger picture.

  37. ….
    @Canadian: “Stopping P2P simply becasue some choose to be deceptive, is like stopping software development because some write viruses. ”

    I’m not proposing to stop it. People should be free to do whatever they want. However they should also be well informed when making their decisions. What I’m proposing is that those incorporating this technology in their products explain it clearly on the box. So that I don’t find out later and the hard way that all the pr0n and music downloaded by my neighborhood passes through my house.

    I’ve seen so many arguments about it. The proponents tend to get very vocal about “freedom”, “democracy”, “human rights” and so on, like these are built into a certain network protocol and lacking from all the others. Let’s not go there again. Let’s keep it technical, and from this POV it doesn’t have many merits.

    Nap.

  38. Canadian Consumer says:

    @Nap
    The porn would only pass through your house if it was porn that you downloaded. You only share files that you yourself have determined are appropriate content for your harddrive. You can also quit sharing them anytime you like. In fact you don’t even need to run the P2P client until you need it.

    We can’t let the government save us from ourselves. The more we act like we are powerless and outraged, the more they will treat us like children. We need to take responsibility for our own actions. Like I keep saying, this is a divide and conquer issue. As in, divide and conquer your rights.

    Kind of like terrorism is used to justify less and less freedom.
    K


  39. Canadian, from my POV you’re free to do whatever you want. I’m not proposing to censor anything. If you like P2P and it works for you then by all means use it.

    All I want is a little blurb on the game package (or other software) stating that it uses P2P networking. Because my experience is that it will work poorly for me and I’d like to be able to avoid purchasing such in the future.

    Nap.

  40. RE: Napalm
    Then it was probably a mediocre implementation.

    BZFlag P2P works just fine.

    Furthermore Nap, you are forgetting the new DNS system that is being developed on P2P, because that is the only way thus far to prevent censorship by ICANN and the like.

    BTW Nap, when you mention viruses and the like via P2P, you neglect to mention that this is largely a Windows-only issue, GNU/Linux simply isn’t as exploitable.


  41. @Eric: “BTW Nap, when you mention viruses and the like via P2P, you neglect to mention that this is largely a Windows-only issue, GNU/Linux simply isn’t as exploitable.”

    It is, however:

    – it’s not the target of choice for malware writers
    – most software that you would fancy to install is freely available from places like sourceforge; there’s no incentive to download it from shady places (where you might get it with an unwanted companion)

    Nap.

  42. Whew.. What a conversation.. Good discussion and good thoughts, and (mostly) good information.

    @Nap
    The “economics” of P2P. Peer to peer, peer to a Million other people.

    Start with that enormous data project you just finished gathering together, and want to share with the world; say it’s 1GB in size. It takes a LONG time to upload it at .7Mb/sec to one person, and then the next, and the next, and you have only “distributed” it to 3 other people. But now let’s use P2P, with the *same* connection. You set it up and 3 other people start “downloading”, each getting a different piece of the file. They start sharing those pieces between themselves as well as getting a piece from you. After you have “uploaded” it once, there are now 3 people that have all of it. Good “economics” from your point of view, and all it “cost” the other 3 people was 66% of the “file” also being uploaded at the same time it was being downloaded. But the interesting thing is that those 4 effective “.7Mb upload streams” have been multiplied into a 2.1Mb/sec stream on the “download” side of each connection – except your own initiating source. If a 4th system joins and gets the file, it can get 33% from each of the original 3, and have 100% of it available for even more systems. Multiply by 10, or hundreds, or thousands of “P2P participants” – the “swarm”. P2P is “designed” to get the most from the disparity between download and upload speeds. In exchange for “uploading” pieces from your system while you are downloading, you get it many times faster. That’s a fair exchange in many minds. If everyone operates “fairly” (1 to 1 ratio), they will each have uploaded exactly as much as they downloaded (except you, and you only upload it once), yet there are now 10, or 100, or a million “copies” on systems everywhere. 1 to 1 “network economics”, giving an effective multiplier as large as you wish.

    Take a read of Bram Cohen’s original analysis sometime.

    If you start throwing in network path analysis between peers, you start to see where lots of the “sharing” might occur between you and someone on the same ISP, or even on the same ISP CO segment. Places where “congestion” is not a problem (at least we are told).

    It’s not one to many. It’s one to a few, a few to more, more to many, many to many more. Boiled down, P2P is simply the most cost effective way for a small company or single user to distribute files and data across the internet. Users trade off fast download for trickle upload over time. It scales so well, that even large companies find it attractive. They simply cannot get that much aggregate “bandwidth” on a centralized server farm, even if they were willing to bankrupt themselves in the process. That’s why games use it, and Linux distros, and many more. And of course technologically adept “pirates” also recognize this, and use it.

    So think about that when looking at your next game as well. Would you be willing to buy it at 10 times the price if updates came from a server farm? At much slower speeds than what you could achieve through P2P distribution? Your choice.
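    The swarm arithmetic described above can be sketched with a quick calculation. This is a back-of-envelope model only, using the same illustrative numbers as the comment (1GB file, .7Mb/sec upload, 3 peers); real swarms have protocol overhead and uneven peer speeds.

    ```python
    # Back-of-envelope sketch of the swarm arithmetic in the comment above.
    # All numbers are the comment's illustrative assumptions, not measurements.

    FILE_GB = 1        # size of the shared file
    UPLOAD_MBPS = 0.7  # each peer's upload speed (Mbit/s)
    PEERS = 3          # peers downloading simultaneously

    file_bits = FILE_GB * 8e9

    # Serial uploads: send the whole file to each peer, one after another.
    serial_hours = PEERS * file_bits / (UPLOAD_MBPS * 1e6) / 3600

    # Swarm: the seeder uploads each piece only once; the 3 peers re-share
    # pieces among themselves, so each peer can download at ~3 x 0.7 = 2.1 Mbit/s.
    swarm_download_mbps = PEERS * UPLOAD_MBPS
    swarm_hours = file_bits / (swarm_download_mbps * 1e6) / 3600

    print(f"serial: {serial_hours:.1f} h, swarm: {swarm_hours:.1f} h")
    # serial: 9.5 h, swarm: 1.1 h
    ```

    Same upload pipe, roughly a 9x difference in delivery time once the peers start trading pieces with each other.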

  43. @Mark
    You make the point that:
    …”a 1:1 sharing ratio on P2P content actually doubles the bandwidth you consume”

    Which is correct from an individual “consumer” perspective. It is quite a different picture from a network perspective. There is absolutely no more overall “traffic” if a file is distributed to 1000 people through a server farm, or through a P2P network. For every byte downloaded, there is one byte uploaded – somewhere. It’s just that the “somewhere” is shifted and spread over 1000 systems instead of only one.

    I don’t know if you are familiar with “edge servers”, like the Akamai server network. Effectively moving data “closer” to the users (in a networking perspective), so they don’t need to traverse as much of the “backbone network” to get the data. This can even be extended to placing such servers within a larger ISP’s internal network. The purpose is speed, but part of the result is reduction of “backbone pipe congestion”.

    P2P applications also take such effects into consideration. The “closer” the other node is to the requesting system, the “faster” the response should be. Using that as a starting point, it then builds “preferred” systems that it wishes to exchange data with (I have piece B, you have piece C, lets swap). If an ISP does across the board throttling of P2P, they may actually increase congestion on their “backbone pipe” not reduce it. The only place throttling of P2P makes sense is on the “backbone pipes” to/from an ISP. They should leave anything internal unrestricted, so the apps will “prefer” internal (non-congested) P2P systems over backbone ones.

  44. P2P and network effects
    Netflix is coming, lots of video streaming, etc, etc.. How does P2P technology fit into the big picture? (pun intended.)

    Netflix is server farm streaming. It all has to come down through the ISP backbone “big pipes” from a server farm somewhere. Let’s say you have 10K people attached to a largish ISP who decide to watch a particular movie that night. Say that (decent quality) video stream is 5Gbytes. That’s 50TBytes of “backbone bandwidth” the ISP has to allocate, over a 2 hour period. Do the math: the ISP has to provision for 55.5Gbits/sec to handle the load, but each DSL connection only needs a 5.5Mb/sec download connection to handle its “5Gbyte stream” over 2 hours.

    Now throw P2P technology into the picture. Let’s start with some assumptions. First; that each “Central Office” can deliver sustained 6Mb/sec download and 1Mb/sec upload to each of the attached subscribers. Second; that each CO is connected through 1Gbit/sec fiber (yeah, I know it’s a lot faster – but). Third; that around 100 people at each of 100 CO’s will be watching the same movie. Lastly; we exploit P2P technology at each subscriber and at the CO.
    Now, that 5Gbyte movie will transfer through a 1Gbit/sec pipe in about 40 seconds. You could stream from there directly to the 6Mbit/sec subscriber, but that means your “server” has to be big enough to handle *all* the movies potentially being served to all subscribers. So throw in a “P2P box” at each subscriber, with enough storage to store the movie. Start dumping a “piece” to each of those boxes, until every piece has been sent out – *once* – and then free the storage for the next movie (hmm, aggregate 600Mb/sec, less than 2 minutes to free). Let the boxes start exchanging those pieces among themselves. You can even bias the “pieces” so the first pieces go to the boxes first, and others later. They can start playing the movie sooner. The P2P “network” is still madly exchanging pieces behind the scenes.
    Repeat for another 100 CO’s so we get the same 10K subscribers.

    So, the ISP backbone “big pipe” now only needs 500Gbytes of backbone bandwidth for that movie (once for each CO, but I’ll leave the obvious optimization unstated). They can now fit 100 of the same size movie into that same 50TBytes/2hour of backbone, or free up the bandwidth for “uncongested” normal usage.

    There is some “fiddling with statistics” you can (and will) need to do. But the principles are described above. Optimizations. Netflix results without the scary backbone bandwidth requirements. Just like P2P (it is!), it works best when you have a LOT of people trying to “download” the same thing. We have just built a “dynamic” video streaming edge network. Netflix on P2P could be an ISP’s best friend.
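    The backbone numbers in the comment above check out; here is the arithmetic spelled out, using the same illustrative assumptions (10K subscribers, 5GB movie, 2-hour window, 100 COs):

    ```python
    # Sketch of the backbone arithmetic from the comment above; subscriber
    # counts, file size, and CO count are its illustrative assumptions.

    SUBSCRIBERS = 10_000
    MOVIE_GB = 5
    WINDOW_HOURS = 2
    COS = 100  # central offices, ~100 viewers each

    total_bits = SUBSCRIBERS * MOVIE_GB * 8e9

    # Pure server-farm streaming: everything crosses the backbone in real time.
    backbone_gbps = total_bits / (WINDOW_HOURS * 3600) / 1e9

    # Per-subscriber stream rate over the same window.
    per_sub_mbps = MOVIE_GB * 8e9 / (WINDOW_HOURS * 3600) / 1e6

    # With a P2P box per subscriber: the movie crosses the backbone once per
    # CO instead of once per subscriber.
    p2p_backbone_gb = COS * MOVIE_GB

    print(f"{backbone_gbps:.1f} Gbit/s backbone, {per_sub_mbps:.1f} Mbit/s "
          f"per subscriber, {p2p_backbone_gb} GB total with P2P boxes")
    ```

    50TBytes of backbone traffic collapses to 500GBytes, a 100x reduction, exactly the ratio of subscribers to COs.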


  45. @Oldguy: “Start with that enormous data project you just finished gathering together, and want to share with the world, say it’s 1GB in size.”

    Well I’d rather look at paying $2.95/month instead of completely f****ing up my residential connection:

    http://www.godaddy.com/Hosting/web-hosting.aspx?currencytype=CAD&isc=htgsgca14

    Bandwidth delivered wholesale to a data center is so much cheaper than delivered to a residential address.

    Coming back to the games. I’d rather pay more upfront or a small monthly subscription (a la Xbox network) and have it working instead of having it not working or being very frustrating when it does, and paying for monthly cap overage at my ISP.

    “I don’t know if you are familiar with “edge servers”, like the Akamai server network.”

    Yes I am and I’m also familiar with “transparent caching proxies” located at your ISP. When a file gets popular, it gets served from the cache located at your ISP instead of going through the backbone. But all this doesn’t work for P2P.

    Nap.

  46. @Nap
    “I’m also familiar with “transparent caching proxies” located at your ISP. When a file gets popular, it gets served from the cache located at your ISP instead of going through the backbone. But all this doesn’t work for P2P.”

    It also doesn’t work with https, ftps, or any encrypted transfer. Akamai does.

  47. @Nap
    ..”Bandwidth delivered wholesale to a data center is so much cheaper than delivered to a residential address.”

    You will still need to upload it once. The same as you do with P2P. Then you want to distribute to an unlimited/unspecified number of other people. “Unlimited bandwidth” does not mean “unlimited speed” (and what they really mean here is unlimited data – I already use multiple data centers, I know). I really don’t care how well a data center is provisioned or how accommodating they wish to be; if enough people come to get your data at the same time, it will bring it to its knees. There is a single (or a few) “chokepoint(s)” that you just cannot get around. This is where P2P starts to differ, a LOT.

    So you have a new 5GByte update for that online game. You have a few million people that want to get it, and they all want it as fast/soon as possible. Do you go with a server farm that has an extreme “limitation” of 1Tbits/sec and will chew up the backbone, or a P2P solution that can “scale” to whatever aggregate bandwidth you need and will do dynamic edge routing? From a network perspective, which is “cheaper”? Which will result in the faster delivery to that few million people? A P2P network will generally deliver to everyone in less than 24 hours, no matter if that “few” is 10 million or 100 million. You do the math on how long it will take for 500Pbytes through a 1Tbit connection, after you check out what it will cost for that kind of connection. P2P simply scales up a lot better than a server farm, any farm.

    As an individual you might look at the “chewing up your upload usage” side of things, but from a network perspective it makes sense. This also affects you. Would you be willing to trade off that “trickle upload usage” in order to get that 5Gbyte update in 24 hours, or wait a week or more to get it? What if it contains new features that both you and your online buddy need to have installed, but one of you gets it from the server farm today and the other one gets it 10 days from now?

    It’s not an “either or” decision. For some kinds of “problems” a server farm makes better sense, for others a P2P solution makes better sense. Or a graduated combination of the two.
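    The “do the math” challenge above is easy to make concrete. A rough sketch, using the comment’s own figures (5GByte update, 100 million users, a 1Tbit/sec farm):

    ```python
    # How long would a central 1 Tbit/s server farm take to push a 5 GB
    # update to 100 million users? Figures are the comment's illustrative
    # assumptions, ignoring protocol overhead and retries.

    USERS = 100_000_000
    UPDATE_GB = 5
    FARM_TBPS = 1

    total_bits = USERS * UPDATE_GB * 8e9   # ~4e18 bits (~500 PB)
    seconds = total_bits / (FARM_TBPS * 1e12)
    days = seconds / 86400

    print(f"{total_bits / 8 / 1e15:.0f} PB total, ~{days:.0f} days through the farm")
    # 500 PB total, ~46 days through the farm
    ```

    About a month and a half of saturated terabit pipe, versus the "under 24 hours" a swarm can manage because its aggregate upload capacity grows with every downloader.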


    @Oldguy: “I really don’t care how well a data center is provisioned or how accommodating they wish to be, if enough people come to get your data at the same time, it will bring it to its knees.”

    Except that once the major ISPs get it in their caching proxies it will be served from there. So your server will be hit only with timestamp checks not with the whole file download.

    “So you have a new 5GByte update for that online game. You have a few million people that want to get it,”

    With a few million paying customers I could definitely afford to talk to Akamai to distribute it. And, again, once it hits the caching proxies, it will be served from the ISPs internal network.

    See, the current client-server model is a pretty well optimized, tried and tested solution.

    Nap.

  49. Canadian Consumer says:

    @Nap
    Nap,

    It’s obvious that you do not understand P2P. You probably don’t have much experience with it, so I don’t blame you.

    You will never “fuck up” your residential connection if you want to put your 1GB project out. Just like you would not fuck up your residential connection if you uploaded the project to a server.

    Once it’s out there with a seeder, you can turn off your PC and your project says “hello world”. “Hello Vietnam, Hello Iceland, Hello WikiLeaks.”
    That is a lot of power the user has that he never had in the history of the world. No wonder some do not like this.

  50. Canadian Consumer says:

    (god i hate laptops, sorry about spelling, stupid touch pad keeps sending my cursor on walkabout…)

  51. Canadian Consumer says:

    Nap, you may have other options for high priced servers. The nice thing about P2P is that now the 3rd world programmer has options, as does the poverty-stricken student. With P2P, distribution of your art, movie, comedy or music is at your doorstep.

    And yes… believe it or not, there are plenty of people that offer up these talents for free. Most artists realize that they must find their niche first. P2P has found talent at a faster pace than all the record labels put together. It just is more efficient, because the listener votes with what he wants. Not some strategy session in a downtown office somewhere that tries to decide your consumption for the next season.

  52. Canadian Consumer says:

    Isn’t it great that the US State Department can’t take down the WikiLeaks insurance file? There is no central server to take it off from.

    A bunch of peers created a human chain to protect what they feel is correct.

    P2P is power to the people.

  53. @Nap
    …”With a few million paying customers I could definitely afford to talk to Akamai to distribute it. And, again, once it hits the caching proxies, it will be served from the ISPs internal network.”

    Have you ever talked to Akamai about pricing? If you describe something like this to them, I suspect you will not be so assured about the “affordability”.

    Caching proxies have their limitations. They have to be protocol aware for starters. Yes, if every ISP had a transparent proxy in place, *and* they checked for “new” on each hit (not all do), *and* you sent your content out via http (not https!), then it will work. For various reasons, including adding another “point of attack” on a network infrastructure, transparent caching proxies are no longer as common as you imply. Transparent proxies are totally under the control of the particular ISP, and are very dependent on how well they are set up and maintained. The ISP chooses the “network path” your system traverses based upon static criteria.

    On the other hand, P2P is not dependent on the ISP setup, is resistant to attacks (proven many times over) and dynamically finds well performing “close nodes” that could be a lot “closer” in network terms than the proxy. And it’s very cheap from the “uploader” perspective.

    Dig into the reality of network provisioning, server farms, etc. Now build that pricing into your game, of which you hope to sell a million (scale out to 1 million plus in network effects). Peer into the mists of “will I sell 100, 1000, or reach that million?” If I distribute fixes through P2P, I can sell at $10 and it doesn’t matter how many I sell. If I distribute updates through a central server, I have to scale that up, perhaps enormously, with sales – so say $100 for the same game. Will I still sell a million? Proxies and Akamai don’t really change that pricing structure all that much.

    From a broader network perspective, there is zero difference in the “amount of bytes transferred” for server farm vs P2P – even if you throw in proxies and Akamai. For every byte “downloaded”, there is one byte “uploaded”, from somewhere. P2P simply shifts that “upload” to everyone that is also “downloading”. It spreads that load across multiple points in the network.

    If you actually analyzed P2P from a networking technology perspective, instead of focusing on the “hit to your upload side”, you might see that – in most cases – it is simply better than anything else at distributing “largish” files to many, many people, and at doing it very quickly.

  54. Canadian Consumer says:

    (I know I am kind of hogging this thread, but this really means a lot to me and I am not at a loss for words on this subject)

    To me the central issue is: P2P represents OUR independence. No one controls it. Users can come together and share content peer to peer on poetry, politics, law and even insurance, like in the case of WikiLeaks and Assange.

    The main concept of P2P is that you are allowed to CONNECT, using your bandwidth, to another using reciprocal bandwidth. This is a very simple concept, and anyone that tries to take this away from you, the PEOPLE, by using bullshit arguments like “what about piracy, what about porn, what about music” is really just saying that they want to keep you from meeting with others in an internet cafe.

    Right now, you can go meet with your buddies outside of your house in a public facility and discuss your Dungeons & Dragons, or your mafia racket. The choice for freedom is yours. If you get caught playing godfather, you will go to jail. This is also true of those that choose to do illegal acts. We have laws for handling these violations. If a site starts making money selling content, it’s hard to hide the money trail and evidence for such. The main hurdle is police work. There are bad people on the net. We can have net police that go where the public go; I don’t see a problem with that.

    … but to ban public peering, or to even fault it, is beyond ridiculous. The repercussions of taking this poisonous bait mean decreased freedom for us all.

  55. @Canadian Consumer

    While I understand your philosophical viewpoint, and even agree with it, I advise caution not to confuse the philosophy with the technology. Something “better” may crop up tomorrow.

    Besides, I think the original focus of this blog posting was around Rogers’ technical response to the technology, specifically bittorrent. There are already alternatives and derivatives that can accomplish the same philosophical goals.

    Even the different technologies can be “better” or “worse” depending on your perspective. Part of that perspective is arranging the “goals” of the technology in order of priority. Network performance, individual performance, security, anonymity, whatever.
    In the category of scalable network performance delivering static content to an undefined (but potentially large) number of people, we can’t do much better than bittorrent – yet.


  56. @oldguy: “In the category of scalable network performance delivering static content to an undefined (but potentially large) number of people, we can’t do much better than bittorrent – yet. ”

    Hmmm… didn’t someone calculate earlier that at current speeds and pricing per residential GB, it would be more effective to mass mail hard drives through Canada Post?

    🙂 🙂 🙂

    Nap.

  57. ….
    @Canadian: “Once it’s out there with a seeder, you can turn off your PC and your project says “hello world”. “Hello Vietnam, Hello Iceland, Hello WikiLeaks.”

    Yeah, when the Egyptians said “Hello Mubarak” their internet stopped working.

    What would you think would happen if the Yankees decided to say “Hello Barak”?

    “That is a lot of power the user has that he never had in the history of the world. No wonder some do not like this. ”

    Mhhh, there’s some “prior art” before P2P. Like setting up a web page. You know, like this one.

    Nap.

  58. Canadian Consumer says:

    @Nap

    LOL, yes, setting up a web page is great. But what happens when the powers that be tell you that only some can connect to that web page? What happens when the server you are on does not agree with your webpage’s content? You say you have a right to start a Klingon religion; the server provider does not like the Christian groups that are hammering him and is willing to ignore you for the sake of business. A web page is limited, and even a web page will violate your hosting agreement with most providers if served by you directly.

    This is what you are facing with P2P. All it means is your ability to connect to other peers, and being able to tell the ISP to fuck off. What is being transferred is irrelevant, just as what is contained in your personal emails to your girlfriend is irrelevant.

    I understand ISPs charging me for usage, as long as that usage is reasonable, I think at 50 bucks a month 200-300GB a month is reasonable. Beyond that, I would like them to kindly fuck off from prying into which peers I choose to connect to and for what reason.

    @old guy

    Yes, I understand something better can come along; what that is, lol, I am at a loss to figure out, as connecting to peers is a fairly basic tenet of communication. However, it is not P2P or torrents that I am advocating; what I advocate is your right to freedom to go where you want on the internet. Curbing P2P or using deep packet inspection violates these principles.

  59. Canadian Consumer says:

    I don’t really understand what Mubarak and Egypt shutting down their internet has to do with this.

    I am against control of the internet or martial law of any sort.

  60. Canadian Consumer says:

    BTW, shutting down the internet would have to be the most desperate act imaginable in North America. You would basically be shutting down 50% of your economy. A lot of jobs require the internet to function now. This is not Egypt. We need our Reuters feeds to know how to trade futures in real time. We need our networks to communicate with out-of-province offices. It would be economic gridlock.

    In fact lots of businesses like security, telephones and promotional companies require the internet to do business.

    An internet kill switch is the stupidest thing I have ever heard of. It should be called an economy kill switch. It’s beyond dumb and short sighted. The only thing they can do is take away your rights on that internet, and that is what I am fighting for.


    @Canadian: “I don’t really understand what Mubarak and Egypt shutting down their internet has to do with this.”

    It shows that it is possible. No P2P would help you when your ISP decides to shut down all residential connections.

    “What happens when the server you are on does not agree with your webpage’s content. You say you have a right to start a klingon religion, the server provider does not like the Christian groups that are hammering him and is willing to ignore you for the sake of business”

    There are so many server providers around the globe, if you’re into the klingon thing I bet that the Swedish guys hosting Wikileaks would have no objection to hosting your pages either.

    Nap.


  62. @Canadian: “BTW, shutting down the internet would have to be the most desperate act imaginable in North America. You would be basically shutting down 50% of your economy. ”

    You can selectively shutdown residential internet only. Do you think that corporate America would hesitate even for a fraction of a second? They’re already preparing for it.

    http://news.cnet.com/8301-31921_3-20029282-281.html

    Nap.

  63. ..”Hmmm… didn’t someone calculate earlier that at current speeds and pricing per residential GB, it would be more effective to mass mail hard drives through Canada Post?”

    The original “P2P”. But it falls short on the “cheaply scale to lots of people” characteristic. And it doesn’t even fit into any kind of “fast”, at least not by today’s standards.

    If the CRTC and the large ISP’s get their way, you might just be right. In that case, you might want to be on the lookout for an option to get your “game updates” sent to you on a DVD.
    But this leads off into another tangent, one that has been hashed over pretty well in other areas. If you are fatalistic, you will just assume the consumer gets screwed and can’t do anything about it. If you are optimistic you’ll believe it “just won’t happen”. If you are in a compromising mood, you can negotiate or discuss options/choices. Shaw seems to have the right approach as an ISP, lets hope the others follow suit.


  64. @Oldguy: “And it doesn’t even fit into any kind of “fast”, at least not by today’s standards.”

    How about a 2TB drive sent by Canada Post “Priority Next A.M.” service? Can you beat it?

    nap.

  65. @Nap..
    Corollary..

    “Speed” vs “bandwidth” (and maybe usage). Consider a semi-trailer (8ft x 53ft x 8ft) absolutely packed with 16GB flash drives (2.5in x .5in x .25in – all loaded with data) leaving LA and heading for NY. Consider a wide open 1Gbit internet pipe between the starting point and the end point. Assume that it is the “data” that is important and not the flash drives, also assume that somehow you can instantaneously convert/store that data from the pipe or the flash drives. It takes 3 days of driving to make the trip with the semi.

    What is the “speed” of that 1Gbit pipe? What is its “bandwidth”?
    What is the “speed” of that semi-trailer full of data? What is its “bandwidth”?
    Which is “faster”? In what way?
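    For the curious, those questions can be answered with a rough calculation using the dimensions assumed above (trailer and flash drive sizes as stated; packing losses ignored):

    ```python
    # Rough numbers for the semi-trailer thought experiment above, using the
    # dimensions assumed in the comment (all figures are illustrative).

    TRAILER_IN3 = (8 * 12) * (53 * 12) * (8 * 12)  # trailer volume, cubic inches
    DRIVE_IN3 = 2.5 * 0.5 * 0.25                   # one 16 GB flash drive
    DRIVE_GB = 16
    TRIP_DAYS = 3                                   # LA to NY drive

    drives = TRAILER_IN3 / DRIVE_IN3
    payload_pb = drives * DRIVE_GB / 1e6            # petabytes on the truck

    # Effective "bandwidth" of the truck over the 3-day drive.
    truck_gbps = payload_pb * 1e15 * 8 / (TRIP_DAYS * 86400) / 1e9

    print(f"~{payload_pb:.0f} PB on the truck, ~{truck_gbps / 1000:.0f} Tbit/s effective")
    # ~300 PB on the truck, ~9 Tbit/s effective
    ```

    So the truck delivers roughly 9000 times the bandwidth of the 1Gbit pipe, while its “latency” is three days. Which is “faster” depends entirely on which of the two you are measuring.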

    Now throw “data usage” into the picture. I don’t use “bandwidth”, and I don’t use “speed”. I use data. It takes “bandwidth” to get that data to me. If I want it “fast”, speed matters. If I want lots of data, *and* I want it fast, I need both “speed” *and* “bandwidth”.
    The topics are separate, but strongly inter-related.

    Up until recently, the ISP’s have been selling purely “speed”. Bandwidth was something they effectively ignored. Now they want to charge for “usage”. That’s what’s wrong with this picture, they should have been selling “bandwidth” all along, the ability to deliver X amount of data in Y amount of time. “Bandwidth congestion” occurs when more people want more “bandwidth” than the system can supply at that moment in time, it doesn’t matter how “fast” they are, there just isn’t enough “bandwidth” to support all those users. It may seem “slow”, but it’s not. They are just as “fast” as they were before, but they can’t get a steady stream at the “speed” they are expecting.
    It’s like they are madly racing from stoplight to stoplight through town, their overall result (speed) is about the same as a bicycle riding through and hitting all the green lights. How “fast” are they going? How much “bandwidth” (KM/H) are they getting? It’s a poor analogy, but hopefully illustrates the real problem.

    Some people “use” a steady trickle of data all month long, their contribution to the “congestion problem” is unnoticeable. If they were to compress all that usage into a sustained burst during peak times, they would cause a very noticeable problem.

    So the problem they are trying to resolve isn’t “usage” per se, it’s peak time bandwidth congestion. The only “data usage” that matters, is the data being “used” during those times. If they want to implement UBB fairly, they need to implement it on a sliding scale that addresses the “peak congestion time” problem.

    Interestingly enough, most P2P apps can be “tuned” to automatically avoid usage during these times. Many P2P app users already do this, out of courtesy. If ISP’s published some parameters for the “peak usage congestion”, they could be more accurate in their tuning.

  66. @Nap
    ..”How about a 2TB drive sent by Canada Post “Priority Next A.M.” service? Can you beat it?”

    Looks like we crossed posting times. See above..

    OK.. I need to send 5Gbytes to 1 million users world wide in the next 24 hours. Can Canada Post meet that deadline? Can Fedex?

    Don’t confuse “bandwidth” with “fast”. Your suggestion has high bandwidth; it’s hardly “fast”.


  67. @Oldguy: “It’s like they are madly racing from stoplight to stoplight through town, their overall result (speed) is about the same as a bicycle riding through and hitting all the green lights. How “fast” are they going?”

    You can define all these through “latency”, peak speed, minimum speed and average speed on the interval of time you expect full delivery.

    But let’s get back to what started all this – the P2P thing.

    You may note that for different applications people need different latencies – or let’s put it as an expected time of arrival “ETA”.

    If I want to check the weather forecast, I need a short ETA and it would be kinda stupid to order it by mail. But how would I get it over P2P either? Supposing it exists on some people’s drives, I would need to wait until one of them connects to the network. So for low-latency, “interactive” applications, the web will beat both P2P and Canada Post any day.

    OTOH, if I want a copy of Aunt Edna’s movies from the family vacations, and she used one of those newfangled HD cameras, I’m contemplating a couple of TB, but I don’t need them immediately. Here I would look at getting them ASAP, which would mean Canada Post. Neither the web nor P2P could deliver it in 24h on residential internet. But let’s say I put cost into the equation too, and I would consider waiting longer if it costs me less. Still Canada Post wins.

    My conclusion is that P2P is the least efficient way to send data. If you need interactivity, it doesn’t deliver. If you want low cost, it doesn’t deliver either.

    Nap.

  68. @Nap
    ..’You can define all these through “latency”‘

    Correct, “latency” is the technical term for the combined effects of “speed” plus “bandwidth” plus propagation time. Low latency is better/faster. Low speed, high bandwidth = high latency. High speed, congested/low bandwidth = high latency. Etc. Traffic shaping can modify the apparent “bandwidth” factor for certain packets. You can never achieve lower latency than what your speed plus propagation time will allow.
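    One way to make the speed/bandwidth/propagation relationship concrete is a toy delivery-time model (a simplification that ignores protocol overhead; the numbers are illustrative only):

```python
def delivery_time_s(size_bytes, throughput_bytes_s, propagation_s):
    """Toy model: the last byte arrives after the propagation delay plus
    the time to serialize the file at the effective (possibly shaped) rate."""
    return propagation_s + size_bytes / throughput_bytes_s

# 1 MB file, 50 ms propagation, 1 MB/s effective throughput:
unshaped = delivery_time_s(1_000_000, 1_000_000, 0.050)  # ~1.05 s
# Same link and file, but shaping cuts effective throughput to 100 KB/s:
shaped = delivery_time_s(1_000_000, 100_000, 0.050)      # ~10.05 s
```

    The propagation term is the floor you can never beat; shaping only ever stretches the throughput term.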

    …”supposing it exists on some people’s drive, I would need to wait until one of them connects to the network”

    Yup.. But the same situation exists for “web” based access as well, with the added twist that the file is available *only* from one place. You have to wait for *that* server to come back up. In P2P, *any* server that has the file will do. You might even have the case where there are 20 systems, each with a *different* 5% of the file, and you can obtain 5% from each of them to total the full 100% (and they will each be exchanging pieces to eventually reach 100% themselves).
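    The 20-systems case can be sketched as a toy model (all peer data below is fabricated for illustration):

```python
# Toy model of the 20-systems example: each peer holds a different 5%
# "piece" (one of 20), and a downloader assembles the full file by
# taking the union of what the swarm collectively offers.
NUM_PIECES = 20
peers = {f"peer{i}": {i} for i in range(NUM_PIECES)}  # piece index per peer

def assemble(peer_pieces):
    """Union of all pieces available across the swarm."""
    have = set()
    for pieces in peer_pieces.values():
        have |= pieces
    return have

complete = assemble(peers) == set(range(NUM_PIECES))  # True: file recovered
```

    No single peer has the whole file, yet the swarm as a whole does; that is the availability property the web’s single-server model lacks.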

    You seem to continually focus on your personal perspective, while ignoring the overall network effects.
    I have never stated that P2P is “best” for any/all cases; I stated that it is the current “best” for network delivery of static content to an undefined number of people. The “best” is from a network-performance and originator perspective. You may not care about the originator’s point of view, but networking performance will affect you directly.
    Current weather reports are hardly static. And try putting “anyone that wants it” on the address line of your Canada Post delivery.
    You are correct that overnight Canada Post might be the cheapest, or even the “best”, way to deliver static data to known individuals. Try doing that with an unknown and undefined number of individuals – and still do it overnight on a worldwide basis.
    You are only focusing on a single part of the “problem” P2P was designed to solve. Any solution has to encompass all parts, not just one. Even one part at a time doesn’t work if the “solutions” can’t be combined appropriately.

    So, back to the definition: “Efficient and quick delivery of static content to an undefined, but potentially large, number of people.” Any solution has to meet *all* the criteria in the problem definition, not just some of them. Do you have anything that can do better than P2P from a networking perspective? How about from the originator perspective?

    You don’t happen to work for Canada Post, do you? 🙂


  69. @Oldguy:

    How about “Pseudo-anonymous delivery of nondescript but potentially malicious content to an undefined, but potentially large, number of people.”

    🙂

    Yes Oldguy, I look at it from a personal perspective, as in “personal use” and “privacy”. P2P ain’t that good if you want to share personal photos with your family. Neither is Facebook, if that matters, but that’s beyond our discussion.

    As for web availability: it depends on your SLA, which depends on how much money you’re willing to pay. “Best efforts” (no SLA) at $2.99/month usually offers some 98-99% uptime. If you have a really critical application (your name is Julien or something like that) then there are providers that will place copies on several farms located around the globe and connected to different trunks. Pretty much like Akamai, albeit on a much smaller scale. So even if uncle (Mu)Barak turns off the switch, the content is still available for those outside his influence.
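    For scale, here is what those uptime figures work out to over a month (simple arithmetic, not a claim about any particular provider):

```python
# What "98-99% uptime" means in practice over a 30-day month:
month_hours = 30 * 24  # 720 h
downtime_h = {u: month_hours * (1 - u) for u in (0.98, 0.99)}
# 98% uptime -> ~14.4 h/month of downtime; 99% -> ~7.2 h/month
```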

    Nap.

  70. …’How about “Pseudo-anonymous delivery of nondescript but potentially malicious content to an undefined, but potentially large, number of people.’

    Hmm.. I thought that was the definition of web surfing. Or a hacked proxy. At any rate it fits those approaches as well as it fits P2P.

    ..””privacy”. P2P ain’t that good … Neither is Facebook”

    Granted. And the concept of “privacy” is not really what they are meant to cover. Not part of the “problem” they are designed to solve. You still keep introducing criteria that P2P isn’t intended to solve. That’s not a defect in P2P, that’s misapplication of the technology.
    (Perhaps making the same mistake that many pushers of DRM are making?)

    ..”As for web availability”

    Not just site availability, 404s as well. Not only do you need to keep that site available and with lots of unspecified bandwidth, you have to keep it maintained and backed up.
    There is no possible way you can provision a web site to handle “quick” delivery to an unspecified number of people. Look at the “network paths” involved to those unspecified people; what is the provisioning available at those points?
    Network effects. There is a reason that Google has its own network “backbone” and “server containers” scattered everywhere. But neither Akamai nor Google can put their equipment *everywhere*, nor can they dynamically balance network paths in places where they *don’t* have those nodes in place. P2P can, and that is exactly what the technology is designed for.

    You have one variable you keep ignoring in your solutions: “undefined number of people/systems”. What technology do you have that addresses this at least as well as P2P does?


  71. @Oldguy: “Hmm.. I thought that was the definition of web surfing. Or a hacked proxy. At any rate it fits those approaches as well as it fits P2P. ”

    Not really. There’s the question of liability too. Two scenarios:

    1. Your bank gives you a business card with their http://www.bank.com address and tells you to connect there for online banking. You connect there after their site was hacked and what you thought would be your monthly statement proves to be a kid pr0n file.

    2. You connect to P2P and download a bunch of seemingly innocuous music and video files and some of them prove to be kid pr0n. Next day RCMP is at your door.

    In which case do you stand a chance of explaining to the judge that you were not at fault?

    Nap.

  72. …”Not really. There’s the question of liability too.”

    Hacked proxy server. Bank isn’t hacked. Same scenario. Search engine results, etc, etc..

    In either case, it doesn’t matter what you “got”, it’s what you went looking for. Intent does matter.

  73. Sorry.. Hacked DNS server is a better example.. Most banks are at least https, and proxy servers can’t support https..


  74. @Oldguy: “In either case, it doesn’t matter what you “got”, it’s what you went looking for. Intent does matter. ”

    Correct. So the judge will ask what exactly were you looking for on P2P?

    Nap.

  75. Wow guys, lots of good discussion on P2P. Too bad MG doesn’t support or defend this technology, considering the research that’s been done on it by UNCTAD, and our own independent researchers here in Canada. It really sucks that a lot of people in the copyright debate have their own agendas, rather than supporting facts and truths. While looking at the millions lost in the reluctance to move Pandora’s services to Canada, one should look at the trillions lost in rejecting P2P as a viable distribution and value chain for our creative industries.

  76. Toronto_Greek_Guy says:

    We have the bandwidth, but what good is the bandwidth if our throughput is messed up? P2P works in proportions: reduce upload throughput and your download throughput suffers. P2P is a great idea when not limited. Why do you think there are loads of data sharing services popping up left, right and center? There was Rapidshare and Megaupload (the originals); now there are dozens of them with free/paid subscriptions (all of them based in Europe/Asia), where access to massive pipelines is easily obtainable for extremely cheap. Why? To take advantage of North American ISP throttling (cause Rogers isn’t the only one doing it). P2P is a great system of data distribution, but simply put, it’s pretty much done for unless these traffic shaping models are scrapped.

    PS: these traffic shaping models are just put in place to hinder our freedoms. I know it, you know it, and the ISPs know it.