Distributed Computing Industry
Weekly Newsletter

In This Issue

P2P Safety

P2PTV Guide

P2P Networking

Industry News

Data Bank

Techno Features

Anti-Piracy

August 25, 2008
Volume XXIII, Issue 4


Verizon Urges Sound Policy and Industry Cooperation

The information and communications sectors are experiencing one of the greatest periods of innovation in their history, as entrepreneurs compete to provide consumers with increased speed, mobility, and content over broadband networks.

But, according to the Verizon senior executive who manages his company's large investment in networks, future breakthroughs will also depend on appropriate public policy, as well as cooperative industry efforts to set standards.

In a keynote address this week at the Progress and Freedom Foundation's annual Aspen Summit, Verizon Executive Vice President and Chief Technology Officer Dick Lynch urged a "change in mindset on the part of policymakers to acknowledge the realities of the 100-megabit world" and suggested that other industry participants be pragmatic as well.

"The public interest can best be served by getting as much broadband in front of as many people as possible, as quickly as possible, and ensuring that investment keeps up with demand," Lynch said. "To a large extent, this is a matter of taking down the barriers to investment and refraining from erecting new ones."

Lynch said the "high-passion" issue of network management is a "major public policy concern" that can be resolved in a way that preserves proper network management techniques. "We believe that network and applications providers can and must work together to find solutions that work for the industry and for our customers," he said, "and Verizon has taken a leadership role in doing just that."

To that end, Verizon and Pando Networks co-founded the P4P Working Group (P4PWG) under the auspices of the Distributed Computing Industry Association (DCIA) in 2007.

Lynch said the group identified "techniques which, in field tests, have dramatically reduced network costs and congestion while noticeably improving the performance of the service to the customer."

He said he expects those techniques "to be adopted as an Internet standard by all major network and P2P providers."

Lynch said the pragmatism displayed in the P4PWG's success "offers a model for the kind of industry cooperation and collaboration that should be used to address the emerging challenges of the Internet industry." But he noted that government also has "a legitimate role in helping to define the public interest, establish principles, and adjudicate conflicts."

"Dynamic industries like ours require flexible solutions that can evolve and adapt to a changing environment - not rigid regulatory solutions that are one step behind the marketplace," he said.

Lynch said broadband investment is up 40% over the last four years, with speeds doubling, on average, every 20 months. That new capacity enables "equally amazing advances in applications, services, and equipment," he said.

Verizon's 700 megahertz spectrum purchase, the choice of LTE technology for its fourth-generation wireless network, its wireless Open Development Initiative, and the company's superior FiOS fiber-to-the-home (FTTH) network position Verizon as a market leader, Lynch said.

Download Diet: Local File Sharing Cuts Network Loads

Ever since Bram Cohen invented BitTorrent, web traffic has never been the same.

Peer-to-peer networking, or P2P, has become the method of choice for sharing music and videos. 

While initially used to share unauthorized material, the system is now used by NBC, BBC, and others to deliver licensed video content and by Hollywood studios to distribute movies online.

Experts estimate that P2P systems generate 50% to 80% of all Internet traffic. Most predict that number will keep going up. Tensions remain, however, between users of bandwidth-hungry P2P applications and Internet service providers (ISPs).

To ease this tension, researchers at the University of Washington (UW) and Yale University propose a neighborly approach to file swapping, sharing preferentially with nearby computers. 

This approach would allow P2P traffic to continue growing without clogging the Internet's major arteries, and it would provide a basis for future P2P systems.

A paper on the new system, known as P4P, was presented this week at the Association for Computing Machinery's Special Interest Group on Data Communications (Sigcomm) meeting in Seattle.

"Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW Research Assistant Professor of Computer Science and Engineering. "At the same time, speeds are increased by about 20%."

"We think we have one of the most extensible, rigorous architectures for making these applications run more efficiently," said co-author Richard Yang, an Associate Professor of Computer Science at Yale.

The project has attracted interest from industry. A working group was established last year by the Distributed Computing Industry Association (DCIA) to explore P4P and now includes more than 80 members, including representatives from all the major US ISPs and many companies that supply content.

"The project seems to have a momentum of its own," Krishnamurthy said. The name P4P was chosen, he said, to convey the idea that this is a next-generation P2P system.

In typical web traffic, the end-points are fixed. For example, information travels from a server at Amazon to a computer screen in a Seattle home and the ISP chooses how to route traffic between those two fixed end-points.

But with P2P file sharing, many choices exist for the data source because thousands of users are simultaneously swapping pieces of a larger file. Right now the choice of P2P source is random: a college student in a dorm room would be as likely to download a piece of a file from someone in Japan as from a classmate down the hall.

"We realized that P2P networks were not taking advantage of the flexibility that exists," Yang said.

For the networks considered in the field tests, researchers calculated that the average P2P data packet currently travels 1,000 miles and takes 5.5 metro-hops, which are connections through major hubs.

With the new system, data traveled 160 miles on average and, more importantly, made just 0.89 metro-hops, dramatically reducing web traffic on arteries between cities where bottlenecks are most likely to occur.

Tests also showed that right now only 6% of file sharing is done locally. With the tweaking provided by P4P algorithms, local file sharing increased almost tenfold, to 58%.

The P4P system requires ISPs to provide a number that acts as a weighting factor for network routing, so cooperation between the ISP and the file-sharing host is necessary. But key to the system is that it does not force companies to disclose information about how they route Internet traffic.
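
As a rough illustration of how such a weighting factor might work - the peer groups, weight values, and selection logic below are hypothetical, not the published P4P interface - a client could bias its choice of download source toward peers its ISP marks as cheap to reach:

    import random

    # Hypothetical weights an ISP might publish for groups of peers.
    # Lower numbers mark cheaper, closer paths; the ISP reveals only
    # these opaque values, never its actual network map.
    ISP_WEIGHT = {"same-city": 1, "same-region": 5, "overseas": 50}

    def pick_source(candidates):
        """Pick one download source, biased toward low-weight peers."""
        inverse = [1.0 / ISP_WEIGHT[region] for _, region in candidates]
        return random.choices(candidates, weights=inverse, k=1)[0]

    peers = [("10.0.0.7", "same-city"), ("10.1.4.2", "same-region"),
             ("203.0.113.9", "overseas")]
    print(pick_source(peers))  # usually the same-city peer

Because the weights are opaque numbers, the client learns which sources are preferable without ever seeing the provider's topology.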

Other authors of the paper are Haiyong Xie, a Yale graduate now working at Akamai Technologies, Yanbin Liu, at IBM's Thomas J. Watson Research Center, and Avi Silberschatz, Professor and Chair of Computer Science at Yale. The UW research was supported by the National Science Foundation (NSF).

Report from CEO Marty Lafferty

The DCIA-sponsored P4P Working Group (P4PWG), under the leadership of Co-Chairs Doug Pasko of Verizon Communications and Laird Popkin of Pando Networks, has successfully completed a second round of P4P field tests conducted by Pando and multiple Internet service providers (ISPs), focused on optimizing performance for different broadband network architectures and configurations.

In addition to Pando and the participating ISPs, the P4PWG owes an enormous debt of gratitude to the research teams at Yale University and University of Washington (UW) that originally conceived the idea for P4P, subsequently developed the underlying P4P protocol, and worked diligently on the initial simulation studies and follow-on field tests.

Haiyong Xie and Professors Richard Yang, Arvind Krishnamurthy, and their colleagues are to be heartily congratulated for their excellent work, which is reflected in the research paper presented this week at Sigcomm 2008.

In addition, the P4PWG is now planning follow-on field tests with multiple P2P technologies, including live P2P streaming, to likewise optimize the benefits of P4P mechanisms for different P2P protocols and applications.

The P4PWG has also recently added sub-groups to accelerate advancements in critical areas and to improve overall productivity of the working group by leveraging the enormous talent and resources that this important initiative continues to bring together from all over the world.

The IP POLICY/GUIDELINES Sub-Group, facilitated by Microsoft's See-Mong Tan, is now completing its work product intended both to protect the intellectual property (IP) of all P4PWG participants and to encourage an open process for continuing to develop, test, and commercially deploy P4P technical solutions to benefit all ISPs and P2P companies that voluntarily wish to take advantage of what these have to offer.

The CABLE Sub-Group, facilitated by Comcast's Rich Woundy, is now evaluating results of the current field test to determine next steps that will address unique concerns of cable-type ISPs for additional testing and ultimate implementation. It is also working on how much topological data must be shared with P2Ps for optimal results and whether there should be multiple levels; and how to involve this set of ISPs in the broader P4PWG standards setting / best practices process.

The LIVE P2P Sub-Group, facilitated by Abacast's Mike King, is now studying alternatives for integrating live P2P streaming solutions into the upcoming P4PWG field tests and ultimate implementation. It is also working on how to communicate in a common way with ISPs; how to communicate in a common way with P2Ps; and how to involve live P2P offerings in the broader P4PWG standards setting / best practices process.

The TELCO Sub-Group, facilitated by AT&T's Jia Wang, is likewise now evaluating results of the current field test to determine next steps that will address unique concerns of telco-type ISPs for additional testing and ultimate implementation. And it is also working on how much topological data must be shared with P2Ps for optimal results and whether there should be multiple levels; and how to involve this set of ISPs in the broader P4PWG standards setting / best practices process.

The CACHING Sub-Group, facilitated by PeerApp's Eliot Listman and Oversi's Eitan Efron, is now investigating ways to integrate caching/content acceleration vendor offerings into upcoming P4PWG field tests, also with a view towards commercial implementation. It is also working on how to communicate in a common way with ISPs; how to communicate in a common way with P2Ps; and how to involve caching/content acceleration solutions in the broader P4PWG standards setting / best practices process.

Similarly, the WIRELESS/MOBILE Sub-Group, facilitated by Cisco Systems' Tim Cricchio, and the SATELLITE sub-group, facilitated by KlikVU's Lowell Feuer, are looking at ways to integrate unique concerns of their respective types of ISPs into upcoming P4PWG field tests and commercial implementation. And they are also working on how much topological data must be shared with P2Ps for optimal results and whether there should be multiple levels; and how to involve these sets of ISPs in the broader P4PWG standards setting / best practices process.

The HARDWARE Sub-Group, facilitated by GridNetworks' Jeffrey Payne, is now discussing router and CPE manufacturer issues and the feasibility of integrating this sector into future P4PWG field tests. It is also working on how to communicate in a common way with ISPs; how to communicate in a common way with P2Ps; and how to involve hardware manufacturers in the broader P4PWG standards setting / best practices process.

The RESEARCH Sub-Group, facilitated by Yale University's Richard Yang, is now appraising what resources are available beyond those tapped to date by the P4PWG and how these should be organized. It is also working on what relevant research has already been done (or is currently in process) by third parties and how this should be integrated into the P4PWG; as well as how research findings should be validated.

And finally, the STANDARDS Sub-Group, facilitated by Telecom Italia's Enrico Marocco, is now considering alternatives for the process the P4PWG should use to move beyond the field testing phase to the standards setting / best practices definition phase. It is working on whether the P4PWG should generate the open standards / best practices deliverable internally, or whether it should work with an established standards-setting body, such as IETF, on this aspect of the P4PWG mission, or a combination of such efforts.

Interested parties are encouraged to call the DCIA at 410-476-7965 or e-mail us at P4PWG@dcia.info for more information. Share wisely, and take care.

The Internet's New Shortcut

Excerpted from Forbes Report by Andy Greenberg

The Internet, it turns out, may have room enough for everyone.

That, at least, is the hope of two professors from the University of Washington (UW) and Yale University. They presented research at a conference in Seattle this week describing a new and speedier way to send data across the Internet. Their technique is based on an algorithm they call P4P.

P2P file sharing now accounts for 40% to 60% of all Internet traffic. Internet service providers (ISPs) charge flat rates to users regardless of how much data they send over a network.

Now, professors Arvind Krishnamurthy of UW and Richard Yang of Yale say they have a way to solve broadband providers' woes. Their algorithm, which they call P4P or "local file-sharing," tracks users' locations to find the shortest path across the Internet.

The result, they say, should please both sides of the P2P debate: Users can download files about 20% faster than conventional file sharing, while cutting the bandwidth requirements by more than a factor of five.

"We think we've come up with a way to end this catfight among ISPs and P2P users," Krishnamurthy said.

The barrier until now, Krishnamurthy says, has been privacy. Broadband providers haven't wanted to give applications providers access to the geography of their network - a move that could reveal elements of their business to competitors.

P4P, Krishnamurthy asserts, takes advantage of data about users' location and a provider's network map without revealing details to either side.

If P2P software distributors and broadband providers buy in, the results look promising. Since April, Verizon has been testing a version of the researchers' P4P system implemented by the New York, NY-based P2P content-delivery start-up Pando.

In a test with around 600,000 users, Krishnamurthy says, data sent using P4P had to travel between an average of just two networks to reach its destination, as opposed to around seven with normal file-sharing, vastly cutting the cost of moving the data.

This week the researchers published their full P4P algorithm for the first time. They hope other major Internet providers will experiment with the method. Some are already watching from the sidelines: Cox Communications and Time Warner Cable have joined the DCIA-hosted P4P Working Group (P4PWG) as Observers.

Comcast has joined the group as an Active Participant, and its spokesman Charlie Douglas said that the company is "engaging, collaborating and working hard to find technical solutions to the network management issues raised by P2P."

The real winners from a truce between users and Internet providers will be those who have staked their business on file sharing itself. San Francisco-based BitTorrent, for instance, has worked to bring file sharing into the mainstream, cutting deals with companies like Fox, Paramount, and Viacom to distribute licensed P2P content.

BitTorrent's Chief Technology Officer Eric Klinker points out that P4P still faces hurdles before it can be adopted widely by ISPs. The largest is the creation of a standard to allow files to travel between networks, a process that often takes the Internet's standards body, known as the Internet Engineering Task Force (IETF), more than two years.

In the long run, however, Krishnamurthy argues that the companies controlling the Internet have little choice about adopting an alternative such as P4P.

"Throttling traffic is only a short-term solution," he says. "The ISPs have a very good incentive to make this work."

Comcast Does About-Face: Declares Love for P2P

Excerpted from Wired News Report by Betsy Schiffman

Comcast has seen the light. After testing P4P, an experimental file-sharing protocol that reduces network costs and bandwidth usage, the company is closer to embracing file sharing on its own network.

"We are active members of the P4P Working Group (P4PWG), and our engineers are collaborating with other engineers on it. We're absolutely engaged and part of it," said Comcast spokesman Charlie Douglas.

P4P, a next generation file-sharing system, has garnered intense interest from ISPs - such as AT&T, Verizon, Telefonica, and Comcast - as well as content providers. If the architecture becomes widely adopted by ISPs, it could change the face of file sharing; P4P lowers network costs for broadband providers by reducing the distance data travels on P2P applications.

The upshot? File sharing won't hurt network performance. And as an added benefit, it could also increase downloading speeds for customers. The technology could be equally beneficial for content providers since it might help movie studios and music labels track legitimate media sales.

For Comcast, the adoption of P4P would be a dramatic shift in strategy.

In July, Comcast tested the P4P protocol on its network, and saw dramatic improvements in network performance, according to Robert Levitan, CEO of Pando Networks, a content-delivery company that conducted the test for Comcast and AT&T.

In addition, the architecture increased download speeds for broadband subscribers. The results - though not made public yet - "will easily blow away Verizon's results," says Levitan.

That says a lot. When Verizon tested P4P on its network in February, it saw an 80% decrease in the distance that P2P network traffic traveled, and the delivery speeds increased by up to 200%, according to Levitan. Meanwhile, a team of computer scientists presented those results at a conference in Seattle this week.

Not only are ISPs showing the love for P4P, content providers are equally interested. The RIAA has shown support for the technology.

Levitan says Cary Sherman, the President of the RIAA, was so impressed with the technology, he introduced Pando to several music executives.

"The world is changing when the RIAA is introducing a P2P tech provider to music execs," says Levitan.

Comcast Tests New Network Management

Excerpted from Internet News Report by Kenneth Corbin

Comcast, which had announced in March that it would migrate to a "protocol-agnostic" method of network management by the end of the year, is in the advanced stages of testing a system that would slow speeds of the heaviest users during times of peak traffic.

"The approach will basically measure the amount of data throughput in your modem," Comcast spokesman Charlie Douglas told Internet News. "During a time when we're mirroring a state of congestion, it would de-prioritize some of the data requests from the very heaviest users at the time who were contributing the most to network congestion."

Comcast has been conducting trials of the new network management system in test markets since early June. The system is still being tweaked, but Douglas said that Comcast has settled on the general model for what its network management will look like once it switches its entire network over by the end of the year.

Comcast sent users in the five test markets e-mails alerting them to the policy change, and posted a FAQ page explaining the process. Douglas said that for the small portion of users whose traffic was being "de-prioritized," the difference in speeds would be noticeable.

"Still, even in that de-prioritized state, it's going to be faster than a typical DSL connection," he said.

Douglas said that the system is dynamic, meaning that it can update its management in real time. So if a user found his traffic slowed, he could shut down a few applications, and the connection speed would pick up again.
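
A minimal sketch of how such a policy could work appears below; the thresholds are assumptions for illustration, since Comcast has not published its parameters:

    # Sketch: during congestion, de-prioritize the heaviest users.
    # Re-running this check continuously makes the system dynamic -
    # a user who closes a few applications is restored automatically.
    CONGESTED_UTILIZATION = 0.80  # assumed congestion trigger
    HEAVY_USER_FRACTION = 0.70    # assumed per-modem usage threshold

    def deprioritized_modems(link_utilization, usage_by_modem, capacity_bps):
        """Return the modems whose traffic should be de-prioritized."""
        if link_utilization < CONGESTED_UTILIZATION:
            return set()  # no congestion, so no action
        return {modem for modem, bps in usage_by_modem.items()
                if bps > HEAVY_USER_FRACTION * capacity_bps}

    # A 10 Mbps link at 90% utilization: only the 8 Mbps user is slowed.
    print(deprioritized_modems(0.90, {"modem-a": 8e6, "modem-b": 1e6}, 10e6))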

Neighborhood P2P Could Alleviate Net Traffic Concerns

Excerpted from Trendwatch Report by Wolfgang Gruener

Ingenious solutions to problems often originate from very simple ideas. 

And, it appears, researchers from the University of Washington (UW) and Yale University have found a way to substantially reduce Internet congestion for service providers that could ultimately spare Internet users from much-discussed bandwidth caps.

The idea: keep Internet traffic local. Internet service providers (ISPs) have not made many friends by starting a discussion about possible bandwidth caps. And even if some of these providers aren't exactly smart with their choice of words, there is no denying that some Internet users are using much more bandwidth than others.

It is generally believed that file sharing across P2P networks can account for 50% to 80% of web traffic at any given time. No matter how you look at it, the massive growth of this traffic could turn into a problem for all Internet users sooner or later.

But, as it turns out, there might be a very simple solution to this problem - a solution that could dramatically reduce the load on critical Internet infrastructure. Researchers at UW and Yale propose a "neighborly approach to file swapping, sharing preferentially with nearby computers." 

The research group found that one of the problems with file sharing is that data packets travel enormous distances, taking advantage of key portions of the global Internet infrastructure. For the networks considered in the field tests, researchers calculated that the average P2P data packet currently travels 1,000 miles and takes 5.5 metro-hops, which are connections through major hubs.

Using their "neighborhood network," which is focused on reducing the overall distance, those numbers came down to 160 miles on average and just 0.89 metro-hops - which means that the load on significant Internet arteries between cities decreased. The network technology, dubbed P4P to indicate a next-generation P2P network, resulted in a local file-sharing share of 58%, compared to only 6% in the real world today.

"Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW Research Assistant Professor of Computer Science and Engineering. "At the same time, speeds are increased by about 20%." 

Overall, the researchers said that the "experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications." 

The research group said that a DCIA-facilitated working group formed last year to explore P4P now includes more than 80 members, including representatives from all the major US Internet service providers and many companies that supply content.

In order to be implemented into the current Internet infrastructure, the researchers said that the P4P system requires ISPs to provide a number that acts as a weighting factor for network routing. 

That means cooperation between the ISP and the file-sharing host. If P4P can keep its promise, it is without doubt a welcome technology and an idea that came at the right time. 

P4P in fact looks like a fantastic idea and may buy ISPs some time before a major network upgrade becomes necessary - or bandwidth volumes are capped.

Abacast's Hybrid CDN Service Selected by NextMedia 

Abacast, an industry leader in hybrid peer-assisted content delivery network (CDN) services, whose President, Mike King, also leads the LIVE P2P Sub-Group of the P4P Working Group (P4PWG), has announced that NextMedia Radio Group has selected Abacast's Hybrid CDN Service for deployment across its 27 stations. 

Abacast's services will be used for hosting, unicast, and peer-assisted streaming, ad management and insertion, audience analytics, and user experience presentation. 

NextMedia, a national media company, operates in twelve rated and suburban markets and has an established track record for high levels of customer satisfaction and performance. 

Abacast's comprehensive, robust end-to-end solution allows NextMedia to provide a high-quality service for its customers and to do so extremely efficiently.

"I have found that Abacast has gone above and beyond to earn our business," said Brian Foster, VP of NTR for Next Media. "Our customized player started out as a simple sketch on an Excel spreadsheet. Abacast's industry knowledge, including revenue optimization and user-experience innovation, enabled them to build a differentiated, world-class player that is critical to our online radio business."

"They worked very hard to implement every piece of functionality that we dreamed up. When our radio markets have a question, they usually get a response in the same day. That is critical to me." 

Abacast's solution for NextMedia Radio enables an improved Internet radio business model and customer experience, while allowing customization for each local market. 

Abacast developed customizable players for each online station, complete with unique station branding and ad management, all integrated with Abacast's unicast and peer-assisted delivery. 

"Our eight years experience serving the radio industry with end-to-end solutions really came through in working with NextMedia Radio and winning their business," said Michael King, President of Abacast. "We're excited that they chose us, and we'll continue to try to move the online radio business forward with branding, business-model, and delivery innovations."

P4P Aims to Solve Bandwidth Challenges

Excerpted from CircleID Report

Two professors from the University of Washington (UW) and Yale University, presenting at a conference in Seattle this week, described a new and faster data transfer technology across the Internet. Professors Arvind Krishnamurthy and Richard Yang believe their technology offers a better solution to current challenges facing broadband providers.

Their algorithm, called P4P or "local file-sharing," finds the shortest path across the Internet by tracking users' locations - improving download speeds by about 20% while also reducing bandwidth requirements.

Following is an abstract from their discussion:

"As P2P emerges as a major paradigm for scalable network application design, it also exposes significant new challenges in achieving efficient and fair utilization of Internet network resources.

Being largely network-oblivious, many P2P applications may lead to inefficient network resource usage and/or low application performance.

In this paper, we propose a simple architecture called P4P to allow for more effective cooperative traffic control between applications and network providers.

We conducted extensive simulations and real-life experiments on the Internet to demonstrate the feasibility and effectiveness of P4P.

Our experiments demonstrated that P4P either improves or maintains the same level of application performance of native P2P applications, while, at the same time, it substantially reduces network provider cost compared with either native or latency-based localized P2P applications."

PeerApp Caching Alleviates Upstream P2P Traffic

PeerApp, an industry leader in intelligent media caching and content acceleration solutions for the Video Internet, whose VP of Business Development, Eliot Listman, Co-Chairs the CACHING Sub-Group of the P4P Working Group (P4PWG), recently demonstrated its upstream caching technology and how it can be deployed as a network optimization tool. PeerApp participated in the CableLabs Innovation Showcase, a special event where companies present and demonstrate new, innovative technologies. The event was attended by more than 250 top cable technology and strategy executives.

While P2P applications can consume as much as 30%-50% of downstream traffic on a cable network, it is not uncommon for P2P applications to consume as much as 70%-80% of upstream traffic, impairing the performance of other applications.

The PeerApp UltraBand caching solutions are deployed such that P2P file requests to MSO subscribers from local or remote Internet users are served at the edge of the network.

P2P requests are satisfied by the UltraBand as it delivers the requested files to the Internet users. This eliminates unnecessary consumption of upstream bandwidth, reducing congestion and network upgrade costs while still allowing subscribers to make proper use of increasingly popular P2P applications.
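
As a rough sketch of the idea - with hypothetical names, not PeerApp's actual API - an edge cache only has to pull a given file across a subscriber's uplink once:

    # Sketch of edge caching: the first external request for a file
    # crosses the subscriber's upstream link; every later request for
    # the same content is answered from the cache at the network edge.
    cache = {}  # content hash -> file bytes

    def serve_request(content_hash, fetch_from_subscriber):
        """Answer a P2P request at the edge, filling the cache on a miss."""
        if content_hash not in cache:
            cache[content_hash] = fetch_from_subscriber(content_hash)
        return cache[content_hash]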

PeerApp's unique implementation of upstream caching operates only on existing P2P sessions or requests, guaranteeing upstream relief.

Unlike "super peer" implementations, PeerApp's patented technology does not create new or additional P2P sessions that can consume bandwidth and have an adverse affect on the network.

The key benefits to cable broadband network operators include more symmetrical operation of upstream/downstream subscriber access networks, operation of P2P applications without affecting other applications on the network, and a network optimization tool, based on caching techniques which are protected by law under the Digital Millennium Copyright Act (DMCA).

Building a Better P2P

Excerpted from DSLreports Report

The P4P Working Group (P4PWG), a DCIA-sponsored coalition of most major ISPs, researchers, Pando Networks, and other leading P2P software distributors, is working on a more efficient P2P protocol that saves transit time by only serving file parts from local peers to reduce hops.

Pando and the P4PWG believe they can speed up P2P transfers by as much as 235% across US cable networks and up to 898% across international broadband networks.

In Verizon tests, Pando increased the percentage of data routed internally across their networks from 2.2% to 43.4%, which they claim reduced inter-ISP data transfers by an average of 34% (up to 43.8% in the US and 75.1% internationally).

The project got renewed attention this week as researchers from the University of Washington (UW) and Yale University released additional data from project tests at a presentation in Seattle.

While the possibilities of the technology are promising, questions remain concerning how this will be implemented. Will ISP partners (AT&T, Verizon, and Comcast are involved in testing) charge customers more for prioritized P2P? Will the client source code be published? Will the system come with anti-piracy provisions and, if so, will it create an ISP gatekeeper situation to wall off "non-sanctioned" P2P content?

MONA Launches English Version P2P Software

Beijing, China-based MonArc Corporation (MONA) is pleased to announce that an English version of the company's PP2008 software is in the final stages of development.

PP2008 is a proprietary P2P software platform developed by PP365.com. The latest PP2008 adopts an independently developed P2P protocol, which is compatible with HTTP/FTP, BitTorrent, and eMule protocols.

The Chinese version of PP2008 was successfully launched earlier this year. The updated software has more than 100,000 daily users, and more than 20,000 simultaneous online users, a significant improvement over the last generation software.

MONA CEO Yong Chen said, "We are finalizing criteria regarding copyright issues in Western markets, and nearing the completion of the debugging process of the software itself. We are targeting a launch date of 60-to-90 days. This software has the potential to significantly 'move the yardstick' for the company."

Can P4P Solve Bandwidth Bloat?

Excerpted from GigaOM Report by Stacey Higginbotham

Researchers at The University of Washington (UW) and Yale University presented a paper this week on a developing Internet protocol that could lessen bandwidth demands from video and other large files.

The P4P protocol is being touted by Pando Networks and several ISPs as a way to solve some of the traffic problems caused by P2P file-sharing services such as BitTorrent.

Compressing or managing data more efficiently is becoming increasingly important, as providers attempt to clamp down on large amounts of traffic and as consumers and corporations demand ever more bandwidth-intensive applications.

Like P2P, the P4P protocol breaks files up into smaller packets, sends those around the Internet and then reassembles them at a destination, but P4P tracks the most efficient point in the networks from which to swap those files.

This involves the ISPs handing over information about their network topology and knowing where a file sharer sits on the network. P4P makes it possible to know these things without exposing the data to either side.
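
A minimal sketch of the chunking that both P2P and P4P rely on is below; the 256 KB piece size and helper names are illustrative assumptions:

    import random

    CHUNK_SIZE = 256 * 1024  # assumed piece size

    def split(data):
        """Cut a file into indexed pieces that can travel separately."""
        return list(enumerate(data[i:i + CHUNK_SIZE]
                              for i in range(0, len(data), CHUNK_SIZE)))

    def reassemble(pieces):
        """Rebuild the file from pieces received in any order."""
        return b"".join(part for _, part in sorted(pieces))

    pieces = split(b"x" * 600_000)
    random.shuffle(pieces)  # pieces arrive out of order
    assert reassemble(pieces) == b"x" * 600_000

P4P's contribution is not the chunking itself but the choice of which sources those pieces come from.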

In tests with Verizon, Pando showed that by using P4P it could increase delivery speeds by up to 235% on US cable networks and reduce inter-network traffic by 34%.

On their own, such protocols can help, but they won't stave off the need to build out more network capacity or make existing protocols more efficient.

However, as carriers cry uncle under heavy loads that file sharing and P2P services put on their networks, seeking tiered services, bandwidth caps, and other practices to handle the bandwidth hogs, an industry-blessed protocol may be welcomed.

CD Baby and RightsFlow Enter Licensing Partnership

CD Baby, one of the largest sellers of independent music in the world, and RightsFlow, a one-stop solution for mechanical and digital phonorecord delivery (DPD) licensing and royalty processing for US distributions, have entered into an outsourced licensing partnership.

This relationship allows the more than 240,000 artists worldwide that CD Baby distributes in the US to license, account, and pay music publishers for download sales through online retailers including Apple iTunes and other digital download sellers like Amazon MP3, Napster, and Rhapsody.

"Our clients look to us to provide turnkey solutions for mechanical and DPD licensing and royalty payouts for US publishing," said Patrick Sullivan, President & CEO of RightsFlow. "We are glad to be partners with CD Baby and excited to provide this service to the artists it distributes."

With independent digital music sales continuing to grow each quarter, music distributors as well as labels are in need of a simplified solution to license, account, and pay publishing royalties for sales in the US.

"Partnering with RightsFlow makes it super easy for us to offer licensing services to our artists," said CD Baby President Brian Felsen. "In fact, artists can pay their mechanical and DPD licenses through this offering, making it both simple and cost-effective and easy for the artist."

RightsFlow offers outsourced music publishing licensing and royalty solutions to both record labels and online music retailers. These services provide a complete end-to-end solution for research and licensing of publishing, clearances, license administration, royalty calculation, and accounting to publishers. RightsFlow's solutions relieve the burden of music licensing and royalty administration through direct experience in developing and running bulk licensing departments.

"Without proper licensing and royalty services, many artists and labels are unable to position content for sale in the United States, particularly those labels based outside the US," said Ben Cockerham, Vice President of Operations at RightsFlow. "RightsFlow's scalable solutions are designed to directly facilitate increased sales of content through US channels."

Local Sharing Saves Bandwidth in P4P Tests

Excerpted from Slyck.com Report by Thomas Mennecke

It's no secret by now that Internet traffic creates a lot of bandwidth issues. There's only so much infrastructure to accommodate an insatiable population.

If you believe the ISPs and network bandwidth management companies, you'll also believe that file-sharing protocols such as BitTorrent, Gnutella, eDonkey2000, and Usenet make up a majority of Internet traffic.

It's not outside the realm of reason to believe that file-sharing technology has become the supreme communications medium of the Internet. For all its faults, it remains the best method for transferring files both large and small. BitTorrent and Usenet are best for large files, while Gnutella does a good job for small MP3s. 

As broadband becomes more commonplace in US households, more people are sharing larger files. So it stands to reason that ISPs are seeing an impressive percentage of their bandwidth used by file sharing. 

The average broadband speed in the US is about 6 megabits per second. That equates to approximately 750 kilobytes per second. In other words, for every second that goes by, the most users can hope to download is 75% of a megabyte. That's a full-size MP3 in about ten seconds or a full 750-megabyte XviD movie in well under twenty minutes.
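
A quick check of that arithmetic:

    # At 6 Mbps, a connection moves about 0.75 megabytes per second.
    def transfer_seconds(size_megabytes, link_mbps=6):
        return size_megabytes / (link_mbps / 8.0)  # bits -> bytes

    print(transfer_seconds(7.5))       # ~10 seconds for a full-size MP3
    print(transfer_seconds(750) / 60)  # ~17 minutes for a 750 MB movie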

Granted, those times aren't very bad. But that's taking into account optimal conditions, which largely exist on Usenet and private BitTorrent trackers. For the rest of the Internet community, that's when bandwidth bottlenecks start to rear their ugly heads. Since catching up with Japan's 60 megabit per second bandwidth average still remains in the future for the US, new and innovative alternatives have emerged throughout the history of P2P.

Of particular note is P2P caching. P2P caching requires an ISP to keep a caching server which stores the most frequent search requests. For example, if "A Great Song.mp3" is popular with the file-sharing community, a peer caching server will store this request. The next time a user wants to download this file, it will come from the caching server rather than someone outside the ISP's network.

Rerouting traffic so that file transfers stay within the network keeps bandwidth down - and correspondingly, the cost. The amount of bandwidth available to the end-user remains the same. Rather, the ISP keeps external bandwidth from other ISPs, and the associated connection charges, at bay.
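
As a rough sketch - with hypothetical names and an assumed popularity threshold, not any vendor's product - a caching server of this kind might behave like this:

    from collections import Counter

    CACHE_AFTER = 3  # assumed popularity threshold
    request_counts = Counter()
    cache = {}

    def download(name, fetch_external):
        """Serve from the ISP's cache when possible; cache popular files."""
        request_counts[name] += 1
        if name in cache:
            return cache[name]       # stays inside the ISP's network
        data = fetch_external(name)  # crosses to another network
        if request_counts[name] >= CACHE_AFTER:
            cache[name] = data       # popular enough to keep locally
        return data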

P4P, the new file-sharing buzzword, has similar ambitions to P2P caching. Like P2P caching, the goal of P4P is to keep bandwidth local and avoid the costs involved with files transferring into and outside of the network.

However, instead of caching servers, the idea takes a distributed approach. It requires cooperation and communication between the ISP and the file-sharing client. The ISP communicates to the file-sharing client the path of least bandwidth resistance and keeps traffic within its network. 

According to a recent test by researchers at the University of Washington (UW) and Yale University, keeping P2P traffic local and off the major arteries greatly improved bandwidth allotment and completion rates for BitTorrent and other P2P traffic. 

The study notes that P2P applications are "network oblivious," meaning that clients don't care where their information comes from - just as long as the information is obtained. P4P hopes to change that. 

Each ISP would maintain an "iTracker," which would keep track of network congestion and stay in contact with the P2P client when a file request is made. 

Once LimeWire is ready to download "A Great Song.mp3," the ISP's iTracker will tell the client where a local version of that song is, and the download will begin. 
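
A minimal sketch of that exchange, using illustrative data structures rather than the actual iTracker interface:

    # The tracker knows which peers hold each file, which of them are
    # inside the ISP's network, and how congested each path is.
    SWARM = {"A Great Song.mp3": ["peer-local-1", "peer-local-2", "peer-remote-9"]}
    LOCAL_PEERS = {"peer-local-1", "peer-local-2"}
    CONGESTION = {"peer-local-1": 0.2, "peer-local-2": 0.7, "peer-remote-9": 0.3}

    def itracker_lookup(filename):
        """Return the least-congested in-network source for a file."""
        holders = SWARM.get(filename, [])
        local = [p for p in holders if p in LOCAL_PEERS]
        pool = local or holders  # fall back if nothing is local
        return min(pool, key=CONGESTION.get) if pool else None

    print(itracker_lookup("A Great Song.mp3"))  # -> peer-local-1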

The theory is, since the ISP provided a direct, short-distance route for the transfer, bottlenecking will be greatly alleviated. According to the study's simulations, the idea seems to have merit. 

"Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW Research Assistant Professor of Computer Science and Engineering. "At the same time, speeds are increased by about 20%." 

The study is well documented, with scientific evidence indicating that P4P technology indeed has tremendous potential.

Joost Warns on Kangaroo Competition

Excerpted from Digital Spy Report by James Welsh

Joost has warned that Project Kangaroo, the commercial video-on-demand (VOD) venture from BBC Worldwide, ITV, and Channel 4, could limit competition for VOD, program syndication and online advertising in Britain. 

In a submission to the Competition Commission's ongoing investigation into the proposed service, Joost said that Kangaroo would "immediately consolidate the market share" of streaming on-demand video services operated by the BBC, ITV, and Channel 4. 

"While Joost welcomes fair competition," it said, "Joost has a particular concern over the marketing and cross-promotional opportunities available to the joint venture. The joint venture's shareholders represent the UK's three most significant broadcasters in a unique position, with privileged access to analog terrestrial spectrum, and gifted spectrum on digital terrestrial." 

It added that "it is inconceivable that the joint venture and its shareholders will not fully exploit this cross-promotional opportunity, which far exceeds anything available to Joost or any other service struggling to get a foothold in the UK market." 

On concerns regarding program syndication, Joost said that despite having "many conversations" with the three broadcasters over distributing their in-house content, "the parties have been unable to conclude any arrangement, and as far as we are aware, licensed full-length episodes of BBC, ITV, or Channel 4 programs do not currently appear on any other online video service." 

With regard to advertising, Joost noted that the OFT and Ofcom were sufficiently concerned about the advertising clout of ITV alone to impose the contract rights renewal remedy at the time of the ITV plc-creating merger between Carlton and Granada. That mechanism, however, is being reviewed by Ofcom and the OFT. 

The Commission expects to announce its decision about Kangaroo in January.

Spare Some Bandwidth

Excerpted from MIT Technology Review Report by Mason Inman

Internet access is growing steadily in developing nations, but limited infrastructure means that at times connections can still be painfully slow. A major bottleneck for these countries is the need to force a lot of traffic through international links, which typically have relatively low bandwidth.

Now computer scientists in Pakistan are building a system to boost download speeds in the developing world by letting people effectively share their bandwidth. Software chops up popular pages and media files, allowing users to grab them from each other, building a grassroots Internet cache.

In developed countries, Internet service providers (ISPs) create web caches - machines that copy and store content locally - to boost their customers' browsing speeds.

When a user wants to view a popular website, the information can be pulled from the cache instead of from the computer hosting the website, which may be on the other side of the planet and busy with requests.

Similar services are offered by content distribution companies such as Akamai, based in Cambridge, MA. High-traffic sites pay Akamai to host copies of their content in multiple locations, and users are automatically served up a copy of the site from the cache closest to them.

In countries like Pakistan, Internet connections are generally slow and expensive, and few ISPs offer effective caching services, limiting access to information - one reason why the United Nations has made improving Internet connectivity worldwide one of its Millennium Development Goals.

None of Pakistan's small ISPs cache much data, and traffic is often routed through key Internet infrastructure in other nations.

"In Pakistan, almost all the traffic leaves the country," said Umar Saif, a computer scientist at the Lahore University of Management Sciences (LUMS). That's the case even when a Pakistani user is browsing websites hosted in his or her own country. "The packets can get routed all the way through New York and then back to Pakistan," Saif says.

So Saif's team at LUMS is developing DonateBandwidth, a system inspired by the BitTorrent P2P protocol that is popular for trading large music, film, and program files. With BitTorrent, people's computers swap small pieces of a file during download, reducing the strain placed on the original source.

DonateBandwidth works in much the same way but lets people share more than just large files. When users try to access a website or download a file, a DonateBandwidth program running on their machine checks first with the P2P cache to see if the data is stored there.

If so, it starts downloading chunks of the file from peers running the same software, while also getting parts of the file through the usual Internet connection. The software could allow people in countries that have better Internet connections to donate their bandwidth to users in the developing world.
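
As a rough sketch of that split download - the names are illustrative, since the software is not yet public:

    def fetch_file(url, chunk_ids, peer_cache, http_get):
        """Assemble a file from peer-held chunks plus direct download."""
        parts = {}
        for cid in chunk_ids:
            if cid in peer_cache:
                parts[cid] = peer_cache[cid]     # donated bandwidth
            else:
                parts[cid] = http_get(url, cid)  # slow international link
        return b"".join(parts[cid] for cid in chunk_ids)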

DonateBandwidth also manipulates an ISP's cache. "Say a person with a dial-up connection wants to download a file," Saif says. "When running DonateBandwidth, their computer starts downloading part of a file, while also sending a request for other DonateBandwidth users who have access through the same ISP, and whose computers have spare bandwidth, to trigger them to start downloading other parts of the same file." The file is then loaded into the ISP's cache, so it can be downloaded more quickly.

Saif compares the project to distributed computing schemes such as SETI@Home, which uses volunteers' spare computer power to collaboratively analyze radio signals from space, looking for signs of intelligent life. "DonateBandwidth permits sharing of unused Internet bandwidth, which is much more valuable in the developing world, compared to computing cycles or disk space," he says.

The more people who use DonateBandwidth within the same country, the more websites and files could be cached, freeing up the international link. In the developed world, "typical bandwidth savings due to caching are around 30 to 40%," Saif said.

The program is not publicly available yet, but Saif's team is currently testing a proof-of-concept version and will collaborate with Eric Brewer and colleagues at the University of California, Berkeley, to implement it in Pakistan.

"In Pakistan and a lot of developing countries, they are building a good local network, but the international network is not very good," says computer scientist Saman Amarasinghe of MIT. "Having a system like what Saif proposed is very valuable."

"Misconfiguration of caches rings true with our experiences in Kenya and Ghana," adds Tariq Khokar of Aptivate, a nonprofit group in Cambridge, UK, that works on improving connectivity in developing countries. "I doubt anybody outside of a developing country would have come up with DonateBandwidth."

Aptivate created another system, called Loband, that strips photos and formatting from Web pages to make them load faster for users in developing countries. "Loband helps with bandwidth but not latency," Khokar says, but "having content cached in country means the latency associated with an international hop is eliminated."

Blacksmith Presents at World Peace Benefit Concert

This Monday, August 25th, starting at 9:30 PM, DCIA Member Blacksmith, founded by singer-songwriter Al Smith, will perform live at a world peace and unity benefit concert at the Cutting Room in New York, NY.

Tickets are ten dollars and proceeds will go to We the World, a not-for-profit organization that has worked for ten years as an agent for social change in the political, environmental, and cultural fields, while also educating the public and raising awareness.

Al Smith is an artist who can not only write and sing, but also play his own instruments and deliver a great live show. He embodies the opposite of the hip-hop movement that has dominated the music scene for the past twenty years.

In the words of music manager and agent Sam McKeith, "In my thirty-plus years in the entertainment industry, I've been involved with the Rolling Stones, Sly, the Temptations, and Springsteen.

"If Al Smith had just come along at a slightly different time, there's no doubt he would be included in that list of the great and the famous."

Al will also be appearing at 2008 Ecofest on September 28th at Lincoln Center.

Scooter Scudieri Debuts Live Multimedia Performance

Independent artist, DCIA Member, and Songwriters Hall of Fame Award-winning writer Scooter Scudieri, whose fans have dubbed him the "Internet's First Rock Star," plans to take his music to the next level with Rattle to Rifle - built around a live performance of his new nineteen-song CD.

The new multimedia presentation will debut on Saturday September 13th at the Timber Frame Theater in Shepherdstown, WV.

Inspired by a volatile mix of religion and politics, the performance gives voice to the children of war. The seventy-five minute show incorporates giant puppets, paintings, video projection, drones, triggers, and a live drummer in a swirling spectacle of political activism. Intended for mature audiences.

"The evolution of music in the digital age is bringing a renewed emphasis on performance. Enhanced presentation through artistic multimedia in addition to viral marketing of music will drive people to live events," said Scudieri.

"The music will not become secondary to the performance; it will become complimentary. With free and ad-supported downloads continuing to grow in the expanding P2P space, the age-old idea of 'seeing is believing' will drive a new industry of multimedia events like we have not seen since the days of KISS, David Bowie, and Pink Floyd."

"The result, in my opinion, will be an increase in ticket sales for live events, an increase in sales of merchandise, and ultimately an increase in both digital music purchases and CDs," he added.

Olympics Committee Wants Help from Piratebay.org

Excerpted from ITProPortal Report

The International Olympic Committee (IOC) is looking to the Pirate Bay for help in eradicating BitTorrent indexes that point to clips from the Beijing Olympics.

According to newswire reports, the IOC has approached the Swedish government for assistance in stopping the Pirate Bay site from distributing copyrighted material from the opening ceremony of the Beijing Games. The Pirate Bay uses several Sweden-based servers. 

Justice Minister Beatrice Ask has been quoted as saying that, while she understood the IOC's reaction, there were "slim chances" of tackling the issue.

The Pirate Bay backers have said the site is a non-profit group and does not store copyrighted material but only offers a search engine service for users who exchange music, films, and computer games.

Coming Events of Interest

International Broadcasting Convention - September 11th-16th in Amsterdam, Holland. IBC is committed to providing the world's best event for everyone involved in the creation, management, and delivery of content for the entertainment industry. Uniquely, the key executives and committees who control the convention are drawn from the industry, bringing with them experience and expertise in all aspects. DCIA Member companies are exhibiting.

Streaming Media West - September 23rd-25th in San Jose, CA. The only show that covers both the business of online video and the technology of P2PTV, streaming, downloading, webcasting, Internet TV, IPTV, and mobile video. Covering both corporate and consumer business, technology, and content issues in the enterprise, advertising, media and entertainment, broadcast, and education markets. The DCIA will conduct a P2P session.

PopKomm - October 8th-10th in Berlin, Germany. The international music and entertainment business trade show, conference, and festival. Decisive developments within the business. Think forward: for three days, experts will be appraising and voicing their opinions on creation, communication, and commerce. Over 400 showcase performances.

P2P & MUSIC CONFERENCE - October 10th in Berlin, Germany. The DCIA proudly presents an all-new day-long conference within PopKomm, focused totally on P2P solutions for the music industry. How to protect and monetize musical content in the steadily growing P2P marketplace.

Spirit of Life Award Dinner - October 15th in Santa Monica, CA. The City of Hope Music and Entertainment Industry Group will award the 2008 Spirit of Life Award to Doug Morris. Dinner packages and advertising information can be obtained through Mary Carlzen at 213-241-7328.

Digital Hollywood Fall - October 27th-30th in Santa Monica, CA. With many new sessions and feature events, DHF has become the premiere digital entertainment conference and exposition. DCIA Member companies will exhibit and speak on a number of panels.

P2P MEDIA SUMMIT LV - January 7th in Las Vegas, NV. This is the DCIA's must-attend event for everyone interested in monetizing content using P2P and related technologies. Keynotes, panels, and workshops on the latest breakthroughs. This DCIA flagship event is a Conference within CES - the Consumer Electronics Show.

Copyright 2008 Distributed Computing Industry Association