May 5, 2008
Volume XXI, Issue 12
Don't Miss the P2P MEDIA SUMMIT LA
Come to the Renaissance Hollywood Hotel this Monday May 5th for what promises to be a very stimulating and worthwhile P2P MEDIA SUMMIT LA.
You'll be able to witness firsthand the dynamic and transformative relationships now being forged between Internet service providers (ISPs) and peer-to-peer (P2P) companies, as well as promising new initiatives on the horizon with content providers.
Morning keynotes include industry-leading broadband network operators and major P2P developers and distributors.
Our conference luncheon speaker will be the Motion Picture Association of America's (MPAA) Bob Pisano, and the afternoon sessions will be focused on Hollywood and P2P interests.
The conference will close with a very special session from Microsoft's See-Mong Tan, followed by a VIP networking cocktail reception.
Please click here to register now, or simply bring a check or credit card with you Monday morning. The continental breakfast begins at 8:00 AM.
P2P Best Practices Effort Moves Ahead
Excerpted from Telecommunications Report Daily Report by Lynn Stanton
The Distributed Computing Industry Association (DCIA), whose Members include P2P and social networking software providers, ISPs, and content providers, said today that it would facilitate the creation of a new P2P Best Practices Working Group (PBPG) by June, with the intent of finishing work "well before the end of the year."
Participation in the DCIA best practices initiative will be open to all interested parties. A statement of purpose for the initiative will be presented Monday at DCIA's P2P MEDIA SUMMIT LA in Los Angeles and recruitment of project participants will proceed, the group said.
"Preliminary objectives of the P2P Best Practices initiative include helping ensure responsible, safe use of P2P services by consumers, establishing general best practices for P2P clients and ISPs, and further advancing the efficiency of P2P performance on broadband networks," the DCIA said.
Responding to the DCIA announcement, Verizon Communications said that it "supports and has worked with DCIA Members to launch this important initiative. We are also engaged with the ongoing P4P Working Group (P4PWG) to improve how P2P works on networks. Best practices efforts like this are very important."
A spokesman for AT&T said that as a DCIA Member, AT&T is looking forward to continuing its "cooperative relationship with this important industry forum to move the P2P Best Practices Initiative to a successful completion."
The plan to develop a P2P "bill of rights and responsibilities" (BRR) announced by Comcast and Pando Networks last month will be part of this larger effort to develop P2P Best Practices, a Comcast spokeswoman said.
The DCIA Welcomes TVU Networks
Please warmly welcome TVU Networks to the Operations Group. We look forward to providing valuable services to this newest DCIA Member company and supporting its contributions to commercial development of the distributed computing industry.
TVU Networks launched its live peer-to-peer television (P2PTV) service in May 2006, and to date, its TVUPlayer has amassed a community of over 18 million viewers in 200 countries, comprising 11 million monthly viewing hours of long-form video content on its 300+ channels.
Founded by Paul Shen, creator of the CherryPicker product, author of cue-tones, principal developer of the HDTV ATSC standard, and Chairman of the MPEG-4 Syntax Group, TVU Networks offers real-time P2P technology and a complete broadband TV platform.
Its real-time P2P protocol based on proprietary algorithms provides scalability to an unlimited number of channels and viewers, and has successfully sustained 150,000 simultaneous viewers on a channel on a single broadband connection.
First to deliver P2P synchronized live video transmissions, and first to deliver automatic cue-tone based real-time in-stream video ad insertions, the TVU platform offers a turnkey integrated solution for broadcasters with a full suite of monetization tools. TVU's content management system features rights management, targeted in-stream ad insertions, subscription and pay-per-view capabilities, billing and payment with full reporting and analytics.
These monetization components are fully operational today, and the TVU service is successfully integrated with ad networks such as Tremor Media, Valueclick, and Broadband Enterprises. TVU's live Internet coverage of the Federer vs. Sampras tennis match from Madison Square Garden last month featured in-stream video ads from HP, Rolex, and NetJets.
TVU expects 14 million more installs this year through its partnership with HP, where TVUPlayer ships bundled with all HP Pavilion laptops. With expected organic and viral growth in addition to that, the company is forecasting a community of 40 million viewers by year-end.
Report from CEO Marty Lafferty
We very much hope you'll attend P2P MEDIA SUMMIT LA this Monday May 5th at the Renaissance Hollywood Hotel. This event promises to be our most memorable conference to date. Please click here to register now or simply come to the DCIA registration desk Monday morning at 8:00 AM.
Our opening remarks will include information about the new P2P Best Practices initiative, including its purpose, process, and how to participate.
Next, the morning agenda will feature a special session dedicated to the P4P Working Group (P4PWG), which will explore the phenomenal efforts that this group now has underway.
Co-Chairs Doug Pasko of Verizon Communications and Laird Popkin of Pando Networks, along with P4P principal researcher Haiyong Xie of the Yale University Laboratory of Networked Systems (LANS) will discuss the strategically critical issues of network resources - reducing bandwidth usage and improving P2P throughput.
Questions that they will address include: What are the mission, objectives, history, and status of the P4PWG? What tests have been conducted to date, and what have the results shown? What are the next steps for the P4PWG? How does the P4PWG plan to move from testing to standards setting and best practices? How can interested parties get involved?
Morning keynotes will include industry-leading broadband network operators and major P2P developers and distributors, such as AT&T's KK Ramakrishnan, BitTorrent's Eric Klinker, Comcast's Rich Woundy, Kontiki's Bill Wishon, and Vuze's Gilles BianRosa.
The solutions development panel will investigate technology advancement - creating the commercial P2P ecosystem. Jeff Anker of Oversi, Caesar Collazo of iWatchNow, Michael King of Abacast, Jonathan Lee of ARTISTdirect, Eliot Listman of PeerApp, and Jeffrey Payne of GridNetworks will participate.
What architectural, content acceleration, caching, and other technological solutions are now in development that will optimize P2P deployment for the benefit of all participants in the distribution chain? How are content delivery networks (CDNs) exploiting P2P and what issues remain? Can P2P streaming technology help broadcasters and content providers overcome the limitations of live webcasting?
Keynoters Perry Wu of BitGravity and David Rice of Move Networks will complete our morning agenda with a look at advanced solutions for content delivery networks (CDNs).
Our conference luncheon speaker will be the Motion Picture Association of America's (MPAA) Bob Pisano, and the afternoon sessions will be focused on the equally kinetic environment now involving Hollywood and P2P interests.
After-lunch keynotes will include TVU Networks' Dan Lofgren and LimeWire's George Searle. Following their presentations, our focus will turn to distribution strategies.
The MPAA's Fritz Attaway, Paramount Pictures' Derek Broes, TAG Strategic's Ted Cohen, MediaPass Network's Daniel Harris, Brand Asset Digital's Joey Patuleia, and RightsFlow Entertainment Group's Patrick Sullivan, will address this emerging channel from the perspective of artists and rights holders - harnessing P2P for content creators.
What has been the experience to date of professional and user-generated content (UGC) providers who have embraced P2P? What changes do they need to more effectively exploit file-sharing and related technologies? Which business models are showing the most promise? Are there innovative art forms or packaging approaches in development for the P2P distribution channel? How should P2P relate to other distribution channels?
Next, Manatt's Jeff Biederman and Patrick Sabatini will conduct a strategic session focused on the issues of rights negotiations with content providers for copyrighted works to be distributed in the P2P distribution channel, followed by KlikVU's Lowell Feuer.
This discussion will continue as the next panel addresses content licensing and protection. Chris Gillis of MediaDefender, Vance Ikezoye of Audible Magic, Mark Isherwood of Rightscom, Tom Patterson of DigitalContainers, Leslie Poole of Javien Digital Payment Solutions, and Reed Stager of the Digital Watermarking Alliance (DWA) will explore acquisition and accountability - affiliating with rights holders.
What are the business strategy and licensing issues that must be addressed in order to distribute copyrighted works using P2P? How can the industry ensure that the benefits of P4P and similar mechanisms are applied to authorized content distribution? What do participants at various levels of this channel need to do to gain support of rights holders? Which identification and filtering techniques (e.g., watermarking and/or fingerprinting) should be used to protect content and enhance the ecosystem?
Our afternoon keynotes will include Unlimited Media's Memo Rhein, HIRO Media's Ronny Golan, and Pando Networks' Robert Levitan.
The last panel of the afternoon will focus on the bottom-line: P2P traffic monetization. Manatt's Bill Heberer, Ultramercial's Dana Jones, Jambo Media's Rob Manoff, Beat9.com's Jay Rifkin, Wingman Media's David Shor, and YuMe Networks' Rosanne Vathana will cover revenue generation - executing ad-supported and consumer-paid models.
What do sponsors and advertising agencies need from P2P and social networks in order to monetize the enormous traffic that they generate? How should these networks organize their inventory of advertising and sponsorship availabilities to maximize value? What is the relative worth of the different formats and relative interactivity that this channel can support? Beyond CPM and click-through payment regimes, what are other opportunities?
Next, Motorola's John Waclawsky will provide a vision of the future of P2P, and the conference will close with a very special session from Microsoft's See-Mong Tan, followed by a VIP networking cocktail reception.
Again, we look forward very much to this opportunity to meet with you and together to learn from the people who are literally making history in the very fast-moving world of commercial development of P2P. Share wisely, and take care.
No Need for New Internet Regulation
Excerpted from TelecomTV Report by Martyn Warwick
Kevin Martin, the Chairman of the US Federal Communications Commission (FCC) says there is no need for any new regulation of the Internet.
Speaking at a Senate Commerce Committee hearing, the FCC Chairman told the panel, "I do not believe any additional regulations are needed at this time," and emphasized that the agency has recently taken enforcement action against ISPs suspected of deliberately throttling P2P traffic on their networks.
In the recent past the FCC has held two hearings to debate issues relating to network management. Mr. Martin maintains that the agency has sufficient powers to police US ISPs and that the FCC is effective in investigating complaints on a case-by-case basis.
The Senate hearings are taking place at a time when the thorny industry issue of "network neutrality" (basically the principle that people should be able to go where they choose on the Internet, when they choose, without any interference from network owners) has made it into the daily papers and into mainstream media.
The argument has become a party political issue and has divided Congress, with Democrats predominantly in favor of ensuring that all ISPs should legally be compelled to treat Internet traffic equally while most Republicans are against any such legislation.
The large US network players such as cable and telecom companies are adamantly opposed to any network neutrality laws, claiming that, if such laws were passed, the nation would be hampered by another unnecessary layer of regulatory superstructure to the detriment of consumers.
Indeed, according to Kyle McSlarrow, the President & CEO of the National Cable and Telecommunications Association (NCTA), the scenario envisaged by the pro-net neutrality lobby is "a complete fantasy."
He insists that "no one is being blocked" because, if they were, they would simply churn away to a competing broadband provider.
MK Capital Acquires P2P Service Kontiki
Excerpted from Washington Post Report by Rafat Ali
VeriSign has sold its P2P content delivery network (CDN) service Kontiki to MK Capital, a VC firm which invested in the P2P company prior to its sale. VeriSign bought Kontiki in 2006, for around $62 million.
VeriSign announced in November last year that it planned to shed certain assets, including its communications, billing, and commerce businesses, though plans for its P2P CDN service were not announced.
Kontiki management explored several alternatives including management-backed buyout options, and that's how its previous investor MK came into the picture.
The buyer was first reported by Contentinople yesterday, which also reported that Kontiki is taking with it about 40 employees in the US and the UK. Eric Armstrong, former VP of Sales, Media, and Entertainment at VeriSign, is taking over as President.
MK Capital recently invested in digital production and management studio Generate.
New Velocix Accelerator Family of Products
Velocix this week launched a free, large-file digital delivery service for start-ups and entrepreneurs looking to deliver video, music, software, or games over the Internet.
The service is the world's first free version of its kind (it would cost around £5k a year traditionally) and it addresses the needs of an important market that, arguably, is not currently being served well.
Velocix cites three clients to show why this service is important and relevant.
Craze Productions is a 100% digital record label, specializing in pushing music into various digital domains. Its catalogs include exclusive rights to songs and videos from the world's top artists. Craze Productions was one of the first to introduce its music video channels on the recently launched Adobe Media Player (AMP).
Fifzine is an innovative new online destination for the creative, cultural, and commercial worlds and is aimed at everyone who is interested in viewing, creating, sharing, and commercially developing the best in creative content. Fifzine is currently beta-testing this community, with a goal of providing a platform for leading edge design, illustration, photography, fashion, art, film, writing, and music.
Uploaded.TV is a next generation social network where users can appear on TV and buy airtime as easily as booking an airline ticket.
Hollywood Taking Sides in Net Neutrality Debate
Excerpted from LA Times Report by Jim Puzzanghera
Hollywood believes that the Internet is the key to its future. But its constituents are again squabbling over how to get there.
As in the recent television writers strike, the major studios are at odds with some members of the creative community over digital distribution. This time it's about a public-policy issue known as "network neutrality."
Some lawmakers, public-interest advocates, and big technology companies are pushing for federal rules that would prevent Internet service providers (ISPs) from blocking or slowing certain content flowing through their high-speed lines. They worry that cable and phone companies could become gatekeepers of the Internet and impede services that threaten their businesses or those of their corporate allies.
Net neutrality is a complicated issue with a wonky name. But as Congress and the Federal Communications Commission (FCC) consider banning discriminatory practices on the Internet, the entertainment industry is starting to take notice - and sides.
Major movie studios and record labels are concerned that net neutrality could eliminate a potential tool for fighting online copyright infringement. Meanwhile, independent artists want to ensure that they can disseminate their work freely.
Hollywood's involvement could elevate the largely inside-the-Beltway debate, which has smoldered since 2006 among online activists, public-interest groups, technology companies, and telecommunications giants.
How lawmakers and regulators deal with the issue could have major implications for Hollywood's battle against infringement and the burgeoning movement by writers, actors, and directors to bypass large media companies by distributing their work online.
"Two years or so ago, people in our industry were still looking at the Internet and saying it's not ready," said Jean Prewitt, President of the Independent Film & Television Alliance. "Now, every day you see new services being launched. The issue has intersected with the marketplace reality."
The Motion Picture Association of America (MPAA), which represents the major studios, says cable and phone companies need the flexibility to stop people from spreading unlicensed copies of movies over the Internet.
"Today, new tools are emerging that allow us to work with Internet service providers (ISPs) to prevent this illegal activity," MPAA head Dan Glickman said in a speech at the ShoWest convention in Las Vegas in March. "But new efforts are also emerging in Washington to stop this essential progress."
File-sharing is a popular way to distribute unauthorized content, but it also is increasingly used to distribute licensed videos.
"If the outcome is the studios will have preferred access for delivering content because of a deal they could get with ISPs, I think that would be a really bad thing for the industry," said Gilles BianRosa, chief executive of Vuze, a Palo Alto, CA-based company that uses a version of BitTorrent technology to let people watch and share videos, music, and games.
µTorrent Doubles User Base
Excerpted from Bit-Tech Net Report by Gareth Halfacree
In news that may come as a shock, popular file-sharing application LimeWire has been overtaken by rival application µTorrent, according to figures released by TorrentFreak last week.
The survey published by the pro-file-sharing group uses data gathered by PC Pitstop on a sample of one million Windows PCs located in North America, and shows that the BitTorrent client µTorrent has enjoyed a doubling of its installed user base since 2007.
While LimeWire still accounts for a huge 37.9% of the file-sharing market, µTorrent has grown to encompass 13.51% - a figure which is rapidly rising. Interestingly, the µTorrent software is more than twice as popular in Europe as it is in the US, with 11.6% coverage compared to 5.1% in America.
Another interesting statistic to come out of the survey is that, according to TorrentFreak, "based on the amount of traffic that is generated by each P2P application, µTorrent would be the absolute winner."
This shows that the plucky little BitTorrent client is the popular choice among the more 'hardcore' data shifters, with LimeWire being the common man's choice for the occasional download.
With more and more companies using BitTorrent as a distribution method for legitimate files, it's a market many software houses will be looking to break into, and µTorrent - owned by the BitTorrent corporation itself - is certainly looking like the package to beat.
Media Talk Shifts from Piracy to Ubiquity
Excerpted from Media Post Report
Media execs love to gripe about the threat of copyright infringement to their business, but the persistence of P2P media distribution has brought on a new call-to-arms: "Ubiquity."
On the web, that means "be accessible everywhere." Earlier this week, News Corp. COO Peter Chernin discussed that need, saying that consumers need to be able to access content at any time, in any way, and at any place they desire - and for a reasonable price. Why? Because that's what they demand. But more than that, Chernin said that making legal content readily accessible at a reasonable price is the only way to combat copyright infringement.
Former Yahoo CEO Terry Semel, who sat in on the panel discussion with Chernin, added, "I don't think anyone's not paying attention to fraud, but the bulk of time is being devoted to where we're going." Semel believes the open distribution strategy adopted by content providers represents a shift in the distribution of content that's as major as when the music biz went digital.
Viral, organic distribution will ultimately help content owners sell more and better ads, he said. Please click here for expanded coverage in Variety.
Babelgum Snares PBS Deal
Excerpted from Hollywood Reporter Report by Stuart Kemp
Free, interactive TV-quality peer-to-peer television (P2PTV) platform Babelgum has struck a worldwide content deal with PBS and plans to showcase "Scientific American Frontiers" to web visitors through the agreement.
Babelgum is now the only P2PTV platform to offer 18 episodes of the series around the world on its dedicated PBS Channel.
Babelgum also will make available the largest online collection to date of free PBS documentaries and nonfiction programming in North America as part of the alliance with PBS.
PBS plans to roll out 100 hours of content on the Babelgum platform, including "Empires" specials, "Medici: Godfathers of the Renaissance," and "Japan: Memoirs of a Secret Empire."
Babelgum CEO Valerio Zingarelli said the PBS deal gives "our platform its mark of approval." Said PBS Ventures SVP Andrew Russell: "PBS is making our content easily accessible on as many platforms as possible."
Zattoo Allows Users to Watch Live Terrestrial TV
Excerpted from ITPro Portal Report by Desire Athowl
There are a number of slightly dodgy looking internet TV services available, but Zattoo is the first one which offers all the programs from the five terrestrial channels online, in addition to 24 other satellite channels.
There's a good mix of international news outlets as well, with broadcasts from France and Russia, plus children's programs and special-interest content like AutoMotoTV and the Poker channel.
Users simply install a 17MB application, register through it, and experience the thrill of online TV anywhere.
Zattoo is available for Linux, Windows, and Mac systems, and users can set up their preferences using the options menu.
Interestingly, it works in a similar way to other services like Joost or Babelgum in that it is a P2P application and is based on the work of Sugih Jamin and Wenjie Wang at the University of Michigan.
Surprisingly, Zattoo has a team of 50 people working on it worldwide and has plans to go big. The service is currently supported by advertising.
New Software Helps ISPs & P2P Users to Get Along
P2P file-sharing services, which connect individual users for simultaneous uploads and downloads directly rather than through a central server, are reported to account for as much as 70% of Internet traffic worldwide.
That level of use has led to a growing tension between Internet service providers (ISPs) and their customers' P2P file-sharing services, and has driven service providers to forcibly reduce P2P traffic, angering subscribers and risking government intervention.
Now researchers at Northwestern University's McCormick School of Engineering and Applied Science have discovered a way to ease that tension: Ono, a unique software solution that allows users to efficiently identify nearby P2P clients.
The software, which is freely available and has been downloaded by more than 150,000 users, benefits ISPs by reducing costly cross-network traffic without sacrificing performance for the user.
In fact, when ISPs configure their networks properly, the software significantly improves transfer speeds - by as much as 207% on average.
Ono, developed by Fabián E. Bustamante, Assistant Professor of Electrical Engineering and Computer Science, and PhD student David Choffnes, has been deployed for the Vuze BitTorrent P2P file-sharing client.
"Finding nearby computers for transferring data may seem like a simple thing to do," said Choffnes, "but the problem is that the Internet doesn't have a Google Map. Every computer may have an address, but it doesn't tell you whether the machine is close to you."
Worse yet, the simplest solution to finding computers that are close to you requires measuring the distance to every single one - an operation that is too costly and time consuming to be practical.
Instead, Ono - Hawaiian for "delicious" - relies on a clever trick based on observations of Internet companies like Akamai (incidentally Hawaiian for "clever"). Akamai is a content delivery network (CDN), which offloads data traffic from websites onto its proprietary network of more than 10,000 servers worldwide.
CDNs such as Akamai and Limelight power some of the most popular websites worldwide and enable higher performance for web clients by sending them to one of those servers within close proximity. Using the key assumption that two computers sent to the same CDN server are likely close to each other, Ono allows P2P users to quickly identify nearby users.
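As a rough illustration of that heuristic - this is a sketch under assumed names, not Ono's actual code or API - each peer could record which CDN edge-server IPs its DNS lookups return, and treat peers whose mappings overlap as likely neighbors:

```python
# Sketch of Ono-style proximity estimation (hypothetical names, not Ono's API).
# Each peer periodically resolves a few CDN hostnames; peers whose lookups
# return overlapping edge-server IPs are assumed to be topologically close.

CDN_HOSTNAMES = ["a128.g.akamai.net", "a1921.g.akamai.net"]  # illustrative

def ratio_overlap(mine: dict, theirs: dict) -> float:
    """Fraction of shared CDN lookups on which two peers saw a common edge server."""
    shared = 0
    compared = 0
    for host in mine:
        if host in theirs:
            compared += 1
            if mine[host] & theirs[host]:   # any edge-server IP in common?
                shared += 1
    return shared / compared if compared else 0.0

def pick_nearby(my_maps: dict, candidates: dict, threshold: float = 0.5):
    """Return candidate peers whose CDN mappings overlap ours enough."""
    return [peer for peer, maps in candidates.items()
            if ratio_overlap(my_maps, maps) >= threshold]

# Toy example: peer "b" resolves to the same Akamai edges we do, peer "c" does not.
mine = {"a128.g.akamai.net": {"23.1.1.9"}, "a1921.g.akamai.net": {"23.1.2.4"}}
candidates = {
    "b": {"a128.g.akamai.net": {"23.1.1.9"}, "a1921.g.akamai.net": {"23.1.2.4"}},
    "c": {"a128.g.akamai.net": {"96.7.7.7"}, "a1921.g.akamai.net": {"96.7.8.8"}},
}
print(pick_nearby(mine, candidates))  # prints ['b']
```

The appeal of this approach is that the measurement work has already been done by the CDN: peers only observe DNS answers they would receive anyway, so no new infrastructure or ISP cooperation is needed.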
Ono is different from other software applications that address the conflict between ISPs and P2P traffic because it requires no cooperation or trust between ISPs and P2P users. Ono is also open-source and does not demand the deployment of additional infrastructure.
Bustamante's Aqualab research group has made Ono publicly available since March 2007 and recently published code that makes it easy to incorporate Ono services into other applications.
"The more users we have, the better the system works, so we're just trying make it easy to spread," says Bustamante.
Hiro-Media's Co-Founders Golan & Napchi
Excerpted from Variety Report by Scott Kirsner
Ariel Napchi and Ronny Golan are trying to put a positive spin on digital rights management (DRM), one of the ideas that can raise hackles among technophiles.
DRM is a system of software "locks" that can prevent a file from being shared - making it hard, for instance, to pass a TV show purchased from Amazon's Unbox service from one person to another the way a DVD can easily be swapped.
"We call our approach Positive DRM," says Napchi, Co-Founder of Hiro-Media, a startup in Tel Aviv, Israel. "We think you should be able to watch a show as many times as you like, pass it to your friends, and put it on file-sharing networks."
Instead of selling the download, the content owner would earn money from advertising integrated into it.
P2P networks have long been seen as an enemy of media companies. (MGM took two of the networks, Grokster and Morpheus, to the Supreme Court in 2005 - and won.) Golan says that in the company's early days, floating the idea of using the networks to distribute officially sanctioned content was unfathomable.
"People almost threw us out of their conference rooms," Golan says.
But more recently, media firms and telcos like NBC and British Telecom have been willing to give Hiro's system a try, NBC with its DotComedy video site and BT with a small selection of full-length features.
Hiro's software, which integrates with media players like Apple's Quicktime and Windows Media Player, allows users to download a video file and ensures that new ads are dynamically inserted each time the video is viewed - and that the video's producer can collect information about how many times the video is seen.
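The view-time monetization loop described here could be sketched, with entirely hypothetical names, roughly like this: the video file itself travels freely, and each playback fetches fresh ads and counts an impression for the producer.

```python
# Hypothetical sketch of a "positive DRM" playback flow: the file is freely
# copyable, and monetization happens at view time by rotating in new ads and
# reporting the impression. All class and variable names are invented.

import itertools

class AdRotator:
    def __init__(self, inventory):
        self._cycle = itertools.cycle(inventory)  # endless rotation of ads
        self.impressions = 0

    def play(self, video, ad_slots=2):
        """Each viewing inserts fresh ads and counts one impression."""
        ads = [next(self._cycle) for _ in range(ad_slots)]
        self.impressions += 1          # producer learns how often it's seen
        return [ads[0], video, ads[1]]

rotator = AdRotator(["ad-a", "ad-b", "ad-c"])
first = rotator.play("episode.mp4")    # ['ad-a', 'episode.mp4', 'ad-b']
second = rotator.play("episode.mp4")   # ['ad-c', 'episode.mp4', 'ad-a'] - ads rotate
```

In this model the same downloaded file earns revenue on every viewing, which is why it can tolerate - even welcome - redistribution over P2P networks.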
Napchi and Golan say that delivering videos as downloads rather than as streams radically reduces a media company's bandwidth costs.
"You can put the content anywhere you want - including on P2P networks - and the content owner makes money without incurring much cost," Napchi explains.
Golan says that "hundreds of thousands of users" have installed Hiro's software. While the company's latest deals have been with Australian and Russian digital media companies, Hiro's founders say US deals are on the way.
"But these are huge companies, and the sales cycle is pretty long," Napchi adds.
"People aren't willing to trade real dollars for virtual dimes," Golan says. "They're moving slowly and cautiously, because they have a lot to lose if they don't do it right - but a lot to gain if they do."
Protecting Content with Electronic Patterns
Excerpted from Nikkei Electronics Asia Report by Tomohisa Takei
Media firms and creators are looking at distribution systems applying electronic fingerprints and electronic watermarks to identify content, enabling them to collect the fees that are due to them.
Research into pattern recognition technology for use on audio, video, and other signals has been attracting considerable attention lately. Specifically, this includes technologies such as electronic fingerprints and watermarks.
As one researcher in the field explained, "The rise in applications designed to analyze the signals from voice, video, or other data, to identify the content and use that information in rights management, has occurred just as firms developing electronic fingerprinting, electronic watermarks, etc. have found ways to survive copying to analog data."
Hollywood film companies are also getting interested in technologies to identify imagery. The Motion Picture Association of America (MPAA) and Motion Picture Laboratories performed video identification tests for several months with participation by no fewer than ten groups, including Audible Magic, Gracenote, Nippon Telegraph & Telephone Corp (NTT), and Vobile.
A source at one firm participating in the tests commented, "I think they are hoping to put their material on video-sharing and similar sites, and wanted to evaluate the technologies to see whether they are really ready for commercial roll-out."
Video-sharing sites are already using electronic fingerprinting technology, and there is no doubt that it is entering practical use in a big way. MySpace, for example, uses Audible Magic's audio fingerprinting technology in its "Take Down Stay Down" function, announced in May 2007, to automatically prevent the reposting of content that has already been taken down once for copyright issues.
Distribution can occur without the authorization of the rights holder for several reasons: 1) terrestrial broadcasting, audio compact discs (CDs), and other non-encrypted content, 2) copying through analog interfaces or recording from displays and speakers, and 3) security holes, such as the public release of the encryption key.
Conventionally, digital rights management (DRM) was intended to cover all of these possible abuses through encryption, verification, and other technologies, limiting the scope of viewing, copying, and other access.
P2P content distribution systems, however, do not fit well with this type of approach, because encryption-based DRM makes it difficult to let users freely copy, edit, or otherwise work with content.
As a result, there is an increased need for a framework that applies electronic fingerprints, electronic watermarks, or similar technologies to identify, analyze, and otherwise mark video content uploaded by users, making it possible to return appropriate value to rights holders.
Electronic fingerprinting technology can identify specific information such as who created the content and which company owns distribution rights and, for example, could control distribution of revenues from advertisements, views, or other means.
"Instead of encrypting the file with DRM technology, DRM is implemented throughout the entire network distribution framework," said Audible Magic President & CEO Vance Ikezoye.
Normally, electronic fingerprints are built from the characteristics of the signal actually perceived by the user, whether the file is audio, video, or something else, so the system detects content similarity much the way people do. For video, for example, the system can convert content characteristics such as luminance and chrominance signals into patterns; for audio, it can use the left and right channel signals. This pattern is the "fingerprint" and is determined by the actual content.
With a fingerprinting technology utilizing the actual video, audio, or other signals, it is possible to identify a given file as the same content even if it has been converted to analog and redigitized, or converted to a different encoding or resolution. Because the signal is analyzed complete with noise, the comparison cannot be 100% exact, but results are more than adequate for identifying content.
One of the groups developing such electronic fingerprinting technology is NTT Communication Science Laboratories, which has been involved in research and development in high-speed search technologies for audio, video, and other data for ten years.
According to Kunio Kashino, Senior Research Scientist, Recognition Research Group, Media Information Laboratory there, "We have improved robustness with respect to noise by extracting the characteristics on a coarse level, instead of comparing the fine details of the signal."
Video search is based on photographs of the image as displayed on a monitor, taken with a miniature camera; audio is searched by transmitting music played through a speaker to a server via a mobile phone as a standard voice call. One of the NTT group companies is already deploying services based on these research results.
With audio, for example, a signal of a certain length (say, 1 s) is divided in the frequency and time domains, and the amplitude of the signal in each region is determined. Sharp differences in intensity and other characteristics are detected, and those portions are quantized coarsely. The resulting pattern can then be matched against an existing database to extract possible candidates. This process is repeated at a certain interval (say, 10 ms) for perhaps five seconds, and the record with the highest match is selected.
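The coarse quantization step described above can be sketched as follows. This is an illustrative toy, not NTT's actual algorithm: the window length, grid size, and median-based quantization rule are all assumptions made for the example.

```python
import numpy as np

def fingerprint_window(samples, freq_bands=8, time_slots=4):
    """Coarsely quantize a short audio window into a bit pattern:
    split the window in the time and frequency domains, measure the
    amplitude in each cell, and keep only a coarse 0/1 pattern of
    where the energy is relatively strong (hypothetical parameters)."""
    slots = np.array_split(samples, time_slots)
    grid = np.empty((time_slots, freq_bands))
    for t, slot in enumerate(slots):
        spectrum = np.abs(np.fft.rfft(slot))
        # Collapse the spectrum into a few coarse frequency bands.
        bands = np.array_split(spectrum, freq_bands)
        grid[t] = [band.mean() for band in bands]
    # Quantize coarsely: 1 where a cell is louder than the median cell.
    return (grid > np.median(grid)).astype(np.uint8)

def match_score(fp_a, fp_b):
    """Fraction of fingerprint cells on which two windows agree."""
    return float((fp_a == fp_b).mean())
```

In a full system this extraction would be repeated every few milliseconds over several seconds of signal, with the per-window scores accumulated per candidate.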
The basic approach is the same for video, except that the domains are split into horizontal and vertical directions, and the luminance and other signals determined for each region on a single-frame basis. The characteristics are quantized in the same manner. The laboratory has used the search technology to identify three tunes mixed into a single signal, and identified a video re-photographed with an object obscuring part of the picture.
The characteristic information for the complete content is stored in the database, so search is possible even for a clip of a shorter length. As Junji Yamato, Group Leader, Recognition Research Group, Media Information Laboratory at NTT Communication Science Laboratories revealed, "We can almost always search effectively with a 5-second sample."
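Since the database stores characteristics for the complete content, matching a short clip amounts to sliding it across every offset of every reference. The sketch below assumes fingerprints are small 0/1 arrays and a cell-agreement score; the function and argument names are hypothetical.

```python
import numpy as np

def best_match(clip_fps, database):
    """Find which reference a short clip most likely came from.
    `database` maps a title to the list of per-window fingerprints for
    the full content; `clip_fps` holds the windows extracted from the
    (possibly 5-second) clip. Hypothetical layout, illustrative only."""
    def agree(a, b):
        # Fraction of fingerprint cells on which two windows agree.
        return float((a == b).mean())

    best_title, best_score = None, 0.0
    for title, ref_fps in database.items():
        n = len(clip_fps)
        # Slide the clip across every possible offset in the reference.
        for off in range(len(ref_fps) - n + 1):
            score = sum(agree(c, r)
                        for c, r in zip(clip_fps, ref_fps[off:off + n])) / n
            if score > best_score:
                best_title, best_score = title, score
    return best_title, best_score
```

A clip cut from anywhere inside a stored reference scores a perfect match at the right offset, which is why a few seconds of material suffice.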
Some firms are proposing implementations other than having the server detect electronic fingerprints of content uploaded to shared sites. Vobile's VideoDNA electronic fingerprint, for example, proposes that the user detect the fingerprint personally, assuming that there is a continuing increase in distribution of content via P2P networks and similar means.
When non-encrypted content downloaded from P2P networks is played by a user, a player capable of detecting VideoDNA is used. The player transmits the fingerprint to a server via the Internet, identifying the content. This identification makes it possible to transmit advertisements to the user based on user, content, and other information. It would be possible to divide revenues gained from content access with content creators.
The question, of course, is how to get users to buy players that display advertising. Vobile suggests heightening the value of the players by: 1) providing additional information on titles, performers, etc., 2) improving the user experience through metadata, such as scene previews, and 3) providing special video content that can only be played on compatible players.
Electronic watermarking technology embeds meaningful information into signals, whether audio, image, or video. The embedded information can only be detected by equipment specifically looking for it; it cannot normally be perceived by human beings, so the content can be accessed without the user even noticing it. A number of proposals are on the table for using electronic watermarks to identify content, determine where content has been played or copied, and perform other tasks.
Most of the proposals related to content identification are for handling revenue sharing from content distributed through, for example, video-sharing sites, by embedding specific identifiers into the distributed content. Another suggested use is embedding extra information that only special equipment can detect, enhancing the user experience during play, or ensuring that the equipment obeys regulations on actions like play or copy.
Determining where an action took place can also suppress unauthorized distribution: it makes it possible to establish who obtained the content, and through what route, making users think twice before trying to obtain content without authorization. Such information can be embedded into file headers or other locations, but headers are easily lost in file type conversion. As a result, embedding the information in the signal itself as an electronic watermark is being considered as an alternative approach.
For example, the projector in a movie theater could be used to embed information such as the theater name or show-time into the film. If an infringing copy was later analyzed, it would be possible to determine when and where it was made. Likewise, the set-top box used to decode a distributed video signal could also embed user-specific information when the image signal is played.
This type of watermarking may encounter resistance from consumer groups, just as conventional DRM technologies have, because it could become an invasion of privacy, revealing who accessed what content. Any commercial roll-out will probably require practical measures to alleviate this, such as ensuring that personal information is not recorded, or ensuring informed consent.
One important point when using electronic watermarks in rights management is ensuring that they can be detected even when noise increases, such as in analog copies. If a monitor image is recaptured through a camcorder, for example, frames may be offset from the original image, or the image tilted. Japan Broadcasting Corp (NHK) and Mitsubishi Electric have jointly developed an electronic watermarking technology that withstands the signal degradation caused by the analog hole.
An algorithm detects spatial offset by sensing image distortion, and a time-domain pattern embedded in the source image is first used to correct the time offset. Information embedded in the source at regular intervals is sensed and used to compile 0/1 statistics per unit time. The final 0/1 judgment is made depending on whether the statistics exceed preset thresholds.
Even when noise increases, say the firms, detection is possible with high accuracy. NHK and Mitsubishi Electric are working on a system to make final data judgments based on comparison with similar watermarks, using an analysis of portions thought likely to cause detection errors.
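The threshold-based statistical judgment described above can be sketched as follows. This is a toy illustration, not NHK's or Mitsubishi Electric's actual detector; the threshold values and data layout are assumptions.

```python
import numpy as np

def detect_bits(readings_per_bit, upper=0.6, lower=0.4):
    """Statistical 0/1 judgment for a repeatedly embedded watermark:
    each bit is read many times per unit time, the noisy readings are
    tallied, and the detector commits to 0 or 1 only when the tally
    clears a preset threshold (hypothetical thresholds)."""
    bits = []
    for readings in readings_per_bit:       # one list of readings per bit
        ratio = np.mean(readings)           # fraction of 1-votes observed
        if ratio >= upper:
            bits.append(1)
        elif ratio <= lower:
            bits.append(0)
        else:
            bits.append(None)               # too noisy to judge safely
    return bits
```

Widening the gap between the two thresholds trades detection rate for fewer false judgments, which matters when the evidence may be used against an infringer.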
Many electronic watermarks are embedded in the video signal itself, so after embedding, the video is encoded, and must be decoded again before the watermark can be detected. KDDI R&D Laboratories has developed an electronic watermarking technology that minimizes the accompanying processing load.
Called MPmark, it is capable of detecting watermarks without decoding the video data, and supports Moving Picture Experts Group (MPEG) video encoding schemes such as MPEG-2 and MPEG-4 Advanced Video Coding (AVC)/H.264. The technology is intended for detecting embedded watermarks in large quantities of content.
MPmark data is embedded into the macro-blocks used in MPEG-based encoding schemes. When the macro-block luminance signal is processed with the discrete cosine transform (DCT), a portion of the resulting 8x8 block of DCT coefficients that is difficult for human eyes to perceive is modified to serve as a judgment criterion: for example, whether the value is greater than a specific coefficient in the adjacent macro-block.
This criterion is evaluated over several frames, yielding statistical data such as totals. The judgment is 1 if the total exceeds a threshold, and 0 otherwise; this improves robustness to noise introduced by analog copying. Information on when the data is written, to which macro-blocks, and what the judgment criteria are is shared by the embedding and detecting parties.
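The compressed-domain reading step can be sketched as below. This is not KDDI's actual MPmark algorithm, only an illustration of the idea: compare a chosen mid-frequency DCT coefficient against its neighbor's, total the comparisons over several frames, and threshold the total into a bit. The coefficient position, pairing scheme, and data layout are all assumptions.

```python
import numpy as np

def read_watermark_bits(dct_blocks, coeff=(3, 4), frames=8):
    """Read watermark bits directly from DCT coefficients, without
    decoding the video. `dct_blocks` is a list of frames, each a list
    of 8x8 coefficient arrays for consecutive macro-blocks
    (hypothetical layout); adjacent blocks are treated as pairs."""
    r, c = coeff
    n_pairs = len(dct_blocks[0]) // 2
    totals = np.zeros(n_pairs)
    for frame in dct_blocks[:frames]:
        for i in range(n_pairs):
            a, b = frame[2 * i], frame[2 * i + 1]
            # Count how often the embedded inequality holds.
            totals[i] += 1 if a[r, c] > b[r, c] else 0
    # Majority vote across frames: 1 if the comparison held most of
    # the time, 0 otherwise - robust to noise in individual frames.
    return [1 if t > frames / 2 else 0 for t in totals]
```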
P2P Explained: What is a Peer Network
Excerpted from SYS-CON Media Report by Kevin Hoffman
At this point in time, creating, consuming, and using peer networks have never been simpler, and there are more efforts on the horizon that will bring new meaning to peer networking. If you're obsessed with networking, this is an incredibly exciting time to be a programmer.
Peer networks are really just logical graphs of computers, or, in many cases, logical graphs of connected applications. The physical topology of the peer network, means of communication, and weighting of the edges are all implementation-specific details that differ from P2P network to P2P network, but all of them can be reduced to a drawing containing nodes and edges.
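The reduction to nodes and edges can be made concrete with a minimal sketch; the class and method names here are hypothetical, and the adjacency-list layout is just one of many possible representations.

```python
from collections import defaultdict, deque

class PeerMesh:
    """A peer network reduced to a logical graph of nodes and edges,
    stored as a plain adjacency list."""

    def __init__(self):
        self.edges = defaultdict(set)   # node -> set of neighbor nodes

    def connect(self, a, b):
        # Edges are undirected: each peer knows about the other.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def hops(self, start, goal):
        """Breadth-first hop count between two peers (None if unreachable)."""
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == goal:
                return dist
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return None
```

Whatever a real implementation layers on top - weighted edges, transport protocols, NAT traversal - it can still be drawn and reasoned about as this kind of graph.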
There are many different strategies for arranging peer networks, but the main differentiating factor involves the designation of state servers for central data storage within a peer network and the physical topology of the network.
The hybrid peer network is probably the most common form of peer network. It usually starts out with soaring ideals of creating a true server-less peer network, until people realize that such a thing is impractical for what they're doing.
So, as a compromise, they stick a central server in the middle of the peer network. Nodes in the mesh still talk to each other as if they were peers when necessary, following the edges as hops and obeying other P2P rules, but when they need central information (which often includes information about who is in the peer network at the time, shared state, central registration, etc.) then they talk to the central server.
This creates a hybrid network - a network of peers that use traditional client/server patterns for talking to a central registration/state server. You see this kind of network in instant messaging (IM) networks all the time - peers talk to each other directly as peers, but the central server is responsible for authentication, authorization, registration, buddy list storage, etc.
To compensate for some of the issues people have when building such a peer network (the infrastructure cost of maintaining a central server, making its location public knowledge, securing it, etc.), they often decide to make one of the peers the designated state server for some period of time.
Using a simple algorithm, these peer networks often designate the most recently joined node in the mesh as the new state server. The problem with these networks is that the programmer needs to deal with fault tolerance manually - what do you do when the designated peer "server" leaves the mesh? You need to build logic into the system to detect that application leaving the mesh (either intentionally or through a crash) and then designate the next state server in line until another peer joins the mesh.
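The manual failover logic just described can be sketched in a few lines. The names here are hypothetical, and crash detection is assumed to happen elsewhere (heartbeats, broken sockets, etc.) and simply call `leave`.

```python
class StatefulMesh:
    """Designated-state-server mesh: the most recently joined peer
    serves shared state, and when it leaves (or crashes), the next
    peer in line takes over automatically."""

    def __init__(self):
        self.peers = []                 # join order, oldest first

    def join(self, peer):
        self.peers.append(peer)

    def leave(self, peer):
        # Called on a clean exit, or by whatever detects a crash.
        self.peers.remove(peer)

    @property
    def state_server(self):
        # Most recently joined peer is the server; once it drops out,
        # peers[-1] is automatically the next state server in line.
        return self.peers[-1] if self.peers else None
```

The fragile part in practice is not choosing the successor but reliably noticing the departure and rebuilding the lost state on the new server.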
To compensate for some of the downsides of the previous network arrangement, peer network programmers often create complex election algorithms where the peers themselves essentially hold a vote to decide who gets to be the state server. Some advanced implementations run these votes periodically regardless of whether a new node has joined the mesh.
This allows the state server to roam, building in a level of fault tolerance. In addition, this kind of topology can support multiple state servers where some are designated backup servers for failover. You can also build enterprise service bus (ESB) style networks where services sit on the peer mesh and respond (independently, of course) to requests.
These are by far the most complex implementations, but, when implemented properly, these types of networks can become ridiculously powerful and are often the basis for many commercial third-party middleware implementations.
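A minimal election of the kind described above might look like the sketch below. The voting criterion is a hypothetical peer-to-candidate map; real implementations weigh factors such as uptime, bandwidth, and reachability, and must also handle ties and lost votes.

```python
from collections import Counter

def elect_state_server(peers, votes_for):
    """Periodic election: every peer casts a vote, the candidate with
    the most votes becomes the state server, and the runners-up are
    kept, in order, as designated backup servers for failover."""
    tally = Counter(votes_for[p] for p in peers)
    ranking = [candidate for candidate, _ in tally.most_common()]
    return ranking[0], ranking[1:]      # (state server, backup servers)
```

Keeping the ranked runners-up around is what enables the roaming, multi-server failover behavior mentioned above.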
Server-less peer networks represent the pure implementation of a network. Peers are in the mesh or they aren't; they have no implicit or explicit ranking or relative importance above and beyond anyone else in the mesh. Since there is no central state server, the peers do not contain explicit logic to elect some node as a transient state server.
If there is shared state in the peer network, it is simply replicated throughout the entire network by pushing data through whatever connection pattern the peer network has already established. If one peer goes down, so what. If all peers go down, so what.
The peer mesh can operate with one node, one thousand nodes, or ten thousand nodes if necessary - the distributed partial connectivity of the whole network allows the server-less peer network to scale to enormous sizes without negatively impacting the application using the network.
If you are writing an application that takes advantage of a peer network/peer mesh, the details of optimizing the connections among nodes should be abstracted and hidden from you.
If you're operating at this low level, you might want to consider using a different peer networking API because you should be concerned with making your application communicate with other instances of itself across a peer network, not about optimizing graph traversal patterns.
Coming Events of Interest
P2P MEDIA SUMMIT LA - May 5th in Los Angeles, CA. The third annual P2P MEDIA SUMMIT LA. The DCIA's flagship event featuring keynotes from industry-leading P2P and social network operators; tracks on policy, technology and marketing; panel discussions covering content distribution and solutions development; valuable workshops; networking opportunities; and more.
Digital Hollywood Spring - May 6th-8th in Los Angeles, CA. With many new sessions and feature events, DHS has become the premiere digital entertainment conference and exposition. DCIA Member companies will exhibit and speak on a number of panels.
Streaming Media East – May 20th-21st in New York, NY. SME is the place to learn what is taking place with all forms of online video business models and technology. Content owners, viral video creators, online marketers, enterprise corporations, broadcast professionals, ad agencies, educators, and others attend. The DCIA will participate in the P2P session.
Advertising 2.0 New York - June 4th-5th in New York, NY. A new kind of event being developed as a partnership of Advertising Age and Digital Hollywood. The DCIA is fully supporting this important inaugural effort and encourages DCINFO readers to plan now to attend.
P2P MEDIA SUMMIT SV - August 4th in San Jose, CA. The first-ever P2P MEDIA SUMMIT in Silicon Valley. Featuring keynotes from industry-leading P2P and social network operators; tracks on policy, technology and marketing; panel discussions covering content distribution and solutions development; valuable workshops; networking opportunities; and more.
Building Blocks 2008 - August 5th-7th in San Jose, CA. The premier event for transforming entertainment, consumer electronics, social media & web application technologies & the global communications network: TV, cable, telco, consumer electronics, mobile, broadband, search, games and the digital home.
International Broadcasting Convention - September 11th-16th in Amsterdam, Holland. IBC is committed to providing the world's best event for everyone involved in the creation, management, and delivery of content for the entertainment industry. Uniquely, the key executives and committees who control the convention are drawn from the industry, bringing with them experience and expertise in all aspects.