Distributed Computing Industry
Weekly Newsletter

In This Issue

P2P Safety

P2P Leaders

P2PTV Guide

P2P Networking

Industry News

Data Bank

Techno Features

Anti-Piracy

July 26, 2010
Volume XXXI, Issue 8


Twitter Turns to BitTorrent for 200 Times Faster Server Updates 

Excerpted from Softpedia Report by Lucien Parfeni

As sites get larger, scalability becomes more of an issue. For sites with a modest audience, there are tried-and-tested tools, but for the biggest sites on the web, there are very few existing options. They're venturing into unknown territory and have to come up with solutions for problems that few, if any, have faced before. Yet there are times when all you need to do is figure out how to apply existing technology to your particular problem. This is what Twitter has done, adopting BitTorrent as a means of distributing software updates to its thousands of servers in mere seconds.

Twitter used to rely on a Git-based system. Updates would be pushed to the repository, and the servers would then sync to that. The problem with this centralized approach is that it doesn't scale, certainly not at the level that Twitter needed, and as more servers were added, the problem got worse.

"It was time for something completely different, something decentralized, something more like BitTorrent running inside of our datacenter to quickly copy files around. Using the file-sharing protocol, we launched a side-project called Murder and after a few days (and especially nights) of nervous full-site tinkering, it turned a 40 minute deploy process into one that lasted just 12 seconds!," Larry Gadea, a Twitter engineer, wrote.

A flock of crows is called a murder, which is where the collection of scripts got its name. Written in Python and Ruby, Murder leverages the BitTorrent protocol and optimizes it for the particularities of a data center: high bandwidth, low latency, and so on. Twitter uses a BitTorrent client built on top of the open-source BitTornado, and the company has released Murder under an open license as well.
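To make the swarm idea concrete, here is a minimal, self-contained Python sketch. It is not Murder's actual code, just an illustration under simplified assumptions (a fixed piece count and an equal per-node upload cap) of why BitTorrent-style distribution finishes in a handful of rounds, while a single central source must upload every piece to every server itself.

# Toy simulation of swarm-style (BitTorrent-like) distribution inside a data
# center. Node 0 seeds all pieces of the deploy artifact; every other server
# pulls missing pieces from any peer that already has them, subject to a
# per-node upload cap per round.

import random

PIECES = 16        # the deploy artifact, split into pieces
SERVERS = 100      # machines that need the update
UPLOAD_CAP = 4     # pieces any one machine can upload per round

def swarm_rounds():
    have = [set(range(PIECES))] + [set() for _ in range(SERVERS)]
    rounds = 0
    while any(len(h) < PIECES for h in have[1:]):
        rounds += 1
        uploads = [0] * len(have)
        for i in range(1, len(have)):
            for piece in range(PIECES):
                if piece in have[i]:
                    continue
                donors = [j for j in range(len(have))
                          if j != i and piece in have[j] and uploads[j] < UPLOAD_CAP]
                if donors:
                    uploads[random.choice(donors)] += 1
                    have[i].add(piece)
    return rounds

print("swarm finished in", swarm_rounds(), "rounds")
# A single central source with the same upload cap would need roughly
# SERVERS * PIECES / UPLOAD_CAP rounds to push every piece itself.

With these toy numbers the swarm typically completes in around a dozen rounds, while a central source with the same upload cap would need on the order of 100 x 16 / 4 = 400 rounds, which mirrors the kind of gap Twitter describes above.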

For Twitter, the advantages are obvious, and the speed of deployment speaks for itself: BitTorrent is 200 times faster than the previous solution. Twitter is not the only one to tap into the peer-to-peer (P2P) protocol; Facebook is also using BitTorrent for the very same thing, distributing updates to its servers efficiently.

500,000 Mobile Subscribers on Spotify

Excerpted from Sell My Mobile Report by Jennie Cole

Spotify, the industry-leading European P2P music streaming service, has announced that its number of premium mobile subscribers has reached a staggering 500,000.

The company offers two fee-paying services - a basic one for PC users at $7.67 per month and a premium service for mobile-phone users at $15.35 per month.

The latter service allows users to access music via the company's desktop client and mobile-phone application. This is currently available on Android, Symbian, and the iPhone.

Music can be browsed by artist, album, record label, or playlist as well as by direct searches.

Spotify's Chief Executive Officer (CEO) Daniel Ek said, "One of the unique assets and the reason why we have more than half a million people paying $15.35 a month for the premium service is because they actually use it as their primary media player."

He added, "Spotify aims to become the music platform on the Internet, where you manage your music and then consume it on any device you want."

Although Mr. Ek expected a good take-up for the PC service, in reality most people appear to have gone for the higher-priced option simply because of its mobile nature.

The company plans to launch its service in other countries, both within Europe and beyond, in the near future, notably the US and Japan. It is planning a launch in Germany, but music licensing issues are holding things up there, as in the other markets where it is not yet available, including the US, so it may be several months before Spotify becomes available in Germany.

Report from CEO Marty Lafferty

The DCIA encourages DCINFO readers to examine the proposed Freedom for Consumer Choice Act (FCC Act) introduced in the US Senate this week by Senator Jim DeMint (R-SC), which presents a fresh approach to the issue of the Federal Communications Commission's (FCC) ongoing efforts to regulate the Internet.

Co-sponsors of the bill include Senators Tom Coburn (R-OK), John Cornyn (R-TX), John Ensign (R-NV), Orrin Hatch (R-UT), Jeff Sessions (R-AL), and John Thune (R-SD).

The bill follows a federal appeals court's unanimous ruling earlier this year that the Commission's attempt to impose net neutrality regulations on Internet service providers (ISPs) was "flatly inconsistent" with the law. That case involved P2P traffic throttling, which the industry itself subsequently addressed through technological innovation, solving the related bandwidth utilization issues without the need for regulatory assistance.

It also comes on the heels of Congressional letters sent to FCC Chairman Julius Genachowski in recent weeks criticizing the Commission's plans to reclassify broadband as a regulated telecommunications service as essentially trying to circumvent both the judicial and legislative branches of government.

37 Senate Republicans characterized the FCC proposal as seeking to impose "heavy-handed 19th century regulations," which would not only be unlawful, but even "inconceivable."

Meanwhile, 74 House Democrats and 171 Republicans aired separate reservations and warned that the move to reclassify broadband would retard innovation, harm employment, and stall investment - and that no such change should be initiated without explicit Congressional approval.

Democrats voiced "serious concerns" with the Commission's reclassification agenda, urging Genachowski "to carefully consider the full range of potential consequences that government action may have on network investment."

And Republicans added, "We encourage you not to proceed down your announced path to reclassify broadband as a phone service under Title II of the Communications Act. The policy consequences of reclassifying and regulating it under Title II could be severe: reduced broadband investment, less economic stimulation, and fewer jobs."

Up until now, the FCC has operated under the principle that broadband is an "information service," which is defined in the Communications Act as revised in 1996 as "the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications."

A majority in Congress now seem not only to oppose the Commission's proposal to change this, but also to believe that the Communications Act itself is in need of an overhaul for the Internet age. To date, this approach seems to hold the greatest potential to yield an outcome that will benefit all constituencies.

In introducing the new measure, which is a far less ambitious undertaking, Senator DeMint said, "The FCC's rush to take over the Internet is just the latest example of the need for fundamental reform to protect consumers."

The new bill, however, wouldn't eliminate the Commission's power over broadband providers entirely, but rather would narrow its scope in ways that are comparable to the anti-trust enforcement powers of the Department of Justice (DOJ) in conjunction with the Federal Trade Commission (FTC).

This legislation would impose a new framework on the FCC so that it could only introduce additional regulations if it demonstrated that a market failure had led to ongoing consumer harm. If nothing else, DeMint's proposal represents an interesting new argument in the ongoing net neutrality debate.

But whether it addresses the most fundamental worry of those concerned with preserving competition among Internet-based services - of ensuring that broadband providers cannot prioritize content or traffic in which they have a vested interest over third-party offerings - is another matter.

And it does not address the questions of full disclosure related to network management practices or proactively ensuring consumer choice, which is ironic considering the title of the bill.

What this legislation would require, however, is that the Commission must prove a tangible consumer benefit in order to impose new rules.

For example, the FCC could define "unfair methods of competition" and levy new requirements on the industry in the event that marketplace competition was shown to be insufficient to "adequately protect consumer welfare." Such an absence of competition would need to be proven to be serious enough to "cause substantial injury to consumers."

Please read this relatively short bill for yourself. The key question for industry participants and observers alike is whether an approach like this would indeed promote continued growth of broadband in the United States and encourage ongoing development of new Internet-based services. That's the future that all of us need to work to preserve and protect.

Meanwhile, the increasingly embattled FCC's unreleased broadband report is already drawing fire. Share wisely, and take care.

Cloud Computing Will Fuel Economic Recovery

Excerpted from ChannelWeb Report by Andrew Hickey

Cloud computing and its inherent flexibility will help pull organizations out of the economic slump and fuel recovery, according to a recent survey commissioned by cloud infrastructure and hosted IT provider Savvis.

According to the survey, which was conducted by international research firm Vanson Bourne and queried more than 600 IT and business decision makers in the US, UK, and Singapore, 68% of respondents said cloud computing will help their businesses recover from the recession.

Despite the silver lining, the survey found that companies are still feeling the pressure to do more with less as budgets are reduced: 54% of respondents said the biggest issue they face is the demand for lower costs and more flexible IT provisioning.

In its second year, the Savvis survey found that confidence in the power of cloud computing and its ability to cut costs has organizations optimistic. The survey found that commercial and public-sector respondents expect cloud usage to slash IT budgets by an average of 15%, while a handful of respondents expect those savings to hit more than 40% in the near term.

"Flexibility and pay-as-you-go elasticity are driving many of our clients toward cloud computing," Bryan Doerr, Chief Technology Officer (CTO) at Savvis, said.

The survey also found that 96% of IT decision makers are as confident as, or more confident than, they were in 2009 that cloud computing is ready for enterprise use. Also, 7% of IT decision makers said they use or are planning to use enterprise-class cloud computing solutions within the next two years.

From a geographical standpoint, the survey revealed that Singapore is currently leading the cloud charge, with 76% of responding organizations leveraging cloud computing. Meanwhile, the US follows with 66% and the UK with 57%.

Respondents from the US and from Singapore, 30% and 42%, respectively, cited the ability to scale up and down on the fly to manage fluctuating business demand as the biggest benefit of cloud computing. In the UK, the top cloud computing driver is the lower total cost of ownership, with 41% of respondents saying that is the biggest catalyst for the cloud.

Despite the confidence surrounding cloud computing, security still remains a key adoption barrier, with 52% of survey respondents who don't use cloud computing citing security of sensitive data as a key concern.

Spending Soars on Internet's Plumbing

Excerpted from Wall Street Journal Report by Don Clark and Ben Worthen

Behind the recovery in business spending is a surge in purchases of the computers that form the backbone of the Internet, as companies scramble to meet growing demand for video and other web-based services.

The need to reach customers and employees over the web is driving furious demand for server systems, the machines that power corporate computer rooms.

Many companies are stocking up on new servers, which typically cost a few thousand dollars apiece, to replace older machines with more energy efficient models or systems with more powerful processors.

Also, an increasing number of businesses are turning to outsourcing companies, which manage computer rooms for customers and in many cases are sharply stepping up purchases of servers to keep up with rising demand.

"We've been buying thousands of computers this year," says Doug Erwin, chief executive of ThePlanet Internet Services, a Houston-based company that runs data centers to offer computing services. ThePlanet says it now owns about 50,000 Dell servers.

International Business Machines (IBM), one of the biggest vendors of servers, said Tuesday that sales of industry-standard servers jumped 30% in the second quarter, after rising 36% in the first quarter.

The buying activity became apparent last week, when Intel said quarterly revenue from its unit selling server chips rose 42% from a year earlier, while shipments driven by Internet-related companies' purchases nearly tripled.

Growth in web traffic isn't a new phenomenon, but computer purchasing to keep up with demand is accelerating because of improving economic conditions and technology that makes purchases of new computers pay off more quickly.

On Thursday, Internet giant Google reported $476 million in capital spending, including spending on servers and other hardware. That was more than triple the amount it spent a year earlier.

Unlike Google, many companies are side-stepping the costs of building their own computer rooms, opting to place servers they buy in "co-location" centers that maintain machines and offer Internet connections.

Rackspace Hosting, a San Antonio, TX company that runs data centers, says it added 9,152 servers in 2009, plus about 3,000 more in the first quarter of this year. Savvis, a competitor based in Town and Country, MO, says it has purchased more than 80% more servers over the last 12 months.

"All I see all day is trucks coming up to our loading docks dropping off servers," says George Slessman, chief executive of i/o Data Centers, a Phoenix, AZ based company. He says the number of customers that have installed servers in its computer rooms has risen from 140 at the beginning of 2009 to nearly 400 now.

The market research firm IDC puts spending on cloud computing, a term that includes delivering computing capacity over the Internet, at $16.5 billion in 2009, and projects spending in the field will increase 27% a year through 2014 - with the number of servers deployed in cloud applications expected to triple to 1.35 million over that period.

Forrest Norrod, Dell's Vice President and General Manager of Server Platforms, says the company has seen "triple-digit increases" in its cloud-related business year over year. "The cloud side is growing faster than the rest" of the server market, Mr. Norrod says.

There are several reasons. Companies keep stepping up the use of the web to reach customers and adding features like video streams that require more computing power and faster network connections.

Such operations generate huge volumes of data, which have forced companies to buy more-powerful servers to help analyze the information, says Mike Long, chief executive of Arrow Electronics, which sells servers and distributes chips and other components.

Meanwhile, companies that stocked up on servers over the past decade have struggled to find space, electrical power and labor to keep them running. Technology suppliers like Intel and rival Advanced Micro Devices have reacted by designing chips that offer lower power consumption as well as greater performance. They argue that switching to new servers with such chips can save enough on power and labor costs to pay for upgrades in a few months.

Intel, for example, has overhauled its Xeon line of server chips to include a model with the equivalent of eight electronic brains on one piece of silicon. The company estimates that a server with four such chips offers a 20-fold performance increase over an existing server with four single-processor chips; that means one new machine can take the place of 20.

Even before factoring in models based on Intel's newest Xeon chips, pricing for some server vendors is on the rise; the average price of Xeon-based servers sold by Hewlett-Packard (HP), for example, rose nearly 12% to $3,993 from the second quarter of 2009 to the first quarter of 2010, market researcher Gartner estimates.

Customers have responded, in many cases paying up for servers with high-end chips that command higher prices. Mr. Erwin of ThePlanet says it moved swiftly this year to Intel's new technology, saving his company money on power and labor costs and providing greater performance to offer customers at a higher price.

Zach Nelson, chief executive officer of web-based software provider NetSuite, plans to use HP servers with Intel's most-powerful chips in a new data center in Boston. "It maximizes our customer experience and reduces our cost," he says.

Other companies are adding different systems for different computing chores. Susan Shimamura, the Vice President of Operations at IAC/InterActiveCorp's Ask.com, says the company has traditionally bought only low-end Dell systems for its web search function. While continuing that practice, it recently decided to also buy higher-end machines for databases that analyze how people use Ask, she says.

Big-name server makers are not the only beneficiaries. To offer cloud-style services, Rackspace prefers little-known suppliers for attractively priced "white-label" servers "straight from the factory in Taiwan," says Lanham Napier, its chief executive.

Just how long the server-buying boom will last is unclear, amid economic jitters and the fact that cloud companies tend to buy servers in advance of signing up customers.

"It's the build-it-and-they-will-come model," says Bryan Doerr, Chief Technology Officer of Savvis.

But companies pursuing cloud computing say demand is so strong that they aren't worried about adding too much capacity. "This is a major tectonic movement," says Manuel D. Medina, chief executive of Terremark Worldwide, which says its cloud business has been growing 30% sequentially each quarter. "There's zero chance of a bubble."

The Changing Cloud Platforms: Amazon, Google, Microsoft, and More 

Excerpted from PC Magazine Report by Michael Miller

"Cloud computing" means different things to different people. Some use the term when talking about what we used to call software-as-a-service (SaaS): applications that are web-hosted, from webmail to Salesforce.com and beyond. Others use it primarily to mean using publicly available computers, typically on an as-needed basis, instead of buying their own servers.

Still others use it to mean accessing both data and applications from the web, allowing cross-organization collaboration. And some use it to describe "private clouds" that they are building within their organizations, to make better use of their data centers and network infrastructure, and to assign costs based on usage. In short, the term "cloud computing" is now so broad that it covers pretty much any way of using the Internet beyond simple browsing.

For me, one of the most interesting things happening in this sphere is the emergence of new platforms for writing and running applications in the cloud. Over the past two years, since I wrote about Amazon, Google, and Microsoft, these three vendors have moved in very different directions. After some recent announcements, I thought I'd revisit the topic to look at the state of these platforms.

A caveat: Most of the following is based on conversations with developers I know. I haven't developed code professionally in many years.

Amazon Web Services (AWS) was the first cloud platform to draw a lot of attention, and it still gets much attention today. The basic service is the Amazon Elastic Compute Cloud (EC2), a web service that lets you assign your application to as many "compute units" as you would like, whenever you need them. The company also offers its Simple Storage Service (S3) for storing data. On top of this, Amazon has added a whole range of services, from its SimpleDB database to a newer Relational Database Service (RDS), and includes such things as a notification service and a queue service.

On top of the EC2 platform, you can run Linux, Unix, or Windows Server and pretty much use the development tools of your choice. This makes AWS very flexible. I know a lot of developers at smaller companies who start out using Amazon services instead of their own internal infrastructure; some later get their own servers; others continue to host (either at Amazon or somewhere else), because they don't want to deal with managing their own infrastructure. Indeed, Amazon has also rolled out a lot of management tools over the past couple of years, including CloudWatch for monitoring EC2 instances. I also know some enterprise IT managers who have moved specific projects to AWS, often when they have a project that needs a lot of extra computing for a short period of time, or when they have sporadic peaks of usage.
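As a small illustration of that pay-as-you-go flexibility, here is a hedged sketch of launching and tearing down a single EC2 instance with the open-source boto Python library; the AMI ID is a placeholder, and credentials are assumed to come from the usual environment variables or boto configuration.

# Minimal sketch of on-demand provisioning against EC2, assuming the
# open-source boto library is installed. The AMI ID is a placeholder; real
# credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or ~/.boto.

import time
import boto

conn = boto.connect_ec2()  # picks up credentials from the environment

# Ask for one small instance from a placeholder machine image.
reservation = conn.run_instances("ami-00000000", instance_type="m1.small",
                                 min_count=1, max_count=1)
instance = reservation.instances[0]

# Poll until the instance is running, then print its public DNS name.
while instance.state != "running":
    time.sleep(10)
    instance.update()
print(instance.id, instance.state, instance.public_dns_name)

# When the burst of work is done, stop paying for the capacity.
conn.terminate_instances([instance.id])

The same request-use-release pattern is what makes the "rent the peak, own the base" economics discussed elsewhere in this issue work.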

More recently, the company has expanded its tools offerings in several ways, such as offering Elastic MapReduce, which uses a hosted version of the Hadoop framework to let developers work with huge amounts of data. It just announced Cluster Compute instances for EC2, specifically designed for high-performance computing. It offers CloudFront, a content delivery network that is tied into the platform.

Perhaps most interesting, it now offers a Virtual Private Cloud product, which promises to bridge your existing IT infrastructure and the EC2 services. In this scheme, you can assign a specific network address to a virtual server in Amazon's environment, so a customer might store data in its own datacenter but do calculations on it using a cloud-based server.

In all these cases, the appeal is pretty simple. You use the features you want when you want them and pay by the use. That makes it very flexible.

Developers I've talked to generally love the model, which is often known as "infrastructure as a service," because what Amazon provides is generally just the infrastructure, not the software for developing applications itself. Although Amazon does allow for Windows servers, most of the people I know who use it primarily use open-source tools - the LAMP (Linux, Apache, MySQL, PHP) stack, although sometimes with alternative languages instead of PHP.

You can, of course, host such platforms at more traditional hosting companies, many of which now offer cloud services. One of the best known comes from Rackspace, which offers managed hosting as well as CloudServers for on-demand compute capacity and CloudFiles for storage.

Rackspace and its partners just announced an initiative called OpenStack. It consists of Compute, designed to create and manage large numbers of virtual private servers, and Storage, designed to create redundant, scalable object storage using clusters of commodity servers. Rackspace's initial cloud offerings have been for Unix, but the company has just started beta testing Windows Server instances as well.

I've talked with a number of developers who have used Rackspace for more traditional hosting, and are now adding cloud services there as well.

A number of the developers I know who use Amazon or Rackspace are actually using the Java development framework from SpringSource, now a division of VMware. SpringSource hosts a variety of open-source projects for developing software, and the company sells supported versions and tools to run and manage applications created in its framework.

You can run the Spring framework on any server, including the cloud infrastructure providers. And as I said, many of the developers I know do just that. But more recently, SpringSource has introduced two particular alliances designed to use its framework as part of a broader offering.

Parent company VMware and Salesforce.com recently got together to introduce VMForce, which combines Java, Spring, VMware's vSphere virtualization platform, and Salesforce's Force.com cloud-computing platform. Essentially, this seems to combine the management features and Salesforce integration that the Force.com platform offers with more standardized Java development.

In addition, VMware and Google have teamed up to allow developers to run Spring on Google App Engine, and VMware says the partnership will allow customers to take applications developed in Spring and run them on their own servers running vSphere, on partners offering vCloud services, and on AppEngine. The two companies are working together to combine Spring's Roo rapid application development environment with the Google Web Toolkit.

Of course, Salesforce and Google offer their own platforms.

Google App Engine has been a more specialized type of platform, as applications written for it use Google's infrastructure, including its supported languages (such as Python) and its own BigTable database, which is designed for large data sets but is not a traditional relational database. Google App Engine now offers broader support for Java and some other JVM-compatible languages, but there are notable restrictions.
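For a sense of what that looks like in practice, here is a minimal sketch of an App Engine Python application of the era, using the webapp framework and a datastore model backed by BigTable; the Greeting model and the "/" route are purely illustrative, not part of any real application.

# Minimal App Engine (Python runtime, circa 2010) sketch: a webapp request
# handler plus a datastore model stored in BigTable.

from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class Greeting(db.Model):
    content = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)

class MainPage(webapp.RequestHandler):
    def get(self):
        Greeting(content="hello").put()        # write one entity
        count = Greeting.all().count()         # query the datastore
        self.response.out.write("greetings stored: %d" % count)

application = webapp.WSGIApplication([("/", MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()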

Google recently announced App Engine for Business, which ties into the company's Google Apps offering (its software-as-a-service collection of productivity tools and e-mail) and offers more advanced, company-focused administration and service-level agreements. It is priced on a per-user, per-month, per-application basis. Google says that later this year it will offer more advanced features such as hosted SQL databases.

Salesforce.com's Force.com is a well-known cloud platform that seems particularly aimed at corporate accounts (everyone I know who uses it uses Salesforce's CRM solution as well) with pricing on a per user, per month basis.

Both Salesforce and Google predominantly offer a single environment with a variety of options. Neither is as flexible as Amazon's offering, but each comes at a higher level, with more of an emphasis on the vendor's own development tools. As such, they are more of a "platform as a service" play, because you are buying the whole platform, not paying for individual parts separately.

The other platform that gets a lot of attention is Microsoft's Windows Azure, which has officially been available only since its developer conference last November. Azure is clearly a "platform as a service" offering in that it is a closed platform running Microsoft software and is aimed at developers who use Microsoft's development tools, notably the .NET framework. But it offers pricing based on compute services and storage, much like offerings from Amazon and other cloud infrastructure providers.

The basic platform includes Windows Azure, which offers the computing and the storage; SQL Azure, a cloud-based relational database; and AppFabric (formerly called .NET Services), which includes access-control middleware and a service bus to connect various services, whether built in Azure or outside applications. This month, Microsoft released a new version of AppFabric that supports Flash and Silverlight.

Until now, Azure was available only from Microsoft's own datacenters, but the company just took the first steps toward making Azure available for organizations to deploy within their own infrastructures. Microsoft announced the Windows Azure platform appliance, which consists of Windows Azure and SQL Azure on Microsoft-specified hardware. This appliance - which sounds like it's actually a large collection of physical servers - is designed for service providers, governments, and large enterprises.

Note this is very different from letting individual customers set up their own servers to run Azure; Microsoft and its partners will be managing the servers themselves, though companies can host their own data. Initially, Microsoft said Dell, Fujitsu, and HP would all be running such appliances in their own data centers and selling services to their customers. eBay is also an early customer, using the appliance in its data center. I would expect that over time this would be made available to more customers, and probably offer tighter links between on-premises and cloud servers.

Azure's initial target seems to be mainly corporate developers, people who already use Microsoft's developer tools, notably Visual Studio and the .NET framework. (Microsoft is also trying to compete with VMware in offering virtualization tools to cloud service providers, but that's another topic.)

Larger service providers such as HP and IBM also have their own cloud offerings, typically aimed at providing customized services to very large corporate accounts. IBM recently announced a new development and test environment on its own cloud. But in general, these tend to be company-specific choices rather than the "self-service" cloud platforms the more general platforms provide.

And I've talked to a number of very large customers who are deploying "private clouds": using their own infrastructures with virtualization and provisioning, as part of efforts to make their data centers more efficient.

Cloud platforms are still emerging, and there are still plenty of issues, from the typical concerns about management and security to the portability of applications and data from one cloud provider to another. But it's clear that cloud platforms and services are getting more mature and more sophisticated at a very rapid clip, and many - if not most - of the developers I know are either using these technologies or actively considering them.

Experts Say Web Is Filling Up Fast

Excerpted from MediaPost Report

Hard as it is to believe, the Internet is on track to run out of IPv4 addresses in about a year, according to John Curran, President and CEO of the American Registry for Internet Numbers.
As ReadWriteWeb notes, "The same thing was also stated recently by Vint Cerf, Google's Chief Internet Evangelist." 

According to these experts, the web is about to experience a data explosion the likes of which we've not seen before, as a direct result of what ReadWriteWeb calls "sensor data, smart grids, radio frequency identification tags, and other Internet of Things data." Other reasons include the increase in connected mobile devices and the continued growth in user-generated content. ReadWriteWeb uses the warnings as an opportunity to argue for a new Internet protocol.

Presently, the Internet largely uses IPv4, Internet Protocol version 4, whose 32-bit address space tops out at roughly 4.3 billion addresses. IPv6 is the next-generation Internet Protocol and thankfully supports a vastly larger, 128-bit address space - enough to give every person on the planet billions of addresses apiece.
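The arithmetic behind those statements is simple enough to check in a few lines of Python (the world-population figure is a rough 2010 approximation):

# Address-space arithmetic behind the IPv4 exhaustion story.
ipv4_total = 2 ** 32              # about 4.3 billion addresses
ipv6_total = 2 ** 128             # about 3.4e38 addresses
world_population = 7 * 10 ** 9    # rough 2010 estimate

print("IPv4 total:      %.2e" % ipv4_total)
print("IPv6 total:      %.2e" % ipv6_total)
print("IPv6 per person: %.2e" % (ipv6_total / float(world_population)))
# IPv6 works out to roughly 4.9e28 addresses per person, so "over 4 billion
# each" is a vast understatement.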

Making Broadband Cheaper Through P2P Caching 

Excerpted from MyADSL Report by Rudolph Muller

South African broadband prices are closely linked to national and international bandwidth prices. The cost of both national and international capacity has come down over the last twelve months, which has resulted in significantly lower ADSL bandwidth prices.

Despite the lower backhaul bandwidth prices, ADSL service providers can benefit further from either having on-net content or from free and open peering, where no national transit costs are charged for getting content from another network.

Web caching - where web documents like HTML pages and images are stored on an on-net server to reduce bandwidth usage and improve performance - is implemented by most large Internet service providers (ISPs).

A significant percentage of Internet traffic, especially international traffic, is however generated by file-sharing services whose content is typically not cached. Users share movies, TV series, music, and software with each other, which consumes large amounts of bandwidth.

Local ISP Web Africa has now limited the impact of file-sharing traffic on its international network by implementing a P2P caching system where the most popular files are stored on-net. This has the dual benefit of saving international bandwidth and creating a far better file-sharing experience for Web Africa subscribers.

It is estimated that 75% of file-sharing content is requested multiple times, which makes it very suitable for caching, and Web Africa CEO Matthew Tagg said that the company has saved around 20% on international bandwidth since installing its P2P caching solution.
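Web Africa has not published the internals of its caching system, but the underlying idea is straightforward: key cached pieces by their content hash so that repeat requests are served from on-net storage rather than crossing the international link. The Python sketch below is purely illustrative; the class and names are not Web Africa's actual system.

# Illustrative content-addressed cache for P2P traffic: pieces are keyed by
# their content hash, so any later request for the same piece is served
# on-net instead of over the expensive international link.

import hashlib

class PieceCache(object):
    def __init__(self):
        self.store = {}              # sha1 hex digest -> piece bytes
        self.international_bytes = 0
        self.saved_bytes = 0

    def fetch(self, piece_hash, fetch_remote):
        """Return the piece, pulling it over the expensive link only once."""
        if piece_hash in self.store:
            self.saved_bytes += len(self.store[piece_hash])
            return self.store[piece_hash]
        data = fetch_remote()                        # costs international bandwidth
        assert hashlib.sha1(data).hexdigest() == piece_hash
        self.store[piece_hash] = data
        self.international_bytes += len(data)
        return data

cache = PieceCache()
piece = b"x" * 16384
digest = hashlib.sha1(piece).hexdigest()
for _ in range(4):                                   # same popular piece, four users
    cache.fetch(digest, lambda: piece)
print("fetched internationally:", cache.international_bytes, "bytes")
print("served from cache:      ", cache.saved_bytes, "bytes")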

Distributed Computing Evolution - Beyond the Hype

Excerpted from ISGTW Report by Craig Lee

Last week, Amazon Web Services announced the launch of Cluster Compute Instances for Amazon EC2, which aims to provide high-bandwidth, low-latency instances for high performance computing.

The announcement was met with a variety of responses from the blogosphere and media. Based on the claim that Amazon had benchmarked its new cluster service at spot 146 on the top500.org list, Bill St. Arnaud asked his readers, "Should funding agencies ban purchase of HPC clusters by university researchers?" The Register's Dave Rosenberg, meanwhile, pronounced, "Amazon sounds death knell for rocket-science grids."

As always, we'd caution readers to reserve judgment until independent investigations can confirm advertised performance and cost.

Mark Twain's retort on seeing his obituary in the New York Journal - that the report of his death was an exaggeration - seems highly appropriate here. While computing may be viewed as an intellectual endeavor governed by logical objectivity, it is nonetheless surrounded to a degree by hype, hyperbole, and fashion. For those who want a clear understanding of what's happening here, and where distributed computing technology is going, let's go beneath the marketing hyperbole and separate concept from implementation from buzzwords.

It's historical fact that the grid concept came out of the "big science" arena - out of a desire to share data and processing capabilities. As such, grids were designed and built by computer scientists to support the way scientists do their work; i.e., staging data through FTP and submitting large jobs to batch queue managers. Doing so, however, required a secure, federated environment to manage identity, discovery, and resource allocation across administrative domains.
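As a concrete illustration of that workflow, here is a hedged sketch of staging an input file over FTP and handing a job script to a batch queue manager; the hostnames, credentials, filenames, and the PBS-style qsub command are placeholders and assumptions, not any particular site's setup.

# Sketch of the classic grid-era workflow: stage input data to a compute site
# via FTP, then submit a job script to a PBS-style batch queue manager.

import ftplib
import subprocess

def stage_input(host, user, password, local_path, remote_path):
    """Copy a local input file to the compute site's FTP server."""
    ftp = ftplib.FTP(host)
    ftp.login(user, password)
    with open(local_path, "rb") as f:
        ftp.storbinary("STOR " + remote_path, f)
    ftp.quit()

def submit_job(script_path):
    """Submit a job script to the batch queue and return its job ID."""
    out = subprocess.check_output(["qsub", script_path])
    return out.strip()

if __name__ == "__main__":
    stage_input("ftp.example.org", "griduser", "secret",
                "input.dat", "runs/input.dat")
    print("submitted:", submit_job("analysis_job.sh"))

What the sketch deliberately leaves out - certificates, federated identity, discovery, and cross-domain resource allocation - is exactly the machinery the next paragraphs describe.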

In the early years of this century, this concept of resource integration caught the imagination of industry. Why? Presumably because companies thought that lucrative markets would develop around resource federation and distributed systems. However, the existing grid implementations at the time turned out to be far too hard and too complicated for the faint-hearted to install and maintain - that is to say, they had a poor value proposition for most organizations in the marketplace.

Enter cloud computing.

Cloud computing is a fantastic concept for the on-demand provisioning of computing resources - processing, storage, communication, and services - through a relatively simple API. This is enabling the commoditization of these resources, thereby creating readily identifiable business models and value propositions. Of course, many different types of computing infrastructures have been built for different computing requirements. HPC centers have been built around massive numbers of processors and associated storage to run tightly coupled codes. Data centers with classic three-tier architectures have been built to achieve massive throughput for web applications.

It is not surprising then that the one-size-fits-all approach of some commercially available public clouds, such as Amazon EC2, would not fit everybody. Tightly coupled applications suffer on EC2 because of insufficient communication bandwidth. It is equally unsurprising that Amazon could deploy a reconfigured cloud infrastructure to provide virtual clusters with acceptable HPC performance.

The fact that a range of computing resources can now be acquired on-demand for a reasonable market price will certainly drive the marketplace. For all the reasons cited by Bill St. Arnaud and others, there will be business and environmental decisions to be made around the cost and carbon footprint involved in owning your own compute resources. The fundamental business trade-off will have to be made in terms of how much "base" to own versus how much "peak" to rent. Even if commercial public clouds cannot be used for security or regulatory issues, enterprise clouds will be deployed to realize many of the same benefits of economy of scale, improved utilization, and ease of application deployment. To be sure, commodity, on-demand computing resources will become a fixture on the computing landscape.

But is this the end of the story?

As clouds mature, cloud users will certainly want to develop more extensive application systems that go beyond the basic client-provider model of commercial public clouds. Different governmental and scientific organizations will want to collaborate for national and scientific goals. Businesses will also want to collaborate where it delivers value and gives them an edge in the marketplace. It is easy to imagine a business-to-business scenario that requires the exchange of both data and virtual machine images (VMIs). Clearly this will require interoperability and resource federation.

To this end, people are starting to talk about inter-clouds, federated clouds, or community clouds. Securely managing such systems will require federated identity management and role-based authorization to instantiate virtual organizations. Distributed workflow management tools will be required to manage data transfers and process scheduling. Organizations such as the International Grid Trust Federation will have to be set up to make it all work. This secure management of sets of distributed resources is far beyond what the current public cloud providers are offering.

To sum it all up in one phrase - grids are about federation; clouds are about provisioning. To say "data can cross enterprise and data center boundaries in new ways" is to elide the issues of security and governance across administrative domains. These issues have to be addressed regardless of what technology is being used. To say "grid computing required too much hardware, too much software, and way too much money" is to ignore two items: 1) the hardware, software and money that Amazon has sunk into EC2 and Virtual Clusters, and 2) the fact that clouds by themselves do nothing to reduce the "vast complexity" of federation management.

The important achievement to focus on here is that virtualization has enabled a simple client-provider API for the on-demand provisioning of resources. If you need an inter-cloud, a grid, or whatever you want to call it, some type of virtual organization management will be necessary. The challenge to the distributed infrastructures community is to make such inter-cloud deployment as easy as "next, next, install." This will be critical for inter-clouds to build a self-sustaining market.

Buzzwords become loaded with baggage from previous implementations and hype, resulting in unrealized expectations. It is hard to imagine that cloud computing will be completely immune to unrealized expectations! The advent of cloud computing is indeed an important evolutionary step, but the federation concept is looming larger than ever - whatever we call it.

LimeWire Store Partners with Merge Records

LimeWire this week announced a partnership with venerable independent label Merge Records to sell its catalog through the LimeWire Store.

This deal brings the total number of licensed tracks available at LimeWire Store to over 6 million and furthers LimeWire's commitment to working with the independent music community. 

A longtime market and cultural leader, Merge Records remains one of the top performing independent labels. Merge Records is home to artists such as Spoon, Arcade Fire, Superchunk, Magnetic Fields, Neutral Milk Hotel, She & Him, and more. 

"Merge Records is synonymous with quality and has one of the best catalogs around. They've become an established player over the last 20 years, and we look forward to helping them prosper for many more years to come," said Tom Monday, Director of Partner Relations for LimeWire Store. "Adding content partners is always exciting, but it's especially so now, as we ready our new music service for launch later this year." 

LimeWire plans continued growth for its digital music retail operation, LimeWire Store, while it simultaneously develops a new music service that is set to launch in late 2010. 

This announcement comes on the heels of numerous new partnerships for LimeWire Store. In recent months, LimeWire Store has signed partnership agreements with respected independent labels including Naxos, Southern Lord, Sun Records, and Minty Fresh. 

LimeWire Store, launched in the spring of 2008 by the makers of the popular P2P software, sells 256kbps, DRM-free MP3s provided by leading distributors including The Orchard, IODA, Redeye, IRIS, Tunecore, CD Baby, Naxos, and respected labels like Merge, Polyvinyl, Sun Records, Southern Lord, Nettwerk, Smithsonian Folkways, Kemado, Delicious Vinyl, Dualtone, Militia Group, Fader, Om, plus many more.

Its digital music offering has grown to include more than 6 million recordings by today's top indie artists and music legends. LimeWire Store also produces exclusive content, such as its Live at Lime series, which includes live sessions by artists Matt & Kim, Juliana Hatfield, Sloan, School of Seven Bells, Langhorne Slim, Lisa Loeb, Tom Morello, and others. LimeWire Store's music team is committed to providing participating artists and labels unique promotional opportunities with exclusive releases, targeted marketing and events.

Adobe Shows Off FlashTime: P2P Video Calls on Android 

Excerpted from HEXUS Report by Parm Mann

Adobe has demonstrated a P2P video calling system for Google's Android mobile operating system.

The technology, which mimics Apple's iPhone 4-based FaceTime, is cheekily dubbed FlashTime and is built using an upcoming release of the cross-platform Adobe Integrated Runtime (AIR).

Previewed on a Nexus One handset, AIR 2.5 is said to be "at feature parity with the desktop Flash Platform" and offers support for device cameras and microphones, enabling developers to create video-conferencing applications.

Utilizing Adobe's own Stratus servers, the FlashTime demo showcases a user-to-user video call between two Android devices.

Mark Doherty, Flash Platform Evangelist and developer of FlashTime, claims that the service is working but warns that certain features "may not make it into the v1 product."

In recent months, Apple has made clear its reasons for snubbing Adobe technology on its iPad, iPod, and iPhone products, with CEO Steve Jobs claiming that Flash is "the number one reason Macs crash."

One thing's for certain: the release of a FlashTime app on rival platforms isn't going to encourage Adobe-Apple relations.

Barclays New York Challenge a Coming-Out Party for Veetle

Excerpted from NewTeeVee Report by Janko Roettgers

Missing the World Cup action? Well, you're in luck: Palo Alto-based P2P startup Veetle will live stream all matches from the Barclays New York Challenge. The three-day tournament consists of four games between the New York Red Bulls, Tottenham Hotspur, Manchester City and Sporting Lisbon - but for Veetle, it's about more than just good soccer.

The New York Challenge is Veetle's first major partnership for a live event, and the four-year-old company is hoping that other content owners will soon jump on board as well. Veetle's proposition to rights holders is simple: the company says it can stream live events in full 720p HD, no matter how many people tune in. And at least initially, it won't charge content partners a dime.

HD, high reliability, and low costs are made possible by Veetle's proprietary P2P technology, which was developed by Stanford graduates. Veetle's Thomas Ahn Hicks told me that Veetle is able to offload an estimated 80% of its traffic to its P2P component.

The company does offer a Flash stream for its top 15 most popular live feeds, but users are encouraged to install the Veetle plug-in to watch streams in HD, and less popular programming is only available with the P2P plug-in installed. Veetle utilizes VLC to play its H.264 video streams, and Hicks told me that a lot of work went into fine-tuning the video quality.

Visit the Veetle home page, and you'll get mixed results. Some streams look really good, but others are barely enjoyable. The company explains these discrepancies with the bandwidth available to the original broadcaster. Most of these broadcasters are regular Internet users, which not only explains varying degrees of video quality, but also the fact that much of the material broadcast via Veetle seems strangely familiar.

A number of users utilize the service to showcase their favorite movies and TV shows, and you'll also find the occasional sports event relayed from TV broadcasters or cable networks. Hicks told me that the company adheres to the DMCA and immediately removes content upon request. Of course, piracy is a touchy subject when it comes to live streaming, and one can only imagine that Veetle might feel more heat once it moves further into the spotlight.

But the spotlight is exactly where Veetle wants to be, with hopes to sign up more content partners soon. Of course, it's not the first company to use P2P for live streaming, and it likely won't be the last. BitTorrent's founder Bram Cohen is working on a P2P streaming protocol as well, and the company is poised to unveil a live streaming offering any day now. Veetle is following these developments with interest, but believes it's positioned well with its own technology. "It would take a long time to replicate it," said Hicks.

Boxee Readies Its Set-Top Box

Excerpted from NY Times Report by Nick Bilton

Boxee, software that allows users to watch movies and TV shows from the web on a television, is preparing to introduce a set-top box (STB) in a few months.

This video shows Zach Klein, who oversees design at Boxee, talking about the box and the production methods the company is using to make it.

Boxee announced the new STB late last December. Boxee is working with D-Link, a Taiwanese manufacturer of networking equipment, to build the box, which is expected to go on sale for around $200.

World's First Pirate ISP Launching Soon

Excerpted from Tom's Guide Report by Kevin Parrish

What's a good way to thwart - and upset - the RIAA and MPAA? Create your own pirate-based ISP! Apparently that's what is taking place over in Sweden, thanks to the Swedish Pirate Party.

The move started as a means to keep The Pirate Bay (TPB) afloat when the group volunteered bandwidth. Then at the beginning of July, the Pirate Party actually began to host the site from within the Swedish Parliament using Parliamentary Immunity. Now the group is working with technology partners to actually launch what they call Pirate ISP. The new service will deliver broadband connections to customers who share the same interests as the Pirate Party.

"If you see something and you think it's broken you build a patch and fix it," said Gustav Nipe, Pirate Party member and CEO of Pirate ISP in an interview with TorrentFreak. "With that as a reference point we are launching an ISP. This is one way to tackle the big brother society. The Pirate ISP is needed in different ways. One is to compete with other ISPs, let them fight more for our Internet. If they don't behave there will always be someone else taking their share."

The ISP in itself is making a bold statement. According to Nipe, the service will offer maximum privacy for all of its customers. The ISP will also not allow the Swedish Government to monitor its users, and it will not retain logs of user activities. He also said that Pirate ISP will not respond to outside interference, especially threats made from the United States.

"They can bring on whatever they have, we will refuse to follow there," he said. "We don't agree with what they are saying and we don't agree with the laws they are making so if they have an issue with us, then we will have an issue - but that's it."

He also assured potential Pirate ISP users that they have tricks up their sleeves to fend off local authorities looking to snag a few file sharers. "It would be a pity to reveal all the tricks that we have, so we will save those for later," he added. "But we have ways to ensure that no customer should have to get a sad letter home from Henrik Ponten."

Coming Events of Interest

NY Games Conference - September 21st in New York, NY. The most influential decision-makers in the digital media industry gather to network, do deals, and share ideas about the future of games and connected entertainment. Now in its 3rd year, this show features lively debate on timely, cutting-edge business topics.

M2M Evolution Conference - October 4th-6th in Los Angeles, CA. Machine-to-machine (M2M) embraces the any-to-any strategy of the Internet today. "M2M: Transformers on the Net" showcases the solutions, and examines the data strategies and technological requirements that enterprises and carriers need to capitalize on a market segment that is estimated to grow to $300 Billion in the year ahead.

Digital Content Monetization 2010 - October 4th-7th in New York, NY. DCM 2010 is a rights-holder focused event exploring how media and entertainment owners can develop sustainable digital content monetization strategies.

Digital Music Forum West - October 6th-7th in Los Angeles, CA. Over 300 of the most influential decision-makers in the music industry gather in Los Angeles each year for this incredible 2-day deal-makers forum to network, do deals, and share ideas about the business.

Digital Hollywood Fall - October 18th-21st in Santa Monica, CA. Digital Hollywood is the premier entertainment and technology conference in the country, covering the convergence of entertainment, the web, television, and technology.

P2P Streaming Workshop - October 29th in Firenze, Italy. ACM Multimedia presents this workshop on advanced video streaming techniques for P2P networks and social networking. The focus will be on novel contributions on all aspects of P2P-based video coding, streaming, and content distribution, which is informed by social networks.

Fifth International Conference on P2P, Parallel, Grid, Cloud, and Internet Computing - November 4th-6th in Fukuoka, Japan. The aim of this conference is to present innovative research results, methods and development techniques from both theoretical and practical perspectives related to P2P, grid, cloud and Internet computing. A number of workshops will take place.

Copyright 2008 Distributed Computing Industry Association
This page last updated August 1, 2010
Privacy Policy