March 21, 2011
Volume XXXIV, Issue 6
Paramount to Distribute "The Tunnel" for Free on BitTorrent
Excerpted from Digital Media Wire Report by Mark Hefflinger
Paramount Pictures has partnered with BitTorrent to distribute its horror flick "The Tunnel" simultaneously on DVD and for free via the BitTorrent file-sharing network.
"Our experience with Paramount has been positive, and we're impressed with how forward-thinking they've been on considering our specific project," Enzo Tedeschi, the film's producer, told TorrentFreak.
"From day one we've maintained that 'The Tunnel' is not supporting or condoning piracy, but instead trying to incorporate a legitimate use of peer-to-peer (P2P) in our distribution strategy internationally."
Study Shows TV Streaming, Cord-Cutting on the Rise
Excerpted from Media Daily News Report by Joe Mandese
More than one-third (35%) of Americans ages 13-54 say they watch streaming video programming from a TV network, up from 29% in 2006, according to findings of an ongoing tracking study from Knowledge Networks.
The study, TV's Web Connections, also found that 5% of the population (or 17% of those who watched network programming online) said they have "reduced or eliminated regular TV service in the past year because of their Internet-enabled viewing."
This figure is up from 9% of streaming/download viewers in 2009 (or 3% of the total population).
Cloud Music Services to Hit 161 Million Subscribers by 2016
Excerpted from Digital Media Wire Report by Mark Hefflinger
Cloud-based music streaming services, such as those offered by Rhapsody and Spotify, will become "a more important form of access to music than owning albums or songs" by 2016, according to a report from market researcher ABI Research.
Driven by the growing use of smart-phones, ABI projects the number of subscribers to cloud-based music services will grow at an annual rate of nearly 95%, and exceed 161 million in 2016.
The number of subscribers to cloud music services is expected to top 5.9 million by the end of this year. Sometime next year, ABI analyst Aapo Markkanen predicts, "the Asia-Pacific area will become the largest regional market for mobile music streaming."
Prices for cloud music services are expected to gradually decline as they reach mass markets. "Forecasts of declining prices are based on the assumption that the rights-holders will lower their royalty demands," said ABI's Neil Strother.
"Record labels and collecting societies should not overplay their hands when it comes to royalty issues. If consumers do not have convenient and affordable legal alternatives, they will simply enjoy their music by other means."
Forecast: IPTV Subs Will Hit 131 Million by 2015
Excerpted from Media Daily News Report by Wayne Friedman
A new report from Pyramid Research says Internet-protocol television (IPTV) subscriptions will reach 131.6 million globally by year-end 2015. The company estimates that worldwide IPTV subscriptions stood at 46.2 million at the end of 2010. At that pace, the business will see a compounded growth rate of 23% per year.
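As a quick back-of-the-envelope check of the implied growth rate (a calculation added here for illustration, not part of the Pyramid report):

# Check the compound annual growth rate implied by the two forecast endpoints.
base_2010 = 46.2         # million IPTV subscriptions at end of 2010
forecast_2015 = 131.6    # million IPTV subscriptions forecast for year-end 2015
years = 5

cagr = (forecast_2015 / base_2010) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")                        # roughly 23%

projected = base_2010 * 1.23 ** years
print(f"46.2M at 23%/yr for 5 years: {projected:.1f}M")   # roughly 130M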
Much of this is coming from big growth in Asia, especially the single largest and fastest-growing market: China. The Cambridge, MA-based researcher says China is on pace to surpass France as the largest IPTV market by the end of 2011.
China Telecom is already the biggest worldwide single provider of IPTV. Pyramid says more than half of the worldwide IPTV subscriptions will be coming from Asia in five years.
By way of comparison, there are 6.9 million IPTV subscribers in the US, per SNL Kagan - around 15% of the global total of 46.2 million IPTV subscriptions.
Other recent reports regarding IPTV growth have been more modest.
UK-based IMS Research, which has called IPTV "the platform of the future," says subscriber growth in China, Latin America, and Eastern Europe will bring worldwide IPTV subscribers to 70 million in five years. It puts the current market at around 36.5 million.
Report from CEO Marty Lafferty
We were excited this week by Octoshape Solutions: Breaking the Constraints of Scale, Quality, Cost, and Global Reach in Content Delivery. This white paper presents a superb example of how distributed computing technologies can be innovatively developed, integrated, and deployed to bring real breakthroughs that offer enormous benefits to high-value video content distribution.
Until now, content delivery over the Internet has been governed by one fundamental rule that has shaped the architecture, scale, quality, and underlying economics of distribution: the farther the consumer is from the streaming server, the lower the quality of the video received.
Octoshape has defined a new set of rules for content delivery that, without hyperbole, effectively explode the limitations this rule has long imposed on the content distribution ecosystem. The company has raised the bar for quality, scale, cost, and global reach in media delivery over the Internet.
The most popular protocol used to deliver media over the Internet today is HTTP. The component of HTTP that binds distance to quality is TCP, the underlying transport protocol, which uses acknowledgment-based mechanisms to track data and ensure that whatever the sender transmits, the receiver ultimately receives.
This reduces the complexity required at the application level so it doesn't have to worry about missing video. This reliable transport mechanism clearly has value, but it unfortunately comes with a cost.
The TCP congestion window starts small and grows until a packet is lost, in an attempt to reach maximum throughput. A lost packet is treated as a sign of congestion, and the algorithm responds by drastically reducing the window size to avoid it.
This immediately reduces the effective throughput. The TCP window then begins to grow again until the next congestion event, so the delivered throughput oscillates within the available capacity rather than holding steady. This is fundamentally why TCP-based technologies cannot sustain consistent, TV-quality bit-rates.
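To make this dynamic concrete, here is a minimal, idealized sketch of additive-increase/multiplicative-decrease congestion control in Python; it illustrates the general principle only, not any specific TCP variant and certainly not Octoshape's code, and the capacity and window numbers are arbitrary.

# Minimal sketch of TCP-style additive-increase/multiplicative-decrease (AIMD).
# The congestion window grows steadily until a loss occurs, then is cut sharply,
# which is why delivered throughput oscillates instead of holding a constant,
# TV-like bit-rate.
def simulate_aimd(capacity=100, rounds=60, increase=5, decrease_factor=0.5):
    window = 10                  # starting congestion window (arbitrary units)
    throughput = []
    for _ in range(rounds):
        if window > capacity:    # exceeding capacity stands in for packet loss
            window = int(window * decrease_factor)    # multiplicative decrease
        else:
            window += increase                        # additive increase
        throughput.append(min(window, capacity))
    return throughput

if __name__ == "__main__":
    for value in simulate_aimd():
        print("#" * (value // 2))    # crude text plot of the oscillation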
There are two main architectures currently deployed by CDNs to address this problem: deploying edge hosts and deploying edge networking. The first method is to deploy a huge distributed host infrastructure at the edge of the Internet, placing servers very close to end-users.
The second approach is to build a peering network directly to the edge networks. The effect this has is reducing the router hops across the normal Internet backbone, thus making a more centrally located set of hosts "look" closer from a latency perspective than they normally would.
The common component of both of these methods is that they require a significant amount of capital and operational expense. The constraints are real, and they hold the scale of Internet delivery back to whatever level of capital or operational expense CDNs are willing to invest and content providers are willing to accept.
Traditional streaming technologies are at the mercy of the fluctuating bandwidths inherent to TCP or HTTP based technologies. These technologies must make significant tradeoffs and work-arounds to make up for this deficiency.
Adaptive bit-rate technologies were born as a trade-off, sacrificing video quality to reduce buffering and slow start-up times. These technologies shift the video quality many times during a session, as often as every two seconds, to adapt to the throughput currently available between the user and the streaming server. The real problem with this approach is that constantly shifting quality is not acceptable for an experience displayed on the television.
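A simplified sketch of the adaptive bit-rate idea is shown below; the rendition ladder, the safety margin, and the throughput samples are invented for illustration and are not drawn from any particular player.

# Illustrative adaptive bit-rate (ABR) selection loop. Every few seconds the
# player re-estimates throughput and picks the highest rendition that fits,
# which is why picture quality can visibly shift many times during a session.
RENDITIONS_KBPS = [400, 800, 1500, 3000, 6000]    # hypothetical bit-rate ladder

def choose_rendition(measured_kbps, safety_margin=0.8):
    """Pick the highest bit-rate that fits within the measured throughput."""
    budget = measured_kbps * safety_margin
    suitable = [r for r in RENDITIONS_KBPS if r <= budget]
    return suitable[-1] if suitable else RENDITIONS_KBPS[0]

# Example: throughput samples taken roughly every two-second segment.
for sample_kbps in [5200, 4100, 1200, 900, 2600, 5800]:
    print(sample_kbps, "kbps measured ->", choose_rendition(sample_kbps), "kbps")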
Another approach to counteracting the inherent variable throughput profile of TCP-based technologies is opening multiple HTTP connections at the same time. This approach attempts to increase the actual throughput by parallelizing the flow of traffic. While at low scale this can have an additive effect on the throughput profile of a session, at scale it exacerbates congestion on the Internet and on the streaming servers themselves.
Another drawback to HTTP based technologies is that they are not actually streaming technologies at all. They are simply progressive download algorithms that split the stream into many thousands of physical files, and download those files as fast as they can.
Instead of a smooth flow of data, the network sees hundreds of thousands of small spikes of bandwidth use. As the event grows in size, these spikes grow in volume and become very difficult for the streaming infrastructure to manage.
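The segmented-download pattern described here can be sketched as follows; the segment URL template and naming are hypothetical, and a real player would pace itself against a playback buffer rather than downloading flat-out.

# Generic sketch of segmented HTTP delivery (the pattern the report describes):
# the "stream" is really thousands of small files fetched as fast as possible,
# so the network sees bursts of traffic rather than a smooth flow.
import time
import urllib.request

SEGMENT_URL = "https://example.com/live/stream_{index:05d}.ts"   # hypothetical

def fetch_segments(start_index, count):
    for index in range(start_index, start_index + count):
        url = SEGMENT_URL.format(index=index)
        began = time.monotonic()
        with urllib.request.urlopen(url) as response:    # one burst per segment
            data = response.read()
        elapsed = time.monotonic() - began
        print(f"segment {index}: {len(data)} bytes in {elapsed:.2f}s")
        # A real player would hand `data` to the decoder and wait on the
        # playback buffer here instead of immediately requesting the next file.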
Octoshape's technology eliminates the need for this onerous and resource-devouring process for content creators, broadcasters, and aggregators worldwide.
At its core, Octoshape solves the problems that traditional Internet video delivery technologies have today: variable throughput; distance and geography constraints; poor performance in congested, last-mile, and mobile networks; and traffic distribution scale models that are unsustainable because of capital and operational costs.
One of the keys to the constant quality Octoshape provides over best-effort networks lies in the core algorithms employed in the transport. Octoshape's core transport approach uses a unique, resilient-coding scheme inside a UDP transport.
This approach enables the Octoshape client on the end-user's viewing device to tune into multiple streamlets at once, with these sources transparently prioritized based on quality. If a streamlet source is behind a congested route or goes offline for some reason, the system pulls in other stream sources to take its place.
The underlying transport optimizes throughput over UDP. The resiliency normally provided by TCP is instead supplied by Octoshape's resilient coding scheme, which relieves the client of the overhead that TCP-style reliable delivery would otherwise impose.
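As a generic illustration of packet-level resilient coding (and emphatically not Octoshape's proprietary scheme), the toy example below sends one XOR parity packet per group of data packets so that a single loss can be repaired without a retransmission round trip.

# Toy packet-level parity coding: one XOR parity packet per group lets the
# receiver rebuild any single missing packet in that group locally.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(group):
    """Build one parity packet for a group of equally sized data packets."""
    parity = bytes(len(group[0]))
    for packet in group:
        parity = xor_bytes(parity, packet)
    return parity

def recover(received, parity):
    """Rebuild a single missing packet (marked None) from the parity packet."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        raise ValueError("this toy scheme repairs exactly one loss per group")
    rebuilt = parity
    for packet in received:
        if packet is not None:
            rebuilt = xor_bytes(rebuilt, packet)
    return missing[0], rebuilt

group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = make_parity(group)
index, packet = recover([b"AAAA", None, b"CCCC", b"DDDD"], parity)
print("recovered packet", index, packet)    # -> recovered packet 1 b'BBBB'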
In the Octoshape scheme, the outbound stream from the encoder is sent to a local processor called the Octoshape broadcaster. This software processes the stream and sends it in the Octoshape throughput optimized protocol to the Octoshape ingest servers in the cloud.
This underlying resilient transport approach creates a constant bit-rate, TV-like experience. The UDP resilient flow does not have the variable characteristics of a normal TCP flow. Therefore, while Octoshape offers multi-bit-rate technology, it does not need to rely on it: once a user is matched with a bit-rate, the user stays there.
Since the enabling Octoshape technology is multi-path, it acts as a smooth and easy back-off mechanism as the load increases in the last mile. If a link becomes congested, Octoshape notices the increasing jitter, packet loss, and latency.
The technology moves traffic off of an affected link to other less-congested ones. In the last mile, this even load balances the traffic inbound to the last mile, opening up and leveraging the capacity available on all the pipes, instead of just congesting one pipe like traditional CDN technologies.
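The source-selection behavior described in these two paragraphs might be sketched as follows; the health metrics and scoring weights are invented for illustration and do not reflect Octoshape's actual logic.

# Rank candidate stream sources by observed link health and shift traffic away
# from congested ones. The scoring weights here are arbitrary.
from dataclasses import dataclass

@dataclass
class SourceStats:
    name: str
    packet_loss: float    # fraction of packets lost, 0.0 to 1.0
    jitter_ms: float      # observed jitter in milliseconds
    latency_ms: float     # round-trip latency in milliseconds

def congestion_score(s: SourceStats) -> float:
    """Lower is better; loss is weighted most heavily."""
    return s.packet_loss * 1000 + s.jitter_ms * 2 + s.latency_ms

def prioritize(sources):
    return sorted(sources, key=congestion_score)

sources = [
    SourceStats("cloud-eu-1", packet_loss=0.001, jitter_ms=4, latency_ms=90),
    SourceStats("cloud-us-2", packet_loss=0.03, jitter_ms=25, latency_ms=60),
    SourceStats("peer-local", packet_loss=0.0, jitter_ms=2, latency_ms=15),
]
for s in prioritize(sources):
    print(s.name, round(congestion_score(s), 1))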
These core innovations have paved the way for dramatic architectural improvements, enabling distribution methods over the Internet that were previously impractical. Two of these innovations are Octoshape's Cloudmass service and the company's suite of multicast technologies.
The Cloudmass technology is an extension of the Octoshape deployment and provisioning technology. As load increases, Octoshape can provision resources around the globe using the APIs of multiple cloud service providers in real time. As these sources come online, Octoshape client technology sees them as valid sources for building a resilient mesh of streaming sources.
Since Octoshape has broken the relationship between distance and quality, it does not matter in which cloud, or in which region of the world, these cloud resources are provisioned.
It does not matter to the Octoshape infrastructure if one cloud becomes overloaded, or if there is a fiber cut to a particular datacenter, or if a specific rack of computers loses power. The Octoshape system is resilient to these types of glitches in the network. The Octoshape software was designed to run on a pool of unreliable resources globally.
As the event cools down, the resources are closed down dynamically. The operative concept here is that Octoshape Cloudmass can dynamically provision and activate global resources across several clouds, without all the traditional capital expenditure, deployment, coordination, and time required to facilitate events of this size.
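A conceptual sketch of this kind of demand-driven, multi-cloud provisioning appears below. The CloudProvider class, its methods, and the capacity figure are hypothetical placeholders, not Octoshape's software or any real cloud vendor's API.

# Sketch of scaling a pool of streaming nodes up and down across clouds as an
# event ramps up and cools down. All names and numbers are illustrative.
class CloudProvider:
    def __init__(self, name, capacity_per_node=2000):
        self.name = name
        self.capacity_per_node = capacity_per_node    # viewers per node (assumed)
        self.nodes = 0

    def provision(self, count):
        self.nodes += count
        print(f"{self.name}: +{count} node(s), now {self.nodes}")

    def release(self, count):
        released = min(count, self.nodes)
        self.nodes -= released
        print(f"{self.name}: -{released} node(s), now {self.nodes}")

def rebalance(providers, expected_viewers):
    """Grow or shrink the global node pool to match expected demand."""
    per_node = providers[0].capacity_per_node
    needed = -(-expected_viewers // per_node)          # ceiling division
    delta = needed - sum(p.nodes for p in providers)
    if delta > 0:
        # naive placement: put the new nodes on the least-loaded cloud
        min(providers, key=lambda p: p.nodes).provision(delta)
    elif delta < 0:
        # event cooling down: release nodes wherever they are running
        to_release = -delta
        for provider in sorted(providers, key=lambda p: p.nodes, reverse=True):
            take = min(provider.nodes, to_release)
            if take:
                provider.release(take)
                to_release -= take

clouds = [CloudProvider("cloud-a"), CloudProvider("cloud-b")]
for viewers in [10_000, 150_000, 30_000]:    # event ramps up, then cools down
    rebalance(clouds, viewers)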
The impact of this technology, combined with the abundance of cloud-based compute and network resources, is nothing less than disruptive to the current environment. It rips down the barriers to entry that CDNs using traditional technologies have enjoyed because of the relationship between distance and quality.
The cloud provides a unique opportunity for Octoshape where traditional technologies cannot perform. Clouds are inherently centralized; they are often shared, undedicated resources; and they are often not designed for high-throughput services like video streaming.
This is problematic for TCP-based streaming technologies, as the clouds are not fundamentally designed to solve for the quality aspect of video delivery.
It is also very expensive to stream data from the cloud. Even volume-based pricing in the cloud is still an expensive proposition today. Fortunately, this is an area that Octoshape has uniquely solved with multiple approaches for efficiently moving video to the last mile without pulling all the video from the origin streaming servers.
Octoshape's suite of three multicast technologies - native source-specific multicast, automatic multicast tunneling, and Octoshape simulated multicast - provides the magnification effect that enables the vast impact of the Cloudmass technology.
In the Octoshape multicast system, the process starts with a standard off-the-shelf encoder. Octoshape supports major video formats such as Flash RTMP, Windows Media, and MPEG2-TS.
Octoshape takes the stream and applies the throughput optimization technology to the stream to improve the Internet path between the encoder and the Octoshape cloud. Once in the cloud, the stream is ready for distribution.
In the simulated multicast model, the Octoshape-enabled media player tunes to an Octoshape stream. The Octoshape server complex in the cloud immediately sends instant stream start data down to the last mile, enabling the video to begin playing.
The Octoshape system then begins sending a list of valid sources, enabling the client to create a resilient mesh of stream sources. As other clients begin to tune into the stream, the Octoshape system adds them to the valid resource pool that is communicated to other clients.
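A toy sketch of the source-pool bookkeeping described here follows; the data structures, the sample size of three, and the policy of offering a random subset are invented for illustration, not Octoshape's actual protocol.

# As clients tune in, they become candidate sources for other clients.
import random

class SourcePool:
    def __init__(self, seed_servers):
        self.sources = set(seed_servers)    # cloud ingest/edge servers

    def client_joined(self, client_id):
        """Hand the newcomer a sample of sources, then add it to the pool."""
        offered = random.sample(sorted(self.sources), k=min(3, len(self.sources)))
        self.sources.add(client_id)
        return offered

    def client_left(self, client_id):
        self.sources.discard(client_id)

pool = SourcePool(["ingest-1", "ingest-2"])
print(pool.client_joined("viewer-a"))    # e.g. ['ingest-1', 'ingest-2']
print(pool.client_joined("viewer-b"))    # may now include 'viewer-a'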
One distribution option has Octoshape inject the stream into the native multicast cloud of a last-mile provider. Octoshape provides a piece of software to the provider that resiliently pulls a stream, or set of streams, into the last mile and injects it into the native multicast environment of the provider.
In cases of packet loss, the cloud sources are reprioritized to fill the gaps. In this case, Octoshape is transparently managing cloud delivery and native multicast sources in parallel.
Automatic multicast tunneling (AMT) is another option for efficiently moving video data to the edge of the network in instances where native multicast is not enabled. AMT is a multicast tunneling process built into router code that can bridge a multicast and non-multicast domain. It can extract one copy of the video into the last mile, and serve multiple copies as a relay from there.
If bits are dropped along the way, the Octoshape client fills the holes by drawing from the cloud.
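The gap-filling idea can be sketched generically as follows: the client tracks sequence numbers on the multicast feed and requests anything missing from a unicast cloud source. The fetch_packet callback is a hypothetical placeholder, not a real Octoshape or AMT interface.

# Detect gaps in the received multicast sequence and repair them over unicast.
def find_gaps(received_sequence_numbers):
    """Return the sequence numbers missing between the lowest and highest seen."""
    seen = set(received_sequence_numbers)
    low, high = min(seen), max(seen)
    return [n for n in range(low, high + 1) if n not in seen]

def repair_from_cloud(missing, fetch_packet):
    """Fetch each missing packet over unicast; fetch_packet is hypothetical."""
    return {n: fetch_packet(n) for n in missing}

received = [101, 102, 104, 105, 108]
print(find_gaps(received))    # -> [103, 106, 107]
print(repair_from_cloud(find_gaps(received), fetch_packet=lambda n: f"packet-{n}"))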
In summary, Octoshape has created the most efficient transport protocols for the delivery of constant bit-rate content across best-effort networks such as the Internet, fixed wireless, and mobile infrastructures. The technology uses standard media formats and standard media players.
The transport protocols eliminate the traditional barriers to optimal streaming of media, the chief among them being the relationship between distance from the streaming server and the quality of the stream.
With traditional CDN technologies, if quality is a fixed parameter, this relationship creates a floor for the cost of goods sold (COGS) that cannot be overcome regardless of economies of scale.
This is how Octoshape technologies usher in a new paradigm of quality, scale, and economics for TV-quality video delivery over the Internet. The technology enables the use of cloud aggregation techniques, multi-bit rate technology, and multicast distribution strategies not previously achievable with traditional technologies.
The resulting impact takes quality and scale up to a level unreachable by alternative technologies, and brings COGS lower than any other technology can comparably reach.
This disruptive paradigm will help usher in the next generation of TV services by enabling a new range of business models and consumer offerings. For more information, visit www.octoshape.com and plan now to attend CONTENT IN THE CLOUD at NAB on April 11th, where Octoshape's US GM Scott Brown will present the closing keynote address. Share wisely, and take care.
Japan Earthquake: Tech Volunteers, Companies Rally Response
Excerpted from CIO Insight Report by Susan Nunziata
As the humanitarian crisis in Japan continues to unfold in the wake of a 9.0-magnitude earthquake that struck March 11th, volunteer technologists from around the globe are coming together to offer help.
The earthquake and its aftershocks, as well as a subsequent tsunami, have devastated much of Japan and sparked crises at many of the nation's nuclear power plants.
Crisis Commons reports that more than 100 technology volunteers have signed up to lend their expertise to disaster response and recovery efforts. The organization is also providing additional support in the mobile and GIS areas through collaboration with Appcelerator's mobile development community and GISCorps. Crisis Commons says more volunteers are needed - especially those with technical skills as well as those who can provide search, translation, writing, and research skills.
Since March 11th, Crisis Commons volunteers from around the world have been collecting information and data sets in support of a UN OCHA information-gathering request. Hundreds of entries to the Crisis Commons Wiki have included data sets such as KML files and resources such as road and transportation data, the organization reports.
NetHope has been collaborating with volunteer technology groups, including Crisis Commons, working on information and data sharing activities, providing guidance on what kind of information will be useful for the response teams. Through member collaboration and by facilitating public-private partnerships with major technology companies, foundations and individuals, NetHope helps its members use their technology investments to serve people in the most remote areas of the world.
Several NetHope member organizations, which include the International Federation of Red Cross and Red Crescent Societies, Save the Children, World Vision, Oxfam, Catholic Relief Services, Mercy Corps, and Habitat for Humanity, are involved in the Japan earthquake response, according to the organization's website.
A NetHope report issued March 12th notes that undersea telecommunication cables in and out of Japan seem to have mostly survived. Mainland Chinese carrier China Unicom said two or three cables between Japan and China have been damaged, but traffic was being routed around the breaks. The quake appears to have damaged the Asia Pacific Cable Network 2, which is owned by a consortium of 14 telecom operators, led by AT&T.
NTT DoCoMo, KDDI, and Softbank Corp - the three largest mobile-phone carriers in Japan - said their services were disrupted across many regions. According to a March 14th NetHope report, mobile services remain very difficult to use in the affected areas, and most relief teams are relying on satellite phones as their only reliable means of communication. NetHope is working with the US State Department, FCC, and Global VSAT Forum to clarify the process for importing communication equipment. Japan's Ministry of Communications is advising relief teams to use only satellite equipment that works with Inmarsat or Iridium terminals.
Internet traffic to and from Japan seems not to have been affected, and many people have used the Internet, including Skype and social media, to communicate with each other and outside the country.
CIO Insight's sister publication eWeek reports that it isn't immediately known how many IT facilities or data centers were washed away in the disaster. The mere fact that this horrific crisis happened serves to remind IT managers about their own business continuity systems and to consider how well-prepared they are for such an event.
A fact of human nature is that people become complacent as time passes without a real disaster alert affecting an IT system. An event like the March 11th quake ostensibly should serve to wake up those who might not have been testing their systems regularly - or scare those who, in fact, have no backup systems in place at all.
Another fact of human nature is that there is always someone waiting in the wings to exploit a disaster. Within hours of the devastating earthquake and tsunami, cyber-criminals had poisoned search results based on the Japan disaster with malicious links. For example, users searching on "most recent earthquake in Japan" may encounter some malicious links to fake anti-virus software, Trend Micro researchers said March 11th. Malware writers used black-hat search engine manipulation techniques to push these links to the top of the search results, according to a post on the company's Malware Blog.
There are plenty of positive responses as well. For example, all four major US wireless carriers are enabling customers to send free texts to aid organizations. AT&T and Verizon are additionally offering free calling and texting to Japan.
AT&T customers can also text "redcross" to 90999 to make a $10 donation to the Red Cross' relief efforts in Japan, and through March 17th can view TV Japan, the 24-hour Japanese news channel available to U-verse TV subscribers, free of charge.
Verizon, Sprint, and T-Mobile are waiving text-messaging fees for customers donating to disaster-relief organizations.
Google whipped up one of its customary crisis-response websites to provide support information for those affected by disaster. The site includes emergency lines, sources for alarms and warnings, such as the Japan Meteorological Agency Tsunami Warnings/Advisories, a disaster bulletin board and even train information to help people evacuate. A "person finder" tool helps people look for family and friends separated by the disaster. Google Maps and YouTube videos also chart the quake's path of destruction.
Major technology companies based in Japan are assessing the full impact of the disaster on their operations. Many high-tech manufacturers in Japan have had to stop production to carry out safety checks. The prospect of rolling blackouts means further interruptions are likely over the coming weeks. Sony, Panasonic, Toshiba, and Canon are among the companies affected.
There could be significant near-term effects on the semiconductor industry, according to analysts. Japan and Taiwan account for a huge portion of global semiconductor manufacturing, and even the smallest amount of downtime could have a large impact on chip supply and prices, the analysts said in various reports.
More than 40% of the NAND flash memory chips and about 15% of the global DRAM supplies are made in Japan, which also is a key source of chips that support such booming consumer electronics devices as smart-phones, tablets, and PCs, Jim Handy, an analyst with semiconductor market research firm Objective Analysis, said in a March 11th report.
Raymond James Equity Research wrote in a media advisory March 14th that most DRAM players have ceased "spot" price quoting activity since March 11th as a result of uncertainty regarding manufacturing-equipment damage, power disruptions and raw-wafer supply disruptions.
Verizon Executive Takes a Panoramic Stance on Cloud Deployment
Excerpted from Connected Planet Report by Susana Schwartz
Whether offering managed or unmanaged cloud services, telecom providers have a huge value proposition.
Verizon wants to pursue more than one niche in the cloud computing market, and will hence build a full range of options that offer managed and unmanaged infrastructure, on top of which the company will layer platform applications.
Connected Planet talked to Jeff Deacon, Managing Director of Verizon's Enterprise Cloud Services, about what telecoms bring to the table in terms of differentiation when competing with the likes of Microsoft, Google, Amazon and other cloud providers.
Connected Planet: What differentiates a telecom like Verizon from other cloud providers today?
Jeff Deacon: To start, the end-to-end SLAs on application performance are a key differentiator for telcos that can boast core assets such as hundreds of data centers around the world - necessary as a strong foundation for cloud service infrastructure. Also, owning the global IP networks means telecom operators can help enterprises migrate applications to the cloud with real SLA guarantees. Only a cloud provider that has control of the data center and everything in the data center (as well as the networks underlying those data centers and connecting the enterprises) can offer end-to-end SLA capabilities for mission-critical, heavy-duty applications critical to enterprise businesses.
Additionally, telcos have been doing metering and billing for decades, and that experience now enables the usage-based capabilities necessary for measuring, charging, and billing for what is actually used in a cloud environment. The back-office systems have to recognize the different types of cloud consumption and move the necessary information into billing and charging systems so customers know exactly what they used and what they are being charged for.
For example, today, we monitor usage on a daily basis when it comes to compute memory and storage, and in a couple of months, we will actually take that down to an hourly level so customers can see what they used on a more granular level.
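As a rough illustration of the kind of roll-up such hourly metering implies (the record format, resource names, and figures below are invented, not Verizon's systems):

# Roll raw usage samples up into hourly records per customer and resource.
from collections import defaultdict
from datetime import datetime

samples = [
    # (timestamp, customer, resource, amount used)
    (datetime(2011, 3, 21, 9, 5), "acme", "vcpu_hours", 0.25),
    (datetime(2011, 3, 21, 9, 35), "acme", "vcpu_hours", 0.25),
    (datetime(2011, 3, 21, 9, 40), "acme", "storage_gb_hr", 120.0),
    (datetime(2011, 3, 21, 10, 10), "acme", "vcpu_hours", 0.25),
]

hourly = defaultdict(float)
for ts, customer, resource, amount in samples:
    bucket = ts.replace(minute=0, second=0, microsecond=0)
    hourly[(customer, resource, bucket)] += amount

for (customer, resource, bucket), total in sorted(hourly.items(), key=str):
    print(customer, resource, bucket.isoformat(), total)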
Connected Planet: Outside of the telecom market, what's out there for enterprises today?
Jeff Deacon: For companies that have a "do-it-yourself" attitude, there are cloud services that offer what essentially amounts to a new breed of collocation solutions where they get unmanaged services that are best-effort in nature. While that may suit some enterprises' needs, they generally are lacking in terms of what is needed to move workloads in a robust and secure manner, and that is what our target customers want. We target true production applications that require 100-percent uptime. That means anything from a back-office ERP system like an SAP, to an e-commerce Web site that is the basis of a business and can't go down. When you have customers like airlines that do production reservations and ticketing that are mission critical to their business, there's no room for error.
And so, enterprises that consider security the number-one objection when moving enterprise workloads to the cloud will need the out-of-the-box monitoring, management, operating system patching, DoS mitigation and fundamental security around managed firewalls that only a telecom like us can offer. We have the heritage to actively do those things as we've been doing them for 15+ years. And we can do the security scans and audits on enterprise infrastructure that help us offer performance and SLA guarantees.
Of course, we do recognize there is a need for "unmanaged" cloud services as well, and we can address that with what are essentially our legacy managed hosting offerings. That's why we, along with others in the telecom industry, are participating in such projects as the VMware vCloud Initiative, which offers enterprises real SLAs around performance and QoS - something other unmanaged cloud services do not really offer.
Connected Planet: Why does Verizon pursue a hybrid approach to cloud?
Jeff Deacon: We can guarantee QoS for network connections into our infrastructure - something providers of Internet-based IP VPNs struggle to do. Though they can guarantee a certain level of security, they do not have control over QoS or latency because their traffic is routed over the Internet.
This is of particular interest for enterprises where security is a concern; they do not want to be forced to consume cloud services over the Internet, and that is where our extensive MPLS backbone comes into play in bridging enterprise WANs into the cloud. We can move their workloads to our cloud, and yet they appear to their IT departments as an extension of their existing data center footprints. With a hybrid cloud offering, our customers can leverage our servers and storage as though they were in a public cloud, but because we logically partition it off, customers access capacity over our MPLS connection.
Connected Planet: What is Verizon's strategy going forward?
Jeff Deacon: We're going to take our base infrastructure that we built for cloud and we will add additional functionality around platform-as-a-service (PaaS). We will continue to expand our software-as-a-service (SaaS) applications. We want to layer applications on top of our infrastructure-as-a-service (IaaS) platform so that we can offer SaaS on a per-user, per-month basis, as well as memory and storage for the platforms we roll out in the future.
Rather than focus on one niche, we want to compete against the platform players like Microsoft and Google, as well as against the unmanaged infrastructure players like Amazon. And, we will layer platform applications on top of what we have so that enterprises have a full range of options, as we have a very diverse base of customers that have different needs at different phases of maturity.
Cloud Computing: A Sustaining or Disruptive Innovation?
Excerpted from Network World Report by Bernard Golden
If you've read this blog over the past couple of years, it should be no surprise that I am a huge advocate of the theories of Clayton Christensen, author of The Innovator's Dilemma. Christensen and his book were brought to mind this week by the cover story in Forbes about his severe health problems, his experience with the US healthcare system, and his prescriptions for how to fix it.
Christensen posits two types of innovation: sustaining and disruptive. Sustaining innovation is that which extends existing technologies, improving them incrementally. As an example, at one point auto manufacturing moved from body-on-frame to unibody construction. Christensen points out that it is very difficult for a new market entrant to gain traction with an incremental innovation, since the market incumbents can easily incorporate the new technology while maintaining their other advantages like brand awareness, cost efficiency, and so on.
By contrast, disruptive innovation represents entirely new technology solutions that bring a new twist to an existing market -- typically at a far lower price point. Christensen offers numerous examples of disruptive innovation; for instance, transistor radios disrupted the existing market for vacuum tube-based radios. Christensen notes that typically, disruptive innovations come to market and are considered inadequate substitutes for existing solutions by the largest users of those solutions. Tube radio manufacturers evaluated transistor capability and found that transistors could not run table radios with large speakers that required significant power to generate sound volume.
Consequently, disruptive innovations must seek out new users who are, in Christensen's term, overserved by existing solutions and are willing to embrace less capable, cheaper offerings. Transistor radios were first embraced by teenagers who wanted to listen to rock and roll with their friends and wouldn't be caught dead listening to it in the company of their parents, at home, in front of the vacuum tube-powered table radio. They didn't mind that their cheap transistor radios sounded tinny. They were affordable and allowed teenagers to listen to their music in the company of friends.
The denouement to this dynamic is that disruptive innovations gradually improve over time until they become functionally equivalent to the incumbent technology, at which point they seize the market and consign the previous champion to the dustbin of history. One of the most poignant comments about this process I've ever read was the statement by a former Silicon Graphics executive lamenting how SGI was put out of business by the shift to x86-based graphics systems -- he said that everyone at the company had read The Innovator's Dilemma, but, even knowing the likely fate of the company if they didn't dramatically change direction, they were unable to do so, and inexorably found themselves in bankruptcy.
So is cloud computing a sustaining or disruptive innovation?
At first glance, one might evaluate it as sustaining. It is, after all, built upon the foundation of virtualization, an existing and widely applied data center technology. Cloud computing builds upon that foundation, adding automation, self-service, and elasticity. Certainly the plans by many existing virtualization users to create private cloud environments in their current data centers argues that cloud computing sustains existing solutions.
On the other hand, the initial entrant into the field was Amazon, via its Amazon Web Services offering. The fact that a new player -- one not even considered a technology vendor -- brought this capability to market might indicate that the technology should be evaluated as disruptive. Moreover, Amazon brought an entirely new payment model -- the pay-per-use, metered approach -- along with its offering. And it delivered the offering at heretofore unimaginable pricing levels -- mere pennies per hour for computing capacity. Amazon has since been joined by other cloud service providers offering similar cloud computing capabilities.
And, as Christensen notes about disruptive innovations being considered incapable of meeting incumbent requirements, the AWS offering is commonly described as insufficient for enterprise needs: lacking security, compliance certainty, and sufficiently strong SLAs, among other shortcomings.
My own view is that cloud computing is disruptive -- but to the users, not the providers of the technology. Organizations that run data centers and plan to implement private clouds will find that it is not enough to provide automated, self-service virtualization. Private clouds will need to offer the same level of scalability and platform services (e.g., highly scalable queuing functionality) as their public counterparts -- and will need to deliver it at the same kind of price points as they do.
A telling analysis of the most commonly cited shortcoming of public clouds -- security -- was shared with me by a cloud analyst at a leading firm. User concern about public cloud security, he said, drops away dramatically at around the two-year mark -- once the user gets familiar enough and comfortable with the security capability of the public provider. At that point, he stated, the user organization begins to strongly embrace the public option due to its ease of self-service, vast scalability, and low cost. Those organizations that reach that two-year milestone quickly turn their back on previous private cloud plans, concluding they are no longer necessary, given the increased comfort with the public option.
This tells me that the benchmark for private cloud computing will not be whether it is better than what went before -- the static, expensive, slow-responding infrastructure options of traditional data center operations. The benchmark will be the functionality of the public providers -- the agile, inexpensive, easily scalable infrastructure offered via gigantic server farms operated with high levels of administrative automation and powered by inexpensive commodity gear.
The challenge for internal data centers will focus on whether they can quickly enough transform their current practices, processes, and cost assumptions to meet the new benchmark offered by public cloud service providers.
SGI itself once dismissed the x86-based graphics offerings, characterizing them as slow and low quality; when they improved enough to meet the SGI offerings, the company had no response other than to gradually shrink and finally declare Chapter 11. Symbolically enough, the former Silicon Graphics campus is now home to Google, a leader in the new mode of delivering computing services. It will be interesting to see if internal data centers can avoid the fate of SGI and avert eviction by Google and its brethren public cloud providers.
Can Cloud Computing Save The American Economy?
Excerpted from Forbes Digital Download Report by Art Coviello
The American dream is in peril from the confluence of skyrocketing deficits, high unemployment, and the ticking time bomb of an aging baby boomer generation, with its coincident increase in the burden of entitlements as a percentage of GDP. For the first time, the next generation of Americans, our grandchildren, risk having a lower standard of living than we enjoyed. It is not a problem that can be remedied with tax increases and budget reductions. We will not save or cut our way back to economic prosperity.
The way forward is innovation. America must innovate its way out of economic stagnation and back to economic growth. As has been the case for the last 150 years, Americans have always responded well in a crisis and yet again, we are well positioned to lead the world out of this one. Want proof?
American businesses systemically and culturally react fast. Two years after the economic downturn began, the United States was generating 97% of its economic output with only 90% of the labor. This sort of gain in productivity ultimately translates into increased economic activity, the ability to pay down debt, and a higher standard of living for those of us who are employed. Unfortunately, it does not directly address the issue of unemployment.
The fact is that productivity gains from working harder can only take us so far. Innovation and technology can and must take us the rest of the way, creating new jobs and new industries. Our so-called "information economy," for example, is ripe for innovation. Today, all organizations are dependent on information technology. What makes me optimistic about the future is that we have not even begun to scratch the surface of all that can be accomplished by actually applying information technology pervasively.
We have spent trillions of dollars worldwide for the computers to create and process information, the networks to move it around, and the hardware to store it. But we are at a point where we spend 60 to 70% of IT budgets just to maintain those systems and infrastructures. No wonder progress in applying IT is so slow. This is the technology equivalent of every organization in the world, big or small, investing the capital and human resources to build and operate their own electricity-producing power plants.
But instead, picture a world where software platforms are available online and easily customizable. Picture a world where compute power is generated off site, available in quantities when and where you need it. And picture a world where information is safely stored, efficiently managed and accessible, when and where you need it.
These are cloud infrastructures. The economies of scale, flexibility, and efficiency they offer will not only save organizations massive amounts of capital and maintenance costs but also emancipate them to apply and use information as never before. This is an unbelievable opportunity to raise productivity while creating unprecedented opportunities for businesses and workers.
Now picture a health-care system where a doctor has medical records at his fingertips, can see x-rays with the click of a mouse, is able to learn and apply the latest diagnostic and surgical technique from anywhere in the world. Think of the efficiencies in hospital supply chains, the delivery of prescription drugs, the processing of billing and insurance claims, reductions in fraud, and the application of best practices for cost controls. The capacity for improvement is endless. As a matter of fact, these innovations are already being applied in isolated pockets. But for us to seize the opportunity before us it's imperative that we move from isolated centers of excellence to connected systems of excellence. Pick any industry and systemic improvements like these are available.
A new age of innovation and technology advancement is within our grasp - an opportunity for job creation, greater productivity, and economic growth. The time for cloud computing is now. We need government and industry to accelerate broad-scale adoption of cloud infrastructures so we can reap the rewards of a true information-based economy.
As I said at the outset, Americans respond well in a crisis. It is the nature of our society - egalitarian, free, open, and competitive - that makes us the most adaptive, inventive, and resilient country in the world. It is time again for us to lead.
Why It's Time to Embrace the Cloud Now: IDC
Excerpted from eWeek Report by Chris Preimesberger
IDC's frontman, Senior Vice President and Chief Analyst Frank Gens, sees a significant fork in the IT road in 2011 similar to one that happened 25 years ago.
Back in 1986, PCs and desktop computers were five years old and starting to work their way into daily use in dedicated enterprise networks and home offices, replacing typewriters and word processors.
The Internet to connect them all was still a decade away, but the groundwork was already being laid for it. "In 1986, mainframes and terminals were the standard. Coming up was a new class of end-user device (the PC) and new types of networks and computing platforms driven by the PC radically expanded the users - and uses - of IT," Gens told a full-house audience of about 1,200 in his keynote at the 46th annual IDC Directions conference at the San Jose Convention Center March 15th.
"IT companies looked at what was happening, made some strategic decisions and chose a direction. As you can imagine, some of them gauged what was happening correctly, and some did not. Now, 25 years later, we're again at a crossroads, and taking the correct path is as crucial now as it was then."
Gens, of course, was referring to traditional client-server computing versus new-generation IT based on on-demand software and services via the Internet, otherwise known as cloud computing. Gens and IDC are convinced that now is the time for IT device and component makers, system providers, software developers, and service specialists to embrace the new IT combination model of public/private/hybrid cloud systems, mobile devices, and on-demand services, and to prepare their products and services for buyers with these third-platform preferences.
The clear implication here: In 2011, it's either the cloud way or the highway.
Coming Events of Interest
Cloud Computing for Government - March 29th in Washington, DC. Special event at the National Press Club explores the US federal government's $76 billion IT spend annually on more than 10,000 different systems and how cloud computing will change that.
NAB Show - April 9th-14th in Las Vegas, NV. For more than 85 years, the NAB Show has been the essential destination for "broader-casting" professionals who share a passion for bringing content to life on any platform - even if they have to invent it. From creation to consumption, this is the place where possibilities become realities.
CONTENT IN THE CLOUD at NAB - April 11th in Las Vegas, NV. What are the latest cloud computing offerings that will have the greatest impact on the broadcasting industry? How is cloud computing being harnessed to benefit the digital distribution of television programs, movies, music, and games?
1st International Conference on Cloud Computing - May 7th-9th in Noordwijkerhout, Netherlands. This first-ever event focuses on the emerging area of cloud computing, inspired by some latest advances that concern the infrastructure, operations, and available services through the global network.
Cloud Computing Asia - May 30th - June 2nd in Singapore. Cloud services are gaining popularity among IT users, allowing them to access applications, platforms, storage, and whole segments of infrastructure over a public or private network. CCA showcases cloud-computing products and services. Learn from top industry analysts, successful cloud customers, and cloud computing experts.
Cloud Expo 2011 - June 6th-9th in New York, NY. Cloud Expo is returning to New York with more than 7,000 delegates and over 200 sponsors and exhibitors. "Cloud" has become synonymous with "computing" and "software" in two short years. Cloud Expo is the new PC Expo, Comdex, and InternetWorld of our decade.
CIO Cloud Summit - June 14th-16th in Scottsdale, AZ. The summit will bring together CIOs from Fortune 1000 organizations, leading IT analysts, and innovative solution providers to network and discuss the latest cloud computing topics and trends in a relaxed, yet focused business setting.