Distributed Computing Industry
Weekly Newsletter

September 26, 2011
Volume XXXVI, Issue 9


Verizon Predicts Cloud Services Market at $150 Billion in 2020

Excerpted from Bloomberg News Report by Maaike Noordhuis

Verizon Communications (VZ), the second-largest US phone company, expects that the total market for cloud computing services will grow to $150 billion by 2020 from about $10 billion now.

"We think we'll have a pretty big share of that," Kerry Bailey, Group President of Verizon unit Terremark Worldwide, said during a press meeting in Amsterdam today. "A lot of companies are moving into this space."

The market for cloud services will grow by $90 billion in the next four years, Bailey forecast. Verizon has spent "well over $2 billion" on cloud technology this year, after acquiring information-technology services company Terremark and CloudSwitch, a provider of software, according to Bailey.

Cloud computing allows users to store data and programs on offsite computers and access them via the Internet. Verizon will open its flagship data center for Europe in Amsterdam this week.

Cloud Computing May Be a Shot in the Arm Our Economy Needs

Excerpted from Forbes Magazine Report by Joe McKendrick

Economists and pundits have long feared the emergence of what they called "hollow corporations," or businesses that don't actually produce goods or services themselves, but instead act as brokers or intermediaries relying on networks of suppliers and partners. But now, surprisingly, successful businesses often are brokers of services, delivered via technology from providers on to consumers.

Where are these services coming from? Look to the cloud.

Yes, cloud computing enables cost savings - as companies can access technology and applications on-demand on an as-needed basis and pay for only what they use. And yes, this fosters greater agility, with less reliance on legacy IT assets. But the changes go even deeper than that. Consider the ways cloud computing is altering our business landscape:

"Loosely coupled" corporations: I don't think anyone should fear that our corporations are becoming "hollow." Rather, "loosely coupled corporations" may be a better way to describe what is happening. The term "loosely coupled" came into vogue with service-oriented architecture a few years ago, meaning an entity or system stands fine on its own, but when linked to other like systems, the magic happens. Cloud computing is paving the way for the loosely coupled company - which may be an entity that exists purely as an aggregation of third-party services, provided on an on-demand basis to meet customer demands. Most of these services will be passed through as cloud services, both from within the enterprise and from outside.

Blurring of IT consumers and providers: In the IT world, the divide has been very clear-cut: there were the vendors who provided technology products and services, and there were customers that purchased and used them. Cloud computing is blurring these distinctions. There's nothing stopping companies that are adept at building and supporting their own private clouds from offering these services to partners and customers beyond the firewall. In fact, many already do. Amazon was an online retailer that began to offer its excess capacity to outside companies. Even non-IT companies are becoming cloud providers. Cloud computing may finally give IT a way to become a profit center.

Start-ups on a dime: Let's face it, there's no point in investing $50,000 or more in servers and software when everything you need is right in the cloud. I like the story of GigaVox, a podcasting provider, which launched off of Amazon Web Services a few years back. Their start-up IT costs? About $80 a month, for everything from storage to back-end processing.

As Chris Sacca, a software startup investor and former Google executive, put it: "The biggest line item in software start-up companies now is rent and food. A decade ago, I don't think you could write a line of code for less than $1 million." As we ponder unemployment and underemployment in our economy, the availability of cheap cloud computing may be laying the groundwork for a start-up boom the likes of which we have never seen. This applies to departments of larger organizations as well, by the way. Designing new products without the need to go through corporate finance and IT approvals is definitely a great way to instill entrepreneurial spirit.

More software innovation: Even the smallest software firms - say a one or two-person shop - can sell services, or apps, and build a business on micropayments - earning a few cents or dollars per sale. We see this in action at the app stores, in which software authors can post their offerings for a wide audience and receive about 70% of the proceeds, with the app store taking the rest. A 16-year-old may be putting apps in the cloud that will be used by Global 1,000 companies, and, conversely, enlightened developers in those same companies may be distributing and selling their own apps to the rest of the world.

Rise of "micro-outsourcing": Cloud computing is essentially is a form of micro-outsourcing. The old model of outsourcing - in which multi-million-dollar contracts to run data centers or build platforms are awarded - is giving way to a much more fine-grained, incremental approaches. Companies from around the globe can quickly tap into services needed at the time they are needed. Again, cloud provides amazing opportunities for entrepreneurs or startups looking to support businesses that need additional support.

Cloud computing isn't revolutionary because it's changing the mode of technology delivery. The real revolution underway is that it is opening up new lines of business in information technology and service delivery - even among non-IT businesses.

Report from CEO Marty Lafferty

On Friday, at the Thomas Jefferson High School for Science and Technology in Alexandria, VA, President Obama signed into law the America Invents Act of 2011.

The law, which was supported by Microsoft and IBM but opposed by Apple and Google, takes a historic step in transitioning America from a "First-to-Invent" to a "First-to-File" country, which will bring the US more in sync with the patent application approach practiced by most other nations.

It's not clear, however, that the Act will actually speed-up the patent process - one of its intended benefits; it may actually expose inventors to new risks; and critics remain unsatisfied because it fails to mitigate the often lengthy and costly litigation required to resolve disputes over patent coverage.

Nevertheless, the measure contains a number of positives that can be built upon as a larger patent reform movement, hopefully, continues in a constructive manner. Follow-up is definitely needed on a number of fronts in this arena.

Significantly, this Act as passed was opposed by the National Small Business Association (NSBA), which said it was "strongly tilted in favor of large incumbent corporations."

"First-to-File" seems to imply, for instance, as outrageous as this may seem, that a cool new web app, launched by a modestly funded start-up without patenting it first, could be picked-off by a patent troll who would file to patent it and then actually sue its inventor.

The DCIA always recommends that its Member companies and DCINFO readers obtain their own legal counsel to advise them on matters such as the implications of this law, but common sense would suggest that with "First-to-File" in place, the very first action to take upon determining (e.g., through a search of prior art) that an invention appears to be patentable, now should be to file for a patent.

In order to protect inventors, a "File First" policy is necessary: filing should precede disclosing, implementing, or in any way offering an invention to third parties.

And if an initial filer is challenged within a year by third parties claiming to be co-inventors, who file a substantially similar patent application, the "inventorship" of the first application can be contested through a newly enacted "Derivation Proceeding."

On the positive side, the fact that the new law encourages earlier filing and therefore disclosure should be a good thing and support progress (versus keeping ideas secret); and disallowing unpublished prior art from preventing patents should simplify that aspect of the process.

Related to this, if an entity is already using an invention or method before another entity patents it, now it can continue doing so with immunity from patent infringement and without needing to license it from the filing party. This is true even if the initial usage is internal rather than in a commercial relationship with third parties so long as the implementation is continuously performed.

A new "Post-Grant Review (PGR)" provision permits challengers to file petitions for PGR within 9 months after the issue of a patent. PGRs are to be conducted by means of familiar legal procedures including depositions and discovery, with challengers required to show evidence of "un-patentability" (e.g., prior art, lack of written description, or non-enablement). Petitions are to be completed within 18 months. There are also protections against frivolous PGRs and incentives to promote settlement among disputing parties.

The law also updates and improves enforcement provisions against "false patent marking," which can unlawfully be done to deceive the public into believing that an item is patented or has a patent pending when in fact it does not.

And to accelerate the patent review and approval process, applicants can now pay an extra $4,800 for "Prioritized Examination" in order to obtain grants within approximately a year of filing, on top of the regular filing fees for a small or large entity, which currently range in the hundreds of dollars.

But back on the negative side, there is an immediate 15% increase in patent office filing fees along with greater diligence requirements and surveillance costs within relatively short deadlines.

The Act also creates additional work for the United States Patent and Trademark Office (USPTO), and creates a situation in which, depending on the circumstances, some patent applications will be covered by the new patent law and others will not be.

It also allows the USPTO to set new fees, which will probably result in further increases.

Intellectual property (IP) attorneys have been increasingly concerned that the USPTO review and approval backlog - now averaging 34 months and totaling over 1.2 million unresolved applications - is growing worse.

The Act creates a reserve fund to help with this, but requires the USPTO, which currently more than pays for itself through collected fees, specifically to ask Congress before being allowed to spend any of this money.

Congress has historically funneled USPTO funds to other projects, and lawmakers defeated amendments to the Act that would have prevented such "fee diversion."

Critics claim that Congressional appropriation and reallocation of USPTO fees since 1992 has crippled the Office's ability to keep pace with growth in applications, and therefore it's questionable whether the review and approval process will in fact be accelerated.

The DCIA urges DCINFO readers to convey to their Congressional representatives that, particularly in the current economic climate, Congress should now amend this Act to fund the USPTO adequately so as to fulfill the promise of the patent system as envisioned by our founding fathers - to promote the progress of science - and to increase the economic value of inventions to the US.

For additional background on the passage of this important law, which will have many ramifications in the distributed computing industry going forward, please read the Leahy-Smith America Invents Act and these references. Share wisely, and take care.

Appeals Court Arbitrarily Deciding What Is and What's Not Patentable

Excerpted from Techdirt Report by Mike Masnick

After the Supreme Court totally punted on the question of business model and software patents in the Bilski ruling, the courts have been a mixed bag. Without a bright-line rule, they're sort of fumbling around.

We recently wrote about one case that suggested that the courts might be much more willing to dump software patents, but other rulings are going in a different direction. The Electronic Frontier Foundation (EFF) has noted that a series of recent decisions from the Federal Circuit (CAFC) have basically left lawyers scratching their heads over what is and what is not patentable:

"Taken together, these post-Bilski cases confuse, rather than clarify, the standard for impermissible abstraction. In four cases (Bilski, Ultramercial, Classen, and CyberSource), two patents were too abstract (patents for hedging risks and detecting credit card fraud) and two were not (patents for showing ads before copyrighted content and devising immunization schedules).

For laypeople and attorneys alike, it is hard to understand why the latter two patents were any more concrete than the former. One might argue that the upheld patents required added complexity (computer programming and administering an immunization), but the abstract patents would likewise require additional steps to execute. What distinguishes those steps that are too abstract from those that are not?"

As James Bessen has said repeatedly, a working patent system would lead to clear boundaries. A broken patent system is one with ridiculously vague boundaries, because all that does is increase litigation.

The Supreme Court really should have made a clear ruling in Bilski. Instead, in many ways, the confusion and uncertainty are making the system worse, and just encouraging greater litigation.

Distributed Computing Makes Itself at Home

Excerpted from PC Magazine Report by Chandra Steele

A lot can be achieved by harnessing the power of many. The gaming community is used to applying this concept by teaming up for raids, but it recently got behind a bigger cause and advanced HIV and AIDS research by solving a puzzle that had frustrated scientists for years.

Distributed computing builds on what sounds like a philosophical idea with a practical one. By uniting the processing power of thousands, or even millions, of computers spread out over a distance, a distributed system acts as a decentralized supercomputer.

In some cases of distributed computing, volunteers contribute unused cycles from their idle computers. In others, companies own a stable of individual systems. Google uses distributed computing to produce search results (in one-fourth of a second) that are almost faster than a blink of an eye (which takes one-tenth of a second). An excerpt from its technology overview reads, "In addition to smart coding, on the back end we've developed distributed computing systems around the globe that ensure you get fast response times."

Cloud computing works as a reverse form of distributed computing, in which one system harnesses the aggregate power of a distributed network. But if cloud computing is a bright side of distributed computing, hacking can be considered the dark side. In a distributed denial-of-service (DDoS) attack, one main computer enslaves others into bombarding a site with so many requests that it shuts down. Even hacking groups themselves, like Anonymous and LulzSec, by virtue of being worldwide collectives of computing power with common missions, can be seen as a form of distributed computing.

Please click here for ten ways distributed computing processes and advances information, from organically mimicking the nervous system of fruit flies to programming stellar screensavers to search for stars.

Unleashing the True Power of Cloud Computing

Excerpted from Cloud Computing Journal Report by Bill Kalma

The first generation of cloud computing was revolutionary in that it added business value to organizations by reducing development time, eliminating the need to procure infrastructure, providing massive scaling potential, establishing scale through multi-tenancy, and by allowing IT people to focus on solving business problems versus technical ones. As cloud-based applications grew in prominence, they fought to achieve parity with legacy on-premise software solutions that were feature/functionality rich. These goals were achieved in short order, and thus the new generation of cloud solutions was born.

Web 2.0 goes far beyond parity with legacy systems and leverages the power of information sharing, collaboration, and the social grid to achieve value that would be impossible with a paradigm of pushing information out from a single location. The new age cloud applications are powerful because they break down silos and simplify the process of converting data into information. As Web 2.0 systems in the consumer space evolve, companies are taking notice and looking to unleash the same power in their enterprise applications.

Enterprises face a unique challenge when adopting a cloud strategy. They do not have the benefit of starting with a blank slate and building their applications from the ground up, but they want to take advantage of the efficiency gains and collaborative benefits. Instead, they must find the best way to harvest the investment they have made in existing databases and applications, while realizing a magnitude of change from the cloud. The secret to Web 2.0 - whether it be the development of new applications from scratch, or the marriage between new and legacy applications - is integration.

Systems integration comes in several forms, each of which has validity based on a particular use case. The goal is to establish awareness of the major types of integration such that they can be appropriately applied as the cloud presents possibilities that we previously thought were impossible. With no one-size-fits-all solution, the synergy between these approaches is where the power is unleashed. For the purposes of this article, the major types of integration are data integration, interoperability and UI mash-ups.

Data level integration is the traditional means of migrating data from end point to end point. This type of integration requires data replication, either in real time or on a defined interval.

The migration of data is done using custom-developed code or through ETL (Extract Transform Load) technologies that simplify the process of mapping data points and transforming disparate data sets from source to target. Countless permutations and topologies exist for this approach, but the key benefits that make this solution attractive are consistent.
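To make the data-level pattern concrete, here is a minimal Python sketch of the extract-transform-load flow described above, using SQLite as both endpoints. The table and column names are invented for illustration; a real deployment would more likely rely on an ETL tool or the provider's own connectors.

```python
# Minimal ETL sketch; "orders" and "customer_orders" are hypothetical tables.
import sqlite3

def extract(source_conn):
    """Pull the raw rows from the source system."""
    return source_conn.execute(
        "SELECT order_id, cust_name, amount_cents FROM orders"
    ).fetchall()

def transform(rows):
    """Map source fields onto the target schema (cents to dollars, names normalized)."""
    return [(order_id, name.strip().upper(), amount / 100.0)
            for order_id, name, amount in rows]

def load(target_conn, rows):
    """Write the transformed rows into the target data store."""
    target_conn.executemany(
        "INSERT OR REPLACE INTO customer_orders (order_id, customer, amount_usd) "
        "VALUES (?, ?, ?)",
        rows,
    )
    target_conn.commit()

def replicate(source_path, target_path):
    """One replication run; a scheduler would invoke this on a defined interval."""
    with sqlite3.connect(source_path) as src, sqlite3.connect(target_path) as dst:
        load(dst, transform(extract(src)))
```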

First, end-user access to data in the cloud-based system is accomplished through a single secured connection, which improves performance and reduces complexity. By avoiding the need to make calls to an external system (possibly an on-premise one), there is no need for call-outs or remote access through a firewall.

An additional benefit is that replicating data maximizes the control that the target system has over the data. The data can be manipulated in the transformation process so that it meets the schema of the target system to ease reporting and data manipulation activity at the application layer. This means that just the right amount of normalizing or de-normalizing of the data can be done at transfer time to meet the application's needs.

There are some inherent disadvantages of replicating data across systems. By definition, the duplication of data requires that the target system and the system of record be synchronized at some interval to close the gap on having "two versions of the truth" exist for end users.

In most cases, this is not real time, so each use case needs to be evaluated to balance latency with the overhead of synchronization.

By replicating data to a new data store, there is also the need to re-define security rules in the target environment; it would be a design misstep to allow users a workaround by making information available in one environment that is not available in another, but that means replicating security rules.

Finally, there is a cost to the development and maintenance of data replication solutions in the form of ETL tools, custom code maintenance, additional storage and, of course, the duplication of data management.

In the cloud space, there are many options for implementing data layer integrations that are not only powerful but also cost-effective when compared to their on-premise counterparts. Some players include Informatica, Cast Iron, Boomi.com, Talend, Pervasive, and Scribe.

Leveraging interoperability as part of an integration strategy unleashes the power of the cloud. Interoperability provides systems with the power to establish synergies by talking with one another through web services interfaces. Architected appropriately, interoperability between systems is a superior strategy over traditional data integration because it eliminates the need to manage duplicate data sets, eliminates the synchronization challenges, allows data to be accessed in real time and quickly aggregates data from multiple systems into information.

Integration through interoperability is lightweight and contextual because your cloud application only accesses the data that it needs from an external service when it needs it. Calls are made to these services to return information to the end user, who is shielded from the actual source of the data.

For example, suppose a cloud-based application holds a list of customers and it would be useful to aggregate as much data as possible on each customer. One option would be to subscribe to market data, load it into a local data store, and display it as part of your application.

With the interoperability of the cloud, it's possible to simply make calls to a third-party NASDAQ, Hoovers, or D&B service to bring back real-time customer data that you don't have to maintain. There are countless services available in the cloud that are more comprehensive and robust than anything we could develop ourselves.
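As a rough illustration of this call-out pattern, the Python sketch below fetches market data for a customer at request time instead of replicating it into a local store. The endpoint, query parameter, and response fields are hypothetical stand-ins, not the actual API of NASDAQ, Hoovers, or D&B.

```python
import requests

def enrich_customer(ticker: str) -> dict:
    """Fetch market data at request time rather than storing a local copy."""
    resp = requests.get(
        "https://market-data.example.com/v1/companies",  # hypothetical endpoint
        params={"ticker": ticker},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()
    # Hand back only the fields the customer screen needs; the end user
    # never sees where the data actually came from.
    return {"name": data.get("name"), "revenue": data.get("revenue")}
```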

Interoperability is not limited to external services; it can (and should) be leveraged internally as well. In the same cloud-based customer database, it might be useful to aggregate sales information that is sourced in the ERP system. Instead of duplicating the sales history in the cloud, a preferred solution would be to expose sales history data through a service that can be called to return contextual sales information for a customer.

From an end-user standpoint, the experience is seamless because the data is displayed to them in the context of the customer application, but there are no replication, synchronization, or latency issues.

The potential of interoperability goes far beyond these simple examples. In fact, it is interoperability that is helping to define Web 2.0. Social networking sites like LinkedIn, Facebook, and Twitter have services that are consumed in countless other applications to provide information about people, their networks, and personal preferences.

Furthermore, some of the fastest-growing and most popular services today, like Groupon, Living Social, and Foursquare, leverage interoperability with other services to market to individuals rather than demographics, and have unleashed the potential of crowd-sourcing.

In the cloud, a special type of interoperability is the UI mash-up. A UI mash-up is a web interface that combines the presentation of two or more sources to create a new one. For example, Google, Yahoo, and Bing all have feature-rich mapping applications for locations and directions. End users understand the paradigm of these applications because they use them heavily in both their personal and professional lives. That being the case, these mapping applications are used all over the web as mash-ups to enrich other applications such as location finders, routing systems, and location plotting.
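A minimal sketch of the mash-up idea in Python: one page that mixes locally held store data with an embedded third-party map widget. The iframe URL is a placeholder, not any particular provider's documented embed endpoint.

```python
from urllib.parse import quote_plus

STORES = [
    ("Downtown", "350 5th Ave, New York, NY"),
    ("Harbor", "1 Ferry Building, San Francisco, CA"),
]

def locator_page(stores):
    """Render one HTML page combining local data with an external map widget."""
    sections = []
    for name, address in stores:
        sections.append(
            f"<h2>{name}</h2>\n"
            f'<iframe width="400" height="300" '
            f'src="https://maps.example.com/embed?q={quote_plus(address)}">'
            f"</iframe>"
        )
    return "<h1>Store locator</h1>\n" + "\n".join(sections)

print(locator_page(STORES))
```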

The significance of the mash-up comes down to reuse and end-user experience. There is a benefit to the cloud developer to be able to tap into and reuse other services (even user interfaces) because doing so drastically reduces development time. It is also a benefit to the end user to work within a familiar context that they already enjoy.

New entrants into the cloud computing space need to understand that the full benefit of a cloud solution is not merely in eliminating on-premise hardware. Integrating systems creates synergy and power that exceeds that of traditional paradigms. The vast cloud service community has done much of the hard work already, but businesses need to be smart about how they connect the dots to ensure that cloud applications are Web 2.0 enabled.

The Economic Benefit of Cloud Computing

Excerpted from Forbes Magazine Report by Kevin Jackson

Cloud computing, as defined by the National Institute of Standards and Technology, is a model for enabling "convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

NIST is implying the economies of scale that go with cloud computing when it refers to a pool of configurable computing resources.

Cloud computing is often referred to as a technology. However, it is actually a significant shift in the business and economic models for provisioning and consuming information technology (IT), one that can lead to significant cost savings.

These cost savings can only be realized through significant pooling of these "configurable computing resources," or resource pooling. According to NIST, this capability is an essential characteristic of cloud computing.

Resource pooling is the ability of a cloud to serve multiple customers using a multi-tenant model with different physical and virtual resources dynamically assigned and reassigned according to demand.

Cloud computing economics depends on four customer population metrics: 1) Number of Unique Customer Sets, 2) Customer Set Duty Cycles, 3) Relative Duty Cycle Displacement, and 4) Customer Set Load.

These metrics drive the cloud provider's ability to use the minimum amount of physical IT resources to service a maximum level of IT resource demand. Properly balancing these factors across a well characterized user group can lead to approximately 30% savings in IT resources, and enables the near real-time modification of the underlying physical infrastructure required for the delivery of the desired "illusion of infinite resources" synonymous with a cloud computing user's experience.
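A toy calculation can make the resource-pooling argument concrete. In the Python sketch below, three hypothetical customer sets have duty cycles that peak at different times of day; the demand numbers are invented purely for illustration. Sizing dedicated infrastructure for each customer's own peak requires noticeably more capacity than sizing a shared pool for the busiest combined hour.

```python
# Invented demand profiles (units per hour) for three customer sets whose
# peaks fall at different times of day.
hourly_demand = {
    "retail": [20 if 9 <= h < 17 else 10 for h in range(24)],   # daytime peak
    "batch":  [22 if h < 6 else 10 for h in range(24)],         # overnight peak
    "gaming": [21 if 18 <= h < 24 else 10 for h in range(24)],  # evening peak
}

# Dedicated infrastructure must be sized for every customer's own peak.
dedicated = sum(max(profile) for profile in hourly_demand.values())

# A pooled, multi-tenant cloud only has to cover the busiest combined hour.
pooled = max(
    sum(profile[h] for profile in hourly_demand.values()) for h in range(24)
)

print(f"dedicated capacity: {dedicated} units")               # 63
print(f"pooled capacity:    {pooled} units")                  # 42
print(f"savings from pooling: {1 - pooled / dedicated:.0%}")  # ~33%
```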

When implemented properly, the cloud computing economic model can drastically reduce the operations and maintenance cost of IT infrastructures. A 2009 Booz Allen Hamilton study concluded that a cloud computing approach could save 50-to-67% of the lifecycle cost for a 1,000-server deployment. Another Deloitte study confirmed that cloud deployments delivered greater investment returns with a shorter payback period when compared to the traditional on-premise delivery option.

In considering cloud computing for the Intelligence Community, security is an obvious concern. Given the legal and operational concerns, classified information should always be processed in properly protected and certified IC private or community clouds. If a secure cloud model can be designed, economic savings can certainly be realized.

When used to process unclassified information, sharing cloud computing resources can nominally provide the operational advantages of a private cloud with a cost closer to that of a public cloud due to the expected economies of scale from combined user communities.

The federal government is currently deploying a federal community cloud. Officially referred to as the General Services Administration Infrastructure as a Service Blanket Purchase Agreement (GSA IaaS BPA; item #4 in the White House CIO's "25 Point Implementation Plan to Reform Federal Information Technology Management"), this Government Wide Acquisition Contract (GWAC) vehicle is designed to implement a community cloud economic model to support the federal government. The Office of Management and Budget (OMB) expects this vehicle to provide approximately $20 billion in cloud computing services to a community made up of more than 25 agencies.

Using the BAH study as a guide, and assuming that community cloud economies mimic those expected from a hybrid cloud, transitioning IT services from an agency-owned IT infrastructure to the GSA IaaS platform should deliver benefit cost ratios of approximately 7:1.

Cloud computing provides some strong benefits and economic incentives. Selecting a public, private, hybrid, or community cloud implementation will depend on a customer's specific application, performance, security, and compliance requirements.

Proper deployment can provide significant savings, better IT services and a higher level of reliability: 1) Lower Costs; 2) Cap-Ex Free Computing; 3) Deploy Projects Faster, Foster Innovation; 4) Scale as Needed; 5) Lower Maintenance Costs; and 6) Resiliency and Redundancy.

Why Cloud Computing Is No Risk but a Tried and Tested Way of Working

Excerpted from Cloud Pro Report by Richard Pharro

Cloud computing may be the latest buzzword, but it has its roots in well-established ways of working.

I run an SME in the UK and do not come from an IT background, so several months ago I asked my MIS manager what we were doing to move into the cloud.

Having read all the literature recently, I assumed that this was a new phenomenon, but I was told we have been in the cloud for years. This triggered some further investigation on my part.

The cloud industry is growing at a rate of 16.6% year-on-year, and with research predicting that global revenues for cloud services will reach $150 billion by 2013, I wanted to find out why the cloud has only recently become a phenomenon. My simple conclusion is that it is now being used for much more business-sensitive and business-critical information. For those readers who are new to the cloud, there are four types:

A private cloud is either internally managed or outsourced to a single supplier but effectively is controlled by the user and the vendor.

A community cloud serves a specific community and through that community's purchasing power is controlled by the community and the vendor.

The public cloud is accessed by a multitude of users and effectively is owned by the vendor.

Due to the current pace of change in technology and innovation, there are also a variety of hybrid clouds that combine various features of the above.

I think, as an outsider, that most of the discussion and debate around the cloud focuses on the public cloud.

From a business viewpoint, I see the cloud as no more than a distribution model. There is nothing intrinsically different about working with vendors in the cloud compared with working with vendors on the ground. The two fundamental benefits that cloud vendors bring are on-demand services and elasticity of supply. There is an expectation that what one wants is available instantly and that the vendor can provide infinite capacity. This was best illustrated to me a couple of years ago when someone working at the forefront of services explained that he was renting capacity from vendors, paying by credit card, and providing a demonstration of his product within days rather than within months.

So if you choose to outsource or acquire services delivered by a cloud provider for on-demand access and elasticity of service, what should you do? My advice is to go about the procurement process in the same way as any other procurement process. The same questions, issues, concerns and risks apply:

Can the provider demonstrate competency in everything that they are offering? Do they have a successful track record in the delivery of service? Can you build an appropriate relationship with the vendor (very difficult but are you going to trust your confidential data to someone you have never met)? Is their service truly relevant to you? How would you manage risk in the provision of that service and overall capability? Do they have the resources, financial strength and management systems to deliver what they promised and what you want?

The industry body the Cloud Industry Forum recognizes the challenges that customers face in purchasing services in the cloud. Feedback from IT organizations showed that security and a lack of confidence are the primary barriers stopping UK businesses from adopting cloud computing. Additionally, 62% of respondents cited a code of practice as an important driver when selecting a cloud supplier, with a further 28% saying it was essential.

To deal with these issues, the Cloud Industry Forum has introduced its Code of Practice to assist customers in selecting vendors based on three criteria: Transparency, Capability, and Accountability.

In short, buying services in the cloud shouldn't be, and is not, any different from buying services from any other vendor. The mystique and excitement lie in the opportunities it opens to all organizations and the tremendous benefits it can bring to businesses that need on-demand service and the computing power required to grow their business.

With almost 70% of business leaders questioned by the Cloud Industry Forum believing that cloud computing will be very important in the coming years, this is certainly an area to watch.

Small Business Latches onto Cloud Computing

Excerpted from Voice and Data Report by Merri Mack

With all the words written about cloud computing this year, one wonders why Rob Livingstone wrote a book on the topic, with the title Navigating through the Cloud.

Essentially, Livingstone says it's because he could not buy a book that allows individuals and organizations alike to make an objective and well-informed assessment of the value of the cloud. The book is a plain English guide to surviving the risks, costs and governance pitfalls of cloud computing.

Livingstone is an academic with an impressive IT pedigree, having been a CIO of a number of multinationals. He currently runs an independent advisory business.

This month, Livingstone hosted a panel session comprising a number of businesses from the VMware ecosystem in Australia to discuss the revenue opportunities that have emerged as businesses evaluate the adoption of cloud computing.

Some of the issues discussed included lack of portability to move from one cloud provider to another; legacy systems holding back enterprises from adopting cloud; security and data residency; and jurisdiction.

Duncan Bennett, Managing Director, VMware ANZ, has just returned from the international VMware talk fest in Las Vegas, NV, which was attended by 19,000 delegates. There are 5,600 partners in the partner program.

"Partners are important. If partners are successful, then VMware is successful, and the more revenues we get," said Bennett.

This year VMware has recruited over 300 partners locally to the partner program, which has tripled the number of partners here.

New Lease is a former hosting company that now focuses on subscription software licensing, solely for the service provider community in the private cloud sector, though it also helps those providers tap into the hybrid cloud.

Doug Tutus, Managing Director of New Lease, said, "The company has been around for eight years growing at a rate of 40% year-on-year for the last seven years, but in 2011 we have grown by 54% and signed up 125 partners this year compared to a hundred last year.

"Partners are doing much more, too. And there is the phenomenon of accidental cloud providers such as franchisers who supply cloud services to their franchisees. In a nutshell this is the benefit of the cloud.

"For example, four childcare centers in the same group started with public cloud mail services, another service provider provided MYOB accounting services. New Lease aggregated all services into one cloud service for them and we did it all within three days," said Tutus.

New Lease's Head of Cloud Strategy, Stephen JK Parker, said, "A business definition of cloud is anything you want. Cloud is really not a technical discussion; it's really a business discussion."

As part of his role, Parker holds road-shows around Australia to educate providers on what the cloud is. A year ago, 10% of his audience told him that their customers were asking for cloud services. This year, 90% of service providers reported that customers actively want cloud services.

10% of these service providers are ready to provide cloud services and another 20% are preparing to provide cloud services.

"There will be a tipping point that is driven by customers incrementally who go viral on the successes of using cloud services," said Parker.

Nicki Pereira, General Manager ZettaGrid, said, "Cloud can bring technologies only available to the enterprise to SMBs. SMBs are now talking about enterprise concepts such as disaster recovery and data resiliency. It gives SMBs more opportunities as cloud ticks the entry points to provide more services."

It seems when customers do sign up for cloud services from a cloud provider they stick around. Tutus said churn is less than 1% and this is with a wide range of customers from small to large enterprises. Pereira agreed, "Customers are more sticky in the cloud with very little churn."

Perth-based IntegraNet Technology Group uses ZettaGrid's infrastructure-as-a-service (IaaS) to help its SMB customers to adopt cloud computing infrastructure. Director of IntegraNet Peter Peou said, "We are seeing unprecedented interest and we are refocusing our business in order to capitalize on this business opportunity."

Partnering with Canberra-based Dialogue IT, IntegraNet has built a managed service offering on infrastructure which has been successful within the ACT. It has also helped a Norwegian company with its IT refresh program in Perth and has been asked to do this for the same company in Kuala Lumpur.

"The great benefit of cloud is that we can reach out to the rest of the world," said Peou.

All panel members agreed that an AU$300 monthly VMware subscription license fee gives a three-or-four person SMB the opportunity to start operating and growing a business, as it provides a low barrier to entry. The panel commented that university students are starting up businesses based on this offering.

Bennett said he would love to see another Google emerge out of Australia, and that it is within the realms of possibility with cloud.

The last word goes to Livingstone. "With all the positives of cloud computing, people are realizing it is not a silver bullet and not a panacea," said Livingstone.

Metrics Will Soon Transform Public Cloud Market 

Excerpted from Seeking Alpha Report by Dana Blankenhorn

One of the maddening aspects of the cloud computing arena is the lack of adequate metrics with which to compare offerings and providers. Given that lack, anyone can say just about anything.

But that is about to change, and that will have a dramatic impact on the market, especially for public clouds. Once customers can know who has game and who has claims, market shares could change quickly, and this will have a dramatic impact on the underlying stocks.

Past efforts in this area have either been limited - Amazon's CloudStatus mainly tells you if the cloud is up - or focused on data serving, like Yahoo's Cloud Serving Benchmark.

But now we're about to get apples-to-apples comparisons between working clouds using standard workloads.

Duke professor Xiaowei Yang has just completed a study on cloud metrics with two Microsoft researchers and graduate student Ang Li. They call their metrics for comparing clouds CloudCmp.

They focused on the relative speed of three basic functions: 1) Table - a measure of database handling. How long does it take to get a row from a database, insert a row, or look up a row? 2) Blob - a measure of file transfer speed. How long does it take to upload or download a picture or other object from the database? And 3) Queue - a measure of speed in handling message queues. How long does it take to send or receive a message from a queue?
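The Python sketch below illustrates the general shape of such a measurement, not the researchers' actual CloudCmp harness: time each operation repeatedly and report its median latency. The three operations are empty stubs to be swapped for a provider's real table, blob, and queue SDK calls.

```python
import statistics
import time

def median_latency_ms(op, runs=50):
    """Run an operation repeatedly and return its median latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# Stubs only: replace with the cloud provider's real SDK calls
# (row lookup, blob download, queue receive).
def table_lookup():
    pass

def blob_download():
    pass

def queue_receive():
    pass

for name, op in [("table", table_lookup),
                 ("blob", blob_download),
                 ("queue", queue_receive)]:
    print(f"{name}: {median_latency_ms(op):.2f} ms median")
```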

The researchers then took measurements, using these metrics across four public clouds - Amazon EC2, Google AppEngine, Microsoft Azure and Rackspace CloudSpaces. The authors did not identify which cloud was which in discussing their results.

What's important is that they found wide variation among the clouds studied. There was also a wide variety of pricing and pricing models. One provider prices per CPU used; others price based on use of four or even eight cores, per instance, or per program running.

Costs per data transaction ranged from a low of less than 0.1 cent to a full cent. That doesn't sound like much, but these systems are designed to handle many transactions per second. Scaling latencies also varied widely, but in general Windows latency was longer.
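A back-of-the-envelope calculation shows why that roughly tenfold spread matters at scale; the per-transaction rates below are the low and high ends just cited, while the sustained transaction volume is an invented example.

```python
# Illustrative arithmetic only; the workload figure is hypothetical.
LOW_RATE, HIGH_RATE = 0.001, 0.01   # dollars per data transaction
TX_PER_SECOND = 1_000               # hypothetical sustained workload
SECONDS_PER_DAY = 86_400

for label, rate in [("low", LOW_RATE), ("high", HIGH_RATE)]:
    per_day = rate * TX_PER_SECOND * SECONDS_PER_DAY
    print(f"{label}-priced provider: ${per_day:,.0f} per day")
# low-priced provider: $86,400 per day
# high-priced provider: $864,000 per day
```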

The work is not complete. There are variables among networks that must be measured, both internal to the cloud and external, as well as variables based on the type of application being tested - an e-commerce application, a game-type application requiring low latency, or a compute-intensive scientific application.

The authors also know that there is a trade-off to be made between breadth and depth in making these measurements, that there may be a difference between a "snapshot" look at speed and continuous measurements over time, and that (as usual) your mileage will vary. Comparing what happens to your applications, and comparing those numbers with those from CloudCmp, gives a better result.

This is an early study, in other words, but it's pretty clear that serious speed comparisons among clouds from reliable third parties could be a matter of months, not years away.

Be aware of that as you invest.

Hybrid Cloud Computing Growing Quickly

Excerpted from CenterBeam Report

With public cloud computing services growing quickly, the hybrid cloud may not be far behind, according to Johan De Gelas on AnandTech's website.

De Gelas said the hybrid cloud would ideally let people transfer their cloud workload between their own private cloud data center and public clouds. He writes that the idea started to materialize in a realistic way in the last year. While it could eventually be a working model, he said that in asking around, several people told him it makes service level agreements more complex or even impossible, while others seemed to enjoy the speed and agility the hybrid cloud brings.

"Making use of infrastructure-as-a-service (IaaS) is a lot cheaper than buying and administering too many servers just to be able to handle any bursty peak of traffic," De Gelas said. "But once you run 24/7 services on IaaS, the Amazon prices go up significantly and it remains to be seen if making use of a public cloud is really cheaper than running your applications in your own data-room. So combining the best of both worlds seems like a very good idea." 

Olafur Ingthorsson, who writes on cloud computing topics, adds that with hybrid clouds, IT managers can decide what information should stay in the private cloud and what should move to the public cloud. By having this option, he said, overcapacity is minimized and applications are balanced out. Companies are also able to move peak loads and less critical apps to the public cloud.
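As a simplified illustration of that placement decision, the Python sketch below fills private capacity first and bursts the remainder to a public cloud, keeping sensitive workloads private regardless; the capacity figure and workload names are invented.

```python
PRIVATE_CAPACITY = 100  # invented capacity units available in the private cloud

def place_workloads(workloads):
    """workloads: list of (name, demand_units, sensitive) tuples.
    Sensitive workloads stay private; the rest fill remaining private
    capacity, and anything left over bursts to the public cloud."""
    placement, used = {}, 0
    # Handle sensitive workloads first so they always land in the private cloud.
    for name, demand, sensitive in sorted(workloads, key=lambda w: not w[2]):
        if sensitive or used + demand <= PRIVATE_CAPACITY:
            placement[name] = "private"
            used += demand
        else:
            placement[name] = "public"
    return placement

print(place_workloads([
    ("payroll", 30, True),          # sensitive: never leaves the private cloud
    ("web-frontend", 50, False),
    ("batch-analytics", 60, False), # bursts to the public cloud at peak
]))
# {'payroll': 'private', 'web-frontend': 'private', 'batch-analytics': 'public'}
```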

The Future Belongs to the Hybrid Cloud

Excerpted from IT Business Edge Report by Arthur Cole

Cloud computing can take many forms and will likely tap into numerous systems and environments at the typical data center. However, if one trend is becoming clear, it's that most organizations will deploy some form of hybrid cloud over the next several years.

This makes sense considering the hybrid model splits the difference between the benefits and liabilities of public and private clouds. On the one hand, you gain the tremendous level of scalability that public services offer, and yet you can maintain a high degree of control over what leaves the confines of your own infrastructure and what doesn't.

It seems that deployment of hybrid clouds, at least for specialized functions like backup and recovery, is getting easier. Companies like Asigra are touting hybrid cloud appliances that offer quick and easy setup for smaller firms that lack the resources to man a broader infrastructure. In Asigra's case, the company utilizes Intel's Hybrid Cloud platform and storage technology along with its own agentless architecture capable of providing WAN optimization, continuous data protection and FIPS 140-2 data integrity.

Still, some questions remain as to how simple it actually is to integrate turnkey cloud platforms into existing data infrastructure. As Gale Technologies' Garima Thockchom points out in a piece written for CIO, it's important not to overlook some basic requirements for hybrid clouds. These include reliable automation, so that both the provisioning and decommissioning of services are handled in a timely and efficient manner. You'll also need to make sure the cloud platform has the appropriate adapters to seamlessly tie into legacy management systems, and use the correct templates so that compute, network and storage configurations are maintained. Anything less, and your turnkey solution could turn into a massive headache.

This is one of the main reasons why top platform providers have been on a tear to acquire management integration technology, according to Internet Evolution's Mary Jander. Going way back to 2007 when SAP acquired Business Objects, followed by Oracle's purchase of BEA and IBM's recent takeover of Cast Iron Systems, the need to optimize not just provisioning but capacity planning, service levels, security, and a range of other functions across disparate architectures has been a running theme.

Even internal developments are trending toward servicing hybrid environments. Witness the raft of new tools from VMware, says eWeek's Chris Preimesberger. From the new vCloud Connector and Global Connect system to the vFabric Data Director database-as-a-service (DaaS), the goal is to put enterprises on the road to the cloud as quickly and easily as possible.

If the prognosticators are right, the hybrid cloud will become the standard-issue data center infrastructure within a relatively short time. That means enterprises of all stripes are under the gun to gain a working knowledge of all the technologies involved as quickly as possible.

That won't be an easy thing to do in today's rapidly changing environment, but it is IT's leading strategic imperative as the decade unfolds.

Coming Events of Interest

OMMA Global - September 26th-27th in New York, NY. The semi-annual gathering of MediaPost insiders featuring the most up-to-the-minute news, information, and ideas about the hottest online sectors - mobile, social, video, direct, display - presented for easy access and consumption.

Digital Music Forum West - October 5th-6th in Los Angeles, CA. Top music, technology, and policy leaders come together for high-level discussions and debate, intimate meetings, and unrivaled networking about the future of digital music. Digital Music Forum is known worldwide.

Digital Hollywood Fall - October 17th-20th in Marina del Rey, CA. Digital Hollywood (DH), the premier entertainment and technology conference in the country, once again welcomes the Variety Summit, which has been co-located with its past three DH events.

Executive Summit on Cloud Computing for Financial Services and Insurance Companies - October 25th-26th in New York, NY. This two-day conference will showcase the strategies and methods for determining whether to use cloud computing and how to do so in the most effective manner.

Future of Film Summit - November 7th-8th in Los Angeles, CA. An exclusive group of industry thought-leaders discuss the current state of the industry, and how film and transmedia deals will be struck in the coming years. This is a unique opportunity for creatives, producers, buyers, and film financiers.

Streaming Media West - November 8th-9th in Los Angeles, CA. Attended by more than 2,500 executives last year, SMW covers the entire online video ecosystem from content creation and management, to monetization and distribution. The number-one place to come see, learn, and discuss what is taking place with all forms of online video business models and technology.

World Telecom Summit 2011 - November 9th-11th in Singapore. The 2011 program will focus on topics that demonstrate innovation across the telecommunications industry, both on a commercial and technical level, to improve profitability and quality of next generation technologies and customer experiences.

Future of Television - November 17th-18th in New York, NY. Top television and digital media industry executives discuss the increasing importance of digital media for the future of the television industry. Topics include viewer trends and programming for non-traditional platforms including online video, VoD, HD, IPTV, broadband, and mobile.

Copyright 2008 Distributed Computing Industry Association