Distributed Computing Industry
Weekly Newsletter


June 18, 2012
Volume XXXIX, Issue 11


CLOUD COMPUTING WEST 2012 Call for Sponsors & Exhibitors

The DCIA and CCA this week began offering industry-leading firms opportunities to sponsor and exhibit at the inaugural CLOUD COMPUTING WEST 2012 (CCW:2012) summit taking place November 8th-9th in Santa Monica, CA.

This first-ever business strategies summit for senior management responsible for decisions associated with the adoption of cloud-based solutions will feature three co-located conferences covering ENTERTAINMENT CONTENT DELIVERY, NETWORK INFRASTRUCTURE, and INVESTING IN THE CLOUD.

Major topics will range from the latest trends in cloud solutions for high-value content production and distribution to pitfalls to avoid in adopting cloud solutions for content development and delivery; from how cloud migration is positively impacting broadband network operations and businesses to the drawbacks of cloud deployments from broadband network operators' perspectives; and from new updates on venture capital and M&A activity in the cloud computing space to liabilities that should concern investors in cloud-based businesses.

There will also be analyses of the newest cloud offerings for entertainment and the industry's direction, along with problem areas affecting cloud adoption in the entertainment sector; of network resource usage by data centers and new ISP cloud services, along with the challenges the proliferation of cloud computing creates for ISPs; and of capital structuring and strategic alliances for cloud computing firms, along with problem areas affecting investments in and mergers of cloud services.

In addition, special sessions will explore in-depth file-based production workflows that leverage cloud computing for collaboration, dailies, editing, metadata, and pilots; the implications for network infrastructure of third-party SaaS, PaaS, and IaaS deployments, including the effects of various data centers, interconnection issues, and types of architectures; and the differing investment implications of public clouds, private clouds, hybrid clouds, virtual private clouds, and community clouds.

And finally, panels will examine the entertainment distribution side: cloud transcoding, storage, delivery, data, and analytics; cloud mobility, virtualization, interoperability, and scalability; as well as green computing, big data, and open source as these topical considerations impact financing, VC criteria, and exit strategies.

Registration enables delegates to also participate in any session at the three conferences being presented at CCW:2012 on ENTERTAINMENT CONTENT DELIVERY, NETWORK INFRASTRUCTURE, and INVESTING IN THE CLOUD.

CCW:2012 features one common exhibit hall, and all networking functions (e.g., luncheon, refreshment breaks, evening cocktail reception) are open to all attendees at no additional cost.

The Cloud Is Huge: Verizon's Uppernet Demystifies Cloud Computing

Excerpted from AdBeat Report by Laura Waldman

Welcome to the Uppernet. Working for a SaaS company that leverages the power of cloud computing, I am familiar with a lot of weak attempts at explaining what the "Cloud" actually is. This amorphous image of a big fluffy air space floating around is almost the exact opposite of what cloud computing represents.

Finally, Verizon's Uppernet has hit the nail on the head by effectively communicating the previously elusive intangible concept.

"Our cloud is not soft and fluffy, our cloud is made of bedrock, concrete, and steel. Our cloud is the smartest brains combating the latest security threats"¦and is scalable as far as the mind can see."

Well said. We couldn't agree more over at Nextpoint.

Watch the commercial here: Verizon Uppernet.

The Audacity of Oracle Cloud

Excerpted from The Street Report by Dana Blankenhorn

When is a cloud not a cloud? When it's an Oracle Cloud.

Cloud computing has many elements.

Virtualization. We don't care what operating system that program was written under. It will run in our cloud because our hypervisor will make it do so.

Distributed computing. We don't care that your data and software are too big for one machine, or that demand might overwhelm one quite suddenly. Use the whole server room.

The ability to handle big data sets. Analyzing haystacks to find needles? Cloud does that in a jiffy.

Oracle Cloud has all this. But there are other things Oracle Cloud doesn't have. Commodity hardware? No. Open source? No. Vendor choice? Definitely not.

So why was Larry Ellison smiling and throwing out attacks on competitors like Muhammad Ali in his prime? That's the audacity of Oracle Cloud.

If you think it would be tough to leave your iPhone for Android, or Windows for the Mac, you know nothing about the pain one of the Fortunate 500 feels in thinking about their database vendor. For a big company, the database is the company. It's the crown jewels, it's the money vault. Lose that and you might as well put out the "gone fishing" sign.

Beyond the database are the key applications that run on it. Your customer relationships. Your business processes. Your enterprise planning. It may have cost millions to build, but it's worth billions, and would cost hundreds of millions to replace.

Oracle's database applications are the pinnacle of what's called enterprise computing. What was once client-server has evolved into an architecture. You increase capacity by buying a server. You pay for your software with per-server license fees and annual maintenance fees.

Report from CEO Marty Lafferty

As the Distributed Computing Industry Association (DCIA) and the Cloud Computing Association (CCA) ramp up for our inaugural strategic summit, CLOUD COMPUTING WEST 2012, we pause to celebrate the success of Sys-Con's tenth international developers' conference, Cloud Expo 2012.

Two unstoppable enterprise information technology (IT) trends - cloud computing and big data - were the central themes at this event, which was held June 11th-14th at the Javits Convention Center in New York, NY with an estimated 10,000 attendees.

The expo featured industry keynotes, technical breakout sessions, and "power panels," as well as a busy exhibit floor with leading solutions vendors displaying their latest offerings.

The State of Cloud Computing was the topic of discussion in a power panel recorded the day before the event opened.

The preview highlighted recent IDC research showing that worldwide spending on cloud services will grow almost threefold, reaching $44.2 billion by 2013, and a recent Gartner report predicting that the volume of enterprise data overall will increase by a phenomenal 650% over the next five years.

It was clear at Cloud Expo that the cloud is now being adopted by mainstream companies, organizations, and even national governments to leverage the power of data on demand at a scale and pace never seen before in the history of the Internet.

Cloud Computing Bootcamp, led by Larry Carvalho, helped make sense of this hottest new technology, which is still rapidly evolving while continuously being peppered with hype. With prospective customers finding it hard to determine which aspects of the technology will yield the greatest benefits, Cloud Computing Bootcamp offered a practical understanding of it.

Citrix VP Peder Ulander cut through the hype and clarified the ontology for cloud computing in his Crash Course in Open Source Cloud Computing, focusing on the open-source software that can be used to build compute clouds for infrastructure-as-a-service (IaaS) deployments, and the complementary open-source management tools that can be combined to automate the management of cloud-computing environments.

Hadoop, MapReduce, Hive, HBase, Lucene, Solr? The only thing growing faster than enterprise data is the landscape of big data tools, which are designed to help organizations turn big data into opportunity by gaining deeper insight into massive volumes of information.

The time is now for IT decision makers to determine which big data tools are the best - and most cost-effective - for their organization. In The Growing Big Data Tools Landscape, David Lucas, Chief Strategy Officer at GCE, ran through what enterprises need to know about this growing set of big data tools - including those being leveraged by organizations today as well as new and innovative ones just arriving on the scene.
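For readers new to the programming model behind tools like Hadoop, here is a minimal sketch of the map/reduce idea in plain Python - a toy word count for illustration only, not Hadoop's actual API or any of the session content.

```python
# Toy illustration of the map/reduce model that Hadoop implements at
# cluster scale. This is plain Python, not Hadoop's API: "map" emits
# (key, value) pairs, "shuffle" groups them by key, and "reduce" folds
# each group into a result.
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in the input split.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Fold each group into a single count.
    return key, sum(values)

documents = ["Big data tools", "big data landscape"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'tools': 1, 'landscape': 1}
```

The point of the model is that map and reduce are independent per key, so a framework like Hadoop can spread them across many machines.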

Blake Yeager, Product Manager Lead for IaaS at HP Cloud Services, in Run and Operate Your Web Services at Scale, took attendees through Hewlett-Packard's (HP) next public cloud infrastructure, platform services, and cloud solutions, showing how easy it can be to spin up instances of compute, storage, and content delivery networking (CDN).

In Cloud Computing and Big Data - It's the Applications, Tom Leyden, Director of Alliances and Marketing at Amplidata, noted that, "While there is still a lot of interest in Big Data Analytics, we see an increasing focus on Big Unstructured Data. Object storage is the new paradigm to store those massive amounts of free-form data."

IT Cloud Management strategies enable organizations to maximize the business value of their private clouds. Joe Fitzgerald, Chief Product Officer & Co-founder of ManageIQ, discussed customer experiences and how these tactical approaches increase agility, improve service delivery levels, and reduce operating expenses in Cloud Computing: Enterprise Cloud Management.

Shannon Williams, VP of Market Development for the Cloud Platforms Group at Citrix and a Co-founder of Cloud.com, in Architecting Your Cloud, discussed how CloudStack has been the platform of choice for more than a hundred public and private production clouds, and provided an insider's view of the company's experience in designing the right architecture for customers' clouds.

IT departments are experiencing storage capacity needs that double every 12-18 months, with the amount of information growing 50x and the number of files 75x. IT managers are dealing with growing constraints on space, power, and the costs of managing their data center infrastructures. The Growth and Consolidation of Big Data in the Cloud explored how Intel is helping businesses and users realize the benefits of cloud computing technology by working to develop open standards that operate across disparate IT infrastructures and by delivering cloud-based architectures.

Securing Big Data Input addressed one of the most widely asked questions about big data today: "How do we get valuable analytics from big data?" As data continues to grow exponentially, so does the variety of data (structured and unstructured) coming from humans, machines, and applications. In order to pull valuable information from it all, proper data gathering is critical, and the data itself needs to be timely and accurate.

And in The Ever-Expanding Role of Big Data, William Bain, Founder & CEO of ScaleOut Software observed that, "Security standards for moving data into and out of the cloud and for hosting it within the cloud will dramatically help accelerate adoption of the cloud as a secure computing platform, and additional standards for creating elastic clusters that are physically co-located and use high-speed networking will also help in hosting applications."

There is no longer any question that the cloud computing model will be the prevailing style of delivery for computing over the coming decades. Forrester Research predicts that the global market for cloud computing will grow to more than $241 billion in 2020. Cloud - Vision to Reality explored how greenfield application development projects can be designed from the outset to benefit from cloud-computing features such as elastic scalability, automated provisioning, and infrastructure level APIs.

SHI, a $4 billion+ global provider of IT products, and Rackspace Hosting, a services leader in cloud computing, were Platinum Plus Sponsors of SYS-CON's Expo. For developers, it was a must-attend event.

According to IBM's 2011 Tech Trends Report, 75% of respondents said that their organizations will begin to build cloud infrastructure over the next two years, and that in the next 24 months "developing new applications" will be the top cloud adoption activity, overtaking the current top investment areas of virtualization and storage.

Huge cloud-driven opportunities for wealth creation exist today - but the race is to the swift. The cloud-computing industry is one in which even a few months can make all the difference.

DCINFO readers are encouraged to sign up now for the CLOUD COMPUTING WEST 2012 (CCW:2012) summit being presented November 8th-9th in Santa Monica, CA by the Cloud Computing Association (CCA) and the Distributed Computing Industry Association (DCIA). CCW:2012 features three co-located conferences geared for management charged with addressing the key strategies and business decisions critical to cloud computing adoption in the entertainment, telecom, and investment sectors. Share wisely, and take care.

Cloud Expo: The Question Moves from "What" to "Why" to "How"

Excerpted from Sys-Con Media Report by Roger Strukhoff

Cloud computing has crossed a Rubicon of sorts, and is now being embraced by a majority of enterprise IT shops - at least according to attendees and vendors at Cloud Expo in New York, NY.

I remember interviewing Hal Stern (late of Sun and Oracle) a couple of years ago at the event, when he said that people were asking him, "Why should I do cloud?" rather than "What is cloud?" This year, the question is "How should I do cloud?"

There is a mad dash among big vendors, for one thing. IBM and HP have embraced the cloud fully, even to the extent of offering traditional PaaS development services as part of their infrastructure (IaaS) solutions. Microsoft has re-launched Azure, in effect, working with new vendors to expand beyond its PaaS roots to become an IaaS vendor designed to compete directly with Amazon. Oracle's Larry Ellison now speaks of cloud as if he invented it, as the database monster now seeks to maintain its grip on hundreds of thousands of enterprise IT customers.

Meanwhile, the Battle of the Stacks among Eucalyptus, OpenStack, and Citrix CloudStack is merely part of a larger struggle for market share among the three Open Source companies against VMware, the company that triggered the move toward cloud in the first place.

Cloud Expo had a few highly interesting sub-events within it. In addition to its traditional Cloud Computing Boot Camp and the RightScale conference, this time Cloud Expo hosted a day-long presentation from the Open Data Center Alliance (ODCA), and the initial DeployCon event, which focused on the pack of PaaS vendors who are rubbing against one another for supremacy in this key space.

The word of the day here was "multi-cloud." It turns out that enterprise IT is complex, and that cloud is not going to eliminate that complexity, at least with larger shops. However, it will continue the push in recent years to eliminate silos, decouple and loosely recouple services, get a grip on measuring things, and provide the vaunted "single pane of glass" through which IT management can view and manage what's going on.

Cloud's potential to offer apparently infinite elasticity and to remove some of the day-to-day management headaches of moving things offsite remains a great future opportunity. But it seems that customers are doing their best to avoid Vendor Lock-In 2.0 and to work with multiple companies to get what they need.

From what I saw at Cloud Expo, the need for highly skilled IT worker bees and managers will only increase as companies realize that they really need to know what they're doing in the cloud; it's not just a buzzterm, not a panacea to IT complexity, but rather, a foundational, transformational change.

Cloud is a State of the Business, not just IT

Excerpted from GigaOM Report by Mark Thiele

In my last blog I posited that cloud, and by extension cloud management, is a strategic rather than tactical activity, and as such should have the appropriate people involved in definition gathering and decision making for a variety of reasons. Now I'd like to cover several examples of why cloud management is strategic in nature, and clarify why the CIO and others outside the purely technical staff are critical to project and solution success.

Cloud isn't just a more powerful engine in the same old car.

If a legacy IT organization adopts cloud solutions without significant organizational realignment and improved business participation, the benefits will largely be wasted. It's akin to thinking you can put a modern 500-horsepower engine in a 1970s economy car and get all the same performance and protection characteristics you would enjoy in a 2012 model-year luxury sedan.

In fact, the introduction of cloud without organizational improvements would likely increase enterprise risk and, potentially, cost. The real opportunity of a cloud operating model comes from the alignment of technical solutions, people, and process, so that when a business opportunity presents itself, your processes and technology seamlessly keep pace with the natural development of the initiative.

Keeping pace means more than just creating a new pile of IT resources quickly; it means a repeatable process that also provides the appropriate controls and governance to minimize risk to your business, delivered at a value appropriate to the opportunity.

For those trying to figure this realignment out, here are examples of how your cloud operating model (which includes cloud management) can provide real differentiation in the way IT solutions are delivered.

Contestability: an economic theory that can save you money.

Can you easily swap hardware? Hypervisors?

A real cloud operating model enables you to plug in different cloud providers, different hypervisors, different hardware, different PaaS solutions, different provisioning systems, monitoring systems, and so on. Through true contestability you have the opportunity to replace key portions of your infrastructure with better or lower-cost solutions, without risking the larger framework or architectural design of the environment.

Ask yourself these questions. Under your current model: Can you use a mix of external providers such as Amazon EC2, TerreMark, Savvis, and CSC? Easily change from HP to Cisco hardware (or vice versa)? Switch from VMware to Hyper-V 3 (or vice versa)? Can you easily adopt a new provisioning/scripting framework like Chef, Puppet, or CFEngine? Reuse the policy enforcement and auditing system you currently have while making any of the changes above? Reuse your procurement portal and provisioning workflows while achieving the above?

If your answer to these questions isn't, "Yes, it would be easy and relatively inexpensive," then you aren't operating under a true cloud-operating model. Very few companies have these kinds of capabilities, because until just recently the management platforms needed didn't exist. But they are coming online and are different from traditional enterprise software in that they were designed from scratch to be abstractions above hypervisors and cloud providers.
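To make the abstraction idea concrete, here is a minimal sketch of contestability in Python. The class and method names are hypothetical, invented for illustration; no vendor's actual API looks like this.

```python
# Minimal sketch of contestability: workloads are provisioned through an
# abstract interface, so a provider (or hypervisor) can be swapped without
# touching the calling code. All names here are hypothetical.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def provision(self, vcpus: int, ram_gb: int) -> str:
        """Create an instance and return its identifier."""

class AmazonEC2(CloudProvider):
    def provision(self, vcpus, ram_gb):
        # Real code would call the EC2 API here.
        return f"ec2-instance({vcpus} vCPU, {ram_gb} GB)"

class TerreMark(CloudProvider):
    def provision(self, vcpus, ram_gb):
        # Real code would call TerreMark's API here.
        return f"terremark-vm({vcpus} vCPU, {ram_gb} GB)"

def deploy_workload(provider: CloudProvider) -> str:
    # Policy, auditing, and workflow logic live above this line and are
    # reused unchanged, whichever provider wins the workload.
    return provider.provision(vcpus=4, ram_gb=16)

# Swapping vendors is a one-line change, which is what lets them compete:
print(deploy_workload(AmazonEC2()))
print(deploy_workload(TerreMark()))
```

The design choice is the point: everything above the interface (procurement, auditing, workflows) is reusable, which is what makes vendors contestable.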

Plus, there is very little traction in this space from established vendors, the majority of which want to build vertically integrated solutions that specifically do not allow this form of inter-platform competition.

It's easy to get lost in the cool technology here and lose focus on how you would actually achieve a vendor-neutral platform and what the financial impacts would be. As an example, let's start with the Commonwealth Bank of Australia (CBA), which was able to cut its IT spend by 10 percent by encouraging its vendors to compete for workloads.

CBA did it by implementing a cloud management solution and then performing expert vendor management to allow it to quickly adopt new vendors and to gain leverage over existing ones. In just two years it pulled this off using off-the-shelf software and freed up $100 million a year going forward to reinvest in new capabilities.

Think IaaS+.

Don't let business processes slow down your cloud.

Many companies implementing a cloud solution completely miss the boat on their first pass. IT often focuses so hard on basic server provisioning that it loses sight of the bigger picture.

While server provisioning is interesting, it's likely one of the smallest benefits. The real advantages are obtained by moving up the stack and giving the business the ability to deploy applications and solutions much faster.

The primary examples are any company wanting to deploy agile development, DevOps, or PaaS solutions. A great example of a cloud management operating model in use is UBS and its workplace automation solution. UBS implemented a cloud management solution to deploy virtual desktop infrastructure (VDI), but not traditional VDI.

Instead, the UBS version of VDI is a mixed set of services delivered to iPads and Android devices, backed by a set of traditional desktops that do the workflow automation. The user almost never sees the desktop, and the desktops are stateless. Further, the system automatically routes each user to a free desktop in the correct country, running the correct software and profile for that individual.

Complexity reduction.

Another major benefit of a cloud management platform is that it coalesces many different solutions and products under a single umbrella. This complexity reduction can directly translate into increased uptime and improved stability as well as cost savings. A major financial institution with 15,000 servers was able to save almost $30 million a year just in complexity management.

Reduced work orders to configure solutions, fewer vendor products, fewer project managers allocated to managing change, and, best of all, fewer meetings: it all adds up. This is why your cloud management strategy has to integrate many different things.

It's not enough to say you are just going to focus on deployment of a virtual machine (VM). You need to focus on how you will deploy that VM's storage, networking, compute, DNS entries, the software on the VM, and the firewalls in between. You also need to consider where to deploy, and which regulations will apply depending on what you're deploying and where it's deployed.

Leave out any of these and, instead of a single simple solution, you have something that requires project management, meetings, and more. Not having these systems fully automated and managed by a policy configuration engine also means that the risk of failure due to simple things like fat-finger mistakes goes up dramatically.
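As a sketch of how wide the scope of a single "deploy a VM" request really is, here is a hypothetical declarative descriptor in Python. The format and field names are invented for illustration; they belong to no particular policy engine.

```python
# Hypothetical deployment descriptor illustrating the scope of one
# "deploy a VM" request: compute, storage, networking, DNS, software,
# firewalls, plus the region/regulation decision. A policy configuration
# engine would validate and execute this rather than a human working
# through tickets and meetings.
deployment = {
    "vm": {"vcpus": 4, "ram_gb": 16, "image": "rhel-6"},
    "storage": [{"mount": "/data", "size_gb": 500, "tier": "ssd"}],
    "network": {"vlan": "app-tier", "load_balancer": True},
    "dns": ["app01.example.internal"],
    "software": ["jdk-7", "tomcat-7", "monitoring-agent"],
    "firewall": [
        {"from": "web-tier", "port": 8080, "allow": True},
        {"from": "internet", "port": 8080, "allow": False},
    ],
    # Placement can trigger different regulatory controls.
    "region": "eu-west",
    "compliance": ["data-residency-eu"],
}

def validate(desc):
    # A real policy engine enforces far more; this shows the idea of
    # machine-checked rules replacing change tickets and fat-finger risk.
    required = {"vm", "storage", "network", "dns", "software", "firewall"}
    missing = required - desc.keys()
    if missing:
        raise ValueError(f"incomplete deployment, missing: {sorted(missing)}")
    return True

print(validate(deployment))  # True
```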

The lesson here is the strategic use and management of cloud can help you scale your IT (and your business), but first you have to get your IT and business ready to scale.

Huawei Takes Data Centers to the Cloud

Excerpted from IT-Online Report

Cloud computing is a transformational and disruptive technology that will fundamentally change the ways in which people live and do business. A data center comprises the basic hardware medium for cloud computing and requires new solutions that are able to accommodate cloud requirements and meet related challenges.

Gartner and IDC predict that data centers will develop over the next five years to focus increasingly on cloud computing virtualization, efficient energy utilization and management, and high-density design. As a result, traditional data centers that do not keep pace will become obsolete.

Modernizing traditional data centers to support cloud computing is a new challenge for many executives. In response, Huawei has launched its 4S high-density modular data center that offers customers a simplified cloud computing solution that can be placed in either indoor or outdoor environments.

Huawei's 4S Data Center infrastructure consists of eight sub-systems: power distribution; refrigeration; integrated cabling; management system; fire control; surge protection; cabinet; and decoration. Unlike a traditional data center, Huawei's 4S Data Center solutions are standardized and can be easily pre-installed, pre-tested, and quickly assembled onsite.

Standard ports provide a clear engineering interface and easy management, thereby minimizing onsite engineering while simplifying solution planning and accelerating deployment.

Compared to a traditional data center, which takes two years to build, a Huawei 4S Data Center can be constructed in 60% less time. An indoor 4S Data Center requires a net height of just 2.8 meters without a raised floor, while the outdoor 4S Data Center can be easily transported and powered up immediately.

Three core modules - the data center, power supply and distribution, and outdoor cooling modules - and two auxiliary systems - the decoration and fire control systems - can be flexibly designed and installed as required.

The air conditioning module provides cool air, and uninterrupted power supplies (UPSes) provide power on demand. Unlike the one-step planning and excessive investment common to traditional data centers, each Huawei 4S Data Center solution plan can be deployed and expanded in stages to avoid extraneous investment and to allow flexible response to changes in services and technology.

To meet the requirements for unified intelligent management for cloud computing, Huawei integrates all management elements of L1 to allow for remote and unattended operations as well as integrated monitoring and efficient management.

This also enables visible network management and interaction between the infrastructure facility and service layers, so that service changes can be dynamically detected, cool air provided where needed, and the electricity charge of each cabinet accurately calculated so that the requisite power can be supplied.

The Huawei 4S Data Center solution reduces the total cost of operation by 30% through a combination of energy conservation, time savings, and carbon footprint reduction, due to the use of high-density 30kW cabinets, which can be stacked.

Moreover, through the use of modular air conditioners that intelligently provide cooling air on demand, efficiency can be improved by 30%. UPSes with a conversion efficiency of 96% can increase load rate by more than 60%, reducing OpEx by 30%.

As of the end of 2011, Huawei had successfully delivered more than 230 data centers, including 16 cloud data centers, across the globe. Huawei is able to tap into its vast experience in telecommunications and IT to provide optimal end-to-end delivery design, full supply chain support and superior 4S Data Centers, allowing customers to take full advantage of the possibilities offered by the cloud computing era.

Tennis Channel Selects America ONE Sports and Octoshape Infinite HD-M 

Octoshape and America ONE Sports announced that they have again been chosen by Tennis Channel as the exclusive provider of video streaming services for the prestigious 111th French Open.

This year the linear video is delivered to consumers via the Octoshape Infinite HD-M Federated Multicast Broadband TV platform. This technology enables the quality, scale, and economics of traditional broadcast technologies over the public Internet. Telco and cable operators that are part of the Infinite HD-M Federated network receive the signals via native IP Multicast in a way that allows them to easily manage large volumes of traffic without upgrading their Internet capacity.
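For readers unfamiliar with native IP multicast, here is a minimal sketch of the standard socket mechanics by which a receiver joins a multicast group. The group address and port are made up for illustration; this shows the generic mechanism, not Octoshape's proprietary platform.

```python
# Minimal sketch of a native IP multicast receiver using standard socket
# options. The group and port below are hypothetical.
import socket
import struct

GROUP = "239.1.1.1"   # hypothetical administratively scoped group address
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group tells the kernel (and, via IGMP, the network) to
# deliver the group's traffic here. Routers replicate the single source
# stream toward each subscribed subnet, which is why multicast carries
# large audiences without a matching increase in upstream capacity.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, source = sock.recvfrom(2048)
    print(f"{len(packet)} bytes from {source}")
```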

America ONE Sports is providing the onsite downlink services with ONE CONNXT, as well as the enhanced multicourt video player experience. The encoded video signals are acquired using the Octoshape Infinite Uplink technologies. The live and on-demand stream coverage includes unique Octoshape features such as instant on, DVR functionality, and picture-in-picture viewing. The player experience also includes bonus features developed by America ONE Sports, such as live chat and event stats.

"A high quality, interactive experience is priority number one for consumers of Tennis Channel content," said Robyn Miller, Senior Vice President, Tennis Channel. "With America ONE Sports and Octoshape we achieve that goal. This year we provide content efficiently allowing broadband providers to fully meet consumer demand."

Tennis Channel has the most hours of live French Open coverage of any US television network, approximately 75 hours overall. The network is available in more than 55 million homes during the tournament, the most prestigious annual clay-court event in tennis. For those who do not currently receive Tennis Channel, TennisChannel.com provides a live online stream that allows viewers to watch all the normal coverage for free.

"The French Open coverage by the Tennis Channel is a marquee event to showcase America ONE Sports and ONE CONNXT Transport Services. The quality and viewer experience is exceptional and truly adds a great deal of value to the French Open," said Paul Dingwitz, CTO of ONE Media Corp.

"Supporting the French Open for the Tennis Channel again this year is a testament to our commitment to TV quality video," said Michael Koehn Milland, CEO of Octoshape. "With our new Infinite HD-M platform, we can now also offer the scale and economics of traditional broadcast TV distribution, over the Internet."

Guide to Cloud for SMEs

Excerpted from Computer World Report by Derek du Preez

The Open Group, a global consortium that aims to help businesses achieve objectives through IT standards, has published an extensive guide to cloud computing for small to medium-sized enterprises (SMEs) in a bid to provide clarity on which services should be deployed in an overcrowded cloud marketplace.

Maximizing the Value of Cloud for Small-Medium Enterprises defines an SME in the UK as a company with fewer than 250 employees and earning less than $50 million.

The report argues that cloud computing can help solve some of the traditional problems facing SMEs today.

"From experience, many businesses, irrespective of their size, lack business-IT alignment; hence IT planning becomes a matter of trying to hit a moving target. To make matters worse, small businesses struggle with a lack of skilled IT personnel, operational insufficiencies, and poor IT management," the Open Group said.

"Cloud computing won't be the answer to all of the above. But it can help simplify IT so that SMEs stay business-focused."

The Open Group believes that to overcome the IT-business alignment problem, IT departments should move away from being reactive, whereby they take time to respond to requests for IT services from the business, and instead offer the business a pre-defined catalog of IT services that could be provisioned quickly and easily.

However, developing an "off-the-shelf" catalog of IT services in-house is costly and requires a significant investment of capital and time. As such, cloud computing becomes an interesting option for SMEs.

The report states, "Instead of internally building these services upfront, why not source these services from somewhere else? The service provider can work with an economy-of-scale effect not available at the individual SME level and serve hundreds of SMEs."

"IT organizations of SMEs should consider the idea of becoming an IT service broker where services are provided to consumers through a pay-per-use arrangement and an as needed business model while sourced from outside from cloud service providers."

The Open Group believes that the cloud computing model can also help SMEs attract additional revenue. By sourcing IT services from outside the organization, SMEs benefit from a reduced time-to-market, which will gain them new customers that might have gone to competitors previously, and in turn prevents those competitors from becoming stronger, it said.

It also points to the benefits of moving from capital expenditure (CapEx) to operational expenditure (OpEx), when procuring IT services from the cloud.

"IT organizations don't have to invest in IT assets anymore, but instead "rent" them from cloud service providers. But only as long as they are needed by the consumers," reads the report.

"Reducing and/or optimizing costs is the keyword here, and the adoption of cloud computing can support this objective."

Despite the benefits of the cloud for the SME market, The Open Group also highlights that organizations need to consider the security implications of adopting cloud as a delivery model.

It asks SMEs to consider: What if IT organizations decide to decommission the cloud services? How will data be transferred from the cloud service provider back in-house or to another service provider? Who has access to sensitive data? SMEs are responsible for security and data integrity, but are the cloud providers willing to undergo external audits and certifications? Is data being segregated properly?

"This guide reminds us that, with cloud, smaller budgets are no longer a hindrance to a SME's ability to compete in the market. This is because, with cloud, instead of SMBs operating workloads such as desktops, storage and communications through physical in-house servers, they are hosted on centralized virtual servers in a data center, which means the cost of investments is more affordable; the deployment and set-up times are rapid and less complicated than traditional configurations; and solutions are more scalable, secure and agile," the consortium concluded.

How "Systems Thinking" Is Making the Cloud Transparent

Excerpted from GigaOM Report by James Urquhart

Given my current obsession with understanding everything I can about how cloud computing is beginning to look, feel and behave like a variety of other complex adaptive systems, I've started paying close attention to the widespread practice (outside of IT, it seems) of systems thinking.

Defined in Wikipedia as "the process of understanding how things influence one another within a whole," systems thinking represents a modeling, analysis and design discipline that carefully explores "macro" aspects of highly interdependent systems. Systems thinking is heavily utilized in such fields as the social sciences, organizational dynamics, and industrial engineering to evaluate, model, and/or design how systems are composed and how they behave.

Systems thinking is difficult for those who have been educated to always apply reductionist thinking to problem solving. The idea in systems thinking is not to drill down to a root cause or a fundamental principle, but instead to continuously expand your knowledge of the system as a whole.

It's one of the fascinating questions that faces anyone trying to model a system: What are the system's boundaries? When everything is so highly interdependent (economies are linked to governments, which are linked to societies, which are linked to individual people, and so on), how do you know where to start modeling, and where to stop?

In her classic book on systems thinking, the late Donella H. Meadows brilliantly articulated the challenge that "systems thinkers" face when scoping the problem they need to address:

"The lesson of boundaries is hard even for systems thinkers to get. There is no single, legitimate boundary to draw around a system. We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we've artificially created them..."

"There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion - the questions we want to ask."

She goes on to say:

"The right boundary for thinking about a problem rarely coincides with the boundary of an academic discipline, or with a political boundary. Rivers make handy borders between countries, but the worst possible borders for managing the quantity and quality of the water. Air is worse than water in its insistence on crossing political borders. National boundaries mean nothing when it comes to ozone depletion in the stratosphere, or greenhouse gases in the atmosphere, or ocean dumping."

This, I think, is a critical observation for people building large scale cloud-computing applications and services that integrate with other applications and services in the cloud. Understanding where the boundaries of source code and data models lie is relatively straightforward, but understanding the boundaries of operations - monitoring, compliance, decision making, liability and so on in cloud-based applications - is not so straightforward.

In fact, I would argue that the nature of cloud-systems boundaries will themselves be highly dynamic, in part because of the comings and goings of technologies and services (not to mention politics, economics, and so on). However, it is also true that it will take time to discover the full extent of those systems for each application or service you operate, as everything is so interconnected.

This is different from what we experienced with so-called "traditional IT", as we could typically maintain control of all but a few elements of our application systems, and the applications were generally quite isolated. We strived for stability, and that included stable boundaries. It is clear to me that this is becoming increasingly impossible.

There is also an interesting corollary to the problem of boundaries that must be considered when planning any application or service that might be consumed by outside parties. If you do not necessarily know which third-party services affect your "system", it stands to reason that you also do not know which external systems or applications your offering affects.

In other words, how do you know the application systems that you may ultimately impact if anyone can consume your service at any time? Are you making it easy for them to design "around" you for their own resilience?

All of this leads me to what I think is the key conclusion that has to be reached about the future architecture of our shared cloud computing "system": transparency is essential. Without a steady stream of feedback data from whatever sources we determine - over time - have a significant impact on the operation of our applications, we are doomed to be unable to properly find the right "boundaries" for those applications.

Information about the functioning state of infrastructure (like compute nodes and networks), services (like data stores and platform services) or even other applications (like SaaS or your partners' applications) will be critical to evolving the automation that successfully enables resiliency. And, as I noted in my keynote at last month's excellent Gluecon in Colorado, one key goal in these systems is resiliency.

Will such transparency happen? I believe it already is happening. Just this week, Amazon Web Services announced a method for downloading billing information for its services. At one point in time, it was postulated that Amazon would never do this. However, customers have spoken, and the need to access the real-time costs of Amazon's services programmatically has forced transparency.
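As a hedged sketch of what consuming that kind of billing feedback might look like: AWS delivers billing reports as CSV files in a customer-designated S3 bucket, and the files can be fetched with standard S3 API calls. The bucket name and key prefix below are hypothetical, and the boto3 library used here is the modern AWS SDK for Python, not the tooling of the article's era.

```python
# Sketch: pulling AWS billing CSVs from the S3 bucket a customer
# designates for billing reports. Bucket name and prefix are made up;
# credentials are assumed to come from the environment.
import boto3

BILLING_BUCKET = "example-billing-reports"   # hypothetical bucket
PREFIX = "123456789012-aws-billing-csv"      # hypothetical report prefix

s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket=BILLING_BUCKET, Prefix=PREFIX)

for obj in response.get("Contents", []):
    key = obj["Key"]
    # Each report is a CSV; feed these into whatever cost-feedback loop
    # the application's operational "boundary" calls for.
    s3.download_file(BILLING_BUCKET, key, key.split("/")[-1])
    print(f"downloaded {key} ({obj['Size']} bytes)")
```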

Regardless, it is important to start thinking about your applications and services in the cloud as systems, not just stand-alone components. The challenge before you is to determine what the boundaries of those systems are, and how to design, build and operate your software to thrive within those boundaries.

Security Risks Impact even Businesses that Stay out of the Cloud

Excerpted from eWeek Report by Robert Mullins

There's a lot that businesses still have to ask their cloud service providers before signing up for service, especially about how secure their cloud environment is, the chief operations officer of the Cloud Security Alliance (CSA) said at a cloud conference in Santa Clara, CA.

John Howie explained the security risks associated with cloud computing and the ways businesses can protect themselves and their data at the Cloud Leadership Forum held June 13th-14th. Howie warned that some cloud providers actually turn around and have customer workloads managed by yet another cloud provider. He also warned against using free consumer-grade cloud services for enterprise-grade computing.

The Cloud Security Alliance is a nonprofit organization that provides free information to its membership of 150 companies and 35,000 individuals on how to choose cloud services - private, public or hybrid - wisely and with a focus on data security in the cloud.

Howie sought to dispel the notion that the IT department or other managers can claim that they don't need to worry about cloud security because they don't use cloud services. Typically, individual employees subscribe to cloud services on their own. He gave the example of a businessman he met who was on the phone and couldn't send an email because the size of the attached file was too large. The man said he would upload it to DropBox, a cloud-based file-sharing service, instead.

"You use DropBox?" Howie asked the man. "'Well, not officially,'" came the reply. "That's what you're finding in your organizations today."

There's another reason to avoid consumer-oriented cloud file-sharing or storage services such as DropBox, Google Drive, or Microsoft SkyDrive, he continued. They are free because they're advertising-supported, and they index user data to glean information about which ads to deliver.

"They are probably indexing your data, not because they want to know what your data is at a human level," Howie explained. "But at the machine level, they want to know what advertisements to send to you to increase the click-through."

It may be harmless enough for consumers to have their data indexed, but an enterprise should not take that risk. There are paid file-sharing services that are better designed for enterprise users and their important security needs.

A related issue is how businesses can remain compliant with government and industry regulations for the security and privacy of company data in the cloud. Not only are there varying regulations on the state and federal level in the United States, there are myriad regulations globally and, increasingly, both companies and cloud service providers operate globally. What regulations a company has to comply with depends on where the cloud service provider's data centers are located as well as where the company's data centers are located, Howie said.

Obama Seeks to Speed Broadband Infrastructure Deployment 

Excerpted from Information Week Report by Patience Wait

President Obama signed an executive order Thursday directing the creation of a federal working group to address broadband deployment access issues for government lands, buildings, and rights-of-way.

The Broadband Deployment on Federal Property working group will include representatives from six cabinet-level agencies (the Departments of Defense, Interior, Agriculture, Commerce, Transportation, and Veterans Affairs), as well as the US Postal Service, the Federal Communications Commission (FCC), the Council on Environmental Quality, the Advisory Council on Historic Preservation, and the National Security Staff.

The working group's task is to create a consistent, streamlined approach to policy and permitting requirements for deployment of broadband facilities on federally owned lands, buildings, rights-of-way, tribal lands, and highways that receive federal assistance. The intent is to give carriers a single approach to leasing federal assets for broadband deployment. The government controls nearly 30% of the land in the United States, thousands of miles of roads, and more than 10,000 buildings.

In conjunction with the release of the executive order, the White House announced a new public-private partnership, US Ignite, among industry, cities, and national research institutions, dedicated to accelerating development of a wide range of ultra-fast broadband and software-defined network applications for industries as diverse as advanced manufacturing, healthcare, energy, and education.

With US Ignite, the White House announced a number of initiatives undertaken by participants, including:

The National Science Foundation (NSF) is expanding its initial $40 million investment in the Global Environment for Networking Innovations (GENI) project, investing a further $20 million "to transition from building GENI to using it for Internet-scale experiments."

Six project grantees of the Commerce Department's National Telecommunications and Information Administration (NTIA) are joining the US Ignite initiative.

The Defense Research and Engineering Network (DREN) will serve as a test bed to accelerate development and deployment of ultra-high-speed bandwidth applications. DREN is the networking part of the Defense Department's High Performance Computing Modernization Program; as part of this initiative, it will connect research sites at the US Military Academy, Naval Postgraduate School, Space and Naval Warfare Systems Command, Naval Research Laboratory, Air Force Research Laboratory, and Army Research Laboratory, and participate in an agreement between the Maui High-Performance Computing Center and the University of Hawaii.

Corporate partners announced at the unveiling of US Ignite include Juniper Networks, NEC, Cisco, Verizon, Comcast, HP, AT&T, Ciena, and Big Switch Networks.

Comcast Defends its Subscribers against Copyright Trolls

Excerpted from WebProNews Report by Zach Walton

People have a lot of bad things to say about ISPs, but we should give them credit when they do something pro-consumer. Remember when Verizon refused to comply with a subpoena that sought the identities behind IP addresses? That was pretty awesome and pro-consumer. Another major ISP has joined Verizon in protecting consumers' identities.

Comcast has requested that the court quash the subpoenas being used in an Illinois District Court. The subpoenas, like others before them, demand that Comcast identify the people behind IP addresses found downloading content over BitTorrent. The reasoning behind the quash request is sound, and Comcast's lawyers lay it out in easy-to-understand terms.

They argue that the subpoenas are "overbroad and exceed the boundaries of fair discovery." As for the other argument, let's have Comcast speak for itself:

"Third, plaintiffs should not be allowed to profit from unfair litigation tactics whereby they use the offices of the Court as an inexpensive means to gain Doe defendants' personal information and coerce 'settlements' from them. It is evident in these cases - and the multitude of cases filed by plaintiffs and other pornographers represented by their counsel - that plaintiffs have no interest in actually litigating their claims against the Doe defendants, but simply seek to use the Court and its subpoena powers to obtain sufficient information to shake down the Doe defendants. The Federal Rules require the Court to deny discovery 'to protect a party or person from annoyance, embarrassment, oppression, or undue burden or expense.' This case requires such relief."

Interestingly enough, AF Holdings accuses the defendants of participating in a BitTorrent "swarm." The idea here is that everybody who downloaded a movie from AF Holdings did so together with the intention of turning around and seeding it as soon as they had finished downloading it. It seems that pornography studios don't understand the Internet and how BitTorrent works, but Comcast apparently does.

"The plaintiffs allege in their complaints that the Doe defendants 'took concerted action' and 'were collectively engaged in the conspiracy even if they were not engaged in the swarm contemporaneously.' However, courts have found that 'much of the BitTorrent protocol operates invisibly to the user after downloading a file, subsequent uploading takes place automatically if the user fails to close the program.' Simply alleging the use of BitTorrent technology "¦ does not comport with the requirements under Rule 20(a) for permissive joinder."

If accepted by the court, this argument would help shape the definition of what kind of BitTorrent activity is actually considered infringement. Many people hold that the act of downloading unauthorized content over BitTorrent is not itself infringement, but that uploading the content to share it is. The problem comes from the fact that many BitTorrent clients automatically set the user to share the content over BitTorrent upon finishing the download.

In short, this case is absolutely fascinating. Unlike Verizon, which just refused the subpoena, Comcast is making a great argument for the rights of its subscribers and BitTorrent users everywhere. We'll keep watching the case to see what ruling the judge hands down. Either way, it's encouraging to see ISPs fighting for consumer rights. Now if they could just get rid of data caps.

Please also see Comcast Reply.

Coming Events of Interest

IEEE 32nd International Conference on Distributed Computing - June 18th-21st in Taipa, Macao. ICDCS brings together scientists and engineers in industry, academia, and government, covering: Cloud Computing Systems; Algorithms and Theory; Distributed OS and Middleware; Data Management and Data Centers; Network/Web/P2P Protocols and Applications; Fault Tolerance and Dependability; Wireless, Mobile, Sensor, and Ubiquitous Computing; and Security and Privacy.

Cloud Management Summit - June 19th in Mountain View, CA. A forum for corporate decision-makers to learn about how to manage today's public, private, and hybrid clouds using the latest cloud solutions and strategies aimed at addressing their application management, access control, performance management, helpdesk, security, storage, and service management requirements on-premise and in the cloud.

2012 Creative Storage Conference - June 26th in Culver City, CA. In association with key industry sponsors, CS2012 is finalizing a series of technology, application, and trend sessions that will feature distinguished experts from the professional media and entertainment industries.

CLOUD COMPUTING WEST 2012 - November 8th-9th in Santa Monica, CA. CCW:2012 will zero in on the latest advances in applying cloud-based solutions to all aspects of high-value entertainment content production, storage, and delivery; the impact of cloud services on broadband network management and economics; and evaluating and investing in cloud computing service providers.
