May 28, 2012
Volume XXXIX, Issue 8
Call for INVESTING IN THE CLOUD Speakers at CCW:2012
The DCIA and CCA this week announced a call for speakers at INVESTING IN THE CLOUD, one of three co-located conferences taking place at the CLOUD COMPUTING WEST 2012 (CCW:2012) summit November 8th-9th in Santa Monica, CA.
Major topics will range from new updates on venture capital and M&A activity in the cloud computing space to liabilities that need to concern investors regarding cloud-based businesses, along with analyses of capital structuring and strategic alliances for cloud computing firms, and problem areas affecting investments/mergers of cloud services.
In addition, special sessions will explore in depth the differing investment implications of public clouds, private clouds, hybrid clouds, virtual private clouds, and community clouds.
And finally, INVESTING IN THE CLOUD panels will examine green computing, big data, and open source as these topical considerations impact financing, VC criteria, and exit strategies.
Registration enables delegates to participate in any session at all three CCW:2012 conferences: INVESTING IN THE CLOUD, ENTERTAINMENT CONTENT DELIVERY, and NETWORK INFRASTRUCTURE.
CCW:2012 features one common exhibit hall, and all networking functions (e.g., luncheon, refreshment breaks, evening cocktail reception) are open to all attendees at no additional cost.
Solid State Networks Unveils Digital Delivery Application Platform
Solid State Networks, a leading developer of content delivery solutions, this week announced the availability of the DIRECT 3 application platform for digital delivery. DIRECT 3 is a third-generation technology for application developers, content publishers, and e-commerce providers that deliver digital products and services online.
Designed for rapid development, DIRECT 3 includes a native client that incorporates the features and functionality needed for most digital delivery applications with support for extensive customization through standard web technologies, including JavaScript, HTML, and CSS. DIRECT 3 can support a variety of use cases for digital distribution to consumers and enterprise users, as demonstrated in recent deployments by Adobe Systems for software delivery and BioWare for game delivery and updates.
The DIRECT 3 solution includes native clients for Windows and Mac operating systems, publishing workflow tools, and a reporting system. DIRECT 3 features many capabilities required by advanced digital delivery applications, such as versioning, differencing, updating, advanced proxy support, real-time delivery logistics, dynamic payload assembly, and the ability to integrate securely with publisher back-end systems or third-party services.
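Versioning and differencing are what keep update downloads small. DIRECT 3's internals are not public, so purely as a concept sketch (every name below is hypothetical, not DIRECT 3's API), here is the file-level differencing idea in miniature: hash every file in the installed build, compare against the publisher's manifest, and fetch only what changed.

```python
# Illustrative only: a file-level differencing pass of the kind delivery
# clients use to compute minimal update payloads. Not DIRECT 3 code.
import hashlib
from pathlib import Path

def manifest(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def files_to_update(installed: dict[str, str], published: dict[str, str]) -> list[str]:
    """Files that are new, or whose content changed, in the published build."""
    return [path for path, digest in published.items()
            if installed.get(path) != digest]

# Example: diff an installed game directory against a new release tree.
# needed = files_to_update(manifest(Path("installed")), manifest(Path("release")))
```

Real delivery platforms typically go further, using block-level deltas so that large files that change only slightly are also transferred incrementally.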
Solid State will announce additional services to be offered in connection with DIRECT 3 and support for other operating systems later in 2012.
"DIRECT 3 was developed with the benefit of having input from many companies that we have had the pleasure of working with over a period of years. We are excited about our capability to address their many and diverse needs with a single solution," said Rick Buonincontri, CEO of Solid State Networks. "We are particularly grateful to Adobe and BioWare for their contributions as early adopters."
"We observed measurable improvements in installation success when we transitioned to delivering Flash and Reader using DIRECT 3," said Steve Snell, Group Product Manager of Adobe Systems. "We have found DIRECT 3 to be very flexible in its ability to accommodate our evolving business and technical requirements."
Report from CEO Marty Lafferty
We commend US Senator Ron Wyden (D-OR) for introducing legislation this week that would clarify the US Trade Representative's (USTR) obligation to share information on trade agreements with Members of Congress.
Sen. Wyden, who spoke on the Senate floor about why this is necessary, has been a critic of the Administration's handling of international treaties, including the Anti-Counterfeiting Trade Agreement (ACTA) and, most recently, the Trans-Pacific Partnership (TPP).
At the heart of this new practice of skipping Congressional oversight and approval of international treaties like ACTA and TPP is the USTR, which has also been responsible for negotiating these treaties in secret.
In his "Statement for the Record" on the introduction of the Congressional Oversight Over Trade Negotiations Act, Wyden pointed out that the USTR has continually stymied Congress's efforts to learn more about the negotiations between the USTR and other countries.
According to Wyden, the lack of transparency is beyond the pale because corporations and interest groups know more about the negotiations for this latest treaty than lawmakers do:
"Right now, the Obama Administration is in the process of negotiating what might prove to be the most far-reaching economic agreement since the World Trade Organization was established nearly twenty years ago.
The goal of this agreement - known as the Trans Pacific Partnership (TPP) - is to economically bind together the economies of the Asia Pacific. It involves countries ranging from Australia, Singapore, Vietnam, Peru, Chile and the United States and holds the potential to include many more countries, like Japan, Korea, Canada, and Mexico. If successful, the agreement will set norms for the trade of goods and services and includes disciplines related to intellectual property, access to medicines, Internet governance, investment, government procurement, worker rights and environmental standards.
If agreed to, TPP will set the tone for our nation's economic future for years to come, impacting the way Congress intervenes and acts on behalf of the American people it represents.
It may be the USTR's current job to negotiate trade agreements on behalf of the United States, but Article 1 Section 8 of the U.S. Constitution gives Congress - not the USTR or any other member of the Executive Branch - the responsibility of regulating foreign commerce. It was our Founding Fathers' intention to ensure that the laws and policies that govern the American people take into account the interests of all the American people, not just a privileged few.
And yet, Mr. President, the majority of Congress is being kept in the dark as to the substance of the TPP negotiations, while representatives of US corporations - like Halliburton, Chevron, PHRMA, Comcast, and the Motion Picture Association of America - are being consulted and made privy to details of the agreement. As the Office of the USTR will tell you, the President gives it broad power to keep information about the trade policies it advances and negotiates, secret. Let me tell you, the USTR is making full use of this authority.
As the Chairman of the Senate Finance Committee's Subcommittee on International Trade, Customs, and Global Competitiveness, my office is responsible for conducting oversight over the USTR and trade negotiations. To do that, I asked that my staff obtain the proper security credentials to view the information that USTR keeps confidential and secret. This is material that fully describes what the USTR is seeking in the TPP talks on behalf of the American people and on behalf of Congress. More than two months after receiving the proper security credentials, my staff is still barred from viewing the details of the proposals that USTR is advancing.
Mr. President, we hear that the process by which TPP is being negotiated has been a model of transparency. I disagree with that statement. And not just because the Staff Director of the Senate subcommittee responsible for oversight of international trade continues to be denied access to substantive and detailed information that pertains to the TPP talks.
Mr. President, Congress passed legislation in 2002 to form the Congressional Oversight Group, or COG, to foster more USTR consultation with Congress. I was a senator in 2002. I voted for that law and I can tell you the intention of that law was to ensure that USTR consulted with more Members of Congress not less.
In trying to get to the bottom of why my staff is being denied information, it seems that some in the Executive Branch may be interpreting the law that established the COG to mean that only the few Members of Congress who belong to the COG can be given access to trade negotiation information, while every other Member of Congress, and their staff, must be denied such access. So, this is not just a question of whether or not cleared staff should have access to information about the TPP talks, this is a question of whether or not the administration believes that most Members of Congress can or should have a say in trade negotiations.
Again, having voted for that law, I strongly disagree with such an interpretation and find it offensive that some would suggest that a law meant to foster more consultation with Congress is intended to limit it. But given that the TPP negotiations are currently underway and I - and the vast majority of my colleagues and their staff - continue to be denied a full understanding of what the USTR is seeking in the agreement, we do not have time to waste on a protracted legal battle over this issue. Therefore, I am introducing legislation to clarify the intent of the COG statute.
The legislation I propose is straightforward. It gives all Members of Congress and staff with appropriate clearance access to the substance of trade negotiations. Finally, Members of Congress who are responsible for conducting oversight over the enforcement of trade agreements will be provided information by the Executive Branch indicating whether our trading partners are living up to their trade obligations. Put simply, this legislation would ensure that the representatives elected by the American people are afforded the same level of influence over our nation's policies as the paid representatives of PHRMA, Halliburton and the Motion Picture Association.
My intent is to do everything I can to see that this legislation is advanced quickly and becomes law, so that elected Members of Congress can do what the Constitution requires and what their constituents expect."
Share wisely, and take care.
Thrill Customers in the Cloud to Raise Profit
Excerpted from Investor's Business Daily Report by Amy Alexander
Want to innovate tomorrow? Put your head in the cloud today.
What is it? Cloud computing is the online storage of information within a network of data centers. A decade ago, businesses had to buy and maintain a block of their own in-house servers. The process was expensive and tough to plan.
Now firms can get as much or as little computer storage as they need from Internet companies such as Amazon that have built banks of servers to rent out streaming data processing power. Software can run off-site.
Example: Google's popular Gmail exists in the cloud.
When a firm, small or large, needs to transmit or save chunks of information, it can do so more quickly and cost-effectively by tapping the cloud.
"The cloud is enabling people to buy computing services just like electricity," Sacha Labourey, CEO of Cloud platform provider CloudBees, told IBD.
Why does it matter? The cloud makes innovation easier and cheaper. Using the cloud, startups can open for business faster, without having to buy servers or maintain software, then scale up as they grow. Companies can spot-test new strategic angles.
"This kind of productivity is going to be a huge boost," Labourey said.
Get ready. Hard goods - such as compact discs or books - are quickly being traded for digital products that live in the cloud. It's because of cloud computing that you can carry your library everywhere you go.
Jump in. Start brainstorming. How could you use the cloud at your company?
"Today the cloud does not play much of a role in the economy. But five or 10 years from now it's going to play a big role," Labourey said. "Most companies that do not want to lose that war are already ramped up in the cloud."
Move. Even if you don't think you'll ever work directly with the cloud, a seismic business shift is under way.
Once customers and employees get used to a cloud-based world, their behavior and expectations will change.
So says Thomas Koulopoulos, co-founder of Delphi Group, a consultancy that helps industry leaders respond to new technology.
In his new book, "Cloud Surfing," Koulopoulos suggests the cloud will transform the way people cope with every challenge they face over the next 100 years.
"The cloud is to the Internet what intelligent life is to primordial soup," he said.
Swoop in. In exchange for the ease and convenience of accessing the cloud, consumers will be willing to give up a lot of information about their lives, Koulopoulos predicts. Firms can use the data to rapidly sense what customers want, then dream up new notions that meet a real-time demand.
DDN Announces WOS Object Storage Support for OCP Hardware
DataDirect Networks (DDN), world leader in massively scalable storage, this week announced support for DDN's Web Object Scaler (WOS), its award-winning hyperscale object storage solution, on the recently announced Open Compute server and storage platforms in cooperation with the Open Compute Project (OCP).
WOS software enables organizations to easily build and deploy their own storage clouds across geographically distributed sites that can scale to unprecedented levels, while still being managed as a single entity. Designed for performance, resiliency and capacity scalability, WOS can ingest and distribute over 55 billion objects per day with today's commodity storage hardware.
With a fundamental design to support scalability to over an exabyte of global storage capacity and trillions of objects, global data distribution and latency-optimized global access, WOS is built to enable multi-site, global organizations to connect to and collaborate on data without the bottlenecks and overhead associated with traditional file access.
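The announcement does not describe WOS's interfaces, so the sketch below is a generic illustration of object-storage semantics rather than DDN's API: each blob is stored and retrieved by an opaque object ID instead of a file path, which is what frees such systems to place replicas across geographically distributed sites without file-system locking or directory bottlenecks.

```python
# A toy content-addressed object store: clients receive an opaque ID on
# ingest and later retrieve by that ID alone. No directory hierarchy and
# no file locking, which is what makes geographic replication simple.
# Hypothetical sketch only; not DDN WOS's API.
import hashlib

class ObjectStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, data: bytes) -> str:
        oid = hashlib.sha256(data).hexdigest()  # opaque object ID
        self._blobs[oid] = data
        return oid

    def get(self, oid: str) -> bytes:
        return self._blobs[oid]

store = ObjectStore()
oid = store.put(b"frame-000001")
assert store.get(oid) == b"frame-000001"
```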
"Historically, there has not been an industry movement around standardizing and driving the adoption of mass-market hyperscale hardware technology," said Jean-Luc Chatelain, Executive Vice President of Strategy and Technology, DDN.
"With the new OCP storage hardware specification, DDN is able to focus its cloud storage efforts and investments on the software intelligence that drives today's business and social connection. The Open Compute movement allows us to harness the power of crowd-sourced hardware design and a highly optimized supply chain to drive the best value for our customers."
"The challenges around building web-scale infrastructure are particularly thorny when it comes to designing the storage component, particularly as organizations look to deal efficiently and effectively with storing very large data volumes," said Simon Robinson, Research Vice President at 451 Research.
"This is leading to some creative approaches, not only in terms of innovative technology but also in terms of exploring new pathways to market. Alliances between proprietary offerings such as DDN's WOS and open source initiatives such as the Open Compute Project are one such example, and we're fascinated to see how this could help create new value in the market."
OCP is a non-profit organization focused on the development of open standards to support massively scalable, efficient, and economical computing infrastructure around the world. It recently held its third OCP summit, at which Chatelain discussed object storage, the peer-voted most popular topic of the workshop.
"It was an exciting inflection point in the DDN story to be chosen to speak to such an engaged collection of scalability specialists about the opportunity presented by hyperscale object storage and the market disruptions which are driving this to become the de facto storage model in web-scale data centers," said Chatelain.
DDN will qualify WOS software on the OCP storage platform when it becomes generally available from hardware manufacturers, which is expected within the next year.
Ten Principles of Big Data
Excerpted from Social Media Insider Report by David Berkowitz
Once upon a time, there was a little data. People thought it was important but not all that sexy. Then it grew up, and everyone started calling it Big Data. Now it's on all the Sexiest Trend Alive covers, and there are rumors that it is having affairs with all of the Kardashians, including Bruce Jenner.
Big Data arguably grew up, at least in popular parlance, thanks to all the information it has been gleaning from social and mobile media. Gotham Media Ventures focused on it during an Internet Week event, "Data Wars: The Future of Personal Information and Advertising."
I had the pleasure of joining an especially insightful bunch on the panel: Clickable Director of Social Strategy Jordan Franklin, MicroStrategy Senior Director Marc Hayem, LocalResponse President Kathy Leake, and Sonar CEO Brett Martin, along with moderator Terri Seligman of Frankfurt Kurnit Klein & Selz PC.
Over the course of the discussion, all of the panelists inspired a series of principles governing this era of Big Data. Here are the top ten:
1) We are all data. There's very little about us that can't be expressed as a data point. Our data includes who we talk to, where we go, our physical activity, what we consume, and what we don't consume. We are the data.
2) We value data differently depending on whether we share it implicitly or explicitly. We don't mind being asked to give consent to share data, and we usually don't mind being ignorant. We only become alarmed when we become aware of data that we previously shared implicitly.
3) There must be a value exchange for sharing data. Brands, publishers, technology companies, and others determine what someone receives in return for providing the data. Sometimes this is through an overt agreement, while more often it's a tacit barter.
4) What matters is how people perceive the value. A marketer may not think a $1 coupon is very valuable, but consumers may covet it. Similarly, a marketer may give away a new car that seems highly valuable, but the chance to win it may not move consumers to act.
5) Virtual value is just as real as tangible value. Consider a $1 coupon compared to a leaderboard where someone earns 100 points for checking into a location. The latter has no monetary value, but if the program is deployed properly, then those 100 points - and the bragging rights and self-esteem that go with them - may be more valuable than the tangible goods.
6) There's a value chain beyond the exchange. If a user shares data with a publisher, that leads to more relevant advertising, which makes users more likely to return and earns more revenue per user, which leads to more profits, which leads the publisher to invest in hiring better talent and creating better content, which makes users more likely to share that content with others, which makes new users more likely to visit and become regular users.
7) Norms are changing. Jordan brought up the example of Facebook Beacon, which launched in 2007 and was promptly killed due to user backlash, compared to Facebook's "frictionless sharing," which may well deserve the same fate as Beacon but hasn't attracted as much ire. The latest privacy controversy comes from SceneTap, which uses facial detection to show the demographic composition of bar patrons. It inspired Violet Blue to write in ZDNet, "San Francisco Hates Your Start-up." Many people commenting on such posts defend SceneTap, and in a few years, such technology may not raise a single eyebrow.
8) Standards are different online. Accept it. When you move into a new home, that Pottery Barn catalog will always find you within days, and yet none of the tens of millions of people who change residences each year protest about the surreptitious data usage. The standards are higher for social and mobile media because that data feels more personal.
9) Government regulation isn't as challenging for brands as changes in terms of service. The government of any country may enact regulation that affects the use of data, yet governments don't move nearly as fast as Facebook, Google, Pinterest, and other technology companies.
10) Change is happening at an exponential pace. The data that's available and the options for harnessing it keep multiplying. Keep pace however you can. Think creatively about how you can use all this data. Most importantly, remember that you're not a brand or a publisher or an agency or a corporation; you're a person. Respect people. The Golden Rule lives on.
Geniatech Supercharges Broadband TV with Octoshape Infinite
Octoshape, an industry leader in cloud-based global streaming technologies, this week announced a partnership with Shenzhen-based Geniatech. The partnership enables the wide range of Geniatech Android TV devices to connect to Octoshape's Infinite HD-M Federated Linear Broadband TV Platform, an industry-first service that deploys multicast over broadband for high-quality linear video delivery.
Geniatech's adoption of Octoshape's technology brings the scale, quality and economics of traditional broadcast TV to broadband connected devices in the living room. With the Infinite HD-M "flat rate per channel" pricing model for linear channels, broadcasters can now deliver linear video with predictable and affordable economics to the television over Geniatech devices.
"We are excited to play a role in the evolution of the OTT market", said Fang Jijun, General Manager of Geniatech. "We constantly look to integrate technology solutions that increase the ability for consumers to enjoy high quality content using our devices. The Infinite HD-M solution enables these high quality video experiences for the broadband masses."
"The partnership with Geniatech is an integral part of our strategy to extend our global Infinite HD-M ecosystem to the television", says Michael Koehn Milland, CEO of Octoshape. "Geniatech's products and their drive for innovation combined with our technologies will take the broadband TV experience of our customers to a new level."
Octoshape's federated linear broadband TV ecosystem will continue to expand globally throughout 2012 in carefully planned phases adding content contribution partners, Tier 1 broadband providers, connected television manufacturers and conditional access providers.
Top 10 Reasons to Love Cloud Computing
Excerpted from Cloud Computing Journal by Roger Strukhoff
My Top 10 Reasons to Love Cloud Computing...
10. It's an approach, not a technology.
9. It's truly global, in the way that food and oil are global.
8. It levels the playing field for developing companies and developing nations.
7. It's revived the software industry in a way that was unthinkable a decade ago.
6. It fosters the growth of apolitical Open Source.
5. It pushes countries to improve their bandwidth.
4. It drives the creation of shiny new toys.
3. It's green, no matter what Greenpeace says to the contrary.
2. Its imprecise definition means technology writers will have jobs trying to explain it for a long, long time.
1. It isn't Facebook.
Huawei Launches Solutions in Convergent Billing, Telco Cloud, Value Growth
Excerpted from TelecomTiger Report
Huawei, a leading global information and communications technology (ICT) solutions provider, on Thursday launched three telecommunications software solutions at the Telecom Management Forum (TMF) in Dublin: Convergent Billing Solution (CBS) R5, Telco Cloud, and Value Growth Solution (VGS).
Based on Huawei's flexible design concept, these solutions further Huawei's commitment to bringing end users better experiences, while helping operators and partners achieve greater business success through improved monetization of those experiences.
Huawei's Convergent Billing Solution R5 features a refined system architecture that enables carriers to enhance flexibility and support different business models, satisfy new and dynamic market requirements, and expand their businesses with increased agility.
Telco Cloud offers a full range of cloud components, including facilities and infrastructure, such as storage and servers. Based on its deep understanding of the telecommunications industry, Huawei has built a XaaS (IaaS, PaaS, SaaS) ecosystem in cooperation with various ISVs (Independent Software Vendors). Huawei's Telco Cloud helps carriers migrate to the cloud, meeting market demand for consulting services, management services, smart pipes, and strong service level agreement guarantees.
By focusing on traffic management and enhanced bandwidth revenue, Huawei's latest Value Growth Solution aids carriers in their transformation from voice operators to bandwidth traffic management operators. With user content adaptation and bandwidth control, VGS provides rich user experiences and traffic management capabilities, enabling carriers to maximize business value, increase revenue per bit, and enhance mobile broadband penetration.
The Sky Is the Limit for Cloud Computing Hiring
Excerpted from Seattle Post Intelligencer Report
During April 2012, more than 12,000 cloud computing jobs were advertised online, according to WANTED Analytics, the leading source of real-time business intelligence for the talent marketplace. Hiring demand increased almost 50% compared with April 2011 and more than 275% versus April 2010.
Technology occupations saw the highest number of jobs. Some of the most commonly advertised cloud computing job titles were Software Engineer, Java Developer, Systems Engineer, Network Engineer, and WebSphere Cloud Computing Engineer. Other occupations with high demand for cloud computing skills were Marketing Manager, Sales Representative, Management Analyst, Operations Manager, and Market Research Analyst.
Potential candidates for cloud computing jobs are often required to have knowledge or experience with several tools and technologies, including:
1. Cloud computing
2. Oracle Java
3. Linux
4. Structured Query Language (SQL)
5. UNIX
6. Software as a Service (SaaS)
7. VMware software
8. Salesforce CRM
9. Python extensible programming language
10. Practical Extraction and Reporting Language (Perl)
The five metropolitan areas with the highest volume of cloud computing job ads during April were San Jose, Washington (DC), San Francisco, Seattle, and New York. Not only did San Jose see the most demand, but its employers also increased demand the most of these five metro areas: over 1,700 job ads were posted online there in April, 94% more than in April 2011.
Nationwide, companies are likely to find cloud computing jobs hard to fill, with the degree of difficulty in each location depending on the available talent supply. According to the Hiring Scale, employers in Seattle currently face some of the heaviest competition to attract talent and may experience a longer time-to-fill than many other areas across the United States.
In fact, the average posting period in Seattle is more than 7 weeks. The Hiring Scale also shows that the best places to recruit cloud computing professionals are currently Poughkeepsie (NY), Harrisburg (PA), and Sarasota (FL). These areas are likely to fill job openings faster than the rest of the United States. Online job ads in these areas are posted for an average of about 5 weeks, more than 2 weeks shorter than in Seattle.
The Hiring Scale measures conditions in local job markets by comparing hiring demand and labor supply. The Hiring Scale is part of the WANTED Analytics platform that offers business intelligence for the talent marketplace.
Terremark Boosts European Cloud Capacity
Excerpted from ITWeb Report by Admire Moyo
Terremark has increased its cloud infrastructure capacity by deploying a node of its Enterprise Cloud Managed Edition at its London data center, Data Center Knowledge reports.
Terremark, a Verizon company, says the additional capacity would support customers with growing requirements related to the 2012 Olympics, which commence in London in late July.
"The ability to quickly and efficiently realize the benefits of enterprise cloud services has been of particular interest to businesses in Europe this year, as they are faced with business continuity concerns in connection with the London 2012 Olympics," says Kerry Bailey, Verizon Enterprise Solutions' chief marketing officer.
"It's important for us to invest to support our customers where they need us - this is obviously timely in terms of London 2012, but is also part of our focusing on investing in infrastructure to support our customers' future business growth."
The London data center provides the robust physical infrastructure that helps Terremark meet the increasing demand from customers for advanced cloud computing, security and IT infrastructure services, Market Watch notes.
As with the cloud deployments in its cloud-enabled data centers around the world, Terremark delivers security services to support its public cloud infrastructure and offers a range of IT infrastructure and hybrid cloud solutions.
Why Cloud Computing Is Good News for Retailers
Excerpted from Dynamic Business Report by Ian Kinsella
Why is everyone talking about cloud computing? Simply because it's changing the way we do business. Cloud computing isn't new. What's new is that improved broadband and new applications make it a real option for businesses to manage their IT needs without having to buy hardware and software that rapidly outdates.
Cloud computing is simply "Internet based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand," according to Wikipedia.
The fundamental concept of cloud computing is that the processing and storage of your data is carried out on servers outside your physical business location, using software licensed on a per-user basis. For this reason the term is also used to describe what is sometimes known as Software as a Service. Ordinarily offered on a pay-per-use basis, this lets businesses benefit from massive processing capability without owning the physical infrastructure or bespoke software, avoiding the need for considerable upfront capital investment.
Opening a new store?
It's exhausting enough opening a retail outlet without the added complexity of setting up a new IT network and POS system. With a hosted system and good Internet access, you can get your new store online and using your existing POS system simply by installing a new PC. New staff can be supplied with user licenses and added to your authorized user list, and the hosted POS software is accessible in your new premises instantly.
Another benefit is the ability to log in to your software from home, office or shop via a simple web browser. This means you could be at home, but still log in to check the day's sales or check stock levels.
Enhancing the customer experience.
By utilizing a hosted server running their POS system and customer data records, sales staff on the shop floor can update customer records in a central database during a sales transaction. In this case, such records would include previous purchases and design/style/taste preferences.
Such information is accessible to floor staff across all stores, so a regular customer can be given personal attention regardless of which store they walk into, and changes to their buying patterns or preferences can be updated following a store visit anywhere in the country.
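As a sketch of that flow (the endpoint, fields, and hostname are hypothetical, not any particular vendor's API), a hosted POS system exposes the central customer database over the web, so a terminal in any store can record a purchase and the customer's preferences during checkout:

```python
# Hypothetical sketch: a store terminal updating a central customer record
# over a hosted POS API during a sale. Endpoint and fields are invented.
import requests

def record_purchase(customer_id: str, sku: str, style_tags: list[str]) -> None:
    resp = requests.post(
        f"https://pos.example.com/api/customers/{customer_id}/purchases",
        json={"sku": sku, "style_preferences": style_tags},
        timeout=5,  # a slow link in one store must not hang the till
    )
    resp.raise_for_status()

# record_purchase("cust-1042", "DRESS-88", ["linen", "navy"])
```

Because every store talks to the same central record, the update made at one till is immediately visible to floor staff everywhere else.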
Security and data storage.
A further consideration is the benefit of data security and storage. With hosted software, theft at or damage to one store poses no risk of losing transactional data or customer records. Just think of the vast number of businesses affected by the recent flood damage in Queensland.
What does this mean for you?
The next time you consider spending money expanding your store network, upgrading your server, getting faster workstations or buying new software, consider the benefits of a cloud solution.
Easier maintenance of your IT infrastructure.
More disciplined security for your data.
Automatic, continual backup of your data.
All maintenance is dealt with by the service provider.
No need to duplicate hardware and network infrastructure across retail stores.
Extended life for workstations that no longer need large memory to run applications.
Employees at any store location can easily access a central data source and share information.
All for a single monthly subscription fee that can be fully expensed.
SAP Buys US Cloud Computing Firm Ariba for $4.3 Billion
Excerpted from BBC News Report
The German business software firm SAP has announced that it is to acquire US software maker Ariba.
The deal, valued at $4.3 billion (3.4 billion euros), marks a big push by SAP into the world of so-called cloud computing.
Ariba makes web-based software that connects suppliers and buyers online.
SAP, the world's biggest business software maker, is competing against US rival Oracle for dominance of 'the cloud,' where growth is set to soar in future.
"The cloud has profoundly changed the way people interact," said SAP co-chief executives Bill McDermott and Jim Hagemann Snabe said in a statement.
"Cloud-based collaboration is redefining business network innovation, and we are catching this wave in the early stage of its evolution."
'Good fit'
More and more businesses are turning to the cloud, essentially the Internet, for their software needs.
Thousands of companies now use software that is hosted on remote servers and accessed via the Internet because it removes the need for them to install and maintain software in-house.
According to some estimates, software delivered via the web (the cloud) is expected to grow five times as quickly as sales of programs installed on business premises.
Analyst Kirk Materne with Evercore said that SAP and Ariba complement each other well.
"Ariba is a fairly unique asset that would seem to be a good fit with SAP's revamped cloud strategy," he said.
Ariba is the second-largest cloud vendor by revenue, according to SAP, and some analysts believe that a rival bid for the California-based firm could be forthcoming.
"There's a history of bidding wars between SAP and Oracle and this is exactly the kind of strategic company that would spark something like that," said analyst Richard Williams at Cross Research in the US.
Start Small, Grow Tall: Why Cloud Now
Excerpted from ITworld Report
The cloud is the hottest thing in computing today, and enterprises are eagerly seeking to adopt it. They realize that cloud computing holds the promise of curing today's "data center sprawl," with its colossal complexity, considerable costs, and substantial capital investment.
For their part, service providers see the cloud as a catalyst for revenue growth. Executives and leaders look forward to the day when information technology will be delivered as a pure service throughout the organization - metered, ubiquitous, and available on demand much like electricity or water. But the reality is that the cloud isn't yet so mature or capable that it's ready to replace traditional IT. Companies face a number of obstacles to cloud adoption. Among them: differences between business and IT executives about the pace of adoption; differing stages of maturity within the cloud adoption continuum; and the need to avoid compromising the cloud's benefits with scattershot, uncoordinated adoption.
As you'll see, without a proper goal and a clear plan to get there, organizations risk re-infecting their IT environments with complexity and sprawl that are every bit as counterproductive as the data center problems the cloud was meant to correct.
After Amazon, How Many Clouds Do We Need?
Excerpted from GigaOM Report by Barb Darrow
With news that Google and Microsoft plan to take on the Amazon Web Services monolith with infrastructure services of their own, you have to ask: How many clouds do we need?
This Google-Microsoft news, broken this week by Derrick Harris, proves to anyone who didn't already realize it that Amazon is the biggest cloud computing force (by far) and, as such, wears a big fat target on its back. With the success of Amazon cloud services, which started out as plain vanilla infrastructure but has evolved to include workflow and storage gateways to enterprise data centers, Amazon's got everyone - including big enterprise players like Microsoft, IBM and HP - worried. Very worried.
These vendors are betting big that they can give Amazon a run for its money and that their cloud services will help them retain existing customers and (knock wood) win some newbies. Microsoft built Azure as a full-fledged platform as a service, but in the face of Amazon's success had to tack to offer IaaS-type services, including VM Roles, which has been in beta for more than a year.
Amazon as enterprise apps platform? Don't laugh.
Take the news late this week that IBM is working with Ogilvy and Mather to move the advertising giant's SAP implementation from its current hosted environment to "SmartCloud for SAP Applications hosted in IBM's state-of-the-art, green Smarter Data Center." (Note to IBM: brevity is beauty when it comes to branding.)
Don't think that little tidbit is unrelated to last week's announcement that SAP and Amazon together certified yet another SAP application - All-in-One - to run on Amazon's EC2. This sort of news validates Amazon as an enterprise-class cloud platform, and that's the last thing IBM, HP, or Microsoft wants to see happen. So every one of these players - plus Google - is taking aim at Amazon.
Some hardware players, including HP, which is reportedly about to cut 30,000 jobs, see the cloud as a way to stay relevant and, oh, by the way, keep customers' workloads running on their hardware and software. HP's OpenStack-based public cloud went to public beta earlier this month.
Case in point: Along with the SAP migration news, IBM also said that SmartCloud Enterprise+, IBM's managed enterprise cloud infrastructure, offers "unprecedented support for both x86 and P-Series [servers] running Windows, Linux and AIX on top of either VMware or PowerVM hypervisor," and that SCE+ is designed to support different workloads and associated technology platforms, including a new System z shared environment that will be available in the US and UK later this year.
Hmmm. P-Series and System z - not exactly the sort of commodity hardware that modern webscale cloud companies run, but they are integral to IBM's well-being.
Vendor clouds to lock customers in.
This illustrates what prospective buyers should know: Despite all the talk about openness and interoperability, a vendor's cloud will be that vendor's cloud. It represents a way to make sure customers run that company's hardware and software as long as possible. But legacy IT vendors are not alone in trying to keep customers on the farm.
Amazon is making its own offerings stickier so that the more higher-value services a customer uses, the harder it will be to move to another cloud. As Amazon continues what one competitor calls its "Sherman's march on Atlanta," legacy IT vendors are building cloud services as fast as they can in hopes that they can keep their customers in-house. For them, there had better be demand for at least one more cloud.
There will doubtless be more discussion on this and other cloud topics at the GigaOM Structure Conference in San Francisco next month.
Design Guidelines for Cloud Computing
Excerpted from Sys-Con Media Report by Mario Meir-Huber
Infrastructure as a Service and Platform as a Service offer us easy scaling of services. However, scaling in the cloud is not as easy as it seems: if your software architecture isn't done right, your services and applications might not scale as expected, even if you add new instances. As with most distributed systems, there are a number of guidelines you should consider. I have summed up the ones I use most often when designing distributed systems.
Design for failure.
As Murphy's law has it, everything that can fail will fail. So it is very clear that a distributed system will fail at some point, even though cloud computing providers tell us it is very unlikely. Some of the major platforms suffered outages [1][2] over the last year, and there will be more of them. Therefore, your application should be able to deal with an outage at your cloud provider. This can be done with techniques such as distributing an application across more than one availability zone (which should be done anyway). Netflix has a very interesting approach to steadily testing its software for failures: it employs an army of "Chaos Monkeys" [3]. Of course, they are not real monkeys; it is software that randomly takes down different instances. Netflix produces errors on purpose to see how its system reacts and whether it is still performing well. The question is not if there will be another outage; the question is when the next outage will be.
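As a minimal sketch of that principle (assuming AWS EC2 via the boto3 client library, and a "chaos-opt-in" tag that is our own convention, not a Netflix or AWS standard), a chaos job can pick one eligible instance at random and terminate it:

```python
# Minimal chaos-monkey sketch: randomly terminate one opted-in instance
# to verify the system survives. Assumes AWS EC2 and boto3; Netflix's
# actual Chaos Monkey is its own tool, not this code.
import random
import boto3

ec2 = boto3.client("ec2")

def terminate_random_instance() -> str | None:
    result = ec2.describe_instances(
        Filters=[{"Name": "tag:chaos-opt-in", "Values": ["true"]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )
    ids = [i["InstanceId"]
           for r in result["Reservations"] for i in r["Instances"]]
    if not ids:
        return None  # nothing opted in; do no harm
    victim = random.choice(ids)
    ec2.terminate_instances(InstanceIds=[victim])
    return victim
```

The opt-in tag is the important design choice: only instances explicitly marked as fair game can ever be killed, so the experiment stays controlled.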
Design for at least three running systems.
For on-premise systems, we always used an "N+1" design, and this still applies in the cloud: there should always be one more system available than is actually necessary. In the cloud, this can easily be achieved by running your instances in different geographical locations and availability zones. If one region fails, another region takes over, and some platforms offer intelligent routing that forwards traffic to another zone when one goes down. There is also the "rule of three," which says you should have three systems available: one for me, one for the customer, and one in case of failure. This minimizes your risk of an outage significantly.
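As a minimal sketch of zone failover at the client (the hostnames are hypothetical, and real deployments would usually push this into DNS or a load balancer rather than application code), keep one endpoint per zone and use the first healthy one:

```python
# Try each zone's endpoint in order and use the first healthy one.
# Three endpoints reflect the "rule of three" above.
import requests

ZONE_ENDPOINTS = [
    "https://us-west.api.example.com",
    "https://us-east.api.example.com",
    "https://eu-west.api.example.com",
]

def first_healthy_endpoint() -> str:
    for url in ZONE_ENDPOINTS:
        try:
            if requests.get(f"{url}/health", timeout=2).ok:
                return url
        except requests.RequestException:
            continue  # zone unreachable; fail over to the next one
    raise RuntimeError("all zones are down")
```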
Design for monitoring.
We all need to know what is going on in our data centers and on our systems, so monitoring is an important aspect of every application you build. Intelligent monitoring involves more than I/O performance and similar metrics. Ideally, your system should be able to "predict" its future load, either from statistical data in your application's history or from your application's domain: a sports-betting application will see high load during major sports events, while a social game may see higher load during the day or when the weather outside is bad. In any case, your system should be monitored all the time, and it should warn you when a major failure may be coming.
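A toy version of such load prediction, under the assumption that you already record hourly request counts: take the same hour of day over the last several days as a seasonal baseline, and scale out ahead of time when the forecast approaches capacity.

```python
# Naive seasonal forecast: the coming hour's load is estimated as the
# average of the same hour of day over the last few days. A stand-in
# for real forecasting models, not any monitoring product's API.
def predict_next_hour(hourly_counts: list[int],
                      season: int = 24, history_days: int = 7) -> float:
    # Step back one full day at a time from the hour about to start.
    samples = hourly_counts[-season::-season][:history_days]
    if not samples:
        raise ValueError("need at least one full day of history")
    return sum(samples) / len(samples)

# counts[-1] is the hour just finished; scale out before the spike:
# if predict_next_hour(counts) > provisioned_capacity * 0.8: add_instances()
```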
Design for rollback.
Large systems are typically owned by different teams in your company. This means that many people work on your systems and rollouts happen often. Even with extensive testing, new features will sometimes break other services of your application. To guard against that, your application should provide an easy rollback mechanism.
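One common way to make rollback a one-step operation, sketched under the assumption that each release is deployed into its own versioned directory: serve traffic through a "current" symlink and roll back by atomically re-pointing it.

```python
# Each release deploys into its own directory; "current" is a symlink.
# Rollback is an atomic re-point of the link, not a redeploy.
# The directory layout is an assumption made for this sketch.
import os
from pathlib import Path

RELEASES = Path("/srv/app/releases")   # e.g. releases/2012-05-28.2
CURRENT = Path("/srv/app/current")

def activate(release: str) -> None:
    tmp = CURRENT.with_name("current.tmp")
    if tmp.is_symlink():
        tmp.unlink()
    tmp.symlink_to(RELEASES / release)
    os.replace(tmp, CURRENT)           # atomic rename on POSIX

def rollback() -> None:
    # Assumes release names sort chronologically and "current" points
    # at the newest; re-point at the one before it.
    releases = sorted(p.name for p in RELEASES.iterdir())
    activate(releases[-2])
```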
Design No State.
State kills. If you store state on your servers, load balancing becomes much more complicated for you. State should be eliminated wherever and whenever possible, and there are several techniques to reduce or remove it. Modern devices such as tablets and smartphones have sufficient performance to store state information on the client. Every service call should be independent, and it shouldn't be necessary to keep session state on the server; all session state should be transferred to the client, as described by Roy Fielding [4]. Architectural styles such as ROA support this idea and help you make your services stateless. I will dig into REST and ROA in an upcoming article, since they are a great fit for distributed systems.
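As a small illustration of moving session state to the client, using only Python's standard library (key management is elided and the secret shown is a placeholder): sign the state with an HMAC so any server instance can verify what the client presents, with no server-side session table at all.

```python
# Stateless sessions: state travels with the client, HMAC-signed so it
# cannot be forged. Sketch only; real systems add expiry and key rotation.
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # placeholder; load from secure config

def issue_token(state: dict) -> str:
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def read_token(token: str) -> dict:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered token")
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"user": "alice", "cart": ["sku-7"]})
assert read_token(token)["user"] == "alice"
```

Because verification needs only the shared secret, any instance behind the load balancer can handle any request, which is exactly what makes scaling out painless.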
Design to disable services.
It should be easy to disable services that are not performing well or that influence your system in a way that poisons the entire application. Each service should therefore be isolated from the others, so that one service's failure does not affect the entire system's functionality. Imagine that Amazon's comment function stopped working: comments might help you make up your mind about a book, but their absence wouldn't prevent you from buying it.
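A minimal sketch of that kill switch (the flag store and service names are hypothetical): wrap each optional service in a disable check plus a fallback, so switching off, say, comments degrades one feature instead of breaking the page.

```python
# Per-service kill switch with a fallback: a misbehaving optional
# service can be disabled without touching the rest of the application.
# The flag set and service names are hypothetical.
DISABLED = {"comments"}   # in practice: a runtime-editable flag store

def call_service(name: str, fn, fallback):
    if name in DISABLED:
        return fallback   # feature switched off; degrade gracefully
    try:
        return fn()
    except Exception:
        return fallback   # isolate the failure; never break the page

# reviews = call_service("comments", fetch_comments, fallback=[])
# Checkout keeps working even while comments are switched off.
```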
Design different roles.
With distributed systems we have a lot of servers involved, and it is often necessary to scale not an entire front-end or back-end server but individual services. If exactly one front-end system hosts all roles and one specific service experiences high load, why scale up all services, even those under minor load? You can improve your systems by splitting them into different roles. As Bertrand Meyer described with Command-Query Separation [5], your application should be split into different roles. This is a key idea for SOA applications; however, I still see that most services are not separated. There should be more separation of concerns based on the services. Implement some kind of role separation for your application and services to improve scaling.
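Sketched below with illustrative names: splitting the write path (commands) from the read path (queries) into separate roles lets the read side scale out on its own, which is the separation described above.

```python
# Command-query separation at the service level: writes and reads are
# distinct roles that can be deployed and scaled independently.
# Names are illustrative; the point is the split, not this code.

class OrderCommands:          # write role: few instances, strong checks
    def __init__(self, db: dict) -> None:
        self.db = db

    def place_order(self, order_id: str, items: list[str]) -> None:
        self.db[order_id] = items

class OrderQueries:           # read role: scale this out under read load
    def __init__(self, db: dict) -> None:
        self.db = db

    def get_order(self, order_id: str) -> list[str]:
        return self.db[order_id]

db: dict[str, list[str]] = {}   # stand-in for a shared datastore
OrderCommands(db).place_order("o-1", ["sku-7"])
assert OrderQueries(db).get_order("o-1") == ["sku-7"]
```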
There might be additional principles for distributed systems. I see this article as a living document and will extend it over time.
Coming Events of Interest
Cloud Computing Forum & Workshop V - June 5th-7th in Washington, DC. The National Institute of Standards and Technology (NIST) hosts this meeting focused on reviewing progress on the Priority Action Plans (PAPs) for each of the 10 high-priority requirements related to interoperability, portability, and security that were identified by US government agencies for adopting cloud computing.
Cloud Expo - June 11th-14th in New York, NY. Two unstoppable enterprise IT trends, Cloud Computing and Big Data, will converge in New York at the tenth annual Cloud Expo being held at the Javits Convention Center, featuring a vast selection of technical and strategic General Sessions, Industry Keynotes, Power Panels, Breakout Sessions, and a bustling Expo Floor.
IEEE 32nd International Conference on Distributed Computing - June 18th-21st in Taipa, Macao. ICDCS brings together scientists and engineers in industry, academia, and government: Cloud Computing Systems, Algorithms and Theory, Distributed OS and Middleware, Data Management and Data Centers, Network/Web/P2P Protocols and Applications, Fault Tolerance and Dependability, Wireless, Mobile, Sensor, and Ubiquitous Computing, Security and Privacy.
Cloud Management Summit - June 19th in Mountain View, CA. A forum for corporate decision-makers to learn about how to manage today's public, private, and hybrid clouds using the latest cloud solutions and strategies aimed at addressing their application management, access control, performance management, helpdesk, security, storage, and service management requirements on-premise and in the cloud.
2012 Creative Storage Conference - June 26th in Culver City, CA. In association with key industry sponsors, CS2012 is finalizing a series of technology, application, and trend sessions that will feature distinguished experts from the professional media and entertainment industries.
CLOUD COMPUTING WEST 2012 - November 8th-9th in Santa Monica, CA. CCW:2012 will zero in on the latest advances in applying cloud-based solutions to all aspects of high-value entertainment content production, storage, and delivery; the impact of cloud services on broadband network management and economics; and evaluating and investing in cloud computing services providers.