Distributed Computing Industry
Weekly Newsletter

July 1, 2013
Volume XLIV, Issue 5


Tuesday July 16th Webinar: BUILD, BUY, OR RENT?

How should you implement your optimal cloud storage solution?

For many companies, one of the biggest decisions they will make, or have made, when moving to the cloud will be how they implement their cloud storage infrastructure.

Issues around ease of access, uptime, security, redundancy, SLA, maintenance, and total cost of ownership drive the decision to one of three options: build, buy, or rent.

This webinar will explore these issues and compare the three options, including an in-depth comparison of monthly Amazon pricing and OpenStack build-it-yourself costs, helping you optimize your plan for your business needs and growth.

Please join featured guest Henry Baltazar, Senior Analyst serving Infrastructure & Operations Professionals at Forrester Research, and Tom Leyden, Director of Object Storage Product Marketing at DataDirect Networks, on July 16th at 11:00 AM ET for this educational webinar to learn about the challenges and solution options for implementing storage in the cloud.

Participants are eligible to receive a cloud-enabled tablet computer.

The registration deadline for this webinar is Monday July 15th.

Report from CEO Marty Lafferty

The DCIA joins the entire cloud computing industry this week in celebrating a declaration of peace among former adversaries.

On June 24th, giant rivals Oracle and Microsoft announced plans to partner in the cloud, delivering compatible software and services over the Internet.

Microsoft's Windows Azure cloud-computing service will run Oracle's database software, Java programming tools, and application-connecting middleware.

This is particularly notable given Larry Ellison's and Bill Gates' track record of high-profile disputes coupled with their companies' fierce competition in certain categories.

"The cloud is the tipping point that made this happen," said Oracle Co-President Mark Hurd.

"This made a lot of sense for both of us."

"It's about time, and we're really glad to have the chance to work in this much newer and more constructive way with Oracle," added Microsoft CEO Steve Ballmer.

"The partnership has an immediate benefit to customers of every size and shape."

On June 25th, Oracle said it would also work closely with Salesforce.com, a pioneer of cloud-based CRM.

And on June 26th, Oracle announced a third partnership, with NetSuite, a leading provider of cloud-based enterprise software; and launched a cloud version of its database software.

These alliances have been driven by the rapidity with which business customers are migrating to cloud-delivered services, such as those offered by Amazon Web Services (AWS).

The collaboration should lure customers seeking more technical compatibility between Microsoft and Oracle products.

Offering the benefits of interoperability while addressing the fear of vendor lock-in, Oracle will ensure that its software runs well on Azure, and Microsoft will promote Oracle's database software to its customers.

Oracle also will make its version of the open-source Linux operating system available through Azure.

With Salesforce.com, Oracle will provide the technology on which Salesforce's platform and applications will run, and will integrate Salesforce's cloud-based applications with its own applications for finance and human-resources management. Salesforce will promote Oracle's products in these areas.

Salesforce is buying Oracle hardware and software to power its applications.

NetSuite will use the upcoming 12c database from Oracle to power its applications.

Oracle's moves signal a shift in focus to subscription-based online software rather than programs installed on customers' own machines, a transition forced in part by nimbler cloud-computing rivals, including Google and AWS.

Azure's main competitor in infrastructure-as-a-service (IaaS) is AWS, which rents computing power, storage, and database software via the Internet.

IaaS is the fastest-growing part of the cloud market, according to Gartner, which estimates that sales in the segment will surge by an average of 38 percent annually, from $6.17 billion last year to $30.6 billion by 2017.
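
For readers who want to sanity-check that projection, the compound arithmetic works out; here is a quick back-of-the-envelope sketch in Python, not part of Gartner's published methodology:

    # Quick arithmetic check: does ~38% average annual growth take IaaS revenue
    # from $6.17 billion in 2012 to roughly $30.6 billion by 2017?
    base = 6.17                            # billions of dollars, 2012
    rate = 0.38                            # Gartner's estimated average annual growth
    projection = base * (1 + rate) ** 5    # five years of growth: 2012 -> 2017
    print(round(projection, 1))            # ~30.9, close to the cited $30.6 billion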

"This deal gives Microsoft clear competitive advantages against two of its top rivals," said James Staten, an Analyst at Forrester Research.

It bolsters Microsoft's efforts to compete with VMware, the market leader in virtualization software, and "gives Windows Azure near-equal position against AWS in the cloud platform wars," he added.

The very nature of cloud computing, in which software applications span multiple companies' infrastructure software and websites, deserves much of the credit for this week's very positive advances. Share wisely, and take care.

Cloud Could Usher in $11 Billion in Savings

Excerpted from Healthcare IT News Report by Tom Sullivan

While 90 percent of healthcare CIOs view IT innovation as critical to success, the more surprising statistic is that fewer than one-fourth consider their existing infrastructure capable of supporting such technological advancement.

That's according to a report from MeriTalk, published Monday, examining the potential of IT-as-a-service within the healthcare realm by surveying 109 CHIME members.

The ITaaS model can be used to lower operational costs, shift from capital to operational expenditure, boost service levels and streamline application deployment, David Dimond, Healthcare Solutions Chief Strategist at EMC, which sponsored the report, explained in a statement.

And 47 percent of respondents said their existing IT portfolio has the potential to be delivered ITaaS-style, be that via private, hybrid, or public clouds.

Embracing the model could reduce IT costs by 9 percent, the report found, or $11 billion in savings across three years.

"As IT departments transform their operations to run IT as a service, their role will also transform, from exclusive providers of IT services to brokers of IT services," Dimond said. "An ITaaS framework enables providers to support the pace of change and organizational transformation to meet accountable care goals."

The migration toward a services approach has already begun, MeriTalk noted in the report, with respondents indicating 15 percent of their current IT portfolio is presently delivered as a service, and 87 percent are deploying virtualization technologies, while 73 percent said they are streamlining operations and 48 percent are centralizing IT management.

"Healthcare reform is forcing new efficiencies," said Steve O'Keeffe, founder of MeriTalk. "ITaaS results to date show enormous potential. This is a crucial step if we want to revitalize our US healthcare system."

Naturally, ITaaS and cloud computing bring new challenges. MeriTalk determined that 52 percent of respondents are having trouble finding and hiring skilled workers, and only 30 percent are using a structured process to measure IT return on investment.

Defense Department Seeks $450 Million Cloud Builder

Excerpted from InformationWeek Report by Charles Babcock

The contracting office of the Defense Information Systems Agency (DISA) is seeking a contractor to build a cloud for the entire US Defense Department. DISA put out a request for proposals June 24th on establishing $450 million in cloud computing services for the Department of Defense (DoD), expected to be operative a year from the date of the award.

If established successfully, the contract would be extended for additional years, possibly through 2017. The request doesn't specify public cloud, private cloud, or some combination, but it does call for a "commercial cloud service" able to meet requirements submitted from throughout the Department of Defense. That means at least part of a DISA cloud would likely be a private-cloud style of operation, with servers dedicated to a single user or set of users within the department.

The DISA request shows that the agency is positioned to become the cloud supplier to the entire, sprawling Defense Department, if it can find a successful implementer of the contract.

Coming on the heels of a disputed bid for $600 million in CIA cloud services, the two initiatives add up to over $1 billion in cloud services being sought in the same time period. The two contracts illustrate how necessary the federal government considers it to have access to large amounts of flexible cloud services. Cloud computing has attained status as a technology well-defined enough to justify putting hundreds of millions of federal dollars into it. In public cloud services, end users may provision the virtual servers they need and have them automatically assigned related resources, such as storage, networking, and database services.
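
As a rough illustration of that self-service model, here is a minimal sketch using the AWS boto3 SDK; the image ID, instance type, and region below are placeholders for the example and are not drawn from the DISA or CIA programs:

    # Minimal sketch of self-service provisioning in a public cloud using boto3.
    # The AMI ID, instance type, and region are placeholders, not values from
    # the DISA or CIA programs described in this article.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-12345678",    # placeholder machine image
        InstanceType="t2.micro",   # placeholder instance size
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])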

The CIA specified that it wanted a private cloud operated on its premises by a cloud service provider. Amazon Web Services got the nod from the CIA for the $600 million contract, but IBM, the low bidder, disputed that call. After a recent Government Accountability Office review, the contract might have to be rebid.

The call for a second major government cloud was first revealed by NextGov, a news site devoted to US government technology and innovation, and GigaOM, a San Francisco, CA technology reporting service. Amazon Web Services (AWS) offers a virtual private data center service in its public cloud infrastructure, such as its US East location in Ashburn, VA, or US West in Oregon. But it has not, up to this point, disclosed ever having constructed and then operated a private cloud on a customer's premises. If the CIA deal stands, it will be AWS's first such project in the public eye.

Leveraging Cloud Computing at Yelp

Excerpted from High Scalability Report by Jim Blomo

In Q1 2013, Yelp had 102 million unique visitors, including approximately 10 million unique mobile devices using the Yelp app on a monthly average basis. Yelpers have written more than 39 million rich, local reviews, making Yelp the leading local guide on everything from boutiques and mechanics to restaurants and dentists. With respect to data, one of the most distinctive things about Yelp is the variety: reviews, user profiles, business descriptions, menus, check-ins, food photos... the list goes on. We have many ways to deal with data, but today I'll focus on how we handle offline data processing and analytics.

In late 2009, Yelp investigated using Amazon's Elastic MapReduce (EMR) as an alternative to an in-house cluster built from spare computers. By mid 2010, we had moved production processing completely to EMR and turned off our Hadoop cluster. Today we run over 500 jobs a day, from integration tests to advertising metrics. We've learned a few lessons along the way that can hopefully benefit you as well.

One of EMR's biggest advantages is instant scalability: every job flow can be configured with as many instances as needed for the task. But the scalability does not come for free. The main drawbacks are that 1) spinning up a cluster can take 5-20 minutes, and 2) you are billed per hour or fraction thereof, which means that if your job finishes in 2 hours 10 minutes, you are charged for the full three hours.

This may not seem like a big deal, until you start running hundreds of jobs and the wasted time at the end of a job flow starts adding up. To decrease the amount of wasted billing hours, mrjob implements "job flow pooling." Instead of shutting down a job flow at the end of a job, mrjob keeps the flow alive in case another job wants to use it. If another job comes along that has similar cluster requirements, mrjob will reuse the job flow.

There are a few subtleties in implementing this: 1) what does it mean to have "similar cluster requirements", 2) how to avoid race conditions between multiple jobs, and 3) when is the cluster finally shut down?

Similar job flows are defined as meeting the following criteria: same Amazon Machine Image (AMI) version; same Hadoop version; same mrjob version; same bootstrap steps (bootstrap steps can set Hadoop or cluster options); same or greater RAM and Compute Units for each node type (e.g., Hadoop master vs. workers); and still accepting new jobs (job flows handle a maximum of 256 steps).
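
A simplified sketch of that compatibility test is shown below; the field names are hypothetical, and this is an illustration of the criteria above rather than mrjob's actual implementation:

    # Simplified illustration of the "similar cluster requirements" test.
    # Field names are hypothetical; mrjob's real implementation differs.
    def can_reuse(job, flow):
        return (
            job["ami_version"] == flow["ami_version"]
            and job["hadoop_version"] == flow["hadoop_version"]
            and job["mrjob_version"] == flow["mrjob_version"]
            and job["bootstrap_steps"] == flow["bootstrap_steps"]
            and all(
                flow["nodes"][role]["ram"] >= need["ram"]
                and flow["nodes"][role]["compute_units"] >= need["compute_units"]
                for role, need in job["nodes"].items()
            )
            and flow["steps_used"] < 256   # job flows handle at most 256 steps
        )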

Race conditions are avoided by using locking. Locking is implemented using S3 in regions that support consistency (US West by default). Jobs write to a specific S3 key with information about the job name and cluster type. Locks may time out in the case of failures, which allows other jobs or the job terminator to reclaim the job flow.
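
One way to approximate such a lock on S3 looks like the sketch below: write a claim, wait briefly, then confirm the claim was not overwritten by a competing job. This is a simplified illustration using today's boto3 SDK, not mrjob's exact code, and the settle delay is an assumption:

    # Simplified sketch of S3-based locking in the spirit described above;
    # not mrjob's exact implementation. Assumes the bucket is in a region
    # whose consistency model makes the read-back check meaningful.
    import time
    import boto3

    s3 = boto3.client("s3")

    def acquire_lock(bucket, key, job_name, settle_seconds=5):
        s3.put_object(Bucket=bucket, Key=key, Body=job_name.encode())
        time.sleep(settle_seconds)   # give competing writers a chance to land
        claim = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
        return claim == job_name     # we hold the lock only if our claim survived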

Job flow termination is handled by a cron job which runs every 5 minutes, checks for idle job flows that are about to hit their hourly charge, and terminates them. This ensures that the worst case, never sharing job flows, is no more costly than the default.
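
A sketch of what such a reaper could look like with today's boto3 EMR API appears below. It is an illustration, not Yelp's actual cron job: the idle-detection shortcut (treating clusters in the WAITING state as idle) and the five-minute threshold are assumptions.

    # Simplified sketch of an idle-cluster reaper, intended to run from cron
    # every few minutes. Illustration only, not Yelp's actual job.
    from datetime import datetime, timezone
    import boto3

    emr = boto3.client("emr")

    def reap_idle_clusters(threshold_minutes=5):
        # Treat WAITING clusters as idle (an assumption for this sketch).
        clusters = emr.list_clusters(ClusterStates=["WAITING"])["Clusters"]
        for cluster in clusters:
            created = cluster["Status"]["Timeline"]["CreationDateTime"]
            uptime_min = (datetime.now(timezone.utc) - created).total_seconds() / 60
            # Terminate only when the cluster is about to cross into another
            # billed hour, so sharing never costs more than the default.
            if uptime_min % 60 >= 60 - threshold_minutes:
                emr.terminate_job_flows(JobFlowIds=[cluster["Id"]])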

To further improve utilization of job flows, a job can wait a predetermined amount of time to find a job flow it can reuse. For example, development jobs will try to find a free job flow for 30 seconds before starting a new one. In production, jobs that don't have strict deadlines could set the wait time to several hours.

We estimate around a 10% cost savings from using job flow pooling. From a developer's perspective, this cost saving has come almost for free: by setting a few config settings, these changes went into effect without any action from developers. In fact, we saw a side benefit: iterative development of jobs was sped up significantly since subsequent runs of a modified MapReduce job could reuse a cluster and the cluster startup time was eliminated.

While AWS charges per machine hour by default, it offers a few other purchasing options that can reduce cost. Reserved instances are one of the more straightforward options: pay money upfront to receive a lower per-hour cost. When comparing AWS prices to buying servers, I encourage people to investigate this option: it is a fairer comparison to the capital costs and commitments of buying servers.

When is it cheaper overall to buy a reserved instance? It depends on how many hours an instance is used in a year. Comparing the cost of running a large standard instance under the different reserved-instance pricing options (light, medium, or heavy usage) makes the trade-off clear. You can pay the on-demand price, $0.26/hour, but after around 3,000 hours (roughly 4 months of continuous use) it becomes cheaper to cough up the $243 reserve price and pay only $0.17/hour for light usage of a reserved instance. Using more than 3,000 hours per year? Then it's time to investigate the heavier usage plans, with "heavy" carrying the largest upfront cost but the lowest per-hour rate.
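
Working the break-even point out directly, using the prices quoted above (which varied by instance type and region):

    # Break-even between on-demand and a light-usage reserved instance,
    # using the prices quoted above for a large standard instance.
    on_demand = 0.26     # $/hour, on-demand
    upfront = 243.0      # $ one-time fee, light-usage reservation
    reserved = 0.17      # $/hour once reserved
    break_even_hours = upfront / (on_demand - reserved)
    print(round(break_even_hours))   # ~2,700 hours, in the ballpark of the
                                     # "around 3,000 hours" cited above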

How many reserved instances should your company purchase? It depends on your usage. Rather than try to predict how much we will be using, we wrote a tool that analyzes past usage and recommends the purchasing plan that would have saved us the most money. The assumption is that our future usage will look similar to our past usage, and that the extra work and risk of forecasting were not worth the relatively small amount of money they would save. The tool is called EMRio, and we open-sourced it last year. It analyzes usage of EMR and recommends how many reserved instances to buy, as well as generating some pretty graphs.

It's important to note that reserved instance pricing is a billing construct. That is, you are not physically reserving a machine. At the end of the month, Amazon simply looks at how many instance hours you've used and applies the reserved instance rates to any instances that were running, up to the number of instances purchased.

Understand the trade-offs when moving to a cloud solution. For Yelp, the primary benefit of using AWS has been multiplying developer productivity by decreasing coordination costs and feature latency. Coordination costs come from requiring product teams to forecast and request resources from the systems team. Purchasing resources, be it racks of servers or network capacity, can take weeks and increases the latency of feature launches. Latency has its own associated costs of decreased morale (great developers love shipping products) and context switching between projects. The dollar cost of AWS may be higher than a fully utilized, customized in-house solution, but the idea is that you've bought much more productivity.

Focus on big wins: Incremental adoption of cloud technology is possible -- we're doing it! Yelp started with EMR because it was our biggest win. Offline processing has spiky load characteristics, often does not require coordination between teams, and makes developers much more productive by providing leverage to their experiments. To best use the cloud, focus on solving your worst bottlenecks one at a time.

Build on abstractions: Don't spin everyone up on all the details of a cloud service, just like you don't spin everyone up on the details of your data center. Remember your trade-offs: the goal is to make developers much more productive, not be buzzword compliant. Having a scalable, adaptive infrastructure doesn't matter if developers can't use it as easily as they would a local script. Our favorite abstraction is mrjob, which lets us write and run MapReduce jobs in Python. Running a job on a local machine vs an EMR cluster is a matter of changing two command line arguments.
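
A minimal mrjob job looks like the sketch below. The same script runs locally during development or on EMR in production; only the runner argument changes.

    # Minimal mrjob word count. The same file runs locally or on EMR;
    # only the runner arguments change on the command line.
    import re
    from mrjob.job import MRJob

    WORD_RE = re.compile(r"[\w']+")

    class MRWordCount(MRJob):
        def mapper(self, _, line):
            for word in WORD_RE.findall(line):
                yield word.lower(), 1

        def reducer(self, word, counts):
            yield word, sum(counts)

    if __name__ == "__main__":
        MRWordCount.run()

During development this is invoked with something like "python word_count.py -r local input.txt"; switching to "-r emr" sends the same job to Elastic MapReduce. The job flow pooling described earlier is switched on through mrjob's configuration rather than the job code (the relevant option has been renamed across mrjob versions).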

Establish policies and integration plans: Spinning up a single instance is easy, but when do you spin down a machine? Processing a day's worth of logs is straightforward, but how do you reliably transfer logs to S3? Have a plan for all of the supporting engineering that goes into making a system work: data integration, testing, backups, monitoring, and alerting. Yelp has policies around PII, separates production environments from development, and uses tools from the mrjob package to watch for run-away clusters.

Optimize after stability. There are many ways you can cut costs, but most of them require some complexity and future inflexibility. Make sure you have a working, abstracted solution before pursuing them so you can evaluate ROI. Yelp wrote EMRio after we had several months of data on EMR usage with mrjob. Trying to optimize before seeing how we actually used EMR would have been shooting in the dark.

Evaluating ROI: with some optimizations, cost evaluation is straightforward: how much would we have saved if we had bought reserved instances two months ago? Some are more difficult: what are the bottlenecks in the development process, and can a cloud solution remove them? Easy or difficult, though, it is important to evaluate them before going into action. Just like one should profile code before optimizing it, take a look at your usage before spending resources to cut costs.

As Yelp grows its service-oriented architecture, we are running into bottlenecks similar to those we encountered in offline batch processing: coordination of resources, testing ideas, and forecasting usage before launching new features.

Big Data and Analytics Are an Ideal Match

Excerpted from Baseline Report by Tony Kontzer

Long before the concept of big data took hold at iconic clothing retailer Guess, the company considered itself a sort of business intelligence innovator.

Armed with MicroStrategy's business intelligence (BI) application backed by an Oracle database, the Los Angeles, CA-based company was collecting an abundance of sales and inventory data and using it to generate informative reports. But only a handful of power users were taking advantage of the tool and driving most of the reporting.

The company needed to figure out a way to get the increasingly valuable data into the BI environment and then into the hands of the merchants who were deciding which products went to which stores and in what quantities.

"They're touchy-feely people who like products and visuals, and getting them to drill down into business intelligence is always a challenge," Michael Relich, Executive Vice President and CIO at Guess, says of the company's merchants. "We'd have analysts dump things into spreadsheets and then cut and paste pictures. It was crazy."

Adding to the challenge was the fact that the database couldn't process the mushrooming data volume fast enough to keep up with merchants' growing taste for answers. For instance, if merchants wanted to figure out what sizes had been selling at the company's 1,500-plus retail locations over the previous six months, the related queries of the BI system would run for hours before timing out.

The path to an answer started four years ago, when Relich and his team decided to look for a solution better suited to processing big data, eventually choosing HP's Vertica analytics platform. Relich was attracted by Vertica's use of massively parallel processing to split queries across multiple boxes, and its ability to scale inexpensively with commodity hardware. Even so, he wasn't prepared for performance that was as much as 100 times better than what the old database provided.

"When we did the first queries, they were done so fast, we thought they were broken," Relich recalls. "Queries that would run in minutes in Oracle run in seconds in Vertica."

The experience taught Guess what many companies are learning today, namely that big data and business intelligence/business analytics (BA) don't merely feed each other. When used in tandem, they take data analysis to a whole new level. Having a good BI/BA application in place makes it easier for users to tap big data, and having big data technologies in place can fuel the value of a BI/BA system.

"Augmenting your business intelligence practice with big data is a very intelligent thing to do," says Mike Matchett, Senior Analyst with the Taneja Group. "The two go together very strongly."

It's a practice that should become much more commonplace now that more affordable analytics services and appliances are joining open-source big data tools in the market, says Dana Gardner, Principal Analyst at Interarbor Solutions.

The one-two punch of big data and business intelligence enabled Guess to develop an iPad application that won an innovation award from The Data Warehousing Institute. By combining images from its e-commerce system with the flow of information coming from the data warehouse, Guess? is now able to deliver sales and inventory data to merchants fast and with a visual element to boot.

"It's the equivalent of about 18 different dashboards combined into one app," says Relich.

Now, instead of arriving at stores armed with binders filled with dated information, merchants have real-time insight into sales trends, store data and product availability at their fingertips. Although the app was designed specifically for merchants, its 150 regular users include district managers attracted by its ability to help them with store planning.

The impact of the app has been huge: Product markdowns have been reduced, allocations have improved, and merchants and district managers have a better idea of what, how much and when product is needed.

"Retail is all about having the right quantity of the right product in the right locations," says Relich. "We're able to identify store issues much quicker and respond to them."

That, according to Interarbor's Gardner, is the kind of result companies aspire to achieve with big data initiatives. "They want to get all the data possible in order to make a decision with the highest order of likelihood of being correct," he says.

At Ford Motor Company, big data-enabled analytics has been tied to $100 million in annual profits, a figure that led to a recent analytics award from the Institute for Operations Research and the Management Sciences. Part of that success is attributable to the efforts of Michael Cavaretta, Technical Leader for Predictive Analytics and Data Mining for the automaker's research and advanced engineering group, who is focused on using data to improve Dearborn, MI based Ford's internal business processes.

Cavaretta's team is using a combination of big data tools and business analytics applications in a number of interesting ways. They're creating data mashups of previously siloed information by linking business processes to warranty and marketing data and the like; crunching internal and external social media posts and figuring out how to link them with and inform business processes; and capturing huge amounts of data generated by vehicles - not only to refine vehicle design, but also to determine what additional types of data could be collected.

The latter of these, in particular, has huge implications as automakers add more sensors to vehicles so they can monitor performance, crank up customer service levels and improve future designs.

For instance, Ford's Fusion Energi plug-in hybrid generates and stores 25 gigabytes of data per hour on everything from engine temperature, speed and vehicle load to road conditions and general operating efficiency. That data flow can increase to as much as 4 terabytes per hour when running tests with special instruments—instruments that Cavaretta says could easily become standard equipment in a few years. Being able to capture, store and analyze that data, and then apply the insight to the right processes in real time will require finely tuned big data and analytics platforms.

Along those lines, Ford has been experimenting with a gamut of open-source and commercial technologies. Cavaretta says his team has worked with big data tools such as Hadoop, Hive, and Pig; traditional databases such as SQL Server, MySQL, Oracle, and Teradata; BI and BA software such as IBM's PASW Statistics and the R language; and specialized data mining tools like Weka, RapidMiner, and KNIME.

It's an assortment that flies in the face of predictions that big data was essentially a replacement for business intelligence.

"The initial impression a lot of people had was that this was going to be a whole new thing: Put in big data and business intelligence goes away," says Cavaretta. "I don't think that's the case. There are a lot of BI initiatives that would be greatly helped by big data."

That said, Taneja Group's Matchett believes one of the big data mistakes that companies can make is to jump the gun before a viable business intelligence or analytics solution is in place. "If I just invest in big data without an application for it, I'm not going to get much of a return," he says.

As Ford works to refine its big data-business intelligence/analytics intersection, there's little doubt in Cavaretta's mind that the combination of the two is powerful.

"The biggest thing about big data is that it changes the value of analytics," he says. "People have been focusing BI/BA on large data sets, but not at the level where big data needed to play.

"Now, new tools are giving them the ability to analyze data in new ways. Soon, technology will make things relatively easy, and what you'll be left with is analytics that can give the company value."

New Joyent Service Offers Analytics without Having To Move Data

Excerpted from Techcrunch Report by Alex Williams

Data is really hard to move. It becomes pretty much intractable as it increases in mass. Sure, it can be pulled out, but that takes time and bandwidth and incurs a host of costs typically associated with using a cloud service. To really get the most out of all that stored data, it's increasingly apparent that moving it is not a good idea.
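
A quick back-of-the-envelope calculation makes the point; the data size and link speed here are illustrative, not figures from Joyent:

    # Back-of-the-envelope illustration of why moving data is painful.
    # The 10 TB data set and 1 Gbps sustained link are illustrative numbers.
    data_bytes = 10 * 10**12            # 10 TB of stored objects
    link_bits_per_second = 10**9        # 1 Gbps sustained transfer rate
    transfer_hours = data_bytes * 8 / link_bits_per_second / 3600
    print(round(transfer_hours, 1))     # ~22.2 hours before analysis can even start,
                                        # ignoring protocol overhead and egress fees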

Joyent's new Manta Storage Service puts the compute together with the data in the cloud where it can be processed in one place. The compute is available directly on the object store, meaning that the data can be queried immediately without having to manage all the underlying infrastructure.

The new storage allows customers to analyze log data, financials and other data-intensive functions without moving data into separate clusters, which can take hours depending on the amount of data needed to be processed. Services like Amazon Web Services, by contrast, separate compute from storage, which can mean a lot of time and cost spent just moving data around.

The ramifications of the Manta service are considerable and make Joyent more relevant as an infrastructure and a services provider. In an interview last week, CTO and Co-Founder Jason Hoffman said Joyent will make a number of announcements in the coming months about new data services.

The Joyent news is another example of the disruption to the network-attached storage market. Joyent Manta Storage means a customer can spin up instances without waiting. This means that the company can charge by the second as opposed to by the minute or hour. It also means data gets processed in one place without the need to spin up any number of servers to keep a service running.

That's the big trick here, and it took Joyent four years to research, develop and make it happen. Now we'll have to see how much traction Joyent can get in a market that includes AWS, Microsoft, IBM, and a host of other competitors.

Cloud Computing Can Reduce Greenhouse Gas Emissions 95%

Excerpted from Environmental Leader Report

As cloud computing ramps up, it will reduce greenhouse gas (GHG) emissions by 95 percent, leading to savings of more than $2.2 billion, according to a study sponsored by Microsoft Europe and the Global e-Sustainability Initiative (GeSI). Expanding cloud usage beyond the basics to large-scale information and communication technology (ICT) will scale those savings up to $1.2 trillion, GeSI says.

Researchers from Harvard University, Imperial College and Reading University explored cloud computing's impact on lowering GHG in Europe, Brazil, China, Canada, and Indonesia. They claim that energy usage will drop by 11.2 TWh annually when 80 percent of public and private organizations in those regions opt to provide cloud-based email, customer relationship management (CRM) and groupware solutions to their staff, going beyond current levels of adoption.

To put it in perspective, this equals 75 percent of the energy consumed by the capital region of Brussels or 25 percent of the energy consumed by London. It is equivalent to abating 4.5 million metric tons of CO2 emissions annually, the study says. And 60 percent of these potential savings relate to small firms.

Cloud infrastructure outperforms on-site services and power-hungry data centers, according to lead researcher Peter Thomond. He says that for every metric ton of GHG emissions generated by a cloud vendor that provides email, CRM and groupware, 20 metric tons are reduced for its clients.

The study examines both the energy savings and GHG abatement potential of cloud computing in 11 countries: Brazil, Canada, China, Czech Republic, France, Germany, Indonesia, Poland, Portugal, Sweden, and the UK.

GeSI, which is a partnership of the ICT sector, says that cloud-based email and CRM are only the tip of the iceberg, and that large-scale broadband and information and communication technology could deliver a 16.5 percent reduction in GHGs and up to $1.9 trillion in savings by 2020.

The new Microsoft/GeSI study says vendors and governments have created hurdles to wider adoption of cloud-based services. Thomond says governments can influence wider adoption of cloud computing if they use it for their own services, although the ultimate responsibility lies with cloud vendors. More evidence of the cloud's ability to reduce GHGs and acceptance of the challenges that come with behavioral changes when shifting to the cloud will help convince the public, he says.

Last week, GeSI partnered with EcoVadis to launch an online platform that monitors sustainability practices in ICT sector supply chains. The Electronic Tool for Accountable Supply Chains (E-TASC) supply chain monitoring software from EcoVadis will allow the ICT industry to access data and scorecards covering 21 environmental and social criteria. The data is adapted to the specific regulations and CSR issues covering more than 150 purchasing categories and suppliers in more than 95 countries, GeSI says.

ATV Azerbaijan Selects Octoshape Infinite HD-M

Octoshape, an industry leader in cloud-based streaming technology, has been chosen by ATV Azerbaijan to provide broadband TV video distribution services to ATV AZ customers globally. The services will be powered by the Octoshape Infinite HD-M suite of multicast technologies, which enables ATV AZ to provide the highest video quality to their end users regardless of their location in the world.

The partnership leverages Octoshape's Infinite HD-M federated multicast platform to expand its service offerings into Azerbaijan while ensuring that ATV AZ content can be viewed throughout the world. Octoshape's advanced video distribution technology provides high definition Internet video regardless of the geographic location, connectivity or network conditions of the viewer. 

"Our goal at ATV AZ is to provide the absolute best quality service for our users anywhere, anytime and on any device they desire," said Fikret Azimov, IT Manager at ATV AZ. "We are thrilled to say that we have achieved that goal through the Erstream service powered by Octoshape. On average, our video start up time and overall content viewing time have both significantly improved, leading to an overall increase in advertising revenues."

The Infinite HD-M solution enables TV quality, TV scale and TV economics for broadband TV over unmanaged IP networks by utilizing existing infrastructure in the telco and operator network. Infinite HD-M enables large volumes of broadband TV to be delivered efficiently over last mile networks without requiring the vast infrastructure upgrades necessary with traditional video delivery platforms. 

"ATV AZ is another great broadcaster requiring TV quality over the Internet," said Michael Koehn Milland, CEO of Octoshape. "Extending its reach globally, redefining quality expectations, and doing so all while maintaining a predictable cost model, are the core differentiators to our value proposition."

Telco and cable operators that are part of the Infinite HD-M federated network receive the signals via native IP Multicast in a way that allows them to easily manage large volumes of traffic without upgrading their Internet capacity. 

Octoshape's federated linear broadband TV ecosystem will continue to expand globally in carefully planned phases, adding content contribution partners, Tier 1 broadband providers, connected television manufacturers and conditional access providers.

European Commission Calls for Expert Help on Cloud Contracts

Excerpted from Business Cloud News Report

The European Commission last week called for applications for cloud experts to help it develop clearer contract terms for cloud computing services.

The Commission hopes that cloud computing experts will help identify concerns of customers and companies that are reluctant to use cloud services because contracts are either unfairly balanced or strongly favor cloud service providers.

"Contract Law is an important part of our cloud computing strategy. Making full use of the cloud could deliver 2.5 million extra jobs in Europe, and add around one per cent a year to EU GDP by 2020," said Vice President Viviane Reding, the EU's Justice Commissioner. "Uncertainty around cloud computing contracts may hinder cross-border trade. As this is a very complex area, we are asking experts for advice before we decide on the next steps."

Experts will consist of cloud service providers, consumers, SMBs, academics and legal professionals, and their work will likely contribute to the creation of best practice for cloud computing, including terms and conditions for cloud contracts. It is hoped that the new contract terms will serve as a model for the contractual relationship between cloud service providers and consumers or SMBs.

The Commission also made reference to privacy in Thursday's announcement, saying that a future group would be created to work specifically on the personal data protection aspects relevant to cloud computing contracts. Privacy and data protection have lately received significant attention due to recent leaks detailing PRISM, a US-led intelligence program involving widespread harvesting of data from leading cloud service providers, and to the difficult negotiations surrounding pan-EU data protection legislation currently making its way through the Commission.

Pixies Launch New Single on BitTorrent

Excerpted from Music Ally Report

New music from the reformed Pixies previously amounted to merely a one-off iTunes-only single ('Bam Thwok') in 2004, but days after the announcement that Kim Deal had left the band, they have released the brand new 'Bagboy' track as a BitTorrent bundle.

It's a two-step redemption process. Anyone going to the BitTorrent/Pixies link page — http://bundles.bittorrent.com/pixies/ — will automatically get a live version of 'Where Is My Mind?' from the band's performance at the 2004 Coachella festival.

In order to get the new song, however, there is an email-harvesting angle where fans have to enter their email address — but as an incentive for doing so they also get the band's full 20-song Coachella show as a set of MP3s.

BitTorrent is increasingly keen to position itself as being on the side of artists and as an important new marketing and promotional channel for those who don't see it as synonymous with piracy. Alongside working with acts like DJ Shadow, Pretty Lights, and Alex Day, earlier this month Public Enemy used the platform to release a new track, as well as its assorted stems, for a fan-driven remix competition.

And earlier this week, Matt Mason from the company blogged in response to accusations of BitTorrent being a hotbed for piracy, especially for episodes of Game Of Thrones.

"We don't host infringing content," he wrote. "We don't point to it. It's literally impossible to 'illegally download something on BitTorrent'. To pirate stuff, you need more than a protocol. You need search, a pirate content site, and a content manager. We offer none of those things. If you're using BitTorrent for piracy, you're doing it wrong."

This is unlikely to appease the label community, many of whom see it as repeatedly smashing the windows of their candy shop. Artists, however, are increasingly drawn to it.

Maybe the involvement of Public Enemy and the Pixies is BitTorrent trying to appeal to Music Fans Of A Certain Age (namely those who were teens in the late-Eighties when both bands were at their creative peak). But both acts have been, in their own way, testing the waters with digital music and a DIY culture.

Public Enemy's Chuck D has long been a fan of the disruptive power of the web (although the band's attempt to crowdfund an album on SellaBand in 2009 amounted to naught). Equally, the Pixies have not been shy of experimentation here. 

On their 2004 reunion tour, they were one of the first acts to partner with DiscLive to sell CDs and downloads of their shows as soon as they walked off stage. Also in 2010, the band played two shows at The Troxy in east London and sold all the tickets direct to fans using the Topspin platform.

Cloud Computing and Disaster Readiness

Excerpted from BDaily Report by Terry Philpott

The advance of cloud computing has been swift and steady. Businesses all over the globe are switching to clouds to energize their revenue, cut costs and enable more cohesive information technology within their organizations. From public clouds accessible to anyone with a laptop, to private and bespoke administrations serving businesses of all shapes and sizes, the future looks distinctly cloudy - in a good way.

There are so many ways that cloud computing can streamline a business, yet one real advantage often goes overlooked until it's way too late. We all know how crucial our systems are to our organizations' survival and how much could potentially be lost in the event of a fire, flood, or other natural disaster. We also wouldn't dare think about how to manage a total systems failure or an unanticipated issue that could bring a whole business to a halt in a matter of minutes.

It would be difficult to think of many organizations that could stay afloat for long with crucial information and technology unavailable, especially small to medium enterprises (SMEs). Given the increasing impact of natural and man-made disasters on the growing store of data required to manage a modern business effectively, it makes sense to think about how to combat this.

The beauty of cloud computing is that it enhances 'disaster readiness' for you. It does this in a number of ways. Firstly, access to your files in the event of a disaster of any kind is usually much quicker than if you needed to retrieve them some other way.

In some cases you simply would not have stored these anywhere but the potential disaster zone in physically accessible formats. Cloud computing solves this problem by allowing you to get at backed up information quickly and easily.

In addition, much of the work that goes into managing a server is alleviated in a virtual cloud, such as regular updates and ensuring compatibility between programs and files. This leaves you to focus on all of the other aspects of disaster recovery if you end up needing to. 

Recovering your data in the event of a disaster will be cheaper than ever before, as you simply tap into your cloud over the Internet by accessing your hosted domain.

Everything is available twenty four hours a day for you to manage. You could even set up new documents and communication methods through the shared access to your cloud in the event of a disaster, enabling your employees to tap in too and stay productive wherever possible.

The global results of the 2012 Disaster Preparedness Survey demonstrated that cloud computing is being embraced. Public and private services up and down the country are preparing contingency plans involving cloud computing and virtual servers. These hold information, contacts, documents, applications, and files which employees can access using alternative devices from home or from clients' offices. Stay ahead of the game when it comes to your business by thinking about cloud computing.

HP Delivers a Common Architecture for Converged Cloud

Excerpted from Biztech2 Report

HP has announced new innovations to its Converged Cloud portfolio, delivering improved agility, greater innovation and reduced cost for hybrid cloud environments. HP has now delivered the next phase of a common OpenStack-based architecture for HP's private, managed and public cloud offerings.

In addition, the company announced new software and services to accelerate time to market for private cloud implementations, and unveiled new capabilities in HP's managed and public cloud solutions. "In order for enterprises to deliver the new style of IT, they need control over their data, as well as flexibility in service levels and delivery models for their workloads," said V Ramachandran, Country Manager, Converged Infrastructure & Cloud Solutions, Cloud Systems, HP India.

"Only HP has the portfolio and expertise that combines the innovative power of open systems with enterprise-grade manageability and security. The result for customers is a cloud that enterprises can rely on."

New research commissioned on behalf of HP reveals that by 2016, it is expected to be a hybrid world with 77 percent of enterprise IT delivered across private, managed and public clouds in Asia Pacific.

Further, almost half of the respondents believe that open standards are important in the emergence of cloud computing. This indicates that customers want an open, hybrid cloud solution to generate new opportunities, drive competitive differentiation and lower the cost of operations.

HP Cloud OS is an open and extensible cloud technology platform that leverages the power of OpenStack technology to enable workload portability, simplified installation and enhanced life cycle management across hybrid clouds. HP CloudSystem, HP's private cloud offering, already embeds HP Cloud operating system (OS) technology, offering customers greater choice in deployment options.

To help customers get started quickly with an initial private cloud deployment, HP also is offering the HP CloudSystem Enterprise Starter Suite. This solution provides a rapid, cost-effective way for organizations to get started with rich application cloud services, and reduces up-front costs by up to 20 percent with the bundled offering.

HP Moonshot servers also will be offered with HP Cloud OS, providing simplified provisioning and management for specific cloud workloads such as dedicated hosting and large-scale websites.

HP Cloud Services, HP's public cloud offering, also leverages HP Cloud OS technology, providing one of the leading public clouds based on OpenStack technology.

Further, for customers to evaluate and understand the benefits of an OpenStack-based architecture for their cloud needs, HP is offering HP Cloud OS Sandbox for experimentation at no cost.

Fifty-two percent of respondents in the HP-commissioned survey said that finding the right strategic partner to get them started was a barrier to cloud adoption.

To help guide customers on their cloud journey, HP introduced the HP Converged Cloud Professional Services Suite. Building on cloud service offerings including application transformation, cloud journey workshops and support, HP announced new streamlined services designed to help customers take advantage of the cloud.

These new offerings include HP Converged Cloud Support, HP Cloud Design Service, HP Cloud-ready Networking Services, HP Proactive Care for CloudSystem, and HP Cloud Security Risk and Controls Advisory Services, as well as enhancements to HP Applications Transformation to Cloud Services.

Coming Events of Interest

Cloud Computing Summit - July 16th-17th in Bradenton, South Africa. Advance your awareness of the latest trends and innovations from the world of cloud computing. This year's ITWeb-sponsored event will focus on key advances relating to the infrastructure, operations, and available services through the global network.

NordiCloud 2013 - September 1st-3rd in Oslo, Norway. The Nordic Symposium on Cloud Computing & Internet Technologies (NordiCloud) aims at providing an industrial and scientific forum for enhancing collaboration between industry and academic communities from Nordic and Baltic countries in the area of Cloud Computing and Internet Technologies.

P2P 2013: IEEE International Conference on Peer-to-Peer Computing - September 9th-11th in Trento, Italy. The IEEE P2P Conference is a forum to present and discuss all aspects of mostly decentralized, large-scale distributed systems and applications. This forum furthers the state-of-the-art in the design and analysis of large-scale distributed applications and systems.

CLOUD COMPUTING WEST 2013 - October 27th-29th in Las Vegas, NV. Two major conference tracks will zero in on the latest advances in applying cloud-based solutions to all aspects of high-value entertainment content production, storage, and delivery; and the impact of mobile cloud computing and Big Data analytics in this space.

CCISA 2013 - February 12th-14th in Turin, Italy. The second international special session on Cloud Computing and Infrastructure as a Service (IaaS) and its Applications, within the 22nd Euromicro International Conference on Parallel, Distributed and Network-Based Processing.

Copyright 2008 Distributed Computing Industry Association