May 14, 2012
Volume XXXIX, Issue 6
Plan Now for CLOUD COMPUTING WEST 2012
The DCIA and CCA proudly present three co-located conferences that will zero in on the latest advances in applying cloud-based solutions to all aspects of high-value entertainment content production, storage, and delivery; the impact of cloud services on broadband network management and economics; and evaluating and investing in cloud-computing services providers.
Registration to attend CLOUD COMPUTING WEST 2012, taking place November 8th-9th in Santa Monica, CA, enables delegates to participate in any session in any of the three conferences being presented: ENTERTAINMENT CONTENT DELIVERY, NETWORK INFRASTRUCTURE, and INVESTING IN THE CLOUD.
Roundtable sessions are scheduled to begin and end at the same time, so it is also easy to move from session to session offered in any break-out track.
CLOUD COMPUTING WEST 2012 features one common exhibit hall, and all networking functions (e.g., the luncheon, refreshment breaks, and evening cocktail reception) are open to all attendees at no additional cost.
Content Gets More Personal with the Cloud
Excerpted from WSO2 Report by Chris Haddad
With the availability of cable and satellite content delivery over the last few decades, we've become accustomed to choosing from hundreds of channels and pay-per-view options. However, the emergence of the cloud for content delivery has led to an explosion in the volume, forms, and sources of broadcast content available, which will fundamentally change the dynamics of the industry.
I had an opportunity to discuss this evolution as part of the "Years Ahead for Cloud Computing" panel at the CLOUD COMPUTING CONFERENCE held in conjunction with NAB 2012. Here are a few of my observations from that session.
A major catalyst is the tremendous amount of self-generated content created by users and end-users. Increasing numbers of people are turning to YouTube rather than traditional TV channels, and it's not a stretch to imagine that soon they will be getting their news from Twitter rather than the 6pm news show. As a result, we will see broadcasting start to move from a push medium to an on-demand, pull-through model.
With some 4 million content creators today and growing, there is an opportunity to tap into that creative base with micro monetization. In parallel, we'll see a move toward more tiered subscriptions for the consumption of media.
For example, when I put together a presentation for a university computer club, I was able to find content from the Internet, but the licensing models didn't fit my needs. One person wanted $220 per image because the model was to put it up on a website for a year with global distribution. I didn't want to spend $2,200 for 10 images to use in a one-hour presentation. Clearly, we need to rethink the rules for content monetization.
Further complicating monetization is the licensing model as we see a significant convergence of multiple devices that can, for example, access a video library on the cloud. I may access that library from my smart phone, my TV screen, my laptop, or my tablet. As content delivery is mixed across delivery options, we need to ensure that content licenses support this model.
While we're rethinking monetization and licensing, we also need a better model for marketing to content consumers. Today, analytics drive customized promotions to the sites we visit, but those analytics are based on a very rudimentary and incomplete understanding of our interests. If we believe in unicasting, why not ask consumers what they want? If I want to buy a new car, show me car advertisements. Then, when I buy a car, stop! Let's just find a mechanism that asks people what products or services they are really interested in and then tap into the deep inventory of advertisers that want to target those individuals.
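Haddad's "just ask" model is simple enough to sketch in code. The Python fragment below is a minimal illustration only - every name in it (Consumer, Ad, match_ads) is invented for this example and drawn from no real ad platform - but it captures the two rules he proposes: show ads only in categories the consumer has explicitly declared, and stop once the purchase is made.

```python
from dataclasses import dataclass, field

@dataclass
class Consumer:
    # Interests the consumer has explicitly declared, not inferred.
    interests: set = field(default_factory=set)
    # Categories where a purchase was completed; stop advertising these.
    purchased: set = field(default_factory=set)

@dataclass
class Ad:
    advertiser: str
    category: str

def match_ads(consumer, inventory):
    """Return ads in categories the consumer asked for and hasn't bought yet."""
    wanted = consumer.interests - consumer.purchased
    return [ad for ad in inventory if ad.category in wanted]

# A consumer shopping for a car sees car ads only until the purchase is made.
me = Consumer(interests={"car"})
inventory = [Ad("AutoCo", "car"), Ad("ShoeCo", "shoes")]
print([a.advertiser for a in match_ads(me, inventory)])  # ['AutoCo']
me.purchased.add("car")
print(match_ads(me, inventory))  # [] -- "when I buy a car, stop!"
```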
Finally, we need to look at the power of the edge device and how we can use edge devices for capturing, caching, storing, and transforming content. This brings us into the world of an augmented reality in which, for instance, I can snap a picture of my window and then superimpose a new blind or drape onto it to see if it meets my needs.
Moreover, we are becoming hyper-connected - simultaneously accessing the computer, TV, gaming console, and cell-phone. There is a significant opportunity for the broadcast industry to harness the cloud in order to tap into that convergence and start blending game interactivity, entertainment, and real-time news. Imagine a stockbroker being able to simultaneously view a financial deal stream and the news of the day. Using the cloud to deliver a richer, converged, and augmented experience would be extremely powerful.
DCINFO Editor's Note: Chris Haddad, WSO2 Vice President of Technology Evangelism, works closely with developers, architects, and C-level executives to increase WSO2 technology adoption and maximize customer value. Previously, Chris led research teams at Gartner and the Burton Group, advising Fortune 500 enterprises and technology infrastructure vendors on adoption strategies, architecture, product selection, governance, and organizational alignment.
Report from CEO Marty Lafferty
Continuing our focus this week on issues surrounding CISPA - a measure more heinous than SOPA that has just passed the US House of Representatives - DCINFO readers now need to alert US Senators to their concerns.
The Senate's version of the cybersecurity bill strays even further than the House version from what such a measure should actually cover: protecting critical American infrastructure against attacks in the digital realm.
This is not the bill to attempt to address a host of other Internet-related items that various Senators are seeking to include based on differing political considerations.
That will only make matters worse, and even the special interests pushing for some of the expanded provisions stand to be hurt by unintended consequences of such amendments.
Senator John McCain's (R-AZ) remark this week, that "unelected Digital Homeland Security bureaucrats could divert resources from actual cybersecurity to compliance with government mandates," should raise a major red flag to all observers of this process.
Since this bill's purported raison d'etre is to protect security, shouldn't its "mandatory" provisions be aimed at accomplishing precisely that?
Instead, the way the discussion is heading now, the bill that emerges will be more likely to do nothing to protect vital American interests from cyberattacks, and actually harm privacy - both individual and institutional - as well as add operating expenses to US companies.
And as Preventing Counterfeits (excerpted below) suggests, private sector solutions will leave public sector attempts to legislate remedies here far behind.
Meanwhile, in another example of how challenging this space has become for lawmakers, the Password Protection Act (PPA) was introduced by Democrats this week in both the House and Senate. It illustrates the converse problem to CISPA's growing loss of focus: "techno-legislative-micro-management."
PPA's stated intent, echoing a Maryland law, is to prevent employers from demanding access to Facebook passwords of employees and job applicants.
Congressmen Ed Perlmutter (D-CO) and Martin Heinrich (D-NM) introduced in the House an identical version of the measure introduced in the Senate by Senator Richard Blumenthal (D-CT), who supported a petition on this subject, which failed to achieve its goal of 60,000 signatures, suggesting that citizens may not want this "help."
PPA includes provisions intended to prohibit employers from requiring private social network and e-mail account access as a condition of employment and from discriminating against individuals who refuse to provide it. Exceptions include employees with access to national security information and, for inexplicable reasons, students.
Senator Blumenthal's claim that, "This legislation, which I am proud to introduce, ensures that employees and job seekers are free from these invasive and intrusive practices," is another indication that legislators are long on seeking politically advantageous credit for their efforts, but in the Internet law arena are short on delivering substantive value.
Moreover, Blumenthal's assertions that employers requiring such information are perpetrating an "unreasonable and intolerable invasion of privacy" and that "no American should have to provide their confidential personal passwords as a condition of employment," strike us as demagogic hyperbole.
The bill itself represents an unwarranted "intrusion" by the federal government into the internal workings of private sector organizations. There are numerous instances where the mission or culture of a particular institution wholly justifies heightened transparency and a deepened level of integrity in employer-employee relations, and this should not be prohibited by law.
Circumstances of these relationships vary tremendously; and our point is that, in a free society, neither employers nor employees should have this specific aspect of their association dictated by the federal government.
We tend to agree with Senator Patrick Toomey (R-PA), who said at a related Senate Commerce hearing on privacy protections this week, "It's premature to begin discussing specific legislative fixes when we don't fully know whether a problem exists."
Senator Toomey was speaking against the Federal Trade Commission's (FTC) bid to expand its powers to interfere with evolving privacy practices of Internet-based companies like Facebook and Google, absent regulatory authorization.
DCINFO readers will recall that the White House last Fall put forward what it called a Privacy Bill of Rights to provide basic online protection guidelines.
Those rights were presented as voluntary codes of conduct, and the DCIA applauded them. Industry in response launched a "Do Not Track" initiative along the lines of the "Do Not Call" list, which even FTC Chairman Jon Leibowitz acknowledged is working.
The seven basic principles included Individual Control, Transparency, Respect for Context (data used consistent with the context in which consumers provided it), Security, Access and Accuracy, Focused Collection ("reasonable limits"), and Accountability (appropriate safeguards for data collection).
These are sufficiently broad not to be overly prescriptive, and companies can readily determine those that apply to them and those that don't. A firm which voluntarily complies but then violates its commitment will be subject to FTC sanction for false and deceptive practices.
The DCIA believes that self-regulation will go a long way here because, among other reasons, social media users are more vocal with their complaints.
"The right to express one's views, practice one's faith, peacefully assemble with others to pursue political or social change - these are all rights to which all human beings are entitled, whether they choose to exercise them in a city square or an Internet chat room," the US Secretary of State, Hillary Rodham Clinton, said at the end of 2011 at an Internet conference in the Netherlands.
"And just as we have worked together since the last century to secure these rights in the material world, we must work together in this century to secure them in cyberspace." Share wisely, and take care.
Cloud Tipping Point Is at Hand
Excerpted from Wall Street Journal Report by Steve Rosenbush
The economics of cloud computing are driving down the cost structure of business so far and so fast that it's scary, Google CIO Ben Fried says. "It deeply disturbed me. In 2006 and 2007, consumer companies were forcing efficiencies on a scale never seen before," Fried said Thursday during remarks at the Bloomberg Link Enterprise Technology Summit in New York.
At the time, Fried was working in the technology group at investment bank Morgan Stanley, where he was a managing director of application infrastructure, in charge of software development, electronic commerce, and knowledge worker productivity. In 2008, he left the bank and headed to Google, which was at the heart of the disruption that was emanating from the consumer market and beginning to spread through the business world.
Workers, accustomed to using free and simple tools such as Google Apps, Skype, Flickr, and iTunes for their personal affairs, now wanted to use those cloud-based software tools at work. And CIOs and other technology executives were beginning to let them, and to experiment themselves with those services.
At the same time, enterprise-focused cloud services such as Amazon Web Services were making it possible for start-ups and other companies to run their businesses at much lower cost.
Now, just four years later, cloud-based computing is fast approaching a tipping point that will make it the standard for IT, says Fried. "Here's where I think it is going. The macroeconomic tides - you can't fight them forever - will force companies to adapt. We're probably close to that point now," he says.
The economics of the cloud do more than lower costs. They change the structure of businesses and markets, especially at Google itself, according to Fried. He said Google can afford to offer free, ad-supported services to millions of people because it has taken costs out of its own business.
That requires owning many of the elements of its supply chain, because few other companies have the scale to run them as cheaply as it can. Hence, Google builds its own data centers, locating them to take advantage of the lowest-cost source of power - including one right near a fully depreciated hydroelectric dam, Fried said.
The economics of the cloud have led, he said, "to a level of vertical integration never seen before."
The cloud is also forcing companies, and CIOs in particular, to reassess what their core business truly is, and where they want to invest their capital. The big difference between an enterprise product and a consumer product is that consumer products aren't customized.
"We don't offer a special version of Gmail for financial services firms," Fried said. "You have to give up that control with consumer technologies. As a CIO, you have to figure out what is really important to you. Do you really want to worry about customizing e-mail and word processing? You give up a little, but you can get back a lot."
The ripple effects of cloud computing are far from over. Until now, cloud computing has been mostly about the distribution of applications. "The next wave" of cloud computing will enable the sharing of the environment to run those applications, Fried said. "You will be able to take advantage of what we had to build in order to create those applications," Fried said.
New Report Provides Cloud Computing Forecast from Top Tech Executives
The Software & Information Industry Association (SIIA) this week released Vision from the Top, its second annual report on the future of the software and services industry. The publication features insight and forecasts from 49 CEOs and high level executives, who discuss how the cloud will continue to revolutionize the way the world does business.
The report was released during the All about the Cloud ISV conference on cloud computing.
"Technological innovation is the engine driving growth in the software industry and our economy," said Rhianna Collier, Vice President of SIIA's Software Division. "With the economy in recovery and IT budgets expanding, we asked 49 top executives for their view of where the software industry stands today-and where it's headed over the next decade."
Among the questions answered by executives in the report are: In 2020, looking back on this decade, what will be the single most impactful technical advancement driving business growth? Given that the economic outlook in many parts of the world seems uncertain: What's your philosophy on maintaining a focus on innovation? What do you do as CEO to keep your organization focused on customers and value? Does mobile fall into one of your top 5 priorities for 2012? If so, how will you be attacking it?
"The forecasts address challenges brought by cloud computing - including the changing roles of IT, sales and the customer - as well as growth areas," continued Collier. "It's clear that the value of the cloud will continue to skyrocket - fueled by investment in mobile, social media, and hybrid cloud strategies as well as the next wave of technological innovation."
Vision from the Top features insights from 49 leading executives including: Judson Althoff, Senior Vice President, Worldwide A&C and Embedded Sales, Oracle; Jim Corgel, General Manager, ISV, Academic, Entrepreneurs and Developer Relations, IBM; Rick Nucci, General Manager, Dell Boomi; Audrey Spangenberg, CEO, FPX; Treb Ryan, CEO, OpSource; and Lonnie Wills, CEO, CloudTrigger.
Steven Perkins, National Technology Industry Practice Leader for Grant Thornton LLP, provided an executive summary of the report. In it, he commented, "Should we expect to see greater adoption of cloud-based services? The answer is a resounding yes. Other predictions focused on the enhancement of recent innovations and capabilities, and on the explosion of information both structured and unstructured. For example, some executives view business analytics as an area for increased innovation and business value. There is also a suggestion that identity management will become even more critical in the future."
Preventing Counterfeits with an iPhone and Digital DNA
Excerpted from GigaOM Report by Derrick Harris
If you're looking for a foolproof way to secure your supply chain and prevent the spread of counterfeit goods, Applied DNA Sciences (ADNAS) thinks it has created just the tool. Its new product, called digitalDNA, creates unique plant-based DNA signatures that are encrypted onto QR codes readable by an iPhone app.
When phones scan the code, data is analyzed by a cloud database to identify possible theft or counterfeiting.
It's mobile meets cloud computing meets big data, with genomics as the glue holding them all together.
In order to understand digitalDNA, though, you first must be familiar with ADNAS's core technology. Its flagship product, called SigNature DNA, takes specially created, double-stranded DNA signatures derived from plant DNA and combines them in a solution made out of ink or some other material. That solution can be applied directly to a product - anything from textiles to microchips to documents - or applied to an invisible bar code that can be read by scanners capable of detecting the DNA strand. Marks can also be swabbed and sent to ADNAS for verification.
Companies using SigNature can verify the authenticity of shipments by scanning the products they receive. If the products aren't legit, businesses don't accept them and, presumably, an investigation ensues. Presently, Miller said, this process is unreproducible, meaning would-be counterfeiters can't one-up ADNAS customers by replicating their authentication method as well as the product itself. In January, Wired published an article about how the US Department of Defense is using SigNature to detect bogus microchips in military equipment.
Aside from simply stopping counterfeiting activity, though, SigNature is also used to prosecute criminals because the DNA markers are all but irrefutable evidence (the false positive rate is 1 in a trillion) that someone is in possession of stolen goods. In the United Kingdom, Miller told me, more than a quarter of all cash in banks is marked using SigNature in order to catch criminals who steal it from transporters such as ADNAS customer Loomis. ADNAS also sells products that pre-mark certain items in order to transfer DNA to thieves, or that spray fleeing intruders with DNA.
Another company, called DNA Technologies, claims to use a similar method for anti-counterfeiting and actually tagged the footballs to be used in Super Bowl XLVI. Unlike RFID tags, DNA marks can be placed even on small individual objects, or incorporated into them in the case of clothing, for example, and cannot be easily removed.
The new digitalDNA product takes SigNature to the next level by tying it to cloud computing, big data, and mobile phones. The unique DNA signature still exists on the physical QR code applied to packages, but it has also been digitally encrypted onto a 2-dimensional QR code in a way ADNAS claims is not copyable. As packages move along the supply chain, employees equipped with iPhones and the ADNAS app can scan products to chart their progress and verify authenticity. But that's just the beginning.
With every scan, information is also sent to a cloud-based database where it's stored and analyzed by a set of algorithms specially designed to identify patterns associated with counterfeiting or theft. If something pops up, companies can be proactive in trying to determine the problem or take measures to prevent a crime. And even if there isn't nefarious activity taking place, digitalDNA users can still use the geospatial data they're generating to get a better handle on their supply-chain dynamics.
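Neither ADNAS nor GigaOM details the algorithms involved, but the basic pattern - every scan event lands in a central store, and simple rules flag physically implausible movement - can be sketched. The Python below is a purely illustrative toy with invented names; a real system would use far richer models.

```python
import math

# unit_id -> list of (timestamp_hours, lat, lon) scan events, as they
# might be reported by the iPhone app to the cloud store.
scan_log = {}

def record_scan(unit_id, t_hours, lat, lon):
    scan_log.setdefault(unit_id, []).append((t_hours, lat, lon))

def implied_speed_kmh(a, b):
    """Great-circle (haversine) speed implied by two consecutive scans."""
    (t1, lat1, lon1), (t2, lat2, lon2) = a, b
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist_km = 2 * 6371.0 * math.asin(math.sqrt(h))
    return dist_km / max(t2 - t1, 1e-6)

def flag_anomalies(unit_id, max_kmh=900.0):
    """Flag scan pairs implying impossible travel speed -- the 'same' unit
    turning up in two distant cities at once is a classic counterfeit signal."""
    scans = sorted(scan_log.get(unit_id, []))
    return [b for a, b in zip(scans, scans[1:]) if implied_speed_kmh(a, b) > max_kmh]

# A unit scanned in New York, then one hour later in Los Angeles: flagged.
record_scan("unit-1", 0.0, 40.71, -74.01)
record_scan("unit-1", 1.0, 34.05, -118.24)
print(flag_anomalies("unit-1"))  # [(1.0, 34.05, -118.24)]
```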
Looking to the future, Miller said ADNAS is also experimenting with methods for using the ubiquity of iPhones to bring consumers and retail outlets into the fold. That could mean anything from scanning the DNA-based QR code to ensure the freshness of a product to helping stores identify sales trends. Admittedly, though, those uses are a while out and would require cooperation from ADNAS's customers, which are the ones dealing directly with resellers and consumers. Presumably, the DNA-based QR codes could provide more granular data because they're tied to individual units of products.
However digitalDNA usage evolves, even if it never really takes off, the high-level concept behind the product is sound. As we'll discuss in numerous sessions at our Structure Conference next month in San Francisco, CA, there's an undeniable connection among cloud computing, big data, and mobile technologies as relates to capturing, storing, and processing entirely new types of data.
When literally anybody with a mobile phone and the right app can scan a code and send rich data up to the cloud, it opens up entirely new possibilities around both analytics and application architectures.
LG Slated to Launch Internet-Enabled TV with Google TV Technology
Excerpted from MediaPost Top of the News Report
Stateside, LG Electronics is reportedly planning to launch Internet-enabled TV based on Google's platform before the end of the month.
"The move reflects an aggressive push by the duo to defend against a potential threat from Apple, which reshaped the handset market with its iPhone smart-phone and is widely expected to unveil a full-fledged TV product later this year or early next year," Reuters reports.
As 9To5Google reminds us: "Google TV is a Google-branded smart TV platform that integrates the Android operating system and Chrome browser to create an interactive television user-interface that is overlaid onto existing Internet television and WebTV."
Late last year, "Google Executive Chairman Eric Schmidt predicted that most TVs sold by summer 2012 will come with Google TV on board," recalls ComputerWorld.
"But, with Google TV's lukewarm welcome by consumers, and a non-existent Apple television set grabbing headlines, will LG fare better than Google's first round of partners?" VentureBeat asks.
Indeed, "Since launching in 2010, Google TV has struggled to capture a big audience in the Smart TV business," USAToday remarks. "The company pushed out a massive update last October, and unveiled partnerships with TV makers LG and Vizio during January's Consumer Electronics Show in Las Vegas."
As for LG, "Despite already having its own 'smart TV' platform, the world's No. 2 TV maker is clearly hoping the Google TV technology will be a way to stand out from its peers," ZDNet writes.
"Features of Google TV include being able to use a smart-phone as a remote control, searching the Internet on the TV while watching a show, and creating a home-page with app launch icons and TV channels," CNet notes.
Yet, regarding the new TVs, Gizmodo writes: "Whether they can save Google's TV dream, well, that's anyone's guess."
Want to Reinvent TV? Don't Forget the TV
Excerpted from GigaOM Report by Janko Roettgers
Smart TVs, dumb TVs, Google TVs, Ikea TVs, and even everything we know about the rumored Apple TV set all have something in common: In the end, they're just TVs. That's whether they're 42, 50, or 60 inches in size, with a bezel that frames your viewing experience. And whether it's Netflix, YouTube or just plain old cable TV, the way we watch video on them is fairly similar as well.
Sure, the bits may come from different places, and you might even have funky widgets on your iPad or on-screen while you watch TV. But take a step back and today's TV still looks very much like the TV of yesteryear. Turn it on, watch something, turn it off, and be done with it.
That's not what the future of the TV will look like at all, if we can believe the folks at NDS. The Israel-based TV services provider, which Cisco acquired for $5 billion in March, has been exploring what the actual TV set will look like five years from now. Company executives came to San Francisco, CA this week to showcase some of their research, and the results are pretty intriguing.
To sum it up briefly, NDS was showcasing a big matrix of six bezel-less flat screen TVs that were combined to form a huge, almost overwhelming TV wall. NDS CTO Nick Thexton then went on to demonstrate how big displays like these can be broken up, showing a video of varying sizes somewhere in the middle, with personalized and content-relevant widgets off to the side. And once you get some cinematic 4K content, you might even want to use the whole screen. Check out Christina Bonnington's story at Wired for more details about the demo, which was neat.
But what I found fascinating was the points that Thexton and NDS Chief Marketing Officer Nigel Smith raised about the future of TV. The real question, Smith told me, is, "If you have a TV the size of a wall, how are you going to interact with it?"
The future of TV will be modular.
NDS uses a PC with multiple video outputs to power its six-display TV wall. Soon, this could be done by small mesh networking-capable modules.
We have all gotten used to the fact that TVs are getting bigger and bigger every year, and the NDS demo of a TV screen that would fill your entire living room wall seems to fit quite well into that narrative. However, Thexton was very vocal about this not being a question of size. "We are not advocating just big TVs," he told me while standing in front of the giant NDS demo screen.
Instead, Thexton thinks that TVs may become modular and actually consist of much smaller displays that can be combined to fit the room. Think of 6-inch to 8-inch bezel-less squares that you can buy individually and then mount to the wall next to one another, gradually growing the size of your display to fit your needs. These displays would automatically work together, making sure your Saturday night movie runs on all of them at once.
NDS is currently using a PC with multiple video outputs to run its six-screen demo, but Thexton told me the company is developing a small module to connect to each screen separately and then mesh network these to coordinate the complete video output. Mesh networking devices like these could also come in handy if you wanted to include another TV on a second wall, for example to run a news feed or an in-home video stream while you're interacting with other media on the main screen.
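NDS hasn't published how its planned modules would divide a frame among themselves, but the geometry is straightforward to illustrate. The hypothetical Python sketch below computes the crop of a source frame that the module behind each tile in a cols x rows wall would render; in the mesh-networked design Thexton describes, each module would apply only its own crop.

```python
def tile_regions(frame_w, frame_h, cols, rows):
    """Map a source frame onto a cols x rows wall of identical tiles.

    Returns {(col, row): (x, y, w, h)}: the slice of the source frame
    that the module driving each tile should display."""
    tile_w, tile_h = frame_w // cols, frame_h // rows
    return {(c, r): (c * tile_w, r * tile_h, tile_w, tile_h)
            for c in range(cols) for r in range(rows)}

# The six-screen NDS demo as a 3 x 2 wall showing a 4K (3840x2160) frame:
for tile, crop in sorted(tile_regions(3840, 2160, 3, 2).items()):
    print(tile, crop)  # each module renders its own 1280x1080 slice
```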
The future of TV will be ambient.
One of the main points of the NDS demo was that huge displays don't always equal huge videos. Instead of watching your morning news in theater mode, you're going to watch clips with a much smaller size and use the rest of the screen for other information.
In fact, sometimes you might not be watching TV at all but will still find it useful to leave the large screen wall on.
For example, it could display cover art for the music you are listening to while giving you access to your calendar reminders, a wall-sized clock and your Twitter feed. Home automation and security-camera footage are also applications that could be useful to run all day, or fade in and out as needed.
A huge screen doesn't necessarily mean that you watch everything blown up to the max.
But with that big ambient screen also comes a unique new challenge: You really don't want to turn it off. Anyone with a big TV screen is already aware that the device can look like a big, black annoying hole in the middle of your living room when not in use. Now multiply this by three, four or even six and you end up with a whole lot of ugly dark screen estate.
Leaving your big TV wall running all day, though, will cost you a fortune in electricity. The solution will be e-ink-like display technologies that allow you to keep a visual wallpaper or even some widgets up and running without burning a hole in your wallet.
The future of TV will need new interaction models.
NDS ran its demo off an iPad, allowing me to change the immersion level - and display size - of a video with simple sliders. That was good enough for a demonstration, but it still seemed somewhat complex for everyday use. Thexton told me that the company had evaluated Kinect-like gesture control as well as Siri-like voice control but eventually abandoned both because they seemed to require too much effort and were too prone to errors.
In the end, he said, people didn't want to control their TV in a Minority Report-like fashion but with something that felt more natural. "We don't want people to feel weird in their living room," he said.
But is the tablet the be-all and end-all? Thexton didn't think so, and he reminded me that controlling a TV traditionally can be boiled down to just a few core indicators. Give someone a remote control with a D-Pad, and they can pretty much navigate through any cable guide or online video app.
So if only four to six buttons are needed, how about replacing these with interactions that can be accomplished without any remote control at all? The key might just be to treat the TV like a pet, said Thexton, and develop a kind of interactive language both you and your TV understand. In other words: Don't be trained to use a remote; train it to do the things you want.
Define TV's future without its constraints.
A TV that consists of many little displays working together, a TV that's always on, a TV the size of your living room wall and a TV that obeys you like a well-trained dog: That's a lot to swallow, especially if you've thought of the next wave of apps as innovation in the TV space.
However, it may be time to think bigger, and leave some of the assumptions of what TV is - and what TV sets are - behind. "TV has to start defining a future for itself," Thexton told me. And that future may not fit into a 60- or 70-inch bezel.
Garth Ancier Advises Intel on Virtual-MSO Plan
Excerpted from Variety Report by Andrew Wallenstein
Hollywood and Silicon Valley may be worlds apart, but Intel is relying on a TV-biz veteran to bridge the gap.
Sources say former BBC Worldwide America CEO Garth Ancier has been serving as the face of the chip manufacturer in boardrooms at all the major content companies to advance Intel's ambition to launch a bundle of Internet-delivered TV channels that would rival offerings from cable and satellite operators.
A spokeswoman for Intel confirmed Ancier is working in an advisory capacity to the Intel Media Group but declined to specify what he's working on or what that division is doing. Ancier did not respond to inquiries for comment.
But Ancier is playing an instrumental role in the deployment of a set-top box (STB) powered by Intel processors capable of transmitting high-definition channels. The Santa Clara, CA-based company is hoping to be first to market with the still-unnamed product, ahead of others reportedly working on their own so-called virtual-MSO plans, including Sony, Apple, and Google.
The virtual MSO model involves packaging a tier of linear TV channels and delivering them to consumers via broadband without the geographic restrictions that confine cable operators.
It makes sense for a company like Intel to turn to an executive like Ancier. He has a well-stocked Rolodex, having worked in prominent posts at many different media companies over the past several decades including NBC, Fox, the WB, Turner Broadcasting, Sony, and Disney. In his most recent full-time stint, he spent three years at BBC Worldwide America ending in early 2010.
Having a familiar face like Ancier at the negotiating table could prove useful for Intel given historically frosty relations between the tech and content industries. He could serve as something of a translator between the two worlds, educating Intel on the intricacies of the TV business - a skill-set many of the tech firms trying to outmaneuver Intel have been knocked for lacking.
Companies from Google to Apple have been accused of failing to respect the value or copyright of content, a criticism that may partly explain the struggles of Google TV, which launched weakly last year without access to broadcast content. Speculation has already begun on how stingy media companies could get with Apple in support of the highly anticipated TV set, unofficially dubbed "iTV," rumored to be in development.
Intel isn't exactly a newcomer to Hollywood given its chips have long provided piracy protection for digital distribution. But this new product marks a major departure for a company best known for back-end technology.
Ancier was brought in by fellow BBC alumnus Erik Huggers, who joined Intel last year as head of Intel Media Group after leading the Beeb's digital media division.
Intel hasn't made clear what its intentions are since the closure last October of its digital home group, the division that created the chips that powered an early iteration of Google TV.
Huggers was shifted to the newly formed unit Intel Media Group, which is charged with delivering some kind of programming experience for smart TVs, though the company has never confirmed reports that first surfaced in March that it has settled on a virtual-MSO strategy.
While Ancier took a lead role in establishing relationships between Intel and content companies, sources say his involvement going forward is purely in a consulting role. But he's brought in various business affairs and legal execs with media-company experience capable of getting complicated carriage deals done.
There's going to be plenty of work for them to do if they're going to meet Intel's goal of launching in test markets by Christmas with even a few channels in place. Though Intel is talking to everyone, it is said to be far from any finished deals.
That speaks to the difficulty of doing any carriage agreements, which can take years to nail down. Add in the complexities of coming to terms on an entirely new business model, one that can't violate the most-favored-nation (MFN) clauses in place that prevent programmers from giving Intel any advantage over top MSOs like Comcast.
And while the company has pockets deep enough to afford the multibillion-dollar price-tag for entering the multichannel video business, steep programming costs will likely keep Intel from charging consumers anything less than incumbent services.
As if getting deals that provide any edge over the cable operators isn't difficult enough, consider that the broadband connections Intel's strategy relies on are largely controlled by the very same cable operators.
No doubt Intel is keeping an eye on the debate over network neutrality, the principle that broadband providers can't give preferential treatment to any one source of data. But recent allegations that Comcast is doing just that have given pause to other potential market entrants considering a virtual MSO, like Sony.
Even if Intel engineers an incredible product, it faces a tough market to crack given that the incumbents seem to be recovering from a sluggish 2011 with two consecutive quarters of modest growth. MSOs added 422,000 video subscribers in the first quarter of the year, according to Bernstein Research, on top of the 243,000 added in the fourth quarter of last year.
Despite all the obstacles, Intel execs are said to believe that its competitive edge lies in its ability to deliver a seamless TV experience across an IP connection with little of the bugginess or buffering that plagues similar products.
Still, even if they can make their aggressive Christmas deadline, they may still be beaten to market by Verizon and Redbox, which announced in February their intent to launch an unspecified video venture, also expected to be a virtual-MSO play, in the second half of the year.
Next-Gen Nielsen TV Meters Enhance Compliance, Cross-Platform Measurement
Excerpted from Media Daily News Report by Joe Mandese
In what is likely the most significant change in the methods Nielsen uses to measure TV - and potentially all forms of video content - the ratings company this week quietly began informing clients of a major initiative to develop a suite of new audience meters and digital tracking codes that could begin replacing its current meters as soon as 2014.
Dubbed "GTAM," which stands for Global Television Audience Metering, the initiative includes the development of four new audience metering technologies designed to deal with all of the conceivable challenges involved in measuring the viewing behavior of contemporary consumer households.
The initiative is significant for several reasons beyond the technologies being developed. It is a major reaffirmation of Nielsen's strategy of basing audience measurement around in-home viewing, which has long been the foundation of its measurement systems, although some components of the GTAM initiative will make it easier for Nielsen to incorporate mobile, wireless, and Internet-based video audience exposure as well. The other major reason the plan is significant is that, as its name implies, it will be a global effort - the technologies being developed would likely be deployed as part of a standardized methodology across the 16 international markets in which Nielsen currently measures media audiences.
The four new metering solutions include the so-called "GTAM meter," which will be the primary device Nielsen plans to use for audience measurement. The GTAM meter is said to be smaller, more ergonomic, easier for consumers to interact with, and far less "invasive" than Nielsen's current industry-standard "A/P meters." Like the A/P (active/passive) meters, the new GTAM meter is expected to utilize a combination of active and passive measurement technologies, but unlike Nielsen's current meters it will not need to be physically connected to any household media device, such as a TV set, set-top tuner, or DVR, to function.
The second technology in development is a lighter, somewhat less sophisticated meter, aptly named the "GTAM Lite Meter," which is capable of measuring TV audiences in households that have fewer electronic devices in them and are less complicated to measure.
A third solution, code-named the "Code Reader," is an even smaller device that relies entirely on its ability to monitor the digital codes associated with TV and video programming. All the new metering technologies are being designed to work with a new, bulletproof digital watermarking technology Nielsen has developed, said to be capable of surviving any conceivable compression technology that would otherwise strip away current versions of digital codes and watermarks. Dubbed "Watermark," the new code is said to be integral to Nielsen's plans to accelerate cross-platform video measurement and integration, because it is also a solution for measuring video exposure across wired and wireless Internet platforms.
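Nielsen hasn't disclosed how the Watermark code is made to survive compression, but redundancy plus majority voting is one textbook way to harden an embedded identifier against corruption. The Python toy below is purely illustrative of that principle and is not Nielsen's scheme.

```python
from collections import Counter

def embed(payload_bits, repeats=5):
    """Repeat each payload bit so the code survives partial corruption."""
    return [b for bit in payload_bits for b in [bit] * repeats]

def extract(received_bits, repeats=5):
    """Recover the payload by majority vote within each repeat group."""
    groups = (received_bits[i:i + repeats]
              for i in range(0, len(received_bits), repeats))
    return [Counter(g).most_common(1)[0][0] for g in groups]

code = [1, 0, 1, 1]             # e.g., a toy program identifier
signal = embed(code)
signal[3] ^= 1                  # noise or lossy compression flips a bit
assert extract(signal) == code  # the identifier still comes through
```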
The fourth metering technology in the initiative may be the most controversial in the mix, because it is designed explicitly to replace Nielsen's current people meters, now the state-of-the-art in its TV metering portfolio. Unlike Nielsen's current generation of people meters, which utilize blinking lights to remind viewers to push buttons to indicate they are actively watching TV programming, the new meters will feature an LED screen that will give respondents written instructions and prompts for complying with the measurement process.
That meter is dubbed the "Scrolling Text People Meter," and it could be controversial, because it is designed to improve the cooperation and compliance of people watching TV in a sample household, which could potentially influence the way they watch TV.
Nielsen is expected to vet the new approaches and technologies among its various client groups and industry bodies before deploying anything, and the likely time frame is that the first versions of the new meters would not be installed in sample households until early 2014, following a year of evaluation during 2013.
Internet Radio on the Rise
Excerpted from Online Media Daily Report by Erik Sass
Internet radio listening is surging, according to new data unveiled this week by TargetSpot, which operates a digital audio ad network, and Pandora, the leading online audio platform.
The TargetSpot data is drawn from the Digital Audio Benchmark and Trend Study, based on a survey of adult US broadband households conducted from January 7th to 17th of this year by Parks Associates. The survey showed that Internet radio has reached 42% of adult US broadband households, up nine percentage points from 33% in 2011. Within this cohort, 42% are households with children, 64% own their own homes, and 22% have a household income of $100,000 per year or more - up 29% from 2011.
Digital audio listeners also display significant engagement with the medium, with 80% listening from one to three hours per day. Increased listening is facilitated in part by the proliferation of mobile devices with Internet connectivity: among Internet radio listeners, tablet ownership increased 87% from 2011 to 2012 and 48% are spending more time listening on their tablets, while smart-phone ownership increased 22% and 38% are spending more time listening on their phones. In-car listening is also increasing, with 14% of digital audio listeners using an Internet radio player in their automobiles.
The TargetSpot-Parks study also showed substantial recall and response rates for online audio advertising. Here, 58% of digital audio listeners said they recalled having seen or heard an Internet radio ad in the last month, up from 52% in 2011. Among listeners who recalled ads, 44% said they responded to an Internet radio ad, up from 40% in 2011. Ad support is clearly important to Internet radio's viability, as 86% of listeners say they do not pay for digital audio content.
Meanwhile, Pandora released data showing that its online audio service now constitutes 6% of all radio listening, with 1.06 billion listener hours in April 2012, up 87% from 566 million hours in April 2011. Active listeners numbered 51.9 million at the end of April 2012, up 52% from 34 million in April 2011.
Separate data released by Arbitron in March of this year shows that broadcast radio reaches 241 million listeners per week, representing 93% of the total U.S. population, while Arbitron data from January suggests average total listening of about 14.6 billion hours per month.
Telefonica "TU Me" App Targets Skype With Free iPhone Calls
Excerpted from PC Magazine Report by Angela Moscaritolo
Spanish telecommunications company Telefonica is going after Skype and WhatsApp with a new iPhone app that lets users make free calls without using up their minutes.
The app, dubbed TU Me, is available for free to anyone with an iPhone worldwide, regardless of the phone operator, Telefonica announced on Wednesday. An Android version is expected to be released in the coming weeks.
The app lets users make calls, record and send voice messages, exchange text messages, and share photos and location information with other TU Me users - all from one screen, without needing to switch between apps or tabs. The service works through a user's mobile data plan or over Wi-Fi.
TU Me will check a user's address book to find other TU Me users. Additionally, users can invite their friends to start using the app via SMS or e-mail with an "invite" button on the app's contact page.
Calls and other interactions are stored in a searchable timeline format, so users can scroll through and keep a history of their conversations. The app stores data online, in the cloud, making it available whenever a user logs into the app, even if their device is lost. TU Me includes a year of free storage for all conversations.
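Telefonica hasn't published TU Me's internals, so the Python sketch below is only a guess at the shape of the feature set described - one cloud-stored timeline holding every interaction, kept in order and searchable - with all class and field names invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    timestamp: datetime
    kind: str     # "call", "voice_message", "text", "photo", "location"
    peer: str     # the other TU Me user
    summary: str  # searchable text: message body, caption, and so on

class Timeline:
    """One user's interactions as a single scrollable, searchable history.
    Held server-side, so it survives the loss of the handset."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)
        self.events.sort(key=lambda e: e.timestamp)

    def search(self, term):
        term = term.lower()
        return [e for e in self.events if term in e.summary.lower()]

tl = Timeline()
tl.append(Event(datetime(2012, 5, 9, 10, 0), "text", "ana", "lunch tomorrow?"))
tl.append(Event(datetime(2012, 5, 9, 10, 5), "call", "ana", "3-minute call"))
print([e.kind for e in tl.search("lunch")])  # ['text']
```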
"TU Me puts all your communications needs into one place, for free, and is a great way for people to stay in touch with those close to them," Stephen Shurrock, Telefonica's Digital Chief Commercial Officer, said.
TU Me is available now as a free download in Apple's App Store.
Huawei Announces Next-Gen CloudEngine Switches
Excerpted from Sci-Tech Today Report by Barry Levine
China-based Huawei has announced its next-generation CloudEngine 12800 data center series of switches, which provide what the company describes as the largest single-frame switching capacity in the industry. The announcement was made at the Interop 2012 show in Las Vegas, NV.
The new switches provide a switching capacity of up to 48 Tbps and support switching of 100GE, 40GE, 10GE, and GE interfaces. They also offer virtualization and convergence of computing, storage, and networks, which Huawei said could help data centers project their use over a 10-year lifespan.
CloudEngine is intended to handle the extreme networking challenges that come with cloud-based services, such as traffic growth, service scale, and manageability. The company said that it was offering customers a solution for this environment with solid reference architecture and a highly scalable, virtualized platform.
The switches are intended for midmarket and high-end enterprise customers, with management through a single interface.
Features of the CloudEngine series include bandwidth per slot of up to 2 Tbps, and switching capacity of up to 48 Tbps, which Huawei said was 300 percent more than the highest level in the industry. The switches also support moving from GE/10GE servers, over a 10-year span, to 40GE/100GE servers.
Traffic bursts, not uncommon for such cloud apps as data access and parallel computing, are processed by non-blocking CLOS fabric architecture and large distributed buffers. The switches also offer front-to-rear ventilation channel design for heat dissipation.
Multiple switches can be virtualized into one logical switch using the Cluster Switch System, and the Virtual System can virtualize a single switch into many logical devices. The result, the company said, is the ability to facilitate allocating network resources on demand.
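The article doesn't show Huawei's actual Cluster Switch System or Virtual System configuration, so the Python toy below only models the two mappings it describes: several physical switches pooled into one logical switch, and one physical switch partitioned into several logical devices.

```python
def cluster(switches):
    """Many-to-one: present several physical switches as one logical
    switch with their pooled ports and capacity."""
    return {"ports": sum(s["ports"] for s in switches),
            "capacity_tbps": sum(s["capacity_tbps"] for s in switches)}

def partition(switch, port_shares):
    """One-to-many: split a single switch into logical devices, each
    allocated a share of its ports on demand."""
    assert sum(port_shares) <= switch["ports"], "cannot oversubscribe ports"
    return [{"ports": n} for n in port_shares]

core = [{"ports": 96, "capacity_tbps": 48}, {"ports": 96, "capacity_tbps": 48}]
print(cluster(core))                     # one logical switch: 192 ports, 96 Tbps
print(partition(core[0], [32, 32, 16]))  # three logical devices from one switch
```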
Virtual machines are supported, in that network administrators can build large, Layer 2 networks with over 500 nodes, permitting flexibility in service deployment and speed in migration.
Coupled with the nCenter network management system, Huawei said that the CloudEngine products can provide 10 times more virtual parallel processing capability than the average in the industry.
The CloudEngine series also supports Fiber Channel over Ethernet, allowing companies to deploy SAN-based network traffic over Ethernet, thereby establishing a converged network. Additionally, Priority Flow Control helps to provide non-blocking transmission.
Last year, Huawei began moving on a more organized basis into the US and global enterprise market, with cut-rate products that appear to target Cisco.
In addition to the CloudEngine announcement, Huawei also announced at Interop that it would sell various products - including IP network infrastructure and communication/collaboration products - to enterprise customers in the US through a distribution deal with Synnex.
The company has set a goal of $7 billion in enterprise networking revenue by the end of this year, up from the $2 billion it was doing when it embarked on its global rollout in 2011. If it did hit that goal, it would be second to Cisco in that market. In 2011, Huawei had global revenue of $32 billion, compared with Cisco's $44 billion.
Coming Events of Interest
Data Center + Network: The Converged Cloud - May 17th Webinar. By making data centers more agile, increasing provisioning speed, and reducing capital expenditures, cloud is forever altering the way enterprises deploy technology.
Cloud Computing Forum & Workshop V - June 5th-7th in Washington, DC. The National Institute of Standards and Technology (NIST) hosts this meeting focused on reviewing progress on the Priority Action Plans (PAPs) for each of the 10 high-priority requirements related to interoperability, portability, and security that were identified by US government agencies for adopting cloud computing.
Cloud Expo - June 11th-14th in New York, NY. Two unstoppable enterprise IT trends, Cloud Computing and Big Data, will converge in New York at the tenth annual Cloud Expo being held at the Javits Convention Center. A vast selection of technical and strategic General Sessions, Industry Keynotes, Power Panels, Breakout Sessions, and a bustling Expo Floor.
IEEE 32nd International Conference on Distributed Computing - June 18th-21st in Taipa, Macao. ICDCS brings together scientists and engineers in industry, academia, and government: Cloud Computing Systems, Algorithms and Theory, Distributed OS and Middleware, Data Management and Data Centers, Network/Web/P2P Protocols and Applications, Fault Tolerance and Dependability, Wireless, Mobile, Sensor, and Ubiquitous Computing, Security and Privacy.
Cloud Management Summit - June 19th in Mountain View, CA. A forum for corporate decision-makers to learn about how to manage today's public, private, and hybrid clouds using the latest cloud solutions and strategies aimed at addressing their application management, access control, performance management, helpdesk, security, storage, and service management requirements on-premise and in the cloud.
2012 Creative Storage Conference - June 26th in Culver City, CA. In association with key industry sponsors, CS2012 is finalizing a series of technology, application, and trend sessions that will feature distinguished experts from the professional media and entertainment industries.
CLOUD COMPUTING WEST 2012 - November 8th-9th in Santa Monica, CA. CCW:2012 will zero in on the latest advances in applying cloud-based solutions to all aspects of high-value entertainment content production, storage, and delivery; the impact of cloud services on broadband network management and economics; and evaluating and investing in cloud-computing services providers.