Thursday, February 28, 2013

Federal Government Makes Silo-Busting, Startup-Unleashing Healthcare Move

Editor’s note: Dave Chase is the CEO of Avado.com, a patient portal and relationship management company that was a TechCrunch Disrupt finalist. Previously he was a management consultant for Accenture’s healthcare practice and founder of Microsoft’s $2 billion health platform business. You can follow him on Twitter @chasedave.

For the first time, the federal government has provided large financial incentives for sharing health data among authorized healthcare providers and with patients themselves to facilitate patient engagement. In the past, providers had a disincentive to share information outside of their silo. This has been a central reason why healthcare has been a technology backwater.

Technology monocultures can thrive in the old siloed environment. A recent decision has created a strong new incentive for providers and, as a byproduct, has opened up opportunities for startups that didn’t exist before, fostering a more diverse ecosystem of interoperable web services. The recent rulings are among the key technology enablers of healthcare’s trillion-dollar disruption.

Health Affairs, “the bible of health policy,” devotes this month’s issue to the “New Era of Patient Engagement.” The reason: patient engagement has been called the blockbuster drug of the century for its profound impact on improving health outcomes. Patient engagement is defined as “actions that people take for their health and to benefit from care.” Combining patient engagement with other proven approaches such as choice architecture (the way decisions can be influenced by how the choices are presented; see Wikipedia for more) can dramatically improve health outcomes.

Evidence is overwhelming that healthcare providers who engage with their patients and caregivers have far better outcomes (e.g. 73 percent to 88 percent reductions in mortality). A big move just announced by the government provides further evidence of patient engagement moving mainstream, creating demand for new software and an opportunity without precedent in healthcare.

The nation’s “health IT czar,” Dr. Farzad Mostashari, runs an organization called the Office of the National Coordinator (ONC), originally set up by the Bush Administration. Later, the Obama Administration doubled down on efforts to catch healthcare up with the rest of society in the adoption of modern computing approaches, in the form of electronic health records (EHRs).

Even today, the “Wang era” (i.e. one vendor provides the entire technology stack) persists in healthcare. As in the Wang era, there is limited ability to interface with other systems. A $10,000 charge is normal for even a simple integration that would be free or near-free in a modern web-services architecture. To its credit, the ONC is trying to break healthcare out of its technology twilight zone and unleash innovation akin to the post-web, post-iOS/Android app store world.
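
To make that contrast concrete, here is a minimal sketch of what a “near-free” integration can look like in a modern web-services architecture. Everything in it is hypothetical: the endpoint, token and field names are illustrative placeholders, not any actual EHR vendor’s API.

    # A hypothetical modern web-services integration: fetch a patient record
    # as JSON over an authenticated REST call. Endpoint, token and field
    # names are placeholders, not a real EHR vendor's API.
    import requests

    API_BASE = "https://api.example-ehr.com/v1"  # hypothetical endpoint
    TOKEN = "replace-with-oauth-token"           # hypothetical credential

    def fetch_patient(patient_id):
        """Retrieve one patient record: a few lines, not a $10,000 interface."""
        resp = requests.get(
            f"{API_BASE}/patients/{patient_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()  # surface HTTP errors rather than bad data
        return resp.json()

    if __name__ == "__main__":
        record = fetch_patient("12345")
        print(record.get("name"), record.get("conditions"))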

Federal Standards/Ruling Poised To Open Up Startup Innovation

The ONC is trying to unleash startup ventures and is in the process of rolling out the Automated Blue Button Initiative to open up health information that is technically owned by consumers but has been difficult to access. Until recently, healthcare moved at such a glacial pace that it didn’t follow the pattern nearly every other industry has followed. Zina Moukheiber recently wrote a piece with a headline that looks like it was written by The Onion: “Government Should Slow Down Race to Implement Electronic Health Records.”

I can’t remember a time when a technology company complained about things moving too fast, especially in healthcare, where decision processes can be painfully long. In fact, that extreme length of decision-making is at the heart of why I have long said that healthcare is where tech startups go to die: super-long decision processes naturally favor legacy vendors who are milking products that may be 10 or more years old.

At HIMSS13 next week, legacy vendors will make announcements such as “investing $25 million in patient engagement” that include developing mobile apps that are the equivalent of the Verizon app store. We know how that turned out: depending on one vendor greatly limits innovation. This is in contrast to the ONC’s initiative. While some of the ONC’s moves can appear small at first, the market-signaling effect can turn the snowball into an avalanche of new EHR-agnostic, carrier-independent apps. For example, a leading healthcare journalist speculated on how the recent ruling could give health systems an alternative to the expensive practice of buying medical practices to buy loyalty.

Jan Oldenburg (editor of Engage! Transforming Healthcare Through Digital Patient Engagement) tweeted, “This looks like the first ONC acknowledgment that patient (portal) experience needs to have community data.” [Disclosure: I had the honor of helping write and edit that book with Jan, Kate Christensen and Brad Tritle. There is also a track at HIMSS13 focused on Patient Experience where we’ll highlight patient engagement successes and lessons learned.]

Recognizing that breaking down information silos is paramount to improving outcomes, the ONC ruled that healthcare providers utilizing EHR-agnostic (i.e. not supplied by the provider’s own EHR vendor), multi-provider patient portals can gain a major advantage over those using the old model of siloed EHR patient portals. In a nutshell, providers have to get a portion of their patients to actively use their data. We’ll use a scenario to describe the benefit:

Let’s say I’m a primary care physician and I send my patient to a cardiologist. After seeing the cardiologist, the patient then logs into a portal that’s shared by me and the cardiologist. The bonus for me, then, is that I get credit for that patient logging into the portal, even though the patient logged in after visiting another doctor, not me. Had the cardiologist and I NOT shared a portal, then obviously I’d get no credit for the patient logging into the cardiologist’s own portal after the patient visited the specialist.

As leading health IT pundit Leonard Kish stated, “This is a big step forward in patient engagement that recognizes consumers prefer a tool that’s similar to Quicken or Mint in personal finances, a place where all their information can be brought into one place that’s portable, and if need be, shareable within a larger community. While diminishing patient options, the single entry to a portal also diminishes the ability of the hospital to build the relationships with a network of outside providers and patients who are so critical to improving care, and ultimately, bringing more patients to the hospital. The key for both patients and providers is building a community of shared information that can drive improved outcomes through group learning.”

What’s Next?

Whether you think it’s a good or bad thing that government is involved in healthcare, there is no disputing that it is a major player. As one of the few areas of healthcare policy with bipartisan agreement, the ONC enjoys widespread support. The only legitimate concern I’ve heard raised is that the ONC is moving too slowly and setting the targets for achieving “Meaningful Use” laughably low. For example, the aforementioned ruling applies to a requirement to have just 5 percent of patients view, download or transmit their health information.

Meanwhile, some organizations are getting over 50 percent of their patients to do this, and that is on a single, siloed patient portal, not one that covers all of their providers. In light of the fact that one has to be pretty “engaged” to go to the trouble of scheduling a doctor’s appointment, likely giving up half a day for it, a more ambitious goal of 20 percent would be reasonable. Getting one out of five patients to take an action they are already predisposed to take doesn’t seem like a tough ask.

I can empathize with the ONC: they are getting an earful to slow things down, but they also recognize that lives are at stake when things slow down. Perhaps they can take comfort in the old saw that the best way to kill a bad product is good advertising. The new saw may be that the best way to kill a bad EHR is good Meaningful Use requirements.

I don’t know of anyone who has lived a full life with only one doctor the entire time. This is especially true with 50 million Americans moving every year and many more changing jobs (and thus health plans/providers). I took stock of my family of four: we had a healthy 2012 and still went to seven different providers, and I don’t want seven user ID/password combinations. We’ve also been on three different health plans in the last four years. The best patient portal in the world has limited value if you can’t take that information with you.

That healthcare still applauds “patient-centric” organizations that charge $1 per printed page of your health records when you leave their health plan/system speaks volumes. There would be congressional hearings if financial institutions prevented consumers from downloading their transaction history or charged them $1 per printed page of it. Yet the equivalent has oddly become the norm in healthcare.

Fortunately, it appears the ONC is going to rectify that in Stage 3 of Meaningful Use by making the default that a patient can choose where their data is sent. One can only imagine the criticism the federal government would receive if, after spending $28 billion of taxpayer money encouraging EHR adoption, consumers were still unable to get their own data (not to mention the $750 billion of waste in healthcare). The fact that this is still a topic of discussion is stunning. Having spent time with the ONC leaders (see also “Mr. Obama, Tear Down This Wall(ed Garden)”), I have 100 percent confidence in Dr. Mostashari’s leadership and that his organization won’t be swayed by the silo-loving legacy vendors. Anything less would be the missed opportunity of the century.

Wednesday, February 27, 2013

The Forgotten Secrets Of The Enterprise Giants: Virality, Word Of Mouth, And Other Radical Experiments

Editor’s note: David Barrett is founder and CEO of Expensify. Follow him on Twitter @expensify.

I would strongly encourage everybody who competes with Expensify to study Roman Stanek’s latest article, “Forget Virality, Selling Enterprise Software Is Still Old School”. To everyone else, I’d offer an opposing view: Enterprise sales is undergoing the most radical shakeup since the turn of the century, and today’s experiments will be tomorrow’s best practices.

Indeed, we often forget that today’s “best practices” were once the radical experiments of their day. Rewind any great business and you’ll usually find a crazy, high-risk, totally radical startup. And all the things that are said about today’s startups were said about theirs, too, by people whose names we’ve long forgotten.

Furthermore, those startups generally got started doing things in a completely different way than they do today. Far from being great companies birthed whole, the backstories of today’s giants involve twists, turns, one-time opportunities, and unexpected constraints. They involve radical experiments. Contrary to what the old school teaches, the great companies of today pivoted in their day. A lot.

For example, take the two greatest SaaS enterprise companies: Salesforce and Concur. Both are held up as paradigms of enterprise sales, and are often highlighted as examples of how it’s always been done (and presumably, always will be). But their real histories are far more nuanced than they’re summarized to be.

Did you know that Salesforce initially launched with an activity-based pricing model, where the first two seats were perpetually free? It was designed to get critical mass in a company to soften it up for an inside sales call, and it worked great. This was their sales model up to about $17 million in sales, until the dot-com crash wiped out fundraising opportunities right at a moment when high customer churn combined with major spend on inside sales to create a cashflow nightmare. For this reason they switched from activity-based monthly billing to prepaid annual contracts: to get more cash up front to fund the business, recognizing that the price of capital had spiked when they needed it most. This permanently changed the pricing model, and with it the compensation model, the lead generation model, and basically every other model of the company, into what we know today.

And did you know that Concur’s first product was a desktop expense reporting application modeled on Quicken, sold directly to individual salespeople to (once again) soften up the enterprise for an inside sales call? And that this model worked great? This continued until Concur was competing with American Express on a major deal: a leading Detroit automaker. I don’t know whether the details of that situation are public, but here’s what happened next: American Express cancelled its own expense reporting application, purchased a 25 percent stake in Concur, and then began selling Concur directly to its enormous range of commercial card customers. This instantly shifted Concur’s sales strategy to the current (now “classic”) approach of top-down sales to large enterprises. But it sure wasn’t where they started, and they’ve struggled to go back downmarket ever since.

Even more interesting than the “large enterprise” space are the clever techniques used to lock up the “small enterprise,” a world where classic acquisition techniques just don’t work at all. Take Intuit, which holds something like 84 percent retail market share of small business accounting with its QuickBooks product. But that’s only “retail” market share; it ignores that most very small businesses still use Excel for their accounting, never buying an accounting package at all.

Furthermore, 85 percent of that QuickBooks business came by word of mouth, not through some big marketing campaign, a number that has held true from the start to today. And most of the revenue from QuickBooks comes not from QuickBooks itself, but from secondary financial services. QuickBooks establishes a priceless channel for selling more profitable SMB accounting products that can’t be effectively sold in any other way. So the story of QuickBooks, one of the greatest and most widely used accounting packages in the world, isn’t as clear as it seems at first glance.

Weirder still, the inspiration for QuickBooks came when they were getting a bunch of requests to add invoicing to Quicken: a personal finance product. (Side note: Quicken inspired both the world’s leading invoicing and expense reporting applications… hmm…) Intuit then built QuickBooks and tried to sell it directly to accountants, who at the time had no interest whatsoever. Accountants are very tech savvy, but not exactly early adopters. So instead, Intuit just bundled QuickBooks with Quicken, reaching out directly to the small business owners who had made those feature requests, and then those business owners forced their accountants to adopt. Today, Intuit is generally recognized as the only party to “own” the accounting channel, but it got there via a totally radical approach that its competitors seem to have forgotten (which is probably why Intuit has had such firm footing for decades, despite legions of challengers).

Enterprise sales is undergoing the most radical shakeup since the turn of the century, and today’s radical experiments will be tomorrow’s best practices. However, let’s not forget that today’s best practices were once radical experiments themselves, and that many of today’s experiments are old ideas retried under new conditions. The world is a complex place; ideas come and go, and sometimes old ideas put down new roots in a different market with evolved technology.

If there’s one lesson that the old school continuously fails to teach, it is this: stop listening to the so-called (and self-styled) experts. Do your own research, talk to the original sources, and just go out and build your crazy thing. You’ll probably fail, maybe a lot. But eventually you won’t, and your insane ideas might someday become commonplace. The line between genius and stupidity is very, very faint and often only visible in the rear-view mirror.

Don’t be afraid of failure. Be afraid of not trying. Salesforce, Concur and Intuit weren’t, and now we can’t imagine a world without them.

Tuesday, February 26, 2013

Mobile Is About Doing One Thing Great, Not Just Being Mobile First

Editor’s note: Mrinal Desai is co-founder and CEO of addappt, developer of an iPhone address book app that your friends maintain. He was also the first business development manager at LinkedIn. Follow him on Twitter @mrinaldesai. 

Curly: Do you know what the secret of life is? [Holds up one finger.] This.
Mitch: Your finger?
Curly: One thing. Just one thing. You stick to that and the rest don’t mean shit.
Mitch: But, what is the “one thing?”
Curly: [Smiles] That’s what *you* have to find out.

For startups, a lot has been discussed about how to play the market: mobile first, web second. Many startups present themselves as mobile-first operations. Larger companies like Facebook are now saying they are mobile companies, which is not surprising since Facebook’s mobile DAUs surpassed its desktop web DAUs for the first time.

These discussions about whether to be mobile or web first mimic discussions that take place about companies in the retail space: Should we open stores in BRIC countries, Europe or Latin America? And should those stores look like the ones at home or be built locally to fit the culture?

But what exactly is it about the mobile market that changes the rules of the game and why might this be the best thing for startups?

Mind Share: Being First With A Differentiated Product

The size of the screen, the pattern of time availability and our “location” in the real world lead to a more single-task orientation than on the desktop web. One does one task at a time on the phone, and it tends to capture undivided attention for that short time span. Accordingly, the device aligns itself to bursts of one-task activities; there is less time (and room, if you consider the screen size) for distractions.

I want to get somewhere and I need directions, and I don’t want to get distracted with news or a game. It’s lunch time; I want to eat and want to know which restaurant to pick, so I pull up reviews, and I don’t want to get lost in my email at that time. If I am responding to an email, it is going to be short and I am not going to switch to another app until I am done (I might on the desktop). I have downtime waiting in line, so I cut ropes, slash fruits or crush pigs with my nimble fingers. I am in bed, and I want to catch my news quickly. It is all about making decisions quickly. Mind share enables that.

This has led to an inherent difference from the web: the “one-thing” mobile app ecosystem versus the “many-thing” web. You need look no further than the growth of the app economy, where four out of every five minutes on a phone are spent in an app. Or consider Apple’s recent announcement of 40 billion downloads, half of which came in 2012 alone.

On the web, on mobile or in any business, companies always work hard to differentiate with that one thing with which they want to capture the user’s imagination. When you capture imagination, you get attention; when you get attention, you get engagement, which leads to loyalty. Whether it is on mobile, web or consoles, everything else must be built and extended to protect that one thing. Gmail, Google+ and YouTube are all examples of Google protecting search; Facebook Camera and Poke are efforts to protect photo sharing on Facebook.

But what has changed with mobile is that no (large) company has been able to pull off a “fast follow” to unseat an incumbent startup that has mind share with that one differentiated thing. Some just chose to acquire the mobile startup instead: Twitter bought Tweetie and Zynga bought Words with Friends, for example.

This became evident with Facebook’s recent attempt to Poke a hole in Snapchat’s market share. Or Facebook pursuing Instagram; Twitter and Yahoo/Flickr adding filters to pursue Instagram users; Facebook (which killed Places a year after launch) and Yelp chasing Foursquare with check-ins; or Apple’s iMessage and the continuing growth of WhatsApp. With almost 263 million monthly active users, Rovio comes very close to the 311 million MAU of all of Zynga. Furthermore, we are now starting to see startups that had the first-mover advantage grow even further when a larger company tried to do the same “one thing.” The barrier to entry is that mind share.

It was and still is very different in the “many-thing,” non-mobile world. We saw Myspace come from behind and dethrone Friendster, and then Facebook come from behind to displace Myspace. Gmail followed fast and took on Hotmail and Yahoo Mail successfully. Internet Explorer decimated Netscape, and now Firefox and Chrome are giving IE a run for its money on the desktop. Microsoft persisted with Xbox and is a leading console maker now. Redbox, Hulu and Amazon Prime Video are nipping away successfully at Netflix.

Before, every startup raising capital was invariably asked the dreaded questions, “What if Microsoft or Google built this?” and more recently, “What if Facebook built this?” Now in mobile, if that is the question your startup is asked, you are in luck, because it implies that you are first to market with a differentiated product. And if a “fast-follow” attempt happens, it will probably validate your product further, highlight your effort and help you gather even more mind share, leading to further market share. Believe it or not, the best thing that can happen to a startup in the mobile space today is a big company copying it.

The first-to-market differentiation, that one thing, is of paramount importance, though. It was filters for Instagram, ephemerality of photos for Snapchat (the complete opposite of Instagram and Facebook), free cross-platform messaging for WhatsApp, check-ins for Foursquare and user experience for Tweetie. As Curly says, you have to find out that one thing first.

It is hard to build a consumer service, or any good business, anywhere and in any industry, but if you are first to get that one thing right on mobile, a larger company being interested in what you do might be the best thing that ever happens to your startup.

[Image: Jack Palance in City Slickers]

Monday, February 25, 2013

Cloud Latency Issues? Dedicated Network Connections Will Help

Editor’s note: Jelle Frank van der Zwet is Segment Marketing Manager, Cloud, for Interxion, and David Strom is a freelance writer. 

Do you ever encounter delays in loading web-based applications and get impatient? These delays result from network latency: the time it takes data to travel from one location to another. They are frustrating to end users, but far more costly to Internet businesses that depend on lightning-quick web experiences for their customers. With the proliferation of cloud computing placing added demands on Internet speed and connectivity, latency is becoming a more critical concern for everyone, from the end user to the enterprise.

Pre-Internet, latency was characterized by the number of router hops between you and your application, and the delay that packets took to get from source to destination. For the most part, your corporation owned all of the intervening routers, so network delays remained fairly consistent and predictable. As businesses migrate and deploy more and more applications to the cloud, the issue of latency is becoming increasingly complex. In order to leverage cloud computing as a true business enabler, it is critical that organizations learn how to manage and reduce latency.

The first step in reducing latency is to identify its causes. To do this, we must examine latency not as it relates to the web, but as it relates to the inherent components of cloud computing. After some analysis, you’ll see that it isn’t just poor latency in the cloud but the unpredictable nature of the various network connections between on-premise and cloud applications that can cause problems. We’ve provided below five layers of complexity that are unpredictable in nature and must therefore be considered when migrating applications to the cloud.

  • Distributed computing is one component adding to cloud latency’s complexity. With enterprise data centers a thing of the past, the nature of applications has completely changed from being contained within a local infrastructure to being distributed all over the world. The proliferation of Big Data applications using tools such as R and Hadoop is incentivizing distributed computing even more. The problem lies in the fact that these applications, deployed all over the world, have varying degrees of latency on each of their Internet connections. Furthermore, latencies are entirely dependent on Internet traffic, which waxes and wanes as applications compete for the same bandwidth and infrastructure.
  • Virtualization adds another layer of complexity to latency in the cloud. Gone are the days of rack-mounted servers; enterprises are building virtualized environments for consolidation and cost efficiency, and today’s data centers are now a complex web of hypervisors running dozens of virtual machines. Unfortunately, virtualized network infrastructure can introduce its own series of packet delays before the data even leaves the rack itself.
  • Another complexity layer lies in the lack of measurement tools for modern applications. While ping and traceroute can be used to test an Internet connection, modern applications have nothing to do with ICMP, the protocol behind those tracing tools. Instead, modern applications and networks use protocols such as HTTP and FTP, so their performance needs to be measured at that layer (see the sketch after this list).
  • Prioritizing traffic and Quality of Service (QoS) add another layer still. Pre-cloud, Service Level Agreements (SLAs) and QoS were created to prioritize traffic and to make sure that latency-sensitive applications would have the network resources to run properly. However, cloud and virtualized services make this a dated process, given that we now need to differentiate between an outage in a server, a network card, a piece of the storage infrastructure, or a security exploit. Different cloud applications have different tolerances for network latency, depending on their level of criticality; while an application handling back-office reporting may tolerate lower uptime, not all corporate processes can allow for downtime without a significant impact on the business. This makes it increasingly important for SLAs to prioritize particular applications based on performance and availability.
  • Evasive cloud providers, who sometimes neglect to inform you about where their cloud data centers are located, can also complicate latency. To really understand latency, you should know the answers to the following questions:
    • Are your VMs stored on different SANs or different hypervisors, for example?
    • Do you have any say in decisions that will impact your own latency?
    • How many router hops are in your cloud provider’s internal network and what bandwidth is used in their own infrastructure?
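
As a rough illustration of application-layer measurement, the sketch below times complete HTTP requests instead of ICMP echoes. The URLs are placeholders, and note that this captures end-to-end request time (DNS, TCP, TLS and server processing), not just the network round-trip that ping reports.

    # Measure application-layer latency by timing full HTTP requests,
    # rather than the ICMP echo round-trips that ping measures.
    # The endpoint URLs below are placeholders, not real services.
    import statistics
    import time

    import requests

    ENDPOINTS = [
        "https://example.com/",           # placeholder: on-premise app
        "https://example-cloud.net/api",  # placeholder: cloud-hosted app
    ]

    def http_latency_ms(url, samples=5):
        """Return the median wall-clock time (ms) for a full GET request."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            requests.get(url, timeout=10)
            timings.append((time.perf_counter() - start) * 1000)
        return statistics.median(timings)

    for url in ENDPOINTS:
        print(f"{url}: {http_latency_ms(url):.1f} ms median")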

Despite latency’s complexity, it provides a great opportunity for innovative cloud-based solutions, such as the Radar benchmarking tool from Cedexis, which provides insight into what goes on across various IaaS providers, as well as tools like Gomez, which are helpful in comparing providers. While tools are of course helpful in providing insight on trends, the overarching solution to measuring and mitigating cloud latency is providing more consistent network connections.

The best available option is a dedicated connection to a public cloud platform. Amazon’s Direct Connect is the best that we’ve seen in providing more predictable metrics for bandwidth and latency. [Disclosure: Interxion recently announced a direct connection to the AWS platform in each of its data centers.] Another viable option is Windows Azure; both products are particularly useful for companies looking to build hybrid solutions, as they allow some data to be stored on premises while other solution components are migrated to the cloud.

Finally, by colocating within a third-party data center, companies can rest assured that their cloud applications are equipped to handle all of latency’s challenges and reap extra benefits in terms of monitoring, troubleshooting, support, and cost. Colocation facilities that offer specific Cloud Hubs can provide excellent connectivity and cross-connections with cloud providers, exchanges and carriers that improve performance and reduce latency to end users. Furthermore, colocation data centers ensure that companies not only have the best coverage for their business, but also a premium network at their fingertips.

In this connected, always-on world, users increasingly demand immediate results for optimal website and application performance. For businesses looking to boost ROI and maintain customer satisfaction, every millisecond counts. While several dimensions and complicating factors of latency can introduce a number of disturbances for users and providers of cloud services, having dedicated network connections can help avoid these pitfalls and achieve optimal cloud performance.

Sunday, February 24, 2013

San Quentin Prison Demo Day Gives Entrepreneurs Behind Bars A Second Chance

Barbed wire and armed guards aren’t your typical intro to a startup pitch event. But today, San Quentin Prison hosted The Last Mile demo day featuring presentations by seven inmates. The Last Mile hopes that through entrepreneurship, it can prepare convicts for employment and reduce recidivism. Considering these founders have never used the Internet or an app, their business plans were remarkable.

“It makes me feel like I’m already contributing to society,” said Crisfino Kenyatta Leal, one of the first inmates to go through The Last Mile, who presented at its first demo day in 2012. The program was set up by accelerator KickLabs and funded by its founders Chris Redlitz of AdAuction and Beverly Parenti of First Virtual Holdings, two successful ’90s tech companies. Investors, entrepreneurs, and authors like First Round Capital’s Josh Kopelman, MC Hammer, and AllTop’s Guy Kawasaki come in to mentor the inmates. Though these captive founders aren’t looking for funding now, many hope to launch their businesses once they earn their freedom.

Cell Block Startups

“IDs!” the guard at the first checkpoint yelled. He yelled a lot of things, leading me to assume working at a prison is a great job if you enjoy raising your voice. As I got escorted across San Quentin’s scenic bay-side compound, I was assured the inmates do not have an ocean view. My passport was triple-checked and I got shuffled through the iron gate airlock into the prison itself.

An all-prisoner jazz band swung through a few jams in the on-site church before Redlitz gave an opening speech, explaining that Last Mile participants have to show years of dedication to their education before being admitted to the program. After a quick inspirational rap song from two more inmates, the first of the founders took the stage.

What was immediately clear was how much these men had practiced. When Chris Schuhmacher presented his plan for Fitness Monkey, a startup aimed at helping drug abusers replace substances with “a healthy addiction to fitness,” he looked more polished than many Y Combinator companies I’ve seen demo. Schuhmacher cited statistics, framed his market, walked through the app he envisioned, and rattled off one-liners like “Fitness Monkey is the product of my life, and my life sentence.” Had he delivered it elsewhere, you probably wouldn’t have guessed Chris was convicted of second-degree murder.

Eddie Griffin, in San Quentin for cocaine possession, pitched an app called “At The Club” that lets jazz music fans stream concerts. Art Felt Productions from second-degree murderer Tommy Winfrey gives prisoners a way to sell their art on sites like eBay and Etsy despite being banned from the Internet. Heracio Harts, serving 10 years for manslaughter, devised a plan to fight urban obesity by turning abandoned homes into urban centers that host farmers markets and exercise spaces.

Occasionally, the fact that the inmates could only read about new technologies but not actually use them detracted from their startup plans. Several planned to use QR codes, which seem like a good idea but which no one actually scans. Darnell Hill, who is serving 10 years to life for robbery and kidnapping, laid out a plan for a mobile app to treat victims of post-traumatic stress disorder. He was betting QR code wristbands could drive downloads.

Several of the ideas centered around keeping people out of jail. The inmates seemed eager to prevent others from sharing their fate.

Schuhmacher explained that drugs and alcohol play into the crimes of over 80 percent of people incarcerated in the U.S. He hoped to shift addicts onto a “runner’s high” to keep them on the straight and narrow. Larry Histon, serving 29 years to life for first-degree murder, proposed a company called TechSage, which would turn newly released ex-cons into mobile app developers so they’d have jobs and stay out of prison. Histon granted that “the world has changed in 18 years” since he was sentenced. A former IT professional, Histon tries to stay current by getting his hands on tech magazines like Computerworld and InformationWeek, which inmates are allowed to have.

Innovating Without The Internet

More than the impressive quality of the pitches, it was the fact that prisoners are not allowed to use the Internet directly that was most surprising. That the U.S. spends $50,000 a year to incarcerate a person but won’t give inmates access to a learning tool with infinite potential was downright depressing. The web could give convicts a chance to learn skills that could earn them jobs once they’re out. Last Mile co-founder Beverly Parenti lamented that “20 years in prison cost $1 million, but inmates are released with no training.” That leads many to wind up back in jail. “It’s a really bad investment,” Parenti tells me.

Despite the barriers, San Quentin’s inmates do their best to participate in the digital world. They’re allowed to fill out “tweet sheets,” which volunteers then type in to let the prisoners use Twitter. They love to read printouts or paperback copies of popular ebooks like Brian Solis’ “The End Of Business As Usual.” The Last Mile even set up a program to get inmates on Quora, which has spawned some of the knowledge base’s most fascinating content. Winfrey’s heartbreaking answer to “What does it feel like to murder someone?” has received almost 2,500 upvotes. But he’s never surfed the site; he’s required to submit answers on paper that someone else enters.

If inmates were allowed to use the Internet, they’d obviously need to be monitored, and perhaps barred from direct communication. Still, just the ability to passively browse certain parts of the web could make prisoners feel less isolated, and more ready to eventually rejoin the world. At least one of the convicts who presented today, though, came up with a brilliant business plan without the web.

Jorge Heredia started his presentation by saying, “I’m here to tell you even produce deserves a second chance.” Before going to prison for attempted murder, Heredia discovered that tons of fresh produce that is perfectly good to eat gets thrown away because it doesn’t look good enough to sit on a grocery shelf. He tells me about his first experiment in the business: “I bought $375 of onions that would have been thrown away. I packed them in my truck, went door-to-door selling them to local restaurants, and came back with $1,500.”

After 15 years behind bars, he wants to return to selling #2 produce with a company he calls The Funky Onion. It plans to buy bruised fruits and vegetables for cheap and sell them to businesses that don’t care about their appearance, since the produce will be diced or canned anyway. Heredia also has a plan for a mobile app that educates people about the nutritional value and low prices of #2 produce, plus shows them where to buy it. The app even has a great viral hook: It encourages users to share photos of the “funkiest looking produce” they find.

As the presentations ended and the jazz band kicked up again, it was beautiful to see the founders getting patted on the back in congratulations by their fellow inmates. There was a distinct vibe of “at least you’re going to make it out of here”.

The Last Mile plans to eventually expand to other prisons beyond San Quentin and inspire similar entrepreneurship programs across the country. They have the potential to not only get prisoners back on track to being productive members of the community, but also to inject fresh ideas into technology. “Their ideas haven’t been corrupted by others’ approaches. In here your mind can just roam,” says Tulio Cardozo. After seven years in prison, he joined up with KickLabs upon release to mentor Last Mile participants while building his own program called Collaborative Benefit, which is like a LinkedIn for former inmates.

The tech world is supposed to embrace failure as long as it’s followed by hard work. Collaborative Benefit and The Last Mile hope that we can see past what prisoners have done in the past and give them a chance to redeem themselves. From the front of the stage, in his blue prison jumpsuit, Leal urged, “If you treat a man as he can and should be, he’ll become what he can and should be.”

[Image Credit: AP]

Saturday, February 23, 2013

RedBus Continues To Dominate In India, But That’s Not What Makes Them Special

As I start my trip in India along with Dave McClure’s “Geeks On A Plane,” I started reading about all of the startups in India that are held up as proof that the country is making inroads and is relevant. It’s a story I’m very familiar with in emerging markets: a group of VCs and entrepreneurs who want to prove that their country is worthy of time, effort and, more importantly, investment.

One of the companies I had heard quite a bit about is RedBus.in, a service that has standardized and centralized India’s bus system. Having been to India before, I kind of laughed at the notion of that being possible, even after reading Sarah Lacy’s fantastic piece about the company from two years ago. This was supposed to be one of the “ones”: the company that makes the investment world pilgrimage to Bangalore, India, which is about 18 hours’ worth of flights away from Silicon Valley.

I visited the RedBus offices, which don’t seem to have changed much since Lacy described them, but I did notice something new. (I spent time in India two years ago, so trust me on this.) It was a sign that happily announced that RedBus was hiring. It doesn’t sound like much, maybe just a sign that the company is growing, but it was my first signal that something very special was happening in these offices.

Our entire “tour” packed into RedBus’ conference room, and once we were introduced to the team it didn’t feel like we were in India. We were in a successful startup’s office and they were about to matter-of-factly explain to us why and how they’ve disrupted a system in a country that had no business being disrupted.

All Aboard

Buses in India are a lifeline, along with manual and motorized rickshaws. There’s not much joyriding happening in this country; it’s very much a “point A to point B” proposition. When I say that, I mean that a group of friends aren’t going out to a club. Someone is going to the market to bring food back for their family. When it comes to travelling outside the city in which residents live, the bus is the only option.

There are thousands of buses. They’re cheap compared to trains and flights, and people will take a 23-hour bus ride from one side of the country to the other without batting an eyelash. Yes, 23 hours, and I just complained about 18 hours worth of flights from one side of the world to the other.

If someone were to tell me I had to sit on a bus, even with a few rest stops, for 23 hours, I’d probably have something that looked and felt like a panic attack. Throw in the fact that I wouldn’t know where to get the bus, whether the bus would be on time or show up at all or how much I’d have to pay for it.

Yes, it’s a stressful situation, yet India’s billion residents do it daily. In India, you can’t pull up an app and have a nice comfy car come pick you up and take you somewhere for a rate that won’t make you poor for a month or more.

India’s bus system is…India

Everything you’ve ever heard about India from trustworthy people is probably true. It’s a country of hustlers, trying to pay rent, put food on the table and make a good life for themselves and the people they care about. In the past, purchasing one of these bus tickets was like walking into Burger King, ordering a hamburger and then being charged whatever the cashier felt like charging you. If you looked hungry and desperate, that burger might cost you 10 bucks. That couldn’t happen in the United States, yet it happens in India every single day.

That’s the situation that RedBus saw as an opportunity. Without any standardization of pricing or centralization of routes, fares and information about the bus fleets, India’s bus system ran like the rest of the country tends to: in complete chaos.

Sure, there was a massive opportunity to make money and control the show here, but I found that RedBus has other motives that make it a truly special company, whether it’s based in Mountain View or Mumbai.

A Googler’s touch

We were given a basic demo of RedBus and shown some pretty interesting statistics on how far the company has come since launching six years ago. But the meeting wasn’t led by its CEO. It was led by its chief product officer, Alok Goel. Goel spent roughly three years at Google in India, focusing on geo and local, and became head of mobile search and products after less than two years there. The most interesting part is that Goel joined RedBus only in October of last year.

As soon as he started speaking, I turned to my travel pal Sean Percival of Wittlebee and said, “He clearly has a Googlemind.” A Googlemind is something I’ve noticed in current and former Googlers: whether they have $1 or $1 billion, they attack a problem as if they had all of the world’s resources at their fingertips. That type of moonshot thinking is what propels a startup into a world-class business.

Not only does RedBus want to be the only trustworthy source for purchasing bus tickets in India, it wants to be seen as a company that does “no evil” and truly cares about the people who use their service. Yes, Goel has that empathetic DNA that I’ve written about before. He knows that if he makes the purchasing experience a good one on the first try, then that person will be a customer for life. A customer for life is exactly why RedBus is dominating India’s tech scene, as well as making itself valuable globally, from a learning perspective.

It’s not all gravy, though, as India’s government and policy environment are far stricter than what we see in the States. At any moment, it feels like someone could step in and disrupt the disruptor, but it hasn’t happened yet.

Just two questions

Anyone could see that RedBus is successful, but that’s not why the company is intriguing. I asked Goel just two questions, the first one being, “Did bus fleets always accept an SMS ticket receipt as payment?”, to which he answered, “No, up until last year, most bus drivers required a printout of a ticket, which is difficult for most people to do.” With only 120 million desktop Internet users in the entire country, that’s a big ask. Even I don’t like to print things out.

I pressed with the obvious follow-up: “So, you basically forced bus drivers to accept SMS because it was the best experience for your users,” to which he answered, “Yes.”

That fire, confidence and drive to do what’s right for both your company and your users is a hallmark of every successful company. Goel and his team know that unless people feel good while using the service, they won’t be a return customer, and if they’re not a return customer, then the old and broken bus system will win. That’s disruption.

After a few more questions from the group, Goel showed us some of RedBus’ new features, which include photos and panoramic shots of all the major bus stops that its users rely on. The idea, Goel says, was to make sure that you know exactly where you are supposed to be, along with when you should be there. That type of empathy goes a long way, and the approach sounds very Googly to me, especially with its Street View-esque pictures and Google Maps integration. Tie that in with community review and ratings functionality for over 7,000 available buses, and you’ve got the beginnings of a complete solution.

I also had the opportunity to take a look at something that hasn’t been launched yet, something that I’m looking forward to giving a test-run, and it’s probably something that could grow the company’s revenue 5X easily. Why hasn’t it been released yet? It simply wasn’t time, Goel said.

My second question was “How many times has Google tried to acquire you?” to which Goel answered, “They’re not in this market yet,” and smiled. He didn’t answer my question, but it’s clear that RedBus is building the type of company that has used a brilliant approach to infrastructure to completely change the way an entire country operates and gets from place to place.

Again, this isn’t a perfect company, or market. No company or market is, and it seems like there’s quite a bit of work to do in the mobile space, but in a country that is slow to adopt and adapt, RedBus has taken the lead.

How often do you find a company that can do that, no matter where it’s based? Not often. Maybe it’s time to call a win a win, no matter where a team is based or the market it chooses to attack. When asked, Goel stated that with what RedBus has built, the Indian bus industry wouldn’t be able to survive without it now, even if it tried.

I think if a country like India can stop worrying about being like Silicon Valley and find its true self, there could be a new RedBus every other week. It’s moonshot thinking, of course, but that’s what it takes. The real story isn’t that RedBus is dominating a market in India. It’s how.


Friday, February 22, 2013

What Games Are: Should Sony Move Beyond PlayStation?

Editor’s note: Tadhg Kelly is a veteran game designer, creator of leading game design blog What Games Are and creative director of Jawfish Games. You can follow him on Twitter here.

So then. February 20th. New York City. Big news. That, it seems, is what a viral video aired by Sony this week promises. Slowly we saw the signature four shapes of the PlayStation controller slide into view while spark trails coursed over their frames. The intended message (presumably): cometh the hour, cometh the PlayStation 4.

Sony has struggled in the PS3 years. The company has sold 77 million machines at last count, but over a very long period (Apple has sold over 500 million iOS devices in a shorter timeframe). It suffered a humiliating debacle with the 2011 PlayStation Network hack. It has attempted (and largely failed) to get into the Wii/Kinect space with the Move controller, and it updated the PSP line with a new handheld, the already-a-dud PS Vita. Layoffs, losses and difficulty have been its watchwords.

Yet the company also marshalled considerable effort behind key franchises like the Uncharted series and Journey. In many ways Sony has been trying to rebuild innovation credibility after a period of perceived arrogance at the height of the PS2 era. The Move, for example, is a great gestural controller and is useful beyond the screen, such as in the indie game Johann Sebastian Joust. There have also been initiatives to deliver more App-Store-like distribution for games and to support unusual titles like Book of Spells.

Despite all of that, the PlayStation still has a perception problem. What I’m wondering is whether the PlayStation brand is one that Sony can ever fully rehabilitate, or whether the company would be better off in the long run by ditching it.

Gaming Epochs

Video game journalists have always interpreted the industry in the grand narratives of generations, console wars, legacies and heritage. To them the industry has a touch of the saga, and its cycles become vaguely mythic. Fans support their favourite platforms with the fervour of religions. There are stories of rises and falls, of felled heroes and platforms that should have been. Journalists and fans generally consider, for example, that this is the seventh age of gaming. (And yes, it does all sound a little bit Tolkienesque on occasion.)

As a part of that view, the sense that no power on Earth can sustain an idea whose time has gone is pervasive. While some systems have managed to successfully sequel a predecessor (the SNES, the Xbox 360, the PS2), once a platform brand is judged to be over, then it’s over. And often that means the company behind that platform goes to Hades. Sega never really got over the Genesis years. Nor did Commodore with the Amiga.

The only platform holder that regularly manages to buck that trend is Nintendo, primarily because Nintendo often focuses a new brand on a new control innovation. The Nintendo DS is defined by its dual screens, and the Nintendo 64 similarly by its analog stick controller. So, in the epochal view, Nintendo thrives because it moves with the ages where Sega did not. The question for Sony is whether dumping PlayStation would let it reinvent itself like Nintendo, or send it the way of Sega into hardware oblivion.

Too Much Heritage?

Privately, the PS4 is seen by many in the industry as something of a last-chance saloon. While Sony was once the king under the mountain of the console industry, various dragons have stolen its throne. Chief among them is Microsoft, and the press narrative has it that the signature moment of implosion for Sony was the so-called “Giant Enemy Crab” conference at E3 2006. That’s the moment when Sony went from first to last in their eyes.

Since then PlayStation has struggled to seem relevant because of its legacy. PlayStation carries a weight of other games with it, from Wipeout and Tekken to the Gran Turismo, Metal Gear Solid and Final Fantasy series. As each iteration of the platform appears, each game is brought along for the ride, and so the catalog immediately looks stale. It’s difficult to get journalists excited by yet another iteration of a game that they have already played to death, on a platform that they view as undying rather than living.

Those factors automatically sour any announcement that Sony could make. The biggest challenge facing Kaz Hirai (CEO of Sony) is how to take a broken-down brand with all of its trappings and make it new again. He only has to look at how Microsoft is mishandling Windows 8 to see that holding onto the past too much is sometimes a recipe for innovating halfway. Rejuvenation can be done, but doing it implies making a definitive break and being willing to consign the past to the past.

RejuveStation

Steve Jobs revived a brand when re-founding Mac OS as OS X. Part of the advantage of working in a media landscape that thinks epochally is that that instinct can be played up to, and a marketing storyteller like Jobs knew this. He realised early that half the battle is about capturing and distorting reality into a myth, which turned journalists from inquisitors into evangelists. Sony used to be good at this, too, but lost its aura. There is always the chance to get it back by leaping forward, though.

If Sony rocks up on stage in three weeks’ time and announces a PS4 (or “PlayStation Next,” “PlayStation X” etc.) with much the same stylings as PS3, it will be greeted with a sense of fatigue. All interested eyes will turn to see what Microsoft is doing with Xbox, and then probably back to the much more interesting story of Ouya, Steamboxes, Gamestick and the emerging microconsole story. On the other hand, if Hirai appears and announces something new? If he shows off a new way to control games, a new platform brand and tells a new story? That could be the real turning point for which Sony has searched.

A digital-only console would be an interesting start, as would a console whose every unit can be turned into a development kit. An app store that everyone can submit games to would send the signal that all bets are off. To be the company that finally brings the console out of the 20th century mould and changes what we think when we think “console” into something other than “expensive dumb game box with pretension of being a media centre”? Who knows. Or maybe he’ll just do what we expect and announce a PS4.

Here’s hoping not.

Thursday, February 21, 2013

EE, Three, BT, Vodafone, O2 All Win 4G Spectrum In The UK, But £2.3B In Bids Falls Short Of The £3.5B Expected

Ofcom, the UK communications regulator, has just announced the winners of the UK auction of 4G spectrum in the low-frequency 800MHz and higher-frequency 2.6GHz bands, used for LTE and other mobile broadband services. The list is an attempt at playing fair: it includes fixed-line incumbent BT; major mobile carriers Vodafone, Telefonica/O2 and EE; as well as smaller mobile upstart Three.

Meanwhile, two applicants, MLL Telecom and HKT (UK) Company Limited (a subsidiary of Hong Kong’s PCCW), were unsuccessful in their bids, Ofcom said.

In all, some £2.3 billion ($3.6 billion) was bid by the winning carriers, and the first services should arrive within six months. The total fell short of the £3.5 billion that the UK government thought it might raise, and is an even bigger drop from the £22.5 billion raised by the 3G auction a decade ago.

“Despite all the noise being made about the UK’s 4G auction, what you can’t hear is the sound of champagne corks popping over at the Treasury as Ofcom’s 4G auction fails to raise George Osborne’s optimistic expectation of £3.5 billion coming in at £2.34 billion,” noted Matthew Howett, telecoms regulation analyst at Ovum. “For the mobile operators there must be widespread relief that the amount paid is a mere fraction of the £22.5bn they were asked to cough up during the 3G licencing process.”

Still, this is a major auction in terms of value (or rather, money saved by carriers, perhaps) as well as capacity, and the potential services on that capacity. Ofcom says that it went through more than 50 rounds of bidding, and that a total of 250 MHz of spectrum was auctioned in two separate bands, 800 MHz and 2.6 GHz – “equivalent to two-thirds of the radio frequencies currently used by wireless devices such as tablets, smartphones and laptops,” Ofcom says.

For a country with one of the highest smartphone penetrations in the developed world (61% according to Kantar Worldpanel), the UK has been running behind others like the U.S. when it comes to rolling out superfast broadband services, specifically on LTE. Having finally set the auction date back in July 2012 after many delays, and having now completed the bidding, Ofcom believes it is getting off on the right foot to change that.

“This is a positive outcome for competition in the UK, which will lead to faster and more widespread mobile broadband, and substantial benefits for consumers and businesses across the country. We are confident that the UK will be among the most competitive markets in the world for 4G services,” said Ed Richards, Ofcom Chief Executive, in a statement.

Still, you can also argue that consumer appetite in the UK for the services is nascent – or at least waiting for more price competition in the area of LTE. EE, the UK joint venture of T-Mobile parent Deutsche Telekom and France Telecom/Orange, launched the first UK LTE network last year on unused spectrum in another band. Yesterday, the carrier revealed that it had 201,000 net adds in the last quarter.

As a bit of background on the two spectrum areas: the lower-frequency 800 MHz band was part of the ‘digital dividend’ freed up when analogue terrestrial TV was switched off. It’s good for long-distance coverage over big areas, such as the UK’s rural expanses and smaller towns and villages. The higher-frequency 2.6 GHz band is better for short distances and fast speeds, ideal in urban environments. Ofcom says that in all, coverage should extend to 98% of the UK’s population indoors, and 99% outdoors. By 2017 “at the latest,” coverage should also extend to at least 95% of each of the UK’s individual nations – England, Scotland, Wales and Northern Ireland.
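To make the band trade-off concrete, a back-of-envelope free-space path-loss calculation (the standard textbook formula, not anything from Ofcom’s materials) shows why the lower band carries farther:

```python
# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
# Standard textbook formula; illustrative only, not from Ofcom's release.
from math import log10

def fspl_db(distance_km, freq_mhz):
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

for freq_mhz in (800, 2600):
    print(f"{freq_mhz} MHz over 10 km: {fspl_db(10, freq_mhz):.1f} dB")
# 800 MHz over 10 km: 110.5 dB
# 2600 MHz over 10 km: 120.7 dB
```

All else being equal, the 2.6 GHz signal arrives roughly 10 dB weaker over the same distance, which is why it suits dense urban cells while 800 MHz suits rural coverage.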

Ofcom is careful not to be prescriptive when it comes to exactly what technology winning bidders will implement on their spectrum. While mobile carriers like EE, Three, Vodafone and O2 are likely to use it for LTE, BT – which backed out of being a mobile network owner when it spun off its Cellnet network as O2 (now owned by Spanish incumbent Telefonica) – may end up using it for other services like building out its existing WiFi network, as well as wholesale services.

Ofcom had originally set aside a special tranche of spectrum for one bidder to provide mobile broadband service specifically for indoor reception, and the winner of that lot, it said, was Telefonica/O2.

Here is how the spectrum awards break down, according to Ofcom:

[Table: Ofcom 4G spectrum awards]

Release below.

After more than 50 rounds of bidding, Everything Everywhere Ltd, Hutchison 3G UK Ltd, Niche Spectrum Ventures Ltd (a subsidiary of BT Group plc), Telefónica UK Ltd and Vodafone Ltd have all won spectrum. This is suitable for rolling out new superfast mobile broadband services to consumers and to small and large businesses across the UK.

The auction has achieved Ofcom’s purpose of promoting strong competition in the 4G mobile market. This is expected to lead to faster mobile broadband speeds, lower prices, greater innovation, new investment and better coverage. Almost the whole UK population will be able to receive 4G mobile services by the end of 2017 at the latest.

A total of 250 MHz of spectrum was auctioned in two separate bands – 800 MHz and 2.6 GHz. This is equivalent to two-thirds of the radio frequencies currently used by wireless devices such as tablets, smartphones and laptops.

The lower-frequency 800 MHz band is part of the ‘digital dividend’ freed up when analogue terrestrial TV was switched off, and is ideal for widespread mobile coverage. The higher-frequency 2.6 GHz band is ideal for delivering the capacity needed for faster speeds. The availability of the two will allow 4G networks to achieve widespread coverage as well as offering capacity to cope with significant demand in urban centres.

Ed Richards, Ofcom Chief Executive, said: “This is a positive outcome for competition in the UK, which will lead to faster and more widespread mobile broadband, and substantial benefits for consumers and businesses across the country. We are confident that the UK will be among the most competitive markets in the world for 4G services.

“4G coverage will extend far beyond that of existing 3G services, covering 98% of the UK population indoors – and even more when outdoors – which is good news for parts of the country currently underserved by mobile broadband.

“We also want consumers to be well informed about 4G, so we will be conducting research at the end of this year to show who is deploying services, in which areas and at what speeds. This will help consumers and businesses to choose their most suitable provider.”

Widespread 4G coverage

Ofcom has attached a coverage obligation to one of the 800 MHz lots of spectrum. The winner of this lot is Telefónica UK Ltd. This operator is obliged to provide a mobile broadband service for indoor reception to at least 98% of the UK population (expected to cover at least 99% when outdoors) and at least 95% of the population of each of the UK nations – England, Northern Ireland, Scotland and Wales – by the end of 2017 at the latest.



Wednesday, February 20, 2013

Want Better Personal Video? Think Underwater Tech And Free Cloud Storage

Editor’s note: Michael Chang is CEO of YesVideo, a video-transfer and sharing service. He was previously CEO and co-founder of Greystripe, a mobile ad network acquired in 2011 by ValueClick. Follow him on Twitter @michaelmchang.

The world of digital video has undergone a massive transformation in the past few years, thanks to the proliferation of smartphones and innovation in the field. As a result, personal video content creation has exploded; according to the NPD Group, we started 2012 with phones accounting for 25 percent of all photos and videos captured. That’s huge, but I expect that 2013 will see even more accelerated growth.

I see three huge trends in the world of personal video today, and they have one common theme: answering the call of the consumer. Companies like Apple and Google are looking to consumers to see what they want and building products and platforms that meet their needs.

Here’s how I think personal video tech is being affected by consumer demand and what the industry can do to meet these demands.

1. The Next Hardware Revolution Will Happen Underwater

How do you make a successful video camera in the age of smartphones? Go where phones can’t go: underwater.

That’s precisely what GoPro has focused on, with its set of video cameras built for the adventurer in all of us. And the device has proven its worth: one camera survived seven months in the Atlantic Ocean and was returned to its owner with video intact. Consumers and media saw the value of the product, and GoPro’s value proposition was clear – so clear, in fact, that the company recently took a $200 million round of funding from Foxconn, valuing it at $2.25 billion.

How long will it be until smartphone makers take a cue from GoPro and create waterproof cameras with better low-light settings? Not long. I predict that Samsung, Sony and other legacy video hardware companies will try to acquire GoPro in 2013, which will prove pricey given Foxconn’s capital boost.

2. Video Cloud Storage Will Become Free

Consumers today have a real and urgent need for video-centric cloud storage, and I expect that cloud services will see a boom of video material in the coming year and react accordingly. Right now the major players in cloud storage – Google, Apple, Microsoft, and Amazon – provide deeper integration with their own services and devices. Dropbox and Box are the most device-agnostic. Yet none of these cloud services has been designed specifically for video storage.

Last year Google dropped the pricing of Google Cloud Storage twice – and both drops occurred in the same week. How will Google’s move affect the strategy of other big players in this highly competitive space? The Google threat to other players is significant because of Google’s massive and highly profitable search revenue, which can fund a long-term ‘subsidy’ intended to drive consumers toward its solution. Google is best positioned at this point in time to fund a completely “free” play in the space that results in more customer engagement on its properties and further acquisition of customer data, both of which feed back into its advertising machine. To put the scale of its search business in perspective: a new $1 billion business started within Google this year would add only about 2 percent to annual revenues, which run at roughly $50 billion.

In terms of using cloud storage as a loss leader, Amazon too has very significant alternative revenue streams, but the margins are slimmer than in search, and it’s not as clear that consumer storage is a related, linear play vis-à-vis its offline e-commerce business. That said, the longer-term future of Amazon’s e-commerce shifts toward digital goods and Kindle-related products, and a popular consumer-facing storage business with a sophisticated application/content layer fits right into that strategy. Amazon already has a large and growing connected-device customer base, and may look to purchase Dropbox in order to quickly leap ahead in the consumer storage category – particularly around media (photos, videos, music).

With the core cloud storage business becoming commoditized and prices dropping, cloud providers are in danger of becoming “dumb clouds” à la the telcos that over time became “dumb pipes.” The margins in core cloud storage will continue to decline, much as they did for the carriers serving broadband into the home. Those carriers have been forced to play nicely with Apple while losing revenue on services they would normally have provided but can’t, because of Apple’s disruptive lead with consumers and its stringent business models. Theirs is still a business that competes on price; they were never able to move upstream or laterally to add more value.

The same will occur with cloud storage providers. But Dropbox and Box are executing on a great strategy of becoming not just storage, but platforms for data to be shared between the application layer and users. Once they are the default platform (think of the OS on a PC), more and more apps plug in, and then customers (or the ‘OEMs’ of our time) are willing to pay – think of an “OS in the cloud” paradigm. The other play is to actually provide apps for specific categories, but the challenge there is finding customers willing to pay for one-off apps. Bottom line: to be stuck in the middle is to be doomed.

3. Consumers Become Prosumers

It’s a tough economy, and entertainment companies are feeling the pinch just like other industries. This year, Hollywood, the music world and sports leagues have all taken steps to encourage prosumers to create their own content in an effort to connect with fans. In this case, the prosumers will be die-hard fans equipped with smartphones and cameras.

Cheap cloud storage, slick tools offered in high-tech sandboxes, and the desire to capture consumer attention are all driving a wave of fan-edit/co-creation opportunities. The pros are finally allowing consumers to lean forward and take part in their content. Just look at the recent moves of Netflix, Amazon Prime Instant Video, Hulu, and YouTube. These companies are now offering interactive contests, judging opportunities and the use of professional studios to give consumers a chance to be a part of creating content alongside celebrities.

Facebook, Twitter and Google+ create an added bonus for these companies, since prosumers will likely share their work with family and friends.

In 2013, we will see players in the video industry continue to answer the call of the consumer, and do all they can to make personal video easy to make and take – from the cloud to the ocean. What trends do you see happening in the personal video arena?



What Games Are: Why The Xbox’s $5 Problem Is Great For OUYA

Editor’s note: Tadhg Kelly is a veteran game designer, creator of leading game design blog What Games Are and creative director of Jawfish Games. You can follow him on Twitter here.

Game developers, publishers and platform holders regularly argue the toss on the appropriate pricing of games. They struggle with questions of whether being too expensive cuts off oxygen to new players, or whether being too cheap means they become devalued. In a free-to-play era especially, this question grows even more complicated. Those kinds of games require certain compromises in game design, and those compromises do not work for all kinds of game.

The results of that argument play out differently over various platforms. Facebook games, for example, are almost entirely free-to-play, while console games are retailed at comparatively high prices. Steam makes most of its bones in holiday sales when games are often 75 percent off, while iOS advocates endlessly debate the virtue of $0.99 or $1.99, or the occasional outlier like Minecraft at $6.99. Each breeds its own tensions, but – for me at least – it seems that the optimum price for non-free-to-play digital games is around $5.

I base this on a study that development studio 2DBoy ran a few years ago. The studio sold copies of its hit game World of Goo using a flexible pricing model: you could buy the game at whatever price you liked. The developers then compiled and released all of the sales data. The results were interesting, showing that of all the price points users chose, $5 seemed to work best in terms of number of downloads versus pay-off.
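The “downloads versus pay-off” arithmetic is simple enough to sketch. With invented numbers standing in for 2DBoy’s data (the real dataset isn’t reproduced here), revenue at each chosen price point is just the price times the number of buyers who picked it:

```python
# Hypothetical pay-what-you-want data: chosen price -> number of buyers.
# Invented figures, not 2DBoy's actual World of Goo results.
chosen_prices = {0.01: 20000, 1: 12000, 2: 9000, 5: 6000, 10: 1500, 20: 300}

revenue = {price: price * buyers for price, buyers in chosen_prices.items()}
best = max(revenue, key=revenue.get)

print(revenue)
print("best price point:", best)  # 5 -- cheap enough to keep volume,
                                  # expensive enough to add up
```

Lower prices pull in more buyers, but in this toy dataset the volume doesn’t grow fast enough below $5 to make up for the smaller ticket.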

Informally, I’ve seen the same magic number work wonders across the board. Steam sales often encourage mass purchases of games at or around the $5 mark (ask any PC gamer just how many impulse purchases they have made at this level and not yet played). Similarly, second-hand retailing of games often serves as a kind of library for gamers, where they essentially maintain a deposit with a store by buying a game, then trade in and top up with a few dollars each time to get the next game, and the next one.

The reason is that $5 is cheap enough to take a punt on a game without going through the rigmarole of trying a demo or pirating it, and it especially makes sense for kids. Kids operate on a pocket-money budget most of the time, which means that they swap, loan, copy and otherwise get access to many more games than they can reasonably afford. This kind of activity is how they play a lot, form their tastes and go on to become valuable adult customers down the road.

That’s why the recent news broken by Edge (Disclosure: I write a column for Edge magazine) about the next Xbox being always-online and locking games to accounts is so significant. The notion that a game that you buy soul-binds itself to your account is not new. Your iPad already does exactly this, for example, as does your Steam account. Once you buy a copy of XCOM Enemy Unknown or Temple Run 2, you can’t pass it on.

This, largely, is the thinking behind locking games on console, whether sold on disc or digitally. If you buy your Halo 5 in Gamestop and insert it into your next Xbox, the thinking goes, it will only ever work for you. So, no second sales or borrows or swaps.

In principle this makes publishers very happy because they resent the grey markets of piracy and second-hand retail with a passion. Likewise, everyone assumes that the entire console business is going digital anyway, and with the death of media retailers like HMV it’s only a matter of time. The combination of the two seems to promise a future where all console games will be sold at high prices, all the time, and everybody will make money.

Microsoft’s $5 Problem

Maybe, but at the expense of younger players. During my formative years I was an inveterate pirate. I got my start with a Sinclair Spectrum and games recorded on cassette tape. In the schoolyard I would swap, copy and even compile mix-tape compilations of games on larger cassettes, writing start numbers on the inlays. I would also buy games by the truckload. As a result, I played a lot of games.

Whether you came up in the console era and swapped cartridges like a maniac at lunchtime, copied floppy disks at home and prayed they would work, tried lots of free-to-play games on Facebook, or bought lots of cheap apps, chances are that you’ve played many cheap or free games. It’s a part of growing up that you have way more time than money. You bore easily, but also become highly passionate toward the things that do not bore you, and you learn like crazy. You become invested, perhaps in a few games or in the whole culture of being a gamer, and this turns you into a lifelong fan of that platform.

Unless, of course, you can’t afford to. Then you just go elsewhere.

The problem as I see it for the next Xbox, or the PS4, and this wish to lock games in, is that I just don’t see Microsoft lowering itself enough to charge $5 for Halo 5. Rather, I think Microsoft hopes to turn every previously-a-borrower potential customer into a $50 actual customer, as does every other publisher that wants to see this kind of locking happen. And that’s simply ludicrous. A locked-in strategy only works in a $5-or-less world.

Microsoft should have owned the digital gaming space years ago, not Apple. It had the solution in place, millions of customers, and the hardware to play really great games. But the problem was that Microsoft could not allow itself to let those games find their natural price point. Even today, if you browse through the Xbox 360 back catalogue, there are many two- and three-year-old games being offered at two or three times what you would pay for them new at retail, never mind second-hand prices.

Even while iOS races ahead as a gaming platform, console makers like Microsoft, Sony and Nintendo have not meaningfully responded. Part of the psychosis of these companies is that they still believe that they are a premium offering, and so they act like a fancy Nordstrom store selling perfume, luggage and handbags for vastly inflated prices. They get tied up in arguments over whether they should allow their product to be devalued (as though the rest of the world does not exist), and whether that is good for games in the long term.

Long story short: They are not in the position where they can make $5 work, and their publishers are not willing to sell Call of Duty for such small amounts. And that opens the door to OUYA, Gamestick, Steamboxes and the oncoming storm of microconsoles.

Whereas Xbox and such are all bound up in these massive and complicated tangles, microconsoles like the OUYA are perfectly happy to work with smaller developers and publishers. They are keen to deliver that App-Store-like relationship to the TV space, and they are increasingly well-placed to make the price argument. If it comes down to Microsoft demanding premium money for a console and games with no comeback or resale, versus an OUYA for $99 that sells games for $5, that becomes an easy decision for any parent. So the OUYA becomes the platform that onboards young players.

Perhaps there is a growing market for a console that is just for adults, for so-called “graymers” who are happy to pay for premium experiences. Perhaps there is an emerging market for an exclusive kind of console, a sort of super-premium category that does not need young players any more. If there is, though, I wonder how long that market could sustain an expensive platform like the next Xbox by itself.



Wednesday, February 13, 2013

You Won’t See Facebook’s Graph Search On iPhone Or Android Anytime Soon

Editor’s note: Tareq Ismail is the UX lead at Maluuba, a personal assistant app for Android and Windows Phone that was a Battlefield participant at TechCrunch Disrupt SF 2012. Follow him on Twitter @tareqismail.

The release of Facebook’s Graph Search has raised much discussion among technology pundits and investors. One of the biggest questions surrounding the highly anticipated feature is its availability on mobile.

After all, Facebook CEO Mark Zuckerberg has said on a number of occasions that Facebook is a mobile company. “On mobile we are going to make a lot more money than on desktop,” he said at TechCrunch Disrupt SF 2012, adding that “a lot more people have phones than computers, and mobile users are more likely to be daily active users.” Facebook understands mobile’s importance, so why wouldn’t it offer Graph Search for Android and iPhone from the start?

It’s simple: Graph Search for mobile would need to incorporate speech, which is a different beast altogether.

Many of the examples given during the Graph Search keynote contained long sentences, which are not easy to type on a mobile device. Think of the example “My college friends who like roller blading that live in Palo Alto.” Search engines like Google get around this on mobile by offering autofill suggestions, but their suggestions come from billions of queries. For Facebook, since its search is based on hundreds of individual values like “fencing” or “college friends” that are specific to each user rather than a group, autofill suggestions will often not be useful; worse, they may require a lot of tapping and swiping to drill down to the full request.

What’s more, Graph Search queries are designed to be written out naturally in full-form sentences with verbs, pronouns, etc., which is something that keyword search engines like Google do not need. If you’re looking for sushi places to eat on Google, it’s a five-character search for the keyword “sushi.” With Graph Search, Facebook wants to show you sushi results refined by a group of your friends, so the same search would require writing out “sushi restaurants my friends have been to” or “sushi restaurants my friends like.” That’s a lot more typing.
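One way to see the difference is that the full sentence carries structure a keyword can’t. In a toy sketch (nothing like Facebook’s real implementation; the data is invented), each clause of “sushi restaurants my friends have been to” becomes a separate filter over the social graph:

```python
# Invented toy graph data, purely illustrative.
restaurants = [
    {"name": "Sushi Ran", "cuisine": "sushi", "visitors": {"Ana", "Ben"}},
    {"name": "Kome",      "cuisine": "sushi", "visitors": {"Dara"}},
    {"name": "Franklin",  "cuisine": "bbq",   "visitors": {"Ana"}},
]
my_friends = {"Ana", "Ben"}

# Each clause of the sentence maps to one filter:
results = [r["name"] for r in restaurants
           if r["cuisine"] == "sushi"         # "sushi restaurants"
           and r["visitors"] & my_friends]    # "...my friends have been to"

print(results)  # ['Sushi Ran']
```

The keyword “sushi” alone cannot express the second filter, which is why the query has to be typed, or spoken, in full.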

It’s clear that on mobile, Graph Search would need to be powered by speech to be most effective. No one will want to type out such long sentences. Moreover, with services like Google Now and Siri, people will come to expect control through speech.

Supporting speech is a different problem altogether from what Facebook has solved so far, and it will have to do a lot more work before Graph Search is available on any major mobile platform. Here are four reasons why.

Speech Recognition Doesn’t Come Cheap

If time is money, then speech recognition is very expensive. It’s well known that it requires considerable investment to develop, and no one knows this better than Apple and Google.

Apple chose not to build its own speech recognition, instead licensing Nuance’s technology for Siri. Nuance has spent over 20 years perfecting its speech recognition; it’s not an easy task, and the company has had to acquire a number of companies along the way.

Google, on the other hand, chose to develop its own speech recognition and needed to build a clever system to collect data to catch up to Nuance. The system, called GOOG-411, set up a phone number where people could call in from landlines and feature phones to ask for local results. Once Google got the data it needed, it shut down the service and used the data to build its recognition system. It has taken a company like Google, a master of search, over three years to get to where it is now with speech recognition.

Even if it takes Facebook half as long to come up with a similarly clever solution, they’ll need to start soon for it to be released any time in the next year.

Names Are Facebook’s Strength And Speech Recognition’s Weakness

One of Facebook’s early strengths has been names. The company’s algorithms for returning the most relevant person when you search for a friend played a key role in its early success. People are accustomed to saying “add me on Facebook” without the need to specify a username or handle, an advantage that makes Facebook’s entry into speech that much harder.

Names are speech recognition’s biggest challenge. Speech recognition relies on having a dictionary, or list of expected words, paired with sample voice data given to the system. That’s why most engines do really well when recognizing common English words but have such a hard time with out-of-the-norm names and varying pronunciations. Facebook has hundreds of thousands of names to deal with, and names are a key part of its experience, so it will need to master the domain for speech to be useful to its users. Now, one could argue that having access to all these names may give Facebook an edge in solving this problem, but it will need to work on a solution for some time for it to become anywhere near acceptable.
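As a minimal sketch of that dictionary dependence (a toy lexicon with invented entries; real recognizers use far richer acoustic and language models, but the failure mode is the same), a word with no pronunciation entry is never even a candidate:

```python
# Toy pronunciation lexicon: word -> ARPAbet-style phonemes (invented entries).
lexicon = {
    "add":     ["AE", "D"],
    "friends": ["F", "R", "EH", "N", "D", "Z"],
    "sushi":   ["S", "UW", "SH", "IY"],
}

def is_recognizable(word):
    # A real decoder scores audio against lexicon entries; a word missing
    # from the lexicon (as most personal names are) can never be hypothesized.
    return word.lower() in lexicon

for word in ["friends", "Tareq", "Siobhan"]:
    status = "in lexicon" if is_recognizable(word) else "out-of-vocabulary"
    print(word, "->", status)
```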

Supporting Natural Language Isn’t Easy

The final piece of the puzzle may be the most difficult: supporting natural language is really, really hard. Working at natural language processing company Maluuba, I can attest to just how hard a problem this is to solve. Natural language processing is the ability to understand and extract meaning from naturally formed sentences.

This also includes pairing sentences that have the same meaning but are said differently. For example, with Graph Search, I can type “friends that like sushi” and it shows a list of my friends who have identified sushi as an interest, but if I type “friends that like eating sushi” it looks for the interest “eating sushi” – which none of my friends have listed – and returns zero results. In reality, both sentences mean the same thing but are worded differently. Understanding natural language involves understanding the real intent behind a request, not just its literal wording.
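The failure is easy to reproduce in miniature. A toy version of literal interest matching (invented profile data, not Facebook’s code) behaves exactly as described:

```python
# Invented profile data: friend name -> set of listed interests.
profiles = {
    "Ana": {"sushi", "fencing"},
    "Ben": {"sushi"},
}

def friends_that_like(interest):
    # Literal matching: the extracted interest string must appear verbatim.
    return [name for name, likes in profiles.items() if interest in likes]

print(friends_that_like("sushi"))         # ['Ana', 'Ben']
print(friends_that_like("eating sushi"))  # [] -- same intent, zero results
```

Closing the gap means mapping “eating sushi” and “sushi” to the same underlying intent before matching, which is precisely the hard part.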

On a desktop browser, they may be able to get users to learn how to search in specific sentence templates, especially with the help of autofill suggestions. But for speech it’s nearly impossible. People ask for things differently almost every time; even the same person can ask for the same request in a different fashion when speaking. Ask 10 of your friends how they would search for nearby sushi restaurants. I have no doubt most, if not all, responses will be different from one another.

Now, Facebook could fix the sushi example I gave earlier, but that may cause false positives elsewhere in the system. Understanding natural language requires large data sets and complex machine learning to get right, something that Facebook’s Graph Search team may be investigating but will not be able to master any time soon. It’s just not a simple problem to solve. That’s why Apple jumped into a bidding war to buy Siri, which at its core is a natural language processor. To put the difficulty into perspective: Siri spun out of a DARPA project that took over five years to build, with over 300 top researchers from the best universities in the country.

Languages, Languages, Languages

Facebook has over a billion users who collectively speak hundreds of different languages. Facebook has said it is beginning the launch with English. How long until all billion users’ languages are supported on the desktop? And since speech is significantly harder, how long until those users are supported on mobile? It’s one thing to support hundreds of languages through text and a much harder thing to support them through speech. This is the problem Facebook will face for the next decade.

Facebook acknowledges that its future lies in mobile. Mobile begs for Graph Search to be powered by speech, something that Facebook simply cannot do yet. I have no doubt it will, but it most definitely won’t be at acceptable quality anytime soon. Facebook has taken the first step, but it has a long journey ahead.

