Wednesday, October 31, 2012

Aiming To Be A Full-Service Fitness Platform, Runtastic Launches New Indoor App Suite; Hits 14M Downloads

Thanks to MapMyFitness, RunKeeper, Nike and many more, apps that keep track of your exercise and push you to drop those extra pounds are by no means novel. Nonetheless, bootstrapped European startup Runtastic is still managing to carve out a name for itself in a crowded space by pairing a simple user experience with the deeper functionality of higher-end products.

Continuing to build out what it intends to be a full-service health and fitness platform, Runtastic today expanded beyond what has been its core focus to date (outdoor recreation, like running and cycling) by launching a new suite of apps that target indoor exercise enthusiasts. Its so-called “Fitness App Collection,” now available for Android, iOS and the Web, comprises four motion-activated apps: PullUps, PushUps, SitUps and Squats. You can probably guess at their content.

But, quickly, for those unfamiliar, Runtastic offers a bevy of apps and online services that measure, track and analyze health and fitness data aimed at motivating novices and exercise-aholics to both get in shape and stay healthy. To date, the company has primarily catered to runners and cyclists with features that allow them to track the speed, elevation and distance of their exercise, while storing that data in the cloud so that they can keep track of all their athletic achievements (or in my case, disappointments).

Thus far, the company has released nine free and paid apps (thirteen with its latest additions), with its most popular app being its namesake (Runtastic), which allows users, among other things, to connect to their social media accounts so they can compete with friends, share updates and post pictures. It also offers Nike+-style audio feedback (like words of encouragement during routines), along with a number of post-exercise features that enable users to comment on their exercise, leaving notes about the run, the weather, conditions of the trail, etc., with the ability to return to that stored info in the future.

While these features are things we’ve come to expect from our fitness and exercise apps, Runtastic wants to create an end-to-end fitness platform, something no company has completely nailed. At least not yet. For example, earlier this year the company expanded its product line to include hardware.

Its portfolio now includes a GPS-enabled watch with a heart rate tracker and a separate chest-strap heart rate monitor, among other devices. As of now, the company’s hardware offerings are only available in Europe (where it’s headquartered), but CTO and co-founder Christian Kaar tells us the team plans to launch its hardware portfolio in the U.S. in January 2013.

Founded in late 2009, the company’s products really started to pick up steam in 2012. All told, its apps have collected over 14 million downloads, but nearly 10 million of those downloads have come since January of this year. This growth, along with an increasingly diversified portfolio, saw the company enter cashflow-positive territory at the end of 2011.

As for its new Fitness App Collection, each focuses on a specific indoor workout routine, offering training plans developed by fitness experts to help users improve strength and stamina by working toward a set number of repetitions, be that 20 push-ups or 50 pull-ups. Using your phone’s proximity sensor and accelerometer, the apps count the number of repetitions you complete, offering voice coaching and an automatic timer to create the sensation that you’re working out with your own (virtual) personal trainer. Users can buy the training plans in-app or use the app without the plans for free.
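
Runtastic hasn’t published how its rep counting works, but the underlying idea is simple signal processing. Below is a minimal, illustrative sketch (not the company’s actual algorithm) that counts repetitions from a stream of accelerometer readings, using two thresholds so sensor jitter doesn’t register as extra reps:

```python
# A toy illustration (not Runtastic's actual algorithm) of counting
# repetitions from an accelerometer stream: one rep is one full swing
# above a high threshold and back below a low threshold. The hysteresis
# between the two thresholds filters out jitter between samples.

def count_reps(samples, high=1.3, low=0.8):
    """Count reps in a stream of acceleration magnitudes (in g)."""
    reps, in_rep = 0, False
    for g in samples:
        if not in_rep and g > high:   # exertion phase of a push-up, etc.
            in_rep = True
        elif in_rep and g < low:      # back to rest: one rep completed
            in_rep = False
            reps += 1
    return reps

# Simulated sensor trace: resting near 1g, with three exertion spikes.
trace = [1.0, 1.1, 1.5, 1.6, 0.7, 1.0, 1.4, 0.6, 1.0, 1.7, 0.7, 1.0]
print(count_reps(trace))  # -> 3
```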

To connect these new apps to the Runtastic platform, and to each other, the company also today released a new page on its website called PumpIt, which ranks the community’s activities across the four apps in a master leaderboard and automatically uploads stats from each workout to both PumpIt and the user’s personal fitness profile.

Kaar tells us that this is Runtastic’s first stab at offering a public leaderboard, as it allows each person to view the “scores” of the top three users in each app (and activity) and compare their own times to those of the leaders. Users can then share their progress by email, Google+, Facebook, Twitter, etc.
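
The mechanics of such a cross-app leaderboard are straightforward to sketch. The data shapes and names below are invented for illustration (Runtastic hasn’t published its schema):

```python
# A minimal sketch of a PumpIt-style master leaderboard: merge per-app
# repetition totals into one ranking and surface the top three users.

from collections import defaultdict

scores = {  # hypothetical per-app totals
    "PushUps": {"alice": 120, "bob": 95},
    "SitUps":  {"alice": 80, "carol": 150},
    "Squats":  {"bob": 200},
    "PullUps": {"carol": 30, "alice": 10},
}

totals = defaultdict(int)
for app_scores in scores.values():
    for user, reps in app_scores.items():
        totals[user] += reps

# Rank by total reps, descending; show the top three, as on PumpIt.
leaders = sorted(totals.items(), key=lambda kv: -kv[1])[:3]
for rank, (user, reps) in enumerate(leaders, start=1):
    print(rank, user, reps)  # 1 bob 295 / 2 alice 210 / 3 carol 180
```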

The iOS versions of the new apps also include a tab that allows users to integrate data from across the suite, much like PumpIt does for the Web, which enables viewing of fitness activity history, badges, and so on.

For those Android users still rolling their eyes, it’s also worth checking out the new PRO version of Runtastic the company launched earlier this month, which, thanks to Google Earth integration, allows users to watch their runs or bike rides after they happen, in 3D video. Users can view bird’s-eye views of the course they followed, reliving each exciting moment as recreated by Google from GPS data, with pace, time and elevation served on-screen, live in 3D.

“You may think you’ll never be able to do 100 pushups or 150 squats, but our latest apps offer step-by-step training plans to boost your performance over time and make these goals 100 percent attainable,” said CEO Florian Gschwandtner. “By expanding our app offerings, Runtastic encourages all users to achieve well-rounded workout routines, and new gamification features provide extra motivation to take workouts to the next level.”

And for those of us entering winter, indoor exercise tracking is welcome, especially if those apps live in a more expansive network. Runtastic still has a ways to go before it can catch up with the leaders in the U.S. market, but it’s building out a team based in San Francisco, and if it can keep pushing forward with integrations, partnerships and hardware, this may not be the last we hear from the bootstrapped Austrian fitness lovers.

More on Runtastic at its home page here.


With The New iPad, Apple Accelerates; With The iPad Mini, It’s Pedal To The Metal

“So why is iPad so phenomenally successful? Well it turns out that there’s a simple reason for this,” Apple CEO Tim Cook told an audience at the Apple event last week in San Jose. “People love their iPads.”

The response drew some awkward laughs as it seemed almost like the punchline of a misfired joke. But it wasn’t a joke; Cook was absolutely serious.

On the surface, such an answer seems to lack the depth to provide insight into the tablet riddle. After all, there have been many failed tablets before the iPad, and perhaps even more since the iPad. But it turns out that everyone may have been over-thinking it. The iPad is successful because people love it. Said another way, the iPad is successful because Apple was able to create a brilliant product. It’s not about having certain specs or being a certain price. It’s not about checking off boxes. It’s about the product as a whole. It’s the culmination of intangibles which only Apple seems to be able to nail time and time again. There is no question, the iPad has been a phenomenon.

“But we’re not taking our foot off the gas,” Cook continued.

The Fourth Generation iPad

What Apple proceeded to show off was two products. The first was the iPad we’re all well aware of by now: the 9.7-inch, 1.5-pound slab of glass and aluminum. Even though Apple had just revealed the third generation of this device (dubbed simply “the *new* iPad”) this past March, Apple really wasn’t taking its foot off the gas. It was time to show off the fourth generation of the device.

Truth be told, the fourth generation of the iPad isn’t all that different from the third-generation model. It’s more of an “iPad 3S”, if you will. I don’t mean that as an insult, or to downplay the update, but this is primarily an under-the-hood update. It’s all about taking a great product and making it better.

I’ve been playing with this latest version of the iPad for the past week. Yes, it’s faster. Apple claims 2x CPU and graphics performance thanks to the new A6X chip. That claim has been a little hard to test since no apps are yet optimized to take advantage of the new power (and mainly because the previous iPad was already so fast), but things do generally seem to launch and run a bit faster than they do on the third-generation iPad. I did get a chance to see a demo of a game that was optimized for the new chip (though it’s not out yet) and that’s clearly where this new iPad is going to shine.

For now, one primary way you’ll notice that this iPad is better than the last version is in the front-facing camera. Previously, it was a VGA-quality lens (0.3 megapixels). Now it’s HD-quality (1.2 megapixels), capable of capturing 720p video. This is key for FaceTime. Apple has slowly but surely been rolling out FaceTime HD video capabilities across all their products. Now the iPad is on board as well.

The new version of the iPad also gets the update to Apple’s new Lightning connector, matching the iPhone 5 and the new iPod touches and nanos in this regard. This makes the bottom of the device look a little cleaner, but connection performance is the same.

The real key to the Lightning connector may be that it allowed Apple to tweak the internals of the iPad, since this new connector takes up much less space. As a result, we get better performance while maintaining the same, awesome 10-hour+ battery life. Perhaps more importantly, the new iPad doesn’t seem to run as hot as the last version did.

While “heat-gate” (“warm-gate”?) was yet another overblown situation surrounding Apple a few months back, the temperature of the device at times was noticeable (though far less than any laptop, for example). Now it seems less so. Or maybe my hands have just grown calluses due to my lack of oven-mitt wearing while handling the device. Hard to say.

If you were going to get an iPad before, obviously, you’ll want to get this one now. In fact, you don’t even have a choice: Apple has discontinued the third-generation model. The prices remain the same across the board, as do all of the other features (WiFi/LTE, Retina display, etc).

Yes, it is kind of lame for those of us who bought third-generation models that Apple updated the line so quickly, but well, that’s Apple. To me, the fourth-generation leap doesn’t seem to be nearly as big as the leap from the first to second generation or from the second to third generation, so perhaps take some solace in that.

It simply seems like Apple decided they wanted to push out a new iPad version before the holidays with the new connector and gave it a spec boost as a bonus. Why not do more? Because they don’t need to right now. As Phil Schiller put it on stage, “We were already so far ahead of the competition, I just can’t… I can’t see them in the rearview mirror.” Maybe the Nexus 10 and the larger Kindle Fire HD change that. Or maybe not. It will be interesting to see what, if anything, Apple does in the spring, their typical time to push major iPad changes. My guess is that this is a boost intended to hold them over until a year from now.

The iPad mini

“It’s all about helping customers to learn about this great new technology and use it in ways they’ve never dreamed of. So what can we do to get people to come up with new uses for iPad?,” Schiller asked on stage as the fourth-generation iPad rotated to reveal the main event: the iPad mini.

I’ve also had the chance to try out the iPad mini for the past week. My thoughts are much more straightforward here: Yes.

The iPad mini isn’t perfect (for one reason in particular, more on that below), but it’s damn close to my ideal device. In my review of the Nexus 7 (which I really liked, to the shock of many), I kept coming back to one thing: the form-factor. Mix this with iOS and Apple’s app ecosystem and the intangibles I spoke about earlier and the iPad mini is an explosion of handheld joy.

The first thing you’ll notice is how light the iPad mini is. It’s similar to the effect when holding an iPhone 5 for the first time, though not quite as jarring. At the same time, the iPad mini is less than half the weight of a full-sized iPad (0.68 pounds vs. 1.5 pounds), so the difference is very noticeable. The iPad mini is not quite as light as a Kindle, though it’s not far from that weight. And it’s lighter than a Nexus 7.

The next thing you’ll notice is how thin the iPad mini is. When you hold it, it almost feels like you’re just holding a sheet of glass. Amazingly, it’s thinner than the iPhone 5. Yes, you read that correctly.

By comparison, the regular iPad feels like you’re holding a full flat-panel monitor. And the Nexus 7 feels like you’re holding a piece of plastic with a screen bolted on. Apple has really nailed the feel of this device with its slightly rounded sides (much less pronounced than the larger iPad), chamfered edge (just like the iPhone 5), and flat aluminum back.

One of the smarter things Apple has done with the iPad mini was trim down the front side bezels of the device. On other tablets (including the iPad), the bezels are extremely noticeable, especially when compared to something like the iPhone, which has basically no side bezel. This detracts from the screen.

The reason for these large bezels is simple: you need somewhere to rest your hands when holding the device. But because you’re likely to hold the iPad mini in one hand, Apple has decided to instead fix the bezel issue with software. iOS smartly detects when your hand is placed on either side of the screen (when portrait-aligned) and recognizes that this isn’t a touch meant for input. For people who have been using iOS for a while, this takes some getting used to; you keep thinking you’re going to trigger something by placing your palm on the screen. But trust the software.
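
Apple hasn’t documented how this rejection logic works, but one can guess at the general shape of the heuristic: a touch that lands in a narrow band along the screen edge and then barely moves probably isn’t intentional input. A speculative sketch, with made-up numbers:

```python
# A speculative sketch (Apple hasn't published its method) of edge-touch
# rejection: a touch that starts within a thin band along the left or
# right edge and barely moves is classified as a resting thumb.

EDGE_BAND = 20      # points from either edge that count as "bezel-ish"
MOVE_LIMIT = 6      # a resting thumb drifts less than this many points
SCREEN_WIDTH = 768  # iPad mini portrait width, in points

def is_resting_thumb(start, end):
    """start/end are (x, y) positions of a touch over its lifetime."""
    near_edge = start[0] < EDGE_BAND or start[0] > SCREEN_WIDTH - EDGE_BAND
    drift = abs(end[0] - start[0]) + abs(end[1] - start[1])
    return near_edge and drift < MOVE_LIMIT

print(is_resting_thumb((5, 400), (6, 402)))      # True: ignore it
print(is_resting_thumb((380, 400), (382, 401)))  # False: mid-screen tap
```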

The truth is that Apple likely had to come up with some solution here because if they had tried to keep thicker bezels with the iPad mini, it would have been awkwardly shaped. As it is, the mini is already quite a bit wider than the Nexus 7. It’s about as wide as it can comfortably be. The bezels would have pushed it over the edge, so to speak.

The reason for this is the screen. As you might imagine, the screen is the single most important factor of the iPad mini, and even more so with this tablet in particular. In order for the iPad mini to make sense, Apple felt it needed to maintain the same screen resolution as the iPad 2: 1024-by-768 (more on this below). This meant they couldn’t do a screen closer to the 16×9 ratio that rivals are using for their mini tablets, and instead had to stick with the standard iPad ratio (which is closer to 4×3).

While we’re on the subject of the screen, let’s not beat around the bush: if there is a weakness of this device, it’s the screen. But that statement comes with a very big asterisk. As someone who is used to a “retina” display on my phone, tablet, and now even my computer, the downgrade to a non-retina display is quite noticeable. This goes away over time as you use the iPad mini non-stop, but if you switch back to a retina screen, it’s jarring.

That’s not to say the iPad mini screen is bad; it’s not, by any stretch. It’s just not retina-level. At 163 pixels per inch, it’s actually quite a bit better than the iPad 2 screen (the last non-retina iPad), but you really can’t compare it to a retina display.

In fact, you can’t really even compare it to the Nexus 7 display, which is 216 pixels per inch. Text definitely renders sharper on that display than it does on the iPad mini. However, the overall display quality (brightness, contrast, color levels) of the iPad mini seems better than the Nexus 7.

If you haven’t used a retina iPad before, the pixel density is unlikely to be an issue; after all, it was just a couple years ago that Apple introduced retina displays. And overall, I don’t expect it to be a major issue in the broad market. Remember that the iPad 2 continues to sell very well despite its lack of retina display. But again, it needs to be mentioned as one potential weakness of the device.

If you can get beyond that (and again, I can), it’s hard to find another fault with the iPad mini. When Apple claims that they’ve essentially taken an iPad 2 and put it in this new form factor, they’re not lying. That’s perhaps the most remarkable thing about this device. It’s an iPad 2 (still a brilliant device in its own right) at a fraction of the size.

In fact, it’s even better than an iPad 2 in a few respects. The cameras (both front and back) are much better. The WiFi technology found inside is better. There’s an option to get LTE connectivity (though the model I tested did not have this included). The Bluetooth is better. You can get more storage space (up to 64 GB).

I still can’t believe it has the same 10 hours of battery life, but it does. In fact, the battery may even be a little better than the larger iPad. Insane.

The single biggest selling point of the iPad mini is likely to be the app ecosystem. Because the iPad mini runs the same version of iOS as all other iPads, it can also run all the applications that any other iPad can run, and iPhone/iPod touch apps too. And because of the aforementioned resolution, it can run all of these apps completely unmodified.

I’m including this paragraph so you’ll take a moment to consider just how important that last sentence is.

The iPad mini can run over 275,000 iPad apps without any modification. And it can run the over 700,000 iOS apps for all devices just as the regular iPad can (at native size or scaled up 2x). Scaled up 2x, some of the apps built for iPhone don’t look great (the text in particular), but that was always an issue with the iPad as well. Luckily, most of your favorite apps have already been tailored for the iPad now.

Yes, touch-targets are slightly smaller on the iPad mini than they are on the iPad, but I haven’t had an issue with this. If anything, it’s a little easier to type with the on-screen keyboard because the keys are closer together, in my opinion.

If Apple had only made the iPad mini as a gaming device, I think it would be one of the best-selling gadgets of all time. Some of the iPad games play so beautifully on the iPad mini that you’d think they were custom-tailored for this form factor. Playing games on the regular iPad is great. Playing games on the iPad mini is fantastic because the device is much easier to hold for extended periods of time in the landscape position.

Gaming has already proven to be a massive opportunity for Apple with iOS. The iPad mini is going to expand this possibility. If I were Sony, Nintendo, and yes, Microsoft, I’d be very worried about this.

The iPad mini is also a great media player. You’ll be able to access all the typical iTunes fare, and it’s good for watching videos in particular because again, it’s so light to hold now.

Books, magazines, and reading apps are likely to be another big use-case for the iPad mini. If you can get past the non-retina text resolution, this device is clearly more conducive to reading in bed than its larger counterpart.

So, all of this (beyond the screen caveat) sounds great. Home run, right? In my mind, yes. I can easily see the iPad mini becoming what the iPod mini was to the iPod: the version that takes a popular, iconic device and vastly expands its user base. Apple says they have sold 100 million iPads since the initial launch two and a half years ago. That leaves roughly 6.9 billion people on this planet without one. The iPad mini can help that.

Perhaps the biggest question mark in my mind is the price. Leading up to the unveiling, rumors swirled, ranging from $249 to $299; some even suggested Apple may try to lay the hammer down on rivals with a $199 price tag. The reality was much more Apple-like: prices starting at $329.

Apple has grown to be the most successful company in the world because they sell quality devices that people want at a healthy margin. As a result, the profits have rolled in. Last week during their earnings call, Apple made a point of saying that the iPad mini is going to have lower margins than the rest of their products; yes, even at $329. That’s not an excuse for the price, that’s the reality of the price.

But how will a $329 tablet fare in a world of $199 tablets? It’s hard to know for sure, but my guess would be in the range of “quite well” to “spectacular”. Apple has done a good job of making the case that the iPad mini is not just another 7-inch tablet; in fact, it’s not a 7-inch tablet at all. It’s a 7.9-inch tablet, a subtle but important difference. As a result, it can utilize every iOS app already in existence. And it can access the entire iTunes ecosystem. And it will be sold in Apple Stores.

Apple isn’t looking at this as $329 versus $199. They’re looking at this as an impossibly small iPad 2 sold at the most affordable price for an iPad yet. In other words, they’re not looking at the tablet competition. This isn’t a tablet. It’s an iPad. People love these things.


Tuesday, October 30, 2012

LinkedIn And The Mutable Rules Of Social Networking

What is a social network? In general terms, Facebook is a network of friends and family. Twitter is a network of people/things you find interesting. And LinkedIn is a network of colleagues, to cover off a few of the big ones. (I’m still trying to figure out a neat description for Google+; feel free to add yours in the comments.) But those neat descriptions are simplifications of more complex and changeable realities.

The rules of social networking are mutable. Necessarily so. As the services shift and evolve, to encourage more people to join and do more interacting, your individual use has to change to keep up (or drop off entirely as you abandon the service). And as the size of your network grows it can also demand new rules of interaction that work with a larger audience.

Plus, the more you use a social network, the more it can change you. The more personal info you share on Facebook, say, the more normal sharing that info becomes, maybe encouraging you to share even more. Even if you start out with hard and fast rules, a careless click or two can soon reconfigure all that.

With all that in mind I’m curious to know how people approach LinkedIn. What are your rules for connecting with people on LinkedIn? And how have they changed?

I ask because I feel I’m at a juncture where my current rules need updating. When I started using LinkedIn (in 2008) the service put a lot of emphasis on only connecting with people you had indubitably ‘done business with’. Which made it pretty straightforward to decide when to click ‘accept’ and when to pass by on the other side. In any case, the vast majority of LinkedIn requests came from direct or indirect workmates.

But in recent years, and even more so since joining TechCrunch, I’ve been getting increasing numbers of LinkedIn requests from people I haven’t worked with, even tangentially. Sometimes these people are in a similar line of work or in the same industry. And sometimes requests appear entirely random, with no apparent connection at all, and not all look like mistakes/spam. (Being a journalist complicates the picture, of course, since it’s a line of work that necessitates getting in contact with people you don’t know yet.)

Put simply: The old rules of LinkedIn interaction aren’t working anymore.

I must admit to not being a particularly involved user of LinkedIn. Twitter has been my network of choice for years. But taking a fresh look now, LinkedIn looks to have evolved from being a service that links you with the people you work with right now, to one that’s about building networks of people you might work with in future and/or who might be able to facilitate your career in some way.

Which makes perfect sense (that’s what traditional business networking is all about) but it also means using the service requires a lot more thought than it used to, deciding who it makes sense to connect with and who to avoid, on a case-by-case basis. (Interestingly, LinkedIn CEO Jeff Weiner approached the question of what LinkedIn is from the other way round, when he described it as a service for connecting talent to companies.)

LinkedIn has got a whole lot bigger since I joined; membership has climbed from 32 million to 175 million+ since January 2009. Over the years it’s also incorporated Facebook- and Twitter-style features such as status updates and Likes, and most recently the ability to follow key influencers. So again, it’s a whole lot more involved than it used to be.

Another way LinkedIn appears to be trying to steer/encourage users to broaden their networks is by polarising the options for responding to connection requests, to either ‘accept’ or ‘report spam’ (though procrastinators can still just ignore the request).

In an effort to get a sense of how people are using LinkedIn these days, I asked my Twitter followers what their rules for accepting LinkedIn connection requests are: whether they A) accept every request they get; B) only accept requests from people they know personally/can vouch for their work; or C) accept requests on a case-by-case basis.

While the responses spanned the range from “I only accept if I have met or spoken to the person” to “Mostly a). Why not?”, most people said they accept requests on a case-by-case basis, presumably connecting with people they haven’t personally worked with where they feel the link might be relevant/useful to them.

But almost as many people said they only accept requests from people they know personally/can vouch for, suggesting a lot of people are still treating LinkedIn as a strictly limited network of current colleagues.

While interesting, this was only a snap poll, so I’m keen to hear more views on how people are using LinkedIn. Tell me your rules of interaction in the comments.

Judging from this small sample, LinkedIn use appears to be transitioning from a network of ‘known knowns’ to a broader network of ‘unknown knowns’. (And the user uncertainty during this state of flux explains this additional response that was tweeted back to me: “D) only log in every six months, take one look at the list of total strangers wanting to connect, and run back to Facebook.”)

From LinkedIn’s point of view getting more people connecting is essential to continue growing its user-base and therefore its business. Which explains its shift in emphasis from a tight circle of current colleagues to a network of virtual strangers with the potential to further each other’s careers.

But when it comes to getting a large swathe of its user base to get over their aversion to connecting with total strangers, well, there’s clearly some work to be done there.

[Image: vladeb]


Why The Future Of Search May Look More Like Yahoo Than Google

When Nirvana was cool

Rewind to the late 90s. Almost everything you needed (email, news, sports, stocks, maps and more) was conveniently on one site: Yahoo!. Yahoo! was the “portal” to the Internet that strived to deliver everything you could ever want on its own properties.

The crazy thing is how successful Yahoo! was at this. Few, if any, have ever done content or media online at that scale.

But then Google came along and exploited the fact that the Internet was growing too fast for Yahoo! to keep up. Suddenly, Yahoo! was struggling to find its way, as Google made buckets of money by taking the opposite strategy: redirecting its traffic across the Internet to the best sites for each query (e.g. popular topics landed you on Wikipedia, movies on IMDB).

Search replaced the portal as the dominant paradigm, and it became accepted that no single organization could successfully cover every aspect of the web by itself.

Portals reborn

Fast-forward to the late 2000s… Facebook and smartphones, primarily the iPhone, are the new portals, reminiscent of the days of Yahoo!. They have every experience you need, from weather and stocks to games and maps. The difference is that they’re not foolish enough to think that they can keep up by doing it all by themselves; instead, they use their platform to scale the portal with partners.

As a result, today’s mobile and social portals offer distribution to third parties that develop content for their own platforms. In this world, the App Store is the new Google.

But wait, this shift certainly has problems of its own. It puts the burden back on the user to know which apps to install and which to actually use.

Even if a perfect app exists for what I want to do, it’s too much hassle to download, install, and learn about a new app for something I’m doing just once. While there are many excellent apps today, finding the best one for my task is next to impossible.

Then, if I do manage to get the right app, it doesn’t know anything about me; I have to register and personalize it to some degree. Not to mention, I have to learn the app’s custom user interface, and try to remember for next time what the app does.

It makes me actually start to miss the simplicity of the Yahoo! world. A world where you’re already logged in, you don’t have to remember a hundred places to go, and the apps don’t overlap in functionality or act differently.

Search has to evolve

These issues are the opportunities at stake for tomorrow’s search engines. When you search, you know what you want (or something close to it) and the search engine points you in the right direction. You don’t need to pre-install the results or pre-select the right app.

We’ve seen that Google, Bing, and Yahoo! have already made efforts to “appify” their results. When you search for “The Dark Knight,” they all load movie show times, searching for addresses brings up maps, cities get weather forecasts, and stock symbols get charts, not to mention news results, image results, video results, etc. But, they’re all trying to cover every scenario without the help of partners, which, as we learned from Yahoo!, is impossible in the long run.

The answer isn’t to do it all in-house, nor is it only to enlist a community of app developers. The answer is a combination of the two: a portal connecting us to the app most qualified to accomplish the given query.

If the portal sees a query that’s pretty common, they should make their own app. In Yahoo!’s case, they’d keep their own stocks and news apps, but they’d farm out things that are a little more niche (think Yelp, Spotify, social readers, etc.) This is similar to how the iPhone comes with a calendar app and stock app, but doesn’t come with any games. There’s always going to be a set of things nobody builds apps for â€" in that case, we always have the old school 10 blue links.

How would this work for search? Imagine if searching “black eyed peas” loaded the Spotify app right inside the website, or searching “French gourmet” loaded the Yelp app? They can let third parties build as many apps as they want, with each competing for distribution by proving their value to the search engine’s organic ranking system.

Such a search engine would mean that instead of having to install a hundred apps and remember which to use for which scenario, I’d have access to thousands of apps custom-built for each and every scenario I could ever desire. And even better, this lets me focus on my intent and what I want to do, not which app I should use to do it. Right now I have to think about the app category, like “I want restaurant info,” to know which app to use. What I want is to skip this step and just state my intent: “I want French Laundry.”
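
To make the idea concrete, here is a toy sketch of that query-to-app routing, with invented app names and a trivial keyword matcher standing in for the organic ranking system described above:

```python
# A toy of the routing idea: the "portal" maps a raw query to the app
# best qualified to answer it. The app names and the keyword matcher
# are invented stand-ins for a real organic ranking system.

APP_SIGNALS = {
    "spotify": {"song", "album", "band", "music"},
    "yelp":    {"restaurant", "bar", "gourmet", "reservation"},
    "weather": {"weather", "forecast", "temperature"},
}

def route(query):
    words = set(query.lower().split())
    # Score each app by keyword overlap with the query.
    best = max(APP_SIGNALS, key=lambda app: len(APP_SIGNALS[app] & words))
    if not APP_SIGNALS[best] & words:
        return "ten blue links"  # the old-school fallback
    return best

print(route("french gourmet restaurant"))  # -> yelp
print(route("black eyed peas new album"))  # -> spotify
print(route("something nobody built"))     # -> ten blue links
```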

Even Apple is acknowledging that this is where the world is headed. Take Siri, for example. You don’t say “ask Yelp to find French Laundry.” You merely say “find a table at French Laundry” and it figures out which app or data source is best suited.

Why Yahoo!, not Google!?

Google is a search company at its core, and they make a lot of money doing it. If you’re the head of search at Google, are you really going to bet the farm on a new model when your current one works? It’s risky, and it’s not in your DNA. You’re somewhat of a victim of your own success. Not to mention how distracted you are by competing with Facebook.

And Bing won’t appify like this because Google isn’t doing it. Bing’s goal is to put a dent in Google’s profits, making it harder for Google to go after Microsoft’s other cash-cow businesses, so, oddly, it counters Google’s search strategy by mimicking its approach.

Yahoo!, on the other hand, already thinks like this. For more than ten years they’ve been focused on delivering all of these scenarios to users on their own site. They still have plenty of traffic and they’re a content company. It makes total sense for Yahoo! to become the Apple of the web.

Caption: Music apps competing for the best result on search for “Black Eyed Peas.”

Editor’s note: This is a guest post by Adrian Aoun, the founder and CEO of Wavii, an app that provides instant news feeds for any topic. Prior to Wavii, Adrian was a director at Fox Interactive Media and worked at Microsoft on bringing Microsoft Office to the web. He also obviously has some unorthodox opinions about search.


Monday, October 29, 2012

As Google And Amazon Fight Up, Apple Refuses To Fight Down

After this past week’s Apple event, one thing stood out to me above all others. And just to make sure, I watched the event again. Same result.

The shots fired at Android tablets.

For everything that Apple announced (new MacBook Pros, Mac minis, iMacs, iPads, and iPad minis), this was what I walked away thinking about. It was a fascinating look into the collective mind of Apple.

In realtime, this seemed straightforward: Apple was trying to explain why their tablets were better than those made by the competition. Standard practice, right? But something about it stuck in my head as weird for Apple, especially once the price was revealed.

In the tablet space, Apple is without question the dominant player. Even acknowledging the competition is a form of validation. Put another way: you should always fight up, not down. But here, Apple appeared to be fighting down.

But after the second watching of the keynote, my interpretation is slightly different. It’s a subtle difference, but an important one. I believe Apple was simply explaining to everyone that they were not going to fight down.

Consider this: there was ultimately only one reason why Apple acknowledged their Android competitors on stage. It may not have been obvious at first, but it was all about the price (which in Apple’s mind is directly related to quality). We couldn’t know it in realtime, but in hindsight, all of that Android tablet talk was to set the stage for the $329 starting price for the iPad mini.

Apple was simply trying to mitigate the fall-out over a price-point $130 higher than the Nexus 7 and the Kindle Fire. And they were doing it by making the case that they weren’t actually competitors. In Apple’s mind, they’re not trying to compete in the 7-inch tablet space, they’re simply trying to expand the user base of the iPad with a smaller version.

Again, subtle, but different. Apple is not going to make a $199 tablet, and certainly not out of fear. They’re going to make the best smaller tablet they can and price it in a way they believe to be fair for the quality they’re going after.

Now, maybe that price is right or maybe they ultimately have to change it. (Remember, Apple had to drop the price of the original iPhone after only a few months.) But they’re clearly confident that they’ll have another hit on their hands.

And while some may view the $329 price point as greedy, consider that during their earnings call, Apple made it very clear that the iPad mini has margins far below those of their other major products. In other words, the iPad mini at $329 may be their best value product.

Remember that both the Kindle Fire and Nexus 7 are being sold at either break-even or at a loss. That’s nice for consumers looking for cheap products, but that’s not the business Apple is in. Apple makes money selling hardware. It makes no sense for them to sell the iPad mini at $199 if they’re going to make little or no money off of it. Apple simply would never make that product â€" it’s exactly why they stopped making printers when Steve Jobs returned to the company in the 1990s.

Further, the jury is still out as to whether or not this will end up being a good model for either Amazon or Google (or any of the other OEMs making and selling cheaper tablets). Amazon just reported a loss for the quarter, and Google’s numbers suggest they’re not monetizing Android as well as they’d like. Different models. We’ll see.

Meanwhile, the iPad mini will add billions (yes, billions) to Apple’s bottom line each quarter. No question. Apple makes a product. You buy it for more money than it cost to make. Apple makes money. Amazing how well that model works, isn’t it?

Of course, Apple has the luxury to do this. Why? The iPad itself.

“It seems like everyday, there’s another tablet shipping,” Tim Cook said during the keynote. “But when you look at the ones that are really being used, the numbers tell a different story. iPad accounts for over 90 percent of the web traffic from tablets. And we know that this is the thing that people do most often on a tablet.”

In other words, while you may have heard about other tablets out there, it’s not clear that anyone is actually using them after they buy them. Cook then notes that “we’re not taking our foot off the gas” and brings out Phil Schiller to show off both the 4th-generation iPad (essentially an iPad 3S, if you will) and the iPad mini.

“We were already so far ahead of the competition, I just can’t… I can’t see them in the rearview mirror,” Schiller notes to laughter from the audience. And then, as he shows off the iPad mini for the first time, Schiller compares it at first not to any Android tablet, but to a pencil (its thickness) and a pad of paper (its weight).

He notes that all of the 275,000+ applications developed for the iPad can work on the iPad mini completely unchanged. This will end up being the most important element of the device, and Apple knows it.

Only then does he dive into the Android tablet comparison. He shows a picture of a Nexus 7, but doesn’t address it by name. It’s simply an “Android tablet”, though he does acknowledge that it’s the “latest, greatest, most favorably reviewed new device,” a tip of the cap to all the positive reviews.

Schiller notes how the iPad mini uses quality aluminum where the Android tablet uses plastic. He notes that the Nexus 7 is thicker and heavier despite its smaller display. He then goes into the all-important difference between a 7-inch display and a 7.9-inch display.

This is key because most consumers won’t consider this a big difference (since it’s not, on paper), so Schiller puts it in different terms: 21.9 square inches versus 29.6 square inches. Actual space, not just diagonal space. Then he puts it in human terms by looking at the web browsers on the devices. In this mode, the iPad mini has a 35 percent larger viewing area in portrait mode and a 67 percent larger viewing area in landscape mode. The countdown to Google removing chrome from Chrome starts now…
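
Schiller’s square-inch figures check out with basic geometry: a panel with diagonal d and aspect ratio w:h has area d² × wh / (w² + h²). A quick sketch (the browser viewing-area percentages also depend on UI chrome, which isn’t modeled here):

```python
# Verifying the keynote's square-inch math: for a display with diagonal
# d and aspect ratio w:h, width = d*w/sqrt(w^2 + h^2) and height is the
# same with h, so area = d^2 * w*h / (w^2 + h^2).

def panel_area(diagonal, w, h):
    return diagonal ** 2 * w * h / (w ** 2 + h ** 2)

ipad_mini = panel_area(7.9, 4, 3)    # 4:3, like every iPad
nexus_7   = panel_area(7.0, 16, 10)  # 16:10 (a 1280x800 panel)

print(round(ipad_mini, 1))  # ~30.0 sq in (Apple quotes 29.6)
print(round(nexus_7, 1))    # ~22.0 sq in (Apple quotes 21.9)
```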

“Not a great experience,” Schiller keeps saying over and over again in reference to “that other product”. There’s a special emphasis on stretched-up mobile apps versus apps built for tablets (which Google is finally starting to acknowledge is an issue).

Aside from the screen size, there’s no side-by-side spec comparison. Because Apple isn’t interested in competing against those other tablets, they’re simply trying to show that the iPad mini is not a 7-inch tablet. Instead, it’s “a great iPad, equal to or better than iPad 2 in every way,” Schiller says.

Then comes the product video featuring Jony Ive. “Our goal was to take all the amazing things you can do with the full-sized iPad, but pack them into a product that is so much smaller,” he says. “If all we had done was taken the original iPad and reduced it, you’d be aware of all that was missing. There’s an inherent loss in just reducing size. What we did: we took the time to design a product that was a concentration of, not a reduction of, the original.”

Depending on what side you’re on, this will sound either brilliant, or like brilliant bullshit. But it doesn’t matter. That is how Apple is thinking about this device. It’s not a 7-inch tablet, it’s a smaller, more affordable version of their high-end tablet that is dominating the market.

It’s the same approach Apple took when creating and unveiling the iPod mini almost nine years ago. Step 1: take a market-leading product. Step 2: make it smaller and slightly more affordable. Step 3: profit. Guess what? It worked.

During the height of the netbook craze, many in the tech press kept demanding that Apple make a cheap laptop. Instead, Apple made a (relatively) expensive MacBook Air and then the iPad. And they won on both fronts. They refused to fight down.

And so while it may seem like the keynote and maybe even the iPad mini itself is Apple fighting down, just look at the price. If Apple had released a $199 tablet, I don’t think it’s unreasonable to think that the other tablets would be dead in the water. But that’s not Apple’s model and they’re not interested in “winning” this way, because it’s not actually winning in their mind. It’s a race to the bottom. It’s ceding the high ground. It’s fighting down.

By the way, when you see Amazon advertise the Kindle Fire advantages versus the iPad mini on their homepage and in the highlights of their earnings release (while conveniently leaving out their less favorable specs), that’s fine too. They’re fighting up. Amazon is still new to the space and is much smaller than Apple. It’s their job to swing up, and it’s Apple’s job not to respond. You’ll notice that Amazon was never mentioned in the keynote, nor was the Kindle Fire ever shown.

As for the Nexus 7 and Android references at the keynote, it was perhaps a risky move because it seems at first like fighting down. But sometimes you do have to declare that you’re not interested in fighting down, if only to remind everyone who the boss really is.


Tablet First, Mobile Second.

Editor’s note: Tadhg Kelly is a game designer with 20 years experience. He is the creator of leading game design blog What Games Are, and consults for many companies on game design and development. You can follow him on Twitter here.

For the past six months I have owned the epitome of geek chic: an iPad 3 with a Logitech Ultrathin. I bought it because it seemed like a neat and lightweight solution to a persistent back-pain problem that I have with laptops. However, as I used it, it drew much attention.

Its appearance on a table in a conference would spark conversations. Clients would stare mystified while I showed them how it worked and then ask where they could get one of their own. Random passers-by would walk up to me at events, on the train or in cafes and start much the same conversation. What kind of wunder-kit is this? How does it work? What kind of battery life does it have? And so on.

Seamless keyboard and touch-screen interaction has been a part of my workflow for a while (to the point that I now hate using laptops), but for most people tablets have basically been content devices. They’re used to watch Netflix in bed or read ebooks in the dark. In the words of one satirical video from the first iPad launch, they have seemed to be little more than big-assed iPod Touches.

This perception of what tablets are used for also affects how game developers approach them. Studios tend to think “mobile first, tablet second” in their priorities, and with good reason. For one thing, the install base of smartphones is much higher than that of tablets. Tablets may be exotic (cool, even), but developers reason that this also means lower sales.

So what they tend to do is create the mobile version first and then embiggen it for tablet. Or just not bother with tablet at all, relying on the player to enjoy a magnified version of the mobile app. I don’t just mean for small games from tiny studios. Even comparatively big success stories like Rage of Bahamut still have that mobile-first sensibility.

A tablet is not a stretched-out mobile device. The kinds of interaction that tend to be fun within the tablet environment focus around drawing, multi-touch and direct manipulation of on-screen elements, whereas mobile is more about selecting and tilting. Tablets are also more comfortable to use in landscape mode than portrait (mobile is the reverse), and have more concentrated use-cases: In mobile gaming an individual game session should fit within the period of time spent waiting for a bus, but tablets are often used for sit-down-and-play sessions.

Tablets do not have the same problems of thumb occlusion as mobiles. Many console-style games have found it difficult to translate to mobile because they need a soft joypad, and so your thumbs block the screen. On tablet that’s not nearly as much of a problem (although it still lacks haptic feedback). Conversely, tilting on mobile is a relatively comfortable experience (such as steering in Real Racing), but on tablet is awkward. If, like me, you use stands with your tablet then tilting really does not work at all.

The fluidity of interaction with a tablet is also different to mobile. Smartphones are organised around holding the device in one hand and using the thumb on that same hand to interact. However, tablets are either fully held in one hand and interacted with by index finger, or sit in a stand and accept multiple-finger input. As I type this article, the screen is within reach. I often reach up to select, move the cursor, cut or copy text, and that happens with much less intermediation than a laptop touchpad.

What this means (for both apps and games) is that the kind of application that works well in one does not work so well in the other. As the market for tablets continues to expand, this also means that tablet-centric kinds of interaction are becoming more important to consider. Multi-column content, for example, and drawing verbs become predominant. Keyboard covers are not yet common for many tablets, but they will be. The first Microsoft Surface is a (flawed, for several reasons) case in point. It seeds the idea of keyboard, stand and tablet together.

So if the physical case can be made that tablet games should be thought of as different to, rather than an extension of, mobile, what about the economic argument? Won’t games sell more on mobile, as they already have? This, in my opinion, is not a particularly smart strategy to adopt going forward.

In the last two and a half years Apple has sold over 100m iPads. They still dominate the market, but increasingly there is also competition from Amazon, Samsung, Google and Microsoft. In 2-3 years there may be as many as 400m tablets in various configurations out there, going at the current rate. Even the 100m mark is extremely impressive: It’s more than most video game consoles manage.

The price point of software is also an important consideration. On mobile the best price point tends to be very low (Free, or £0.69/$0.99) but on tablet it’s higher. So a tablet user is worth two or more mobile users, if you look at it that way. And given that they are likely to engage for longer periods of time, they are also more likely to be free-to-play customers if that’s your bag. Tablets are a massive market, in other words, but few are the game makers really looking at it that way.

Perhaps the first game that really took advantage of tablet was Draw Something. The mobile version was fiddly and difficult, but the tablet version was beautiful. You could draw awesome shapes, and watch other users do the same, and the size of the form factor simply made it work. Another game that is clearly better on tablet is CSR Racing. The bigger screen lets the cars really shine, and the gear-shifter and accelerator are not the sources of thumb occlusion that they are on mobile. These games are only just the start, and their attachment to mobile is still strong.

Understanding that tablet is more than just stretched-out mobile leads to all sorts of interesting design challenges. It also creates more of an opportunity to tell a marketing story, or at the very least garner some publicity around new kinds of game. You can delight your users in whole new ways and stand more of a chance of catching Apple’s attention by being powerful, beautiful and without regret.

The real question is: Are you ready to try? Are you willing, as Flipboard did, to go tablet-first and cut down for mobile later? Are you willing to consider game designs that actually don’t work at all on mobile? If you have a larger canvas to work with, new verbs to explore and a burgeoning market, are you willing to try and explore that new space rather than the embiggened one you’ve assumed it to be up until now?

Tablets are increasingly ubiquitous, and their influence will only continue to grow. It’s time to get over the big-assed iPod Touch phase and take them seriously.

Sunday, October 28, 2012

Big Data Right Now: Five Trendy Open Source Technologies

Big Data is on every CIO’s mind this quarter, and for good reason. Companies will have spent $4.3 billion on Big Data technologies by the end of 2012.

But here’s where it gets interesting. Those initial investments will in turn trigger a domino effect of upgrades and new initiatives that are valued at $34 billion for 2013, per Gartner. Over a five-year period, spend is estimated at $232 billion.

What you’re seeing right now is only the tip of a gigantic iceberg.

Big Data is presently synonymous with technologies like Hadoop, and the “NoSQL” class of databases including Mongo (a document store) and Cassandra (a key-value store). Today it’s possible to stream real-time analytics with ease. Spinning clusters up and down is a (relative) cinch, accomplished in 20 minutes or less. We have table stakes.

But there are new, untapped advantages and non-trivially large opportunities beyond these usual suspects.

Did you know that there are over 250K viable open source technologies on the market today? Innovation is all around us, and the complexity of the systems built from these technologies keeps increasing.

We have a lot of…choices, to say the least.

What’s on our own radar, and what’s coming down the pipe for Fortune 2000 companies? What new projects are the most viable candidates for production-grade usage? Which deserve your undivided attention?

We did all the research and testing so you don’t have to. Let’s look at five new technologies that are shaking things up in Big Data. Here is the newest class of tools that you can’t afford to overlook, coming soon to an enterprise near you.

Storm and Kafka

Storm and Kafka are the future of stream processing, and they are already in use at a number of high-profile companies including Groupon, Alibaba, and The Weather Channel.

Born inside of Twitter, Storm is a “distributed real-time computation system”. Storm does for real-time processing what Hadoop did for batch processing. Kafka, for its part, is a messaging system developed at LinkedIn to serve as the foundation for their activity stream and the data processing pipeline behind it.

Pair them together and you get the stream, you get it in real time, and you get it at linear scale.

Why should you care?

With Storm and Kafka, you can conduct stream processing at linear scale, assured that every message gets processed in real-time, reliably. In tandem, Storm and Kafka can handle data velocities of tens of thousands of messages every second.
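
The canonical Storm demo is a streaming word count, where results update as each message arrives rather than when a batch job finishes. Here is a toy version of that model, with plain Python generators standing in for Kafka topics and Storm spouts/bolts (no real Storm or Kafka APIs are used):

```python
# A toy of the streaming model Storm popularized: counts are live after
# every message, not after a batch completes. Plain Python generators
# stand in for Kafka topics and Storm spouts/bolts here.

from collections import Counter

def kafka_topic():
    """Stand-in for a Kafka topic: a stream of messages."""
    for msg in ["storm kafka", "storm hadoop", "kafka kafka"]:
        yield msg

def word_count_bolt(stream):
    """Stand-in for a Storm bolt: emits a running count per message."""
    counts = Counter()
    for msg in stream:
        counts.update(msg.split())
        yield dict(counts)

for snapshot in word_count_bolt(kafka_topic()):
    print(snapshot)
# {'storm': 1, 'kafka': 1}
# {'storm': 2, 'kafka': 1, 'hadoop': 1}
# {'storm': 2, 'kafka': 3, 'hadoop': 1}
```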

Stream processing solutions like Storm and Kafka have caught the attention of many enterprises due to their superior approach to ETL (extract, transform, load) and data integration.

Storm and Kafka are also great at in-memory analytics and real-time decision support. Companies are quickly realizing that batch processing in Hadoop does not support real-time business needs. Real-time streaming analytics is a must-have component in any enterprise Big Data solution or stack, because of how elegantly these systems handle the “three V’s”: volume, velocity and variety.

Storm and Kafka are the two technologies on the list that we’re most committed to at Infochimps, and it is reasonable to expect that they’ll be a formal part of our platform soon.

Drill and Dremel

Drill and Dremel make large-scale, ad-hoc querying of data possible, with radically lower latencies that are especially apt for data exploration. They make it possible to scan over petabytes of data in seconds, to answer ad hoc queries and presumably, power compelling visualizations.

Drill and Dremel put power in the hands of business analysts, and not just data engineers. The business side of the house will love Drill and Dremel.

Drill is the open source version of what Google is doing with Dremel (Google also offers Dremel-as-a-Service with its BigQuery offering). Companies are going to want to make the tool their own, which is why Drill is the one to watch most closely. Although it’s not quite there yet, strong interest from the development community is helping the tool mature rapidly.

Why should you care?

Drill and Dremel compare favorably to Hadoop for anything ad-hoc. Hadoop is all about batch processing workflows, which creates certain disadvantages.

The Hadoop ecosystem worked very hard to make MapReduce an approachable tool for ad hoc analyses. From Sawzall to Pig and Hive, many interface layers have been built on top of Hadoop to make it more friendly, and business-accessible. Yet, for all of the SQL-like familiarity, these abstraction layers ignore one fundamental reality: MapReduce (and thereby Hadoop) is purpose-built for organized data processing (read: running jobs, or “workflows”).

What if you’re not worried about running jobs? What if you’re more concerned with asking questions and getting answers: slicing and dicing, looking for insights?

That’s “ad hoc exploration” in a nutshell: if you assume data that’s been processed already, how can you optimize for speed? You shouldn’t have to run a new job and wait, sometimes for considerable lengths of time, every time you want to ask a new question.
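
A toy contrast makes the point: once the data has been processed into an organized, in-memory layout (column-oriented, in this sketch), a brand-new question is just a scan, not a submitted job. Drill and Dremel add distribution, SQL and nested data on top of this idea; none of that is modeled below:

```python
# The ad hoc idea in miniature: data that is already processed and held
# in a column-oriented layout can answer a brand-new question with a
# scan, with no job to write, submit, or wait on.

orders = {  # column-oriented: one list per field, rows align by index
    "country": ["US", "DE", "US", "JP", "US"],
    "amount":  [120.0, 80.0, 45.5, 300.0, 12.5],
}

def adhoc_sum(columns, filter_col, filter_val, sum_col):
    """Answer 'total amount where country = US' style questions on the fly."""
    return sum(v for f, v in zip(columns[filter_col], columns[sum_col])
               if f == filter_val)

print(adhoc_sum(orders, "country", "US", "amount"))  # -> 178.0
```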

In stark contrast to workflow-based methodology, most business-driven BI and analytics queries are fundamentally ad hoc, interactive, low-latency analyses. Writing MapReduce workflows is prohibitive for many business analysts. Waiting minutes for jobs to start and hours for workflows to complete is not conducive to an interactive experience of data, the comparing and contrasting, and the zooming in and out that ultimately creates fundamentally new insights.

Some data scientists even speculate that Drill and Dremel may actually be better than Hadoop in the wider sense, and a potential replacement, even. That’s a little too edgy a stance to embrace right now, but there is merit in an approach to analytics that is more query-oriented and low latency.

At Infochimps we like the Elasticsearch full-text search engine and database for doing high-level data exploration, but for truly capable Big Data querying at the (relative) seat level, we think that Drill will become the de facto solution.

R

R is an open source statistical programming language. It is incredibly powerful. Over two million (and counting) analysts use R. It’s been around since 1997, if you can believe it. It is a modern version of the S language for statistical computing that originally came out of Bell Labs. Today, R is quickly becoming the new standard for statistics.

R performs complex data science at a much smaller price (both literally and figuratively). R is making serious headway in ousting SAS and SPSS from their thrones, and has become the tool of choice for the world’s best statisticians (and data scientists, and analysts too).

Why should you care?

Because it has an unusually strong community around it, you can find R libraries for almost anything under the sun, making virtually any kind of data science capability accessible without new code. R is exciting because of who is working on it, and how much net-new innovation is happening on a daily basis. The R community is one of the most thrilling places to be in Big Data right now.

R is also a wonderful way to future-proof your Big Data program. In the last few months, literally thousands of new features have been introduced, replete with publicly available knowledge bases for every analysis type you’d want to do as an organization.

Also, R works very well with Hadoop, making it an ideal part of an integrated Big Data approach.

To keep an eye on: Julia is an interesting and growing alternative to R, because it combats R’s notoriously slow language interpreter problem. The community around Julia isn’t nearly as strong right now, but if you have a need for speed…

Gremlin and Giraph

Gremlin and Giraph help empower graph analysis, and are often used with graph databases like Neo4j or InfiniteGraph, or, in the case of Giraph, with Hadoop. GoldenOrb is another high-profile example of a graph-based project picking up steam.

Graph databases are pretty cutting edge. They have interesting differences from relational databases, which mean that sometimes you might want to take a graph approach rather than a relational approach from the very beginning.

The common analogue for graph-based approaches is Google’s Pregel, of which Gremlin and Giraph are open source alternatives. In fact, here’s a great read on how mimicry of Google technologies is a cottage industry unto itself.

Why should you care?

Graphs do a great job of modeling computer networks and social networks, too: anything that links data together. Another common use is mapping and geographic pathways: calculating the shortest route, for example, from place A to place B (or, to return to the social case, tracing the proximity of stated relationships from person A to person B).
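As a minimal sketch of that shortest-route use case, here it is in Python with the networkx library standing in for a full graph stack (Gremlin and Giraph have their own syntax; the graph below is invented):

```python
# Shortest route from place A to place D over a small weighted graph.
# networkx is used here as a stand-in for Gremlin/Giraph-style tooling.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4.0),  # weights could be travel times or distances
    ("B", "C", 1.0),
    ("A", "C", 2.0),
    ("C", "D", 5.0),
])

print(nx.shortest_path(G, "A", "D", weight="weight"))         # ['A', 'C', 'D']
print(nx.shortest_path_length(G, "A", "D", weight="weight"))  # 7.0
```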

Graphs are also popular for bioscience and physics use cases for this reason: they can chart molecular structures unusually well, for example.

Big picture, graph databases and analysis languages and frameworks are a great illustration of how the world is starting to realize that Big Data is not about having one database or one programming framework that accomplishes everything. Graph-based approaches are a killer app, so to speak, for anything that involves large networks with many nodes, and many linked pathways between those nodes.

The most innovative scientists and engineers know to apply the right tool for each job, making sure everything plays nice and can talk to each other (the glue in this sense becomes the core competence).

SAP Hana

SAP Hana is an in-memory analytics platform that includes an in-memory database and a suite of tools and software for creating analytical processes and moving data in and out, in the right formats.

Why should you care?

SAP is going against the grain of most entrenched enterprise mega-players by providing a very powerful product and courting the developer community around it. And it’s not only that: SAP is also creating meaningful incentives for startups to embrace Hana. They are authentically fostering community involvement, and there is notably positive sentiment around Hana as a result.

Hana greatly benefits applications with unusually fast processing needs, such as financial modeling and decision support, website personalization, and fraud detection, among many other use cases.

The biggest drawback of Hana is that “in-memory” means, by definition, that the working data lives in RAM, which has clear speed advantages but is much more expensive per gigabyte than conventional disk storage.
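As a toy illustration of the trade-off, here is the same table and query run once disk-backed and once RAM-resident. sqlite is only a stand-in to show the in-memory idea (it resembles Hana in no other way), and at this small scale OS caching blurs the gap that dominates real workloads:

```python
# Identical table + query, disk-backed vs. in RAM. Illustrative only;
# sqlite stands in for the in-memory concept, not for Hana itself.
import sqlite3
import tempfile
import time

rows = [(i, float(i % 97)) for i in range(200_000)]

def load_and_query(path):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE t (id INTEGER, val REAL)")
    db.executemany("INSERT INTO t VALUES (?, ?)", rows)
    db.commit()
    start = time.perf_counter()
    for _ in range(50):  # repeated ad hoc aggregations
        db.execute("SELECT AVG(val), MAX(val) FROM t").fetchone()
    return time.perf_counter() - start

# Reopening a NamedTemporaryFile works on Unix; Windows may need delete=False.
with tempfile.NamedTemporaryFile(suffix=".db") as f:
    print("disk-backed:", load_and_query(f.name))
print("in-memory:  ", load_and_query(":memory:"))
```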

For organizations that don’t mind the added operational cost, Hana means incredible speed for very low-latency Big Data processing.

Honorable mention: D3

D3 doesn’t make the list quite yet, but it’s close, and worth mentioning for that reason.

D3 is a JavaScript document-visualization library that revolutionizes how powerfully and creatively we can visualize information, and make data truly interactive. It was created by Michael Bostock, who began it during his graduate work at Stanford and is now a graphics editor at the New York Times.

For example, you can use D3 to generate an HTML table from an array of numbers, or use the same data to create an interactive bar chart with smooth transitions.

Here’s an example of D3 in action, making President Obama’s 2013 budget proposal understandable and navigable.

With D3, programmers can create dashboards galore. Organizations of all sizes are quickly embracing D3 as a superior visualization platform to the heads-up displays of yesteryear.

Editor’s note: Tim Gasper is the Product Manager at Infochimps, the #1 Big Data platform in the cloud. He leads product marketing, product development, and customer discovery. Previously, he was co-founder and CMO at Keepstream, a social media curation and analytics company that Infochimps acquired in August of 2010. You should follow him on Twitter here.

 

Social Annotation Site Diigo.com Recovering After Domain Hijacking Nightmare

Diigo, a social bookmarking and annotation site, is finally back online 50 hours after the domain was first hijacked. It’s an incredible story that involves crisis management, blackmail, investigative research, payoffs, a clever thief, and points to potential problems with the domain name registry system that could affect anyone with a website. Diigo’s co-founder called it a nightmare and crisis that he’d like to help other companies avoid.

Diigo has 5 million registered users. For two days this week, they couldn’t access the site. The service is both a collaborative research tool and a social content site. TechCrunch called Diigo “a research tool that rocks” back in 2006. I’m a big fan and started using Diigo (pronounced Dee’go) to bookmark websites after Yahoo gave up on its popular bookmarking site Delicious.

What Happened To Diigo.com

This past Wednesday, I tried using Diigo’s browser bookmarklet to save a site to my library. But, it didn’t work. I went to the Diigo.com site and got one of those junky parked-domain pages that you see when you mistype a URL. My first thought was: did the site close, or perhaps their domain name expired? I checked Diigo’s Twitter account and learned their domain had been hijacked. The Twitter account directed users to an emergency announcement that was put up at diigo.net, not diigo.com.

“Dear Diigo users,
We’re terribly sorry to inform you that we’re experiencing domain hijacking, ie. someone gained access to our Yahoo domain registrar account, and illegally hijacked the domain, www.diigo.com. Very soon www.diigo.com may not be accessible to you until this issue is resolved.

But please rest assured that all our servers and user data are NOT compromised…”

The message also included a way users could help:

“Meanwhile, if you’re an avid Diigo/ twitter user, plesae (sic) help RT and speed up the recovery. Thanks!

@Yahoo @YSmallBusiness, pls help prevent the stealing of http://diigo.com , as done here http://bit.ly/Xqi6Ki …! pls RT!”

On Friday afternoon, after 50 hours, Diigo.com came back online.

Diigo posted an update saying:

“After an unbelieveable 48 hours roller coaster ordeal, Diigo.com is back! While all our servers and user data were completely unaffected during this time, our domain name registered through yahoo domain service (completely separated access from Diigo servers / user data) was “hijacked” for the past 2 days (no, our domain didn’t expire, but was literally stolen and illegally “transferred out”. According to Yahoo’s log, the thief even called into Yahoo and pretended to be the owner to inquire the transfer, if you can believe that!)

Simply looking-around the web shows that domain theft / hijacking has been causing a lot of disruptions and economic damage. During this ordeal, we have learned some valuable lessons to share with you all. Stay tuned after we get some much needed rest first!”

The Backstory

I contacted Wade Ren, Diigo’s Co-founder and Executive Chairman to get the details of what happened. He agreed to share his story in the hope that other companies will learn some valuable lessons and not have a similar crisis.

Ren told me “it’s a nightmare since it was unexpected. It was a crisis because it may damage Diigo the brand if it isn’t resolved quickly. And it was an ordeal to go begging for help and getting frustrating go-arounds.”

The Diigo team learned their site was being redirected Wednesday morning. They did a WHOIS search and learned their domain had been moved from Yahoo to another domain registrar called Aust Domains.
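That check is easy to reproduce. Here is a minimal sketch of the lookup the team ran, assuming the standard Unix whois command-line tool is installed:

```python
# Ask WHOIS who currently controls a domain; a hijack shows up as an
# unexpected registrar. Assumes the Unix `whois` CLI is available.
import subprocess

def registrar_info(domain):
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    return [line.strip() for line in result.stdout.splitlines()
            if "registrar" in line.lower()]

for line in registrar_info("diigo.com"):
    print(line)
```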

Ren called Yahoo to find out what happened. Ren says he had several calls with Yahoo over the course of 30 hours, but Yahoo staffers repeatedly told him they couldn’t do anything to help. They insisted the only option was to file a police report, a route Ren knew would, at best, take a long time to get his domain back.

Ren also discovered that Yahoo is not itself an accredited domain registrar, like GoDaddy, eNom, Tucows, and Melbourne IT. It turns out Yahoo is a domain reseller, and anyone using Yahoo Domains is really using a third-party registrar. Ren’s account used Melbourne IT Ltd., based in Australia.

I discovered that Yahoo discloses this in the fine print of its Small Business Terms of Service.

In section 1.3,

“Certain Services that You purchase or receive from Yahoo! may be provided by one or more third-party vendors, contractors, or affiliates selected by Yahoo! … Currently such third parties include: Melbourne IT Ltd for Yahoo! Merchant Solutions, Yahoo! Web Hosting, Yahoo! Business Email, and Yahoo! Domains customers.”

Ren discovered that the actual registrar of record, Melbourne IT, would need to get involved to resolve this. After much pleading, a Yahoo staffer called Melbourne IT to help, and was told that since the domain had been transferred out, there was nothing they could do.

At the same time, Ren called and sent an email to Aust Domains, where diigo.com was now registered. His email, titled “high traffic domains stolen, please help!” got a boilerplate reply from customer support saying:

“In this case, you will need to contact your domain registrar (Yahoo) to submit a complaint to Verisign (Global domain registry).

Once we receive the formal decision from Verisign, we will take the further action.”

Aust Domains and Yahoo weren’t going to help Ren get his domain back quickly. But then Ren was contacted by someone who could. The thief.

The thief, who had a Yahoo email address, wanted money in exchange for returning Diigo’s domain. Ren says the thief bragged that he had done this many times before and was very careful.

Of course, on principle, Ren didn’t want to do business with a cyber-blackmailer. But he wanted to get his site back as quickly as possible for his users and didn’t want to deal with this problem much longer. The thief was well aware of the timing: Ren says the criminal knew it might still take two weeks for Diigo to recover the domain even with Yahoo’s help, and that it would be a lot quicker to pay him to get it back. Otherwise known as blackmail.

Weighing his options, Ren decided to pay the money and was given the account information at Aust Domains so Diigo could get their site back by pointing the DNS settings back to its servers. Ren doesn’t want to disclose the exact amount of the payment, but it was in the three figures.

Searching the web, Ren found many cases of domain hijacking; in one case, at HowardForum.com, the same hijacker was paid $400. You can read the timeline of that attack here.

In that case, the website owner says his registrar, GoDaddy, worked with Aust Domains to get the domain back. It took 13 days. Howard shared some of the emails he got from the thief:

Hello, I’m ready to sell that domain for 400 $. let me know if you are interested so we can talk about the transaction method.

My offer is valid for 12 hours anyway. Good luck.

I’m not looking for any trouble, You pay and I’ll provide you the info instantly after payment

The important thing is I’m the owner of this domain at this moment and after few weeks I decided to sell this domain…. you are wasting my time by asking unrelated questions.

Back to Diigo, Ren says that at the same time he was in contact with the criminal, a more senior person at Yahoo got in touch with him. This person was much more eager to help.

I sent requests via email and phone to Yahoo for comment. After 22 hours, Yahoo’s PR department told me they would look into it. I’m still awaiting their reply and will update this post with any response.

Lessons Learned

Ren says he’s learned several lessons this past week that he wants to share.

Ren isn’t sure how the thief got the account’s password. He speculates it could have been intercepted on a public Wi-Fi network and perhaps sold to the blackmailer. But all the thief needed to transfer the domain was his email address and password.

The thief was very careful, according to Ren, and doesn’t let his targets know their domain is being hijacked until it’s too late. In this case, the thief didn’t change the Yahoo account password; he simply took the actions needed to transfer the domain to the new registrar.

Since the thief still had access to the Yahoo account’s email, Ren suspects the thief was watching his emails and quickly deleting any that might have warned Ren of the domain transfer. This wasn’t Ren’s main email account, so he didn’t check it as often.

He says 2-step verification of logins could have prevented all this. Yahoo offers 2-step verification where “any sign-in attempt Yahoo! deems suspicious will require a second verification, either answering your account’s security question or entering a verification code we send to the mobile phone or non-Yahoo! alternate email address we have on file.”
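Yahoo’s flavor sends a code to a phone or backup address, but the core mechanism of any second factor is the same: a stolen password alone is no longer enough to log in. As a sketch of one common implementation, here are time-based one-time passwords (RFC 6238) via the pyotp library; this illustrates the idea, not Yahoo’s actual system:

```python
# The idea behind a second verification step: a short-lived code derived
# from a shared secret, checked in addition to the password.
# Sketch only; uses the pyotp library, not Yahoo's mechanism.
import pyotp

secret = pyotp.random_base32()  # provisioned once when the user enrolls
totp = pyotp.TOTP(secret)

code = totp.now()               # what the user's phone would display
print(code, totp.verify(code))  # server-side check -> True
```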

Ren says that, unfortunately, this security feature is still in beta and does not seem to work as promised. After the hijacking happened, Ren says he tested his account and was surprised to find that he could still log in without the verification step. When Ren told Yahoo about this problem during the hijacking, they asked him to fill out a bug ticket to report it.

Would the domain-locking feature offered by Yahoo and other registrars have helped? Ren says no; it only provides false hope. Since the thief had access to his account, the thief was simply able to turn domain locking off. And the thief was able to get the domain-transfer authorization code, designed to prevent fraudulent or unauthorized transfers, because he had access to the account.

Ren says he’s learned it’s better to use an accredited domain registrar directly rather than a reseller.

Based on his experience, Ren says the domain name registration system is flawed: it needs a mechanism to freeze a transfer and revert the domain to its pre-transfer state immediately after a dispute is submitted, pending further investigation.

Ren makes a comparison to the online banking industry. If someone steals your financial account, you have more recourse and security, since further verification steps are typically required. But even though your website might be your most important business asset, you don’t have the same protection from your domain host, and there ought to be better procedures and recourse in place to prevent this from happening.

Until that happens, criminals will still be out there taking advantage of the situation and preying on unsuspecting website owners.


Saturday, October 27, 2012

Arguments About Politics Are Like Arguments About Phones

We’ve all been inundated with political back and forth over the past few weeks. Curse you, social networks! And you too, Internet! And yet, what’s really frustrating isn’t the political back and forth, but the content that is being volleyed back and forth. More specifically, the lack of content. Watch 9 seconds of any recent episode of the Daily Show and you’ll see that networks, pundits, experts, you-name-its are all happily willing to spend their days blowing an out-of-context sentence completely out of proportion, making said sentence seem like the biggest offense to politics since Watergate.

As I sat at dinner one night, trying to tune out a completely nonsensical argument between a Romney-fanatic and an Obama-fanatic about which candidate was stupidest (actual word used in the argument), it came to me: this is the same type of nonsensical arguing that goes on between iOS-fanatics and Android-fanatics. In fact, the tech community even names its scandals after political scandals.

But, this is good news! It means that there’s a way to relate politics (zZzZz…) to technology (!!!). Don’t believe me? Here’s a breakdown of the similarities the two share:

One-Sided Opinions: Check.

One-sided opinions are rampant in both debates. You’re either for iOS or against iOS (see what I did there?). People are quick to share their opinions and loath to listen to others. I’ve heard people go on for hours about all the awesome things you can do on Android that you simply can’t do on iOS. Or about all the great things Obama will finally be able to accomplish if he’s re-elected.

The problem with one-sided monologues (which is how they tend to come across, especially thanks to Twitter and Facebook, et al.) is that they are anything but comprehensive. Sure, you can talk all you want about Romney’s plan for letting in high-skill immigrants, but that’s only 1% of what he will do while he’s president.

Note that these monologues are especially useless because they don’t help anyone: they don’t help the firm believers, who don’t need any more convincing in the first place; they don’t help the firm opposers, whom they just piss off; and they don’t help anyone in the middle, because they’re too one-sided to provide any actionable information.

One Size Fits All: Yup

It took me less than 3 seconds to find this:

Here’s a good check to see if you’re well informed and thinking rationally: If you are able to determine when someone else (e.g. a friend or family member) should vote for the other candidate or opt for the other operating system based on their needs, you pass! If you think that one choice is unequivocally better than another, be it Android or Romney, iOS or Obama, Windows Phone 8 or whatever the heck the name is of the Libertarian candidate1, then you fail. Miserably.

Blowing Things Out of Proportion: You Betcha

At some level this is understandable. How do news networks or tech blogs fill all that white space and generate revenue? By making everything they talk about seem incredibly important. There’s an important distinction to be made here, though: it’s one thing if news networks or blogs do this, but it is an entirely more egregious offense for consumers to partake in this sort of nonsense. Just because something is presented to you via a media channel does not mean you have to accept its importance at face value. Reviews and opinions may be helpful, but keep in mind that they aren’t infallible.

You may be thinking, “Okay, so what?” Well here’s where the kicker comes in: Politics actually matter. Sure, you use your phone every day and what mobile OS you opt for can (arguably) have a significant impact on your quality of life. But, it won’t make it into the history books. It won’t affect the next generation’s quality of life. Or our competitiveness as a country. Or our immigration policy. Politics, on the other hand, do. Keep that in mind as election day approaches.

So how did I decide on iOS vs. Android, Obama vs. Romney? Simple, I made my decision like any good (mechanical) engineer would: I collected facts, weighted the issues most important in my eyes, tallied the results, did a gut/reality check, and marked my ballot/bought my phone. Done. No petty arguments, no bugging everyone with my obviously superior decision, and no pressuring other people to follow my lead.

By the way, if you’re wondering who I am, the answer is: nobody, really. I’m mostly an outsider to the programming-centric startup environment that TechCrunch focuses on, although I do have a pretty sweet job (I may or may not work on designing and building the most lethal aircraft currently flying). If my lack of a Klout score discredits my opinion, just know that I didn’t like you either.

1Side note: the comparison of Windows Phone 8 to the Libertarian candidate is pretty apt. Nobody is willing to vote for either because, as of this writing, neither stands a chance of doing more than crawling home with a laughable third-place share. That’s not to say that no one likes them, or that it’s good for the general public that they don’t stand a chance; I’m just calling it how it is (for now, at least).

How Long Will Programmers Be So Well-Paid?

Last week Glassdoor published its most recent software engineering salary report. Short version: it pays to code. Google and Facebook employees earn a base salary of ~$125K, not counting benefits, 401k matching, stock options/grants, etc., and even Yahoo! developers pull in six figures. Everyone knows why: ask anyone in the Valley, or NYC, or, well, practically anywhere, and they’ll tell you that good engineers are awfully hard to find. Demand has skyrocketed, supply has stagnated, prices have risen. Basic economics.

But why has the supply of good engineers remained so strained? We’re talking about work that can, in principle, be performed by anyone anywhere with a half-decent computer and a decent Internet connection. Development tools have never been more accessible than in this era of $100 Android phones, free-tier web services, and industry-standard open-source platforms. Distributed companies with employees scattered all around the world are increasingly normal and acceptable. (I work for one. We’re hiring.) And everyone knows that software experts make big bucks, because software is eating the world. What’s more, technology may well be destroying jobs faster than it creates them. Basic economics would seem to dictate that an exponentially larger number of people will flood into the field, bringing salaries back down to earth despite the ever-increasing demand.

But reality has stubbornly refused to follow that dictation. Even way back during the first dot-com boom people were already predicting that American and European coders would soon be driven into the poorhouse by a flood of competition from low-cost nations like India and Brazil. But there’s still no sign of that happening. Why not? And when will it happen, if ever?

Well. I have a theory. I’ve spent the last couple of days chilling out in Chiang Mai, northern Thailand, a city where you could live like royalty and save money while making merely half of Google’s average developer salary. Which doesn’t tempt me (I prefer Where Things Happen to Away From It All), but has tempted the thousands of expats who now live here. And their presence has sparked a possible explanation for this apparent paradox.

To be clear, I’m only talking about very-good-to-excellent developers. Everyone claims to only hire “A-listers,” and that may even be true of a select few companies, including Facebook and Google. (Though even B-listers and C-listers are in relative demand.) Think of such skilled engineers as emerging from the end of a pipeline which draws from the entire population of the world. Economic incentives act like gravity, pulling almost everyone down that pipe; so what are the stages that filter people out of it nonetheless?

First, you have to grow up wealthy enough to have a decent education, some exposure to technology, and the ability to choose between options in your life, which immediately rules out most of the planet. Then you have to have both an interest in and a talent for development, and there’s evidence that that talent is rare: “between 30% and 60% of every university computer science department’s intake fail the first programming course.” Then you either have to get a good professional education (e.g., at a good university like India’s IIT campuses) or supplement a crappy one with home hacking or on-the-job training.

(Or maybe, maybe, learn-coding-at-home sites like Codecademy and the like, but I’m pretty skeptical about those. I’ve said before that I think such services are like learning French from books, and then going to France and finding out that you can’t actually communicate, and that it would take you years to become fluent. Programming is like English: it’s fairly easy to learn the rudimentary basics, but very hard to master.)

Regardless, all of those filters should be allowing many more people through every year. The world as a whole is much wealthier than it was twelve years ago. (That’s when I was last in Thailand. This time around it’s a different and far more prosperous place.) A fixed proportion of people may have the programming gene (though I’ll be watching Estonia’s experiments with interest), but there’s little doubt that interest has erupted. Top-notch university courses are available online worldwide, and industry-standard development tools are within reach of all.
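To see why the pipeline metaphor bites, multiply a few filters together. Every rate below is hypothetical, picked only to show the shape of the argument, not measured:

```python
# Back-of-the-envelope pipeline model. All rates are hypothetical and
# purely illustrative: the point is how multiplicative filters shrink a pool.
population = 7_000_000_000

stages = [
    ("education + exposure to technology", 0.20),  # hypothetical
    ("interest in programming",            0.02),  # hypothetical
    ("aptitude (passes intro courses)",    0.50),  # hypothetical
    ("solid training or self-teaching",    0.30),  # hypothetical
    ("thousands of hours toward mastery",  0.05),  # hypothetical
]

remaining = population
for label, rate in stages:
    remaining = int(remaining * rate)
    print(f"{label:38s} -> {remaining:>13,}")
```

Even with generous guesses at every stage, billions in becomes a few hundred thousand out; loosening the early filters helps, but the last stage dominates.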

But it’s the very last stage that matters most. Even after you’ve gotten your basic programming education, you still have to put in your thousands of hours to achieve mastery. That doesn’t mean doing the same thing again and again for thousands of hours; it means challenging yourself with new tools, new languages, new objectives. Otherwise you get people writing code of the sort I see all too often these days, when HappyFunCorp (my employer) is brought on to clean up someone else’s hot mess:

All too true. (From Abstruse Goose)

My theory is that if it’s sheer economics, the lure of a better paycheck, that initially draws you into software engineering, then you’re much less likely to master it. Instead you’ll advance to the point at which you’re reasonably happy with your paycheck, which studies indicate is about $70,000/year in America (but much less in Chiang Mai or Bangalore). So there are many more software engineers out there, but the ones drawn in by economic forces are content to compete with each other for mediocre (but happy-making) jobs, rather than put in the thousands of hours of mentally gruelling work required to become really good at what they do.

(Don’t get me wrong: that work is fun, too. But undeniably gruelling.)

So why aren’t there more people drawn into the field out of sheer interest? Because when you’re poor, which most of the world is, money is more important than passion. It’s not until you reach a near First-World level of development that pursuing your passions rather than escaping poverty seems like a reasonable and/or admirable thing to do. So if my theory is correct, the shortage of excellent engineers will eventually ease or even end, as the world grows wealthier everywhere … but not for another decade or more.

Image credit: Don Hankins, Flickr.