Two ways to watch the SOTU

Last night I watched the 2011 State of the Union address, streaming it via YouTube in 720p over the home wifi and out to my flatscreen Samsung TV here in Finland. I found the archive easily enough and noticed that the White House had conveniently split the screen to show helpful infographics on the right, synced up with what President Obama was saying on the left.

We are now officially a PowerPoint nation. Simple talking is no longer enough to engage us.

I’ve always voted Democrat and generally support the President, but I take any spoon-fed message such as the one above with a grain of salt (why is the kid on the right jumping for joy? Is it the tax cut?). There are other ways to look at the speech, and the Vox project over at Rutgers is an interesting one.

Vox Event Analytics syncs a tweet feed, filtered by keywords and hashtags, to the speech and plays those tweets back in the right margin while you view it. Alongside this synchronized playback are charts of tweet volume, keyword analysis, and tweet sentiment over time. Vox, as the name implies, tries to reflect the reaction of the people (as expressed through their tweets), and it’s interesting to see what’s being said as the speech unfolds.
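To make the mechanics concrete, here’s a rough sketch (not Vox’s actual code, and the keyword and sentiment word lists are made up) of the kind of processing involved: keep only the tweets that match the filter terms, bucket them by minute, and apply a toy sentiment score so that volume and mood can be played back alongside the speech.

```python
# Illustrative only: bucket keyword-filtered tweets by minute and score a
# naive sentiment so volume and mood can be replayed alongside the speech.
from collections import Counter
from datetime import datetime

KEYWORDS = {"#sotu", "obama", "jobs", "deficit"}   # hypothetical filter terms
POSITIVE = {"great", "hope", "win", "love"}        # toy sentiment lexicons
NEGATIVE = {"fail", "worse", "angry", "broke"}

def minute_buckets(tweets):
    """tweets: iterable of (timestamp, text) pairs from a search feed."""
    volume, sentiment = Counter(), Counter()
    for ts, text in tweets:
        words = set(text.lower().split())
        if not words & KEYWORDS:
            continue                               # drop off-topic tweets
        minute = ts.replace(second=0, microsecond=0)
        volume[minute] += 1
        sentiment[minute] += len(words & POSITIVE) - len(words & NEGATIVE)
    return volume, sentiment

sample = [(datetime(2011, 1, 25, 21, 5, 30), "Great line on jobs #SOTU")]
print(minute_buckets(sample))
```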

Which do you prefer?

Skeuonyms and Collapsitarians

Just finished Kevin Kelly’s latest book, What Technology Wants, and learned two new words/phrases.

Skeuonym – an expression left over from an older technology that is no longer in use. Examples include “rewinding the tape,” “dialing the phone,” “filming a movie,” and “cranking the engine.”

Anti-civilization Collapsitarian – folks like the Unabomber who view technology with suspicion, fear that mass adoption is the beginning of the end, and seek to hasten the collapse of a society built around the worship of technology.

The book was thought-provoking and I’m sure I’ll be referring to it again and again. My full review is here on Goodreads.

More LinkedIn Visualizations

A couple more visualizations that use the LinkedIn API and that I missed in the previous post.

LinkedIn Maps, announced yesterday, creates a poster-ready view of your LinkedIn network, complete with color-coding that shows how everyone is related. This type of visualization can reveal interesting things about your career. In my case, it looks like my colleagues from Dow Jones are off on their own, my Six Apart, Yahoo, and web 2.0 friends are much closer together, and my more recent connections from Nokia are scattered in between.

LinkedIn Infinity uses Cooliris technology to let you look at nothin’ but faces. Nothing really innovative here, but a crowd-pleaser nonetheless.

Finally, LinkedIn Signal, while not a visualization, is an extremely useful way to parse your social network feeds using your LinkedIn network as a filter. This is really powerful because it lets you filter away the noise and see updates filtered by Industry, Location, School, or Company. Add multiple filters together and you quickly get very focused results. Want to see what people in your network (and their friends) who work at Microsoft in Japan are saying about Facebook? Tick a few boxes and you have it.

All this and more are featured on the LinkedIn Labs page.

Smart LinkedIn Integration

Congratulations to whoever is turning up the heat over at LinkedIn. It’s been just over a year since they opened up their API, and now we’re really starting to see the fruits of that effort. The latest integration with Fortune on their 100 Best Companies to Work For demonstrates how a professional social network can add value to a web publication. Browse through the list while logged into LinkedIn and on each company’s profile page you’ll see a list of any of your connections who work at that company. It’s like the old Six Degrees game but with a purpose. You’ll be surprised at who shows up (Hi Mark!).

  • The hackday-inspired Resume Builder takes the data you’ve already added to your profile and gives you a series of templates for a cleaner output in PDF format suitable for sending via email or printing.
  • LinkedIn Share buttons that you can add to your site work just like the Facebook Like button, crowdsourcing the curation of the web.
  • Integration with the OneSource iSell product combines their “triggers” with your LinkedIn network to help sales teams connect with their prospects through existing relationships.
  • Bump integration making connecting via LinkedIn easier than ever.
  • A Microsoft Outlook social connector to add LinkedIn profile information to your email and contacts.
  • Ribbit Mobile integration resulting in a product they call Mobile Caller ID 2.0. It installs on your mobile phone (sorry, UK and US numbers only) and does a dynamic lookup on incoming numbers to see if LinkedIn (or other connected networks) has any information about who is calling and what they have recently shared on the social web.
  • LinkedIn Tweets, an application with a cool, somewhat hidden feature: it creates a Twitter list of all your LinkedIn connections who have Twitter accounts and (here’s the cool part) automatically adds new members to that list as you add new connections on LinkedIn.

All this is on top of heaps of new features they’ve added to the site itself, including the faceted search UI and the ability to customize your profile, to name just two. Really stellar work.

Finally, what prompted this whole post to begin with (I’m not sure how widespread these emails are) was this customized visual summarizing who in your network has changed jobs. What a contrast to the old, text-heavy, anti-social LinkedIn of 2009 where “connections go to die.” The new LinkedIn is much more vibrant and connected with the world outside. Looks like they’ve taken Dave McClure’s advice from over a year and a half ago, when he berated them and screamed, “it’s all about the faces.”

LinkedIn 2009
LinkedIn 2011

Celebrity Sellouts in Japan

Here’s a bit of Friday fun for you.

Way back when the internet was limited to newsgroups and those with access to a university network, I put up a bunch of scans I had made of celebrities shilling for Japanese products. The other day, while clearing old files off my wife’s MacBook, I ran across them again and have reposted them here on everwas.com. Enjoy.

Celebrity Endorsements

Charlie hawking shoes.

Facebook and your Contact Info, a Proposal

Facebook just announced that they are suspending a previously announced expansion of their API that would allow third-party developers to request access to a user’s address and phone number. Some history and a modest suggestion follow.

When Facebook announced Facebook Connect in 2008, Dave Morin wrote about a concept he called Dynamic Privacy. Facebook Connect would let developers access your profile, but data retention policies required them to flush this cache of data and refresh it every 24 hours. This way, Facebook could guarantee your data would not only be current but also be deleted if you decided to revoke an application’s permission to access your profile.

Since then, Facebook’s data caching policies have been relaxed. With every Facebook platform developer hitting their servers for a data refresh every 24 hours, you can imagine the impact this had on Facebook’s infrastructure. In April 2010, Facebook announced that the 24-hour data caching policy would be removed. Developers rejoiced. Facebook operations could relax again. But for users, the promise of Dynamic Privacy was no more.

Fast forward to last Friday’s announcement that Facebook would allow developers to ask for access to your profile’s contact information, such as home address and phone number. Without Dynamic Privacy, an application could ask for access to your contact information and keep it. One stray click could give out some very personal data. There’s no way to opt out of giving out this information in error, and no way to put your phone number or address into a special bucket that is locked down to all but the handful of mobile or shopping applications that would be greatly enhanced by access to your phone number.

Rules-based Privacy

Is there a way for Facebook (or any service) to grant access to information provided the conditions under which I grant this access are maintained? How can Facebook ensure that any time I delete my information it will also be removed from any site that ever had access to it? What if I store my private information with a site such as threewords.me which, after only a few weeks in play, is auctioned off to the highest bidder? Is there a way to require the eventual new owner to re-acquire permission to my contact data? The Facebook Platform Policy currently states:

You will not sell any data. If you are acquired by or merge with a third party, you can continue to use user data within your application, but you cannot transfer data outside your application.

My reading of this is that the new owner of threewords.me can use the data as long as it is used in conjunction with the operation of threewords.me, including any future features they add as they improve the site to meet their needs. In 2008, the passage of 24 hours required a data refresh; in 2011, at a minimum, a legal change of control should do the same. The Platform Policy further states:

You must not give your secret key to another party, unless that party is an agent acting on your behalf as an operator of your application. You are responsible for all activities that occur under your account identifiers.

What if the statement were re-written so that an application’s secret key can never be transferred? Any new owner of an application could run it using their own secret key, but doing so would kick off a refresh of all requested user data. This request could be sent out via a notification on Facebook Messaging or via an alert that appears the next time the user tries to use the application or web site. Maybe this is already the case, but it would be better to state it clearly.

So my modest proposal to bring back the original intent of Dynamic Privacy is:

  1. Revision of the Facebook Platform Policy to clearly state that a change in ownership requires re-authorization of granted user permissions.
  2. Enforcing limitations on transferring application secret keys by tying each key to verified named accounts only. An example of this is how domain names are tied to an administrative and technical contact who are legally and technically responsible for activity on that domain.
  3. Requiring all applications to support the Deauthorization Callback and extending it with an API call that is authorized to overwrite or remove data on the third-party server (a rough sketch of such a callback handler follows this list). In the DNS, changes made at the root and authoritative name servers propagate down to the caching servers below them. Might a similar root-server role be appropriate for Facebook as the provider of record for your private data stored on all downstream applications?
  4. The option for users to place personal data into a more secure area that requires more than a single click to grant access: something with two-step authorization that sends me a confirmation email informing me that access has been granted.
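On point 3, the Deauthorization Callback already exists; the missing piece is the expectation that third parties act on it. Here’s a minimal sketch of what honoring it might look like: verify the signed_request that Facebook POSTs to the callback URL, then purge whatever the app has cached for that user. The purge_user_data helper and the app secret are placeholders, not part of any real API.

```python
# Sketch of a third-party app honoring Facebook's Deauthorization Callback:
# verify the signed_request POSTed to the callback URL, then purge cached data.
import base64
import hashlib
import hmac
import json

APP_SECRET = b"your-app-secret"  # placeholder

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def parse_signed_request(signed_request: str) -> dict:
    sig_b64, payload_b64 = signed_request.split(".", 1)
    expected = hmac.new(APP_SECRET, payload_b64.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("signed_request signature does not match")
    return json.loads(b64url_decode(payload_b64))

def on_deauthorize(signed_request: str) -> None:
    payload = parse_signed_request(signed_request)
    purge_user_data(payload["user_id"])  # delete every cached field for this user

def purge_user_data(user_id: str) -> None:
    # hypothetical: remove the user's profile, address, and phone from our store
    print(f"purging cached data for user {user_id}")
```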

The best way to build trust is to put in place features that give users control and the option to take something back. These are the post-lunch ramblings of an observer, so please correct me if what I’m suggesting is crazy talk!

Raindrops and Private Clouds

Before Christmas I posted about the possible break-up of clouds. For the past five years or so, the usual suspects such as Yahoo and Google, and more recently Facebook and a revitalized AOL, have been sucking up smaller collectives of socially active sites in search of rich pockets of user engagement.

Clouds are an apt metaphor because we’re reaching a time when some of these large, ad-supported clouds are getting too heavy and are starting to look for ways to offload sites that don’t monetize, either by shutting them down or by selling them off. Think of the threat late last year to shut down delicious.com as the first cloudburst, which resulted in a shower of users grabbing their data and fleeing that cloud in search of a new home.

FourSquare announced that they’ve added photo uploads to venues for their check-in service. This leads me to ask: why do they make me upload new photos for places I’ve already been? I’ve got years’ worth of GPS-encoded photos sitting on Flickr. 4Sq could cross-reference the location and time stamp on my photos, match them up with my check-in history, and get a bunch of photos for venues right away.

Why doesn’t FourSquare let me push in photos from my Flickr account? Are they worried about mismatches? They could use a little Machine Tag foo and let me select which photos to link to a location. Most likely it’s just a pain to build a connector. Better to start over and build up your own dataset, right? It’s cleaner, more current, and they avoid the legal hassle of having to partner with Yahoo. Much better to own the data, right?
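To show how straightforward that cross-reference could be, here’s a hedged sketch: for each check-in (venue latitude/longitude plus a date), ask Flickr’s public flickr.photos.search method for my own geotagged photos taken near that spot on that day. The check-in values, API key, and user id are placeholders, and the OAuth handshake is omitted.

```python
# Sketch: match a check-in against my geotagged Flickr photos by querying
# flickr.photos.search with a location, radius, and date range.
import json
import urllib.parse
import urllib.request

FLICKR_API_KEY = "your-api-key"   # placeholder
FLICKR_USER_ID = "your-nsid"      # placeholder; auth handshake omitted

def photos_near_checkin(lat, lon, taken_date, radius_km=0.2):
    params = {
        "method": "flickr.photos.search",
        "api_key": FLICKR_API_KEY,
        "user_id": FLICKR_USER_ID,               # only search my own photos
        "lat": lat, "lon": lon, "radius": radius_km,
        "min_taken_date": f"{taken_date} 00:00:00",
        "max_taken_date": f"{taken_date} 23:59:59",
        "format": "json", "nojsoncallback": 1,
    }
    url = "https://api.flickr.com/services/rest/?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())["photos"]["photo"]

# Hypothetical check-in: a cafe in Helsinki on a given day.
for photo in photos_near_checkin(60.1699, 24.9384, "2011-01-20"):
    print(photo["id"], photo.get("title"))
```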

The nagging problem with tying venue photos to images hosted on another cloud is that it opens 4Sq up to dependencies. Do they really want to rely on Flickr to host their venue photos? They lose editorial control over those photos, and if a photo turns out to be offensive or violates someone’s copyright, who is at fault? FourSquare? Flickr? Yahoo? Most likely you’re going to have to cut some kind of deal, which means the biz dev guys have to get involved. Contracts, SLAs, a big pain that limits your options in the future.

Tim O’Reilly posted a while back that the tendency of Web 2.0 companies is to monopolize their vertical to secure control and cut dependencies:

One of the points I’ve made repeatedly about Web 2.0 is that it is the design of systems that get better the more people use them, and that over time, such systems have a natural tendency towards monopoly.

If big companies get too protective of their data, and the legal hassles around the free exchange of data make it harder for consumers to connect their data in these clouds together, we’ll all be forced either to throw our lot in with the single cloud that gives us the most complete suite of connected services (Facebook or Google) or to risk tenuous connections in search of our own best-of-breed solution.

Consider the alternative: consumers hosting their own data. Check out Pogoplug, a neat little service that puts an Ethernet port on the back of an external hard drive sitting on your desk and connects it directly to the Internet, turning that hard drive into your own little “private cloud.”

What if your Pogoplug held all your photos, blog posts, status updates, scrobbling history, and other lifestream detritus? You could stream it out the back and use it to feed Flickr, Facebook, and your other favorite caching layers where people can view it. Again, I’m not suggesting you serve the internet at large from this little box on your desk; that would be madness. Just keep all your originals there and use your favorite social network or photo/video/link-sharing service as the copy that feeds your fans. The important point is that the source, the seed for all these large clouds to which you syndicate, is under your control.

If a shiny new photo-sharing startup catches your eye, you can give it a shot by forking off a feed of your photos to its API endpoint and getting started with a collection of your own stuff on their service. No need to export from the old photo-sharing site to the new one; you’ve got the raw data sitting on your “private cloud” and can start with a clean copy of your entire archive.
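As a sketch of what “forking off a feed” might look like in practice (the upload endpoint, token, and archive path are all hypothetical, and the requests library is assumed):

```python
# Sketch: push every original from the private archive to a new photo service's
# (entirely hypothetical) upload endpoint, so the copy starts out complete.
from pathlib import Path

import requests

ARCHIVE = Path("/pogoplug/photos")                             # my originals
UPLOAD_URL = "https://api.shinynewphotos.example/v1/upload"    # hypothetical
API_TOKEN = "your-token"                                       # placeholder

def syndicate(archive: Path = ARCHIVE) -> None:
    for photo in sorted(archive.glob("**/*.jpg")):
        with photo.open("rb") as fh:
            resp = requests.post(
                UPLOAD_URL,
                headers={"Authorization": f"Bearer {API_TOKEN}"},
                files={"photo": (photo.name, fh, "image/jpeg")},
            )
        resp.raise_for_status()
        print(f"pushed {photo.name}")

if __name__ == "__main__":
    syndicate()
```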

For further reading, there’s a healthy thread between Jeffrey Zeldman, Tom Henrich, Jeff Croft, Tantek Çelik, Kevin Marks, Glenda Bautista, Andy Rutledge, and others about the methods, and even the necessity, of hosting your own data. Tantek, for one, has put his money where his mouth is and is busy writing software and pushing this vision.

I’m building a solution, bit by bit. It’s certainly incomplete, and with rough edges (Jeffrey has pointed out plenty of the areas that need work), but iteratively improving as I find time and inspiration to work on it.

I’d rather host my data and live with such awkwardness in the open than be a sharecropper on so many beautiful social content farms.

This is what I mean by “own your data”. Your site should be the source and hub for everything you post online. This doesn’t exist yet, it’s a forward looking vision, and I and others are hard at work building it. It’s the future of the indie web.

Tantek.com, On Owning Your Data

If this kind of stuff interests you, check out IndieWebCamp in Portland this June.

Social Cruft

First I read through a longish piece outlining how Forbes is re-inventing itself as a hub that harvests its audience and transforms them into content producers in a new media factory. Then I read about how Gawker is embracing the transformation of the web into a visual medium, prepping their web pages for the eventual living-room, lean-back consumption model.

And now I click through (via twitter of course) to land on this abomination of design from MSNBC.

I count no fewer than twelve potential interaction points to share or otherwise spindle this piece back into the social sphere. That isn’t even counting the 50+ links drawing me off the page. What really sets me off are the four icons next to the scroll bar. Some genius thought that click-through rates on those little gewgaws would increase engagement. Look at it: there are only two lines of the article above the fold!

All I can think of is that this site looks like the kid in your neighborhood who decked out his bicycle with fancy horns, reflectors, and baseball cards clothespinned to the spokes to make his old Schwinn look cooler than it really was.

I think we’re in the awkward, adolescent stage of mass media’s adoption of social media. Eventually saner minds will prevail, and attention and praise will flow toward more nuanced design. Less is more, my friends. Really.

Japan Shatters Tweets Per Second Record

I spent the year-end holidays in Tokyo with family and friends. As with every visit, I was blown away by the pace and energy of the city and came away recharged with optimism. I am especially happy when I hear about new companies such as Twitter coming to Japan and finding a fit. Not only is there a television drama centered on characters who tweet to each other, but corporate Twitter handles are regularly mentioned in advertising, and the mass-media assumption is that everyone knows what Twitter is and how it works.

The latest proof point of Twitter’s growing popularity (Japanese make up 16% of all Twitter users, compared to less than 10% in the US) is a post on the Twitter blog: almost 7,000 tweets per second, more than double the previous record of around 3,000 set during the World Cup.