In part three of his five-part piece on Media Futures, Seth Goldstein, co-founder of Majestic Research and former Entrepreneur in Residence at Flatiron Partners, comments on the development of the Web API.
As of 2005, the Internet has replaced the desktop PC as the primary platform for APIs. Unlike Microsoft and the desktop, however, nobody controls the web as a platform, though certain companies do oversee enormous pools of user data and have the opportunity to direct that traffic as they see fit.
He lists several examples of established websites (Amazon, Google, eBay, etc.) publishing open APIs that yield secondary applications developed by the general public, and goes on to call the web-based API, put into the hands of the developing public,
the hinge between the algorithm that processes raw human meta data and the moment of alchemy that occurs when you discover something you didn’t even know you were looking for, courtesy of some people that you didn’t even know that you knew.
It’s John Battelle’s Database of Intentions set free by a collection of vendors & search engines which open up their data so that it can be collated and analyzed in new and exciting ways.
This is a great case of two APIs being hooked up to make something greater than either service could offer on its own. Paul Rademacher, a tech lead for animation tools at DreamWorks, has connected Google Maps to Craigslist to present a visual UI for real estate listings. You can set your location and price parameters and get a map that you can zoom and scroll, with pinmarks for every “hit.”
A yellow pin indicates that photos are associated with the listing and clicking on the pin will bring up the information from Craigslist as shown in the image on the left.
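The mashup pattern here is simple: take listings from one service, pair them with coordinates from another, and emit map pins. Below is a minimal sketch of that join in Python. The listing data and the geocoder are local stand-ins, not the real Craigslist or Google Maps APIs, and all names are hypothetical:

```python
# Toy mashup: join listing data (one "service") with geocoding (another)
# to produce map pins. Everything here is a stand-in for illustration.

sample_listings = [
    {"title": "2BR apartment", "price": 1800, "address": "123 Main St", "photos": True},
    {"title": "Studio", "price": 950, "address": "456 Oak Ave", "photos": False},
]

def fake_geocode(address):
    """Stand-in for a real geocoding service: address -> (lat, lon)."""
    coords = {"123 Main St": (37.77, -122.42), "456 Oak Ave": (37.80, -122.41)}
    return coords.get(address)

def to_pins(listings, max_price):
    """Filter listings by price and attach coordinates for the map."""
    pins = []
    for item in listings:
        if item["price"] > max_price:
            continue
        latlon = fake_geocode(item["address"])
        if latlon:
            # Yellow pin when photos are attached, red otherwise,
            # mirroring the UI convention described above.
            color = "yellow" if item["photos"] else "red"
            pins.append({"latlon": latlon, "color": color, "title": item["title"]})
    return pins

pins = to_pins(sample_listings, max_price=2000)
```

The point isn’t the code; it’s that neither service had to plan for the other, because each exposes its data in a form a third party can join.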
I dreamed something like this would be possible, with other layers added in as needed, like those old Mylar overlays you would see in atlases or anatomy textbooks. I can already think of two overlays I’d want as a homebuyer: comparables and school districts. Once geolocation-enabled web services expose this data, building the overlays would be fairly trivial; getting the data in the first place, I think, is the much harder part.
The URLs tell you something about the two companies.
code.google.com or developer.yahoo.net
At Google it’s about the code, at Yahoo it’s about the developer.
PS. Six Apart has just launched its own version of the above: www.sixapart.com/pronet/docs/powertools. So I guess that means we’re not about the code or the developer. It’s the tools, and we hope you use ones that give you Power!
Just in time for its 10th anniversary, Yahoo has opened up API access to its search platform. Allowing programmatic access to search services via URLs is a trail that Google has already blazed, but in what may be another arms race like the one we saw with hosted email storage, Yahoo allows five times the number of queries: 5,000 over a 24-hour period.
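“Programmatic access via URLs” really is as simple as it sounds: you assemble a query string and fetch it. Here’s a minimal sketch; the endpoint and parameter names are illustrative placeholders, not the actual Yahoo (or Google) API spec:

```python
from urllib.parse import urlencode

def build_search_url(query, app_id, results=10):
    """Assemble a REST-style search request URL.

    The base URL and parameter names below are hypothetical stand-ins
    for whatever the service's documentation actually specifies.
    """
    base = "https://api.search.example.com/websearch"
    params = {"appid": app_id, "query": query, "results": results}
    return base + "?" + urlencode(params)

url = build_search_url("web services", app_id="my-app-id")
```

Fetching that URL returns structured results (typically XML) that any script can parse, which is what makes the hacks and mashups below possible.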
There’s already an O’Reilly Hacks book in the queue (what’s with the cowboy boots on the cover anyway?), a growing list of applications hosted on a wiki, and a developer’s weblog running on our favorite blog platform, Movable Type.
Jeremy Zawodny has done a great job of bringing together all the right tools to get this ecosystem off the ground and is clearly the booster that made it happen. Great work!
Opening up access this way ties in nicely with Yahoo’s media hub strategy, which distributes their services in order to drive people back to Yahoo properties, boosting page views for advertising and brand awareness. The question on everyone’s mind is whether the Search API set is a trial balloon for a broader rollout of other services. Yahoo IM? Finance? Music? Maps? Horoscope API anyone?
I mean it’s the classic example of Clayton Christensen’s innovator’s dilemma. When HTML came out everybody said “Hey this is so crude, you can’t build rich interfaces like you can on a PC – it’ll never work”. Well it did something that people wanted, it kind of grew more and more popular, became more and more powerful, people figured out ways to extend it. Yes a lot of those extensions were kludges, but HTML really took over the world. And I think RSS is very much on the same track. It started out doing a fairly simple job, people found more and more creative things to do with it, and hack by hack it has become more powerful, more useful, more important. And I don’t think the story is over yet.
I realize I’m getting liberal with the “Milestone” tag, but it truly seems as if we are living in historic times. I just wrapped up an amazing three days at the Web 2.0 conference in San Francisco, where I saw history being made all around me. Maybe I’m still new to this industry, but others around me shared my view that the mood of the crowd was upbeat and excited about the possibilities of the future. What was nice, though, is that despite the potential to go wild-eyed and overboard, there were enough scarred veterans in the crowd to keep things realistic. Many of the things being discussed were tried during the heady bubble days, but this time it looks like we’ve got the platform to really make them happen.
Remember moreover.com? RSS before anyone knew what to do with it. Geocities was like weblogs 1.0, but instead of letting people subscribe to an RSS feed, you had a small text entry box where you would ask for an email address so you could push out a notice when the page updated. One good line from Martin Nisenholtz of nytimes.com: the promise of RSS is that he can now send content directly to the reader without having to cut bad deals with Biz Dev execs at portal companies. Web 2.0 is all about cutting out the intermediary.
I listened in on the interview behind this article with Google’s Peter Norvig in eWeek and later talked to him about sentiment analysis as it might apply to blog posts. He said that Google is applying techniques to skip over indexing spam comments such as “I really like your page, have you seen www.spampage.com?”
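The spam pattern Norvig describes (generic flattery plus a link) is easy to picture as a heuristic. Here is a toy sketch in that spirit; this is my illustration of the idea, not Google’s actual filtering technique:

```python
import re

# Toy heuristic: flag comments that pair generic praise with a link.
# Both patterns are illustrative, not anyone's production rules.
PRAISE = re.compile(r"really like your (page|site|blog)", re.I)
LINK = re.compile(r"(https?://|www\.)\S+", re.I)

def looks_like_comment_spam(text):
    """True when a comment contains both generic praise and a link."""
    return bool(PRAISE.search(text) and LINK.search(text))
```

A real system would weigh many more signals, but even this crude rule catches the example quoted above.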
I asked Jerry Yang if he had any advice for a software startup interested in going after the corporate market. Yahoo had an enterprise portal group that eventually threw in the towel and turned their customers over to Tibco. His advice was to stay away from corporate IT because they have a vested interest in avoiding the new and different.
I witnessed as Brewster Kahle struck a deal with the folks from Morpheus to work together. “I’ve got this great archive of all this wonderful stuff and you’ve got this great mesh of a distribution network.” Ok, I egged them on a bit.
The most inspirational talk was by Lawrence Lessig who railed against the specter of old world copyright law that threatened our ability and right to mix and mash digital content to express ourselves. Something’s not right when you can teach your kids to write creatively and encourage them to quote and incorporate styles & nuances but cannot teach them how to remix music or video to make a point, political or personal. Indeed. For an mp3 of the speech, click here.
The conference was organized by publisher Tim O’Reilly and MC’d by John Battelle, who have been noodling over the Web as a Platform/Web as an OS theme for the past year or so. This conference brought together the best and brightest of that conversation and put them on stage for a number of interesting and insightful discussions.
In the initial post that kicked off this blog, I said that I would focus on how the promise of the ASP/Web Services vision is being realized with the connection of various web-based APIs into a new type of platform which lives on the internet. One way to experience the power of this vision is using outputs of each of these services and embedding them into your weblog template. Once you learn how easy it is to pull together a page of contextually related information that updates every time it’s refreshed, you start to think how other things can be connected together.
Outputs of one service can act as inputs to other services to further process and refine information via relationships that we set up in advance. It’s basic programming but using web services instead of self-enclosed objects, classes and libraries.
Jason Kottke brings the meme up-to-date with some of the latest services out there and thinks how a bundle of them could make for a comprehensive personal information management system:
Think of it like Unix…small pieces loosely joined. Each specific service handles what it’s good at. Gmail for mail, iCal for calendars, TypePad for short bits of text, etc. Web client, desktop client, it doesn’t much matter…whatever the user is most comfortable with. Then you just (just! ha!) pipe all these together however you want with services (or desktop apps) handling any filtering/processing that you need, and output it to the file/device/service of your choice. New services can be inserted into the process as they become available. You don’t need to wait for Gmail to output RSS…just pipe your email to Feedburner and they’ll hook you up.
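Kottke’s “small pieces loosely joined” pipeline boils down to plain function composition: each service takes the previous service’s output as its input. A minimal sketch, with local stand-ins for the services (no real mail or feed APIs here, and all names are hypothetical):

```python
def mail_service():
    """Stand-in for a mail inbox exposed as structured data."""
    return [
        {"subject": "Lunch?", "flagged": False},
        {"subject": "Invoice due", "flagged": True},
    ]

def filter_service(items):
    """A processing step in the pipe: keep only flagged items."""
    return [i for i in items if i["flagged"]]

def feed_service(items):
    """Render the remainder as simple RSS-ish <item> elements."""
    return "\n".join(
        f"<item><title>{i['subject']}</title></item>" for i in items
    )

# Output of one service feeds the next, like a Unix pipe:
# mail_service | filter_service | feed_service
feed = feed_service(filter_service(mail_service()))
```

Swap any stage for a hosted service (a mail API, a FeedBurner-style processor) and the shape of the program doesn’t change, which is exactly the point.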
One other benefit comes to mind as I move my identity from one PC to another and one ISP to the next as part of a job change and relocation: distributed data is ubiquitous and never needs to be moved from client to client.