Google Search Appliance 2.0

First launched in 2002, the Google Search Appliance is a rack-mounted unit designed to crawl and index intranet pages for enterprise search. Combined with OS-level integration points such as the Google Deskbar, the appliance bridges the gap between an index of your PC's hard drive and the internet. While Google has not yet announced a move into the PC-level index space, several third-party vendors have announced adaptive crawl technologies integrated with Microsoft Windows and Office. Most notable is Lookout, which even looks like a Google knock-off.

The GB-1001 is a rack-mounted, two-rack-unit (2U) appliance that can be licensed to search up to 1.5 million documents at a rate of 300 queries per minute.

Our entry-level license indexes up to 150,000 documents and costs $32,000 for a two-year license, with hardware, software, and technical support all included. Pricing scales upward based on the number of documents.

A list of published customers is available.


URL is the new command line

Flash forward to where we are in this debate today. John Gruber points out in his post The Location Field Is the New Command Line that web-based applications are leapfrogging hardware-specific applications, despite their inferiority.

What they’ve got going for them in the ease-of-use department is that they don’t need to be installed, and they free you from worrying about where and how your data is stored. Exhibit A: web-based email apps. In terms of features, especially comfort features such as a polished UI, drag-and-drop, and a rich set of keyboard shortcuts, web-based email clients just can’t compare to desktop email clients.


With web-based email, you can get your email from any browser on any computer on the Internet. “Installation” consists of typing a URL into the browser’s location field. The location field is the new command line.
from Daring Fireball

He concludes that Microsoft missed the boat by targeting Netscape. It wasn't the company that was the threat, nor even the browser; it was the applications enabled by the URL concept that threatened the Windows monopoly.


The Web as a Platform

Today I’m starting a new weblog that will focus on a discussion that has been gaining momentum over the past two years. As web services gain favor and companies, customers, vendors, and providers begin to communicate via these standardized APIs, we all realize new economies of scale as well as lower barriers to entry.

My initial “aha” moment was during a trip to Redmond where a Program Manager walked us through a demonstration of the .NET version of Visual Basic and showed how in 30 minutes with something like 5 lines of code he was able to build a simple web application.
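The demo itself isn't reproduced here, but its spirit, a working web application in a handful of lines, translates to almost any language. Here is a comparable sketch using Python's standard library rather than the Visual Basic .NET of the original demo; the handler name, greeting, and port are arbitrary choices for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """A one-route web application: every GET request returns a greeting."""
    def do_GET(self):
        body = b"Hello from a five-line web app"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve it: HTTPServer(("", 8000), HelloHandler).serve_forever()
```

The point is not the specific framework but how little ceremony stands between an idea and a URL-addressable application.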

The scenario was a CTO talking with his IT guy on a plane ride. The CTO asks the IT guy what all the bugaboo over web services is about. Jacking into the net via the seat-back phone, the IT guy strings together three separate services that pipe their results to each other to bring back a result that confirms the obvious.

1. Input your flight number >
2. Flight number acts as input to a geo-tracking service like Flight Tracker >
3. GPS coordinates of the flight act as input to a service that translates GPS to ZIP code >
4. ZIP code acts as input to a weather-tracking service for a radar image of weather conditions >
5. Look out your window and confirm the weather conditions

No jokes about Bob Dylan and not needing a weatherman for such an exercise; this was just an example to get the juices flowing. If you think of various web applications as things that can be negotiated with the HTTP equivalents of “grep” and “|”, then you’ll begin to appreciate the transformative (and, one could say, disruptive) power of this model. Add RSS feeds to automate the connections and it’s like adding oil to the machine: everything runs even more smoothly.
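The pipe analogy in the steps above can be sketched in a few lines of Python. None of these services exist as real APIs here; each function is a hypothetical stand-in with canned data, and the URLs and response shapes are invented for illustration. The point is the shape of the composition, each service's output piped into the next:

```python
def track_flight(flight_number):
    """Stand-in for a geo-tracking service like Flight Tracker:
    flight number -> current GPS coordinates (canned for this sketch)."""
    return (47.44, -122.30)

def gps_to_zip(coords):
    """Stand-in for a service that translates GPS coordinates to a ZIP code."""
    return "98158"

def weather_radar(zip_code):
    """Stand-in for a weather service: ZIP code -> radar image URL."""
    return f"http://weather.example/radar?zip={zip_code}"

def pipeline(flight_number):
    # The HTTP analogue of: flight_number | track_flight | gps_to_zip | weather_radar
    return weather_radar(gps_to_zip(track_flight(flight_number)))

print(pipeline("UA-100"))  # -> http://weather.example/radar?zip=98158
```

In a real deployment each function body would be an HTTP request to a third party, but the composition would read exactly the same way, which is the point of the shell-pipe comparison.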

So, to kick off this discussion/weblog I’m pointing to Tim O’Reilly’s original posting that sums it up very nicely:

Bit by bit, we’ll watch the transformation of the Web services wilderness. The first stage, the pioneer stage, is marked by screen scraping and “unauthorized” special purpose interfaces to database-backed Web sites. In the second stage, the Web sites themselves will offer more efficient, XML-based APIs. (This is starting to happen now.) In the third stage, the hodgepodge of individual services will be integrated into a true operating system layer, in which a single vendor (or a few competing vendors) will provide a comprehensive set of APIs that turns the Internet into a huge collection of program-callable components, and integrates those components into applications that are used every day by non-technical people.

From Inventing the Future, April 9, 2002