This one’s my favorite but there’s a whole bunch more photoshop fun over on Fark.com.
Google Labs just announced that they are now providing a video search engine. Details in a BBC article here. This is slightly different than the video search announced earlier by Yahoo in that it indexes the closed-caption content provided with television shows and returns results that show where in a television segment the search terms were spoken, along with a screen capture from that segment.
For an example, here’s a persistent search showing mentions of the word blogs.
It’s still in the labs so the actual video footage is not available, but they do point to when the show aired and when you might be able to catch the segment again. If Google delivers on what they are writing about, this could be Google acting as a gigantic, internet-enabled TiVo for the rest of us.
More detail with screenshots here on the about page.
UPDATE: According to CNet, Yahoo has been working on a similar index of closed caption video text of Bloomberg and BBC programs in their partnership with TVeyes. The article also mentions Blinkx but I couldn’t get it to pull up any meaningful results.
I took notes at the inaugural meeting of the new TiE Special Interest Group focused on the internet. The event was titled “Wikis, Blogs, and Other Four Letter Words” and was put together by Manish Chandra, who wants to create a program “to educate and inspire people to innovate and enter the next dimension of the Internet.” If you have ideas for future meetings (located in Santa Clara, CA), contact Manish at email@example.com.
Panel members are:
- Reid Hoffman, Founder & CEO of LinkedIn
- Andrew Anker, EVP, Corporate Development, Six Apart
- Stewart Butterfield, President, Flickr/Ludicorp
- Joe Kraus, CEO, JotSpot
What about the uploading of video files and audio files?
Andrew: it’s already happening with Podcasting where people are uploading audio files to blogs. It’s just a question of the bandwidth catching up.
Andrew: blogs are my filter. I let the interesting stories filter up through the blogosphere and use popularity rankings to point me to things I should read from the traditional media, I leverage the emergent intelligence of the blogging community. Bloglines, Feedster, Newsgator all help me filter the blogs. Better to leverage the collective intelligence of the 5 million blogs out there.
Scott Rafer was invited up to show how Feedster works. He positions Feedster as a “search engine for developers,” so that less and less of their traffic will come from individuals running searches and more and more from machines that come to Feedster to get information. Each search has an XML output that can feed into an application. Feedster will soon launch a job postings service that runs on the same model: the engine will be used to create streams of location-specific job postings that Feedster will sell to publications that want to re-purpose them.
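The machine-readable search output Rafer describes can be sketched in a few lines. This is a minimal, illustrative example that parses an RSS 2.0-style result feed with Python’s standard library; the feed contents and URLs are invented for the sketch, not Feedster’s actual API:

```python
import xml.etree.ElementTree as ET

# An illustrative RSS 2.0 payload of the kind a feed search engine might
# return for a query (the items and URLs here are made up).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Search results for: wiki</title>
    <item>
      <title>Why wikis matter</title>
      <link>http://example.com/post/1</link>
    </item>
    <item>
      <title>Running a project on a wiki</title>
      <link>http://example.com/post/2</link>
    </item>
  </channel>
</rss>"""

def parse_search_feed(xml_text):
    """Extract (title, link) pairs from an RSS 2.0 search-result feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

results = parse_search_feed(SAMPLE_FEED)
```

An application could poll such a feed on a schedule and pipe the results into a database or page template, which is the machines-not-individuals usage pattern described above.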
How do you make money on all this?
Andrew: TypePad is integrated with the Amazon Associates program. They will also integrate an ad program with Kanoodle in Q1 2005. Six Apart’s job is to enable people to use our tools to make money for themselves.
Joe: JotSpot makes money on direct revenue. By also making tool development easy and accessible, we’ll enable all the small IT shops out there to quickly develop customized apps that solve specific problems. This enables them to charge for more billable hours.
With all these tools, is this reducing face-to-face interaction?
Many hands go up to say they use IM at work to communicate with people less than two feet away.
Stewart mentions the example of someone’s son talking to a computer screen expecting to talk with his grandfather in Estonia. The computer is less a box and more a window. More and more people are looking at the cell phone screens and not talking on them.
Will Flickr, TypePad, JotSpot merge? They’re all about sharing information.
Reid: it’s the next generation of Yahoo broken up into many different pieces.
Stewart: the average Yahoo user uses 2.1 of the 40 sites on Yahoo.
Andrew: they have already merged. You can have a Flickr sidebar on your TypePad blog. You can point to a wiki post from within a blog. The integration is already there; it’s better to have things interchangeable as needed.
What challenges are there in “crossing the chasm?” Where are you on the scale of 1 to 10?
Andrew: blogging is at a “2” – Instapundit.com is now in the top ten in terms of pageviews next to all the big media sites. People are learning to read, and in the process of reading they begin to think, “I can do that too,” and then they will begin to blog. We’re still really in the “people learning to read” stage.
Joe: It’s still very early days. We believe in nerd power, but we’re still way out of the mainstream. We’re maybe a “1” but other wiki tools are maybe a “0.3.”
Stewart: We’re a “3” but our challenge is not to alienate the core users by taking the app to the mainstream. We’re getting close to the point where most everyone will either own or be related to someone that owns a digital camera – they will all want to share these pictures and that is when the market will really grow.
Reid: What will blogging look like three years from now?
Andrew: it will push into families just as email and IM have done in the past. Blogs are really the third leg of communication. Blogs are used to document the “full record” of a conversation that’s going on.
Reid: What will be some of the cool apps on JotSpot?
Joe says people are now running call centers, project management, a number of other traditional apps on JotSpot. The best ones are the micro-solutions that solve specific problems for a small group of people really well. Picking up on the three years from now thread, he says that the Wiki will be just another app just like email and a shared network drive.
400,000 people make money on eBay. Joe would love to spawn a network of small time developers to create and make money off apps that they sell that run on JotSpot.
Reid: What of photo sharing in three years?
Stewart says that photo sharing will become a new notification method. Instead of a phone call to say they arrived safely, someone might post a picture of their luggage arriving. Photostreams will act to document movements (he cites the example of taking a picture at a Giants game and then someone in San Francisco giving him grief for not stopping by while he was in town).
He goes on to say that he likes the fact that whole communities are developing around specific tags – groups forming around vintage ’50s toys and sharing their favorites via Flickr. People connecting.
Reid: Isn’t a blog just another way to publish a web page? Why is Six Apart charging?
Andrew answers by telling the story of how Ben and Mena were pulled into supporting Movable Type for enterprises by their customers that wanted an upgrade cycle and official support for the product.
Reid: What of Open Source? What do you tell developers? Why should they develop to your platform?
Joe says that you can do much more on JotSpot as far as integrating with enterprise systems than you can using open source tools. One attraction of open source is that it’s “hackable,” and JotSpot has tried to retain this so that applications built on it are easily modified. They have also made JotSpot inexpensive so that it is easily accessible; they were inspired by Six Apart’s pricing for TypePad, where you can get up and running for under $5.
Stewart says we don’t really think of ourselves as a competitor of Ofoto, Snapfish, etc. because photo finishing is expensive. These finishing sites give away sharing as a way to sell finishing services. Flickr will sell better sharing tools as a way to differentiate. Many households have digital cameras so there is a market for the pro-sumer digital camera geeks that Flickr can address. 82% of the 2 million photos on Flickr are public – there is an opportunity to sell advertising around tags related to the public shared photos.
Reid is posing the questions. Are these new apps (JotSpot, TypePad, Flickr) a platform?
Andrew is talking about the TypePad app as being an extensible platform off of which simple applications can be built. He talks about how Typelists that list books from Amazon tie into web APIs but via a simple, web-based front end.
Joe talks about two articles. The first is Chris Anderson’s Long Tail piece in Wired. The second is Situated Software by Clay Shirky. He believes there is a long tail in the software business. The vast majority of business is run on the backs of simple Excel spreadsheets that are shared via email, not the large, bulky CRM or SFA apps. Joe feels that there is a need for a platform to share information quickly and easily.
Stewart talks about what happens when you release your API into the wild. Within a few days there was an iPhoto plug-in that allowed Mac users to upload photos to Flickr. Leveraging the talents of the community has helped him support the larger community with better tools. In general, outside developers can do a better job than you can yourself.
Stewart is showing his Flickr page. 55% of their users came to Flickr via blogs that were pointing to Flickr pages.
He is showing how to search and add metadata. The process of adding metadata is “collaborative and social,” because Flickr can bring together commonality through your network of contacts or common metatags. He also is showing how you can add tags to photos of your contacts.
Stewart has 400 contacts.
The ability to tag needs to be social – the failure of current machine translation shows that automated tagging has a long way to go.
As an example of a quick & easy extension of a developer network, he is showing the PayPal site that runs on TypePad. This shows how developers can quickly get the word out to their developer network without investing in an infrastructure to support it.
The event is going to be back-to-back demos. Joe is currently demoing JotSpot.
He is showing how the JotSpot version of a wiki can easily transform itself into a platform for “rapidly building lightweight, customized applications” that integrate data from your local hard drive, your network, and across the internet.
The creepy tone of the background music sets the stage for this look back at the demise of traditional media as we know it from the perspective of 2014. Amazon, Google, Microsoft, Friendster and the trend towards personalized and automated filters to help manage information flow pull down the Fourth Estate.
“The New York Times becomes a print-only newsletter for the elderly and elite.”
The ending leaves me cold. Watch the developments over at Pegasus News as they build an alternative to this algorithmic nightmare.
I just downloaded Firefox 1.0 this morning and with my Noia theme there’s a big blue lollipop thing right next to the address bar. Hover text says, "Type a location in the address field, then click Go"
It looks like it’s using the Google “I’m Feeling Lucky” result, which I had stayed away from because of the labeling (“feeling lucky? nah, I’m here doing research!”). Once I tried it, though, I was amazed at how many times it found exactly what I was looking for.
In a move that took everyone by surprise, Google announced a new downloadable product that installs on your hard drive, indexes your email, Word, Excel, PowerPoint, and AIM chat logs, and adds them to the Google Search results window. The expected move was that Google would launch their own, Google-centric browser, but they have once again side-stepped popular wisdom and done something new and fantastic.
You’ll do a double-take the first time you run a search after installing Google Desktop Search. Up on top of your results, right under the paid search ads, you see links to personal email and files that contain hits on your query. Instead of bringing the web to your desktop, by putting hits on your desktop files into the Google UI it now looks (and feels) like Google has put your desktop onto the web.
Rael Dornfest explains what’s going on behind the scenes:
What’s actually going on is that the local Google Desktop server is intercepting any Google web searches, passing them on to Google.com in your stead, and running the same search against your computer’s local index. It’s then intercepting the Web search results as they come back from Google, pasting in local finds, and presenting it to you in your browser as a cohesive whole.
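The splicing step Rael describes can be sketched in a few lines of Python. The data structures here are hypothetical stand-ins for illustration; the actual proxy works on rendered result pages, not simple lists:

```python
def merge_results(web_results, local_hits, insert_at=0):
    """Splice hits from a local desktop index into the list of web
    results so the two read as one cohesive results page."""
    merged = list(web_results)
    merged[insert_at:insert_at] = [f"[desktop] {hit}" for hit in local_hits]
    return merged

# Hypothetical data: the web hits coming back from Google.com plus two
# local files found by the desktop index for the same query.
web = ["result A", "result B"]
local = ["budget.xls", "mail from Bob"]
page = merge_results(web, local)
```

The key design point is that the browser only ever talks to the local server, which forwards the query upstream, annotates the response, and hands back the combined page.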
John Battelle caught up with Marissa Mayer, Google’s director of consumer web products, and found out that the app is only 400k and runs on only 8 MB of RAM. She also says that the relevance algorithm obviously doesn’t use PageRank but does use 150 other proprietary variables (bolding, font size, etc) to determine relevance.
Danny Sullivan writes in depth about this new tool, going on to say that the Google page that you see when you launch Desktop Search is not actually on the web but is being served up by the web server that comes with the app. This is apparent when you look at the URL [http://127.0.0.1:4664/&s=400994545], which is a local loopback address.
Another benefit is the caching, which lets you quickly peek into the contents of a file without having to wait for Excel to fire up. If there are multiple copies in the cache, there’s a version history that can save you if you’ve overwritten a file using the same name.
It’s still in beta so I’ll forgive the fact that it only runs on Windows and indexes only AIM chat and Internet Explorer caches but other than that, this is a most impressive product that redefines its category.
Why has Google News been in beta for three years? Because as soon as they try to place ads on the Google News pages to monetize the traffic, they’ll be hit with a barrage of cease & desist letters from publishers around the world, says Wired’s Adam Penenberg.
In a review of the Google and Yahoo news sites in Online Journalism Review, Ethan Zuckerman puts forward a very interesting theory as to why the alternative news sites bubble up to the top of the relevance ranking algorithms at Google News.
“I think what you’re seeing is an odd little linguistic artifact,” said Zuckerman, former vice president of Tripod.com and now a fellow at Harvard’s Berkman Center for Internet and Society who studies search engines. The chief culprit, he theorized, is that mainstream news publications refer to the senator on second reference as Kerry, while alternative news sites often use the phrase “John Kerry” multiple times, for effect or derision. To Google News’ eye, that’s a more exact search result.
A second possible factor, Zuckerman said, is that small, alternative news sites have no hesitancy about using “John Kerry” in a headline, while most mainstream news sites eschew first names in headlines. The inadvertent result is that the smaller sites score better results with the search engines.
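Zuckerman’s effect comes down to exact-phrase frequency: a page that repeats “John Kerry” verbatim looks like a closer match for that query than one that switches to “Kerry” after first reference. A toy scoring function with invented snippets makes the point; this is illustrative only, not Google News’s actual relevance algorithm:

```python
def phrase_score(text, phrase):
    """Count exact occurrences of the query phrase in a text, a toy
    stand-in for the 'more exact result' effect described above."""
    return text.lower().count(phrase.lower())

# Invented snippets: mainstream style drops to the last name on second
# reference; the alternative site repeats the full name for effect.
mainstream = "Senator John Kerry spoke today. Kerry later said Kerry would run."
alternative = "John Kerry again? John Kerry, John Kerry, John Kerry."

mainstream_score = phrase_score(mainstream, "John Kerry")    # counts 1
alternative_score = phrase_score(alternative, "John Kerry")  # counts 4
```

Under any scoring that rewards exact phrase hits, the repetitive alternative-site prose wins, which is exactly the artifact Zuckerman describes.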
As I look for the cross-section of schools and interesting-but-reasonably-priced places to live (does such a thing exist in the Bay Area?), I found myself wanting a school district map overlaid on top of a map showing available places to live. I’ve found pieces of the puzzle.
In the course of looking for a tool that could tie zip codes to neighborhoods to school districts, I ran across this wonderful site by MIT Media Lab doctoral candidate, Ben Fry. His interactive Zip Code tool is one of the coolest things I’ve seen in awhile.
UPDATE: Now 10 years later Google Maps has started to layer this information as an extension of their Google Maps service. Check out the mash-up of greatschools.org ratings and Google Maps.
Check out more mashups at the Google Maps Gallery.
Adam Penenberg writes in his Wired story, Searching for The New York Times, that there is a very real financial incentive for the NYT web site to continue to hide its stories behind a subscription wall. A $20 million/year all-you-can-eat royalty agreement with Lexis-Nexis is an awfully hard arrangement to tear up. John Battelle noodles on this idea a bit more and ponders when the attraction of differentiated revenue from individuals, finding stories on their own via Google and other search engines, will outweigh the guaranteed revenue stream from L-N. He also adds:
What revenue stream accounts for the lion’s share of search’s margin? Advertising. That’s a one legged stool ready to tip over. As the search giants become more and more media companies, they must develop subscription services, and because users won’t want to pay for something they already believe is free (searching) search engines will have to figure out a way to become middlemen to paid content. After all, they own distribution, so they should become…distributors. Were they to execute this service in a scaled and elegant fashion, it might be viewed as a benefit – in many cases, subscribers will get more content for less than they were paying in the past (that’s the benefit of volume).
Google as a portal to premium content? Haven’t we been there before? One comment to John’s post points out that this has been AOL’s model for the past 10 years. Yahoo has continually tried to push premium services and could easily bundle in targeted content. Northern Light also blazed this trail but flamed out after a failure to bring together enough content.
There are many ways to get to content, and Google is the current flavor of the month. In greater demand is a single account that aggregates access fees for each site into a reasonable monthly bill. Why pay nytimes.com and wsj.com separately when you’d rather pay a single bill for unfettered access to these sites and more? Yes, Google has the distribution network, but PayPal or American Express might be a better player for a unified subscription account. After setting up unified subscription fees, the next step is working with each of the major content vendors to feed RSS feeds of their content to the major search engine vendors so that their content begins to move up in the rankings. Portions of the proceeds of the subscription fees could then go to each of the search engine vendors, paid out in proportion to the amount of traffic they drive to the payment vendor for signup. $40/month sounds about right – we’ll call it a “global media press pass.”
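The payout scheme above, with each search engine paid in proportion to the signup traffic it drives, is simple arithmetic. A sketch with hypothetical numbers (the engines and signup counts are invented for illustration):

```python
def split_proceeds(monthly_fee, signups_by_engine):
    """Divide one subscriber's monthly fee among search engines in
    proportion to the signup traffic each engine drove."""
    total = sum(signups_by_engine.values())
    return {engine: monthly_fee * count / total
            for engine, count in signups_by_engine.items()}

# Hypothetical month: a $40 "global media press pass" and the number of
# signups each engine referred to the payment vendor.
payouts = split_proceeds(40.0, {"Google": 600, "Yahoo": 300, "MSN": 100})
```

The payment vendor keeps the billing relationship; the engines get paid for distribution without ever touching the subscription itself.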
UPDATE: Cory cuts to the chase on boingboing.net
The NYT’s registration system and expiring pages have doomed them to google-obscurity. Wired News argues that they’ve gone from being the paper of record to a Web-era irrelevancy, and all to protect a Lexis-Nexis agreement and to bring in two to three percent of the digital division’s profits.