S E R V E R   S I D E
View current page
...more recent posts

Jon Udell has a piece on the Disenchanted link back (which is a lot like what I've been calling reference logging). I wonder if they have automated it in the same basic way that I have? In any case, Udell seems to grasp why this might be very cool inside the blogging world.

Looks like decafbad has a very basic version of the same idea now too. As well as diveintomark. Cool. From what I can tell I'm the only one grabbing actual text off the referring pages. But that just might mean that mine won't scale.
- jim 5-06-2002 12:13 am [link] [add a comment]

What big bang?

This makes me happy since I've been dismissed offhand more than once for suggesting that the big bang is in no way "proven" to be the true story. In fact, if I understand correctly, it's almost entirely based on the redshift of very distant stars. But this could be caused by lots of things. Maybe the speed of light is getting faster.

(Interesting Shulgin article on this topic.)
- jim 5-05-2002 11:35 pm [link] [add a comment]

While understanding that this truly reveals the amateurishness of my coding abilities, I've posted the PHP code for the reference logging feature I built. (No, this isn't useful in any real way - I'm just posting it in case someone was wondering how I did it. Maybe someone could get an idea from it. But it's too tied to the rest of my system for someone else to be able to use this fragment. Still, I'd like to see others implement their own versions of this feature.)

After a page is served from the database here, the system checks whether reference logging is turned on for that page. If so, it includes the snippet of code linked above, which determines whether there was an external referrer that linked directly to a specific post here (a link to a URL you get when you click on any [link] link). If so, this bit of code gets the HTML of the external page, parses it so that only the bit of text right around the link to us is left, and stores that text and link in the database here.
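The parsing step can be sketched in a few lines. This is a hypothetical reconstruction in Python, not the actual PHP linked above; the function name and the example URL are my own, and real code would also need to fetch the referring page (say with urllib) and handle errors and character encodings.

```python
import re

def extract_snippet(referring_html, our_url, context_chars=80):
    """Keep only the bit of text right around the link to us."""
    # Find the anchor on the external page that points at our post.
    m = re.search(r'<a[^>]+href="%s"[^>]*>(.*?)</a>' % re.escape(our_url),
                  referring_html, re.IGNORECASE | re.DOTALL)
    if not m:
        return None
    start, end = m.span()
    # Strip tags from a window of text on either side of the link.
    before = re.sub(r'<[^>]+>', ' ', referring_html[max(0, start - context_chars):start])
    link_text = re.sub(r'<[^>]+>', ' ', m.group(1))
    after = re.sub(r'<[^>]+>', ' ', referring_html[end:end + context_chars])
    return ' '.join((before + link_text + after).split())

referring_page = '<p>Some commentary before <a href="http://example.com/post/42">a great post</a> and after.</p>'
snippet = extract_snippet(referring_page, "http://example.com/post/42")
# -> "Some commentary before a great post and after."
```

The snippet and the referring URL would then be stored in the database and shown on the [refs] sub page.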

It's not pretty. But it does seem to work. I guess, like all of my stuff, it should probably be thought of as a proof of concept. Maybe some day a real coder could write a more elegant version. Still, I'm not sure that version would actually work any better.

I'm very interested to see people's reaction to this feature. This has been hard, so far, because it's not immediately clear what I'm up to. But the implications could be rather large. Especially in the weblog world. There are lots of conversations going on between pages, but no real way for someone unknown to break into the conversational loop. Or rather, the only way for someone unknown to break into the loop is to be pointed at by someone already in the loop. This leads to a certain level of cliquishness. But if all specific references to a page showed up as a link and a snippet of text on the page being linked to (well, actually on a sub page, but noted from the page itself) then new people could be introduced into conversations just by commenting on them.

This takes some power away from the individual author (in the sense that they aren't vetting every single link, some are just appearing.) So there could be resistance on that point. I wonder.
- jim 5-05-2002 7:59 pm [link] [2 refs] [add a comment]

Back to web services. There is some debate over exactly what is meant by this term. I understand it as the web minus the HTML presentation layer. Or, in other words, web services return data in response to specific requests. Web sites, on the other hand, return web pages (formatted in HTML) in response to specific requests. So web services are a good way for computers (or computer programs, really) to talk to each other. A computer program wants data from external sources to be formatted in a rigorously standard way. That's what web services provide. People, on the other hand, want data formatted in a visually pleasing way. That's what web pages (try to) do.

So web services are just standards for communication. As I mentioned the other day, this is something people who build online applications can get very excited about. Sure, I could always build a program that would connect to a web site, download a specific page, and then sift through the HTML to extract certain information. The problem is that if the web site changes the visual design of its pages, it will probably break my program. The piece of data I want won't be in the same place any more. With web services the web site publishes a specification which details exactly how the information will be presented. This might mean something very basic, like a comma separated list (like: date,time,theatre,price) or some sophisticated XML schema. The key is just that the structure is agreed upon and doesn't change. This gives third party developers the confidence to write software that uses the service - confidence that their programs will continue to work in the future.
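To make the contrast concrete, here's a hypothetical sketch in Python. The feed format and field names are just the comma separated example from above, not any real service:

```python
import csv

def parse_showtimes(feed_text):
    """Parse a comma separated showtimes feed: date,time,theatre,price."""
    return [
        {"date": d, "time": t, "theatre": th, "price": p}
        for d, t, th, p in csv.reader(feed_text.splitlines())
    ]

feed = "5-04-2002,7:30pm,Odeon,8.50\n5-04-2002,9:45pm,Odeon,8.50"
shows = parse_showtimes(feed)
```

Because the structure is published and agreed upon, the theatre can redesign its web pages all it wants without breaking this program; only a change to the published format could.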

And this turns out to be a huge deal. It's very web like. It's about cooperation.

The recent flood of thinking and writing on this subject has been largely fueled by Google. They published what they are calling the Google Web API. Maybe you remember hearing this term API during the Microsoft trial. It's what some on the government side kept saying they wanted Microsoft to "open up." API stands for application programming interface. The agreed upon data structures that comprise web services are APIs. Web services are what happen when web sites publish APIs and developers build tools that use them. (Microsoft Windows has a set of APIs too. They detail how programs running on top of Windows can make calls to the system to take care of basic low level operations, and the responses a program should expect to get back from the system. Allegedly Microsoft does not reveal their entire API to outsiders, thus Microsoft's own programs - like Word, or Excel - have a huge advantage.)

The Google API is completely open. It allows other programs to query the Google search engine. The API specifies how you should send your request (the actual structure of your request) and how the results will be sent back to you. Google is calling this a test. Anyone can use the API, but you have to sign up with them (for free), you are limited to 1,000 queries a day, and you can't use it for commercial purposes. They can keep track of how many queries you use a day because part of the API specifies that each request must be sent with a unique ID you receive from Google when you register.
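The per-key accounting that makes the 1,000-query limit enforceable is simple to picture. This is a hypothetical server-side sketch in Python, not Google's actual implementation; the class and the limit are taken only from the description above:

```python
from collections import defaultdict

DAILY_LIMIT = 1000  # queries allowed per registered key per day

class QuotaTracker:
    """Count queries per (license key, day) and refuse once over the limit."""
    def __init__(self, limit=DAILY_LIMIT):
        self.limit = limit
        self.counts = defaultdict(int)  # (key, day) -> queries used so far

    def allow(self, license_key, day):
        used = self.counts[(license_key, day)]
        if used >= self.limit:
            return False  # over quota: reject this request
        self.counts[(license_key, day)] = used + 1
        return True

tracker = QuotaTracker(limit=3)  # tiny limit just for illustration
results = [tracker.allow("my-key", "2002-05-04") for _ in range(4)]
# -> [True, True, True, False]
```

Since every request carries the unique ID, the server never has to guess who is asking; the counter resets naturally each day because the day is part of the key.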

This is really cool stuff. People like me get very excited when we suddenly gain lots of power for building things online. I can now write a program that harnesses the amazing data set and algorithms of Google. And I can do this in the background, without actually sending my users to Google. By publishing their API Google has effectively added all the capabilities of Google to whatever programming language I am using. It almost seems like too much power. It's intoxicating. Still, I can't think of exactly what to build. There's no sense in just writing a front end for searching - Google's web page is already perfectly fast and minimal. But there is undoubtedly more that can be done. And lots of people are having a really good time trying to figure this out.

If it works, the web of the future will be largely about web services. And this means that the web will be more and more about assembling the information you view as a user from a variety of different sources which are all live and machine accessible over the internet. Or, in other words, it's about all of us agreeing on the structure of the language we're going to use for our programs to talk and work with one another. And agreeing to work together makes us all more powerful. Lots more on this topic to come...
- jim 5-04-2002 10:48 pm [link] [add a comment]

Technophobia:

Though the album was rejected by one major label as uncommercial, Wilco's "Yankee Hotel Foxtrot" defied record-industry expectations by selling 55,573 copies in its first week and debuting at No. 13 on the Billboard album chart--by far exceeding the band's past sales achievements....

Last summer, Reprise Records let Wilco walk away from its record deal because executives said "Foxtrot," an experimental pop album, lacked an obvious hit single and therefore wouldn't sell. The band began Net-streaming the album on its Web site, allowing listeners to preview songs for free.

Rather than hurting the band's sales, the strategy appears to have only built anticipation for the official release.
Where are the record company investors? Why haven't they kicked Valenti out yet? Isn't that what's supposed to happen?
- jim 5-04-2002 9:30 pm [link] [1 comment]

Here's another new page at datamantic.
- jim 5-04-2002 9:05 pm [link] [add a comment]

A bill has been introduced in Peru which would require the government to use free software.

Microsoft is of course outraged, and has complained. Here is the utterly amazing reply from Dr. Edgar David Villanueva Nunez, Congressman of the Republic of Peru. He says, in part:

To guarantee the free access of citizens to public information, it is indispensable that the encoding of data is not tied to a single provider. The use of standard and open formats gives a guarantee of this free access, if necessary through the creation of compatible free software.

To guarantee the permanence of public data, it is necessary that the usability and maintenance of the software does not depend on the goodwill of the suppliers, or on the monopoly conditions imposed by them. For this reason the State needs systems the development of which can be guaranteed due to the availability of the source code.
Amen. The whole letter is worth a read. (from MeFi)
- jim 5-03-2002 7:41 pm [link] [5 comments]

Today is my 33rd birthday.
- jim 5-03-2002 6:54 pm [link] [1 ref] [23 comments]

David Weinberger is wondering about the odd capitalization of Tom's blog title, IMproPRieTies. I thought it was a phonetic fudge of "I'm pretty." IM PR+T

We'll have to wait for some independent verification on this. I mean on whether or not he's pretty.
- jim 5-02-2002 10:09 pm [link] [add a comment]

"The conversation continues..." is another mailing list I'm on (is it still called a mailing list if it's only one way?) This one is put out by Kevin Werbach and Esther Dyson, publishers of the influential Release 1.0: Esther Dyson's Monthly Report. Not nearly as entertaining as EGR, but it is pretty good coverage of the tech world. I don't learn too much new, but they have a knack for summing up the current thinking in an easy to swallow form. The last one featured a piece titled "The New WWW" where Kevin Werbach argues that Weblogs, Web Services, and WiFi are the new WWW.

"The old grassroots energy is coming back. Web services, Weblogs and WiFi are the new WWW."
I've been trying to get something together about web services. This is an important emerging area. And it does seem true that "the old grassroots energy" is coming back. We'll see.

Compare Kevin's thought to megnut's:
All this talk about APIs and web services warms my heart. We've passed the nadir of the dot-com hype and we're coming back to the Web in interesting and important ways -- opening up sites through APIs and services and working together to build better and more powerful applications. People are getting excited again about the potential of the Web and it's really great to see.

She points to a recent Kottke post where he sees the same thing: "But I admit that Web services makes me feel just a little bit tingly."

What's all the fuss about? More soon.

(You can subscribe to "The conversation continues..." here.)
- jim 5-02-2002 8:36 pm [link] [add a comment]

older posts...