This is what I mean:

* User goes to order.cgi
* User clicks on 'add book', linked to 'book.cgi?callback=order.cgi%3Fbook%3D'.
* User is asked to 'choose author', linked to 'author.cgi?callback=book.cgi%3Fcallback%3Dorder.cgi%253Fbook%253D%26author%3D'.
* Authors are listed. User clicks on 'Tolkien', linked to 'book.cgi?callback=order.cgi%3Fbook%3D&author=Tolkien'.
* Books are listed. User clicks on 'Silmarillion', linked to 'order.cgi?book=Silmarillion'.
* Ordering process continues....

The analogy is with stepping through a program which uses callbacks in a debugger. The point is for each 'component' to expose a way of attaching a callback (eg 'callback=x'), to pass this on to any components it itself uses, and to present the callback url suffixed with the value the component has been used to select (awkward to say in prose; look at the example again, and count to 10 - there's also a sketch of how the encodings nest at the end of this entry).

Why would you do this? It's pretty clunky - Amazon are unlikely to patent it! The reason is that it allows you to build composable cgi/asp/servlet components.

But that's a dumb way of composing them!

Yes, of course it is. Consider: people still come up with 'lunchtime cgis'. And servlets. And ASP. And CFMs... Some of these provide useful lookup services that would be far ''more'' useful if they could be used by another program. Normally we tackle this by integrating the backends with databases, or middleware, or nearer the front end with brokers, or at the front end with screenscrapers (yeeeech!). The idiom above leaves the 'weblet' open for use directly by other components.

To take an example: consider a service written this way which 'provided' a map coordinate. Someone later comes along, from another site, and writes a tool for recording great bike routes. Right now, you might go to the map site, write down or cut'n'paste the links, and add them to your description. Instead, you could use this map service by having a callback link for 'Add Waypoint...' on the route tool (compare to 'add book' above). In the bike route these get used to link back to the map site. No middleware. No XML. No broker. No screenscraping. Just a polite interface for calling back to other websites. It makes SOAP look bloated.

In practice, using this would probably be a Bad Thing, as it exposes the list of sites you've been to to the sites you're going to (an invasion of privacy). However, as a low-overhead way of turning a CGI script into something reusable by another technology it's interesting. It might be nice in a lab on servlets, say, to get the students each to write a different service supporting callbacks, then tie them together in the last 10 minutes.

I've never seen ''anyone'' use UrlCallback as a means of composing active web content in the manner described above. In fact, I've never seen anyone use UrlCallback ''at all'' (although there are quite a few things, like the W3C's web validator, which take URLs as arguments). Maybe other people thought of it and considered it dumb. Seems likely to me; callbacks are nothing new. It probably existed and became extinct when cookies arrived. If you've seen it before, let me know. I just thought this was worth documenting in case anyone finds a use for it.
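To make the encoding in the walkthrough above concrete, here is a rough sketch of the idiom. Python is used purely for illustration (the page is about plain CGI scripts, not any particular language), and the helper names make_child_link and invoke_callback are invented:

 from urllib.parse import quote

 def make_child_link(child_url, continuation):
     # Hand a sub-component our own continuation URL (everything up to the
     # value we are still missing), percent-encoded once as 'callback'.
     # Nesting these is what produces the double-encoded %253F/%253D above.
     return child_url + "?callback=" + quote(continuation, safe="")

 def invoke_callback(continuation, value):
     # When the user finally picks a value, append it to the (decoded)
     # callback URL and send the browser there.
     return continuation + quote(value, safe="")

 order_cont = "order.cgi?book="                  # order.cgi still needs a book
 print(make_child_link("book.cgi", order_cont))
 # book.cgi?callback=order.cgi%3Fbook%3D

 book_cont = make_child_link("book.cgi", order_cont) + "&author="
 print(make_child_link("author.cgi", book_cont))
 # author.cgi?callback=book.cgi%3Fcallback%3Dorder.cgi%253Fbook%253D%26author%3D

 print(invoke_callback(book_cont, "Tolkien"))
 # book.cgi?callback=order.cgi%3Fbook%3D&author=Tolkien

 print(invoke_callback(order_cont, "Silmarillion"))
 # order.cgi?book=Silmarillion

Each component only ever decodes its own 'callback' parameter once and appends the value it was asked to supply; the nesting takes care of everything deeper in the chain.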
CategoryRipleysBelieveItOrNot, possibly CategoryAntiPattern if anyone actually uses it... -- BrianEwins

----

Actually this sounds to me more like a continuation than a callback. A continuation (as a CS term) is the complete future of a computation. It is like a subroutine call that never returns. You can implement subroutines that ''do'' return on top of this by passing the "return continuation" as a parameter, which is exactly what is happening above. Frankly, I think it is quite cunning. It seems an elegant solution to some problem I had. -- StephanHouben

----

I agree. It is a continuation, unless the called URL returns the browser to the caller. I've used this mechanism in at least two of the web systems I've written or co-written in the past. My best example would be a system that requires users to be logged in for some features. All the features (urls) that require the login just call (a local call, not a url call) a module to check that the user is logged in. If they're not logged in, the module dumps out a login template with some info on the feature they're trying to access... that form has a url embedded in it that will eventually return the user to the feature they're trying to access. The login form is submitted to the user-handling feature (a url that handles logins, new users, edits, etc), which will eventually send the user back to the original feature they were trying to access once they've logged in. The feature -> login feature -> back to feature thing makes it a UrlCallback.

Note that the full callback in the above example is ''not'' the best solution in my opinion. It requires the callee to send a redirect back to the browser telling it to go to the login feature... ''[Actually, the whole call stack has to be passed. This is very similar to some Scheme implementations, which add a linked list of return continuations as a hidden argument to each procedure call.]'' ...and that means extra traffic and latency and all that not-good-stuff. I only made it a full callback because the system I was using didn't have a built-in feature to do internal redirection (current ASP & Servlet specs do), and there wasn't enough time left to refactor to easily allow internal redirection.

I only see one real advantage in using this mechanism: you're not tying yourself to one system (physical or logical). You could easily get the URLs to jump to different servers using a proxy. That's good for at least two reasons: load-spreading (more machines actually doing something -> faster site) and system-independent calls (ie, continue to a feature written in servlets from a Perl script). -- GeorgeGruschow (longing to switch to a web development job on a team with a clue... I miss the feeling of urgency... not only do you have the project deadline (typically yesterday on web projects), but you also get the added bonus that any technology you use this morning will become old news by the time you get to lunch)

''[The whole call stack is passed assuming that each "called" URL has the full URL to return to its "caller" embedded in it. This is in fact a very serious limitation, because URLs are limited in size.] -- gg''

There are a few things in that last comment I should clear up.

* You only need to pass the call stack that you are passed, which may or may not be the 'full call stack'. For example, A links to B(A), which links back to A(x), is the standard scenario; but A links to B(C), which links to C(x), is also one way of using this idiom, as a kind of indirect goto. (x is the value produced by B.) ''[This resembles tail call optimization, as Scheme does it.]''
* The original page (A) is not given a call stack, although in a user's head they might perceive one (the history list), or even see one marked on the page ('back' and 'up' buttons, etc). So in one sense we never use the full call stack, as it's never made explicit until it's in the URL.
* There is not actually a limit on URL size in the standard. The limit is in some implementations, where it is restricted to 255 or 1023 chars. This is especially true of CGIs (but not, for example, servlets), which inherit unix or windows shell limitations. ''[Although the RFC doesn't specify a limit, many browsers are limited to 4k, which imho means the size is limited -- gg]'' Using relative links on a single server it's less of a problem, but I do have an ugly solution for the general case (below).

The ugly solution for the URL length problem is to construct an object which turns a long URL into a short label (usually a numerical sequence number, or a random string). A second component can unwrap these references and return you to the original caller. The analogous programming structure here might be a lambda. Taking the A->B(C)->C(x) example above, the flow now looks like: A -user-> L(B,C) -auto-> B(D) -user-> D(x) -auto-> C(x). In this case, L constructs a 'fake URL' D which behaves like (define D (lambda (x) (C x))). As you begin to compose more and more of these, the size of 'D'-type URLs stays constant, whereas the size of 'C'-type URLs would grow with stack depth. Again, I didn't want to mention this at the start 'cos it just muddies the waters. Oh dear :o) -- BrianEwins
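For concreteness, here is a minimal sketch of that short-label trick. Everything in it is invented for illustration (the Python, the jump.cgi name, the in-memory table); a real version would need persistent storage and some way of expiring old labels:

 from itertools import count
 from urllib.parse import quote

 _labels = {}            # short label -> long continuation URL
 _next_id = count(1)

 def shorten(continuation):
     # L: wrap a long continuation C in a constant-size 'fake URL' D,
     # which behaves like (define D (lambda (x) (C x))).
     label = str(next(_next_id))
     _labels[label] = continuation
     return "jump.cgi?id=" + label + "&value="

 def resolve(label, value):
     # jump.cgi: unwrap the label and redirect the browser on to the
     # original caller with the selected value appended.
     return _labels[label] + quote(value, safe="")

 # C stands in for an arbitrarily long nested callback URL.
 C = "order.cgi?book="
 D = shorten(C)
 print(D)                              # jump.cgi?id=1&value=
 # B appends the chosen value to D; jump.cgi then bounces to C:
 print(resolve("1", "Silmarillion"))   # order.cgi?book=Silmarillion

However deep the composition gets, the 'D'-style URLs being handed around stay the same size; only the label table grows.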
I think the water was pretty muddy before you even mentioned UrlCallback. Web development has always seemed to be one hack after another to me, until pretty recently. -- gg

----

See also: AcidAndLongTransactions