One of the benefits of the client/server computing model is that one can often avoid sending every record to the client in order to perform some sifting or summarization operation. For example, if you want to search 50,000 records for a street address containing the string "foo", you can issue a query to the central database to perform the search and return only the matching results, which may amount to just a few dozen bytes. If we let the client do the processing, we might have to transfer all 50,000 records to the client for inspection. This can be a real bandwidth hog, and it was a problem for LAN-based or WAN-based solutions where the server was treated as little more than a hard drive.

''This isn't a benefit of the client/server model. It's a benefit of server-side processing. Sending all 50,000 records to the client to examine is still client/server.''

That conflicts with the definition as I understood it.

''There's nothing in the client/server model that says where filtering has to take place.''

Client/server is not a well-defined term. Historically, didn't it emerge purely as a contrast to peer-to-peer? That was when networks were first getting into offices, and people were just hooking up workstations and sharing files. The idea of client/server was that there should be a machine dedicated to serving, which didn't have a console user and could have specialized hardware (e.g., a big disk but a poor video card). In this sense, remote iteration and query transmission are both client/server.

[I'm confused. If you don't think it a well-defined term, then why did you bother saying ''"That conflicts with the definition as I understood it"''? Anyhow, your last statement seems accurate: they are both client/server. So it would probably be wise to reword your RemoteIterationIssue so that it doesn't call this a ''benefit of client/server'', but rather a benefit of the ability to send an ''ad hoc'' query to the server. You might also want to note that there's a problem if all you can do is send single-item requests, forcing you to send, say, 20,000 requests to get all the data. :(]

----

An interesting worldview for the above scenario: ''what the client really did was send part of '''itself''' to the server; that part ran on the server's machine, then sent a filtered-down piece of itself back to the main body.'' Thus, rather than viewing the query as a function of the server, we view it as a function of the client. The same role is often played in reverse: ''JavaScript code running in your browser is really a piece of the server, shipped over to your computer to twiddle your display.''

When this worldview is fully embraced - i.e., codified into a programming language, communications protocol, and server/browser plugins and libraries (to avoid being a TechniqueWithManyPrerequisites) - it forces a departure from the common security view that code and data are separate. DataAndCodeAreTheSameThing. Ideally, we should be able to talk to one another using a general-purpose language... and if we don't allow this, we'll ultimately be doomed to reinvent it, '''badly'''. In the above cases, the code is extremely limited: one can't do something useful like send code that fetches data from server A and then sends some commands to server B. A major hindrance to this view, naturally, is that there aren't many 'secure' general-purpose programming languages; even EeLanguage is relatively new, and it doesn't really provide protection against denial-of-service.
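To make the bandwidth point from the top of the page concrete, here is a minimal sketch in Python. It uses sqlite3 purely as a stand-in for a networked database, and the table and column names are hypothetical; the interesting part is what would cross the wire in each case.

 import sqlite3

 # Stand-in for a connection to a remote database server; in a real
 # client/server setup, only the query text and its results would
 # cross the network.
 conn = sqlite3.connect("addresses.db")

 # Server-side filtering: the server sifts all 50,000 records and
 # ships back only the handful that match - a few dozen bytes.
 matches = conn.execute(
     "SELECT street FROM addresses WHERE street LIKE ?", ("%foo%",)
 ).fetchall()

 # Client-side filtering: every record crosses the wire, and the
 # client does the sifting itself - the bandwidth hog.
 all_rows = conn.execute("SELECT street FROM addresses").fetchall()
 matches = [(street,) for (street,) in all_rows if "foo" in street]

Both versions compute the same result; the only difference is where the sifting happens, which is exactly the distinction between server-side processing and treating the server like a hard drive.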
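And a toy sketch of the "client sends part of itself" worldview, again in Python (the record data is made up, and shipping the predicate as raw source text is the crudest possible encoding): the client sends a fragment of its own logic to the server, the server runs it, and only the surviving records travel back. The eval() here is precisely the security problem raised above; a real system would need a sandboxed, resource-limited language.

 # The "piece of the client" shipped to the server: a predicate,
 # expressed as source text. DataAndCodeAreTheSameThing.
 predicate_src = "lambda record: 'foo' in record['street']"

 # --- server side ---
 def run_query(predicate_src, records):
     # eval() stands in for receiving and loading mobile code; it
     # offers no protection against hostile or runaway queries.
     predicate = eval(predicate_src)
     return [r for r in records if predicate(r)]

 # --- back on the client: only the matches return ---
 records = [{"street": "12 foo lane"}, {"street": "99 bar ave"}]
 print(run_query(predicate_src, records))  # [{'street': '12 foo lane'}]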
Anyhow, this is the idea behind TransparentDistribution and TierlessProgramming. I've spent a few years now researching the best way to develop a language suitable for this.

----

CategoryDatabase, CategoryOptimization