[Under construction]

This is a thought experiment about finding objective, rigorously inspectable evidence that structured blocks are better than gotos.

                          *-----*
 init. source[1] (ISC) -->|     |
 money (M) -------------->| PBB |----> resulting_source_code (RSC)
 reqrmts. doc (RD) ------>|     |
                          *-----*

 [1] If modification requested.

This is the Programming Black Box (PBB). We found it in the middle of the Arizona desert one morning, on the walk back to our car after waking up from an excessive drinking binge. To simplify things, let's say it recognizes Fortran, C, C++, VB classic, and Pascal, so that we don't have to specify the language.

We feed it starting (initial) source code (ISC), money (charged by the hour), and a requirements document (RD). Once we feed it the RD and the source (if it's not a new app), it gives us an estimate and starts working when we put the money in. If it comes in under the estimate, we get a refund. If it goes over, it asks for more money to continue.

The PBB generally functions like a typical human programmer, except that '''we don't know how it works''' inside. We cannot peek into the black box to see what makes it tick; we just know it writes and modifies programs.

Now, we can run empirical tests on this black box to see whether it's cheaper to write block code or goto code (we can request a code style as part of the requirements document). We can count the money (time) it takes to create and modify code, and also keep a log of discovered bugs along with a severity level from 1 to 9. (We don't let the severity judges see the source code; they rate each flaw only in terms of inputs and/or outputs of the running RSC.) This would be a decent empirical test by most standards.

To keep things simple, let's for now ignore other programmers reading the resulting code and focus just on the PBB. (It's roughly comparable to a one-programmer shop.)

We can test its performance on blocks versus gotos empirically, but we cannot say anything about ''why'' at this point. If it does gotos better (less money and a lower bug score), we have no direct way of knowing why. At best we could feed it different programs as experiments and try to tease patterns out of its performance, but that is expensive and time-consuming for us, so we are ruling it out.

We cannot assume it thinks the way we do when we program. We don't even know whether a human is doing the work, a machine, or a biological alien. Using our own programming introspection to "understand" how the PBB might work is speculative at best.

Now suppose we did learn that a human was inside (or controlling it remotely). We could make the fairly reasonable assumption that they grok programs '''roughly''' the same way we do, belonging to the same species, but that is still speculative, and the results would be far from rigorous. Other humans are a "grey box" rather than a black box. Further, we would then be testing/modelling the properties of humans, not necessarily universal or absolute properties of blocks and gotos themselves.
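To make the bookkeeping concrete, here is a minimal sketch in C (one of the languages the PBB accepts) of the kind of per-trial record the experiment would accumulate. The struct fields, the weighting in the score, and the numbers are all illustrative assumptions, not part of the thought experiment itself.

 #include <stdio.h>

 /* Hypothetical bookkeeping for one PBB trial. */
 struct trial {
     const char *style;        /* "blocks" or "gotos", as requested in the RD */
     double      dollars;      /* money fed in (the PBB's proxy for time)     */
     int         severity_sum; /* sum of 1-9 ratings from judges who see only
                                  inputs/outputs of the running RSC           */
 };

 /* Arbitrary composite: cheaper and less severely buggy scores lower.
    The 100.0 weight is an assumption for illustration only. */
 static double score(const struct trial *t)
 {
     return t->dollars + 100.0 * t->severity_sum;
 }

 int main(void)
 {
     /* Made-up figures purely to show the comparison. */
     struct trial blocks = { "blocks", 5200.0, 11 };
     struct trial gotos  = { "gotos",  6100.0, 23 };
     printf("blocks=%.0f gotos=%.0f\n", score(&blocks), score(&gotos));
     return 0;
 }

Over many such trials, the lower-scoring style wins; that total is the only "evidence" the black box gives us, and it says nothing about ''why''.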
--top

----

Re: "This would be a decent empirical test by most standards"

No, it isn't. We don't have artificial intelligence that can do such a thing. And if we did, it would be biased, because it would think "the way a human thinks" rather than, say, the way a GoTo-happy CPU chip thinks.

Programs ultimately get compiled into a bunch of GoTo statements anyway; if you could skip compiling and configure a brain to write machine code directly, you would save the compilation step and its time, and such a brain would likely be so smart that it wouldn't make any mistakes; it would just spit out perfect machine code.

''You seem to be misunderstanding the premise. We don't know whether it's AI.''
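As an aside on the "compiled into a bunch of GoTo statements" point above: the lowering is easy to illustrate. The two C functions below are behaviorally identical, and a compiler reduces the structured one to conditional and unconditional jumps much like the hand-written goto version. This only illustrates the replier's premise; it says nothing either way about what the PBB would charge for each style.

 #include <stdio.h>

 /* The same loop in block style and in the goto style a compiler
    lowers it to; both print 0 through 4. */
 static void block_style(void)
 {
     for (int i = 0; i < 5; i++)
         printf("%d\n", i);
 }

 static void goto_style(void)
 {
     int i = 0;
 top:
     if (!(i < 5))
         goto done;
     printf("%d\n", i);
     i++;
     goto top;
 done:
     ;  /* a label must precede a statement, so use an empty one */
 }

 int main(void)
 {
     block_style();
     goto_style();
     return 0;
 }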