Taken from DoMicroprocessorsLoveCee, which originated on SufficientlySmartCompiler. The main point might be this: Would/does/did any of these processors actually run its target language FasterThanCee?

----

''Processors would look very different if Forth were the dominant programming language. AplLanguage would produce a different architecture (many parallel vector units, maybe?), and architectures targeted at Lisp would be different again (hardware support for read and write barriers? cons, car, and cdr instructions? Tagged integers?). A Python processor might have built-in support for dictionaries.''

ChuckMoore has been designing processors around Forth. Would someone like to comment on the result? Is his work on asynchronous processors in any way related to Forth, or is it just something that is cool by itself? ;-)

What would a HaskellLanguage-optimized CPU look like? The problem with Haskell performance is supposedly its laziness, which throws things out of cache and adds levels of indirection. How would a high-level processor deal with this? Can we learn something from how the Haskell VM deals with it? And the LispMachine''''''s? Was there anything special about them except the typed memory and the hardware car/cdrs?

Java processors seem to have existed, or almost existed, at some point. How does C compete on such a beast? Is there a C implementation for the JVM? Would that be relevant, or would C on a JVM just be a crippled Java?

''A lot of Java processors were/are rebranded Forth chips. The StackMachine architecture is common to both...''

Did there ever exist, or almost exist, a processor designed for the Smalltalk VM?

''There was once a project called SOAR, which stood for Smalltalk On A Risc.''

----

'''nested function definitions'''

''(moved from CeeLanguage)''

...features that were used in other languages in the late 1960s/early 1970s that were left out of C very much on purpose in order to make C a faster language...

* NestedFunctions (available in Algol60 and Pascal). These require a runtime "display": a little gadget that keeps track of the current values of a parent function's variables. They're a nuisance to implement, and in theory can slow down access to those variables arbitrarily (depending on the depth of runtime recursion).
* ''In practice, however, this doesn't happen. When I code in C, I very often find myself implementing functions that ought to be so-called "inner functions" or "nested functions", and VERY deeply miss the ability to reference parent-function variables. The overhead shed from not having to construct "displays" is offset by having to constantly pass variables to and from static functions all over the place, especially if you have a LOT of them (see the sketch further down this page).''

Special hardware ("display registers") makes nested functions blazing fast. C was developed on machines that didn't have this hardware. http://en.wikipedia.org/wiki/Burroughs_B5000 claims that with "display registers", "access to entities in outer and global environments is just as efficient as local variable access."

Later microprocessor designers specifically tuned their designs to run fast on benchmark code. Almost all benchmark code is written in C and Fortran. If their benchmark code had been written in some other language, they would have tuned their designs differently. DoMicroprocessorsLoveCee.

Even if microprocessors had been tuned differently, they still would have run C code pretty fast. But the question ("Would some other language run faster than C?") still remains.
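To illustrate the nested-function point above, here is a minimal sketch (the names are purely illustrative) of the usual workaround in portable C: the state a parent function would share with an inner function gets bundled into a context struct and threaded through a static helper by hand. With nested functions in the language (and display registers underneath them), the helper could simply refer to the parent's locals directly; GCC offers nested functions as a nonstandard extension that removes the hand-threading.

 #include <stdio.h>

 /* Portable C has no nested functions, so whatever a parent function
    would share with an "inner function" must be passed explicitly,
    typically via a context struct like this one. */
 struct tally_ctx {
     int threshold;   /* would simply be a local of the parent */
     int count;       /* result accumulated by the helper */
 };

 /* The would-be inner function: it cannot see the parent's locals,
    so everything it needs arrives through the context pointer. */
 static void tally_if_above(struct tally_ctx *ctx, int value)
 {
     if (value > ctx->threshold)
         ctx->count++;
 }

 static int count_above(const int *values, int n, int threshold)
 {
     struct tally_ctx ctx = { threshold, 0 };
     for (int i = 0; i < n; i++)
         tally_if_above(&ctx, values[i]);
     return ctx.count;
 }

 int main(void)
 {
     int data[] = { 3, 9, 1, 12, 7 };
     printf("%d\n", count_above(data, 5, 8));   /* prints 2 */
     return 0;
 }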
Even with extra hardware specifically designed to support language features lacking in C, that hardware would (probably) still run C just as fast.

''It's funny that nobody has mentioned that the addition of SIMD in modern mainstream processors makes it hard to write efficient C code without resorting to raw assembler. Only lately have compilers started to catch up, with advanced analysis methods. This is probably an area where less imperative languages have the possibility of an upper hand.'' (A small illustration follows at the bottom of this page.)

----

See: DoMicroprocessorsLoveCee, MetaModel

CategoryComputerArchitecture
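To make the SIMD remark above concrete, here is a small sketch (purely illustrative; the intrinsics are compiler- and x86-specific extensions, not part of the C language). The first loop leaves any vectorization entirely to the compiler's analysis; the second spells it out by hand with SSE intrinsics, one step short of raw assembler.

 #include <stddef.h>
 #include <xmmintrin.h>   /* SSE intrinsics; x86-specific, nonstandard */

 /* Plain C: a SufficientlySmartCompiler may auto-vectorize this loop,
    but nothing in the language guarantees it. */
 void add_arrays(float *dst, const float *a, const float *b, size_t n)
 {
     for (size_t i = 0; i < n; i++)
         dst[i] = a[i] + b[i];
 }

 /* Hand-vectorized: four floats per iteration via SSE registers.
    Here the programmer, not the compiler, expresses the parallelism. */
 void add_arrays_sse(float *dst, const float *a, const float *b, size_t n)
 {
     size_t i = 0;
     for (; i + 4 <= n; i += 4) {
         __m128 va = _mm_loadu_ps(a + i);
         __m128 vb = _mm_loadu_ps(b + i);
         _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
     }
     for (; i < n; i++)           /* scalar tail */
         dst[i] = a[i] + b[i];
 }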