Have you guys noticed that most (all?) programming languages are restricted to using only the 94 printable ASCII characters? That may be because that's all most keyboards can currently produce (with the exception of a few extra symbols if your keyboard has a non-US layout http://en.wikipedia.org/wiki/Keyboard_layout). But... if as developers we started using keyboards like the OptimusMaximusKeyboard, we would be freed from that limitation and could perhaps find a visually better notation. For example:

*MathematicalNotation: Math is full of symbols that are not available on the keyboard and that could help express some algorithms better (for example the lambda in LambdaCalculus).
*A TrueRelationalDatabaseLanguage could use some of the symbols used in SetTheory or RelationalAlgebra to become both more terse and easier to read.
** ''What's wrong with abbreviations such as "un", "intr", "cmp", etc.? They are quicker to learn because they roughly "say" their own names. Plus, to allow for extra potential parameters, I vote prefix over infix. SmeQl generally reflects this viewpoint. -top''
** {I don't know what "un", "intr", or "cmp" are. '''That''' is the problem with abbreviations.}
*LogicProgramming: I have never liked PrologSyntax, but I have always liked handwritten "LogicSyntax". (One has to go through lots of contortions to write logical predicates with such a limited list of symbols.)

----

Not all programming languages are restricted to 94 ASCII characters - AplLanguage and ZeeNotation, for example. While a more flexible keyboard might be a partial solution, I have a feeling that a more practical solution would be to support more keyboard macros in common editors (e.g. after typing 'up', ALT flips to an up-arrow or other related characters (based on surrounding characters) and back in a rapid user-controlled cycle: Alt, Alt, Alt to the right character, or back to the original with Ctrl+z - see the sketch below). There are other macro schemes that also work. This overloads things a bit, but for the same reasons that the number pad was heavily overloaded for phone text messaging - if done right, with a good and adjustable macro language, it shouldn't be too much of a hassle.

''And perhaps something like a RoundMenu (http://spininfo.homelinux.com/news/GIMP_UI_brainstorm/2007/12/24/round_menu_with_custom_tools) could appear, showing you the symbols that are available as you type the combination of keys that will produce the specialized symbol.''

I don't think the reason is the limitation of the keyboard. After all, any special character in any specialized text, be it math or economics or design or otherwise, must be entered somehow - usually indeed by some menu or your editing tool of preference. And that is the problem: source code is often expected to be viewable and editable by the most primitive tools available. This said, I'm wondering why SmalltalkLanguage doesn't have special characters.

''(Actually, it did. Early Smalltalk used ↑ (up-arrow) for "return" and ← (left-arrow) for assignment.)''

Its closed environment should have made this easily possible. Hm. So maybe the reason is rather that those special symbols aren't really needed, or are even distracting from the uniform presentation. -- GunnarZarncke

Of course, technically we don't ''require'' more than two symbols, perhaps representing '1' and '0'; uniform presentation at the symbolic level is not a goal for most programming languages.
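To make the cycling-macro idea above concrete, here is a minimal sketch in Python. The abbreviation table and the "count the Alt presses" interface are invented purely for illustration - a real implementation would hook into the editor's own keymap and macro language.

 # Hypothetical sketch of an "Alt cycles the last typed abbreviation" macro.
 # The abbreviation table below is made up for illustration.
 CANDIDATES = {
     "up":     ["↑", "⇑"],        # typing 'up' then Alt cycles these
     "lambda": ["λ"],
     "in":     ["∈", "∉"],
     "forall": ["∀"],
 }

 def cycle_symbol(word, presses):
     """Return what `word` becomes after `presses` hits of Alt.

     Zero presses leaves the plain text; further presses walk through the
     candidate symbols and wrap back to the original word, mirroring the
     Alt, Alt, Alt / Ctrl+z behaviour described above.
     """
     options = [word] + CANDIDATES.get(word, [])
     return options[presses % len(options)]

 assert cycle_symbol("up", 1) == "↑"           # first Alt: up-arrow
 assert cycle_symbol("in", 2) == "∉"           # second Alt: next candidate
 assert cycle_symbol("lambda", 2) == "lambda"  # wraps back to plain text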
While it may be that source code is expected to be viewable and editable by primitive tools, I think that's a lesser reason - the larger issue is that source code needs to be ''easy'' to edit, and reaching up to grab something out of a menu is a real pain. Programmers won't put up with it. Any language designer who follows EatYourOwnDogFood wouldn't put up with it - at least not on a regular basis. If it were a really rare symbol that didn't come up but once in several thousand characters, perhaps it would be tolerable... but at that point a keyword is easier to enter and just as clear without being overly verbose (because verbosity = frequency * width; low frequency can balance a greater width). Language designers, instead, end up parceling out a few critical symbols - those readily available above the number keys and on the right-pinky side of the QWERTY keyboard - to things that are high frequency and of semantic significance, and that ideally are familiar to programmers from their prior education. They also start using combined symbols, like '->', '=>', '>>=', '==', ':-', etc., which can graphically approximate several other symbols. Even a closed environment doesn't change issues of convenience or the basic formula for verbosity. Symbols strictly aren't ''necessary'' for anything, but they do reduce verbosity (and consequently improve clarity) - i.e. Smalltalk could have been written without any symbols at all outside of 'a-z' and spaces, but it would have been far more verbose.

----

It would seem possible to have an editor that would render certain keywords of a language as symbols. For example, in LispLanguage, "lambda" could be rendered with a single lambda character. You could still use primitive tools to work with the code (it could all be standard ASCII characters), but in the "special" editor you would see a symbol. Somewhat like syntax-based coloring and fonts, but replacing the whole keyword with a symbol. (A rough sketch of such a substitution follows below.)

''Perhaps, though how one then handles quoted lists containing 'lambda' that might or might not later be evaluated is an issue. For non-HomoiconicLanguages, this is probably quite doable. I suppose it depends on the other language properties for which the designer is aiming. HomoiconicLanguages would need special handling because suddenly the same code can look different in different places based on whether it is data or code in that particular location. And if you're aiming at SymmetryOfLanguage, then ExtensibleProgrammingLanguage''''''s, to have symmetry, would require an equally extensible editor - i.e. the mechanism by which 'lambda' is collapsed to the Greek character would need to be just as applicable to any other syntax rule for the language.''

''Personally, I'd be against it, if only because it seems like an attempt to elevate ASCII onto some pedestal. I'd prefer the editor simply offer greater access to Unicode (e.g. through keyboard macros), and that it simply be accepted that the "simple tools" will be required to both render and accept Unicode. Unicode is now state of the art. There isn't anything 'special' about ASCII that requires we continue using it as a primary basis for communication between programmer, program editor, and program compiler/interpreter.''

Yes. I agree that most symbols one might come up with are readily available in UniCode. It seems the legacy of ASCII is still showing and slowing us down. ASCII really got hold of programming languages early on and will not let go soon.
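As a rough illustration of the render-keywords-as-symbols idea above (not any particular editor's mechanism), here is a Python sketch of a lossless, display-only substitution. The keyword table is illustrative; note that the naive word match would also collapse 'lambda' inside quoted lists, which is exactly the HomoiconicLanguage complication raised in the reply.

 import re

 # Display-only substitution: the file on disk stays plain ASCII; only the
 # rendered view shows the symbol.  The keyword table is illustrative.
 SYMBOLS = {"lambda": "λ"}

 def render(source):
     """ASCII source -> pretty view (what the editor would draw)."""
     for word, sym in SYMBOLS.items():
         source = re.sub(r"\b%s\b" % re.escape(word), sym, source)
     return source

 def unrender(view):
     """Pretty view -> ASCII source (what gets saved to disk)."""
     for word, sym in SYMBOLS.items():
         view = view.replace(sym, word)
     return view

 code = "(define twice (lambda (f) (lambda (x) (f (f x)))))"
 assert unrender(render(code)) == code   # round-trips losslessly
 print(render(code))                     # (define twice (λ (f) (λ (x) (f (f x)))))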
There are really enough editors and viewers out there now that can cope with UniCode. The programmer's editor par excellence, Emacs, has supported Unicode probably since the beginnings of Unicode, so one has to wonder why new programming languages didn't use the ability to define characters a long time ago. In Forth, for example, it would be all too easy. And in Prolog it's also possible. On the other hand, one could indeed just render the code differently for presentation purposes, such as generated docs. I once toyed with the idea of using TeX notation for formulas, which would have had the added bonus of rendering compactly. But I agree that reflection/homoiconicity can make this really difficult. -- .gz

I also gave ''serious'' consideration to TeX and similar annotated code in the language I'm designing, but I decided against it; it seems to me that it would be a mistake similar to mixing content structure with style information in HTML: display is not a primary purpose of code. However, a smart display system that can understand the syntax and automatically format code for documentation purposes would be very worthwhile, whereas depending on stuff inside comments seems quite hackish. To support this, I did eventually include a ''generic'' facility for including post-processor annotations that are not intended to directly affect the semantics of the content - e.g. for giving a programmer's guesstimate profile to the optimizer, or offering search-strategy suggestions to the constraint logic analysis... or even offering formatting suggestions for documentation.

You misunderstood. I didn't mean TeX in comments or the like, but rather executable TeX (nickname: eXo (echo)). Example: a=\sum_{0
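The example above is cut off as the page stands. Purely as a hypothetical sketch of what "executable TeX" might mean - the names and the translation rule below are invented and are not the actual eXo notation - a formula written in TeX summation form could simply evaluate:

 import re

 # Guess at "executable TeX": a TeX-style summation that actually computes.
 # The parsing rule and function name are hypothetical, not eXo itself.
 def exec_sum(tex):
     """Evaluate a TeX-style sum such as \\sum_{i=0}^{9} i*i."""
     m = re.match(r"\\sum_\{(\w+)=(\d+)\}\^\{(\d+)\}\s*(.+)", tex)
     var, lo, hi, body = m.group(1), int(m.group(2)), int(m.group(3)), m.group(4)
     return sum(eval(body, {var: i}) for i in range(lo, hi + 1))

 a = exec_sum(r"\sum_{i=0}^{9} i*i")
 print(a)   # 285, i.e. 0*0 + 1*1 + ... + 9*9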