I have recently read suggestions that program code could be defined in forms other than plain text. As the developer of a new language I have thought quite a lot about this topic, and I would like to solicit other opinions.

Written English is built from 26 letters and 10 digits (I know there are symbols and punctuation as well). Words are combinations of these characters, and ideas and sentences are made by combining sets of these words. The extensibility of ideas under this scheme is unlimited. Extensibility is a laudable goal in programming because we don't know where our designs will go. Other forms of communication, such as pictures and diagrams, work well for humans, but they are quite opaque to computer programs. Words therefore offer unlimited extensibility and can convey meaning to both humans and computers.

* Configuration files on Unix systems have traditionally been plain text files that can be read or changed by any text editor.
* HTML uses only 7-bit ASCII text with a somewhat wordy tag and key=value syntax. Any HTML can be read and interpreted by humans or by browsers. A binary format would have been much more compact, but it wouldn't have been both human and computer friendly.
* XML has the same useful properties as HTML, and it is verbose as well.
* Should your remote control send plain text commands to your PVR, or should they be proprietary?
* Should devices that send instructions to the lights in your house use binary or just standard text?
* What about commands to a robot? An accounting system?

Computer languages are optimized for creating and parsing standard text words and their associated parameters. Standard text is also easy for humans to understand if done correctly. Have you ever tried to write a macro that can navigate an accounting system that has only a mouse interface?

I am advocating that all functions of all computer systems have a text-based interface, even if the user interacts with the system through a mouse, pictures, or whatever. With a text-based internal system it would be easier to debug, and it wouldn't limit the kinds of devices that could interact with that system in the future. If the instructions from your steering wheel, gas pedal, and brake were all text based, you could implement collision-avoidance systems that don't even exist yet. Obviously you would need security safeguards so that the system couldn't be hacked or used in ways not authorized by the owner of the car.

I believe that most software in the future will be created by computers rather than by programmers, although systems need to accommodate both for the foreseeable future. This can be accomplished only if software, data, and meta-data all exist inside the systems we create. A character interface would facilitate this eventuality. -- DavidClarkd
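To make the idea concrete, here is a minimal sketch of what is being advocated. Python is used only for illustration, and the command names (set-volume, status) and reply format are made up for the example: the system's single real entry point is a line-oriented text command interpreter, and the GUI handler is just one more client of that interpreter.

 # Sketch only: every capability of the system is reachable through one
 # text command interface; the GUI is just another client of that interface.
 state = {"volume": 5}

 def execute(command_line):
     """Parse one plain-text command and return a plain-text reply."""
     parts = command_line.split()
     if not parts:
         return "error empty-command"
     verb, args = parts[0], parts[1:]
     if verb == "set-volume" and len(args) == 1 and args[0].isdigit():
         state["volume"] = int(args[0])
         return "ok volume " + args[0]
     if verb == "status":
         return "ok volume " + str(state["volume"])
     return "error unknown-command " + verb

 # The mouse interface is a thin translation layer: the slider handler issues
 # the same text command that a macro, a debugger, or another program could.
 def on_volume_slider_moved(new_value):
     print(execute("set-volume " + str(new_value)))

 print(execute("status"))        # -> ok volume 5
 on_volume_slider_moved(7)       # -> ok volume 7

Because every function goes through the one command interpreter, logging, debugging, and driving the system from some future device all amount to reading and writing lines of text.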
----
See PowerOfPlainText; the UnixWay is to use PlainText. I guess there are even RFCs to this end.

''I looked at the link you provided, and this is part of what I am saying, but I am taking the concept much further. I am advocating that every system of programs have a character/command interface at the center of that system, and that all other interfaces actually use that interface. The reasons why that is a good thing are the ones I have given above, plus many of the same reasons your link gives for storing configuration and other data as text. I am not suggesting that data be stored only in flat text form; I am referring to accessing the functionality of any computer system, rather than to how data is stored. I am also not talking about the interface between multiple programs, as in the "Unix Way". -- DavidClarkd''

* I don't think that's going to be practical for GUIs. As much as I believe in the PowerOfPlainText, it would be better to focus on the KillerUserInterface than to find text translations of GUI actions. See ThreeDimensionalVisualizationModel.

I think one reason we do not have a composable GUI language worth the name is that deaf (and blind) people were not in on the job when GUIs and programming languages were invented. But deaf people do communicate with a graphical language of gestures. Maybe much can be learned about a composable graphical language from sign language. Note that sign language is not simply a translation of words into gestures. Sign language uses the gesturing '''space''' for placing language referents, allowing, for example, much richer 'relative' pronouns. It also combines gestures for verbs with prepositions naturally: the sign for 'fly to' is really a 'fly' gesture moving through gesture space to the target, and there can be more targets than in a spoken sentence, because space is not a metaphor here. -- Gunnar Zarncke

''A counter-example is written language. Pictures and diagrams (other than storing or retrieving them for humans) are only useful in computer processes once static information has been extracted from them. I think most systems will be manipulated by programs in the future, and providing an extensible character interface is like having most of the internet use English: it is not the best language, BUT it is extensible and pervasive, which is what matters most. -- DavidClarkd''
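Tying together the remote-control question from the list near the top and the point about macros: with a line-oriented plain-text protocol, a "macro" is nothing more than a text file of commands, and the transcript of a session is readable by a person. A small sketch follows, again in Python; the command names (POWER, CHANNEL) and the OK/ERR reply format are invented for the example, not taken from any real PVR.

 # Sketch of a plain-text protocol between a controller and a PVR:
 # one request per line, one human-readable reply per line.
 def pvr_handle(line, state):
     """Interpret one request line and return a plain-text reply."""
     parts = line.split()
     command, args = (parts[0].upper(), parts[1:]) if parts else ("", [])
     if command == "POWER" and args and args[0].upper() in ("ON", "OFF"):
         state["power"] = args[0].upper()
         return "OK power " + state["power"]
     if command == "CHANNEL" and args and args[0].isdigit():
         state["channel"] = int(args[0])
         return "OK channel " + args[0]
     return "ERR unrecognised: " + line

 # Because the interface is just lines of text, a macro is simply a text file
 # (here an inline string) that any person or program could have written.
 macro = """\
 POWER ON
 CHANNEL 42
 POWER OFF
 """

 state = {"power": "OFF", "channel": 1}
 for line in macro.splitlines():
     print(line.strip(), "->", pvr_handle(line, state))   # readable transcript

A binary protocol would be more compact, as noted above for HTML, but a log of this exchange can be read and debugged by a person, and a new device only needs to be able to emit the right lines of text.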