Sub-pixel rendering is used to make text clearer (such as Microsoft's ClearType). However, it can also be used to make images clearer. I'm wondering why graphics drivers couldn't render the entire screen at the sub-pixel level and then post-process the result for existing displays. That way, no distinction between text and graphics would have to be made.

Possible reasons:

* Hardware or graphics drivers aren't up to the task yet.
* Patents.
* Text and images are best handled with somewhat different sub-pixel rendering algorithms.
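For context, the core idea I mean can be sketched as: render at triple the horizontal resolution, then map each run of three samples onto the R, G, and B subpixels of one physical pixel. The function below is a hypothetical, minimal illustration (it assumes a standard horizontal RGB-striped panel and omits the anti-fringing filter that real implementations such as ClearType apply):

```python
import numpy as np

def subpixel_downsample(img_3x: np.ndarray) -> np.ndarray:
    """Map a grayscale image rendered at 3x horizontal resolution onto
    the R, G, B subpixels of an RGB-striped display (naive, no filtering).

    img_3x: 2-D array of shape (height, 3 * width), values 0..255.
    Returns an RGB image of shape (height, width, 3).
    """
    h, w3 = img_3x.shape
    assert w3 % 3 == 0, "width must be a multiple of 3"
    # Each consecutive triple of samples becomes one pixel's (R, G, B):
    # sample 0 drives the red subpixel, sample 1 green, sample 2 blue.
    return img_3x.reshape(h, w3 // 3, 3)

# Example: a 1x6 strip of samples collapses into two RGB pixels.
strip = np.array([[0, 128, 255, 255, 128, 0]], dtype=np.uint8)
print(subpixel_downsample(strip).shape)  # (1, 2, 3)
```

This naive mapping produces visible color fringes on high-contrast edges, which hints at why the third bullet above may matter: text rendering can tolerate (and filter for) fringing in ways that arbitrary images may not.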