As a special case of LanguageChoiceImposesSocialStructure, we find that some computer languages, by their very design, inhibit refactoring. That is, the design of the language makes it difficult to improve the design of a program after the fact without changing its behavior.

Examples:

* '''Mutexes and Semaphores:''' a mutex severely inhibits refactoring anywhere a hold-and-wait might be required, because one must avoid deadlock. Mutexes and semaphores are non-composable concurrency-management primitives, and one cannot readily decompose or refactor (into procedures or frameworks) that which one cannot safely compose. Alternatives do exist: SoftwareTransactionalMemory, and (of course) serializing everything. (A small sketch of the deadlock hazard appears below, after this first group of bullets.)
* '''Cost of Abstraction:''' if abstractions cost a lot in terms of space, speed, guarantees (about realtime, memory, security, or safety requirements), or other precious resources important to a (potentially competitive) NonFunctionalRequirement, then that cost of abstraction will reasonably inhibit refactoring. ''How much'' one pays for abstractions is a function of the language implementation, especially of optimizer quality. However, the (potential) quality of the optimizer in turn depends heavily on the language: the more invariants an optimizer can assume or prove, the better it can do, and the limits of what it can assume and the cost of those proofs are essentially determined by language semantics. Since this is (albeit indirectly) a function of the language, one can say that certain language decisions inhibit refactoring because they make it difficult to maintain NFRs while refactoring.
* '''Source Components and Modularity:''' while rare today, some languages don't provide for effective ''modularity'', by which I mean the breakup of code into units that can be maintained separately and used in at least two different composite programs. This inhibits refactoring of functionality common to domains or feature-sets into associated libraries. There are actually quite a few ways modularity can still be improved... e.g. allowing polymorphism to access methods declared outside the scope; duck typing with overloading by predicate dispatch; allowing Haskell/OCaml pattern-matching functions to be defined with different pieces in different modules rather than all in one place (and the same for logic predicates in Prolog or Mercury); etc. For modularity today we tend to focus on eliminating ''unnecessary coupling'', where one piece of code shouldn't need to be aware of or change with the implementation details of another. That is critical, but yet more modularity can come from eliminating ''required cohesion'', where language semantics force a piece of code to be defined or shadowed all in one place (this advantage being related to CrossCuttingConcern''''''s).
* '''Frameworks''' are themselves a product of refactoring, separating behavior from policy, but frameworks in turn inhibit further refactoring. This is because frameworks are rarely ''composable'': while you can refactor features into a framework, you rarely can make those features work with the features of another framework. Instead, you'll need to modify one framework to include the features of the other. Also note that modern-day applications are pretty much frameworks... NoApplication solutions would seem to be wiser approaches if you wish to support refactoring across projects.
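To make the mutex bullet concrete, here is a minimal C++ sketch of the hazard (my own illustration; the 'account' mutexes and function names are invented). Each procedure is individually correct, yet composing them concurrently can deadlock, and no local refactoring of either procedure removes the hazard:

 // Minimal illustration of why mutexes resist composition.
 #include <mutex>
 #include <thread>
 
 std::mutex accountA, accountB;   // two hypothetical shared resources
 
 void creditAthenB() {
     std::lock_guard<std::mutex> a(accountA);   // hold A...
     std::lock_guard<std::mutex> b(accountB);   // ...and wait for B
     // ... do the transfer ...
 }
 
 void creditBthenA() {
     std::lock_guard<std::mutex> b(accountB);   // hold B...
     std::lock_guard<std::mutex> a(accountA);   // ...and wait for A
     // ... do the transfer ...
 }
 
 int main() {
     // Run both "correct" procedures concurrently and the program may deadlock
     // (that is the point of the sketch). Refactoring either one into a shared
     // helper doesn't remove the hazard; it only hides where the locks are taken.
     std::thread t1(creditAthenB), t2(creditBthenA);
     t1.join(); t2.join();
 }

C++17's std::scoped_lock can acquire both mutexes together with a deadlock-avoidance algorithm, but only if the caller names every lock up front -- which is exactly the kind of global knowledge that refactoring into procedures and frameworks tends to hide.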
Returning to frameworks: the degree to which frameworks are necessary depends largely upon the dataflow and MessagePassing capabilities of the language, especially of the asynchronous variety. Better message-passing and dispatch semantics can greatly reduce or eliminate the need for certain frameworks and design patterns (see AreDesignPatternsMissingLanguageFeatures).

* '''Behavior Capture:''' anonymous blocks, functor objects, function-pointer + void*, closures, and lambdas effectively allow capturing behaviors and their essential context at one place and time and applying those behaviors at another place at a future time. Lacking support for ''at least one'' such mechanism inhibits refactoring greatly because it prevents factoring the surrounding framework apart from the behavior itself. Essentially, without one of these you will need to duplicate the framework for every behavior. (A small sketch appears below, after this group of bullets.)
* '''GarbageCollection''' (or rather, the lack of it): structures end up being shared, and figuring out whose responsibility it is to delete the memory, determining when it can be deleted, etc. becomes complex, especially as frameworks and concurrency are introduced. This is problematic for refactoring because it introduces behavioral coupling between components. There are a number of ways to approach this problem -- e.g. you may attempt to localize the responsibility to delete by performing deep copies of values and switching to a more MessagePassing style -- but such solutions sacrifice performance (which, again, ''inhibits refactoring''). In C++ you might switch to smart pointers, but those are only good so long as all the modules you're interacting with use the same brand of smart pointer (thus creating a type coupling that ''inhibits refactoring''). The most generic solution is GarbageCollection, where you leave the task to a far less myopic service common to the entire language runtime (or more, if distributed).
* '''Value and Behavior Abstraction:''' almost all GeneralPurposeProgrammingLanguage''''''s have these (often merging the two, as in pure procedural and impure functional languages). However, TuringTarpit languages often ''entirely'' lack functional or procedural abstraction. This prevents refactoring in a very direct manner. Examples include UnLambdaLanguage, BefungeLanguage, and BrainfuckLanguage.
* '''Syntactic Abstraction:''' similarly, languages that lack syntactic abstraction techniques (e.g. RealMacros) will often require repeating common syntactic constructs (including loops and such). While micro-frameworks with behavior capture will usually serve (essentially passing HOFs to a 'for_each' parameterized procedure instead of calling a macro), these solutions still violate OnceAndOnlyOnce in the behavior-capture syntax itself. Essentially, you must keep repeating the tired old 'lambda' constructs - or possibly ''much'' worse in languages that lack anonymous functions or don't provide closure over local variables. Sometimes primitive syntactic abstractions (a la the C preprocessor) are GoodEnough, but more formal and integrated syntactic abstractions are both more powerful and less likely to introduce SymmetryOfLanguage violations (which also tend to inhibit refactoring).
* '''Text Layout:''' languages encoded in two dimensions on a text page, such as SnuspLanguage, can require massive reworking of code to perform even a small mutation. Admittedly, most languages are less fragile than that, but it's worth recognizing. At the other end of the scale are ConcatenativeLanguage''''''s.
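To make the ''Behavior Capture'' bullet concrete, here is a small C++ sketch (the name with_retries and the toy behavior are invented for illustration). The retry "framework" is written OnceAndOnlyOnce; without closures or some equivalent, it would have to be duplicated -- or fed a function pointer plus a void* context -- for every behavior:

 // Behavior capture: the framework is written once, behaviors are passed in.
 #include <functional>
 #include <iostream>
 
 // The "framework": retry a captured behavior up to 'attempts' times.
 bool with_retries(int attempts, const std::function<bool()>& behavior) {
     for (int i = 0; i < attempts; ++i)
         if (behavior()) return true;
     return false;
 }
 
 int main() {
     int progress = 0;
     // The behavior, with its essential context ('progress') captured by the closure.
     with_retries(3, [&progress]() {
         ++progress;
         std::cout << "attempt " << progress << "\n";
         return progress >= 2;   // pretend the second attempt succeeds
     });
 }

The same shape covers the 'for_each' case mentioned in the Syntactic Abstraction bullet: what remains repeated is only the lambda syntax itself.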
It would be interesting to compare ConcatenativeLanguage''''''s to others from a refactoring standpoint.

* '''Implicit Context:''' examples of implicit context include global values, ThreadLocalStore, Haskell monads (when using syntactic sugar to forward them), and Lisp's SpecialVariable''''''s... but even in languages that have one of these features (like globals), it may become insufficient once concurrency or multi-user (and security-context) systems are developed. Of course, the need for context depends very heavily on FunctionalRequirements. Insufficient support for implicit context can inhibit refactoring a great deal because you'll sometimes be forced to pass and thread a ContextObject explicitly, which (generally) won't integrate with any existing libraries. With ManifestTyping the problem is exacerbated, since the code also becomes coupled to the type of the context object. (A small C++ sketch of explicit context threading appears near the bottom of this page. Related: ExplicitManagementOfImplicitContext.)
* '''Strict ManifestTyping''' is problematic for refactoring, as it often introduces undesirable syntactic coupling between code and type that in turn makes it difficult to modularize in an application-independent way. One example was the ContextObject above, but coupling a class to a 'Visitor' class is equally problematic. One can avoid this problem with DynamicTyping, SoftTyping, or some form of TypeInference. ''Optional'' manifest type annotations don't suffer this problem.
* '''Data Composition and Data Types:''' if you cannot '''compose''' values, then you will be forced to repeat parameter-groups and/or repeat operations. For example, consider complex numbers. With a '''Complex''' type, one might simply write '''Complex c = a * b'''. The same operation without composition is: '''c_real = a_real * b_real - a_complex * b_complex; c_complex = a_real * b_complex + a_complex * b_real;''' And even if you were to utilize a function for OnceAndOnlyOnce, your best solution essentially looks like '''mul_complex(a_real, a_complex, b_real, b_complex, &c_real, &c_complex)'''. In any case, the need to repeatedly thread or carry around the '''(a_real, a_complex)''' pair considerably inhibits refactoring.
* '''CrossCuttingConcern''''''s:''' while ''logging'' is the traditional example, there are many forms of cross-cutting concern (security policy, performance, concurrency, hardware availability, communications protocols, file formats, etc.). If one cannot centralize expressions of or solutions to these concerns, then they will necessarily be spread throughout the code and largely inaccessible to both refactoring and modularization. So, essentially, a language that lacks support for cross-cutting concerns will ''inhibit refactoring''. But one needn't jump straight to AspectOrientedProgramming... there are many partial solutions that can help a lot here, including ExplicitManagementOfImplicitContext, distributed definition of MultiMethods (or other values defined in parts across modules), a MetaObjectProtocol, the ability to shadow or override or intercept certain common services (such as memory allocation), etc.
* '''SymmetryOfLanguage violations:''' a language violates SymmetryOfLanguage whenever there is an 'exception' to a manipulation rule.
Each such violation inhibits the creation of yet-more-generic abstracted solutions to a problem, as the exceptions must be handled explicitly (often through duplication or complex workarounds, like VisitorPattern to emulate DoubleDispatch, or Greenspunning a system that makes hard semantic distinctions between CompileTime and runtime, e.g. by reinventing types, parsers, evaluators, etc.). Arbitrary limits also qualify, which is partially why all those 'n-bit-wide' integers are problematic.

Related: MissingFeatureSmell

-----

A computer language which inhibits refactoring encourages BigDesign. A computer language which makes small changes costly inhibits explorative programming and prototyping, and thus encourages BigDesign. This is most often symptomatic of the "We Design, You Code" types of outfits. Thus, perhaps LanguageChoiceImposesSocialStructure. In a social structure where information flows down from a pyramid, manifestly typed languages like CeePlusPlus are a natural choice; where information gloms into a BigBallOfMud, LispLanguage or SmallTalk would be a better choice.

The last sentence there seems to be saying the opposite, i.e. that social structure imposes, or at least influences, language choice. (It may not be a complete coincidence that after the company I work for was purchased by a larger, more hidebound corporation, the decision was made to rewrite our Smalltalk product in C++.) My own opinion is that it cuts both ways. An organization with a minimum of rules is more likely to choose a DynamicallyTyped language like Smalltalk. Smalltalk in turn encourages a freewheeling, do-your-own-thing style which has an impact on the organization's culture.

----

''...inhibit refactoring...'' I wonder what language was in the OP's mind when he wrote this. I've never come upon a language that inhibited refactoring. The only thing that really inhibits refactoring is the fear of introducing errors while refactoring.

A: Specialized languages tend to inhibit refactoring. Relational database stored procedures often have arbitrary limitations on size, nesting level (number of nested calls allowed), and data structures, and communication with host programs is usually quite limited. Sybase/SQLServer don't allow the call stack to be more than 16 levels deep; in practice, things get unreliable beyond 12.

''I've hit size limitations on Sybase and Ingres procedures/triggers. -- JeffGrigg''

In COBOL it's easy to refactor flow of control, but data is a problem because it all must be global to the program. You can break a program into separate programs (or even multiple programs in one file, as of COBOL-85), but the complete lack of user-defined types (as of COBOL-85) makes it almost impossible to pass data around between programs. OK, maybe you can use "copy members" (COBOL's version of #include files) to simulate user-defined types, but it ain't pretty. ''(...not to mention resulting in programs that would be '''scary''' for 99.999% of the COBOL programmers who might see them.)''

Old versions of FORTRAN and BASIC lacked user-defined types. Early versions of BASIC lacked parameterized functions/subroutines; try global variables and GOSUB. PICK Info/BASIC imposed a severe performance penalty on each call to a parameterized subroutine (...which were always "external" and "dynamically linked"). Life was difficult.

''Don't get me started on dBASE-II or dBASE-III: limits on the number of variables, limits on the total size of all data, no local variables, and more...
-- JeffGrigg''

Refactoring SQL can be an interesting experience. (Also, you may or may not be refactoring the database design at the same time, to solve SQL problems.)

''However, I've found that refactoring is possible in all languages I've ever used. It may be more or less difficult, but it's always possible, and always worthwhile. -- JeffGrigg''

"Inhibit" can mean "absolutely forbid", or it can mean "discourage". I think this page is about the second meaning, but some authors are using the first.

-----

Storing data in a relational database can make refactoring difficult, because it's difficult to refactor the data in the database.

* You obviously don't use RedGate tools. Refactoring, merging, export/import, etc. is all painless. No, I don't work for them.

The actual syntax for saving data, changing table structure, and creating new tables with a new design is annoying enough, but most large organizations also impose procedural complexity: all work that changes a relational database in any way must be done through some other, independent group. ''(No exceptions allowed, typically.)''

''There are good reasons why database work should be done by a separate DBA group, but it does inhibit refactoring.''

If you think of the structure of the database as just another piece of code, then a separate DBA group is a form of code ownership: they own the database-structure code. See RefactoringWithRelationalDatabases for more.

----

I would probably list assembly language as a language that inhibits refactoring. Assembly programs often manipulate registers directly, and it can be quite difficult to keep track of what goes where. It also makes it quite difficult to spot similar stretches of code that could be factored into a single routine. Refactoring is still worthwhile, but the language does tend to make it a few degrees more difficult.
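Returning to the '''Implicit Context''' bullet near the top of the page, here is a minimal C++ sketch of explicit context threading (RequestContext, handle, and the other names are invented purely for illustration). Only the leaf routine uses the context, yet every intermediate signature -- and, under ManifestTyping, every caller -- becomes coupled to it:

 // Without implicit context (globals, ThreadLocalStore, dynamic scope, a
 // Reader-style monad, ...), the context must be threaded through by hand.
 #include <iostream>
 #include <string>
 
 struct RequestContext {            // hypothetical context object
     std::string user;
     bool verbose;
 };
 
 void log(const RequestContext& ctx, const std::string& msg) {
     if (ctx.verbose) std::cout << ctx.user << ": " << msg << "\n";
 }
 
 // 'middle' and 'handle' don't care about the context, yet must accept and
 // forward it, coupling their signatures (and every caller) to RequestContext.
 void leaf(const RequestContext& ctx)   { log(ctx, "doing the real work"); }
 void middle(const RequestContext& ctx) { leaf(ctx); }
 void handle(const RequestContext& ctx) { middle(ctx); }
 
 int main() {
     handle(RequestContext{"alice", true});
 }

With ThreadLocalStore, dynamic scoping, or monadic sugar, the middle layers could drop the parameter entirely, which is the refactoring freedom that bullet is describing.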