This CeePlusPlus idiom (ResourceAcquisitionIsInitialization or RAII) relies on the life-cycle of objects: a constructor is called to initialize the object, and a destructor is called when it goes away. Objects are usually created either automatically in a local scope, in which case they are destroyed when the scope is exited (via normal flow, return, or exception), or they are created with "new", in which case they are destroyed by an explicit "delete" (which is not RAII).

''(Not to be confused with RIAA - RecordingIndustryAssociationOfAmerica. RAII isn't evil.)''

If the resource in question (a file, lock, window, etc.) is acquired (opened, locked, created) in a constructor, and released (closed, unlocked, disposed) in the destructor, then the use of the resource is tied to the lifetime of the object. Since object lifetimes are well-defined, so is the use of the resource. Local objects can be used particularly elegantly:

  {
    File f("/some/path/name");
    // use f
  }

Here, assuming the class File opens a named file upon construction and closes it upon destruction, this section of code will open a file (or throw an exception), do whatever it cares to in the "use f" section, and then close it.

It is common to use this idiom to wrap the resource use itself, e.g. given a Mutex class with lock() and unlock() methods, a RAII class Mutex''''''Lock can be written whose constructor lock()s (and remembers) some Mutex and whose destructor unlock()s it; then one can write code like:

  {
    Mutex''''''Lock l1(mutex1);
    Mutex''''''Lock l2(mutex2);
    // critical section
  }

Since destruction is in the reverse order of construction, this produces the right semantics, for mutexes as for many other resources. Note that you have to be very careful that the object you use for a mutex still exists at the end of the function, otherwise things get very messy, very quickly.

In C++, of course, new'd objects must be explicitly deleted, but in many cases the RAII idiom can be used to automate this; the ANSI C++ standard defines an ''auto_ptr'' template (deprecated in C++11 in favour of unique_ptr and shared_ptr) to standardize this idiomatic use. While the idiom is particularly elegant for use with lexically-scoped objects, it applies equally to dynamically-allocated, 'semantic lifetime' ones: a new'd File object as described above will hold open the file resources until it is later deleted, at which time it will close the file before the memory actually returns to the heap. For dynamic-lifetime objects, the 'delete' operation acts as a universal release-your-resources indication. Since a properly-written program needs to release the memory in any case, you can generally rely on that happening.
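For concreteness, here is a minimal sketch of what a File class like the one assumed above might look like. The class name, the choice of C stdio (fopen/fclose), and the error handling are all illustrative, not any particular library's API:

  #include <cstdio>
  #include <stdexcept>
  #include <string>

  class File {
  public:
      explicit File(const std::string& path)
          : handle_(std::fopen(path.c_str(), "r")) {
          if (!handle_)
              throw std::runtime_error("cannot open " + path);  // acquisition failure reported at construction
      }
      ~File() { std::fclose(handle_); }   // release tied to object lifetime
      File(const File&) = delete;         // copying would close the same FILE* twice
      File& operator=(const File&) = delete;
      std::FILE* get() const { return handle_; }
  private:
      std::FILE* handle_;
  };

Whether the scope is left normally or by an exception, the destructor is the single place the resource is released, which is the whole point of the idiom.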
----

'''LispLanguage:'''

''The Lisp way has been to not use destructors, finalizers, what have you, and to use macros instead. Therefore...''

  (with-open-file (file "/some/path/name" :direction :input)
    ...)

''and come hell or high water, 'file' will be closed when the scope of the 'with-open-file' ends. Now of course, this won't be sufficient for all uses, but it is just as powerful as the C++ idiom, and you have OPEN and CLOSE for the rest. One sees this over and over again, whenever you're interested in this kind of discipline: WITH-LOCK, WITH-TRANSACTION, etc.''

''Note also that these macros are built on top of the standard UNWIND-PROTECT (see UnwindProtect) form (which basically says "do this form, then do these cleanup forms no matter what happened during the execution of this stuff" - not so unlike Java's try/finally would be if you didn't catch anything), so if you invent a new kind of resource you can invent a new WITH-FOO macro to go with it. You're not limited to whatever someone else thought you'd need.''

No, the Lisp idiom is -- believe it or not -- ''not'' as powerful as the C++ idiom. Yes, it is equivalent for the degenerate case when the resource handle does not escape the scope in which it is created. But the C++ idiom allows a much more powerful pattern, where the resource can be passed ''up'' the call stack. Consider the following C++:

  my_file special_file (string name)
  {
    my_file file {name};  // Initialize the contents of my_file
    return file;
  }

  void do_stuff ()
  {
    my_file file = special_file ("foo");
    // ...
    // file is cleaned up here
  }

Calling 'special_file ("foo")' will give you an RAII resource unbounded by lexical scope ''but still guaranteed to be destroyed when the handle is destroyed''. Macros derived from UNWIND-PROTECT cannot do this, by design; the cleanup code will be called at the end of the macro form, even if you want the resource to persist beyond the form. (For the curious, 'return file;' does not cause a copy; where NRVO is not performed, it's a move. Regardless, there are ways of implementing RAII-style handles that handle copying gracefully, e.g. with std::shared_ptr.)

----

'''RubyLanguage:'''

  File.open('/tmp/file', 'r') do |aFile|
    process process process
  end

----

'''CeeSharp:'''

  using (FileStream f = File.OpenRead(@"C:\foo.txt"))
  {
      // FileStream implements System.IDisposable;
      // do stuff to file
      // not RAII because of the necessary 'using' block
  }

''compiles to code that looks like it came from...''

'''JavaLanguage:'''

Before Java 7:

  final FileInputStream f = new FileInputStream("C:\\foo.txt");
  try {
      // do stuff to file
  } finally {
      f.close();
  }

Java 7 and later:

  try (FileInputStream f = new FileInputStream("C:\\foo.txt")) {
      // do stuff to file
  }

----

'''PerlLanguage:'''

  {
    open (my $fh, '<', $file) or die "Couldn't open $file: $!";
    # now read stuff from $fh...
  }

''$fh goes out of scope, thus destructs, thus closes. Shweet!''

----

One problem with the name ResourceAcquisitionIsInitialization is that it is inaccurate: it's not always about resources, and it has nothing to do with initialization -- what makes the idiom tick is finalization. Other than that, the name's fine ;-> For more on this, and a small PatternLanguage for C++ that resolves such paired-action issues, see ExecuteAroundSequences. -- KevlinHenney

''It is always about resources: things you must give back when you're done -- and it has everything to do with initialization: the whole point of RAII is to encapsulate a resource in an object, with the acquisition of the resource done when the object is constructed. That RAII can be viewed as an example of a broader pattern doesn't mean that RAII isn't RAII.''

I think the point is that we don't have a name for that broader pattern, and so we use the term RAII for it. http://www.boost.org/doc/libs/1_48_0/libs/scope_exit specifically mentions RAII and resource acquisition even though Boost.ScopeExit is applicable more generally than solely in resource-management scenarios (e.g. logging the exit from a function). It would be of benefit to all, but particularly novices, to have a more concise and less opaque term for this pattern than ResourceAcquisitionIsInitialization. -- RobDesbois
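As an illustration of that broader pattern, here is a minimal C++11 sketch of a generic scope guard. The names scope_exit and make_scope_exit are made up for this example; Boost.ScopeExit (linked above) and the ScopeGuard idiom discussed below are the established, more complete versions:

  #include <cstdio>
  #include <utility>

  template <typename F>
  class scope_exit {
  public:
      explicit scope_exit(F f) : f_(std::move(f)), active_(true) {}
      scope_exit(scope_exit&& other)
          : f_(std::move(other.f_)), active_(other.active_) { other.active_ = false; }
      ~scope_exit() { if (active_) f_(); }   // cleanup runs however the owning scope is left
      void dismiss() { active_ = false; }    // cancel the cleanup, e.g. after a successful commit
      scope_exit(const scope_exit&) = delete;
      scope_exit& operator=(const scope_exit&) = delete;
  private:
      F f_;
      bool active_;
  };

  template <typename F>
  scope_exit<F> make_scope_exit(F f) { return scope_exit<F>(std::move(f)); }

  void example() {
      std::FILE* fp = std::fopen("/tmp/scratch", "w");
      auto closer = make_scope_exit([fp] { if (fp) std::fclose(fp); });
      // ... write to fp; the fclose runs on return or on an exception
  }

Whether such a guard counts as RAII proper or merely as a close relative is exactly the dispute a little further down this page.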
----

This is one of the few times I have wished SmalltalkLanguage had destructors. The problem with tenured GarbageCollection algorithms is that you don't get notice when an object is about to die; all you find are object corpses of the John Doe variety. P.s. for Smalltalk experts, is there a tidy way to handle termination (such as releasing resources) these days? -- AlistairCockburn

I expect that Smalltalk could use the same "blocks" idiom that RubyLanguage uses. E.g. where a Ruby coder would write a File class that could be used like this:

  File.open("filename") do |stream|
    ... read from stream ...
  end

a Smalltalk coder could write a File class that could be used like this:

  File open: 'filename' do: [ :stream | ... read from stream ... ].

----

AndreiAlexandrescu and Petru Marginean introduced the ScopeGuard idiom, which is a way of utilizing ResourceAcquisitionIsInitialization without having to create new classes for each resource. This provides a simple way of writing exception-safe code. -- KristianDupont

''Scope guards map to try/catch/finally (they are explicitly defined as transformations to same) -- which is an alternative to RAII, not a form of RAII.''

----

One of the important aspects of RAII that is often missed is that of design invariant guarantees. This shows up most clearly in the design of, say, mutexes. For example (original Turner example corrected):

  class Mutex
  {
  public:
      class Scoped''''''Lock
      {
          Mutex& m_;
      public:
          Scoped''''''Lock(Mutex& m): m_(m) { m_.lock(); }
          ~Scoped''''''Lock() { m_.unlock(); }
      };

      Mutex() {}

  private:
      void lock();
      void unlock();

      friend class Mutex::Scoped''''''Lock;

      // non-copyable
      Mutex (const Mutex&);
      Mutex& operator= (const Mutex&);
  };

  void foo()
  {
      Mutex m;
      Mutex::Scoped''''''Lock l(m);
      // ...
  }

By making the lock() and unlock() methods private, the designer of the Mutex class can guarantee that the mutex will only be operated through the sentry Mutex::Scoped''''''Lock class (as this is the only accessible way of locking the mutex). Thus the design invariant "for every call to lock() there shall be a corresponding call to unlock()" is enforced. Most importantly, the "correct" way becomes the "easy" way for the user of this class. Contrast this with "try/finally" and "using" clauses. -- DavidKeithTurner

----

The important aspect of RAII is automatic(!!) finalization. Many examples on this page (CeeSharp's using or Alexandrescu's scope guard) are not RAII because they involve manual resource management at every location where they are used. In that sense they are the opposite of RAII.

''Sorry, I don't see how RAII implies automatic finalization. Objects allocated on the stack in local scope have their destructors called automatically, but objects allocated on the heap require manual finalization. In garbage collected environments destructors may never be called.''

Objects allocated on the heap are controlled (released) by a stack object and therefore require no manual finalization. That's the point of RAII.

----

''Objects allocated on the heap are controlled (released) by a stack object and therefore require no manual finalization. That's the point of RAII.''

Until, that is, you find yourself returning objects (initialized or otherwise) to another function. Then, RAII breaks down. The heap exists because not everything can be expressed on the call stack. --SamuelFalvo

''No it doesn't. Instead of returning a new Foo(), return a shared_ptr<Foo>(new Foo()) from BoostLibraries or TechnicalReportOne. RAII still applies because the shared_ptr will only destroy its resource when the last shared_ptr referring to it goes out of scope.''
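To make that point concrete, here is a small sketch of handing a heap-allocated resource up the call stack while still releasing it automatically. Foo and make_foo are placeholder names for this example; std::shared_ptr is the standardized form of the Boost/TR1 smart pointer mentioned above:

  #include <memory>

  struct Foo {
      // imagine Foo acquires some resource in its constructor
      // and releases it in its destructor
      ~Foo() { /* release the resource */ }
  };

  std::shared_ptr<Foo> make_foo()
  {
      return std::shared_ptr<Foo>(new Foo());   // ownership travels with the return value
  }

  void caller()
  {
      std::shared_ptr<Foo> f = make_foo();
      std::shared_ptr<Foo> g = f;               // reference count is now 2
  }                                             // last shared_ptr dies here, so ~Foo() runs exactly once

The handles themselves still have stack (or member) lifetime; only the pointee lives on the heap, which is why the RAII guarantee survives the return.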
Or even better, use the ''polymorphic object'' idiom, which makes an object containing pointers look like a simple object that can be copied, returned, passed as an argument, etc. This doesn't cost much because the object you are copying is just a reference count and a pointer. Unfortunately, with currently available C++ language features it is a bit of a pain to implement such a type, but I understand that situation should be improved with the 0x standard. -- SteveHeller

*Wait, isn't this what shared_ptr is? How is it different if not?

The difference is that polymorphic objects have value semantics, whereas shared_ptr has pointer semantics. -- SteveHeller

*This means nothing to me. Is it possible to rephrase the answer in a more complete way? Using the polymorphic object idiom, it seems you're going to be passing around a "smart handle", a pointer + reference count combination. This is precisely what shared_ptr<> is. Another thing I thought of while waiting for the answer was: what happens if you have a ''single'' object you're referencing, but ''multiple'' shared_ptr<> / polymorphic objects, particularly in different threads, for example? This seems like it'd break the system, as if one thread's reference count drops to zero, the object is disposed, despite live references from other threads. The reference count really does have to be associated with the ''object'', not its reference. The reference itself, however, can be smart enough to automatically adjust it as needed. --SamuelFalvo

*Indeed this "difference" does not exist and is nonsensical -- shared_ptr is used as a value, which is '''convertible''' to a pointer type, namely the type of the pointer it contains. 0x (now C++11) is irrelevant ... it has move semantics, which allows unique_ptr (no reference count; ownership is transferred between owners).

I wasn't aware of the shared_ptr type; I checked http://www.boost.org/boost/shared_ptr.hpp and see that it implements a reference-counted wrapper around the object. A pretty slick workaround to the problem, but one which appears to be very high overhead. Use it sparingly if you're even remotely concerned about performance. --SamuelFalvo

The overhead is lower than you might expect, and a number of optimizations are available if you're concerned about extra indirection; the big cost is the extra heap allocation to carry the reference counts, and even that can be avoided if one is willing to use an intrusive solution (i.e. adding reference counts directly to the object). Your advice, I think, is a little extreme. I'd suggest, instead: "don't hesitate to use it except where performance is a primary concern." You won't often be attempting ''resource acquisition'' within a tight inner loop. Resist PrematureOptimization.

----

Contributors: JimPerry, AlistairCockburn, GrahamHughes, TomAnderson, NathanSharfi, JohnDouglasPorter, DirckBlaskey, KevlinHenney, JohnAllensworth

----

See also: ResourceAcquisitionIsInvocation, InitializationIsResourceAcquisition, ForgetToCloseTheFile, CeePlusPlusIdioms

----

CategoryCpp