Replacing legacy systems can often bring up some interesting situations... Please add your interesting LegacyReplacementStories.

----

Placed here by an AnonymousDonor with permission of the author SteveHayes.

''We were replacing an options system. The original system was written in the US and we were using it in Australia. The US-specific pieces had been tested pretty well, the non-US portions less so. When we came to test the yield curve calculations we kept getting small discrepancies after a certain period. Eventually I had to step through the two programs line by line in parallel debuggers (in different languages on different platforms) to find the difference. It turned out there was a systematic mistreatment of interest rates quoted a particular way (using a 365-day year, as opposed to the standard US 360-day year). We confirmed with the original authors that they had made a mistake. Unfortunately we were overvaluing the existing portfolio, so if we corrected the algorithm the business unit would take a hit on their P&L. They weren't terribly happy about this, so we waited for a better time.''

----

The biggest legacy replacement I ever did was a conversion of an application from ASP/IIS/MSSQL to JSP/WebSphere/Oracle. The app was about 300 ASP pages, some VB code, and a proprietary (read: expensive) workflow management product. The original goal was a precise, bug-for-bug port, with minor improvements and bugfixes to follow once the app was fully converted.

There were a few extra challenges on this project. For political reasons, we could not discuss the project with the other development team. We also did not have access to the VB source code until later in the project. Finally, the requirements documents were old and incomplete (of course), and we did not have ready access to the user base to discuss the app's functionality. These constraints led us to do a "naive" port, looking only at the code and not the requirements.

We decided to do the port in phases, replacing one component at a time. The first phase was to convert the ASP to JSP while still using the VB code. We used an open source COM<->Java bridge called JACOB to interface with the VB .DLL, and wrote a little tool to convert the ASP pages using regexes. It wasn't perfect, but it got about 80% of each page converted, and the rest could be done manually. We also built a small set of classes which aimed to mirror some of the ASP/ADO functions.

At this point we also started writing unit tests. We built a framework around HTTPUnit which hit both the ASP and JSP sites with the same query and diffed the results -- we called these parity tests. Any difference whatsoever in headers, cookies, HTML, etc. was flagged as an error.
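A regex-driven page converter of the sort described above might look something like this sketch in Java. The patterns and JSP replacements are invented for illustration, not the rules the actual tool used:

 // Hypothetical sketch of a regex-driven ASP->JSP rewriter.
 // The patterns and replacements below are invented examples, not the actual rules.
 import java.util.regex.Pattern;

 public class AspToJspSketch {
     // Ordered rewrite rules: ASP pattern -> JSP replacement.
     private static final String[][] RULES = {
         { "<%@\\s*Language=VBScript\\s*%>",  "<%@ page language=\"java\" %>" },
         { "Response\\.Write\\s*\\((.*?)\\)", "out.print($1)" },
         { "Request\\(\"(\\w+)\"\\)",         "request.getParameter(\"$1\")" },
     };

     public static String convert(String aspSource) {
         String jsp = aspSource;
         for (String[] rule : RULES) {
             jsp = Pattern.compile(rule[0], Pattern.CASE_INSENSITIVE)
                          .matcher(jsp)
                          .replaceAll(rule[1]);
         }
         return jsp; // whatever the rules miss still has to be converted by hand
     }
 }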
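Calling into the VB .DLL through JACOB can look roughly like the minimal sketch below. The ProgID "Legacy.OrderManager" and the method "GetOrderStatus" are invented placeholders, not names from the actual app:

 // Minimal sketch of calling a legacy VB COM component through JACOB.
 import com.jacob.activeX.ActiveXComponent;
 import com.jacob.com.Dispatch;
 import com.jacob.com.Variant;

 public class VbBridgeSketch {
     public static String getOrderStatus(String orderId) {
         // Instantiate the COM object by its ProgID (placeholder name).
         ActiveXComponent component = new ActiveXComponent("Legacy.OrderManager");
         try {
             Variant result = Dispatch.call(component, "GetOrderStatus", orderId);
             return result.getString();
         } finally {
             component.safeRelease(); // release the underlying COM object
         }
     }
 }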
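The parity-test idea boils down to issuing the same request against both sites and failing on any difference. A stripped-down sketch using HTTPUnit with JUnit follows; the host names and the sample path are placeholders, and the real tests also compared headers and cookies:

 // Sketch of a parity test: hit the old ASP site and the new JSP site with the same
 // request and fail on any difference. Host names and the sample path are placeholders.
 import com.meterware.httpunit.GetMethodWebRequest;
 import com.meterware.httpunit.WebConversation;
 import com.meterware.httpunit.WebResponse;
 import junit.framework.TestCase;

 public class ParityTestSketch extends TestCase {
     private static final String OLD_BASE = "http://asp-server/app"; // legacy ASP/IIS site
     private static final String NEW_BASE = "http://jsp-server/app"; // new JSP site

     private void assertParity(String pathAndQuery) throws Exception {
         WebConversation wc = new WebConversation();
         WebResponse oldResponse = wc.getResponse(new GetMethodWebRequest(OLD_BASE + pathAndQuery));
         WebResponse newResponse = wc.getResponse(new GetMethodWebRequest(NEW_BASE + pathAndQuery));
         assertEquals("response code", oldResponse.getResponseCode(), newResponse.getResponseCode());
         assertEquals("content type", oldResponse.getContentType(), newResponse.getContentType());
         assertEquals("body", oldResponse.getText(), newResponse.getText());
     }

     public void testOrderListParity() throws Exception {
         assertParity("/orders/list?status=open"); // invented page under test
     }
 }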
After the major paths of the app had been ported, we started the second phase: converting the VB code. We wrote a quick-and-dirty VB converter based on a free JavaCC VB parser. Since most of the code followed just a few basic patterns, it was not difficult to convert most of it this way. We then started modifying the JSP pages to use the new Java code instead of the .DLL, in parallel with the first phase. The parity tests proved invaluable here, as they caught more than their fair share of bugs during this process.

The third phase was perhaps the most difficult: replacing the proprietary workflow product. This required first discovering the functionality of the product, as we had no documentation or support for any of it. We decided to build a parallel implementation which would run alongside the original; once the two were fully running in tandem, we would start removing the original. Again, the unit tests were critical.

The final phases were to port to Oracle and WebSphere. These were the least difficult intellectually, but still took a lot of legwork. There were many atrocities committed in the original code with respect to the schema, the queries, and the database interface code -- for instance, only a third of the tables had foreign key constraints. The WebSphere port was just plain annoying, having to deal with all its vagaries (we had been developing under Tomcat all this time).

At this time, we took the product through an old-fashioned QA cycle while also refining the parity tests. The manual QA was a necessary step, in that it caught many browser bugs and logical errors we could not have caught in the unit tests. We ended up with about 1000 tests total, with maybe 1/4 of the bugs caught by QA and 3/4 caught by the unit tests.

The deployment went smoothly, with only a few hiccups in the data migration scripts (the old developers had thoughtfully ignored the concept of database transactions, so the DB was full of inconsistencies). Most users did not notice a thing had changed, except hopefully that the app was faster and less prone to random crashes.

Lessons learned:

* It is possible to port an app to another platform by looking only at the code and generating the requirements as you go along. However, it is difficult to resolve inconsistencies (and fix bugs) without customer input.
* Our unit tests that compared the output of the two sites were very powerful and helpful. However, they are more difficult to maintain as the app changes, because you must encode the delta between the two apps instead of the expected results of one app.
* We tracked each file's conversion and testing status, and distributed reports and graphs every night to developers. This helped people stay focused and see the progress of their work, which improved morale.
* Automated tests are no substitute for traditional QA. You really need both, and it's better to have QA involved from the beginning.
* When you take over a legacy app, you can only use the excuse "their design sucked" until the release date. Then all the bugs are your fault.

- DeveloperBidingHisTimeUntilTheRecovery

----

See CthreeAndLegacySystems for some of RonJeffries' musings; the ChryslerComprehensiveCompensation system was an incremental attempt to replace legacy payroll systems.

----

CategoryStories