Fun with reflection: Invoking a private method

I was recently working on a project where reflection became one of the proposed solutions. Before I go any further, allow me to preface this by saying that reflection can be costly and can hurt performance. It is not always type safe and may lead to run-time exceptions. Other alternatives should be considered before committing to reflection as a solution.

The scenario was that we needed to access a private method inside a legacy .NET assembly. I'm not going to get into the reasons why. The alternative was to duplicate all of the logic we needed from the legacy code. I mentioned to a co-worker that we could use reflection to invoke the private method. He was skeptical, so I provided him with the following spike solution showing how to do just that. It's a simple but complete working example that I thought I would share with all of you.

    using System;
    using System.Reflection;

    namespace ConsoleApplication1 {
        class Program {
            static void Main(string[] args) {
                HelloWorld hw = new HelloWorld();

                hw.SayWhatsUp();
                hw.Name = "World";

                // NonPublic is what allows the private method to be found;
                // Instance restricts the search to instance (non-static) methods.
                BindingFlags flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;

                MethodInfo hi = hw.GetType().GetMethod("SayHi", flags);

                if (hi != null) {
                    // SayHi takes no parameters, so pass an empty argument array.
                    hi.Invoke(hw, new object[0]);
                } else {
                    Console.WriteLine("Method Not Found.");
                }

                Console.ReadLine();
            }
        }

        public class HelloWorld {
            public string Name { get; set; }

            public void SayWhatsUp() {
                Console.WriteLine("Whats Up");
            }

            private void SayHi() {
                Console.WriteLine("Hi " + Name);
            }
        }
    }
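
One follow-up note: the example above invokes a parameterless method, so the argument array is empty. If the private method you need takes parameters or returns a value, the same approach works; you pass the arguments through Invoke as an object[] and cast the returned object. The snippet below is a quick hypothetical variant that would slot into Main right after the existing example (the Add method is made up purely for illustration, it is not part of the class above):

    // Suppose HelloWorld also had: private int Add(int x, int y) { return x + y; }
    MethodInfo add = hw.GetType().GetMethod("Add",
        BindingFlags.Instance | BindingFlags.NonPublic);

    if (add != null) {
        // Arguments go in as an object[]; the return value comes back as object.
        int sum = (int)add.Invoke(hw, new object[] { 2, 3 });
        Console.WriteLine(sum); // prints 5
    }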

When to trust that gut feeling...

This week reinforced some common-sense advice that I went against on Monday. Monday morning I wasn't feeling well but figured I could make it through the day. When I got to work, I was informed of a bug in our production code that I had introduced a few months ago. Though the problem had existed for a few months, it was considered critical and needed to be fixed immediately. Despite not feeling up to par, I began investigating the issue. As the day progressed, I began to feel worse, yet I continued to work at the problem. By mid-day, I was nearly finished with a solution one of my team members had suggested, despite having a gut feeling it wasn't quite right. I couldn't explain why it wasn't right or prove it wasn't the perfect solution, so I went with it. I rushed through what I felt was a complete solution and did some quick testing. Everything seemed to work, and I needed to go home. I felt like I was beginning to develop a fever.

I quickly deployed the changes to our TEST environment and updated my team on what I had done. I then went home and went to bed. I basically slept through the night and called in sick on Tuesday. My technical lead confirmed that QA had signed off on the hot fix I had prepared and formalized the paperwork for deployment. It went out early Tuesday afternoon and seemed to have resolved the existing issue.

Fast forward a few days to today...Friday. Whatever bug I had earlier in the week has long since passed. Around 2:45pm today, I received an email describing an exception being thrown by the application I had worked on. Again, it was deemed critical for billing and payroll purposes. Needless to say, it needed to be fixed immediately. Investigating the bug revealed that the hot fix pushed out on Tuesday had caused this new issue, which now requires a hot fix of its own. I tried my best to resolve the issue by the end of the day, but each fix I added introduced a new error. I certainly wasn't going to push out a hot fix at 5:00pm on a Friday that was known to be flawed and hadn't been tested. The fix will have to wait until Monday.

I think there are a few important lessons to be learned here:
1. Don't be a hero. If you aren't feeling well, just call in sick.
2. Don't work on critical code unless you are thinking clearly.
3. Don't rush critical fixes no matter how trivial they may seem.
4. All code should be tested. Changes resolving a critical issue MUST be tested thoroughly! At the very least, test the fix itself and perform some basic regression testing. A full regression test should be performed if possible.
5. Trust your gut! If you have a bad feeling about something, there is likely a reason for it.

Important Lessons!

Tonight I experienced the fear that many of my friends and family have felt when they thought they had lost everything on their PC. I recently purchased a copy of Windows 7 and was eager to install it. My existing setup was a tri-boot (Windows 7 RC, Vista, & XP Pro). For some reason, XP Pro stopped letting me boot into it after about a month of successfully running all three operating systems on the same drive. My machine also has an internal drive partition that I used strictly for data storage across all the configurations, and a separate partition used for a common "My Documents" folder and some other miscellaneous items. With Windows 7 in hand, I diligently went through each OS and moved all of the data stored in the OS partitions into my data storage partition for safekeeping. Once I was sure I had everything copied over, I decided I would wipe the existing three operating systems and start fresh with a single copy of Windows 7 Ultimate edition.

I popped the install disc in and booted from it. Upon starting the install, I chose to delete my "XP" partition, my "Vista" partition, and my "Win7RC" partition. I completed the install process on the new unallocated space and booted back into Windows. During bootup, though, something strange caught my eye. The OS selection screen briefly appeared, asking whether I wanted to boot into Windows 7 or Windows 7, indicating that two operating systems were still installed. "Impossible!", I thought. I logged into the fresh install of Windows 7 and immediately went to "My Computer" to access my data storage drive. Much to my surprise, it wasn't there! In its place, however, was my previous Win7RC partition staring right back at me. After the initial panic and wave of nausea passed from thinking I had just deleted over 400 GB of personal data (of which only half was properly backed up to an external drive), I started to think it through.

When a file is deleted in Windows, the data itself is typically not removed from the disk. Instead, the file system marks the space that data occupied as available for allocation. I figured that deleting a partition would likely follow the same rules, since I hadn't reformatted the disk. The only question was how to access the deleted partition...

Using my wife's laptop (since I was afraid that doing anything on my machine would increase the chance my precious files would be overwritten), I began to Google for free utilities that could retrieve the deleted files. As I was doing that, I figured it would be worth a shot to pull the plug to force a shutdown without saving anything and reboot back into my pre-existing Windows 7 RC install.

The chance to boot back into the old OS proved to be the perfect solution. After logging in, I opened "My Computer". Though the XP and Vista partitions were no longer showing, the data storage drive appeared to be fully intact! I can't even begin to explain the relief I felt knowing that all my photos, music, movies, financial records, and source code were still there. As I write this, all my data is being copied to an external hard drive, which will be disconnected to ensure I don't make any more foolish mistakes. Tomorrow, I will finish setting up and customizing the new OS the way I want it, and then I will decide on a more appropriate and consistent backup strategy to prevent this type of scenario from occurring again.

So, what have I learned from all this? First and foremost, that I have been a hypocrite in telling friends and family to implement some sort of regular data backup plan. Second, that I need to decide on the best backup strategy for me and my family...and third, that I need to actually USE the backup strategy that I thought through. Backing up once every 6 - 12 months just doesn't cut it. Tonight I consider myself very fortunate that all was not lost. I hope this serves as a lesson to all of you as well. Though I got lucky this time, I don't think I would be the next time. I do know this, though: I will do everything I can to ensure that I am never in that position again. It's terrifying to think about losing approximately nine years' worth of digital data!

PostSharp

Last week I ran into a sporadic issue with a WCF service timing out. I was unsure whether the timeout was being caused by the network connection, the business processing, the data access layer, or the data itself. That led me to the task of profiling my code base. Unfortunately, I have not yet been able to reproduce or find the flaw, but I have learned a lot about our current code base with the help of PostSharp. In case you haven't heard of it, it is an Aspect Oriented Programming (AOP) framework that can be hooked quite easily into virtually any .NET application.

I've only just begun to learn about PostSharp myself, but I found it very easy to hook into the code so that it can report the duration of every method call in the application. I have it set up so it will conditionally log information, including the method name, parameter data, and the actual duration, whenever the duration exceeds a configurable threshold. I know this only scratches the surface of what the tool offers, but I intend to continue exploring it. From what I can tell, it seems like it will be a priceless tool...one that I would value as much as Lutz Roeder's Reflector!
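
For anyone curious, the aspect I'm using looks roughly like the minimal sketch below. This is an approximation assuming a PostSharp 2.x-style OnMethodBoundaryAspect; the exact argument types differ between PostSharp versions, and the ThresholdMs property name is my own rather than anything built into the framework.

    using System;
    using System.Diagnostics;
    using PostSharp.Aspects;

    // A minimal timing aspect: starts a stopwatch when a method is entered and
    // logs the method name and elapsed time if it exceeds a threshold.
    [Serializable]
    public class TimingAspect : OnMethodBoundaryAspect {
        // Threshold in milliseconds, set where the attribute is applied,
        // e.g. [TimingAspect(ThresholdMs = 50)]
        public long ThresholdMs { get; set; }

        public override void OnEntry(MethodExecutionArgs args) {
            // Stash the stopwatch on the execution context so OnExit can read it.
            args.MethodExecutionTag = Stopwatch.StartNew();
        }

        public override void OnExit(MethodExecutionArgs args) {
            var stopwatch = (Stopwatch)args.MethodExecutionTag;
            stopwatch.Stop();

            if (stopwatch.ElapsedMilliseconds >= ThresholdMs) {
                Console.WriteLine("{0} took {1} ms",
                    args.Method.Name, stopwatch.ElapsedMilliseconds);
            }
        }
    }

In real use the attribute doesn't have to be applied method by method; PostSharp's multicasting can apply it across a namespace or an entire assembly, which is what makes it practical for profiling a whole code base.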

Requirements from the Developer's perspective

Requirements. It's not a very ambiguous term, yet the meaning of requirements in software development seems to be one of the hardest concepts to grasp. We all know that they are needed for a successful project...yet it seems they are often one of the primary reasons for project failure. Whether the requirements simply don't exist, are incomplete, are too vague, or constantly change, they have the ability to make or break a project.

One of the issues with requirements is that they often contradict themselves. The system needs to perform x, y, & z. It needs to perform at this rate, be dynamic enough to handle any change, be scalable, be database independent, persistence ignorant, etc. Simply put, requirements can't be cast in stone. They are all about compromising one need or feature for another. Perhaps performance is increased by 1% by taking on the risk that 1 out of every 100,000 reads against the database might return stale data. Who knows? The fact of the matter is that requirements are similar to a person's growth. Early on, there are lots of changes. They then stay stagnant for a while, only to be followed by a few more spurts of rapid and drastic change. Eventually, the requirements satisfy most needs and are generally accepted, becoming stagnant once again. Sooner or later, they will become obsolete and die off, only to be replaced by a new set of requirements defining a different scope.

So, whose job is it to obtain the requirements? Is it the user, the analyst, architect, project manager, or developer? I think it's everybody's responsibility to contribute. Each person mentioned has some stake in the matter. Granted, a single person may fill more than one role, but that is not the point here. The point is that each role has different needs and wants. Their perspective on the requirements may differ vastly from the next. There needs to be a single person that is the owner of the requirements. All others are just contributors.

As a developer on a project that has been under constant flux, I have felt the pain of changing requirements all too well. I feel that, as a developer, I failed in my role as a contributor. Though I didn't own the requirements, I did recognize some potential pitfalls early on and mentioned them. I brought them up in a meeting, and consensus ruled that they would not be issues. I should have stood up and spoken with more conviction at that point. The pitfalls I suspected were critical to the overall design. These issues have been debated and discussed over numerous meetings since then...each time with a reactive approach instead of a proactive one.

The moral is that, as a developer, I should have recognized the impact the issues I suspected would have, and my level of conviction should have followed suit. Instead, I just went along with the consensus. When the issues truly did arise, the pain of trying to redesign a core portion of the application increased exponentially as our deadline approached. Had I done a more thorough design up front, I think I would have noticed more of the inconsistencies in the requirements. The questions I would have uncovered would have clarified the requirements and exposed the risks they presented.

Last-minute requirement changes...

Though I had intended to keep this blog an informative place on the web, I need to rant. I understand that requirements change. I'm a big believer in iterative development cycles for this very reason. There are, however, few things that irritate me more in my professional life than a breaking requirements change at 4:00pm on the Friday a code freeze is supposed to take effect. As valid as the case may be, it's a sure way to ruin a person's (or a team's) weekend. It is demoralizing to go into a code freeze with a working application and no reported bugs, only to have the requirements change an hour and a half before you plan to leave work...especially after putting in extra time all week to accommodate the breaking requirements changes from the previous Friday. This change in requirements resulted in an extra four hours of work today, a broken service, and the need to work through the weekend in hopes of having a fully working and tested service by Monday.

I suppose that I need to look at the bright side of things though. I am fortunate enough to be employed at a stable company. Also, I am extremely grateful that last minute breaking requirement changes are rare for my team.

System.MissingMethodException while Unit Testing in VS 2008

I've run across a strange situation. I recently modified a WCF service and its service contract to add new functionality. All I have done so far is stub in a new method, with the appropriate updates to the service, the service contract, & the proxy (which is hand-coded). When I try to run my existing unit tests, I now get the following error:

Class Initialization method <namespace>.<classname>_UnitTests.MyClassInitialize threw exception. System.MissingMethodException: System.MissingMethodException: Method not found: '<namespace>.<returntype> <namespace>.<contractinterface>.<methodname>()'.
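For context, the change itself was nothing exotic; it amounted to stubbing a new operation into the contract and the hand-coded proxy, roughly along the lines of the sketch below (the service and method names here are hypothetical, not the real ones):

    using System.ServiceModel;

    // Hypothetical contract for illustration only; the real names differ.
    [ServiceContract]
    public interface IBillingService {
        [OperationContract]
        string GetExistingData(int id);

        // The newly stubbed-in operation. The service implementation and the
        // hand-coded proxy were updated to match this contract.
        [OperationContract]
        string GetNewData(int id);
    }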

Oddly enough, the method does exist, and if I run the unit test in debug mode, the exception does not occur. If I run all the tests in debug mode, they all pass without a problem (the expected result)...but if I just run the tests without debugging, they fail with the MissingMethodException. If I run the tests in debug mode, get them all to pass, and then run them again without restarting Visual Studio and without debugging, they pass. This is obviously some kind of caching issue.

As it turns out, the issue appeared to be related to the *.testrunconfig file. Initially, code coverage was enabled. I disabled code coverage and then deleted the testrunconfig file. I then cleaned the solution, closed and re-opened Visual Studio, & rebuilt the solution. The unit tests then began passing.