My Favorite SSMS Tools Pack Snippet

Sometimes we all need a good laugh. One of the tools I use is the SSMS Tools Pack plug-in for Microsoft SQL Server Management Studio (SSMS). Today, I learned that there are shortcut snippets for some of the more common SQL commands. As I was looking through the snippet shortcuts, I had to laugh when I discovered the last default snippet entry. Check it out and give props to the authors of this tool for including a shortcut that expresses how we all sometimes feel!


SSMS Tools Pack is a free plug-in, and it offers many other useful features beyond snippets. I recommend it as a nice enhancement to SSMS. It really is a great plug-in and it deserves proper recognition as well. Thank you for creating such a great tool and maintaining a sense of humor while doing so.

Fun with reflection: Invoking a private method

I was recently working on a project where the use of reflection became one of the proposed solutions.  Before I go any further, allow me to preface this by saying that reflection can be costly and have negative performance impacts.  It is not always type safe and may lead to run-time exceptions.  Other alternatives should be considered before committing to using reflection as a solution. 

The scenario: we needed to access a private method inside a legacy .NET assembly. I'm not going to get into the reasons why. The alternative was to duplicate all of the logic we needed from the legacy code. I mentioned to a co-worker that we could use reflection to invoke a private method. He was skeptical, so I provided him with the following spike solution showing how to do just that. It's a simple but complete working example that I thought I would share with all of you.

  using System;
  using System.Collections.Generic;
  using System.Reflection;

  namespace ConsoleApplication1 {
      class Program {
          static void Main(string[] args) {
              HelloWorld hw = new HelloWorld();

              hw.SayWhatsUp();
              hw.name = "World";

              BindingFlags eFlags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
              MethodInfo hi = null;

              hi = hw.GetType().GetMethod("SayHi", eFlags);

              object[] a = new List<object>().ToArray();
              if (hi != null) {
                  hi.Invoke(hw, a);
              } else {
                  Console.WriteLine("Method Not Found.");
              }

              Console.ReadLine();
          }
      }

      public class HelloWorld {
          public string name { get; set; }

          public void SayWhatsUp() {
              Console.WriteLine("Whats Up");
          }

          private void SayHi() {
              Console.WriteLine("Hi " + name);
          }
      }
  }
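As a follow-up thought, the lookup-and-invoke dance can be wrapped in a small extension method so the calling code stays readable. This helper is my own addition, not part of the spike above; the name InvokePrivate and the sample Greeter class are purely illustrative.

```csharp
using System;
using System.Reflection;

public static class ReflectionExtensions {
    // Invokes a non-public (or public) instance method by name and returns
    // its result (null for void methods). Throws if the method is not found.
    public static object InvokePrivate(this object target, string methodName, params object[] args) {
        MethodInfo method = target.GetType().GetMethod(
            methodName,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        if (method == null)
            throw new MissingMethodException(target.GetType().FullName, methodName);
        return method.Invoke(target, args);
    }
}

public class Greeter {
    private string Greet(string name) {
        return "Hi " + name;
    }
}

public class Demo {
    public static void Main() {
        var greeter = new Greeter();
        // Prints "Hi World" by invoking the private Greet method.
        Console.WriteLine(greeter.InvokePrivate("Greet", "World"));
    }
}
```

The same caveats apply: this trades compile-time safety for a run-time exception if the method is renamed or removed.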

When to trust that gut feeling...

This week reinforced some common knowledge that I chose to go against. Monday morning I wasn't feeling well but figured I could make it through the day. When I got to work, I was informed of a bug in our production code that I had introduced a few months ago. Though the problem had existed for a few months, it was considered critical and needed to be fixed immediately. Despite not feeling up to par, I began investigating the issue. As the day progressed, I began to feel worse, yet I continued to work at the problem. By mid-day, I was nearly finished with a solution one of my team members had suggested, despite a gut feeling that it wasn't quite right. I couldn't explain how it was wrong or prove it wasn't the perfect solution, so I went with it. I rushed through what I felt was a complete solution and did some quick testing. Everything seemed to work, and I needed to go home. I felt like I was developing a fever.

I quickly deployed the changes to our TEST environment and updated my team on what I had done. I then went home and went to bed. I basically slept through the night and called in sick on Tuesday. My technical lead confirmed that QA had signed off on the hot fix I prepared, and he formalized the paperwork for deployment. It went out early Tuesday afternoon and seemed to resolve the existing issue.

Fast forward a few days to today...Friday. Whatever bug I had earlier in the week has long since passed. Around 2:45pm today, I received an email describing an exception thrown by the application I had worked on. Again, it was deemed critical for billing and payroll purposes and needed to be fixed immediately. Investigation revealed that the hot fix pushed out on Tuesday had caused this new issue, which now required a hot fix of its own. I tried my best to resolve the issue by the end of the day, but each fix I applied introduced a new error. I certainly wasn't going to push out a hot fix at 5:00pm on a Friday that was known to be flawed and hadn't been tested. The fix will have to wait until Monday.

I think there are a few important lessons to be learned here:
1. Don't be a hero. If you aren't feeling well, just call in sick.
2. Don't work on critical code unless you are thinking clearly.
3. Don't rush critical fixes no matter how trivial they may seem.
4. All code should be tested. Changes resolving a critical issue MUST be tested thoroughly! At the very least, test the fix itself and perform some basic regression testing. A full regression test should be performed if possible.
5. Trust your gut! If you have a bad feeling about something, there is likely a reason for it.

Important Lessons!

Tonight I experienced the fear that many of my friends and family have felt when they thought they lost everything on their PC. I recently purchased a copy of Windows 7 and was eager to install it. My existing setup was a tri-boot (Windows 7 RC, Vista, & XP Pro). For some reason, XP Pro wouldn't let me boot into it after about a month of successfully running all 3 operating systems on the same drive. My machine also has an internal drive partition that I used strictly for data storage across all the configurations, plus a separate partition used for a common "My Documents" folder and some other miscellaneous items. With Windows 7 in hand, I diligently went through each OS and moved all of the data stored in the OS partitions into my data storage partition for safekeeping. Once I was sure I had everything copied over, I decided I would wipe the existing 3 operating systems and start fresh with a single copy of Windows 7 Ultimate edition.

I popped the install disc in and booted to the CD. Upon starting the install, I chose to delete my "XP" partition, my "Vista" partition, and my "Win7RC" partition. I completed the install process on the new unallocated space and booted back into Windows. During bootup, though, something strange caught my eye. The OS selection briefly appeared, asking if I wanted to boot into Windows 7 or Windows 7, indicating that there were still 2 operating systems installed. "Impossible!", I thought. I logged into the fresh install of Windows 7 and immediately went to "My Computer" to access my data storage drive. Much to my surprise, it wasn't there! In its place, however, was my previous Win7RC partition staring right back at me. After the initial panic and wave of nausea passed from thinking I had just deleted over 400 GB worth of personal data (of which only half was properly backed up to an external drive), I started to think it through.

When a file gets deleted in Windows, the data itself is not typically removed. Instead, Windows marks the space that data occupied as available for allocation. I began thinking that deleting a partition would likely follow the same rules, since I hadn't reformatted the disk. The only question was how to access the deleted partition...

Using my wife's laptop (since I was afraid doing anything on my machine would increase the chance my precious files would be overwritten), I began to Google for free utilities that could retrieve the deleted files. As I was doing that, I figured it would be worth a shot to pull the plug, forcing a shutdown without saving anything, and reboot back into my pre-existing Windows 7 RC install.

The chance to boot back into the old OS proved to be the perfect solution. After logging in, I opened "My Computer". Though the XP and Vista partitions were no longer showing, the data storage drive appeared to be fully intact!!! I can't even begin to explain the relief I felt knowing that all my photos, music, movies, financial records, and source code were still there. As I write this, all my data is being copied to an external hard drive which will be disconnected to ensure I don't make any more foolish mistakes. Tomorrow, I will finish setting up and customizing the new OS as I want it, then I will decide on a more appropriate and consistent back up strategy to prevent this type of scenario from occurring again.

So, what have I learned from all this? First and foremost, that I have been a hypocrite in telling friends and family to implement some sort of regular data backup plan. Second, that I need to decide on the best backup strategy for me and my family...and third, that I need to actually USE the backup strategy I thought through. Backing up once every 6 - 12 months just doesn't cut it. Tonight I consider myself very fortunate that all was not lost. I hope this may be a lesson learned for all of you as well. Though I got lucky this time, I don't think I would be the next time. I do know this, though: I will do everything I can to ensure that I am never in that position again. It's terrifying to think about losing approximately 9 years worth of digital data!


Last week I ran into a sporadic issue with a WCF service timing out. I was unsure whether the timeout was being caused by the network connection, the business processing, or the data access layer, or whether the issue was data related. That led me to the task of profiling my code base. Unfortunately, I have not yet been able to reproduce or find the flaw, but I have learned a lot about our current code base with the help of PostSharp. In case you haven't heard of it, it is an Aspect Oriented Programming (AOP) framework that can be hooked quite easily into virtually any .NET application.

I've only just begun to learn about PostSharp myself, but I found it very easy to hook into the code so that it can provide the duration of every method call in the application. I have it set up so it will conditionally log information including the method name, parameter data, and the actual duration if it exceeds a configurable threshold I set. I know that this is just a very minute detail of what this tool offers, but I intend to continue exploring it. From what I can tell, it seems like it will be a priceless tool that I would value as much as Lutz Roeder's Reflector!
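For anybody curious what that hook looks like, here is a rough sketch of the kind of aspect I'm describing. It follows PostSharp's OnMethodBoundaryAspect pattern; the class name, threshold value, and logging destination are my own placeholders, and the exact namespaces and argument types vary between PostSharp versions, so treat this as an illustration rather than my actual production setup.

```csharp
using System;
using System.Diagnostics;
using PostSharp.Aspects;

// When applied to a method, this aspect times each call and writes a line
// only when the duration crosses a configurable threshold.
[Serializable]
public class TimingAspect : OnMethodBoundaryAspect {
    public int ThresholdMs { get; set; }

    public override void OnEntry(MethodExecutionArgs args) {
        // Stash a stopwatch on the per-call tag so OnExit can read it back.
        args.MethodExecutionTag = Stopwatch.StartNew();
    }

    public override void OnExit(MethodExecutionArgs args) {
        var watch = (Stopwatch)args.MethodExecutionTag;
        watch.Stop();
        if (watch.ElapsedMilliseconds >= ThresholdMs) {
            Console.WriteLine("{0} took {1} ms",
                args.Method.Name, watch.ElapsedMilliseconds);
        }
    }
}
```

Decorating a method with something like [TimingAspect(ThresholdMs = 500)] is then all it takes; PostSharp weaves the timing code in at build time, so the method body itself stays untouched.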

Requirements from the Developer's perspective

Requirements. It's not a very ambiguous term, yet the meaning behind requirements in software development seems to be one of the hardest concepts to grasp. We all know that they are needed for a successful project...yet it seems that they are often one of the primary reasons for project failure. Whether the requirements simply don't exist, are incomplete, are too vague, or constantly change, they have the ability to make or break a project.

One of the issues with requirements is that they often contradict themselves. The system needs to perform x, y, & z. It needs to perform at this rate, be dynamic enough to handle any change, be scalable, be database independent, be persistence ignorant, etc. Simply put, requirements can't be cast in stone. They are all about compromising one need or feature for another. Perhaps performance is increased by 1% by taking on the risk that 1 out of every 100,000 reads against the database might return stale data. Who knows? The fact of the matter is that requirements are similar to a person's growth. Early on, there are lots of changes. They grow stagnant for a while, only to be followed by a few more spurts of rapid and drastic change. Eventually, the requirements satisfy most needs and are generally accepted, thus becoming stagnant once again. Sooner or later, they will become obsolete and die off, only to be replaced by a new set of requirements defining a different scope.

So, whose job is it to obtain the requirements? Is it the user, the analyst, architect, project manager, or developer? I think it's everybody's responsibility to contribute. Each person mentioned has some stake in the matter. Granted, a single person may fill more than one role, but that is not the point here. The point is that each role has different needs and wants. Their perspective on the requirements may differ vastly from the next. There needs to be a single person that is the owner of the requirements. All others are just contributors.

As a developer on a project that has been under constant flux, I have felt the pain of changing requirements all too well. I feel that, as a developer, I failed in my role as a contributor. Though I didn't own the requirements, I did recognize some potential pitfalls early on and had mentioned them. I brought them up in a meeting, and consensus ruled that they would not be issues. I should have stood up and spoken with more conviction at that point. The pitfalls I suspected were critical to the overall design. These issues have been debated and discussed over numerous meetings since then...each time with a reactive approach instead of a proactive one.

The moral is that, as a developer, I should have recognized the impact the issues I suspected would have, and my level of conviction should have followed suit. Instead, I just went along with the consensus. As the issues truly did arise, my pains in trying to redesign a core portion of the application increased exponentially as our deadline approached. Had I done a more thorough design up front, I think I would have noticed more of the inconsistencies in the requirements. The questions I would have uncovered would have clarified the requirements and exposed the risks they presented.

Last minute requirement changes....

Though I had intended to keep this blog an informative location on the web, I need to rant. I understand that requirements change. I'm a big believer in iterative development cycles for this very reason. There are, however, few things that irritate me more in my professional life than a breaking requirements change at 4:00pm on the Friday a code freeze is supposed to take effect. As valid as the case may be, it's a sure way to ruin a person's or a team's weekend. It is demoralizing to go into the code freeze with a working application and no reported bugs, only to have the requirements change 1.5 hours before you plan to leave work...especially after putting in extra time all week to accommodate breaking requirements changes from the previous Friday. The change in requirements resulted in an extra 4 hours of work today, a broken service, and the need to work through the weekend in hopes of having a fully working and tested service by Monday.

I suppose that I need to look at the bright side of things though. I am fortunate enough to be employed at a stable company. Also, I am extremely grateful that last minute breaking requirement changes are rare for my team.

System.MissingMethodException while UnitTesting in VS 2008

I've run across a strange situation where I have recently modified a WCF service and service contract adding new functionality. All I have done so far is stub in a new method with the appropriate updates to the service, the service contract, & the proxy (which is hand coded). When I try to run my existing unit tests, I am now getting the following error:

Class Initialization method <namespace>.<classname>_UnitTests.MyClassInitialize threw exception. System.MissingMethodException: System.MissingMethodException: Method not found: '<namespace>.<returntype> <namespace>.<contractinterface>.<methodname>()'.

Ironically, the method does exist, and if I run the unit tests in debug mode, the exception does not occur. If I run all the tests in debug mode, they all pass without a problem (the expected result)...but if I run the tests without being in debug mode, they fail with the MissingMethodException. If I run the tests in debug mode, get them all to pass, then run them again outside debug mode without restarting Visual Studio, they pass. This is obviously some kind of caching issue.

As it turns out, the issue appeared to be related to the *.testrunconfig file. Initially, code coverage was enabled. I disabled code coverage, then deleted the testrunconfig file. I then cleaned the solution, closed Visual Studio, re-opened it, & rebuilt the solution. The unit tests then began passing.

Team System Web Access: Time Entry Host Custom Control

Previously, Michael Ruminer created a Silverlight TFS TimeEntry control for use within Team Explorer. His project, hosted in the MSDN Code Gallery, creates everything needed for a rudimentary time tracking system within TFS. One of the remaining open issues was how to utilize the same Silverlight TimeEntry control through Team System Web Access (TSWA).

I have created a simple server side web control that will host Michael's TimeEntry control. When I first tried to tackle this issue, I had to learn about the Work Item Templates used by TSWA and what controls already existed. I found the existing controls to be quite limited. As a result, I opted to create my own custom web control. To my surprise, the code changes ended up being extremely simple. First, let me explain what is required to create a custom web control that can be used by a work item template in TSWA.

For TSWA to render a custom control, the control must:
1. Inherit from System.Web.UI.WebControls.WebControl
2. Have a default constructor
3. Implement Microsoft.TeamFoundation.WorkItemTracking.Controls.IWorkItemControl *
4. Implement Microsoft.TeamFoundation.WebAccess.WorkItemTracking.Controls.IWorkItemWebControl *

* Note: These two interfaces can both be found in: "C:\Program Files\Microsoft Visual Studio 2008 Team System Web Access\Web\bin\". My initial struggles with getting the control to render stemmed from the fact that I failed to implement BOTH interfaces. Creating a custom Windows control only requires that the "IWorkItemControl" interface be implemented.

I wanted to keep the control as simple as possible, so all it does is render a simple HTML page with an IFrame. The IFrame source points to the Silverlight control. At first, I tried to render an IFrame on its own. The control seemed to load and function properly; however, I was getting a JavaScript error indicating "this.m_buttonsTable.offsetLeft" was null. I found this somewhat confusing since I hadn't written any JavaScript, and the error occurred regardless of what the IFrame pointed to. I then realized the script was in a WebResource.axd file and therefore must have been generated by the .NET framework. I resolved the JavaScript issue by wrapping the IFrame in a fully valid HTML page. The control continued to work, now without throwing any JavaScript errors.

Once the control has been built, you have to deploy it. Create a "wicc" file with the same name as the library. Copy both the wicc file and the dll to "C:\Program Files\Microsoft Visual Studio 2008 Team System Web Access\Web\App_Data\CustomControls\" on the server hosting TSWA (assuming you used the default install for TSWA). TSWA looks in the CustomControls folder by default when trying to resolve the assemblies. In my example, my library name was "TimeEntryHost", so my wicc file was titled "TimeEntryHost.wicc".

Its contents are:

  <?xml version="1.0"?>
  <CustomControl xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <Assembly>TimeEntryHost.dll</Assembly>
    <FullClassName>TFS.TimeEntryHost</FullClassName>
  </CustomControl>

The template itself renders the control in a tab. Here is the snippet showing how that should look:
  <Tab Label="Time Entry">
    <Control Type="TimeEntryHost" Label="" LabelPosition="Top" Dock="Left" />
  </Tab>

This control could be used to host any web page as well. I'm actually surprised that it wasn't one of the standard pre-built controls, given its simplicity. There is, however, a caveat to the whole thing IF you are rendering DIFFERENT custom controls in Team Explorer (Windows) and TSWA (web). Simply put, the work item template has a Layout tag, and that Layout tag needs to be duplicated, each copy carrying its own Target attribute: one targeting the Windows client and one targeting TSWA. I found this useful tidbit on Shai Raiten's blog.

The final issue I ran into was importing the template into TFS so it rendered in both Team Explorer and TSWA. If you use the "Process Editor" tool in Visual Studio, you will lose one of the layouts you created, causing the control to only render in one of the environments. To get around this issue, import the template from the command line. I don't recall where I found that information, otherwise I would give proper credit. The import command will look something like the one below:
witimport /f "C:\\Task.xml" /t http:// /p ""

Below is the source code for the control itself. I'm sure it is not the most efficient and there is definitely room to improve, but it is a simple working example of how you can host a Silverlight control in a Work Item Template.

  using System;
  using System.ComponentModel;
  using System.Web.UI;
  using System.Web.UI.WebControls;
  using Microsoft.TeamFoundation.WorkItemTracking.Controls;
  using Microsoft.TeamFoundation.WebAccess.WorkItemTracking.Controls;
  using Microsoft.TeamFoundation.WorkItemTracking.Client;

  namespace TfsImplementation {
      [Serializable]
      public class TimeEntryHost : WebControl, IWorkItemControl, IWorkItemWebControl {
          public TimeEntryHost() {
              SetDefaults();
          }

          private void SetDefaults() {
              this.Width = 843;
              this.Height = 493;
          }

          protected override void RenderContents(HtmlTextWriter output) {
              string iframeHtml = "<iframe src=\"{0}\" style=\"WIDTH: {1}; HEIGHT: {2};\"></iframe>";
              string urlPath = string.Format("http://TimeControl_TFS.Web/TimeControl_TFSTestPage.aspx?name={0}&wi={1}",
                  Environment.UserName,
                  _workItem.Id);

              output.Write("<html>");
              output.Write("<head>");
              output.Write("<title></title>");
              output.Write("</head>");
              output.Write("<body>");
              output.Write("<form id=\"form1\">");
              output.Write("<div>");
              output.Write(string.Format(iframeHtml, urlPath, Width, Height));
              output.Write("</div>");
              output.Write("</form>");
              output.Write("</body>");
              output.Write("</html>");
          }

          #region IWorkItemControl Members

          public event EventHandler AfterUpdateDatasource;
          public event EventHandler BeforeUpdateDatasource;

          public void Clear() {
              RenderContents(new HtmlTextWriter(new System.IO.StringWriter()));
          }

          public void FlushToDatasource() {
              RenderContents(new HtmlTextWriter(new System.IO.StringWriter()));
          }

          public void InvalidateDatasource() {
              RenderContents(new HtmlTextWriter(new System.IO.StringWriter()));
          }

          public void SetSite(IServiceProvider serviceProvider) {
              //throw new NotImplementedException();
          }

          public System.Collections.Specialized.StringDictionary Properties { get; set; }
          public bool ReadOnly { get; set; }
          public string WorkItemFieldName { get; set; }

          private WorkItem _workItem;
          public object WorkItemDatasource {
              get { return _workItem; }
              set { _workItem = value as WorkItem; }
          }

          #endregion

          #region IWorkItemWebControl Members

          public string ClientEditorObjectId { get; set; }
          public string ClientObjectId { get; private set; }
          public string ControlId { get; set; }
          public string Label { get; set; }
          public string ThemeUrl { get; set; }

          public string GetClientUpdateScript() {
              return string.Empty;
          }

          public void InitializeControl() {
              //throw new NotImplementedException();
          }

          #endregion
      }
  }

TFS / TSWA Custom Controls - "Unable to create workitem control 'ControlName'." - Part 2

As it turns out, I had missed one of the required interfaces. The web control must inherit from WebControl and implement both IWorkItemControl & IWorkItemWebControl. My initial attempts were missing the IWorkItemWebControl implementation. Once I had that resolved, my control began rendering. I still have some bugs to work out, but those should be relatively easy. Once I have the control complete, I will post it with a link to the actual Silverlight control I am hosting!

TFS / TSWA Custom Controls - "Unable to create workitem control 'ControlName'."

I have been trying to create a fairly simple custom control to be used by the TFS Web Access application. At my day job, we use TFS extensively and are working towards a solution that allows us to use it exclusively too. One of the major drawbacks we are currently facing is the lack of a built in time entry control. Another colleague of mine has created a time entry control in Silverlight to facilitate use in the two environments. In Team Explorer, a simple browser control was added to a tab to house the Time Entry component. It works well though it still has some minor quirks. I have been assigned to port the same concept (preferably to use the same Silverlight control) to the web side.

The issue I have run into is that the Work Item Template does not have any controls to support Silverlight or any other web site redirection. My thought was to use an IFrame to embed the Silverlight right in but I am not aware of any way to use the existing controls to support it. My next thought was to create a very simple server side control that will just render an IFrame directed to the appropriate URL. The new control is working in test environments (dummy pages), however, when I try to integrate it into the Work Item Template, it fails every time.

I have received numerous errors simply indicating that the control cannot be created. Using FUSLOGVW.exe, I began to dig into the issue. I've verified in some cases that the library, its dependencies, and the .wicc file are all properly located and found. Still, the control can't be created. I've tried modifying the wicc file, changing namespaces, and updating the web.config (restarting IIS after every change). Every attempt still fails, but now I'm getting different errors.

In the wicc file for the assembly name, if I include the ".dll" at the end of the assembly name, it searches for "assembly.dll.dll" and/or "assembly.dll.exe". Obviously it will not find a file by either of those names. If I remove the ".dll" at the end of the assembly name, it searches for just the assembly name with no extension at all....which again, it will never find.

So the million dollar question is what is going on here? I'm sure it is something stupid that I am overlooking. These things usually turn out that way which only raises the frustrations, but I will figure it out and post my resolution when I do.

The Worst Job I've Had...

...was likely the best thing possible for my professional career. Though this may seem strange, let me explain the reasoning. I am currently one of the Senior Web Developers within my organization. This is a far cry from the Bachelors Degree in Management I originally graduated with.

I started as a self-taught programmer with my first job out of college. I began my career with a small medical software company (2 developers, < 10 employees total) writing low-level communication drivers and APIs (using Delphi, C, & SQL). Being such a small company, a lot was demanded of me. I had to design, code, test (with some help), and deploy all of the code I wrote myself to the external customers with whom I worked daily. After 3 years and a greatly expanded set of responsibilities, I was feeling confident. I felt that I knew what I needed to make the next step in my career. With confidence brewing, I moved on to a much larger corporation in the middle of trying to attain Sarbanes-Oxley (SOX) compliance.

Needless to say, it was a drastic environmental change for me. I needed to learn to work as a member of a team rather than being the whole team. I needed to learn new processes and how to formally document my analysis, development plans, and testing plans. The code base was over 100x bigger than anything I had ever worked with and made heavy use of deep object inheritance (most of my prior experience had been more functional in form). My manager was new to the role and lacked the experience to recognize how much I struggled...or just didn't have the know-how to help me adapt. I worked through it for one year before I moved on again.

In preparation for moving on, I recognized that Delphi was a dying language. I wanted to enter the .NET realm and pursue a Masters Degree in Software Engineering. Luck was in my favor, and I actually went back to my former employer to work on a new project in .NET with the opportunity to work from home. I found the lackadaisical environment comforting, having just left a place with significant turbulence. Not to mention, working from home was a great experience, and it provided the flexibility I needed to attend grad school.

Over the next 3 years, I worked towards a Masters in Software Engineering and always found myself reading technical books outside of the classroom. I have taken on the challenge of trying to learn best practices and keep up with new technologies while gaining deeper understanding of existing technologies. I had also discovered how informative podcasts can be (.NET Rocks, Hanselminutes, & ALT.NET to name a few). I recognized that reading the blogs of the industry gurus and my peers provided priceless information, examples, and viewpoints. I've become more of a proponent for open-source and have spent significant time looking at open-source code.

Since finishing my Masters, I have moved on to my current employer (again a large, highly structured organization). Our development team consists of about 50 employees and contractors. Here I have taken a strong interest in continuous integration, automated builds, and utilizing aspects of agile methodologies in my daily development practices. I am still always reading at least one technical book at a time (often 2 - 3) and follow numerous blogs daily. I try to be more aware of the things I do not know. I've come to believe that relentless education is the only way to be successful in the software development industry because of its accelerated rate of growth. Anybody who thinks they know enough to carry them through to retirement had better plan on retiring soon! Technology is simply changing too fast. If we don't pursue it, we will be left behind with our future prospects fading quickly.

So, back to the original statement. The worst job I've had was likely the best thing possible for my professional career. It was quite the humbling experience. I try to remain humble and am always eager to learn something new. I suggest that you, too, implement a relentless pursuit of continued education in the areas you find most interesting. Never consider yourself to be THE expert. Be humble, yet confident. There is always something left to be learned, no matter how good everything seems.

Too Many Stored Procedures

Like many others before me, the code base I am working on has been in existence for many years. It is massive in scope and in actual source code. Behind it all lies a massive database structure with a single common means of accessing all of the data from the application: stored procedures (sprocs). While this may seem like a good idea, there is a point where things get out of control. Our current system has about 2,000 tables and nearly 14,000 sprocs. The only logical grouping of the sprocs is found in the naming conventions used, though this has proven inconsistent as well. To make matters worse, there is far too much business logic in many of the sprocs. Now, I understand that the number of lines of code is not a very reliable metric, but some of the sprocs are in excess of 2,000 lines, with many at least 300 - 500 lines long.

I recently traced through a stored procedure that didn't appear too bad on the surface (it was only 640 lines long). The trace revealed something disturbing. The shortest path through that stored procedure involved calls to 4 additional sprocs. That would be acceptable, except that path only determines that the procedure has nothing to do. The shortest functional path involves calls to over 80 additional sprocs, one of which calls itself recursively. Needless to say, it was a painful experience to trace through to figure out what it is actually doing (oh, and I forgot to mention that our specs and documentation are virtually non-existent).

So this all brings me to my current point. There comes a time when there are too many stored procedures, and any benefit they provide is lost to the loss in maintainability. With the exorbitant number of sprocs, most of my co-workers (myself included) will take a short look to see if one already exists that meets our needs, but typically we just take the easy road and write a new sproc, further propagating the problem. I would like to spearhead an effort to bring some sort of control to this situation and bring it back to a maintainable state. The million dollar question is simply "How?".

Starting my blog...

I've been putting off creating my own blog for about a year or so. I've had good intentions, but I never seem to give myself the time. So, why now? What makes today so different? Nothing really. I've just become fed up with how I'm utilizing my own time. Life is too short not to go after the things you want. My driving motive for this blog is to provide a place for me to document lessons learned (often the hard way) and my thoughts on new technologies, and to provide a running dialog of my professional growth. I also intend to use this blog as one of many means to improve my knowledge of any topics discussed while (hopefully) improving my communication skills. I tend to be over-ambitious, but we will see how this chapter turns out. My goal is to post new messages every few days. I think that is a reasonable expectation, but only time will tell.