Styles

Saturday, September 18, 2010

Why unhandled exceptions can be good

There are always going to be scenarios in a piece of software that the developers haven't thought of, causing serious issues in the application.  Sometimes these issues are caused by faulty software, but they can also be caused by corrupt data, corrupt files, problems with the operating system, etc.  When one of these issues crashes the application, the underlying error is generally called an unhandled exception.

Unhandled exceptions can make users nervous about the quality of their software, so developers rightly try to avoid them.  This idea can be taken too far, though.  I have worked on a few applications where the developers tried to avoid showing the user an unhandled exception at all costs.  That meant there were times when something went wrong, the error was swallowed by the software, and the user was never notified.  This can often result in messy data.  To see why, consider this scenario:

A doctor at a hospital is entering her patient's medication information into the computer, which monitors medication types and amounts to prevent overdoses and dangerous drug interactions.  The patient has just been administered three drugs.  While the information is being entered, the application runs into unforeseen problems with medication #2.  If the error crashes the application, the doctor will know that something has gone wrong and will restart the application to see what information needs to be reentered.  This is annoying for the doctor, but it is not life-threatening.  If instead the application swallows the exception and gives the doctor either a cryptic message or no message at all (which is sadly often the case), the doctor will have no idea that she needs to go back into the system and double-check those entries.  I can only imagine the problems that could be caused by incorrect medication information being stored for a patient.

So what does this mean for you?  If you are on the implementation side, do not give in to the temptation to bubble-wrap your application to the point where unhandled exceptions can never occur.  Certainly handle the exceptions you can foresee; your users will rightly expect that.  But do not hide problems to the extent that no one can tell they occurred, or that the user does not understand the severity of the error.  Finally, be sure that you are notified, through an automatic e-mail or a similar measure, if such an error arises.  If you are a user, do not overreact when such an error occurs.  It is certainly frustrating to have your software quit on you, but keep the alternative in mind: at least you know something went badly wrong.

Sunday, September 12, 2010

What business people need to know about software development

Claiming ignorance of information technology in this day and age is just like claiming ignorance of accounting.  Every manager must have a basic knowledge of IT just like they must be able to read a balance sheet.  Determining what, exactly, every manager must know is difficult, though.  Like most things, it depends on the context.  I'll start with what business stakeholders need to know about software development, in large part because this is my specialty.

Basic Terminology:
User Interface - The way in which the software user interacts with the application
Database - The software that stores an application's data in most scenarios
Server - A computer, usually with a large amount of memory, that hosts the software or the database
Hardware - The physical components of a computer
RAM - Think of this like short-term memory in humans: it stores information that might be needed in a few minutes, but does not hold information long-term
Refactoring - The process of rewriting working code to make it more readable and maintainable
Data - Another name for "information" (though, strictly speaking, "data" is plural)

Development Process:
Writing software does not follow a predictable method or process.  If it did, we could write software that writes software for us and save everyone a lot of time and money.  (Where parts of the process are predictable, most good programmers will automate them.)  Because software writing is a highly creative and unpredictable process, nailing down solid estimates can be tough.  If you try to force your developers to stick to their original time and cost estimates, you will likely create a contentious atmosphere that gets in everyone's way when trying to get work done.  Instead, be flexible about estimate changes.  If it looks like a task or component is taking too long to complete, decide whether it is worth completing after all.  If so, keep in mind that the next estimate may well be too large, and you'll come out even in the long run.  If not, it is better to stop a time-consuming task early than to throw good money after bad.

Early detection of problems is important
If you notice something strange in your application, especially if your data looks wrong or is missing, find the cause as soon as possible.  Chances are that there is something wrong with your user interface and your data is fine, but if something in the application is corrupting the data, the sooner you find the problem, the easier it is to correct.  Lost data can sometimes be recovered from backups, but most companies only keep backups for a limited period of time.

Early detection is also important for problems like software slowness.  If your software is slow, it is not likely going to speed up on its own, and will likely get worse over time.  Fixing the problem early before complaints start coming in will save both you and your developer time and frustration.


Know what will change in your business requirements...and what won't
Many developers would disagree with me on this one, but I think it's important to communicate which of your business requirements will never change and which ones will certainly change over time.  For example, if you have an e-commerce site that sells shirts, you might expect to add colors to your offerings, but you are unlikely to add sizes.  The reason is that most developers will program non-changing requirements differently than ones that will change often.  Non-changing requirements can be coded in a way that is easier and faster to write the first time, but harder to change going forward; the opposite is true for changing requirements.  This concept isn't critical, but it is something to keep in mind.


When problems arise, adding more bodies is probably not the solution
If a software project falls behind schedule, resist the urge to request additional people to get it done.  Why?  Since, as I mentioned before, software writing is a relatively complex process, it takes time to get new people up to speed on the specific requirements of your project.  Not only will the newcomers be less effective than your current team, but your current team will be less effective than before while they bring the newcomers into the project.  Your best bet is probably to adjust your expected finish dates.  If that is not possible, then adjusting the number of features to be added should be the next option.

It was working before....
Refactoring is an important part of the software development process.  Adding features adds code, and unless you change existing code so that everything still makes sense to the developer, you will eventually end up with a tangled, unmaintainable mess, to put it mildly.  Therefore, most good developers will spend some time rewriting code to prevent this from happening.  Every time this happens, the developer runs the risk of breaking something that was working before.  There are ways of mitigating that risk, such as writing and running unit tests, but the risk cannot be completely eliminated.  If you are testing changes made to your software, try to test the areas that might be affected.  Also, if you get complaints about functionality that was working before right after changes were applied, take them as seriously as any other.


Know the general limitations of your approach
I am not suggesting that you become a programming expert by any means, but you should be aware of the limitations of the software delivery method you choose.  For instance, applications for a hand-held device might be difficult to move over to other brands of devices.  Desktop applications are hard to update with bug fixes or changes.  Web programming is limited by the fact that most of the processing must be done by the server.  (Going into more detail would require another blog post on its own.)  Being aware of the limitations will help you ask for features that are appropriate for your medium, and will make the conversations you have with your developers about changing features much more productive.

Keep in mind that you are a team
Finally, keep in mind that most programmers, no matter what their background, experience, or skill level, want to write good software.  Their focus may be different (most software developers I know will often over-emphasize quality code creation at the expense of the software as a whole), but in the end they want the same thing.  They want the software to be successful as much as you do.  From what I've seen, many of the problems between business and IT could be mitigated if everyone remembered that the end goal for everyone else is the same.  Build trust, and good things should follow.

Monday, September 6, 2010

Improving Business to IT Communication

A lot has been written about the disconnect between IT and the rest of the business. As an example of a common communication problem, here is a conversation (a bit bland and contrived, I'll admit) that I have experienced all too often:

Business user: I have a customer with a problem with the application.
Programmer: What seems to be the problem?
Business user: When they push this button, I want them to see a message.  Except they're not seeing one.
Programmer: Let me take a look to see what's going on.
- Pause -
Programmer: Their account is missing some information, which is causing the message to stay hidden.
Business user: That's not good.
Programmer: You can fix this problem by entering information on the "My Information" page.
Business user: Thanks, I'll ask them to do that.

Is there anything wrong here?  Some people might argue that there is not, since the business user is able to do what he/she needs to, and this was just a training issue.  I would argue, though, that there are two problems here:
  1. The application is not intuitive enough
  2. The underlying business problem is not fixed
In this case, both programmer and business user are more interested in solving the situation at hand rather than making a long-term fix to the problem.  The user will likely continue to have problems with this message not showing up and the programmer will focus his or her efforts elsewhere.  Perhaps this might be a better approach:


Programmer: Their account is missing some information, which is causing the message to stay hidden.
Business user: That's not good.
Programmer: You can fix this problem by entering information on the "My Information" page, but I'd bet this will continue to be a problem with other users.
Business user: I agree.  I can see how that message would be dependent on the "My Information" settings.  Can we make filling out that information required?
Programmer: We could do that.  We could also add default responses and ask the users to change the information if necessary.
Business user: Let's do that.  I don't want to force them to fill out more information than is needed.

The difference here is that both the programmer and business user see that the underlying cause of the problem - missing information on the customer's account - should be fixed for more than just this one customer.  Doing so would cause fewer headaches for both business user and programmer.  They came to an understanding, and found a solution that worked for both.

While this situation was vague, the problem it illustrates is not.  Technology personnel need to be sensitive to the business drivers behind the requests they receive in order to suggest more complete solutions.  Next week I will get into some more specific recommendations for business managers to get more out of their software development initiatives.

Sunday, August 29, 2010

Benefits of Unit Testing

Writing unit tests is an important part of writing software, but I rarely see someone take a balanced view of them.  I've seen developers shun them because of the perception that they don't provide enough value for the effort necessary to write them, and I've seen developers who act as if the presence of unit tests in a solution automatically makes for good software.  Neither view is true.  Unit tests are worth writing, but it is important to understand both their benefits and their drawbacks.

How unit tests work

Unit tests are essentially bits of code written and run for the sole purpose of testing components of production software.  The idea is that it is no longer necessary to run the entire application to test the functionality of these components.  For example, assume that we're writing software that takes applications for a program, and if the application is received less than 30 days before the start of the program, then an extra $100 is applied to the program fee. Our unit tests (in pseudocode) might look like this:

Function AppliesBeforeDeadline
DateTime today = "8/15/2010"
DateTime startDate = "10/1/2010"
Assert.IsFalse(CheckLateFeeNeeded(today, startDate))
End Function

Function AppliesAfterDeadline
DateTime today = "8/15/2010"
DateTime startDate = "9/1/2010"
Assert.IsTrue(CheckLateFeeNeeded(today, startDate))
End Function

Function AppliesOnDeadline
DateTime today = "8/1/2010"
DateTime startDate = "8/30/2010"
Assert.IsTrue(CheckLateFeeNeeded(today, startDate))
End Function

Function AppliesRightBeforeDeadline
DateTime today = "8/1/2010"
DateTime startDate = "8/31/2010"
Assert.IsFalse(CheckLateFeeNeeded(today, startDate))
End Function

There are four tests here.  The first, "AppliesBeforeDeadline", makes sure that when we apply more than 30 days before the start date, the fee is not applied.  "AppliesAfterDeadline" tests the opposite scenario, in which we apply inside the 30-day window.  The last two tests check boundary conditions, meaning scenarios close to the point where the functionality should change.

The real benefit here is that now that we've tested these scenarios, we don't have to do nearly as much testing while the application is running.  We can test these normal circumstances by running a simple function, which is far less time-consuming than starting the entire application.  Perhaps the biggest benefit is that we can use these tests as regression tests at a later date.  If someone changes the way "CheckLateFeeNeeded" works at some point, it would be nice to know that he or she didn't break anything in the process of making those changes.

The major drawback I see to unit tests is that developers will often assume that because a particular bit of code has unit tests written against it, it doesn't need to be tested in the application as a whole.  To see why this assumption is dangerous, let's take a closer look at our unit tests.  In this scenario, we didn't even test for all of the types of inputs we could reasonably expect, much less for exceptional conditions.  Some further tests we might need are:

  • What happens when one of the dates is in February, a month with only 28 days?  
  • If one of the dates is in UTC, is the function able to correctly calculate the difference in days?  
  • Will the function still work if the days are 375 days apart (does the function incorporate years into its calculation)?  
  • Will the function still work if one date is in December and the other is in January?  

Even if we write tests and all these pass, we also must consider scenarios where one of the dates is missing or null and/or the start date is after the application date.  Just because we have unit tests doesn't mean the function works in all scenarios.
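For concreteness, here is one hypothetical way "CheckLateFeeNeeded" could be written, shown in Python.  The function name and the 30-day rule come from the example above; the null and ordering checks are my additions, and the language's date arithmetic answers the February, year-length, and December-to-January questions for free:

```python
from datetime import date

def check_late_fee_needed(application_date, start_date):
    """Return True when the application arrives less than 30 days
    before the program starts, so the extra $100 fee applies."""
    if application_date is None or start_date is None:
        raise ValueError("both dates are required")
    if application_date > start_date:
        raise ValueError("application date falls after the start date")
    # Subtracting dates gives an exact day count, so leap years,
    # month lengths, and year boundaries are all handled.
    return (start_date - application_date).days < 30

# The four scenarios from the tests above:
assert check_late_fee_needed(date(2010, 8, 15), date(2010, 10, 1)) is False
assert check_late_fee_needed(date(2010, 8, 15), date(2010, 9, 1)) is True
assert check_late_fee_needed(date(2010, 8, 1), date(2010, 8, 30)) is True
assert check_late_fee_needed(date(2010, 8, 1), date(2010, 8, 31)) is False
```

Note that the missing-date and reversed-date cases are handled by raising errors rather than guessing; whether that is the right policy is exactly the kind of decision the unit tests alone can't settle.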

Unit tests also can't guarantee that the function is always called correctly.  A developer could easily switch the two dates, entering the start date where the application date goes and vice versa.  If the developer (or project manager, supervisor, or stakeholder) assumes that because a unit test exists the application must be functioning well, he or she will be in for some unpleasant surprises when users start applying for programs.

The bottom line is that unit tests can be very helpful if used properly, but they can also give you a false sense of security that your application has fewer bugs than it does.  It is important to realize their benefits and limitations before deciding how much to use them in your project.

Monday, August 23, 2010

Multitasking

I came across this blog about the drawbacks of multitasking.  I have noticed that I can get more done if I simply focus on a single task at any given time.  I also find my work more enjoyable, and I usually have more energy at the end of a day with a smaller amount of multitasking than the other way around.  While I multitask occasionally, I try to avoid it whenever possible.  Then I read this rebuttal and realized that "multitasking" can mean different things to different people.  So how would I define multitasking?
  • Focus your attention on a single task at a time.  If another issue emerges that is more important, that should get 100% of your attention.  If it is not more important, then it can wait.
  • Multitasking is about the attention given to a task; it does not refer to the number of thoughts you must keep in your head at any given time.
So avoiding multitasking doesn't mean closing off all contact with the outside world while you're working on a particular task.  You could designate certain times during the day for contact for non-urgent issues, with the understanding that you would be available by phone at other times if needed.  You could treat e-mail the same way - check e-mail periodically throughout the day, but only respond to non-critical e-mails at certain times.

Avoiding multitasking also doesn't mean that you can't put aside a particular task if you're stuck.  It means focusing 100% of your attention on whatever task you are trying to accomplish at any given moment.  If you are not making any headway, going to another task temporarily is not multitasking in my view; it's serial single-tasking.

Finally, the author of the rebuttal blog seemed to think that doing complex tasks qualifies as multitasking.  As a programmer, when I get a request for a feature change to software that I'm building, I need to consider many different ideas:
  • The most effective way to implement the solution
  • The needs of power users
  • The needs of the first-time user
  • Costs for my client
  • Unintended consequences that might arise because of my fixes
However, all of these ideas are working towards a single goal: determining how best to implement a feature requested by my client.  It is a single, multi-faceted task.  When a single task gets too complex to control, then I break it down into multiple tasks and address each one separately.  I'm still not multitasking, and by single-tasking the components I have an easier job putting the pieces together in the overall task.

So if multitasking makes one work less effectively and accomplishes nothing worth noting, then why do it on a regular basis?

Saturday, August 7, 2010

Tips for mentoring others

Here is a great blog post about mentoring new programmers that could easily be applied to other disciplines and to experienced hires, too:

http://blogs.techrepublic.com.com/five-tips/?p=212

These tips are rather incomplete, though, so I'd like to expand on them:
  1. The issue with mentoring seems not to be lack of will, but lack of time.  More on this in a bit.
  2. Make sure your road map covers more than just the first few hours.  It is all well and good to show the new employee the location of the documentation, but have a plan that covers at least the first six months.  This plan should include ways to make the employee more independent as time goes on.
  3. Be tolerant of mistakes, but try to prevent common or serious ones.  Confidence building is key.
  4. Try to assign projects at first that have a training purpose as well as a practical purpose for the company.  This should be a mixture of easy projects to give the employee confidence and stretch projects to keep him or her engaged.
While most of us would agree with these tips, very often they're not followed.  Why?  The biggest reason seems to be that people usually aren't hired for a potential need, they're hired to fill a current one.  While that approach lowers the risk for the company of hiring someone who will need to be laid off, it also makes it harder for managers and experienced co-workers to get the employee up to speed in a fashion that benefits both the employee and the company.  The easiest way around this that I can see is to keep track of potential future needs (perhaps by using risk modeling tools such as these) and hire based on that.  Outsourcing and contract-to-hire positions can mitigate the risk of over-hiring, if needed.  Good mentoring is a lot easier when one feels like there is time to do it well.

Sunday, July 25, 2010

Programming assumptions

A database specialist and blogger I follow came across this post:

http://stackoverflow.com/questions/888224/what-is-your-longest-held-programming-assumption-that-turned-out-to-be-incorrect

which got me wondering: what assumptions have I made that turned out to be incorrect?  The one that came to mind was the idea that going to the database repeatedly is always slower than pulling in all the data you need at once and querying the stored copy when necessary.  It really depends on your situation.  When you don't have to deal with remoting or firewalls, it is very often faster to query the database than to search a stored list.

More importantly, though, this question should make everyone ask themselves: what assumptions do I make today that are based on incomplete, out-dated, or inaccurate information?  A post like this one is good in that it helps everyone to see what assumptions had been proved false and allows all the posters and readers to learn from others' mistakes.  However, the goal should be to question our current assumptions, not just take pride in previous lessons learned.  Looking back is certainly a valuable exercise, but looking for new lessons to be learned is more important.

Sunday, July 11, 2010

What band instrument repair taught me about I.T.

When people find out that I was a band instrument repair person before becoming a computer programmer, most seem surprised.  They don't see a connection between the two.  However, I've found that there are a lot of similarities between the mind-set that makes a successful programmer and the one that makes a successful band instrument repair person.  Here are some things I learned repairing instruments that are directly applicable to I.T.

Many different approaches can accomplish a task
In order to seat a pad in a woodwind instrument, one can take many different approaches.  You could "float" a pad in place with hot glue or liquid shellac, glue the pad in place and clamp the pad down so it will take the necessary shape (called impressions), use cork pads and sand the cork to the necessary shape, use flexible pads and manually shape the pads to the tone hole, and so forth.  Each method has its strengths and weaknesses.  Many players (especially saxophonists) prefer the feel of pads glued in with shellac, but it gets brittle in cold weather.  Padding with no impressions results in a longer-lasting repair, but takes longer to prepare the instrument beforehand.  The same concept holds true in I.T.  No single programming language/operating system/software package can do everything you need every time perfectly.  PHP might be good for getting a web site up quickly, but Java or .NET might be more appropriate for a large-scale web site.  It is important to know and explore the alternatives to come up with the best solution.

To a hammer, everything looks like a nail
Continuing the previous thought, some people will stick to a particular methodology because it is the "best" for whatever reason.  Just like some repair professionals will claim that stick shellac is a superior choice of glue over other hot melt glues, some I.T. professionals will stick by a database, operating system, programming language, etc., for similar reasons.  While choosing a methodology for its convenience can usually get the job done, the best products I've seen are the result of knowing the alternatives and choosing the best methodology based on its merits, not based on convenience.  The idea that one becomes an "expert" in one area by refusing to learn other areas usually leads to more limitations than expertise.

Perfection is impossible
If I were to try to get a flute to respond as quickly as possible, I would make the pads and tone holes as flat as possible (usually within .0005 inches), use the thinnest possible padding between metal contact points, remove all the play in the mechanisms, and so on.  However, doing so results in a flute whose mechanisms are noisy and more susceptible to adjustment problems.  Finding the right balance between durability, quietness, and response is difficult and should depend on the individual needs of the customer.  The same holds true in software development.  An application built for performance is often harder to change, while an application built for ease of use might take longer to load.  Sure, there are cases when changing some code results in an application that is both easier to maintain and faster to load, but at some point your goals in writing software become conflicting.

Talent is less important than most people think
Finally, I've met countless people in the music industry who became good at what they do not through talent, but through hard work.  Success comes to people who work hard, always strive to do better, and recognize potential areas for improvement in what otherwise seems like a success.  People who do those things almost always succeed in the end, no matter how they define "success".

Saturday, June 26, 2010

The 60 hour work week

Here's an interesting article about the effects of working more than 40 hours per week:

http://archives.igda.org/articles/erobinson_crunch.php

Basically, it states that the output you get from an employee who works 60 hours a week is roughly the same as one who works 40 hours a week.

So by pushing employees to work longer hours, a company is not only likely driving away its best employees, but it is also not getting any more work out of the employees who stay.  I have to think this is even more pronounced for I.T. workers.  Since most of the good I.T. workers I've encountered get energy for their jobs by working on fun projects on the side, they are likely to approach burnout quickly as a result of working too much.  To make matters worse, burned-out workers produce less, which increases the pressure put on them, which increases the feeling of burnout, which in turn lowers productivity....  You get the idea.  I can't see any advantages to pushing employees this hard, so it's a wonder that so many firms do it.

Saturday, June 19, 2010

Using outsourcing effectively


There are plenty of stories available online about outsourcing projects that failed to live up to expectations.  Yet I see very few explanations as to why such failures occur, and more importantly, how to prevent a similar failure from happening again.  While I'll be the first to admit I don't have much experience with outsourced projects on the scale that usually make these types of headlines, I'd like to offer some advice on how to outsource effectively, from the perspective of a consultant who carries out outsourced work.

Before I get too far into that, though, I'd like to discuss the difference between outsourcing and off-shoring.  Outsourcing is the process of moving one or more business functions to an outside company, likely a specialist in that particular function.  Off-shoring is the process of moving a business function to another geographic location, typically a country with less expensive labor; the work may stay inside your company or not.  You can have one without the other, or do both at the same time.  I will stick to outsourcing for now and leave off-shoring for another time.

If you’re outsourcing the development of a computer application, you will help both yourself and your vendor if you start by getting as good an idea as possible of what you want out of your application.  Most vendors expect that you will change your mind about some of the application's functionality during the course of building the software; such "scope creep" is an accepted part of building software.  However, requirements that are too vague will make it extremely difficult for your vendor to build what you want.  If you know you will have trouble writing requirements, hire someone who can help you discover and articulate them.

It is also important to determine where your priorities lie.  Everyone wants their application done quickly, at a low cost, and with high quality.  To make matters even less clear, "quality" can take on different meanings to different people.  Application speed, ease of use, application up-time, number of bugs, scalability (i.e., the ability of the application to grow with few growing pains), ease of adding features, security of the application, etc., can all contribute to the perception of "quality".  Unfortunately, many of these are conflicting goals.  Making an application execute very quickly takes longer and costs more to develop, just as adding a large number of features can make your application harder to maintain.  You and your vendor should be on the same page as to where compromises must be made.  Be sure to communicate these goals and make sure your vendor follows them.  Communication up front along these lines can save a lot of headaches and discussion when the bill arrives.

One thing that will help you and your vendor manage your project is to break your large projects into smaller, yet still useful, chunks.  More often than not, once a client starts working with their application, they discover that some of the ideas generated don't work as well in the application as they do in the planning stages.  If you try to develop too much at once, problems are identified late, and the later a problem is found, the more expensive it is to fix.  If you start small, you will see the results of your work sooner, allowing you and your vendor to adapt to each other's styles.  This will allow each of you to communicate more effectively.

You may want to consider billing your projects on a time and materials basis only.  Fixed-bid projects are tougher on both parties.  The client very often tries to pack in as many features as possible in an effort to get the most for its money, which can cause it to lose focus on its core needs from the software.  The vendor must protect its own interests, and therefore cannot be very open or cooperative toward change requests.  If a project is billed on time and materials, the client can focus on which features and changes are most important, and the vendor can focus on how to fill those requests most efficiently.  In other words, the discussion turns from extracting the most value possible from the other company in a fixed-bid situation to a more cooperative effort to build the best software possible in a time and materials situation.

Also, be sure to stay active in your project.  You may be tempted to give your vendor all the information you think he or she needs and then let him or her finish the project, but that's usually a recipe for failure.  Review any documentation, samples, code, beta versions, etc., as thoroughly as possible.  Your vendor should have tried to understand your needs, but something inevitably gets lost in translation.  Be sure to voice any concerns early before they become problems.  The sooner you find errors or omissions, the cheaper and easier it is to fix them.

Most of these suggestions come down to improving communication.  Understanding your project and understanding your priorities help you give clearer directions to your vendor as to what you want from your project.  Breaking down your project into smaller, but still useful, chunks helps you and your vendor communicate requirements in a meaningful way.  Billing your project on a time and materials basis helps you and your vendor communicate about what the project needs, rather than about protecting each side's interests.  Keeping communication clear, open, and frequent will help increase the odds of success in any of your outsourced projects.

Saturday, June 12, 2010

Competing Through I.T.


One of the hot topics in business today is the tense relationship between business and I.T.  Of the many articles that I've read, very few try to thoroughly describe how the severity of the problem can differ from company to company, or how the solutions can vary because of those differences.  Nearly 25 years ago, Steven Wheelwright and Robert Hayes published an article in the Harvard Business Review called "Competing Through Manufacturing," which described the different levels at which a manufacturing arm of an organization could be integrated into the organization as a whole, as well as the benefits and consequences of each integration level.  Their terminology could easily be extended to the integration of an I.T. department.  Here are their original four "stages," only slightly adapted for I.T.:

Stage 1: Minimize Information Technology's Negative Potential
An Information Technology department whose role is in Stage 1 would be characterized by a lack of trust between I.T. and the rest of the business.  In Stage 1, a business feels the need to control as many aspects of I.T. as possible to avoid possible negative consequences of mistakes or misunderstandings.  A large portion of the strategic management and implementation regarding technology is left to consultants.  Internal research is discouraged because it leads to uncertainty, which in turn could lead to problems down the road.  The feeling is that anyone could manage I.T., which leads to inappropriate people being chosen for leadership positions within the department.  For companies whose major focus is I.T., internal applications aren't given nearly the attention that the customer-facing ones receive.

Stage 2: Achieve Parity with Competitors
An Information Technology department whose role is in Stage 2 would be characterized by the desire to expand I.T. only to keep up with competitors.  Industry best practices are followed regarding programming languages, hardware, purchased software, etc.  Investments in the department usually come in the form of investing in hardware and software, rather than investing in the people or in the process.  Unlike Stage 1 organizations, Stage 2 organizations often look internally for leadership in I.T., though leadership still sometimes comes from outside the department.  Improvements are often only made when shortcomings become obvious.

Stage 3: Provide Credible Support to the Business Strategy
Stage 3 Information Technology departments would be expected to actively support and strengthen the company's competitive position.  Stage 3 I.T. departments would make sure that all of their decisions are consistent with the organization's overall strategy and that these decisions are communicated effectively to all I.T. personnel.  These departments would be aware of long-term trends in the marketplace and would be actively considering ways to use these trends to their advantage.  Companies often stay in Stage 3 only for short periods of time, implementing one or two large changes and then reverting to Stage 2.  As opposed to Stage 2 companies, however, Stage 3 companies will often pursue improvements for the sake of improvement, rather than in response to some external force.

Stage 4: Pursue an I.T.-Based Competitive Advantage
A company reaches Stage 4 when the competitive strategy of that company is reliant on I.T.'s ability to implement the strategy.  In other words, I.T. is just as important as (though not more important than) other departments, and successful projects are largely a collaboration among departments.  What sets Stage 4 I.T. departments apart from other stages is the expectation by other departments that I.T. can contribute ideas and strategies central to the business as a whole.  Wheelwright and Hayes mention briefly that some Stage 4 firms take this idea too far – a firm that promotes one department at the expense of others is doing harm, regardless of which department is being favored.

If I were able to state with certainty how a company could bring its I.T. department from Stage 1 to Stage 4, I would probably be making seven figures as a business consultant.  Clearly, I can't do that (yet).  But by identifying some terminology, I should be able to communicate some ideas much more easily in future posts.  For example, in a previous post I wrote about improving the job interview process.  I hope it's now clear that a Stage 1 organization would have different needs than one in Stage 4.  A Stage 1 organization may have driven away its qualified workers, leaving only workers unable to find jobs elsewhere.  Its hiring efforts would not only need to focus on finding good programmers, but also on finding programmers who would be willing to endure, and help turn around, a Stage 1 environment.  Since a Stage 4 company's I.T. department is involved in the overall strategy of the business, its management would need to focus its hiring efforts on finding personnel who can understand the business as well as the technology.

Saturday, June 5, 2010

Determining if a project is worth the money

In a hypothetical scenario, let's say I have the opportunity to accept or reject a project that will cost my company $200,000 this year and is predicted to earn my company the following yearly profits:

This year (2010): $0
2011: $20,000
2012: $50,000
2013: $100,000
2014: $80,000
2015: $30,000
2016: $0

Should I take the project?  On the surface, the project seems to bring in $80,000 profit for the company.  But I should take into account the fact that most of the profits come in years 2013 and 2014.  To see why, imagine that the bank on the corner is offering an incentive to start an account.  Just bring in $1000 and you get $100 cash on the spot.  Would you take that deal?  Now imagine that bringing in $1000 now will get you $100 a year from now.  That doesn't look so good.  Would you take the deal if you had to wait 20 years for the $100?  Probably not.  So how should I go about determining whether the delay is worth it?

It's easiest if I start by figuring out what rate of return I would like from this investment.  (You'll see why in a moment.)  If I work for a publicly traded company, I could use the Weighted Average Cost of Capital (WACC) as a starting point.  The WACC basically tries to answer the question: if my company wanted to raise money for a project today, what interest rate would I expect to pay?  I won't go into the details about how to calculate it, but it tries to predict what people would pay for new stocks and bonds issued by the company.  (While I recommend having someone calculate the value for your company if you were to use it for something important, http://www.wikiwealth.com/ seems to have numbers that are close.)
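While I won't walk through estimating each input, the high-level formula itself is easy to sketch.  Here is a minimal version in Python (the 70/30 split and the rates below are made-up numbers for illustration, not any real company's figures):

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    # Weight each financing source by its share of total capital.
    # Debt is cheaper than its stated rate because interest is tax-deductible.
    total = equity + debt
    return (equity / total) * cost_of_equity \
         + (debt / total) * cost_of_debt * (1 - tax_rate)

# A hypothetical firm financed 70% by equity and 30% by debt:
print(round(wacc(70, 30, 0.11, 0.06, 0.35), 4))  # 0.0887, i.e. about 8.9%
```

The hard part in practice is not this arithmetic but estimating the cost of equity and debt, which is why I suggest having someone calculate it for you.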

Now, I said that the WACC should be a starting point.  The reason is that it doesn't take into account any specifics of the project.  If this particular project involves a lot of risk (maybe the sales estimates are uncertain or the estimates are based on the economy continuing to grow) then I might want to adjust the rate higher.  (Riskier projects, stocks, bonds, and most other investments require a greater return if more risk is involved.)   Finally, keep in mind that since the WACC is the rate a company would pay investors for capital, it is essentially the break-even point.  Accepting a project that has returns that match the WACC would be like taking a bank loan at 6% to buy an investment that returns 6%.  In the end that investment has no net gain.

I think that my project isn't particularly risky, so I'll just use my company's WACC.  For this example, I'll use both the WACC for Microsoft (9%) and CitiGroup (14%), as determined by www.wikiwealth.com.  I could then find the present value of my profits using Excel.  The present value essentially tells us what a series of future cash flows is worth today, given an interest rate.  For example, if I wanted to determine how much the project is worth to Microsoft, I would enter:

=NPV(0.09, 20000, 50000, 100000, 80000, 30000)

Here, the first term is the interest rate, and all the subsequent terms are the expected profits from each future year.  When I enter this, Excel returns a present value of $213,822.93.  Since this is more than the $200,000 investment for the project, I should accept the project.  What about CitiGroup?  Here is my present value calculation in Excel:

=NPV(0.14, 20000, 50000, 100000, 80000, 30000)

This returns $186,461.87.  It looks like I would not take the project if I worked for CitiGroup.
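For readers without Excel handy, the same calculation is easy to reproduce.  Here is a minimal sketch in Python that mirrors Excel's NPV function (which discounts even the first cash flow by one full period):

```python
def npv(rate, cashflows):
    # Like Excel's NPV: cash flow t arrives at the end of period t,
    # so even the first one is discounted by a full period.
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cashflows, start=1))

profits = [20000, 50000, 100000, 80000, 30000]  # 2011 through 2015

print(round(npv(0.09, profits), 2))  # 213822.93 -- Microsoft's 9% WACC
print(round(npv(0.14, profits), 2))  # 186461.87 -- CitiGroup's 14% WACC
```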

I hope that gives you a little bit of insight into some of the finances behind decision making.  As you can see, the process is somewhat subjective (such as determining the interest rate and estimating the profits), but it does give you a relatively easy way of comparing the value of two different projects.

Saturday, May 29, 2010

Hiring good programmers - Part 2

In Part 1, I outlined some problems with common practices in hiring programmers (or anyone else for that matter).  What can the hiring manager do about these problems?

I would start by defining the tasks the position requires as honestly, realistically, and specifically as possible.  This isn't as easy as it sounds.  For example, many openings that I've seen for programmers require someone who can complete code "on time, within budget, and bug-free".  If a company is having trouble finding programmers who meet these requirements, what is the problem?  Yes, skill deficiencies in the programmers themselves can cause any or all of these issues.  But so can poor requirements, poorly controlled scope creep, poor communication, etc.  What exactly is the need?

A better approach might be identifying which aspects of the finished product are most important.  A programmer trying to be one of the first to create a game for a new mobile system might be more focused on time and budget than bugs.  In contrast, programming for medical devices would require much more focus on creating bug-free software.  Along with this, be aware of any roadblocks within your company preventing programmers from getting work done.  Do the programmers frequently get poor requirements?  While improving the requirement gathering and documenting process, I would focus on getting programmers with enough of a business sense to be able to extrapolate missing information from the requirements.  Are projects frequently under time pressure?  Then finding programmers who can get work done quickly should be a priority.

Once the tasks have been defined, I would then break each of the tasks down into Knowledge, Skills, and Abilities (KSA).  For the sake of example, I'll start breaking down a few of the tasks and KSAs needed for a completely hypothetical mid-level ASP.NET developer position.  This position might be at a mid-sized company with a web application that tracks inventory within a supply chain; the company is looking to add a developer who can handle simpler tasks, freeing up some senior developers to work on other projects.

Tasks:
  1. Creates reusable, maintainable, reliable components in C#
  2. Tracks down bugs with little assistance from more experienced developers
  3. Integrates third-party components into web applications
Task 1 could be broken down into the following KSAs:
  1. Working knowledge of Object-Oriented Programming concepts
  2. Experience creating effective unit tests
  3. Makes intelligent decisions as to when a component should be broken down into sub-components
Task 2 could be broken down into the following KSAs:
  1. Can read and understand code in C#
  2. Can read and understand documentation
  3. Can refactor poorly written code into more usable/readable components
  4. Can write unit tests to reduce the amount of time testing through the user interface
Task 3 could similarly be broken down into the following KSAs:
  1. Can read and understand third-party documentation
  2. Can integrate third-party code into a larger application as seamlessly as possible
  3. Can anticipate problems with third-party components and comes up with reasonable work-around solutions
A real KSA breakdown would include many more tasks and KSAs, but hopefully you get the idea.  Now that the needed KSAs have been identified, it is time to identify the important ones.  Reading and understanding C# and documentation are common KSAs, as are seeing how small pieces fit into a whole and writing effective unit tests.  Not found is the need for keeping up with the latest technologies or understanding large-scale structural issues.  (Keep in mind this is a hypothetical position - these skills may be needed in other ASP.NET positions depending on the company.)

With this information, you can design questions for interviews.  Since we need to identify people with an understanding of creating reusable, maintainable components, here are some questions that we might ask:

  • If you are asked to fix a method whose return type is object, and which returns a due date if the user is valid and 'false' if not, how would you go about doing so?
  • If you have some functionality in one of your classes that is needed by another, what is the best way to move that functionality to the new class?
  • If you are using a third-party library to send information from several different ASP.NET pages to a third-party store, where do you put the logic?
These questions are all specific and are all targeted to certain tasks I would need the interviewee to perform.  I have my ideal answers for each and can evaluate the interviewee consistently.  It is important to ask several questions that get to the same point because of the possibility of miscommunication or misinterpretation on either side.
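To illustrate the sort of answer I would hope to hear on the first question, here is a sketch of the same smell and its fix (hypothetical code, written in Python for brevity rather than C#).  The point is that a mixed return forces every caller to type-check, while an explicit "no value" return does not:

```python
from datetime import date
from typing import Optional

# Before: returns a due date for valid users and the literal False
# otherwise, so callers must inspect the type of whatever comes back.
def get_due_date_mixed(user) -> object:
    return user.due_date if user.is_valid else False

# After: the "not valid" case is modeled explicitly as None, and the
# signature now tells callers exactly what to expect.
def get_due_date(user) -> Optional[date]:
    return user.due_date if user.is_valid else None
```

A candidate who reaches for something like the second version, and can explain why, has demonstrated the KSA I care about.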

It is also important to evaluate candidates by watching them do work that is representative of the job they would be hired for.  I might run into someone who can talk about different technologies or approaches fairly well, but who falls short when it comes to actually doing the job.  I would take a similar approach to designing the test as I would to designing questions: I would identify important KSAs and create tasks that require them.  Taking this one step further, I also like to break down my tasks into three categories: tasks that every candidate ought to be able to do, tasks that require skills that are "nice to have", and tasks that I don't expect the candidate to complete.  This way I can get a deeper sense of the understanding a candidate has of particular skills.  A candidate who can complete some of the tasks in the last category will have an advantage over those who can't.

Finally, you may be wondering about how to determine cultural fit.  Unfortunately, cultural fit is tough to pin down and therefore it is difficult to create good questions to get at it.  To make it even tougher, many of the questions that could be asked to ascertain cultural fit open the door to discrimination lawsuits.  I don't have any good ideas here.  If anyone has any ideas, please feel free to leave a comment.  I'm certainly open to ideas.

Saturday, May 22, 2010

Hiring good programmers - Part 1

One thing that I've struggled with, and I know I'm not alone in this, is how to accurately decide if someone will be a good programmer for a company or not.  Traditionally, companies will first screen resumes, looking for particular experience, skills, etc.  They will next have a phone screen to further weed out candidates.  Finally, the candidates will be brought in for a face-to-face interview.  How effective are these methods, and what can we do to be more effective?  In Part 1, I will outline issues with common current methods of evaluation, and in Part 2, I will suggest an alternative.

What about the resume?  Many companies will pre-rank candidates (either explicitly or implicitly) based on the quality of the resume and the information presented on it.  There are a few problems with this.  First, I don't have a reliable source handy, but I recall hearing that 50% of people lie on their resume.  50%!  Couple that with the proliferation of sites (such as fakeresume.com) intended to give advice to people who want to get away with lying on a resume, and it's tough to know which resumes to trust.  Second, many of your potential employees will have hired a professional resume writer in order to get your attention.  A great resume?  Is it great because the candidate is a detail-oriented worker, or because he or she hired a great resume writer?

What about a degree?  Most of the jobs posted on the major job sites for programmers ask for some sort of technical bachelor's degree.  This certainly should be a consideration for an entry-level candidate.  However, in observing a number of people in a number of different industries, I've come to the conclusion that the benefits of the quality or type (or even presence) of a degree decrease severely after about 1-3 years of experience.  For experienced candidates, the ability to learn new concepts and apply those concepts in new situations becomes a much more important asset than the quality or source of their original education.  Put another way, does the candidate have 10 years of experience, or 1 year of experience repeated 10 times?  In fact, I might be more inclined to hire the person with a poor-quality education who somehow managed to become a good programmer, since he or she has the proven ability to learn and apply concepts in less than ideal circumstances.  (Or maybe I just think that because I'm a self-taught programmer.)

What about the interview?  A lot of good information can come out of an interview, but a lot of misleading information can, too.  I'll speak more about interviews in Part 2, when I actually start talking about methods that can help you find good candidates rather than merely eliminate bad ones, but I will mention here that you should always be aware of the implications of what you are asking, even if the legality of the questions weren't an issue.  As an extreme example, I read an article on BNET that quoted a hiring manager saying (and I'm paraphrasing) that they hired someone based on the answer to this question: "If you were a character from 'The Wizard of Oz', which would you be and why?"  What can you learn from the answer, assuming you get one?
  1. The interviewee thinks well on his or her feet and handles the unexpected well
  2. The interviewee has an active fantasy life and empathizes with characters easily
  3. The interviewee is a huge 'Wizard of Oz' fan
  4. The interviewee trusts that you know what you are doing, even though you are asking a question that seemingly gives you no insight into his or her ability to do the job and gives him or her no insight into fit with your culture, making him or her a likely "yes person"
  5. The interviewee does not have the ability to sense when something is or is not a waste of time and would be a poor choice for leadership positions
You could legitimately make a case for any of the five.  #1 is definitely a plus for your company, #2 may or may not be, depending on the position, culture, and point of view, #3 is completely irrelevant, and #4 and #5 would be considered negatives for most positions out there.  I can't see how the interviewer in this case learned anything worth knowing from this question given the uncertainty involved.

Finally, many companies have some sort of programming exercise as a part of the interview process.  Even a poorly designed exercise will likely separate the completely incompetent from the competent programmers.  For example, what does asking the programmer to list all the prime numbers from 1 to 100 reveal about his or her ability?  You may need someone who can think mathematically, but you may not.  Here again, the best-designed exercises will have specific goals and will not be some generic problem designed merely to eliminate the worst programmers.
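For what it's worth, here is roughly what that generic exercise amounts to (a quick Python sketch), which shows how little it demands beyond a loop and the modulo operator:

```python
def primes_up_to(n):
    # Trial division by candidates up to the square root is plenty
    # fast at this scale; no need for a sieve.
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

print(primes_up_to(100))  # 25 primes: [2, 3, 5, 7, 11, ..., 97]
```

If the job never calls for this kind of math, the exercise tells you almost nothing about fit.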

In Part 2, I will demonstrate some methods for getting at the heart of the job position which will hopefully make it easier to scan resumes, design interview questions, and design programming exercises.

Thursday, May 20, 2010

Should programmers be middle managers?

Recently, I wrote about whether programmers should get an MBA here, and said that an MBA would be helpful for the programmers who wanted to get into management.  Related to that, I came across an article on CIO.com about the inherent skills needed for programmers, middle management, and executives.  I don't have any particular insight into the article, or what it means for I.T., other than it's an article worth reading.  I think I might put the book on which the article is based on my reading list.

Saturday, May 15, 2010

Using simulation software for better project estimation

Most technology professionals that I've encountered have difficulty creating accurate estimates.  Unclear requirements, differences in personnel skill, feature difficulty, and scope creep can all contribute to uncertainty in estimates.  There are, of course, different ways of getting around these problems, such as using pre-determined formulas, padding estimates, making frequent adjustments, etc.  It's about time that I.T. professionals thought about adding simulation tools to their tool belts.


(In the interest of full disclosure, I'm writing this article with Palisade's @RISK software in mind, since that's the software I used in my Quantitative Analysis course.  I'm reasonably certain that there are similar software packages out there.)


How would these simulators work for estimating an I.T. project?  First, you should break down your effort into smaller, manageable components.  For the sake of example, let's say that you need to add a new feature to an existing web site.  You know that there will be 5 new pages, 7 new tables, and some business logic.  However, the business logic is extremely difficult and one of the pages will require quite a bit of extra work, perhaps using an unfamiliar technology.  You break it down, and your estimate might look like this:
  1. UI for 4 simple pages: 2 hours each (+/- 1 hour)
  2. 7 database tables: 1 hour each (30 minutes minimum, 2 hours maximum)
  3. Complex UI for one page: 8 hours (could be 6, but it could explode to 16)
  4. Business logic: 16 hours (but for whatever reason, you really don't know)
You could enter these assumptions into the software (#4 would likely be modeled with a normal distribution) and it would tell you the average amount of time, as well as the best- and worst-case scenarios.  Without the software, all we could say is that the total falls somewhere between 20 and 70 hours.  With the software, we can reasonably say that the effort will likely take somewhere between 30 and 50 hours.
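While I can't reproduce @RISK here, the core idea is just repeated sampling.  Here is a bare-bones Monte Carlo sketch in Python; the distribution choices are my own guesses at how the assumptions above might be encoded, not @RISK's actual model:

```python
import random

def simulate_once():
    # 1. Four simple pages: 2 hours each, +/- 1 hour
    pages = sum(random.triangular(1, 3, 2) for _ in range(4))
    # 2. Seven tables: most likely 1 hour each, between 0.5 and 2
    tables = sum(random.triangular(0.5, 2, 1) for _ in range(7))
    # 3. Complex page: probably 8 hours, as low as 6, could explode to 16
    complex_page = random.triangular(6, 16, 8)
    # 4. Business logic: genuinely uncertain, so a normal distribution
    logic = max(0.0, random.gauss(16, 4))
    return pages + tables + complex_page + logic

trials = sorted(simulate_once() for _ in range(10000))
print("average: %.1f hours" % (sum(trials) / len(trials)))
print("90%% of trials between %.1f and %.1f hours" % (trials[500], trials[9500]))
```

Running this a few times gives averages in the low 40s with most trials landing in the 30-to-50-hour band, which is much more useful than a raw 20-to-70 range.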


This was a relatively simple example for demonstration purposes.  I can't speak for all risk analysis software, but @RISK makes it relatively easy for a programmer to add some pretty complex logic to a scenario.  You could pretty easily plan for uncertainties due to uncertain staffing, add provisions for scope creep, or add profits (if you're a consulting firm) to your model.  The sky is the limit.


While the software can be useful, it does not protect you from errors caused by false assumptions.  In the example above, I didn't add provisions for getting the data from the database into the business logic layer, nor did I add a provision for testing.  If these components weren't already built into the estimate, your estimate is going to be low.  Therefore, approach the information you get back from the software cautiously.


If you're interested in reading more about the subject, I would suggest the textbook I learned from, which is Data Analysis and Decision Making by Albright, Winston, and Zappe.  The examples they give are not I.T. specific, but it shouldn't be hard to apply those concepts to your estimation process.  If nothing else, you can continue to watch this blog since I plan on continuing to post about the subject from time to time.

Saturday, May 8, 2010

Should programmers get an MBA?

One question that I see among a few computer programmers these days is whether or not they should go back to school to get their MBA.  The answer really depends on what he or she wants to do in his or her career over the next five years.

Before I get any further into this, let me first describe what an MBA can (and can't) do.  The MBA is a very general degree focused on giving business decision makers the background they need in order to make their business thrive.  MBA programs cover high-level topics such as accounting, finance, operations management, economics, and business strategy.  Programmers, on the other hand, often need to get well acquainted with the details of the day-to-day operations that keep a business growing.  The two areas aren't completely separate, but they don't go hand-in-hand, either.

Getting back to the original question, programmers who wish to become managers at some point in their careers will find the knowledge gained from an MBA useful.  No degree should be mandatory for any position, since most knowledge can be gained through other means, but I couldn't imagine trying to be a manager without the skills I've gained from my MBA studies.  Too many people assume that being good at a job makes one qualified to manage others doing that job, which is absolutely not true.  Managing and doing are two completely different skill sets requiring completely different knowledge.  The MBA can help fill in some gaps in management skills.

For the non-managing programmer, I could see how an MBA could help a programmer understand business requirements (which was my original intent for getting mine), since the degree is about the high-level functioning of business.  I can also see why an MBA degree is strongly desired to work in the large consulting companies, since the degree not only gives you the tools to speak with business stakeholders but also helps prepare the student for project management.  I'm not convinced that the MBA is the best use of time and money for a programmer with these career goals, though.  The fit isn't bad, but there are other alternatives out there, such as PMP certifications or industry-specific degrees.

Many people look to MBA degrees as a way to help avoid unemployment.  I can't think of a worse reason for a programmer to get this degree.  (Well, maybe that's not true.  I can think of worse reasons to get an MBA.  Most of them wouldn't be considered by someone seriously considering the degree, though.)  The competition for jobs seems to be a bit stiffer for MBA grads than for experienced programmers with knowledge of the latest "hot" technology.  In fact, programmers who wish to remain programming after getting their MBA might find job hunting tougher due to the perception of being over-qualified.  Programmers should also think very carefully about spending significant amounts of time away from technology-specific studies.  Technology is changing all of the time.  Much of it should be easy for any IT professional to pick up due to its similarity to previous technologies, but some of it is not.  While it is not impossible to keep up with technology while studying for an MBA, it is not an undertaking that can be done by everyone.

In the end, it was worth it for me to get my MBA because I'm very interested in business strategy, and my MBA helps me understand issues related to management and strategy that I might not have as a pure programmer.  Also, I'm not as interested in specific technologies as I am in finding the right solution to solve problems.  That doesn't mean that the degree is right for every programmer, but what course of study is?