Styles

Tuesday, December 27, 2011

When being under an estimate can be a problem

Recently, I wrote about how software projects should be measured by criteria other than time and budget.  Time and budget measurements aren’t going away, though, so we need to act appropriately to ensure that focusing too much on them doesn’t harm our ability to deliver business value.

The problems associated with running over the original estimate are relatively straightforward.  Pressure to meet a pre-determined schedule can lead all parties involved to cut corners, resulting in poor designs, incomplete implementations, inadequately tested features, and so on.  Very likely these results are already familiar to you.

But what happens if the software development process is significantly under budget?  (Yes, it happens.  I’ve been involved in several such projects.)  Generally, one of two things happens:
  1. The project is completed significantly under budget
  2. Scope creep is allowed and unnecessary features are added
The first one is not a problem.  The estimate was, after all, just an estimate.  If you’re good at estimating, you should come in under half the time and over half the time, assuming a reasonable tolerance for error each way.  But what about the second result?  For reasons I wrote about in an earlier post, just because you can add features doesn’t mean that you should.  These features, if not thought out properly (and they usually aren’t), will likely add maintenance costs and additional points of failure.  If any stakeholders in the project must use the entire budget, define some “nice-to-have” features ahead of time.  This way these features can get the thought and planning they deserve.

Wednesday, December 21, 2011

How tech-focused do we need our developers to be?

In several earlier posts, I have written about how I think it’s time for software developers to stop thinking so much about the technology they use and start focusing on the business problems they solve.  There are two primary reasons for this:
  1. Writing maintainable software isn’t easy, but it’s getting easier every day.  Integrated Development Environments, such as Visual Studio or Eclipse, give developers the tools they need to write, refactor, and debug software at their fingertips.  Modern programming languages give developers needed functionality out-of-the-box, while abstracting some of the lower-level concepts, to help developers be more productive than ever before.  Finally, and perhaps most importantly, with the thousands of technical bloggers and bulletin boards out there, it’s easier than ever to find answers to your questions using your favorite search engine.
  2. The average technologist doesn’t know about the inner workings of a business process, nor do they seem to care.  "Tell me what to build and I’ll build it" is an attitude that’s all too common among technologists today.  On the flip side, too many business people know little to nothing about the technology that they’re using and managing.  Their attitude seems to be "that's a technology issue, so the technology people should deal with it".  Someone needs to bridge the gap in order to create solutions that bring out the best of both worlds.
Business analysts are supposed to fill this gap, but too many times true business analysts are simply not present, or not focused on what truly makes a project a success.  I’ve heard of cases where business analysts function as middle-men who just get in the way, because they don’t know enough about either the business or the technology.  Project managers should be able to fill this gap too, if they are truly to have the knowledge necessary to be the ones primarily responsible for delivering a product.  Time and budget constraints get in the way, though.

Should it fall on either the developer or software architect to fill the gap?  I used to think so, but that was when I worked in a consulting company where developers worked directly with the client, with no business analysts or project managers to step in and help.  In that particular situation, it absolutely made sense for all of the developers to take a business class or three and be versed in both business and technology.

But what about situations where business analysts and project managers are present?  As much as development is getting easier, it’s still hard.  It took me four years of study and experience before I felt like I knew enough to be fully comfortable as a technologist, so I started branching out by getting an MBA.  But I know full well that I don’t have the knowledge to create a site with the scalability needs of Google, or the reliability needs of a pacemaker.  Programmers also need to master concerns not tied to any particular technology, such as software readability and predicting scalability.

I think the answer is that the hard-core technologist isn’t going away, despite the rise of business/technology hybrids.  Instead, I see software teams made up of a mix of strong technologists and adequate technologists with a strong business background.  It would also help greatly if everyone in the development process gained more specific technology and implementation knowledge.  This is not a small task, but it is a necessary one if we are going to enable our technology to meet business needs more easily.

Sunday, December 18, 2011

Good is the enemy of great


The first two sentences of Good to Great: Why Some Companies Make the Leap... and Others Don't by Jim Collins are:

Good is the enemy of great.

And that is one of the key reasons why we have so little that becomes great.

At some point I may write about the book as a whole, but today I’m just going to focus on those two sentences.  When I first read them, I had to put the book down and think about them before being able to move on.  I’ve encountered this idea in so many different ways.  Once one has achieved a certain level of competence, it’s difficult to keep pushing boundaries in order to go beyond it.  Everyone has a different idea of what is “great”, though.  Is great creating the highest quality product possible?  Or is it finding the best balance between quality and cost?  To make matters more complicated, what we define as “great” will change as we become better able to provide services.  Before we can agree whether we have so little that becomes great, we must first define greatness.  Personally, I see greatness in people who can go beyond their typical job role and provide some extra benefit to their customers.  Take these examples:

A good developer gets the job done in a relatively short amount of time, and they are able to find most of their bugs in their testing process.  They understand and use programming best practices to avoid errors and help the next person viewing their code.  A great developer can find bugs just by looking at code, and errors found during testing come as a surprise.  They also understand when not to use a particular best practice if it does not apply to that particular situation.

A good designer can take a color palette and a set of business requirements and come up with a design that impresses users at first glance.  A great designer understands the needs of the end user and creates a design that is easy and natural to use on both the first use and the thousandth.

A good project manager can tell if a project is at risk of not meeting stakeholders’ expectations and can balance the needs of the different groups involved, such as developers, designers, and stakeholders.  A great project manager can anticipate and mitigate problems before they arise, as well as keep the entire team focused on what’s really important in a project.

A good business analyst takes business specifications and turns them into software specifications that the team can understand.  A great business analyst takes business requirements and creates a holistic business solution, then creates documentation for the software that supports that solution.

So how is good the enemy of great?  In all of the cases I’ve outlined above, not only can one make a career out of being good, but some managers might think that striving for greatness detracts from the main problem at hand.  (My definitions of greatness involve going beyond traditional job roles, which might not be appreciated by no-nonsense, “get-it-done” type managers.)  Because most people can make a career out of being good, they limit themselves and never reach for true greatness.

Going beyond Collins’ point, I would also argue that being good encourages complacency.  Continued complacency leads to adequacy.  Continued adequacy leads to stagnation.  In our ever-changing world, stagnation leads to mediocrity or worse.  It’s no coincidence that all of the companies Collins profiled achieved greatness by shaking up the status quo.  Don’t accept something that is merely good when you can strive for something great.  You may not reap the benefits today, but you will someday.

Thursday, December 15, 2011

The difference between a "project" and "product" in software development

What is the difference between delivering software products and completing software projects?  Both have the same end goal in mind – working software.  However, the approaches are significantly different.

A software project’s success is primarily measured by one criterion: completing all desired functionality within time and budget.  The obvious benefit to a project-based mentality is that having a (reasonably) fixed cost and duration makes it easier to budget money and resources for future projects.  However, as I wrote in a previous post, time and budget measurements should be secondary to the actual business value delivered.  If the project mentality is taken too far, it can lead to deliberately excluding needed functionality simply because it was identified after the scope was laid out.

A software product’s success is primarily determined by the goals of the software itself.  If, and only if, the software does what it was intended to do, it was a success.  The benefit to this mentality is that the core mindset becomes delivering software that exceeds expectations.  Scope creep is no longer an issue – if needed functionality is discovered after the scoping process, the project is re-scoped.  The development team and business stakeholders can re-evaluate priorities based on business needs and the current budget without being bound by a pre-determined one.  A drawback is that product-based teams tend to increase scope as time goes on, which makes budgeting and resource planning more difficult.  It is also harder to determine success criteria in a software product environment, since they will vary from product to product.

While there are definitely benefits to a project-based approach, I would like to see more of a product-based mentality among technology teams.  We as technologists need to be more concerned with the software functionality as a whole; not in the sense of “does it do what we were asked to build” but “does it do what it’s supposed to do from a business point of view”.  Of course, time and budget are still important; otherwise the methods for determining which projects to pursue start losing their relevance.  But a greater product-based approach would help bridge the gulf between business and IT.

Sunday, December 11, 2011

Beware the Expert Fallacy

The relationship between business and technology is contentious at times because of behaviors on both sides.  I’ve been writing mainly about the technology side primarily due to my familiarity with it, not due to its relative importance.  If business and technology are to work together as one team, though, business leaders need to change too.

A friend of mine told me of a conversation they had with a co-worker that exemplifies a serious problem in the business community when it comes to technology.  In this scenario, person #1 (whom I’ll call "John") hired an outside firm to redesign a website, and person #2 (whom I’ll call "Jane") was looking at the results.

John: I’d like you to take a look at the design for our new web site.

Jane: I’d love to, our current site could use a new look.

John: Here it is.  I think they did a great job, didn’t they?

Jane: Yes they did.  I like the colors and it’s much easier to read than our old site.  I have a question, though.  This new design put a secondary product line under a different tab.  I know that’s technically correct, but are any of our users going to be able to find it there?

John: Don’t think about it too much.  This design was created by experts. [Emphasis added.]

I come across John’s attitude all too often.  Technologists are often experts in technology, but they frequently know little about your particular business, or in this case, about marketing.  I wasn’t involved in the design process of this particular project, but I’m reasonably certain that the design team created a site using an existing color palette and the design specifications given to them, but didn’t give much thought to how people would want to use the site.  The result was a site organization that made sense to the business team but not to the end user.

I can tell you from personal experience that Jane’s observations are not unique.  The design team created a mediocre design, but why didn’t John say something?  Essentially, John put too much responsibility on the technology team to build a business product.  Technology can be confusing at times for anyone, so entrusting technology decisions to the technology team seems like a safe thing to do.  The problem is that while the design team knew the technology, they didn’t know the business.  In this case, no one, neither the design team nor the business team, was really focusing on the user’s experience.  Because the user’s experience was overlooked, user satisfaction will almost certainly suffer.

The solution is straightforward, though not simple: the business team must not view technology as its own separate area, but instead integrate thinking about technology into thinking about the business.  Any software project should involve as much communication as possible between the technology team, the business team, and a representative sample of end users.  By working together, and by not treating technologists as experts in anything that merely seems technology-related, businesses can create software applications that are a pleasure to use.

Wednesday, December 7, 2011

Mixing jQuery Mobile and ASP.NET WebForms


I’m going to depart from my usual post subjects and get a little more technical.  I recently had an unpleasant experience trying to use jQuery Mobile with ASP.NET WebForms.  If you’re thinking about doing the same, I recommend stopping right now and using ASP.NET MVC instead.  To see why, read on.

To start, before I had gotten into jQuery Mobile, I had assumed that it was simply a slimmed-down version of jQuery built specifically for mobile devices.  I was wrong.  jQuery Mobile is built on top of traditional jQuery, and it is intended to simplify the styling of web sites to make them look and behave more like native mobile applications with as little development effort as possible.  For example, simply by adding “data-role=’listview’” to a <ul> tag, you can specify that the <li> tags underneath show up in a typical mobile stacked list.  (You can check out the jQuery Mobile documentation to see an example.)
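To make that concrete, here is a minimal sketch of the listview markup (the list items and links are invented for illustration):

```html
<!-- An ordinary unordered list; the data-role attribute alone tells
     jQuery Mobile to restyle it as a native-looking stacked list. -->
<ul data-role="listview">
  <li><a href="items.aspx?id=1">First item</a></li>
  <li><a href="items.aspx?id=2">Second item</a></li>
</ul>
```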

Anyway, I started with a typical login control on the first page, created some list pages, and created a couple of data entry pages.  It worked extremely well at first.  All of my page navigations loaded smoothly into the next page, and the user interface styling happened exactly as advertised.  Fantastic!  I was quite pleased at this point.

Then I added a “Logout” LinkButton.  It wouldn’t post back.  In order to make its page transitions work, jQuery Mobile loads the next page using JavaScript when navigating between pages, then slides the new page into the user’s view.  jQuery Mobile was apparently doing its magic on all hyperlinks on the page, including my LinkButton.  A Google search gave me several suggestions for making it work, ranging from unbinding the control to adding a “data-role=’none’” attribute to the control.  None of them worked.  So I made the logout button redirect to a page that logged the user out of the application.  That’s not how I would have liked to implement that functionality, but it’s better to have full functionality with a less-than-ideal implementation than an ideal implementation with a less-than-ideal set of features.
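For reference, the attempted fix and the eventual workaround looked roughly like this (control IDs and page names are invented; this is a sketch, not the project’s actual markup):

```html
<!-- Attempted fix: asking jQuery Mobile to ignore the control.
     In my case this did not restore the postback. -->
<asp:LinkButton ID="LogoutButton" runat="server" data-role="none"
    OnClick="LogoutButton_Click" Text="Logout" />

<!-- Workaround that worked: a plain link to a page whose Page_Load
     signs the user out and redirects to the login page. -->
<a href="Logout.aspx">Logout</a>
```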

So then I started working on the data entry pages.  On one page, the user is required to enter a zip code, and a dropdown is populated with the matching counties.  Since I didn’t want to fight ViewState or turn it off altogether, I decided to use an UpdatePanel to refresh the values.  Changing the text of the zip code textbox didn’t cause a postback, but instead caused JavaScript errors.  The errors made no sense to me, so I went back to Google.  It didn’t take long (about 2 minutes, I think) for me to realize that UpdatePanels and jQuery Mobile really can’t work together.  Even if I had been able to get the JavaScript working, the CSS classes that jQuery Mobile would add to the page elements would be lost when an UpdatePanel is refreshed.  Sigh.  That wasn’t a huge deal – I turned off ViewState on the dropdown, bound it on every page postback, and wrote my own AJAX (using the ASP.NET AJAX stuff) to populate the dropdown.  I don’t like UpdatePanels anyway, so I wasn’t too heartbroken.
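The replacement logic itself was simple.  Here is a hedged sketch of the client-side piece, with a hard-coded zip-to-county table standing in for the AJAX call to the server (the data, zip codes, and function name are mine, not the project’s):

```javascript
// Stand-in for the server lookup; the real page fetched this via AJAX.
const countiesByZip = {
  "53202": ["Milwaukee"],
  "53051": ["Waukesha", "Washington"]
};

// Build the <option> markup for the county dropdown from a zip code.
// Unknown zips return an empty string, which clears the dropdown.
function buildCountyOptions(zip) {
  const counties = countiesByZip[zip] || [];
  return counties
    .map(function (county) {
      return '<option value="' + county + '">' + county + "</option>";
    })
    .join("");
}
```

In the page, this would be wired to the textbox’s change event, replacing the dropdown’s contents with the returned markup.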

Then I went back to the list pages.  The navigation was working well, but I needed to give users the ability to claim items off the list.  We wanted a certain look, so I used ImageButtons for the task.  Unfortunately, errors in the postback process kept the browser from sending back the ImageButton’s CommandName and CommandArgument properly.  OK, so it was time to do that manually too, using hidden fields to store what the CommandName and CommandArgument should have been.

Then I noticed that after I had logged out, I couldn’t log back in.  I did some digging, and it appeared that the AJAX load of the page confused ASP.NET into thinking that any postbacks after logging out should go to the logout page first.  So no matter what happened, the user would be logged out immediately after trying to log into the application a second time.  Then I noticed the same problem on some of my list pages – the browser would try to post back to the page in the URL, not the page on the screen.  At that point I gave up and removed jQuery Mobile from the application.

jQuery Mobile seems like a great tool with a lot of promise.  It does not mix with ASP.NET WebForms, however.  jQuery Mobile was built for a pure web environment, not one with the goofy postback model that WebForms uses.  If you’re looking to mix the two, save yourself some time and frustration and skip one or the other.  The next time I do a mobile web project, I will definitely use ASP.NET MVC with jQuery Mobile.  The integration between the two should be much more natural.

Sunday, December 4, 2011

Aligning business and IT strategy is wrong-headed

Up until recently, I’ve been talking (and thinking) about how to align IT and business strategies.  I’ve come to realize that this is the wrong approach for one simple reason: it implies that IT and the rest of the business have separate strategies.  What we need to do is make the strategies of the IT department and each business function the same strategy.  That won’t be easy, though, given the disparate goals of each group.  To get an idea why, let’s look at strategies and approaches that are common in the IT industry:
  1. Do extremely high quality work, because software outages and security leaks reflect poorly on the team and can lead to long hours at unexpected times
  2. Focus on creating methods that elicit the best requirements from the business stakeholder and minimize changes, such as Agile and Scrum
  3. Create methods and tools that make technology-centric functions (like creating a new server or writing code) more efficient and less time-consuming
These strategies are not business-focused; they’re technology-focused.  To see why, let’s examine them:
  1. It is often the case that doing the absolute best work possible isn’t appropriate for a given situation.  Technologists often want to do more work than necessary up front to avoid problems which they believe reflect poorly on them.  (And yes, there’s a sense of pride emphasized in the wrong place, too.)
  2. In most of my consulting experiences, the customer wants to keep costs as low as possible.  This is understandable to a point, but when cutting costs means neglecting the requirements gathering process, the result is poor user experiences and costly rework.
  3. I think it’s great that technologists are continually working to improve their craft.  It’s great that we’re able to deliver more functionality with less work because of these efforts.  But the focus on understanding and using software is coming at the expense of knowing and understanding business needs.
So how is aligning IT and business strategy wrong-headed?  Because it implies that IT and business strategies are different strategies that need to be brought together, when we should be finding ways of making the two the same strategy.  IT and business need to work together in order to find the right balance between cost, security, efficiency, performance, and all other aspects of software development.  How would the above scenarios change with a different approach?

  1. Create software whose quality reflects the need of the moment.  Single-use, non-mission-critical software doesn't need to be as robust as high-use, mission-critical applications.  Always have a cost/benefit analysis in mind when deciding how robust to make your application.
  2. Instead of focusing on gathering the requirements from the business leader, we as technologists need to do a better job of gathering requirements from the actual end user.  Knowing your customer is critical to getting the user experience right the first time.
  3. We need to make sure that we aren’t just focused on delivering better code, though, and be sure we’re focused on delivering better software.  To start, there should be as many user groups focused on software usage as there are on software writing.

So how do we make this happen?  Both business stakeholders and technologists contribute to the problem by pushing their own agendas over the good of everyone.  Merely telling technologists that they need to be more business-focused, or business stakeholders that they need to be more tech-savvy, is more likely to elicit negative feedback than to solve any issues.  I like the idea of breaking up IT so it is no longer its own separate department.  In that case, we would still need a corporate IT department to set company-level policies and procedures, but most of the work would be done by technology professionals within each business department.  This way the people doing the actual technology implementation would be more familiar with their subject matter, reducing communication problems and the technology-first mentality.  We could also more easily make sure that the technology implementation team has the business strategies and priorities foremost in their minds.

Wednesday, November 30, 2011

Speeding up time to market for software development projects


In order to meet the ever-expanding needs of business, technologists need to shorten their delivery times.  This is no less true for software development projects.  Perhaps I’m looking in the wrong places, but I’m seeing a lot more articles on the need than on ways to meet it.  So here are some things that you can start doing today to deliver business value to your users in a more timely manner:

Understand the end user
You may find it strange that my first suggestion for delivering software more quickly is to add work for the development team.  However, a significant cause of time and cost overruns in software projects is implementing requested features without understanding the underlying needs.  If you understand the wants and needs of the end user before you start, you can design a better software system the first time, significantly reducing the need for costly rework.

Right-size the development team
Don’t assume that adding more people to the development team will proportionally decrease the time-to-market of the software product.  When a team grows from one to two people, you double the number of people who need to understand the requirements; you double the number of people who need to know about changes; communication issues increase because developers must understand each other’s code; the need for project management grows; and so on.  Smaller teams are more agile, so keep teams as small as reasonably possible while still achieving the desired results in a timely manner.
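The underlying arithmetic (a standard observation, not from the original post) is that a team of n people has n(n-1)/2 possible pairwise communication channels, so communication overhead grows much faster than head count:

```javascript
// Pairwise communication channels in a team of n people: n * (n - 1) / 2.
function channels(n) {
  return (n * (n - 1)) / 2;
}

// Doubling a team from 4 to 8 people more than quadruples the
// channels to maintain: channels(4) is 6, channels(8) is 28.
```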

Right-size the development effort
Larger projects require disproportionally more effort to keep the code base easily maintainable.  If you do not make the effort to keep the code base clean on a large project (or if your team is made up of unskilled developers), the result will be a tangled mess that will always be too buggy to be shown to an end user.  Conversely, if you build your small project as if it needed to scale to a team of tens of developers and millions of users, you are almost certainly wasting time and money on unneeded scalability concerns.

When in doubt, start simple
Complex features require a disproportionate amount of effort, both in clarifying the business requirement and in creating an appropriate solution.  If the complex solution is the appropriate one for the situation, then put it in; you’ll save time later.  If not, put in the simplest solution that minimally meets the business requirements.  If your solution is complex, it is likely a sign that something is not understood (whether by the development team, the business stakeholders, the end user, or any combination of the above).  If something is not understood, it will change later.  Save time and money now, and change based on feedback if need be.

Use development tools whenever possible
I’m a big fan of using development tools, whether they are libraries such as jQuery, frameworks such as NHibernate or CSLA, or a hand-rolled tool like a JavaScript generator I made a couple of years ago.  These tools are often made by developers, for developers, and they speed up development time considerably.  Many good developers disdain these tools as inefficient and inelegant, but the minor losses in elegance and execution efficiency are more than made up for by the gain in ROI.  These developers need to remember that their job is to deliver working software as efficiently as possible, not to deliver the cleanest code base possible.

Be careful in choosing third-party frameworks/plugins
And by third-party tools, I’m thinking of CMS systems, eCommerce systems, or user-interface control libraries – essentially, anything that promises full features out of the box.  These tools are great if, and only if, you plan to use the features exactly as they’re intended to be used.  Modifying these systems is often more trouble than it’s worth.  Too often, these tools are crammed with mostly-functioning features rather than focused on providing a few completely functioning ones.  The costs of updating and maintaining features built with these tools can easily surpass the costs of creating and maintaining the same functionality from scratch.  I’m not saying that you should avoid these tools; I’m just saying that you need to think twice before trying to bend them to do something they weren’t explicitly designed to do.

Push your boundaries a little bit each project
Technologies for creating applications faster, better, and more reliably are being created every day.  It can seem overwhelming at times to see how many new tools there are to evaluate.  I can’t speak for all developers, but I can’t get a good sense of what a technology is and is not good for without using it in a live-fire situation.  Starting a new project that depends entirely on new tools is a recipe for disaster, but starting a new project with no new tools is a recipe for obsolescence.  With each software project you encounter, try something new.  Keep the scope small to keep the risk down, but keep the effort meaningful.  By constantly evaluating new techniques, you are putting yourself in a position to deliver greater value in the future.

Tuesday, November 29, 2011

ASP.NET WebForms vs. ASP.NET MVC


When .NET came out in the early 2000s, Microsoft replaced Active Server Pages with ASP.NET WebForms.  That was the preferred approach to web development in the Microsoft world until recently, when ASP.NET MVC came out.  (Silverlight is also an option, but I view it as a way to beautify an existing site, not as the foundation for one.)  There is a lot of material out there describing the differences between these platforms, but almost none of it is targeted at the high-level business executive in technology, such as the CTO.  So here is my attempt at outlining the differences without getting into too much technical jargon:

ASP.NET WebForms
WebForms is by far the oldest technology in the Microsoft web stack still actively used today.  (No, I wouldn’t consider classic ASP as still actively used.)  The framework was originally created to help WinForms developers write programs for the web without needing to learn some of the peculiarities of web development.  While Microsoft could not hide these peculiarities completely, the framework made some tasks a lot easier.  Unfortunately, to make this happen, the creators of ASP.NET WebForms needed to send a lot of information to the browser, leading to potentially very large (and therefore slow) pages and making highly interactive web sites more difficult to build.

ASP.NET MVC
Programmers eventually found some of the tools that WebForms provided cumbersome and awkward, so Microsoft came out with a leaner framework in the form of ASP.NET MVC.  The benefit is that MVC makes it easier to write JavaScript, saving trips to the server and improving the overall user experience.  The drawback is that the MVC developer needs to do some of the work that WebForms would do automatically.

Which one is better?  That, of course, depends on your circumstances.  Here are some pros and cons:

WebForms pros
  • Good for getting something up and running quickly
  • Easy to understand for WinForms developers
  • Even in the hands of someone only marginally competent, code can still be understandable
  • Relatively easy to create pages with a lot of functionality without making the HTML too confusing for the developers
WebForms cons
  • A lot of HTML gets generated that slows down the browser and gets in the way of AJAX requests
  • A lot of code in inaccessible code-behinds can lead to more testing through the UI than should otherwise be necessary
  • Steep learning curve for those coming from other web programming languages
MVC pros
  • Clean HTML means faster load times and easier JavaScript/AJAX
  • More unit testable
  • Shallow learning curve for those coming from PHP or Ruby On Rails
  • Cleaner HTML leads to cleaner CSS
  • More easily integrates with third-party JavaScript and CSS tools
MVC cons
  • Can lead to unreadable code more easily than WebForms
  • Pages with complex functionality will have complex HTML for the developers
  • More tools/blogs/help available that focus on WebForms
In short, developer skill level aside, if you’re creating a highly interactive site, MVC is the more natural tool.  If you’re creating a site with a lot of functionality on each page, WebForms is the more natural tool.  When in doubt, go with the tool that you or your developers are most comfortable with, unless you’re deliberately evaluating the other option.  There is nothing you can do in one framework and not the other; there are just some things that are easier in one or the other.  I personally prefer to work in MVC because of the cleanliness of the code it generates, but my bias is that I prefer to make my sites highly interactive.

Sunday, November 27, 2011

The difference between good code and good software

I’ve heard a lot of people speak and read a lot of blogs about how to write code.  Surprisingly, very few of these involved discussions on how to create better software.  And no, writing good code (or even great code) does not necessarily result in good software.  To see why, here are some characteristics of good code:
  • Easy to read and understand
  • Easy to maintain
  • Appropriately unit tested
  • Performs reasonably quickly – no tangled logic that slows the application down
  • Errors are few, but easily fixed when found
To contrast, here are some characteristics of good software:
  • Intuitive, easy to use with little to no training
  • Meets all of the needs of the user (within reason), no work-arounds needed
  • Performs reasonably quickly
  • Errors are few, but easily understood when found
If I as a developer want to learn how to write code better, I can find books, user groups, or blogs pretty easily.  But better code doesn’t result in a better user interface, nor does it directly result in a better user experience.  You might be thinking that developers write the code and designers design the interface, solving the problem of developers not knowing how to design.  As a consultant, I’ve worked on enough applications to know that designers don’t get involved nearly as often as they should.  When they are involved, they too often do not know the actual needs of the end user, and instead focus on making the system look good and work well for their co-workers.

In summary, a primary criterion for determining the quality of code is whether the developer can understand it.  That criterion is easy to evaluate, since most developers are trained to look for it.  The primary criterion for determining the quality of software is whether the end user can understand it.  That is much harder to evaluate, because all too often software is written with business stakeholder and developer/designer input but without the feedback of the end user.  Yet developers routinely assume that if you write good code, good software will follow.  It's about time we start recognizing the difference.  If we as developers put as much effort into creating better software as we do into creating better code, life for everyone involved will become significantly better.

Tuesday, November 22, 2011

Object Relational Mappers - Time Saver or Programming Hack?

If you have anything to do with development, you have probably heard about “Object Relational Mappers”, or ORMs, before.  There is a lot of information available on what they are, but most of it is overly technical, not objective, or both.  So for those of you who aren’t directly involved in writing code, here’s my attempt at describing the purpose and use of ORMs.

ORMs are essentially tools that allow developers to map information in a database to classes in the business logic layer with relatively little effort.  The productivity gains can be significant, since some ORMs help the developer create objects, data access code, and update code easily after a database change.  From a developer’s perspective, ORMs are handy because they effectively translate between the tabular storage model of the database and the “objects” that exist in code.
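To make the row-to-object translation concrete, here is a minimal hand-rolled sketch in Python (the `products` table and `Product` class are invented for illustration) of the kind of repetitive mapping code an ORM generates for you automatically:

```python
import sqlite3

# A plain class representing one row of an illustrative "products" table.
class Product:
    def __init__(self, product_id, name, price):
        self.product_id = product_id
        self.name = name
        self.price = price

def fetch_products(conn):
    """Translate tabular rows into Product objects -- exactly the kind of
    mapping boilerplate an ORM writes and maintains for the developer."""
    rows = conn.execute("SELECT product_id, name, price FROM products")
    return [Product(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (product_id INTEGER, name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES (1, 'Trumpet', 450.0)")

products = fetch_products(conn)
print(products[0].name)  # -> Trumpet
```

An ORM also regenerates this mapping when the table changes, which is where the productivity gain really comes from.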

Despite their uses, ORMs have a lot of detractors within the programming community, especially among some of the more skilled open source developers.  The specific complaints vary, but the gist of the argument is that these tools don’t generate objects or data access code that operate as efficiently as they could.  The generated code does not meet their standards of quality, and therefore, they argue, ORMs should be shunned.

Since most developers do not have unlimited time, I’d suggest using some sort of ORM whenever practical.  The productivity gains will far outweigh the performance hits in the vast majority of situations.  I would agree strongly that ORMs do not generate the most elegant code, but that is a trade-off that needs to be made to get useful functionality.  Remember, just because something does not meet your internal standards for quality doesn’t mean that it isn’t good enough for the customer.  Finally, don’t be afraid to look at different ORMs.  Choosing the wrong ORM can cause problems for the developer and increase costs in the long run.  Choose the right one (or at least an adequate one), though, and your productivity (or your developer’s productivity) should increase significantly.

Sunday, November 20, 2011

Things instrument repairmen do better than software developers


As most of my readers know, I came to software development as a career change from repairing band instruments.  I generally liked fixing instruments, but I eventually decided that it wasn’t for me.  Because I didn’t start my career writing software, though, much of my outlook on software development is rooted in the way I approached repairing instruments.  There are some noticeable differences between the development and instrument repair mindsets, so I thought it would be interesting to come up with a list of things that the average repair person does better than the average software developer.  I came up with five, in no particular order:

Right tool for the right job
As a repairman, I learned early on that not all tools work equally well in all situations.  Methods for removing dents in tubas would pretty much destroy a trumpet.  Methods for padding a saxophone wouldn’t work at all on an oboe.  I had one type of glue for saxophones, another for clarinets, and still another for oboes.  There are numerous types of hammers to take dents out of trumpets.  We did this not only because the right tool saves time and money, but also because the wrong tool could literally do more harm than good.

On the other hand, many developers don’t seem to have many “tools” in their toolbelts.  They become experts in a particular technology, and then use that technology to solve all of their problems.  They’ll bend that technology to do things it was never meant to do just in the name of staying within one platform.  Languages and platforms try to cater to everyone, rather than focusing on a particular type of problem.  More technology professionals should instead focus on the types of problems they want to solve, rather than the types of technology they want to use.

Know when to stop
Most instrument repair personnel are keenly aware of their customer’s budget.  In most cases, repairs are done for musicians and schools with small budgets, or on instruments belonging to young students.  In neither case does the customer want to pay a lot of money for the repair person to get everything perfect.  Instead, the repair person is responsible for determining which repairs are worth the money.

On the other hand, many developers try to create the best solution possible, regardless of need.  While I understand this point of view, we as technologists need to start keeping costs appropriate to the problem at hand.  There are times when making sure that the application never fails, moves almost instantaneously, and is built to withstand thousands of concurrent users is appropriate.  There are times where one or more of those is not.

Interviewing
When I was first getting into the repair business, my wife (then fiancée) was applying to several graduate schools, so I had no idea which part of the country I’d end up in.  I therefore took as many interviews in each of our possible landing spots as I could.  I noticed a remarkable difference in the quality of the interviews – some were clearly thought out, while others were clearly unplanned.  What they all had in common, though, was that I repaired something for each of them.  There was never a question of whether they would check to see if I could repair something before hiring me; the only question was which repairs were most indicative of my general skill level.

On the other hand, I find it shocking how few technologists think of asking potential candidates to write some code during a job interview.  With laptops and networks readily available, it would be easy for any company to administer a quick one-hour programming exercise to any candidate.  Yet most interviews I’ve been on involved talking about programming, maybe even a whiteboard exercise or two, but very little actual programming work.  That makes it much harder to determine whether a candidate is up to the job.
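The exercise doesn't have to be elaborate.  A purely illustrative example of something that fits in an hour and still shows whether a candidate can write working code:

```python
# Sample screening exercise (invented for illustration): given a list of
# (customer, amount) pairs, return the total spent per customer, sorted
# by customer name.  Small enough for an hour, but it exercises data
# structures, iteration, and attention to the spec.
def summarize_orders(orders):
    totals = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0) + amount
    return sorted(totals.items())

result = summarize_orders([("Ann", 30), ("Bob", 5), ("Ann", 20)])
print(result)  # [('Ann', 50), ('Bob', 5)]
```

Watching a candidate work through even something this small tells you far more than an hour of talking about programming.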

Know the customer
In order to find which repairs are appropriate for a customer’s goals and budget, a repair person needs an understanding of what those goals and budget are.  To do that, most reasonably good repair people will spend several minutes with a customer, asking what their problems are and looking at their instrument, before going over options with that customer.  Having several of these interactions a day allows repair people to become very good at eliciting the short- and long-term needs of their customers.

Software developers, on the other hand, tend to write code based on what they, not the customer, think is right or appropriate for a given situation.  This isn’t entirely the developer’s fault; I’ve certainly had clients who were not interested in having a detailed discussion of different approaches during the development process, so I was forced to guess.  I do think that experiences like these cause software development teams to deliver what they think is best unless the customer explicitly says they want to be involved; the reverse should be true.

Find creative solutions to problems
Until recently, there were very few vendors out there who provided a wide variety of tools appropriate for repairing band instruments.  Even now, individual repairs quite frequently defy the tools that can be bought.  Repairs still need to be done, though, even if there isn’t a tool out there perfect for the job.  Repair professionals, therefore, must be good at creating their own tools.  I mean that literally.  It’s not uncommon for repair shops to have raw steel, brass, and lathes available for the primary purpose of making tools when an appropriate one isn’t available to purchase.

Developers, on the other hand, are much less likely to make their own “tools”.  Instead of determining how best to fix the problem, and then building software to do that, developers more often than not hand-code the solution.  Yet tools can easily be made to simplify development tasks.  I’ve personally built a JavaScript generator that produced one copy of a script that was easy to debug locally and another that loaded quickly in production.  I’ve used code generators that built abstraction layers over databases, simplifying data access and architecture.  We need more of this ingenuity in software development, especially in the Microsoft community.
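To show how little "tool making" can take, here is a toy code generator in the spirit described above (the table and column names are purely illustrative): given a description of a table, it emits a data-access class so that class never has to be hand-written again.

```python
# A toy generator: describe a table once, emit the boilerplate class.
TEMPLATE = '''class {cls}:
    def __init__(self, {args}):
{assigns}
'''

def generate_class(table, columns):
    cls = table.title()
    args = ", ".join(columns)
    assigns = "\n".join(f"        self.{c} = {c}" for c in columns)
    return TEMPLATE.format(cls=cls, args=args, assigns=assigns)

source = generate_class("customer", ["customer_id", "name", "email"])
print(source)

# The generated source is real, runnable code:
namespace = {}
exec(source, namespace)
c = namespace["Customer"](1, "Ann", "ann@example.com")
```

A real generator would emit persistence and validation code too, but the principle is the same: build the tool once, and stop hand-coding the repetitive parts.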

A few final notes
After reading this post, you may think that I’m trying to put band instrument repair on a pedestal.  Keep in mind that my goal here is not to compare the two careers, but to show software development teams approaches to work and customer interaction that work well in other industries.  There are certainly things that the average developer does more naturally than the average band instrument repair professional, but that’s out of scope for today’s post.  And no, I don’t regret the career change; I find solving business problems using technology a very rewarding career in more ways than one.

Wednesday, November 16, 2011

Why more technologists need to know marketing

I have to admit: before I got my MBA I had very little respect for Marketing as a profession.  I thought of Marketing as a combination of Advertising and Sales, and I didn’t have much respect for either.  I thought the result of Advertising was to inundate us with commercials and magazine ads that did little to entertain and even less to sell a product.  I thought of the Sales process as the attempt to ram as many products/features down a customer’s throat as possible, without regard to that customer’s needs or wants.  My MBA studies taught me that Marketing should be so much more.  I still think of Sales and Advertising done poorly as described above (though I do have respect for Sales and Advertising done well), but what I overlooked was that it is Marketing’s job to understand the customer.

This is important to point out because technologists, as a whole, know little about how to delve into the mind of the end user or customer.  One result is that during the requirements gathering process for new software projects, I see a lot of implementation teams create a solution that matches as closely as possible what the business stakeholder asks for.  The questions during this process are not "what problems are we trying to solve?", but instead "with what technology should we implement this functionality?".  To illustrate the difference between these two approaches, here is a quote attributed to Henry Ford:
If I'd asked customers what they wanted, they would have said "a faster horse".
Ford’s customers wanted easy transportation, and were used to horses.  They couldn’t conceive of transportation solutions other than horses (or wagons drawn by horses, oxen, etc.).  Instead of remaining within his customers’ experience, Ford met their underlying needs with a novel and unexpected solution (affordable automobiles).

How does this apply to software development?  Here are some examples of common development efforts and how they commonly don’t match the user’s needs:
  • Designing a form for data entry needs to be user-specific.  I can say from personal experience that different users want different things in their user interface.  A form that would work for a typical engineer would be downright confusing to a typical retiree, and something a retiree would like would be cumbersome to a typical software developer.  Software developers and designers need to be more sensitive to these differences.
  • Upgrading an application from a legacy technology to a newer one should involve a redesign.  Most legacy systems have been “organically grown”, which is a euphemism for “when stuff is added, it is put wherever there’s space”.  (Yes, I’ve been on all sides of that situation.)  Business stakeholders in this situation usually ask for a strict move to the new technology, since that saves time and money in the short run.  But as long as you’re rebuilding, why not redesign the application in a way that best meets your user’s needs?
  • The technologies or platforms used for a particular project are often chosen based on product familiarity rather than the user’s needs.  The best example I can come up with is that I read somewhere (and I wish I remembered where) that if you’re trying to target teenagers, you’ll get a wider audience by creating a Facebook site than by creating a traditional web site using a Content Management System.  Yet how many of you, when tasked with creating a site for a company that caters to teenagers, would think “CMS” first before even thinking “Facebook”?
The bottom line is that in order for software developers to create better applications, we need a better understanding of marketing.  If that doesn’t convince you, consider this: of the five trends in IT for the next half decade, three (Social Computing, Mobile Computing, and the Consumerization of IT) are directly related to Marketing.  The need for more marketing in technology is here, so it’s time to get on board.

Sunday, November 13, 2011

Project Risk and Fixed Bid vs. Time and Materials

When I was getting my MBA, I read that by outsourcing projects on a fixed-bid basis, you (as a company hiring an outside firm) lower your risk in the project.  This is true if you believe that your primary concern in the completion of your project is whether it is completed on time and on budget.  But your primary goal should be delivering business value, not meeting a budget.  (Yes, I understand that budgets are important.  Keep in mind, though, that a budget should only be a means for planning resources and comparing ROI, and meeting a budget does not guarantee that you receive business value.)  By choosing the fixed-bid route, you are actually increasing your risk of not delivering business value.  It’s not hard to see why if you look at the process from the point of view of both the business and the implementation team.

When a project is on a fixed bid, the implementation team is incentivized to keep the scope as small as possible.  This obviously makes it more difficult to make changes if one or both sides feel that the change alters scope.  My experience has been that scope change requests on fixed-bid projects result in hurt feelings on one or both sides, doing little to build a feeling of teamwork between the parties involved.  This results in significant push-back when scope changes, even necessary ones, are requested.  When a project is billed on a time and materials basis, the implementation team is more willing to change scope because the focus becomes meeting the business stakeholders’ wishes rather than avoiding losing money.  (There is a risk that your implementation team will be less efficient when billing on time and materials, but if they are concerned about building a good reputation and earning your repeat business, believe me, they will do what they can to deliver your functionality in a timely and efficient manner.)

On the other side, it follows that the business has more control over the project when it is billed on a time and materials basis.  In this case, it is the business that bears the financial risk; therefore it has the right to control the scope and delivery of the project.  Lastly, by running a project on a time and materials basis, the stakeholders are forced to determine which features are important to them and which are merely nice-to-haves.  The reason should be obvious: on a fixed bid, the goal of the business team is to get as much value for their money as possible, but cramming as many features as possible into the implementation doesn’t necessarily result in a better product.  Having to pay for each feature individually forces the business team to focus on what’s really important.

I am not trying to argue that projects billed on time and materials have less risk than fixed-bid projects.  Rather, it is important to point out that one approach does not have more risk than the other; each has its own risks which should not be ignored.  In the end, choose the approach that works best for you, your team, and your outsourcer, but don’t assume that fixed bid is best just because it appears to have low risk on the surface.

Wednesday, November 9, 2011

The dangers of gold plating

The term “gold plating” (when not actually referring to covering something with a thin layer of gold) comes from Lean manufacturing, and essentially refers to the process of creating a part better than its specifications require.  Applied to software development, it means putting unneeded effort into a particular component of the software, such as adding an unrequested feature, making a needed feature unnecessarily complex, or fixing performance issues that don’t need fixing.  There are numerous reasons why a particular developer might choose to gold plate a certain feature, but some of the more common are:
  • Desire to meet a self-directed (and situation-agnostic) sense of quality
  • Desire to avoid getting angry phone calls due to an application crash caused by rare circumstances
  • Lack of understanding of what the end user actually needs
I’ve written about what happens when a developer has a misaligned sense of quality in software in a previous post.  It should be enough to say, though, that those involved in creating software need to start defining quality as lowering the cost per needed feature, not creating the best code base and/or user experience possible.  If a particular feature is needed, then great, by all means do it.  Nobody likes making or using mediocre software.  There are certainly cases when that extra effort pays off in the long run.  Just be sure that you are putting this extra effort in because the user needs it, not because the software team wants it.

I certainly can sympathize with the desire to avoid an angry phone call from a user about something the application is or is not doing.  Bubble-wrapping the application is not the answer, though, for a variety of reasons.  In this case, the software architects and the business stakeholders need to have a frank conversation about which risks are acceptable and which are not.  The risk tolerance should change based on the type of application involved.  For example, a software team should be much more careful about making sure that every possibility is covered when creating an emergency responder system for a building than when creating a non-critical internal social app.

Finally, a lot of gold plating occurs because the software team genuinely thinks that a user would want a particular feature.  However, most people I know (including me) do a very poor job of understanding what a user wants without asking.  To do a good job of understanding the end user’s needs, the development team must go to the end user and understand what it is they are trying to accomplish.  Once you understand their goals, then and only then can you start making educated guesses as to which features are needed and which ones are gold plating.

If none of that convinces you, remember that 40% of software features go unused, according to a not-so-recent Gartner study.  Not only does adding these features add to the cost of developing the product in the first place, but their presence also adds to the cost of extending and maintaining the product.  If you stick only to the features that are needed, you will reduce costs, increase ROI, and have fewer headaches in product maintenance.

Monday, November 7, 2011

Can technical project managers be effective without hands-on development experience?

If I were to come up with an ideal set of skills for a project manager, it would include the following, in order of importance:
  1. Ability to create a full project plan
    1. Determining the critical path
    2. Determining which resources are appropriate for available tasks
    3. Planning the project execution timeline
    4. Creating milestones
  2. Tracking project status
    1. Determining the cause of a time or budget slip
    2. Determining what action is appropriate in the event of a slip
    3. Making sure that action is taken and that it has the desired results
  3. Ensuring that everyone involved in the project has what they need to move forward
    1. If not, removing obstacles and providing needed information
All of these need to happen in any non-trivial project. However, between hearing developers speak at user groups, hearing developers talk about project managers at other companies, and reading blogs about project experiences, I’ve come to believe that too many project managers out there reactively track status by asking the team for periodic updates, and do little more. This approach is a problem because:
  1. Critical path planning gets done poorly, when it is done at all
  2. Poorly done critical path planning means resources aren’t added when they are needed
  3. Asking the team for a status only works when the team is both qualified and aware of all the tasks that need to be done
  4. Problems are never prevented because they are never seen until it is too late
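For the non-developers reading, the critical-path planning mentioned above is mechanical enough to sketch in a few lines: the critical path is the longest chain of dependent tasks, and its length is the shortest possible project duration.  All task names and durations below are invented for illustration.

```python
from functools import lru_cache

# Invented example plan: task durations (days) and dependencies.
durations = {"design": 5, "build": 10, "test": 4, "docs": 3, "ship": 1}
depends_on = {
    "design": [],
    "build": ["design"],
    "docs": ["design"],
    "test": ["build"],
    "ship": ["test", "docs"],
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # A task can start only after all of its dependencies finish.
    start = max((earliest_finish(d) for d in depends_on[task]), default=0)
    return start + durations[task]

project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # design -> build -> test -> ship = 5+10+4+1 = 20
```

A project manager who understands this structure knows that shortening "docs" changes nothing, while a one-day slip in "build" slips the whole project, which is exactly the insight reactive status-gathering never produces.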
However, without intimate knowledge of the software development process, identifying and fixing issues before they affect time or budget is difficult. This is true even for project managers who know project management techniques but lack the specific technology and/or business domain knowledge. Yet because of the expectations I outlined at the outset of this post, project managers are very often the ones held most responsible for the success or failure of a project. I cannot think of a position with a greater disconnect between expectations and the ability to meet them.

I can’t think of an easy fix for this, though. Making sure that all software project managers have hands-on experience in software development might be a solution, but that isn’t going to happen any time soon, in no small part because the skills necessary for being a good project manager are quite different from the skills necessary for being a good developer. I’d guess that most project managers would not like hands-on development, and that most developers would not like hands-on project management. Moving to a Scrum environment might be an option too, where the development team takes over the status tracking and the business stakeholders determine scope, but that makes it harder to budget resources at the enterprise level. Does anyone have experiences they'd like to share of techniques that did or did not work particularly well?

Friday, November 4, 2011

How IT is not aligned with business

One of the hot topics on IT-related web sites lately is aligning business and technology goals.  While I think there are plenty of things business people can do to get more out of their technology (and that could be an entire discussion on its own), I’d like to focus in this post on what technologists can do to bring more business focus to their jobs.

Most technologists I’ve spoken to believe quite strongly that they understand business and have what it takes to design business applications, yet most of them fail in doing so.  Here are some examples:
What’s the best way to develop a mobile application?  Web, native application hand-written for each environment (Android, iPhone, Blackberry, etc.), or a third-party solution that creates custom apps for each environment?
That was a question I overheard at a user group meeting describing the strengths and weaknesses of different third-party solutions.  Why does this developer think he’s business-focused?  Implicit in his question about the best way to get information to users of mobile devices is the balance between time-to-market (and therefore development costs) and creating a smooth, understandable interface for his users.  Where he needs to improve is in understanding that the appropriate technology depends on the circumstances.  When time and budget are the primary issues, going with the easier-to-produce web site built for mobile devices is likely the solution.  When impressing your customers or creating the most intuitive interface is needed, creating native mobile apps for each platform used is probably the way to go.  In all other cases, you have to determine the approach that best balances the needs of your users with your budget.
Which is better, ASP.NET WebForms or ASP.NET MVC?
This was a real discussion I had with two very good technologists.  One argued that WebForms was the better technology because it had all sorts of inclusions increasing developer productivity.  The other suggested that MVC was the better technology because it resulted in cleaner HTML, and therefore you could more easily write highly interactive websites with it.  Implicit in each argument was a business need: the WebForms guy was thinking about cost per feature, while the MVC guy was thinking about creating websites with the “wow” factor at the lowest possible cost.  Here again, neither technologist considered that the appropriate technology would change with the situation.
Do you have any SharePoint needs?  Do you need help with reporting?
One time I was on a sales call and the salesman asked these questions, pretty much in these words.  From the sales standpoint of a consulting organization, these questions make sense because the needs they uncover are easy for us to fill.  Do you have any SharePoint needs?  If so, great!  I have a SharePoint developer right here ready for you….  Unfortunately for sales people, business problems usually aren’t so straightforward that you can just drop a piece of technology into the business and reasonably expect problems to be solved.  Instead, we as technologists should look at the underlying business problems first.  For example, instead of asking “Do you need help with reporting?”, ask “Do you feel that the information you are getting is sufficient for your decision making?”  Then you can start a conversation about what the problem actually is.  Is the business having problems interpreting information it already has?  Is the business having problems storing and digesting the information it is getting?  Is the business having problems gathering the information it needs in the first place?  All of these problems can be solved with technology, but reporting will not always be the solution.

To start solving this problem, we as technologists should start with the following:
  • Understand the business problem first before trying to create a solution.  As in the example above, insufficient knowledge cannot always be solved with reporting.
  • Explicitly state and agree on acceptable risk (downtime, security, errors) and associated costs.  Most developers I know have their own risk tolerance, but rarely actively think about it, much less communicate it effectively.
  • Remember that your job is to solve a business problem, not implement a specific technology.  Just because a particular solution is easier to implement or cooler to create, doesn’t mean that it is appropriate for the circumstances.
Obviously, following these three steps alone will be insufficient to bridge the gap between business and IT, but if we all could do it, it would at least be a start.  Keep watching the blog, as I’m planning a post about what training might be appropriate for technologists.

Wednesday, November 2, 2011

Why I don't think of myself as an ASP.NET developer

I’ve been asked by several people recently whether I’m interested in being a mobile application (or SharePoint, database, Silverlight, etc.) developer.  I’ve never really liked that question, but I didn’t understand why until recently.  The problem is that the question assumes that I’m going to focus on a particular technology, when really IT professionals should focus on becoming experts at solving particular business problems.  Let me explain.  Programming is programming, whether you’re programming for the web, desktop, mobile, firmware, whatever.  The languages differ, and some of the particular problems differ, but a reasonably knowledgeable developer dedicated to learning should be able to pick up the skills specific to a new platform relatively easily (especially with the number of technical bloggers out there).  So while knowledge of a particular technology, such as mobile application development on a particular platform, is useful, it’s not the most useful way of determining expertise.

Instead, we need more technology professionals who focus on a certain business need.  Such business needs might include external marketing, finance, or competitive research.  To be especially efficient, such a technologist may choose to focus further on a particular industry.  For example, an external marketing specialist may know how to lead conversations to learn what user interface is appropriate for a public-facing app, and be able to design something that both delights the user and is technologically feasible.  A finance specialist may know accounting and finance rules, but also would be extremely fluent in programming languages and platforms that best fit financial applications.  In other words, such a specialist would be equally knowledgeable of the technology they are using as well as the business function they are serving.

While people like this exist, if technologists wish to be partners in shaping business strategy, we need significantly more.  Having more technology people value MBAs would be a good start, but by no means would be a cure-all.  So my question is, how should technologists go about getting the business knowledge necessary to move outside pure technology?

Monday, October 31, 2011

A $5 solution to a $5 problem

When I was in band instrument repair, I dedicated myself to becoming the best repair technician possible. I was constantly looking for ways to make my instruments respond more quickly, play with a fuller tone, etc. I got pretty good at it, if I do say so myself. However, I found that it’s not possible to do that quality of work on every instrument and still make a profit. Most instruments just do not deserve that attention to detail. As a telling example, I would look at a student flute, which cost the store about $250, and decide that it needed $600 worth of work to get it in order. And that was typical for brand new flutes. Yes, the manufacturing was shoddy, but I had lost any sense of context for quality in my work. I could only define “quality” as the best possible end product, which I now realize may or may not have been what my customer needed.

Luckily, I found a new balance in computer programming. Yes, it is satisfying providing that super-fast, bullet-proof, wonderfully designed software application to a business user. Thankfully I’m just as happy providing an adequate solution, knowing that in some cases the business stakeholder values low cost and short time-to-market more than they do software perfection. My goal is to meet the user’s needs in the best way possible, not provide the best end-product possible.

I’m quite disappointed, however, to see a vast number of developers doing the software equivalent of putting $600 worth of work into a $250 flute. These developers are making the same mistake that I did as a band instrument repairman – losing sight of the customer’s needs in order to satisfy their own arbitrary set of standards. I think business stakeholders are partly to blame for this (for reasons I will get to in a future blog post). However, we as developers need to renew our focus on making sure there is a good reason for each and every line of code we write. If it does not add more value than it costs to write, then it doesn’t belong in the project.

Wednesday, October 26, 2011

If coding seems tedious, then there's probably a better way...

I have to admit that I hate writing repetitive code. I’m not using the word “hate” lightly here. Whenever I’m asked to write a particularly boring chunk of code, I have to listen to rock music just to keep my mind occupied enough to finish the task without excessively checking my e-mail, talking to others, etc. If it gets really bad, I’ll go out for a brisk walk to burn off some energy, just so I’ll be able to sit for the amount of time necessary to get the job done.

Luckily I get to avoid writing tedious code most of the time. Usually there are ways around it. Getting data from the database and loading it into an object can almost always be automated by using an Object Relational Mapper. Getting information from an object onto a web page can be automated by using reflection, iterating through the properties, and setting the fields automatically. (If you don’t know what that means, it’s OK, it’s not central to the point of this post.) I’ve even written my own programs that will generate JavaScript for me so I don’t have to spend as much time debugging it.
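To make the reflection idea above concrete, here is a minimal sketch. These posts are .NET-flavored, but Python keeps the example short; the `Customer` class and its field names are invented for illustration.

```python
class Customer:
    """A hypothetical model object with a few fields."""
    def __init__(self, name, email, balance):
        self.name = name
        self.email = email
        self.balance = balance

def bind_fields(obj):
    # Reflect over the object's public attributes instead of
    # hand-writing one assignment per field.
    return {attr: getattr(obj, attr)
            for attr in vars(obj)
            if not attr.startswith("_")}

fields = bind_fields(Customer("Ada", "ada@example.com", 12.50))
# Adding a new property to Customer requires no change to bind_fields.
```

In a real UI layer the resulting dictionary would be used to populate page controls; the point is that new fields flow through automatically instead of requiring another line of tedious binding code.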

A nice benefit of my avoidance of boredom is that by automating the boring stuff, my code is almost always faster to write and less error-prone than the more tedious equivalent. It’s a win-win, right? Why don’t more programmers think this way? You will get a few programmers who won’t use ORMs or reflection because they’re slower to execute, but in most cases I’m not going to lose sleep over a few hundredths of a second. I think programmers mostly don’t automate the boring stuff because they stick to what they know rather than experiment with ways to work faster. They seem to want their automation done for them. My question for everyone, then, is: how do we motivate more programmers to be more creative in increasing their own productivity?

Monday, October 24, 2011

Measuring the success of software projects

In measuring software project success, many companies use two relatively simple metrics:
  1. Is the project on time?
  2. Is the project on budget?

Both of these are easy to measure, and we measure them in part because both of these are more difficult to achieve than we’d like.  However, neither of these two measurements gets at the heart of what makes a typical software project truly successful.  There should be one primary thought driving any software project that stakeholders on all sides should use to determine its success:

Does my project deliver more business value than the cost of building it?

Your end goal should primarily be delivering business value, not meeting a budget.  I’ve been involved in projects that didn’t meet time or budget limitations, but were still a business success because they delivered the promised business value, allowing for company scalability, greater market visibility, etc.  I’ve also been asked to rescue projects that were a success in terms of meeting time and budget, but failed miserably in providing the desired business value.

So why do many of us still focus on time and budget as primary measurements for software project success?  I think largely because they are easy to track and measure.  Determining ROI requires knowing the costs and revenues associated with the current process, measuring and calculating all costs associated with development, and measuring the improved costs and/or revenues due to the end product.  Separating the costs and revenues due to software alone from those generated by other causes (marketing efforts, efficient or inefficient workflows, etc.) can be nearly impossible.  Pair this with the notion that software projects are ends in themselves, and time and budget become more important measurements of project success than they should be.
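Once the hard part described above is done (attributing costs and revenues to the software itself), the ROI comparison is simple arithmetic. A quick sketch, with entirely hypothetical figures:

```python
def roi(business_value, total_cost):
    # Return on investment as a fraction: (value delivered - cost) / cost.
    return (business_value - total_cost) / total_cost

# Hypothetical figures: a $200k project whose end product saves $260k
# in process costs over its evaluation period.
print(roi(260_000, 200_000))  # 0.3, i.e. a 30% return
```

The formula is trivial; the paragraph above explains why its inputs are not.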

The alternative requires either a strong project champion that is able to understand all of the costs and revenues from both the business and implementation sides or close collaboration between the business and implementation leaders to determine proper measurements of success.  Neither of these is common or easy to accomplish.  To get an idea of what I mean, here are a couple examples of better measurements of project success:

  • A new internal document tracking system’s success might be measured by increased productivity and worker satisfaction
  • A new content management site for a professional services company might be measured by the number and quality of unsolicited sales leads

Have any of you tried to measure project success more along these lines?  Did it work?  Why or why not?

Wednesday, October 19, 2011

Skills vs. Knowledge (and how it affects unemployment in IT)

If you trawl the job board discussion groups targeted at IT professionals, you’ll see a large number of seemingly qualified, out-of-work technologists.  With an unemployment rate hovering near 4.5% in the industry, how is this possible?  After looking at the openings and the unemployed, I think the answer is fairly simple:

Employers are actively looking for skills, but employees want to emphasize their knowledge.

What do I mean?  By “skills”, I’m talking about technology-specific things that could easily be read in a book.  Being able to build a custom control in ASP.NET or being able to bend Entity Framework to create much more readable code are “skills”.  Being able to determine when it is appropriate to build a custom control, or when one should choose to abandon a framework rather than continuing to bend it would be “knowledge”.

The difference is subtle, but important when dealing in technology because the skills a developer needs to write custom solutions change quite frequently, but the knowledge does not.  Since this is the case, employees often want to market their knowledge and expect employers to train on skills, which is rather foolish.   Employers, on the other hand, do not wish to spend resources training for skills, but incorrectly think that knowledge is unimportant (or at least not worth paying for).  And so you get this strange combination of thousands of seemingly qualified IT professionals looking for jobs in a market with thousands of openings.

Saturday, March 12, 2011

Interview questions revisited

I came across this list of "oddball" interview questions asked last year:

http://www.glassdoor.com/blog/top-25-oddball-interview-questions-2010/

I look at most of these questions as a waste of time (though some more than others), both for the interviewer and interviewee.  The people asking the questions might tell themselves that these questions are designed to understand how the interviewee thinks, but in reality they give misleading information to the interviewer and tell the interviewee nothing at all about the expectations for the job.  However, there are a few of these questions that I would like to point out as possibly useful.  The questions that involve mathematics and/or logic (questions 9, 10, 11, and 18) might give insight into the interviewee's skill in logic, which would certainly be useful to all workers, though especially to programmers.  I was fascinated with this question (#20):

“You are in charge of 20 people, organize them to figure out how many bicycles were sold in your area last year.” 


My first reaction was to think that this too was a question that had more potential for misinterpretation than it had for understanding the interviewee.  After further thought, I began to realize that not all problems a business leader will face will be similar to problems faced before, nor will they be predictable.  Being able to adapt to new and unusual situations would definitely be a valuable skill in a business environment, and needing to organize people to determine bicycle sales would fall in that category.  I'm still not sure that this question is the right one to get at this ability, but I'm unable to come up with a better one at this point.

Saturday, March 5, 2011

Six Sigma for programmers

There seems to be a lot of confusion among some IT professionals as to what exactly Six Sigma refers to.  Part of the confusion seems to come from the fact that it is usually used in manufacturing, and part of the confusion seems to come from the belts that are associated with it.  I won't get into a detailed statistical explanation of what Six Sigma is, but I can at least give an overview of what it is and why IT professionals should care.

To start, those that came up with the Six Sigma idea believed that in order to achieve high quality, one must first achieve consistency.  The steps to achieve this consistency, often known by their acronym "DMAIC", are as follows:
  • Define - One must be able to define exactly what success looks like in order to determine if it has been achieved.  Otherwise two people will look at the same results and will likely have different opinions on whether it was a success or not.
  • Measure - Unquantifiable improvements can lead to ambiguity and disagreement.  Therefore some objective way to measure change must be found.  (If you're curious, the term "Six Sigma" refers to fitting six standard deviations, or "sigmas", between the process mean and the nearest specification limit.)
  • Analyze - One must determine the causes of the inconsistencies in order to fix them.
  • Improve - Being able to find the cause of problems doesn't do much good unless one can fix them.
  • Control - We are all creatures of habit, and unless there is some driving force for permanent change, we will likely fall into old (bad) habits.
In order to truly use Six Sigma as it was meant to be used, one must limit the number of defects to 3.4 per 1,000,000 opportunities.  To be honest, I have not yet figured out a way to apply Six Sigma standards to software development.  Software quality is tough to define.  You can focus on the number of unhandled exceptions (or application crashes), but this may encourage behaviors that are not beneficial to the project as a whole.  You can focus on keeping response times down, but I could easily imagine scenarios where one might choose to allow less-than-ideal speeds in exchange for increased functionality.  Focusing on the number of met requirements may be beneficial, but controlling such a process would be extremely difficult.  Using Six Sigma to control code quality may be more appropriate, but I think many developers already focus too much on code quality and not enough on the other aspects of software development.  (Though that's a topic for another blog.)
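For the curious, the 3.4-per-million standard can be checked mechanically. The sketch below invents the release and requirement counts, and treating each checked requirement as an "opportunity" is just one possible mapping, not an established practice:

```python
SIX_SIGMA_DPMO = 3.4  # defects per million opportunities at the six-sigma level

def dpmo(defects, units, opportunities_per_unit):
    # Defects per million opportunities, the standard Six Sigma metric.
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical: 12 defects found across 500 releases, each release
# verified against 40 requirements ("opportunities").
rate = dpmo(12, 500, 40)       # 600.0 DPMO
print(rate <= SIX_SIGMA_DPMO)  # False - nowhere near six sigma
```

Even this made-up process, at 600 DPMO, misses the six-sigma bar by two orders of magnitude, which illustrates why the standard is so hard to apply to software.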

Despite the fact that it may be too difficult to apply Six Sigma standards to software development, it is certainly not too difficult to apply Six Sigma methods.  Without defining and measuring your problem, scope creep and confusion will reign.  The benefits of the last three steps should be self-evident.

How would a programmer use this to his or her advantage?  Let's start with the performance example.  If someone wanted their web site to be "fast", we should define and measure how fast that is.  1 second per page load?  3 seconds?  Keep the page load time under 5 seconds for those with 56K modem connections?  Next, for the pages that are too slow, analyze the performance issues to determine the cause.  Do the pages have too much information?  Is the database connection too slow?  Is the business logic poorly written?  Are too many business rules getting in the way?  Once the problem has been determined, improve the situation by fixing the specific problem.  (Since improving performance could be a book in itself, I'll hold off on that for now.)  Finally, be sure to control the process so that slow pages are fixed before being pushed to production, perhaps by running automated tests.  Be sure to document any exceptions you may wish to make to this rule, or break the page up into smaller ones in order to achieve the same functionality.
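As a rough sketch of what that "Control" step might look like, here is an automated check that fails when a page exceeds an agreed load-time budget. The budget, the stand-in `render_page`, and its sleep are all invented for illustration; a real check would time requests against a staging URL.

```python
import time

PAGE_LOAD_BUDGET_SECONDS = 3.0  # the Define/Measure steps: an explicit, agreed target

def render_page():
    # Stand-in for real page rendering; in practice you would hit a
    # staging server or invoke the actual view code.
    time.sleep(0.05)
    return "<html>...</html>"

def check_page_load_time(render, budget=PAGE_LOAD_BUDGET_SECONDS):
    # The Control step: flag slow pages before they reach production.
    start = time.perf_counter()
    render()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget

elapsed, ok = check_page_load_time(render_page)
print(ok)  # True for this stand-in page
```

Wired into a build pipeline, a check like this turns the agreed definition of "fast" into an automatic gate rather than a matter of opinion.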

Saturday, February 26, 2011

The end of the IT Department?

I came across this post predicting the end of the IT department:

http://37signals.com/svn/posts/2785-the-end-of-the-it-department

While I think this blogger (David) is somewhat correct in that the traditional IT department is in danger, I think he has over-simplified the problems in communication between business and IT.  (Despite the fact that this is my second blog post recently rebutting a point made on a 37signals blog, I swear I don't have anything against the company itself or its bloggers.)

To start, David's characterization of IT personnel as creating roadblocks in order to keep jobs for themselves only perpetuates false stereotypes of IT workers.  One does not need to look hard to find that the "rigid and inflexible policies" put in place by IT workers are often there to protect the company's data from accidental or malicious acts.  Bad information makes Information Technology useless, so it is vital that IT workers strive to protect and maintain good information.  Secondly, I have never encountered a situation where a good worker, IT or otherwise, deliberately made things hard for users in order to save jobs.  IT workers tend to want to make things more efficient in order to move on to more exciting projects using new technologies.  Finally, it will be a LONG time before medium-sized (much less large) companies can get most or all of the services they need from "a web site somewhere".  It is extremely tough to integrate technologies, customize workflows, increase security, etc. without some sort of IT department.

I don't doubt that a few companies will choose to go the route that David outlines.  However, those that do may find themselves at a competitive disadvantage.  As I wrote in an earlier post, companies that treat their IT implementation team as a true business partner will find themselves better able to build systems that provide business value.  Companies ought to strive to have their IT operations in Stage 4; otherwise they cannot get the most out of their business.

I would agree, however, that the days of the traditional IT department are numbered.  I have not done sufficient research to say what the trends are, but I think it would make sense to break up IT's current functions into different areas.  In many companies, it may make sense for each department to have personnel with IT skills appropriate to that department.  For example, public-facing web development and design may be done by marketing personnel, or financial rules may be programmed by accounting personnel.  Rather than eliminating the IT department, IT personnel would instead be responsible for ensuring consistency in approach, ensuring safe data (in terms of both security and reliability), and evaluating future trends for company applications.

For such a scenario to occur, we must all be more knowledgeable.  At least David and I agree that the future of IT involves having more technically-savvy business people, but I would take it further than he seems to.  Business people will need to learn the basics of programming, source control, and other similar items.  IT personnel will need to expand their knowledge and experience beyond simply maintaining computers and computer systems to learn more about enterprise-level concerns and functions.  It would not be an easy transition, but I think it is a necessary one, both to get the most out of our technology efforts and to prevent us from being outsourced.

Sunday, February 20, 2011

Interview questions

Here is a list of interview questions that Scott Hanselman suggests for senior software engineers:

http://www.hanselman.com/blog/NewInterviewQuestionsForSeniorSoftwareEngineers.aspx

I usually like reading what Scott writes because he is often interesting and insightful, but I was quite disappointed with this one.  Interview questions must first be designed with an end goal in mind.  Is he looking for someone to implement the business analysts' designs, or gather business requirements themselves?  He's clearly looking for the former, but a significant number of openings I've seen for senior software engineers are looking for the latter.  Is he looking for someone to lead teams of developers, or do most of the programming from base architecture work to finishing touches?  He's not really asking questions about either, so I'm not sure what his assumptions are here.  Is he open to engineers who don't believe that object-oriented programming is the one and only true way of programming?  (That question isn't entirely fair, since it's perfectly valid for a company to say that OOP may not be the only way to program, but it is the standardized way in their company.)  And the list goes on.

In the end, interviewers should be aware of lists like these because they can give an interviewer food for thought in trying to determine which interview questions to ask.  But always keep in mind that the writer has their own biases, and they may be trying to solve problems that may have nothing to do with yours.

Saturday, February 5, 2011

Marketing for IT

Before I started my MBA, I thought "marketing" consisted of sales and advertising.  I knew little of focus groups, but I assumed that they had something to do with sales.  While this wasn't spelled out explicitly in any of my classes, I've begun to think of marketing as a process, in this order:
  1. Understand what it is your customer needs
  2. Understand how you can meet those needs
  3. Teach the customer about your product (Advertising)
  4. Persuade that your solution is worth money (Sales)
After looking at marketing in this way, I started thinking that the most important steps are numbers 1 and 2, not sales and advertising.  Yet a very large number of companies focus on the last two.  It's not hard to imagine why: these two activities have a direct, measurable impact on the company's bottom line.  Skipping steps 1 and 2 can lead to problems, but those problems can masquerade as something else entirely if you don't know what you're looking for.

A great example of how this can be an issue can be found in a blog about native applications for mobile devices vs. mobile web for a particular company.  It's an interesting blog, but I was struck by this comment:
Eventually we came to the conclusion that we should stick with what we’re good at: web apps. We know the technologies well, we have a great development environment and workflow, we can control the release cycle, and everyone at 37signals can do the work. It’s what we already do, just on a smaller screen. We all loved our smaller screens so we were eager to dive in. Plus, since WebKit-based browsers were making their way to the webOS and Blackberry platforms too, our single web-app would eventually run on just about every popular smartphone platform.
They may be right that web apps targeted at mobile browsers are a better solution for their customers than programs built for specific devices.  But I would argue that their approach needs to change.  Their reasoning for moving to the mobile web is all about themselves: their knowledge of the technologies, better control, etc.

While these points are important, it is much more important to consider what their customers need.  If their customers need the greater flexibility that can be found by creating device-specific apps, they will be short-changing their current customers and limiting their markets in the future.  They could also be alienating their current customers who don't want to be forced into a certain solution merely for the convenience of the developer.  Instead, they should determine what it is that their customers need.  Do they need cross-platform solutions?  Do they need top performance and ease-of-use?  Are there going to be a lot of updates to these apps?  Without answers to these (and other) questions, you can't reasonably limit yourself to one solution or another.

Saturday, January 29, 2011

Hiring programmers in the age of search engines

From my observations in interviews, conversations with fellow programmers, and questions on LinkedIn, I've seen that most programmer interviews focus on technical skills.  For instance, an interview for a mid- to senior-level C# developer often consists of questions on technical issues such as using delegates, architecture concepts such as object-oriented programming or placing logic in the business layer, simple SQL questions, and so forth.  These are all tools that are important to have.

But are they the tools that programmers need to succeed?

Here is my list of top six skills I'd want out of a senior-level programmer, in order of importance:
  1. Ability to understand the implications (business and technical) of making certain changes
  2. Ability to understand the business need behind a certain feature and be able to suggest technical alternatives to meet that need
  3. Good grasp of general programming best practices
  4. Ability to find appropriate solutions when a difficult problem arises
  5. Ability to pick up new technologies and ideas quickly (since technology changes quickly)
  6. Specific knowledge of the language/tool-set being used at that time
It's true that if you have no knowledge of programming best practices or of the specific quirks of a particular programming environment, being able to do any of the items on the list would be difficult.  But with so many blogs available (and easily searchable using your favorite search engine), it's difficult to justify hiring programmers solely based on their knowledge of the technology.  If a programmer has a good knowledge foundation, they can easily fill knowledge gaps by running a search query.  So specific in-depth knowledge no longer becomes the most important skill needed for a great number of programming jobs.

Yet this is exactly the approach that most hiring managers seem to take.  They try to gauge a candidate's technical skills, focusing on what I'd argue is the least important skill a programmer needs.  What's the alternative?  Start by listing the most important tasks that the programmer will perform.  Don't focus on just the technical skills.  Ask questions to gauge the candidate's abilities in this area.

For further details, you can read one of my previous blog posts about the subject.