2/04/2009

Guaranteed Results!

I flash back to Ron Popeil TV commercials and SNL take-offs as I sit here writing this entry. But I will tell you I am guaranteeing results. If you implement the two techniques I suggest in this post, I guarantee that the average amount of time it takes your team to fix bugs will decrease by at least 30% within 12 months.

But wait, there's more: I also guarantee that the total number of bugs found after release will decrease by at least 20% over the same period...

But wait, there's more: I also guarantee that these techniques are so incremental that you will not notice them directly affecting your schedules.

Here are the two techniques:

  • Assertions
  • Root cause analysis

Now I know you are about to say that these are not new. I agree, but most teams still do not employ these easy techniques.

Assertions are an easy way for your code to find its own problems before your customers find them symptomatically. I would much rather have a message emitted during testing or deployment when an issue is detected than have data corrupted or a blue-screen event.

You can do this incrementally by requiring that no bug fix or new feature be checked in unless all changed methods contain assertions. Assertions should test three things: input, output and state. Never assume anything is correct. Input and output assertions are pretty obvious, but there are some places in the code where the program is in a unique position to verify the consistency of internal state. All three will not only help ensure quality but also give you a leg up when you change your program's assumptions.

Each team must decide what mechanism to use for assertions. The assertions should be lightweight and provide enough data when they are triggered that developers can easily debug the problem. Most assertions can be active all of the time and should be. There are times in critical sections of code where assertions can greatly impact performance. In these cases, you may have to employ other techniques, like turning on assertions during testing only, or checking results around the performance-sensitive section at a place where the assertions will have less impact.

Because this only affects modified or new code, the incremental cost should not be great. Often assertions can be inserted in minutes per method.
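As a concrete illustration, here is a minimal Python sketch of what input, state and output assertions might look like in a single method. The class and its invariant are hypothetical, not from any particular product; most languages offer an equivalent `assert` mechanism that can be compiled out of performance-critical builds.

```python
class OrderBook:
    """Hypothetical example of a method guarded by input, state and output assertions."""

    def __init__(self):
        self.orders = []          # list of (order_id, quantity) tuples
        self.total_quantity = 0   # invariant: sum of all order quantities

    def add_order(self, order_id, quantity):
        # Input assertions: never assume the caller is correct.
        assert isinstance(order_id, str) and order_id, "order_id must be a non-empty string"
        assert quantity > 0, f"quantity must be positive, got {quantity}"

        # State assertion: verify the internal invariant before mutating.
        assert self.total_quantity == sum(q for _, q in self.orders), \
            "total_quantity out of sync with orders"

        self.orders.append((order_id, quantity))
        self.total_quantity += quantity

        # Output/state assertion: the invariant must still hold on exit.
        assert self.total_quantity == sum(q for _, q in self.orders)
        return self.total_quantity
```

When any of these fire during testing, the message pinpoints the broken assumption instead of leaving you to work backward from corrupted data.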

The second technique is really a change to process around bug fixes. If you don't measure and analyze your bugs it is hard to avoid them in the future.

Root cause analysis is easy to do. Don't allow anyone to close a bug without recording a root cause. Have a fixed set of choices for why the failure occurred (e.g. requirements, design, coding, testing, deployment, other).

Once a month, conduct a mortality conference where you review a rollup of causes and go through a representative sample of the bugs. This requires that someone do some statistical analysis before the meeting to identify frequent causes and commonalities among those bugs.
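To make the mechanics concrete, here is a minimal Python sketch of that monthly rollup, using the fixed root-cause choices suggested above; the bug IDs and helper name are hypothetical.

```python
from collections import Counter

# The fixed set of root-cause choices suggested above.
ROOT_CAUSES = {"requirements", "design", "coding", "testing", "deployment", "other"}

def rollup(closed_bugs):
    """Tally root causes for the monthly review; reject bugs closed without one."""
    counts = Counter()
    for bug_id, cause in closed_bugs:
        if cause not in ROOT_CAUSES:
            raise ValueError(f"bug {bug_id} closed without a valid root cause: {cause!r}")
        counts[cause] += 1
    # Most frequent causes first -- a natural agenda for the meeting.
    return counts.most_common()

# Example month: three bugs closed, two traced to coding errors.
bugs = [("BUG-101", "coding"), ("BUG-102", "coding"), ("BUG-103", "requirements")]
```

In practice the same query runs against your bug tracker, but the principle is identical: a bug without a valid root cause never reaches the closed state.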

The next step is to establish changes that will stop those problems from occurring again. This may result in new processes or work.

The act of doing the root cause work has little overhead. As I carefully said above, this will not directly impact your schedules. Of course, you may identify process changes or projects to address the root causes that take enough effort to impact resources.

If you take the statistics, do the work and don't see the impact I guaranteed above, I will provide a free day of consulting to work with you to figure out how to achieve those results.

More later ...

1/26/2009

Hire Interns Today!

Do you have reduced budgets? Are you getting pressure to offshore work? Are you restricted from hiring full-time employees? Consider hiring interns.

In tough financial times, it can be very hard for new college graduates to find employment. I have been very successful both at helping new college graduates get training and a foot in the door and at helping companies get more affordable help and a chance to qualify future employees.

Here are some tips for using interns:

  • Look at graduates from lesser-known schools with degrees in the areas you need
  • Find people with good attitudes and good communication skills
  • Don't do long interview cycles
  • Find a contract house that will manage their employment
  • Free up more senior full-time staff from doing ineffective tasks
  • Expect to spend some time training
  • Hire a group at one time

In tough times, employers hiring full-time employees often pick candidates with the best pedigrees. There are a lot of great candidates with great skills who went to lesser-known schools. I often find candidates through referrals from someone who knows a graduate working at a local gym or restaurant. The candidate would much rather be working in a job that provides experience pertinent to their long-term goals.

I typically interview candidates for at most 30 minutes on the telephone before making them an offer. You cannot expect them to have great expertise in your product. The idea is that you will train them. I mostly screen for basic knowledge (tell me about a college project) and for attitude and communication skills. You should know fairly quickly whether they will be effective, and you will not risk much if you learn there is not a match after a month.

I tend to find these interns through referrals but have them managed through a contract house, because they are not full-time and they have already graduated from school (so it is not a co-op). Reputable contract houses can provide low-end management, including paying employer contributions and workers' comp, for about 25% overhead. Many will also provide the employee with the ability to purchase their own healthcare.

You should be able to have the interns target work that frees up senior full-time employees to work on projects that require their experience and skill. While you will have to train the interns, a little investment will go a long way. Try to hire them in groups so you can train many at one time, thereby lowering the overhead. I have often hired these interns for full-time employment after 6-12 months.

This will be a win-win for the intern and for your company.

More later ...

1/21/2009

How should you measure engineering?

One of the common questions I get asked is how should you measure engineering. It is a loaded question. Usually there are personal biases and long painful experience that color the question. However, the sentiment of the question is correct. If we do not measure ourselves we cannot get better.

The common industry term for these measures is Key Performance Indicators (KPIs). As a general guideline, I like these to be simple and easy to measure.

The following three questions are important to the business, can give you an idea about how we are doing and have answers that are reasonably easy to measure:

  • How productive is engineering?
  • How good is the quality?
  • How quickly can we respond to customer problems?

The first two are solely engineering measures and the third is a joint measure with support.

You can easily measure engineering productivity by counting the number of features that engineering releases to customers per quarter. In order to create this measure, you need a common denomination, a coin of the realm, to normalize the size of projects. You can classify your projects into small, medium, large and extra large, and provide some basic equivalences between the sizes. For example:

  • Small - less than 1 week of development effort == 1/3 medium project
  • Medium - less than 1 month of development effort == 1 medium project
  • Large - less than 3 months of development effort == 3 medium projects
  • Extra Large - greater than 3 months of development effort == 9 medium projects

You could clearly make the measurements exact, but this method provides a normalization that both reflects the impact of larger projects by rounding up and removes anomalies by capping the translation. This measure will have a direct effect on the company's ability to compete and to manage costs.
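Under these equivalences, a quarter's output rolls up into "medium-project equivalents" with simple arithmetic. Here is a sketch; the function name and sample quarter are illustrative, not from any real data set.

```python
from fractions import Fraction

# Medium-project equivalents per size bucket, matching the equivalences above.
MEDIUM_EQUIVALENTS = {
    "small": Fraction(1, 3),
    "medium": Fraction(1),
    "large": Fraction(3),
    "extra_large": Fraction(9),
}

def quarterly_productivity(shipped):
    """Sum a quarter's shipped projects in medium-project units."""
    return sum(MEDIUM_EQUIVALENTS[size] * count for size, count in shipped.items())

# Illustrative quarter: 6 small, 4 medium, 2 large and 1 extra-large project shipped.
q1 = {"small": 6, "medium": 4, "large": 2, "extra_large": 1}
# 6/3 + 4 + 6 + 9 = 21 medium-project equivalents
```

Tracking that single number quarter over quarter gives you the trend line without arguing about the exact size of every project.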

You can measure quality by how many bugs need to be fixed in patches per quarter and in the next release per quarter. The idea here is that it does not matter what priority or severity the company assigns a bug. Instead, if the company requires the bug be fixed in a patch or release, then for whatever reason engineering had to expend resources to fix it. Quality then becomes about addressing issues earlier and removing the need to spend resources after software has been released. This measure will have a direct impact on customer satisfaction and engineering productivity.

Finally, we all know that there will be critical problems that we cannot foresee and that the company must address. It is important that the company reduce the time to customer relief for critical bugs and measure it at least quarterly. The fix could be a workaround developed solely by customer support or a complex engineering-based fix, and therefore this is a joint measure. Efforts like customer training, robust error logging, and code assertions can help reduce time-to-relief. This measure will have a direct impact on customer satisfaction.

If you don't measure you cannot get better.

More later ...

7/13/2007

What is the role of a manager?

Opinions vary on a manager's role, from writing code to ordering dinner. I tend to like my managers to be technically trained, competent individuals who have learned they can be more effective helping many people succeed. I have talked about what makes a good manager in other posts, but what should they actually do in the organization? Here are the basics:

  • Planning
  • Hiring
  • Employee development
  • Execution

Managers help their bosses and their employees plan. This includes budgets, resource capacity planning for projects, schedules, etc. Expect that this is an ongoing task that will take a good percentage of their time. In a fast-paced industry, things change quickly, from customer requirements to found opportunities. Managers must learn to hone their accuracy and deliver estimates quickly.

Once a budget is set, the most important thing a manager can do is hire. Without people they cannot accomplish the tasks at hand. Hiring must be a concerted effort that managers spend time on each day, from screening candidates to shepherding the process to setting expectations for interviewers and approvers. It can be daunting but is a cornerstone of any company's success.

People development is often overlooked. We cannot expect that all people who come to work for us were born with all the skills and knowledge they need. Nor can we assume they acquired them in school or previous employment. Employees need both technical and soft skills. There may be times when you can find a class for an employee to build skills, but managers should be capable of, and interested in, directly mentoring both technical and soft skills.

Finally we get to execution. Execution will dictate company success. All of the other items discussed above are necessary but not sufficient to execute well. In addition, managers must monitor schedules, manage dependencies and troubleshoot issues. This is the day-to-day management of a team developing a project. You must not lose sight of this activity while planning for the next big thing. Managers should represent, and be held accountable for, their team's deliverables. Deliver with quality. Deliver on time.

With enough managers capable of the above tasks, your company can achieve its goals and succeed!

More later ...

6/03/2007

What is the role of an architect in software development?

Everyone has to do architecture work to develop and maintain good software products. How do architects fit into the process? What should their responsibilities be for architecting features or products? How much responsibility do they have around product direction? What impact should they have with regards to development processes?

As with many roles, you are best served by defining clear boundaries around what architects should do. It is easy to imagine turf wars between architects and a number of other constituencies including developers, managers and product managers.

Here are the three basic areas that architects should own:

  • Architectural roadmaps
  • Architectural specifications
  • Developer processes and best practices

The architectural roadmap is the first place architects feed into the process. These roadmaps should feed into product management just like other requirements including those from customers, sales, standards, competition, etc. The format can easily follow similar strategic documents I have proposed in the past:

  • Taxonomy - what is important in a particular area
  • Report Card - how the product does in that area
  • Today picture - a visual representation of what the area looks like today
  • Tomorrow picture - a visual representation of what the area could look like
  • Projects/Tasks - a list of what must be done to actualize the tomorrow picture

The areas might include discrete technologies like storage or networking, or they may reflect attributes like quality, usability or performance. The architects should develop roadmaps for each area that constitutes a value proposition for the customer.

The second place architectural oversight is necessary is around specifications. This process should occur after requirements are available from product management but before engineers develop design specifications and schedules. The architects should have unique insight into the overall product and develop these specifications as a leg-up for engineering.

Architectural specifications should include the following:

  • Refined requirements - embellishing product management requirements based on the architect's broader and more detailed view of the technology and product
  • Investigation items - these should be resolved before a specification or schedule is approved
  • Architectural requirements - these must be addressed in any design or development
  • Excluded items - these should not be included in the project

This makes the architects the bridge between product management and engineering and provides direction and guidance instead of a blank sheet of paper for engineering. The result should be more consistent products that meet broader goals including interface integrity, performance, quality, etc.

The last place the architects should play a role is in the specification and oversight of development processes. This includes:

  • Developing a design/functional specification template and process
  • Developing the guidelines and process for code reviews
  • Specifying coding conventions and tool selection

The processes should have oversight from the architects. For example, they should get to approve who the reviewers are for code reviews. Do not mistake this for saying that the architects must do all the specifications or reviews. They should delegate but retain oversight, and they should be accountable. If architects end up doing all of the detailed pieces, developers will resent them and you will not make efficient use of either the architects' or the developers' skills.

All of the above items could be distributed throughout an organization but they are usually assigned to senior members of the team and often designated as architects. The important thing is that these tasks must get done and you should make it clear who is responsible for getting them done.

Make your architects more successful and your team will be more successful.

More later ...

4/27/2007

Planning

People ask me about Agile and other project planning methods in high tech today. What I tell them is that all tools can be used well or poorly. It takes leadership, forethought, common sense and perseverance to plan and follow through on a project.

Many managers do incomplete jobs at planning, or they do not manage the plan once it is developed, or they do not handle change appropriately. Each of these mistakes will affect predictability. I have seen some organizations that embrace some of these techniques as a facade for "it will get done when it gets done". The problem is that your CFO, VP of Sales and customers cannot plan appropriately in that kind of environment. Your company needs predictability in order to succeed.

I am a lot less interested in the form you use to plan than the functionality. All of these techniques can be applied using almost any method. Here are some basics:

  • Do not commit to a date until you have preliminary specifications and detailed schedules
  • Make sure every stakeholder physically signs off at each phase
  • Adopt a train model
  • Don't make top-down dates
  • Complex projects require very detailed planning that will change
  • Manage each slip with urgency and importance
  • Deal with major changes as if you were starting anew

I have talked many times about the proposal and definition phases of a project. You cannot have predictability without specifications. Those specifications must be good enough to ensure you are heading in the correct architectural and product direction. They must also be good enough to create schedules. The schedules must prove that you can make the target date. You must work with the information you have at the time and understand that it will change. The schedules should be manageable (< 2 week tasks, measurable deliverables, etc.).

Don't assume that verbal or incidental communication is enough to get buy-in from your company. Get a signoff sheet for each phase in writing. Make sure development, QA, docs, architects, sales (customers), support, and the CEO are all in the loop. It is awful to find out after you have spent money developing something that you have done the wrong thing.

Feature driven releases often slip. If a major feature slips and the release slips, then there is pressure to add more features since it will be a while until the next release. Adding late features can cause more integration work and potentially more slip or worse yet cause quality problems. Separate features out from releases. Make releases time based with a shorter cycle (3 months). With a time based release, the features that are ready can go out. Big features can be developed independently. You can respond to customer needs more quickly. The train model requires adequate automated testing to be successful in order to reduce the overhead of more frequent releases.

Top-down dates don't work. Schedules must be bottom-up. Managers must help their teams make aggressive but achievable schedules. Managers must help make trade-offs around function, quality and resources and have substantive business discussions to achieve the right balance. Don't commit until everyone on your team can look you in the eye and commit. This is a pay-me-now or pay-me-later issue. It is more than a burn-out issue. It is again about predictability.

Complex projects need detailed schedules, even if they go out two years. Schedule re-scheduling events for long projects. If you use Agile, you can translate detailed schedules into sprints, but you still need the homework upfront. You need these exercises to get predictability. The schedule is really a step-by-step design. It makes your developers think. It allows your team to identify many dependencies and resource issues upfront. It will change -- get over it. There is no free lunch. You cannot have predictability with coarse-grained tasks. You cannot have predictability without having your best guess at the details upfront.

You need to manage the plan. Most slips start from day one of the schedule. Require your team to have at most two tasks open per person at one time, since open tasks accumulate risk (this may require scaffolding). Require a mitigation plan for each slip on a weekly basis. Do not shove all delays into a bucket to be dealt with later. Your team needs to adjust quickly to issues.

Finally, if you have major changes, including feature additions or architectural surprises, you need to go back into a planning phase. Never assume you can just add them on. Do the resource planning, specifications and schedules. Do new signoffs. Anything less will affect predictability and cause slips.

We all face these issues. True leadership in planning is the foundation for success for your team.

More later ...

3/29/2007

What are good goals for testing?

Sadly, software testing has not become a prestigious endeavor in our industry. In ASIC design, for example, the test verification folks are often paid the most and lead the teams. In software, test development and test execution are often looked at as stepping stones to product development. As a result, test development has acquired a stigma, is often done as an afterthought, and is often not pursued by the best engineers.

Without reasonable tests and test infrastructure, your company will spend a lot of time and effort doing manual testing at each patch or release. You may not be able to respond to customer requirements in a competitive fashion. The worst result may be inflicting poor software on your customers and having them find your bugs while you lose your reputation.

So how can you set good goals and get better results in your testing effort? Here are the basics:

  • Make sure everyone feels responsible for quality
  • Define detailed acceptance criteria that drives what you test
  • Invest in infrastructure for testing
  • Drive toward specific goals around automation and responsiveness

I have talked about many of these issues both in the book and in other blogs but I will summarize them here as a reminder.

Everyone needs to feel responsible for quality. From the developer who should produce automated exhaustive functional unit tests to the project or release owner who should drive quality across many functional areas, everyone should feel responsible. Break down the organizational walls. Treat quality like performance or reliability and bring to bear the resources you need to make it happen. Don't differentiate between a development specification and a test specification -- make them one document. Send a message that there is only one thing and that is a quality product for your customers. Send a message that there is only one team and everyone must work together to achieve quality.

Don't let acceptance of a project or product be only that it contains no critical bugs. That is a second-order measure. Define the parametrics for the product around performance, durability, interoperability, compatibility, usability, etc. Decide on those criteria together with all of your developers and product management. Use them along with exhaustive functional unit tests to drive the test portion of your specifications. Make sure you identify real customer scenarios as a basis for the tests. Finally, measure, measure, measure the results, both in your schedule as the tests are completed and in your integration as more of the tests are deployed. Do not hide these details behind test points. Expose them in highly visible presentations. Make sure everyone can see what is important to the team with respect to determining progress and whether the project or product is done.

I hate the word framework. It is used to denote a lot of things and does not communicate what work is actually being done. Testing frameworks fall into this grouping for me. So what infrastructure needs to be in place for people to actually develop automated tests? Here is a list of the basics:

  • Tools to setup and knockdown a test. These may include tools that can reboot or reinitialize test platforms and/or test drivers automatically. They may include setting up a configuration or data set. Clearly how fast they work is a factor.
  • A programmatic interface to the test platform. This may be through a CLI, scripting language or API. It must work remotely. It must control as much of the product as possible in order to test as much as possible. Avoid doing this through the GUI as test vehicles that go through the GUI are often hard to maintain.
  • A standard output format. Define what you need for summary and detailed information. If every test produces different output formats or worse undecipherable output formats, you will spend an enormous effort in trying to figure out what is working and what is not.

Obviously there are always more items that you may need for a particular product or environment but your list should always include the items listed above.
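To show how little machinery the basics require, here is a minimal Python harness sketch combining setup/knockdown hooks with a standard one-line result format. The function and test names are hypothetical; a real harness would drive the product through its CLI or API rather than a stand-in dictionary.

```python
import time
import traceback

def run_test(name, setup, test, teardown):
    """Run one test with setup/knockdown and emit a standard result line.

    Format: "RESULT <PASS|FAIL> <name> <elapsed>s [detail]" -- trivial to
    grep and to roll up into summary and detailed reports.
    """
    start = time.time()
    status, detail = "PASS", ""
    try:
        ctx = setup()          # e.g. reinitialize the test platform or data set
        try:
            test(ctx)          # drive the product through a programmatic interface
        finally:
            teardown(ctx)      # always knock down, even when the test fails
    except Exception:
        status = "FAIL"
        detail = traceback.format_exc().strip().splitlines()[-1]
    print(f"RESULT {status} {name} {time.time() - start:.2f}s {detail}".rstrip())
    return status == "PASS"

# Hypothetical usage: a smoke test against a stand-in "platform" dictionary.
def smoke_test(ctx):
    assert ctx["configured"], "platform not configured"

ok = run_test("smoke_config",
              setup=lambda: {"configured": True},
              test=smoke_test,
              teardown=lambda ctx: ctx.clear())
```

Because every test emits the same one-line format, a nightly run can be summarized with a single grep instead of a custom parser per test.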

The last thing you should do is set goals around automation and responsiveness. How much effort should it take to test a patch or a release? How much elapsed time should it take? By its nature, the second question will affect the answer to the first (i.e. with a limited amount of time you will need to automate more, and that will reduce the effort). So here are my rules of thumb for elapsed time:

  • For a patch you need to be able to broadly but not deeply test your whole product within 12 hours.
  • For a release you need to be able to test your whole product broadly and deeply within 7 days (excluding duration tests for obvious reasons).

The patch level test is often also used on nightly builds during a development cycle.

Obviously your goal should be to reduce the effort for both kinds of test cycles as much as possible. However, there will always be a manual component to verify that GUIs look right.

If you follow the guidelines above you can set better goals around testing and achieve better results for your team and your company.

More later ...