Where Did Six Sigma Come From?

As with Lean, we can trace the roots of Six Sigma to the nineteenth-century craftsman, whose individual challenges mirror those facing organizations today. The craftsman had to minimize wasted time, actions, and materials; he also had to make every product or service to a high standard of quality the first time, each time, every time.

Quality Beginning

The roots of what would later become Six Sigma were planted in 1908, when W. S. Gosset developed statistical tests to help analyze quality data obtained at the Guinness Brewery. About the same time, A. K. Erlang studied telephone traffic problems for the Copenhagen Telephone Company in an effort to increase the reliability of service in an industry known for its inherent randomness. Erlang was likely the first mathematician to apply probability theory in an industrial setting, an effort that led to modern queuing and reliability theory. With these underpinnings, Walter Shewhart worked at Western Electric (the manufacturing arm of AT&T) in the 1920s and 1930s to develop the theoretical concepts of quality control. Lean-like industrial engineering techniques did not solve quality and variation-related problems; more statistical intelligence was needed to get to their root causes. Shewhart is also known as the originator of the Plan-Do-Check-Act cycle, which is sometimes ascribed to Dr. W. Edwards Deming, Shewhart’s understudy. As the story goes, Deming made the connection between quality and cost: if you find a way to prevent defects, and do everything right the first time, you won’t have any need to perform rework. Therefore, as quality goes up, the cost of doing business goes down. Deming’s words were echoed in the late 1970s by Philip Crosby, who popularized the notion that “quality is free.”

Quality Crazy

War and devastation bring us to Japan, where Deming did most of his initial quality proselytizing with another American, Dr. Joseph Juran. Both helped Japan rebuild its economy after World War II, consulting with numerous Japanese companies in the development of statistical quality control techniques, which later spread into the system known as Total Quality Control (TQC).

As the global economy grew, organizations grew in size and complexity. Many administrative, management, and enabling functions grew up around a company’s core function of making its products. Thinking about efficiency and quality therefore began to spread from the manufacturing function to virtually all functions— procurement, billing, customer service, shipping, and so on. Quality is not just one person’s or one department’s job. Rather, quality is everyone’s job! This is when quality circles and suggestion programs abounded in Japanese companies: no mind should be wasted, and everyone’s ideas are necessary. Furthermore, everyone should continuously engage in finding better ways to create value and improve performance. By necessity, quality became everyone’s job, not just the job of a few … especially in Japan, at a time when there was precious little money to invest in new equipment and technology.

The rest of the story might be familiar if you’re old enough to remember. By the late 1970s, America had lost its quality edge in cars, TVs, and other electronics, and American companies were suffering significant market share losses. Japanese plants were far more productive than American plants, according to a 1980 NBC television program, If Japan Can… Why Can’t We? In response to all this, American companies took up the quality cause. They made Deming and Juran heroes, and institutionalized the Japanese-flavored TQC into its American counterpart, Total Quality Management (TQM). A government award, the Malcolm Baldrige National Quality Award, was created to recognize the companies that best embodied the practice of TQM. The many elements and tools of quality improvement were organized into a teachable, learnable, and doable system, and a booming field of quality professionals was born.

Quality Business

The co-founder of Six Sigma, Dr. Mikel Harry, has often said that Six Sigma shifts the focus from the business of quality to the quality of business. What he means is that for many years the practices of quality improvement floated loosely around a company, driven by the quality department. And as much as the experts said that quality improvement had to be driven and supported by top executives, it generally wasn’t. Enter Jack Welch, the iconic CEO who led General Electric through two decades of incredible growth and consistent returns for shareholders. In the mid-1990s, Welch had a discussion with then AlliedSignal CEO Larry Bossidy, who said that Six Sigma could transform not only a process or product, but a company. In other words, GE could use Six Sigma as AlliedSignal was already doing: to improve the financial health and viability of the corporation through real and lasting operational improvements. Welch took note and hired Mikel Harry to train hundreds of his managers and specialists to become Six Sigma Black Belts, Master Black Belts, and Champions. Welch installed a deployment infrastructure so he could fan the Six Sigma methodology out as widely as possible across GE’s many departments and functions. In short, Welch elevated the idea and practice of quality from the engineering hallways of the corporation into the boardroom.

To be clear, though, the first practical application of Six Sigma on a pervasive basis occurred at Motorola, where Dr. Harry and the co-inventor of Six Sigma, Bill Smith, worked as engineers. Bob Galvin, then CEO of Motorola, paved the way for Bossidy and Welch in that he proved how powerful Six Sigma was in solving difficult performance problems. He also used Six Sigma at Motorola to achieve unprecedented quality levels for key products. One such product was the Motorola Bandit pager, which failed so rarely that Motorola simply replaced, rather than repaired, the units that did fail.

The Machine that Changed the World

Who are you when you get your B.A. in political science from the University of Chicago, a Master’s from Harvard in transportation systems, and a Ph.D. in political science from MIT?

You guessed it: James Womack, the one who coined the term “Lean Manufacturing” with co-author Daniel Jones in their landmark book, The Machine That Changed the World (1990). While Womack’s education is in political science, his doctoral dissertation and subsequent work were focused on comparative industrial policy in the United States, Germany, and Japan. That’s how he developed the extensive knowledge and relationships he needed to write his 1990 book and his follow-up book, Lean Thinking, in 1996.

Womack’s Lean Principles are as follows:

1. Value— Act on what’s important to the customer of the process.

2. Value stream— Understand which steps in the process add value and which don’t.

3. Flow— Keep the work moving at all times and eliminate waste that creates delay.

4. Pull— Avoid making more, or ordering more inputs, than actual customer demand requires.

5. Strive for perfection— There is no optimum level of performance; just continually pursue improvements.

While Ohno and Toyota built the house of Lean brick by brick, and while many other companies have adopted TPS principles and practices, Womack brought it all together into a thinkable and deployable system. Womack’s work has also gone a long way in migrating Lean practices into the heart and soul of the entire enterprise, not just the manufacturing functions. Consequently, similar to the path of quality and Six Sigma, the business world has fully awoken to the undeniable fact that Lean is for banks and hospitals and service companies as much as it is for manufacturers.

A bank used Lean to reduce loan-approval processing time from 21 days to 1 day. A hospital reduced the average emergency room patient wait time from 100 minutes to 10 minutes without adding any staff. Southwest Airlines applied Rapid Changeover to achieve best-in-class gate turnaround times. If you have a process (and who doesn’t?), the principles of Lean apply. And who can we thank or acknowledge for this? Even more than the big names like Ford, Ohno, and Womack, we can thank the thousands of companies that stamped Lean’s imprint into their organizations. They are the true testament to Lean’s universal applicability.

So if you understand the principles and aims of Lean, how do you enact them? Typically, you implement Lean changes in your organization through a series of activities called Kaizen Events.

Control Phase in Six Sigma


The purpose of the Control phase is to complete the project work and hand off the improved process to the process owner, with procedures for maintaining the gains. Deliverables include:

  • Documented plan to transition the improved process back to the process owner, participants, and sponsor
  • Before and after data on process metrics
  • Operational, training, feedback, and control documents (updated process maps and instructions, control charts and plans, training documentation, visual process controls)
  • A system for monitoring the implemented solution (Process Control Plan), along with specific metrics to be used for regular process auditing
  • Completed project documentation, including lessons learned, and recommendations for further actions or opportunities
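A Process Control Plan such as the one listed above typically leans on control charts for ongoing monitoring. As a minimal sketch (not the book's worked example), here is an individuals (I) chart calculation; the cycle-time figures are hypothetical, and 2.66 is the standard constant for limits based on moving ranges of size two:

```python
# Sketch of an individuals (I) control chart calculation, as might feed a
# Process Control Plan. The cycle times (minutes) are hypothetical.
from statistics import mean

samples = [12.1, 11.8, 12.4, 12.0, 11.9, 12.6, 12.2, 11.7, 12.3, 12.0]

center = mean(samples)
# Moving ranges: absolute difference between consecutive observations.
moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
mr_bar = mean(moving_ranges)

# Standard individuals-chart limits: center +/- 2.66 * average moving range.
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

out_of_control = [x for x in samples if x > ucl or x < lcl]
print(f"center={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print("out-of-control points:", out_of_control)
```

In practice the process owner would keep plotting new points against these limits and escalate per the response plan whenever a point falls outside them.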

Key steps in Control

  1. Develop supporting methods and documentation to sustain full-scale implementation.
  2. Launch implementation.
  3. Lock in performance gains. Use mistake-proofing or other measures to prevent people from performing work in old ways.
  4. Monitor implementation. Use observation, interaction, and data collection and charting; make additional improvements as appropriate.
  5. Develop Process Control Plans and hand off control to process owner.
  6. Audit the results. Confirm measures of improvements and assign dollar figures where appropriate. Give audit plan to company’s auditing group.
  7. Finalize project:
    • Document ideas about where your company could apply the methods and lessons learned from this project
    • Hold the Control Gate Review
    • Communicate project methods and results to others in the organization
    • Celebrate project completion
  8. Validate performance and financial results several months after project completion.
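The validation in step 8 can start with a simple comparison of baseline and post-project data. A sketch with hypothetical defect-rate samples (the figures and the metric are illustrative, not from the text):

```python
# Sketch of validating sustained gains months after project completion.
# Before/after defect rates (defects per 100 units) are hypothetical.
from statistics import mean, stdev

before = [5.2, 4.8, 5.5, 5.0, 5.3, 4.9]   # baseline period
after  = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3]   # months after hand-off

improvement = (mean(before) - mean(after)) / mean(before) * 100
print(f"mean before: {mean(before):.2f}, mean after: {mean(after):.2f}")
print(f"improvement: {improvement:.1f}%")

# A crude stability check: the after-period spread should stay small.
print("after-period spread:", round(stdev(after), 2))
```

The finance representative would then translate the sustained improvement into a dollar figure for the audit.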

Gate review checklist for Control

  1. Full-scale implementation results
    • Data charts and other before/after documentation showing that the realized gains are in line with the project charter
    • Process Control Plan
  2. Documentation and measures prepared for sustainability
    • Essential documentation of the improved process, including key procedures and process maps
    • Procedures to be used to monitor process performance and continued effectiveness of the solution
    • Control charts, capability analysis, and other data displays showing current performance and verifying gains
    • Documentation of procedures (mistake-proofing, automated process controls) used to lock in gains
  3. Evidence of buy-in, sharing and celebrating
    • Testimonials or documentation showing that:
      • The appropriate people have evaluated and signed off on the changes
      • The process owner has taken over responsibility for managing continuing operations
      • The project work has been shared with the work area and company at large (using a project database, bulletin boards, etc.)
    • Summary of lessons learned throughout the project
    • List of issues/opportunities that were not addressed in this project (to be considered as candidates for future projects)
    • Identification of opportunities to use the methods from this project in other projects
    • Plans for celebrating the hard work and successful efforts

Tips for Control Phase

  • Set up a realistic transition plan that will occur over a series of meetings, training events, and progress checks scheduled between the team and the process participants (avoid blind hand-offs of implementation plans).
  • Schedule a validation check 6 to 12 months after the control gate review. Be sure the project sponsor and local controller/finance representative are present to validate that the results are in place and stable!
  • Never assume perfection! Something always goes wrong. Develop a rapid response plan to address unanticipated failures via FMEA (p. 270). Identify who will be part of the “rapid response team” when a problem arises, and get permission from the sponsor to use these personnel should the need arise.
  • Develop tools that are easy for process participants to reference and use. It’s hard to keep paying attention to how a process operates, so make monitoring the work as easy and automatic as possible.
  • Work out the kinks before transferring responsibility for managing the new process. Handing off (to the sponsor or process owner) a process that is still being worked on will compromise success.

Five-Day Plan for Kaizen

This article looks at some common tools and techniques for planning a successful Kaizen event, and identifies some pitfalls to avoid.

The Five-Day Journey

  • Day 1 – Current State Documentation
  • Day 2 – Current State Evaluation
  • Day 3 – Characterize Future State; Plan Implementation
  • Day 4 – Implement Future State
  • Day 5 – Operationalize Future State and Debrief

The intent of any Kaizen is improvement, specifically process improvement, and more specifically in some combination of three primary metrics: throughput, inventory and product/process cost. The metrics are established to provide a guidepost for progress toward a goal – a gauge of success (or failure). Use of metrics is non-negotiable. This means that collecting data on the metrics does not start during the Kaizen event; there must be a history of the relevant metrics to 1) justify that the Kaizen effort is even worth the time and 2) establish a baseline against which a goal can be defined and progress evaluated. Start to research and collect historical data relative to the metrics of the planned Kaizen event at least one month before the scheduled event. How much historical data is needed depends on the frequency of measurable events and variation. Some metrics, such as space needed to produce a product or distance walked by operators, do not require much effort to gather.
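The baseline-then-goal discipline described above can be sketched in a few lines. The weekly throughput figures and the 15 percent improvement target below are hypothetical, purely to illustrate the arithmetic:

```python
# Sketch of establishing a pre-Kaizen baseline from historical data
# gathered in the month before the event. Figures are hypothetical.
from statistics import mean, stdev

weekly_throughput = [412, 398, 421, 405, 390, 415, 402, 408]  # units/week

baseline = mean(weekly_throughput)
spread = stdev(weekly_throughput)        # sample standard deviation
goal = baseline * 1.15                   # e.g., a 15% improvement target

print(f"baseline: {baseline:.1f} units/week (stdev {spread:.1f})")
print(f"Kaizen goal: {goal:.1f} units/week")
```

With the baseline and its normal variation on record, the team can later judge whether a post-event change is a real gain or just noise.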

Too often, organizations employ Kaizen as a team-based brainstorming effort without the support of data. This is a mistake. Although Kaizen events are designed to be fast and intense, data analysis is still important to the process. In fact, the understanding and appropriate use of data are often the foundation of a successful Kaizen event.

Poorly executed Kaizen events can often be tied to poor (or absent) data analysis starting with insufficient understanding of KPI (key performance indicator) history. The problem gets worse when – because the team lacks time to gather the right data, they do not know what data to get or they do not know how to study the data even if they have it – root causes are identified and characterized by means of team voting or tribal knowledge. Solutions often fail because the team’s filtering of anecdotal information, which was assumed to be correct, failed to adequately select or describe the important sources of waste at a controllable level. Finally, data to track performance metrics after changes are implemented is commonly neglected, often because the team’s attention has turned to another fire. This failure will manifest itself in a lack of follow-up on open issues and a lack of understanding of the business impact resulting from the effort. Ultimately, these issues undermine the credibility of a Kaizen program.

Before the Kaizen

Kaizen events were never meant to be brainstorming events with solutions unsupported by data analysis. Unfortunately, many organizations choose this route because they have the misguided belief that data analysis is costly and contrary to the Kaizen speed culture. This approach becomes a rationalization for laziness since more time will ultimately be spent justifying or correcting solutions where appropriate data does not exist. Simply stated, without data, there is no opportunity for the team to discover anything new (i.e., innovate) as their brainstorming sessions will simply confirm what they think they already know. Do not neglect the value of the data; plan early (at least two weeks before the event) to get the necessary data, especially voice of the customer (VOC) data, and be prepared to quickly get more detailed data as questions arise during the event. As the data is gathered, it should be validated to ensure veracity.

In addition to a plan for the collection and validation of data, the Kaizen team leader will need to establish a charter with scope and objectives for the event at least two weeks prior to the event (note, this is in addition to the KPI information that should be gathered at least one month prior to the event). Tasks to be accomplished include identifying team members, notifying relevant departments about potential changes and estimating financial benefits.

The charter provides the framework necessary to create a daily agenda for deliverables in the Kaizen event. The charter and agenda should be developed in concert with (or at least approved by) the local management team, as it will dictate the planned resource requirements by day and the nature of the interruptions to the process so downtime can be sufficiently anticipated without impacting the customer. At the end of each day, it is best to meet with the Kaizen event’s champion or sponsor to review activities and conclusions, as well as barriers and resource needs for the next day. A brief description of each day in a typical five-day Kaizen event follows.

Day 1 – Current State Documentation

On Day 1, the charter should be communicated, participants should be trained and the process should be physically viewed. In addition, this is the time to create a first draft of the detailed value stream map (VSM). Through communication of the charter and a brief overview of the process, team members will be instructed on the objectives for the Kaizen event and their individual responsibilities in the Kaizen process. Site leadership should participate in the kickoff session to emphasize the importance of the event and grant authority to the team to make required changes. Training on the Kaizen approach and philosophy should be limited to one hour or less; the tools are intuitive by design and most of the learning experience will occur through live practice.

The bulk of Day 1 should be dedicated to observing the process, VOC synthesis, creating a VSM (or reviewing a recently created VSM) and identifying the elements of waste. These efforts should be conducted with the knowledge of historical process performance as indicated by the data and any expected future conditions that will create additional challenges. Process performance should be illustrated with time series charts, histograms and Pareto charts as necessary; finance personnel must participate in these efforts to provide perspective on the business impact of the historical performance relative to the objectives. The understanding gained on Day 1 will help to set priorities for the activities of the second day. End the day by starting a “newspaper” with photos of the process before any change. This newspaper summarizes all the completed actions and findings in a format that is easy to assemble and access.

Day 2 – Current State Evaluation

On Day 2, it is time to quantify the impact of the waste in terms of process metrics, take time studies, identify and prioritize bottlenecks, update the VSM, and begin root cause analysis on waste. For example, in a manufacturing process, elements of the overall equipment effectiveness (OEE) metric should be decomposed to understand the losses in line capacity and identify important losses to be eliminated or reduced.
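The OEE decomposition mentioned above follows the standard availability × performance × quality formula. A sketch with hypothetical shift data (all figures illustrative):

```python
# Sketch of decomposing OEE (overall equipment effectiveness) into its
# availability, performance, and quality factors. Figures are hypothetical.
planned_time_min = 480        # one shift of planned production time
downtime_min = 60             # recorded stops (changeovers, breakdowns)
ideal_cycle_time_min = 0.5    # minutes per unit at rated speed
units_produced = 700
good_units = 665

run_time = planned_time_min - downtime_min           # 420 min
availability = run_time / planned_time_min           # time actually running
performance = (ideal_cycle_time_min * units_produced) / run_time  # speed loss
quality = good_units / units_produced                # first-pass yield

oee = availability * performance * quality
print(f"availability={availability:.3f}, performance={performance:.3f}, "
      f"quality={quality:.3f}, OEE={oee:.3f}")
```

Seeing which of the three factors is lowest tells the team where the capacity loss lives, and therefore which waste to attack first.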

Data should be utilized as much as possible in the root cause characterization to support graphical analysis through Pareto charts, histograms, multi-vari charts, box plots, scatter plots and control charts, to name a few. Graphical observations and conclusions should be verified statistically. Other team-based tools may include: brainstorming, affinity diagrams, fishbone diagrams, critical-to-quality trees, cause-and-effect matrices, process maps (the VSM works well for this), spaghetti charts and failure mode and effects analysis (FMEA).* The time studies should be used to create a takt time analysis, the identification and quantification of value-add versus non-value-add work, and the understanding of current standard work combinations.

* The FMEA is a powerful but potentially time-consuming tool; it should be used sparingly to understand the root causes of the most important forms of waste.
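A Pareto analysis of the kind described above simply ranks the waste categories and accumulates their share of the total. The category names and counts below are hypothetical:

```python
# Sketch of a Pareto analysis of waste observations from Day 2.
# Category names and counts are hypothetical.
waste_counts = {
    "waiting": 48,
    "rework": 31,
    "motion": 12,
    "overproduction": 6,
    "transport": 3,
}

total = sum(waste_counts.values())
cumulative = 0.0
# Rank categories from most to least frequent and accumulate their share.
for category, count in sorted(waste_counts.items(),
                              key=lambda kv: kv[1], reverse=True):
    cumulative += 100 * count / total
    print(f"{category:15s} {count:3d}  cumulative {cumulative:5.1f}%")
```

The output makes the classic 80/20 pattern visible at a glance: here the top two categories would account for the large majority of observed waste, so they get the team's attention first.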

The work conducted on this day is a critical input for the work of the third day: identifying solutions and prioritizing opportunities for improvement. At this point the team should identify additional resources necessary to complete the task list, report to management any potential roadblocks or barriers, and begin transferring knowledge to support the culture change and build the case for embracing the new ways.

Day 3 – Characterize Future State; Plan Implementation

The focus of Day 3 is to develop and prioritize solutions to eliminate critical waste, develop new flow scenarios with new standard work combinations, prioritize changes, plan the implementation, create contingency plans, and begin solution implementation. The rigor applied in Day 2 dictates how well the team’s time is utilized on this day.

A project plan will help define resources and timing of both immediate changes and longer term changes. A future state VSM or process map should be created to illustrate the impact of the changes visually. Improvements should always be biased toward low-tech, simple and self-manageable solutions. (Complicated or expensive solutions must be reviewed with management and finance to quantify the expected benefits.) Proposed changes should also be reviewed with departments such as health and safety, and unions so time is not later wasted with approvals and enrollment. If team membership has been selected correctly, union concerns should be minimized since members will have been involved in the process. The team should begin implementing changes on this day in order to alleviate some of the burden for the fourth day. Newspaper updates should be prepared again.

Day 4 – Implement Future State

This is a long “all hands on deck” day with intense focus on implementing the changes with minimal impact on the operation. 5S techniques (sort, straighten, shine, standardize, sustain) may be applied as equipment is rearranged, cleaned and repaired; visual aids are installed; tools/jigs are organized, refurbished and enhanced; air/power supply access points and lights are moved; standard work documentation is revised; operators are trained; and the new process is piloted. It is critical that data is collected (including time studies) during the pilot in order to understand the impact of the process changes and provide feedback for multiple iterations of minor changes to optimize the process. Results are tallied and quantified with financial impact calculated.
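The pilot time studies feed a basic takt-time check: available working time divided by customer demand gives the pace the new process must meet. A sketch with hypothetical figures:

```python
# Sketch of the takt-time check used when piloting the new process.
# Demand and time-study figures are hypothetical.
available_time_min = 420      # net working time per shift
customer_demand = 210         # units required per shift

takt_time = available_time_min / customer_demand   # minutes per unit

# Observed cycle times (minutes) from the Day 4 pilot time study.
pilot_cycle_times = [1.8, 1.9, 2.1, 1.7, 1.9, 2.0]
avg_cycle = sum(pilot_cycle_times) / len(pilot_cycle_times)

print(f"takt time: {takt_time:.2f} min/unit, pilot average: {avg_cycle:.2f}")
print("meets demand" if avg_cycle <= takt_time else "bottleneck remains")
```

If the pilot's average cycle time still exceeds takt, the team iterates on the changes before launching the process for regular demand on Day 5.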

This can be an exhausting day; resources and equipment must be coordinated to ensure smooth execution of the changes and the pilot. Be prepared to sequence implementation of some changes over time with a project plan that tracks dates and accountabilities. All meetings on this day should take place on the production floor or process area. It is important that management is present at the end of Day 4 to show support for new processes and discuss ways to sustain the changes.

Day 5 – Operationalize Future State and Debrief

On Day 5, launch the new process for regular processing of demand and prepare a report based on the results achieved. Prepare final documentation and approvals (legal, customer, safety, etc.) as necessary. A final, formal report of the event should not be required if the management team has been engaged during the rest of the Kaizen event; at this point, there should be no need to justify changes to management, as issues should have surfaced as they were identified during the event. Any final report should be a simple summary of the information already compiled in the Kaizen newspaper.

Conduct a post-mortem with the Kaizen team, capturing best practices and learnings to be applied to future Kaizen events. Data collection plans and response plans should be in place to monitor performance and systematically respond to problems over the next several weeks; these monitoring and response plans should be institutionalized as part of the management system with ownership assigned and performance management plans updated. Review the task list and Kaizen metrics for completion every week for four weeks – or until all items are completed. The task list should assign responsibility to specific employees and list deliverable dates for each task.

People, Processes and Tools

No two Kaizen events will be the same and the real skill in conducting these events is deciding which tools to use, how rigorously to apply them, which individuals to involve in their administration and what the desired outcomes are. The tools of Kaizen are simple; their application requires diligent planning and considerable creativity on the part of the team leader. Team leaders need to remain aware of the risks created by the short timeframe and physical demands of the events: hasty decisions based on groupthink are a threat to the effectiveness of the Kaizen method. The agendas described above represent a sample of tools that should be considered at a minimum. Indeed, the schedule described is intended to be a guide; the realities of an individual operation inevitably dictate a slightly different schedule. Many times work will have to be performed late at night or during off shifts, so Kaizen leaders should plan to provide basic sustenance (food and appropriate beverages) during the event; 16-hour days are not uncommon. Typically, the team should expect to complete about 80 percent of the task list during the event with the remaining tasks to be completed within four weeks.

Optimizing Kaizen Events

Conscientious examination of best practices and lessons learned will naturally produce opportunities to standardize and improve future Kaizen events. Automatically assuming that a solution or a best practice from another process will produce identical results can be a risk. These implementations need to be tested as rigorously as any other solution. Operations with a mature Kaizen culture will design facilities to support the frequent process changes necessary to maintain optimal performance in a changing economic environment. For example, equipment designed for mobility (casters where possible), power and air drops designed for quick reconfiguration, moveable lighting on tracks, strategically placed (or minimized) vertical structural supports, elimination of walls, and floors and pathways that are easy to clean and re-mark are all examples of structural design components that can enable more efficient Kaizen execution.

Even with such facility design features, properly run Kaizen events are still intense, and team members should be fully aware of and prepared for the expectations of this difficult assignment. All participants should keep safety foremost in their minds, as they will be working under stress, often in environments that are not completely familiar to them. They should also be recognized for their heroic efforts and commitment. The intensity of successful Kaizen events also suggests that they should be judiciously applied and that participants should not be required for events on multiple consecutive weeks.

Keep the Change

Kaizen is a powerful tool for positive change. With proper planning, appropriate use of data and effective tool application, these events deliver significant results to process improvement and financial impact to businesses. Additionally, Kaizen is an effective tool for helping people learn about their own processes (what works, what does not work and what is possible) and for empowering them to effect change. These outcomes cannot be quantified financially, but they are an important foundation for a continuous improvement culture and a committed workforce that accepts responsibility for the performance of their processes. The priceless outcomes of Kaizen may well be more valuable to your organization than the directly quantifiable process improvements.


The Benchmarking Process in Simple Words

7 Steps to Better Benchmarking

To get the best results from this powerful performance improvement tool, you need a clear understanding of what it can do for you and a well-structured process for your initiatives.

Largely unheard of in the business world until the early 1980s, when Xerox Corp. used it to enhance its competitiveness, benchmarking has evolved to become an essential element of the business performance management (BPM) toolkit and a key input to financial and business improvement efforts. Despite this, it remains one of the most widely misunderstood improvement tools. The word means different things to different people, and, as a result, benchmarking projects all too frequently fail to deliver on their promise of real results.

However, when executed correctly, benchmarking can be a powerful focus for change, driving home sometimes uncomfortable facts and convincing leaders of the need to embark upon improvement efforts. Benchmarking is a tool that enables the investigation and ultimately the achievement of excellence, based on the realities of the business environment rather than on internal standards and historical trends.

There are two good reasons for organizations to benchmark. First, doing so can help them to stay in business by enabling them to outperform similar organizations, including competitors. Second, it ensures that the organization is continually striving to improve its performance through learning. Benchmarking opens minds to ideas from new sources, both within the same industry and in unrelated sectors.

In this article, I offer a definition of benchmarking and discuss the discipline’s strategic role in the effective management of performance improvement.

What It Is, What It Isn’t

Let’s start with a look at what benchmarking is not. It’s not “industrial tourism,” in which superficial visits are undertaken in the absence of any point of reference or any real prospect of supporting the improvement process. It’s impossible to acquire detailed knowledge of an operation after only a quick glance or a short visit.

Benchmarking also should not be considered a personal performance appraisal tool. The focus should be on the organization, not the individuals within it.

Nor is it a stand-alone activity; to succeed, benchmarking must be part of a continuous improvement strategy. Organizations must ramp up their performance rapidly to remain competitive in business environments today, and the pace is further accelerated in sectors where benchmarking is commonplace, where businesses rapidly and continuously learn from one another. A prime example is the oil and gas industry, where companies have to respond with lightning speed to ever-increasing business, technological, and regulatory demands. The majority of the key players in this industry participate in focused benchmarking consortia annually.

Benchmarking is not just a competitive analysis. It goes much further than a simple examination of the pricing and features of competitors’ products or services; it considers not only the output, but also the process by which the output is obtained. And benchmarking is much more than market research, because it considers the business practices that enable the satisfaction of customer needs and thus helps the organization to realize superior business performance. Many definitions of benchmarking exist, each offering slight variations on common themes. Here’s my definition: Benchmarking is a systematic and continuous process that enables organizations to identify world-class performance and measure themselves against that. Its goals can be summarized as:

  • Identify world-class performance levels;
  • Determine the drivers of superior performance;
  • Quantify gaps between the benchmarker’s performance and world-class performance;
  • Identify best practices in key business processes;
  • Share knowledge of best practices;
  • Build foundations for performance improvement.

Benchmarking projects can be classified in many different ways — for example, by the subject matter of the analysis, by the type of participants, by data source, or by methodology. There’s internal and external benchmarking; competitive and noncompetitive benchmarking; functional, process, and strategic benchmarking; and database and consortium benchmarking. While different approaches have their pros and cons, and some are clearly more effective than others, they all should have the same ultimate objective: to help an organization improve its business performance.

Irrespective of the type of benchmarking an organization undertakes, a well-structured and systematic process is critical to success. The Juran 7-Step Benchmarking Process (Exhibit 1) has been developed over many years by the Juran Institute and has formed the basis of numerous annual benchmarking consortia since 1995. I’ll describe it here in terms of external consortium benchmarking, but the process is generic and equally applicable in principle to all types of benchmarking.

The process is divided into two phases. Phase 1 is a positioning analysis that provides the benchmarker with a comprehensive study of the relative performance of all of the benchmarking participants and identifies any gaps between the benchmarker’s performance and that of “best-in-class” organizations:

  • Step 1: Preparation and planning. As with any other project, thorough preparation and planning are essential at the outset. Recognize the need for benchmarking, determine the methodology you’re going to use, and identify the participants in your project.
  • Step 2: Data collection. This stage involves deciding what you’re going to measure and how you’ll measure it. You need to define the benchmarking envelope — what is to be benchmarked and what is to be excluded. At this point, you can establish the metrics you intend to use; these, too, must be clearly and unambiguously defined in order to ensure comparability of the datasets that you will collect. Finally, you need to determine the most appropriate vehicle for data collection.
  • Step 3: Data analysis. The key activities here are the validation and normalization of data. Before you can perform any meaningful analysis, it’s essential that all data be validated to establish its accuracy and completeness. Some form of data normalization is usually required to enable like comparisons to be made between what may be very different operational subjects. Without it, direct comparisons of performance are normally impossible and may lead to misinformed conclusions. To be of value, the analysis must indicate the benchmarker’s strengths and weaknesses, determine (and, where possible, quantify) gaps between the benchmarker’s performance and the leaders’, and provide recommendations for the focus of performance improvement efforts.
  • Step 4: Reporting. The analysis must then be reported in a clear, concise, and easily understood format via an appropriate medium.
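Step 3’s normalization and gap analysis can be sketched in a few lines of Python. This is a minimal illustration, not part of the Juran process itself; the participant names, metrics, and figures are hypothetical:

```python
# Sketch of Step 3: normalize raw metrics so operations of different scale
# can be compared, then quantify each participant's gap to best-in-class.
# Participant names and figures are hypothetical.

raw = {
    # participant: (annual maintenance cost, units produced)
    "Plant A": (1_200_000, 40_000),
    "Plant B": (900_000, 36_000),
    "Plant C": (2_000_000, 52_000),
}

# Normalization: cost per unit makes plants of different sizes comparable.
normalized = {name: cost / units for name, (cost, units) in raw.items()}

best_in_class = min(normalized.values())  # lower cost per unit is better

for name, value in sorted(normalized.items(), key=lambda kv: kv[1]):
    gap_pct = (value - best_in_class) / best_in_class * 100
    print(f"{name}: {value:.2f} per unit ({gap_pct:+.1f}% vs. best-in-class)")
```

In practice, the normalization basis would be agreed during data collection so that all participants’ datasets are genuinely comparable.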

Unfortunately, many benchmarking exercises stop at this point. But to maximize the value of the initiative, organizations must go further: They must build an understanding of the practices that enable the leaders to attain their superior performance levels. This is the purpose of Phase 2 of the 7-step benchmarking process:

  • Step 5: Learning from best practices. In this step, the top-performing organizations share their best practices, to the mutual benefit of all of the benchmarkers. Of course, when some of the benchmarkers are true competitors, the options for sharing may be limited, and alternative approaches may be required to establish learning.
  • Step 6: Planning and implementing improvement actions. Once the learning points have been ascertained, each organization should develop and communicate an action plan for the changes that it will need to make in order to realize improvements. The learning points should feed into the organization’s strategic plan and should be implemented via its performance improvement processes.
  • Step 7: Institutionalizing learning. The insights that you’ve gained and the performance improvements that you’ve achieved must be fully embedded within the organization; it’s critical to ensure that the gains are rolled out throughout the business and sustained over time. Benchmarking can take place at the corporate, operational, or functional levels of the organization. Make sure that these levels are linked via a cascading series of interlinked goals to ensure systematic progress toward the vision.

Shaping the Strategic Plan

Organizations’ goals all too often fall short of stakeholder expectations. A primary contributor to this sad state of affairs is the fact that goal-setting tends to be based on past trends and current internal practices. The external perspective is frequently overlooked, yet customers’ expectations are driven by their experiences with the best providers in the industry and superior providers in other industries. Benchmarking can capture these external references and provide a basis for comparative analysis.

Exhibit 2 shows some ways in which benchmarking can help to shape an organization’s strategic direction. It depicts a typical strategic planning process for performance improvement that begins with an organization’s vision for the future:

  • The vision will always be influenced to some extent by the organization’s business environment and what others have been able to achieve. Benchmarking supplies detailed analyses of this environment and a factual basis for understanding what it means to be world-class, thereby helping to bring the organization’s vision into focus.
  • Assessing current performance and measuring the distance from there to the vision are critical activities for ensuring an organization’s long-term sustainability. While many tools are available for measuring current performance, including market research and competitor analysis, benchmarking adds the ability to clarify the organization’s position in relation to both the external business environment and the vision and to identify performance gaps. It enables the organization to adjust its strategy so that it can close the gap between its current reality and its vision of the future.
  • Long-term plans or key strategies derived from the vision comprise strategic goals that address all aspects of the organization’s performance, including business process performance, product or service performance, competitive performance, and customer satisfaction. By necessity, these goals will be constantly evolving. Benchmarking analyses enable the organization to set these objectives based on the external reality.

How Good Do You Need to Be?

Benchmarking enables decision-makers to understand exactly how much improvement they’ll need to accomplish in order to achieve superior performance. Frequent and regular benchmarking helps you to create specific and measurable short-term plans that are based on current reality rather than historical performance, and which can support step-by-step improvements in performance over time. The objective is to overtake the top performers, turning a performance deficit into performance leadership.

An implementation process is required to convert long- and short-term plans into operational plans. You’ll need to know exactly how your specific strategic goals are to be met and who has responsibility for executing the necessary actions. You’ll want to calculate and allocate the resources required and schedule and control the implementation. The output from your benchmarking effort feeds into this process by providing vital information about best practices.

Benchmarking is a powerful tool that can significantly enhance an organization’s ability to strategically manage its performance. It forces managers to consider the broader perspective, to learn from outstanding performers, and to push beyond their own comfort zones. By revealing the best practices of top-performing operations, it can place your organization firmly on the road to world-class leadership.


How to Review the Risk Management Process

The purpose of risk management is to identify potential problems before they occur so that risk-handling activities may be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives.

Risk management is a continuous, forward-looking process that is an important part of business and technical management processes. Risk management should address issues that could endanger achievement of critical objectives. A continuous risk management approach is applied to effectively anticipate and mitigate the risks that have critical impact on the project.

Effective risk management includes early and aggressive risk identification through the collaboration and involvement of relevant stakeholders. Strong leadership across all relevant stakeholders is needed to establish an environment for the free and open disclosure and discussion of risk.

Although technical issues are a primary concern both early on and throughout all project phases, risk management must consider both internal and external sources for cost, schedule, and technical risk. Early and aggressive detection of risk is important because it is typically easier, less costly, and less disruptive to make changes and correct work efforts during the earlier, rather than the later, phases of the project.

Risk management can be divided into three parts: defining a risk management strategy; identifying and analyzing risks; and handling identified risks, including the implementation of risk mitigation plans when needed.

For the purpose of this review, please address the following points:

  1. Demonstrate that you have a process to determine risk sources and categories. Identification of risk sources provides a basis for systematically examining changing situations over time to uncover circumstances that impact the ability of the project to meet its objectives. Risk sources are both internal and external to the project. As the project progresses, additional sources of risk may be identified. Establishing categories for risks provides a mechanism for collecting and organizing risks as well as ensuring appropriate scrutiny and management attention for those risks that can have more serious consequences on meeting project objectives.

Typical work products would include: (1) risk source lists (external and internal) and (2) risk categories lists.

  2. Demonstrate that you have a process to define the parameters used to analyze and categorize risks, and the parameters used to control the risk management effort. Parameters for evaluating, categorizing, and prioritizing risks typically include risk likelihood (i.e., the probability of risk occurrence), risk consequence (i.e., the impact and severity of risk occurrence), and thresholds to trigger management activities.

Risk parameters are used to provide common and consistent criteria for comparing the various risks to be managed. Without these parameters, it would be very difficult to gauge the severity of the unwanted change caused by the risk and to prioritize the necessary actions required for risk mitigation planning.

Typical work products would include: (1) risk evaluation, categorization, and prioritization criteria and (2) risk management requirements (control and approval levels, reassessment intervals, etc.).
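The parameters above (likelihood, consequence, and a threshold that triggers management activity) can be illustrated with a short sketch. The scales, threshold value, and example risks are assumptions for illustration, not prescribed values:

```python
# Sketch of risk evaluation parameters: exposure = likelihood x consequence,
# with a threshold above which a risk gets management attention.
# Scales, threshold, and example risks are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: float   # probability of occurrence, 0.0 - 1.0
    consequence: int    # impact severity, 1 (minor) - 5 (critical)

    @property
    def exposure(self) -> float:
        return self.likelihood * self.consequence

MITIGATION_THRESHOLD = 2.0  # exposure above this triggers mitigation planning

risks = [
    Risk("Key supplier delivery slip", likelihood=0.6, consequence=4),
    Risk("Requirements change late in design", likelihood=0.3, consequence=5),
    Risk("Staff turnover on test team", likelihood=0.2, consequence=2),
]

# Prioritize by exposure so the worst risks get attention first.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    action = "mitigate" if r.exposure > MITIGATION_THRESHOLD else "monitor"
    print(f"{r.exposure:.1f}  {action:8s} {r.description}")
```

Common and consistent criteria like these are what make risks comparable; organizations often substitute an ordinal scale or a full probability-impact matrix for the simple product used here.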

  3. Demonstrate that you have a process to establish and maintain the strategy to be used for risk management. A comprehensive risk management strategy addresses items such as: (1) The scope of the risk management effort, (2) Methods and tools to be used for risk identification, risk analysis, risk mitigation, risk monitoring, and communication, (3) Project-specific sources of risks, (4) How these risks are to be organized, categorized, compared, and consolidated, (5) Parameters, including likelihood, consequence, and thresholds, for taking action on identified risks, (6) Risk mitigation techniques to be used, such as prototyping, simulation, alternative designs, or evolutionary development, (7) Definition of risk measures to monitor the status of the risks, and (8) Time intervals for risk monitoring or reassessment.

The risk management strategy should be guided by a common vision of success that describes the desired future project outcomes in terms of the product that is delivered, its cost, and its fitness for the task. The risk management strategy is often documented in an organizational or a project risk management plan. The risk management strategy is reviewed with relevant stakeholders to promote commitment and understanding.

A typical work product would be the project risk management strategy.

  4. Demonstrate that you have a process to identify and document the risks. The identification of potential issues, hazards, threats, and vulnerabilities that could negatively affect work efforts or plans is the basis for sound and successful risk management. Risks must be identified and described in an understandable way before they can be analyzed and managed properly. Risks are documented in a concise statement that includes the context, conditions, and consequences of risk occurrence.

Risk identification should be an organized, thorough approach to seek out probable or realistic risks in achieving objectives. To be effective, risk identification should not be an attempt to address every possible event regardless of how highly improbable it may be. Use of the categories and parameters developed in the risk management strategy, along with the identified sources of risk, can provide the discipline and streamlining appropriate to risk identification. The identified risks form a baseline to initiate risk management activities. The list of risks should be reviewed periodically to reexamine possible sources of risk and changing conditions to uncover sources and risks previously overlooked or nonexistent when the risk management strategy was last updated.

Risk identification activities focus on the identification of risks, not placement of blame. The results of risk identification activities are not used by management to evaluate the performance of individuals.

There are many methods for identifying risks. Typical identification methods include (1) Examine each element of the project work breakdown structure to uncover risks; (2) Conduct a risk assessment using a risk taxonomy. Interview subject matter experts; (3) Review risk management efforts from similar products. Examine lessons-learned documents or databases; (4) Examine design specifications and agreement requirements.

A typical work product would be a list of identified risks, including the context, conditions, and consequences of risk occurrence.

  5. Demonstrate that you have a process to evaluate and categorize each identified risk using the defined risk categories and parameters, and determine its relative priority. The evaluation of risks is needed to assign relative importance to each identified risk, and is used in determining when appropriate management attention is required. Often it is useful to aggregate risks based on their interrelationships, and develop options at an aggregate level. When an aggregate risk is formed by a roll up of lower level risks, care must be taken to ensure that important lower level risks are not ignored.

A typical work product would be a list of risks, with a priority assigned to each risk.

  6. Demonstrate that you have a process to develop a risk mitigation plan for the most important risks to the project, as defined by the risk management strategy. A critical component of a risk mitigation plan is to develop alternative courses of action, workarounds, and fallback positions, with a recommended course of action for each critical risk. The risk mitigation plan for a given risk includes techniques and methods used to avoid, reduce, and control the probability of occurrence of the risk, the extent of damage incurred should the risk occur (sometimes called a “contingency plan”), or both. Risks are monitored and when they exceed the established thresholds, the risk mitigation plans are deployed to return the impacted effort to an acceptable risk level. If the risk cannot be mitigated, a contingency plan may be invoked. Both risk mitigation and contingency plans are often generated only for selected risks where the consequences of the risks are determined to be high or unacceptable; other risks may be accepted and simply monitored.

Options for handling risks typically include alternatives such as: (1) Risk avoidance: Changing or lowering requirements while still meeting the user’s needs; (2) Risk control: Taking active steps to minimize risks; (3) Risk transfer: Reallocating design requirements to lower the risks; (4) Risk monitoring: Watching and periodically reevaluating the risk for changes to the assigned risk parameters; (5) Risk acceptance: Acknowledgment of risk but not taking any action. Often, especially for high risks, more than one approach to handling a risk should be generated.

In many cases, risks will be accepted or watched. Risk acceptance is usually done when the risk is judged too low for formal mitigation, or when there appears to be no viable way to reduce the risk. If a risk is accepted, the rationale for this decision should be documented. Risks are watched when there is an objectively defined, verifiable, and documented threshold of performance, time, or risk exposure (the combination of likelihood and consequence) that will trigger risk mitigation planning or invoke a contingency plan if it is needed.

Adequate consideration should be given early to technology demonstrations, models, simulations, and prototypes as part of risk mitigation planning.

Typical work products would include: (1) Documented handling options for each identified risk; (2) Risk mitigation plans; (3) Contingency plans; and (4) a list of those responsible for tracking and addressing each risk

  7. Demonstrate that you have a process to monitor the status of each risk periodically and implement the risk mitigation plan as appropriate. To control and manage risks effectively during the work effort, follow a program to monitor risks and their status and the results of risk-handling actions regularly. The risk management strategy defines the intervals at which the risk status should be revisited. This activity may result in the discovery of new risks or new risk-handling options that may require re-planning and reassessment. In either event, the acceptability thresholds associated with the risk should be compared against the status to determine the need for implementing a risk mitigation plan.

Typical work products would include: (1) Updated lists of risk status; (2) Updated assessments of risk likelihood, consequence, and thresholds; (3) Updated lists of risk-handling options; (4) Updated list of actions taken to handle risks; and (5) Risk mitigation plans.
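The monitoring activity can be sketched as a simple threshold check run at each review interval. The function, threshold values, and example risks below are illustrative assumptions, not a prescribed tool:

```python
# Sketch of periodic risk monitoring: at each interval defined in the risk
# management strategy, reassess each risk's parameters and deploy its
# mitigation plan when exposure crosses the acceptability threshold.
# Risk IDs, thresholds, and plan names are illustrative assumptions.

def review_risk(risk_id, likelihood, consequence, threshold, mitigation_plan):
    """Return the handling decision for one risk at a review interval."""
    exposure = likelihood * consequence  # combination of likelihood and consequence
    if exposure > threshold:
        return f"{risk_id}: exposure {exposure:.1f} exceeds {threshold} -> deploy {mitigation_plan}"
    return f"{risk_id}: exposure {exposure:.1f} within threshold -> continue monitoring"

# A reassessment raised the likelihood of R-01, so its plan is deployed.
print(review_risk("R-01", 0.7, 4, 2.5, "dual-sourcing plan"))
print(review_risk("R-02", 0.2, 3, 2.5, "schedule buffer"))
```

The same check also surfaces the updated work products listed above: refreshed likelihood and consequence assessments and a current list of actions taken.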

  1. Demonstrate that you have established and maintain an organizational policy for planning and performing the risk management processes. 
  2. Demonstrate that you establish and maintain a plan for performing the risk management process. Typically, this plan for performing the risk management process is included in (or referenced by) the project plan. This would address the comprehensive planning for all of the specific practices in the project plan, from determining risk sources and categories all the way through to the implementation of risk mitigation plans.
  3. Demonstrate that you provide adequate resources for performing the risk management process, developing the work products, and providing the services of the process. Examples of resources provided are: risk management databases, risk mitigation tools, prototyping tools, and modeling and simulation.
  4. Demonstrate that you assign responsibility and authority for performing the process, developing the work products, and providing the services of the risk management process.
  5. Demonstrate that you train the people performing or supporting the risk management process as needed.
  6. Demonstrate that you place designated work products of the risk management process under appropriate levels of configuration management.
  7. Demonstrate that you identify and involve the relevant stakeholders of the risk management process as planned.
  8. Demonstrate that you monitor and control the risk management process against the plan for performing the process and take appropriate corrective action.
  9. Demonstrate that you objectively evaluate adherence of the risk management process against its process description, standards, and procedures, and address noncompliance.
  10. Demonstrate that you review the activities, status, and results of the risk management process with higher level management and resolve issues. Reviews of the project risk status are held on a periodic and event-driven basis with appropriate levels of management, to provide visibility into the potential for project risk exposure and appropriate corrective action. Typically, these reviews will include a summary of the most critical risks, key risk parameters (such as likelihood and consequence of these risks), and the status of risk mitigation efforts.

How to Calculate Sample Size

Sample Size Calculation:

How many responses do you really need? This simple question is a never-ending quandary for researchers. A larger sample can yield more accurate results — but excessive responses can be pricey.

Consequential research requires an understanding of the statistics that drive sample size decisions. A simple equation will help you put the migraine pills away and sample confidently.

Before you can calculate a sample size, you need to determine a few things about the target population and the sample you need:

  1. Population Size — how many total people fit your demographic? For instance, if you want to know about mothers living in the US, your population size would be the total number of mothers living in the US. Don’t worry if you are unsure about this number. It is common for the population to be unknown or approximated.
  2. Margin of Error (Confidence Interval) — No sample will be perfect, so you need to decide how much error to allow. The confidence interval determines how much higher or lower than the population mean you are willing to let your sample mean fall. If you’ve ever seen a political poll on the news, you’ve seen a confidence interval. It will look something like this: “68% of voters said yes to Proposition Z, with a margin of error of +/- 5%.”
  3. Confidence Level — How confident do you want to be that the actual mean falls within your confidence interval? The most common confidence levels are 90%, 95%, and 99%.
  4. Standard Deviation — How much variance do you expect in your responses? Since we haven’t actually administered the survey yet, the safe decision is to use .5. For a yes/no question this figure is really the expected proportion p, and .5 maximizes p(1 − p), so it ensures that your sample will be large enough.

Your confidence level corresponds to a Z-score. This is a constant value needed for this equation. Here are the z-scores for the most common confidence levels:

  • 90% – Z Score = 1.645
  • 95% – Z Score = 1.96
  • 99% – Z Score = 2.576

If you choose a different confidence level, use a Z-score table to find your score.
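If no table is handy, the z-score for any two-sided confidence level can also be computed directly from the standard normal distribution; a minimal Python sketch:

```python
# Compute the critical z value for a two-sided confidence interval
# directly, instead of looking it up in a table.
from statistics import NormalDist

def z_score(confidence: float) -> float:
    """Critical z value for a two-sided interval, e.g. 0.95 -> about 1.96."""
    return NormalDist().inv_cdf(1 - (1 - confidence) / 2)

for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%}: z = {z_score(level):.3f}")  # prints 1.645, 1.960, 2.576
```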

Next, plug your Z-score, standard deviation, and margin of error into this equation:

Necessary Sample Size = (Z-score)² × StdDev × (1 − StdDev) / (Margin of Error)²

Here is how the math works assuming you chose a 95% confidence level, .5 standard deviation, and a margin of error (confidence interval) of +/- 5%.

((1.96)² × .5 × (1 − .5)) / (.05)²
= (3.8416 × .25) / .0025
= .9604 / .0025
= 384.16, so 385 respondents are needed (always round up)
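The worked example above can be wrapped in a small function. Following the article’s convention, the “standard deviation” argument is really the expected proportion (use .5 when unsure); the function name is mine:

```python
# The sample size equation as a function. Per the article, p = .5 is the
# most conservative choice for the expected proportion ("standard deviation").
import math

def sample_size(z: float, p: float, margin_of_error: float) -> int:
    """Minimum respondents needed; always rounds up to a whole person."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size(1.96, 0.5, 0.05))  # 95% confidence, +/- 5% -> prints 385
```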

Link for Z-Score Table