Analyze Phase in Six Sigma

Purpose

To pinpoint and verify causes affecting the key input and output variables tied to project goals. (“Finding the critical Xs”)

Deliverables

  • Documentation of potential causes considered in your analysis
  • Data charts and other analyses that show the link between the targeted input and process variables (Xs) and the critical output (Y)
  • Identification of value-add and non-value-add work
  • Calculation of process cycle efficiency

Key steps in Analyze

  1. Conduct value analysis. Identify value-add, non-value-add, and business non-value-add steps.
  2. Calculate Process Cycle Efficiency (PCE). Compare to world-class benchmarks to help determine how much improvement is needed (a small worked sketch follows this list).
  3. Analyze the process flow. Identify bottleneck points and constraints in a process, fallout and rework points, and assess their impact on the process throughput and its ability to meet customer demands and CTQs.
  4. Analyze data collected in Measure.
  5. Generate theories to explain potential causes. Use brainstorming, FMEA, C&E diagrams or matrices, and other tools to come up with potential causes of the observed effects.
  6. Narrow the search. Use brainstorming, selection, and prioritization techniques (Pareto charts, hypothesis testing, etc.) to narrow the search for root causes and significant cause-and-effect relationships.
  7. Collect additional data to verify root causes. Use scatter plots or more sophisticated statistical tools (such as hypothesis testing, ANOVA, or regression) to verify significant relationships.
  8. Prepare for Analyze gate review.
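
To make step 2 concrete, here is a minimal sketch of the PCE calculation (value-add time divided by total lead time). The step names, times, and classifications below are illustrative assumptions, not project data.

    # Minimal sketch: Process Cycle Efficiency (PCE) from a value analysis.
    # Step names, times (minutes), and classifications are illustrative assumptions.
    steps = [
        ("Enter order",       5,   "VA"),    # value-add
        ("Wait in queue",     120, "NVA"),   # non-value-add
        ("Credit check",      10,  "BNVA"),  # business non-value-add
        ("Pick and pack",     15,  "VA"),
        ("Rework mislabels",  30,  "NVA"),
    ]

    value_add_time = sum(t for _, t, kind in steps if kind == "VA")
    total_lead_time = sum(t for _, t, _ in steps)

    pce = value_add_time / total_lead_time
    print(f"Value-add time:  {value_add_time} min")
    print(f"Total lead time: {total_lead_time} min")
    print(f"PCE = {pce:.1%}")  # compare against the world-class benchmark for this process type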

Gate review checklist for Analyze

  1. Process Analysis
    • Calculations of Process Cycle Efficiency
    • Where process flow problems exist
  2. Root Cause Analysis
    • Documentation of the range of potential Key Process Input Variables (KPIVs) that were considered (such as cause-and-effect diagrams; FMEA)
    • Documentation of how the list of potential causes was narrowed (stratification, multivoting, Pareto analysis, etc.)
    • Statistical analyses and/or data charts that confirm or refute a cause-and-effect relationship and indicate the strength of the relationship (scatter plots, design of experiment results, regression calculations, ANOVA, components of variation, lead time calculations showing how much improvement is possible by eliminating NVA activities, etc.; see the regression sketch after this checklist)
    • Documentation of which root causes will be targeted for action in Improve (include criteria used for selection)
  3. Updated charter and project plans
    • Team recommendations on potential changes in team membership considering what may happen in Improve (expertise and skills needed, work areas affected, etc.)
    • Revisions/updates to project plans for Improve, such as time and resource commitments needed to complete the project
    • Team analysis of project status (still on track? still appropriate to focus on original goals?)
    • Team analysis of current risks and potential for acceleration
    • Plans for the Improve phase
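
As a hedged illustration of the statistical confirmation called for under Root Cause Analysis above, the sketch below fits a simple linear regression of the critical output Y on one candidate X and reports the strength of the relationship. The data values are made up for illustration, and scipy is assumed to be available.

    # Minimal sketch: test whether a candidate X has a significant linear
    # relationship with the output Y. Data values below are made up.
    from scipy import stats

    x = [2.1, 2.4, 2.9, 3.3, 3.8, 4.0, 4.6, 5.1]            # candidate input (X)
    y = [11.9, 12.4, 13.5, 13.9, 15.2, 15.0, 16.8, 17.5]    # critical output (Y)

    result = stats.linregress(x, y)
    print(f"slope = {result.slope:.3f}, R^2 = {result.rvalue**2:.3f}, "
          f"p-value = {result.pvalue:.4f}")

    # A small p-value (commonly < 0.05) together with a high R^2 supports keeping
    # this X on the short list of root causes to address in Improve.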

Tips for Analyze 

  • If you identify a quick-hit improvement opportunity, implement it using a Kaizen approach. Capture partial benefits now, then continue with the project.
  • Be critical about your own data collection—the data must help you understand the causes of the problem you’re investigating. Avoid “paralysis by analysis”: wasting valuable project time by collecting data that don’t move the project forward.
  • This is a good time in a project to celebrate team success for finding the critical Xs and implementing some quick hits!

  MSA (Measurement System Analysis)

Measurement System Analysis: Hidden Factory Evaluation 

What Comprises the Hidden Factory in a Process/Production Area?

  • Reprocessed and Scrap materials — First time out of spec, not reworkable
  • Over-processed materials — Run higher than target with higher
    than needed utilities or reagents
  • Over-analyzed materials — High Capability, but multiple in-process
    samples are run, improper SPC leading to over-control

What Comprises the Hidden Factory in a Laboratory Setting?

  • Incapable Measurement Systems — purchased, but unusable
    due to high repeatability variation and poor discrimination
  • Repetitive Analysis — tests that are run with repeats to reduce known
    variation or to (unsuccessfully) deal with overwhelming sampling issues
  • Laboratory “Noise” Issues — Lab Tech to Lab Tech Variation, Shift to
    Shift Variation, Machine to Machine Variation, Lab to Lab Variation

Hidden Factory Linkage

  • Production environments generally rely upon in-process sampling for adjustment
  • As processes attain Six Sigma performance, they begin to rely less on sampling and more upon leveraging the few influential X variables
  • The few influential X variables are determined largely through multi-vari studies and Design of Experiments (DOE); a tiny illustrative design appears below
  • Good multi-vari and DOE results depend upon an acceptable measurement system
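
As referenced in the list above, here is a tiny sketch of how a full-factorial DOE design for three candidate X variables might be laid out. The factor names, levels, and units are assumptions for illustration only.

    # Minimal sketch: a 2^3 full-factorial DOE design for three candidate Xs.
    # Factor names, levels, and units are illustrative assumptions.
    from itertools import product

    factors = {
        "temperature":  (150, 170),   # low, high (assumed deg C)
        "reagent_dose": (1.0, 1.5),   # assumed mL
        "mix_time":     (30, 45),     # assumed s
    }

    runs = list(product(*factors.values()))
    for i, run in enumerate(runs, start=1):
        settings = dict(zip(factors, run))
        print(f"Run {i}: {settings}")

    # Each run would be executed (in randomized order) and the response analyzed
    # to find the few influential Xs, provided the measurement system has
    # already been shown to be acceptable.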

Measurement System Terminology

Discrimination – Smallest detectable increment between two measured values

Accuracy-related terms

True value – Theoretically correct value

Bias – Difference between the average value of all measurements of a sample and the true value for that sample

Precision-related terms

Repeatability – Variability inherent in the measurement system under constant conditions

Reproducibility – Variability among measurements made under different conditions (e.g., different operators, measuring devices, etc.)

Stability – Distribution of measurements that remains constant and predictable over time for both the mean and standard deviation

Linearity – A measure of any change in accuracy or precision over the range of instrument capability
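
These precision terms combine in the standard Gage R&R variance decomposition (a general MSA relationship, stated here for reference):

    \sigma_{MS}^{2} = \sigma_{\text{repeatability}}^{2} + \sigma_{\text{reproducibility}}^{2}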

Measurement System Capability Index – Precision to Tolerance Ratio:

  • P/T = [5.15 * Sigma(MS)] / Tolerance
  • Addresses what percent of the tolerance is taken up by measurement error
  • Includes both repeatability and reproducibility: Operator * Unit * Trial experiment
  • Best case: 10%; acceptable: 30%

Note: 5.15 standard deviations account for 99% of Measurement System (MS) variation. The use of 5.15 is an industry standard.
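
As a small hedged sketch of the P/T calculation, using an assumed measurement-system standard deviation and assumed spec limits:

    # Minimal sketch: Precision-to-Tolerance ratio. All numbers are assumed.
    sigma_ms = 0.04          # measurement-system standard deviation (assumed)
    usl, lsl = 10.5, 9.5     # spec limits (assumed); tolerance = USL - LSL

    p_to_t = (5.15 * sigma_ms) / (usl - lsl)
    print(f"P/T = {p_to_t:.1%}")   # best case: <= 10%, acceptable: <= 30%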

Measurement System Capability Index – %Gage R & R:

  • %R&R = [Sigma(MS) / Sigma(Observed Process Variation)] * 100
  • Addresses what percent of the Observed Process Variation is taken up by measurement error
  • %R&R is the best estimate of the effect of measurement systems on the validity of process improvement studies (DOE)
  • Includes both repeatability and reproducibility
  • As a target, look for %R&R < 30%
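
And a correspondingly small sketch of %Gage R&R, again using assumed standard deviations that would normally come from an Operator * Unit * Trial study:

    # Minimal sketch: %Gage R&R. Standard deviations are assumed values.
    sigma_ms = 0.04         # measurement-system std. dev. (repeatability + reproducibility)
    sigma_observed = 0.35   # observed total process std. dev.

    pct_rr = (sigma_ms / sigma_observed) * 100
    print(f"%R&R = {pct_rr:.1f}%  (target: < 30%)")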

 

 

Why MSA, and How Is It Different from Calibration?

Measurement System Analysis:

Statistical Process Control has taught us to look at and evaluate the variation in processes. The more complex the process, the greater the potential variation. What we get at the output end is the stacked-up variation resulting from the variation introduced at every step.

Measurement is a process of evaluating an unknown quantity and expressing it in numbers. The measurement process, too, is subject to all the laws of variation and Statistical Process Control.

Measurement Systems Analysis is the scientific and statistical analysis of the variation that the measurement process itself introduces.
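
Put another way (a standard MSA relationship), the variation we observe in the data is the combination of true process variation and measurement-system variation:

    \sigma_{\text{observed}}^{2} = \sigma_{\text{process}}^{2} + \sigma_{\text{measurement}}^{2}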

Why MSA?  

A measurement system tells us, in numerical terms, important information about the entity we measure. How sure can we be about the data the measurement system delivers? Is it the real value of the measure that we obtain from the measurement process, or is it measurement system error that we see? Measurement system errors can be expensive and can undermine our ability to obtain the true value of what we measure. So we can be confident in our reading of a parameter only to the extent that our measurement system allows.

How does MSA differ from calibration?  

It is standard practice to periodically calibrate all gages and measuring instruments used on the shop floor.

In simple terms, calibration is the process of checking the measuring instrument's scale against standards of known value and correcting any difference. It is done in a controlled environment by specially trained personnel. MSA goes further: where calibration addresses an instrument's accuracy (bias) against a standard, MSA evaluates the full variation of the measurement system as it is actually used, including repeatability, reproducibility, stability, and linearity.

How Do Non-Fatal Errors Contribute to a Decrease in Quality?

Many customer contact centers report quality performance that they believe is acceptable.  However, high performance centers have found that in order to drive real business performance — customer satisfaction improvement and reduction in costly errors — they have to rethink how they measure and report Quality.

I have consulted with three customer contact centers on this topic.  A key finding: the best centers distinguish fatal from non-fatal errors; they know that one quality score doesn't work!

However, most centers have just one quality score for a transaction (a call, an email, etc.) and they establish a threshold that they think is appropriate.  For example, one center’s quality form has 25 elements (many are weighted differently) with a passing grade of 80%.  This approach is typical, but it doesn’t work to drive high performance.

High performance centers create a distinct score for both fatal (or critical) and non-fatal (or non-critical) errors.  This enables them to (a) focus on fixing those errors that have the most impact on the business, and (b) drive performance to very high levels.

In my previous blog post about “Transactional Quality”, I explained fatal and non-fatal errors.

What Is A Fatal Error?

We find that there are at least six types of fatal errors, which fall into two categories.  The first category includes those things that impact the customer.  Fatal errors in this category include:

1.  Giving the customer the wrong answer.  This can be further divided into two types:

• The customer will call back or otherwise re-contact the center.  This is the “classic” fatal error.

• The customer does not know they received the wrong answer (e.g., telling the customer they are not eligible for something that they are, in fact, eligible for).

2.  Something that costs the customer unnecessary expense.  An example would be telling the customer that they need to visit a retail store when they could have handled the inquiry over the phone.

3.  Anything highly correlated with customer satisfaction.  We find that first-call resolution is the single attribute most often correlated with customer satisfaction, although attribute correlations are different for different businesses (e.g., one center found that agent professionalism was the number-two driver of customer satisfaction—unusual given that professionalism is typically a non-fatal attribute).

The second category includes the next three fatal errors — those things that affect the business:

4.  Anything illegal.  The best example of this is breach of privacy (e.g., a HIPAA violation in a healthcare contact center, or an FDCPA violation in a collections center).

5.  Something that costs the company.  A good example is typing the wrong address into the system, which then results in undelivered mail.  This is another “classic” fatal error.

6.  Lost revenue opportunity.  This is primarily for a sales or collections center.

So… What is a Non-Fatal Error?

Non-fatal errors can be considered annoyances.  These typically include misspellings in emails and what is often referred to as “soft skills” (using the customer’s name, politeness, etc.) on the phone.

If they are annoyances, then why spend time tracking them?  Because too many non-fatal errors can create a transaction that is fatally defective.  One misspelling or one bad word choice on an email probably won’t even elicit a response from a customer, but multiple misspellings, bad word choices, bad sentence structures, etc. will cause the customer to think that the substance of the email is likely incorrect.

What’s the Right Way to Score?

In a high performance center, one fatal error will make the entire transaction defective.  There is no middle ground.  So, the score for the center at the end of the month is simple—it’s the number of transactions (e.g., calls) without a fatal error divided by the number of transactions monitored.
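
A minimal sketch of this scoring rule, with made-up monitoring results: any transaction with at least one fatal error is defective, no matter how many non-fatal errors it carries.

    # Minimal sketch: "one fatal error = defect" quality score.
    # Each monitored transaction records counts of fatal and non-fatal errors.
    # The data below are made up for illustration.
    transactions = [
        {"id": 1, "fatal": 0, "non_fatal": 2},
        {"id": 2, "fatal": 1, "non_fatal": 0},   # defective: one fatal error is enough
        {"id": 3, "fatal": 0, "non_fatal": 0},
        {"id": 4, "fatal": 0, "non_fatal": 5},
        {"id": 5, "fatal": 2, "non_fatal": 1},   # defective
    ]

    defect_free = sum(1 for t in transactions if t["fatal"] == 0)
    quality_score = defect_free / len(transactions)
    print(f"Fatal-error-free rate: {quality_score:.0%}")   # 3 of 5 -> 60%

    # Non-fatal errors are tracked separately so soft-skill coaching
    # does not dilute the fatal-error signal.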

So, what happens in a center that changes from the traditional scoring to the more accurate “one fatal error = defect” scoring?  One center that made this change thought that its quality performance was good.  However, when it re-scored, it found that the percentage of transactions with a fatal error ranged from 2% to 15%, with the average at about 10%.  This was a real shock to the executives, who had been used to hearing that their quality was around 97%.