Pre-Control in Lean Six Sigma

The Pre-control Technique

Pre-control is a control charting methodology that uses specification limits, instead of statistically derived control limits, to assess process capability over time. Pre-control charting is useful in initial process setup to get a rough idea of process capability. It does not use the continuous data found upstream in the process, which would be more in alignment with prevention thinking.

An easy method of controlling the process average is known as "pre-control." Pre-control was developed in 1954 by a group of consultants (including Dorian Shainin) in an attempt to replace the control chart. Pre-control is most successful with processes which are inherently stable and not subject to rapid process drifts once they are set up. Pre-control can act both as a guide in setting the process aim and as a monitor of the continuing process.

The idea behind pre-control is to divide the total tolerance into zones. The two boundaries within the tolerance are called pre-control lines. The location of these lines is halfway between the center of the specification and the specification limits. It can be shown that 86% of the parts will be inside the P-C lines, with 7% in each of the outer sections, if the process is normally distributed and Cpk = 1. Usually the process will occupy much less of the tolerance range, so this extreme case will not apply.

The chance that two parts in a row will fall outside either P-C line is 1/7 times 1/7, or 1/49. This means that only once in every 49 pieces can we expect to get two pieces in a row outside the P-C lines just due to chance. There is a much greater chance (48/49) that the process has shifted. It is advisable, therefore, to reset the process to the center. It is equally unlikely that one piece will be outside one P-C line and the next outside the other P-C line. This is a definite indication that a special factor has widened the variation and action must be taken to find that special cause before continuing.
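These figures can be checked with a short calculation. Below is a minimal sketch, assuming a centered, normally distributed process with Cpk = 1 (so the pre-control lines sit 1.5 standard deviations from the center); it uses only the Python standard library.

```python
# Sketch: verify the pre-control zone percentages for a normally
# distributed process with Cpk = 1 (process spread equals the tolerance).
# Assumes a centered spec with limits at +/-T/2 and P-C lines at +/-T/4.
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# With Cpk = 1, the tolerance spans 6 sigma, so the P-C lines sit at 1.5 sigma.
z_pc = 1.5
inside_pc = normal_cdf(z_pc) - normal_cdf(-z_pc)   # green (target) zone
outer_each = normal_cdf(-z_pc)                     # each outer (yellow) zone
print(f"Inside P-C lines: {inside_pc:.1%}")        # ~86.6%
print(f"Each outer zone:  {outer_each:.1%}")       # ~6.7%

# Chance that two consecutive pieces both fall outside the P-C lines
# purely by chance, using the classic 1/7 approximation for one piece:
p_outside = 1 / 7
print(f"Two in a row outside P-C lines: {p_outside**2:.4f} (about 1 in 49)")
```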

Pre-control rules:

  • Set-up: The job is OK to run if five pieces in a row are inside the target area.
  • Running: Sample two consecutive pieces.
  • If the first piece is within target, run (don't measure the second piece).
  • If the first piece is not within target, check the second piece.
  • If the second piece is within target, continue to run.
  • If both pieces are out of target, adjust the process and go back to set-up.
  • Any time a reading is out of specification, stop and adjust.

The ideal sampling frequency is 25 checks between resets. Sampling can be relaxed if the process runs for more than 25 checks without needing adjustment, and must be increased if the opposite is true. To make pre-control even easier to use, gauges for the target area may be painted green. Yellow is used for the outer zones and red for out-of-specification.
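The set-up and running rules listed above can be expressed as a small decision function. The sketch below is illustrative only; the zone names and the `classify`/`running_decision` helpers are assumptions, not part of any standard.

```python
# Sketch: the pre-control running rules expressed as a decision function.
from enum import Enum

class Zone(Enum):
    GREEN = "inside P-C lines (target)"
    YELLOW = "between a P-C line and a spec limit"
    RED = "outside specification"

def classify(x, lsl, usl):
    """Classify a measurement against the spec limits and P-C lines."""
    center = (lsl + usl) / 2
    half_band = (usl - lsl) / 4        # P-C lines sit halfway to each limit
    if x < lsl or x > usl:
        return Zone.RED
    if abs(x - center) <= half_band:
        return Zone.GREEN
    return Zone.YELLOW

def running_decision(first, second, lsl, usl):
    """Apply the pair-sampling rules: returns 'run', 'continue', or 'adjust'."""
    z1 = classify(first, lsl, usl)
    if z1 is Zone.RED:
        return "adjust"                # any out-of-spec reading: stop and adjust
    if z1 is Zone.GREEN:
        return "run"                   # first piece in target: don't measure the second
    z2 = classify(second, lsl, usl)
    if z2 is Zone.RED:
        return "adjust"
    if z2 is Zone.GREEN:
        return "continue"              # second piece in target: keep running
    return "adjust"                    # both pieces yellow: adjust, go back to set-up

# Example: spec 10.0 +/- 0.4, so P-C lines at 9.8 and 10.2
print(running_decision(10.25, 10.1, lsl=9.6, usl=10.4))   # 'continue'
```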

The advantages of pre-control include:

  • Shifts in process centering or increases in process spread can be detected
  • The percentage of non-conforming product will not exceed a pre-determined level
  • No recording, calculating or plotting is required
  • Attribute or visual characteristics can be used
  • Can serve as a set-up plan for short production runs, often found in job shops
  • The specification tolerance is used directly
  • Very simple instructions are needed for operators

The disadvantages of pre-control include:

  • There is no permanent paper record of adjustments
  • Subtle changes in process capability cannot be calculated
  • It will not work for an unstable process
  • It will not work effectively if the process spread is greater than the tolerance

Risk Management Framework

 


How to Calculate Asset value (AV)

The asset value (AV) is calculated on the basis of range value.

Range value is the product of the values of “C”, “I” and “A”.

 Range Value = C * I * A

If not all three parameters (C, I, A) are applicable for an asset and only one or two of them apply, the range value is calculated as the product of the applicable parameters only. Once the range value is calculated for an asset, the asset value (AV) is obtained from the defined table, which maps the range value to the AV depending on the number of applicable parameters.
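A minimal sketch of this calculation is shown below. The C/I/A ratings and the range-value-to-AV mapping table are hypothetical placeholders; substitute your organization's defined table.

```python
# Sketch: range value and asset value (AV). The AV_TABLE below is a
# hypothetical placeholder, not the defined table referenced above.
def range_value(c=None, i=None, a=None):
    """Multiply only the applicable parameters (non-applicable ones are None)."""
    applicable = [v for v in (c, i, a) if v is not None]
    if not applicable:
        raise ValueError("At least one of C, I, A must be applicable")
    rv = 1
    for v in applicable:
        rv *= v
    return rv, len(applicable)

# Hypothetical mapping: number of applicable parameters ->
# list of (upper range value, AV) bands. Replace with the defined table.
AV_TABLE = {
    3: [(8, 1), (18, 2), (48, 3), (125, 4)],
    2: [(4, 1), (9, 2), (16, 3), (25, 4)],
    1: [(2, 1), (3, 2), (4, 3), (5, 4)],
}

def asset_value(c=None, i=None, a=None):
    rv, n = range_value(c, i, a)
    for upper, av in AV_TABLE[n]:
        if rv <= upper:
            return av
    return AV_TABLE[n][-1][1]

print(asset_value(c=3, i=4, a=2))   # range value 24 -> AV 3 under this sample table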

How to Calculate “C”

[Table: confidentiality (C) parameter ratings]

How to Calculate “A”

[Table: availability (A) parameter ratings]

How to Calculate “I”

[Table: integrity (I) parameter ratings]

Analyze Phase in Six Sigma

Purpose

To pinpoint and verify causes affecting the key input and output variables tied to project goals. (“Finding the critical Xs”)

Deliverables

  • Documentation of potential causes considered in your analysis
  • Data charts and other analyses that show the link between the targeted input and process (Xs) variables and critical output (Y)
  • Identification of value-add and non-value-add work
  • Calculation of process cycle efficiency

Key steps in Analyze

  1. Conduct value analysis. Identify value-add, non-value-add and business non-value-add steps
  2. Calculate Process Cycle Efficiency (PCE). Compare to world-class benchmarks to help determine how much improvement is needed (see the calculation sketch after this list).
  3. Analyze the process flow. Identify bottleneck points and constraints in a process, fallout and rework points, and assess their impact on the process throughput and its ability to meet customer demands and CTQs.
  4. Analyze data collected in Measure.
  5. Generate theories to explain potential causes. Use brainstorming, FMEA, C&E diagrams or matrices, and other tools to come up with potential causes of the observed effects.
  6. Narrow the search. Use brainstorming, selection, and prioritization techniques (Pareto charts, hypothesis testing, etc.) to narrow the search for root causes and significant cause-and-effect relationships.
  7. Collect additional data to verify root causes. Use scatter plots or more sophisticated statistical tools (such as hypothesis testing, ANOVA, or regression) to verify significant relationships.
  8. Prepare for Analyze gate review.
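A minimal sketch of the PCE calculation from steps 1 and 2, using PCE = value-add time / total lead time; the step names, times, and categories below are illustrative assumptions.

```python
# Sketch: Process Cycle Efficiency from a value analysis.
steps = [
    {"name": "Enter order",       "minutes": 5,   "category": "value-add"},
    {"name": "Wait for approval", "minutes": 240, "category": "non-value-add"},
    {"name": "Credit check",      "minutes": 15,  "category": "business non-value-add"},
    {"name": "Pick and pack",     "minutes": 20,  "category": "value-add"},
]

value_add_time = sum(s["minutes"] for s in steps if s["category"] == "value-add")
total_lead_time = sum(s["minutes"] for s in steps)

pce = value_add_time / total_lead_time
print(f"PCE = {value_add_time} / {total_lead_time} = {pce:.1%}")   # 25 / 280 = 8.9%
```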

Gate review checklist for Analyze

  1. Process Analysis
    • Calculations of Process Cycle Efficiency
    • Where process flow problems exist
  2. Root Cause Analysis
    • Documentation of the range of potential Key Process Input Variables (KPIVs) that were considered (such as cause-and-effect diagrams; FMEA)
    • Documentation of how the list of potential causes was narrowed (stratification, multivoting, Pareto analysis, etc.)
    • Statistical analyses and/or data charts that confirm or refute a cause-and-effect relationship and indicate the strength of the relationship (scatter plot, design of experiment results, regression calculations, ANOVA, component of variation, lead time calculations showing how much improvement is possible by elimination of NVA activities, etc.)
    • Documentation of which root causes will be targeted for action in Improve (include criteria used for selection)
  3. Updated charter and project plans
    • Team recommendations on potential changes in team membership considering what may happen in Improve (expertise and skills needed, work areas affected, etc.)
    • Revisions/updates to project plans for Improve, such as time and resource commitments needed to complete the project
    • Team analysis of project status (still on track? still appropriate to focus on original goals?)
    • Team analysis of current risks and potential for acceleration
    • Plans for the Improve phase

Tips for Analyze 

  • If you identify a quick-hit improvement opportunity, implement it using a Kaizen approach. Get partial benefits now, then continue with the project.
  • Be critical about your own data collection—the data must help you understand the causes of the problem you’re investigating. Avoid “paralysis by analysis”: wasting valuable project time by collecting data that don’t move the project forward.
  • This is a good time in a project to celebrate team success for finding the critical Xs and implementing some quick hits!

MSA (Measurement System Analysis)

Measurement System Analysis: Hidden Factory Evaluation 

What Comprises the Hidden Factory in a Process/Production Area?

  • Reprocessed and Scrap materials — First time out of spec, not reworkable
  • Over-processed materials — Run higher than target with higher
    than needed utilities or reagents
  • Over-analyzed materials — High Capability, but multiple in-process
    samples are run, improper SPC leading to over-control

What Comprises the Hidden Factory in a Laboratory Setting?

  • Incapable Measurement Systems — purchased, but are unusable
    due to high repeatability variation and poor discrimination
  • Repetitive Analysis — Test that runs with repeats to improve known
    variation or to unsuccessfully deal with overwhelming sampling issues
  • Laboratory “Noise” Issues — Lab Tech to Lab Tech Variation, Shift to
    Shift Variation, Machine to Machine Variation, Lab to Lab Variation

Hidden Factory Linkage

  • Production Environments generally rely upon in-process sampling for adjustment
  • As Processes attain Six Sigma performance they begin to rely less on sampling and more upon leveraging the few influential X variables
  • The few influential X variables are determined largely through multi-vari studies and Design of Experiments (DOE)
  • Good multi-vari and DOE results are based upon acceptable measurement analysis


Measurement System Terminology

Discrimination – Smallest detectable increment between two measured values

Accuracy related terms

True value – Theoretically correct value

Bias – Difference between the average value of all measurements of a sample and the true value for that sample

Precision related terms

Repeatability – Variability inherent in the measurement system under constant conditions

Reproducibility – Variability among measurements made under different conditions (e.g., different operators, measuring devices, etc.)

Stability – A distribution of measurements that remains constant and predictable over time, for both the mean and the standard deviation

Linearity – A measure of any change in accuracy or precision over the range of instrument capability

Measurement System Capability Index – Precision to Tolerance Ratio:

  • P/T = [5.15 * Sigma(MS)] / Tolerance
  • Addresses what percent of the tolerance is taken up by measurement error
  • Includes both repeatability and reproducibility:  Operator * Unit * Trial experiment
  • Best case: 10% or less; Acceptable: up to 30%

Note: 5.15 standard deviations account for 99% of Measurement System (MS) variation. The use of 5.15 is an industry standard.

Measurement System Capability Index – %Gage R & R:

  • %R&R = [Sigma(MS) / Sigma(Observed Process Variation)] * 100
  • Addresses what percent of the Observed Process Variation is taken up by measurement error
  • %R&R is the best estimate of the effect of measurement systems on the validity of process improvement studies (DOE)
  • Includes both repeatability and reproducibility
  • As a target, look for %R&R < 30%
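A minimal sketch of both indices using the formulas above; the sigma values and specification limits are illustrative assumptions.

```python
# Sketch: P/T ratio and %Gage R&R computed from an estimated
# measurement-system standard deviation.
def precision_to_tolerance(sigma_ms, usl, lsl, k=5.15):
    """P/T = (k * sigma_MS) / tolerance, expressed as a percentage."""
    return 100.0 * (k * sigma_ms) / (usl - lsl)

def percent_rr(sigma_ms, sigma_observed):
    """%R&R = (sigma_MS / sigma_observed_process_variation) * 100."""
    return 100.0 * sigma_ms / sigma_observed

sigma_ms = 0.02          # from a Gage R&R study (repeatability + reproducibility)
sigma_observed = 0.10    # observed total process variation
usl, lsl = 10.5, 9.5     # specification limits

print(f"P/T  = {precision_to_tolerance(sigma_ms, usl, lsl):.1f}%")   # 10.3%
print(f"%R&R = {percent_rr(sigma_ms, sigma_observed):.1f}%")          # 20.0%
```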

 

 

How Do Non-Fatal Errors Contribute to a Decrease in Quality?

Many customer contact centers report quality performance that they believe is acceptable.  However, high performance centers have found that in order to drive real business performance — customer satisfaction improvement and reduction in costly errors — they have to rethink how they measure and report Quality.

I have consulted with three customer contact centers on this topic. A key finding: the best centers distinguish fatal from non-fatal errors — they know that one quality score doesn't work!

However, most centers have just one quality score for a transaction (a call, an email, etc.) and they establish a threshold that they think is appropriate.  For example, one center’s quality form has 25 elements (many are weighted differently) with a passing grade of 80%.  This approach is typical, but it doesn’t work to drive high performance.

High performance centers create a distinct score for both fatal (or critical) and non-fatal (or non-critical) errors.  This enables them to (a) focus on fixing those errors that have the most impact on the business, and (b) drive performance to very high levels.

In my previous blog about "Transactional Quality", I explained Fatal and Non-Fatal Errors.

What Is A Fatal Error?

We find that there are at least six types of fatal errors, which fall into two categories.  The first category includes those things that impact the customer.  Fatal errors in this category include:

1.  Giving the customer the wrong answer.  This can be further divided into two types:

• The customer will call back or otherwise re-contact the center.  This is the “classic” fatal error.

• The customer does not know they received the wrong answer (e.g., telling the customer they are not eligible for something that they are, in fact, eligible for).

2.  Something that costs the customer unnecessary expense.  An example would be telling the customer that they need to visit a retail store when they could have handled the inquiry over the phone.

3.  Anything highly correlated with customer satisfaction.  We find that first-call resolution is the single attribute most often correlated with customer satisfaction, although attribute correlations are different for different businesses (e.g., one center found that agent professionalism was the number-two driver of customer satisfaction—unusual given that professionalism is typically a non-fatal attribute).

The second category includes the next three fatal errors — those things that affect the business:

4.  Anything illegal.  The best example of this is breach of privacy (e.g., a HIPAA violation in a healthcare contact center, or an FDCPA violation in a collections center).

5.  Something that costs the company.  A good example is typing the wrong address into the system, which then results in undelivered mail.  This is another “classic” fatal error.

6.  Lost revenue opportunity.  This is primarily for a sales or collections center.

So… What is a Non-Fatal Error?

Non-fatal errors can be considered as annoyances.  These typically include misspellings on emails and what is often referred to as “soft skills” (using the customer’s name, politeness, etc.) on the phone.

If they are annoyances, then why spend time tracking them?  Because too many non-fatal errors can create a transaction that is fatally defective.  One misspelling or one bad word choice on an email probably won’t even elicit a response from a customer, but multiple misspellings, bad word choices, bad sentence structures, etc. will cause the customer to think that the substance of the email is likely incorrect.

What’s the Right Way to Score?

In a high performance center, one fatal error will make the entire transaction defective.  There is no middle ground.  So, the score for the center at the end of the month is simple—it’s the number of transactions (e.g., calls) without a fatal error divided by the number of transactions monitored.

So, what happens in a center that changes from the traditional scoring to the more accurate "one fatal error = defect" scoring? One center that made this change thought that its quality performance was good. However, when it re-scored, it found that the percentage of transactions with a fatal error ranged from 2% to 15%, with the average at about 10%. This was a real shock to the executives, who had been used to hearing that their quality was around 97%.

What is Transactional Quality?

Transactional Quality

 What are Transactions?

Interactions with end-users are called transactions. Examples include calls, faxes, e-mails, web-based sessions, etc. All types of end-user transactions are monitored to ensure that call-centre, client and end-user requirements and targets are met.

Why Transaction Monitoring? 

  • Fewer mistakes and more satisfied customers.
  • Helps trainers identify the training needs of CSRs.
  • Ensures delivery of the targets, standards & parameters defined in the S.L.A. (Service Level Agreement) with the client.
  • Positive impact on the profitability & growth of the business.
  • Positive impact on the personal growth, skill set, confidence & motivation level of a CSR.

And also for…

Process Control 

To maintain our own standard of quality of work.

Process Analysis

Calculate FA and NFA scores, study trends over a period of time, and incorporate the findings accordingly.

Continual Improvement 

To be able to identify problem areas and take preventive actions.

How is it done?

 There are six basic levels of quality monitoring:

  • Walk-around observation
  • Side-by-side monitoring
  • Plug-in/double jack monitoring
  • Silent monitoring
  • Record and review
  • Voice and screen/multi-media monitoring

Monitoring Methods for Telephone Transactions

Remote Monitoring:

Auditing recorded calls.

Live Barge-in:

Auditing real time calls.

Screen Capture:

Auditing voice and screen component of recorded/ live calls.

Side by Side Monitoring:

Auditing a call sitting next to a CSR.

Terminologies in TM

CTQ:

Critical To Quality Characteristics. Customer performance requirements of a product or service.

Defect:

Any event that does not meet the specifications of a CTQ.

Defect Opportunity:

Any measurable event that provides a chance of not meeting a customer requirement. These are the parameters (covering Non-Fatal Errors) monitored in any one call. For multiple calls, the number of opportunities is the product of the number of calls and the number of parameters. (Note: this excludes the compliance parameters, i.e., the Fatal Error parameters.)


Fatal Error:

Any defect in the transaction that has legal or financial implications, or a gross error in customer handling such as rude or abusive language, is termed a fatal error. Any fatal error results in the whole transaction being declared VOID.

There are 6 such categories:

  • Wrong Resolution
  • Misleading Information
  • Financial loss to the client (wrong address details)
  • Foul language
  • Case Note defects like incomplete details mentioned in the case notes, wrong customer profile.

Non-Fatal Error:

Any error whose occurrence is undesirable but which does not by itself result in a VOID transaction. Defects which may lead to customer dissatisfaction are also included in this category.

Threshold Scores:

The score above which a transaction is deemed a pass and below which it is considered a fail.

Defective Transaction:

Any monitored transaction that is deemed VOID on account of a FATAL ERROR occurrence. Note: a transaction with no fatal errors but multiple Non-Fatal errors, resulting in a transaction score below 75%, will also be considered a defective transaction.

Sampling Methodology:

Calls are picked at random from the recording device using a random number table to make the sample relevant and representative and to remove bias. Calls of a minimum length are always included in the sample to ensure review of all aspects.

How is it measured?

Metrics 

The following accuracy metrics are measured during TM:

  • Fatal Accuracy: COPC Threshold >98%
  • Non-Fatal Accuracy: COPC Threshold >98%
  • TM Score: SLA Threshold

 TM Calculations

  • FA = (Number of pass calls / Total calls) × 100%
  • NFA = 100% × (1 – Non-fatal defects / Total opportunities)
  • Total opportunities = Total calls × Number of parameters
  • TM Score = Sum of absolute call scores / Total calls
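A minimal sketch of these calculations; the monitoring counts, number of parameters, and call scores below are illustrative assumptions.

```python
# Sketch: the TM calculations above, using illustrative monitoring results.
# `parameters_per_call` counts only the non-fatal parameters on the audit sheet.
calls_monitored = 50
fatal_defective_calls = 2          # calls with at least one fatal error
non_fatal_defects = 18             # total non-fatal errors across all calls
parameters_per_call = 10
call_scores = [0.92] * 50          # per-call audit-sheet scores (0 to 1)

total_opportunities = calls_monitored * parameters_per_call

fatal_accuracy = (calls_monitored - fatal_defective_calls) / calls_monitored
non_fatal_accuracy = 1 - non_fatal_defects / total_opportunities
tm_score = sum(call_scores) / calls_monitored

print(f"Fatal Accuracy (FA):      {fatal_accuracy:.1%}")      # 96.0% (below the 98% threshold)
print(f"Non-Fatal Accuracy (NFA): {non_fatal_accuracy:.1%}")  # 96.4%
print(f"TM Score:                 {tm_score:.1%}")            # 92.0%
```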

Audit Sheets

An audit sheet is used to mark the observations of Transaction Monitoring during a call audit session by a monitor. It is the tool that documents the following:

  • Parameters (Fatal and Non-fatal) based on Call Flow
  • Brief description
  • Weightages
  • Score methodology
  • Space for comments

Different audit sheets are generally used for monitoring different types of transactions, for example: In-call Audit sheet, Side-by-Side Audit sheet, Escalation Audit sheet, Email Audit sheet.

 

Where Did Six Sigma Come From?

As with Lean, we can trace the roots of Six Sigma to the nineteenth-century craftsman, whose challenges as an individual a long time ago mirror the challenges of organizations today. The craftsman had to minimize wasted time, actions, and materials; he also had to make every product or service to a high standard of quality the first time, each time, every time.

Quality Beginning

The roots of what would later become Six Sigma were planted in 1908, when W. S. Gosset developed statistical tests to help analyze quality data obtained at the Guinness Brewery. About the same time, A. K. Erlang studied telephone traffic problems for the Copenhagen Telephone Company in an effort to increase the reliability of service in an industry known for its inherent randomness. It's likely that Erlang was the first mathematician to apply probability theory in an industrial setting, an effort that led to modern queuing and reliability theory. With these underpinnings, Walter Shewhart worked at Western Electric (the manufacturing arm of AT&T) in the 1920s and 1930s to develop the theoretical concepts of quality control. Lean-like industrial engineering techniques did not solve quality and variation-related problems; more statistical intelligence was needed to get to their root causes. Shewhart is also known as the originator of the Plan-Do-Check-Act cycle, which is sometimes ascribed to Dr. W. Edwards Deming, Shewhart's understudy. As the story goes, Deming made the connection between quality and cost: if you find a way to prevent defects and do everything right the first time, you won't have any need to perform rework. Therefore, as quality goes up, the cost of doing business goes down. Deming's words were echoed in the late 1970s by a guy named Philip Crosby, who popularized the notion that "quality is free."

Quality Crazy

War and devastation bring us to Japan, where Deming did most of his initial quality proselytizing with another American, Dr. Joseph Juran. Both helped Japan rebuild its economy after World War II, consulting with numerous Japanese companies in the development of statistical quality control techniques, which later spread into the system known as Total Quality Control (TQC).

As the global economy grew, organizations grew in size and complexity. Many administrative, management, and enabling functions grew around the core function of a company to make this or that product. The thinking of efficiency and quality, therefore, began to spread from the manufacturing function to virtually all functions— procurement, billing, customer service, shipping, and so on. Quality is not just one person’s or one department’s job. Rather, quality is everyone’s job! This is when quality circles and suggestion programs abounded in Japanese companies: no mind should be wasted, and everyone’s ideas are necessary. Furthermore, everyone should continuously engage in finding better ways to create value and improve performance. By necessity, quality became everyone’s job, not just the job of a few … especially in Japan, at a time when there was precious little money to invest in new equipment and technology.

The rest of the story might be familiar if you're old enough to remember. By the late 1970s, America had lost its quality edge in cars, TVs, and other electronics, and American companies were suffering significant market share losses. Japanese plants were far more productive than and superior to American plants, according to a 1980 NBC television program, If Japan Can... Why Can't We? In response to all this, American companies took up the quality cause. They made Deming and Juran heroes, and institutionalized the Japanese-flavored TQC into an American counterpart, Total Quality Management (TQM). A special government award, the Baldrige Award, was created to recognize the companies that best embodied the ideal practice of TQM. The many elements and tools of quality improvement were organized into a teachable, learnable, and doable system, and a booming field of quality professionals was born.

Quality Business

The co-founder of Six Sigma, Dr. Mikel Harry, has often said that Six Sigma shifts the focus from the business of quality to the quality of business. What he means is that for many years the practices of quality improvement floated loosely around a company, driven by the quality department. And as much as the experts said that quality improvement has to be driven and supported by top executives, it generally wasn't.

Enter Jack Welch, the iconic CEO who led General Electric through two decades of incredible growth and consistent returns for shareholders. In the mid-1990s, Welch had a discussion with AlliedSignal CEO Larry Bossidy, a former GE executive, who said that Six Sigma could transform not only a process or product, but a company. In other words, GE could use Six Sigma as AlliedSignal was already doing: to improve the financial health and viability of the corporation through real and lasting operational improvements. Welch took note and hired Mikel Harry to train hundreds of his managers and specialists to become Six Sigma Black Belts, Master Black Belts, and Champions. Welch installed a deployment infrastructure so he could fan the Six Sigma methodology out as widely as possible across GE's many departments and functions. In short, Welch elevated the idea and practice of quality from the engineering hallways of the corporation into the boardroom.

To be clear, the first practical application of Six Sigma on a pervasive basis occurred at Motorola, where Dr. Harry and the co-inventor of Six Sigma, Bill Smith, worked as engineers. Bob Galvin, then CEO of Motorola, paved the way for Bossidy and Welch in that he proved how powerful Six Sigma was in solving difficult performance problems. He also used Six Sigma at Motorola to achieve unprecedented quality levels for key products. One such product was the Motorola Bandit pager, which failed so rarely that Motorola simply replaced rather than repaired units when they did fail.