How Do Non-Fatal Errors Contribute to a Decrease in Quality?

Many customer contact centers report quality performance that they believe is acceptable.  However, high performance centers have found that in order to drive real business performance — customer satisfaction improvement and reduction in costly errors — they have to rethink how they measure and report Quality.

I have consulted with three customer contact centers on this topic.  A key finding: the best centers distinguish fatal from non-fatal errors — they know that one quality score doesn’t work!

However, most centers have just one quality score for a transaction (a call, an email, etc.) and they establish a threshold that they think is appropriate.  For example, one center’s quality form has 25 elements (many are weighted differently) with a passing grade of 80%.  This approach is typical, but it doesn’t work to drive high performance.

High performance centers create a distinct score for both fatal (or critical) and non-fatal (or non-critical) errors.  This enables them to (a) focus on fixing those errors that have the most impact on the business, and (b) drive performance to very high levels.

In my previous blog post about “Transactional Quality,” I explained fatal and non-fatal errors in more detail.

What Is A Fatal Error?

We find that there are at least six types of fatal errors, which fall into two categories.  The first category includes those things that impact the customer.  Fatal errors in this category include:

1.  Giving the customer the wrong answer.  This can be further divided into two types:

• The customer will call back or otherwise re-contact the center.  This is the “classic” fatal error.

• The customer does not know they received the wrong answer (e.g., telling the customer they are not eligible for something that they are, in fact, eligible for).

2.  Something that costs the customer unnecessary expense.  An example would be telling the customer that they need to visit a retail store when they could have handled the inquiry over the phone.

3.  Anything highly correlated with customer satisfaction.  We find that first-call resolution is the single attribute most often correlated with customer satisfaction, although attribute correlations are different for different businesses (e.g., one center found that agent professionalism was the number-two driver of customer satisfaction—unusual given that professionalism is typically a non-fatal attribute).

The second category includes the next three fatal errors — those things that affect the business:

4.  Anything illegal.  The best example of this is breach of privacy (e.g., a HIPAA violation in a healthcare contact center, or an FDCPA violation in a collections center).

5.  Something that costs the company.  A good example is typing the wrong address into the system, which then results in undelivered mail.  This is another “classic” fatal error.

6.  Lost revenue opportunity.  This is primarily for a sales or collections center.

So… What is a Non-Fatal Error?

Non-fatal errors can be considered annoyances.  These typically include misspellings in emails and lapses in what is often referred to as “soft skills” (using the customer’s name, politeness, etc.) on the phone.

If they are annoyances, then why spend time tracking them?  Because too many non-fatal errors can create a transaction that is fatally defective.  One misspelling or one bad word choice on an email probably won’t even elicit a response from a customer, but multiple misspellings, bad word choices, bad sentence structures, etc. will cause the customer to think that the substance of the email is likely incorrect.

What’s the Right Way to Score?

In a high performance center, one fatal error will make the entire transaction defective.  There is no middle ground.  So, the score for the center at the end of the month is simple—it’s the number of transactions (e.g., calls) without a fatal error divided by the number of transactions monitored.
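As a minimal sketch of the month-end calculation described above (the function name and sample data are illustrative, not from the original):

```python
def fatal_error_score(transactions):
    """Fraction of monitored transactions containing no fatal error.

    `transactions` is a list of dicts; each dict records whether the
    monitored transaction contained at least one fatal error.
    """
    if not transactions:
        raise ValueError("no transactions monitored")
    clean = sum(1 for t in transactions if not t["has_fatal_error"])
    return clean / len(transactions)

# Hypothetical month: 50 monitored calls, 5 with a fatal error
monitored = [{"has_fatal_error": i < 5} for i in range(50)]
print(f"{fatal_error_score(monitored):.0%}")  # 90%
```

Note that there is no partial credit: a transaction with one fatal error counts fully against the score, no matter how well the rest of the call went.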

So, what happens in a center that changes from traditional scoring to the more accurate “one fatal error = defect” scoring?  One center that made this change thought its quality performance was good.  However, when it re-scored, it found that the percentage of transactions with a fatal error ranged from 2% to 15%, with the average at about 10%.  This was a real shock to the executives, who had been used to hearing that their quality was around 97%.

Measure Phase In DMAIC in Six Sigma

Purpose

To thoroughly understand the current state of the process and collect reliable data on process speed, quality, and costs that you will use to expose the underlying causes of problems.

Deliverables

  • Fully developed current-state value stream map
  • Reliable data on critical inputs (Xs) and critical outputs (Ys) to be used for analyzing defects, variation, process flow, and speed
  • Baseline measures of process capability, including process Sigma Quality Level and lead time
  • Refined definitions of improvement goals
  • A capable measurement system
  • Revised project charter (if data interpretation warrants a change)

Key steps in Measure

  1. Create/validate a value stream map to confirm current process flow. Use a basic process map or deployment flowchart to get started. Add defect, time, and other process data to generate a value stream map.
  2. Identify the outputs, inputs, and process variables relevant to your project. You want to collect data that relates to your project goals and targeted customers.
  3. Create a data collection plan including operational definitions for all measures.
  4. Create a data analysis plan. Verify what types of tools can be used for the type of data you will collect. Modify your data collection plan as needed.
  5. Use Measurement System Analysis (such as Gage R&R) or another procedure to ensure accurate, consistent, reliable data.
    • If using measurement instruments, be sure to calibrate them if it hasn’t been done recently
    • Make sure Operational Definitions of all metrics are commonly used and applied by all data collectors
  6. Collect data to establish baselines.
  7. Update the value stream map with data.
  8. Use Little’s Law to calculate lead time.
  9. Perform a process capability evaluation.
  10. Make quick-hit improvements if warranted by data analysis and risk analysis so you can get partial benefits now (be sure you are in a position to measure and show improvement), then continue with the project.
    • Use a Kaizen approach or, minimally, follow guidelines on implementing obvious solutions
    • If solution ideas pop up but the risks are high or unknown, keep track of the ideas for potential implementation but continue with your DMAIC project
  11. Prepare for the Measure gate review.
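The Little’s Law step can be sketched as follows (the numbers are hypothetical): average lead time equals work in process divided by the average completion rate.

```python
def littles_law_lead_time(wip, exit_rate):
    """Little's Law: lead time = work in process / average completion rate.

    wip       -- average number of items in the process
    exit_rate -- average items completed per unit time
    """
    return wip / exit_rate

# Hypothetical example: 120 open cases, 40 cases closed per day
print(littles_law_lead_time(120, 40))  # 3.0 days of lead time
```

This is useful in Measure because WIP and exit rate are usually much easier to count than the lead time of each individual item.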

Gate review checklist for Measure

  1. Detailed value stream map (VSM)
    • Documentation of people who were involved in creating the value stream map (should include representative operators, technical experts, supervisors, perhaps customers and selected suppliers)
    • Map showing the main process steps relevant to the project scope, along with the inventories/work in process, lead times, queues, customer demand (takt) rate, and cycle times for those steps
    • Supplier and customer loops clearly identified; output and inputs clearly understood
  2. Data and Metrics
    • Lists of key process output variables (KPOVs) and key process and input variables (KPIVs) identified and checked for consistency against the SIPOC diagram
    • Indications of how KPOVs are tied to Critical-to-Quality customer requirements (CTQs)
    • Notations on which KPOVs are selected as the primary improvement focus
    • Operational Definitions and data collection plan that were created, tested, and implemented for all metrics
    • Documentation of measurement system analysis  or its equivalent performed to ensure accuracy, consistency, and reliability of data
    • Notes on problems or challenges with data collection and how they were addressed
    • Notes on ANY assumptions that were made
    • Copies or printouts of completed data collection forms
  3. Capability Analysis
    • Time-ordered data collected on process outputs, charted on a control chart , and analyzed for special and common causes
    • Baseline capability calculations for key output metrics (Ys)
    • Product/service specifications framed in terms of external customer requirements or internal performance expectations (note any assumptions made)
    • Documentation on reliability of the capability estimates (is the measurement process stable, and does it have the expected distribution?)
    • Project goals reframed in terms of shifting the mean, reducing variation, or both
  4. Updated project charter and plans
    • Project charter, financial benefits, and schedule timeline updated to reflect new knowledge
    • Project risks re-evaluated
    • Documentation of issues/concerns that may impact project success
    • Team recommendation on whether it makes business sense to continue with project
    • Detailed plans for Analyze, including anything that requires sponsor approval (changes in scope, budget, timing, resources)
  5. Quick improvements

Actions recommended for immediate implementation, such as:

    • Non-value-added process steps, sources of special cause variation that can be eliminated to improve process time and/or capability
    • Required resources (budget, training, time) for implementation.

What is Transactional Quality?

Transactional Quality

 What are Transactions?

Interactions with end-users are called transactions.  Examples include calls, faxes, e-mails, web-based sessions, etc.  All types of end-user transactions are monitored to ensure that call-centre, client, and end-user requirements and targets are met.

Why Transaction Monitoring? 

  • Fewer mistakes and more satisfied customers.
  • Helps trainers identify the training needs of CSRs.
  • Ensures delivery of the targets, standards & parameters defined in the S.L.A. (Service Level Agreement) with the client.
  • Positive impact on the profitability & growth of the business.
  • Positive impact on the personal growth, skill set, confidence & motivation level of a CSR.

And also for…

Process Control 

To maintain our own standard of quality of work.

Process Analysis

Calculating FA and NFA scores, studying trends over a period of time, and incorporating the findings accordingly.

Continual Improvement 

To be able to identify problem areas and take preventive actions.

How is it done?

 There are six basic levels of quality monitoring:

  • Walk-around observation
  • Side-by-side monitoring
  • Plug-in/double jack monitoring
  • Silent monitoring
  • Record and review
  • Voice and screen/multi-media monitoring

Monitoring Methods for Telephone Transactions

Remote Monitoring:

Auditing recorded calls.

Live Barge-in:

Auditing real time calls.

Screen Capture:

Auditing voice and screen component of recorded/ live calls.

Side by Side Monitoring:

Auditing a call sitting next to a CSR.

Terminologies in TM

CTQ:

Critical To Quality Characteristics. Customer performance requirements of a product or service.

Defect:

Any event that does not meet the specifications of a CTQ.

Defect Opportunity:

Any measurable event that provides a chance of not meeting a customer requirement.  In practice, this is the number of parameters (relating to non-fatal errors) monitored in any one call.  In the case of multiple calls, it is the product of the number of calls and the number of parameters.  (Note: this excludes the compliance parameters, i.e., the Fatal Error parameters.)


Fatal Error:

Any defect in the transaction that has legal or financial implications, or a gross error in customer handling such as rude or abusive language, is termed a fatal error.  Any fatal error results in the whole transaction being declared VOID.

Such categories include:

  • Wrong Resolution
  • Misleading Information
  • Financial loss to the client (wrong address details)
  • Foul language
  • Case Note defects like incomplete details mentioned in the case notes, wrong customer profile.

Non-Fatal Error:

Any error whose occurrence is undesirable yet does not, on its own, result in a VOID transaction.  Defects that may lead to customer dissatisfaction are also included in this category.

Threshold Scores:

The score above which a transaction is deemed a pass and below which it is considered a fail.

Defective Transaction:

Any monitored transaction deemed VOID on account of a fatal error occurrence.  Note: any transaction that has no fatal errors yet has multiple non-fatal errors, resulting in a transaction score below 75%, will also be considered a defective transaction.
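A hedged sketch of this classification rule (the parameter names and the way the 75% threshold is applied are illustrative assumptions):

```python
def is_defective(has_fatal_error, tm_score, threshold=0.75):
    """A transaction is defective if it contains any fatal error, or if
    accumulated non-fatal errors drive its score below the threshold."""
    return has_fatal_error or tm_score < threshold

print(is_defective(True, 0.95))   # True: a fatal error voids the transaction
print(is_defective(False, 0.70))  # True: score below the 75% threshold
print(is_defective(False, 0.90))  # False
```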

Sampling Methodology:

Calls are picked at random from the recording device, based on a random table, to make the sample relevant and representative and to remove bias.  Some minimum-length calls are always included in the sample to ensure review of all aspects.

How is it measured?

Metrics 

The following accuracy metrics are measured during TM:

  • Fatal Accuracy: COPC Threshold >98%
  • Non-Fatal Accuracy: COPC Threshold >98%
  • TM Score: SLA Threshold

 TM Calculations

  • FA – Number of pass calls / Total calls
  • NFA – 100% – (Non-fatal defects / Total opportunities)
  • Total opportunities – Total calls × Number of parameters
  • TM Score – Sum of absolute scores / Total calls
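These calculations can be sketched as follows (the function and variable names, and the sample numbers, are illustrative assumptions):

```python
def tm_metrics(total_calls, pass_calls, non_fatal_defects,
               parameters_per_call, absolute_scores):
    """Compute the TM accuracy metrics described above."""
    total_opportunities = total_calls * parameters_per_call
    fa = pass_calls / total_calls                    # fatal accuracy
    nfa = 1 - (non_fatal_defects / total_opportunities)
    tm_score = sum(absolute_scores) / total_calls    # average audit score
    return fa, nfa, tm_score

# Hypothetical month: 100 audited calls, 97 with no fatal error,
# 12 non-fatal defects across 10 monitored parameters per call
fa, nfa, tm = tm_metrics(100, 97, 12, 10, [0.9] * 100)
print(f"FA={fa:.1%}  NFA={nfa:.1%}  TM={tm:.1%}")
```

In this hypothetical, FA is 97.0% (below a >98% COPC threshold) even though NFA is 98.8%, which is exactly the kind of gap a single blended score would hide.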

Audit Sheets

An Audit sheet is used by a Monitor to record the observations of Transaction Monitoring during a call audit session.  It is the tool that documents the following:

  • Parameters (Fatal and Non-fatal) based on Call Flow
  • Brief description
  • Weightages
  • Score methodology
  • Space for comments

Different audit sheets are generally used when monitoring different types of transactions.  Examples: In-call Audit sheet, Side-by-Side Audit sheet, Escalation Audit sheet, Email Audit sheet.

 

Where Did Six Sigma Come From?

As with Lean, we can trace the roots of Six Sigma to the nineteenth-century craftsman, whose individual challenges mirror those of organizations today. The craftsman had to minimize wasted time, actions, and materials; he also had to make every product or service to a high standard of quality the first time, every time.

Quality Beginning

The roots of what would later become Six Sigma were planted in 1908, when W. S. Gosset developed statistical tests to help analyze quality data obtained at the Guinness Brewery. About the same time, A. K. Erlang studied telephone traffic problems for the Copenhagen Telephone Company in an effort to increase the reliability of service in an industry known for its inherent randomness. Erlang was likely the first mathematician to apply probability theory in an industrial setting, an effort that led to modern queuing and reliability theory. With these underpinnings, Walter Shewhart worked with Western Electric (a forerunner of AT&T) in the 1930s to develop the theoretical concepts of quality control. Lean-like industrial engineering techniques did not solve quality and variation-related problems; more statistical intelligence was needed to get to their root causes. Shewhart is also known as the originator of the Plan-Do-Check-Act cycle, which is sometimes ascribed to Dr. W. Edwards Deming, Shewhart’s understudy. As the story goes, Deming made the connection between quality and cost: if you find a way to prevent defects and do everything right the first time, you won’t have any need to perform rework. Therefore, as quality goes up, the cost of doing business goes down. Deming’s words were echoed in the late 1970s by Philip Crosby, who popularized the notion that “quality is free.”

Quality Crazy

War and devastation bring us to Japan, where Deming did most of his initial quality proselytizing with another American, Dr. Joseph Juran. Both helped Japan rebuild its economy after World War II, consulting with numerous Japanese companies in the development of statistical quality control techniques, which later spread into the system known as Total Quality Control (TQC).

As the global economy grew, organizations grew in size and complexity. Many administrative, management, and enabling functions grew around the core function of a company to make this or that product. The thinking of efficiency and quality, therefore, began to spread from the manufacturing function to virtually all functions— procurement, billing, customer service, shipping, and so on. Quality is not just one person’s or one department’s job. Rather, quality is everyone’s job! This is when quality circles and suggestion programs abounded in Japanese companies: no mind should be wasted, and everyone’s ideas are necessary. Furthermore, everyone should continuously engage in finding better ways to create value and improve performance. By necessity, quality became everyone’s job, not just the job of a few … especially in Japan, at a time when there was precious little money to invest in new equipment and technology.

The rest of the story might be familiar if you’re old enough to remember. By the late 1970s, America had lost its quality edge in cars, TVs, and other electronics— and they were suffering significant market share losses. Japanese plants were far more productive and superior to American plants, according to a 1980 NBC television program, If Japan Can Why Can’t We? In response to all this, American companies took up the quality cause. They made Deming and Juran heroes, and institutionalized the Japanese-flavored TQC into its American counterpart, Total Quality Management (TQM). They developed a special government award, the Baldrige Award, to give companies that best embodied the ideal practice of TQM. They organized all the many elements and tools of quality improvement into a teachable, learnable, and doable system— and a booming field of quality professionals was born.

Quality Business

The co-founder of Six Sigma, Dr. Mikel Harry, has often said that Six Sigma shifts the focus from the business of quality to the quality of business. What he means is that for many years the practices of quality improvement floated loosely around a company, driven by the quality department. And as much as the experts said that quality improvement has to be driven and supported by top executives, it generally wasn’t. Enter Jack Welch, the iconic CEO who led General Electric through two decades of incredible growth and consistent returns for shareholders. Welch had a discussion with AlliedSignal CEO Larry Bossidy, who said that Six Sigma could transform not only a process or product, but a company. In other words, GE could use Six Sigma as AlliedSignal was already doing: to improve the financial health and viability of the corporation through real and lasting operational improvements. Welch took note and hired Mikel Harry to train hundreds of his managers and specialists to become Six Sigma Black Belts, Master Black Belts, and Champions. Welch installed a deployment infrastructure so he could fan the Six Sigma methodology out as widely as possible across GE’s many departments and functions. In short, Welch elevated the idea and practice of quality from the engineering hallways of the corporation into the boardroom. To be clear, the first practical application of Six Sigma on a pervasive basis occurred at Motorola, where Dr. Harry and the co-inventor of Six Sigma, Bill Smith, worked as engineers. Bob Galvin, then CEO of Motorola, paved the way for Bossidy and Welch in that he proved how powerful Six Sigma was in solving difficult performance problems. He also used Six Sigma at Motorola to achieve unprecedented quality levels for key products. One such product was the Motorola Bandit pager, which failed so rarely that Motorola simply replaced rather than repaired units when they did fail.

Six Sigma Overview

What is Six Sigma?

Sigma is a statistical concept that represents the amount of variation present in a process relative to customer requirements or specifications. When a process operates at the six sigma level, the variation is so small that the resulting products and services are 99.9997% defect free.

“Six Sigma” is commonly denoted in several different ways. You might see it written as “6σ,” “6 Sigma,” or “6s.”

In addition to being a statistical measure of variation, the term Six Sigma also refers to a business philosophy of focusing on continuous improvement by understanding customers’ needs, analyzing business processes, and instituting proper measurement methods. Furthermore, it is a methodology that an organization uses to ensure that it is improving its key processes.

While Six Sigma corresponds to being 99.9997% defect free, not all business processes need to attain this high a goal. Companies can also use the Six Sigma methodology to identify which of their key business processes would benefit most from improvement and then focus their improvement efforts there.
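As a sketch of the arithmetic behind the 99.9997% figure (which, by convention, includes a 1.5-sigma allowance for long-term process drift), the defects per million opportunities at a given sigma level can be approximated from the standard normal tail probability:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Approximate defects per million opportunities at a given sigma
    level, using the conventional 1.5-sigma long-term shift."""
    z = sigma_level - shift
    # One-sided upper-tail probability of the standard normal distribution
    tail = 0.5 * (1 - math.erf(z / math.sqrt(2)))
    return tail * 1_000_000

print(round(dpmo(6), 1))  # ~3.4 defects per million (99.9997% defect free)
print(round(dpmo(3)))     # ~66,807 defects per million
```

The jump from roughly 66,807 DPMO at three sigma to 3.4 DPMO at six sigma shows why each additional sigma level is progressively harder, and more valuable, to achieve.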

Process Capability

To increase your organization’s process-sigma level, you must decrease the amount of variation that occurs.

Having less variation gives you the following benefits:

• Greater predictability in the process.

• Less waste and rework, which lowers costs.

• Products and services that perform better and last longer.

• Happier customers who value you as a supplier.

The simple example below illustrates the concept of Six Sigma. Note that the amount of data in this example is limited, but it serves to describe the concept adequately.

Two companies deliver pizza to your house. You want to determine which one can better meet your needs. You always want your pizza delivered at 6 p.m. but are willing to tolerate a delivery anytime between 5:45 p.m. and 6:15 p.m. In this example, the target is 6 p.m. and the customer specifications are 5:45 p.m. on the low side and 6:15 p.m. on the high side.

You decide to order two pizzas at the same time every night for ten days—one pizza from Company A, and one from Company B. You track the delivery times for ten days and collect the following data:

[Table: pizza delivery times for Company A and Company B over the ten days]

As the chart above shows, Company A had two occurrences—on Day 2 and Day 6—of pizza arrival times that were outside of your tolerance window of between 5:45 and 6:15. In Six Sigma terminology, these two occurrences are called defects.
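The defect count can be illustrated with hypothetical delivery times (the specific values below are invented for illustration, since the original table is not reproduced here), flagging any delivery outside the 5:45–6:15 window:

```python
# Hypothetical delivery times in minutes relative to the 6:00 p.m. target.
# Spec limits: -15 (5:45 p.m.) and +15 (6:15 p.m.).
company_a = [5, 22, -3, 10, 0, -20, 8, -5, 12, 2]  # Day 2 and Day 6 out of spec

LSL, USL = -15, 15

def count_defects(times, lsl=LSL, usl=USL):
    """Count deliveries outside the customer specification limits."""
    return sum(1 for t in times if t < lsl or t > usl)

print(count_defects(company_a))  # 2 defects (Day 2 and Day 6)
```

Note that both defects count equally even though one pizza was too early and the other too late; a defect is any result outside the customer's specifications, in either direction.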

 

 

Nine Principles of Process Improvement

Process improvement and Six Sigma embrace many principles, the most important of which, in our opinion, are discussed in this section.  When understood, these principles may cause a transformation in how you view life in general and work in particular.

The principles are as follows:

Principle 1— Life is a process (a process orientation).

Principle 2— All processes exhibit variation.

Principle 3— Two causes of variation exist in all processes.

Principle 4— Life in stable and unstable processes is different.

Principle 5— Continuous improvement is always economical, absent capital investment.

Principle 6— Many processes exhibit waste.

Principle 7— Effective communication requires operational definitions.

Principle 8— Expansion of knowledge requires theory.

Principle 9— Planning requires stability. Plans are built on assumptions.