Use and Impact of Statistical Methods for Quality Improvement

Session Chair: I-Li Lu, The Boeing Corporation


Assessing the Impact of Taguchi Methods on UK Manufacturing Organisations: an Empirical Study and Agenda for Further Research

Jiju Antony and Ross Stevenson
University of Strathclyde

Abstract: Taguchi’s approach to experimental design offers a powerful methodology for reducing variation in product and process functional performance, and thereby for achieving robustness in the user’s environment. Since its introduction to the UK manufacturing industry over 25 years ago, the method has gained extensive coverage in the quality and statistics literature. However, despite numerous case studies demonstrating the power of the approach, there appears to be very little evidence in the academic literature of the true impact which this method has had on UK manufacturing organisations. This paper presents the findings of a research project which investigated the status and impact of Taguchi Methods (TMs) within the UK manufacturing industry. The findings are based on empirical data generated from two questionnaire surveys.

A short pilot questionnaire was sent to a large sample of manufacturing organisations throughout the UK (approximately 750). Over 80 usable responses were received from companies within Scotland, England and Wales. The analysis of initial results shows that although more than 70% of companies are aware of TMs, and over 75% believe variability to be a frequent problem within their manufacturing processes, the methodology still remains largely under-utilised.

A detailed questionnaire was then sent to a smaller sample of companies that are applying Taguchi’s methodology. The findings have helped highlight the key barriers experienced by practitioners when applying TMs within industry, and also the factors which they consider to be critical for the successful implementation of TMs (also known as Critical Success Factors or CSFs). The paper concludes by discussing the perceived impact which these applications of TMs have had on organisational performance within the practising companies. The impact is analysed against a variety of performance metrics covering the four key perspectives of performance proposed in the Balanced Scorecard.


Predictive Model Development for Surgical Site Infection in Transplant Patients: a Pediatric Pilot Study

Aleksandra Stein and Valeriya Kettelhut
University of Nebraska-Lincoln and University of Nebraska Medical Center

Abstract: Surgical site infections (SSI) after liver and small bowel transplantation lead to higher resource utilization and an increased risk of mortality. Scant information about the factors contributing to SSI occurrence in transplant patients inhibits transplant centers from benchmarking data for performance improvement. Our pilot study models the risk of SSI in a unique pediatric population of solid organ transplant patients. We consider several risk factors, including intra-operative glucose levels and first-time transplantation vs. re-transplant, as well as different covariates for socioeconomic status. We will address the implications of our findings and the potential long-term benefits to the quality of health care, focusing on identifying physician-modifiable risk factors associated with SSI health-care expenditures, and on utilization of the models for benchmarking.
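The kind of risk modeling described above can be sketched, purely for illustration, as a small logistic regression. The covariates, synthetic data, and fitting routine below are hypothetical and are not taken from the study.

```python
import math

# Hypothetical sketch: modeling SSI risk as a logistic function of a
# standardized intra-operative glucose level and a re-transplant
# indicator, fit by plain gradient descent on synthetic data.

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit weights (intercept first) by gradient descent on log-loss."""
    w = [0.0] * (len(X[0]) + 1)  # intercept + one weight per covariate
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted SSI probability
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Predicted probability of SSI for one patient's covariates."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic patients: [standardized glucose, re-transplant indicator]
X = [[-1.0, 0], [-0.5, 0], [0.0, 0], [0.5, 1], [1.0, 1], [1.5, 1]]
y = [0, 0, 0, 1, 1, 1]  # 1 = SSI occurred (synthetic outcomes)
w = fit_logistic(X, y)
```

In practice a study like this would use an established fitting routine and report odds ratios with confidence intervals; the sketch only shows the model form.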


Statistical Detection of Medical Provider Insurance Fraud and Abuse: Follow the Money

Andrea L. Long
Bell Laboratories (Ret.)

Abstract: Medical provider insurance fraud and abuse, subclassified by HIPAA Title II (1996) and estimated to account for $100 billion per year, is significantly underdetected, underreported, underrecovered and undercompensated. All four economically suboptimal outcomes can be improved by (1) econometric (stochastic) modeling of risk and uncertainty in medical "spells" or streams of services from a provider to a consumer; (2) exploitation of longitudinal event-history data, multiple imputation or linked external data, and prior information applied to patient data; and (3) application of analytic and inductive statistical tools and techniques extending beyond the enormous nonstochastic data mining efforts that insurers, regulators and Federal fraud investigators already apply.

This paper first identifies subclasses of medical provider insurance fraud and abuse, both commonly and less frequently enumerated, with examples. Medical negligence involving adverse events and provider diagnostic and treatment errors, plus the deliberate criminal induction of adverse events to maximize provider profits from "treating" covertly medicated patients, are only two of the newer forms of medical provider insurance fraud and abuse, but with far larger costs than those traditionally enumerated.

Second, the four economic risk and cost measures of insurance fraud and abuse are properly estimated only when true economic costs, collective (private plus public) costs, and dynamic costs (over the life cycle) are taken into account, and when risk and uncertainty are factored in. Reported adverse events, reported patient-safety infractions, and litigated claims are swamped by underreported, hidden, squelched, and nonaccepted malpractice and fraud events by profit-maximizing medical providers who evade or defraud payment of the true costs they impose.

Third, stochastic modeling frameworks for the provider and for the consumer, based on exploiting the longitudinal event-history data at the disposal of insurance companies and investigating agencies, are described. Kinds of data that are underexploited are reviewed, with examples.

Fourth, stochastic statistical (econometric) modeling must be applied to the right economic microdata and must be augmented by longitudinal or panel data structuring, multiple datafile imputation, and prior information on the morbidity and the provider. Relying too heavily on Claim Summary data and on mechanistic edits lengthens the time to detection and intervention; allows medical providers to deliberately induce adverse events or continue harmful "services" until repeated authorizations or policy limits are exhausted; and unknowingly pays those providers to provide remedial "services" to "treat" or correct the adverse events and morbidities that they knowingly caused. Thus, longitudinal microdata on medical providers' interventions, patients' preexisting morbidities, and patients' outcomes from the interventions must reconstruct the cause-and-effect order in which they occur and, better yet, their timing. The paper identifies the specific multiple datafile imputation needed to prevent as well as detect fraud and abuse, plus prior information "vectors" on the patient side and on the provider side.

Fifth, the paper enumerates what data are available vs. missing or unobservable, even with the medical-record automation planned by many institutions. Given the alternative data framework proposed, the last section identifies what statistical tools and techniques are useful for optimal prevention and detection of medical provider insurance fraud and abuse.

In summary, detecting and, better still, preventing medical provider insurance fraud and abuse, as well as criminal activity against patients, requires economic microdata constructions and statistical techniques similar to those used in industrial process monitoring. But unlike classical SPC, these techniques require modifications for "following the money."
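The SPC analogy can be made concrete with a minimal sketch: an upper one-sided CUSUM chart applied to a provider's billing stream, where a sustained upward shift in billed amounts triggers an alarm for investigation. The function, parameter values (target, slack k, decision threshold h), and data below are all hypothetical illustrations, not methods from the paper.

```python
# Hypothetical sketch: a one-sided CUSUM on a provider's weekly billed
# amounts, "following the money" with an SPC-style monitoring scheme.

def cusum_upper(amounts, target, k, h):
    """Upper CUSUM: accumulate excesses over (target + k); signal when
    the cumulative sum exceeds the decision threshold h.
    Returns (cusum_path, index_of_first_signal_or_None)."""
    s = 0.0
    path = []
    signal = None
    for i, x in enumerate(amounts):
        s = max(0.0, s + (x - target - k))  # reset to 0 when below target
        path.append(s)
        if signal is None and s > h:
            signal = i  # first week the chart flags the provider
    return path, signal

# Weekly billed amounts (thousands of dollars): stable, then drifting up.
weekly = [10, 11, 9, 10, 12, 15, 16, 17, 18, 19]
path, alarm = cusum_upper(weekly, target=10.0, k=1.0, h=10.0)
```

A real scheme would estimate the in-control target and variability from peer providers' case-mix-adjusted claims, and would monitor several streams (amounts, service counts, patient outcomes) jointly.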


A Quality, Reliability and Continuous Improvement (QRCII) Non-Profit, Public Service Institute

Jorge Luis Romeu
Syracuse University

Abstract: On many occasions, engineers learn statistics on their own, after college, because they do not receive sufficient instruction on the topic in school. Engineers often find, once they start practicing, that statistics is of key importance to their engineering work (http://www.stat.auckland.ac.nz/~iase/publications/17/4A1_ROME.pdf). But there is a gap between the statistics taught in college, represented by the traditional one-semester statistics curriculum, and the statistics needed at work, represented by the ASQ CRE/CQE Bodies of Knowledge (BOKs).

It is therefore of interest to academe, professional societies and industry to provide engineers with efficient ways to close this gap, taking advantage of already existing (and newly created) statistical education materials for after-college learning of statistics (http://web.cortland.edu/romeu/FTCPaper07.pdf). One efficient way is to create QRCII institutes that would use students as interns, and faculty and consultants as mentors. Practicing engineers could then interact with and learn statistics from them, and from each other.