
QPRC Contributed Session Abstracts


Session Title: Analytical Measurement Systems with Case Studies at Eli Lilly - ELANCO
Chair: Joanne Wendelberger, Los Alamos National Laboratory

Speakers

Analytical Method Improvement Yields Dramatic Reduction in Variation for Both Analytical and Final Formulation Processes
Authors:
Roger Norris, LeRoy Franklin, Dr. Richard Dunbar, Justin Self, and Scott Burd (Clinton Laboratories)
Speaker: Roger Norris
email: NORRIS_ROGER_A "AT" LILLY.COM

Abstract

An animal health premix manufactured by Elanco (a subsidiary of Eli Lilly and Company) was subject to structural and common cause variation. The variation was large enough that internal specifications were exceeded, triggering investigations and reprocessing. A team was formed and applied Six Sigma tools to improve process reliability and decrease cost. The dominant source of variation proved to be the analytical method providing information about this manufacturing process.

A series of historical data analyses and designed experiments were used to better understand this analytical measurement system. This presentation will provide an overview of the work done. It will emphasize a design, built with the JMP™ 5.1 custom design platform, that was used to resolve complex technical questions on which the scientists’ experience differed. The controls implemented on the analytical measurement process produced a dramatic reduction in variation in both the analytical process and the associated manufacturing results.


A Designed Experiment Examining HPLC Configurations and Injection Volumes Yields both Expected and Unexpected Results
Authors:
Bill Sarell and LeRoy Franklin (Clinton Laboratories)
Speaker: Bill Sarell
email: SARELL_BILL "AT" LILLY.COM

Abstract

An animal health premix manufactured by ELANCO (a subsidiary of Eli Lilly and Company) has historically exhibited instability in the process average for a specific product. This created periods of substantial rework for the facility, increasing cost and interrupting a consistent supply of product to a growing market. An experimental design was undertaken to compare 3 High Performance Liquid Chromatography (HPLC) instrument configurations and several injection volumes. In the course of the study, careful analysis implicated another factor that could influence HPLC results. This presentation discusses these findings and a redesign of the experiment “on the fly” to explore the implicated factor. Final analyses involved combining studies and pointed to a cost-effective solution. This solution has since been implemented as part of the Long Term 6-Sigma Analytical Measurement study undertaken at the site. Although several specific injection volumes are given (e.g. 1, 2, and 5 microliters), for proprietary reasons, specific instrument brands, specific configurations, and products are not identified.


A DOE Examining HPLC Cap-Vial Combination Yields Unexpected and Significant Results
Authors:
LeRoy Franklin, William E. Sarell, Billy R. Hardas, Howard Kruzan, Scott Reely, and Heather Delany (Clinton Laboratories)
Speaker: LeRoy Franklin
email: FRANKLIN_LEROY_A "AT" LILLY.COM

Abstract

A variability reduction project for an animal health premix found success in an unsuspected source in the QC laboratory: the High Performance Liquid Chromatography (HPLC) vial caps utilized in the analysis. The premix manufactured by ELANCO (a subsidiary of Eli Lilly and Company) had historical instability in the process average, creating periods of substantial rework, increasing cost, and interrupting consistent supply to a growing market. A DOE study undertaken to compare HPLC instrument configurations implicated evaporation from the HPLC vials as the source, yielding significant differences based on HPLC cap-vial combinations. The results of a DOE for 8 cap types are presented. Assay results increased by as much as 0.4%/hour due to evaporation of the organic solvent (diluent), depending on the cap-vial combination utilized for the analyte in methanol. For volatile analytes, a similar decrease could be anticipated. A two-phase solution of cap-vial types that has since been implemented in the QC lab is discussed as well. For proprietary reasons, specific cap brands, specific configurations, and products are not identified.
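
As an aside, the mechanism is simple to illustrate: if diluent evaporates while a vial waits in the autosampler, the analyte concentration, and hence the apparent assay, rises in proportion to the volume lost. The sketch below uses the 0.4%/hour figure from the abstract purely as an illustrative rate; the starting assay and hold times are hypothetical.

```python
# Illustrative only: how diluent evaporation inflates an apparent assay.
# The 0.4 %/hour rate echoes the abstract; all other numbers are hypothetical.

initial_assay = 100.0      # apparent assay at injection time 0 (% of label claim)
evaporation_rate = 0.004   # assumed fraction of diluent volume lost per hour

for hours in (0, 2, 4, 8):
    remaining_volume_fraction = (1 - evaporation_rate) ** hours
    apparent_assay = initial_assay / remaining_volume_fraction
    print(f"{hours} h in the autosampler: apparent assay = {apparent_assay:.2f}%")
```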


Session Title: Further Topics in Design of Experiments
Chair: David Drain, University of Missouri-Rolla

Speakers

Experiments with Changes in the Design Region
Author:
Joanne Wendelberger (Los Alamos National Laboratory)
Speaker: Joanne Wendelberger
email: joanne "AT" lanl.gov

Abstract 

Statistical experiment design provides efficient strategies for selecting experimental runs to meet specified experimental objectives. Typically, the preliminary stages of planning experiments include discussion of an appropriate design region. An experiment plan is developed based on the assumed design region to achieve design objectives. Unfortunately, the assumed design region may change prior to, or even during, the experiment. Changes in the design region may be due to constraints or deviations in the intended settings of the experimental variables. These changes may impact the properties of the design as well as the resulting analysis.


Optimal Block Sequences for Blocked Fractional Factorial Split-Plot Designs
Author:
Robert McLeod (University of Winnipeg)
Speaker: Robert McLeod
email: r.mcleod "AT" uwinnipeg.ca 

Abstract 

It is known that for blocked 2^(n–k) designs a judicious sequencing of blocks may allow one to obtain early and insightful results regarding influential parameters in the experiment. Such findings may justify the early termination of the experiment, thereby producing cost and time savings. This talk introduces an approach for selecting the optimal sequence of blocks for regular two-level blocked fractional factorial split-plot screening experiments. An optimality criterion is developed so as to give priority to the early estimation of low-order factorial effects. This criterion is applied to available catalogs of minimum aberration blocked fractional factorial split-plot designs. In turn, a catalog of optimal block sequences for 32-run minimum aberration blocked fractional factorial split-plot designs, run in either 4 or 8 blocks, is formed.

Presentation


Design and Analysis of Experiments Applied to Critical Infrastructure Simulation
Author:
Leslie Moore (Los Alamos National Laboratory)
Speaker: Leslie Moore
email: lmoore "AT" lanl.gov

Abstract

Critical infrastructures form a complex “system of systems,” and interdependent infrastructure simulation models are useful for assessing the consequences of disruptions initiated in any one infrastructure. CIPDSS, a risk-informed decision support tool using system dynamics methods, has been developed at Los Alamos National Laboratory to provide an efficient simulation tool for gaining insight into critical infrastructure protection decisions in the presence of uncertainty. Modeling the consequences of an infectious disease outbreak provides a case study and an opportunity to demonstrate exploratory statistical experiment planning and analysis capability. In addition to modeling the consequences of an incident, alternative mitigation strategies can be implemented and their consequences compared. Statistical analysis includes sensitivity and uncertainty analysis in addition to comparing the relative consequences of implementing different mitigation strategies. This presentation will include a description of statistical analysis tools useful for assessing computer model results.


Session Title: Nonlinear and Simulation-Based Methods
Chair: William Woodall, Virginia Tech

Speakers

Improvements on Nonlinear Regression Models for Cycle Time – Throughput Curves
Author:
Rachael Johnson (Arizona State University)
Speaker: Rachael Johnson
email: rtjohns1 "AT" asu.edu

Abstract

Cycle time – throughput (CT – TH) curves are often used as a management decision tool in the manufacturing setting. They provide a manager with information about the distribution of the factory’s cycle time, or how long it will take to complete a given lot or job, based upon the rate at which lots or jobs are started onto the factory floor (throughput). Any given throughput level corresponds to a single point on the CT – TH curve. For this reason, discrete event simulation is often used to model the factory and create the points that make up this valuable CT – TH curve. Once the points are created, a meta-model is generally fit to the data, or managers “eye-ball” the predicted cycle time at a proposed throughput level. The literature suggests several nonlinear regression meta-models for fitting a CT – TH curve. These meta-models have two problems that are addressed here. The first is that these models do not account for non-constant variance. In general, CT – TH curves have a variance proportional to the mean (as throughput increases, variance does as well). An alternative method is proposed for fitting nonlinear regression models to the CT – TH curve in the presence of non-constant variance. Several techniques were used to obtain weights for the nonlinear regression model, including the residuals and the asymptotic variance, which, for small models, can be found in closed form. The second problem is that the nonlinear regression curves are generally fit to averaged values at a given throughput point, whereas traditional regression analysis treats all of the points as individual replicates and fits the curve to them. This method is explored and compared to the method of averages, and the discrepancy in the standard errors obtained is compared.
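
For readers who want a concrete picture of the weighting idea, the sketch below fits a generic queueing-inspired CT – TH form (not the authors’ specific meta-model) with scipy, using a variance assumed proportional to the mean to build the weights; all data are simulated.

```python
# A minimal sketch (not the authors' exact meta-model) of fitting a
# queueing-inspired cycle-time/throughput curve with weights to handle
# variance that grows with the mean.  All data below are simulated.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def ct_th(x, a, b):
    """Cycle time as a function of utilization x (throughput / capacity)."""
    return a + b * x / (1.0 - x)

# Simulated "simulation output": replicated cycle times at several throughputs,
# with standard deviation proportional to the mean.
x_levels = np.repeat(np.array([0.5, 0.6, 0.7, 0.8, 0.9]), 10)
true_ct = ct_th(x_levels, 2.0, 1.5)
y = true_ct + rng.normal(scale=0.15 * true_ct)

# Weighted fit: sigma taken proportional to a preliminary estimate of the mean.
prelim, _ = curve_fit(ct_th, x_levels, y, p0=[1.0, 1.0])
sigma = 0.15 * ct_th(x_levels, *prelim)          # assumed variance model
params, cov = curve_fit(ct_th, x_levels, y, p0=prelim, sigma=sigma)

print("fitted (a, b):", params)
print("std. errors:", np.sqrt(np.diag(cov)))
```

The second call to curve_fit reuses a preliminary unweighted fit to construct the sigma vector, which is one simple way to operationalize the mean-proportional variance assumption.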

Presentation


Statistical Robustness Study for Kinetic Models
Authors:
J.P. Engelbrecht and R.L.J. Coetzer (Sasol Technology)
Speaker: J.P. Engelbrecht
email: pirow.engelbrecht "AT" sasol.com

Abstract

Kinetic models are non-linear systems that depict the dependence between process variables and components or products, where the process variables are usually assumed to be fixed. This is under the assumption that the process variables that govern the outputs are fully controllable. However, process variables are not always fully controllable and are often hard to control during normal operation on a full-scale chemical production plant. This presentation outlines the methodology of statistical robustness studies for kinetic models. We illustrate the design and analysis of computer experiments (DACE) and evaluate different response models and designs for determining optimum conditions that are robust against the variability in the hard-to-control variables. We demonstrate the methodology with an example, namely the ethoxylation of ethylene glycol in an inter-cooled pipe reactor.
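
A minimal DACE-style sketch of the robustness idea is given below. It is not the Sasol kinetic model: the response is a toy function, the design is a small Latin hypercube over one controllable variable x and one hard-to-control variable z, and robustness is summarized by averaging a Gaussian-process surrogate over sampled values of z.

```python
# A minimal DACE-style sketch: space-filling design over a controllable
# variable (x) and a hard-to-control variable (z), a Gaussian-process
# surrogate, and a robustness summary that averages the predicted response
# over the distribution of z.  The "kinetic model" below is a stand-in toy.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_kinetic_model(x, z):
    # Hypothetical response surface; the real model would be the reactor code.
    return np.exp(-(x - 0.6) ** 2 / 0.1) * (1.0 + 0.3 * np.sin(4 * np.pi * z))

# Latin hypercube design over (x, z) on [0, 1]^2.
design = qmc.LatinHypercube(d=2, seed=1).random(40)
y = toy_kinetic_model(design[:, 0], design[:, 1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(design, y)

# Robustness: for candidate settings of x, average the surrogate prediction
# over sampled values of the hard-to-control z and report mean and spread.
z_samples = np.random.default_rng(2).uniform(0, 1, 200)
for x in (0.4, 0.6, 0.8):
    grid = np.column_stack([np.full_like(z_samples, x), z_samples])
    pred = gp.predict(grid)
    print(f"x = {x:.1f}: mean response {pred.mean():.3f}, std over z {pred.std():.3f}")
```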

Presentation


Statistical Process Monitoring of Nonlinear Profiles Using Wavelets
Authors:
Eric Chicken, Joseph J. Pignatiello, James Simpson (Florida State University)
Speaker: Eric Chicken
email: chicken "AT" ani.stat.fsu.edu

Abstract

Sequential data is increasingly being collected as a series of functions or signals (profiles), rather than as a series of point values.  Such profiles need to be monitored for deviations in an SPC setting.  A successful method for profiles should work for a variety of function types, quickly and accurately signal when the sequence of profiles is out-of-control, pinpoint the time the out-of-control profile occurred, and quantify the degree of change between the in-control and out-of-control function profiles.

We propose a semiparametric wavelet method for monitoring for changes in sequences of nonlinear profiles. No assumptions are made about the form of the changes between the profiles other than finite square-integrability. This method dramatically extends the scope of profile monitoring by its ability to analyze nonlinear profiles in addition to linear ones. Based on a likelihood ratio test involving a change point model, we use the spatial adaptivity properties of wavelets to accurately detect profile changes taking nearly limitless functional forms. An estimate of the change point is provided, and the degree of change is approximated in terms of the integrated squared error between in-control and out-of-control profiles. Performance of the method is assessed with Monte Carlo simulation and is compared against other wavelet SPC methods. The results presented indicate the method can quickly detect a wide variety of changes from a given in-control profile and show a marked improvement over existing wavelet SPC methods. The method is also very sensitive, for example, estimating the change point and integrated squared error nearly error-free when the signal-to-noise ratio for the out-of-control profile is less than one.
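
The sketch below illustrates the general flavor of wavelet-based profile comparison, not the authors’ likelihood-ratio procedure: it thresholds the difference between the wavelet coefficients of an incoming profile and an in-control profile and uses the surviving energy as a rough integrated-squared-error estimate. The profiles, wavelet choice, and threshold are all illustrative.

```python
# A minimal sketch in the spirit of wavelet-based profile monitoring (not the
# authors' exact procedure): compare the wavelet coefficients of an incoming
# profile with those of the in-control profile, keep only differences that
# exceed a universal-style threshold, and use their energy as an approximate
# integrated squared error.  Profiles here are simulated.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0, 1, n)

in_control = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)
shifted = in_control + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.002)   # local bump
observed = shifted + rng.normal(scale=0.1, size=n)

def coeff_vector(profile, wavelet="db4", level=4):
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    return np.concatenate(coeffs)

diff = coeff_vector(observed) - coeff_vector(in_control)
sigma = np.median(np.abs(diff)) / 0.6745            # robust noise estimate
threshold = sigma * np.sqrt(2 * np.log(len(diff)))  # universal threshold
kept = diff[np.abs(diff) > threshold]

ise_estimate = np.sum(kept ** 2) / n   # rough ISE approximation via Parseval
print(f"{kept.size} coefficients exceed the threshold; ISE ~ {ise_estimate:.4f}")
```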

Presentation


Session Title: Methods in Statistical Process Monitoring and Control
Chair: Willis Jensen, W.L. Gore & Associates

Speakers

The Relationship Between the Recurrence Interval and Time-to-Signal Properties of Surveillance Schemes
Authors:
Shannon Fraker, William Woodall, and Shabnam Mousavi (Virginia Tech)
Speaker: Shannon Fraker
email: sfraker "AT" vt.edu

Abstract

In the literature there is confusion between the definitions of the in-control average time-to-signal (ATS) and the recurrence interval. The recurrence interval is typically used in public health surveillance, whereas time-to-signal measures are used in industrial statistical process control. We compare the recurrence interval and measures based on the time-to-signal properties for the temporal monitoring case using scan statistics, exponentially weighted moving average (EWMA) charts, cumulative sum (CUSUM) charts, and Markov dependent signaling processes. The in-control average time-between-signals (ATBS) and the in-control average signaling event length (ASEL) are introduced as performance measures that are useful when a monitoring process is not reset to its initial state after a signal. We show that the recurrence interval is limited in its applicability and often fails to summarize important information about the performance of the monitoring process. We therefore recommend that measures based on the time-to-signal properties be used instead of the recurrence interval to evaluate the performance of surveillance schemes.
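
To make the distinction concrete, the simulation sketch below estimates, for the same in-control EWMA scheme, both the average time-to-signal (a first-passage quantity) and a recurrence-interval-style figure based on the marginal per-period exceedance rate of a never-reset statistic. The chart parameters are illustrative, not values from the paper.

```python
# A minimal simulation sketch (not the paper's analytical treatment) of two
# performance summaries for the same in-control EWMA scheme: the average
# time-to-signal (ATS), a first-passage quantity, and a recurrence-interval
# style figure based on the marginal probability that the statistic exceeds
# its threshold in any given period.  Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
lam, h = 0.2, 0.85          # EWMA weight and signal threshold (assumed)

def time_to_signal(max_n=200_000):
    """Periods until the EWMA of N(0,1) data first crosses +/- h."""
    z = 0.0
    for n in range(1, max_n + 1):
        z = lam * rng.normal() + (1 - lam) * z
        if abs(z) > h:
            return n
    return max_n

ats = np.mean([time_to_signal() for _ in range(1_000)])

# Marginal exceedance rate of a long, never-reset EWMA path.
z, exceed, n_long = 0.0, 0, 200_000
for _ in range(n_long):
    z = lam * rng.normal() + (1 - lam) * z
    exceed += abs(z) > h
recurrence_interval = n_long / exceed

print(f"in-control ATS              ~ {ats:.1f} periods")
print(f"recurrence-interval summary ~ {recurrence_interval:.1f} periods")
```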

Presentation


A New Multivariate Extension of the CUSUM Procedure
Author:
Hong Li (Kent State University)
Speaker: Hong Li
email: honli "AT" math.kent.edu

Abstract

To make use of the recent history of a process, Page (1954) proposed the CUSUM procedure for the univariate case, with stopping time defined as r = inf{n ≥ 1 : S_n ≥ h}, where S_n = max(S_{n-1} + x_n, 0). Moustakides (1986) proved that the CUSUM procedure has the smallest expected out-of-control run length among all procedures with the same in-control ARL. In this article, we investigate a multivariate extension of Page's CUSUM procedure. For X = (X_1, X_2), the coordinatewise cumulative sums are S_{i,n} = Σ_{k=1}^{n} X_{i,k}, i = 1, 2, and the stopping time is N = min{N_1, N_2}, where N_i = inf{n > 0 : S_{i,n} > h_i}, i = 1, 2. The expectation and variance of the run length are studied for various multivariate distributions, and both analytical and simulation results for the ARL and run-length variance are given. In particular, an exact expression for the ARL of a trinomial model is given for any decision intervals (h_1, h_2), along with a program that computes the ARL and variance for any given (h_1, h_2).
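
A minimal simulation of the stopping rule described above, with illustrative decision intervals and an assumed bivariate normal shift, is sketched below.

```python
# A minimal sketch of the min-of-coordinatewise-sums stopping rule described
# in the abstract, applied to simulated bivariate normal data.  The decision
# intervals h1, h2 and the shift are illustrative, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def stopping_time(h=(5.0, 5.0), shift=(0.5, 0.0), max_n=10_000):
    """Return N = min(N1, N2), the first time either cumulative sum S_{i,n}
    exceeds its decision interval h_i."""
    s = np.zeros(2)
    for n in range(1, max_n + 1):
        x = rng.normal(loc=shift, scale=1.0, size=2)
        s += x                      # S_{i,n} = sum over k <= n of X_{i,k}
        if (s > np.asarray(h)).any():
            return n
    return max_n

runs = [stopping_time() for _ in range(2_000)]
print(f"estimated ARL ~ {np.mean(runs):.1f}, run-length std ~ {np.std(runs):.1f}")
```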


An Assessment of Methods for the Statistical Monitoring of Autocorrelated Data
Authors:
Victor Morin (ECOLAB Research Center) and Barbara Bennie (University of Wisconsin)
Speaker: Victor Morin
email: Victor.Morin "AT" ecolab.com

Abstract

The problems associated with applying statistical control charting methods to autocorrelated data have been well documented, and a number of improved control charting methods have been developed to mitigate the impact of the autocorrelation. The majority of these methods use time-series models to remove the autocorrelation structure and perform the control charting on the residuals.

We review an assessment of the practical implementation of some of these methods in an industrial setting. Ecolab manufactures dozens of chemical products across multiple plants, each with multiple production lines and at high volumes. The key characteristics monitored with statistical process control are all autocorrelated at the sampling rates required to maintain adequate control. The assessment was done to identify a method that is both effective and easy to implement. Many of the methods are effective, but are totally impractical for many manufacturing environments because the skills, resources and effort required to develop time-series models either do not exist or are cost-prohibitive. We review the range of data types and autocorrelation in our plants and summarize our assessment of the practical implementation of the better known methods.
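
As a concrete illustration of the residual-chart approach mentioned above (one of the better-known methods, not Ecolab's specific implementation), the sketch below fits an AR(1) model to simulated autocorrelated data and applies Shewhart-style limits to the one-step-ahead residuals.

```python
# A minimal sketch of the residual-chart idea: fit an AR(1) model to
# autocorrelated data and apply Shewhart-style 3-sigma limits to the
# one-step-ahead residuals.  The data are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated autocorrelated measurements (AR(1), phi = 0.7) with a mean shift
# of three standard deviations introduced at observation 150.
n, phi_true = 200, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=1.0)
x[150:] += 3.0

# Estimate the AR(1) parameter from the in-control portion of the data.
baseline = x[:150]
centered = baseline - baseline.mean()
phi_hat = np.sum(centered[1:] * centered[:-1]) / np.sum(centered[:-1] ** 2)

# One-step-ahead residuals over the whole series, charted with 3-sigma limits.
resid = (x[1:] - baseline.mean()) - phi_hat * (x[:-1] - baseline.mean())
sigma = resid[:149].std(ddof=1)
out = np.where(np.abs(resid) > 3 * sigma)[0] + 1   # +1: residuals start at t=1

print(f"phi_hat = {phi_hat:.2f}; residuals beyond 3-sigma limits at t = {out.tolist()}")
```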

Presentation


Session Title: Tools in Quality and Productivity
Chair: William Brenneman, Procter & Gamble Company

Speakers

Using Statistical Thinking to Optimize Scientific Studies
Authors:
Kristi Griffiths, Laura L. Scheppers, and Rebecca J. Elliott (Eli Lilly & Co.)
Speaker: Kristi Griffiths
email: GRIFFITHS_KRISTI "AT" LILLY.COM

Abstract

Across all industries today, the need for quick answers may result in haphazard, inefficient, and less effective experimental studies. Poor studies have several issues in common: a vaguely articulated research question, no specific idea of what information to gather, an insufficient or inefficient study design, improper execution and/or data collection, an inability to analyze and interpret the data, and ultimately an inability to turn raw data into useful information. This presentation will explore these issues and offer practical solutions to reach the goal of timely, accurate, and cost-effective decision-making.

Presentation


A Quality Assessment Tool to Support Improvement Planning
Author:
Sam Woolford (Bentley College)
Speaker: Sam Woolford
email: SWoolford "AT" bentley.edu

Abstract

Performing quality assessments and utilizing the results to establish priorities for improvement is a critical activity in any effective ongoing quality improvement program. Many organizations have a difficult time objectively evaluating improvement needs across various organizational areas for a myriad of reasons, such as a lack of requisite data, an inability to compare the benefits of various improvement opportunities, a lack of resources to conduct appropriate assessments, and competing management agendas. This is particularly true in small companies or even larger companies that are geographically dispersed.

This presentation describes a self-assessment tool that collects relatively straightforward data from managers, which is then analyzed to identify and prioritize improvement needs. The assessment tool is based on an application of the Analytic Hierarchy Process, which provides a framework for efficiently evaluating the assessment data in a manner that can minimize the impact of competing agendas, prioritize the improvements deemed to have the biggest impact on organizational performance, and provide management with the information needed to establish an annual improvement plan.
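
For readers unfamiliar with the Analytic Hierarchy Process step that the tool builds on, the sketch below derives priority weights from a hypothetical pairwise comparison matrix via its principal eigenvector and checks consistency; it illustrates the AHP calculation only, not the assessment tool itself.

```python
# A minimal sketch of the standard AHP priority calculation: derive weights
# for candidate improvement areas from a pairwise comparison matrix via its
# principal eigenvector, then check consistency.  Comparison values are
# hypothetical.
import numpy as np

# A[i, j] = how much more important area i is than area j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index / ratio (random index 0.58 for a 3x3 matrix).
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58

print("priority weights:", np.round(weights, 3))
print(f"consistency ratio = {cr:.3f} (values below ~0.10 are usually acceptable)")
```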

Presentation


Data Mining, In Theory and In Practice
Author:
Valerie Peters (Sandia National Laboratories)
Speaker: Valerie Peters
email: vapeter "AT" sandia.gov

Abstract

Data analysts know that decisions are only as good as the data upon which they are based. Less frequently cited, however, is the fact that the data can only be as good as the systems used to collect, store, and retrieve it. Discrepancies between how data systems are designed, how they are used to collect data, and how their output is used can create significant gaps in otherwise robust decision-making processes. At best, these gaps create confusion; at worst, they can lead to costly and dangerous conclusions.

When abstract data is used to draw real-world conclusions, data miners must understand the constraints and advantages of the data systems themselves in order to realize the greatest benefit from the real (and hence imperfect) data. This “reality check” is the best way to maximize the relevance of data-based decisions. Key steps are outlined for data miners who wish to increase the real-world value they provide to decision-makers. These steps have been applied in academic, industrial, and military settings and can be used with both large and small projects. Hints and suggestions for successful implementation of the steps are provided, and each step is evaluated in terms of degree of difficulty, potential impact to the project, potential benefit to the organization, and potential benefit to data quality.

Presentation


Hands On Learning Aids for Teaching DOE Concepts
Author:
Jim Alloway (EMSQ Associates)
Speaker: Jim Alloway
email: jalloway "AT" earthlink.net

Abstract

DOE is perhaps the most powerful tool for progressing from data to information to decision making. Its perceived level of difficulty precludes its use by most nonstatisticians, particularly those in nonmanufacturing organizations. This perception is easily shattered with visual, hands-on manipulatives that illustrate the key underlying concepts and the visualization of results.

This session illustrates three-dimensional models that emphasize the link between the design, data collection, and analysis phases of DOE. A physical simulation model is used to generate data to illustrate the use of these manipulatives in teaching fundamental DOE concepts.


Session Title: Residuals, Loss Functions, and Control Charts
Chair: Mark Bailey, SAS Institute, Inc.

Speakers

Residuals and Their Analyses for Accelerated Life Tests with Step- and Varying-Stress
Author:
Wayne Nelson (Wayne Nelson Statistical Consulting)
Speaker: Wayne Nelson
email: Wnconsult "AT" aol.com

Abstract

Residuals are widely used to assess a model fitted to data, especially the model distribution.  Model assessment for accelerated life testing is particularly critical, as the model is used to extrapolate product life with respect to the acceleration stresses and also often into the lower tail of the life distribution.  Suitable residuals for life data from step- and varying-stress tests have not been developed.  Nelson (1990, p. 503) presents crude residuals that are unsatisfactory.  This paper defines suitable residuals and presents graphical analyses of them.  The residuals are illustrated with data from a step-stress test of cable insulation.
 
These residuals can also be used to assess models fitted to field data on units subject to different varying stress profiles over time.  Modeling such data is a common unsolved problem in automotive and other applications.
 
Nelson, Wayne (1990), Accelerated Testing: Statistical Models, Data Analyses, and Test Plans, Wiley, New York.

Presentation


Multivariate Inverted-Normal Loss Functions and Their Applications in Process Targeting
Authors:
David Drain and Elizabeth Cudney (University of Missouri-Rolla)
Speaker: David Drain
email: draind "AT" umr.edu

Abstract

Loss functions quantify the “cost” of producing a product that differs from customer expectations. The simplest loss function is the step function, which assigns no loss if a product is within specification limits and some fixed, positive loss if it is outside them. Taguchi introduced quadratic loss functions, which assign non-zero loss to any deviation from target, with larger deviations incurring greater losses. His quadratic loss functions are unbounded, which can lead to counterproductive decisions in cases where it is possible to produce material far from target. Bounded loss functions such as the INVL (inverted normal loss function) remedy this shortcoming.

We describe properties of univariate and multivariate INVL loss functions both mathematically and through examples. In the examples we also describe how one might estimate parameters for these functions and use them in an industrial environment for target-setting and process improvement identification. Examples include cases with correlated production parameters having synergistic or antagonistic losses.
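
For concreteness, the sketch below implements one common parameterization of the inverted normal loss function in one and two dimensions and contrasts it with unbounded quadratic loss; the targets, shape parameters, and maximum loss are illustrative, not values from the talk.

```python
# A minimal sketch of one common parameterization of the inverted normal loss
# function (INVL), univariate and bivariate, contrasted with unbounded
# quadratic loss.  Parameter values are illustrative.
import numpy as np

def invl_1d(y, target, gamma, max_loss=1.0):
    """Bounded loss: 0 at target, approaching max_loss far from target."""
    return max_loss * (1.0 - np.exp(-((y - target) ** 2) / (2.0 * gamma ** 2)))

def invl_2d(y, target, shape, max_loss=1.0):
    """Bivariate INVL with a shape matrix that can encode correlated
    (synergistic or antagonistic) losses between two characteristics."""
    d = np.asarray(y) - np.asarray(target)
    return max_loss * (1.0 - np.exp(-0.5 * d @ np.linalg.inv(shape) @ d))

y = np.array([9.0, 10.0, 12.0, 20.0])
print("quadratic loss:", (y - 10.0) ** 2)                     # unbounded
print("INVL loss:     ", np.round(invl_1d(y, 10.0, 2.0), 3))  # bounded by 1

shape = np.array([[4.0, 2.0], [2.0, 4.0]])   # off-diagonal: correlated losses
print("bivariate INVL at (12, 8):", round(invl_2d([12.0, 8.0], [10.0, 10.0], shape), 3))
```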

Presentation


Session Title: Industrial Applications
Chair: Kristi Griffiths, Eli Lilly & Co.

Speakers

Exploring the Statistical Relationship Between Culture, Strategies and Manufacturing Processes in Japanese Automotive Parts Production Facilities in North America
Author: Scott Dickenson (Indiana State University (Doctor of Philosophy student in Technology Management with emphasis in Manufacturing Systems))
Speaker: Scott Dickenson
email: SDicken179 "AT" aol.com

Abstract

This study reviews numerous variables and attributes associated with the production of rubber products at a Japanese auto parts manufacturing firm in the Midwest that supplies products primarily to the Toyota Motor Manufacturing Group in North America. The production of these products accounts for a significant portion of the firm's overall sales, and this product class has been targeted as strategically significant by the firm. As a result of global escalations in raw material costs and the growth of competing rubber production facilities in China and Southeast Asia, rubber parts production has become extremely competitive on a global basis. Consequently, domestic producers have needed to become much more efficient in their business operations to remain competitive. There are numerous variables and attributes associated with the production of rubber products for the automotive industry, and any of them can affect the successful outcome of the rubber part manufacturing cycle.

Although numerous variables and attributes are associated with defect generation, some play a larger role than others. This study explores the relationships among variables and attributes specifically related to cultural influence and internal practices, and identifies which of the primary variables carry a higher potential to generate defective rubber products and should therefore become strategic targets and the focus of extended process control. Multiple regression techniques are used to explore the relationships among these variables and to develop a method capable of predicting defect output. The study also explores the role that cultural influence may play in manufacturing strategy execution and process control.


Productivity Gains Through Extended Run Qualification
Author:
Cara Estes (Old Dominion University)
Speaker: Cara Estes
email: modcara "AT" yahoo.com

Abstract

In a highly regulated food manufacturing environment, productivity gains must be accompanied by strict adherence to quality guidelines. A $750M annual productivity gain was achieved in a super-premium ice cream manufacturing facility by extending run time from 19 to 72 hours. This enabled the facility to meet increased product demand by eliminating daily sanitation shut-down periods without sacrificing product quality. Qualification hinged on accurate collection and analysis of equipment run time and microbiological test results. This presentation will focus on pre-qualification system analysis, operator training, data collection, communication with regulating agencies, microbiological investigation, result analysis, and presentation for qualification.


A Trend-Cycle Forecasting Model of Automotive Sales, Expected Registrations or Industry Registrations
Author: James Wendelberger (Urban Science Applications, Inc.)
Speaker: James Wendelberger
email: jgwendelberger "AT" URBANSCIENCE.com

Abstract

It is often desirable to predict future automobile sales, expected registrations, or industry registrations from past values of the series. The data are fit to a trend-cycle model defined as a sine function plus a linear term. This univariate trend-cycle model is useful when the known series spans anywhere from a partial cycle up to two cycles or more. For yearly data in the auto industry, this means that anywhere from about 6 to 20 yearly observations are needed for a reasonable model fit. The structure of the model is useful for smoothing out, or accounting for, the up or down trends produced by the cyclic behavior. Without this accounting, a linear projection might overcompensate for the cyclic behavior and provide forecasts that are too high or too low (even decreasing in a growing industry). The trend-cycle model has been used successfully in practice for many automobile time series forecasts, including forecasts of retail new car industry data by market for all markets in the United States.
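
A minimal sketch of fitting such a trend-cycle model (linear term plus a sine) with nonlinear least squares is shown below; the series, cycle length, and starting values are simulated and illustrative, not actual registration data.

```python
# A minimal sketch of fitting a trend-cycle model of the form described above
# (linear term plus a sine) to yearly data.  The series below is simulated;
# the cycle length, amplitude, and trend are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def trend_cycle(t, a, b, amp, period, phase):
    return a + b * t + amp * np.sin(2.0 * np.pi * t / period + phase)

rng = np.random.default_rng(0)
years = np.arange(1990, 2006)
t = years - years[0]
truth = trend_cycle(t, 100.0, 2.0, 8.0, 7.0, 0.5)
sales = truth + rng.normal(scale=1.5, size=t.size)

# Starting values: a rough guess of the cycle length helps the fit converge.
p0 = [sales.mean(), 1.0, 5.0, 7.0, 0.0]
params, _ = curve_fit(trend_cycle, t, sales, p0=p0, maxfev=10_000)

forecast_years = np.arange(2006, 2010)
forecast = trend_cycle(forecast_years - years[0], *params)
print("fitted (a, b, amplitude, period, phase):", np.round(params, 2))
print("forecasts:", dict(zip(forecast_years.tolist(), np.round(forecast, 1))))
```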
