
Verification of Process Control


Many food microbiologists are familiar with sampling plans that use microbiological data to make
decisions regarding the quality or safety of a specific lot of food. Ideally, the statistical basis for this
type of testing is that analyses are performed on a sufficient number of samples from a single lot such
that there is a high degree of confidence that the lot does not have an unacceptable level of microorganisms
that affect the quality or suitability of the food.
An important concept in understanding the statistical basis for such lot-by-lot or within-lot testing
is that of defect rates, i.e., the proportion of servings or containers that do not satisfy some attribute, such as the absence of a pathogen in a defined quantity of product, or a concentration below a specified limit (ICMSF 2002). Such sampling programs become increasingly resource intensive as the acceptable defect rate
becomes smaller. Once a standard method with the appropriate sensitivity has been selected for analyzing
samples, achieving the desired test stringency as the defect rate decreases is typically accomplished
by analyzing more samples from the lot or by increasing the size of the analytical units
examined. When the acceptable defect rate is low (e.g., <5%), the number of samples that need to be
analyzed can be a severe practical impediment to using microbiological testing. For example, consider
two lots of ready-to-eat food that are required to be free of Salmonella, one with 50% of the
servings contaminated and a second where 1% of the servings are defective. In the first lot, examining
three servings would have a high probability (87.5%) of identifying the lot as contaminated, whereas
the probability of identifying the second lot as containing Salmonella would only be 63% if 100 servings
were examined.
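The probabilities in this example follow from the binomial relation P(detect) = 1 - (1 - p)^n, where p is the within-lot defect rate and n the number of servings examined (assuming independent samples and a perfectly sensitive method). A minimal sketch:

```python
def detection_probability(defect_rate: float, n_samples: int) -> float:
    """Probability that at least one of n samples from a lot with the
    given defect rate tests positive, assuming independent samples and
    a perfectly sensitive analytical method."""
    return 1.0 - (1.0 - defect_rate) ** n_samples

# Lot with 50% of servings contaminated, 3 servings examined:
print(round(detection_probability(0.50, 3), 3))    # 0.875
# Lot with 1% of servings contaminated, 100 servings examined:
print(round(detection_probability(0.01, 100), 2))  # 0.63
```

This is why low acceptable defect rates drive sample numbers up so sharply: detecting a 1% defect rate with the same 87.5% confidence would require roughly 200 samples.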
Another important concept associated with within-lot testing is the underlying assumption that
there is little or no knowledge about the product and the processes and conditions under which it was
manufactured and distributed. In such instances, microbiological testing is used as a control measure
to segregate sound and unsound lots. An important consequence of this assumption is that since no
prior knowledge of the lot is assumed, the results from testing one lot cannot be considered predictive
of the status of other lots.
While within-lot testing plays an important role in food safety, particularly for the examination of foods at ports of entry for regulatory actions, microbiological data are typically not collected on the basis of traditional within-lot sampling plans and statistics. Instead, sampling is often conducted periodically and
on only a portion of the lots. Furthermore, the extent of testing (i.e., number and size of samples analyzed)
is typically at a level that does not provide a high level of confidence that a lot contaminated
at a low rate would be detected. This is not to imply that this type of testing does not provide manufacturers
or control authorities with important microbiological data; however, too often such testing
programs are conducted in a manner that does not provide the best use of the data acquired.
These testing programs are referred to as process control testing or between-lot testing, and their
usefulness can be enhanced significantly if they are appropriately designed, including appropriate
analysis, interpretation and review of the data. When this is done, testing programs provide a powerful
tool for evaluating and correcting the systems used to control microbiological safety and quality
before the system crosses the threshold where the product is no longer suitable for commerce. This
chapter provides a brief introduction to the concepts and application of this type of microbiological
data acquisition. Detailed requirements for establishing such a testing program are found in other
standard references (Does et al. 1996; Roes et al. 1999; ICMSF 2002; Hubbard 2003; US National Academy of Sciences 2003; ECF 2004; NIST/SEMATECH 2006).
Understanding the differences in the goals and assumptions associated with within-lot and
between-lot testing is important for successful process control testing. Within-lot testing is used to
establish the safety or quality of a specific lot of product, presumably because of a lack of knowledge
about the effectiveness of the means for controlling contamination and ensuring safe production, processing
and marketing. The purpose of between-lot testing is not to establish the safety of a specific
lot; rather safety is assumed to have been achieved by establishing and validating processes and practices
that control significant hazards including the variability of ingredients, processes and products.
The purpose of between-lot testing is to verify that the process and practices for ensuring safety are
still performing as intended. The underlying assumption in this case is that there is detailed knowledge
of how the food was manufactured. Thus, process control sampling is most effectively implemented
as part of an overall food safety risk management program such as HACCP (ICMSF 1988).
To reiterate the different applications of within-lot and between-lot testing – if the testing of all lots
using within-lot sampling plans were implemented in a HACCP program, that sampling would be both
a control measure (that would likely be a critical control point) and part of monitoring activities.
Conversely, between-lot testing would be used as part of the verification phase of HACCP. Thus,
failure to meet a within-lot sampling plan would indicate a potentially unacceptable lot whereas
failure of a between-lot sampling plan would signal a potential loss of control of a HACCP
program.
As indicated above, the purpose of process control testing is to determine whether a control system
is functioning as designed; i.e., producing servings that have a defect rate below a specified value or
within a specified range. An inherent assumption made in conducting between-lot microbiological
testing is that actions have been taken to reduce the variability among lots so that the variability
between lots is minimized or that the system is consistently operating at a level of control such that
the products are substantially better than the specified acceptable level. It is questionable whether a
HACCP program could be truly considered under control if there is a large between-lot variation.
Thus, between-lot testing is most effective when there is little variation in the mean and standard
deviation of the log concentrations of a hazard among lots under normal operation. A small between-lot variance allows a loss of control of the food safety or quality system to be more readily identified
with the least amount of microbiological sample analysis.
As a simple example of the difference between within-lot and between-lot sampling, consider a
company that has two processing lines, one old and less reliable, and one new and highly reliable, for
the same product. The company wants to ensure a defect rate of <1% of that product from either line.
For product from the old line, where there is less confidence in the reliability of the process, the
company may opt to test each lot. In this case, end product testing is used as a critical control point.
Given that the within lot variability of product from the old line is higher, the manufacturer might
even choose to use a sampling plan that involves a greater number of samples so as to have more
confidence that the results of the sampling plan are representative of the entire lot. Conversely, for
the new line, the company could apply the same sampling plan but draw the samples from a greater
number of lots; i.e., effectively considering the process as a continuous lot, or a series of large lots,
with the lot being defined by a period of time and lots overlapping in time. This is the basis of
the moving window approach, exemplified in Sect. 3.4. In the moving window approach,
an increase in the number of positive results over time indicates a trend toward loss of control.
In this case the same sampling plan is used to verify the process.
Appropriate statistical analysis can identify when the incidence of defective units significantly
exceeds the tolerable defect rate. If the incidence exceeds that level, the manufacturer should investigate
the cause of the elevated defect rate to determine why the process is no longer functioning as
intended and should take corrective action. Examination of the system’s performance over time also
provides useful information and insights into the type of failures that occur (ICMSF 2002). Process
control testing is most effective when it can detect an issue at a level or frequency below that which
would be considered unacceptable for safety or quality if it were to enter the marketplace. In this way
corrective actions can be taken before a critical limit is exceeded.
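One common way to decide whether an observed incidence of defective units significantly exceeds the tolerable defect rate is a one-sided binomial test; the 2% tolerable rate and the sample counts below are hypothetical illustrations, not values from the text.

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """One-sided binomial tail: probability of observing k or more
    positives in n samples if the true defect rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 4 positives in 50 samples against a 2% tolerable defect rate.
p_value = p_at_least(4, 50, 0.02)
if p_value < 0.05:
    print("incidence significantly exceeds the tolerable rate; investigate")
```

A small p-value indicates the observed count is unlikely under the tolerable rate, triggering investigation and corrective action as described above.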
3.2 How to Verify that a Process is Under Control
The actual microbiological methods used to detect, identify and enumerate microorganisms of concern
for process control verification are essentially the same as those used for within-lot testing. These
methods are available in a variety of standard references (e.g., ISO, AOAC, FDA Bacteriological
Analytical Manual, American Public Health Association etc.) and are not discussed further.
Like within-lot testing, microbiological criteria established for a process control testing program can be based on either attributes testing (i.e., presence/absence or a single numerical limit in a 2-class plan, or two limits in a 3-class plan) or variables testing (i.e., the full range of quantitative data). Process control sampling
plans can be applied to finished products, in-process samples or ingredients. Ideally a decision
on the analytical approach used is reached early in the development of the process control sampling
program. The approach selected strongly influences the types of data needed during the initial phases
of establishing the program. A decision on the approach used should be determined before establishing
the microbiological criteria (i.e., decision criteria) for the program.
3.2.1 Information Required to Establish a Process Control Testing Program
As indicated above, use of process control testing is based on detailed knowledge of the product
and process. A meaningful process control testing program requires detailed knowledge of the
levels or frequency at which the microorganism of concern can be expected in a product when it is
produced and handled properly. This includes information on the variation in those levels both
between lots and within lots. Thus, the first step in establishing a process control testing program
to verify continued successful operation of a food safety or quality system is to gather baseline data
on the performance of the food safety system when it is functioning as intended. This is commonly
referred to as a process capability study. During this period, intensive acquisition of data that characterizes
the performance of the system is undertaken, either by generating new data from tests on
the system or by collating existing data. The data collected are specific to the system being evaluated.
This can be as specific as the performance of a single line within a manufacturing plant or as
broad as a commodity class for an industry. However, the latter requires a great deal of forethought
and effort to ensure that the acquisition of data is not biased and adequately represents an entire
industry. On a national basis, this is typically done through a series of national baseline studies; a
major undertaking that is typically done by a national government or industry representative body.
The sensitivity of the methods and sampling plans selected should be adequate to provide sufficient
data on the true incidence of defects within a lot as well as prevalence (the average rate of defects
over time) of the microbiological hazard in the food. Ideally the sensitivity will be set at a level
that is sufficient to detect the pathogen or quality defect at least a portion of the time. Historical
within-lot testing results can be highly useful for determining the system’s performance and
variability.
When conducting a process capability study, care must be taken to ensure that the data collected
represent product manufactured when the food safety system is under control. If not, it is likely to
increase the variability of the levels (or frequencies) of the microbiological hazard that will form the
basis of the reference level against which ongoing performance will be assessed. This could decrease
the ability of the process control program to identify when the system is not functioning as intended.
The duration of a process capability study will vary with product, pathogen and purpose, but it should
be long enough to generate sufficient data to ensure that the variability in the process has been characterized
accurately. At a minimum, 30 lots should be examined so that the influence of sampling
error is acceptably small and that the performance characterization is reasonably robust. There are
instances where the process control study may need to be conducted for longer periods or in phases.
For example, if raw ingredient contamination varies substantially over the course of a year, then the
process capability study may need to consider seasonality as a factor, thereby extending the duration
of the study for a full year. In such instances, it is possible to conduct the process capability study for
30 days, perform initial analyses and set initial control limits; and then review and revise the analysis
and control limits, if necessary, as additional data are accumulated. The inclusion of such data in the
process control study depends, in part, on a value judgment related to whether the product is deemed
under control during those periods when high levels are observed due to season or supplier. If the
process is not deemed as being under control, then the data derived from it should not be included in
the reference level data set. It also implies that means for preventing the increased defect rates associated with seasonality or supplier will need to be identified immediately, since a process control testing program based on a capability study that excludes the higher-defect-rate period will, once implemented, appropriately identify the process as being out of control during those periods.
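As a sketch of the baseline characterization described above, the capability-study data can be summarized by the mean and standard deviation of the lot log concentrations, from which provisional control limits follow. The 3-sigma limits and the data below are illustrative assumptions, not values from the text.

```python
import statistics

def control_limits(log_counts, k=3.0):
    """Provisional control limits from capability-study data: mean plus
    or minus k standard deviations of the lot log10 concentrations."""
    if len(log_counts) < 30:
        raise ValueError("need at least 30 lots for a robust baseline")
    mean = statistics.fmean(log_counts)
    sd = statistics.stdev(log_counts)
    return mean - k * sd, mean + k * sd

# Hypothetical baseline: 30 lots with log10 CFU/g values around 3.5
baseline = [3.5 + 0.2 * ((i % 5) - 2) / 2 for i in range(30)]
lcl, ucl = control_limits(baseline)
```

Consistent with the text, limits set on too few lots are dominated by sampling error; the function refuses to compute limits from fewer than 30 lots, and the limits should be revisited as data accumulate.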
As indicated above, process control testing programs are most effective when they detect loss of
control before a critical limit is exceeded. For that reason, the microbiological limits for process
control testing programs employed by companies are frequently established to effectively detect
changes before a regulatory limit is exceeded. This allows corrective actions to be taken proactively.
However, this proactive approach can be difficult to implement if competent authorities establish
limits based on “zero tolerance” instead of specifying a specific microbiological criterion based on
risk or on specific testing protocols.
Process control testing can be used for assessing both food safety and food quality, and is not
restricted to microbiological testing. Simple, easily performed physical and/or chemical measures of
the impact of microbial contamination can offer distinct advantages over more sophisticated testing
methods. For example, sterility testing of UHT milk products is amenable to process control testing
based on sensory evaluation combined with a pH determination (von Bockelmann 1989).
3.2.2 Setting Microbiological Criteria, Limits and Sampling Plans
The concentration of microorganisms varies in lots of food and is often described by a log normal
distribution. Such distributions are open-ended functions and high values can potentially occur even
when the system is in control. However, such events should be rare and a high frequency of such
occurrences is evidence that the system is no longer under control. A microbiological criterion establishes
the decision criterion to assess whether a microbiological testing result could have occurred by
chance alone or whether the food safety or quality system has undergone some significant change
such that it is no longer functioning as intended.
The microbiological limit associated with a process that is under control effectively establishes
that decision criterion, based on the results of the initial process capability study. Assuming that the
current level of control within a plant or an industry is deemed acceptable, a limit can be established
in combination with an appropriate sampling plan so that the frequency of detecting a positive result
or a specific concentration would be unlikely to occur by chance alone. For example, a result that
exceeds the 95% probability value would only be expected to occur, on average, about once in 20
samples. If the frequency were higher, it would be indicative that the system is out of control. An
increase in the number and size of analytical units examined increases the likelihood of detecting a
positive result so that the decision criteria are specific to the microbiological criterion and sampling
plan established. Establishing the stringency of a microbiological criterion is a risk management
activity. Thus, the specific sampling plan thresholds selected (e.g., 95 or 99% confidence) may take
into account a range of scientific and other parameters such as assessed risk, severity of the hazard,
technological capability, public health goals, cost of taking action when the process is actually in
control, or consumer preferences and expectations. Because this is a risk management issue and not
a risk assessment, no specific value of probability of detection serves as a standard criterion. For
example, consider two situations that a country or company might assess in establishing a microbiological
limit for a food product. First, consider a product where the industry’s food safety or quality
systems is based on a single, well established technology that is operating with a substantial safety
margin to control a relatively mild hazard and has both a low between-lot and within-lot variance.
In that instance a microbiological limit based on 99.99% of the baseline distribution (i.e., ≤0.01% of the test values from the program operating as intended would exceed the microbiological limit)
might be deemed sufficient to protect public health and the microbiological criterion would be
established accordingly. In such a situation, the microbiological limit established would result in the
appropriate acceptance of the vast majority of this product. Such a process control standard would
have little impact on the industry’s current performance. In contrast, consider an industry where
there is substantial variability among the technologies, practices and standards of care used by individual
companies, leading to substantial between-lot (and in some instances within-lot) variability.
In this case, the country or company might establish a microbiological limit at 80% of the current
baseline distribution (i.e., 1 in 5 of samples as currently produced would be deemed unacceptable).
Over time a process control microbiological limit of such a magnitude would likely have a large
impact on the companies that are poorer performers; i.e., their food systems would be considered as
not functioning as intended. Conversely, the limit would have minimal impact on companies that are
good performers. The end result would be to decrease both the mean and variance of the log concentration
of the hazard in servings of the product entering commerce. A similar outcome would
occur over time if the stringency of a within-lot testing program was increased.
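The percentile-based limits discussed above (99.99% for the tightly controlled industry, 80% for the variable one) can be sketched as quantiles of the assumed log-normal baseline; the normal-quantile approach and the baseline parameters below are assumptions for illustration only.

```python
from statistics import NormalDist

def microbiological_limit(mean_log: float, sd_log: float, percentile: float) -> float:
    """Limit (in log10 CFU/g) below which the given fraction of results
    from an in-control process would fall, assuming log concentrations
    are normally distributed (i.e., counts are log-normal)."""
    return NormalDist(mean_log, sd_log).inv_cdf(percentile)

# Hypothetical baseline: mean 2.0 log10 CFU/g, standard deviation 0.5
tight = microbiological_limit(2.0, 0.5, 0.9999)  # roughly mean + 3.7 sd
loose = microbiological_limit(2.0, 0.5, 0.80)    # roughly mean + 0.84 sd
```

The choice of percentile is the risk management decision described in the text; the calculation itself only translates that decision into a concentration.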
3.3 Routine Data Collection and Review
Once established, process control testing requires routine testing of only a small number of samples.
The number of lots that need to be tested, the frequency of testing and the number of samples from
each lot depends on the inherent defect rate when the food safety or quality system is functioning as
intended and the degree of confidence that the microbiological limit is not being exceeded by the
manufacturer or country. The specific testing requirements of the process control sampling plan
depend on the type of process control analysis approach being employed (e.g., CUSUM, Moving
Window) (ICMSF 2002). Process control testing programs can also include variations in testing frequency
based on process performance; e.g., to increase testing when increased defects are detected
or to decrease the frequency of testing when results are consistently acceptable over time. However,
rules for variable sampling frequencies should be formulated with a clear understanding of the effect
that the alternate sampling frequencies have on the ability of the testing program to detect an emerging
loss of process control and to be able to respond in time to prevent unacceptable product from
entering commerce.
Implementation of process control testing programs requires effective data management systems
and the ongoing evaluation of collected data over time. This is usually done through control charting
where the data are arrayed over time (Fig. 3.1). Graphical representation is often a useful tool as an
initial evaluation of the data. Comparing these data with the data collected in the routine monitoring
of critical control points in HACCP plans and other verification data can be useful for interpreting
the results of the process control testing and enhancing the identification of the underlying causes of
process deviations. For most food microbiology concerns, the lower limit would not typically be
considered a decision criterion, with the possible exception of fermented foods or probiotic-containing
foods; however, the lower limit may reflect the limit of detection of the test. In the hypothetical
example in Fig. 3.1, a loss of control is apparent at weeks 50 and 51 that should have elicited investigation
to restore control. Additionally, a general increasing trend began at week 42 and became
apparent by week 46–47. This could have stimulated corrective action investigations even before a
loss of control occurred.
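The two signals described for Fig. 3.1 can be sketched as simple control-chart checks: flag any result above the upper limit, and flag a sustained run of increases as an emerging trend. The run-length convention of seven consecutively rising points is a common control-charting rule of thumb, assumed here rather than taken from the text.

```python
def chart_signals(results, upper_limit, run_length=7):
    """Check a series of weekly log-count results against an upper control
    limit. Returns (indices of out-of-control points, trend_detected),
    where a trend is run_length consecutive rising points."""
    out_of_control = [i for i, x in enumerate(results) if x > upper_limit]
    rising_steps = 0
    trend = False
    for prev, cur in zip(results, results[1:]):
        rising_steps = rising_steps + 1 if cur > prev else 0
        if rising_steps >= run_length - 1:  # run_length points in a row rising
            trend = True
    return out_of_control, trend
```

A trend signal allows investigation before any single point exceeds the limit, which is exactly the proactive benefit the text attributes to reviewing the chart over time.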
3.4 Competent Authority Process Control Program Examples
The use of process control testing for regulatory verification of food safety programs began in the
1990s as competent authorities began to incorporate HACCP into their regulatory programs. The use
of process control analysis techniques provided them with a statistically sound means of establishing
microbiological testing as a HACCP verification tool, while minimizing the economic impact of testing
on both business operators and the competent authority. While the techniques are increasingly
being used by industry and governments, the greatest adoption of this approach has been in North
America. Examples of early use of this approach follow.
3.4.1 Meat and Poultry
One of the first uses of process control programs by competent authorities was in the Pathogen
Reduction/Hazard Analysis and Critical Control Point (HACCP) Systems rule (USDA 1996).
This regulation established two microbiological criteria as a means of verifying HACCP plans for
meat and poultry products:
[Fig. 3.1 Hypothetical control chart for a microbial indicator assay conducted weekly; y-axis: Log CFU/g (2-6), x-axis: Time in weeks (0-60). The center horizontal line (—) represents the hypothetical microbiological criterion and the two flanking lines (- -) represent the 95% upper and lower confidence limits.]
1. Testing for Escherichia coli as an indicator of fecal contamination and adequate chilling, performed by individual business operators.
2. Salmonella enterica testing performed by USDA Food Safety and Inspection Service (FSIS).
The microbiological limits established by FSIS were based on extensive review of baseline studies,
regulatory testing and industry data for various classes of meat and poultry products (USDA 1995).
Built into these standards was a goal of decreasing the incidence of foodborne disease attributable to
meat and poultry. The program employed a between-lot moving window approach (i.e., as each new
test result is obtained the window moves and the oldest result is discarded), where the results of
single samples taken on individual production days are examined over the course of a specified number
of days. The frequency of positive samples over that moving time frame is then related to the
defect rate that is expected for the specific meat or poultry product. The testing required of manufacturers (i.e., for biotype I E. coli as an indicator of fecal contamination) is based on a 3-class attribute sampling plan. The testing by FSIS for S. enterica is based on a 2-class plan in conjunction
with samples taken periodically by regulatory personnel over a specified number of days.
Failure to meet the microbiological limit is interpreted as indicating, with >99% probability, that the facility is not achieving the required level of control (USDA 1996). The Salmonella performance
standards are not lot acceptance/rejection standards. The detection of Salmonella in a specific lot of
carcasses or ground product does not, by itself, result in condemnation of the lot. Instead, the standards
are intended to ensure that each establishment is consistently achieving an acceptable level of
performance with regard to controlling and reducing enteric pathogens on raw meat and poultry
products (USDA 1996).
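The moving-window evaluation described above can be sketched as keeping the most recent n presence/absence results and comparing the count of positives with the maximum allowed, c. The window size and allowance below are hypothetical, not the actual FSIS performance-standard values.

```python
from collections import deque

class MovingWindow:
    """Between-lot moving-window test: retain the most recent n
    presence/absence results; more than c positives in the window
    signals that the process may not be meeting the standard."""

    def __init__(self, n: int, c: int):
        self.results = deque(maxlen=n)  # oldest result drops off automatically
        self.c = c

    def add(self, positive: bool) -> bool:
        """Record one production day's sample; return True if the
        window now exceeds the allowed number of positives."""
        self.results.append(positive)
        return sum(self.results) > self.c

# Hypothetical plan: window of 51 daily samples, at most 12 positives allowed.
window = MovingWindow(n=51, c=12)
```

Because the window slides one result at a time, a single positive never condemns a lot; only an accumulation of positives over the window triggers the process-control signal, matching the non-lot-acceptance character of the standard described above.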
The FSIS regulation and requirements are intended to evolve to address new risks and availability
of new data. Development of process control microbiological criteria is being considered by other
national governments and intergovernmental organizations. For example, the EU has established
process control-based hygiene criteria for controlling Salmonella in raw poultry (EFSA 2010), and
the Codex Committee on Food Hygiene is considering a process control approach.
3.4.2 Juice
A more limited use of microbiological testing for process control is employed in the US FDA’s
Hazard Analysis and Critical Control Point (HACCP); Procedures for the Safe and Sanitary
Processing and Importing of Juice; Final Rule (FDA 2001). In this example the competent authority
was concerned about the underlying scientific assumption that enteric pathogens would not become
internalized in citrus fruit. The regulation has an exemption for citrus fruit juice producers enabling
them to fulfill the required 5-D pathogen reduction by treating the surface of the fruit prior to the
juice being expressed. This exemption was based on data that suggest enteric bacteria are limited to
the surface of the fruit. This prompted a requirement that manufacturers choosing to use only surface
treatments must analyze a 20-mL sample for every 1,000 gallons (~4,000 L) produced per day
for generic E. coli, using a moving window analysis based on a 7-day window, where two positive
samples in a 7-day window are deemed to indicate the process is no longer in control. This requires
the manufacturer to investigate the cause of the deviation and divert juice to pasteurization after the
juice is expressed. Based on extensive baseline studies of commercial juice operations indicating the
range of initial contamination levels, juice that is successfully treated to achieve a 5-D reduction
(99.999%) is likely to have <0.5% probability of having two positives in a 7-day window after 20
samples. Conversely, a reduction that yields only 3-D inactivation is calculated to result in a 34%
frequency of 2 positive E. coli findings within the 7-day window with 20 samples, which would
detect the process failure (Garthright et al. 2000; FDA 2001).
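The window probabilities quoted for the juice rule can be checked with a binomial calculation: given a per-sample probability of a generic E. coli positive, compute the chance of two or more positives among the samples in the window. The per-sample probabilities below are illustrative assumptions, not the FDA's baseline values.

```python
def p_two_or_more(n: int, p: float) -> float:
    """Probability of two or more positives among n independent
    samples, each positive with probability p."""
    p_zero = (1 - p) ** n
    p_one = n * p * (1 - p) ** (n - 1)
    return 1.0 - p_zero - p_one

# With 20 samples in the window:
low = p_two_or_more(20, 0.01)   # hypothetical well-controlled process
high = p_two_or_more(20, 0.15)  # hypothetical under-processed juice
```

As the per-sample positive rate rises, the probability of tripping the two-positive rule climbs steeply, which is what makes the 7-day window sensitive to an inadequate (e.g., 3-D) reduction while rarely flagging a process achieving the full 5-D reduction.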
