Good Weighing Practices for the Pharma Industry
In the pharmaceutical industry, weighing is only one step in a QC analysis chain or manufacturing process, but it strongly influences the overall quality and integrity of the analytical result or the final product. In production,
weighing is a key factor in achieving batch uniformity and consistency in
dispensing or formulation processes. Proper weighing is thus essential in
ensuring continuous adherence to predefined process requirements and avoiding a
frequent source of out-of-specification results.
Furthermore, accurate weighing processes help to address some of the most demanding challenges of the pharmaceutical industry, improving public health protection, consumer safety, productivity, and competitiveness.
This article introduces a scientific methodology for selecting
and testing weighing instruments within an integrated qualification
approach—Good Weighing Practices. Based on the user's primary weighing requirements and the prevailing weighing risks, it provides a state-of-the-art strategy
to reduce measurement errors and ensure reliable weighing results.
Understanding weighing process requirements and important balance and scale
properties such as minimum weight is essential to selecting an appropriate
system. When the instrument is in use, these requirements and risks are taken
into account to establish a specific routine testing scenario.
The higher the impact of inaccurate weighing and the more stringent the weighing accuracy requirements, the more frequently testing should be performed. For less critical applications with less stringent requirements, testing efforts can be reduced accordingly. Risk and life cycle management form an integral part of the overall strategy of good weighing practices, bridging the gap between productivity, process quality, safety, and compliance.
OOS Results and Their Consequences
OOS results have a significant impact not only on consumer safety and product quality but also on organizational productivity. Any observed OOS result may lead to reduced uptime due to investigations, delayed batch release, and even costly recalls. In recent years, organizations have been
facing more stringent safety and quality regulations. New challenges concerning
product safety and quality are created by developments such as genetically
modified organisms or nanotechnology. Furthermore, the rise in international sourcing and trade of pharmaceuticals is expected to accelerate this trend.
In light of these issues, along with
corresponding changes in international and national laws, standards and
inspection processes will be subject to regular revision and update. One example of recent legislation affecting the industry is the Food Safety Modernization Act (FSMA), which went into effect in 2011. FSMA shifts federal regulators’ focus from responding to safety issues to preventing them. Its implementation, which is still underway, will lead to enhanced prevention and an increased frequency of mandatory FDA inspections. In the past, many FDA 483 observations and warning letters addressed to the pharmaceutical industry have cited deficient weighing practices.
Weighing is a key activity in most pharmaceutical operations; however, it is not well understood, and its complexity and importance are often underestimated. The weighing process is even less understood in the production environment than in the laboratory. The selection of a scale is affected by external factors such as hygiene, ingress protection, corrosion, the risk of fire or explosion, and the health and safety of the operating staff. In current practice, these factors are often given higher priority than metrological needs. Metrological criteria, whose understanding and proper consideration are a prerequisite for preventing OOS outcomes, receive insufficient attention.
More often than not, production operators are less extensively trained than laboratory analysts. As a consequence, handling errors, and with them OOS results, are more frequent in production areas than in the laboratory.
One frequent practice is to use existing instruments for a different purpose than the one for which they were originally procured. Unfortunately, the metrological needs of the new application may not match the capability of the repurposed scale.
OOS results in production are not only an indicator that quality might be at risk; they can also pose a hazard to the health and safety of the consumer, constitute a potential breach of regulatory requirements, and cause economic loss for the organization. When this happens, raw materials, manpower, and assets are consumed in a process that ends with a poor final result. Products must then be reworked or disposed of. In many cases, the detection of an error may trigger tedious and costly recall actions that damage the brand.
Regulations require instruments to be checked or calibrated
periodically. For example:
“The company shall identify and control measuring equipment used
to monitor CCPs…. All identified measuring devices, including new equipment,
shall be checked and where necessary adjusted at a predetermined frequency,
based on risk assessment…. Reference measuring equipment shall be calibrated
and traceable to a recognized national or international standard and records
maintained.”
While the standard calls for instruments to be adjusted when
necessary, it remains silent with regard to how accurate results should be
defined and verified. The applied principles are consequently diverse
throughout the industry. In many cases, the principle of “what you see is what
you get” is applied.
In this environment of misconception, scales are the last part of the production chain to be suspected when OOS results occur. OOS results then come to be accepted as a necessary evil, when they should not be.
Measurement Uncertainty and Minimum Weight
State-of-the-art strategies for consistently accurate and reliable weighing consist of scientific methodologies for instrument selection and testing. Despite such methodologies, industry misconceptions about weighing remain widespread, including “what you see is what you get.” What do we mean by that? Here’s an example: A user weighs a product on an industrial floor scale and gets a reading of 20 kg, which he believes is the true amount of material. However, this reading might not exactly reflect the amount weighed; in other words, the amount weighed might differ slightly from the instrument reading. This is due to the so-called measurement uncertainty, which applies to every measuring instrument.
Measurement uncertainty is determined during calibration, and the results are reported in the corresponding calibration certificates. In general, the measurement uncertainty of weighing systems can be approximated by a straight line with positive slope: the higher the load on the balance, the larger the (absolute) measurement uncertainty. Looking at the relative measurement uncertainty, which is the absolute measurement uncertainty divided by the load and expressed as a percentage, we see that the smaller the load, the larger the relative measurement uncertainty. If you weigh at the very low end of the instrument’s measurement range, the relative uncertainty can become so high that the weighing result can no longer be trusted.
It is good practice to define accuracy (tolerance) requirements for every weighing process in use. Below a certain load, the measurement uncertainty of the instrument is larger than the required accuracy of the weighing process, and measurements there are inaccurate. Consequently, there is a specific accuracy limit for every weighing instrument: the so-called minimum sample weight, better known as the minimum weight. This is the smallest amount of material that satisfies the specific weighing accuracy requirement.
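To make this concrete, here is a minimal sketch in Python, assuming a simple linear model U(m) = U0 + a·m for the expanded measurement uncertainty; the coefficients are hypothetical stand-ins for values that would in practice come from a calibration certificate:

def expanded_uncertainty(load_g, u0=0.002, slope=2e-6):
    # Hypothetical linear model of expanded uncertainty (in grams);
    # real coefficients come from the calibration certificate.
    return u0 + slope * load_g

def relative_uncertainty_pct(load_g):
    # Relative uncertainty: absolute uncertainty divided by the load,
    # expressed as a percentage. It grows as the load shrinks.
    return 100.0 * expanded_uncertainty(load_g) / load_g

def minimum_weight(tolerance_pct, u0=0.002, slope=2e-6):
    # Smallest load whose relative uncertainty still meets the tolerance.
    # Solving 100 * (u0 + slope*m) / m = tolerance_pct for m gives:
    tol = tolerance_pct / 100.0
    return u0 / (tol - slope)

print(relative_uncertainty_pct(20000))  # large load: tiny relative u (~0.0002 %)
print(relative_uncertainty_pct(1))      # small load: large relative u (~0.2 %)
print(minimum_weight(0.1))              # about 2 g for a 0.1 % tolerance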
While measurement uncertainty is described in great detail in
the literature, we want to emphasize that for weighing small loads on
analytical and microbalances, the dominant factor in measurement uncertainty
stems from repeatability. Samples and standards that are typically weighed on
these balances are usually small loads in comparison with the capacity of the particular
balance.
Scales follow the same principles as balances, with some
additional constraints that arise from the technology used and the size of the
instrument. Most scales use strain gauge weighing cells that lead to a lower
resolution than balances. In some cases, the rounding error may be predominant,
but for scales of higher resolution, the repeatability becomes a decisive
contributor to the measurement uncertainty in the lower measurement range of
the instrument.
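As a rough illustration of how the dominant contributor can be identified, the sketch below combines just two contributions, repeatability and the rounding of the digital step, in the usual root-sum-of-squares; the numbers are hypothetical, and real uncertainty budgets contain further terms:

import math

def combined_std_uncertainty(s_rep, d):
    # Repeatability standard deviation s_rep combined with the rounding
    # contribution of a digital step d (standard uncertainty d/sqrt(12)).
    return math.sqrt(s_rep**2 + (d / math.sqrt(12))**2)

# Analytical balance (grams): fine readability, repeatability dominates
print(combined_std_uncertainty(s_rep=0.00005, d=0.00001))
# Strain-gauge floor scale (grams): coarse readability, rounding dominates
print(combined_std_uncertainty(s_rep=2.0, d=10.0))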
Linearity deviation can be a large contributor, but it can usually be neglected when weighing small samples. Considering that the relative measurement uncertainty diminishes when weighing larger samples, we can conclude that non-linearity plays a minor role in keeping the measurement uncertainty of the instrument below the required process tolerance. Consequently, we need to focus our attention on repeatability to define the critical limit of a high-resolution industrial scale.
It is important to state that the minimum weight of balances and
scales is not constant over time. This is due to changing environmental
conditions that affect the performance of the instrument—factors such as
vibrations, drafts, wear and tear, and temperature changes. The operator also adds variability to the minimum weight, because different users may handle the instrument differently or bring different levels of skill to the same task.
To ensure that you always operate above the minimum weight determined at calibration (at a particular time, under particular environmental conditions, by a qualified service technician), apply a safety factor. This means you weigh only sufficiently above the minimum weight as determined at calibration. For standard weighing processes, a safety factor of two is commonly used, provided environmental conditions are reasonably stable and trained operators are at work. For very critical applications or a very unstable environment, a higher safety factor is recommended.
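In practice this is simple arithmetic; a minimal sketch with hypothetical numbers:

def smallest_net_weight(min_weight_at_calibration, safety_factor=2.0):
    # Smallest net amount to weigh routinely: the minimum weight found at
    # calibration, multiplied by a safety factor covering environmental
    # and operator variation. A factor of 2 is common for standard processes.
    return safety_factor * min_weight_at_calibration

# Hypothetical: calibration determined a minimum weight of 20 g
print(smallest_net_weight(20.0))  # -> 40.0 g; routinely weigh at least this much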
Another frequent misconception is that the weight of the tare vessel counts toward the minimum weight requirement; in other words, that if the tare weighs more than the minimum weight, any quantity of material can be added and the requirement is automatically fulfilled. By that logic, with a large enough tare container you could weigh a sample of just one gram on an industrial floor scale with a one-ton capacity and still comply with the applicable process accuracy. Given that the rounding error of the digital indication is always the lowest limit of the overall measurement uncertainty, it is clear that such a small amount of material cannot be weighed accurately in any tare container. Although this is an extreme example, it clearly shows that this widespread misinterpretation does not make sense.
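To put numbers on the extreme example (the readability is a hypothetical assumption), even the rounding error alone overwhelms a one-gram net sample on such a scale:

import math

d_kg = 0.1          # hypothetical readability of a 1,000 kg floor scale
sample_kg = 0.001   # a 1 g net sample

# Rounding contribution of the digital indication alone (d / sqrt(12))
u_round_kg = d_kg / math.sqrt(12)
print(u_round_kg * 1000)             # about 29 g of standard uncertainty
print(100 * u_round_kg / sample_kg)  # roughly 2,900 % of the sample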
We recently encountered another misconception in a dispensing application in which the measured minimum weight of the scale in question was 100 kg. The company stated that its practice was to dispense 20 kg at a time, always leaving more than 100 kg of substance in the container to adhere to the minimum weight requirement. Its staff did not understand that it is the dispensed (net) amount that must satisfy the minimum weight; they would have to dispense at least 100 kg, instead of 20 kg, to comply with their own accuracy requirement.
Routine Testing
“Measuring equipment shall be calibrated and/or verified at
specified intervals…against measurement standards traceable to international or
national measurement standards.” — ISO 9001:2008, 7.6 Control of Monitoring and
Measuring Devices.
“The methods and responsibility for the calibration and
recalibration of measuring, test and inspection equipment used for monitoring
activities outlined in Pre-requisite Program, Food Safety Plans and Food
Quality Plans and other process controls…shall be documented and implemented.”
— SQF 2000 Guidance – Chapter 6.4.1.1 “Methods & Responsibilities of
Calibration of Key Equipment.”
These statements delegate the responsibility for the correct operation of weighing instruments to the user. Statements like these are intentionally vague; they are meant as general guidelines and therefore cannot be applied directly to daily operations. Questions such as “How often should I test my weighing instrument?” arise whenever guidance is needed to design standard operating procedures. Such procedures should be neither too exhaustive, and thus costly and time consuming, nor too vague, and thus insufficient to assure proper functioning. The right balance between consistent quality and sufficient productivity must be found. The following test procedures for weighing instruments are recommended for normal use:
· Calibration in situ by authorized personnel, including the determination of measurement uncertainty and minimum weight under normal conditions of use. The aim is to assess the complete performance of the instrument by testing all relevant weighing parameters, made transparent to the user in a calibration certificate. Calibration is an important step to take after the instrument is installed and the necessary functional tests are performed.
· Routine testing of the weighing system, carried out in situ by the user on the weighing parameters that have the greatest influence on the performance of the balance or scale. The aim is to confirm the suitability of the instrument for the application; a sketch of such a check follows this list.
· Automatic tests or adjustments, where applicable, using built-in reference weights. The aim is to reduce the effort of the manual testing stipulated by specific FDA guidance.
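As a sketch of such a routine check (the acceptance rule, twice the standard deviation relative to the smallest net weight, is a common convention assumed here, not a requirement quoted above):

import statistics

def repeatability_check(readings, smallest_net_weight, tolerance_pct):
    # Weigh the same test weight repeatedly, compute the standard
    # deviation s, and pass if 2*s, relative to the smallest net weight
    # the process uses, stays within the required tolerance.
    s = statistics.stdev(readings)
    return 100.0 * 2.0 * s / smallest_net_weight <= tolerance_pct

# Hypothetical: ten readings of a 100 g test weight, in grams
readings = [100.001, 99.999, 100.000, 100.002, 99.998,
            100.001, 100.000, 99.999, 100.001, 100.000]
print(repeatability_check(readings, smallest_net_weight=1.0, tolerance_pct=0.5))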
Test Frequencies
The routine testing procedures and
corresponding frequencies are based on the following factors:
· The required weighing accuracy of the particular application;
· The impact of OOS results (e.g., on the business, the consumer, or the environment) in case the weighing instrument does not adhere to the process-specific weighing requirements; and
· The detectability of a malfunction.
The more stringent the accuracy requirements of a weighing process, the higher the probability that results will fail to comply with the specification, so the test frequency must be increased. Similarly, if the severity of the impact increases, testing should be performed more frequently to offset the likelihood of OOS results. If a malfunction of the weighing instrument is easily detected, the test frequency can be decreased. The frequency of testing ranges from daily, for high-risk applications, to weekly, monthly, quarterly, semi-annually, and yearly.
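A minimal sketch of how these three factors could be turned into a test interval; the 1-to-3 scoring and the mapping to intervals are illustrative assumptions, not values taken from any standard:

def test_interval(probability, severity, detectability):
    # Each factor is scored 1 (low) to 3 (high); detectability is scored
    # high when a malfunction is HARD to detect, so poor detectability
    # raises the overall risk score (1..27).
    score = probability * severity * detectability
    if score >= 18:
        return "daily"
    if score >= 9:
        return "weekly"
    if score >= 6:
        return "monthly"
    if score >= 3:
        return "quarterly"
    if score >= 2:
        return "semi-annually"
    return "yearly"

print(test_interval(3, 3, 3))  # stringent, severe, hard to detect -> daily
print(test_interval(2, 2, 2))  # moderate across the board -> monthly
print(test_interval(1, 1, 1))  # lax, benign, easy to detect -> yearly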
Our experience is that many companies tend to test their laboratory balances quite frequently. A proper risk-based approach can reveal whether it is necessary to test so often and whether these efforts can be reduced without compromising quality. Furthermore, the applied test procedures might not always be appropriate: while many companies perform one or several test weighings to assess the balance at different points of the weighing range, the importance of the repeatability test is often underestimated.
Surprisingly, the approach to production testing often differs from that used in the laboratory. Often, only rudimentary procedures, or none at all, are found on the production floor. This leads to inconsistent quality and OOS results. Only a few companies understand the importance of establishing a robust routine testing regime, and for many of these disciplined users the practice is to reproduce in production what they have implemented in the laboratory. This is not appropriate, however, because probability, severity, and detectability differ significantly in the two settings.
A sound understanding of the instrument’s
functionality and its weighing parameters, combined with the necessary
understanding of the process-specific weighing requirements, eliminates these
misconceptions and helps prevent critical weighing errors that might result in
OOS outcomes in both the laboratory and the production environment.
Implementing good weighing practices in a risk-based life-cycle approach for evaluating, selecting, and routinely testing balances and scales can reduce measurement errors and ensure reliable weighing processes.
The key requirement of effective weighing practices is to ensure that the minimum weight for the required accuracy is lower than the smallest amount of material the user expects to weigh. Furthermore, an appropriate safety factor should be applied to compensate for fluctuations in the minimum weight caused by environmental variability and operator variation.
An understanding of the weighing process
requirements, together with an understanding of the basic principles of balance
and scale properties such as measurement uncertainty and minimum weight,
enables the user to realize an integrated qualification strategy. Furthermore, a frequent source of OOS problems is eliminated both in the laboratory and on the production floor. Appropriate and meaningful routine tests help the user meet
specific weighing requirements and avoid unnecessary and costly testing. Risk
and life cycle management then become an integral part of an overall strategy
to bridge the gap between productivity, process quality, safety, and
compliance.