INTERNAL STANDARD AND EXTERNAL STANDARD
An internal standard in analytical chemistry is a chemical substance that is added in a constant amount to the samples, the blank, and the calibration standards in a chemical analysis. This substance can then be used for calibration by plotting the ratio of the analyte signal to the internal standard signal as a function of the analyte standard concentration. This is done to correct for analyte losses during sample preparation. The internal standard must be a compound that shows behaviour similar to that of the analyte.
This ratio for the samples is then used to obtain their analyte concentrations from a calibration curve. The internal standard used needs to provide a signal that is similar to the analyte signal in most ways but sufficiently different so that the two signals are readily distinguishable by the instrument.
An external standard is like the internal standard (known behaviour), but is not added to the unknown. Rather it is run alone, as a sample, and usually at different concentrations, so you can generate a standard curve. Again, the peak areas are related to the known amounts of external standard run. External standards do not correct for losses that may occur during preparation of the sample, such as extraction, centrifugation, evaporation, etc. Internal standards would correct for this if added to the unknown at the beginning of the sample preparation.
Internal standard methods are used to improve the precision and accuracy of results where volume errors are difficult to predict and control. A systematic approach has been used to compare internal and external standard methods in high performance liquid chromatography (HPLC). The precision was determined at several different injection volumes for HPLC and ultrahigh-pressure liquid chromatography (UHPLC), with two analyte and internal standard combinations. Precision using three methods of adding the internal standard to the analyte before final dilution was examined. The internal standard method outperformed the external standard method in all instances.
In an external standard calibration method, the absolute analyte response is plotted against the analyte concentration to create the calibration curve. An external standard method will not provide acceptable results when considerable volume errors are expected because of sample preparation or injection-to-injection variation. An IS method, in which a carefully chosen compound different from the analyte of interest is added uniformly to every standard and sample, gives improved precision in quantitative chromatographic experiments. Internal standard calibration curves plot the ratio of the analyte response to the internal standard response (the response factor) against the ratio of the analyte amount to the internal standard amount. The resulting calibration curve is applied to the response ratio measured in the samples, and the amount of analyte present is determined.
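As a concrete illustration of this calculation, the following Python sketch fits an internal standard calibration curve and then quantifies a sample; all amounts and peak areas are hypothetical values invented for the example.

# Minimal sketch of internal standard (IS) calibration, assuming peak
# areas have already been integrated. All numbers are illustrative.
import numpy as np

# Calibration standards: known analyte amounts (mg), constant IS amount.
analyte_amount = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # mg
is_amount = 2.0                                         # mg, same in every standard

# Integrated peak areas from the chromatograms (hypothetical values).
analyte_area = np.array([1020.0, 2050.0, 4110.0, 8190.0, 16350.0])
is_area = np.array([4000.0, 4080.0, 3950.0, 4020.0, 3990.0])

# Fit the response ratio against the amount ratio (least-squares line).
x = analyte_amount / is_amount      # amount ratio
y = analyte_area / is_area          # response ratio (response factor)
slope, intercept = np.polyfit(x, y, 1)

# Quantify a sample: the same IS amount was added before analysis.
sample_analyte_area, sample_is_area = 5125.0, 4010.0
sample_ratio = sample_analyte_area / sample_is_area
sample_amount = (sample_ratio - intercept) / slope * is_amount
print(f"analyte in sample: {sample_amount:.2f} mg")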
Several approaches have been used to determine the amount of internal standard that should be used in preparing the standards and the samples, but none has produced definitive results (1–4). For example, Haefelfinger (1) reports that the IS peak height or area must be similar to that of the analyte of interest, but does not present supporting data. Araujo and colleagues (2) show that experimental design strategies can be used to determine the optimal amount of internal standard, while Altria and Fabre (3) report that the IS should be used at the highest possible concentration.
Calculation of the response factor assumes that the detector gives a linear response for both the analyte and the internal standard over the entire range of the experiment. Since this is not always the case, it is essential to understand the behavior of the response factor as the concentration or amount of analyte and internal standard are varied. Knowing the behavior of the response factor allows one to set limits on the useful range of the chosen analyte or internal standard concentration combinations.
The internal standard method is used to improve the precision and accuracy of results where volume errors are difficult to predict and control. Examples of the types of errors that are minimized by the use of an internal standard are those caused by evaporation of solvents, injection errors, and complex sample preparation involving transfers, extractions, and dilutions. To be utilized to its full advantage, an internal standard must be chosen properly, and a known amount must be added carefully to both sample and standard solutions. The resulting internal standard peak should be well resolved from other components in the sample and properly integrated. If all of these conditions are not met, the use of an internal standard may actually increase the variability of the results. One report suggests that whenever detector noise or integration errors are the dominant sources of error, the use of an internal standard will likely make the results of the experiment worse (5).
A paper published by P. Haefelfinger in the Journal of Chromatography in 1981 (1) discussed some limitations of the internal standard technique in HPLC. Using the law of propagation of errors, the paper showed conditions that need to be met for the internal standard procedure to improve results. In addition to the mathematical illustration, Haefelfinger detailed practical examples where either internal or external standard methods were advantageous.
The Journal of the Pharmaceutical Society of Japan published a study in 2003 (6) that found that the internal standard method did not offer an improvement in precision with then-current autosampler technology. Interestingly, the authors also found that if the peak of the internal standard was small, the relative standard deviation (RSD) was actually larger than the RSD for the external standard method (6). The limitation of this study was that only one injection volume (10 µL) was used to establish the conclusions.
In our work, a systematic approach has been used to compare the internal to the external standard method using two analytes and two internal standards. The precision resulting from both the internal and the external standard method was determined at several injection volumes and on two different instruments. Three methods of adding the IS to the analyte before final dilution have been compared. In the first, a solid internal standard was weighed directly into the glassware containing the sample before dilution with solvent. In the second, a solution of known concentration of the IS was prepared and a known volume of this solution was added to the sample prior to dilution. In the third, the IS was added in the same manner as in the second method, but the internal standard solution was weighed, and the weight, not the volume, was used in the IS calculations. We examined the effect of the weights of analyte and internal standard on the precision of the results. Initially, the weight of the analyte was varied against a constant IS concentration, and then the concentration of the internal standard was varied against a constant weight of the analyte.
Standard deviation was chosen to monitor precision. All possible errors are reflected in the standard deviations of the final measurements, including each step in the sample preparation, sample transfer, and sample introduction into the HPLC or UHPLC system, as well as the HPLC or UHPLC analyses themselves. Both external and internal standard calibration methods were used to calculate the percent recoveries for comparison.
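To make the volume-error argument concrete, here is a toy Python simulation (not the study's data; every number is invented): a random injection-volume error scales the analyte and IS peak areas together, so the area ratio used by the internal standard method remains far more precise than the absolute area used by the external standard method.

# Toy simulation: injection-volume error affects both peaks equally,
# so the IS area ratio cancels it; small detector noise remains.
import numpy as np

rng = np.random.default_rng(0)
n = 20
true_analyte_area, true_is_area = 5000.0, 4000.0

volume_error = rng.normal(1.0, 0.05, n)          # ~5% RSD volume error
analyte_area = true_analyte_area * volume_error * rng.normal(1.0, 0.005, n)
is_area = true_is_area * volume_error * rng.normal(1.0, 0.005, n)

rsd = lambda v: 100 * v.std(ddof=1) / v.mean()   # % relative standard deviation
print(f"external standard (absolute area) RSD: {rsd(analyte_area):.1f}%")
print(f"internal standard (area ratio) RSD: {rsd(analyte_area / is_area):.1f}%")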
When creating a new method for quantitation, the choice of the correct internal standard (IS) can improve the accuracy and precision of the method. A proper internal standard should be chemically similar to the compound(s) you are analyzing but must not be naturally present in your sample. It is best to choose compounds that have the same functional groups, boiling points, and activity as your target compounds.
Two examples:
If you are using MS, then it is common to use a deuterated analog of your compound of interest, e.g., amphetamine and amphetamine-d5.
If you are working with a non-MS detector, a deuterated internal standard would coelute with your analyte of interest and cause problems with quantitation. In this case, you would use a compound that is somewhat chemically similar but would not be found in your sample; e.g., if your target is trichlorophenol, you might use a tribromophenol or a dichlorophenol internal standard.
Any time you choose an internal standard, you have to validate that it does correct for small amounts of variation within the analytical process. The best quantitation is achieved with an internal standard for each of your target compounds, but in many cases this is impractical because of the number of analytes in your mix and/or the cost of each internal standard.
Important Fact -
Adding an internal standard (IS) to an assay can be an excellent way to improve method precision and accuracy. An IS can account for signal suppression (or enhancement) that may be caused by the sample matrix. When using an IS, the response of your target compound(s) is compared to the response of the IS. The internal standard is added at a constant amount to all standards, controls, and samples. Since it is constant, the IS can be monitored to determine whether an individual injection is valid. Additionally, the IS can be used to calculate relative retention time and assist with peak identification, as the sketch below illustrates.
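For example, a minimal sketch of the two IS checks just mentioned, injection validity and relative retention time; the values and the ±20% acceptance window are illustrative assumptions, not method requirements.

# Monitoring the IS response to flag invalid injections, and computing
# relative retention time (RRT). All numbers are hypothetical.
is_area, expected_is_area = 3890.0, 4000.0
if abs(is_area - expected_is_area) / expected_is_area > 0.20:  # assumed window
    print("IS response out of range: flag injection as invalid")

rt_analyte, rt_is = 6.42, 4.15   # retention times in minutes
rrt = rt_analyte / rt_is         # RRT helps confirm peak identity
print(f"relative retention time: {rrt:.2f}")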
Choosing a proper internal standard is the key to the success of using one. The IS should be chemically similar to your target compounds, and it should not be present in your sample(s). I would recommend reviewing your target compound's structure and boiling point and choosing an IS that is similar. Ideally, one would have an IS for each target compound in one's method, but this can become impractical with large compound lists. In such cases, choose a handful of compounds to match the early-, mid-, and late-eluting compounds in your assay.
If you are using a mass spec detector, an isotopically labeled analog of your target compound makes an excellent IS. These are compounds that are deuterium labeled (contain deuterium in place of hydrogen) or 13C labeled (contain a 13C atom instead of a 12C atom). An example would be using Atrazine-d5 as an IS for Atrazine or other triazine herbicide compounds. Isotopically labeled analogs chromatograph almost identically to their non-labelled counterparts.
The drawback to isotopically labeled internal standards is that they are not available for every compound. Additionally, they coelute with the non-labelled target compound; hence, they can only be used with mass spectrometry. With a GC detector such as an FID, ECD, or NPD, or an LC detector such as UV or fluorescence, these coelutions will lead to headaches in the lab. In these cases, one needs to look for an internal standard that is similar to the target compound but not found in your sample. Many environmental assays use an internal standard that contains bromine or fluorine when analyzing chlorine-containing compounds. As an example, a compound such as 4-bromofluorobenzene could be a suggested IS for chlorobenzene analysis. In the case of medical cannabis potency testing, my colleague Jack Cochran used compounds somewhat similar in structure and behavior to the cannabinoids that were present.
EFFLORESCENT, HYGROSCOPIC AND DELIQUESCENT SUBSTANCES - DEFINITIONS
Some Important Characteristics of Compounds
EFFLORESCENT SUBSTANCE - DEFINITION
An efflorescent substance is a chemical which has water associated with its molecules and which, when exposed to air, loses this water through evaporation. A common example of this phenomenon is the drying of cement.
HYGROSCOPIC SUBSTANCE - DEFINITION
A hygroscopic substance absorbs water from the air, but not enough to form a solution. Examples of such substances include CaO, NaNO3, NaCl, sucrose and CuO. Certain liquid substances also absorb water from the air and become diluted; these are likewise regarded as hygroscopic, e.g. conc. H2SO4 and conc. HCl. If a hygroscopic substance absorbs so much moisture that an aqueous solution is formed, the substance becomes deliquescent.
DELIQUESCENCE - DEFINITION
Deliquescence is the process by which a substance absorbs moisture from the atmosphere until it dissolves in the absorbed water and forms a solution. Examples are solid NaOH, CaCl2 and CaCl2.6H2O.
EFFLORESCENCE - DEFINITION
Efflorescence is the spontaneous loss of water by a hydrated salt, which occurs when the aqueous vapor pressure of the hydrate is greater than the partial pressure of the water vapor in the air. Examples are Na2SO4.10H2O, Na2CO3.10H2O and FeSO4.7H2O.
PRIMARY AND SECONDARY STANDARDS
Any chemical analysis can be considered valid only if the method of analysis is validated before adoption and results are reported against internationally recognized standard reference materials. Results of such analysis can be relied upon by consumers as well as other laboratories across the world.
Standards are used in a wide range of analytical studies, so it becomes necessary to understand the fine differences between the categories of standards used for chemical analysis. Standards are commonly grouped into two categories: primary standards and secondary standards.
A primary standard reference material is an ultra-high-purity compound used in analyses involving assay, identification or purity tests. It can be a single compound or a mixture containing the analyte of interest in a specified and certified amount.
The impurities, if any, should be identified and controlled for use in assay studies. The material selected as a primary standard should be highly stable, free from water of hydration, and bear traceability to a national or international standards body.
In many cases, however, it may not be possible to procure a reference material from such official sources.
Examples
Sodium carbonate Na2CO3
Sodium borate Na2B4O7
Potassium hydrogen iodate KH(IO3)2
Pure metals and their salts like Zn, Mg, Cu, Mn, Ag, AgNO3, NaCl, KCl, KBr – used in complexometric and precipitation titrations
K2Cr2O7, KBrO3, KIO3, KH(IO3)2, Na2C2O4, As2O3 and pure iron – used in redox titrations
Eligibility criteria for a primary standard
A primary standard should satisfy the following conditions: it should be of the highest available purity, stable in air (neither hygroscopic nor efflorescent), readily soluble under the conditions of use, of reasonably high molecular weight so that weighing errors are minimized, and able to react with the analyte in a known stoichiometry.
When to use a primary standard?
In pharmaceutical QC, the use of reference standards to calibrate the analytical procedure is mandatory when measurements are performed with relative methods such as HPLC in combination with a UV or MS detector. These measurements need to be traceable to a primary standard. This requirement is realized either by using the primary standard directly for the calibration purposes, or by using a secondary standard which is compared to the primary one.
ICH guideline Q7 states:
11.17 Primary reference standards should be obtained as appropriate for the manufacture of APIs. The source of each primary reference standard should be documented. Records should be maintained of each primary reference standard’s storage and use in accordance with the supplier’s recommendations. Primary reference standards obtained from an officially recognized source are normally used without testing if stored under conditions consistent with the supplier’s recommendations.
Officially recognized sources, however, are not specified in Q7. The FDA does mention sources in its 'Guidance for Industry on Analytical Procedures and Methods Validation for Drugs and Biologics' but, interestingly, does not refer to these institutions as an official or definitive list:
Reference standards can often be obtained from the USP and may also be available through the European Pharmacopoeia, Japanese Pharmacopoeia, World Health Organization, or National Institute of Standards and Technology.
Instead, the FDA states that “reference materials from other sources should be characterized by procedures including routine and beyond routine release testing” and that producers “should consider orthogonal methods for reference material characterization”.
For primary RSs, both the ICH guideline and FDA guidance allow other sources than the "officially recognized sources". Independent manufacturers can provide such primary standards, ideally characterized by processes like those outlined in general text 5.12 of the European Pharmacopoeia (Ph. Eur.).
Correct use of primary reference standards: what to keep in mind?
In essence, a primary RS needs to be fit for its intended purpose. A pharmacopoeial RS has been shown to be fit for its compendial purpose, but has not been demonstrated to be fit for any other purpose; this needs to be proven by the user. Consequently, challenges to the use of compendial standards for non-compendial purposes have been reported in regulatory inspections. Other primary standards with fully documented CoAs can be used for most applications, provided they have been characterized appropriately. If a primary RS is used to establish a secondary standard, then the secondary RS can only be used for the same purpose as the primary one.
A secondary standard is a chemical that has been standardized against a primary standard for use in a specific analysis. It is usually prepared in the laboratory, and secondary standards are commonly used to calibrate analytical methods, as in the example below.
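As a classic worked example of standardizing a secondary standard against a primary one, the sketch below computes the concentration of an NaOH solution titrated against potassium hydrogen phthalate (KHP); the masses and volumes are illustrative.

# Standardizing an NaOH solution (secondary standard) against KHP,
# a classic primary standard; KHP reacts 1:1 with NaOH.
mw_khp = 204.22            # g/mol, potassium hydrogen phthalate
mass_khp = 0.5105          # g of primary standard weighed out (illustrative)
v_naoh = 25.04 / 1000      # L of NaOH consumed at the endpoint (illustrative)

molarity_naoh = (mass_khp / mw_khp) / v_naoh   # mol NaOH = mol KHP
print(f"standardized NaOH concentration: {molarity_naoh:.4f} M")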
Standards play a crucial role in analysis and therefore need to be preserved under specified conditions so that their authenticity is maintained over the prescribed storage periods.
The significant contribution of standards to any chemical analysis has been emphasized in this article. It is indeed difficult to imagine a laboratory functioning without making use of standards and reference materials.
Preparation of Working Standard
Select an approved batch; the quality attributes of the selected batch will be reviewed critically, with special emphasis on its assay and related substances. The batch shall be reviewed against pharmaceutical standards, and the selected batch shall have the highest assay/purity and the lowest related substances.
Analyse the selected batch against the reference standard, using the logbook as per Annexure – VI, and follow the stipulated control procedure, which includes description, identification, moisture content and assay; perform the assay and moisture content analysis in triplicate. The QC manager shall allocate the material abbreviation and assign a unique number to each working standard.
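As an illustration of the triplicate assay step, the following sketch computes a mean assay, its RSD, and a potency corrected to the anhydrous basis; the correction formula is a common convention and all numbers are invented, not part of the procedure above.

# Assigning potency to a working standard from triplicate assays
# against the reference standard; all values are illustrative.
import statistics

triplicate_assay = [99.6, 99.8, 99.5]   # % assay vs. reference, as-is basis
moisture = 0.3                          # % water content, e.g. by Karl Fischer

mean_assay = statistics.mean(triplicate_assay)
rsd = 100 * statistics.stdev(triplicate_assay) / mean_assay
potency_anhydrous = mean_assay * 100 / (100 - moisture)

print(f"mean assay: {mean_assay:.2f}% (RSD {rsd:.2f}%)")
print(f"assigned potency on anhydrous basis: {potency_anhydrous:.2f}%")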
BASELINE NOISE AND DRIFT
In HPLC we deal with a time-dependent process. The elution of a component from the column is represented in the detector by the deflection of the recorder pen from the baseline. It can be a problem to distinguish between an actual component and an artifact caused by pressure fluctuations, bubbles, compositional fluctuations, etc. If the peaks are fairly large, one has no problem distinguishing them; however, the smaller the peaks, the more important it is that the baseline be smooth, free of noise and drift.
Baseline noise is the short-time variation of the baseline from a straight line, caused by electric signal fluctuations, lamp instability, temperature fluctuations and other factors. Noise usually has a much higher frequency than actual chromatographic peaks. Noise is normally measured "peak-to-peak", i.e., the distance from the top of one such small peak to the bottom of the next. Sometimes noise is averaged over a specified period of time. Noise is the factor which limits detector sensitivity, and in trace analysis the operator must be able to distinguish between noise spikes and component peaks. A practical limit for this is a signal-to-noise ratio of 3, but only for qualitative purposes; a practical quantitative detection limit is better chosen as a signal-to-noise ratio of 10. This ensures correct quantification of trace amounts with less than 2% variance. The figure below illustrates this, indicating the noise level of a baseline (measured at the highest detector sensitivity) and the smallest peak which can be unequivocally detected.
Figure: definition of noise, drift, and the smallest detectable peak.
Another parameter related to detector signal fluctuation is drift. While noise is a short-time characteristic of a detector, an additional requirement is that the baseline should deviate as little as possible from a horizontal line. Drift is usually measured over a specified time, e.g., half an hour or one hour, and is usually associated with detector warm-up in the first hour after power-on. The figure also illustrates the meaning of drift.
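The sketch below shows how peak-to-peak noise, signal-to-noise ratio, and drift might be estimated from a digitized detector trace; the trace is synthetic, and the thresholds follow the 3x/10x rule described above.

# Estimating peak-to-peak noise, S/N, and drift from a synthetic trace.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 30, 1800)                       # 30 min of data
baseline = 0.02 * t                                # slow drift, 0.02 mAU/min
noise = rng.normal(0, 0.05, t.size)                # short-term fluctuations
peak = 1.5 * np.exp(-0.5 * ((t - 15) / 0.1)**2)    # one small peak at 15 min
trace = baseline + noise + peak

# Peak-to-peak noise measured on a peak-free baseline segment.
segment = trace[(t > 2) & (t < 7)]
noise_pp = segment.max() - segment.min()

# Signal = peak apex minus a rough baseline estimate; S/N decides detectability.
signal = trace[(t > 14.5) & (t < 15.5)].max() - np.median(segment)
print(f"S/N = {signal / noise_pp:.1f} (>=3 detectable, >=10 quantifiable)")

# Drift = baseline slope over a specified period (here the first 10 min).
mask = t < 10
slope, _ = np.polyfit(t[mask], trace[mask], 1)
print(f"drift: {slope:.3f} mAU/min")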