Buffer and Buffer Solution: How Buffers Work in the Mobile Phase (Complete Guidelines), and Why pH Is Important

What Are Buffers and Buffer Solutions?

Buffer: A solution that resists a change in pH when small amounts of acid or alkali are added to it, or when it is diluted with water.

Buffer solution: An aqueous solution consisting of a mixture of a weak acid and its conjugate base, or a weak base and its conjugate acid [e.g., acetic acid (CH3COOH) and the acetate ion (CH3COO−) from sodium acetate (CH3COONa)].

In a buffer solution there is an equilibrium between a weak acid, HA, and its conjugate base, A− (or vice versa, as stated above).

HA + H2O ⇌ H3O+ + A−

 

When an acid (H+ or H3O+) is added to the solution, the equilibrium moves to the left, as there are hydrogen ions (H+ or H3O+) on the right-hand side of the equilibrium expression.

When a base [hydroxide ions (OH−)] is added to the solution, the equilibrium moves to the right, as hydrogen ions (H+ or H3O+) are removed in the reaction

H+ + OH− → H2O

 

Thus, in both cases, some of the added reagent is consumed in shifting the equilibrium in accordance with Le Chatelier’s principle, and the pH changes by less than it would if the solution were not buffered.
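To put rough numbers on this buffering action, here is a small illustrative Python sketch (the 0.05 M acetate buffer, the pKa of 4.76 and the amount of added acid are assumed example values, not taken from any particular method):

    import math

    pKa = 4.76        # assumed pKa of acetic acid
    acid = 0.050      # mol/L CH3COOH in the buffer (assumed)
    base = 0.050      # mol/L CH3COO- in the buffer (assumed)
    added = 0.005     # mol/L of strong acid (e.g., HCl) added

    # Buffered case: the added H+ converts some acetate back to acetic acid,
    # and the new pH follows from pH = pKa + log10([A-]/[HA]).
    pH_buffer_before = pKa + math.log10(base / acid)
    pH_buffer_after = pKa + math.log10((base - added) / (acid + added))

    # Unbuffered case: the same amount of strong acid added to pure water.
    pH_water_before = 7.00
    pH_water_after = -math.log10(added)

    print(f"Buffer: {pH_buffer_before:.2f} -> {pH_buffer_after:.2f}")  # ~4.76 -> ~4.67
    print(f"Water : {pH_water_before:.2f} -> {pH_water_after:.2f}")    # 7.00 -> 2.30

Under these assumptions the buffered solution drifts by less than 0.1 pH units, while unbuffered water drops by almost five units for the same addition.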


Why Do We Need a Buffer Solution?

The mobile phase pH can change on standing, for example through ingress of CO2 from the atmosphere, and a buffer can help to combat this effect to a certain extent. Similarly, volatile reagents, such as TFA, may also selectively evaporate, thus changing the eluent pH. There is, however, no substitute for regularly replacing the buffer on our HPLC system!

Perhaps the largest potential for pH change is on mixing of the injection slug with the mobile phase within the tubing and components of the autosampler, or at the head of the HPLC column, where more extensive mixing of the sample diluent and eluent occurs. If the sample diluent pH differs greatly from the eluent, the ‘local’ pH will change as the two mix — leading to retention time variability and peak distortion as not all analyte molecules experience the same solution pH and, therefore, may exhibit different partitioning behaviour. 


 

Why is pH Important?
The pH of the mobile phase affects the retention time of ionizable analytes.
Altering the mobile phase pH may alter the extent to which the analytes are ionized, affecting their relative hydrophobicity, the extent to which they interact with the stationary phase and hence their retention time. Ultimately, changes in pH will lead to changes in retention time for ionizable compounds, which may in turn lead to selectivity changes within the chromatogram.

Choosing the Right Buffer/Buffer Capacity
Many factors influence the choice of buffer, but the two major considerations tend to be:

  • The required pH of the mobile phase (which is dictated by the analyte properties).
  • Whether the buffer needs to be volatile (which usually depends on whether mass spectrometric detection is being used).

The mobile phase pH value will depend upon the analyte's pKa value (the negative logarithm of its acid dissociation constant) and may be derived by experimentation, computer simulation, or both. For a weak acid in solution, we can write the equilibrium,

HA ⇌ A− + H+

to represent the dissociation of the acid.

The dissociation constant (Ka) for this equilibrium can be written as:

Ka = [H+][A−] / [HA]

Because of the wide range of possible Ka values (spanning several orders of magnitude), it is more convenient to work with the logarithmic form, the pKa, which is defined as:

pKa = −log10 Ka

We can also express the pKa in terms of pH using the Henderson-Hasselbalch equation:

pH = pKa + log10([A−]/[HA])

It should be apparent that pH is equal to pKa when the associated and dissociated forms of the acid are present in equal concentrations (i.e., when [A−] = [HA]). This represents the value at which the acid (or base) is 50% ionized [dissociated if acidic (A−), associated if basic (BH+)].
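As a quick numerical illustration of this point, the minimal Python sketch below (the acetic acid pKa of 4.76 is an assumed example value) computes the fraction of a weak acid present in the dissociated form at a few pH values around its pKa:

    def fraction_ionized(pH, pKa):
        """Fraction of a weak acid present as A- at a given pH,
        from pH = pKa + log10([A-]/[HA])."""
        ratio = 10 ** (pH - pKa)   # [A-]/[HA]
        return ratio / (1 + ratio)

    pKa = 4.76                     # assumed example value (acetic acid)
    for delta in (-2, -1, 0, 1, 2):
        pct = 100 * fraction_ionized(pKa + delta, pKa)
        print(f"pH = pKa {delta:+d}: {pct:.0f}% ionized")
    # Output: 1%, 9%, 50%, 91%, 99% - i.e., exactly 50% ionized at pH = pKa.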

A buffer is chosen so that the buffer pKa is as close to the required eluent pH as possible, and certainly within one pH unit of this value (see Table 1). When this condition is satisfied, the buffering capacity of the solution is at its maximum, and a more robust method will result using a lower concentration of the buffer. When the eluent pH is ±1 pH unit from the buffer pKa, the buffering capacity has already fallen to about 33% of the capacity obtained when the eluent pH equals the buffer pKa.
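The 33% figure can be checked against the standard simplified buffer-capacity expression, β ≈ 2.303·C·Ka[H+]/(Ka + [H+])², neglecting the contribution of water itself; a short sketch:

    def relative_buffer_capacity(delta_pH):
        """Buffer capacity relative to its maximum (at pH = pKa), using
        beta ~ Ka[H+]/(Ka + [H+])^2 and neglecting the water terms."""
        r = 10 ** (-delta_pH)      # [H+]/Ka when the pH is delta_pH above the pKa
        return (r / (1 + r) ** 2) / 0.25

    for d in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(f"pH = pKa + {d}: {100 * relative_buffer_capacity(d):.0f}% of maximum capacity")
    # pH = pKa + 1.0 gives ~33%, consistent with the figure quoted above.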

If the buffer is incorrectly chosen, it will need to be added at much higher concentrations to be effective, which will lead to method robustness issues and changes in the selectivity of the separation that are more difficult to predict and manipulate.

So, from Table 1, if an eluent pH of 4.2 is required to achieve a particular separation, the ideal buffer system to choose would be ammonium acetate (pKa 4.76) adjusted to pH 4.2 using acetic acid, or ammonium formate (pKa 3.74) adjusted to pH 4.2 using formic acid.
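As an illustrative calculation (not part of the original table): for the ammonium acetate option, the Henderson-Hasselbalch equation gives 4.2 = 4.76 + log10([acetate]/[acetic acid]), so [acetate]/[acetic acid] = 10^(4.2 − 4.76) ≈ 0.28, i.e., roughly one part acetate to 3.6 parts acetic acid at pH 4.2.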

It should be noted that although the phosphate species have three pKa values, NONE of them is suitable for use within the range pH 3 to 6!



Why Should Inorganic Buffers Generally Be Avoided to Prolong the Life of an HPLC Column?


Reasons to avoid them:

One of the main reasons for avoiding inorganic buffers is that they degrade the stationary phase more quickly than organic buffers do. Plenty of studies by pioneers of HPLC, such as Kirkland and others, have shown this buffer effect. Organic acids are milder and less corrosive than inorganic salts, yet they do the same buffering job. You also have far fewer precipitation problems with organics; phosphate, for example, can suddenly precipitate out in acetonitrile (ACN), an effect that permanently damages the stationary phase. Last but not least, organic buffers are more MS-friendly.

WHY USE PHOSPHORIC ACID IN THE MOBILE PHASE?

WHICH GIVES BETTER RESOLUTION: FORMIC ACID OR PHOSPHORIC ACID?

ANSWER:
Formic acid and phosphoric acid suit two different contexts.
The density of phosphoric acid is 1.88, whereas that of formic acid is 1.23.
The pKa value of formic acid is 3.75, whereas phosphoric acid has three pKa values: 2.12, 7.2, and 12.3. With phosphoric acid you can therefore buffer at several different pH values when adjusting towards the higher side, which is not possible with formic acid.
As noted above, formic acid is generally the best choice for MS/MS work.
Formic acid and phosphoric acid behave differently in aqueous solution, on the stationary phase, and with your compound. If your compound behaves the same in both acids, it is better to choose formic acid because of its volatility, provided you have no peak shape issues.
In my experience, some compounds give a much better peak shape in phosphoric acid than in formic acid because of its greater buffer strength.
If you have to run HPLC and LC-MS/MS simultaneously, formic acid is the best choice.

What causes peak tailing in reversed-phase HPLC and what can I do about it?

 

Some type of secondary interaction between an analyte and the column causes peak tailing. This interaction is in addition to the partitioning behavior seen for reversed-phase analyses. Peak tailing is most commonly seen with basic compounds and is usually a result of interactions between the residual silanols and positively charged basic compounds. The most common of these interactions is an ion exchange interaction between a positively charged basic compound and a negatively charged column surface silanol. Silanols on the surface of silica-based columns will have a negative charge when the pH of the mobile phase is above 4.5 – 5.0. Therefore the quickest way to reduce peak tailing is to operate with a buffered mobile phase at a pH below 4. Choosing newer columns with high purity, fully hydroxylated silica will also minimize peak tailing because silanol activity and ionization is reduced.

 






      Some ZORBAX columns that use this type of silica are the StableBond columns, the Eclipse XDB columns, the Bonus-RP and the Extend-C18 column. Each of these columns can reduce peak tailing, but they are all a little different. The StableBond (SB) columns are ideal at low pH, so they are often first choices to reduce peak tailing when using a low pH mobile phase. The Eclipse XDB columns are the first choice to reduce peak tailing if your mobile phase is pH 5 – 9. This column is double endcapped, so it minimizes peak tailing by covering as many residual silanols on the column surface as possible and eliminating possible secondary interactions with silanols. The Bonus-RP column is also a good choice to reduce peak tailing in this intermediate pH region. The Bonus-RP column has a bonded phase with an embedded polar group. This group reduces interactions between basic compounds and residual silanols, thereby improving the peak shape of basic compounds. This column can be used from pH 2 – 8. The Extend-C18 is designed as a high pH column and can be used up to pH 11.5. At high pH many basic compounds are no longer charged and interactions with silanols are minimized, reducing peak tailing.

      Careful choice of a mobile phase can also reduce peak tailing. Buffered mobile phases (25 – 50 mM) will reduce peak tailing and low pH mobile phases are preferred (pH 2 –3). This should also result in more reproducible chromatography. Mobile phase additives such as triethylamine (TEA) can be added to reduce peak tailing of basic compounds, if needed. TEA acts as a competing base and ties up silanol sites, eliminating interactions between your analyte and residual silanols. But this type of additive is rarely needed at low pH and is only occasionally necessary at intermediate pH.

       If you have peak tailing with an acidic compound, the same process applies. Reduce the mobile phase pH to try to protonate the acids, then use a buffered mobile phase and try increasing the ionic strength of the mobile phase. Finally, a competing organic acid can be added to the mobile phase; we have achieved excellent results with 0.1% trifluoroacetic acid (TFA), which also has a very low UV cutoff. Following these suggestions should reduce peak tailing of acids and bases.

  

Choosing Column Particle Size and Pore Size

    Most columns now use spherical particles because columns packed with spherical particles will have higher efficiencies. Therefore, start by choosing a column with spherical particles. The most common particle size choice for analytical separations is 5 um because it is easy to use, but more often today the better choice is 3.5 um particles. These smaller particles generate higher efficiencies in shorter column lengths and make it possible to do separations with shorter analysis times. If analysis time is important to you, consider choosing a ZORBAX Rapid Resolution (3.5 um) column to minimize analysis time. The 4.6 x 150 mm, 3.5 um Rapid Resolution column will have the same efficiency as a 4.6 x 250 mm, 5 um column and reduce analysis time by 40%. Other shorter Rapid Resolution columns (75 mm, 50 mm, 30 mm, and 15 mm) are available to further reduce analysis time.
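As a rough back-of-the-envelope check (a sketch only, using the common rule of thumb that the minimum plate height is roughly twice the particle diameter; the numbers are illustrative, not vendor specifications):

    def plates(length_mm, particle_um, h_factor=2.0):
        """Approximate plate count, assuming plate height H ~ h_factor * dp."""
        return (length_mm * 1000.0) / (h_factor * particle_um)

    n_short = plates(150, 3.5)       # ~21,000 plates for 150 mm, 3.5 um
    n_long = plates(250, 5.0)        # ~25,000 plates for 250 mm, 5 um
    time_saving = 1 - 150.0 / 250.0  # run time scales roughly with length at fixed flow
    print(f"{n_short:.0f} vs {n_long:.0f} plates, ~{100 * time_saving:.0f}% shorter run")

Under these assumptions the plate counts are broadly comparable, while the shorter column cuts the run time by about 40%, in line with the figures quoted above.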

      Column pore size is selected based on the molecular weights of your analytes. A pore size of less than 100 Å can be used for small molecules with molecular weights below about 4000. Larger molecules, such as proteins and peptides, should be analyzed on 300 Å pore size columns. In addition, some smaller molecules with large, multi-ring, rigid structures are better analyzed on 300 Å pore size columns. Choosing the right pore size is important because most of the bonded phase resides in the pores of the particles; optimum retention and peak width are therefore achieved only if the molecules can diffuse in and out of the pores rapidly and easily.

 








 

 

INTERNAL STANDARD AND EXTERNAL STANDARD


DESCRIPTION

An internal standard in analytical chemistry is a chemical substance that is added in a constant amount to samples, the blank, and calibration standards in a chemical analysis. This substance can be used for calibration by plotting the ratio of the analyte signal to the internal standard signal as a function of the analyte standard concentration. This is done to correct for analyte losses during sample preparation. The internal standard must be a compound that shows behaviour similar to that of the analyte.

This ratio for the samples is then used to obtain their analyte concentrations from a calibration curve. The internal standard used needs to provide a signal that is similar to the analyte signal in most ways but sufficiently different so that the two signals are readily distinguishable by the instrument.

An external standard is like the internal standard (known behaviour), but is not added to the unknown. Rather it is run alone, as a sample, and usually at different concentrations, so you can generate a standard curve. Again, the peak areas are related to the known amounts of external standard run. External standards do not correct for losses that may occur during preparation of the sample, such as extraction, centrifugation, evaporation, etc. Internal standards would correct for this if added to the unknown at the beginning of the sample preparation.

Internal standard methods are used to improve the precision and accuracy of results where volume errors are difficult to predict and control. A systematic approach has been used to compare internal and external standard methods in high performance liquid chromatography (HPLC). The precision was determined at several different injection volumes for HPLC and ultrahigh-pressure liquid chromatography (UHPLC), with two analyte and internal standard combinations. Precision using three methods of adding the internal standard to the analyte before final dilution was examined. The internal standard method outperformed external standard methods in all instances.

In an external standard calibration method, the absolute analyte response is plotted against the analyte concentration to create the calibration curve. An external standard method will not provide acceptable results when considerable volume errors are expected because of sample preparation or injection-to-injection variation. An IS method, which is a method where a carefully chosen compound different from the analyte of interest is added uniformly to every standard and sample, gives improved precision results in quantitative chromatographic experiments. The internal standard calibration curves plot the ratio of the analyte response to the internal standard response (response factor) against the ratio of the analyte amount to the internal standard amount. The resultant calibration curve is applied to the ratio of the response of the analyte to the response of the internal standard in the samples and the amount of analyte present is determined.
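A minimal sketch of both calibration calculations, using hypothetical peak areas and amounts (the variable names and numbers below are ours, chosen purely for illustration):

    import numpy as np

    # Hypothetical calibration standards: analyte amounts with analyte and IS peak areas.
    amount = np.array([1.0, 2.0, 5.0, 10.0])              # e.g., ug/mL of analyte
    area_analyte = np.array([98.0, 205.0, 498.0, 1010.0])
    area_is = np.array([500.0, 510.0, 495.0, 505.0])      # constant IS amount in every standard
    amount_is = 5.0                                        # same IS amount in standards and samples

    # External standard: absolute analyte response vs. analyte amount.
    slope_ext, icpt_ext = np.polyfit(amount, area_analyte, 1)

    # Internal standard: response ratio vs. amount ratio.
    slope_int, icpt_int = np.polyfit(amount / amount_is, area_analyte / area_is, 1)

    # Quantify an unknown sample from its analyte and IS peak areas.
    sample_area, sample_is_area = 350.0, 490.0
    conc_ext = (sample_area - icpt_ext) / slope_ext
    conc_int = amount_is * ((sample_area / sample_is_area) - icpt_int) / slope_int
    print(f"External standard result: {conc_ext:.2f}, internal standard result: {conc_int:.2f}")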

Several approaches have been used to determine the amount of internal standard that should be used in preparing the standards and the samples, but none have illustrated definitive results (1–4). For example, Haefelfinger (1) reports that the IS peak height or area must be similar to that of the analyte of interest, but does not present supporting data. Araujo and colleagues (2) show that experimental design strategies can be used to determine the optimal amount of internal standard used while Altria and Fabre (3) show that the IS should be used in the highest possible concentration.

Calculation of the response factor assumes that the detector gives a linear response for both the analyte and the internal standard over the entire range of the experiment. Since this is not always the case, it is essential to understand the behavior of the response factor as the concentration or amount of analyte and internal standard are varied. Knowing the behavior of the response factor allows one to set limits on the useful range of the chosen analyte or internal standard concentration combinations.

The internal standard method is used to improve the precision and accuracy of results where volume errors are difficult to predict and control. Examples of types of errors that are minimized by the use of an internal standard are those caused by evaporation of solvents, injection errors, and complex sample preparation involving transfers, extractions, and dilutions. An internal standard must be chosen properly and a known amount added carefully to both sample and standard solutions to minimize error and be utilized to its full advantage. The resulting internal standard peak should be well resolved from other components in the sample and properly integrated. If all of these conditions are not met, the use of an internal standard may actually increase the variability of the results. One report suggests that whenever detector noise or integration errors are the dominant sources of error, the use of an internal standard will likely make the results of the experiment worse (5).

A paper published by P. Haefelfinger in the Journal of Chromatography in 1981 (1) discussed some limitations of the internal standard technique in HPLC. Using the law of propagation of errors, the paper showed conditions that need to be met for the internal standard procedure to improve results. In addition to the mathematical illustration, Haefelfinger detailed practical examples where either internal or external standard methods were advantageous.
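In outline (a simplified first-order sketch rather than Haefelfinger's exact treatment): for the response ratio R = A/I, propagation of errors gives RSD(R)² ≈ RSD(A)² + RSD(I)² − 2·ρ·RSD(A)·RSD(I), where ρ is the correlation between the analyte and internal standard responses. When the dominant errors (injection volume, dilution, evaporation) affect both peaks proportionally, ρ approaches 1 and taking the ratio largely cancels them; when the errors are uncorrelated, as with detector noise or integration error on a small IS peak, ρ is near 0 and the two variances simply add, which is why an internal standard can then make precision worse.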

The Journal of the Pharmaceutical Society of Japan published a study in 2003 (6) that found that the internal standard method did not offer an improvement in precision with the then current autosampler technology. Interestingly, they also found that if the peak of the internal standard was small, the relative standard deviation (RSD) was actually larger than the RSD for the external standard method (6). The limitation of this study was that only one injection volume (10 µL) was used to establish the conclusions.

In our work, a systematic approach has been used to compare the internal to the external standard method using two analytes and two internal standards. The precision resulting from both an internal and external standard method were determined at several injection volumes and on two different instruments. Three methods of adding the IS to the analyte before final dilution have been compared. In the first, a solid internal standard was weighed directly into the glassware containing the sample before dilution with solvent. In the second, a solution of a known concentration of the IS was prepared and a known volume of this solution was added to the sample prior to dilution. In the third, the IS was added in the same manner as the second method, but the internal standard solution was weighed and the weight, not the volume, was used in the IS calculations. We examined the effect of weight of analyte and internal standard on the precision of the results. Initially, the weights of the analyte were varied versus a constant IS concentration, and then the concentration of the internal standard was varied versus a constant weight of the analyte.

 

Standard deviation was chosen to monitor precision. All possible errors are reflected in the standard deviations of the final measurements, including each step in the sample preparation, sample transfer, and sample introduction into the HPLC or UHPLC system, as well as the HPLC or UHPLC analyses themselves. Both external and internal standard calibration methods were used to calculate the percent recoveries for comparison.

Choosing an Internal Standard

When creating a new method for quantitation, the choice of the correct internal standard (IS) can improve the accuracy and precision of the method. The proper internal standard should be chemically similar to the compound(s) that you are analyzing, but is not expected to be naturally present in your sample. It is best to choose compounds that have the same functional groups, boiling points, and activity as your target compounds.


Two examples

If you are using MS, then it is common to use a deuterated analog of your compound of interest, e.g., amphetamine and amphetamine-d5.


If you are working with a non-MS detector, a deuterated internal standard would coelute with your analyte of interest and cause problems with quantitation. In this case, you would try to use a compound that is somewhat chemically similar but would not be found in your sample, e.g., if your target is trichlorophenol, you might use tribromophenol or a dichlorophenol as the internal standard.


Any time you choose an internal standard, you have to validate that it does correct for small amounts of variation within the analytical process. The best quantitation is achieved with an internal standard for each of your target compounds, but in many cases this is impractical because of the number of analytes in your mix and/or the cost of each internal standard.


Important Fact -


Adding an internal standard (IS) to an assay can often be an excellent way to improve method precision and accuracy. An IS can account for signal suppression (or enhancement) that may be caused by the sample matrix. When using an IS, the response of your target compound(s) is compared to the response of the IS. The internal standard is added in a constant amount to all standards, controls, and samples. Because it is constant, the IS can be monitored to determine whether an individual injection is valid. Additionally, the IS can be used to calculate relative retention times and assist with peak identification.

Choosing a proper internal standard is the key to using one successfully. The IS should be chemically similar to your target compounds, and it should not be present in your sample(s). I would recommend reviewing your target compound's structure and boiling point and choosing an IS that is similar. Ideally, one would have an IS for each target compound in one's method, but this can become impractical with large compound lists. In these cases, choose a handful of compounds to match the early-eluting, mid-eluting, and late-eluting compounds in your assay.

If you are using a mass spec detector, an isotopically labeled analog of your target compound makes an excellent IS. These are compounds that are deuterium labeled (contain deuterium in place of hydrogen) or 13C labeled (contain a 13C atom instead of a 12C atom). An example would be using Atrazine-d5 as an IS for Atrazine or other triazine herbicide compounds. Isotopically labeled analogs chromatograph extremely similarly to their non-labeled counterparts.

The drawback to isotopically labeled internal standards is that they are not available for every compound. Additionally, they coelute with the non-labeled target compound; hence, they can only be used with mass spectrometry. With a GC detector, such as an FID, ECD, or NPD, or an LC detector, such as UV or fluorescence, these coelutions will lead to headaches in the lab. In these cases, one needs to look for an internal standard that is similar to the target compound but not found in your sample. Many environmental assays use an internal standard that contains bromine or fluorine when analyzing chlorine-containing compounds. As an example, a compound such as 4-bromofluorobenzene could be a suggested IS for chlorobenzene analysis. In the case of medical cannabis potency testing, my colleague Jack Cochran used compounds somewhat similar in structure and behavior to the cannabinoids that were present.


