Accuracy and Precision

Before we discuss the statistics of analytical error, we will set out definitions for accuracy and precision. Accuracy is the absolute description of the 'truth' of the analysis – how close the reported analytical value of the concentration of a given element is to the 'real' concentration in the sample. In a target-shooting analogy, if you are aiming at the bull's eye, your accuracy is measured by how close your shot comes to the bull's eye – the closer your shot is to the bull's eye, the better your accuracy. In contrast, precision is defined as the relative description of how reproducible a result is, and it is directly tied to counting statistics. In the target-shooting analogy, precision is how tight the grouping of several shots is – the closer the shots are to one another, the higher the precision. Ideally, we would like our analytical data to be as accurate and as precise as possible, i.e., as close to the 'real' value as possible while simultaneously as reproducible as possible (with as little difference as possible between repeated measurements of the same analytical point). Commonly, the precision will be expressed as the standard deviation predicted by counting statistics when our sample size (n) is 1, and as the standard deviation (or variance) of the repeated measurements when the sample size is greater than 1 (most commonly where n ≄ 3). These terms will be defined algebraically later in this document. Before we discuss the statistics in detail, it is best to have at least a qualitative understanding of the common factors that affect the intensities of our measured X-rays. It is vitally important to keep those X-ray intensities, and the phenomena that affect them, in mind, rather than getting lost in the algebraic manipulations. Once we have established an understanding of the factors that affect the X-ray intensities, we can assign a prioritized, statistical magnitude (or importance) to each factor's effect on the X-ray intensities.
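As a concrete, minimal illustration of the two ways of expressing precision described above (a sketch only, with made-up count totals rather than real data), the following Python snippet compares the single-measurement precision predicted by Poisson counting statistics (σ ≈ √N) with the standard deviation observed across repeated measurements of the same spot:

```python
import math
import statistics

# Hypothetical raw X-ray count totals from five repeated measurements
# of the same analytical point (made-up numbers for illustration only).
counts = [10250, 10180, 10310, 10205, 10290]

# Predicted precision for a single measurement from Poisson counting
# statistics: sigma = sqrt(N), relative precision = 1/sqrt(N).
single = counts[0]
sigma_pred = math.sqrt(single)
rel_pred = 100.0 * sigma_pred / single  # percent

# Observed precision from the repeated measurements (n > 1):
# sample standard deviation and relative standard deviation.
mean_counts = statistics.mean(counts)
sigma_obs = statistics.stdev(counts)
rel_obs = 100.0 * sigma_obs / mean_counts

print(f"Predicted (Poisson) sigma for one measurement: {sigma_pred:.1f} counts ({rel_pred:.2f} %)")
print(f"Observed sigma over {len(counts)} measurements: {sigma_obs:.1f} counts ({rel_obs:.2f} %)")
```

If the observed relative standard deviation is much larger than the Poisson prediction, something other than counting statistics (e.g., sample heterogeneity or beam and spectrometer drift) is degrading the precision.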

Factors to consider in achieving good accuracy and precision

Achieving the best possible accuracy and precision doesn't happen by accident; rather, it comes from understanding the physical phenomena involved, modelling those phenomena mathematically and designing the experimental analysis intelligently. The following list describes many of the primary factors that need to be considered when optimizing accuracy and precision – when these factors are not well controlled, errors in counting X-rays may result. (N.B. Much of the following information has been compiled from various sources that are referenced at the end of this document, and the remaining sections are based on public-domain PowerPoint presentations constructed by, and rightfully attributed to, faculty members at the University of Wisconsin-Madison, Dept. of Earth Sciences.) The factors are as follows:

  • Standards – The choice of standards is critical in X-ray analysis. Ideally, a standard is of the same structural and compositional type as the unknown, is compositionally homogeneous, and is well characterized, so as to minimize the magnitude of matrix effects. One of the problems with natural standards is that they are not always compositionally homogeneous, especially with respect to volatile species such as F, Cl, OH and H2O. In some cases, such as with apatite, the crystallographic direction along which the electron beam ionizes the target atoms influences the resultant net X-ray intensities. If the direction along which standard data are collected differs from that along which an unknown is analysed, an inaccurate concentration may result.

  • Standard and sample physical condition – The physical condition of standards and samples influences the homogeneity of X-ray production, particularly where the quality of the surface polish (or roughness) is concerned. Excessive roughness or undue surface relief will cause anomalous scattering of X-rays, potentially reducing the accuracy of the analysis. Also, the way a sample is set in the sample holder may leave it tilted – any tilt will cause the sample surface to point preferentially towards some WDS spectrometers whilst pointing away from others, thus altering the measured X-ray intensities. This type of error may appear to be systematic, as each element is normally assigned to be measured on one spectrometer only under most common analytical conditions. The thickness of the carbon coat is commonly uniform across small surfaces and is likely to be uniform across a given sample, but it may not be the same on a standard as on an unknown sample. A difference in coating thickness between the standard and the unknown may result in differences in absorption of the emergent X-rays from the respective surfaces, thus altering the K-ratio and the accuracy of the analyses (see the attenuation sketch after this list). Such differences in thickness accentuate differential X-ray absorption for lighter elements, and the problem is further compounded where low accelerating voltages are used. Also, some samples are more sensitive to beam damage than others, and this can also result in anomalous absorption or emission of X-rays.

  • Instrumental stability – The stability of a microprobe generally refers to beam stability, and more specifically to the constancy of the beam current and the positional stability of the beam (i.e., the beam hits the same relative spot with respect to the stored X-, Y- and Z-coordinates of a sample). The beam stability is affected by the stability of the power source, humidity effects on electrical components and the age of the filament. The shape of the filament and its electron-emission profile change as the filament ages, both of which affect the shape and relative location of the excitation volume in the target, which in turn affects X-ray production. Another aspect of instrument stability is spectrometer reproducibility, which can have a huge effect on both the accuracy and the precision of the measurements taken. The thermal stability of the laboratory environment is crucial, as temperature fluctuations in the room will cause metallic parts such as spectrometer drive belts to expand and contract over several hours, thus causing the diffractors and detectors to be off-peak when measuring peak and background X-ray intensities (i.e., peak drift). Such instabilities will have detrimental effects on the accuracy and precision of the measured data. Even something as simple as turning off the lights in the lab overnight can induce enough of a temperature change to affect X-ray peak positions and intensities.

  • Beam effects on sample – In general, the electron beam does more to the sample than just generating X-rays. The beam can heat the sample locally and drive off volatile species. The beam can also change the oxidation state of sensitive elements, as well as cause light alkali elements such as Na to migrate away from the centre of the beam, both of which reduce the X-ray intensities generated by the sample over the time the intensities are being collected. Because the beam is made of electrons, certain chemical moieties can capture some of these electrons and spontaneously become reduced with respect to oxidation state. One example of such beam-induced damage is the breakdown of two adjacent carbonate (CO32−) anions in calcite into 2 CO2(g) + 1 O2(g), resulting in physical damage to the calcite structure.

  • Spectral line and background overlaps – Specific X-ray spectral lines are unique in wavelength for every element; e.g., all first-order Kα X-ray wavelengths are unique for each element in the periodic table. However, that is not to say that all X-rays produced by all elements in a sample are unique and don't overlap to some degree. Elements in our sample that are irradiated by a sufficiently energetic electron beam will produce all possible X-rays for those elements whose absorption-edge energies lie below the beam energy set by the accelerating voltage. Some of these X-rays, especially higher-order ones, will almost always overlap the wavelengths of the X-rays that we are trying to measure. We refer to these interfering X-rays as 'peak overlaps' or 'spectral interferences', and they inevitably produce false counts for the X-ray lines we are measuring. For example, in a stainless steel sample that contains both Ti and V, basic quantum X-ray theory tells us that the Ti Kβ X-ray overlaps the V Kα X-ray to a significant degree (see the Bragg-angle sketch after this list). Even though we aren't actively measuring the Ti Kβ X-ray as part of our analysis, because it overlaps the V Kα position in our spectrometer it will be counted as V Kα intensity whenever Ti is present, even if V is absent from that part of the steel alloy, resulting in the false reporting of V in the analysis. Commonly, 'unwanted' X-rays of both low and high order, usually minor peaks, will overlap the positions at which we are measuring background counts for the X-ray of interest. These are called 'background overlaps', and they can result in sloping backgrounds by making one side of the background 'curve' higher than the other. A sloping background commonly results in a reduction of the net (background-subtracted) counts for the X-ray we are trying to quantify, which then affects the accuracy of our data.

  • Counting statistics and analytical strategy – Given that the 'resolution' or 'sensitivity' of an analysis is proportional to 1/√n, where n is the number of counts (or, for a fixed count rate, the counting time), we have to choose adequate counting times, especially for the backgrounds, to achieve the desired level of statistical significance. Commonly, the statistical significance is quoted as the number of standard deviations (σ) by which an X-ray intensity lies above the background; normally, for the X-ray intensity to be called 'real' (or statistically significant), it must lie more than 3σ above the background noise (see the sketch after this list). So we must choose our counting times carefully, in order that we may have statistical confidence in the collected data. Such decisions become more important when we analyse trace elements present in quantities below 1 wt.%, where the level of precision can approach the accuracy of the analysis; analyses of trace elements can therefore require counting times an order of magnitude greater than those used for major elements.

  • Matrix correction factors – Although we have briefly discussed the concept of matrix correction factors, i.e., ZAF factors, we have to mention the concept of mass absorption coefficients, or MACs for short. MACs represent the rate at which a given element absorbs radiation per unit mass, and are typically expressed in units of cm2/g. MACs vary with the absorbing element and with the wavelength (λ) of the radiation being absorbed. In general terms, for a given series of X-ray emission lines below a certain absorption-edge energy (which varies with atomic number), MACs increase with increasing λ. The total secondary absorption of a given matrix is given by the equation μmatrix = Σi (μi·wi), where μi and wi are the individual MAC and weight fraction, respectively, of each element i that makes up the matrix (see the sketch after this list). The overall absorption, μmatrix, has to be multiplied by the average density, ρ, of the matrix in order to account for the absorptive behaviour of the sample volume on X-ray intensities. Given that the MACs for light elements are generally much smaller than those for heavy elements, in samples where there is a gross disparity in the atomic numbers of the constituent atoms, the lighter atoms will have little influence on the overall X-ray absorptive characteristics of the sample unless they are present in large concentrations. In the case where trace elements are of relatively low Z and low concentration, such as fluorine in certain silicates, the ZAF correction process may less-than-accurately estimate the correction factors required to achieve overall accuracy in the analysis.

  • Sample size relative to excitation volume – In most cases where the sample thickness exceeds 30 microns, and the sample isn't made up mostly of very light elements packed in loose structures, the sample thickness will exceed the depth of penetration of the electron beam, as well as the depth of the excitation volume produced by the incident electron beam at relatively high accelerating voltages (e.g., 15 kV); a rough way to check this is sketched after this list. In the case of thin films or thin mineral 'slices' that may be only 100–200 nm thick, a relatively high accelerating voltage will cause the electron beam to penetrate the sample thickness entirely, exciting whatever material lies below the intended target. At best, this phenomenon will result in a composite analysis of the target and the substrate underneath it; at worst, the beam will destroy the target area. Various workers have modelled the X-ray generation characteristics of substances by altering the thickness of successive layers of a target substance laid down on a substrate of known composition that is thicker than any penetration depth associated with the maximum accelerating voltage of their electron probe. In doing so, they have measured what is called the 'mass-depth profile' of given elements, i.e., the amount of X-rays generated under specific analytical conditions with increasing depth – such methods empirically account for absorption and fluorescence phenomena. This general type of correction process is called the 'phi-rho-z' (φ(ρz)) correction model – the ZAF data reduction model is a φ(ρz)-based model, with specific values being assigned to the three correction factors (not to be confused with the very specific φ(ρz) correction model detailed in the scientific literature, also commonly referred to as 'PROZA').
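Referring back to the carbon-coat discussion above, the effect of a coating-thickness mismatch between standard and unknown can be estimated with a simple Beer-Lambert attenuation term, I/I0 = exp(−(μ/ρ)·ρ·t). The sketch below is illustrative only: the MAC value is a placeholder rather than a tabulated figure, and the carbon density and coat thicknesses are assumed values.

```python
import math

def transmitted_fraction(mac_cm2_per_g, density_g_per_cm3, thickness_nm):
    """Fraction of X-ray intensity transmitted through a thin coat
    (Beer-Lambert law), with the thickness converted from nm to cm."""
    thickness_cm = thickness_nm * 1.0e-7
    return math.exp(-mac_cm2_per_g * density_g_per_cm3 * thickness_cm)

# Placeholder values for illustration only (not tabulated constants):
# a soft X-ray that carbon absorbs strongly, an assumed evaporated-carbon
# density, and two different coat thicknesses on standard and unknown.
MAC_C_FOR_SOFT_XRAY = 1.0e4   # cm^2/g, hypothetical
RHO_CARBON = 2.0              # g/cm^3, assumed

t_standard_nm = 20.0
t_unknown_nm = 40.0

f_std = transmitted_fraction(MAC_C_FOR_SOFT_XRAY, RHO_CARBON, t_standard_nm)
f_unk = transmitted_fraction(MAC_C_FOR_SOFT_XRAY, RHO_CARBON, t_unknown_nm)

# The K-ratio is biased by the ratio of the two transmitted fractions.
print(f"Transmitted through {t_standard_nm:.0f} nm coat: {f_std:.3f}")
print(f"Transmitted through {t_unknown_nm:.0f} nm coat: {f_unk:.3f}")
print(f"Bias factor on the K-ratio: {f_unk / f_std:.3f}")
```

Even a few percent of differential absorption of this kind translates directly into a comparable relative error in the apparent concentration of a light element.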
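To see why the Ti Kβ / V Kα interference mentioned above is so troublesome on a WDS spectrometer, one can compare the Bragg angles at which the two lines diffract from the same crystal. The snippet below is a sketch using approximate wavelengths and an approximate 2d spacing for an LiF (200) diffractor; exact values should be taken from reference tables rather than from this example.

```python
import math

# Approximate first-order X-ray wavelengths in angstroms (assumed,
# rounded values for illustration only).
WAVELENGTHS_A = {
    "Ti Kb": 2.514,
    "V Ka": 2.505,
}

TWO_D_LIF_200_A = 4.027  # approximate 2d spacing of an LiF (200) crystal

def bragg_theta_deg(wavelength_a, two_d_a, order=1):
    """Bragg angle (degrees) from n*lambda = 2d*sin(theta)."""
    return math.degrees(math.asin(order * wavelength_a / two_d_a))

for line, wl in WAVELENGTHS_A.items():
    print(f"{line}: theta = {bragg_theta_deg(wl, TWO_D_LIF_200_A):.2f} deg")
```

The two lines diffract within a fraction of a degree of one another, so any Ti Kβ photons reaching the detector at the V Kα setting are counted as vanadium unless an interference correction is applied.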
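As a worked example of the 3σ criterion described in the counting-statistics item (a sketch with made-up count totals, using simple Poisson error propagation rather than any particular vendor's formula), the following snippet tests whether a measured peak is statistically distinguishable from its background:

```python
import math

def is_significant(peak_counts, bkg_counts, n_sigma=3.0):
    """Return the net counts, their Poisson uncertainty and whether the
    net signal exceeds n_sigma times that uncertainty."""
    net = peak_counts - bkg_counts
    # Poisson error propagation: var(net) = var(peak) + var(bkg)
    sigma_net = math.sqrt(peak_counts + bkg_counts)
    return net, sigma_net, net > n_sigma * sigma_net

# Hypothetical totals: the same count rates measured for 10 s and 100 s.
for t, peak, bkg in [(10, 220, 200), (100, 2200, 2000)]:
    net, sigma, ok = is_significant(peak, bkg)
    print(f"t = {t:>3d} s: net = {net} +/- {sigma:.1f} counts -> "
          f"{'significant' if ok else 'not significant'} at 3 sigma")
```

Because the relative uncertainty scales as 1/√(counts), increasing the counting time by a factor of ten improves the signal-to-noise ratio by roughly √10, which is why trace-element work demands much longer counting times than major-element work.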
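The weight-fraction-weighted sum μmatrix = Σi (μi·wi) given in the matrix-correction item is easy to compute once the individual MACs are known. The sketch below uses placeholder MAC values (not tabulated constants) purely to show the bookkeeping:

```python
# Placeholder MACs (cm^2/g) of each matrix element for one particular
# emitted X-ray line -- illustrative values only.
macs = {"Si": 300.0, "O": 1200.0, "Fe": 2500.0}

# Weight fractions of the matrix elements (should sum to ~1).
weight_fractions = {"Si": 0.30, "O": 0.50, "Fe": 0.20}

# mu_matrix = sum_i (mu_i * w_i)
mu_matrix = sum(macs[el] * w for el, w in weight_fractions.items())

# Multiplying by the matrix density gives the linear absorption
# coefficient actually experienced by the emerging X-rays.
rho_matrix = 3.0  # g/cm^3, assumed
mu_linear = mu_matrix * rho_matrix  # cm^-1

print(f"mu_matrix = {mu_matrix:.0f} cm^2/g, mu_linear = {mu_linear:.0f} cm^-1")
```

Because the heavy element dominates the weighted sum even at modest concentration, this bookkeeping also illustrates why light elements contribute little to the overall absorption unless they are abundant.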
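Finally, for the thickness check mentioned in the last item, a commonly used rule-of-thumb estimate of electron penetration is the Kanaya-Okayama range, R (μm) ≈ 0.0276·A·E^1.67 / (Z^0.889·ρ), with E in keV, A in g/mol and ρ in g/cm3. The sketch below applies it to pure silicon as a stand-in target; it is an order-of-magnitude estimate, not a substitute for a proper φ(ρz) calculation.

```python
def kanaya_okayama_range_um(atomic_mass, atomic_number, density, e_kev):
    """Kanaya-Okayama electron range in micrometres (rule of thumb only)."""
    return 0.0276 * atomic_mass * e_kev**1.67 / (atomic_number**0.889 * density)

# Pure silicon as an illustrative target (A = 28.09 g/mol, Z = 14,
# rho = 2.33 g/cm^3), at two common accelerating voltages.
for e_kev in (15.0, 25.0):
    r = kanaya_okayama_range_um(28.09, 14, 2.33, e_kev)
    print(f"{e_kev:.0f} kV: estimated electron range ~ {r:.1f} um")
```

Both estimates come out at a few micrometres, far smaller than a 30 micron thick sample but far larger than a 100–200 nm film, which is exactly the contrast the bullet above describes.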