Estimating the half-life of 241Pu and its uncertainty
Introduction
Wellum et al. (2009) estimate the 241Pu half-life, t1/2,241. At about 14.3 years, it is the shortest half-life among the common Pu isotopes recovered from spent nuclear fuel after reprocessing. As Wellum et al. (2009) explain, t1/2,241 was historically rather poorly known: experimental values obtained by various methods spread over approximately 13–15 years, and this led to what was considered an unacceptably large uncertainty in decay-corrected plutonium inventories after several years. This was the motivation in the mid-1970s, prompted in part by the recommendations of the DOE Half-Life Evaluation Committee, for making an improved estimate of t1/2,241 using the best available techniques.
The estimation described in Wellum et al. (2009) began in January 1976 with a stock of plutonium oxide, homogenized by dissolution, with a 241Pu atomic number fraction of 92.73%. Over the next 31 years (a little over two half-lives), ending in January 2007, 15 independent measurements of the relative 241Pu atomic abundance were made. From the trend as a function of time, t1/2,241 can be estimated, as we discuss below. The technique is capable of high accuracy. Estimating the uncertainty is an important part of the data analysis: it quantifies this statement and enables users of the result to propagate their own application-specific uncertainty analysis. The uncertainty estimate is also needed by users of the half-life and by workers in the measurement community trying to understand the limitations and biases of particular techniques.
The method used to track the evolution of the relative atomic abundance was thermal ionization mass spectrometry (TIMS) performed on samples of Pu drawn from the homogenized stock, which had been completely chemically stripped of the isobar 241Am (the principal decay product of 241Pu), which otherwise cannot be discriminated from 241Pu by TIMS. A double ratio of isotope ratios, each differing by only one mass unit, was used to compensate for mass fractionation effects (unintended enrichment by the assay process, which otherwise introduces assay bias) and also to achieve a result independent of the particular TIMS instrument and associated measurement technique used. In terms of the measured relative atomic number intensities I, the double ratio at a given time is defined by:

R = (I241/I240) / (I240/I239)
All three nuclides appearing in this ratio are radioactive, so R as a function of time is expected to change as follows:

R(t) = R(0) exp[−(λ241 + λ239 − 2λ240) t]

where R(0) is the initial value of the double ratio at time t = 0, the start of the observation period, and λxxx is the decay constant of xxxPu.
Taking natural logarithms of R(t) we obtain a linear functional form:

y = ln R(t) = p1 − p2 t,  with p1 = ln R(0) and p2 = λ241 + λ239 − 2λ240

which defines the variable y and the two model parameters p1 and p2, including the sign convention.
From the slope of the line we can extract the decay constant of 241Pu, and hence the half-life, as follows:

λ241 = p2 + 2λ240 − λ239,  t1/2,241 = ln 2 / λ241
By inspection we see that the fractional uncertainty in t1/2,241 is equal to the fractional uncertainty in λ241.
In comparison to 241Pu, 239Pu and 240Pu are long-lived, so they act almost as stable monitor nuclides. This means that the correction to p2 needed to obtain λ241 is small and, moreover, because the half-lives of 239Pu and 240Pu are believed to be well known, the correction can be made accurately. The recommended half-lives of 239Pu and 240Pu are (24110 ± 30) years and (6563 ± 7) years respectively, as listed by Wellum et al. (2009), where the uncertainties are quoted at one standard deviation. Using these values the expression for λ241 can be written explicitly as:

λ241 = p2 + 2λ240 − λ239 = p2 + (1.8248 ± 0.0023) × 10⁻⁴ years⁻¹

where the uncertainty in the correction has been propagated by quadrature addition and is expressed at the one standard deviation level. The units of λ241 are years⁻¹, and in this work we adopt the same definition as Wellum et al. (2009) that 1 year = 365.24219878 days.
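As a check of the arithmetic, the correction term 2λ240 − λ239 and its quadrature uncertainty can be reproduced from the recommended half-lives; a minimal sketch (variable names are ours):

```python
import math

# Recommended half-lives and 1-sigma uncertainties (years), from Wellum et al. (2009)
t_239, u_239 = 24110.0, 30.0
t_240, u_240 = 6563.0, 7.0

ln2 = math.log(2.0)
lam_239 = ln2 / t_239  # decay constant of 239Pu, per year
lam_240 = ln2 / t_240  # decay constant of 240Pu, per year

# Correction term in lambda_241 = p2 + 2*lam_240 - lam_239
correction = 2.0 * lam_240 - lam_239

# The fractional uncertainty of a decay constant equals that of its half-life,
# so the two contributions add in quadrature:
u_corr = math.hypot(2.0 * lam_240 * (u_240 / t_240),
                    lam_239 * (u_239 / t_239))

print(f"correction = ({correction:.4e} +/- {u_corr:.1e}) per year")
# -> (1.8248 +/- 0.0023) x 10^-4 per year, as quoted above
```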
Wellum et al. (2009) present a table of measured R values as a function of t. Estimating t1/2,241 is thus a matter of extracting the best value of the slope parameter p2 from the data and then transforming from λ241 to t1/2,241. Wellum et al. (2009) state that although parameter estimation using regression analysis is assumed to be appropriate (a plot of residuals from an ordinary least squares fit is presented there), there are doubts about whether the uncertainty associated with the regression value is appropriate. The doubts arise because some of the residuals for points 10–12 and for 13–15 are too large to be explained by the claimed experimental uncertainties, which suggests that the assumptions underpinning simple regression analysis are violated to some degree. Wellum et al. (2009) therefore devise a specific scheme to extract the slope parameter and its uncertainty. The scheme uses a cluster of nine early and three late data points, respectively, to perform a fit through two effective (composite) points, with the three central data points used as a consistency check and to scale the overall final uncertainty estimate. The uncertainties in the individual data points had to be increased to obtain consistency (that is, to make the residuals for points 10–12 of 15 lie within their respective estimated uncertainties), and the final uncertainty estimate was approximately eight times larger than that from the two-composite-point fit alone. The approach was also justified as being compatible with the widely accepted ISO GUM methodology (International Organisation for Standardisation, 1993). The corresponding conventional analysis was not presented in Wellum et al. (2009), so for completeness it is presented in the Analysis section below.
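For reference, the conventional weighted least squares route described above can be sketched as follows. Synthetic, noise-free data stand in for the measured table of R values, which is not reproduced here, and the 14.35-year input half-life and R(0) value are purely illustrative:

```python
import numpy as np

ln2 = np.log(2.0)
lam_239 = ln2 / 24110.0          # 239Pu decay constant, per year
lam_240 = ln2 / 6563.0           # 240Pu decay constant, per year
t_half_true = 14.35              # illustrative 241Pu half-life, years
lam_241 = ln2 / t_half_true

# Synthetic double-ratio data following R(t) = R(0)*exp(-(lam241 + lam239 - 2*lam240)*t)
t = np.linspace(0.0, 31.0, 15)   # 15 measurement epochs over 31 years
R0 = 13.6                        # illustrative initial double ratio
R = R0 * np.exp(-(lam_241 + lam_239 - 2.0 * lam_240) * t)

y = np.log(R)
sigma_y = np.full_like(y, 0.025)  # sigma_y ~ sigma_R / R

# Weighted least squares line y = p1 - p2*t; np.polyfit weights multiply the residuals,
# so w = 1/sigma gives the usual inverse-variance weighting
slope, intercept = np.polyfit(t, y, 1, w=1.0 / sigma_y)
p2 = -slope

lam_241_hat = p2 + 2.0 * lam_240 - lam_239
print("estimated half-life:", ln2 / lam_241_hat, "years")
```

With noise-free input the fit recovers the input half-life exactly; with the real data the same code returns the WLS estimate discussed in the text.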
Linear calibrations performed using regression or least squares methods are ubiquitous within the applied physical sciences. In the present work we perform several alternative analyses of the data in Wellum et al. (2009) in order to assess the sensitivity of the estimated t1/2,241, and of its estimated uncertainty, to modeling assumptions. Because of the need to combine information from multiple laboratories and/or experiments, measurement error modeling assumptions play a key role, so we believe the measurement community will endorse sensitivity-to-the-error-model assessments such as this one. Our motivation is to explore whether the novel approach advocated in Wellum et al. (2009) provides a unique analysis pathway, or whether it may be considered a complementary tool to more familiar methods that an analyst can use to judge the quality of the result in a broader sense.
Analysis
In the present discussion we assume that the stock material is homogeneous, which seems reasonable within experimental limits (Seiler et al., 1982). The measurements of the double ratio are, however, subject to other sources of random experimental uncertainty, estimated by making a small number of repeats (typically six, but in the case of data point number 13, seven). The uncertainty in R must be transformed into the associated uncertainty in y. Similarly, there is a small uncertainty in the elapsed time associated with each measurement.
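Since y = ln R, a first-order (delta-method) propagation gives σy ≈ σR/R, i.e. the absolute uncertainty in y equals the relative uncertainty in R. A minimal sketch with illustrative numbers (not taken from the measured table):

```python
def sigma_y_from_R(R, sigma_R):
    """First-order propagation of the double-ratio uncertainty into y = ln(R)."""
    return sigma_R / R

# Illustrative values only: a 2.5% relative uncertainty in R
R, sigma_R = 10.0, 0.25
print(sigma_y_from_R(R, sigma_R))  # -> 0.025, the relative uncertainty of R
```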
Model selection
The main empirical observation is that the residuals appear to show systematic structure depending on whether a point belongs to one of the three groups of data (early, middle, late) defined by Wellum et al. (2009), which broadly correspond to experimental practice. We have revised the uncertainty estimates, which set the relative weights, as described, but have no known reason to repair, reject, or otherwise adjust the data in any other way. For the sole reason of quantifying any potential group-dependent systematic effect, alternative error models are considered.
Simulations
The Analysis section introduced two options to estimate t1/2,241 and its uncertainty. Both options assumed that the small errors in the predictors (the experimental times) could be lumped with the errors in the response (the log transform of the double ratio). Both options also assumed that experiments 1–12 had the same σR/R, equal to 0.025, and that experiments 13–15 had the same σR/R, equal to 0.13. The first option allows for a possible bias in experiments 10–12, and in 13–15, resulting in a 1σ “bias” term in the error model for those groups.
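A simulation of this kind can be sketched as follows. This is our own minimal version, with an illustrative true half-life and the stated group-wise relative uncertainties, and without the bias term of the first option:

```python
import numpy as np

rng = np.random.default_rng(0)
ln2 = np.log(2.0)
lam_239, lam_240 = ln2 / 24110.0, ln2 / 6563.0
t_half_true = 14.35                    # illustrative, years
p2_true = ln2 / t_half_true + lam_239 - 2.0 * lam_240

t = np.linspace(0.0, 31.0, 15)         # 15 epochs over 31 years
# Group-wise relative uncertainties: 0.025 for points 1-12, 0.13 for points 13-15
sigma_y = np.array([0.025] * 12 + [0.13] * 3)

estimates = []
for _ in range(500):
    # ln R(t) up to an additive constant, plus heteroscedastic noise
    y = -p2_true * t + rng.normal(0.0, sigma_y)
    slope, _ = np.polyfit(t, y, 1, w=1.0 / sigma_y)
    lam_hat = -slope + 2.0 * lam_240 - lam_239
    estimates.append(ln2 / lam_hat)

estimates = np.asarray(estimates)
print("mean half-life:", estimates.mean(), "spread:", estimates.std())
```

The empirical spread of the 500 estimates can then be compared with the analytic WLS uncertainty, which is the kind of check the Simulations section performs.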
Discussion
In reducing their experimental data to an estimated t1/2,241, along with an appropriate associated uncertainty, Wellum et al. (2009) proposed an alternative to the widely used statistical method of regression (weighted least squares). It is reassuring to find, however, that using the same prior knowledge about the experiments we come to similar conclusions using either WLS or variations that allow for experimental variance estimation errors and/or biases on the individual data points.
Conclusion
Wellum et al. (2009) report a carefully performed series of measurements to estimate t1/2,241, the 241Pu half-life. Minor improvements in the reporting of the data set could be made but did not prove to be material here. They applied the double isotope abundance ratio method using mass spectrometry on a stock of Pu rich in 241Pu and believed to be homogeneous. The double ratio method is believed to minimize the effect of isotope fractionation, and extreme care was taken to avoid isobaric bias from 241Am.
Acknowledgments
The authors warmly acknowledge the generosity of Dr. Roger Wellum in sharing the raw data with one of us (SC) at an early stage and for very many detailed exchanges over several years.
References (14)
- et al. (2011). Defense of the generalized least squares solution to Peelle's Pertinent Puzzle. Algorithms.
- et al. (2012). Least-squares fitting with errors in the response and predictors. International Journal of Metrology and Quality Engineering.
- et al. (2013). Meta-analysis options for inconsistent measurements. Nuclear Science and Engineering.
- et al. (1988). The effect of estimating weights in weighted least squares. Journal of the American Statistical Association.
- (1970). Statistics for Technology.
- et al. (2009). Robust Statistics.
- (1993). Guide to the Expression of Uncertainty in Measurement. ISO.