To implement equilibrium hard‐modeling of spectroscopic titration data, the analyst must make a variety of crucial data processing choices that address negative absorbance and molar absorptivity values. The efficacy of three such methodological options is evaluated via high‐throughput Monte Carlo simulations, root‐mean‐square error surface mapping, and two mathematical theorems. Accuracy of the calculated binding constant values constitutes the key figure of merit used to compare different data analysis approaches. First, using singular value decomposition to filter the raw absorbance data prior to modeling often reduces the number of negative values involved but has little effect on the calculated binding constant despite its ability to address spectrometer noise. Second, both truncation of negative molar absorptivity values and the fast nonnegative least squares algorithm are superior to unconstrained regression because they avoid local minima; however, they introduce bias into the calculated binding constants in the presence of negative baseline offsets. Finally, we establish two theorems showing that negative values are best addressed when all the chemical solutions leading to the raw absorbance data are the result of mixing exactly two distinct stock solutions. This allows the raw absorbance data to be shifted up, eliminating negative baseline offsets, without affecting the concentration matrix, residual matrix, or calculated binding constants. Otherwise, the data cannot be safely upshifted. A comprehensive protocol for analyzing experimental absorbance datasets is included.
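The sketch below illustrates, under simplifying assumptions, the kind of workflow the abstract describes: SVD filtering of the raw absorbance matrix, computation of equilibrium concentrations for a trial binding constant, and fitting of molar absorptivities either by unconstrained least squares or by nonnegative least squares (which implicitly avoids negative molar absorptivity values). A 1:1 host-guest binding model is assumed for concreteness; the function and variable names are illustrative and not taken from the paper.

```python
# Hedged sketch of equilibrium hard-modeling steps, assuming Beer-Lambert
# behavior A = C @ E + R and a 1:1 binding model H + G <-> HG with constant K.
# Names (svd_filter, concentrations_1to1, rmse_for_K) are hypothetical.
import numpy as np
from scipy.optimize import nnls

def svd_filter(A, rank):
    """Reconstruct the absorbance matrix from its first `rank` singular components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

def concentrations_1to1(H_tot, G_tot, K):
    """Equilibrium concentrations [H, G, HG] for a 1:1 binding model."""
    # Free host [H] solves K*[H]^2 + (K*(G_tot - H_tot) + 1)*[H] - H_tot = 0
    a = K
    b = K * (G_tot - H_tot) + 1.0
    c = -H_tot
    H = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    HG = H_tot - H
    G = G_tot - HG
    return np.column_stack([H, G, HG])

def rmse_for_K(A, H_tot, G_tot, K, nonneg=True):
    """Fit molar absorptivities for a trial K and return the residual RMSE."""
    C = concentrations_1to1(H_tot, G_tot, K)
    if nonneg:
        # Wavelength-by-wavelength nonnegative least squares keeps E >= 0
        E = np.column_stack([nnls(C, A[:, j])[0] for j in range(A.shape[1])])
    else:
        # Unconstrained regression; may yield negative molar absorptivities
        E, *_ = np.linalg.lstsq(C, A, rcond=None)
    R = A - C @ E
    return np.sqrt(np.mean(R**2))
```

In this kind of workflow, the binding constant is typically estimated by scanning or optimizing `K` to minimize `rmse_for_K`, optionally after applying `svd_filter` to the raw absorbance matrix; whether the data may first be upshifted to remove negative baseline offsets depends, per the theorems in the abstract, on whether every titration solution is a mixture of exactly two stock solutions.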