The field of high-resolution mass spectrometry has undergone rapid progress in recent years, driven by instrumental improvements that have increased sensitivity and selectivity. A variety of qualitative screening approaches, collectively referred to as nontarget screening, have been introduced and have successfully extended the environmental monitoring of organic micropollutants. Several automated data processing workflows have been developed to handle the immense amount of data recorded in short time frames by these methods. Most data processing workflows include similar steps, but the underlying algorithms and their implementation vary. In this study, the consistency of data processing across different software tools was investigated. For this purpose, the same raw data files were processed with the software packages MZmine2, enviMass, Compound Discoverer, and XCMS online, and the resulting feature lists were compared. The results show low coherence between the processing tools: the overlap of features across all four programs was around 10%, and for each program between 40% and 55% of features did not match any feature from the other programs. The implementation of replicate and blank filters was identified as one source of the observed divergences. However, a better understanding of, and clearer user guidance on, the influence of different algorithms and settings on feature extraction and subsequent filtering steps is needed. In future studies, it would be of interest to investigate how the final data interpretation is influenced by different processing software. With this work, we want to encourage greater awareness of data processing as a crucial step in the nontarget screening workflow.
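The abstract describes comparing feature lists produced by the four tools from the same raw data. As one possible illustration of such a comparison, the sketch below matches features between two exported lists using m/z and retention-time tolerances and reports pairwise overlap fractions. The file names, column names (mz, rt), and tolerance values are assumptions made for illustration only and are not taken from the study itself.

```python
# Hypothetical sketch: matching feature lists exported from different
# nontarget-screening tools by m/z and retention-time tolerance.
# File names, column names, and tolerances are illustrative assumptions,
# not values or conventions from the study.
from itertools import combinations

import pandas as pd

MZ_TOL_PPM = 5.0    # assumed m/z match tolerance in ppm
RT_TOL_MIN = 0.1    # assumed retention-time tolerance in minutes


def load_features(path):
    """Read a feature list (CSV with 'mz' and 'rt' columns assumed)."""
    return pd.read_csv(path)[["mz", "rt"]]


def has_match(feature, other):
    """Return True if any feature in `other` matches `feature` within the tolerances."""
    ppm = (other["mz"] - feature.mz).abs() / feature.mz * 1e6
    drt = (other["rt"] - feature.rt).abs()
    return bool(((ppm <= MZ_TOL_PPM) & (drt <= RT_TOL_MIN)).any())


def overlap_fraction(a, b):
    """Fraction of features in list `a` that have at least one match in list `b`."""
    hits = sum(has_match(f, b) for f in a.itertuples(index=False))
    return hits / len(a) if len(a) else 0.0


if __name__ == "__main__":
    # Illustrative file names for the four tools compared in the study.
    tools = {
        "MZmine2": load_features("mzmine2_features.csv"),
        "enviMass": load_features("envimass_features.csv"),
        "Compound Discoverer": load_features("cd_features.csv"),
        "XCMS online": load_features("xcms_features.csv"),
    }
    for (name_a, feats_a), (name_b, feats_b) in combinations(tools.items(), 2):
        frac = overlap_fraction(feats_a, feats_b)
        print(f"{name_a} -> {name_b}: {frac:.1%} of features matched")
```

In practice, the match tolerances themselves influence the apparent overlap, so any such comparison should report the tolerances used alongside the overlap figures.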