The performance of cosmic-ray tomography systems is largely determined by their tracking accuracy. With conventional scintillation detector technology, good precision can be achieved with a small pitch between the elements of the detector array. Improving the resolution, however, implies increasing the number of read-out channels, which in turn increases the complexity and cost of the tracking detectors. As an alternative, a scintillation plate detector coupled to multiple silicon photomultipliers can serve as a technically simple solution. In this paper, we compare two deep-learning-based methods with a conventional Center of Gravity (CoG) algorithm for reconstructing cosmic-ray muon hit positions on the plate detector from the photomultiplier signals. For this study, we generated a dataset of muon hits on a detector plate using the Monte Carlo simulation toolkit GEANT4. We demonstrate that both deep-learning-based methods outperform the conventional CoG algorithm by a significant margin. Our proposed Fully Connected Network achieves an average error of 0.72 mm, measured as the Euclidean distance between the actual and predicted hit coordinates, a substantial improvement over CoG, which yields 1.41 mm on the same dataset. Additionally, we investigate the effect of different sensor configurations on performance.
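To make the comparison concrete, the sketch below illustrates the general idea of a CoG hit estimate, the Euclidean-distance error metric named in the abstract, and a fully connected regressor from sensor amplitudes to an (x, y) hit position. It is not the authors' implementation: the sensor layout, amplitudes, plate size, and network layer widths are hypothetical placeholders, and PyTorch is assumed only for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical sensor layout: four SiPMs at the corners of a 100 mm x 100 mm plate.
SENSOR_XY = np.array([[0.0, 0.0],
                      [100.0, 0.0],
                      [0.0, 100.0],
                      [100.0, 100.0]])

def cog_position(amplitudes, positions=SENSOR_XY):
    """Center-of-Gravity estimate: amplitude-weighted mean of the sensor coordinates."""
    w = np.asarray(amplitudes, dtype=float)
    return (w[:, None] * positions).sum(axis=0) / w.sum()

def euclidean_error(predicted, actual):
    """Error metric from the abstract: Euclidean distance (mm) between predicted and true hit."""
    return float(np.linalg.norm(np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)))

class HitRegressor(nn.Module):
    """Illustrative fully connected regressor: SiPM amplitudes -> (x, y) hit position.
    Layer widths are placeholders, not the architecture reported in the paper."""
    def __init__(self, n_sensors=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    amps = [120.0, 80.0, 60.0, 40.0]   # made-up SiPM amplitudes
    true_hit = (40.0, 35.0)            # made-up ground-truth hit position (mm)

    cog = cog_position(amps)
    print("CoG estimate:", cog, "error (mm):", euclidean_error(cog, true_hit))

    model = HitRegressor()             # untrained; shown only for the input/output shapes
    pred = model(torch.tensor([amps], dtype=torch.float32))
    print("FCN output shape:", tuple(pred.shape))  # (1, 2)
```

In this framing, CoG is a fixed analytic mapping from amplitudes to position, while the network learns that mapping from simulated (amplitudes, true position) pairs, which is where the reported accuracy gain would come from.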
               