Present-day flow cytometers and microscopes enable combined signal detection as a function of many parameters, such as the color and polarization of the exciting and emitted light, the spatial position and orientation of the detected object, and the time point of data registration [1]. Because these parameters can be varied in very fine steps, "quasi-continuously," such observations generally yield vast data sets; consider, for example, the size of the recorded movies when several physiological parameters of a cell culture are monitored with a camera for days. The ultimate aim of basic research in molecular biology and medicine is the application of its results in diagnostics and therapy, the "translation" of the achievements of basic science to the everyday or "real-life" level. Examples are the application of fluorescence polarization in cancer diagnostics [2, 3] and of fluorescence lifetime for detecting various physiological parameters in the human body, for example, when delineating tumor boundaries during surgical intervention [4]. Huge data sets may arise here not only from the high number of monitored parameters but also from the size of the monitored (scanned) areas of the body surface. Additional complexity may come from the need for very fast data evaluation, preferably carried out in parallel with data registration, that is, "real-time" data processing. The collection of huge data sets with combinatorial or hyperdimensional microscopes and flow cytometers has been enabled by developments in illumination and detection photonics as well as in computerization. However, fully exploiting this hardware-level progress also requires a corresponding software-level development of fast, robust, and efficient evaluation techniques that can also be applied "remotely," without the touch of a human hand, in a so-called "unsupervised" manner.
Such methods are becoming quite widespread today. They are collectively called "artificial intelligence" techniques, referring mainly to "deep learning" (or "soft computing," "machine vision") and "dimensionality reduction" [5–11]. One subclass of artificial intelligence is deep learning, with its two versions called "supervised" and "unsupervised," depending on whether a priori information is available for the subsequent data processing, which mainly comprises the classification of data (assigning them to subgroups, "labeling") and their regression (fitting). Operationally, supervised deep learning, generally carried out in a neural-network environment such as the one shown in Figure 1, can be envisioned as a kind of extension of the conventional regression or fitting algorithm, in which the unknown parameters of a fitting function of known analytical form are determined by comparison with observed values of the function, the "observed data" [12, 13]. During deep learning, the precise form of a