Articles with "pre trained" as a keyword




Budget Restricted Incremental Learning with Pre-Trained Convolutional Neural Networks and Binary Associative Memories

Published in 2019 at "Journal of Signal Processing Systems"

DOI: 10.1007/s11265-019-01450-z

Abstract: For the past few years, Deep Neural Networks (DNNs) have achieved state-of-the-art performance in numerous challenging domains. To reach this performance, DNNs rely on large sets of parameters and complex architectures, which are trained offline…

Keywords: incremental learning; neural networks; binary associative; associative memories ...
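
The abstract is cut off, but the title names the recipe: a frozen pre-trained CNN supplies feature vectors, which are binarized and stored in an associative memory so that new classes can be added on a budget, without retraining the network. A minimal sketch of that idea, assuming sign-based binarization and a Hamming-distance lookup (illustrative choices, not the paper's exact associative-memory construction):

```python
import numpy as np

# Sketch only: stand-in for features from a frozen, pre-trained CNN.
# In this setting they would come from e.g. an ImageNet backbone.
feature_dim = 512

def binarize(features: np.ndarray) -> np.ndarray:
    """Quantize real-valued CNN features to {0, 1} by sign (an assumption,
    not the paper's exact binary encoding)."""
    return (features > 0).astype(np.uint8)

class BinaryIncrementalMemory:
    """One binary prototype per class; classes can be added at any time
    with no retraining -- the 'incremental' part of the title."""
    def __init__(self):
        self.prototypes = {}  # label -> binary vector

    def add_class(self, label, example_features):
        # Average the class examples, then binarize into one prototype.
        self.prototypes[label] = binarize(example_features.mean(axis=0))

    def predict(self, features):
        code = binarize(features)
        # Nearest prototype under Hamming distance.
        return min(self.prototypes,
                   key=lambda lbl: np.count_nonzero(self.prototypes[lbl] != code))

# Usage: add two classes, then a third later, without touching the CNN.
rng = np.random.default_rng(0)
mem = BinaryIncrementalMemory()
mem.add_class("cat", rng.standard_normal((10, feature_dim)))
mem.add_class("dog", rng.standard_normal((10, feature_dim)))
mem.add_class("bird", rng.standard_normal((10, feature_dim)))  # incremental
print(mem.predict(rng.standard_normal(feature_dim)))
```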

Application of deep learning to identify COVID-19 infection in posteroanterior chest X-rays

Published in 2021 at "Clinical Imaging"

DOI: 10.1016/j.clinimag.2021.07.004

Abstract: Introduction: Posteroanterior chest X-rays (CXRs) are recommended over computed tomography scans for COVID-19 diagnosis, as CXRs can be obtained with relatively low risk of facility contamination. The objective of this study was to assess seven…

Keywords: classification; posteroanterior chest; chest rays; cxrs ...
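
The truncated abstract says seven pre-trained models were assessed on the same CXR task. A hedged sketch of that evaluation pattern, using a few torchvision ImageNet backbones as stand-ins for the paper's (unnamed here) seven models:

```python
import torch.nn as nn
from torchvision import models

# Candidate ImageNet-pretrained backbones -- illustrative stand-ins for the
# models the (truncated) abstract says were assessed.
backbones = {
    "resnet50": models.resnet50,
    "densenet121": models.densenet121,
    "vgg16": models.vgg16,
}

def build_cxr_classifier(name: str, num_classes: int = 2) -> nn.Module:
    """Load a pretrained backbone and swap its head for CXR classification."""
    net = backbones[name](weights="IMAGENET1K_V1")
    if name.startswith("resnet"):
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name.startswith("densenet"):
        net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    else:  # vgg: replace the last layer of the classifier stack
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, num_classes)
    return net

# Each candidate would then be fine-tuned and scored on the same data split.
model = build_cxr_classifier("densenet121")
```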

T4SEfinder: a bioinformatics tool for genome-scale prediction of bacterial type IV secreted effectors using pre-trained protein language model.

Published in 2021 at "Briefings in bioinformatics"

DOI: 10.1093/bib/bbab420

Abstract: Bacterial type IV secretion systems (T4SSs) are versatile and membrane-spanning apparatuses, which mediate both genetic exchange and delivery of effector proteins to target eukaryotic cells. The secreted effectors (T4SEs) can affect gene expression and signal…

Keywords: bacterial type; t4sefinder; secreted effectors; pre trained ...
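
The pipeline the title describes (embed each protein with a pre-trained protein language model, then classify) can be sketched as below. The ESM-2 checkpoint is a stand-in assumption; the truncated abstract does not name T4SEfinder's actual language model.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Stand-in pre-trained protein language model (illustrative choice only).
name = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
plm = AutoModel.from_pretrained(name)

def embed(sequence: str) -> torch.Tensor:
    """Mean-pool the language model's per-residue embeddings into one
    fixed-size vector per protein, ready for a downstream classifier."""
    inputs = tokenizer(sequence, return_tensors="pt")
    with torch.no_grad():
        hidden = plm(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

vec = embed("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
# vec would then feed an effector/non-effector classifier (e.g. an SVM or MLP).
```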

miProBERT: identification of microRNA promoters based on the pre-trained model BERT.

Published in 2023 at "Briefings in bioinformatics"

DOI: 10.1093/bib/bbad093

Abstract: Accurate prediction of promoter regions driving miRNA gene expression has become a major challenge due to the lack of annotation information for pri-miRNA transcripts. This defect hinders our understanding of miRNA-mediated regulatory networks. Some algorithms…

Keywords: promoter; bert; trained model; model ...
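
The title tells us the method is BERT fine-tuned to label DNA windows as promoter or non-promoter. A sketch of that standard fine-tuning pattern; the checkpoint and tokenization below are illustrative assumptions, since the snippet does not say which BERT weights miProBERT starts from:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

# Illustrative checkpoint only -- a genomic BERT would normally be used here.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # promoter vs. non-promoter
)

# A DNA window, tokenized however the chosen checkpoint expects (here the
# default WordPiece tokenizer, purely for illustration).
batch = tokenizer("ACGTACGTGGCATTACG", return_tensors="pt")
labels = torch.tensor([1])  # 1 = promoter region

out = model(**batch, labels=labels)
out.loss.backward()  # an optimizer step would follow in a real training loop
```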

DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome

Published in 2021 at "Bioinformatics"

DOI: 10.1093/bioinformatics/btab083

Abstract: Motivation: Deciphering the language of non-coding DNA is one of the fundamental problems in genome research. The gene regulatory code is highly complex due to the existence of polysemy and distant semantic relationships, which previous informatics…

Keywords: dnabert; pre; trained bidirectional; transformers model ...
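
DNABERT's central adaptation of BERT to DNA is tokenizing a sequence into overlapping k-mer "words", so the model gets local context the way word pieces provide it in text. A minimal sketch of that tokenization (the DNABERT work trains models for k = 3 to 6):

```python
def seq_to_kmers(sequence: str, k: int = 6) -> list[str]:
    """Split a DNA sequence into overlapping k-mer 'words', the input unit
    DNABERT's BERT-style model is pre-trained on."""
    sequence = sequence.upper()
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

print(seq_to_kmers("ACGTACGTGG"))
# ['ACGTAC', 'CGTACG', 'GTACGT', 'TACGTG', 'ACGTGG']
```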

MICER: a pre-trained encoder-decoder architecture for molecular image captioning

Published in 2022 at "Bioinformatics"

DOI: 10.1093/bioinformatics/btac545

Abstract: Motivation: Automatic recognition of chemical structures from molecular images provides an important avenue for the rediscovery of chemicals. Traditional rule-based approaches that rely on expert knowledge and fail to consider all the stylistic variations of…

Keywords: architecture molecular; encoder decoder; molecular image; pre trained ...
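
The title names the architecture: a pre-trained encoder reads the molecular image and a decoder emits the structure as a token sequence (for example, SMILES). A bare-bones sketch of that encoder-decoder shape; the ResNet backbone, GRU decoder, vocabulary, and dimensions are placeholders, not MICER's actual configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

class ImageToSmiles(nn.Module):
    """Pre-trained CNN encoder + autoregressive decoder, the general shape
    named in the MICER title (all details here are illustrative)."""
    def __init__(self, vocab_size: int = 100, dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.proj = nn.Linear(512, dim)
        self.embed = nn.Embedding(vocab_size, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, images, token_ids):
        feat = self.proj(self.encoder(images).flatten(1))  # (B, dim)
        h0 = feat.unsqueeze(0)                             # init decoder state
        dec, _ = self.decoder(self.embed(token_ids), h0)
        return self.out(dec)                               # next-token logits

model = ImageToSmiles()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 100, (2, 12)))
```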

On the effectiveness of compact biomedical transformers

Published in 2022 at "Bioinformatics"

DOI: 10.1093/bioinformatics/btad103

Abstract: Motivation: Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks. Many existing pre-trained models, however, are resource-intensive and computationally heavy owing to…

Keywords: biomedical transformers; effectiveness compact; compact biomedical; biomedical tasks ...
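
Compact variants of models like BioBERT are typically produced with standard compression recipes such as knowledge distillation, where a small student learns to match a large teacher's output distribution. A hedged sketch of the usual distillation loss (temperature and weighting are generic defaults, not settings from this paper):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend soft-target KL (student mimics teacher) with the ordinary
    hard-label cross-entropy; a standard recipe, not this paper's exact one."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 5), torch.randn(8, 5),
                         torch.randint(0, 5, (8,)))
```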

Deep-pre-trained-FWI: where supervised learning meets the physics-informed neural networks

Published in 2023 at "Geophysical Journal International"

DOI: 10.1093/gji/ggad215

Abstract: Full-Waveform Inversion (FWI) is the current standard method for determining the final, detailed model parameters used in the seismic imaging process. However, FWI is an ill-posed problem that easily converges to a local minimum,…

Keywords: physics; supervised learning; model; physics informed ...
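
The title's pairing of supervised learning with physics-informed networks usually amounts to a two-part loss: a supervised misfit against reference velocity models for pre-training, plus a physics/data-misfit term for the inversion itself. A schematic sketch of that combination; simulate_waveform is a hypothetical stand-in for a differentiable forward-modeling step, and the weighting is arbitrary:

```python
import torch

def total_loss(predicted_velocity, reference_velocity,
               observed_data, simulate_waveform, lam: float = 0.1):
    """Supervised term (pre-training against reference models) plus a
    physics/data-misfit term (FWI proper). Schematic only."""
    supervised = torch.mean((predicted_velocity - reference_velocity) ** 2)
    physics = torch.mean((simulate_waveform(predicted_velocity)
                          - observed_data) ** 2)
    return supervised + lam * physics
```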

A Deep Learning Model Based on Concatenation Approach for the Diagnosis of Brain Tumor

Published in 2020 at "IEEE Access"

DOI: 10.1109/access.2020.2978629

Abstract: Brain tumor is a deadly disease, and its classification is a challenging task for radiologists because of the heterogeneous nature of tumor cells. Recently, computer-aided diagnosis systems have shown promise as an assistive technology to…

Keywords: diagnosis; tumor; brain tumor; pre trained ...
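
The "concatenation approach" of the title means fusing features from more than one pre-trained network before classification. A minimal sketch of that pattern, with two torchvision backbones as stand-ins (the snippet above does not name the paper's actual backbones):

```python
import torch
import torch.nn as nn
from torchvision import models

class ConcatDiagnosis(nn.Module):
    """Concatenate features from two pre-trained CNNs, then classify.
    Backbones and sizes are illustrative stand-ins."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        r = models.resnet18(weights="IMAGENET1K_V1")
        d = models.densenet121(weights="IMAGENET1K_V1")
        self.f1 = nn.Sequential(*list(r.children())[:-1])  # -> (B, 512, 1, 1)
        self.f2 = nn.Sequential(d.features, nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1))   # -> (B, 1024, 1, 1)
        self.head = nn.Linear(512 + 1024, num_classes)

    def forward(self, x):
        z = torch.cat([self.f1(x).flatten(1), self.f2(x).flatten(1)], dim=1)
        return self.head(z)

logits = ConcatDiagnosis()(torch.randn(2, 3, 224, 224))  # shape (2, 4)
```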

Implicit Stereotypes in Pre-Trained Classifiers

Published in 2021 at "IEEE Access"

DOI: 10.1109/access.2021.3136898

Abstract: Pre-trained deep learning models underpin many public-facing applications, and their propensity to reproduce implicit racial and gender stereotypes is an increasing source of concern. The risk of large-scale, unfair outcomes resulting from their use thus…

Keywords: implicit stereotypes; stereotypes pre; trained classifiers; pre trained ...

Injecting User Identity Into Pretrained Language Models for Document-Level Sentiment Classification

Published in 2022 at "IEEE Access"

DOI: 10.1109/access.2022.3158975

Abstract: This paper mainly studies the combination of pre-trained language models and user identity information for document-level sentiment classification. In recent years, pre-trained language models (PLMs) such as BERT have achieved state-of-the-art results on many NLP…

Keywords: identity; user identity; trained language; language models ...
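
A common way to inject user identity into a pre-trained language model is to learn one embedding per user and combine it with the document representation before the classifier. A hedged sketch of that idea; the concatenation scheme, checkpoint, and sizes below are illustrative, and the paper's actual injection mechanism may differ:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class UserAwareSentiment(nn.Module):
    """BERT document encoding concatenated with a learned user embedding,
    then a sentiment head -- one plausible injection scheme, not necessarily
    the paper's."""
    def __init__(self, num_users: int, num_labels: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        dim = self.bert.config.hidden_size            # 768 for BERT-base
        self.user_embed = nn.Embedding(num_users, dim)
        self.head = nn.Linear(2 * dim, num_labels)

    def forward(self, input_ids, attention_mask, user_ids):
        # [CLS] token as the document representation.
        cls = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.head(torch.cat([cls, self.user_embed(user_ids)], dim=1))

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["great movie!"], return_tensors="pt")
model = UserAwareSentiment(num_users=1000)
logits = model(batch["input_ids"], batch["attention_mask"],
               torch.tensor([42]))
```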