Published in 2023 in IEEE Access
DOI: 10.1109/access.2023.3239668
Abstract: The confidentiality threat against training data has become a significant security problem in neural language models. Recent studies have shown that memorized training data can be extracted by injecting well-chosen prompts into generative language models.…
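The extraction attack the abstract alludes to can be illustrated with a toy sketch (not the paper's method, and deliberately simplified): a tiny "language model" that has memorized its training data verbatim, from which a well-chosen prompt prefix elicits a secret continuation. All names here (`train_bigram`, `generate`, the example corpus) are hypothetical.

```python
# Toy illustration of training-data extraction via prompting.
# A greedy bigram "model" memorizes its corpus; a chosen prompt
# prefix then surfaces the memorized continuation.
from collections import defaultdict

def train_bigram(corpus):
    """Build a next-token table from whitespace-separated tokens."""
    model = defaultdict(list)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, max_tokens=10):
    """Greedily continue the prompt with the most frequent next token."""
    out = prompt.split()
    for _ in range(max_tokens):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(max(set(candidates), key=candidates.count))
    return " ".join(out)

# The training data contains a "secret" the model memorizes verbatim.
corpus = "the user password is hunter2 end of record"
model = train_bigram(corpus)

# A well-chosen prompt prefix extracts the memorized secret.
print(generate(model, "the user password"))
```

Real attacks target neural generative models rather than lookup tables, but the mechanism is analogous: memorized sequences become the highest-likelihood continuations of the right prompt.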
Keywords: token level; language; attack; language models