Unite.ai -- Though the study chooses email addresses as an example of potentially vulnerable PII, the paper emphasizes that extensive prior research has pursued this kind of attack with the aim of exfiltrating patients' medical data, and the authors consider their experiments a demonstration of principle rather than a specific highlighting of the vulnerability of email addresses in this context. The paper is titled Are Large Pre-Trained Language Models Leaking Your Personal Information?, and is written by three researchers at the University of Illinois Urbana-Champaign.