A Generic Self-Supervised Framework of Learning Invariant Discriminative Features

Ntelemis F, Jin Y, Thomas SA (2023)
IEEE Transactions on Neural Networks and Learning Systems: 1-15.

Journal Article | E-publication ahead of print | English
 
Download
No files have been uploaded; this is a publication record only.
Author(s)
Ntelemis, Foivos; Jin, Yaochu; Thomas, Spencer A.
Abstract / Notes
Self-supervised learning (SSL) has become a popular method for generating invariant representations without the need for human annotations. However, the desired invariant representation is achieved by applying predefined online transformation functions to the input data. As a result, each SSL framework is customized for a particular data type, for example, visual data, and further modifications are required before it can be used for other data types. In contrast, the autoencoder (AE), a generic and widely applicable framework, mainly focuses on dimensionality reduction and is not suited to learning invariant representations. This article proposes a generic SSL framework based on a constrained self-labeling assignment process that prevents degenerate solutions. Specifically, the prior transformation functions are replaced with a self-transformation mechanism, derived through an unsupervised adversarial training process, to impose invariant representations. Via the self-transformation mechanism, pairs of augmented instances can be generated from the same input data. Finally, a training objective based on contrastive learning is designed by leveraging both the self-labeling assignment and the self-transformation mechanism. Although the self-transformation process is very generic, the proposed training strategy outperforms a majority of state-of-the-art representation learning methods based on AE structures. To validate the performance of our method, we conduct experiments on four types of data, namely visual, audio, text, and mass spectrometry data, and compare the results in terms of four quantitative metrics. The comparison demonstrates that the proposed method is effective and robust in identifying patterns within the tested datasets.
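The abstract names two central ingredients without implementation details: a constrained self-labeling assignment that rules out degenerate solutions, and a contrastive objective over pairs of augmented instances produced by the self-transformation mechanism. The sketch below is illustrative only and is not the authors' code: it assumes a Sinkhorn-style balancing step as the constraint and a swapped cross-entropy between two views as the contrastive objective, with the second view simply standing in for the output of the learned self-transformation.

import numpy as np

def balanced_assignment(scores, n_iters=3, eps=0.05):
    # Sinkhorn-style normalization: turn classifier scores (N x K) into soft
    # labels whose class (column) totals are forced to be equal, which blocks
    # the degenerate solution of collapsing every sample onto a single class.
    q = np.exp(scores / eps)
    q /= q.sum()
    n, k = q.shape
    for _ in range(n_iters):
        q /= q.sum(axis=0, keepdims=True); q /= k   # equalize class marginals
        q /= q.sum(axis=1, keepdims=True); q /= n   # restore per-sample marginals
    return q * n                                    # each row sums to 1

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def swapped_prediction_loss(scores_a, scores_b):
    # Cross-entropy between the constrained labels of one view and the
    # predictions of the other view, averaged over both directions.
    q_a, q_b = balanced_assignment(scores_a), balanced_assignment(scores_b)
    loss_ab = -(q_a * log_softmax(scores_b)).sum(axis=1).mean()
    loss_ba = -(q_b * log_softmax(scores_a)).sum(axis=1).mean()
    return 0.5 * (loss_ab + loss_ba)

# Toy usage: scores for two "views" of the same 8 samples over 4 classes.
# Here the second view is a perturbed copy of the first; in the paper it would
# instead come from the self-transformation mechanism.
rng = np.random.default_rng(0)
view_a = rng.normal(size=(8, 4))
view_b = view_a + 0.1 * rng.normal(size=(8, 4))
print(swapped_prediction_loss(view_a, view_b))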
Year of Publication
2023
Journal Title
IEEE Transactions on Neural Networks and Learning Systems
Page(s)
1-15
ISSN
2162-237X
eISSN
2162-2388
Page URI
https://pub.uni-bielefeld.de/record/2978768

Cite

Ntelemis F, Jin Y, Thomas SA. A Generic Self-Supervised Framework of Learning Invariant Discriminative Features. IEEE Transactions on Neural Networks and Learning Systems. 2023:1-15.
Ntelemis, F., Jin, Y., & Thomas, S. A. (2023). A Generic Self-Supervised Framework of Learning Invariant Discriminative Features. IEEE Transactions on Neural Networks and Learning Systems, 1-15. https://doi.org/10.1109/TNNLS.2023.3265607
Ntelemis, Foivos, Jin, Yaochu, and Thomas, Spencer A. 2023. “A Generic Self-Supervised Framework of Learning Invariant Discriminative Features”. IEEE Transactions on Neural Networks and Learning Systems, 1-15.
Ntelemis, F., Jin, Y., and Thomas, S. A. (2023). A Generic Self-Supervised Framework of Learning Invariant Discriminative Features. IEEE Transactions on Neural Networks and Learning Systems, 1-15.
Ntelemis, F., Jin, Y., & Thomas, S.A., 2023. A Generic Self-Supervised Framework of Learning Invariant Discriminative Features. IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15.
F. Ntelemis, Y. Jin, and S.A. Thomas, “A Generic Self-Supervised Framework of Learning Invariant Discriminative Features”, IEEE Transactions on Neural Networks and Learning Systems, 2023, pp. 1-15.
Ntelemis, F., Jin, Y., Thomas, S.A.: A Generic Self-Supervised Framework of Learning Invariant Discriminative Features. IEEE Transactions on Neural Networks and Learning Systems. 1-15 (2023).
Ntelemis, Foivos, Jin, Yaochu, and Thomas, Spencer A. “A Generic Self-Supervised Framework of Learning Invariant Discriminative Features”. IEEE Transactions on Neural Networks and Learning Systems (2023): 1-15.

Link(s) to Full Text(s)
Access Level
Restricted Closed Access

Sources

PMID: 37126634
PubMed | Europe PMC
