A Case Study on the Diminishing Popularity of Encoder-Only Architectures in Machine Learning Models
Praveen Kumar Sridhar1, Nitin Srinivasan2, Adithyan Arun Kumar3, Gowthamaraj Rajendran4, Kishore Kumar Perumalsamy5

1Praveen Kumar Sridhar, Department of Data Science, Northeastern University, San Jose, United States.

2Nitin Srinivasan, Department of Computer Science, University of Massachusetts Amherst, Sunnyvale, United States.

3Adithyan Arun Kumar, Department of Information Security, Carnegie Mellon University, San Jose, United States.

4Gowthamaraj Rajendran, Department of Information Security, Carnegie Mellon University, San Jose, United States.

5Kishore Kumar Perumalsamy, Department of Computer Science, Carnegie Mellon University, San Jose, United States.

Manuscript received on 28 February 2024 | Revised Manuscript received on 08 March 2024 | Manuscript Accepted on 15 March 2024 | Manuscript published on 30 March 2024 | PP: 22-27 | Volume-13 Issue-4, March 2024 | Retrieval Number: 100.1/ijitee.D982713040324 | DOI: 10.35940/ijitee.D9827.13040324

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: This paper examines the shift from encoder-only to decoder-only and encoder-decoder models in machine learning, and the corresponding decline in popularity of encoder-only architectures. It explores the reasons behind this trend, including advances in decoder models that offer superior generative capabilities, flexibility across domains, and improvements in unsupervised learning techniques. The study also discusses the role of prompting techniques in simplifying model architectures and increasing model versatility. By analyzing the evolution, applications, and shifting preferences within the research community and industry, this paper provides insight into the changing landscape of machine learning model architectures.

Keywords: Machine Learning, Deep Learning, Encoder, Transformers, Decoder, Natural Language Processing, Generative Model, Model Evolution.
Scope of the Article: Deep Learning