Chungbuk National University
Professor, School of Computer Science (Undergraduate), Department of Computer Science (Graduate)
Artificial Intelligence Lab (http://ailab.cbnu.ac.kr)
Professor Keon Myung Lee received his BS, MS, and Ph.D. degrees in computer science from KAIST (Korea Advanced Institute of Science and Technology), Korea, and was a post-doctoral fellow at INSA de Lyon, France. He was a visiting professor at the University of Colorado Denver and a visiting scholar at Indiana University, USA. After an industrial career in Silicon Valley, USA, he joined the Department of Computer Science, Chungbuk National University, Korea, in 1995, where he is now a professor. He serves as the Editor-in-Chief of the International Journal of Fuzzy Logic and Intelligent Systems. His principal research interests are in data mining, machine learning, soft computing, big data processing, and intelligent service systems.
Over the past decade, Transformer architectures have reigned supreme in deep learning, revolutionizing fields from natural language processing to computer vision. Their attention mechanisms and scalability set a new standard for neural architectures. However, this dominance has come at a cost: attention scales quadratically with sequence length, and Transformers carry limited inductive bias for sequential data. This plenary talk revisits the once-declining Recurrent Neural Networks (RNNs), now revitalized through innovative designs such as linear recurrent layers, structured state-space models, and hardware-aware optimizations. We explore the technological and theoretical breakthroughs that have enabled modern RNN variants, including S4, RetNet, RWKV, Mamba, and TTT, to match or even outperform Transformers in long-context modeling, inference efficiency, and memory scalability. Through a historical lens and technical insight, we trace the arc from Transformer supremacy to the emerging RNN renaissance. The talk offers a forward-looking perspective on how the synergy of recurrence and structure could redefine the future of efficient, adaptable AI systems.
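To make the "linear recurrent layer" idea concrete, below is a minimal, illustrative Python sketch of a diagonal linear recurrence in the spirit of S4/Mamba-style state-space models. It is not taken from the talk; the function name, parameterization, and values are hypothetical, chosen only to show why such layers scale to long contexts.

import numpy as np

def linear_recurrence(x, a, b, c):
    # Sequential form of h_t = a * h_{t-1} + b * x_t, with y_t = c * h_t.
    # O(T) time and O(1) state memory in the sequence length T; because the
    # recurrence is linear, it also admits a parallel prefix-scan form,
    # which is what makes these layers efficient to train.
    h = np.zeros(x.shape[-1])
    ys = []
    for x_t in x:              # x has shape (T, d)
        h = a * h + b * x_t    # elementwise (diagonal) state transition
        ys.append(c * h)       # linear readout
    return np.stack(ys)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 4))   # T=16 steps, d=4 channels (arbitrary)
a = np.full(4, 0.9)            # decay |a| < 1 keeps the state stable
y = linear_recurrence(x, a, b=np.ones(4), c=np.ones(4))
print(y.shape)                 # (16, 4)

Unlike softmax attention, the per-step cost here does not grow with context length, which is one source of the inference-efficiency and memory-scalability claims above.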