

Colloquium Announcement on Nov. 19, 2018

Robot Learning: A Sparse Approach

Speaker: Prof. Songhwai Oh (Department of Electrical and Computer Engineering, Seoul National University)

Date: Nov. 19, (Mon.), 2018

Time: 05:00 PM 

Location: Woo Jung Information & Communication Bldg., Room 601

Hosted by Dept. of Brain & Cognitive Engineering, Korea University / Institute of Brain and Cognitive Engineering, Korea University / BK21+ Global Leader Development Division in Brain Engineering, Korea University / Interdisciplinary Major in Brain and Cognitive Science


With recent advances in hardware, sensing, and algorithms, we are witnessing the emergence of a new robotics industry. I will present a few examples of new services that upcoming service robots will provide, assisting us in the near future in places such as offices, malls, and homes. But for a robot to coexist with humans and operate successfully in crowded, dynamic environments, it must be able to learn from experience so that it can act safely and harmoniously alongside human participants. I will discuss research challenges for service robots and our attempts to address them. In particular, I will present our recent work on the foundations of robot learning: nested sparse networks (NestedNet) and sparse Markov decision processes (MDPs). NestedNet is a new deep learning framework that allows an n-in-1 nested structure in a neural network. The proposed framework realizes a resource-aware, versatile architecture, as a single network can meet diverse resource requirements. For sequential decision-making problems, we have proposed a sparse MDP based on a novel causal sparse Tsallis entropy regularization, which yields a sparse and multi-modal optimal policy distribution. I will describe how sparse MDPs can be applied to reinforcement learning and inverse reinforcement learning problems, together with theoretical and experimental results.
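As a rough illustration of the kind of sparse policy distribution the abstract refers to: Tsallis-entropy regularization (in the single-step case) leads to the sparsemax projection of the action values, which, unlike softmax, can assign exactly zero probability to low-scoring actions. The sketch below is not from the talk itself; it is a minimal NumPy implementation of sparsemax, with illustrative values.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of logits z onto the probability simplex.

    Unlike softmax, the result can assign exactly zero probability to
    low-scoring actions, giving a sparse, possibly multi-modal distribution.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                # logits in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum        # which actions keep nonzero mass
    k_z = k[support][-1]                       # size of the support set
    tau = (cumsum[support][-1] - 1.0) / k_z    # shared threshold
    return np.maximum(z - tau, 0.0)

# Two near-tied actions share all the probability mass; the rest get exactly 0.
p = sparsemax([2.0, 1.9, -1.0, -2.0])
# → [0.55, 0.45, 0.0, 0.0]
```

The resulting policy puts all its mass on the few best actions while remaining multi-modal among near-ties, which is the behavior the sparse MDP formulation generalizes to sequential problems.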


