Global-Leading Research and Education in Brain and Cognitive Engineering, Korea University. Our department aims to foster professionals equipped with creative thinking, adaptability to new technology, and practical application skills.


Colloquium Announcement on Nov. 19, 2018
Posted by Admin, 2018-10-30

Robot Learning: A Sparse Approach

Speaker: Prof. Songhwai Oh (Department of Electrical and Computer Engineering, Seoul National University)

Date: Nov. 19 (Mon.), 2018

Time: 5:00 PM

Location: Woo Jung Information & Communication bldg., Room 601

Hosted by Dept. of Brain & Cognitive Engineering, Korea University / Institute of Brain and Cognitive Engineering, Korea University / BK21+ Global Leader Development Division in Brain Engineering, Korea University / Interdisciplinary Major in Brain and Cognitive Science


With recent advances in hardware, sensing, and algorithms, we are witnessing the emergence of a new robotics industry. I will present a few examples of new services that upcoming service robots will provide, assisting us in the near future in places such as offices, malls, and homes. But for a robot to coexist with humans and operate successfully in crowded, dynamic environments, it must be able to learn from experience so that it can act safely and harmoniously with human participants. I will discuss research challenges for service robots and our attempts to address them. In particular, I will present our recent work on the foundations of robot learning: nested sparse networks (NestedNet) and the sparse Markov decision process (sparse MDP). NestedNet is a new deep learning framework that allows an n-in-1 nested structure in a neural network, realizing a resource-aware, versatile architecture in which a single network can meet diverse resource requirements. For sequential decision-making problems, we have proposed the sparse MDP, which uses a novel causal sparse Tsallis entropy regularization and yields a sparse, multi-modal optimal policy distribution. I will describe how sparse MDPs can be applied to reinforcement learning and inverse reinforcement learning problems, with some theoretical and experimental results.
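For a concrete sense of the "sparse, multi-modal optimal policy" the abstract mentions: under Tsallis entropy regularization, the optimal policy takes a sparsemax form, which, unlike softmax, assigns exactly zero probability to sufficiently suboptimal actions while splitting mass among near-optimal ones. The following is a minimal NumPy sketch of sparsemax applied to some Q-values; the function and the example values are illustrative assumptions, not material from the talk itself.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.

    Unlike softmax, the result can contain exact zeros, giving a
    sparse probability distribution over actions.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum      # actions kept in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max  # shared threshold
    return np.maximum(z - tau, 0.0)

# Hypothetical Q-values for three actions in some state: two near-optimal
# actions and one clearly bad one.
q = np.array([2.0, 1.9, -1.0])
policy = sparsemax(q)  # approximately [0.55, 0.45, 0.0]
```

Note how the policy stays multi-modal (both good actions keep probability mass) yet sparse (the bad action gets exactly zero), whereas softmax would give every action strictly positive probability.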


File 오성회.pdf(114.9K)
