_config.yml (+4 −4)

@@ -16,7 +16,7 @@ footer_text: >
 keywords: jekyll, jekyll-theme, academic-website, portfolio-website # add your own keywords or leave empty

 lang: en # the language of your site (for example: en, fr, cn, ru, etc.)
-icon: brighton.png # the emoji used as the favicon (alternatively, provide image name in /assets/img/)
+icon: 👋 # the emoji used as the favicon (alternatively, provide image name in /assets/img/)

 url: https://rl-max.github.io # the base hostname & protocol for your site
 baseurl: # the subpath of your site, e.g. /blog/. Leave blank for root

@@ -70,11 +70,11 @@ og_image: # The site-wide (default for all links) Open Graph preview image

 github_username: rl-max # your GitHub user name
 gitlab_username: # your GitLab user name
-x_username: Haeone_Lee # your X handle
+x_username: # your X handle
 mastodon_username: # your mastodon instance+username in the format instance.tld/@username
-linkedin_username: haeone-lee-882b301b1 # your LinkedIn user name
+linkedin_username: # your LinkedIn user name
 telegram_username: # your Telegram user name
-scholar_userid: GUXJi7sAAAAJ&hl=en # your Google Scholar ID
+scholar_userid: # your Google Scholar ID
 semanticscholar_id: # your Semantic Scholar ID
 whatsapp_number: # your WhatsApp number (full phone number in international format. Omit any zeroes, brackets, or dashes when adding the phone number in international format.)
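For reference, the relevant `_config.yml` entries after this change would read roughly as follows (a sketch of the merged result; unrelated keys between the two hunks are omitted):

```yaml
lang: en                       # the language of your site
icon: 👋                       # emoji favicon (or an image name in /assets/img/)
url: https://rl-max.github.io  # the base hostname & protocol for your site
baseurl:                       # leave blank for root

github_username: rl-max       # your GitHub user name
x_username:                   # cleared by this change
linkedin_username:            # cleared by this change
scholar_userid:               # cleared by this change
```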
_pages/about.md (+10 −4)

@@ -2,12 +2,12 @@
 layout: about
 title: about
 permalink: /
-subtitle: <strong>Ian<strong> Lee
+subtitle: haeone.lee@kaist.ac.kr

 profile:
   align: right
   image: prof_pic.jpg
-  image_circular: false # crops the image to make it circular
+  image_circular: true # crops the image to make it circular
   more_info: >
     <p> 📍 Seoul, Korea </p>
     <p> </p>

@@ -20,6 +20,12 @@ social: true # includes social icons at the bottom of the page
 ---
 <!-- Hi there, my name is Haeone Lee. My goal is to develop intelligence that is helpful to humans, consisting of any form e.g., physical embodiment(robots), or software(android agent). I believe in the power of **Reinforcement Learning**, in that sense (1) it can reach the optimal performance (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as enabling efficient exploration, long-horizon control, and safe and autonomous learning. To this end, I am interested in utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction capabilities. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts.

-Hi there, my name is Haeone Lee. My goal is to develop intelligent agent that can outperform human, while also being helpful. I believe in the power of **Reinforcement Learning**, in that sense (1) it can autonomously come up with the solution given only the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as enabling efficient exploration, long-horizon control, and safe and autonomous learning. To this end, I am interested in utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction capabilities. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts. -->
+Hi there, my name is Haeone Lee. My goal is to develop intelligent agent that can outperform human, while also being helpful. I believe in the power of **Reinforcement Learning**, in that sense (1) it can autonomously come up with the solution given only the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as enabling efficient exploration, long-horizon control, and safe and autonomous learning. To this end, I am interested in utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction capabilities. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts.

-I am interested in building intelligent agents that can self-improve to be useful for humans. Specifically, it should generate useful problems and solve them by leveraging prior knowledge with critics to validate the success. I believe in the power of Reinforcement Learning, in that sense (1) it can autonomously come up with the solution given the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as sample efficiency, long-horizon control, and safe and autonomous learning. I believe that utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction can help to achieve my goal. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts.
+I am interested in building intelligent agents that can self-improve to be useful for humans. Specifically, it should generate useful problems and solve them by leveraging prior knowledge with critics to validate the success. I believe in the power of Reinforcement Learning, in that sense (1) it can autonomously come up with the solution given the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as sample efficiency, long-horizon control, and safe and autonomous learning. I believe that utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction can help to achieve my goal. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts. -->
+
+I am a master's student at the KAIST Graduate School of AI, advised by Prof. [Kimin Lee](https://sites.google.com/view/kiminlee/home).
+
+I am interested in developing a safe and proficient decision-making agent in the real world (e.g., robots). To this end, I aim to develop methods that can efficiently extract behavioral rules from existing data and help the agent continuously self-improve. Relevant topics include imitation learning on human data and developing scalable reinforcement learning (RL) algorithms that work both online and offline.
+
+Relevant keywords include imitation learning, hierarchical RL, exploration in RL, and robot learning.
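The resulting front matter of `_pages/about.md` after this change would look roughly like the sketch below (nesting of the `profile` keys assumed per the al-folio theme's usual layout):

```yaml
layout: about
title: about
permalink: /
subtitle: haeone.lee@kaist.ac.kr

profile:
  align: right
  image: prof_pic.jpg
  image_circular: true # crops the image to make it circular
  more_info: >
    <p> 📍 Seoul, Korea </p>
```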