Commit 712791b

update
1 parent 7369dea · commit 712791b

3 files changed (+14, -52 lines)

_config.yml

Lines changed: 4 additions & 4 deletions
@@ -16,7 +16,7 @@ footer_text: >
 keywords: jekyll, jekyll-theme, academic-website, portfolio-website # add your own keywords or leave empty

 lang: en # the language of your site (for example: en, fr, cn, ru, etc.)
-icon: brighton.png # the emoji used as the favicon (alternatively, provide image name in /assets/img/)
+icon: 👋 # the emoji used as the favicon (alternatively, provide image name in /assets/img/)

 url: https://rl-max.github.io # the base hostname & protocol for your site
 baseurl: # the subpath of your site, e.g. /blog/. Leave blank for root
@@ -70,11 +70,11 @@ og_image: # The site-wide (default for all links) Open Graph preview image

 github_username: rl-max # your GitHub user name
 gitlab_username: # your GitLab user name
-x_username: Haeone_Lee # your X handle
+x_username: # your X handle
 mastodon_username: # your mastodon instance+username in the format instance.tld/@username
-linkedin_username: haeone-lee-882b301b1 # your LinkedIn user name
+linkedin_username: # your LinkedIn user name
 telegram_username: # your Telegram user name
-scholar_userid: GUXJi7sAAAAJ&hl=en # your Google Scholar ID
+scholar_userid: # your Google Scholar ID
 semanticscholar_id: # your Semantic Scholar ID
 whatsapp_number: # your WhatsApp number (full phone number in international format. Omit any zeroes, brackets, or dashes when adding the phone number in international format.)
 orcid_id: # your ORCID ID

_pages/about.md

Lines changed: 10 additions & 4 deletions
@@ -2,12 +2,12 @@
 layout: about
 title: about
 permalink: /
-subtitle: <strong>Ian<strong> Lee
+subtitle: haeone.lee@kaist.ac.kr

 profile:
   align: right
   image: prof_pic.jpg
-  image_circular: false # crops the image to make it circular
+  image_circular: true # crops the image to make it circular
   more_info: >
     <p> 📍 Seoul, Korea </p>
     <p> </p>
@@ -20,6 +20,12 @@ social: true # includes social icons at the bottom of the page
 ---
 <!-- Hi there, my name is Haeone Lee. My goal is to develop intelligence that is helpful to humans, consisting of any form e.g., physical embodiment(robots), or software(android agent). I believe in the power of **Reinforcement Learning**, in that sense (1) it can reach the optimal performance (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as enabling efficient exploration, long-horizon control, and safe and autonomous learning. To this end, I am interested in utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction capabilities. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts.

-Hi there, my name is Haeone Lee. My goal is to develop intelligent agent that can outperform human, while also being helpful. I believe in the power of **Reinforcement Learning**, in that sense (1) it can autonomously come up with the solution given only the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as enabling efficient exploration, long-horizon control, and safe and autonomous learning. To this end, I am interested in utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction capabilities. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts. -->
+Hi there, my name is Haeone Lee. My goal is to develop intelligent agent that can outperform human, while also being helpful. I believe in the power of **Reinforcement Learning**, in that sense (1) it can autonomously come up with the solution given only the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as enabling efficient exploration, long-horizon control, and safe and autonomous learning. To this end, I am interested in utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction capabilities. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts.

-I am interested in building intelligent agents that can self-improve to be useful for humans. Specifically, it should generate useful problems and solve them by leveraging prior knowledge with critics to validate the success. I believe in the power of Reinforcement Learning, in that sense (1) it can autonomously come up with the solution given the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as sample efficiency, long-horizon control, and safe and autonomous learning. I believe that utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction can help to achieve my goal. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts.
+I am interested in building intelligent agents that can self-improve to be useful for humans. Specifically, it should generate useful problems and solve them by leveraging prior knowledge with critics to validate the success. I believe in the power of Reinforcement Learning, in that sense (1) it can autonomously come up with the solution given the goal (2) it interacts with and adapts to the changing world (3) it is the closest to how animals ‘emerge’ the intelligence as part of goal pursuit. To make RL successful, I deem there are plenty of challenges to solve such as sample efficiency, long-horizon control, and safe and autonomous learning. I believe that utilizing prior knowledge(e.g., common sense, offline data), and equipping the algorithms with long-term memorizing, hierarchical decision-making, and good abstraction can help to achieve my goal. For details, [**this**](https://rl-max.github.io/assets/pdf/Creating_Artificial_Intelligence_from_the_World.pdf) briefly surveys my thoughts. -->
+
+I am a master's student at the KAIST Graduate School of AI, advised by Prof. [Kimin Lee](https://sites.google.com/view/kiminlee/home).
+
+I am interested in developing a safe and proficient decision-making agent for the real world (e.g., robots). To this end, I aim to develop methods that efficiently extract behavioral rules from existing data and help the agent continuously self-improve. Relevant topics include imitation learning on human data and scalable reinforcement learning (RL) algorithms that work both online and offline.
+
+Relevant keywords include imitation learning, hierarchical RL, exploration in RL, and robot learning.

_pages/repositories.md

Lines changed: 0 additions & 44 deletions
This file was deleted.

0 commit comments
