Commit 91eea24

Deploy to GitHub Pages: 1ac3429
1 parent 20448e0 commit 91eea24

11 files changed: +726 additions, −10 deletions


areas/adaptive-agents-and-foundation-models/index.html

Lines changed: 272 additions & 0 deletions
Large diffs are not rendered by default.

areas/concept-learning-abstraction/index.html

Lines changed: 2 additions & 1 deletion
@@ -4,7 +4,8 @@
 <meta property="og:title" content=" | Agentic Learning AI Lab" />
 <meta property="og:description" content="Scaling AI for lifelong learning and reasoning requires the ability to transform raw inputs into abstract concepts that can be efficiently composed to form more complex ones. Our lab has a strong focus on few-shot learning for concept acquisition. In recent research, we have enabled large-scale foundation models to incrementally learn new language and visual concepts. Our current efforts extend to recognizing functional and relational concepts, as well as exploring how learned concepts can be composed hierarchically for high-level reasoning. These advancements are key to building AI systems that generalize efficiently and adapt continuously to new tasks." />
 <meta property="og:type" content="website" />
-<meta property="og:url" content="https://agenticlearning.ai/research/concept-learning-abstraction" />
+<meta property="og:url" content="https://agenticlearning.ai/research/concept-learning-and-abstraction
+concept-learning-and-abstraction" />
 <!--Replace with the current website url-->
 <meta property="og:image" content="https://agenticlearning.ai//assets/images/home/concept_abstraction.png" />
 <meta charset="UTF-8" />

areas/concept-learning-and-abstraction/index.html

Lines changed: 136 additions & 0 deletions
@@ -142,6 +142,142 @@ <h3 class="text-left tw-text-2xl tw-font-medium max-md:tw-text-xl tw-mt-10">
 </div>

 <div class="tw-mt-8 tw-gap-10 tw-space-y-reverse sm:tw-columns-1 md:tw-columns-2 lg:tw-columns-3 xl:tw-columns-4 tw-items-start">
+<a href="/research/context-tuning">
+<div class="safari-padding-fix">
+<!-- reveal-up -->
+<!-- tw-rounded-lg -->
+<!-- hover:tw-shadow-lg tw-transition-shadow tw-duration-300 -->
+<!-- max-lg:tw-max-w-[400px] -->
+<div class="tw-flex tw-h-fit tw-break-inside-avoid tw-flex-col tw-gap-2 tw-border-transparent tw-border-2 hover:tw-border-solid hover:tw-border-gray-600 tw-bg-[#f3f3f3b4] tw-p-4 max-lg:tw-w-full tw-overflow-hidden tw-transition-all tw-duration-200">
+<div class="tw-flex tw-place-items-center tw-gap-3">
+<!-- tw-rounded-lg -->
+<div class="tw-h-[300px] tw-w-full tw-overflow-hidden ">
+<img src=/assets/images/papers/context_tuning.png
+class="tw-h-full tw-w-full tw-object-cover"
+alt="design"/>
+</div>
+<!-- tw-transition-transform tw-duration-300 tw-transform hover:tw-scale-110 -->
+</div>
+<div class="tw-flex tw-flex-col tw-gap-2">
+<h3 class="tw-text-xl tw-font-medium tw-mt-4">Context Tuning for In-Context Optimization</h3>
+<p class="tw-text-gray-600 tw-mt-4">
+Context Tuning is a simple and effective method to significantly enhance few-shot adaptation of LLMs without fine-tuning model parameters.
+</p>
+<p class="tw-mt-4">
+Published: 2025-07-06
+</p>
+<!-- <a href=/research/context-tuning class="tw-mt-4"> -->
+<div class="tw-mt-4">
+<span>Learn more</span>
+<i class="bi bi-arrow-right"></i>
+</div>
+<!-- </a> -->
+</div>
+</div>
+</div>
+</a>
+<a href="/research/discrete-jepa">
+<div class="safari-padding-fix">
+<!-- reveal-up -->
+<!-- tw-rounded-lg -->
+<!-- hover:tw-shadow-lg tw-transition-shadow tw-duration-300 -->
+<!-- max-lg:tw-max-w-[400px] -->
+<div class="tw-flex tw-h-fit tw-break-inside-avoid tw-flex-col tw-gap-2 tw-border-transparent tw-border-2 hover:tw-border-solid hover:tw-border-gray-600 tw-bg-[#f3f3f3b4] tw-p-4 max-lg:tw-w-full tw-overflow-hidden tw-transition-all tw-duration-200">
+<div class="tw-flex tw-place-items-center tw-gap-3">
+<!-- tw-rounded-lg -->
+<div class="tw-h-[300px] tw-w-full tw-overflow-hidden ">
+<img src=/assets/images/papers/discrete_jepa.png
+class="tw-h-full tw-w-full tw-object-cover"
+alt="design"/>
+</div>
+<!-- tw-transition-transform tw-duration-300 tw-transform hover:tw-scale-110 -->
+</div>
+<div class="tw-flex tw-flex-col tw-gap-2">
+<h3 class="tw-text-xl tw-font-medium tw-mt-4">Discrete JEPA: Learning Discrete Token Representations without Reconstruction</h3>
+<p class="tw-text-gray-600 tw-mt-4">
+Discrete-JEPA extends the latent predictive coding JEPA framework with semantic tokenization and complementary objectives for symbolic reasoning tasks.
+</p>
+<p class="tw-mt-4">
+Published: 2025-06-22
+</p>
+<!-- <a href=/research/discrete-jepa class="tw-mt-4"> -->
+<div class="tw-mt-4">
+<span>Learn more</span>
+<i class="bi bi-arrow-right"></i>
+</div>
+<!-- </a> -->
+</div>
+</div>
+</div>
+</a>
+<a href="/research/procreate">
+<div class="safari-padding-fix">
+<!-- reveal-up -->
+<!-- tw-rounded-lg -->
+<!-- hover:tw-shadow-lg tw-transition-shadow tw-duration-300 -->
+<!-- max-lg:tw-max-w-[400px] -->
+<div class="tw-flex tw-h-fit tw-break-inside-avoid tw-flex-col tw-gap-2 tw-border-transparent tw-border-2 hover:tw-border-solid hover:tw-border-gray-600 tw-bg-[#f3f3f3b4] tw-p-4 max-lg:tw-w-full tw-overflow-hidden tw-transition-all tw-duration-200">
+<div class="tw-flex tw-place-items-center tw-gap-3">
+<!-- tw-rounded-lg -->
+<div class="tw-h-[300px] tw-w-full tw-overflow-hidden ">
+<img src=/assets/images/papers/procreate.png
+class="tw-h-full tw-w-full tw-object-cover"
+alt="design"/>
+</div>
+<!-- tw-transition-transform tw-duration-300 tw-transform hover:tw-scale-110 -->
+</div>
+<div class="tw-flex tw-flex-col tw-gap-2">
+<h3 class="tw-text-xl tw-font-medium tw-mt-4">ProCreate, Don't Reproduce! Propulsive Energy Diffusion for Creative Generation</h3>
+<p class="tw-text-gray-600 tw-mt-4">
+ProCreate is a simple and easy-to-implement method to improve sample diversity and creativity of diffusion-based image generative models and to prevent training data reproduction.
+</p>
+<p class="tw-mt-4">
+Published: 2024-08-05
+</p>
+<!-- <a href=/research/procreate class="tw-mt-4"> -->
+<div class="tw-mt-4">
+<span>Learn more</span>
+<i class="bi bi-arrow-right"></i>
+</div>
+<!-- </a> -->
+</div>
+</div>
+</div>
+</a>
+<a href="/research/college">
+<div class="safari-padding-fix">
+<!-- reveal-up -->
+<!-- tw-rounded-lg -->
+<!-- hover:tw-shadow-lg tw-transition-shadow tw-duration-300 -->
+<!-- max-lg:tw-max-w-[400px] -->
+<div class="tw-flex tw-h-fit tw-break-inside-avoid tw-flex-col tw-gap-2 tw-border-transparent tw-border-2 hover:tw-border-solid hover:tw-border-gray-600 tw-bg-[#f3f3f3b4] tw-p-4 max-lg:tw-w-full tw-overflow-hidden tw-transition-all tw-duration-200">
+<div class="tw-flex tw-place-items-center tw-gap-3">
+<!-- tw-rounded-lg -->
+<div class="tw-h-[300px] tw-w-full tw-overflow-hidden ">
+<img src=/assets/images/papers/college.png
+class="tw-h-full tw-w-full tw-object-cover"
+alt="design"/>
+</div>
+<!-- tw-transition-transform tw-duration-300 tw-transform hover:tw-scale-110 -->
+</div>
+<div class="tw-flex tw-flex-col tw-gap-2">
+<h3 class="tw-text-xl tw-font-medium tw-mt-4">CoLLEGe: Concept Embedding Generation for Large Language Models</h3>
+<p class="tw-text-gray-600 tw-mt-4">
+CoLLEGe is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions.
+</p>
+<p class="tw-mt-4">
+Published: 2024-03-22
+</p>
+<!-- <a href=/research/college class="tw-mt-4"> -->
+<div class="tw-mt-4">
+<span>Learn more</span>
+<i class="bi bi-arrow-right"></i>
+</div>
+<!-- </a> -->
+</div>
+</div>
+</div>
+</a>
 </div>

 </div>

assets/images/papers/arq.png

2.27 MB

assets/images/thumbnails/arq.png

72.3 KB

assets/search-index.json

Lines changed: 19 additions & 9 deletions
@@ -1,4 +1,14 @@
 [
+{
+"type": "paper",
+"title": "Local Reinforcement Learning with Action-Conditioned Root Mean Squared Q-Functions",
+"authors": "Frank (Zequan) Wu and Mengye Ren",
+"abstract": "Action-conditioned Root mean squared Q-Functions (ARQ) is a novel backprop-free value estimation method that applies a goodness function and action conditioning for local reinforcement learning.",
+"image": "/assets/images/papers/arq.png",
+"thumbnail": "/assets/images/thumbnails/arq.png",
+"url": "/research/arq/",
+"keywords": "local reinforcement learning with action-conditioned root mean squared q-functions frank (zequan) wu mengye ren action-conditioned root mean squared q-functions (arq) is a novel backprop-free value estimation method that applies a goodness function and action conditioning for local reinforcement learning. adaptive-agents-and-foundation-models"
+},
 {
 "type": "paper",
 "title": "Midway Network: Learning Representations for Recognition and Motion from Latent Dynamics",
@@ -17,7 +27,7 @@
 "image": "/assets/images/papers/stream_mem.png",
 "thumbnail": "/assets/images/thumbnails/stream_mem.png",
 "url": "/research/stream-mem/",
-"keywords": "streammem: query-agnostic kv cache memory for streaming video understanding yanlai yang zhuokai zhao satya narayan shukla aashu singh shlok kumar mishra lizhu zhang mengye ren streammem is a query-agnostic kv cache memory mechanism for streaming video understanding. learning-from-visual-experience adaptive-foundation-models"
+"keywords": "streammem: query-agnostic kv cache memory for streaming video understanding yanlai yang zhuokai zhao satya narayan shukla aashu singh shlok kumar mishra lizhu zhang mengye ren streammem is a query-agnostic kv cache memory mechanism for streaming video understanding. learning-from-visual-experience adaptive-agents-and-foundation-models"
 },
 {
 "type": "paper",
@@ -27,7 +37,7 @@
 "image": "/assets/images/papers/context_tuning.png",
 "thumbnail": "/assets/images/thumbnails/context_tuning.png",
 "url": "/research/context-tuning/",
-"keywords": "context tuning for in-context optimization jack lu ryan teehan zhenbang yang mengye ren context tuning is a simple and effective method to significantly enhance few-shot adaptation of llms without fine-tuning model parameters. adaptive-foundation-models concept-learning-abstraction"
+"keywords": "context tuning for in-context optimization jack lu ryan teehan zhenbang yang mengye ren context tuning is a simple and effective method to significantly enhance few-shot adaptation of llms without fine-tuning model parameters. adaptive-agents-and-foundation-models concept-learning-and-abstraction"
 },
 {
 "type": "paper",
@@ -37,7 +47,7 @@
 "image": "/assets/images/papers/discrete_jepa.png",
 "thumbnail": "/assets/images/thumbnails/discrete_jepa.png",
 "url": "/research/discrete-jepa/",
-"keywords": "discrete jepa: learning discrete token representations without reconstruction junyeob baek hosung lee chris hoang mengye ren discrete-jepa extends the latent predictive coding jepa framework with semantic tokenization and complementary objectives for symbolic reasoning tasks. concept-learning-abstraction"
+"keywords": "discrete jepa: learning discrete token representations without reconstruction junyeob baek hosung lee chris hoang mengye ren discrete-jepa extends the latent predictive coding jepa framework with semantic tokenization and complementary objectives for symbolic reasoning tasks. concept-learning-and-abstraction"
 },
 {
 "type": "paper",
@@ -67,7 +77,7 @@
 "image": "/assets/images/papers/are_llms_prescient.png",
 "thumbnail": "/assets/images/thumbnails/are_llms_prescient.png",
 "url": "/research/are-llms-prescient/",
-"keywords": "are llms prescient? a continuous evaluation using daily news as oracle amelia (hui) dai ryan teehan mengye ren our new benchmark, daily oracle, automatically generates question-answer (qa) pairs from daily news, challenging llms to predict \"future\" events based on pre-training data. adaptive-foundation-models"
+"keywords": "are llms prescient? a continuous evaluation using daily news as oracle amelia (hui) dai ryan teehan mengye ren our new benchmark, daily oracle, automatically generates question-answer (qa) pairs from daily news, challenging llms to predict \"future\" events based on pre-training data. adaptive-agents-and-foundation-models"
 },
 {
 "type": "paper",
@@ -87,7 +97,7 @@
 "image": "/assets/images/papers/procreate.png",
 "thumbnail": "/assets/images/thumbnails/procreate.png",
 "url": "/research/procreate/",
-"keywords": "procreate, don't reproduce! propulsive energy diffusion for creative generation jack lu ryan teehan mengye ren procreate is a simple and easy-to-implement method to improve sample diversity and creativity of diffusion-based image generative models and to prevent training data reproduction. concept-learning-abstraction"
+"keywords": "procreate, don't reproduce! propulsive energy diffusion for creative generation jack lu ryan teehan mengye ren procreate is a simple and easy-to-implement method to improve sample diversity and creativity of diffusion-based image generative models and to prevent training data reproduction. concept-learning-and-abstraction"
 },
 {
 "type": "paper",
@@ -107,7 +117,7 @@
 "image": "/assets/images/papers/college.png",
 "thumbnail": "/assets/images/thumbnails/college.png",
 "url": "/research/college/",
-"keywords": "college: concept embedding generation for large language models ryan teehan brenden m. lake mengye ren college is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions. adaptive-foundation-models concept-learning-abstraction"
+"keywords": "college: concept embedding generation for large language models ryan teehan brenden m. lake mengye ren college is a meta-learning framework capable of generating flexible embeddings for new concepts using a small number of example sentences or definitions. adaptive-agents-and-foundation-models concept-learning-and-abstraction"
 },
 {
 "type": "paper",
@@ -117,7 +127,7 @@
 "image": "/assets/images/papers/reawakening.png",
 "thumbnail": "/assets/images/thumbnails/reawakening.png",
 "url": "/research/anticipatory-recovery/",
-"keywords": "reawakening knowledge: anticipatory recovery from catastrophic interference via structured training yanlai yang matt jones michael c. mozer mengye ren we discover a curious and remarkable property of llms fine-tuned sequentially in this setting: they exhibit anticipatory behavior, recovering from the forgetting on documents before encountering them again. adaptive-foundation-models"
+"keywords": "reawakening knowledge: anticipatory recovery from catastrophic interference via structured training yanlai yang matt jones michael c. mozer mengye ren we discover a curious and remarkable property of llms fine-tuned sequentially in this setting: they exhibit anticipatory behavior, recovering from the forgetting on documents before encountering them again. adaptive-agents-and-foundation-models"
 },
 {
 "type": "paper",
@@ -137,7 +147,7 @@
 "image": "/assets/images/papers/learning_and_forgetting_llm.png",
 "thumbnail": "/assets/images/thumbnails/learning_and_forgetting_llm.png",
 "url": "/research/learning-forgetting-llms/",
-"keywords": "learning and forgetting unsafe examples in large language models jiachen zhao zhun deng david madras james zou mengye ren we explore the behavior of llms finetuned on noisy custom data containing unsafe content and propose a simple filtering algorithm for detecting harmful content based on the phenomenon of selective forgetting. adaptive-foundation-models"
+"keywords": "learning and forgetting unsafe examples in large language models jiachen zhao zhun deng david madras james zou mengye ren we explore the behavior of llms finetuned on noisy custom data containing unsafe content and propose a simple filtering algorithm for detecting harmful content based on the phenomenon of selective forgetting. adaptive-agents-and-foundation-models"
 },
 {
 "type": "paper",
@@ -147,7 +157,7 @@
 "image": "/assets/images/papers/lifelong_memory.png",
 "thumbnail": "/assets/images/thumbnails/lifelong_memory.png",
 "url": "/research/lifelong-memory/",
-"keywords": "lifelongmemory: leveraging llms for answering queries in long-form egocentric videos ying wang yanlai yang mengye ren lifelongmemory is a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval. learning-from-visual-experience adaptive-foundation-models"
+"keywords": "lifelongmemory: leveraging llms for answering queries in long-form egocentric videos ying wang yanlai yang mengye ren lifelongmemory is a new framework for accessing long-form egocentric videographic memory through natural language question answering and retrieval. learning-from-visual-experience adaptive-agents-and-foundation-models"
 },
 {
 "type": "person",
