JaxARC is a JAX-based reinforcement learning environment for the Abstraction and Reasoning Corpus (ARC) challenge. It is built for researchers who want extremely fast, vectorized environments for exploring reinforcement learning and meta-learning approaches to abstract reasoning.
Fast. Environments compile with jax.jit and vectorize with jax.vmap, so you can run thousands of episodes in parallel on GPU/TPU.
Flexible. Multiple action spaces (point-based, selection masks, bounding boxes). Multiple datasets (ARC-AGI, ConceptARC, MiniARC). Observation wrappers for different input formats. Configure everything via typed dataclasses or YAML (see the config sketch below).
Extensible. Clean parser interface for custom datasets. Wrapper system for custom observations and actions. Built with future HRL and Meta-RL experiments in mind.
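To illustrate the dataclass-style configuration mentioned above, here is a minimal sketch. The class name, field names, and the commented-out constructor are illustrative assumptions, not JaxARC's actual configuration API; see the documentation for the real config classes.

```python
from dataclasses import dataclass

# Hypothetical config dataclass -- the field names below are illustrative
# assumptions, not JaxARC's actual configuration schema.
@dataclass
class EnvConfig:
    dataset: str = "ConceptARC"           # e.g. "ARC-AGI-1", "MiniARC"
    action_space: str = "selection_mask"  # or "point", "bounding_box"
    max_steps: int = 100

config = EnvConfig(dataset="MiniARC", action_space="point")
# env = jaxarc.make(config)  # assumed constructor taking a typed config object
```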
- JAX-Native: Pure functional API — every function is jax.jit-compatible
- Lightning Fast: JIT compilation turns Python into XLA-optimized machine code (see the vectorization sketch after this list)
- Configurable: Multiple action spaces, reward functions, and observation formats
- Multiple Datasets: ARC-AGI-1, ARC-AGI-2, ConceptARC, and MiniARC included
- Type-Safe: Full type hints with runtime validation
- Visual Debug: Terminal and SVG rendering for development
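The sketch below shows what the vectorization claim looks like in practice: reset and step a batch of environments with jax.vmap and jax.jit. The jaxarc.make factory, the reset/step signatures, and the action_space.sample helper are assumptions for illustration; only the JAX calls are guaranteed APIs.

```python
import jax

import jaxarc  # hypothetical import path for illustration

# Assumed factory function; the real constructor may differ.
env = jaxarc.make("ArcEnv-v0")

@jax.jit
def step_randomly(state, key):
    # Sample a random action and advance one environment step.
    action = env.action_space.sample(key)        # assumed helper
    state, timestep = env.step(state, action)    # assumed signature
    return state, timestep.reward                # assumed TimeStep field

# Reset 1024 independent environments in parallel.
keys = jax.random.split(jax.random.PRNGKey(0), 1024)
states, timesteps = jax.vmap(env.reset)(keys)    # assumed reset signature

# Step all of them at once as a single batched computation.
step_keys = jax.random.split(jax.random.PRNGKey(1), 1024)
states, rewards = jax.vmap(step_randomly)(states, step_keys)
```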
```bash
pip install jaxarc
```

For a development setup:

```bash
git clone https://github.com/aadimator/JaxARC.git
cd JaxARC
pixi shell                          # Sets up the environment
pixi run -e dev pre-commit install  # Hooks for code quality
```

See the tutorials for training loops, custom wrappers, and dataset management.
JaxARC implements the Stoa API, so it integrates directly with Stoix, a JAX-based reinforcement learning codebase supporting various RL algorithms.
This means you can plug JaxARC environments into Stoix's training pipelines and reuse its efficient algorithm implementations.
See the jaxarc-baselines repository for example implementations of agents trained on JaxARC environments with Stoix.
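As a rough idea of what the interaction loop looks like before handing the environment to Stoix, here is a single-episode sketch. It assumes a dm_env-like TimeStep interface (reset(key), step(state, action), timestep.last()); the exact names and signatures in JaxARC and Stoa may differ.

```python
import jax

import jaxarc  # hypothetical import path for illustration

env = jaxarc.make("ArcEnv-v0")        # assumed factory

key = jax.random.PRNGKey(0)
state, timestep = env.reset(key)      # assumed: functional reset returning (state, TimeStep)

# Random-policy episode; Stoix would replace this loop with its own
# training pipeline once the environment is plugged in.
while not timestep.last():            # assumed dm_env-style termination check
    key, action_key = jax.random.split(key)
    action = env.action_space.sample(action_key)   # assumed helper
    state, timestep = env.step(state, action)      # assumed signature
```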
Found a bug? Want a feature? Open an issue or submit a PR.
JaxARC builds on great work from the community:
- ARC Challenge by François Chollet — The original dataset and challenge
- ARCLE — Python-based ARC environment (inspiration for our design)
- Stoix by Edan Toledo — Single-agent RL in JAX (we use their Stoa API)
If you use JaxARC in your research:
```bibtex
@software{jaxarc2025,
  author = {Aadam},
  title = {JaxARC: JAX-based Reinforcement Learning for Abstract Reasoning},
  year = {2025},
  url = {https://github.com/aadimator/JaxARC}
}
```

MIT License — see LICENSE for details.
- Bugs/Features: GitHub Issues
- Discussions: GitHub Discussions
- Docs: jaxarc.readthedocs.io
