README.md (26 changes: 7 additions & 19 deletions)

@@ -38,21 +38,7 @@ ______________________________________________________________________
 - Added [IC-Light](https://github.com/lllyasviel/IC-Light) to manipulate the illumination of images
 - Added Multi Upscaler for high-resolution image generation, inspired by [Clarity Upscaler](https://github.com/philz1337x/clarity-upscaler) ([HF Space](https://huggingface.co/spaces/finegrain/enhancer))
 - Added [HQ-SAM](https://arxiv.org/abs/2306.01567) for high-quality mask prediction with Segment Anything
-- Added [SDXL-Lightning](https://arxiv.org/abs/2402.13929)
-- Added [Latent Consistency Models](https://arxiv.org/abs/2310.04378) and [LCM-LoRA](https://arxiv.org/abs/2311.05556) for Stable Diffusion XL
-- Added [Style Aligned adapter](https://arxiv.org/abs/2312.02133) to Stable Diffusion models
-- Added [ControlLoRA (v2) adapter](https://github.com/HighCWu/control-lora-v2) to Stable Diffusion XL
-- Added [Euler's method](https://arxiv.org/abs/2206.00364) to solvers (contributed by [@israfelsr](https://github.com/israfelsr))
-- Added [DINOv2](https://github.com/facebookresearch/dinov2) for high-performance visual features (contributed by [@Laurent2916](https://github.com/Laurent2916))
-- Added [FreeU](https://github.com/ChenyangSi/FreeU) for improved quality at no cost (contributed by [@isamu-isozaki](https://github.com/isamu-isozaki))
-- Added [Restart Sampling](https://github.com/Newbeeer/diffusion_restart_sampling) for improved image generation ([example](https://github.com/Newbeeer/diffusion_restart_sampling/issues/4))
-- Added [Self-Attention Guidance](https://github.com/KU-CVLAB/Self-Attention-Guidance/) to avoid e.g. too smooth images ([example](https://github.com/SusungHong/Self-Attention-Guidance/issues/4))
-- Added [T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) for extra guidance ([example](https://github.com/TencentARC/T2I-Adapter/discussions/93))
-- Added [MultiDiffusion](https://github.com/omerbt/MultiDiffusion) for e.g. panorama images
-- Added [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter), aka image prompt ([example](https://github.com/tencent-ailab/IP-Adapter/issues/92))
-- Added [Segment Anything](https://github.com/facebookresearch/segment-anything) to foundation models
-- Added [SDXL 1.0](https://github.com/Stability-AI/generative-models) to foundation models
-- Made possible to add new concepts to the CLIP text encoder, e.g. via [Textual Inversion](https://arxiv.org/abs/2208.01618)
+- ...see past [releases](https://github.com/finegrain-ai/refiners/releases)
 
 ## Installation
 
@@ -68,6 +54,12 @@ rye sync --all-features
 
 Refiners comes with a MkDocs-based documentation website available at https://refine.rs. There you will find a [quick start guide](https://refine.rs/getting-started/recommended/), a description of the [key concepts](https://refine.rs/concepts/chain/), as well as in-depth foundation model adaptation [guides](https://refine.rs/guides/adapting_sdxl/).
 
+## Projects using Refiners
+
+- [Finegrain Editor](https://editor.finegrain.ai/signup?utm_source=github&utm_campaign=refiners): use state-of-the-art visual AI skills to edit product photos
+- [Visoid](https://www.visoid.com/): AI-powered architectural visualization
+- [imaginAIry](https://github.com/brycedrennan/imaginAIry): Pythonic AI generation of images and videos
+
 ## Awesome Adaptation Papers
 
 If you're interested in understanding the diversity of use cases for foundation model adaptation (potentially beyond the specific adapters supported by Refiners), we suggest you take a look at these outstanding papers:
@@ -81,10 +73,6 @@ If you're interested in understanding the diversity of use cases for foundation
 - [Cross Modality Attention Adapter](https://arxiv.org/abs/2307.01124)
 - [UniAdapter](https://arxiv.org/abs/2302.06605)
 
-## Projects using Refiners
-
-- https://github.com/brycedrennan/imaginAIry
-
 ## Credits
 
 We took inspiration from these great projects:
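One note for context: the documentation paragraph retained in the diff above points to Refiners' key Chain concept. The sketch below illustrates, under assumptions, what composing a small model as a Chain looks like; the specific layers and sizes are illustrative, and https://refine.rs/concepts/chain/ remains the authoritative reference.

```python
import torch

import refiners.fluxion.layers as fl

# A minimal sketch of the Chain concept: a model is declared by listing
# its layers, and the resulting Chain is itself a callable module.
# Layer choices and sizes here are illustrative assumptions.
model = fl.Chain(
    fl.Linear(in_features=4, out_features=8),
    fl.ReLU(),
    fl.Linear(in_features=8, out_features=4),
)

x = torch.randn(1, 4)
y = model(x)  # a tensor of shape (1, 4)
```

Roughly speaking, this declarative structure is what the adaptation guides build on: an adapter can locate a target layer inside a Chain and wrap or replace it without modifying the model's source.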
pyproject.toml (3 changes: 3 additions & 0 deletions)

@@ -163,6 +163,9 @@ extend-ignore-identifiers-re = ["NDArray*", "interm", "af000ded"]
 [tool.typos.default.extend-words]
 adaptee = "adaptee" # Common name for an adapter's target
 
+[tool.typos.default.extend-identifiers]
+imaginAIry = "imaginAIry"
+
 [tool.pytest.ini_options]
 filterwarnings = [
     "ignore::UserWarning:segment_anything_hq.modeling.tiny_vit_sam.*",