Releases: lucidrains/memory-efficient-attention-pytorch

0.0.12 (04 Mar 22:00)

0.0.11 (04 Mar 18:35)
expose memory efficient attention functions in __init__.py

0.0.10 (04 Mar 16:58)
fix causal mask leading to high memory consumption at sequence length 65536

0.0.9 (04 Mar 16:54)
remove the usual per-head scale from cosine similarity attention

0.0.8 (04 Mar 16:50)

0.0.7 (04 Mar 01:26)
add ability to add attention bias, for dynamic positional bias and ex…

0.0.6 (04 Mar 01:04)
if the chunk is to be all masked out causally, skip summarizing the b…
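The 0.0.6 note describes an optimization: under a causal mask, a key/value chunk that lies entirely in the future of every query in the current query chunk contributes nothing and can be skipped outright. A hypothetical helper sketching that classification (the function name and return values are illustrative assumptions, not the library's actual code):

```python
def chunk_mask_status(q_start, q_len, k_start, k_len):
    """Classify a (query chunk, key chunk) pair under a causal mask.

    Returns 'skip' when every key position lies strictly after every
    query position (the chunk is fully masked out and can be ignored),
    'full' when no key position is masked for any query, and 'partial'
    when a per-position causal mask must still be applied.
    """
    q_last = q_start + q_len - 1  # last query position in the chunk
    k_last = k_start + k_len - 1  # last key position in the chunk
    if k_start > q_last:
        return 'skip'     # all keys are in the future of all queries
    if k_last <= q_start:
        return 'full'     # all keys are visible to all queries
    return 'partial'      # chunk straddles the causal boundary
```

With chunks of size 4, the key chunk starting at position 4 is fully masked for the query chunk covering positions 0..3, fully visible for queries 8..11, and needs a partial mask on the diagonal.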

0.0.5 (04 Mar 00:39)
fix bug with scale

0.0.4 (04 Mar 00:32)
release

0.0.2 (04 Mar 00:13)
complete numerical stability for memory efficient attention
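The 0.0.2 note refers to computing softmax attention over key/value chunks without ever materializing the full attention matrix, kept numerically stable by tracking a running max and renormalizing previous partial sums (the technique from Rabe & Staats, "Self-attention Does Not Need O(n²) Memory"). A minimal NumPy sketch of the idea; the library itself is PyTorch, and the function names here are illustrative, not its API:

```python
import numpy as np

def reference_attention(q, k, v):
    # Standard softmax attention, for comparison.
    s = (q @ k.T) * (q.shape[-1] ** -0.5)
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    p = p / p.sum(axis=-1, keepdims=True)
    return p @ v

def chunked_attention(q, k, v, chunk_size):
    # Process keys/values chunk by chunk, never holding more than a
    # (num_queries, chunk_size) slice of attention scores in memory.
    scale = q.shape[-1] ** -0.5
    m = np.full(q.shape[0], -np.inf)            # running max per query
    l = np.zeros(q.shape[0])                    # running softmax normalizer
    acc = np.zeros((q.shape[0], v.shape[-1]))   # running weighted value sum
    for i in range(0, k.shape[0], chunk_size):
        kc, vc = k[i:i + chunk_size], v[i:i + chunk_size]
        s = (q @ kc.T) * scale                  # scores for this chunk only
        m_new = np.maximum(m, s.max(axis=-1))
        alpha = np.exp(m - m_new)               # rescale stats from prior chunks
        p = np.exp(s - m_new[:, None])          # stable exponentials
        l = l * alpha + p.sum(axis=-1)
        acc = acc * alpha[:, None] + p @ vc
        m = m_new
    return acc / l[:, None]
```

Because every exponential is taken relative to the running max, no intermediate overflows even for long sequences, and the chunked result matches the full-matrix softmax exactly up to floating-point error.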