RLS autoencoder #4


Open
snapo opened this issue Nov 30, 2023 · 2 comments

Comments

@snapo
Contributor

snapo commented Nov 30, 2023

Hi,
Did you ever figure out how it would be possible to create an autoencoder with RLS? For example, with the MNIST dataset, to remove noise or to generate new digits.

Normally an autoencoder does something like 784 -> 256 -> 784, either for compression or to generate new images by starting from the 256-unit hidden layer.
Is this somehow possible?

@hunar4321
Owner

It is difficult to compete with the standard autoencoder approach because with the current RLS approach we can only fit one layer of weights.
You can do this: 784 -> 784 -> 784
1. Make a random projection of the 784 inputs into 784 nodes (or more) in the first layer.
2. Apply a non-linear activation function such as relu or tanh.
3. Use RLS to map the output of the non-linear layer onto the 784 outputs, with the target output = input.
For better performance, increase the number of neurons in the middle layer (i.e. more than 784), but this can be computationally intensive because RLS has O(n²) complexity in the hidden size.
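The three steps above can be sketched in plain NumPy. This is a minimal illustration, not code from this repo: the class name, dimensions, and the tanh/ridge choices are assumptions; only the output layer is trained, online, with the standard RLS (rank-1) update.

```python
import numpy as np

class RLSAutoencoder:
    """Random-projection 'autoencoder': n_in -> n_hidden -> n_in.
    The projection is fixed; only the output layer is fitted with
    recursive least squares (RLS), using the input as the target."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Step 1: fixed random projection (never trained)
        self.W_proj = rng.standard_normal((n_in, n_hidden)) / np.sqrt(n_in)
        # Step 3 state: output weights and inverse-correlation estimate
        self.W_out = np.zeros((n_hidden, n_in))
        self.P = np.eye(n_hidden)

    def hidden(self, x):
        # Step 2: non-linearity on top of the random projection
        return np.tanh(x @ self.W_proj)

    def update(self, x):
        """One RLS step, O(n_hidden^2), with reconstruction target = x."""
        h = self.hidden(x)
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)          # gain vector
        err = x - h @ self.W_out         # current reconstruction error
        self.W_out += np.outer(k, err)   # rank-1 weight correction
        self.P -= np.outer(k, Ph)        # covariance downdate

    def reconstruct(self, x):
        return self.hidden(x) @ self.W_out

# Toy demo: random vectors stand in for flattened MNIST digits.
ae = RLSAutoencoder(n_in=784, n_hidden=1024)
data = np.random.default_rng(1).random((200, 784))
for x in data:
    ae.update(x)
err = np.mean((ae.reconstruct(data[0]) - data[0]) ** 2)
```

The O(n²) cost mentioned above lives entirely in the `P` update, which is why widening the hidden layer beyond 784 quickly gets expensive.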

@snapo
Contributor Author

snapo commented Dec 7, 2023

That's a pretty good idea :-)
Only the O(n²) cost will "kinda" be a problem :-)

Thanks for sharing...
