Hi,
Did you figure out how it would be possible to create an autoencoder with RLS?
For example, with the MNIST dataset, to remove noise or to generate new digits.
Normally an autoencoder does something like 784 -> 256 -> 784, either for compression or to generate new images by starting from the 256-unit hidden layer.
Is this somehow possible?
It is difficult to compete with the standard autoencoder approach, because with the current RLS approach we can only train one layer of weights.
You can do this instead: 784 -> 784 -> 784
1. Make a random projection of the 784 inputs into 784 nodes (or more) in the first layer.
2. Apply a non-linear activation function such as ReLU or tanh.
3. Use RLS to map the output of the non-linear layer onto the 784 outputs, with output = input.
For better performance, increase the number of neurons in the middle layer (i.e. more than 784), but this can be computationally intensive because RLS has O(n²) complexity in the hidden size.
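The three steps above can be sketched roughly as follows. This is a minimal NumPy illustration, not the library's actual API: the dimensions, the forgetting factor `lam`, and the initial scale of `P` are all assumed hyper-parameters, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small stand-in dimensions for the 784 -> 784 -> 784 example.
d_in, d_hid = 64, 128

# Step 1: fixed random projection into the hidden layer (never trained).
W1 = rng.normal(0.0, 1.0 / np.sqrt(d_in), (d_hid, d_in))

def hidden(x):
    # Step 2: non-linear activation on the random projection.
    return np.tanh(W1 @ x)

# Step 3: RLS trains only the readout W2, mapping hidden -> output,
# with target = input (the autoencoder objective).
lam = 0.999                    # forgetting factor (assumed)
P = np.eye(d_hid) * 100.0      # inverse-correlation estimate, large init
W2 = np.zeros((d_in, d_hid))

def rls_step(x, target):
    global P, W2
    h = hidden(x)
    k = P @ h / (lam + h @ P @ h)   # RLS gain vector
    e = target - W2 @ h             # reconstruction error before update
    W2 += np.outer(e, k)            # rank-1 weight update
    P = (P - np.outer(k, h @ P)) / lam

# Train on synthetic data, one sample at a time (output = input).
X = rng.normal(size=(500, d_in))
for x in X:
    rls_step(x, x)
```

Note that `P` is `d_hid x d_hid`, which is where the O(n²) cost per sample comes from: widening the hidden layer grows both the memory and the update cost quadratically.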