Removal of multihead attention from activation #13

Description

@dreamer2368

Per @punkduckable, multihead attention should be implemented as a layer, not as an activation function. The current implementation, however, simply treats multihead attention as an activation function, which also disrupts the overall structure of MultiLayerPerceptron.

Multihead attention should be removed from the activation functions and probably implemented as a derived class of the latent space, along the lines of the sketch below.
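A minimal sketch of what the refactor could look like, assuming the project uses PyTorch. The `LatentSpace` base class and its interface here are hypothetical placeholders for the project's actual latent-space class; only the idea of attention as a standalone layer, composed with an ordinary MLP rather than injected into it as an activation, is meant to carry over.

```python
import torch
import torch.nn as nn


class LatentSpace(nn.Module):
    """Hypothetical stand-in for the project's latent-space base class."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.latent_dim = latent_dim

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        raise NotImplementedError


class MultiheadAttentionLatentSpace(LatentSpace):
    """Multihead attention applied as its own layer on the latent coordinates,
    instead of replacing one of MultiLayerPerceptron's activation functions."""

    def __init__(self, latent_dim: int, num_heads: int = 1):
        super().__init__(latent_dim)
        self.attention = nn.MultiheadAttention(
            embed_dim=latent_dim, num_heads=num_heads, batch_first=True
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Self-attention over the latent sequence; the MLP keeps its usual
        # elementwise activations (tanh, ReLU, ...) untouched.
        attn_out, _ = self.attention(z, z, z)
        return attn_out


if __name__ == "__main__":
    # Usage sketch: (batch, sequence, latent_dim) latent trajectories.
    z = torch.randn(8, 16, 5)
    latent = MultiheadAttentionLatentSpace(latent_dim=5, num_heads=1)
    print(latent(z).shape)  # torch.Size([8, 16, 5])
```

This keeps MultiLayerPerceptron a plain stack of linear layers and elementwise activations, while attention becomes a composable latent-space variant that can be selected or omitted independently.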

Labels: enhancement (New feature or request)
