Retentive Network: A Successor to Transformer for Large Language Models (arxiv.org)

This is an exciting new paper that replaces the attention mechanism in the Transformer architecture with a set of decomposable matrix operations (retention). It retains the modeling capacity of Transformer models while allowing both parallel training and efficient RNN-like inference, all without softmax attention.
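A minimal sketch of the idea, assuming a single retention head with no gating or normalization (names like `gamma`, `Q`, `K`, `V` follow the paper's notation; the decay value is illustrative). Because the causal decay mask decomposes, the same output can be computed in a parallel form for training or a constant-state recurrent form for inference:

```python
import numpy as np

# Hypothetical minimal sketch of retention (single head, no gating or
# group normalization). Shows the parallel and recurrent forms agree.
rng = np.random.default_rng(0)
T, d = 5, 4          # sequence length, head dimension
gamma = 0.9          # per-head decay factor (illustrative value)
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

# Parallel form (training): O = (Q K^T * D) V, where D is a causal
# decay mask D[n, m] = gamma**(n - m) for n >= m, else 0. No softmax.
idx = np.arange(T)
D = np.where(idx[:, None] >= idx[None, :],
             gamma ** (idx[:, None] - idx[None, :]), 0.0)
O_parallel = (Q @ K.T * D) @ V

# Recurrent form (inference): S_n = gamma * S_{n-1} + K_n^T V_n,
# O_n = Q_n S_n — a fixed-size d x d state per step instead of a
# key/value cache that grows with sequence length.
S = np.zeros((d, d))
O_recurrent = np.zeros((T, d))
for t in range(T):
    S = gamma * S + np.outer(K[t], V[t])
    O_recurrent[t] = Q[t] @ S

assert np.allclose(O_parallel, O_recurrent)
```

The equivalence holds because both forms compute O_n = sum over m <= n of gamma^(n-m) (Q_n . K_m) V_m; the recurrence just accumulates the decayed sum incrementally.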
