Tuesday 6 February 2024

Beyond self-attention: How a small language model predicts the next token

463 by tplrbv | 85 comments



New exponent functions that make SiLU and SoftMax 2x faster, at full accuracy

379 by weinzierl | 72 comments