Transformers have changed the way artificial intelligence works, especially in understanding language and learning from data. At the core of these models are tensors (a generalized form of the mathematical matrix that carries information through the network). As data moves through the different components of a Transformer, these tensors undergo a series of transformations that help the model make sense of things like sentences or images. Learning how tensors work inside Transformers can help you understand how today’s smartest AI systems actually work and think.
What This Article Covers and What It Doesn’t
✅ This Article IS About:
- The flow of tensors from input to output within a Transformer model.
- Ensuring dimensional coherence throughout the computational process.
- The step-by-step transformations that tensors undergo in the various Transformer layers.
❌ This Article IS NOT About:
- A general introduction to Transformers or deep learning.
- The detailed architecture of Transformer models.
- The training process or hyperparameter tuning of Transformers.
How Tensors Work Inside Transformers
A Transformer consists of two main components:
- Encoder: Processes the input data, capturing contextual relationships to create meaningful representations.
- Decoder: Uses these representations to generate coherent output, predicting each element sequentially.
Tensors are the fundamental data structures that pass through these components, undergoing a series of transformations that ensure dimensional coherence and proper information flow.

Input Embedding Layer
Before entering the Transformer, raw input tokens (words, subwords, or characters) are converted into dense vector representations by the embedding layer. This layer functions as a lookup table that maps each token to its vector, capturing semantic relationships with other words.

For a batch of 5 sentences, each with a sequence length of 12 tokens and an embedding dimension of 768, the tensor shape is:
- Tensor shape:
[batch_size, seq_len, embedding_dim] → [5, 12, 768]
After embedding, positional encoding is added, ensuring that order information is preserved without altering the tensor shape.
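A minimal sketch of this step in PyTorch; the vocabulary size and the use of a learned positional table (rather than sinusoidal encodings) are assumptions made only for illustration:

```python
import torch
import torch.nn as nn

batch_size, seq_len, embedding_dim = 5, 12, 768
vocab_size = 30_000  # assumed vocabulary size, for illustration only

# Embedding layer: a lookup table mapping each token id to a dense vector
embedding = nn.Embedding(vocab_size, embedding_dim)
token_ids = torch.randint(0, vocab_size, (batch_size, seq_len))
x = embedding(token_ids)                      # [5, 12, 768]

# Positional information (here a learned table, one vector per position)
pos_embedding = nn.Embedding(seq_len, embedding_dim)
x = x + pos_embedding(torch.arange(seq_len))  # broadcasts over the batch; shape unchanged

print(x.shape)  # torch.Size([5, 12, 768])
```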

Multi-Head Consideration Mechanism
Some of the essential parts of the Transformer is the Multi-Head Consideration (MHA) mechanism. It operates on three matrices derived from enter embeddings:
- Question (Q)
- Key (Okay)
- Worth (V)
These matrices are generated utilizing learnable weight matrices:
- Wq, Wk, Wv of form
[embedding_dim, d_model]
(e.g.,[768, 512]
). - The ensuing Q, Okay, V matrices have dimensions
[batch_size, seq_len, d_model]
.
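A sketch of these projections under the example dimensions, using plain nn.Linear layers to play the role of the learnable Wq, Wk, Wv:

```python
import torch
import torch.nn as nn

batch_size, seq_len, embedding_dim, d_model = 5, 12, 768, 512
x = torch.randn(batch_size, seq_len, embedding_dim)  # embedded input

# Learnable projection matrices Wq, Wk, Wv of shape [embedding_dim, d_model]
W_q = nn.Linear(embedding_dim, d_model, bias=False)
W_k = nn.Linear(embedding_dim, d_model, bias=False)
W_v = nn.Linear(embedding_dim, d_model, bias=False)

Q, K, V = W_q(x), W_k(x), W_v(x)
print(Q.shape, K.shape, V.shape)  # each: torch.Size([5, 12, 512])
```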

Splitting Q, K, V into Multiple Heads
For effective parallelization and improved learning, MHA splits Q, K, and V into multiple heads. Suppose we have 8 attention heads:
- Each head operates on a subspace of size d_model / head_count.
- The reshaped tensor dimensions are [batch_size, seq_len, head_count, d_model / head_count].
- Example: [5, 12, 8, 64] → rearranged to [5, 8, 12, 64] so that each head attends over the full sequence with its own slice of the representation.
- Each head therefore receives its own Qi, Ki, Vi (see the sketch below).
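The reshape and transpose can be expressed as follows; the same operation applies to K and V:

```python
import torch

batch_size, seq_len, d_model, head_count = 5, 12, 512, 8
head_dim = d_model // head_count  # 64

Q = torch.randn(batch_size, seq_len, d_model)

# [5, 12, 512] -> [5, 12, 8, 64] -> [5, 8, 12, 64]
Q_heads = Q.view(batch_size, seq_len, head_count, head_dim).transpose(1, 2)
print(Q_heads.shape)  # torch.Size([5, 8, 12, 64])
```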

Attention Calculation
Each head computes scaled dot-product attention using the formula:

Attention(Q, K, V) = softmax(Q·Kᵀ / √d_k)·V,  where d_k = d_model / head_count

Once attention is computed for all heads, the outputs are concatenated and passed through a linear transformation, restoring the initial tensor shape.
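A compact sketch of this computation across all heads; the output projection W_o mapping back to embedding_dim is an assumption chosen so the result lines up with the residual connection in the next step:

```python
import math
import torch
import torch.nn as nn

batch_size, seq_len, embedding_dim = 5, 12, 768
d_model, head_count = 512, 8
head_dim = d_model // head_count  # 64

# Per-head Q, K, V as produced by the split in the previous step
Q = torch.randn(batch_size, head_count, seq_len, head_dim)
K = torch.randn(batch_size, head_count, seq_len, head_dim)
V = torch.randn(batch_size, head_count, seq_len, head_dim)

# Attention(Q, K, V) = softmax(Q·K^T / sqrt(d_k))·V, computed per head
scores = Q @ K.transpose(-2, -1) / math.sqrt(head_dim)   # [5, 8, 12, 12]
weights = torch.softmax(scores, dim=-1)
attended = weights @ V                                    # [5, 8, 12, 64]

# Concatenate the heads, then project back to the embedding dimension
concat = attended.transpose(1, 2).reshape(batch_size, seq_len, d_model)  # [5, 12, 512]
W_o = nn.Linear(d_model, embedding_dim, bias=False)
output = W_o(concat)
print(output.shape)  # torch.Size([5, 12, 768]), restoring the initial shape
```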


Residual Connection and Normalization
After the multi-head attention mechanism, a residual connection is added, followed by layer normalization (sketched below):
- Residual connection: Output = Embedding Tensor + Multi-Head Attention Output
- Normalization: (Output − μ) / σ to stabilize training
- Tensor shape remains [batch_size, seq_len, embedding_dim]
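A minimal sketch of the residual-plus-normalization step; nn.LayerNorm handles the per-feature mean and variance, plus a learned scale and shift:

```python
import torch
import torch.nn as nn

batch_size, seq_len, embedding_dim = 5, 12, 768

embeddings = torch.randn(batch_size, seq_len, embedding_dim)   # sub-layer input
attn_output = torch.randn(batch_size, seq_len, embedding_dim)  # multi-head attention output

layer_norm = nn.LayerNorm(embedding_dim)

# Residual connection followed by layer normalization
out = layer_norm(embeddings + attn_output)
print(out.shape)  # torch.Size([5, 12, 768]); shape is unchanged
```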

Feed-Forward Network (FFN)
After attention and normalization, each position passes through a position-wise feed-forward network: two linear layers with a non-linear activation in between. The hidden layer is wider than the model dimension, but the output is projected back, so the tensor shape remains [batch_size, seq_len, embedding_dim].

Masked Multi-Head Attention in the Decoder
In the decoder, Masked Multi-Head Attention ensures that each token attends only to earlier tokens, preventing leakage of future information.

This is achieved using a lower triangular mask of shape [seq_len, seq_len] with -inf values in the upper triangle. Applying this mask ensures that the Softmax function assigns zero weight to future positions.
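A sketch of the causal mask and how it feeds into the softmax; torch.triu builds the upper triangle that is filled with -inf:

```python
import torch

seq_len = 12

# Upper-triangular boolean mask: True where a position would attend to the future
future_positions = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

scores = torch.randn(seq_len, seq_len)                       # attention scores for one head
scores = scores.masked_fill(future_positions, float("-inf"))

weights = torch.softmax(scores, dim=-1)
print(weights[0])  # the first token can only attend to itself: all later weights are 0
```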

Cross-Consideration in Decoding
For the reason that decoder doesn’t absolutely perceive the enter sentence, it makes use of cross-attention to refine predictions. Right here:
- The decoder generates queries (Qd) from its enter (
[batch_size, target_seq_len, embedding_dim]
). - The encoder output serves as keys (Ke) and values (Ve).
- The decoder computes consideration between Qd and Ke, extracting related context from the encoder’s output.
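A simplified single-head sketch of cross-attention; target_seq_len is an assumed value, and the learned Wq, Wk, Wv projections and head splitting are omitted here to keep the shapes easy to follow:

```python
import math
import torch

batch_size, src_seq_len, target_seq_len, embedding_dim = 5, 12, 10, 768

decoder_input = torch.randn(batch_size, target_seq_len, embedding_dim)
encoder_output = torch.randn(batch_size, src_seq_len, embedding_dim)

# Queries come from the decoder; keys and values come from the encoder output
Qd = decoder_input    # a real model applies a learned Wq here
Ke = encoder_output   # ... and Wk here
Ve = encoder_output   # ... and Wv here

scores = Qd @ Ke.transpose(-2, -1) / math.sqrt(embedding_dim)  # [5, 10, 12]
weights = torch.softmax(scores, dim=-1)
context = weights @ Ve                                         # [5, 10, 768]
print(context.shape)
```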

Conclusion
Transformers rely on tensors to learn and make good decisions. As data moves through the network, these tensors go through different steps: being turned into numbers the model can understand (embedding), focusing on the important parts (attention), staying balanced (normalization), and being passed through layers that learn patterns (feed-forward). These transformations keep the data in the right shape the whole time. By understanding how tensors move and change, we can get a better idea of how AI models work and how they can understand and create human-like language.