Hello everybody! For those who don't know me yet, my name is Francois, and I'm a Research Scientist at Meta. I have a passion for explaining advanced AI concepts and making them more accessible.
Today, let's dive into one of the most important contributions in the field of Computer Vision: the Vision Transformer (ViT).
This post focuses on the state-of-the-art implementation of the Vision Transformer since its release. To fully understand how a ViT works, I strongly recommend reading my other post on the theoretical foundations: The Ultimate Guide to Vision Transformers
Let's start with the most famous building block of the Transformer Encoder: the Attention Layer.
import torch
from torch import nn

class Attention(nn.Module):
    def __init__(self, dim, heads=8, dim_head=64, dropout=0.):
        super().__init__()
        inner_dim = dim_head * heads  # Calculate the total inner dimension across all heads