Geographic Position Encoders

by Ruth Crasto | Oct 2024

Understanding popular ways of encoding geographic coordinates in a neural network

Photograph by CHUTTERSNAP on Unsplash

An inductive bias in machine learning is a constraint on a model given some prior knowledge of the target task. As humans, we can recognize a bird whether it is flying in the sky or perched in a tree. Moreover, we don't need to study every cloud or take in the entirety of the tree to know that we are looking at a bird and not something else. These biases in the vision process are encoded in convolution layers through two properties, which the short sketch after this list makes concrete:

  • Weight sharing: the same kernel weights are re-used along an input channel's full width and height.
  • Locality: the kernel has a much smaller width and height than the input.
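To get a feel for how strong these two constraints are, compare parameter counts (a quick PyTorch sketch; the layer sizes are mine, chosen only for illustration):

```python
import torch.nn as nn

# Weight sharing + locality: a 3x3 conv re-uses one small kernel across the
# whole 224x224 input, so its parameter count is independent of image size.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(sum(p.numel() for p in conv.parameters()))  # 448

# Without those two biases, a dense layer mapping the same input shape to the
# same output shape (16 x 222 x 222) would need this many weights:
print(3 * 224 * 224 * 16 * 222 * 222)  # 118,697,951,232 (over 100 billion)
```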

We can also encode inductive biases in our choice of input features to the model, which can be interpreted as a constraint on the model itself. Designing input features for a neural network involves a trade-off between expressiveness and inductive bias. On one hand, we want to allow the model the flexibility to learn patterns beyond what we humans can detect and encode. On the other hand, a model without any inductive biases will struggle to learn anything meaningful at all.

In this article, we'll explore the inductive biases that go into designing effective position encoders for geographic coordinates. Position on Earth can be a useful input to a wide range of prediction tasks, including image classification. As we'll see, using latitude and longitude directly as input features is under-constraining and will ultimately make it harder for the model to learn anything meaningful. Instead, it is more common to encode prior knowledge about latitude and longitude in a nonparametric re-mapping that we call a positional encoder.

To motivate the importance of choosing an effective position encoder more broadly, let's first examine the well-known position encoder in the transformer model. We start with the notion that the representation of a token input to an attention block should include some information about its position in the sequence it belongs to. The question is then: how should we encode the position index (0, 1, 2…) into a vector?

Suppose we have a position-independent token embedding. One possible approach is to add or concatenate the index value directly to this embedding vector. Here is why this doesn't work well:

  1. The similarity (dot product) between two embeddings, after their position has been encoded, should be independent of the total number of tokens in the sequence. The two last tokens of a sequence should record the same similarity whether the sequence is 5 or 50 words long.
  2. The similarity between two tokens should not depend on the absolute value of their positions, but only on the relative distance between them. Even if the encoded indices were normalized to the range [0, 1], two adjacent tokens at positions 1 and 2 would record a lower similarity than the same two tokens later in the sequence.

The original “Attention Is All You Need” paper [1] proposes instead to encode the position index pos into a discrete “snapshot” of k different sinusoids, where k is the dimension of the token embeddings. These snapshots are computed as follows:

PE(pos, 2i − 1) = sin(pos / 10000^(2(i−1)/k))
PE(pos, 2i) = cos(pos / 10000^(2(i−1)/k))

where i = 1, 2, …, k / 2. The resulting k-dimensional position embedding is then added elementwise to the corresponding token embedding.

The intuition behind this encoding is that the more out of phase the snapshots of two embeddings are, the farther apart their corresponding positions are. The absolute value of two different positions will not influence how out of phase their snapshots are. Moreover, since the range of any sinusoid is the interval [-1, 1], the magnitude of the positional embeddings will not grow with sequence length.
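As a quick sanity check, here is a minimal NumPy sketch (mine, not from [1]) that builds these snapshot embeddings and confirms that the dot product between two of them depends only on the relative distance between positions:

```python
import numpy as np

def sinusoidal_encoding(pos: int, k: int) -> np.ndarray:
    """Return the k-dimensional sinusoidal position embedding for index pos."""
    i = np.arange(k // 2)
    freq = 1.0 / (10000 ** (2 * i / k))  # one frequency per sin/cos pair
    emb = np.empty(k)
    emb[0::2] = np.sin(pos * freq)       # sine snapshots
    emb[1::2] = np.cos(pos * freq)       # cosine snapshots
    return emb

k = 64
near = sinusoidal_encoding(1, k) @ sinusoidal_encoding(2, k)
far = sinusoidal_encoding(41, k) @ sinusoidal_encoding(42, k)
print(np.isclose(near, far))  # True: similarity depends only on relative distance
```

This works because sin(a)sin(b) + cos(a)cos(b) = cos(a − b), so each sin/cos pair contributes a term that depends only on the difference between the two positions.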

I won't go into more detail on this particular position encoder since there are several excellent blog posts that do so (see [2]). Hopefully, you can now see why it is important, in general, to think carefully about how position should be encoded.

Let's now turn to encoders for geographic position. We want to train a neural network to predict some variable of interest given a position on the surface of the Earth. How should we encode a position (λ, ϕ) in spherical coordinates (i.e. a longitude/latitude pair) into a vector that can be used as an input to our network?

By Peter Mercator, Public Domain.

Simple approach

One possible approach would be to use latitude and longitude values directly as inputs. In this case our input feature space would be the rectangle [-π, π] × [0, π], which I'll refer to as lat/lon space. As with position encoders for transformers, this simple approach unfortunately has its limitations:

  1. Notice that as you move towards the poles, the distance on the surface of the Earth covered by 1 unit of longitude (λ) decreases. Lat/lon space does not preserve distances on the surface of the Earth.
  2. Notice that the position on Earth corresponding to coordinates (λ, ϕ) should be identical to the position corresponding to (λ + 2π, ϕ). But in lat/lon space, these two coordinates are very far apart. Lat/lon space does not preserve periodicity: the way spherical coordinates wrap around the surface of the Earth.

To learn anything meaningful directly from inputs in lat/lon space, a neural network must learn to encode these properties of the curvature of the Earth's surface on its own, which is a difficult task. How can we instead design a position encoder that already encodes these inductive biases? Let's explore some early approaches to this problem and how they have evolved over time.
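Both limitations are easy to demonstrate numerically. Here is a short sketch (mine; it uses geographic latitude in radians rather than the colatitude convention above) comparing great-circle distances to distances in lat/lon space:

```python
import numpy as np

def haversine(lon1, lat1, lon2, lat2):
    """Great-circle distance (in Earth radii) between two lon/lat points (radians)."""
    h = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * np.arcsin(np.sqrt(h))

# Limitation 1: one step of 0.1 in longitude covers very different distances.
print(haversine(0.0, 0.0, 0.1, 0.0))  # ~0.100 at the equator
print(haversine(0.0, 1.5, 0.1, 1.5))  # ~0.007 near the pole

# Limitation 2: points straddling λ = ±π are neighbors on Earth, but not in lat/lon space.
print(haversine(np.pi - 0.05, 0.0, -np.pi + 0.05, 0.0))  # ~0.100 on the sphere
print((np.pi - 0.05) - (-np.pi + 0.05))                  # ~6.18 in lat/lon space
```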

Discretization-based (2015)

The first paper to propose featurizing geographic coordinates for use as input to a convolutional neural network is called “Improving Image Classification with Location Context” [3]. Published in 2015, this work proposes and evaluates many different featurization approaches with the goal of training better classification models for geo-tagged images.

The idea behind each of their approaches is to directly encode a position on Earth into a set of numerical features that can be computed from auxiliary data sources. Some examples include:

  • Dividing the U.S. into evenly spaced grids in lat/lon space and using a one-hot encoding to encode a given location into a vector based on which grid it falls into.
  • Looking up the U.S. ZIP code that corresponds to a given location, then retrieving demographic data about this ZIP code from the ACS (American Community Survey) related to age, sex, race, living conditions, and more. This is made into a numerical vector using one-hot encodings.
  • For a particular set of Instagram hashtags, counting how many hashtags are recorded at different distances from a given location and concatenating these counts into a vector.
  • Retrieving color-coded maps from Google Maps for various features such as precipitation, land cover, and congressional district, and concatenating the numerical color values from each into a vector.

Note that these positional encodings are not continuous and do not preserve distances on the surface of the Earth. In the first example, two nearby locations that fall into different grids will be as distant in feature space as two locations on opposite sides of the country, as the sketch below illustrates. Moreover, these features mostly rely on the availability of auxiliary data sources and must be carefully hand-crafted, requiring a specific choice of hashtags, map features, survey data, and so on. These approaches do not generalize well to arbitrary locations on Earth.
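Here is a minimal sketch (mine, not code from [3]) of the grid-based one-hot encoding from the first example and its discontinuity:

```python
import numpy as np

def grid_one_hot(lon: float, lat: float, n: int = 10) -> np.ndarray:
    """One-hot encode a lon/lat pair (radians, lat in [0, π]) on an n x n grid."""
    col = min(int((lon + np.pi) / (2 * np.pi) * n), n - 1)
    row = min(int(lat / np.pi * n), n - 1)
    vec = np.zeros(n * n)
    vec[row * n + col] = 1.0
    return vec

# Two nearby points straddling a grid boundary end up exactly as far apart in
# feature space as two points on opposite sides of the map:
a = grid_one_hot(-0.001, np.pi / 2)
b = grid_one_hot(+0.001, np.pi / 2)
c = grid_one_hot(3.0, np.pi / 2)
print(np.linalg.norm(a - b), np.linalg.norm(a - c))  # both sqrt(2)
```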

WRAP (2019)

In 2019, a paper titled “Presence-Only Geographical Priors for Fine-Grained Image Classification” [4] took an important step towards the geographic position encoders commonly used today. Similar to the work from the previous section, this paper studies the use of geographic coordinates for improving image classification models.

The key idea behind their position encoder is to leverage the periodicity of sine and cosine functions to encode the way geographic coordinates wrap around the surface of the Earth. Given longitude and latitude (λ, ϕ), both normalized to the range [-1, 1], the WRAP position encoder is defined as:

WRAP(λ, ϕ) = [sin(πλ), cos(πλ), sin(πϕ), cos(πϕ)]
Unlike the approaches in the previous section, WRAP is continuous and easily computed for any position on Earth. The paper then shows empirically that training a fully-connected network on top of these features and combining them with latent image features can lead to improved performance on fine-grained image classification benchmarks.
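In code, the WRAP encoder is only a few lines. A minimal sketch (mine; I assume inputs in degrees and normalize them to [-1, 1] first):

```python
import numpy as np

def wrap_encode(lon_deg: float, lat_deg: float) -> np.ndarray:
    """WRAP position encoding: sinusoids of normalized longitude and latitude."""
    lam = lon_deg / 180.0  # longitude normalized to [-1, 1]
    phi = lat_deg / 90.0   # latitude normalized to [-1, 1]
    return np.array([np.sin(np.pi * lam), np.cos(np.pi * lam),
                     np.sin(np.pi * phi), np.cos(np.pi * phi)])

# Longitudes ±180° map to the same feature vector, as they should:
print(np.allclose(wrap_encode(180.0, 10.0), wrap_encode(-180.0, 10.0)))  # True
```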

The WRAP encoder looks simple, but it successfully encodes a key inductive bias about geographic position while remaining expressive and flexible. To see why this choice of position encoder is so powerful, we need to understand the Double Fourier Sphere (DFS) method [5].

DFS is a method of transforming any real-valued function f(x, y, z) defined on the surface of a unit sphere into a 2π-periodic function defined on a rectangle [-π, π] × [-π, π]. At a high level, DFS consists of two steps:

  1. Re-parametrize the function f(x, y, z) using spherical coordinates, where (λ, ϕ) ∈ [-π, π] × [0, π].
  2. Define a new piece-wise function over the rectangle [-π, π] × [-π, π] based on the re-parametrized f (essentially “doubling it over”).

Notice that the DFS re-parametrization of the Earth's surface (step 1) preserves the properties we discussed earlier. For one, as ϕ tends to 0 or ±π (the Earth's poles), the distance between two points (λ, ϕ) and (λ', ϕ) after re-parametrization decreases. Moreover, the re-parametrization is periodic and smooth.

Fourier Theorem

It is a fact that any continuous, periodic, real-valued function can be represented as a weighted sum of sines and cosines. This is known as the Fourier Theorem, and this weighted sum representation is called a Fourier series. It turns out that any DFS-transformed function can be represented with a fixed set of sines and cosines. These are known as the DFS basis functions, listed below:

{sin(mϕ)} ∪ {cos(mϕ)} ∪ {sin(nλ)} ∪ {cos(nλ)} ∪ {sin(mϕ) sin(nλ)} ∪ {sin(mϕ) cos(nλ)} ∪ {cos(mϕ) sin(nλ)} ∪ {cos(mϕ) cos(nλ)}, with m, n ∈ S

Here, ∪ denotes a union of sets, and S is a collection of scales (i.e. frequencies) for the sinusoids.
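To make this concrete, here is a small sketch (mine, not from the cited papers) that assembles a DFS-style feature vector for a chosen set of scales S:

```python
import numpy as np
from itertools import product

def dfs_features(lon: float, lat: float, scales=(1, 2, 4)) -> np.ndarray:
    """DFS-style position encoding of a lon/lat point (radians): single-coordinate
    sinusoids plus all sine/cosine cross terms over the given scales."""
    feats = []
    for m in scales:
        feats += [np.sin(m * lat), np.cos(m * lat),
                  np.sin(m * lon), np.cos(m * lon)]
    for m, n in product(scales, scales):
        feats += [np.sin(m * lat) * np.sin(n * lon),
                  np.sin(m * lat) * np.cos(n * lon),
                  np.cos(m * lat) * np.sin(n * lon),
                  np.cos(m * lat) * np.cos(n * lon)]
    return np.array(feats)

print(dfs_features(0.5, 1.0).shape)  # (48,) for three scales
```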

DFS-Based Position Encoders

Notice that the set of DFS basis functions includes the four terms in the WRAP position encoder. “Sphere2Vec” [6] is the earliest publication to observe this, proposing a unified view of position encoders based on DFS. In fact, with this generalization in mind, we can construct a geographic position encoder by choosing any subset of the DFS basis functions; WRAP is just one such choice. Check out [7] for a comprehensive overview of various DFS-based position encoders.

Why are DFS-based encoders so highly effective?

Consider what happens when a linear layer is trained on top of a DFS-based position encoder: each output element of the network is a weighted sum of the chosen DFS basis functions. Hence, the network can be interpreted as a learned Fourier series. Since virtually any function defined on the surface of a sphere can be transformed using the DFS method, it follows that a linear layer trained on top of DFS basis functions is powerful enough to encode arbitrary functions on the sphere! This is akin to the universal approximation theorem for multilayer perceptrons.
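We can verify this interpretation directly: fitting just a linear layer (here via least squares) on top of the dfs_features sketch from above recovers a smooth function on the sphere exactly, provided the function lies in the span of the chosen basis:

```python
import numpy as np

rng = np.random.default_rng(0)
lons = rng.uniform(-np.pi, np.pi, 2000)
lats = rng.uniform(-np.pi / 2, np.pi / 2, 2000)
target = np.cos(2 * lons) * np.sin(lats)  # a target function on the sphere

# Linear regression on DFS features = fitting a Fourier series.
X = np.stack([dfs_features(lon, lat) for lon, lat in zip(lons, lats)])
w, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.abs(X @ w - target).max())  # ~1e-13: recovered up to numerical precision
```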

In practice, only a small subset of the DFS basis functions is used for the position encoder, and a fully-connected network is trained on top of these. The composition of a non-parametric position encoder with a neural network is commonly referred to as a location encoder:

An overview of a geographic location encoder. Image by author.
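In PyTorch, a location encoder might look like the following minimal sketch (my own arrangement; the position encoder is assumed to be the dfs_features function from earlier, and the MLP sizes are arbitrary):

```python
import torch
import torch.nn as nn

class LocationEncoder(nn.Module):
    """A non-parametric position encoder's features, passed through a small MLP."""
    def __init__(self, n_features: int = 48, hidden: int = 256, out_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, pos_features: torch.Tensor) -> torch.Tensor:
        # pos_features: (batch, n_features), e.g. stacked dfs_features outputs
        return self.mlp(pos_features)
```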

As we have seen, a DFS-based position encoder can effectively encode the inductive biases we have about the curvature of the Earth's surface. One limitation of DFS-based encoders is that they assume a rectangular domain [-π, π] × [-π, π]. While this is mostly fine since the DFS re-parametrization already accounts for how distances get warped closer to the poles, this assumption breaks down at the poles themselves (ϕ = 0, ±π), which are lines in the rectangular domain that collapse to single points on the Earth's surface.

A different set of basis functions called spherical harmonics has recently emerged as an alternative. Spherical harmonics are basis functions that are natively defined on the surface of the sphere, as opposed to a rectangle. They have been shown to exhibit fewer artifacts around the Earth's poles compared to DFS-based encoders [7]. Notably, spherical harmonics are the basis functions used in the SatCLIP location encoder [8], a recent foundation model for geographic coordinates trained in the style of CLIP.
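For a flavor of what these basis functions look like, here is a small sketch (mine, simplified to degrees 0 and 1 only) that evaluates the first four real spherical harmonics from their Cartesian forms:

```python
import numpy as np

def sph_harm_features(lon: float, lat: float) -> np.ndarray:
    """First four real spherical harmonics at a lon/lat point (radians, geographic lat)."""
    # Map to a unit vector on the sphere; the poles are single points here.
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    c = np.sqrt(3 / (4 * np.pi))
    return np.array([
        0.5 * np.sqrt(1 / np.pi),  # Y_0^0: constant
        c * y,                     # Y_1^{-1}
        c * z,                     # Y_1^{0}
        c * x,                     # Y_1^{1}
    ])

# Every longitude gives the same features at the pole, with no discontinuity:
print(np.allclose(sph_harm_features(0.0, np.pi / 2),
                  sph_harm_features(2.0, np.pi / 2)))  # True
```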

Although geographic place encoders started with discrete, hand-crafted options within the 2010s, these don’t simply generalize to arbitrary places and require domain-specific metadata corresponding to land cowl and demographic information. In the present day, geographic coordinates are rather more generally used as neural community inputs as a result of easy but significant and expressive methods of encoding them have emerged. With the rise of web-scale datasets which are sometimes geo-tagged, the potential for utilizing geographic coordinates as inputs for prediction duties is now immense.