Quantization
Pushing quantization to its limits by performing it at the feature level with ft-Quantization (ft-Q)
*To understand this article, knowledge of embeddings and basic quantization is required. The implementation of this algorithm has been released on GitHub and is fully open-source.*
Since the dawn of LLMs, quantization has become one of the most popular memory-saving techniques for production-ready applications. Not long after, it was popularized across vector databases, which have started using the same technology for compressing not only models but also vectors for retrieval purposes.
In this article, I will showcase the limitations of current quantization algorithms and propose a new quantization approach (ft-Q) to address them.
Quantization is a memory-saving algorithm that lets us store numbers (both in memory and on disk) using a lower number of bits. By default, when we store any number in memory, we use float32: this means the number is stored using a combination of 32 bits (binary elements).
For example, the number 40 is stored as follows in a 32-bit object (shown here as its float32 bit pattern):
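01000010 00100000 00000000 00000000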
However, we could decide to store the same number using fewer bits (halving the memory usage), with a 16-bit object (here, its float16 bit pattern):
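01010001 00000000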
By quantization, we mean storing data using a lower number of bits (e.g. 32 -> 16, or 32 -> 4); this is also referred to as casting. If we were to store 1GB of numbers (by default stored as 32-bit objects) and we decided to store them using 16-bit objects instead (hence, applying quantization), the size of our data would be halved, resulting in 0.5GB.
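As a quick numpy sketch of the idea (the array size below is chosen only to make the numbers round):

import numpy as np

# ~1 GiB of float32 numbers: 2**28 values * 4 bytes each
data = np.random.rand(2**28).astype(np.float32)
print(data.nbytes / 2**30)  # 1.0 (GiB)

# casting to float16 halves the memory footprint
data_16 = data.astype(np.float16)
print(data_16.nbytes / 2**30)  # 0.5 (GiB)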
Is there a catch to quantization?
Saving this amount of storage sounds incredible (as you can see, we could keep cutting down until we reach the minimum number of bits: 1 bit, also known as binary quantization; our database size would be reduced by 32 times, from 1GB to 31.25MB!), but as you might have already guessed, there is a catch.
Any number can only be stored within the limits allowed by all the possible combinations of bits. With 32 bits, you can store a maximum of 2³² distinct values. There are so many possible combinations that we decided to include decimals when using 32 bits. For example, if we were to add a decimal to our initial number and store 40.12 in 32 bits, it would use this combination of 1s and 0s:
01000010 00100000 01111010 11100001
We have seen that with 32-bit storage (given its huge number of possible combinations) we can encode virtually every number, including its decimal digits (to clarify, if you are new to quantization: the integer and decimal parts are not stored separately; 40.12 is converted as a whole into a combination of 32 binary digits).
If we keep reducing the number of bits, the number of possible combinations shrinks exponentially. For example, 4-bit storage has a limit of 2⁴ combinations: we can only represent 16 distinct numbers (which does not leave much room for decimals). With 1-bit storage, we can only represent two values: either a 1 or a 0.
To put this into context, storing our initial 32-bit numbers in binary code would force us to convert every number, such as 40.12, into either a 0 or a 1. In this scenario, the compression does not look very promising.
We have seen how quantization results in information loss. So how can we make use of it at all? When you look at the quantization of a single number (40.12 converted into 1), it seems there is no value to be derived from such an extreme level of quantization; there is simply too much loss.
However, when we apply this technique to a set of data such as vectors, the information loss is not as drastic as when it is applied to a single number. Vector search is a perfect example of where quantization can be applied in a useful way.
When we use an encoder, such as all-MiniLM-L6-v2, we store each sample (originally in the form of raw text) as a vector: a sequence of 384 numbers. Storing millions of such sequences, as you might have guessed, is prohibitive, and we can use quantization to diminish the size of the original vectors by a huge margin.
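For reference, producing such vectors looks roughly like this with the sentence-transformers library (the input texts are just placeholders):

from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 maps each text to a 384-dimensional float32 vector
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(["a sample document", "another sample document"])
print(vectors.shape, vectors.dtype)  # (2, 384) float32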
Perhaps quantizing our vectors from 32-bit to 16-bit is not that big of a loss. But how about 4-bit or even binary quantization? Because our sets are relatively large (384 numbers each), this considerable complexity lets us reach a higher level of compression without incurring excessive retrieval loss.
4-bit quantization
The way we execute quantization is by looking at the data distribution of our flattened vectors and choosing to map an equivalent interval onto a lower number of bits. My favorite example is 4-bit quantization. With this degree of complexity, we can store 2⁴ = 16 values. But, as explained, all the numbers in our vectors are real-valued, each with multiple decimal digits:
array([ 2.43655406e-02, -4.33481708e-02, -1.89688837e-03, -3.76498550e-02,
-8.96364748e-02, 2.96154656e-02, -5.79943173e-02, 1.87652372e-02,
1.87771711e-02, 6.30387887e-02, -3.23972516e-02, -1.46128759e-02,
-3.39277312e-02, -7.04369228e-03, 3.87261175e-02, -5.02494797e-02,
...
-1.03239892e-02, 1.83096472e-02, -1.86534156e-03, 1.44851031e-02,
-6.21072948e-02, -4.46912572e-02, -1.57684386e-02, 8.28376040e-02,
-4.58770394e-02, 1.04658678e-01, 5.53084277e-02, -2.51113791e-02,
4.72703762e-02, -2.41811387e-03, -9.09169838e-02, 1.15215247e-02],
dtype=float32)
What we can do is map each of the numbers in the distribution onto an interval that spans [-8, 7] (16 possible values). To define the extremes of the interval, we can use the minimum and maximum values of the distribution we are quantizing.
For example, if the minimum/maximum of the distribution is [-0.2, 0.2], then -0.2 will be converted to -8, and 0.2 to 7. Every number in the distribution will have a quantized equivalent in the interval (e.g. the first number above, 0.02436554, is quantized to -1):
array([[-1, -3, -1, ..., 1, -2, -2],
[-6, -1, -2, ..., -2, -2, -3],
[ 0, -2, -4, ..., -1, 1, -2],
...,
[ 3, 0, -5, ..., -5, 7, 0],
[-4, -5, 3, ..., -2, -2, -2],
[-1, 0, -2, ..., -1, 1, -3]], dtype=int4)
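A minimal numpy sketch of this min/max mapping (numpy has no native int4 dtype, so the values are held in an int8 array; the exact rounding scheme of the open-source implementation may differ):

import numpy as np

def quantize_4bit(x: np.ndarray) -> np.ndarray:
    # map values linearly from [x.min(), x.max()] onto the 16 integer levels -8 .. 7
    lo, hi = x.min(), x.max()
    scaled = (x - lo) / (hi - lo)  # normalize to [0, 1]; assumes hi > lo
    return (np.round(scaled * 15) - 8).astype(np.int8)

vectors = np.random.uniform(-0.2, 0.2, (1000, 384)).astype(np.float32)
print(quantize_4bit(vectors))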
1-bit quantization
The same principle applies to binary quantization, but it is much simpler. The rule is the following: every number in the distribution that is < 0 becomes 0, and every number > 0 becomes 1.
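In numpy, a sketch of this rule is a one-liner:

import numpy as np

def quantize_binary(x: np.ndarray) -> np.ndarray:
    # every value > 0 becomes 1, everything else becomes 0
    return (x > 0).astype(np.uint8)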
The principal issue with current quantization techniques is that they rest on the assumption that all our values come from a single distribution. That is why, when we use thresholds to define intervals (e.g. a minimum and a maximum), we derive only a single set of thresholds from the totality of our data (which is modeled as a single distribution).