Weight-only quantization stores the model weights in a specific low-bit data type but performs computation with a higher-precision data type, like `bfloat16`. This lowers the memory requirements from model weights but retains the memory peaks for activation computation.
Dynamic activation quantization stores the model weights in a low-bit dtype, while also quantizing the activations on-the-fly to save additional memory. This lowers the memory requirements from model weights, while also lowering the memory overhead from activation computations. However, this may sometimes come with a quality tradeoff, so it is recommended to test different models thoroughly.
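
For illustration, here is a minimal sketch of how the two modes are selected through `TorchAoConfig`. The `int8wo` shorthand is described below, and `int8_dynamic_activation_int8_weight` is the corresponding torchao function name for dynamic activation quantization; treat both strings as examples rather than an exhaustive reference.

```python
from diffusers import TorchAoConfig

# Weight-only quantization: weights are stored in int8, activations stay in
# the compute dtype (for example bfloat16).
weight_only_config = TorchAoConfig("int8wo")

# Dynamic activation quantization: weights are stored in int8 and activations
# are quantized on-the-fly during computation.
dynamic_config = TorchAoConfig("int8_dynamic_activation_int8_weight")

# Either config can be passed to `from_pretrained()` via `quantization_config`.
```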
The quantization methods supported are as follows:
|**Category**|**Full Function Names**|**Shorthands**|
|---|---|---|
|**Floating point X-bit quantization**|`fpx_weight_only`|`fpX_eAwB` where `X` is the number of bits (1-7), `A` is exponent bits, and `B` is mantissa bits. Constraint: `X == A + B + 1`|
Some quantization methods are aliases (for example, `int8wo` is the commonly used shorthand for `int8_weight_only`). This allows using the quantization methods described in the torchao docs as-is, while also making it convenient to remember their shorthand notations.
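
As a quick sketch of the alias system (using only the shorthand called out above):

```python
from diffusers import TorchAoConfig

# Both configs select the same torchao quantization method.
config_from_shorthand = TorchAoConfig("int8wo")
config_from_full_name = TorchAoConfig("int8_weight_only")
```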
Refer to the [official torchao documentation](https://docs.pytorch.org/ao/stable/index.html) for a better understanding of the available quantization methods and the exhaustive list of configuration options available.
## Serializing and Deserializing quantized models
To serialize a quantized model in a given dtype, first load the model with the desired quantization dtype and then save it using the `save_pretrained()` method.
```python
import torch
from diffusers import AutoModel, TorchAoConfig
from torchao.quantization import Int8WeightOnlyConfig
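
# What follows is a minimal sketch of the rest of the example: the checkpoint
# id and save path are placeholders, and passing a torchao config instance
# such as Int8WeightOnlyConfig() to TorchAoConfig is assumed here (the string
# shorthand "int8wo" is equivalent).
quantization_config = TorchAoConfig(Int8WeightOnlyConfig())

transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=quantization_config,
    torch_dtype=torch.bfloat16,
)

# Quantized torchao tensors generally cannot be stored with safetensors, so
# disable safe serialization when saving.
transformer.save_pretrained("/path/to/flux_int8wo", safe_serialization=False)
```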
If you are using `torch<=2.6.0`, some quantization methods, such as `uint4` weight-only, cannot be loaded directly and may raise an `UnpicklingError` when loading the models, even though saving works as expected. To work around this, load the state dict manually into the model. Note, however, that this requires passing `weights_only=False` to `torch.load`, so only do this if the weights were obtained from a trusted source.
```python
import torch
from accelerate import init_empty_weights
from diffusers import FluxPipeline, AutoModel, TorchAoConfig
from torchao.quantization import IntxWeightOnlyConfig
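
# What follows is a minimal sketch of the workaround: the checkpoint id, save
# path, and weight file name are placeholders, and the "uint4wo" shorthand is
# used for the config (an IntxWeightOnlyConfig instance could be used instead).

# 1. Quantize and serialize the model; saving works as expected on torch<=2.6.0.
transformer = AutoModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=TorchAoConfig("uint4wo"),
    torch_dtype=torch.bfloat16,
)
transformer.save_pretrained("/path/to/flux_uint4wo", safe_serialization=False)

# 2. Load the pickled state dict manually. `weights_only=False` is required,
#    so only do this if the weights come from a trusted source.
state_dict = torch.load(
    "/path/to/flux_uint4wo/diffusion_pytorch_model.bin",
    weights_only=False,
    map_location="cpu",
)

# 3. Rebuild the model on the meta device from its saved config and assign the
#    quantized tensors into it.
from diffusers import FluxTransformer2DModel  # concrete class needed for `from_config`

config = FluxTransformer2DModel.load_config("/path/to/flux_uint4wo")
with init_empty_weights():
    transformer = FluxTransformer2DModel.from_config(config)
transformer.load_state_dict(state_dict, strict=True, assign=True)

# The deserialized transformer can then be passed to the pipeline, for example
# via FluxPipeline.from_pretrained(..., transformer=transformer).
```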