Commit 6acf88d: Update README.md (#150)
1 parent 87e2df7

1 file changed (1 addition, 0 deletions)

README.md
@@ -240,6 +240,7 @@ python3 download_pdfs.py # The code is generated by Doubao AI
 |2024.09|🔥[VPTQ] VPTQ: EXTREME LOW-BIT VECTOR POST-TRAINING QUANTIZATION FOR LARGE LANGUAGE MODELS(@Microsoft)|[[pdf]](https://arxiv.org/pdf/2409.17066)|[[VPTQ]](https://github.com/microsoft/VPTQ) ![](https://img.shields.io/github/stars/microsoft/VPTQ.svg?style=social)|⭐️ |
 |2024.11|🔥[BitNet] BitNet a4.8: 4-bit Activations for 1-bit LLMs(@Microsoft)|[[pdf]](https://arxiv.org/pdf/2411.04965)|[[bitnet]](https://github.com/microsoft/unilm/tree/master/bitnet) ![](https://img.shields.io/github/stars/microsoft/unilm.svg?style=social)|⭐️ |
 |2025.04|🔥[**BitNet v2**] BitNet v2: Native 4-bit Activations with Hadamard Transformation for 1-bit LLMs(@Microsoft)|[[pdf]](https://arxiv.org/pdf/2504.18415)|[[bitnet]](https://github.com/microsoft/unilm/tree/master/bitnet) ![](https://img.shields.io/github/stars/microsoft/unilm.svg?style=social)|⭐️ |
+|2025.05|🔥[**GuidedQuant**] GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance(@SNU&SamsungAILab&Google)|[[pdf]](https://arxiv.org/pdf/2505.07004)|[[GuidedQuant]](https://github.com/snu-mllab/GuidedQuant) ![](https://img.shields.io/github/stars/snu-mllab/GuidedQuant.svg?style=social)|⭐️⭐️ |
 
 ### 📖IO/FLOPs-Aware/Sparse Attention ([©️back👆🏻](#paperlist))
 <div id="IO-FLOPs-Aware-Attention-Sparse"></div>
