Bolt: Accelerated Data Mining with Fast Vector Compression

Cited by: 11
Authors
Blalock, Davis W. [1 ]
Guttag, John V. [1 ]
Affiliations
[1] MIT, Comp Sci & Artificial Intelligence Lab, Cambridge, MA 02139 USA
Source
KDD'17: PROCEEDINGS OF THE 23RD ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING | 2017
Keywords
Vector Quantization; Scalability; Compression; Nearest Neighbor Search; PRODUCT QUANTIZATION; JOHNSON;
DOI
10.1145/3097983.3098195
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Vectors of data are at the heart of machine learning and data mining. Recently, vector quantization methods have shown great promise in reducing both the time and space costs of operating on vectors. We introduce a vector quantization algorithm that can compress vectors over 12x faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10x. Because it can encode over 2GB of vectors per second, it makes vector quantization cheap enough to employ in many more circumstances. For example, using our technique to compute approximate dot products in a nested loop can multiply matrices faster than a state-of-the-art BLAS implementation, even when our algorithm must first compress the matrices. In addition to showing the above speedups, we demonstrate that our approach can accelerate nearest neighbor search and maximum inner product search by over 100x compared to floating point operations and up to 10x compared to other vector quantization methods. Our approximate Euclidean distance and dot product computations are not only faster than those of related algorithms with slower encodings, but also faster than Hamming distance computations, which have direct hardware support on the tested platforms. We also assess the errors of our algorithm's approximate distances and dot products, and find that it is competitive with existing, slower vector quantization algorithms.
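The abstract's core idea (shared with product quantization, which Bolt builds on) is to encode each vector as a handful of small codes and to replace exact distance arithmetic with per-query lookup tables, so each approximate distance costs only a few table lookups and additions. The toy sketch below illustrates that lookup-table mechanism in plain Python; it is not Bolt's actual algorithm (which uses learned codebooks, 4-bit codes, quantized 8-bit tables, and SIMD shuffles), and all names and the random codebooks here are illustrative assumptions.

```python
# Illustrative product-quantization-style approximate distances.
# Hypothetical sketch: real systems learn codebooks with k-means;
# here they are random so the example is self-contained.
import random

D = 8           # vector dimensionality
M = 4           # number of subspaces
K = 16          # centroids per subspace (16 -> 4-bit codes, as in Bolt)
SUB = D // M    # dimensions per subspace

random.seed(0)

def rand_vec(d):
    return [random.uniform(-1.0, 1.0) for _ in range(d)]

# Toy codebooks: K random centroids per subspace.
codebooks = [[rand_vec(SUB) for _ in range(K)] for _ in range(M)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def encode(v):
    """Compress v to M small codes: the nearest centroid per subspace."""
    codes = []
    for m in range(M):
        sub = v[m * SUB:(m + 1) * SUB]
        codes.append(min(range(K), key=lambda k: sq_dist(sub, codebooks[m][k])))
    return codes

def build_lut(query):
    """Per-query table: squared distance from each query subvector
    to each centroid. Built once, reused for every encoded vector."""
    return [[sq_dist(query[m * SUB:(m + 1) * SUB], codebooks[m][k])
             for k in range(K)] for m in range(M)]

def approx_sq_dist(lut, codes):
    """Approximate squared distance: M table lookups and M-1 adds,
    with no per-vector floating-point multiplies."""
    return sum(lut[m][codes[m]] for m in range(M))

data = [rand_vec(D) for _ in range(100)]
encoded = [encode(v) for v in data]

q = rand_vec(D)
lut = build_lut(q)
approx = [approx_sq_dist(lut, c) for c in encoded]
exact = [sq_dist(q, v) for v in data]
```

The speedups reported in the abstract come from making exactly these two steps cheap: Bolt accelerates `encode` (over 12x faster compression) and `approx_sq_dist`-style scans (lookups done in registers via vector shuffle instructions), and the same table trick applies to dot products, which is what enables the approximate matrix-multiply comparison against BLAS.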
Pages: 727-735
Page count: 9