Scalable Metropolis Monte Carlo for simulation of hard shapes

Cited by: 72
Authors
Anderson, Joshua A. [1 ]
Irrgang, M. Eric [2 ]
Glotzer, Sharon C. [1 ,2 ]
Affiliations
[1] Univ Michigan, Dept Chem Engn, 2800 Plymouth Rd, Ann Arbor, MI 48109 USA
[2] Univ Michigan, Dept Mat Sci & Engn, 2300 Hayward St, Ann Arbor, MI 48109 USA
Funding
National Science Foundation (USA)
Keywords
Monte Carlo; Hard particle; GPU; molecular-dynamics simulations; dissipative particle dynamics; parallel; algorithms; ellipsoids; ensemble; systems; cubes; order; GPUs
DOI
10.1016/j.cpc.2016.02.024
Chinese Library Classification
TP39 [Computer applications]
Subject Classification Codes
081203; 0835
Abstract
We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. On the CPU, we employ BVH trees instead of cell lists for fast performance, especially with large particle size disparity, and we optimize inner loops with SIMD vector intrinsics. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet; the same simulation would take 7.6 h in serial. HPMC also scales to large system sizes: the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan. © 2016 Elsevier B.V. All rights reserved.
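The core of hard-particle Metropolis Monte Carlo is simple: because the pair potential is infinite on overlap and zero otherwise, the Boltzmann acceptance factor is either 0 or 1, so a trial move is accepted exactly when it produces no overlap and no energy evaluation is needed. The serial Python sketch below illustrates this for hard disks in a periodic square box. It is an illustration of the algorithm only, not HPMC's implementation, and the O(N) overlap scan stands in for the cell lists and BVH trees the paper describes.

```python
import random

def trial_move(positions, radius, box, max_disp):
    """One Metropolis trial move for hard disks in a periodic square box.

    For hard shapes the Boltzmann factor is 0 (overlap) or 1 (no overlap),
    so acceptance reduces to an overlap test. Returns True if accepted.
    """
    i = random.randrange(len(positions))
    x, y = positions[i]
    # Propose a uniform displacement within a square of half-width max_disp.
    nx = (x + random.uniform(-max_disp, max_disp)) % box
    ny = (y + random.uniform(-max_disp, max_disp)) % box
    # Reject on any overlap with another disk (minimum-image convention).
    # HPMC replaces this O(N) scan with cell lists or BVH trees.
    for j, (ox, oy) in enumerate(positions):
        if j == i:
            continue
        dx = (nx - ox + box / 2.0) % box - box / 2.0
        dy = (ny - oy + box / 2.0) % box - box / 2.0
        if dx * dx + dy * dy < (2.0 * radius) ** 2:
            return False              # overlap: reject, keep old position
    positions[i] = (nx, ny)           # no overlap: accept
    return True

# Example: a dilute system of 16 disks on a 4x4 lattice in a box of side 10.
positions = [(2.5 * (k % 4), 2.5 * (k // 4)) for k in range(16)]
accepted = sum(trial_move(positions, radius=0.5, box=10.0, max_disp=0.2)
               for _ in range(1000))
print(f"acceptance ratio: {accepted / 1000:.2f}")
```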
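As a usage example, the following sketch sets up a hard regular-pentagon NVT run with HOOMD-blue's HPMC Python API, mirroring the pentagon benchmark in the abstract. The API shown follows a recent HOOMD-blue release (3.x/4.x) rather than the version contemporary with this paper, and the seed, move sizes, run length, and input file `pentagons_init.gsd` are assumptions for illustration.

```python
import math
import hoomd

# Vertices of a regular pentagon, counterclockwise about the origin.
verts = [(math.cos(2.0 * math.pi * k / 5.0),
          math.sin(2.0 * math.pi * k / 5.0)) for k in range(5)]

# HPMC integrator for convex polygons; default_d and default_a set the
# trial translation and rotation move sizes.
mc = hoomd.hpmc.integrate.ConvexPolygon(default_d=0.1, default_a=0.1)
mc.shape['A'] = dict(vertices=verts)

# With no box updater attached, the box stays fixed (NVT).
sim = hoomd.Simulation(device=hoomd.device.CPU(), seed=42)
sim.create_state_from_gsd(filename='pentagons_init.gsd')
sim.operations.integrator = mc
sim.run(10_000)
```

Running the same script with `hoomd.device.GPU()` or under MPI exercises the GPU checkerboard kernel and domain-decomposition paths the abstract describes.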
Pages: 21-30 (10 pages)