Distributed computing in the LHC era

Cited by: 0
Author
Paganoni, M. [1,2]
Affiliations
[1] Univ Milano Bicocca, Milan, Italy
[2] INFN, Sez Milano, Milan, Italy
Source
NUOVO CIMENTO C-COLLOQUIA AND COMMUNICATIONS IN PHYSICS | 2010 / Vol. 33 / No. 6
DOI
10.1393/ncc/i2011-10805-2
CLC Number (Chinese Library Classification)
O4 [Physics];
Discipline Classification Code
0702;
Abstract
A large, worldwide distributed scientific community is intensively running physics analyses on the first data collected at the LHC. To prepare for this unprecedented computing challenge, the four LHC experiments have developed distributed computing models capable of serving, processing and archiving the large number of events produced by data taking, amounting to about 15 petabytes per year. The experiments' workflows for event reconstruction from raw data, production of simulated events and physics analysis on skimmed data generate hundreds of thousands of jobs per day, running on a complex distributed computing fabric. All this is possible thanks to reliable Grid services, which have been developed, deployed at the needed scale and thoroughly tested by the WLCG Collaboration over the last ten years. As a concrete example, this paper concentrates on the CMS computing model and the CMS experience with the first data at the LHC.
Pages: 5
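
To put the figures quoted in the abstract into perspective, below is a minimal back-of-envelope sketch in Python. Only the roughly 15 petabytes per year and the "hundreds of thousands of jobs per day" come from the abstract; the per-event size, the effective annual live time and the exact daily job count are illustrative assumptions, not numbers taken from the paper.

# Back-of-envelope estimate of the computing scale described in the abstract.
# Assumed values are marked below; only the annual volume and the order of
# magnitude of the daily job count come from the abstract itself.

PETABYTE = 1e15                      # bytes
annual_volume = 15 * PETABYTE        # ~15 PB recorded per year (abstract)
live_seconds = 1e7                   # assumed effective data-taking seconds per year
event_size = 1.5e6                   # assumed ~1.5 MB per raw event

sustained_rate = annual_volume / live_seconds         # bytes/s while taking data
events_per_year = annual_volume / event_size          # rough raw-event count
jobs_per_day = 2e5                                    # "hundreds of thousands" (abstract)
events_per_job = events_per_year / (jobs_per_day * 365.0)

print(f"sustained rate : {sustained_rate / 1e9:.1f} GB/s")
print(f"events / year  : {events_per_year:.1e}")
print(f"events / job   : {events_per_job:.0f}  (order of magnitude only)")

Under these assumptions the sustained data rate is of order 1.5 GB/s during data taking and each job processes on the order of a hundred events, which is meant only to illustrate the scale of the distributed fabric described in the paper.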