Significant Productivity Gains through Programming with Large Language Models

Cited by: 0
Authors
Weber T. [1]
Brandmaier M. [1]
Schmidt A. [1]
Mayer S. [1]
Affiliations
[1] LMU Munich, Munich
Keywords
GitHub Copilot; GPT; language models; programming; software development; user study
DOI
10.1145/3661145
Abstract
Large language models like GPT and Codex drastically alter many daily tasks, including programming, where they can rapidly generate code from natural language or informal specifications. Thus, they will change what it means to be a programmer and how programmers act during software development. This work explores how AI assistance for code generation impacts productivity. In our user study (N=24), we asked programmers to complete Python programming tasks supported by a) an auto-complete interface using GitHub Copilot, b) a conversational system using GPT-3, and c) traditionally, with just a web browser. Aside from significant gains on productivity metrics, participants displayed distinctive usage patterns and strategies, highlighting that the form of presentation and interaction affects how users engage with these systems. Our findings emphasize the benefits of AI-assisted coding and highlight the different design challenges for these systems. © 2024 Owner/Author.
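The conversational condition described in the abstract pairs a dialogue interface with GPT-3 to turn natural-language requests into Python. As a rough illustration only, and not the authors' study apparatus, the sketch below shows how such a dialogue-style code-generation loop could be assembled with the legacy openai Python client (pre-1.0); the model choice, prompt format, and the ask_for_code helper are assumptions made for this example.

```python
# Illustrative sketch only (not the study's implementation): a minimal
# conversational code-generation loop on top of GPT-3, assuming the legacy
# `openai` Python client (<1.0) and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def ask_for_code(history: list[str], request: str) -> str:
    """Append a natural-language request to the dialogue and return generated Python."""
    history.append(f"User: {request}")
    prompt = (
        "You are a programming assistant. Answer with Python code only.\n\n"
        + "\n".join(history)
        + "\nAssistant:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model; any GPT-3 completion model would do
        prompt=prompt,
        max_tokens=256,
        temperature=0.2,
        stop=["User:"],            # stop before the model invents the next user turn
    )
    answer = response.choices[0].text.strip()
    history.append(f"Assistant: {answer}")
    return answer


if __name__ == "__main__":
    dialogue: list[str] = []
    print(ask_for_code(dialogue, "Write a function that reverses the words in a sentence."))
    print(ask_for_code(dialogue, "Now add type hints and a docstring."))
```

Keeping the full dialogue in the prompt is what lets follow-up requests ("now add type hints") refine earlier answers, which is the interaction pattern that distinguishes the conversational condition from inline auto-completion.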