Significant Productivity Gains through Programming with Large Language Models

Cited by: 0
Authors
Weber T. [1 ]
Brandmaier M. [1 ]
Schmidt A. [1 ]
Mayer S. [1 ]
Affiliations
[1] LMU Munich, Munich
Keywords
github copilot; gpt; language models; programming; software development; user study;
DOI
10.1145/3661145
Abstract
Large language models like GPT and Codex drastically alter many daily tasks, including programming, where they can rapidly generate code from natural language or informal specifications. Thus, they will change what it means to be a programmer and how programmers act during software development. This work explores how AI assistance for code generation impacts productivity. In our user study (N=24), we asked programmers to complete Python programming tasks supported by a) an auto-complete interface using GitHub Copilot, b) a conversational system using GPT-3, and c) traditionally with just the web browser. Aside from significantly increasing productivity metrics, participants displayed distinctive usage patterns and strategies, highlighting that the form of presentation and interaction affects how users engage with these systems. Our findings emphasize the benefits of AI-assisted coding and highlight the different design challenges for these systems. © 2024 Owner/Author.
Related Papers
50 records in total
  • [1] Large Language Models (GPT) for automating feedback on programming assignments
    Pankiewicz, Maciej
    Baker, Ryan S.
    31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I, 2023, : 68 - 77
  • [2] Large Language Models in Robot Programming: Potential in the Programming of Industrial Robots
    Syniawa, Daniel
    Ates, Baris
    Boshoff, Marius
    Kuhlenkoetter, Bernd
    ATP MAGAZINE, 2024, (6-7):
  • [3] Level Generation Through Large Language Models
    Todd, Graham
    Earle, Sam
    Nasir, Muhammad Umair
    Green, Michael Cerny
    Togelius, Julian
    PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON THE FOUNDATIONS OF DIGITAL GAMES, FDG 2023, 2023,
  • [4] Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models
    Sarsa, Sami
    Denny, Paul
    Hellas, Arto
    Leinonen, Juho
    PROCEEDINGS OF THE 2022 ACM CONFERENCE ON INTERNATIONAL COMPUTING EDUCATION RESEARCH, ICER 2022, VOL. 1, 2023, : 27 - 43
  • [5] Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
    Reynolds, Laria
    McDonell, Kyle
    EXTENDED ABSTRACTS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'21), 2021,
  • [6] Automating Autograding: Large Language Models as Test Suite Generators for Introductory Programming
    Alkafaween, Umar
    Albluwi, Ibrahim
    Denny, Paul
    JOURNAL OF COMPUTER ASSISTED LEARNING, 2025, 41 (01)
  • [7] Thrilled by Your Progress! Large Language Models (GPT-4) No Longer Struggle to Pass Assessments in Higher Education Programming Courses
    Savelka, Jaromir
    Agarwal, Arav
    An, Marshall
    Bogart, Chris
    Sakr, Majd
    PROCEEDINGS OF THE 2023 ACM CONFERENCE ON INTERNATIONAL COMPUTING EDUCATION RESEARCH V.1, ICER 2023 V1, 2023, : 78 - 92
  • [8] Unveiling the potential of large language models in generating semantic and cross-language clones
    Roy, Palash R.
    Alam, Ajmain I.
    Al-omari, Farouq
    Roy, Banani
    Roy, Chanchal K.
    Schneider, Kevin A.
    2023 IEEE 17TH INTERNATIONAL WORKSHOP ON SOFTWARE CLONES, IWSC 2023, 2023, : 22 - 28
  • [9] "Conversing" With Qualitative Data: Enhancing Qualitative Research Through Large Language Models (LLMs)
    Hayes, Adam S.
    INTERNATIONAL JOURNAL OF QUALITATIVE METHODS, 2025, 24
  • [10] Symbols and grounding in large language models
    Pavlick, Ellie
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES, 2023, 381 (2251):