Augmenting human innovation teams with artificial intelligence: Exploring transformer-based language models

Cited: 115
Authors
Bouschery, Sebastian G. [1 ,3 ]
Blazevic, Vera [1 ,2 ]
Piller, Frank T. [1 ,3 ]
Affiliations
[1] Rhein Westfal TH Aachen, Sch Business & Econ, Aachen, Germany
[2] Radboud Univ Nijmegen, Dept Mkt, Nijmegen, Netherlands
[3] Rhein Westfal TH Aachen, Sch Business & Econ, Templergraben 55, D-52056 Aachen, Germany
Keywords
artificial intelligence; GPT-3; hybrid intelligence; innovation teams; prompt engineering; transformer-based language models; PERFORMANCE; CREATIVITY; KNOWLEDGE; SEARCH; IDEA;
DOI
10.1111/jpim.12656
Chinese Library Classification
F [Economics];
Discipline Code
02 ;
Abstract
Transformer-based language models in artificial intelligence (AI) have seen growing adoption across industries and have led to significant productivity gains in business operations. This article explores how these models can be used to augment human innovation teams in the new product development process, allowing larger problem and solution spaces to be explored and ultimately leading to higher innovation performance. The article proposes an AI-augmented double diamond framework to structure the exploration of how these models can assist in new product development (NPD) tasks, such as text summarization, sentiment analysis, and idea generation. It also discusses the limitations of the technology and the potential impact of AI on established practices in NPD. The article establishes a research agenda for exploring the use of language models in this area and the role of humans in hybrid innovation teams. (Note: Following the idea of this article, GPT-3 alone generated this abstract. Only minor formatting edits were performed by humans.)
Pages: 139-153
Page count: 15
Related papers
50 records in total
  • [1] Bringing order into the realm of Transformer-based language models for artificial intelligence and law
    Greco, Candida M.
    Tagarelli, Andrea
    ARTIFICIAL INTELLIGENCE AND LAW, 2024, 32 (04) : 863 - 1010
  • [2] Artificial intelligence: Augmenting telehealth with large language models
    Snoswell, Centaine L.
    Snoswell, Aaron J.
    Kelly, Jaimon T.
    Caffery, Liam J.
    Smith, Anthony C.
    JOURNAL OF TELEMEDICINE AND TELECARE, 2025, 31 (01) : 150 - 154
  • [3] The research landscape on generative artificial intelligence: a bibliometric analysis of transformer-based models
    Sekli, Giulio Marchena
    KYBERNETES, 2024,
  • [4] Shared functional specialization in transformer-based language models and the human brain
    Kumar, Sreejan
    Sumers, Theodore R.
    Yamakoshi, Takateru
    Goldstein, Ariel
    Hasson, Uri
    Norman, Kenneth A.
    Griffiths, Thomas L.
    Hawkins, Robert D.
    Nastase, Samuel A.
    NATURE COMMUNICATIONS, 2024, 15 (01)
  • [5] Ouroboros: On Accelerating Training of Transformer-Based Language Models
    Yang, Qian
    Huo, Zhouyuan
    Wang, Wenlin
    Huang, Heng
    Carin, Lawrence
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [6] Transformer-Based Language Models for Software Vulnerability Detection
    Thapa, Chandra
    Jang, Seung Ick
    Ahmed, Muhammad Ejaz
    Camtepe, Seyit
    Pieprzyk, Josef
    Nepal, Surya
    PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022, : 481 - 496
  • [7] A Comparison of Transformer-Based Language Models on NLP Benchmarks
    Greco, Candida Maria
    Tagarelli, Andrea
    Zumpano, Ester
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2022), 2022, 13286 : 490 - 501
  • [8] RadBERT: Adapting Transformer-based Language Models to Radiology
    Yan, An
    McAuley, Julian
    Lu, Xing
    Du, Jiang
    Chang, Eric Y.
    Gentili, Amilcare
    Hsu, Chun-Nan
    RADIOLOGY-ARTIFICIAL INTELLIGENCE, 2022, 4 (04)
  • [9] Applications of transformer-based language models in bioinformatics: a survey
    Zhang, Shuang
    Fan, Rui
    Liu, Yuti
    Chen, Shuang
    Liu, Qiao
    Zeng, Wanwen
    NEURO-ONCOLOGY ADVANCES, 2023, 5 (01)
  • [10] TAG: Gradient Attack on Transformer-based Language Models
    Deng, Jieren
    Wang, Yijue
    Li, Ji
    Wang, Chenghong
    Shang, Chao
    Liu, Hang
    Rajasekaran, Sanguthevar
    Ding, Caiwen
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 3600 - 3610