Augmenting human innovation teams with artificial intelligence: Exploring transformer-based language models

Cited by: 115
Authors
Bouschery, Sebastian G. [1 ,3 ]
Blazevic, Vera [1 ,2 ]
Piller, Frank T. [1 ,3 ]
Affiliations
[1] Rhein Westfal TH Aachen, Sch Business & Econ, Aachen, Germany
[2] Radboud Univ Nijmegen, Dept Mkt, Nijmegen, Netherlands
[3] Rhein Westfal TH Aachen, Sch Business & Econ, Templergraben 55, D-52056 Aachen, Germany
Keywords
artificial intelligence; GPT-3; hybrid intelligence; innovation teams; prompt engineering; transformer-based language models; PERFORMANCE; CREATIVITY; KNOWLEDGE; SEARCH; IDEA
DOI
10.1111/jpim.12656
Chinese Library Classification: F [Economics]
Discipline Code: 02
Abstract
The rise of transformer-based language models in artificial intelligence (AI) has driven adoption across industries and led to significant productivity gains in business operations. This article explores how these models can augment human innovation teams in the new product development (NPD) process, allowing larger problem and solution spaces to be explored and ultimately leading to higher innovation performance. The article proposes an AI-augmented double diamond framework to structure the exploration of how these models can assist in NPD tasks such as text summarization, sentiment analysis, and idea generation. It also discusses the limitations of the technology and the potential impact of AI on established practices in NPD. The article establishes a research agenda for exploring the use of language models in this area and the role of humans in hybrid innovation teams. (Note: Following the idea of this article, GPT-3 alone generated this abstract. Only minor formatting edits were performed by humans.)
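
As a concrete illustration of the NPD tasks named in the abstract, the following minimal Python sketch (ours, not from the article) prompts a transformer-based language model for summarization, sentiment analysis, and idea generation. It assumes the OpenAI Python SDK (version 1 or later) with an OPENAI_API_KEY set in the environment; the model name, prompt wording, and feedback text are illustrative placeholders, and the article itself worked with GPT-3 rather than this newer chat interface.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(instruction: str, text: str, model: str = "gpt-4o-mini") -> str:
    """Send one prompt-engineered request and return the model's reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
        temperature=0.7,  # higher sampling temperature favors divergent output
    )
    return response.choices[0].message.content

# Hypothetical raw input an innovation team might collect.
feedback = "Customers say the handle overheats and the app keeps losing pairing."

# Problem space (discover/define): condense and gauge the customer input.
summary = ask("Summarize this customer feedback in one sentence.", feedback)
sentiment = ask("Classify the sentiment as positive, neutral, or negative.", feedback)

# Solution space (develop): diverge by generating candidate improvements.
ideas = ask("Propose three product improvements that address this feedback.", feedback)

print(summary, sentiment, ideas, sep="\n---\n")

In a real setting the temperature would arguably be tuned per phase: near zero for the convergent classification step, higher for the divergent idea-generation step, mirroring the diverge-converge rhythm of the double diamond.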
Pages: 139-153
Number of pages: 15
Related Papers (50 records)
  • [21] Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
    Perez-Mayos, Laura
    Taboas Garcia, Alba
    Mille, Simon
    Wanner, Leo
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 3799 - 3812
  • [22] Reward modeling for mitigating toxicity in transformer-based language models
    Faal, Farshid
    Schmitt, Ketra
    Yu, Jia Yuan
    APPLIED INTELLIGENCE, 2023, 53 (07) : 8421 - 8435
  • [24] Generating Fake Cyber Threat Intelligence Using Transformer-Based Models
    Ranade, Priyanka
    Piplai, Aritran
    Mittal, Sudip
    Joshi, Anupam
    Finin, Tim
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [25] Tweets Topic Classification and Sentiment Analysis Based on Transformer-Based Language Models
    Mandal, Ranju
    Chen, Jinyan
    Becken, Susanne
    Stantic, Bela
    VIETNAM JOURNAL OF COMPUTER SCIENCE, 2023, 10 (02) : 117 - 134
  • [26] Transformer-based Language Models for Semantic Search and Mobile Applications Retrieval
    Coelho, Joao
    Neto, Antonio
    Tavares, Miguel
    Coutinho, Carlos
    Oliveira, Joao
    Ribeiro, Ricardo
    Batista, Fernando
    PROCEEDINGS OF THE 13TH INTERNATIONAL JOINT CONFERENCE ON KNOWLEDGE DISCOVERY, KNOWLEDGE ENGINEERING AND KNOWLEDGE MANAGEMENT (KDIR), VOL 1, 2021, : 225 - 232
  • [27] Dynamic Low-rank Estimation for Transformer-based Language Models
    Hua, Ting
    Li, Xiao
    Gao, Shangqian
    Hsu, Yen-Chang
    Shen, Yilin
    Jin, Hongxia
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9275 - 9287
  • [28] Roles and Utilization of Attention Heads in Transformer-based Neural Language Models
    Jo, Jae-young
    Myaeng, Sung-hyon
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 3404 - 3417
  • [29] Pre-training and Evaluating Transformer-based Language Models for Icelandic
    Dadason, Jon Fridrik
    Loftsson, Hrafn
    LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 7386 - 7391
  • [30] Localizing in-domain adaptation of transformer-based biomedical language models
    Buonocore, Tommaso Mario
    Crema, Claudio
    Redolfi, Alberto
    Bellazzi, Riccardo
    Parimbelli, Enea
    JOURNAL OF BIOMEDICAL INFORMATICS, 2023, 144