BLGAN: Bayesian Learning and Genetic Algorithm for Supporting Negotiation With Incomplete Information

Cited by: 44
Authors
Sim, Kwang Mong [1 ]
Guo, Yuanyuan [2 ]
Shi, Benyun [1 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
[2] Univ New Brunswick, Dept Comp Sci, St John, NB E2L 4L5, Canada
Source
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | 2009, Vol. 39, No. 1
Keywords
Automated negotiation; Bayesian learning (BL); genetic algorithms (GAs); intelligent agents; negotiation agents; COMMERCE; AGENTS; MODEL;
DOI
10.1109/TSMCB.2008.2004501
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812
Abstract
Automated negotiation provides a means for resolving differences among interacting agents. For negotiation with complete information, this paper provides mathematical proofs showing that an agent's optimal strategy can be computed from its opponent's reserve price (RP) and deadline. The impetus of this work is to use the synergy of Bayesian learning (BL) and a genetic algorithm (GA) to determine an agent's optimal strategy in negotiation (N) with incomplete information. BLGAN adopts: 1) BL and a deadline-estimation process for estimating an opponent's RP and deadline and 2) a GA for generating a proposal at each negotiation round. Learning the RP and deadline of an opponent enables the GA in BLGAN to reduce the size of its search space (SP) by adaptively focusing its search on a specific region in the space of all possible proposals. SP is dynamically defined as a region around an agent's proposal P at each negotiation round, where P is generated using the agent's optimal strategy determined from its estimates of its opponent's RP and deadline. Hence, the GA in BLGAN is more likely to generate proposals close to the proposal generated by the optimal strategy. By using the GA to search around a proposal generated by its current strategy, an agent in BLGAN compensates for possible errors in estimating its opponent's RP and deadline. Empirical results show that agents adopting BLGAN reached agreements successfully and achieved: 1) higher utilities and better combined negotiation outcomes (CNOs) than agents that adopt only a GA to generate their proposals; 2) higher utilities than agents that adopt BL to learn only the RP; and 3) higher utilities and better CNOs than agents that do not learn their opponents' RPs and deadlines.
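The per-round loop the abstract describes (estimate the opponent's RP and deadline, compute a strategy proposal P, then run a GA over a window around P) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: `strategy_proposal`, `ga_search`, the linear concession rule, and the toy fitness function are simplified stand-ins for BLGAN's BL-based estimation and optimal-strategy computation.

```python
import random

random.seed(0)  # reproducible illustration

def strategy_proposal(own_rp, est_opp_rp, t, est_deadline):
    """Concede linearly from own RP toward the estimated opponent RP.
    Illustrative stand-in for the strategy computed from BL estimates."""
    frac = min(t / est_deadline, 1.0)
    return own_rp + (est_opp_rp - own_rp) * frac

def ga_search(center, window, fitness, pop_size=20, generations=30, mut=0.1):
    """Run a simple real-valued GA restricted to [center-window, center+window],
    mirroring how BLGAN focuses the GA's search space around proposal P."""
    lo, hi = center - window, center + window
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half (elitism)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)    # crossover: midpoint of two parents
            child = (a + b) / 2 + random.gauss(0, mut * window)  # gaussian mutation
            children.append(min(max(child, lo), hi))             # clamp to window
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical seller: own RP = 50, estimated buyer RP = 80, round 3 of an
# estimated 10-round deadline.
p = strategy_proposal(own_rp=50.0, est_opp_rp=80.0, t=3, est_deadline=10)
fitness = lambda x: -abs(x - 70.0)  # toy utility peaking at price 70
best = ga_search(center=p, window=5.0, fitness=fitness)
```

Because the GA only explores a window around P, a good estimate of the opponent's RP and deadline shrinks the search space, while the GA's local search compensates if that estimate is somewhat off, which is the trade-off the abstract highlights.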
Pages: 198-211
Number of pages: 14