Learning in repeated stochastic network aggregative games
Cited by: 0
Authors:
Meigs, Emily [1]
Parise, Francesca [1]
Ozdaglar, Asuman [1]
Affiliations:
[1] MIT, Lab Informat & Decis Syst, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Source:
2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC) | 2019
Funding:
Swiss National Science Foundation;
Keywords:
PUBLIC-GOODS;
DOI:
Not available
Chinese Library Classification: TP [Automation Technology, Computer Technology];
Discipline Classification Code: 0812;
Abstract:
We consider a repeated network aggregative game in which agents are unsure about a parameter that weights their neighbors' actions in their utility function. We study simple learning dynamics in which agents iteratively play their best response, given the information available so far, and update their estimate of the network weight parameter via ordinary least squares. We derive a sufficient condition, depending on the network and on the agents' utility functions, which guarantees that under these dynamics the agents' strategies converge almost surely to the full-information Nash equilibrium. We illustrate our theoretical results on a local public good game in which agents are uncertain about the level of substitutability of their goods.
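For intuition, below is a minimal numerical sketch of the kind of dynamics the abstract describes: each agent best-responds to its current estimate of the unknown network weight parameter and refines that estimate by ordinary least squares. The linear-quadratic public-good utility U_i(x_i, z_i) = -x_i^2/2 + x_i(a_i - theta z_i), the noisy observation model y_i = theta z_i + noise, and all names and parameter values (theta_true, W, a, sigma) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup (illustrative only; the paper's model may differ) ---
n = 20                                   # number of agents
theta_true = 0.3                         # unknown weight on neighbors' aggregate
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T           # undirected adjacency matrix, no self-loops
W = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalized network weights
a = rng.uniform(1.0, 2.0, size=n)        # standalone marginal benefits
sigma = 0.1                              # observation noise standard deviation

def best_response(a, theta_hat, z):
    """Best response for U_i = -0.5 x_i^2 + x_i (a_i - theta z_i), truncated at zero."""
    return np.maximum(0.0, a - theta_hat * z)

# Per-agent running OLS statistics for the scalar regression y_i = theta * z_i + noise
S_zz = np.zeros(n)        # sum of z_i^2
S_zy = np.zeros(n)        # sum of z_i * y_i
theta_hat = np.zeros(n)   # initial parameter estimates

x = rng.uniform(0.0, 1.0, size=n)        # initial actions
for t in range(500):
    z = W @ x                                              # neighbor aggregates
    y = theta_true * z + sigma * rng.standard_normal(n)    # noisy observations
    S_zz += z**2
    S_zy += z * y
    seen = S_zz > 1e-12
    theta_hat[seen] = S_zy[seen] / S_zz[seen]              # ordinary least squares update
    x = best_response(a, theta_hat, z)                     # myopic best response to estimate

# Compare with the full-information Nash equilibrium x* solving (I + theta W) x* = a
x_star = np.linalg.solve(np.eye(n) + theta_true * W, a)
print("max |theta_hat - theta_true|:", np.abs(theta_hat - theta_true).max())
print("max |x - x*|:", np.abs(x - x_star).max())
```

With the parameters above, the actions remain strictly positive, so the truncation in the best response is inactive and the equilibrium reduces to the linear system (I + theta W) x* = a used for the comparison.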