Model-Free Neural Counterfactual Regret Minimization with Bootstrap Learning

    Research output: Contribution to journal › Article › peer-review


    Counterfactual Regret Minimization (CFR) has achieved many impressive results in solving large-scale Imperfect Information Games (IIGs). Neural-network-approximated CFR (neural CFR) is a promising technique that reduces computation and memory consumption by generalizing decision information across similar states. Current neural CFR algorithms must approximate cumulative regrets, but efficient and accurate approximation in a large-scale IIG remains a tough challenge. In this paper, a new CFR variant, Recursive CFR (ReCFR), is proposed. In ReCFR, Recursive Substitute Values (RSVs) are learned and used in place of cumulative regrets. It is proved that ReCFR converges to a Nash equilibrium at a rate of O(1/√T). Based on ReCFR, a new model-free neural CFR with bootstrap learning, Neural ReCFR-B, is proposed. Due to the recursive and non-cumulative nature of RSVs, Neural ReCFR-B has lower-variance training targets than other neural CFRs. Experimental results show that Neural ReCFR-B is competitive with state-of-the-art neural CFR algorithms at a much lower training cost.
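    For context on the cumulative regrets the abstract refers to: in tabular CFR, the strategy at each information set is derived from accumulated regrets via regret matching. The sketch below illustrates that standard step only; it is not code from the paper, whose contribution (RSVs in ReCFR) is precisely to avoid tracking such cumulative quantities. The function name and list-based representation are illustrative assumptions.

    ```python
    def regret_matching(cumulative_regrets):
        """Derive a strategy from per-action cumulative regrets.

        Standard regret-matching step used in tabular CFR. Illustrative
        sketch only; neural CFR variants such as the paper's ReCFR-B
        replace these cumulative regrets with learned substitute values.
        """
        # Keep only the positive part of each action's regret.
        positive = [max(r, 0.0) for r in cumulative_regrets]
        total = sum(positive)
        if total > 0.0:
            # Play each action proportionally to its positive regret.
            return [p / total for p in positive]
        # No positive regret anywhere: fall back to a uniform strategy.
        n = len(cumulative_regrets)
        return [1.0 / n] * n
    ```

    Because these regrets are sums over all past iterations, a neural approximator must fit an ever-growing, high-variance target; the abstract's point is that recursive, non-cumulative RSVs sidestep this.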

    Original language: English (US)
    Journal: IEEE Transactions on Games
    State: Accepted/In press - 2022


    • Approximation algorithms
    • Costs
    • Counterfactual Regret Minimization
    • Game Theory
    • Games
    • History
    • Imperfect Information Games
    • Nash equilibrium
    • Neural networks
    • Training

    ASJC Scopus subject areas

    • Software
    • Control and Systems Engineering
    • Artificial Intelligence
    • Electrical and Electronic Engineering


