Applications of game theory in deep learning: a survey

This paper provides a comprehensive overview of the applications of game theory in deep learning. Today, deep learning is a fast-evolving area of research in the domain of artificial intelligence, while game theory has been showing its multi-dimensional applications over the last few decades. Applying game theory to deep learning adds another dimension to research: game theory helps to model or solve various deep learning-based problems, and existing research contributions demonstrate that it is a potential approach to improve the results of deep learning models. The design of deep learning models often involves a game-theoretic approach, and most of the classification problems that popularly employ a deep learning approach can be seen as a Stackelberg game. The Generative Adversarial Network (GAN) is a deep learning architecture that has gained popularity in solving complex computer vision problems, and GANs have their roots in game theory: the training of the generator and discriminator in a GAN is essentially a two-player zero-sum game that allows the model to learn complex functions. This paper gives researchers an extensive account of significant contributions in deep learning that use game-theoretic concepts, thus giving clear insight into the field, its challenges, and future directions. The current study also details various real-time applications of the existing literature, valuable datasets in the field, and the popularity of this research area in terms of recent years' publications and citations.


1 Introduction

Game theory is an essential field of research that helps players choose suitable strategies in a game. It has several applications in various domains, with significant applications in technology, especially in computer science, electronics, aerospace engineering, etc. [90, 91, 96, 99, 100]. Different types of games are briefed in Fig. 1. Alternatively, deep learning is the study of learning algorithms that use multiple layers of non-linear processing units, where each successive layer takes the output of the previous layer as input. Deep learning algorithms are primarily classified into three categories: supervised, semi-supervised, and unsupervised. Deep learning algorithms build higher-level features that are extracted from lower-level features, and the depth of a neural network plays an essential part in the outcome of the model. The framework of neural networks can represent dynamic environments, and dynamic environments can in turn be presented as games [37, 72].

Fig. 1 Types of games

Artificial intelligence has adopted game theory to solve or model various real-time problems, and it is observed that performance improves when game theory is applied [43, 76, 124, 148]. This paper establishes the connection between deep learning algorithms and game theory. The applications of evolutionary computing and swarm intelligence are discussed in [20]. This paper mainly focuses on applications of game theory to model and solve GANs. GAN has received a tremendous response from several research communities because of its complex problem-solving abilities and performance improvements [19, 36, 130].

A multilayer neural network comprises any number of unit neurons and may represent multimodal output functions. The multimodal nature of the output function makes deep neural network models hard to optimize, so strategic deep learning is a challenging game task [62, 84, 136, 141, 151]. Table 1 shows research articles in game theory, deep learning, and their collaborative research as per the records obtained from various databases. In a nutshell, the contribution of the paper is as follows:


2.1.2 Players

Players are the critical components of a game; they participate in a game intending to achieve certain goals. The nature of interaction among the players can be primarily classified into two types: cooperative and non-cooperative. Two or more players can take part in a game. In cooperative settings, the players aim to maximize the overall payoff of the whole group, whereas in non-cooperative settings each player aims to maximize its own payoff.

2.1.3 Strategies

In a game, strategies are sequences of actions chosen by the players. Based on the players' goals, strategies can also be fundamentally classified into two types: cooperative and non-cooperative. The paramount objective of a game is to determine suitable strategies for the players.

2.1.4 Payoffs

The payoffs of players represent the rewards or penalties for choosing their respective strategies. In cooperative games, players aim to improve the overall payoff of the group. In contrast, the players aim to maximize their self-payoff (without considering the overall payoff of the group) in non-cooperative games. The game planners formulate the payoff functions. Let \(St_I\) represent the set of all possible strategies of player \(I\). If there are \(m\) players, the set of possible strategy combinations is \(St_1 \times St_2 \times \dots \times St_m\). So, the payoff function can be represented as \(util(St_1 \times St_2 \times \dots \times St_m)\).

2.1.5 Best response

Best response strategies denote the most favorable outcomes for a player, considering all the strategies of the opponents. The players in a game usually prefer to choose best response strategies, and NE is determined based on the best response strategies of the players. Let the best response strategy of player \(I\) be represented as \(br_I(\cdot)\). Thus, given a combination of opponents' strategies \(x_{-I}\), \(br_I(x_{-I})\) denotes player \(I\)'s best response to \(x_{-I}\).

2.1.6 Nash equilibrium

It is a stable state where players cannot earn more profits by unilaterally deviating from their strategies. Players can have a pure or mixed strategy equilibrium in a game.

Let a game with \(m\) players be denoted by \((St, util)\), where \(St_I\) represents the strategy set of player \(I\), \(St = St_1 \times \dots \times St_m\) represents the set of strategy combinations, and \(util(x) = (util_1(x), \dots, util_m(x))\) represents the utility or payoff function, where \(x \in St\). \(x_I\) and \(x_{-I}\) denote the strategy of player \(I\) and the strategy combination of all players except player \(I\), respectively. When each player \(1, \dots, m\) plays the strategy \(x_I\) from the strategy profile \(x = (x_1, \dots, x_m)\), player \(I\) gets a payoff \(util_I(x)\). A strategy profile \(x^{*} \in St\) is a NE if the following condition is satisfied

$$\forall I,\ \forall x_I \in St_I:\quad util_I\left(x_I^{*}, x_{-I}^{*}\right) \ge util_I\left(x_I, x_{-I}^{*}\right).$$

A mixed-strategy equilibrium is computed for those scenarios in which the players randomize over their strategies; it is determined based on the expected payoffs of the respective players [92].
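For concreteness, the following minimal sketch (in Python, using hypothetical Prisoner's Dilemma payoffs that are not taken from this survey) enumerates the pure-strategy profiles of a two-player game and checks the NE condition above by brute force.

```python
import itertools

# Hypothetical 2-player payoff tables (Prisoner's Dilemma): UTIL[I][x_1][x_2].
# Strategies: 0 = cooperate, 1 = defect.
UTIL = {
    0: [[-1, -3], [0, -2]],   # payoffs for player 0
    1: [[-1, 0], [-3, -2]],   # payoffs for player 1
}
STRATEGIES = [0, 1]

def is_nash(profile):
    """Check util_I(x_I*, x_-I*) >= util_I(x_I, x_-I*) for every player I."""
    for player in (0, 1):
        current = UTIL[player][profile[0]][profile[1]]
        for alt in STRATEGIES:
            deviated = list(profile)
            deviated[player] = alt
            if UTIL[player][deviated[0]][deviated[1]] > current:
                return False  # a profitable unilateral deviation exists
    return True

for profile in itertools.product(STRATEGIES, repeat=2):
    if is_nash(profile):
        print("Pure-strategy NE:", profile)  # -> (1, 1): mutual defection
```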

2.1.7 Shapley function

In cooperative games, the reward of each player is computed by a function called the Shapley function. The Shapley value \(\Phi_{S_i}(v)\) of each player \(S_i\), \(1 \le i \le N\), is computed based on its individual contribution to a coalition, where \(N\) represents the player count. The Shapley value [116] of each player is calculated by the equation given below.

$$\Phi_{S_i}(v)=\sum_{C \subseteq S \setminus \{S_i\}} \frac{c!\,(N-c-1)!}{N!}\left(v\left(C \cup \{S_i\}\right)-v(C)\right)$$

In the above equation, \(S\) represents the set of \(N\) players (\(|S| = N\)), \(C\) denotes a coalition (\(|C| = c\)) with \(C \subseteq S \setminus \{S_i\}\), the factor \(\frac{c!\,(N-c-1)!}{N!}\) denotes the probability of a permutation in which the members of \(C\) come ahead of the individual player \(S_i\), and \(v(C \cup \{S_i\}) - v(C)\) denotes the individual (marginal) contribution of player \(S_i\) to the coalition \(C\), where the rewards of all coalitions are determined in advance. In this way, the Shapley function \(\Phi_{S_i}(v)\) yields the Shapley value of player \(S_i\). The player with the highest Shapley value is the most significant contributor in a group.
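As an illustration of the formula, the following minimal sketch (in Python, with a small hypothetical characteristic function v that is not taken from this paper) computes Shapley values by direct enumeration of coalitions; it is exponential in the number of players and practical only for small N.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of each player for a characteristic function v(frozenset)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for c in range(n):  # coalition sizes |C| = c
            for coalition in combinations(others, c):
                C = frozenset(coalition)
                weight = factorial(c) * factorial(n - c - 1) / factorial(n)
                total += weight * (v(C | {i}) - v(C))  # marginal contribution
        phi[i] = total
    return phi

# Hypothetical 3-player game: a coalition earns 10 per member, plus a 30 bonus
# if players 'a' and 'b' are both present.
def v(C):
    return 10 * len(C) + (30 if {"a", "b"} <= C else 0)

print(shapley_values(["a", "b", "c"], v))  # 'a' and 'b' split the bonus: 25, 25, 10
```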

2.1.8 Minimax theorem

In 1928, John von Neumann introduced the Minimax theorem, which opened the door for conventional game theory. For a two-player, zero-sum, simultaneous-move finite game, there must exist a value and an equilibrium point for both players [126]. The equilibrium point can be attained by applying pure or mixed strategies by either one or both of the players. Let us assume that x and y are the strategies of the two players, chosen from strategy sets X and Y, and that v is the value of the game. Then, the minimax theorem can be formulated as

$$\max_{x \in X}\ \min_{y \in Y} f\left(x,y\right)=\min_{y \in Y}\ \max_{x \in X} f\left(x,y\right)=v$$
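The value v and an optimal maximin mixed strategy of a finite zero-sum matrix game can be computed with linear programming. The sketch below is a minimal illustration, assuming SciPy is available and using a hypothetical payoff matrix (matching pennies); neither appears in the original text.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Maximin mixed strategy and value v for the row player of payoff matrix A."""
    m, n = A.shape
    # Variables: x = (p_1, ..., p_m, v); linprog minimizes, so minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j: v - sum_i p_i * A[i, j] <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The probabilities p_i sum to 1.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]  # v is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Hypothetical game: matching pennies, value 0, optimal strategy (0.5, 0.5).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
strategy, value = solve_zero_sum(A)
print(strategy, value)
```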

2.2 Deep learning overview

Deep learning is an advanced part of machine learning based on artificial neural networks and various learning methods, i.e., supervised, unsupervised, and reinforcement learning. There are several well-known deep learning architectures and techniques. Deep learning models use multiple layers in an artificial neural network to extract significant characteristics from unprocessed data. Graphics Processing Units (GPUs) are required to perform high-power computation on complex deep learning architectures. Deep learning has immense applications in multi-dimensional fields such as speech recognition, computer vision, audio recognition, natural language processing, medical image analysis, games, etc.

2.2.1 Deep learning architecture

In recent times, deep learning has received tremendous responses from various fields. Robust deep learning architectures bring significant improvement in the performance of multiple models. They can solve or model various complex problems because of their robustness. In most cases, it is observed that deep learning architectures outperform other existing models. The most popular deep learning architectures are the convolutional neural network, recurrent neural network, generative adversarial network, deep belief network, autoencoder, residual neural network, etc. This paper primarily studies generative adversarial neural networks because of their architecture, where game-theoretic techniques can be easily applied. In the future, we may explore possibilities of applications of game theory in other deep learning architectures.

Convolutional neural network

The convolutional neural network (CNN), a significant type of deep learning architecture, is best known for its vast capabilities in analyzing visual imagery. CNNs are regularized versions of multilayer perceptrons (usually fully connected networks). A CNN has three types of layers: input, hidden, and output. The hidden layers are mainly convolutional layers that convolve the input with learned filters via multiplication or another dot product, and the activation function that follows is commonly a ReLU layer. CNNs were inspired by the works [32, 59, 60].

Recurrent neural network

The recurrent neural network (RNN) is a category of advanced artificial neural networks in which the connections between nodes form a directed graph along a temporal sequence, so RNNs exhibit temporal dynamic behavior. Using internal states, RNNs can process variable-length sequences of inputs. They are powerful because their distributed hidden state makes them capable of storing large amounts of information about the past, and their non-linear dynamics allow them to update that hidden state in complex ways [39, 111, 146].

Long Short-Term Memory (LSTM), applied in deep learning, belongs to the family of recurrent neural networks. Its specialty lies in the fact that, unlike plain feed-forward neural networks, it has feedback connections. A general LSTM unit comprises a cell, an input gate, an output gate, and a forget gate. The cell stores values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. LSTM networks are best applicable for classifying, processing, and making predictions based on time-series data [15, 54].

Generative adversarial network

The generative adversarial network (GAN), a type of deep learning architecture, was invented in 2014 by Ian Goodfellow [36]. Here, two neural networks contest with each other in a game (initially given a training set). The idea of a GAN is to train a generator that can produce new images that look authentic to human eyes. GANs were originally modeled for unsupervised learning, but they are broadly used in reinforcement learning, semi-supervised learning, and fully supervised learning.

The layered architecture of GAN is represented in Fig. 3. Let us introduce the variables and parameters related to the GAN model:

  • \(\theta_d\): parameters of the discriminator \(D\)
  • \(\theta_g\): parameters of the generator \(g\)
  • \(p_z(z)\): input noise distribution
  • \(p_{data}(x)\): original data distribution

Fig. 3 Layered architecture of GAN

The final loss function for the discriminator can be written as

$$L^{(D)}=\max_{D}\left[\log\left(D(x)\right)+\log\left(1-D\left(g(z)\right)\right)\right]$$

Alternatively, the generator competes with the discriminator. The final loss function for the generator can be written as

$$L^{(G)}=\min_{g}\left[\log\left(D(x)\right)+\log\left(1-D\left(g(z)\right)\right)\right]$$

Both equations can be combined and rewritten as

$$\min_{g}\ \max_{D}\left[\log\left(D(x)\right)+\log\left(1-D\left(g(z)\right)\right)\right]$$

The above equation represents a single data point. For the entire dataset, the objective can be written in expectation over the data and noise distributions [36]:

$$\min_{g}\ \max_{D}\ \mathbb{E}_{x \sim p_{data}(x)}\left[\log D(x)\right]+\mathbb{E}_{z \sim p_z(z)}\left[\log\left(1-D\left(g(z)\right)\right)\right]$$
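As a minimal sketch of how this min-max objective is optimized in practice, the following PyTorch loop alternates a discriminator ascent step with a generator descent step. The tiny networks, learning rates, and toy data distribution are hypothetical stand-ins, not taken from this survey; production GANs typically use deeper, often convolutional, networks and the non-saturating generator loss [36].

```python
import torch
import torch.nn as nn

# Hypothetical toy networks; real GANs use deeper (often convolutional) models.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))               # g(z)
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())  # D(x)

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
eps = 1e-8  # numerical safety inside the logs

for step in range(1000):
    x = torch.randn(32, 2) * 0.5 + 2.0  # stand-in for samples from p_data
    z = torch.randn(32, 16)             # samples from the noise prior p_z

    # Discriminator ascent step: maximize log D(x) + log(1 - D(g(z))),
    # i.e., minimize the negative; detach g(z) so only D is updated.
    d_loss = -(torch.log(D(x) + eps) + torch.log(1 - D(G(z).detach()) + eps)).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator descent step: minimize log(1 - D(g(z))).
    g_loss = torch.log(1 - D(G(z)) + eps).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```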

Deep belief networks

A deep belief network, a type of deep learning architecture, is a probabilistic generative graphical model. Deep belief networks are constructed from several layers of stochastic, latent variables with binary values, called hidden layers or feature detectors. The two compelling features of deep belief networks are:

  • A sequential, layer-wise, efficient method for learning the top-down generative weights, which determine how the variables in one layer depend on the variables in the layer above
  • After learning, the values of the latent variables in every layer can be inferred in a single bottom-up pass that starts with an observed data vector in the bottom layer and exploits the weights in the opposite direction [48,49,50].

Autoencoder

The autoencoder is a distinctive category of artificial neural network utilized to learn efficient data codings through unsupervised techniques. Autoencoders aim to learn an encoding for a given set of data and to reduce the dimensionality of the data by training the network to ignore "noise." Being data-compression models, they encode a given input into a representation of smaller dimensions; a decoder can then reconstruct the input back from the encoded version [125, 147].

Residual neural network

Residual neural networks, abbreviated as ResNet, are artificial neural networks of a particular type inspired by the pyramidal cells of the animal cerebral cortex. They reuse activations from earlier layers by applying skip connections or shortcuts to jump over a few layers. Typical ResNet models are built with double- or triple-layer skips that contain nonlinearities (ReLU) and batch normalization in between [46]. An extra weight matrix may be used to learn the skip weights; such architectures are known as HighwayNets [121]. An architecture with several parallel skips is termed a DenseNet [57].

Radial basis function networks (RBFNs)

“The RBF network model is motivated by the locally tuned response observed in biologic neurons. Neurons with a locally tuned response characteristic can be found in several parts of the nervous system, such as cells in the auditory system that are selective to small bands of frequencies [114]”.

Multilayer Perceptrons (MLPs)

“MLPs belong to the class of feedforward neural networks with multiple layers of perceptrons that have activation functions. MLPs consist of an input layer and an output layer that is fully connected. They have the same number of input and output layers but may have multiple hidden layers and can be used to build speech recognition, image recognition, and machine-translation software [8].”

Self-organizing maps (SOMs)

“The SOM algorithm distinguishes two stages: the competitive stage and the cooperative stage. In the first stage, the best matching neuron is selected. In the second stage, neuron weights are not modified independently but as topologically-related subsets on which similar kinds of weight updates are performed [66].”

2.2.2 Deep learning techniques

Dropout

Dropout is a technique adopted to overcome overfitting in a neural network, addressing computation at both training and testing time. Effectively, it allows the training of several neural networks without any significant computational overhead and gives an efficient approximate way of combining exponentially many different neural networks. In the training phase, dropping out refers to temporarily removing a randomly chosen set of units (neurons): the dropped-out units are not considered during the forward and backward passes. Temporarily removing a node and all its incoming/outgoing connections results in a thinned network [49, 50].
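A minimal sketch of the idea (plain NumPy, assuming the commonly used "inverted" dropout variant with a hypothetical keep probability, not a detail from this survey): units are zeroed at random during training and the survivors are rescaled, so no change is needed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob=0.8, training=True):
    """Inverted dropout: zero out units randomly and rescale the survivors."""
    if not training:
        return activations  # test time: use the full network unchanged
    mask = rng.random(activations.shape) < keep_prob  # keep each unit w.p. keep_prob
    return activations * mask / keep_prob

h = np.ones((2, 5))   # some hidden-layer activations
print(dropout(h))     # roughly 20% of units dropped, the rest scaled by 1/0.8
```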

Rectified linear unit

ReLU is an activation function broadly used in various deep learning architectures. It is a non-linear activation function used for both multiple-layer neural networks and deep neural networks. For any negative input, the function returns zero; for any positive input x, it returns x. So, the simplified form of the function is f(x) = max(0, x).

In recent times, the sigmoid and hyperbolic tangent functions have largely been replaced by the ReLU function. The main reason for the popularity of ReLU is its ability to make the training of deep neural networks faster than other conventional activation functions. A significant feature of ReLU is that its derivative is 1 for positive input; because this derivative is constant, deep neural networks save additional time when calculating error terms during the training phase. The extensive use of ReLU is shown in [34].
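A minimal NumPy sketch of the function and its derivative:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)          # f(x) = max(0, x)

def relu_grad(x):
    return (x > 0).astype(x.dtype)   # derivative: 1 for positive input, else 0

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x), relu_grad(x))         # [0. 0. 0. 1.5] [0. 0. 0. 1.]
```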

Stochastic gradient descent

Stochastic gradient descent (SGD) is one of the most powerful algorithms used in machine learning and deep learning models and is the basis of neural network training. It is an essential iterative algorithm. Gradient descent works as follows: it starts from an arbitrary point on a function and moves down the slope in steps/iterations until it reaches the minimum of the function. Modifying the algorithm to include random sampling yields stochastic gradient descent. With plain gradient descent on a large dataset, the gradient must be computed over many samples in each iteration, and this has to be repeated until the minimum is found; the resulting computational complexity is the main challenge of the method. SGD is introduced as a solution to this problem: it reduces the sample size, taking only a single randomly chosen sample to perform the update in each iteration. The backbone of SGD is to use the gradient of the cost function of a single sample in each iteration as an estimate of the full gradient. Applications of the gradient descent method are addressed in [26, 65].
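A minimal sketch of SGD on a least-squares problem (synthetic data and an assumed learning rate, neither taken from this paper): each iteration updates the weights using the gradient of a single randomly chosen sample.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # synthetic dataset
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=1000)

w = np.zeros(3)
lr = 0.01                                      # assumed learning rate
for step in range(20000):
    i = rng.integers(len(X))                   # one randomly chosen sample
    grad = (X[i] @ w - y[i]) * X[i]            # gradient of 0.5 * (x_i . w - y_i)^2
    w -= lr * grad                             # move down the slope
print(w)                                       # close to w_true
```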

Batch normalization

Batch normalization is an essential technique in artificial neural networks; its advantages are improvements in the speed, stability, and performance of neural networks. The inputs of a layer can be normalized by scaling and adjusting the activations. It is a technique by which the inputs of each layer are normalized to deal with the internal covariate shift problem, i.e., the problem that arises in the intermediate layers because, during training, the distribution of the activations is constantly changing. This change slows down the training process because each layer needs to learn a new distribution of activations in each training step. The method comprises: calculating the mean and variance of the layer inputs, normalizing the layer inputs with the help of these batch statistics, and scaling and shifting to obtain the layer's output [61].
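A minimal NumPy sketch of the forward pass just described (the learnable scale gamma and shift beta are initialized to 1 and 0 here purely for illustration):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a batch per feature, then scale (gamma) and shift (beta)."""
    mean = x.mean(axis=0)                     # batch statistics
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # normalized layer inputs
    return gamma * x_hat + beta               # learnable scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))   # a batch of layer inputs
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))  # ~0 and ~1 per feature
```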

3 Application of game theory in deep learning and artificial intelligence

This section addresses some contributions in which game models are used to solve deep learning and artificial intelligence problems. Figure 4 depicts the basic architectures of a simple neural network and a deep learning neural network.

Fig. 4 Basic architectures of a simple neural network and a deep learning neural network

The authors in [112] address a new approach by which game-theoretic techniques can model individual neurons. They show that different strategic game-theoretic approaches can be applied to model paired neuron systems, and a learning algorithm based on game theory is developed for neural learning. Artificial neural networks have proved their significance in multi-dimensional domains, but selecting an appropriate network to solve a problem is challenging [62, 141, 151]. The authors in [122] show that game-theoretic concepts like the Shapley value help to differentiate significant from unnecessary elements of an artificial neural network. A cooperative game is designed from a neural network in which neurons form different groups, and their contributions to the game are determined with the help of the Shapley value. The experiments prove that the Shapley value concept is better than other heuristic approaches for assessing the contribution of neurons.

There are various GAN algorithms. The first uses fully connected neural networks for both the generator and the discriminator. The second is the convolutional GAN, as going from fully connected to convolutional neural networks is a natural extension. The third is the conditional GAN, which extends the 2D GAN framework to the conditional setting by making both the generator and the discriminator networks class-conditional [142].

Alternatively, a max-min problem is formulated for adversarial learning with multiplayer stochastic games and two-player sequential games over deep learning networks [17]. The experimental results demonstrate the efficacy of the adversarial algorithm used: the algorithm can manipulate adversarial features that affect testing results in deep learning models. The work introduces a secure learner that is adaptive to antagonistic attacks on deep learning, and the paper claims that the given framework is more robust than a traditional convolutional neural network (CNN) and a generative adversarial network under adversarial attacks. This paper also highlights the impact on adversarial payoff functions over randomized strategies when the rules of the games are changed; attack scenarios over such strategy spaces lead to multiplayer games over varied strategies. A reduction of supervised learning to a game is explored in [113]. For convex one-layer problems, an equivalence between Nash equilibria in a simple game and global minimizers of the training problem is shown, and it is also demonstrated how the game can be extended to acyclic neural networks using differentiable convex gates. The work in [86] presents a model that integrates concepts from the fields of deep learning and artificial life to reflect their potential in various scenarios. The model shows the potential of neural networks to simulate population dynamics, as well as how results from evolutionary game theory apply to the behavior of the networks.

Modeling humans' ability to represent the mental state transitions of others is a challenging task for the research community. The authors of [106] train a machine to construct such models: a Theory of Mind neural network (ToMnet) is designed via meta-learning that builds models of other agents by observing their behavior. The ToMnet model is applied to agents in simple grid environments. This system can autonomously learn to model other agents, which is a significant contribution to designing multi-agent AI systems; it can be applied to develop technologies for machine-human communication and to advance the growth of interpretable artificial intelligence. For large-scale perfect-information games, artificial intelligence is superior to human-level intelligence [124]. On the other hand, it is not easy to obtain good results in large-scale imperfect-information games (i.e., business strategies, war games, etc.). Neural Fictitious Self-Play (NFSP) is a self-play-based approach that requires no prior knowledge and helps to effectively learn an approximate Nash equilibrium. However, the algorithm depends on the Deep Q-Network: it does not converge easily in online games against opponents with changing strategies, and it cannot find an approximate Nash equilibrium in games with a large search scale or deep search depth. The work in [145] introduces the Monte Carlo Neural Fictitious Self-Play (MC-NFSP) algorithm, which amalgamates NFSP and Monte Carlo tree search; for large-scale zero-sum imperfect-information games, the approach improves performance. An Asynchronous Neural Fictitious Self-Play (ANFSP) model is also developed to use a parallel and asynchronous framework to gather the game's history [145].

The authors in [128] address a reversed reinforcement learning method: after training a deep neural network according to the strategies in a payoff table, a randomized strategy input is initialized, and the error with respect to the actual output is computed. Propagating the required output back to the initially randomized strategy input in the input layer of the trained deep neural network results in a task similar to human deduction. Detecting imaging biomarkers for autism spectrum disorder (ASD) is challenging but can help to explain ASD and to predict or monitor treatment outcomes. Deep learning classifiers are used to detect ASD from functional magnetic resonance imaging (fMRI) with better accuracy than traditional learning strategies. The concept of the Shapley value from cooperative game theory is applied to this problem; cooperative game theory is suitable since it more accurately determines biomarker importance for each instance of a deep learning model. The main challenge in using the Shapley value is its computational complexity. The method is validated on the MNIST dataset and compared to human perception, and a Random Forest (RF) is trained to classify ASD or control subjects from fMRI so that Shapley value outcomes can be compared with existing RF-based feature importance [74]. The development of intelligent machine learning applications with expected long-term profit maximization is studied in multi-agent systems. A learning algorithm for the IPD problem is proposed in [2]; using numerical analysis, it is shown to outperform the tit-for-tat algorithm and many other adaptive and non-adaptive strategies. It is also discussed how artificial intelligence and machine learning work closely together to provide the agent with a mind-reading capability.

Shin et al. [119] have put forward a model that assigns the updating time period to drones by auctioning. A second-price auction system is applied, in which the winning bidder pays the second-highest bid. In the model, the data needed for the dispersal of drones is obtained from a deep learning algorithm. The shortcoming of the model proposed by Shin and the team is that it does not consider two significant criteria: the possibility of increasing the number of charging stations and the uncertainty of charging the drones at smaller bids. Ren et al. [107] have summarized the representative defenses developed, including adversarial training, randomization-based schemes, de-noising methods, and provable defenses.

Further, to utilize the intuitive adversarial-training-based method, Ren et al. have combined min-max games with deep learning and neural networks. Leckie et al. [71] have combined game theory and deep learning to evade jamming attacks in network security. Based on a deep analysis of node behavior characteristics in the opportunistic network, Wang et al. [135] have introduced evolutionary game theory to explore the node cooperation mechanism in the opportunistic social network. In the same line, Ranadheera et al. [118] have utilized a deep learning-based deep-Q algorithm with game theory for fair and efficient resource management in mobile edge computing. To achieve intelligence in a shared environment with multiple agents, Lanctot et al. [69] have proposed a unified game-theoretic approach to multi-agent reinforcement learning. Further, Lu and Kai [78] have generalized a multi-agent approach using reinforcement learning and game theory. Dasgupta et al. [21] have combined deep learning and game theory for cybersecurity approaches.

Rudrapal et al. [108] have used deep learning-based predictive modeling with a multilayer perceptron to predict football match results. Wakatsuki et al. [127] have used multi-player games for decision making. Wang et al. [134] have used a Deep Neural Network (DNN)-based deep learning model with game theory to provide a holistic framework of robot-human interaction. Pinto et al. [104] have used self-supervised deep learning for a robotic adversary; moreover, they have designed adversarial training as a two-player zero-sum repeated game. Balduzzi [22] has used gradient-based game theory and deep learning to optimize game grammars. Shu et al. [44] have used a DNN and a game-theoretical approach to develop the Multi-granularity Network Representation Learning (MGNRL) framework for the latent representation of nodes in a network. Game theory and deep learning concepts have been widely used in the education field as well: Vos et al. [95] have combined game theory and deep learning to understand students' motivation in education. Urbani [101] has combined game theory and deep learning for music genre classification. Han [63] has concluded that his approach of deep learning approximation for stochastic control problems should apply to broad areas, including dynamic game theory with more than one agent, dynamic resource allocation with several resources and demands, and properties management with large portfolios. Woo [137] has used game-theoretic complex analysis for nuclear security by addressing non-zero-sum algorithms; in that research, deep learning is used for data processing, where the neural network is used for wiretapping. Table 2 summarizes critical game-theoretic models/concepts used to model various deep learning/artificial intelligence systems.


The contribution in [123] sets up a connection between a deep learning model and a game-theoretic approach, introducing the application of deep learning to solve game-theoretic problems. Some techniques have been addressed to speed up deep learning and gradient-based approaches with continuous actions for multi-agent adversarial games. On the other hand, multiple GANs are developed as robust distributed games. A Bregman-based strategic deep learning algorithm is introduced for finding robust distributed Nash equilibria, and it is also evaluated on image synthesis and image classification. GANs are modeled as a min-max game, and a fast learning algorithm using Bregman divergence is explored; a comparative study of the performance of the Bregman-based algorithm against six other algorithms is also shown. Table 3 shows a mapping of the components of a game to the components of a deep learning network. The work in [16] shows that deep learning is vulnerable to changes in data distribution, so a deep learning network like a CNN is at risk in adversarial scenarios. An adversarial learning model is designed for supervised learning, and adversarial scenarios are modeled by a game-theoretic approach to the conduct of deep learning: a smart antagonist and a deep learning model interact with each other, and the interaction is represented as a two-person sequential non-cooperative Stackelberg game with stochastic payoff functions.


The datasets used in most of the research articles in this study are as follows: MNIST, CelebA, CIFAR-10, BBBC039, SVHN, STL-10, ImageNet, DIEL, etc., along with various initial and transfer datasets.

6 Implementation environment

The current study mainly focuses on deep reinforcement learning and GAN models, which have various real-time and business applications, and it addresses several diverse problems. Some significant implementation environments of GAN and deep reinforcement learning models are described in this section based on the existing literature. GAN is primarily used to generate samples/images for various image datasets. GANs can create realistic images, including images of human faces that are used for different purposes. The cartoon industry is a popular entertainment industry where GANs are used to generate various cartoon characters. Other critical applications of GANs are image-to-image translation and text-to-image translation. GANs can generate new human postures, which have security, healthcare, and entertainment applications. Another exciting application is generating emojis from ordinary images, frequently used on social network platforms and in mobile applications for entertainment purposes. Photos are edited and blended using GAN models. GANs are used for video prediction and 3D object generation, for improving cybersecurity in various places, and for improving the healthcare sector. They can imitate artistic skills by generating exact or similar images of a painting, and they can even generate fake videos. The quality of an image can also be improved, and images de-noised, using GANs, which serves several purposes. Alternatively, deep reinforcement learning has excellent applications in real-time systems and industries. It is used for the implementation of self-driving cars and industry automation, and it is also used in NLP applications. The healthcare system has adopted deep reinforcement learning as a tool, where it successfully performs various tasks such as automatic disease prediction. It has other robust applications in news recommendation, gaming, and building automated robots, as well as in resource management in computer clusters, traffic-light control, web system configuration, and advertisement. Reduction in energy consumption and online recommendation systems are some of its industrial applications.

The performance of the proposed study can be evaluated through sensitivity, specificity, and recall. Specificity is defined as the proportion of actual negatives that are correctly predicted as negative. Sensitivity is the proportion of actual positive cases that are correctly predicted as positive; recall is another name for the same measure, i.e., the fraction of true positives that the model correctly identifies [87].
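For clarity, these measures can be computed directly from confusion-matrix counts; a minimal sketch with hypothetical counts:

```python
def sensitivity(tp, fn):
    """Sensitivity (= recall): true positives / actual positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Specificity: true negatives / actual negatives."""
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts.
tp, fn, tn, fp = 80, 20, 90, 10
print(sensitivity(tp, fn))   # 0.8
print(specificity(tn, fp))   # 0.9
```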

7 Future research direction and challenges

The previous sections of the present work discuss the linkages and applications of game theory in deep learning; the current section aims to discuss future research challenges related to this application. Narrating and listing all possible future challenges is a complex task, as different works use different simulation environments, simulation tools, data sets, and experimental conditions. Nevertheless, the current section discusses some of the critical representative future research directions, which are described in Table 7 below. Table 7 also provides hints, pointers to the literature, and the association between game theory and deep learning needed to achieve breakthroughs on future research challenges.

Table 7 Future research challenges and the association of deep learning and game theory

In conclusion, researchers and scientists have to carry out extensive developmental research and effort in the directions described above to overcome the problems and challenges faced by the future of deep learning, such as genomics and medical imaging. Besides, more techniques and further inspiration are required to develop new deep learning approaches. New methodologies for complex and challenging problems will be necessary, and the collaborative efforts required of various research communities need to be carefully addressed.

8 Conclusions

Games play a vital role in the advancement of artificial intelligence. Games are frequently used as training mechanisms in learning algorithms, such as reinforcement learning and imitation learning. The paper highlights that AI-enabled and deep learning-enabled multi-agent systems are also developed to compete or cooperate to complete a goal with the help of game theory. In real-time scenarios, deep learning frameworks need to face situations with imperfect information. The paper considers all 31 articles in which game theory and deep learning coincide, as described in Table 1. It is discussed in the paper that DeepMind's AlphaGo works with the help of partial knowledge to strategically outperform the world's most proficient humans in the game of Go.

The paper also sheds ample light on the application of game theory to construct adversarial networks. An essential property of an adversarial network is that a closed-form loss function is not needed; some networks have the capability of finding their own loss function. A significant disadvantage of adversarial networks is the difficulty of training them. Adversarial learning models find a Nash equilibrium of a two-player non-cooperative game. The paper addresses applications of game theory in artificial neural networks, reinforcement learning, and deep learning, with the main focus on the construction of GANs by applying different game models. The paper's contribution may help researchers in game theory, deep learning, and artificial intelligence to acquire a large number of ideas about inter-domain research and to contribute holistic works in these domains. The paper also addresses various challenges and future directions of the identified area that will help researchers explore it. In the future, behavioral game theory can be applied to model deep learning networks and mimic human neural networks. The combination of these research areas will open several paths for researchers.

References

  1. Abraham I, Dolev D, Gonen R, Halpern J (2006) Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In: Proceedings of the twenty-fifth annual ACM symposium on principles of distributed computing, pp 53–62
  2. Agrawal A, Jaiswal D (1981) When machine learning meets AI and game theory
  3. Alafif T, Tehame AM, Bajaba S, Barnawi A, Zia S (2021) Machine and deep learning towards COVID-19 diagnosis and treatment: survey, challenges, and future directions. Int J Environ Res Public Health 18(3):1117
  4. Andersen PA, Goodwin M, Granmo OC (2018) Deep RTS: a game environment for deep reinforcement learning in real-time strategy games. In: 2018 IEEE conference on computational intelligence and games (CIG). IEEE, pp 1–8
  5. Arora S, Ge R, Liang Y, Ma T, Zhang Y (2017) Generalization and equilibrium in generative adversarial nets (GANs). In: Proceedings of the 34th international conference on machine learning, vol 70. JMLR.org, pp 224–232
  6. Aslan S, Vascon S, Pelillo M (2020) Two sides of the same coin: improved ancient coin classification using graph transduction games. Pattern Recogn Lett 131:158–165
  7. Balliet D, Mulder LB, Van Lange PA (2011) Reward, punishment, and cooperation: a meta-analysis. Psychol Bull 137(4):594–615
  8. Baum EB (1988) On the capabilities of multilayer perceptrons. J Complex 4(3):193–215
  9. Berthelot D, Schumm T, Metz L (2017) BEGAN: boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717
  10. Brams SJ (2003) Negotiation games: applying game theory to bargaining and arbitration, vol 2. Psychology Press
  11. Caicedo JC, Roth J, Goodman A, Becker T, Karhohs KW, Broisin M, … Carpenter AE (2019) Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytometry Part A 95(9):952–965
  12. Carse J (2011) Finite and infinite games. Simon and Schuster
  13. Carter E (2019) Deep learning for robust meta-analytic estimation
  14. Chavdarova T, Fleuret F (2018) Sgan: an alternative training of generative adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9407–9415
  15. Cheng J, Dong L, Lapata M (2016) Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733
  16. Chivukula AS, Liu W (2017) Adversarial learning games with deep learning models. In: 2017 international joint conference on neural networks (IJCNN). IEEE, pp 2758–2767
  17. Chivukula AS, Liu W (2018) Adversarial deep learning models with multiple adversaries. IEEE Trans Knowl Data Eng 31(6):1066–1079
  18. Chongxuan LI, Xu T, Zhu J, Zhang B (2017) Triple generative adversarial nets. In: Advances in neural information processing systems, pp 4088–4098
  19. Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA (2018) Generative adversarial networks: an overview. IEEE Signal Process Mag 35(1):53–65
  20. Darwish A, Hassanien AE, Das S (2019) A survey of swarm and evolutionary computing approaches for deep learning. Artificial Intelligence Review, pp 1–46
  21. Dasgupta P, Collins JB (2019) A survey of game theoretic approaches for adversarial machine learning in cybersecurity tasks. arXiv preprint arXiv:1912.02258
  22. David B (2016) Grammars for games: a gradient-based, game-theoretic framework for optimization in deep learning. Front Robot AI 2:39
  23. Davis M, Maschler M (1965) The kernel of a cooperative game. Naval Res Logist Q 12(3):223–259
  24. Deng Z, Zhang H, Liang X, Yang L, Xu S, Zhu J, Xing EP (2017) Structured generative adversarial networks. In: Advances in neural information processing systems, pp 3899–3909
  25. Ding M, Tang J, Zhang J (2018) Semi-supervised learning on graphs with generative adversarial nets. In: Proceedings of the 27th ACM international conference on information and knowledge management. ACM, pp 913–922
  26. Duchi J, Hazan E, Singer Y (2011) Adaptive subgradient methods for online learning and stochastic optimization. J Mach Learn Res 12(Jul):2121–2159
  27. Durugkar I, Gemp I, Mahadevan S (2016) Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673
  28. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, … Dean J (2019) A guide to deep learning in healthcare. Nat Med 25(1):24–29
  29. Fedus W, Rosca M, Lakshminarayanan B, Dai AM, Mohamed S, Goodfellow I (2017) Many paths to equilibrium: GANs do not need to decrease a divergence at every step. arXiv preprint arXiv:1710.08446
  30. Fei F (2019) Integrate learning with game theory for societal challenges. In: Proceedings of the 28th international joint conference on artificial intelligence. AAAI Press
  31. Foerster J, Chen RY, Al-Shedivat M, Whiteson S, Abbeel P, Mordatch I (2018) Learning with opponent-learning awareness. In: Proceedings of the 17th international conference on autonomous agents and multiagent systems. International Foundation for Autonomous Agents and Multiagent Systems, pp 122–130
  32. Fukushima K (1980) Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 36(4):193–202
  33. Gadirov H (2018) Capsule architecture as a discriminator in generative adversarial networks
  34. Glorot X, Bordes A, Bengio Y (2011) Deep sparse rectifier neural networks. In: Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp 315–323
  35. Goodfellow I (2016) NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160
  36. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems, pp 2672–2680
  37. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press
  38. Granmo OC (2018) The Tsetlin machine-a game theoretic bandit driven approach to optimal pattern recognition with propositional logic. arXiv preprint arXiv:1804.01508
  39. Graves A, Mohamed AR, Hinton G (2013) Speech recognition with deep recurrent neural networks. In: 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, pp 6645–6649
  40. Guo X, Singh S, Lewis R, Lee H (2016) Deep learning for reward design to improve Monte Carlo tree search in atari games. arXiv preprint arXiv:1604.07095
  41. Gupta S (2018) Multi-player generative adversarial networks. In: 2018 international high performance extreme computing conference (HPEC). IEEE
  42. Gurram P, Kwon H (2014) Coalition game theory based feature subset selection for hyperspectral image classification. In: 2014 IEEE geoscience and remote sensing symposium. IEEE, pp 3446–3449
  43. Halpern JY (2008) Computer science and game theory. The New Palgrave Dictionary of Economics 1–8:984–994
  44. Hang S, Liu Q, Xia S (2018) Multi-granularity network representation learning based on game theory. In: 2018 IEEE international conference on data mining workshops (ICDMW). IEEE
  45. Hartmann S, Weinmann M, Wessel R, Klein R (2017) StreetGAN: towards road network synthesis with generative adversarial networks
  46. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778
  47. Heinrich J, Silver D (2016) Deep reinforcement learning from self-play in imperfect-information games. arXiv preprint arXiv:1603.01121
  48. Hinton GE, Osindero S, Teh YW (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554
  49. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR (2012) Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580
  50. Hinton G, Deng L, Yu D, Dahl GE, Mohamed AR, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN, Kingsbury B (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag 29(6):82–97
  51. Hitawala S (2018) Comparative study on generative adversarial networks. arXiv preprint arXiv:1801.04271
  52. Hjelm RD, Jacob AP, Che T, Trischler A, Cho K, Bengio Y (2017) Boundary-seeking generative adversarial networks. arXiv preprint arXiv:1702.08431
  53. Ho J, Ermon S (2016) Generative adversarial imitation learning. In: Advances in neural information processing systems, pp 4565–4573
  54. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
  55. Holmgård C, Liapis A, Togelius J, Yannakakis GN (2014) Generative agents for player decision modeling in games. In: FDG
  56. Hsieh YP, Liu C, Cevher V (2018) Finding mixed Nash equilibria of generative adversarial networks. arXiv preprint arXiv:1811.02002
  57. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700–4708
  58. Huang C, Kairouz P, Chen X, Sankar L, Rajagopal R (2018) Generative adversarial privacy. arXiv preprint arXiv:1807.05306
  59. Hubel DH, Wiesel TN (1959) Receptive fields of single neurones in the cat's striate cortex. J Physiol 148(3):574–591
  60. Hubel DH, Wiesel TN (1968) Receptive fields and functional architecture of monkey striate cortex. J Physiol 195(1):215–243
  61. Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167
  62. Jain AK, Mao J, Mohiuddin KM (1996) Artificial neural networks: a tutorial. Computer 29(3):31–44
  63. Jiequn H (2016) Deep learning approximation for stochastic control problems. arXiv preprint arXiv:1611.07422
  64. Johnson ND, Mislin AA (2011) Trust games: a meta-analysis. J Econ Psychol 32(5):865–889
  65. Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980
  66. Kohonen T (1997) Exploration of very large databases by self-organizing maps. In: Proceedings of international conference on neural networks (icnn'97), vol 1). IEEE, pp PL1–PL6
  67. Kossaifi J, Tran L, Panagakis Y, Pantic M (2018) Gagan: Geometry-aware generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 878–887
  68. Lamba A (2013) Enhancing awareness of cyber-security and cloud computing using principles of game theory. Int J Adv Manag Technol Eng Sci 3
  69. Lanctot M et al (2017) A unified game-theoretic approach to multiagent reinforcement learning. Advances in neural information processing systems
  70. Larichev OI, Moshkovich HM (2013) Verbal decision analysis for unstructured problems, vol 17. Springer Science & Business Media, p 51
  71. Leckie C, Peyam P, Jack R (2018) Deep learning based game-theoretical approach to evade jamming attacks. In: Decision and game theory for security: 9th international conference, GameSec 2018, Seattle, WA, USA, October 29–31, 2018, proceedings, vol 11199. Springer
  72. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436
  73. Li J, Madry A, Peebles J, Schmidt L (2017) Towards understanding the dynamics of generative adversarial networks. arXiv preprint arXiv:1706.09884
  74. Li X, Dvornek NC, Zhou Y, Zhuang J, Ventola P, Duncan JS (2019) Efficient interpretation of deep learning models using graph structure and cooperative game theory: application to ASD biomarker discovery. In: International conference on information processing in medical imaging. Springer, Cham, pp 718–730
  75. Liang X, Xiao Y (2012) Game theory for network security. IEEE Commun Surv Tutor 15(1):472–486
  76. Lippi M (2015) Statistical relational learning for game theory. IEEE Trans Comput Intell AI Games 8(4):412–425
  77. Liu J, Snodgrass S, Khalifa A, Risi S, Yannakakis GN, Togelius J (2021) Deep learning for procedural content generation. Neural Comput & Applic 33(1):19–37
  78. Lu Y, Kai Y (2020) Algorithms in multi-agent systems: a holistic perspective from reinforcement learning and game theory. arXiv preprint arXiv:2001.06487
  79. Luce RD, Raiffa H (1989) Games and decisions: introduction and critical survey. Courier Corporation
  80. Lucic M, Kurach K, Michalski M, Gelly S, Bousquet O (2018) Are gans created equal? A large-scale study. In: advances in neural information processing systems, pp 700–709
  81. Malherbe C, Umarova RM, Zavaglia M, Kaller CP, Beume L, Thomalla G, Weiller C, Hilgetag CC (2018) Neural correlates of visuospatial bias in patients with left hemisphere stroke: a causal functional contribution analysis based on game theory. Neuropsychologia 115:142–153
  82. Mao X, Li Q, Xie H, Lau RY, Wang Z, Paul Smolley S (2017) Least squares generative adversarial networks. In: Proceedings of the IEEE international conference on computer vision, pp 2794–2802
  83. Marchiori D, Nagel R, Schmidt J (2019) Heuristics, thinking about others, and strategic management: insights from behavioral game theory
  84. Marsland S (2014) Machine learning: an algorithmic perspective. Chapman and Hall/CRC
  85. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529
  86. Moran N, Pollack J (2018) Coevolutionary neural population models. In artificial life, pp 39–46
  87. Murugan R, Goel T (2021) E-DiCoNet: extreme learning machine based classifier for diagnosis of COVID-19 using deep convolutional network. J Ambient Intell Humaniz Comput:1–12
  88. Mycielski J (1992) Games with perfect information. Handbook of Game Theory with Economic Applications 1:41–70
  89. Myerson RB (1991) Game theory: analysis of conflict. Harvard University Press
  90. Myerson RB (2013) Game theory. Harvard University Press
  91. Narahari Y (2014) Game theory and mechanism design, vol 4. World Scientific
  92. Nash JF (1950) Equilibrium points in n-person games. Proc Natl Acad Sci 36(1):48–49
  93. Nash J (1951) Non-cooperative games. Ann Math 54:286–295
  94. Nguyen T, Le T, Vu H, Phung D (2017) Dual discriminator generative adversarial nets. In: Advances in neural information processing systems, pp 2670–2680
  95. Nienke V, Van H, Denessen E (2011) Effects of constructing versus playing an educational game on student motivation and deep learning strategy use. Comput Educ 56(1):127–137
  96. Nisan N, Roughgarden T, Tardos E, Vazirani VV (eds) (2007) Algorithmic game theory. Cambridge University Press
  97. Oliehoek FA, Savani R, Gallego-Posada J, Van der Pol E, De Jong ED, Groß R (2017) GANGs: generative adversarial network games. arXiv preprint arXiv:1712.00679
  98. Oliehoek FA, Savani R, Gallego J, van der Pol E, Groß R (2018) Beyond local Nash equilibria for adversarial networks. arXiv preprint arXiv:1806.07268
  99. Osborne MJ (2004) An introduction to game theory, vol 3, no 3. Oxford university press, New York
  100. Osborne MJ, Rubinstein A (1994) A course in game theory. MIT Press
  101. Paola U (2018) Combining deep learning and game theory for music genre classification. BS thesis, Università Ca' Foscari Venezia
  102. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, … Desmaison A (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in neural information processing systems, pp 8024–8035
  103. Pfau D, Vinyals O (2016) Connecting generative adversarial networks and actor-critic methods. arXiv preprint arXiv:1610.01945
  104. Pinto L, James D, Gupta A (2017) Supervision via competition: robot adversaries for learning tasks. In: 2017 IEEE international conference on robotics and automation (ICRA). IEEE
  105. Qi GJ (2017) Loss-sensitive generative adversarial networks on lipschitz densities. arXiv preprint arXiv:1701.06264
  106. Rabinowitz NC, Perbet F, Song HF, Zhang C., Eslami SM, Botvinick M (2018) Machine theory of mind. arXiv preprint arXiv:1802.07740
  107. Ren K, Zheng T, Qin Z, Liu X (2020) Adversarial attacks and defenses in deep learning. Engineering 6:346–360
  108. Rudrapal D, Boro S, Srivastava J, Singh S (2020) A deep learning approach to predict football match result. In: Computational intelligence in data mining. Springer, Singapore, pp 93–99
  109. Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X (2016) Improved techniques for training gans. In: Advances in neural information processing systems, pp 2234–2242
  110. Saxe AM, Bansal Y, Dapello J, Advani M, Kolchinsky A, Tracey BD, Cox DD (2019) On the information bottleneck theory of deep learning. J Stat Mech Theory Exp 2019(12):124020
  111. Schuster M, Paliwal KK (1997) Bidirectional recurrent neural networks. IEEE Trans Signal Process 45(11):2673–2681
  112. Schuster A, Yamaguchi Y (2010) Application of game theory to neuronal networks. Advances in Artificial Intelligence, 2010
  113. Schuurmans D, Zinkevich MA (2016) Deep learning games. In: Advances in neural information processing systems, pp 1678–1686
  114. Schwenker F, Kestler HA, Palm G (2001) Three learning phases for radial-basis-function networks. Neural Netw 14(4–5):439–458
  115. Sengupta N (2021) Information generation in interactions: the link between evolutionary game theory and evolutionary economics
  116. Shapley LS (1953) A value for n-person games. Contributions to the Theory of Games, 2(28), pp 307–317
  117. Shapley LS (1953) Stochastic games. Proc Natl Acad Sci 39(10):1095–1100
  118. Shermila R, Maghsudi S, Hossain E (2017) Mobile edge computation offloading using game theory and reinforcement learning. arXiv preprint arXiv:1711.09012
  119. Shin M, Joongheon K, Marco L (2019) Auction-based charging scheduling with deep learning framework for multi-drone networks. IEEE Trans Veh Technol 68(5):4235–4248
  120. Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A, Chen Y (2017) Mastering the game of go without human knowledge. Nature 550(7676):354
  121. Srivastava RK, Greff K, Schmidhuber J (2015) Highway networks. arXiv preprint arXiv:1505.00387
  122. Stier J, Gianini G, Granitzer M, Ziegler K (2018) Analysing neural network topologies: a game theoretic approach. Procedia Computer Science 126:234–243
  123. Tembine H (2019) Deep learning meets game theory: Bregman-based algorithms for interactive deep generative adversarial networks. IEEE Trans Cybern
  124. Tennenholtz M (2002) Game theory and artificial intelligence. In: Foundations and applications of multi-agent systems. Springer, Berlin, pp 49–58
  125. Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol PA (2010) Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res 11(Dec):3371–3408
  126. Von Neumann J (1959) On the theory of games of strategy. Contributions to the Theory of Games, 4, pp 13–42
  127. Wakatsuki M, Fujimura M, Nishino T (2020) A decision making method based on society of mind theory in multi-player imperfect information games. In: Deep learning and neural networks: concepts, methodologies, tools, and applications. IGI Global, pp 317–329
  128. Wang B (2018) From deep learning to deep deducing: automatically tracking down Nash equilibrium through autonomous neural agent, a possible missing step toward general AI
  129. Wang Y Discriminative adversarial learning: a general framework for interactive learning
  130. Wang K, Gou C, Duan Y, Lin Y, Zheng X, Wang FY (2017) Generative adversarial networks: introduction and outlook. IEEE/CAA J Autom Sin 4(4):588–598
  131. Wang B, Liu K, Zhao J (2017) Conditional generative adversarial networks for commonsense machine comprehension. In: IJCAI, pp 4123–4129
  132. Wang J, Yu L, Zhang W, Gong Y, Xu Y, Wang B, Zhang P, Zhang D (2017) Irgan: a minimax game for unifying generative and discriminative information retrieval models. In: Proceedings of the 40th international ACM SIGIR conference on research and development in information retrieval. ACM, pp 515–524
  133. Wang H, Wang J, Wang J, Zhao M, Zhang W, Zhang F, Xie X, Guo M (2018) Graphgan: graph representation learning with generative adversarial nets. In: Thirty-Second AAAI Conference on Artificial Intelligence.
  134. Wang RQ et al (2019) Scene recognition based on DNN and game theory with its applications in human-robot interaction. arXiv preprint arXiv:1912.01293
  135. Wang EK, Chen C-M, Yiu SM, Hassan MM, Alrubaian M, Fortino G (2020) Incentive evolutionary game model for opportunistic social networks. Futur Gener Comput Syst 102:14–29
  136. Wiering M, Van Otterlo M (2012) Reinforcement learning. Adapt Learn Optim 12:3
  137. Woo TH (2019) Game theory based complex analysis for nuclear security using non-zero sum algorithm. Ann Nucl Energy 125:12–17
  138. Wu TY, Lee WT, Guizani N, Wang TM (2014) Incentive mechanism for P2P file sharing based on social network and game theory. J Netw Comput Appl 41:47–55
  139. Xiao L et al (2017) A secure mobile crowdsensing game with deep reinforcement learning. IEEE Trans Inf Forensics Secur 13(1):35–47
  140. Yang LC, Chou SY, Yang YH (2017) MidiNet: a convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847
  141. Yegnanarayana B (2009) Artificial neural networks. PHI Learning Pvt. Ltd
  142. Yoon JH, Lee BK, Kim BW (2021) A study on GAN algorithm for restoration of cultural property (pagoda). J Korea Soc Comput Inf 26(1):77–84
  143. Yoshida W, Dolan RJ, Friston KJ (2008) Game theory of mind. PLoS Comput Biol 4(12):e1000254
  144. Yu L et al (2018) Deep reinforcement learning for green security game with online information. Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence
  145. Zhang L, Wang W, Li S, Pan G (2019) Monte Carlo neural fictitious self-play: approach to approximate Nash equilibrium of imperfect-information games, arXiv:1903.09569v2
  146. Zheng S, Jayasumana S, Romera-Paredes B, Vineet V, Su Z, Du D, Huang C, Torr PH (2015) Conditional random fields as recurrent neural networks. In: Proceedings of the IEEE international conference on computer vision, pp 1529–1537
  147. Zhou C, Paffenroth RC (2017) Anomaly detection with robust deep autoencoders. In: Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pp 665–674
  148. Zhou Y, Kantarcioglu M, Xi B (2019) A survey of game theoretic approach for adversarial machine learning. Wiley Interdiscip Rev Data Min Knowl Discov 9(3):e1259
  149. Zhu JJ, Bento J (2017) Generative adversarial active learning. arXiv preprint arXiv:1702.07956
  150. Zou J, Huss M, Abid A, Mohammadi P, Torkamani A, Telenti A (2019) A primer on deep learning in genomics. Nat Genet 51(1):12–18
  151. Zurada JM (1992) Introduction to artificial neural systems, vol 8. West Publishing Company, St. Paul

Author information

Authors and Affiliations

  1. Department of Computer Science and Engineering, Indian Institute of Information Technology Pune, Pune, Maharashtra, India Tanmoy Hazra
  2. Institute of Rural Management Anand (IRMA), Post Box No. 60, Anand, Gujarat, 388001, India Kushal Anjaria