
Created March 27, 2017 21:57
Code for Bear, Kagan, & Rand (2017)
% % % % % % % % % % % % % % % % % % % %
% CODE FOR BEAR, KAGAN & RAND (2017) PROCB
%
% This MATLAB script runs the low mutation limit (steady state) calculations
% shown in Figures S1-S12
%
% This script calls the following functions:
%
% calculate_payoff_YDYI(strategy1,strategy2,b,c,d,p,YD,YI)
%   - returns the payoff of strategy1 against strategy2 given
%     the specified values of b, c, d, p, YD, and YI
%
% steady_state_calc(payoffTable,N,w,a)
%   - returns the steady state distribution of the population in
%     the limit of low mutation, where payoffTable is an MxM matrix
%     with payoffTable(i,j) being the payoff of strategy i against
%     strategy j; N is the population size; w is the selection
%     strength; and a is the amount of assortment
%
% vary_YD(N,b,c,d,w,a)
%   - returns and plots the average results from steady_state_calc
%     when varying YD and p (with YI fixed at .5)
%
% vary_YI(N,b,c,d,w,a)
%   - returns and plots the average results from steady_state_calc
%     when varying YI and p (with YD fixed at 1)
% % % % % % % % % % % % % % % % % % % %

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% STEADY STATE CALCULATION %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% parameters used in Figures S1-S12
N = 50;
w = 3;
a = 0;
bs = [4,2,8,4,4,4,4];
cs = [1,1,1,2,.5,1,1];
ds = [1,1,1,1,1,.75,2];

for i=1:length(bs)
    % index into the parameter vectors (the original passed undefined
    % b, c, and d here)
    vary_YD(N,bs(i),cs(i),ds(i),w,a);
    vary_YI(N,bs(i),cs(i),ds(i),w,a);
end
% Note: These functions may take a long time to run.
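steady_state_calc (defined below) finds the stationary distribution as the eigenvector of the transition matrix with eigenvalue 1. As a quick, language-neutral sanity check of that step, here is a minimal Python/NumPy sketch on a toy 3-state column-stochastic matrix (the matrix entries are made up for illustration; the variable names are mine, not from the gist):

```python
import numpy as np

# Toy column-stochastic transition matrix between 3 homogeneous states:
# entry (j, i) is the probability of moving from state i to state j,
# matching the convention transitionPs(:,i) in the MATLAB code below.
T = np.array([[0.90, 0.05, 0.10],
              [0.05, 0.90, 0.10],
              [0.05, 0.05, 0.80]])

vals, vecs = np.linalg.eig(T)
idx = np.argmin(np.abs(vals - 1))   # numerically, take the eigenvalue closest to 1
ss = np.real(vecs[:, idx])
ss = ss / ss.sum()                  # normalize so the distribution sums to 1

# Cross-check: iterating the chain from any start state converges to ss
limit = np.linalg.matrix_power(T, 1000) @ np.array([1.0, 0.0, 0.0])
assert np.allclose(ss, limit, atol=1e-8)
```

Note that because the MATLAB matrix is column-stochastic (its diagonal is set so each column sums to 1), it is the *right* eigenvector for eigenvalue 1 that gives the stationary distribution, which is what `eig` returns in both languages.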
function payoff = calculate_payoff_YDYI(s1,s2,b,c,d,p,YD,YI)
%% SETUP %%
% s1 and s2 are agent 1's and agent 2's respective strategy profiles,
% where a strategy profile consists of [pR,p1,pIR,pI1,T]

% STRATEGY PARAMETERS %
% pR is probability of cooperating when agent deliberates and realizes the
%   game is repeated/coordination game
% p1 is probability of cooperating when agent deliberates and realizes the
%   game is 1-shot/social dilemma
% pIR is probability of cooperating when agent uses intuition in a repeated
%   game
% pI1 is probability of cooperating when agent uses intuition in a 1-shot
%   game
% T is (threshold) max cost in [0,d] agent is willing to pay to deliberate

% ENVIRONMENT/GAME PARAMETERS %
% b is benefit of cooperation
% c is cost of cooperation
% d is max possible cost of deliberation (such that the cost of
%   deliberation varies uniformly in the interval [0,d])
% p is probability of repeated game
% YD is the sensitivity of deliberation
% YI is the sensitivity of intuition

% OUTPUTS %
% payoff is s1's payoff

%%%%%%%%%%%%%%%%%%%%%%%
%% CALCULATE PAYOFFS %%
%%%%%%%%%%%%%%%%%%%%%%%

% Rename each agent's threshold to T1 and T2, respectively
T1 = s1(5);
T2 = s2(5);

%%%%
% Calculate repeated game payoffs

% Repeated Game Payoff Matrix
repPayoffMat=[0 0; 0 (b-c)];
% Rows correspond to agent 1's strategy,
%   where row 1 = agent 1 defects, row 2 = agent 1 cooperates
% Columns correspond to agent 2's strategy,
%   where col 1 = agent 2 defects, col 2 = agent 2 cooperates
% Values correspond to agent 1's payoffs
%   e.g., if agent 1 cooperates and agent 2 defects,
%   agent 1 gets a payoff of 0 because repPayoffMat(2,1) = 0

% We consider repeated-game payoffs in 4 situations:
% rPayDD: both agents deliberate
% rPayDI: only agent 1 deliberates
% rPayID: only agent 2 deliberates
% rPayII: neither agent deliberates

% Given the possibility of inaccurate deliberation and the sensitivity of
% intuition, each situation breaks into 4 sub-cases:
% rPayDD:
%   rPayDtDt: both agents deliberate, both deliberations track game type
%   rPayDtDf: both agents deliberate, agent 1's deliberation tracks, agent 2's fails
%   rPayDfDt: both agents deliberate, agent 1's deliberation fails, agent 2's tracks
%   rPayDfDf: both agents deliberate, both deliberations fail
% rPayDI:
%   rPayDtIt: agent 1 deliberates (tracks), agent 2 intuits (tracks)
%   rPayDtIf: agent 1 deliberates (tracks), agent 2 intuits (fails)
%   rPayDfIt: agent 1 deliberates (fails), agent 2 intuits (tracks)
%   rPayDfIf: agent 1 deliberates (fails), agent 2 intuits (fails)
% rPayID:
%   rPayItDt: agent 2 deliberates (tracks), agent 1 intuits (tracks)
%   rPayIfDt: agent 2 deliberates (tracks), agent 1 intuits (fails)
%   rPayItDf: agent 2 deliberates (fails), agent 1 intuits (tracks)
%   rPayIfDf: agent 2 deliberates (fails), agent 1 intuits (fails)
% rPayII:
%   rPayItIt: both intuit (both track)
%   rPayItIf: both intuit (agent 1 tracks, agent 2 fails)
%   rPayIfIt: both intuit (agent 1 fails, agent 2 tracks)
%   rPayIfIf: both intuit (both fail)

% E.g., to calculate rPayDfIf:
% Agent 1 deliberates but fails to realize they're in a repeated game =>
%   their prob. of cooperating is p1, i.e., s1(2)
% Agent 2 uses their intuitive response but fails to realize they're in a
%   repeated game => their prob. of cooperating is pI1, i.e., s2(4)
% Thus, rPayDfIf is a weighted average of the payoff agent 1 gets when both
% cooperate (repPayoffMat(2,2)), when only player 1 cooperates
% (repPayoffMat(2,1)), and so on,
% where the prob. both cooperate is given by s1(2)*s2(4), the prob. only
% player 1 cooperates is given by s1(2)*(1-s2(4)), and so on

rPayDtDt=s1(1)*s2(1)*repPayoffMat(2,2) + s1(1)*(1-s2(1))*repPayoffMat(2,1) + ...
    (1-s1(1))*s2(1)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(1))*repPayoffMat(1,1);
rPayDtDf=s1(1)*s2(2)*repPayoffMat(2,2) + s1(1)*(1-s2(2))*repPayoffMat(2,1) + ...
    (1-s1(1))*s2(2)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(2))*repPayoffMat(1,1);
rPayDfDt=s1(2)*s2(1)*repPayoffMat(2,2) + s1(2)*(1-s2(1))*repPayoffMat(2,1) + ...
    (1-s1(2))*s2(1)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(1))*repPayoffMat(1,1);
rPayDfDf=s1(2)*s2(2)*repPayoffMat(2,2) + s1(2)*(1-s2(2))*repPayoffMat(2,1) + ...
    (1-s1(2))*s2(2)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(2))*repPayoffMat(1,1);
rPayDtIt=s1(1)*s2(3)*repPayoffMat(2,2) + s1(1)*(1-s2(3))*repPayoffMat(2,1) + ...
    (1-s1(1))*s2(3)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(3))*repPayoffMat(1,1);
rPayDtIf=s1(1)*s2(4)*repPayoffMat(2,2) + s1(1)*(1-s2(4))*repPayoffMat(2,1) + ...
    (1-s1(1))*s2(4)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(4))*repPayoffMat(1,1);
rPayDfIt=s1(2)*s2(3)*repPayoffMat(2,2) + s1(2)*(1-s2(3))*repPayoffMat(2,1) + ...
    (1-s1(2))*s2(3)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(3))*repPayoffMat(1,1);
rPayDfIf=s1(2)*s2(4)*repPayoffMat(2,2) + s1(2)*(1-s2(4))*repPayoffMat(2,1) + ...
    (1-s1(2))*s2(4)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(4))*repPayoffMat(1,1);
rPayItDt=s1(3)*s2(1)*repPayoffMat(2,2) + s1(3)*(1-s2(1))*repPayoffMat(2,1) + ...
    (1-s1(3))*s2(1)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(1))*repPayoffMat(1,1);
rPayIfDt=s1(4)*s2(1)*repPayoffMat(2,2) + s1(4)*(1-s2(1))*repPayoffMat(2,1) + ...
    (1-s1(4))*s2(1)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(1))*repPayoffMat(1,1);
rPayItDf=s1(3)*s2(2)*repPayoffMat(2,2) + s1(3)*(1-s2(2))*repPayoffMat(2,1) + ...
    (1-s1(3))*s2(2)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(2))*repPayoffMat(1,1);
rPayIfDf=s1(4)*s2(2)*repPayoffMat(2,2) + s1(4)*(1-s2(2))*repPayoffMat(2,1) + ...
    (1-s1(4))*s2(2)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(2))*repPayoffMat(1,1);
rPayItIt=s1(3)*s2(3)*repPayoffMat(2,2) + s1(3)*(1-s2(3))*repPayoffMat(2,1) + ...
    (1-s1(3))*s2(3)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(3))*repPayoffMat(1,1);
rPayItIf=s1(3)*s2(4)*repPayoffMat(2,2) + s1(3)*(1-s2(4))*repPayoffMat(2,1) + ...
    (1-s1(3))*s2(4)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(4))*repPayoffMat(1,1);
rPayIfIt=s1(4)*s2(3)*repPayoffMat(2,2) + s1(4)*(1-s2(3))*repPayoffMat(2,1) + ...
    (1-s1(4))*s2(3)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(3))*repPayoffMat(1,1);
rPayIfIf=s1(4)*s2(4)*repPayoffMat(2,2) + s1(4)*(1-s2(4))*repPayoffMat(2,1) + ...
    (1-s1(4))*s2(4)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(4))*repPayoffMat(1,1);

% Thus:
rPayDD=YD*YD*rPayDtDt + YD*(1-YD)*rPayDtDf + (1-YD)*YD*rPayDfDt + (1-YD)*(1-YD)*rPayDfDf;
rPayDI=YD*YI*rPayDtIt + (1-YD)*YI*rPayDfIt + YD*(1-YI)*rPayDtIf + (1-YD)*(1-YI)*rPayDfIf;
rPayID=YI*YD*rPayItDt + (1-YI)*(1-YD)*rPayIfDf + (1-YI)*YD*rPayIfDt + YI*(1-YD)*rPayItDf;
rPayII=YI*YI*rPayItIt + (1-YI)*YI*rPayIfIt + YI*(1-YI)*rPayItIf + (1-YI)*(1-YI)*rPayIfIf;

%%%
% Calculate 1-shot game payoffs
% (these work the same way as the repeated game payoffs, except that
% deliberation/intuition that "tracks" now realizes the game is 1-shot)

% 1-shot Game Payoff Matrix
onesPayoffMat=[0 b; -c (b-c)];

oPayDtDt=s1(2)*s2(2)*onesPayoffMat(2,2) + s1(2)*(1-s2(2))*onesPayoffMat(2,1) + ...
    (1-s1(2))*s2(2)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(2))*onesPayoffMat(1,1);
oPayDtDf=s1(2)*s2(1)*onesPayoffMat(2,2) + s1(2)*(1-s2(1))*onesPayoffMat(2,1) + ...
    (1-s1(2))*s2(1)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(1))*onesPayoffMat(1,1);
oPayDfDt=s1(1)*s2(2)*onesPayoffMat(2,2) + s1(1)*(1-s2(2))*onesPayoffMat(2,1) + ...
    (1-s1(1))*s2(2)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(2))*onesPayoffMat(1,1);
oPayDfDf=s1(1)*s2(1)*onesPayoffMat(2,2) + s1(1)*(1-s2(1))*onesPayoffMat(2,1) + ...
    (1-s1(1))*s2(1)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(1))*onesPayoffMat(1,1);
oPayDtIt=s1(2)*s2(4)*onesPayoffMat(2,2) + s1(2)*(1-s2(4))*onesPayoffMat(2,1) + ...
    (1-s1(2))*s2(4)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(4))*onesPayoffMat(1,1);
oPayDtIf=s1(2)*s2(3)*onesPayoffMat(2,2) + s1(2)*(1-s2(3))*onesPayoffMat(2,1) + ...
    (1-s1(2))*s2(3)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(3))*onesPayoffMat(1,1);
oPayDfIt=s1(1)*s2(4)*onesPayoffMat(2,2) + s1(1)*(1-s2(4))*onesPayoffMat(2,1) + ...
    (1-s1(1))*s2(4)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(4))*onesPayoffMat(1,1);
oPayDfIf=s1(1)*s2(3)*onesPayoffMat(2,2) + s1(1)*(1-s2(3))*onesPayoffMat(2,1) + ...
    (1-s1(1))*s2(3)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(3))*onesPayoffMat(1,1);
oPayItDt=s1(4)*s2(2)*onesPayoffMat(2,2) + s1(4)*(1-s2(2))*onesPayoffMat(2,1) + ...
    (1-s1(4))*s2(2)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(2))*onesPayoffMat(1,1);
oPayIfDt=s1(3)*s2(2)*onesPayoffMat(2,2) + s1(3)*(1-s2(2))*onesPayoffMat(2,1) + ...
    (1-s1(3))*s2(2)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(2))*onesPayoffMat(1,1);
oPayItDf=s1(4)*s2(1)*onesPayoffMat(2,2) + s1(4)*(1-s2(1))*onesPayoffMat(2,1) + ...
    (1-s1(4))*s2(1)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(1))*onesPayoffMat(1,1);
oPayIfDf=s1(3)*s2(1)*onesPayoffMat(2,2) + s1(3)*(1-s2(1))*onesPayoffMat(2,1) + ...
    (1-s1(3))*s2(1)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(1))*onesPayoffMat(1,1);
oPayItIt=s1(4)*s2(4)*onesPayoffMat(2,2) + s1(4)*(1-s2(4))*onesPayoffMat(2,1) + ...
    (1-s1(4))*s2(4)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(4))*onesPayoffMat(1,1);
oPayItIf=s1(4)*s2(3)*onesPayoffMat(2,2) + s1(4)*(1-s2(3))*onesPayoffMat(2,1) + ...
    (1-s1(4))*s2(3)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(3))*onesPayoffMat(1,1);
oPayIfIt=s1(3)*s2(4)*onesPayoffMat(2,2) + s1(3)*(1-s2(4))*onesPayoffMat(2,1) + ...
    (1-s1(3))*s2(4)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(4))*onesPayoffMat(1,1);
oPayIfIf=s1(3)*s2(3)*onesPayoffMat(2,2) + s1(3)*(1-s2(3))*onesPayoffMat(2,1) + ...
    (1-s1(3))*s2(3)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(3))*onesPayoffMat(1,1);

oPayDD=YD*YD*oPayDtDt + YD*(1-YD)*oPayDtDf + (1-YD)*YD*oPayDfDt + (1-YD)*(1-YD)*oPayDfDf;
oPayDI=YD*YI*oPayDtIt + (1-YD)*YI*oPayDfIt + YD*(1-YI)*oPayDtIf + (1-YD)*(1-YI)*oPayDfIf;
oPayID=YI*YD*oPayItDt + (1-YI)*(1-YD)*oPayIfDf + (1-YI)*YD*oPayIfDt + YI*(1-YD)*oPayItDf;
oPayII=YI*YI*oPayItIt + YI*(1-YI)*oPayItIf + (1-YI)*YI*oPayIfIt + (1-YI)*(1-YI)*oPayIfIf;

%%%
% Calculate agent 1's final payoff based on:
% - Prob. of repeated game (p) vs. 1-shot game (1-p)
% - Prob. agents 1 and 2 deliberate (T1/d and T2/d) vs.
%   use intuition (1-T1/d and 1-T2/d)
% - Average costs agents 1 and 2 pay when deliberating (T1/2 and T2/2)
%   (because the cost of deliberation varies uniformly, an agent who's
%   willing to pay max cost of deliberation T pays an average cost of
%   T/2 to deliberate)
% We take a weighted avg. of agent 1's payoff in 4 situations:
% 1. payoff when both agents deliberate = (p*rPayDD + (1-p)*oPayDD - T1/2)
%    this happens with probability (T1/d)*(T2/d)
% 2. payoff when only agent 1 deliberates = (p*rPayDI + (1-p)*oPayDI - T1/2)
%    this happens with probability (T1/d)*(1 - T2/d)
% ... and so on

payoff = (T1/d)*(T2/d)*(p*rPayDD + (1-p)*oPayDD - T1/2) + ...
    (T1/d)*(1 - T2/d)*(p*rPayDI + (1-p)*oPayDI - T1/2) + ...
    (1 - T1/d)*(T2/d)*(p*rPayID + (1-p)*oPayID) + ...
    (1 - T1/d)*(1 - T2/d)*(p*rPayII + (1-p)*oPayII);
end
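The payoff formula above is easiest to sanity-check in corner cases. Below is a compact Python re-implementation of the same weighted average (a sketch for checking, not the authors' code; the function name `payoff_ydyi` and helper names are mine). For instance, two never-deliberating agents (T=0) with perfectly sensitive intuition (YI=1) who intuitively cooperate only in repeated games (pIR=1, pI1=0) should earn exactly p*(b-c):

```python
import numpy as np

def payoff_ydyi(s1, s2, b, c, d, p, YD, YI):
    # s = (pR, p1, pIR, pI1, T), as in the MATLAB strategy profile
    pR1, p11, pIR1, pI11, T1 = s1
    pR2, p12, pIR2, pI12, T2 = s2
    rep = np.array([[0.0, 0.0], [0.0, b - c]])  # repeated-game payoffs to agent 1
    one = np.array([[0.0, b], [-c, b - c]])     # 1-shot payoffs to agent 1

    def avg(q1, q2, M):
        # expected payoff to agent 1 when it cooperates w.p. q1
        # and the opponent cooperates w.p. q2
        return (q1*q2*M[1, 1] + q1*(1 - q2)*M[1, 0]
                + (1 - q1)*q2*M[0, 1] + (1 - q1)*(1 - q2)*M[0, 0])

    def mixed(M, t1, f1, Y1, t2, f2, Y2):
        # average over whether each agent's cognitive process tracks the
        # true game type (prob Y, coop prob t) or fails (prob 1-Y, coop prob f)
        return (Y1*Y2*avg(t1, t2, M) + Y1*(1 - Y2)*avg(t1, f2, M)
                + (1 - Y1)*Y2*avg(f1, t2, M) + (1 - Y1)*(1 - Y2)*avg(f1, f2, M))

    # repeated game: deliberation that tracks uses pR, that fails uses p1;
    # intuition that tracks uses pIR, that fails uses pI1 (swapped for 1-shot)
    rDD = mixed(rep, pR1, p11, YD, pR2, p12, YD)
    rDI = mixed(rep, pR1, p11, YD, pIR2, pI12, YI)
    rID = mixed(rep, pIR1, pI11, YI, pR2, p12, YD)
    rII = mixed(rep, pIR1, pI11, YI, pIR2, pI12, YI)
    oDD = mixed(one, p11, pR1, YD, p12, pR2, YD)
    oDI = mixed(one, p11, pR1, YD, pI12, pIR2, YI)
    oID = mixed(one, pI11, pIR1, YI, p12, pR2, YD)
    oII = mixed(one, pI11, pIR1, YI, pI12, pIR2, YI)

    # weight by who deliberates (prob T/d each) and subtract agent 1's
    # average deliberation cost T1/2 whenever agent 1 deliberates
    return ((T1/d)*(T2/d)*(p*rDD + (1-p)*oDD - T1/2)
            + (T1/d)*(1 - T2/d)*(p*rDI + (1-p)*oDI - T1/2)
            + (1 - T1/d)*(T2/d)*(p*rID + (1-p)*oID)
            + (1 - T1/d)*(1 - T2/d)*(p*rII + (1-p)*oII))
```

With b=4, c=1, d=1, p=0.7 this gives 0.7*3 = 2.1 for the intuitive strategy above; forcing both agents to always deliberate accurately (T=d, YD=1) subtracts the average cost T1/2 and yields 1.6.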
function [ssDist,transitionPs]=steady_state_calc(payoffTable,N,w,a)
% Calculate the steady state distribution of the Moran process with
% exponential fitness, assuming the limit of low mutation
%
% INPUTS
% payoffTable is an MxM matrix of payoffs, where M is the number of strategies
% N is the population size
% w is the selection strength (0=neutral drift, ->Inf=deterministic)
% a is assortment (probability of interacting with a player having the same
%   strategy as you, rather than a randomly selected player)
%
% OUTPUTS
% ssDist is an Mx1 vector with the steady state frequency of each strategy
% transitionPs is an MxM matrix with transition probabilities between
%   homogeneous states of the population

numStrats=length(payoffTable);
transitionPs=zeros(numStrats);

% To do the steady state calc, we need to assemble a transition matrix
% between homogeneous states of the population
for s1=1:numStrats
    for s2=1:numStrats
        if s1~=s2
            % for each combination of strategies s1 and s2, the transition
            % prob of moving from an s2 population to an s1 population is
            % the product of the probability an s1 mutant arises (assuming
            % uniform mutation) and the probability that mutant goes to
            % fixation
            transitionPs(s1,s2)=(1/(numStrats-1))*calc_invasionprob(payoffTable,N,w,s1,s2,a);
        end
    end
end

% The diagonal entries of the transition matrix are the probability you
% stay in the given state - which is just 1 minus all the transition
% probabilities out of that state
for i=1:numStrats
    transitionPs(i,i)=1-sum(transitionPs(:,i));
end

% Now that you've got the transition matrix, find the eigenvector with
% eigenvalue 1, and that's your steady state distribution!
[vec,val]=eig(transitionPs);
% Numerically, the eigenvalue may not be literally 1, just very close,
% so take the eigenvalue closest to 1
[~, idxofEigWithVal1]=min(abs(diag(val)-1));
% normalize the eigenvector corresponding to eigenvalue 1 so that it sums
% to 1, and you're done!
ssDist=vec(:,idxofEigWithVal1)./sum(vec(:,idxofEigWithVal1));
end

function rhoBA=calc_invasionprob(payoffTable,N,w,sA,sB,a)
% here we calculate rhoBA, the probability that an sA mutant goes to
% fixation in an sB population
% this probability is the inverse of (1 + a sum of products), so we
% start by initializing rhoInv to 1, then add on the sum of the series of
% products, and then take the inverse
rhoInv=1;
% loop through each possible number of steps the sA mutant would need to go
% through in order to take over (this ranges from 1, if there are already
% N-1 sA players, to N-1, if there is only the 1 initial sA mutant)
for k=1:N-1
    % same thing - initialize the eventual product of various terms as 1,
    % then multiply it by stuff
    thisProd=1;
    % loop through each possible make-up of the population (indexed by i=
    % the number of sA players) that would need to be traversed to get from
    % there being only 1 sA player to there being k sA players
    for i=1:k
        % for each population make-up, multiply by the ratio of the fitness
        % of an sA player in the current pop to the fitness of an sB player
        % in the current pop (where there are i sA players)
        % fitness = exp( w * expectedpayoff )
        % expectedpayoff = a * payoff against your own strategy
        %   + (1-a) * ( (payoff against sB * fraction of other players who are sB)
        %             + (payoff against sA * fraction of other players who are sA) )
        fi=exp(w*(a*payoffTable(sA,sA)+(1-a)*(payoffTable(sA,sB)*(N-i)/(N-1) + payoffTable(sA,sA)*(i-1)/(N-1))));
        gi=exp(w*(a*payoffTable(sB,sB)+(1-a)*(payoffTable(sB,sB)*(N-i-1)/(N-1) + payoffTable(sB,sA)*(i)/(N-1))));
        % multiply the current state of the product by the ratio for this value of i
        thisProd=thisProd*(gi/fi);
    end
    % now that we've finished with the product for k, add it onto the sum
    rhoInv=rhoInv+thisProd;
end
% now that we've finished with the sum, take the inverse
rhoBA=1/rhoInv;
end