% % % % % % % % % % % % % % % % % % %
% CODE FOR BEAR, KAGAN & RAND (2017) PROCB
%
% This MATLAB script runs the low mutation limit (steady state) calculations
% shown in Figures S1-S12
%
% This script calls the following functions:
%
% calculate_payoff_YDYI(strategy1,strategy2,b,c,d,p,YD,YI)
% - returns the payoff of strategy1 against strategy2 given
% the specified values of b, c, d, p, YD, and YI
%
% steady_state_calc(payoffTable,N,w,a)
% - returns the steady state distribution of the population in
% the limit of low mutation, where payoffTable is an MxM matrix
% with payoffTable(i,j) being the payoff of strategy i against
% strategy j; N is the population size; w is the selection
% strength; and a is the amount of assortment
%
% vary_YD(N,b,c,d,w,a)
% - returns and plots the average results from steady_state_calc
% when varying YD and p (with YI fixed at .5)
%
% vary_YI(N,b,c,d,w,a)
% - returns and plots the average results from steady_state_calc
% when varying YI and p (with YD fixed at 1)
% % % % % % % % % % % % % % % % % % %
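% Example call (illustrative only; these parameter values are arbitrary,
% not taken from the paper): payoff of the strategy profile
% [pR p1 pIR pI1 T] = [1 0 1 1 1] (cooperate when deliberation says
% repeated, defect when it says 1-shot, intuitively cooperate, always
% willing to pay up to d to deliberate) against itself, with b = 4, c = 1,
% d = 1, p = 0.8, YD = 1, YI = 0.5:
% calculate_payoff_YDYI([1 0 1 1 1],[1 0 1 1 1],4,1,1,.8,1,.5)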
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% STEADY STATE CALCULATION %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% parameters used in Figures S1-S12
N = 50;
w = 3;
a = 0;
bs = [4,2,8,4,4,4,4];
cs = [1,1,1,2,.5,1,1];
ds = [1,1,1,1,1,.75,2];
for i=1:length(bs)
% select the parameter set for this run
b = bs(i); c = cs(i); d = ds(i);
vary_YD(N,b,c,d,w,a);
vary_YI(N,b,c,d,w,a);
end
% Note: These functions may take a long time to run.
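% Illustrative sanity check (not part of the original analysis): the steady
% state routine can also be called directly on a hand-built payoff table,
% e.g. a one-shot prisoner's dilemma with b = 4, c = 1, where row/column 1
% is defect and row/column 2 is cooperate; with no assortment (a = 0),
% defection should dominate the resulting distribution. Uncomment to run:
% pdTable = [0 4; -1 3];
% steady_state_calc(pdTable, N, w, a)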
function payoff = calculate_payoff_YDYI(s1,s2,b,c,d,p,YD,YI)
%% SETUP %%
% s1 and s2 are agent 1's and agent 2's respective strategy profiles
% where a strategy profile consists of [pR,p1,pIR,pI1,T]
% STRATEGY PARAMETERS %
% pR is probability of cooperating when agent deliberates and realizes the
% game is repeated/coordination game
% p1 is probability of cooperating when agent deliberates and realizes the
% game is 1-shot/social dilemma
% pIR is probability of cooperating when agent uses intuition in a repeated
% game
% pI1 is probability of cooperating when agent uses intuition in a 1 shot
% game
% T is (threshold) max cost in [0,d] agent is willing to pay to deliberate
% ENVIRONMENT/GAME PARAMETERS %
% b is benefit of cooperation
% c is cost of cooperation
% d is max possible cost of deliberation (such that the cost of
% deliberation varies uniformly in the interval [0,d])
% p is probability of repeated game
% YD is the sensitivity of deliberation
% YI is the sensitivity of intuition
% OUTPUTS %
% payoff is s1's payoff
%%%%%%%%%%%%%%%%%%%%%%%
%% CALCULATE PAYOFFS %%
%%%%%%%%%%%%%%%%%%%%%%%
% Rename each agent's threshold to T1 and T2, respectively
T1 = s1(5);
T2 = s2(5);
%%%%
% Calculate repeated game payoffs
% Repeated Game Payoff Matrix
repPayoffMat=[0 0
0 (b-c)];
% Rows correspond to agent 1's strategy
% where row 1 = agent 1 defects, row 2 = agent 1 cooperates
% Columns correspond to agent 2's strategy
% where col 1 = agent 2 defects, col 2 = agent 2 cooperates
% Values correspond to agent 1's payoffs
% e.g., if agent 1 cooperates and agent 2 defects,
% agent 1 gets a payoff of 0 because repPayoffMat(2,1) = 0
% We consider repeated-game payoffs in 4 situations:
% rPayDD: both agents deliberate
% rPayDI: only agent 1 deliberates
% rPayID: only agent 2 deliberates
% rPayII: neither agent deliberates
% Given the possibility of inaccurate deliberation and the limited sensitivity of intuition, each case splits further:
% rPayDD:
% rPayDtDt: Both agents deliberate, both deliberations track game type
% rPayDtDf: Both agents deliberate, agent 1's deliberation tracks, agent
% 2's fails
% rPayDfDt: Both agents deliberate, agent 1's deliberation fails,
% agent 2's tracks
% rPayDfDf: Both agents deliberate, both deliberations fail
% rPayDI:
% rPayDtIt: agent 1 deliberates (tracks), agent 2 intuits (tracks)
% rPayDtIf: agent 1 deliberates (tracks), agent 2 intuits (fails)
% rPayDfIt: agent 1 deliberates (fails), agent 2 intuits (tracks)
% rPayDfIf: agent 1 deliberates (fails), agent 2 intuits (fails)
% rPayID:
% rPayItDt: agent 2 deliberates (tracks), agent 1 intuits (tracks)
% rPayIfDt: agent 2 deliberates (tracks), agent 1 intuits (fails)
% rPayItDf: agent 2 deliberates (fails), agent 1 intuits (tracks)
% rPayIfDf: agent 2 deliberates (fails), agent 1 intuits (fails)
% rPayII:
% rPayItIt: both intuit (both track)
% rPayItIf: both intuit (agent 1 tracks, agent 2 fails)
% rPayIfIt: both intuit (agent 1 fails, agent 2 tracks)
% rPayIfIf: both intuit (both fail)
% E.g., to calculate rPayDfIf,
% We know agent 1 deliberates but fails to realize they're in a repeated game =>
% their prob. of cooperating is p1, i.e., s1(2)
% We know agent 2 uses their intuitive response but fails to realize they're in a repeated game =>
% their prob. of cooperating is pI1, i.e., s2(4)
% Thus, rPayDfIf is a weighted average of the payoff agent 1 gets when both
% cooperate (repPayoffMat(2,2)), when only player 1 cooperates
% (repPayoffMat(2,1)), and so on
% where the prob. both cooperate is given by s1(2)*s2(4), prob. only
% player 1 cooperates is given by s1(2)*(1-s2(4)), and so on
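% Plugging in the repeated-game payoff matrix (only mutual cooperation pays
% b-c; every other cell is 0), this example reduces to
% rPayDfIf = s1(2)*s2(4)*(b-c),
% which matches the expression computed below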
rPayDtDt=s1(1)*s2(1)*repPayoffMat(2,2) + s1(1)*(1-s2(1))*repPayoffMat(2,1) + ...
(1-s1(1))*s2(1)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(1))*repPayoffMat(1,1);
rPayDtDf=s1(1)*s2(2)*repPayoffMat(2,2) + s1(1)*(1-s2(2))*repPayoffMat(2,1) + ...
(1-s1(1))*s2(2)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(2))*repPayoffMat(1,1);
rPayDfDt=s1(2)*s2(1)*repPayoffMat(2,2) + s1(2)*(1-s2(1))*repPayoffMat(2,1) + ...
(1-s1(2))*s2(1)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(1))*repPayoffMat(1,1);
rPayDfDf=s1(2)*s2(2)*repPayoffMat(2,2) + s1(2)*(1-s2(2))*repPayoffMat(2,1) + ...
(1-s1(2))*s2(2)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(2))*repPayoffMat(1,1);
rPayDtIt=s1(1)*s2(3)*repPayoffMat(2,2) + s1(1)*(1-s2(3))*repPayoffMat(2,1) + ...
(1-s1(1))*s2(3)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(3))*repPayoffMat(1,1);
rPayDtIf=s1(1)*s2(4)*repPayoffMat(2,2) + s1(1)*(1-s2(4))*repPayoffMat(2,1) + ...
(1-s1(1))*s2(4)*repPayoffMat(1,2) + (1-s1(1))*(1-s2(4))*repPayoffMat(1,1);
rPayDfIt=s1(2)*s2(3)*repPayoffMat(2,2) + s1(2)*(1-s2(3))*repPayoffMat(2,1) + ...
(1-s1(2))*s2(3)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(3))*repPayoffMat(1,1);
rPayDfIf=s1(2)*s2(4)*repPayoffMat(2,2) + s1(2)*(1-s2(4))*repPayoffMat(2,1) + ...
(1-s1(2))*s2(4)*repPayoffMat(1,2) + (1-s1(2))*(1-s2(4))*repPayoffMat(1,1);
rPayItDt=s1(3)*s2(1)*repPayoffMat(2,2) + s1(3)*(1-s2(1))*repPayoffMat(2,1) + ...
(1-s1(3))*s2(1)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(1))*repPayoffMat(1,1);
rPayIfDt=s1(4)*s2(1)*repPayoffMat(2,2) + s1(4)*(1-s2(1))*repPayoffMat(2,1) + ...
(1-s1(4))*s2(1)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(1))*repPayoffMat(1,1);
rPayItDf=s1(3)*s2(2)*repPayoffMat(2,2) + s1(3)*(1-s2(2))*repPayoffMat(2,1) + ...
(1-s1(3))*s2(2)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(2))*repPayoffMat(1,1);
rPayIfDf=s1(4)*s2(2)*repPayoffMat(2,2) + s1(4)*(1-s2(2))*repPayoffMat(2,1) + ...
(1-s1(4))*s2(2)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(2))*repPayoffMat(1,1);
rPayItIt=s1(3)*s2(3)*repPayoffMat(2,2) + s1(3)*(1-s2(3))*repPayoffMat(2,1) + ...
(1-s1(3))*s2(3)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(3))*repPayoffMat(1,1);
rPayItIf=s1(3)*s2(4)*repPayoffMat(2,2) + s1(3)*(1-s2(4))*repPayoffMat(2,1) + ...
(1-s1(3))*s2(4)*repPayoffMat(1,2) + (1-s1(3))*(1-s2(4))*repPayoffMat(1,1);
rPayIfIt=s1(4)*s2(3)*repPayoffMat(2,2) + s1(4)*(1-s2(3))*repPayoffMat(2,1) + ...
(1-s1(4))*s2(3)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(3))*repPayoffMat(1,1);
rPayIfIf=s1(4)*s2(4)*repPayoffMat(2,2) + s1(4)*(1-s2(4))*repPayoffMat(2,1) + ...
(1-s1(4))*s2(4)*repPayoffMat(1,2) + (1-s1(4))*(1-s2(4))*repPayoffMat(1,1);
%Thus:
rPayDD=YD*YD*rPayDtDt + YD*(1-YD)*rPayDtDf + (1-YD)*YD*rPayDfDt + (1-YD)*(1-YD)*rPayDfDf;
rPayDI=YD*YI*rPayDtIt + (1-YD)*YI*rPayDfIt + YD*(1-YI)*rPayDtIf +(1-YD)*(1-YI)*rPayDfIf;
rPayID=YI*YD*rPayItDt + (1-YI)*(1-YD)*rPayIfDf + (1-YI)*YD*rPayIfDt +YI*(1-YD)*rPayItDf;
rPayII=YI*YI*rPayItIt + (1-YI)*YI*rPayIfIt + YI*(1-YI)*rPayItIf + (1-YI)*(1-YI)*rPayIfIf;
%%%
% Calculate 1-shot game payoffs
% (these work the same way as the repeated game payoffs)
% 1-shot Game Payoff Matrix
onesPayoffMat=[0 b
-c (b-c)];
oPayDtDt=s1(2)*s2(2)*onesPayoffMat(2,2) + s1(2)*(1-s2(2))*onesPayoffMat(2,1) + ...
(1-s1(2))*s2(2)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(2))*onesPayoffMat(1,1);
oPayDtDf=s1(2)*s2(1)*onesPayoffMat(2,2) + s1(2)*(1-s2(1))*onesPayoffMat(2,1) + ...
(1-s1(2))*s2(1)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(1))*onesPayoffMat(1,1);
oPayDfDt=s1(1)*s2(2)*onesPayoffMat(2,2) + s1(1)*(1-s2(2))*onesPayoffMat(2,1) + ...
(1-s1(1))*s2(2)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(2))*onesPayoffMat(1,1);
oPayDfDf=s1(1)*s2(1)*onesPayoffMat(2,2) + s1(1)*(1-s2(1))*onesPayoffMat(2,1) + ...
(1-s1(1))*s2(1)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(1))*onesPayoffMat(1,1);
oPayDtIt=s1(2)*s2(4)*onesPayoffMat(2,2) + s1(2)*(1-s2(4))*onesPayoffMat(2,1) + ...
(1-s1(2))*s2(4)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(4))*onesPayoffMat(1,1);
oPayDtIf=s1(2)*s2(3)*onesPayoffMat(2,2) + s1(2)*(1-s2(3))*onesPayoffMat(2,1) + ...
(1-s1(2))*s2(3)*onesPayoffMat(1,2) + (1-s1(2))*(1-s2(3))*onesPayoffMat(1,1);
oPayDfIt=s1(1)*s2(4)*onesPayoffMat(2,2) + s1(1)*(1-s2(4))*onesPayoffMat(2,1) + ...
(1-s1(1))*s2(4)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(4))*onesPayoffMat(1,1);
oPayDfIf=s1(1)*s2(3)*onesPayoffMat(2,2) + s1(1)*(1-s2(3))*onesPayoffMat(2,1) + ...
(1-s1(1))*s2(3)*onesPayoffMat(1,2) + (1-s1(1))*(1-s2(3))*onesPayoffMat(1,1);
oPayItDt=s1(4)*s2(2)*onesPayoffMat(2,2) + s1(4)*(1-s2(2))*onesPayoffMat(2,1) + ...
(1-s1(4))*s2(2)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(2))*onesPayoffMat(1,1);
oPayIfDt=s1(3)*s2(2)*onesPayoffMat(2,2) + s1(3)*(1-s2(2))*onesPayoffMat(2,1) + ...
(1-s1(3))*s2(2)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(2))*onesPayoffMat(1,1);
oPayItDf=s1(4)*s2(1)*onesPayoffMat(2,2) + s1(4)*(1-s2(1))*onesPayoffMat(2,1) + ...
(1-s1(4))*s2(1)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(1))*onesPayoffMat(1,1);
oPayIfDf=s1(3)*s2(1)*onesPayoffMat(2,2) + s1(3)*(1-s2(1))*onesPayoffMat(2,1) + ...
(1-s1(3))*s2(1)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(1))*onesPayoffMat(1,1);
oPayItIt=s1(4)*s2(4)*onesPayoffMat(2,2) + s1(4)*(1-s2(4))*onesPayoffMat(2,1) + ...
(1-s1(4))*s2(4)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(4))*onesPayoffMat(1,1);
oPayItIf=s1(4)*s2(3)*onesPayoffMat(2,2) + s1(4)*(1-s2(3))*onesPayoffMat(2,1) + ...
(1-s1(4))*s2(3)*onesPayoffMat(1,2) + (1-s1(4))*(1-s2(3))*onesPayoffMat(1,1);
oPayIfIt=s1(3)*s2(4)*onesPayoffMat(2,2) + s1(3)*(1-s2(4))*onesPayoffMat(2,1) + ...
(1-s1(3))*s2(4)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(4))*onesPayoffMat(1,1);
oPayIfIf=s1(3)*s2(3)*onesPayoffMat(2,2) + s1(3)*(1-s2(3))*onesPayoffMat(2,1) + ...
(1-s1(3))*s2(3)*onesPayoffMat(1,2) + (1-s1(3))*(1-s2(3))*onesPayoffMat(1,1);
oPayDD=YD*YD*oPayDtDt + YD*(1-YD)*oPayDtDf + (1-YD)*YD*oPayDfDt + (1-YD)*(1-YD)*oPayDfDf;
oPayDI=YD*YI*oPayDtIt + (1-YD)*YI*oPayDfIt + YD*(1-YI)*oPayDtIf +(1-YD)*(1-YI)*oPayDfIf;
oPayID=YI*YD*oPayItDt + (1-YI)*(1-YD)*oPayIfDf + (1-YI)*YD*oPayIfDt +YI*(1-YD)*oPayItDf;
oPayII=YI*YI*oPayItIt + YI*(1-YI)*oPayItIf + (1-YI)*YI*oPayIfIt + (1-YI)*(1-YI)*oPayIfIf;
%%%
% Calculate agent 1's final payoff based on
% Prob. of repeated game (p) vs. 1-shot game (1-p)
% Prob. agents 1 and 2 deliberate (T1/d and T2/d) vs. use intuition (1-T1/d and 1-T2/d)
% Average costs agent 1 and 2 pay when deliberating (T1/2 and T2/2)
% (because the cost of deliberation varies uniformly, an agent who's
% willing to pay max cost of deliberation T pays an average cost of
% T/2 to deliberate)
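% For example (illustrative numbers): with d = 1 and T1 = 0.6, agent 1
% deliberates whenever the sampled cost falls below 0.6 (probability
% T1/d = 0.6) and, conditional on deliberating, pays an average cost of
% T1/2 = 0.3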
% We take a weighted avg. of agent 1's payoff in 4 situations:
% 1. payoff when both agents deliberate = (p*rPayDD + (1-p)*oPayDD - T1/2)
% this happens with probability (T1/d)*(T2/d)
% 2. payoff when only agent 1 deliberates = (p*rPayDI + (1-p)*oPayDI - T1/2)
% this happens with probability (T1/d)*(1 - T2/d)
% ... and so on
payoff = (T1/d)*(T2/d)*(p*rPayDD + (1-p)*oPayDD - T1/2) + ...
(T1/d)*(1 - T2/d)*(p*rPayDI + (1-p)*oPayDI - T1/2) + ...
(1 - T1/d)*(T2/d)*(p*rPayID + (1-p)*oPayID) + ...
(1 - T1/d)*(1 - T2/d)*(p*rPayII + (1-p)*oPayII);
end
function [ssDist,transitionPs]=steady_state_calc(payoffTable,N,w,a)
% Calculate the steady state distribution of the Moran process with
% exponential fitness, assuming the limit of low mutation
%
% INPUTS
% payoffTable is an MxM matrix of payoffs, where M is the number of strategies
% N is the population size
% w is the selection strength (0=neutral drift, ->Inf=deterministic)
% a is assortment (probability of interacting with a player having the same
% strategy as you, rather than a randomly selected player)
%
% OUTPUTS
% ssDist is an Mx1 vector with the steady state frequency of each strategy
% transitionPs is an MxM matrix with transition probabilities between
% homogeneous states of the population
numStrats=length(payoffTable);
transitionPs=zeros(numStrats);
% To do the steady state calc, we need to assemble a transition matrix
% between homogeneous states of the population
for s1=1:numStrats
for s2=1:numStrats
if s1~=s2
% for each pair of strategies, transitionPs(s1,s2) is the probability of
% moving from a homogeneous s2 population to a homogeneous s1 population:
% the probability an s1 mutant arises (assuming uniform mutation over the
% other numStrats-1 strategies) times the probability that mutant fixates
transitionPs(s1,s2)=(1/(numStrats-1))*calc_invasionprob(payoffTable,N,w,s1,s2,a);
end
end
end
% The diagonal entries of the transition matrix are the probability you
% stay in the given state - which is just 1 minus all the transition
% probabilities out of that state
for i=1:numStrats
transitionPs(i,i)=1-sum(transitionPs(:,i));
end
% Now that you've got the transition matrix, find the eigenvector with
% eigenvalue 1, and that's your steady state distribution!
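% (Why this works: with the convention above, each column of transitionPs
% sums to 1, i.e. the matrix is column-stochastic, so the stationary
% distribution pi satisfies transitionPs*pi = pi and is the right
% eigenvector with eigenvalue 1, which is what eig returns.)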
[vec,val]=eig(transitionPs);
[~, idxofEigWithVal1]=min(abs(diag(val)-1)); % numerical error means the relevant eigenvalue may not be exactly 1, just very close, so pick the eigenvalue closest to 1
% normalize the eigenvector corresponding to eigenvalue 1 so that it sums to 1, and you're done!
ssDist=vec(:,idxofEigWithVal1)./sum(vec(:,idxofEigWithVal1));
end
function rhoBA=calc_invasionprob(payoffTable,N,w,sA,sB,a)
% here we calculate the probability that a single sA mutant goes to
% fixation in an sB population (returned as rhoBA)
% this probability is the inverse of (1 + a sum of products), so we
% start by initializing rhoInv to 1, then add on the sum of the series of
% products, and finally take the inverse
rhoInv=1;
% loop through each possible number of steps the sA mutant would need to go through in order to take over
% (this ranges from 1, if there are already N-1 sA players,
% to N-1, if there is only the 1 initial sA mutant)
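% Equivalently (the standard Moran-process result that the nested loops
% below implement): the fixation probability of a single sA mutant is
% rho = 1 / ( 1 + sum_{k=1}^{N-1} prod_{i=1}^{k} g_i/f_i )
% where f_i and g_i are the fitnesses of an sA and an sB player,
% respectively, when the population contains i sA players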
for k=1:N-1
% same thing - initialize the eventual product of various terms as 1,
% then multiply it by stuff
thisProd=1;
% loop over the population make-ups (indexed by i = the number of sA
% players) that would need to be traversed to get from 1 sA player
% up to k sA players
for i=1:k
% for each population make-up, multiply by the ratio of the fitness
% of an sA player in the current pop to the fitness of an sB player
% in the current pop (where there are i sA players)
% fitness = exp( w * expectedpayoff)
% expectedpayoff = a * payoff against your own strategy
% + (1-a) *( (payoff against sB * fraction of other players who are sB)
% +(payoff against sA * fraction of other players who are sA)
% )
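% Worked case (assuming a = 0 and i = 1, i.e. a single sA mutant): all of
% the mutant's N-1 opponents play sB, so f_1 = exp(w*payoffTable(sA,sB)),
% while a resident sB player faces N-2 sB opponents and 1 sA opponent, so
% g_1 = exp(w*(payoffTable(sB,sB)*(N-2)/(N-1) + payoffTable(sB,sA)/(N-1)))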
fi=exp(w* (a*payoffTable(sA,sA)+(1-a)*(payoffTable(sA,sB)*(N-i)/(N-1) + payoffTable(sA,sA)*(i-1)/(N-1))));
gi=exp(w* (a*payoffTable(sB,sB)+(1-a)*(payoffTable(sB,sB)*(N-i-1)/(N-1) + payoffTable(sB,sA)*(i)/(N-1) )));
% multiply the running product by the ratio for this value of i
thisProd=thisProd*(gi/fi);
end
% now that we've finished with the product for k, add it onto the sum
rhoInv=rhoInv+thisProd;
end
% now that we've finished with the sum, take the inverse
rhoBA=1/rhoInv;
end
function [ssDists,avgStrats] = vary_YD(N,b,c,d,w,a)
% Calculates and plots the average results from steady_state_calc
% when varying YD and p (with YI fixed at .5)
% INPUTS
% N is the population size
% b is benefit of cooperation
% c is cost of cooperation
% d is max possible cost of deliberation (such that the cost of
% deliberation varies uniformly in the interval [0,d])
% w is the selection strength (0=neutral drift, ->Inf=deterministic)
% a is assortment (probability of interacting with a player having the same
% strategy as you, rather than a randomly selected player)
%
% OUTPUTS
% ssDists stores the steady-state frequencies (from steady_state_calc) of
% each strategy for each p x YD combination.
% avgStrats stores the average strategy of the steady-state population
% for each p x YD combination.
ps_steadyState = 0:.01:1;
YD_steadyState= 0.5:0.01:1;
%% Create strat space
% generate all possible combinations of the first three strategy parameters
% (pR, p1, pI), each set to 0 or 1
[x1,x2,x3] = ind2sub([2,2,2], find(zeros(2,2,2)==0));
strats = [x1 x2 x3] - 1;
% we now combine the above with the 11 possible deliberation thresholds,
% running from 0 to d in increments of d/10
Tvals = 0:d/10:d;
strats = repmat(strats,length(Tvals),1);
Tvals = sort(repmat(Tvals,1,length(strats)/length(Tvals)));
strats = [strats Tvals'];
% because we're using calculate_payoff_YDYI, we need a 4th column for the
% second intuitive response, so duplicate the single intuitive response
% into columns 3 and 4 (pIR = pI1)
strats = [strats(:,1:3) strats(:,3) strats(:,4)];
% ... to create a total of 2*2*2*11 (88) possible strategies
M = size(strats,1); % define M to be number of strats
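% Sanity check (illustrative, not in the original script): strats should now
% be an 88 x 5 matrix (8 binary response profiles x 11 thresholds).
% Uncomment to verify:
% assert(isequal(size(strats),[88 5]))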
%% Calculate steady state distribution
% initialize matrix to store steady state distributions for each p value
% and each YD value
ssDists = zeros(M,length(ps_steadyState),length(YD_steadyState));
%ssDists = zeros(M ,length(ps_steadyState));
% initialize matrix to store average strat values for each p value and each
% YD value
avgStrats = zeros(length(ps_steadyState),5,length(YD_steadyState));
%avgStrats = zeros(length(ps_steadyState),4);
for YD=YD_steadyState
YD % display YD value that's running
for p=ps_steadyState
p % display p value that's running
% initialize payoff table
payoffTable=zeros(M);
% create payoff table
for i=1:M
for j=1:M
payoffTable(i,j)=calculate_payoff_YDYI(strats(i,:),strats(j,:),b,c,d,p,YD,.5);
end
end
% run steady state using this payoff table and store result in ssDists
ssDists(:,p==ps_steadyState,YD==YD_steadyState)=steady_state_calc(payoffTable,N,w,a);
% use ssDists to get average strategy values by taking a weighted average of
% the strat values based on the distribution in ssDists
avgStrats(p==ps_steadyState,:,YD==YD_steadyState) = sum(repmat(ssDists(:,p==ps_steadyState,YD==YD_steadyState),1,5)...
.*strats);
end
end
%% Create Surface Plots
% create a surface showing avg. amount of deliberation in p x YD space
T = squeeze(avgStrats(:,5,:)); % stores this average value
figure
surf(YD_steadyState,ps_steadyState,T)
xlabel('Sensitivity of Deliberation y_d')
ylabel('Probability of Coordination p')
colormap(parula)
view(2); view(-90,90) % change the orientation of the plot...
set(gca,'ydir','reverse') % ... and reverse axis direction for clearer presentation
% create a surface showing closest intuitive profile in p x YD space
intuitiveProfile = round(squeeze(avgStrats(:,3,:))).*round(squeeze(avgStrats(:,4,:))); % codes the rounded intuitive response as 0 (defect) or 1 (cooperate)
figure
surf(YD_steadyState,ps_steadyState,intuitiveProfile)
xlabel('Sensitivity of Deliberation y_d')
ylabel('Probability of Coordination p')
colormap([.8 0 0; 1 1 0]) % we create a custom colormap where...
% red always intuitively defects and yellow always intuitively cooperates
view(2); view(-90,90) % change the orientation of the plot...
set(gca,'ydir','reverse') % ... and reverse axis direction for clearer presentation
end
function [ssDists,avgStrats] = vary_YI(N,b,c,d,w,a)
% Calculates and plots the average results from steady_state_calc
% when varying YI and p (with YD fixed at 1)
% INPUTS
% N is the population size
% b is benefit of cooperation
% c is cost of cooperation
% d is max possible cost of deliberation (such that the cost of
% deliberation varies uniformly in the interval [0,d])
% w is the selection strength (0=neutral drift, ->Inf=deterministic)
% a is assortment (probability of interacting with a player having the same
% strategy as you, rather than a randomly selected player)
%
% OUTPUTS
% ssDists stores the steady-state frequencies (from steady_state_calc) of
% each strategy for each p x YI combination.
% avgStrats stores the average strategy of the steady-state population
% for each p x YI combination.
ps_steadyState = 0:.01:1;
YI_steadyState= 0.5:0.01:1;
%% Generate Strategy Space
% For the steady state calculation, we must generate a discrete strategy
% set. To do so:
% pR, p1, pIR and pI1 are all restricted to being either 0 or 1;
% we discretize the deliberation threshold (T) in intervals of d/10
% generate all possible combinations of the four response parameters
% (pR, p1, pIR, pI1), each set to 0 or 1
[x1,x2,x3,x4] = ind2sub([2,2,2,2], find(zeros(2,2,2,2)==0));
strats = [x1 x2 x3 x4] - 1;
% we now combine the above with the 11 possible deliberation thresholds,
% running from 0 to d in increments of d/10
Tvals = 0:d/10:d;
strats = repmat(strats,length(Tvals),1);
Tvals = sort(repmat(Tvals,1,length(strats)/length(Tvals)));
strats = [strats Tvals'];
% ... to create a total of 2*2*2*2*11 (176) possible strategies
M = size(strats,1); % define M to be number of strats
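% Sanity check (illustrative, not in the original script): strats should now
% be a 176 x 5 matrix (16 binary response profiles x 11 thresholds).
% Uncomment to verify:
% assert(isequal(size(strats),[176 5]))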
%% Main Steady State Loop
% initialize matrix to store steady state distributions for each p value
% and each YI value
ssDists = zeros(M,length(ps_steadyState),length(YI_steadyState));
% initialize matrix to store average strat values for each p value and each YI value
avgStrats = zeros(length(ps_steadyState),5,length(YI_steadyState));
for YI=YI_steadyState
YI % display YI value that's running
for p=ps_steadyState
p % display p value that's running
% initialize payoff table
payoffTable=zeros(M);
% create payoff table
for i=1:M
for j=1:M
payoffTable(i,j)=calculate_payoff_YDYI(strats(i,:),strats(j,:),b,c,d,p,1,YI);
end
end
% run steady state using this payoff table and store result in ssDists
ssDists(:,p==ps_steadyState,YI==YI_steadyState)=steady_state_calc(payoffTable,N,w,a);
% use ssDists to get average strategy values by taking a weighted average of
% the strat values based on the distribution in ssDists
avgStrats(p==ps_steadyState,:,YI==YI_steadyState) = sum(repmat(ssDists(:,p==ps_steadyState,YI==YI_steadyState),1,5).*strats);
end
end
%% Create Surface Plots
% create a surface showing avg. amount of deliberation in p x YI space
T = squeeze(avgStrats(:,5,:)); % stores this average value
figure
surf(YI_steadyState,ps_steadyState,T)
xlabel('Sensitivity of Intuition y_i')
ylabel('Probability of Coordination p')
colormap(parula)
view(2); view(-90,90) % change the orientation of the plot...
set(gca,'ydir','reverse') % ... and reverse axis direction for clearer presentation
% create a surface showing closest intuitive profile in p x YI space
intuitiveProfile = round(squeeze(avgStrats(:,3,:))).*round(squeeze(avgStrats(:,4,:))) ...
+ 2*(round(squeeze(avgStrats(:,3,:))) - round(squeeze(avgStrats(:,4,:)))); % codes the rounded intuitive response as 0 (always defect), 1 (always cooperate), or 2 (discriminating)
figure
surf(YI_steadyState,ps_steadyState,intuitiveProfile)
xlabel('Sensitivity of Intuition y_i')
ylabel('Probability of Coordination p')
colormap([.8 0 0; 1 1 0; 0 .8 0]) % we create a custom colormap where...
% red always intuitively defects, yellow always intuitively cooperates, and
% green discriminates its response (defect in SD; cooperate in CG)
view(2); view(-90,90) % change the orientation of the plot...
set(gca,'ydir','reverse') % ... and reverse axis direction for clearer presentation
end