@daanklijn
Created March 12, 2021 14:22
Multi-agent Reinforcement Learning flowchart using LaTeX and TikZ
\begin{tikzpicture}[node distance = 6em, auto, thick]
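% Agent boxes stacked vertically; the dots stand in for agents 3 through n-1.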
\node [block] (Agent1) {Agent$_1$};
\node [block, below of=Agent1] (Agent2) {Agent$_2$};
\node [below of=Agent2] (Dots) {$\vdots$};
\node [block, below of=Dots] (Agent3) {Agent$_n$};
\node [block, below of=Agent3] (Environment) {Environment};
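% Right-hand side: each agent i sends its action $a_{i,t}$ to the environment.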
\path [line] (Agent1.0) --++ (10em,0em) |- node [near start]{$a_{1,t}$} (Environment.-15);
\path [line] (Agent2.0) --++ (6em,0em) |- node [near start]{$a_{2,t}$} (Environment.0);
\path [line] (Agent3.0) --++ (2em,0em) |- node [near start]{$a_{n,t}$} (Environment.15);
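% Left-hand side: the environment returns the next state $s_{i,t+1}$ and reward $r_{i,t+1}$ to each agent.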
\path [line] (Environment.195) --++ (-18em,0em) |- node [near start] {$s_{1,t+1}, r_{1,t+1}$} (Agent1.180);
\path [line] (Environment.180) --++ (-10em,0em) |- node [near start] {$s_{2,t+1}, r_{2,t+1}$} (Agent2.180);
\path [line] (Environment.165) --++ (-2em,0em) |- node [near start] {$s_{n,t+1}, r_{n,t+1}$} (Agent3.180);
\end{tikzpicture}
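The snippet references "block" and "line" styles that are not defined in the gist itself, so it will not compile on its own. A minimal standalone wrapper along the following lines should make it compile; the rectangle dimensions, arrow tip, and choice of the standalone class are assumptions, not the author's original preamble.

\documentclass[tikz, border=5pt]{standalone}
\usetikzlibrary{arrows.meta}
% Assumed style definitions: plain rectangles for the agent/environment boxes
% and arrow-tipped paths for the edges.
\tikzset{
  block/.style = {rectangle, draw, align=center, minimum height=2.5em, minimum width=6em},
  line/.style  = {draw, -Latex}
}
\begin{document}
% Paste the tikzpicture environment from above here.
\end{document}

A document that already loads TikZ can instead add just the \usetikzlibrary and \tikzset lines to its preamble.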
@daanklijn (Author): [image]