"Attention is all you need" they wrote in 2017, and introduced a transformer architecture implemented in LLMs. The "The Parallelism Tradeoff: Limitations of Log-Precision Transformers" article in 2023 described transformers' constraints.
In theory, transformers are Turing-complete, meaning they can solve any computational task. But that result rests on two major assumptions: infinite precision and arbitrarily large models. In practice, computer memory is finite and the numeric precision of chips is fixed. Long story short, there are tasks that transformer LLMs cannot solve.
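To get a feel for how quickly finite precision runs out, here is a minimal Python sketch (my own illustration, not from the paper) comparing 16-bit and 64-bit floating point:

```python
import numpy as np

# In float16 the spacing between representable numbers near 2048 is 2,
# so adding 1 is rounded away entirely.
big = np.float16(2048.0)
small = np.float16(1.0)
print(big + small == big)  # True: the update vanishes in 16-bit precision

# The same addition in 64-bit precision keeps the information.
print(np.float64(2048.0) + np.float64(1.0) == np.float64(2048.0))  # False
```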
In programming we are used to binary logic: AND, OR, NOT, and other operations called "gates". They underpin constructs like IF statements and FOR loops, which branch execution and define the logic of applications; a small sketch follows below.
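As a rough illustration (the function names are my own, purely for demonstration), here is how those gates map onto ordinary control flow in Python:

```python
# Basic logic gates expressed as plain Python functions.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    return a or b

def not_gate(a: bool) -> bool:
    return not a

# The same gates drive branching (IF) and iteration (FOR).
user_logged_in = True
is_admin = False

if and_gate(user_logged_in, not_gate(is_admin)):  # branch on a gate expression
    print("show the regular dashboard")

for attempt in range(3):                          # loop whose body branches on OR
    if or_gate(attempt == 0, attempt == 2):
        print(f"attempt {attempt}: retrying")
```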