This course focused on using existing LLMs to build, test, and monitor applications. It introduced various ways to interact with LLMs through the chat endpoint, covered retrieval-augmented generation (RAG) and other common LLM usage patterns, and featured a number of guest speakers from various companies discussing their products and how to use them alongside LLMs to build great applications.
Evaluation was a key driver for taking this course in the first place. We covered the following ways to evaluate LLM outputs: