LLMs with Your Data in Your Applications
To make AI and large language models (LLMs) genuinely useful, you need to bring in your own data. How do you go about that? There has been a lot of focus on making applications intelligent with the help of LLMs. Here, we will cover how to add intelligence to existing applications, with an overview of techniques ranging from prompt engineering, to using vector stores to improve results, to building intelligent agents that figure out how to solve a problem.
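To make the vector-store idea concrete, here is a minimal, framework-agnostic sketch of retrieval-augmented prompting: documents are embedded, the ones most similar to a question are retrieved, and they are placed into the prompt that goes to the model. The toy `embed` function, the sample documents, and the helper names are illustrative placeholders, not the API of any particular framework.

```python
# Minimal retrieval-augmented prompting sketch (illustrative only).
# `embed` is a toy placeholder; swap in a real embedding model in practice.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so the example runs without dependencies.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "Invoices are processed within 30 days of receipt.",
    "Support tickets are triaged by severity every morning.",
    "Refunds require approval from a regional manager.",
]

# "Vector store": pre-computed embeddings stored alongside the raw text.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the question and keep the top k.
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(question: str) -> str:
    # Stuff the retrieved context into the prompt sent to the LLM.
    context = "\n".join(retrieve(question))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("Who has to approve a refund?"))
```

The resulting prompt would then be sent to your LLM client of choice; frameworks like LangChain and LlamaIndex wrap these same steps behind their own abstractions.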
You will learn how to apply these techniques using open-source frameworks such as LangChain and LlamaIndex to build LLM-powered applications.
Beyond the techniques themselves, there is the question of optimization: what are some practical approaches to deploying these kinds of applications to production, understanding whether they work, and improving them over time?
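One lightweight way to understand whether an application works, and to keep improving it, is an offline evaluation harness: run a fixed set of questions with known expectations through the pipeline and track a score per release. The sketch below assumes a hypothetical answer function wrapping your deployed pipeline; the keyword-overlap check is only a stand-in for a real metric or grader.

```python
# Tiny offline evaluation harness (illustrative sketch).
from typing import Callable

# Hand-written test cases with keywords the answer should contain.
eval_set = [
    {"question": "Who approves refunds?", "expected_keywords": ["regional", "manager"]},
    {"question": "How long does invoice processing take?", "expected_keywords": ["30", "days"]},
]

def run_eval(answer_question: Callable[[str], str]) -> float:
    # Returns the fraction of test cases whose answer mentions all expected keywords.
    hits = 0
    for case in eval_set:
        answer = answer_question(case["question"]).lower()
        if all(kw in answer for kw in case["expected_keywords"]):
            hits += 1
    return hits / len(eval_set)

# Example with a canned stub; replace the lambda with your deployed pipeline.
print(run_eval(lambda q: "Refunds need a regional manager; invoices take 30 days."))
```

Tracking this score over time makes it easier to tell whether a prompt, retrieval, or agent change actually helped.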