What's our Vector, Victor?
AI is a hot topic right now, and for good reason! Natural Language Search and Retrieval Augmented Generation (RAG) are two great methods to leverage data stored in Postgres in an immediately useful way. Why use Full Text Search when we can search for intent and related topics?
Actually doing it, on the other hand, is a huge pain. We need to choose an embedding model to vectorize the data, then produce, maintain, and index the embeddings. Then we need to keep the embedding model around to vectorize search terms the same way, and build queries for the appropriate similarity searches. If RAG is involved, we also need to choose a public Large Language Model API, juggle the references fed to the system and user prompts, and relate all of that back to our original data.
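As a rough illustration of that manual workflow, here is what the in-database portion alone can look like with the pgvector extension. This is only a sketch: the table, column names, and the 384-dimension size (matching a small sentence-transformer model) are assumptions, and the query vector still has to be produced by the same embedding model outside the database.

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Illustrative table: the vector dimension must match the embedding model
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(384)
);

-- Approximate nearest-neighbor index using cosine distance
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Similarity search: $1 is the query text, already vectorized by the
-- same embedding model in application code; <=> is cosine distance
SELECT id, content
FROM documents
ORDER BY embedding <=> $1
LIMIT 5;
```

Everything around those queries — generating the embeddings, keeping them fresh as rows change, re-embedding the search text — still lives in application code.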
Or... we can just use the pg_vectorize extension. Come to this talk and we'll show you how to build a rudimentary self-maintaining RAG application with just a few Postgres queries. We'll also discuss a bit of the theory behind modern AI, and how Postgres plays an integral part in that ecosystem thanks to pgvector and related extensions.
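For comparison, here is a minimal sketch of the pg_vectorize approach, based on the extension's documented vectorize.table() and vectorize.search() functions. The job name, source table, columns, and transformer model are illustrative, and exact argument names can vary between pg_vectorize versions, so treat this as a shape rather than a recipe.

```sql
-- Register a table: pg_vectorize generates, stores, and refreshes the
-- embeddings for the listed columns using the chosen transformer model.
SELECT vectorize.table(
    job_name    => 'product_search',                        -- illustrative job name
    "table"     => 'products',                              -- illustrative source table
    primary_key => 'product_id',
    columns     => ARRAY['description'],
    transformer => 'sentence-transformers/all-MiniLM-L6-v2'
);

-- Natural language search: the extension embeds the query text with the
-- same model and runs the similarity search for us.
SELECT * FROM vectorize.search(
    job_name       => 'product_search',
    query          => 'something to write with',
    return_columns => ARRAY['product_id', 'description'],
    num_results    => 3
);
```

The extension also exposes RAG-oriented functions (for example vectorize.rag()) that layer an LLM completion on top of the same retrieval step, which is the part we lean on for the demo application.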
Mainly, we aim to demystify AI so anyone can use it thanks to Postgres.