Maira Khwaja is a Service Delivery Manager and AI product innovator with over 16 years of experience driving complex technology delivery, AI-driven products, and large-scale service operations. At the Kwaai Summit at SCALE 23x, she will present “dRAG Race: Benchmarking Open Source Vector Databases” and co-host the event, bringing a practitioner’s perspective on evaluating and operating open-source vector databases in real-world AI systems.
At RealNetworks, Maira leads global service delivery for KONTXT Messaging and Voice Solutions, where she has managed high-availability deployments, demanding SLAs, and cross-functional teams spanning engineering, product, and customer success. Previously, as a Lead Software Test Engineer, she built and scaled automation frameworks and quality practices that improved release reliability and accelerated product delivery across multiple product lines.
Maira also serves as a Board Officer and AI Product Innovation Lead at Kwaai, helping shape the roadmap for an open-source Personal AI Operating System focused on user empowerment and ethical, transparent AI. In that role, she has contributed to AI product strategy, process improvement, and community-led initiatives that resist concentration of power in closed platforms while making advanced AI capabilities accessible to more people.
Her background in software quality, AI product strategy, and service reliability gives her a systems view of how infrastructure choices—like vector databases—affect performance, trust, and user experience in AI-native applications.

Presentations

23x

dRAG Race: Benchmarking Open Source Vector Databases

“dRAG Race: Benchmarking Open Source Vector Databases” presents the findings of Kwaai’s intern-led Vector DB Performance project, now accepted for publication in the Journal for Big Data and AI. A cross‑functional cohort of data science and engineering interns—guided by a PhD AI‑robotics advisor and program coordinator—designed and ran a rigorous benchmark of seven open source vector databases under realistic RAG workloads, from corpus design and chunking through automated multi‑run experiments and visual analysis.
