The presentation will showcase an example of an emotionally responsive system, built entirely by combining open-source software, artificial intelligence, and robotics. At its core, the project uses a voice-to-text interface to translate spoken commands into a request that is sent to an AI model, producing both a text response and a corresponding physical movement. GPT-3 serves as the AI model that interprets and replies to the request; a Python program (developed in PyCharm) sends, receives, and processes the command; and an Arduino handles the robotics. Bridging the gap between humans and technology requires connecting these separate branches of software into a single pipeline.
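As a rough illustration of how those stages chain together, the sketch below wires voice capture, GPT-3, and the Arduino into one loop in Python. It assumes the speech_recognition and pyserial packages and the legacy (pre-1.0) openai Completion API; the serial port, API key, and the one-byte motion protocol are placeholders for illustration, not the project's actual code.

```python
import serial                    # pyserial, for the Arduino link
import speech_recognition as sr  # microphone capture and speech-to-text
import openai                    # legacy (pre-1.0) GPT-3 Completion API

openai.api_key = "YOUR_API_KEY"                           # placeholder
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port is a placeholder

def listen() -> str:
    """Capture one spoken command and return it as text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def ask_gpt3(prompt: str) -> str:
    """Send the transcribed command to GPT-3 and return its reply."""
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=100,
    )
    return response.choices[0].text.strip()

def move(reply: str) -> None:
    """Map the reply to a one-byte motion command for the Arduino sketch."""
    # Hypothetical protocol: 'W' waves on a positive reply, 'N' nods otherwise.
    command = b"W" if "happy" in reply.lower() else b"N"
    arduino.write(command)

if __name__ == "__main__":
    spoken = listen()
    reply = ask_gpt3(spoken)
    print(reply)
    move(reply)
```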

The message of this project is not only to showcase a pipeline connecting different programming languages and open-source software, but also to demonstrate how a complex, emotionally expressive AI-based system can be built with minimal investment, prior knowledge, or equipment. To convey this, I'll discuss the capabilities of the project, including its voice-driven conversational and command functions, alongside the design process of choosing components that limit cost and maximize impact.

While demonstrating both the "liveliness" and the complexity of integrating a variety of open-source software with artificial intelligence, this presentation also invites and encourages others to experiment with similar systems and with the human-like capabilities of artificial intelligence. Attendees will leave inspired to contribute to a world where AI doesn't just understand basic commands, but can perform complex tasks and express emotion, even on a minimal budget and with only a basic understanding of code.