Presentations
pgBadger is a widely used tool that helps pinpoint exactly what is slowing a system down, improve SQL for quick wins, and enhance overall performance. But what if you could do more? Imagine putting the power of PostgreSQL itself to work on its own log analysis and performance optimization.
Explore the latest features in pgBadger and the tools we built on top of it to unlock a world of possibilities!
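As a minimal sketch of the kind of automation this involves, the Python wrapper below runs pgBadger over a directory of PostgreSQL logs and produces an HTML report. The log directory and report path are hypothetical placeholders, and while -j (parallel jobs) and -o (output file) are standard pgBadger options, confirm them against your installed version.

```python
import subprocess
from pathlib import Path

# Hypothetical paths; adjust to your environment.
LOG_DIR = Path("/var/log/postgresql")
REPORT = Path("/var/www/reports/pgbadger.html")

def build_report() -> None:
    """Run pgBadger over the PostgreSQL logs and emit an HTML report."""
    logs = sorted(LOG_DIR.glob("postgresql-*.log"))
    if not logs:
        raise FileNotFoundError(f"no PostgreSQL logs found under {LOG_DIR}")
    subprocess.run(
        ["pgbadger", "-j", "4", "-o", str(REPORT), *map(str, logs)],
        check=True,  # raise if pgBadger exits non-zero
    )

if __name__ == "__main__":
    build_report()
```

Scheduling a wrapper like this (via cron or a systemd timer) is one common way to keep reports continuously up to date.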
Fast access to unlimited resources has allowed many organizations to iterate and innovate quickly. It comes at a cost, though. In addition to growing AWS bills, operational knowledge has been ceded to managed services and SaaS providers. Organizational resilience has been lost, and the cost to the environment and sustainability efforts will impact the world around us for generations to come.
Learn how we can take more control of our journeys by building awareness, rebuilding skills, thinking beyond the quarterly sales goal, and cleaning up our messes.
Join us for a presentation discussing the unique considerations of moderating a code collaboration platform. Using diverse case studies, we'll look at how GitHub has refined its approach to developer-first content moderation in response to technological and societal developments as well as notable incidents on the platform. This presentation is a condensed version of the recently published T&S Research Journal article, “Nuances and Challenges of Moderating a Code Collaboration Platform” authored by members of the GitHub Trust and Safety, Legal, and Policy teams.
Research Software Engineering (RSE) is emerging as a critical bridge between scientific research and software development. This talk will introduce the discipline of Research Software Engineering and highlight how it accelerates discovery and fosters collaboration by embracing open source principles. I will also discuss how RSE career paths are evolving, the challenges faced by modern research environments, and the pivotal role open source ecosystems play in advancing scientific innovation.
Designing complex, integrated systems is a challenge with so many products available in the cloud native open source world. This talk discusses a team’s recent experience building out a new system, and the choices made to build an automatable, scalable, and stable environment. The talk will be delivered in a “choose your own adventure” format, where we discuss the different OSS options that were considered and their implementation. As we work with the audience through the different options, we will encounter the benefits and consequences of our choices.
We present a recipe for improving the code integrity of a container host based on Azure Linux, at scale on Microsoft Azure. From a cloud provider’s perspective, there is a serious need to protect the integrity of the container host runtime from tenant container workloads that require privileged access, without sacrificing performance or serviceability. One of the key ingredients is the Integrity Policy Enforcement (IPE) Linux Security Module, developed at Microsoft and accepted upstream in Linux kernel version 6.10.
This session breaks down the training process of multi-modal machine learning models for real-time detection of debilitating diseases using video and audio data. Attendees will learn how to build and integrate video and audio classifiers, curate multi-modal datasets, and deploy these models in real-world settings. The session includes practical demonstrations and code samples to help participants implement their own multi-modal classification models, equipping them with tools and methodologies to apply in the healthcare industry or other fields requiring complex real-time detection.
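To make the fusion idea concrete, here is a minimal PyTorch sketch of a late-fusion multi-modal classifier. The encoders, feature dimensions, and class count are illustrative assumptions, not the speaker's actual architecture.

```python
import torch
import torch.nn as nn

class MultiModalClassifier(nn.Module):
    """Late-fusion classifier: separate video and audio encoders whose
    embeddings are concatenated and passed to a shared classification head."""

    def __init__(self, video_dim=512, audio_dim=128, hidden=256, num_classes=2):
        super().__init__()
        # Stand-ins for real encoders (e.g., a 3D CNN over video frames,
        # a spectrogram CNN over audio) that would produce these features.
        self.video_encoder = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, video_feats, audio_feats):
        v = self.video_encoder(video_feats)   # (batch, hidden)
        a = self.audio_encoder(audio_feats)   # (batch, hidden)
        fused = torch.cat([v, a], dim=-1)     # late fusion by concatenation
        return self.head(fused)               # (batch, num_classes) logits

# Example forward pass on random features standing in for real clips.
model = MultiModalClassifier()
logits = model(torch.randn(8, 512), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 2])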
This talk introduces the Universal Blue project, which builds custom Fedora Atomic images via OCI/Docker containers, a capability not yet available in mainline Fedora. The project also publishes a diverse set of base images and tooling for users to develop their own custom images. We will explain how the project builds images and why we believe this model is the future of Linux.
We've all probably encountered the following scenario at least once. We start pulling a container image and have to wait a solid 4–5 minutes... Why? Oh, our colleague decided to put all the development and build tools inside the container, making it more than 5 GB in size. Let me show you a way to construct images that avoids this problem, and maybe teach you a thing or two about Nix in the process.
We will examine the inner workings of container registries, such as Docker Hub and GitHub Container Registry, exploring the types of data these registries can capture. Additionally, we will discuss potential analytics that can be derived from this data, highlighting its practical utility. Lastly, we will outline how this information can benefit your project and share ways to acquire it.
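As one concrete example of such data, the short Python script below queries the public Docker Hub v2 API for a repository's pull and star counts. The endpoint path and field names reflect the API as commonly documented at the time of writing; verify them against current Docker Hub documentation before depending on them.

```python
import json
import urllib.request

def docker_hub_stats(namespace: str, repo: str) -> dict:
    """Fetch public repository metadata from the Docker Hub v2 API."""
    url = f"https://hub.docker.com/v2/repositories/{namespace}/{repo}/"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # pull_count and star_count are the headline metrics most projects track.
    return {"pulls": data["pull_count"], "stars": data["star_count"]}

if __name__ == "__main__":
    print(docker_hub_stats("library", "postgres"))
```

Polling an endpoint like this on a schedule and storing the results is a simple way to build the pull-count time series that registry dashboards rarely expose directly.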