Data Reliability Engineering Conference
One day of data reliability standards and innovation
Data has become a core competitive advantage for modern organizations, making data reliability more important than ever. Driven by this growing need, the tools, techniques, and best practices for keeping data fresh, accurate, and trustworthy are evolving rapidly. This renaissance in data tooling is changing the game up and down the entire data stack, and Data Reliability Engineering is an emerging discipline that every data team needs in order to unlock the full potential of its organization’s data.
Join us for a fully virtual day of technical talks, panels, and networking. Hear from data engineering teams, modern data stack companies, and founders, and engage with fellow data engineers, scientists, and analysts working to solve the most complex data reliability problems enterprises face today.
A sample of the topics covered:
- Embracing risk – how to balance reliability and change velocity
- Measuring freshness across the data model
- Defining data quality with SLAs
- Gold-standard datasets
- Data observability
- Monitoring ML model performance
- Reducing toil – eliminating repetitive tasks in data engineering
- Version control, code review, and data diffs
- Employing automation for self-healing pipelines
- Quarantining bad data
- Maintaining simplicity – reducing complexity in infrastructure, data models, and tools
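As a small taste of the freshness and SLA topics above, here is a minimal sketch of a table-freshness check against an SLA threshold. All names and values are illustrative assumptions, not taken from any particular tool; in practice `last_loaded_at` would come from warehouse metadata such as a `MAX(updated_at)` query.

```python
from datetime import datetime, timedelta, timezone

# Assumed SLA for this example: data must be no more than 6 hours old.
FRESHNESS_SLA = timedelta(hours=6)

def is_fresh(last_loaded_at: datetime, now: datetime,
             sla: timedelta = FRESHNESS_SLA) -> bool:
    """Return True if the table was last loaded within the SLA window."""
    return (now - last_loaded_at) <= sla

# Illustrative check: one table loaded 4 hours ago, another 24 hours ago.
now = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)
print(is_fresh(datetime(2023, 5, 1, 8, 0, tzinfo=timezone.utc), now))    # within SLA
print(is_fresh(datetime(2023, 4, 30, 12, 0, tzinfo=timezone.utc), now))  # SLA breached
```

A check like this is the seed of a freshness SLA: run it on a schedule, and quarantine or alert on tables that breach the window.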