
@swyxio
Last active October 20, 2022 18:01
prompts used for Airbyte Data Nets article https://airbyte.com/blog/data-nets
1. Introduction
2. What are Data Nets?
3. Data Nets vs. Data Mesh
4. Data Nets vs. Data Contract
5. When do you need a Data Net?
6. What does a Data Net architecture look like?
7. Main technological and cloud data warehousing trends
8. Organizational and socio-technical adjustments
9. Core principles of the Data Net
---
CONCEPT 1: DATA ORCHESTRATION
Data Orchestration models dependencies between different tasks end-to-end across heterogeneous environments. It handles integrations with legacy systems, cloud-based tools, data lakes, and data warehouses. It invokes computation, such as running your business logic in SQL and Python and applying ML models, at the right time, driven by a time-based trigger or by custom-defined logic. Data consumers, such as data analysts and business users, care mostly about the production of data assets. Data engineers, on the other hand, have historically focused on modeling the dependencies between tasks (instead of data assets) with an orchestrator tool. How can we reconcile both worlds?

This article reviews open-source data orchestration tools (Airflow, Prefect, Dagster) and discusses how they introduce data assets as first-class objects. We also cover why a declarative approach with higher-level abstractions helps with faster development cycles, stability, and a better understanding of what is going on pre-runtime. We explore five different abstractions (jobs, tasks, resources, triggers, and data products) and ask whether they help build a Data Mesh. What sets an orchestrator apart is that it lets you see when things are happening (monitoring with rich metadata), what is going wrong, and how to repair a bad state with integrated features such as backfills.
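The asset-first, declarative idea above can be sketched in a few lines: instead of wiring tasks together imperatively, each asset simply declares its upstream assets, and the orchestrator derives the execution order. The asset names and graph here are illustrative assumptions, not any particular tool's API.

```python
from graphlib import TopologicalSorter

# Hypothetical asset graph: each asset declares the upstream assets it
# depends on, in the spirit of declarative, asset-first orchestration.
ASSET_DEPS = {
    "raw_events": set(),                 # ingested from a source system
    "cleaned_events": {"raw_events"},    # SQL/Python wrangling step
    "daily_report": {"cleaned_events"},  # BI-facing data product
}

def materialization_order(deps):
    """Return an order in which the assets can be materialized."""
    return list(TopologicalSorter(deps).static_order())

print(materialization_order(ASSET_DEPS))
# ['raw_events', 'cleaned_events', 'daily_report']
```

Because the graph is declared up front rather than discovered at runtime, an orchestrator can validate it, visualize it, and plan backfills pre-runtime, which is the practical payoff of the declarative approach.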
In the end, an orchestrator must activate Business Intelligence, Analytics, and Machine Learning: company-accessible dashboards and reports, machine learning models, or self-serve BI environments where users can create and pull their own data. This is also where the shift happens from data pipelines to what the user is actually interested in: the Data Asset, or Data Product in the jargon of Data Mesh. I will use orchestration as a synonym for data orchestration, since everything we discuss in this article concerns data, and I use Data Assets interchangeably with Data Products.

swyxio commented Oct 20, 2022

Section 8. Organizational and socio-technical adjustments

If Data Nets are the future, what will that mean for data engineering organizations and the data engineering profession?

There are a few key ways that Data Nets will impact data engineering organizations:

  • The role of the data engineer will become more focused on governance and quality control, as automated pipelines take care of the heavy lifting.
  • Data engineers will need to be comfortable with code, as they will be responsible for maintaining and troubleshooting complex algorithms.
  • There will be a greater emphasis on collaboration between data engineers and other teams, as Data Nets require a cross-functional approach.
  • There will be a need for new tools and processes to support Data Nets, which may require investment from organizational leaders.


swyxio commented Oct 20, 2022


Write an improved draft of the introduction to this essay

In recent years, a confluence of data engineering, neural networks, and generative AI has given rise to a new breed of data architecture known as Data Nets. Data Nets are composed of four main components: a data router, a data store, a control plane, and a data pipeline. In direct contrast with the dbt semantic layer, Data Nets are designed to be fully automated, taking advantage of advances in neural networks and generative AI. This essay explores the advantages of Data Nets over traditional approaches to data management, as well as the implications of this new breed of data architecture for the future of data engineering.
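As a rough illustration of how the four components might fit together, here is a minimal sketch. Every class and method name is a hypothetical assumption made for this example; the essay does not prescribe a concrete API.

```python
# Hypothetical sketch of a Data Net's four components: router, store,
# control plane, and pipeline. All names here are illustrative.
class DataStore:
    """Holds the net's interconnected data tables."""
    def __init__(self):
        self.tables = {}

    def write(self, table, rows):
        self.tables.setdefault(table, []).extend(rows)

class DataRouter:
    """Decides which table an incoming record belongs in."""
    def route(self, record):
        return "events" if "event" in record else "entities"

class DataPipeline:
    """Moves records from a source into the store via the router."""
    def __init__(self, router, store):
        self.router, self.store = router, store

    def run(self, records):
        for record in records:
            self.store.write(self.router.route(record), [record])

class ControlPlane:
    """Owns the other components and triggers pipeline runs."""
    def __init__(self):
        self.store = DataStore()
        self.pipeline = DataPipeline(DataRouter(), self.store)

net = ControlPlane()
net.pipeline.run([{"event": "click"}, {"user_id": 1}])
print(sorted(net.store.tables))  # ['entities', 'events']
```

In a real Data Net, the router's hard-coded rule would presumably be replaced by a learned model, which is where the neural-network and generative-AI advances come in.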

Data engineering has come a long way in recent years, thanks in part to advances in neural networks and generative AI. These technologies have given rise to a new breed of data architectures known as Data Nets.

Data Nets have the following characteristics:

  • A net is composed of a set of interconnected data tables.
  • The net must be fully automated, with intelligent algorithms that determine the best way to route data between the tables.
  • The net must be highly available and scalable, able to handle large workloads on demand.
  • The net should be able to evolve over time, adding new features and capabilities as needed.

In addition, the Data Net is designed to be highly scalable and easily integrated into existing systems. It can be used to supplement or replace an existing data warehouse, or it can be used as the primary data store for a new system.

The Data Net is also intended to be highly available, with built-in redundancy and failover capabilities.
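The availability and failover behavior described above can be sketched as a least-loaded routing policy with unhealthy nodes excluded. The node model and scoring rule are assumptions made for illustration only.

```python
# Illustrative failover sketch: records go to the healthy node with the
# least load, so traffic automatically shifts away from failed nodes.
class Node:
    def __init__(self, name):
        self.name, self.load, self.healthy = name, 0, True

    def accept(self, record):
        self.load += 1

def route(nodes, record):
    """Send the record to the least-loaded healthy node."""
    candidates = [n for n in nodes if n.healthy]
    if not candidates:
        raise RuntimeError("no healthy nodes available")
    target = min(candidates, key=lambda n: n.load)
    target.accept(record)
    return target.name

nodes = [Node("a"), Node("b")]
nodes[0].healthy = False          # simulate a failure; traffic fails over
print(route(nodes, {"id": 1}))    # 'b'
```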
