kato

joined 1 year ago
[–] kato@programming.dev 1 points 5 months ago

I have a couple of years of experience writing functional Scala as a backend web dev and switched to doing data engineering 2 years ago. Before that, some C/C++ (that's where my Rust interest came from).

I definitely understand the feeling of learning from scratch. I had the same experience learning functional programming, but having learnt that made learning Rust much easier.

[–] kato@programming.dev 1 points 5 months ago (2 children)

Oh, no worries, I am quite new to Rust myself, but I am lucky to be able to use it at work and already have some experience with DataFusion and delta-rs :). Accessing PostgreSQL with this is not supported yet, but I am trying to figure out using OpenDAL for that, which should hopefully make it quite easy to implement.
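To give a rough idea of why OpenDAL looks promising: it exposes a single `Operator` interface over many storage backends. Here's a minimal sketch using the S3 service (which I'm more familiar with; bucket name and path are made up); the hope is that a PostgreSQL-backed service could slot in behind the same interface:

```rust
use opendal::{services, Operator};

#[tokio::main]
async fn main() -> opendal::Result<()> {
    // Illustrative only: configure an S3 backend (recent OpenDAL builder
    // style); the bucket and object path are made-up placeholders.
    let builder = services::S3::default()
        .bucket("my-bucket")
        .region("us-east-1");

    // The Operator is backend-agnostic: swapping the builder for another
    // service would leave the read/write calls below unchanged.
    let op = Operator::new(builder)?.finish();

    let data = op.read("path/to/object").await?;
    println!("read {} bytes", data.len());
    Ok(())
}
```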

[–] kato@programming.dev 5 points 5 months ago

ETL stands for extract, transform, and load. It is a widely used architecture for data pipelines: you load some data from different sources (like an S3 or GCS bucket), apply some transformation logic to aggregate the data or do some other transformation like changing the schema, and then output the result as a different data product.
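As a minimal sketch of those three steps in Rust (using DataFusion here; the file names, table name, and query are invented for illustration):

```rust
use datafusion::dataframe::DataFrameWriteOptions;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();

    // Extract: register a local CSV file as a table. An object store
    // source (S3/GCS) would be registered in much the same way.
    ctx.register_csv("raw_events", "input.csv", CsvReadOptions::new())
        .await?;

    // Transform: aggregate with plain SQL.
    let df = ctx
        .sql("SELECT user_id, count(*) AS events FROM raw_events GROUP BY user_id")
        .await?;

    // Load: write the result out as a new data product.
    df.write_parquet("out/", DataFrameWriteOptions::new(), None)
        .await?;
    Ok(())
}
```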

These pipelines are then usually run on a schedule, or triggered, to periodically output data for different time periods. This makes it possible to deal with large sets of data by breaking them down into more manageable pieces, for a downstream data science team or a team of data analysts, for example.
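For instance, a backfill can be broken into one run per day, with the date parameterizing the query each run executes. Everything in this sketch (table name, column, date range) is hypothetical:

```rust
use chrono::NaiveDate;

fn main() {
    // Hypothetical daily backfill window.
    let start = NaiveDate::from_ymd_opt(2024, 1, 1).unwrap();
    let end = NaiveDate::from_ymd_opt(2024, 1, 7).unwrap();

    let mut day = start;
    while day <= end {
        // In a real pipeline the scheduler would trigger one run per day,
        // with the date parameterizing the query for that period.
        let query = format!("SELECT * FROM events WHERE event_date = '{day}'");
        println!("would run: {query}");
        day = day.succ_opt().expect("date overflow");
    }
}
```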

What this library is aiming at is to combine the querying capabilities of DataFusion, which is a query parser and query engine, with the Delta Lake protocol to provide a pretty capable framework for building these pipelines in a short amount of time. I've used both DataFusion and delta-rs for some time and I really love these projects, as they enable me to use Rust in my day job as a data engineer, which is usually a Python-dominated field.
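To show how the two fit together: with the `datafusion` feature of the deltalake crate enabled, a Delta table can be registered straight into a DataFusion session and queried with SQL. The table path and name here are placeholders, not this library's API:

```rust
use std::sync::Arc;
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open an existing Delta table; the local path is a placeholder.
    let table = deltalake::open_table("./data/my_delta_table").await?;

    // With delta-rs' `datafusion` feature, DeltaTable implements
    // DataFusion's TableProvider, so it can be registered directly.
    let ctx = SessionContext::new();
    ctx.register_table("events", Arc::new(table))?;

    let df = ctx.sql("SELECT count(*) FROM events").await?;
    df.show().await?;
    Ok(())
}
```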

However, they are quite complex, as they cover a wide variety of use cases, and this library tries to reduce the complexity of using them by constraining them to the use case of building simple data pipelines.

[–] kato@programming.dev 5 points 5 months ago

Basically, yes. The use cases I have found so far at work are to build an API around this to dynamically register automatic reports for data analysts, clients, and non-devs. In general this also greatly speeds up dev time for any ETL that we need to deploy (I am part of a data engineering team). Another use case I found is that, using the CLI tool, we can create runbooks for our SRE team to run queries for debugging/data validation purposes. I think we'll find more as we go, but another part of it was to simplify working with DataFusion and Delta Lake, as their APIs expose a lot of lower-level stuff.
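The runbook idea, stripped to its core, is just "take a SQL string and run it against a registered dataset". A minimal sketch of that using plain DataFusion (the table name and file path are invented; this is not the actual CLI):

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Take the SQL to run from the command line, runbook-style.
    let sql = std::env::args().nth(1).expect("usage: runbook <SQL>");

    // Register the dataset the query is allowed to touch; the table name
    // and file path are made up for this sketch.
    let ctx = SessionContext::new();
    ctx.register_parquet("events", "data/events.parquet", ParquetReadOptions::default())
        .await?;

    ctx.sql(&sql).await?.show().await?;
    Ok(())
}
```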


This is my first try at anything open source, so any feedback is welcome :)


Hey, I held a talk at the Vienna Rust meetup in January about how we use Rust to write data pipelines in our company. I really enjoy writing ETLs like this, so I wanted to share.