Batch-driven or event-driven ETL?
I am trying to come up with a data pipeline architecture. The data I deal with is event logging for labs (requested, failed, succeeded, etc.) with timestamps and some customer info, across several different customers. Eventually I want that data fed into a dashboard, for both external and internal use. What's the best way to approach this: event-driven or batch-driven ETL? We don't care much about real-time processing, and the data volume is rather small.
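For context, here is a minimal sketch of the kind of daily batch job I'm picturing (the file path, column names, and output table are placeholders, not our real setup):

```python
# Minimal daily batch ETL: read raw lab events, aggregate, load for the dashboard.
# All names here (paths, columns, table) are hypothetical placeholders.
import sqlite3

import pandas as pd


def run_daily_batch(events_path: str, db_path: str) -> None:
    # Extract: raw event log, one row per lab event.
    events = pd.read_csv(events_path, parse_dates=["timestamp"])

    # Transform: daily counts per customer and event status
    # (requested / failed / succeeded).
    daily = (
        events
        .assign(day=events["timestamp"].dt.date)
        .groupby(["customer_id", "day", "status"])
        .size()
        .reset_index(name="event_count")
    )

    # Load: rebuild the dashboard-facing table on each run.
    with sqlite3.connect(db_path) as conn:
        daily.to_sql("lab_event_daily", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    run_daily_batch("lab_events.csv", "dashboard.db")
```

The event-driven alternative would process each event as it arrives instead of on a schedule, which is where I'm unsure the extra machinery is worth it for data this small.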
Topic: data-engineering, etl
Category: Data Science