Chief Data Solutions Architect
Job Opportunity
We are seeking a strategic leader to spearhead the development of data engineering solutions. This key position oversees the design and implementation of data pipelines and ETL processes, ensuring high-quality deliverables.
Key Responsibilities
- Lead the design and development of data engineering solutions
- Mentor a team of data engineers
- Provide technical guidance
- Ensure high-quality deliverables
Requirements
- Strong technical skills in code review, architecture, and data models
- Experience with designing data pipelines and building ETL processes
- Passion for continuous learning and improvement
- Ability to accurately estimate time and resources for projects
- Excellent communication and collaboration skills
- Leadership and mentoring skills
- Experience working with Python, Scala, Java, Spark, PySpark, GCP, AWS, Azure, SQL, OLTP, OLAP
- Proficiency in PostgreSQL, MSSQL, MySQL
- Experience with Apache Airflow, Prefect, Glue, Azure Data Factory
- Knowledge of data integration, business intelligence architecture
- Experience with MongoDB, DynamoDB, Azure Data Lake, Apache Hudi, Apache Iceberg, Delta Lake
- Stream processing using AWS Kinesis, Kafka Streams, Flink, Beam, or Storm; experience with HDFS and Hive
- Containerized or serverless deployment using Docker, ECS, Kubernetes, Lambda
- Good knowledge of JSON, XML, Proto, Parquet, Avro, ORC
Job Details
Company: beBeeData
Location: Budapest, HU
Posted: 27 Sep 2025