Chief Data Architect
Key Role in Data Engineering
We are seeking a skilled data engineer to join our team and drive the further development of software in the Big Data environment.
This is an exceptional opportunity for a talented professional to lead the design, implementation, and optimization of scalable, production-ready data products using PySpark and cloud-based solutions with a focus on Google Cloud Platform (GCP).
Responsibilities:
- Design and implement efficient data pipelines, leveraging batch and streaming processes, to ensure seamless integration and high-performance distributed data processing.
- Develop and optimize PySpark code for optimal performance in large-scale data processing environments.
- Collaborate with agile teams (Scrum, Kanban) to continuously improve processes and tools, promoting DevOps principles and CI/CD practices.
- Work closely with cross-functional teams, applying hands-on experience with cloud platforms, especially GCP.
Requirements:
- Strong background in Big Data, including hands-on expertise with the Hadoop/Spark ecosystem (PySpark, Airflow, HDFS, Elasticsearch)
- Experience with ETL (SSIS), tabular models (SSAS), and data warehousing/Business Intelligence architectures
- Proficiency in Python; knowledge of C# and BigQuery is an advantage
- Solid understanding of cloud-based solutions, particularly Google Cloud Platform (GCP)
- Familiarity with agile methods (Scrum, Kanban) and DevOps principles
Benefits:
We offer a dynamic work environment, international projects, a training budget, a sports subscription, private healthcare, a company mobile phone, a modern office with a canteen, free coffee, free parking, bike parking, showers, a playroom, in-house trainings and hack days, and no dress code.
Details of the job offer
Company: beBeeData
Location: Debrecen, Hajdú-Bihar, HU
Added: 27. 9. 2025