Hiring company: HAYS
Salary: Not specified
Big Data Engineer in Madrid
Be a key driver on BASF's path to digitalisation by supporting existing products and initiatives and by innovating additional digital solutions that support BASF's global businesses.
Our unit “Data Enablement – Big Data Solutions” uses highly innovative technologies to develop advanced analytics prototypes, builds big data storage solutions and analytics platforms for global deployment, and organizes an overarching data lake, the BASF Enterprise Data Lake.
– A Bachelor's or Master's degree in relevant Business/IT studies with at least 4-5 years of experience in a similar role
– Business Consulting and Technical Consulting skills
– Practical experience with at least one of the following: Hadoop, Spark or Hive; Tableau or Power BI; Azure
– A flexible, agile way of working (Scrum knowledge appreciated) with a DevOps mindset
– An entrepreneurial spirit and the ability to foster a positive and energized culture
– A growth mindset with a curiosity to learn and improve
– Team player with strong interpersonal, written and verbal communication skills
– You can demonstrate fluent communication skills in English (spoken and written)
– You are experienced with data visualization tools such as Tableau or Power BI
– At least 2 years of experience in the field of Data Engineering, Big Data and Distributed Computing
– Experience with Big Data technologies such as Hadoop, Spark, Hive and Kafka
– Experience with Cloud Big Data technologies and architectures within Azure, Google Cloud or AWS
– Experience with tools such as Apache NiFi, Kylo or StreamSets
– Experience with Java, Scala, Python and MySQL
– Run Data Ingestion and Advanced Data Screening
– Design and build Data Flows within Big Data Architectures
– Develop and optimize data models and pipelines for performance and scalability, making them reusable and publishing them in libraries for future use
– Support industrialization of Analytics Solutions
– Enable meaningful and insightful reports for Data Analysis and Monitoring
– Ensure systematic quality assurance for the validation of accurate Data Processing
– Build reusable code and libraries for future use
– Optimize applications for maximum speed and scalability
– Implement security and data protection
– Translate stakeholder requirements into concrete specifications for data warehouses, BI solutions and self-service solutions
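As an illustration only (not part of the posting), the kind of reusable data flow described above, screening out invalid records and composing transformations into a pipeline that can live in a shared library, can be sketched in plain Python; the `pipeline`, `screen_valid` and `scale` names are hypothetical, not BASF's actual codebase:

```python
# Hypothetical sketch: compose small, reusable steps into a data flow.
def pipeline(*steps):
    """Chain record-level transformations into one reusable flow."""
    def run(records):
        for step in steps:
            records = step(records)
        return list(records)
    return run

def screen_valid(records):
    """Data screening: drop records with a missing 'value' field."""
    return (r for r in records if r.get("value") is not None)

def scale(factor):
    """Parameterized transform, kept generic so other pipelines can reuse it."""
    def step(records):
        return ({**r, "value": r["value"] * factor} for r in records)
    return step

# Ingest two raw records; the invalid one is screened out before transforming.
ingest = pipeline(screen_valid, scale(10))
result = ingest([{"id": 1, "value": 2}, {"id": 2, "value": None}])
```

In a production setting the same composition idea would typically be expressed with Spark transformations or a NiFi/StreamSets flow rather than plain generators.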
Hadoop, Spark, Hive, Tableau, PowerBI, Azure
To apply for this job, please visit www.tecnoempleo.com.