Sr. Azure Data Engineer
Pune, IN
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.
Roles & Responsibilities:
- Design and develop end-to-end data solutions using PySpark, Python, SQL, and Kafka, leveraging Microsoft Fabric's capabilities.
Requirements:
- Data Engineering: Design and develop end-to-end data solutions using PySpark, Python, SQL, and Kafka, leveraging Microsoft Fabric's capabilities.
- MS Fabric Setup: Implementation knowledge of Microsoft Fabric, including recommended practices for capacity and workspace configuration.
- Data Flows: Design and implement data flows within the Microsoft Fabric environment.
- Storage Strategies: Implement OneLake storage strategies.
- Analytics Configuration: Configure Synapse Analytics workspaces.
- Migration: Experience migrating from existing data platforms such as Databricks/Spark to Microsoft Fabric.
- Integration Patterns: Establish Power BI integration patterns.
- Data Integration: Architect data integration patterns between systems using Azure Databricks/Spark and Microsoft Fabric.
- Delta Lake Architecture: Design Delta Lake architecture and implement the medallion architecture (Bronze/Silver/Gold layers); see the illustrative sketch after this list.
- Real-Time Data Ingestion: Create real-time data ingestion patterns and establish data quality frameworks.
- Data Governance: Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance.
- Security: Implement row-level security, data masking, and audit logging mechanisms.
- Pipeline Development: Design and implement scalable data pipelines using Azure Databricks/Spark for ETL/ELT processes and real-time data integration.
- Performance Optimization: Implement performance tuning strategies for large-scale data processing and analytics workloads.
- Experience: Proven experience in data architecture, particularly with Microsoft Fabric and Azure Databricks/Spark.
- Technical Skills: Proficiency in Microsoft Fabric, Azure Databricks/Spark, Synapse Analytics, and data modelling.
- Analytical Skills: Strong analytical and problem-solving skills.
- Communication: Excellent communication and teamwork skills.
- Certifications: Relevant certifications in Microsoft data platforms are a plus.
- Qualifications: Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
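For illustration only, here is a minimal sketch of the medallion (Bronze/Silver/Gold) pipeline pattern referenced above, written in PySpark with Delta Lake. The paths, column names, and business logic are hypothetical examples chosen for this sketch, not part of the role description, and the code assumes a Spark runtime with Delta Lake support (such as Microsoft Fabric or Databricks).

```python
# Minimal medallion-architecture sketch in PySpark with Delta Lake.
# All paths and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

RAW_PATH = "/lake/raw/orders"            # hypothetical landing zone
BRONZE_PATH = "/lake/bronze/orders"      # raw data as ingested
SILVER_PATH = "/lake/silver/orders"      # cleaned and conformed
GOLD_PATH = "/lake/gold/daily_revenue"   # business-level aggregates

# Bronze: ingest raw files as-is, adding load metadata.
bronze = (
    spark.read.json(RAW_PATH)
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save(BRONZE_PATH)

# Silver: apply basic data-quality rules (deduplicate, filter, derive types).
silver = (
    spark.read.format("delta").load(BRONZE_PATH)
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").save(SILVER_PATH)

# Gold: aggregate into an analytics-ready table for Power BI / Synapse consumers.
gold = (
    silver.groupBy("order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.format("delta").mode("overwrite").save(GOLD_PATH)
```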
Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs & work-life balance, with integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.
#Eviden
Let’s grow together.