Data Engineering Lead (Competitive pay)
What you'll do:
- Design and implement scalable and efficient data architectures to support the organization's data processing needs
- Work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives
- Oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse
- Ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation
- Stay abreast of emerging trends and technologies in big data and analytics, and assess their applicability to the organization's data strategy
- Implement and optimize big data technologies to process and analyze large datasets efficiently
- Collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance
- Performance Monitoring and Optimization:
- Implement monitoring tools and processes to track the performance of data pipelines and proactively address any issues
- Optimize data processing workflows for improved efficiency and resource utilization
- Documentation:
- Maintain comprehensive documentation for data engineering processes, data models, and system architecture
- Ensure that team members follow documentation standards and best practices
- Collaboration and Communication:
- Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements
- Communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities
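As a rough illustration of the ETL, cleansing, and validation work described above, here is a minimal Python sketch; the data, function names, and in-memory "warehouse" are all hypothetical stand-ins, not the company's actual stack:

```python
# Minimal illustrative ETL sketch (hypothetical data and names).
# Extract raw records, validate and cleanse them, then load the survivors.

def extract():
    # In practice this would read from a source system; here, inline sample data.
    return [
        {"id": 1, "revenue": "1200.50"},
        {"id": 2, "revenue": None},   # missing value -> rejected during cleansing
        {"id": 3, "revenue": "980.00"},
    ]

def transform(records):
    # Data cleansing/validation: drop records with missing revenue,
    # cast the remaining values to float.
    clean = []
    for r in records:
        if r["revenue"] is None:
            continue
        clean.append({"id": r["id"], "revenue": float(r["revenue"])})
    return clean

def load(records, warehouse):
    # Stand-in for a warehouse write (e.g. an INSERT into Snowflake).
    warehouse.extend(records)

warehouse = []
load(transform(extract()), warehouse)
print(len(warehouse))  # 2 valid records loaded
```

In a production pipeline each stage would typically be a separate task in an orchestrator, with rejected records routed to a quarantine table rather than silently dropped.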
What makes you a great fit:
- 6-8 years of professional experience in data engineering
- In-depth knowledge of data modeling, ETL processes, and data warehousing
- In-depth knowledge of building data warehouses on Snowflake
- Experience with data ingestion, data lakes, data mesh, and data governance
- Proficiency in Python programming
- Strong understanding of big data technologies and frameworks, such as Hadoop, Spark, and Kafka
- Experience with cloud platforms, such as AWS, Azure, or Google Cloud
- Familiarity with SQL and NoSQL database systems, as well as data pipeline orchestration tools
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
- Proven ability to work collaboratively in a fast-paced, dynamic environment
GIST Impact is a leading impact data and analytics provider that has been measuring and quantifying impact for more than 15 years. With a team of 100+ scientists, engineers, data scientists and ecological and environmental economists, GIST Impact delivers market-leading impact platforms and datasets, covering 12,800+ companies with geographically precise, time-series data. GIST Impact works with pioneering companies across all sectors and with investors representing over $8 trillion in assets under management. GIST Impact also partners with some of the world's largest ESG data providers, business networks, and fintech platforms to enable impact measurement across global markets.