

ETL Developer // Informatica PowerCenter - W2
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an ETL Developer specializing in Informatica PowerCenter, offered as a remote W2 contract. It requires 10-15 years of ETL experience, proficiency in Python, and expertise in Amazon Redshift and IICS. The pay rate is not disclosed.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 19, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Snowflake #Data Pipeline #HBase #Leadership #AI (Artificial Intelligence) #ERWin #Jenkins #Spark (Apache Spark) #Data Extraction #Data Science #Schema Design #Pandas #Data Profiling #Visualization #S3 (Amazon Simple Storage Service) #GIT #Documentation #Data Integration #Data Management #BI (Business Intelligence) #Vault #Airflow #Data Manipulation #AWS (Amazon Web Services) #API (Application Programming Interface) #Libraries #Microsoft Power BI #Data Lineage #Informatica PowerCenter #Redshift #Compliance #SageMaker #Informatica #Physical Data Model #Data Processing #Datasets #Observability #Scala #Automation #DevOps #Data Governance #Cloud #Data Modeling #Python #ETL (Extract, Transform, Load) #GDPR (General Data Protection Regulation) #NumPy #Lambda (AWS Lambda) #SQL (Structured Query Language) #IICS (Informatica Intelligent Cloud Services) #Data Vault #Amazon Redshift #Data Analysis #PySpark
Role description
Role: ETL Engineer
Location: Remote
Key Responsibilities:
· ETL Development & Innovation: Architect, develop, and optimize sophisticated ETL workflows using Informatica PowerCenter and IICS to manage data extraction, transformation, and loading from diverse sources into Amazon Redshift and other platforms, incorporating real-time and near-real-time processing capabilities.
· Cloud Data Integration & Orchestration: Lead the implementation of cloud-native data integration solutions using IICS, leveraging API-driven architectures to seamlessly connect on-premises, cloud, and hybrid ecosystems, ensuring scalability, resilience, and low-latency data flows.
· Advanced Data Modeling: Design and maintain enterprise-grade logical and physical data models, incorporating advanced techniques like Data Vault or graph-based modeling, to support high-performance data warehousing and analytics.
· Data Warehousing Leadership: Spearhead the development and optimization of Amazon Redshift data structures, utilizing advanced features like Redshift Spectrum, workload management, and materialized views to handle petabyte-scale datasets with optimal performance.
· Advanced Analytics: Conduct in-depth data profiling, cleansing, and analysis using Python and advanced analytics tools to uncover actionable insights and enable predictive and prescriptive analytics.
· Python-Driven Automation: Develop and maintain Python-based scripts and frameworks for data processing, ETL automation, and orchestration, leveraging libraries like pandas, PySpark, or Airflow to streamline workflows and enhance operational efficiency (a minimal sketch of such a script follows this list).
· Performance Optimization & Cost Efficiency: Proactively monitor and optimize ETL processes, Redshift queries, Python scripts, and data pipelines using DevOps practices (e.g., CI/CD for data pipelines) to ensure high performance, cost efficiency, and reliability in cloud environments.
· Cross-Functional Leadership & Innovation: Collaborate with data scientists, AI engineers, business stakeholders, and DevOps teams to translate complex business requirements into innovative data solutions, driving digital transformation and business value.
· Data Governance & Ethics: Champion data governance, quality, and ethical data practices, ensuring compliance with regulations (e.g., GDPR, CCPA) and implementing advanced data lineage, auditing, and observability frameworks.
· Documentation & Thought Leadership: Maintain comprehensive documentation of ETL processes, data models, Python scripts, and configurations while contributing to thought leadership by sharing best practices, mentoring teams, and presenting at industry forums.
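To give a concrete flavor of the Python-driven automation bullet above, here is a minimal sketch of a staging step: extract a CSV, apply basic cleansing with pandas, and stage the result in S3 for a downstream Redshift COPY. The file, bucket, and key names are hypothetical, and the sketch assumes pandas and boto3 are installed with AWS credentials configured in the environment.
```python
# Minimal sketch of a Python-driven ETL staging step. File, bucket,
# and key names are hypothetical; assumes pandas and boto3 are
# installed and AWS credentials are available in the environment.
import io

import boto3
import pandas as pd


def extract(csv_path: str) -> pd.DataFrame:
    """Read raw source data into a DataFrame."""
    return pd.read_csv(csv_path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleansing: drop exact duplicates and normalize column names."""
    df = df.drop_duplicates()
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    return df


def load_to_s3(df: pd.DataFrame, bucket: str, key: str) -> None:
    """Stage the cleansed data in S3, where a Redshift COPY can pick it up."""
    buf = io.StringIO()
    df.to_csv(buf, index=False)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buf.getvalue())


if __name__ == "__main__":
    frame = transform(extract("orders.csv"))
    load_to_s3(frame, "example-staging-bucket", "etl/orders.csv")
```
In practice a step like this would be wrapped in an orchestrator such as Airflow rather than run as a standalone script, but the extract/transform/stage shape is the same.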
Experience:
· 10-15 years of hands-on experience with Informatica PowerCenter for designing and implementing complex ETL workflows.
· 5+ years of experience with Informatica Intelligent Cloud Services (IICS) for cloud-based data integration and orchestration.
· 5-7 years of hands-on experience with Amazon Redshift, including advanced schema design, query optimization, and large-scale data management.
· Extensive experience (8+ years) in data modeling (conceptual, logical, and physical) for data warehousing and analytics solutions.
· 7+ years of experience as a Data Analyst, performing advanced data profiling, analysis, and reporting to support strategic decision-making.
· 5+ years of hands-on experience with Python for data processing, automation, and integration with ETL, data warehousing, and analytics platforms.
Technical Skills:
· Expert-level proficiency in Informatica PowerCenter and IICS for ETL and cloud-native data integration.
· Advanced SQL skills for querying, optimizing, and managing Amazon Redshift environments, including expertise in Redshift-specific features (a hedged loading sketch follows this list).
· Strong expertise in data modeling tools (e.g., ER/Studio, Erwin, or Data Vault) and advanced modeling techniques (e.g., star/snowflake schemas, graph-based models).
· Proficiency in Python for data manipulation, automation, and analytics (e.g., pandas, NumPy, PySpark, Airflow).
· Experience with data visualization and analytics platforms (e.g., MicroStrategy (MSTR), Power BI) for delivering actionable insights.
· Familiarity with AWS cloud services (e.g., S3, Glue, Lambda, SageMaker) and DevOps tools (e.g., Jenkins, Git) for data pipeline automation.
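As a companion to the staging sketch above, the following example shows how such a staged file might be loaded into Redshift with a COPY statement issued over the PostgreSQL wire protocol. The cluster endpoint, database, credentials, target table, and IAM role are hypothetical placeholders, and the sketch assumes psycopg2 is installed.
```python
# Hedged sketch of loading staged S3 data into Redshift via COPY.
# Endpoint, credentials, table, and IAM role are placeholders;
# assumes psycopg2 is installed (Redshift speaks the PostgreSQL
# wire protocol).
import psycopg2

COPY_SQL = """
COPY analytics.orders
FROM 's3://example-staging-bucket/etl/orders.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
FORMAT AS CSV
IGNOREHEADER 1;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="etl_user",
    password="REPLACE_ME",
)
try:
    with conn.cursor() as cur:
        cur.execute(COPY_SQL)
    conn.commit()  # COPY runs inside a transaction until committed
finally:
    conn.close()
```
COPY from S3 is generally preferred over row-by-row inserts for Redshift because the cluster loads the staged file in parallel across slices.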