

Data Engineer - Bridgeville, PA / Homewood, AL / Irving, TX / Brooklyn, OH - Onsite - JOBID587
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. It requires expertise in SQL, at least one programming language (Python, Scala, Java), cloud platforms (AWS, Azure, GCP), and big data technologies.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 7, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Bridgeville, PA
Skills detailed: #Talend #Apache Airflow #GDPR (General Data Protection Regulation) #AWS (Amazon Web Services) #Airflow #Hadoop #Data Quality #BigQuery #SQL (Structured Query Language) #Programming #Databases #Security #Azure #Snowflake #Big Data #dbt (data build tool) #GCP (Google Cloud Platform) #Scala #Data Engineering #Spark (Apache Spark) #Python #Fivetran #Data Security #Informatica #Luigi #Compliance #Kafka (Apache Kafka) #Data Science #Java #Redshift #ETL (Extract, Transform, Load) #Computer Science #Data Pipeline #Cloud
Role description
Key Responsibilities:
• Design, develop, and maintain scalable and robust data pipelines (see the sketch after this list).
• Integrate data from multiple sources, including APIs, cloud platforms, databases, and third-party systems.
• Build and manage data infrastructure and architecture that supports analytics and reporting needs.
• Ensure data quality, consistency, reliability, and governance across systems.
• Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
• Implement data security best practices to ensure compliance with privacy and regulatory standards (GDPR, HIPAA, etc.).
• Monitor pipeline performance and troubleshoot issues.
• Automate repetitive tasks to improve efficiency and reduce manual workloads.
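To make the pipeline responsibilities concrete, here is a minimal sketch of an extract-transform-load step in Python. The API URL, table name, and database path are hypothetical placeholders for illustration, not details from this posting.

import json
import sqlite3
import urllib.request

API_URL = "https://api.example.com/orders"  # hypothetical source endpoint
DB_PATH = "warehouse.db"                    # hypothetical local warehouse

def extract(url: str) -> list[dict]:
    # Pull raw JSON records from a third-party API.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def transform(records: list[dict]) -> list[tuple]:
    # Keep only well-formed rows: a basic data-quality gate.
    return [
        (r["id"], r["customer"], float(r["amount"]))
        for r in records
        if "id" in r and "customer" in r and "amount" in r
    ]

def load(rows: list[tuple], db_path: str) -> None:
    # Idempotent load: INSERT OR REPLACE keeps reruns safe.
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(id TEXT PRIMARY KEY, customer TEXT, amount REAL)"
        )
        conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract(API_URL)), DB_PATH)

In a production pipeline the sqlite3 target would typically be replaced by a warehouse such as Snowflake, Redshift, or BigQuery, but the extract/transform/load separation stays the same.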
Required Skills and Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
• Experience in data engineering or a related role.
• Proficiency in SQL and at least one programming language (e.g., Python, Scala, Java).
• Experience with data pipeline tools such as Apache Airflow, Luigi, or similar (see the DAG sketch after this list).
• Strong understanding of ETL/ELT processes and tools (e.g., Talend, Informatica, Fivetran, dbt).
• Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
• Experience working with big data technologies like Spark, Hadoop, or Kafka.
• Familiarity with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
• Strong problem-solving and communication skills.
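For the orchestration tools named above, here is a minimal Apache Airflow DAG sketch using the Airflow 2.x TaskFlow API. The task bodies, schedule, and start date are illustrative assumptions, not requirements from this posting.

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        # Stand-in for an API or database pull.
        return [{"id": 1, "amount": "42.0"}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Cast fields and enforce basic data quality.
        return [{"id": r["id"], "amount": float(r["amount"])} for r in records]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a warehouse write (e.g., Snowflake, Redshift, BigQuery).
        print(f"loaded {len(rows)} rows")

    # Chaining the calls sets the extract -> transform -> load dependency.
    load(transform(extract()))

example_etl()

Tools like Luigi express the same idea with task classes and requires() methods rather than decorators; the dependency-graph concept carries over.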