Brooksource

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length, pay rate, and location are listed as unknown. Key skills include SQL, ETL/ELT tools, Python, and cloud platforms. A Bachelor's degree and 3+ years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 28, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Georgia, United States
-
🧠 - Skills detailed
#Programming #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Data Analysis #Cloud #Docker #Kubernetes #Snowflake #Java #AWS Glue #Data Lake #SQL (Structured Query Language) #Data Security #Data Governance #Apache Airflow #Compliance #PostgreSQL #Security #Scala #dbt (data build tool) #GCP (Google Cloud Platform) #Redshift #Data Processing #Spark (Apache Spark) #Data Architecture #Data Pipeline #Azure #MongoDB #Informatica #Computer Science #BigQuery #Documentation #Data Science #Data Engineering #Data Quality #Python #Data Modeling #ML (Machine Learning) #Airflow #AWS (Amazon Web Services) #Databases #NoSQL #Kafka (Apache Kafka)
Role description
Overview
We are seeking a skilled Data Engineer to design, build, and optimize data pipelines and architectures that enable analytics, reporting, and data-driven decision-making across the organization. The ideal candidate is passionate about data, has strong engineering fundamentals, and enjoys solving complex data challenges in a fast-paced environment.

Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT data pipelines and integrations across multiple data sources and systems (see the illustrative sketch below).
• Build and optimize data architectures (data lakes, warehouses, and pipelines) to support business intelligence, analytics, and machine learning use cases.
• Collaborate with data analysts, scientists, and business stakeholders to understand data needs and ensure high data quality and availability.
• Implement best practices for data governance, security, and compliance.
• Monitor and troubleshoot data pipelines for performance, reliability, and accuracy.
• Automate workflows and improve the efficiency of data processing and delivery.
• Maintain documentation for data models, data flows, and systems.

Required Qualifications
• Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
• 3+ years of experience in data engineering, data warehousing, or software engineering roles.
• Proficiency in SQL and experience with relational and NoSQL databases (e.g., PostgreSQL, Snowflake, BigQuery, Redshift, MongoDB).
• Strong experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, dbt, Informatica, or AWS Glue).
• Programming skills in Python, Scala, or Java.
• Experience with cloud data platforms (AWS, Azure, GCP).
• Understanding of data modeling, data governance, and data security best practices.

Preferred Qualifications
• Experience with streaming technologies (Kafka, Kinesis, Spark Streaming).
• Familiarity with containerization and orchestration tools (Docker, Kubernetes).
• Exposure to CI/CD practices for data pipelines.
• Experience supporting data science and machine learning workflows.
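For illustration only: the responsibilities above center on ETL/ELT pipelines orchestrated with tools such as Apache Airflow. The sketch below is a minimal Airflow 2.x DAG with placeholder extract, transform, and load steps; the DAG id, schedule, and function bodies are hypothetical assumptions for this example, not part of the role description.

```python
# Minimal, illustrative Airflow 2.x DAG sketch of an ETL pipeline.
# All names (dag_id, task ids, placeholder functions) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system (API, database, files).
    return [{"id": 1, "amount": 42.0}]


def transform(ti):
    # Placeholder: clean/enrich the extracted records pulled from XCom.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "amount_cents": int(row["amount"] * 100)} for row in rows]


def load(ti):
    # Placeholder: write transformed rows to a warehouse table.
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Would load {len(rows)} rows into the warehouse")


with DAG(
    dag_id="example_etl_pipeline",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",      # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Extract, then transform, then load.
    extract_task >> transform_task >> load_task
```

In practice, the same shape applies whether the pipeline is built with Airflow, dbt, or a managed service such as AWS Glue: discrete extract, transform, and load steps with explicit dependencies, scheduled and monitored centrally.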