PARSETEK INC

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract of unspecified length, offering a day rate of $440 USD. The work location is listed as remote, with the role based in Virginia, United States. Key skills include advanced SQL, Python, ETL/ELT, and experience with Oracle, Postgres, and big data technologies. The ability to obtain a public trust clearance is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
440
🗓️ - Date
April 2, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Virginia, United States
🧠 - Skills detailed
#Data Lake #Kubernetes #Big Data #Indexing #Cloud #ETL (Extract, Transform, Load) #Data Engineering #Database Schema #Data Pipeline #Liquibase #Kafka (Apache Kafka) #Databases #Snowflake #Terraform #Oracle #Apache Spark #Python #Data Quality #Programming #SQL (Structured Query Language) #Database Administration #Data Lineage #Pytest #Airflow #AI (Artificial Intelligence) #Databricks #SQL Queries #DevOps
Role description
Key Qualifications:
• Ability to independently manage Oracle and PostgreSQL databases by designing efficient schemas, optimizing complex SQL queries, and automating ETL/ELT pipelines
• Experience with performance tuning (indexing, vacuuming), implementing data quality checks, managing backups/restores, and using psql for database administration
• Past experience successfully developing data pipelines: independently building robust, efficient pipelines, managing schema evolution, handling late-arriving data, and implementing backfill strategies (see the sketches after this list)
• Demonstrated ability to design efficient table structures, implement star/snowflake schemas, and use data lake technologies
• Familiarity with database schema change management tools such as Liquibase
• Technical Proficiency: advanced SQL skills, strong Python programming, and experience with big data technologies such as Apache Spark and Kafka, with Airflow for orchestration
• Cloud Infrastructure & DevOps: using cloud services (e.g., Databricks), infrastructure as code with Terraform, and container orchestration with Kubernetes
• Data Quality & Governance: implementing data quality checks, testing with Pytest, and managing data lineage to ensure reliability (see the test sketch below)
• Collaboration: working with stakeholders to translate business requirements into technical solutions and documenting data processes
• Data Maintenance: demonstrated ability to create and follow low-risk processes for business-requested data maintenance activities
• Business Context: understands how data drives business processes; able to collaborate with business stakeholders to design and/or modify data processes and procedures to meet operational business needs
• Experience using AI-assisted development tools
• Ability to obtain a public trust clearance
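To make the late-arriving-data and backfill requirement concrete, here is a minimal sketch of an idempotent upsert against Postgres. The table name fact_orders, the column list, and the connection string are hypothetical placeholders, not part of the listing; the pattern (INSERT ... ON CONFLICT) is standard PostgreSQL and makes re-running a backfill safe.

```python
# Idempotent upsert for late-arriving rows, sketched against Postgres.
# Table, columns, and DSN are illustrative only.
import psycopg2

UPSERT = """
INSERT INTO fact_orders (order_id, amount, created_at)
VALUES (%(order_id)s, %(amount)s, %(created_at)s)
ON CONFLICT (order_id)
DO UPDATE SET amount = EXCLUDED.amount,
              created_at = EXCLUDED.created_at;
"""

def load_rows(rows):
    """Load a batch of dicts; safe to re-run on the same batch."""
    with psycopg2.connect("dbname=warehouse") as conn:  # commits on exit
        with conn.cursor() as cur:
            cur.executemany(UPSERT, rows)
```

Because the statement updates rather than duplicates on conflict, replaying a day's data during a backfill leaves the table in the same state as a clean first run.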
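For the Airflow orchestration and backfill point, a minimal DAG sketch is below. It assumes Airflow 2.4+ (where the parameter is named schedule rather than schedule_interval); the DAG id and task callables are hypothetical. Setting catchup=True is what lets Airflow backfill missed intervals automatically.

```python
# Minimal Airflow 2.4+ DAG sketch; dag_id and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(ds: str, **_):
    # `ds` is the logical date Airflow injects; a backfill simply
    # re-invokes this task with historical dates.
    print(f"extracting orders for {ds}")

def load_orders(ds: str, **_):
    print(f"loading orders for {ds}")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=True,  # run all missed intervals since start_date
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)
    extract >> load
```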
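And for the data quality testing with Pytest, a self-contained sketch of the kind of checks the listing describes. The orders fixture is an in-memory stand-in; a real suite would read from Postgres/Oracle or the data lake instead.

```python
# Example data-quality checks with pytest and pandas; the dataset is a
# hypothetical stand-in for a real extract.
import pandas as pd
import pytest

@pytest.fixture
def orders() -> pd.DataFrame:
    return pd.DataFrame(
        {
            "order_id": [1, 2, 3],
            "amount": [19.99, 5.00, 42.50],
            "created_at": pd.to_datetime(["2026-04-01", "2026-04-01", "2026-04-02"]),
        }
    )

def test_primary_key_is_unique(orders):
    assert orders["order_id"].is_unique

def test_no_null_amounts(orders):
    assert orders["amount"].notna().all()

def test_amounts_are_positive(orders):
    assert (orders["amount"] > 0).all()
```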