Intone Networks Inc

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Key skills include Python, PySpark, Spark, AWS services, and experience in cloud-based environments. Industry experience in data engineering is required.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date
November 14, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
McLean, VA
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Databricks #Data Quality #Redshift #Athena #Data Lifecycle #Cloud #Jenkins #Airflow #Spark (Apache Spark) #Data Lake #Data Ingestion #SQL (Structured Query Language) #Process Automation #Storage #Version Control #Terraform #Agile #PySpark #Data Engineering #Automation #ETL (Extract, Transform, Load) #Python #AWS Glue #Lambda (AWS Lambda) #DevOps #Git #AWS (Amazon Web Services) #Scala
Role description
Key Responsibilities:
• Design, build, and optimize ETL pipelines using Python, PySpark, and Spark
• Develop scalable data solutions leveraging Databricks, AWS Glue, EMR, and S3
• Collaborate with cross-functional engineering and analytics teams to implement best practices in data ingestion, transformation, and storage
• Support data quality, performance tuning, and process automation across the data lifecycle
• Work in Agile environments with CI/CD and version control tools

Required Skills and Experience:
• 3 to 7+ years of experience in data engineering, preferably in cloud-based environments
• Strong proficiency in Python, PySpark, Spark, and SQL
• Hands-on experience with AWS data services (S3, Glue, EMR, Redshift, Lambda, Athena)
• Experience with Databricks or equivalent data lake platforms
• Familiarity with modern DevOps practices (Git, Jenkins, Terraform, Airflow, etc.)