TechTriad

Senior AWS Data Engineer - Local to DMV Area

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AWS Data Engineer in Washington, DC or New York, on an 8+ month contract with 3 days per week onsite. Key skills include AWS, Databricks, Python, and experience with data processing and ML workflows. A degree and 8-10+ years of experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 28, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid (3 days onsite per week)
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Washington, DC
-
🧠 - Skills detailed
#Programming #Java #TensorFlow #Lambda (AWS Lambda) #SQL (Structured Query Language) #PySpark #Monitoring #SageMaker #Libraries #Deployment #Scala #S3 (Amazon Simple Storage Service) #MLflow #Data Processing #Spark (Apache Spark) #Computer Science #Batch #Model Deployment #Data Engineering #Data Quality #Distributed Computing #Python #ML (Machine Learning) #AWS (Amazon Web Services) #Delta Lake #Databricks #PyTorch
Role description
Role: Senior AWS Data Engineer
Location: Washington, DC 20006 or New York, NY 10002 (3 days onsite)
Duration: 8+ months with possible extension

What You’ll Need
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 8-10+ years of experience in data engineering or distributed data systems development.
• Deep expertise with Databricks (PySpark, Delta Lake, SQL) and strong experience with AWS (S3, Glue, EMR, Kinesis, Lambda).
• Experience designing and building feature stores (Databricks Feature Store, Feast, or similar).
• Proven ability to profile and optimize data processing code, including Spark tuning, partitioning strategies, and efficient data I/O (illustrated in the sketch after this list).
• Strong programming skills in Python (preferred) or Scala/Java, with an emphasis on writing performant, production-ready code.
• Experience with batch and streaming pipelines, real-time data processing, and large-scale distributed computing.
• Familiarity with ML model deployment and monitoring workflows (MLflow, SageMaker, or custom frameworks).
• Familiarity with ML model development using libraries such as scikit-learn, TensorFlow, or PyTorch.
• Working knowledge of data quality frameworks, CI/CD, and infrastructure-as-code.
• Excellent problem-solving and communication skills; able to collaborate across technical and product domains.
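As a minimal sketch of the Spark tuning, partitioning, and data I/O work the requirements describe (not part of the posting: the S3 paths, table layout, and column names below are hypothetical, and a Databricks-style environment with Delta Lake is assumed):

```python
# Minimal PySpark sketch of the tuning and partitioning work the posting
# describes. All paths, tables, and columns here are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("events-daily-rollup")
    # Size shuffle parallelism to the cluster instead of the 200 default.
    .config("spark.sql.shuffle.partitions", "64")
    # Let adaptive query execution coalesce small shuffle partitions.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

# Hypothetical input: raw events stored as a Delta table on S3.
events = spark.read.format("delta").load("s3://example-bucket/raw/events")

daily = (
    events
    # Prune rows and columns before the shuffle to cut I/O and memory.
    .filter(F.col("event_date") >= "2025-01-01")
    .select("event_date", "user_id", "amount")
    .groupBy("event_date", "user_id")
    .agg(F.sum("amount").alias("total_amount"))
)

(
    daily
    # Partition the Delta output by date so downstream reads can prune files.
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-bucket/curated/daily_totals")
)
```

Pruning early keeps shuffle volume down, and date-partitioned Delta output lets downstream jobs skip files they do not need; the same pattern scales from this toy rollup to the batch pipelines the role covers.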