

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract, paying $50-55/hour. Key skills include AWS, Python, Spark, and API development. Requires 6+ years in data engineering, preferably in retail or logistics, and expertise in cloud services.
Country
United States
Currency
$ USD
Day rate
440
Date discovered
July 15, 2025
Project duration
More than 6 months
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Irving, TX
Skills detailed
#PCI (Payment Card Industry) #NoSQL #IAM (Identity and Access Management) #Prometheus #Monitoring #Apache Spark #Security #Apache Airflow #REST (Representational State Transfer) #Scala #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Airflow #SQL (Structured Query Language) #API (Application Programming Interface) #Data Modeling #S3 (Amazon Simple Storage Service) #Azure cloud #Azure #Databases #Data Engineering #Deployment #Docker #Programming #Observability #DynamoDB #Spark (Apache Spark) #Automation #Batch #IoT (Internet of Things) #Oracle #Computer Science #Grafana #Compliance #Data Processing #Data Governance #Big Data #Python #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Datadog #Data Pipeline #Terraform #Databricks #GitHub #MongoDB #Cloud #GCP (Google Cloud Platform) #Data Lake
Role description
Job Description
Our client is currently looking for a Sr. Data Engineer to join their RIS2.0 team and work with their Engineering, Product, Support, and Customer Success teams, with responsibility for keeping their platform and services working at full steam. For this role, you'll need a good understanding of how each team works and how they interact with one another.
Responsibilities
• Design and build scalable real-time and batch data pipelines to support store operations, including POS transactions, inventory updates, and device logs (a minimal sketch follows this list).
• Lead integration of store systems (handheld devices, IoT sensors, store APIs) with centralized cloud-based data platforms.
• Develop efficient MongoDB schemas and queries to support transactional and analytical workloads.
• Ensure data reliability, observability, and latency optimization across all processing stages.
• Implement and maintain infrastructure-as-code, CI/CD pipelines, and automated deployment workflows.
• Work collaboratively with cross-functional teams in engineering, product, store operations, and analytics to define data requirements and deliver scalable solutions.
• Establish and enforce data governance, access control, and compliance aligned with internal security policies and industry regulations (e.g., PCI-DSS).
• Mentor junior engineers and contribute to architectural reviews, standards, and technical roadmaps.
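To make the real-time pipeline responsibility concrete, here is a minimal sketch of a Kinesis-triggered Lambda handler that upserts POS transaction events into MongoDB. It is illustrative only: the payload fields (txn_id, store_id, amount, ts), the MONGO_URI environment variable, and the store_ops.pos_transactions collection are assumptions, not details from the posting.

```python
import base64
import json
import os

from pymongo import MongoClient  # assumes pymongo is packaged with the Lambda

# Hypothetical connection details -- the real cluster URI and collection
# names would come from the client's configuration or secrets manager.
_client = MongoClient(os.environ.get("MONGO_URI", "mongodb://localhost:27017"))
_transactions = _client["store_ops"]["pos_transactions"]


def handler(event, context):
    """Lambda handler for a Kinesis-triggered POS transaction stream.

    Each Kinesis record carries a base64-encoded JSON payload; the shape
    shown here (txn_id, store_id, amount, ts) is illustrative only.
    """
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Idempotent upsert keyed on the transaction id, so replayed
        # shards do not create duplicate documents.
        _transactions.update_one(
            {"txn_id": payload["txn_id"]},
            {"$set": {
                "store_id": payload["store_id"],
                "amount": payload["amount"],
                "ts": payload["ts"],
            }},
            upsert=True,
        )

    return {"processed": len(event["Records"])}
```

The upsert keyed on a transaction id is one common way to keep stream replays from duplicating data; the actual keys and schema would follow the client's MongoDB design.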
Key Technologies & Stack
We are looking for candidates with proven expertise in the following technologies and platforms:
• Strong hands-on experience with AWS services, particularly Lambda, Kinesis, Glue, S3, Step Functions, CloudWatch, and IAM, to build and manage scalable, cloud-native data pipelines.
• Proficiency in using Amazon S3 as a central data lake and Apache Spark (via EMR or Glue) for distributed data processing at scale (a minimal PySpark sketch follows this list).
• Advanced programming skills in Python, with the ability to develop robust and reusable ETL components.
• Experience in orchestrating workflows using Apache Airflow or AWS MWAA, as well as event-driven state machines with Step Functions (see the DAG sketch after this list).
• Knowledge of containerization and infrastructure automation using Docker, Terraform, and GitHub Actions as part of CI/CD workflows.
• Strong background in monitoring and observability using tools like CloudWatch, Datadog, or Prometheus/Grafana.
• Experience integrating with external systems and services using RESTful APIs and gRPC protocols.
• Hands-on experience with SQL technologies.
• 4+ years' experience building data workflows and big data systems.
• Must have 2+ years with Azure cloud and Databricks setup.
• Must have 4+ years' experience in Spark-based data pipeline development.
• Must have exposure to API development.
• 4+ years of experience with any relational database (Oracle/Postgres).
• 2+ years of experience with any NoSQL database (Cassandra/MongoDB/DynamoDB).
• 4+ years of experience with any cloud provider (AWS, Azure, GCP).
• Must have experience with messaging technologies such as Kafka or RabbitMQ.
• Solid understanding of cloud security and compliance, with working knowledge of IAM policies, CloudTrail auditing, and encryption standards for data at rest and in transit.
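As a companion to the S3 data lake / Spark bullet above, a minimal PySpark batch aggregation over the data lake might look like the sketch below. The bucket paths, column names, and the daily-sales grain are placeholders for illustration, not the client's actual layout.

```python
from pyspark.sql import SparkSession, functions as F

# Bucket names, prefixes, and column names below are placeholders --
# the real data-lake layout would be defined by the client.
spark = SparkSession.builder.appName("daily-store-sales").getOrCreate()

transactions = spark.read.parquet("s3://example-data-lake/raw/pos_transactions/")

# Roll raw POS transactions up to one row per store per day.
daily_sales = (
    transactions
    .withColumn("sale_date", F.to_date("ts"))
    .groupBy("store_id", "sale_date")
    .agg(
        F.sum("amount").alias("gross_sales"),
        F.count("txn_id").alias("txn_count"),
    )
)

# Write back to the curated zone, partitioned by date for efficient scans.
(
    daily_sales.write
    .mode("overwrite")
    .partitionBy("sale_date")
    .parquet("s3://example-data-lake/curated/daily_store_sales/")
)
```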
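For the Airflow orchestration bullet, a bare-bones DAG (Airflow 2.4+ syntax, with the AWS provider installed) that schedules a Glue job like the hypothetical one above could look as follows; the DAG id, schedule, Glue job name, and region are assumptions, not client specifics.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# "daily-store-sales" is the hypothetical Glue job from the Spark sketch
# above; the schedule and retry policy are illustrative defaults.
with DAG(
    dag_id="daily_store_sales",
    start_date=datetime(2025, 1, 1),
    schedule="0 5 * * *",  # run once a day at 05:00 UTC
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    aggregate_sales = GlueJobOperator(
        task_id="aggregate_daily_sales",
        job_name="daily-store-sales",  # assumed Glue job name
        region_name="us-east-1",       # assumed region
    )
```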
Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 6+ years of experience in data engineering, preferably in the retail or logistics domain.
• Experience designing and operating production-grade data pipelines on AWS.
• Strong understanding of data modeling concepts (document, dimensional, normalized).
• Excellent problem-solving skills and the ability to work in a fast-paced, distributed team.
Rate: $50-55/hour (depending on experience level). This is a contract position; candidates are expected to work 40 hours/week. Contract duration is 6 months with possible extensions. This position currently does not offer any benefits.