

Data Scientist
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist in Irving, TX, for 6+ months at a competitive pay rate. It requires 6+ years of data engineering experience in retail/logistics, experience with AWS data pipelines, strong Python skills, and 2+ years with Azure and Databricks.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 15, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Irving, TX
Skills detailed: #NoSQL #IAM (Identity and Access Management) #Prometheus #Monitoring #Apache Spark #Security #Apache Airflow #REST (Representational State Transfer) #Scala #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Airflow #SQL (Structured Query Language) #API (Application Programming Interface) #Data Modeling #S3 (Amazon Simple Storage Service) #Azure cloud #Azure #Databases #Data Engineering #Docker #Programming #Observability #DynamoDB #Spark (Apache Spark) #Data Science #Automation #Oracle #Computer Science #Grafana #Compliance #Data Processing #Big Data #Python #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Datadog #Data Pipeline #Terraform #Databricks #GitHub #MongoDB #Cloud #GCP (Google Cloud Platform) #Data Lake
Role description
Note: Only W2 candidates local to Dallas, TX will be considered.
Title: Data Scientist
Location: Irving, TX (Onsite)
Duration: 6+ Months (Extendable)
Qualifications:
• Bachelor's or master's degree in Computer Science, Engineering, or a related field.
• 6+ years of experience in data engineering, preferably in the retail or logistics domain.
• Experience designing and operating production-grade data pipelines on AWS.
• Strong understanding of data modeling concepts (document, dimensional, normalized).
• Excellent problem-solving skills and the ability to work in a fast-paced, distributed team.
Key Technologies & Stack
• Strong hands-on experience with AWS services, particularly Lambda, Kinesis, Glue, S3, Step Functions, CloudWatch, and IAM, to build and manage scalable, cloud-native data pipelines (a minimal Lambda ETL sketch follows this list).
• Proficiency in using Amazon S3 as a central data lake and Apache Spark (via EMR or Glue) for distributed data processing at scale.
• Advanced programming skills in Python, with the ability to develop robust and reusable ETL components.
• Experience orchestrating workflows using Apache Airflow or AWS MWAA, as well as event-driven state machines with Step Functions (see the Airflow sketch below).
• Knowledge of containerization and infrastructure automation using Docker, Terraform, and GitHub Actions as part of CI/CD workflows.
• Strong background in monitoring and observability using tools like CloudWatch, Datadog, or Prometheus/Grafana.
• Experience integrating with external systems and services using RESTful APIs and gRPC protocols (see the REST client sketch below).
• Solid understanding of cloud security and compliance, with working knowledge of IAM policies, CloudTrail auditing, and encryption standards for data at rest and in transit.
• Hands-on experience with SQL technologies.
• 4+ years' experience building data workflows and big data systems.
• Must have 2+ years of experience with Azure cloud and Databricks setup.
• Must have 4+ years' experience in Spark-based data pipeline development (see the Spark sketch below).
• Must have exposure to API development.
• 4+ years of experience with a relational database (Oracle/Postgres).
• 2+ years of experience with a NoSQL database (Cassandra/MongoDB/DynamoDB).
• 4+ years of experience with a major cloud platform (AWS, Azure, GCP).
• Must have experience with messaging technologies such as Kafka or RabbitMQ (see the Kafka sketch below).
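To make the "reusable ETL components on AWS" requirement concrete, here is a minimal sketch of a Lambda handler triggered by an S3 upload. The bucket prefixes (landing/, curated/) and the status-filter rule are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: an S3-triggered AWS Lambda ETL step.
# Prefixes and the filter rule are hypothetical placeholders.
import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # An S3 PutObject event carries the bucket name and object key.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Extract: read the raw CSV that landed in the bucket.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.DictReader(io.StringIO(body)))

    # Transform: keep only completed orders (hypothetical business rule).
    cleaned = [row for row in rows if row.get("status") == "completed"]
    if not cleaned:
        return {"rows_in": len(rows), "rows_out": 0}

    # Load: write the cleaned file to a curated prefix in the data lake.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(cleaned[0].keys()))
    writer.writeheader()
    writer.writerows(cleaned)
    s3.put_object(
        Bucket=bucket,
        Key=key.replace("landing/", "curated/"),
        Body=out.getvalue().encode("utf-8"),
    )
    return {"rows_in": len(rows), "rows_out": len(cleaned)}
```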
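For the Airflow orchestration requirement, a minimal DAG might wire extract, transform, and load tasks in sequence. The DAG id, schedule, and task bodies below are invented for illustration, and the `schedule` argument assumes Airflow 2.4+.

```python
# Minimal sketch: a three-step daily pipeline in Apache Airflow.
# DAG id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the result to the data lake")

with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # `schedule` assumes Airflow 2.4+
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```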
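For REST integration, a typical pattern is a thin client around `requests` with timeouts and explicit error handling; the endpoint URL and response shape here are hypothetical.

```python
# Minimal sketch: pulling records from an external REST API.
# The endpoint URL and response shape are hypothetical placeholders.
import requests

def fetch_completed_orders():
    resp = requests.get(
        "https://api.example.com/v1/orders",  # hypothetical endpoint
        params={"status": "completed"},
        timeout=10,                           # always bound external calls
    )
    resp.raise_for_status()                   # surface HTTP errors early
    return resp.json()

if __name__ == "__main__":
    orders = fetch_completed_orders()
    print(f"fetched {len(orders)} orders")
```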
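For the Spark pipeline requirement, a batch job on EMR or Glue commonly reads from S3, aggregates, and writes back; the S3 paths and column names below are hypothetical placeholders.

```python
# Minimal sketch: a PySpark batch aggregation, as could run on EMR or Glue.
# S3 paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_revenue").getOrCreate()

# Read the raw orders from the data lake's landing zone.
orders = spark.read.parquet("s3://example-lake/landing/orders/")

# Aggregate completed orders into daily revenue figures.
daily_revenue = (
    orders.filter(F.col("status") == "completed")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Write the curated result back to S3, overwriting each run.
daily_revenue.write.mode("overwrite").parquet(
    "s3://example-lake/curated/daily_revenue/"
)
```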
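Finally, for the messaging requirement, publishing a pipeline event to Kafka with the kafka-python client looks roughly like this; the broker address, topic name, and event shape are placeholders.

```python
# Minimal sketch: publishing a pipeline event to Kafka via kafka-python.
# Broker address, topic, and event shape are hypothetical placeholders.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Emit one event per processed record so downstream consumers can react.
producer.send("order-events", value={"order_id": 123, "status": "completed"})
producer.flush()  # block until the event is actually delivered
```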