

Software/Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Software/Data Engineer on a 6-month contract, offering a pay rate of "$X/hour". Key skills include 4+ years of software development experience, experience with Snowflake or similar platforms, and proficiency in Python, SQL, and AWS.
Country
United States
Currency
$ USD
-
Day rate
464
-
Date discovered
June 3, 2025
Project duration
Unknown
-
Location type
Unknown
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
Atlanta Metropolitan Area
-
Skills detailed
#Spark (Apache Spark) #SQL Queries #AWS (Amazon Web Services) #Kafka (Apache Kafka) #RDS (Amazon Relational Database Service) #S3 (Amazon Simple Storage Service) #Programming #Cloud #DevOps #Amazon Redshift #Redshift #Data Pipeline #ETL (Extract, Transform, Load) #Snowflake #Databases #SQL (Structured Query Language) #Palantir Foundry #Agile #Python #Data Governance #Data Integration #Lambda (AWS Lambda) #Cloudera #Computer Science #Data Engineering #Code Reviews #NoSQL #TypeScript #Spark SQL #C# #PySpark
Role description
No C2C candidates will be accepted for this role.
Primary Responsibilities:
- Design, develop, and maintain efficient data pipelines, data integrations, and backend and frontend applications using enterprise data platforms (e.g., Snowflake, Cloudera/Hive, Palantir Foundry, or Amazon Redshift)
- Ingest, clean, and model structured and unstructured data from a variety of sources
- Build and manage programs for moving, transforming, and loading data using Python, Spark, SQL, C#, etc.
- Implement data governance, quality checks, and validation rules to ensure accuracy and consistency
- Collaborate with cross-functional teams including Agile team members, business SMEs, and external stakeholders to deliver high-quality solutions
- Participate in code reviews, knowledge sharing, and technical discussions within the team
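The ingest/clean/validate/load loop described above can be sketched in plain Python. This is a minimal illustration only: the record shape, table name, and validation rules are hypothetical assumptions, and in the actual role the data would come from platforms such as Snowflake or S3 rather than an in-memory list.

```python
import sqlite3

# Hypothetical raw records standing in for an upstream data source.
raw_records = [
    {"id": 1, "amount": "120.50", "region": " East "},
    {"id": 2, "amount": "not-a-number", "region": "WEST"},
    {"id": 3, "amount": "75.00", "region": "east"},
]

def transform(record):
    """Clean one record; return None if it fails a quality check."""
    try:
        amount = float(record["amount"])  # validation: amount must be numeric
    except ValueError:
        return None
    if amount < 0:                        # validation: no negative amounts
        return None
    # Normalization: trim and lowercase the region code.
    return (record["id"], amount, record["region"].strip().lower())

def run_pipeline(records, conn):
    """Load the cleaned records; return (loaded, rejected) counts."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)"
    )
    clean = [t for t in (transform(r) for r in records) if t is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
    conn.commit()
    return len(clean), len(records) - len(clean)

conn = sqlite3.connect(":memory:")
loaded, rejected = run_pipeline(raw_records, conn)
print(loaded, rejected)  # 2 rows loaded, 1 rejected by the quality check
```

Keeping the transform step as a pure function makes each quality rule easy to unit-test in isolation, which matters for the data-governance and code-review duties listed above.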
Requirements:
- Bachelor's degree in Computer Science or a related technical field
- 4+ years of experience as a Software Developer or similar role
- 3+ years of experience building data solutions at scale using one of the enterprise data platforms: Palantir Foundry, Snowflake, Cloudera/Hive, or Amazon Redshift
- 3+ years of experience with SQL and NoSQL databases (Snowflake or Hive)
- 3+ years of hands-on programming experience with Python, C#, and PySpark
- Experience writing complex SQL queries, optimizations, UDFs, views, and indexes
- Experience with the AWS cloud, including Spark, Glue, Kafka, Lambda, S3, RDS, and others
- Experience with DevOps principles and CI/CD
- Strong understanding of ETL principles and data integration patterns
- Experience with Agile and iterative development processes is a plus
- Knowledge of TypeScript and full-stack development experience is a plus (not mandatory)
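To illustrate the SQL depth the requirements ask for (complex queries, views, and indexes), here is a small self-contained sketch using Python's built-in sqlite3. The schema, data, and names are invented for illustration; the same DDL/query patterns carry over to Snowflake, Hive, or Redshift dialects with minor syntax changes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer TEXT,
        total REAL,
        placed_at TEXT
    );
    -- Index to speed up filters and aggregations keyed by customer.
    CREATE INDEX idx_orders_customer ON orders(customer);
    -- View encapsulating a reusable per-customer aggregation.
    CREATE VIEW customer_totals AS
        SELECT customer,
               COUNT(*)   AS n_orders,
               SUM(total) AS revenue
        FROM orders
        GROUP BY customer;
""")
conn.executemany(
    "INSERT INTO orders (customer, total, placed_at) VALUES (?, ?, ?)",
    [
        ("acme", 100.0, "2025-06-01"),
        ("acme", 50.0, "2025-06-02"),
        ("globex", 200.0, "2025-06-01"),
    ],
)

# Query the view: customers ranked by revenue, highest first.
rows = conn.execute(
    "SELECT customer, n_orders, revenue FROM customer_totals "
    "ORDER BY revenue DESC"
).fetchall()
print(rows)  # [('globex', 1, 200.0), ('acme', 2, 150.0)]
```

Defining the aggregation once as a view keeps downstream queries short and consistent, and the index covers the GROUP BY key so the rollup stays cheap as the table grows.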