

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract running through the end of the year, at an unspecified pay rate. Candidates must be located in San Antonio, TX; VA; DC; or NY, and possess 3+ years of PySpark experience, along with expertise in Azure Data Factory and Azure Data Lake.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 30, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #ADF (Azure Data Factory) #PySpark #Azure #ADLS (Azure Data Lake Storage) #ETL (Extract, Transform, Load) #Storage #Data Storage #Spark (Apache Spark) #Data Lake #Data Processing #Scala #Azure Data Factory #Data Engineering
Role description
Contract through the end of the year with a chance for renewal
Notes
Candidates must be located in San Antonio, TX; VA; DC; or NY. The role is onsite for the first week and remote thereafter. The contract runs through the end of the year.
Data Engineer
We are seeking a proficient Data Engineer to develop scalable data solutions that directly address complex business requirements. The role demands deep hands-on expertise in PySpark for data processing (ideally 3+ years), Azure Data Factory (ADF) for orchestration, and Azure Data Lake Storage (ADLS) for optimized data storage.
PySpark, Spark Architecture, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS) Gen2, Data Engineering, Business Requirements Analysis, ETL/ELT Pipelines.
Best Regards,
David Roy | Accounts Manager - US Staffing | Charter Global Inc. | https://www.charterglobal.com
LinkedIn