

Euclid Innovations
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Cloud Data Engineer in Charlotte, NC, on a 12-month contract with a financial client, paying a "competitive rate." It requires 5+ years in Python, SQL, and Databricks PySpark, plus cloud certification and enterprise data warehousing experience.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
October 10, 2025
Duration
More than 6 months
Location
On-site
Contract
Unknown
Security
Unknown
Location detailed
Charlotte, NC
Skills detailed
#Data Warehouse #Spark (Apache Spark) #Spark SQL #Redshift #Scala #Python #Code Reviews #Data Management #Snowflake #Migration #Cloud #ETL (Extract, Transform, Load) #Synapse #Data Architecture #Complex Queries #Databricks #PySpark #Business Analysis #Airflow #SQL Server #Data Ingestion #Data Engineering #Data Pipeline #Data Processing #Data Integration #Datasets #Delta Lake #EDW (Enterprise Data Warehouse) #Security #Computer Science #SQL (Structured Query Language) #Data Modeling #BigQuery
Role description
Cloud Data Engineer
Charlotte, NC (3 days/week)
Financial Client
12 Months
Job Description:
Seeking a Cloud Data Engineer skilled in Python, PySpark, SQL, and enterprise-level data warehousing, who can collaborate effectively with business users and bring business-analysis expertise.
Major Responsibilities:
• Build and optimize data pipelines, ensuring data quality and integrity.
• Design, develop, and deploy Spark programs in Databricks for large-scale data processing (see the sketch after this list).
• Work with Delta Lake, data warehouses (DWH), data integration, cloud platforms, and data modeling.
• Develop programs in Python and SQL; write complex queries and stored procedures.
• Implement dimensional data modeling and work with event-based/streaming data ingestion.
• Handle structured, semi-structured, and unstructured data.
• Optimize and troubleshoot Databricks jobs for performance and scalability.
• Apply best practices for data management, security, and governance in Databricks.
• Design and develop enterprise data warehouse solutions.
• Conduct code reviews to ensure standards and efficiency.
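To give a concrete sense of the Databricks PySpark work described above, here is a minimal sketch of a pipeline that ingests raw data, applies basic quality checks, and appends to a Delta Lake table. The paths, column names, and table name are hypothetical placeholders, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` already exists;
# getOrCreate() simply returns it.
spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest semi-structured JSON; a production job would pin an explicit schema.
raw = spark.read.json("/mnt/raw/orders/")  # hypothetical path

# Basic quality/integrity checks: drop rows missing the key, deduplicate,
# and stamp each row with its ingestion time.
clean = (
    raw.dropna(subset=["order_id"])
       .dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append to a Delta table, partitioned for downstream query performance.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")  # hypothetical column
      .saveAsTable("analytics.orders_clean"))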
Skills:
• 5+ years of Python coding and SQL Server development with large datasets.
• 5+ years of experience building and deploying ETL pipelines with Databricks PySpark.
• Experience with cloud data warehouses (Synapse, BigQuery, Redshift, Snowflake).
• Strong background in OLTP, OLAP, dimensional modeling, and data warehousing.
• Experience leading enterprise-wide Cloud Data Platform migrations; strong architecture/design skills.
• Familiarity with cloud data architectures, messaging, and analytics.
• Cloud certification(s) required.
• Airflow experience a plus (see the sketch after this list).
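Since Airflow is called out as a plus, here is a minimal sketch of how a daily ETL run might be scheduled with an Airflow 2.x DAG. The DAG id, schedule, and task body are illustrative assumptions, not requirements from this posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl():
    # Placeholder task body; in practice this step might trigger a
    # Databricks job run rather than doing the work inline.
    print("running ETL step")


with DAG(
    dag_id="daily_orders_etl",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",           # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)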
Education:
Minimum BA in engineering/computer science; Master's preferred.