

Chelsoft Solutions Co.
Data Engineer_W2_CT
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 5+ years of experience, specializing in Databricks, SQL, and Python/PySpark. The contract length is unspecified, with a pay rate of "$XX/hr". Remote work is allowed; the role focuses on cloud data solutions and ETL processes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 8, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Bloomfield, CT
-
🧠 - Skills detailed
#Data Extraction #SQL Queries #JSON (JavaScript Object Notation) #AWS (Amazon Web Services) #GIT #Delta Lake #Deployment #Cloud #Spark (Apache Spark) #Security #SQL (Structured Query Language) #Apache Spark #Data Ingestion #Data Governance #Airflow #Python #Docker #Kubernetes #Data Modeling #Data Quality #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #PySpark #Azure #Kafka (Apache Kafka) #Data Science #Storage #Version Control #GCP (Google Cloud Platform) #Compliance #Scala #Data Processing #Databricks #Automation
Role description
Job Summary
We are seeking a highly skilled and experienced Senior Data Engineer with a strong background in Databricks, SQL, and Python/PySpark to join our data engineering team. The ideal candidate will have a proven track record of designing, building, and deploying scalable data pipelines and solutions in cloud environments. You will be responsible for end-to-end development, from data ingestion to deployment, ensuring high performance and reliability.
Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark.
• Write efficient and optimized SQL queries for data extraction, transformation, and analysis.
• Develop robust data processing scripts and automation using Python and PySpark (see the sketch after this list).
• Implement end-to-end data solutions including ingestion, transformation, storage, and deployment.
• Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
• Optimize data workflows for performance, scalability, and reliability.
• Ensure data quality, integrity, and governance across all stages of the pipeline.
• Monitor and troubleshoot production data pipelines and deployments.
• Document technical designs, processes, and best practices.
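For context on the day-to-day work, the sketch below illustrates the kind of pipeline described in the responsibilities above: ingesting raw JSON, transforming it with PySpark and SQL, and writing the result out for downstream use. All paths, table names, and columns are hypothetical placeholders, and the exact setup (Databricks workspace, Delta Lake, orchestration) will vary by project.

```python
# Minimal sketch of an ingest -> transform -> store pipeline in PySpark.
# Paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# 1. Ingestion: read raw JSON events from cloud storage.
raw = spark.read.json("s3://example-bucket/raw/orders/2025-10-08/")

# 2. Transformation: basic cleansing and enrichment with PySpark.
cleaned = (
    raw
    .dropDuplicates(["order_id"])
    .filter(F.col("order_total") > 0)
    .withColumn("order_date", F.to_date("order_timestamp"))
)

# SQL works equally well for the transformation step.
cleaned.createOrReplaceTempView("orders_cleaned")
daily_summary = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(order_total) AS revenue
    FROM orders_cleaned
    GROUP BY order_date
""")

# 3. Storage/deployment: write results as Parquet (or a Delta table on Databricks).
daily_summary.write.mode("overwrite").parquet("s3://example-bucket/curated/orders_daily/")
```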
Required Qualifications
• 5+ years of professional experience in data engineering or related roles.
• Strong proficiency in Databricks, SQL, and Python/PySpark.
• Experience with end-to-end deployment of data solutions in cloud environments (e.g., Azure, AWS, GCP).
• Solid understanding of ETL/ELT processes, data modeling, and data warehousing concepts.
• Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration tools (e.g., Airflow).
• Experience with structured and unstructured data formats (e.g., Parquet, JSON, CSV).
• Strong problem-solving skills and attention to detail.
• Excellent communication and collaboration skills.
Preferred Qualifications
• Experience with Delta Lake or other Databricks ecosystem tools.
• Knowledge of data governance, security, and compliance standards.
• Familiarity with containerization (Docker) and Kubernetes.
• Exposure to real-time data processing (e.g., Kafka, Spark Streaming); a small streaming sketch follows this list.
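As an illustration of the preferred Delta Lake and streaming experience, here is a minimal Spark Structured Streaming sketch that reads events from a Kafka topic and appends them to a Delta table. Broker addresses, topic names, schema, and paths are hypothetical, and it assumes the Kafka and Delta Lake connectors are available on the cluster (they are bundled on Databricks).

```python
# Minimal sketch: streaming ingestion from Kafka into a Delta Lake table.
# Broker, topic, schema, and paths are hypothetical placeholders; assumes the
# spark-sql-kafka and Delta Lake packages are on the cluster (built in on Databricks).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("order_total", DoubleType()),
])

# Read raw events from Kafka; the value column arrives as bytes.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append the parsed events to a Delta table, tracking progress via a checkpoint.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders_stream")
    .outputMode("append")
    .start("/mnt/delta/orders")
)
```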