

LanceSoft, Inc.
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer based in Wilmington, DE / Plano, TX for 6 months, offering a competitive pay rate. Key skills include Java, Apache Spark, AWS services, ETL/ELT processes, and strong problem-solving abilities.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
January 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Plano, TX
-
🧠 - Skills detailed
#Programming #Big Data #SQL (Structured Query Language) #Cloud #NoSQL #Redshift #AWS (Amazon Web Services) #Lambda (AWS Lambda) #Data Modeling #Snowflake #Spark (Apache Spark) #Data Security #Deployment #Strategy #Data Architecture #Data Science #Monitoring #Data Quality #Scala #Data Engineering #S3 (Amazon Simple Storage Service) #Security #Data Governance #Documentation #Data Lake #Java #Spark SQL #Storage #Data Storage #Data Pipeline #DynamoDB #Databases #Data Warehouse #Apache Spark
Role description
Job Title: Data Engineer
Location: Wilmington, DE / Plano, TX - onsite
Duration: 6 months
Role Description:
Key Responsibilities
Data Pipeline Development: Design, develop, and implement efficient and scalable ETL/ELT processes using Apache Spark with Java, integrating data from diverse sources into data lakes (e.g., S3) and data warehouses (e.g., Redshift).
AWS Service Utilization: Leverage a range of AWS services for data storage, processing, and analytics, including but not limited to S3, Redshift, Glue, EMR, Lambda, Kinesis, and DynamoDB.
Performance Optimization: Optimize Spark applications and data pipelines for performance, cost-efficiency, and reliability, including tuning Spark configurations and utilizing appropriate AWS resources.
Data Modeling and Architecture: Design and implement data models for structured and unstructured data, and contribute to the overall data architecture strategy within an AWS environment.
Collaboration and Communication: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
Monitoring and Troubleshooting: Implement monitoring solutions for data pipelines and infrastructure, and troubleshoot issues to ensure data quality and system stability.
Security and Governance: Implement data security measures and adhere to data governance policies within the AWS ecosystem.
Documentation: Create and maintain comprehensive documentation for data engineering processes, designs, and deployments.
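To give a concrete feel for the extract-transform-load work described above, here is a minimal, dependency-free Java sketch. All names in it (EtlSketch, transform, runPipeline, the sample records) are hypothetical illustrations, not part of the role; a production pipeline would instead use Apache Spark's Java API (SparkSession, Dataset) reading from S3 and writing to Redshift.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of an ETL flow using only the Java standard library,
// so the shape of the extract -> transform -> load logic is visible without
// a Spark cluster. In a real job each step would be a distributed Spark
// operation against S3/Redshift rather than an in-memory list.
public class EtlSketch {

    // "Transform": normalize one raw CSV record ("name, age") into a clean form.
    public static String transform(String rawRecord) {
        String[] fields = rawRecord.split(",");
        String name = fields[0].trim().toUpperCase();
        String age = fields[1].trim();
        return name + "," + age;
    }

    // "Extract" is simulated by the input list; "load" by the returned list,
    // where a real pipeline would write Parquet to a data lake or COPY into
    // a warehouse table.
    public static List<String> runPipeline(List<String> extracted) {
        return extracted.stream()
                .filter(r -> !r.isBlank())     // drop empty rows before transforming
                .map(EtlSketch::transform)     // clean each surviving record
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> raw = List.of("alice, 34", "bob,29", "");
        System.out.println(runPipeline(raw)); // prints [ALICE,34, BOB,29]
    }
}
```

The same filter/map pipeline shape carries over almost directly to Spark, where `filter` and `map` become transformations on a distributed Dataset instead of a Java Stream.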
Required Skills and Qualifications
Programming Expertise: Strong proficiency in Java, with experience developing Spark applications.
Big Data Technologies: In-depth knowledge and hands-on experience with Apache Spark, including Spark SQL, DataFrames, and RDDs.
Cloud Computing: Extensive experience with AWS services, particularly those related to data engineering (S3, Redshift, Glue, EMR, Kinesis, Lambda).
Data Warehousing and Lakes: Experience with data warehousing concepts and technologies (e.g., Redshift, Snowflake), and with building data lakes on S3.
Databases: Proficiency in SQL and experience with both relational and NoSQL databases.
ETL/ELT: Strong understanding and practical experience in designing and implementing ETL/ELT processes.
Problem-Solving: Excellent analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
Communication: Strong communication and collaboration skills to work effectively within a team and with various stakeholders.






