Data Engineer - W2 Contract

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a W2 contract, on-site in West Chester, PA, requiring 5+ years of experience and expertise in Python, SQL, PySpark, Databricks, and AWS. The contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 13, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
On-site
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
West Chester, PA
-
🧠 - Skills detailed
#Storage #Data Pipeline #Cloud #Data Storage #Statistics #Data Processing #EC2 #Microsoft Power BI #"ETL (Extract, Transform, Load)" #Data Analysis #BI (Business Intelligence) #Data Accuracy #Python #Big Data #PySpark #Data Engineering #AWS (Amazon Web Services) #Computer Science #Databricks #Data Management #Visualization #Redshift #Datasets #Data Quality #Spark (Apache Spark) #Scala #SQL (Structured Query Language) #Data Governance #S3 (Amazon Simple Storage Service) #Tableau
Role description
HIRING NOW: Data Engineer – West Chester, PA (Onsite Only | No Remote)
Location: West Chester, PA (5 Days/Week Onsite – No Remote)
We're looking for an experienced Data Engineer to join a fast-paced, data-driven team working on customer experience, churn prediction, network performance, and personalization analytics.
Responsibilities
• Data Pipeline Development and Management: Design, construct, install, test, and maintain highly scalable data management systems. Develop and optimize ETL/ELT pipelines using PySpark and Databricks to process large volumes of structured and unstructured data (a minimal pipeline sketch appears after the qualifications below).
• Cloud Infrastructure: Utilize AWS services for data storage, computation, and orchestration, ensuring a reliable and efficient data infrastructure.
• Data Analysis and Insights: Collaborate with business stakeholders to understand customer experience challenges and opportunities. Analyze complex datasets to identify trends, patterns, and insights related to customer behavior, network performance, product usage, and churn.
• Business Use Case Analysis: Apply your analytical skills to various customer experience use cases, including:
• Churn Prediction: Develop models to identify customers at risk of leaving and understand the underlying drivers.
• Network Experience: Analyze network performance data to identify and address areas of poor customer experience.
• Personalization: Enable data-driven personalization of marketing communications, offers, and customer support interactions.
• Billing and Service Inquiries: Analyze inquiry data to identify root causes of customer confusion and drive improvements in billing and service clarity.
• Reporting and Visualization: Create compelling and insightful reports and dashboards using Tableau or Power BI to communicate findings to both technical and non-technical audiences.
• Data Governance and Quality: Ensure data accuracy, completeness, and consistency across all data platforms. Implement data quality checks and best practices.
• Collaboration and Mentorship: Work closely with cross-functional teams, including product, marketing, and engineering, to deliver data-driven solutions. Mentor junior team members and promote a culture of data-driven decision-making.
Qualifications
• Education: Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related quantitative field.
• Experience: 5+ years of experience in a data engineering or data analyst role, with a proven track record of working with large-scale data ecosystems.
• Technical Skills:
• Expert-level proficiency in Python and SQL.
• Hands-on experience with PySpark for big data processing.
• In-depth knowledge of the Databricks platform.
• Strong experience with AWS cloud services (e.g., S3, EC2, Redshift, EMR).
• Demonstrated expertise in data visualization and reporting with Tableau or Power BI.
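To give a concrete sense of the day-to-day work described in the responsibilities above, here is a minimal PySpark sketch of the kind of ETL pipeline the role involves. It is illustrative only: the bucket paths, column names, and table layout are hypothetical assumptions, not details from the job description, and Delta output is simply the common Databricks default.

```python
# Minimal PySpark ETL sketch (illustrative; paths, schema, and formats are assumptions).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-experience-etl").getOrCreate()

# Extract: raw customer interaction events landed in S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-bucket/raw/customer_events/")

# Transform: basic cleansing plus a simple data quality filter.
clean = (
    raw
    .filter(F.col("customer_id").isNotNull())         # drop records missing the key
    .withColumn("event_date", F.to_date("event_ts"))  # normalize timestamp to a date
    .dropDuplicates(["customer_id", "event_ts"])       # remove duplicate events
)

# Simple quality metric: share of rows dropped by the cleansing step.
dropped_ratio = 1 - clean.count() / max(raw.count(), 1)
print(f"Rows dropped by quality filters: {dropped_ratio:.2%}")

# Load: write a curated table for downstream churn / personalization analytics.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("s3://example-bucket/curated/customer_events/"))
```

In practice a pipeline like this would typically be scheduled as a Databricks job, with the curated table feeding Tableau or Power BI dashboards and the churn-prediction work mentioned above.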