BEPC Inc. - Business Excellence Professional Consulting

Sr. Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer on a 6-month W2 contract with possible extension, paying $50.00 – $55.00/hour, 100% remote. Requires 3+ years in data engineering, proficiency in SQL and Python, and experience with ETL processes and cloud platforms (AWS, Azure, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
440
🗓️ - Date
March 20, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Raritan, NJ
🧠 - Skills detailed
#Computer Science #Data Processing #Data Governance #Scala #AWS (Amazon Web Services) #Spark (Apache Spark) #Datasets #Cloud #Data Quality #Database Design #ML (Machine Learning) #SQL (Structured Query Language) #Data Pipeline #Security #Database Systems #Compliance #ETL (Extract, Transform, Load) #Azure #Data Science #Data Engineering #Databases #GCP (Google Cloud Platform) #Python #Big Data #Data Architecture #Databricks #Apache Spark
Role description
Job Title: Data Engineer
Location: 100% Remote
Employment Type: W2 Contract, 6-month contract with possibility of extension
Pay Rate: $50.00 – $55.00/hour

Role Overview:
BEPC is seeking a Data Engineer to support our client by designing, building, and optimizing scalable data pipelines and architectures. This role is ideal for a technically strong professional who thrives in a collaborative environment and enjoys working with large datasets, cloud platforms, and modern data technologies to drive business insights.

Key Responsibilities:
• Design, develop, and maintain ETL pipelines for large-scale structured and unstructured data.
• Build and optimize data architectures, models, and database systems for performance and scalability.
• Develop data solutions using cloud platforms (AWS, Azure, or GCP).
• Collaborate with cross-functional teams to translate business needs into technical solutions.
• Ensure data quality, integrity, and security, especially with sensitive datasets.
• Integrate data from multiple sources, including databases, APIs, and flat files.
• Support analytics and machine learning initiatives with clean, reliable datasets.
• Troubleshoot and resolve data pipeline and performance issues.
• Document systems, workflows, and processes for maintainability and knowledge sharing.

Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• 3+ years of experience in data engineering or similar roles.
• Strong experience with ETL processes and data pipeline development.
• Proficiency in SQL and Python.
• Experience with Databricks, Apache Spark, or similar big data tools.
• Hands-on experience with cloud platforms (AWS, Azure, or GCP).
• Strong understanding of database design and optimization.
• Experience working with large-scale and distributed data systems.
• Advanced English communication skills.

Preferred Qualifications:
• Experience with real-time data processing or streaming technologies.
• Familiarity with industrial data systems (e.g., PLCs, LabVIEW).
• Exposure to machine learning workflows or data science collaboration.
• Knowledge of data governance and compliance standards.