BCforward

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for a 12+ month remote contract, offering a competitive pay rate. Key skills required include SQL, Python, GCP, Azure, and data pipeline development. A Bachelor's degree in a related field or equivalent experience is needed.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
πŸ—“οΈ - Date
April 10, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Accuracy #Microsoft Azure #SQL Queries #Python #Cloud #Logging #Azure #Data Ingestion #Agile #GCP (Google Cloud Platform) #Metadata #Data Pipeline #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Data Engineering #Scala #Data Modeling #Data Quality #Dataflow #Data Exploration #Data Catalog #Data Management #Data Mining #Computer Science #Monitoring #Storage #Data Architecture #BigQuery #Terraform #Data Analysis #Data Warehouse #Databases #Datasets #Data Science #Clustering #SQL (Structured Query Language) #Batch
Role description
BCforward is currently seeking a highly motivated Data Engineer (Remote).

Job Title: Data Engineer
Location: Remote
Duration: Contract – 12+ Months

Job Description
We are seeking a skilled Data Engineer to join our team. The ideal candidate will have strong experience in SQL, Python, cloud data platforms (GCP and Azure), and data pipeline development. This role requires the ability to design, build, and optimize scalable data architectures and workflows that support business needs.

Key Responsibilities
• Assemble large, complex datasets that meet both functional and non-functional requirements.
• Build and maintain batch and near real-time data pipelines for ingestion, cleansing, transformation, and curation of structured and unstructured data.
• Design, expand, and optimize data architectures and data flows across cross-functional teams.
• Write, troubleshoot, and optimize SQL queries for data mining, analysis, and data product development.
• Perform complex data analysis to interpret results and provide actionable recommendations.
• Apply data modeling techniques and optimize performance in cloud data warehouses, including partitioning and clustering.
• Support ETL and ELT processes, including validation and testing to ensure data accuracy and quality.
• Implement processes for data transformation, metadata management, dependency tracking, and workload management.
• Develop and maintain monitoring, logging, and alerting systems for data pipelines and platforms.
• Use Infrastructure as Code tools such as Terraform for cloud provisioning and configuration.
• Configure and support cloud services for data ingestion, integration, messaging, CI/CD, and processing across GCP and Azure platforms, including Microsoft Fabric.
• Support the setup and optimization of operational databases or data-serving layers based on business use cases.
• Collaborate with engineers, analysts, and product teams in an agile, product-focused environment.
• Conduct data quality reviews and drive continuous improvement initiatives.

Required Skills & Qualifications
• Strong proficiency in SQL and Python for data transformation and analysis.
• Hands-on experience with Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Cloud Storage) and Microsoft Azure, including Microsoft Fabric.
• Experience building batch and streaming data pipelines and applying data modeling concepts.
• Experience with Infrastructure as Code tools, preferably Terraform.
• Ability to profile, validate, and improve data quality using data exploration and profiling techniques.
• Strong analytical and problem-solving skills, including root cause analysis.
• Excellent time management skills with the ability to manage multiple priorities.
• Experience collaborating with cross-functional teams, including developers, analysts, and data scientists.

Preferred Skills
• Familiarity with CI/CD practices for data workflows.
• Experience with data quality tools and data cataloging solutions.

Education
• Bachelor's degree in Computer Science, Engineering, or a related field, OR
• Associate's degree with at least 2 years of relevant experience.