

TechWish
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "Unknown," offering a pay rate of "$$$." It requires expertise in Azure, Databricks, SQL, and Spark, along with a Master's degree or equivalent experience in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Vienna, VA
-
🧠 - Skills detailed
#Data Integration #Leadership #API (Application Programming Interface) #Deployment #ETL (Extract, Transform, Load) #Azure Databricks #Forecasting #Data Quality #Data Ingestion #Visualization #Datasets #Azure #Data Engineering #Cloud #Spark (Apache Spark) #DevSecOps #SQL (Structured Query Language) #Data Lake #Data Science #Data Cleaning #Python #Data Pipeline #Databricks #Computer Science #Logical Data Model #Azure SQL
Role description
Basic Purpose: Architects and delivers full‑stack analytics solutions spanning data ingestion, transformation, modeling, and visualization using cloud and big‑data technologies (Azure, Databricks, SQL, Spark). Builds resilient data pipelines and analytical datasets that support descriptive, diagnostic, and advanced analytics use cases, including data science and forecasting models. Writes complex SQL and Spark code optimized for scale, governance, and CI/CD deployment. Acts as a technical and analytical expert, independently tackling highly complex problems, leading functional initiatives, and translating raw data into trusted insights that improve the performance and decision‑making capability of the client's ecosystem.
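To give a concrete sense of the "complex SQL ... optimized for scale" work described above, here is a minimal, hypothetical sketch of a descriptive-analytics SQL step such a pipeline might run. It uses Python's built-in sqlite3 purely as a stand-in for Azure SQL / Databricks SQL; the table and column names are illustrative assumptions, not part of the role.

```python
# Hypothetical sketch: aggregate raw event rows into an analytical dataset.
# sqlite3 stands in for Azure SQL / Databricks SQL; schema is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("2026-04-01", "east", 120.0),
        ("2026-04-01", "west", 80.0),
        ("2026-04-02", "east", 200.0),
    ],
)

# Descriptive analytics: total revenue per day, ready for visualization.
daily = conn.execute(
    "SELECT day, SUM(amount) AS revenue FROM sales GROUP BY day ORDER BY day"
).fetchall()
print(daily)  # [('2026-04-01', 200.0), ('2026-04-02', 200.0)]
```

In a real Databricks deployment the same GROUP BY logic would run over Spark tables and feed downstream dashboards or forecasting models.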
Responsibilities:
• Provide Data Intelligence and Data Warehousing (DW) solutions and support by leveraging project standards and leading data platforms
• Design and operationalize data science workflows within Databricks, including exploratory analysis, feature engineering, and production-ready analytical datasets integrated into CI/CD pipelines.
• Build and maintain Azure data pipelines using DevSecOps processes
• Define and build data integration processes to be used across the organization
• Build conceptual and logical data models for stakeholders and management
• Work directly with business leadership to understand data requirements; propose and develop solutions that enable effective decision-making and drive business objectives
• Prepare advanced project implementation plans that highlight major milestones and deliverables, leveraging standard methods and work-planning tools
• Document existing and new processes to develop and maintain technical and non-technical reference materials.
• Recognize potential issues and risks during the analytics project implementation and suggest mitigation strategies
• Communicate and own the process of manipulating and merging large datasets
• Perform other duties as assigned.
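One responsibility above is owning the process of manipulating and merging large datasets. As a hedged illustration of that idea, the sketch below merges two pre-sorted record streams without loading either fully into memory, using Python's `heapq.merge`. In the actual role this would typically be done in Spark/Databricks; the dataset names and keys here are hypothetical.

```python
# Hypothetical sketch: lazily merge two large, already-sorted datasets.
# In practice these iterables could be generators streaming files from
# Azure Data Lake rather than in-memory lists.
import heapq

def merge_sorted_streams(*streams, key=None):
    """Lazily merge already-sorted record streams into one sorted stream."""
    return heapq.merge(*streams, key=key)

# Two order datasets, each sorted by customer id (first tuple element).
orders_a = [(1, "a1"), (3, "a3"), (5, "a5")]
orders_b = [(2, "b2"), (3, "b3"), (4, "b4")]

merged = list(merge_sorted_streams(orders_a, orders_b, key=lambda r: r[0]))
print(merged)  # one stream, still sorted by customer id: 1, 2, 3, 3, 4, 5
```

Because `heapq.merge` yields records one at a time, memory use stays bounded regardless of dataset size, the same property a distributed Spark shuffle-merge provides at cluster scale.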
Qualifications and Education Requirements:
• Master’s degree in Information Systems, Computer Science, Engineering, or related field, or the equivalent combination of education, training and experience
• Expert skill in Databricks
• Advanced skill in Azure SQL, Azure Data Lake, and Azure App Service, Python, T-SQL
• Experienced in sourcing, maintaining, and updating data in Cloud environments
• Knowledge of and the ability to perform basic statistical analysis
• Experienced in the use of ETL tools and techniques
• Experience designing and building data pipelines using API ingestion and streaming ingestion methods
• Demonstrates change management and/or excellent communication skills
• Understands data warehousing, data cleaning, data quality, data pipelines and other analytical techniques required for data usage.