

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a contract basis in Bellevue, WA (Hybrid), requiring expertise in Synapse with PySpark, Azure Data Factory, and Big Data pipelines. Key skills include Java, Python, and data visualization with Power BI.
Country
United States
Currency
$ USD
-
Day rate
-
Date discovered
September 3, 2025
Project duration
Unknown
-
Location type
Hybrid
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
Bellevue, WA
-
Skills detailed
#Data Lake #Java #Spark (Apache Spark) #Microsoft Power BI #Version Control #PySpark #ADLS (Azure Data Lake Storage) #Monitoring #Visualization #Scala #Azure #Azure SQL #Azure Synapse Analytics #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Synapse #BI (Business Intelligence) #Big Data #Azure ADLS (Azure Data Lake Storage) #Storage #ADF (Azure Data Factory) #Snowflake #Data Quality #Databricks #Azure Data Factory #Python
Role description
Hi,
Greetings from Smart IT Frame! Hope you are doing well.
Role: Data Engineer
Location: Bellevue, WA (Hybrid)
Duration: Contract
Job description:
• Experience in Synapse with PySpark
• Knowledge of Big Data pipelines and Data Engineering
• Working knowledge of the MSBI stack on Azure
• Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
• Hands-on visualization experience with Power BI
• Implement end-to-end data pipelines using Cosmos and Azure Data Factory
• Strong analytical thinking and problem-solving skills
• Good communication and coordination skills
• Able to work as an individual contributor
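For illustration only, the end-to-end pipeline work the bullets above describe follows an extract/transform/load shape. The toy sketch below uses plain Python so it stays self-contained; in the actual role this logic would live in PySpark on Azure Synapse with Azure Data Factory handling orchestration, and every name in it is a hypothetical placeholder, not something taken from the posting.

```python
# Illustrative sketch of an end-to-end pipeline step (hypothetical names).
# In practice: spark.read from ADLS -> DataFrame transforms -> write to a
# lake table, orchestrated by an Azure Data Factory pipeline.

def extract(rows):
    """Stand-in for a source read (e.g. spark.read from ADLS)."""
    return list(rows)

def transform(rows):
    """Drop records with a missing amount and normalize the country code."""
    return [
        {**r, "country": r["country"].upper()}
        for r in rows
        if r.get("amount") is not None
    ]

def load(rows, sink):
    """Stand-in for a sink write (e.g. DataFrame.write to a lake table)."""
    sink.extend(rows)
    return len(rows)

# Usage: run the three stages end to end on a tiny batch.
source = [
    {"id": 1, "country": "us", "amount": 10.0},
    {"id": 2, "country": "ca", "amount": None},  # dropped by transform
]
sink = []
loaded = load(transform(extract(source)), sink)
print(loaded)  # 1
```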
Requirement Analysis:
• Create, maintain, and enhance Big Data pipelines
• Daily status reporting and interaction with leads
• Version control with ADO/Git and CI/CD
• Marketing campaign experience
• Data platform/product telemetry; analytical thinking
• Data validation of the new streams
• Data quality checks of the new streams
• Monitoring of data pipelines created in Azure Data Factory
• Updating the tech spec and wiki page for each pipeline implementation
• Updating ADO on a daily basis
Skills:
Mandatory Skills: Java, Python, Scala, Snowflake, Azure Blob Storage, Azure Data Factory, Azure Functions, Azure SQL, Azure Synapse Analytics, Azure Data Lake, ANSI SQL, Databricks, HDInsight
Good to Have Skills: Python, Azure Blob Storage, Azure Data Factory, Azure Data Lake