
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for a 12-month contract, offering competitive pay. Key skills include Azure Databricks, Data Warehouse, and experience with Azure Data Factory. Expertise in building data pipelines and Agile project experience is required.
Country: United States
Currency: $ USD
Day rate: $480
Date discovered: June 15, 2025
Project duration: More than 6 months
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Jersey City, NJ
Skills detailed: #Cloud #Datasets #Data Integration #Agile #Scripting #Python #API (Application Programming Interface) #"ETL (Extract, Transform, Load)" #ADLS (Azure Data Lake Storage) #Spark SQL #Batch #Storage #Data Lake #Azure #Data Pipeline #Azure cloud #Data Engineering #Data Catalog #ADF (Azure Data Factory) #Azure SQL #PySpark #Databricks #Azure Databricks #Azure Data Factory #Data Ingestion #Data Migration #Migration #Scala #SQL (Structured Query Language) #Jira #Spark (Apache Spark) #Data Processing #Big Data
Role description
Length: 12 months
Strong experience and expertise with Azure Databricks and data warehousing are required.
RESPONSIBILITIES
Build large-scale batch and real-time data pipelines using data processing frameworks on the Azure cloud platform.
Design and implement highly performant data ingestion pipelines from multiple sources using Azure Databricks (a minimal sketch follows this list).
Build data pipelines using Azure Data Factory and Databricks.
Develop scalable, reusable frameworks for ingesting datasets.
Lead the design of ETL, data integration, and data migration.
Partner with architects, engineers, information analysts, and business and technology stakeholders to develop and deploy enterprise-grade platforms that enable data-driven solutions.
Integrate the end-to-end data pipeline, taking data from source systems to target data repositories while ensuring data quality and consistency are maintained at all times.
Work with event-based/streaming technologies to ingest and process data.
Work with other members of the project team to support delivery of additional project components (API interfaces, search).
Evaluate the performance and applicability of multiple tools against customer requirements.
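For illustration only, here is a minimal PySpark sketch of the kind of Databricks ingestion pipeline described above. The ADLS paths, container names, and the orders dataset are assumptions made for the example, not details from this posting:

```python
# Minimal sketch of a batch ingestion pipeline on Azure Databricks.
# All storage paths, container names, and column names are hypothetical
# placeholders, not details taken from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

# Source: raw CSV files landed in an ADLS Gen2 container (placeholder path).
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Light standardization before writing to the curated zone:
# de-duplicate and stamp each row with its ingestion date.
clean = (
    df.dropDuplicates()
      .withColumn("ingest_date", F.current_date())
)

# Target: a Delta table in the curated zone (placeholder path),
# partitioned by ingestion date for downstream query performance.
(
    clean.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```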
REQUIREMENTS
Experience with ADLS, Azure Databricks, Azure SQL DB, and data warehousing.
Strong working experience implementing Azure cloud components using Azure Data Factory, Azure Data Analytics, Azure Data Lake, Azure Data Catalog, Logic Apps, and Function Apps.
Knowledge of Azure Storage services (ADLS, storage accounts).
Expertise in designing and deploying data applications on Azure cloud solutions.
Hands-on experience with performance tuning and optimizing code running in a Databricks environment.
Good understanding of SQL, T-SQL, and/or PL/SQL.
Experience working on Agile projects, with knowledge of Jira.
Experience handling data ingestion projects in an Azure environment is a plus.
Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment.
Experience with Python scripting, Spark SQL, and PySpark is a plus (see the example below).
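As a small illustration of the Spark SQL and PySpark skills listed above, here is a hedged sketch that reuses the hypothetical Delta table from the earlier example; the table path and column names are placeholders:

```python
# Hypothetical example of combining PySpark and Spark SQL on Databricks;
# the Delta path and column names are placeholders carried over from the
# ingestion sketch above, not details from this posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_reporting").getOrCreate()

# Load the curated Delta table and expose it to Spark SQL as a view.
orders = spark.read.format("delta").load(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
orders.createOrReplaceTempView("orders")

# Spark SQL over the registered view: row counts per ingestion date.
daily_counts = spark.sql("""
    SELECT ingest_date, COUNT(*) AS row_count
    FROM orders
    GROUP BY ingest_date
    ORDER BY ingest_date
""")
daily_counts.show()
```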