

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a long-term contract in Spring, Texas, offering a competitive pay rate. Key skills include ETL, data ingestion, cloud platforms (Azure, AWS), and tool expertise (Power BI, Kafka). Houston-based candidates preferred.
Country: United States
Currency: $ USD
Day rate: 520
Date discovered: June 17, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #"ETL (Extract, Transform, Load)" #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Azure #Data Engineering #Data Quality #Cloud #Microsoft Power BI #Spark (Apache Spark) #BI (Business Intelligence) #DevOps #Data Ingestion #Automation
Role description
Job Title: Data Engineer
Location: Spring, Texas (On-Site/Remote; Onsite Preferred)
Job Type: Long-Term Contract
Houston-based candidates preferred.
Responsibilities:
Notes: must have very good communication skills; will be working directly with the OSI PI product owner; ideally Houston-based and able to come to campus, but US remote would be considered.
• Data engineering: data ingestion and pipeline building (ETL, pipelines, ingestion frameworks).
• Data quality and validation expertise.
• Cloud platform knowledge (e.g., Azure, AWS, Google Cloud).
• Tool-specific expertise (e.g., Aveva Connect, Power BI, Spark, Kafka).
• DevOps/automation for CI/CD pipelines.
• Data ingestion and processing of complex data sets.
• Integration with other tools and platforms (Aveva Connect, PI Historian, Kafka).