

Lorven Technologies Inc.
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer specializing in FileNet and AI Kubernetes, located onsite in Columbus, OH. The long-term contract requires strong Python and Apache Spark skills, AWS proficiency, SQL experience, and knowledge of ETL frameworks and data warehousing concepts.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
February 26, 2026
Duration
Unknown
Location
On-site
Contract
Unknown
Security
Unknown
Location detailed
Columbus, OH
Skills detailed
#Data Engineering #Databases #Athena #PySpark #S3 (Amazon Simple Storage Service) #Python #GIT #Kubernetes #Data Processing #Apache Spark #Spark (Apache Spark) #Lambda (AWS Lambda) #AI (Artificial Intelligence) #SQL (Structured Query Language) #Version Control #AWS (Amazon Web Services) #Redshift #ETL (Extract, Transform, Load) #Data Modeling
Role description
FileNet Data Engineer with AI Kubernetes
Location: Columbus, OH - Onsite
Duration: Long-Term Contract
Required Skills & Qualifications:
• Strong proficiency in Python for data processing and pipeline development
• Hands-on experience with Apache Spark (PySpark preferred)
• Solid experience with AWS services such as S3, Glue, EMR, Redshift, Athena, and Lambda
• Experience with SQL and relational/non-relational databases
• Knowledge of data modeling, data warehousing concepts, and ETL frameworks
• Experience working with large-scale, distributed data systems
• Familiarity with CI/CD pipelines and version control tools (Git)
• Strong problem-solving and communication skills
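To illustrate the kind of Python/SQL ETL work the posting describes (this sketch is not part of the role description; it uses the standard-library sqlite3 module in place of a warehouse such as Redshift, and all table and column names are hypothetical):

```python
import sqlite3

# Hypothetical in-memory database standing in for a warehouse target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")

# Extract: raw rows as they might arrive from an upstream source.
raw = [(1, 10.0), (1, 5.5), (2, 3.0), (2, None)]

# Transform: drop rows with missing amounts.
clean = [(u, a) for u, a in raw if a is not None]

# Load: insert the cleaned rows, then aggregate with SQL.
conn.executemany("INSERT INTO raw_events VALUES (?, ?)", clean)
totals = dict(conn.execute(
    "SELECT user_id, SUM(amount) FROM raw_events GROUP BY user_id"
))
print(totals)  # {1: 15.5, 2: 3.0}
```

In production the same extract/transform/load shape would typically run as a PySpark job reading from S3 and writing to Redshift or Athena-queryable storage.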
