

SSi People
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Strongsville, Ohio, with a contract length of "unknown" and a pay rate of "unknown." Key skills include Hadoop utilities, Python, Kafka, and agile experience. A Bachelor's degree is required.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 13, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Strongsville, OH
Skills detailed: #Hadoop #Agile #AI (Artificial Intelligence) #Shell Scripting #HDFS (Hadoop Distributed File System) #HBase #GitHub #YARN (Yet Another Resource Negotiator) #Data Engineering #Java #Programming #Jira #Spark (Apache Spark) #Python #Impala #Kafka (Apache Kafka) #Scripting #Jenkins #PySpark #Maven #Data Pipeline #Data Integration
Role description
Job Title: Data Engineer
Location: Strongsville, Ohio (On-site)
Job Summary: We are seeking a highly skilled professional to contribute to data product and data integration initiatives within a dynamic, agile team. Our team values technical expertise and collaboration in supporting high-impact business objectives.
Responsibilities
• Design and build consumer and production applications using the Hadoop technology stack and Python.
• Collaborate within an agile crew, participating in two-week sprints to deliver data-oriented solutions.
• Utilize platform utilities such as Hive, Oozie, YARN, Impala, HDFS, HBase, Hue, and Beeline to manage robust data pipelines.
• Engage in data streaming activities and integrate applications using tools such as Confluent Kafka, Core Java, and PySpark.
Required
• Proficiency with Hadoop platform utilities and demonstrated experience with HDFS, Hive, Impala, and associated technologies.
• Strong programming skills in Python (2.7/3.0), shell scripting, and PySpark for handling complex data workflows.
• Solid understanding of data streaming platforms such as Confluent Kafka and familiarity with tools including IntelliJ, Maven or SBT, Confluence, Jira, ServiceNow, GitHub, Jenkins, and uDeploy.
• Bachelor's degree and excellent written and verbal communication skills; prior experience working in an agile environment is a plus.
About SSi People: With over 26 years of industry experience, SSi People has built its reputation and expertise on putting people first. Everything we do works toward delivering an exceptional experience for our consultants, our clients, and our internal team, through a genuine commitment to people in everything we do. We have developed refined processes and a stellar internal team to deliver talent quickly. More importantly, we focus on building long-term relationships, not transactions. Putting people first is just what we do well.
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from SSi People and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here: SSi People Privacy Policy