Bernard Tech

Senior PySpark and Spark Developer

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior PySpark and Spark Developer on a remote contract, requiring proficiency in PySpark and Apache Spark, experience with AWS, and strong Python coding skills. A degree in Computer Science or Engineering is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
๐Ÿ—“๏ธ - Date
November 4, 2025
🕒 - Duration
Unknown
-
๐Ÿ๏ธ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
United States
-
🧠 - Skills detailed
#Scala #AWS (Amazon Web Services) #Spark (Apache Spark) #Python #Apache Spark #Computer Science #Azure #Big Data #Debugging #Data Pipeline #Hadoop #Cloud #Data Processing #Data Framework #PySpark
Role description
Company Description
Bernard Tech is an Israel-based company with an international presence, offering a wide range of IT-related services and solutions. We pride ourselves on being inspired by the passion, experience, and expertise of our employees. Our approach is both humble and knowledgeable, allowing us to deliver significant impact for our clients. We provide both "out of the box" and tailor-made solutions, focusing on delivering cost-effective solutions that meet our customers' needs.
For a leading multinational telecommunications technology company headquartered in the USA, specializing in software and services for communications, media, and financial services providers and digital enterprises worldwide, we are seeking an experienced senior developer with direct PySpark / Spark experience.
Role Description
This is a remote contract role for a senior developer with direct PySpark / Spark experience to support strategic projects. The ideal candidate should have a solid background in building scalable data pipelines, real-time data flow processing, and performance optimization within AWS EMR environments.
Qualifications
• Proficiency in PySpark and Apache Spark for large-scale data processing
• Solid background in building scalable data pipelines, real-time data flow processing, and performance optimization within AWS EMR environments
• Strong coding skills in Python, as well as debugging and optimization techniques
• Familiarity with cloud platforms (e.g., AWS, Azure, or Google Cloud) and associated data services
• Understanding of big data frameworks and technologies such as Hadoop or Hive is a plus
• Ability to work collaboratively in a remote team environment
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Prior experience in a similar role and strong analytical thinking capabilities
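For context on the kind of work described above, here is a minimal PySpark sketch of a batch pipeline step (read raw events, aggregate, write curated output). All bucket names, paths, and column names are hypothetical examples, not details from this posting, and a real EMR job would add an explicit schema and cluster-specific configuration.

```python
# Minimal PySpark sketch: read raw events from S3, aggregate, write back.
# Paths, bucket names, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-event-aggregation")  # hypothetical job name
    .getOrCreate()
)

# Read raw JSON events (a production pipeline would declare an explicit schema).
events = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

# Aggregate daily event counts per customer - a typical batch-pipeline step.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Write partitioned Parquet output for downstream consumers.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_event_counts/")  # hypothetical path
)

spark.stop()
```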