

Vision Square INC
Big Data Developer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer on a contract basis, remote work. Requires 7+ years in big data engineering, expertise in AWS, SQL, Apache Airflow, and Spark. Familiarity with CDPs and AI solutions is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 18, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Scala #Snowflake #Data Architecture #Athena #Spark (Apache Spark) #DevOps #Code Reviews #Agile #Data Integration #Data Integrity #DataOps #Python #Data Management #Scrum #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Data Pipeline #Redshift #Big Data #SQL (Structured Query Language) #Data Modeling #Airflow #Data Lake #Apache Airflow #Data Engineering #ETL (Extract, Transform, Load)
Role description
Job Title: Big Data Engineer
Location: Remote
Job Type: Contract
Working in an agile (Scrum) environment, you will collaborate closely with product owners, marketing stakeholders, data architects, and cross-functional teams to plan and execute key initiatives. You'll ensure the team has clear direction, adequate resources, and strong technical guidance to deliver high-quality, high-impact solutions. In addition, you'll play a key role in shaping system architecture, enforcing software quality standards, and fostering a culture of innovation and technical excellence.
Primary Responsibilities:
• Lead an agile data engineering team focused on customer data management and big data solutions that power marketing analytics, personalization, and engagement strategies across Catalyst Brands' digital ecosystem. Ensure initiatives are delivered on time, with high data integrity, performance, and scalability. Collaborate with cross-functional partners including architecture, product management, DevOps, DataOps, marketing analytics, and business operations to define and execute a robust data product backlog and technical roadmap.
• Ensure scalability, performance, and maintainability of big data pipelines, data lakes, and customer data platforms (CDPs) to support advanced marketing use cases such as segmentation and real-time personalization.
• Serve as a technical lead on the design and delivery of complex data engineering projects involving customer data integration, identity resolution, and data synchronization across internal and third-party platforms. Oversee development and data engineering practices, providing guidance through design/code reviews, architectural decisions, and best practices for building secure and compliant customer data solutions. Mentor and guide team members in a dynamic, data-driven environment, promoting a culture of excellence, collaboration, and customer-centric data solutions.
Core Competencies & Accomplishments:
• 7+ years of relevant big data engineering experience.
• Develop and orchestrate workflows using Apache Airflow and Spark.
• Leverage AWS services (e.g., S3, Glue, Athena, Redshift, EMR) to manage and optimize large-scale data infrastructure.
• Write robust, modular, and testable code in Python to support data engineering use cases.
• Experience developing or supporting AI-powered solutions is a strong plus.
• Expert-level skills in SQL for data modeling, transformation, and analysis.
• Hands-on experience with Snowflake, Redshift, and traditional data warehousing principles (star/snowflake schema, performance tuning, etc.).
• Familiarity with real-time data streaming (Kafka) and event-based architectures.
• Implement and manage CI/CD pipelines for data applications and infrastructure.
• Exposure to CDPs (Customer Data Platforms) or marketing tech platforms is a plus.
• Strong sense of urgency and ability to drive change.
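To illustrate the "robust, modular, and testable Python" requirement above, here is a minimal sketch of the kind of pipeline code implied. The record fields and function names are hypothetical, not from the posting; the point is that each transformation step is a small pure function that can be unit-tested in isolation before being wired into an Airflow or Spark job.

```python
from datetime import datetime, timezone


def normalize_customer_record(raw: dict) -> dict:
    """Normalize one raw customer event into a canonical record.

    Hypothetical example: field names are illustrative only.
    Raises ValueError so callers can decide how to handle bad rows.
    """
    email = raw.get("email", "").strip().lower()
    if not email:
        raise ValueError("record is missing an email address")
    return {
        "email": email,
        "first_name": raw.get("first_name", "").strip().title(),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }


def transform_batch(records: list[dict]) -> list[dict]:
    """Apply the normalizer to a batch, skipping invalid records."""
    out = []
    for rec in records:
        try:
            out.append(normalize_customer_record(rec))
        except ValueError:
            continue  # in production, route to a dead-letter sink instead
    return out
```

Keeping validation and transformation in plain functions like these makes them trivially testable and reusable across orchestrators (Airflow task, Spark UDF, or a one-off backfill script).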






