Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 3-4 years of experience, offering a contract of more than 6 months, remote or on-site in the U.S. It requires strong SQL and Python skills, experience with ETL frameworks, and hands-on work with Snowflake and Apache Spark.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 15, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Automation #GCP (Google Cloud Platform) #Snowflake #Apache Spark #Big Data #BigQuery #Data Modeling #Monitoring #Data Pipeline #Schema Design #Spark (Apache Spark) #AWS (Amazon Web Services) #Python #Cloud #Programming #Compliance #Data Governance #Consulting #Data Analysis #Security #Scala #Data Processing #Data Engineering #Kafka (Apache Kafka) #Data Quality #SQL (Structured Query Language) #dbt (data build tool) #ETL (Extract, Transform, Load) #Data Ingestion #Batch #Azure
Role description
Data Engineer (3-4 Years Experience)
Location: United States (Remote / On-site based on client needs)
Employment Type: Full-time (Contract or Contract-to-Hire)
Experience Level: Mid-level (3-4 years)
Company: Aaratech Inc
Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.

About Aaratech Inc
Aaratech Inc is a specialized IT consulting and staffing company that places elite engineering talent into high-impact roles at leading U.S. organizations. We focus on modern technologies across cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.

Position Overview
We are seeking a Data Engineer with 3-4 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.

Key Responsibilities
- Design and develop scalable data pipelines to support batch and real-time processing
- Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt (a minimal sketch appears at the end of this posting)
- Develop and optimize SQL queries for data analysis and warehousing
- Build and maintain data warehousing solutions on platforms like Snowflake or BigQuery
- Collaborate with business and technical teams to gather requirements and create accurate data models
- Write reusable and maintainable Python code for data ingestion, processing, and automation
- Ensure end-to-end data processing integrity, scalability, and performance
- Follow best practices for data governance, security, and compliance

Required Skills & Experience
- 3-4 years of experience in Data Engineering or a similar role
- Strong proficiency in SQL and Python
- Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
- Solid understanding of data warehousing concepts and architecture
- Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
- Proven experience in data modeling and schema design
- Exposure to data processing frameworks and performance optimization techniques
- Familiarity with cloud platforms such as AWS, GCP, or Azure

Nice to Have
- Experience with streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to CI/CD practices in data development
- Prior work in consulting or multi-client environments
- Understanding of data quality frameworks and monitoring strategies
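To illustrate the kind of batch ETL work listed under Key Responsibilities, here is a minimal PySpark sketch. It assumes PySpark is available; the bucket paths, column names, and app name are hypothetical, and a real engagement would ingest the curated output into a warehouse such as Snowflake or BigQuery.

from pyspark.sql import SparkSession, functions as F

# Hypothetical daily batch job: extract raw order events, clean them,
# aggregate per customer, and land the result for warehouse ingestion.
spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read one day of raw JSON events from cloud storage (path is hypothetical).
raw = spark.read.json("s3://example-bucket/raw/orders/2025-08-15/")

# Transform: enforce types and drop malformed rows.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .where(F.col("order_id").isNotNull())
)

# Aggregate: per-customer daily totals.
daily_totals = (
    orders.groupBy("customer_id")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("order_id").alias("order_count"))
)

# Load: write curated Parquet to a staging path; a warehouse's own
# bulk-load tooling could then pick it up from here.
daily_totals.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_customer_totals/2025-08-15/"
)
spark.stop()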