Data Engineer (Contract, Remote)

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is a remote Data Engineer contract position, offering $70.00 - $150.00 per hour for 20-40 hours per week. Key skills include ETL processes, big data technologies (Hadoop, Spark), and programming (Java, Python, SQL). Experience in Agile environments is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
1200
-
๐Ÿ—“๏ธ - Date discovered
August 15, 2025
🕒 - Project duration
Unknown
-
๐Ÿ๏ธ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
๐Ÿ“ - Location detailed
Remote
-
🧠 - Skills detailed
#Agile #Hadoop #Databases #Database Design #Microsoft SQL #BI (Business Intelligence) #MS SQL (Microsoft SQL Server) #Bash #Big Data #SQL Server #Data Lake #VBA (Visual Basic for Applications) #Datasets #Shell Scripting #Data Pipeline #Spark (Apache Spark) #AWS (Amazon Web Services) #Python #Cloud #Programming #Microsoft SQL Server #Data Science #Data Warehouse #Looker #Scala #Scripting #Informatica #Data Engineering #Java #Apache Hive #Data Quality #Unix #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Database Performance #Oracle #Azure
Role description
Overview
We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and architectures that support our analytics and business intelligence initiatives. You will work closely with data scientists, analysts, and other stakeholders to ensure that our data infrastructure meets the evolving needs of the organization.

Responsibilities
- Design and implement robust data pipelines using ETL processes to extract, transform, and load data from various sources into data warehouses.
- Develop and maintain data models that support business intelligence reporting and analytics.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Optimize database performance through effective database design and management practices.
- Utilize big data technologies such as Hadoop, Spark, and Azure Data Lake to process large datasets.
- Implement RESTful APIs for seamless integration of data services.
- Conduct model training and analysis to enhance predictive analytics capabilities.
- Ensure data quality and integrity through rigorous testing and validation processes.
- Stay current with industry trends and best practices in data engineering, analytics, and cloud technologies.

Experience
- Proficiency in programming languages such as Java, Python, SQL, VBA, Bash (Unix shell), or shell scripting.
- Hands-on experience with big data technologies including Hadoop, Apache Hive, Spark, and Informatica.
- Familiarity with cloud platforms such as AWS or Azure Data Lake is highly desirable.
- Experience with relational databases like Microsoft SQL Server or Oracle; knowledge of database design principles is a plus.
- Understanding of analytics tools like Looker for reporting purposes.
- Knowledge of linked data concepts and techniques is an advantage.
- Strong analytical skills and the ability to troubleshoot complex data issues effectively.
- Experience working in Agile environments to deliver high-quality solutions efficiently.

Join us in leveraging the power of data to drive informed decision-making across the organization. If you are passionate about data engineering and eager to make an impact, we encourage you to apply.

Job Type: Contract
Pay: $70.00 - $150.00 per hour
Expected hours: 20 – 40 per week
Benefits:
- Flexible schedule
- Referral program
Work Location: Remote