

Akkodis
Senior Data Engineer - 100% Remote - Contract
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 100% remote contract, paying $75-$85 per hour. Key skills include Python, AWS, PyIceberg, ETL, and data lake technologies. Experience with Pandas, PyArrow, and large dataset optimization is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
November 7, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#FastAPI #Flask #Spark (Apache Spark) #Storage #ETL (Extract, Transform, Load) #Data Management #Azure #AWS S3 (Amazon Simple Storage Service) #Data Governance #Azure ADLS (Azure Data Lake Storage) #Datasets #Data Processing #AWS (Amazon Web Services) #Programming #Batch #Metadata #Pandas #Delta Lake #Data Lake #Apache Iceberg #Big Data #Data Engineering #Python #Data Access #Data Integrity #S3 (Amazon Simple Storage Service) #Data Manipulation #Cloud #ADLS (Azure Data Lake Storage)
Role description
Akkodis is seeking a Senior Data Engineer for a contract position; the role is 100% remote. Ideally, applicants will have a solid background in data engineering, AWS, PyIceberg, ETL, and Pandas/PyArrow.
Rate Range: $75 - $85 per hour. The rate may be negotiable based on experience, education, geographic location, and other factors.
About the Role
We are seeking a Data Engineer with strong Python expertise and experience working with modern data lake technologies to join our team. This role will focus on building efficient data workflows and integrating large-scale datasets into web applications. You will work closely with backend engineers to design and optimize data access layers for a client, leveraging Apache Iceberg and Python-based tools.
Key Responsibilities
• Design and implement data access layers for web applications using PyIceberg.
• Develop efficient querying workflows using Pandas, PyArrow, and DuckDB (see the sketch after this list).
• Optimize memory-heavy operations and improve performance for large datasets.
• Build and maintain ETL pipelines for batch updates and overwrite workflows.
• Manage Iceberg table metadata, schema evolution, and partitioning strategies.
• Collaborate with backend engineers to integrate data services into RESTful APIs.
• Implement caching and pre-processing strategies to reduce latency.
• Ensure data integrity and consistency across snapshots and versions.
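For context, here is a minimal Python sketch of the kind of workflow these responsibilities describe, reading an Iceberg table through PyIceberg and aggregating it with DuckDB. The catalog URI, warehouse path, and the analytics.trips table are placeholders for illustration, not details from this role:

import duckdb
from pyiceberg.catalog import load_catalog
from pyiceberg.expressions import GreaterThanOrEqual

# Placeholder catalog config; a real deployment would point at the client's
# catalog service and object store.
catalog = load_catalog(
    "default",
    uri="http://localhost:8181",
    warehouse="s3://example-bucket/warehouse",
)
table = catalog.load_table("analytics.trips")  # hypothetical namespace.table

# Push the filter and column projection down to Iceberg so only matching
# Parquet files and row groups are read, keeping memory use bounded.
arrow_tbl = table.scan(
    row_filter=GreaterThanOrEqual("trip_date", "2025-01-01"),
    selected_fields=("trip_date", "fare_amount"),
).to_arrow()

# DuckDB queries the in-memory Arrow table directly (zero-copy), keeping the
# heavy aggregation out of Pandas; only the small result becomes a DataFrame.
daily_fares = duckdb.sql(
    "SELECT trip_date, avg(fare_amount) AS avg_fare "
    "FROM arrow_tbl GROUP BY trip_date ORDER BY trip_date"
).df()

Pushing filters and projections into the Iceberg scan, rather than loading whole tables into Pandas, is the usual lever for the memory and latency goals listed above.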
Required Skills & Experience
• Strong Python programming skills with experience in data engineering.
• Hands-on experience with PyIceberg or similar technologies (Delta Lake, Hive).
• Proficiency in Pandas, PyArrow, and DuckDB for data manipulation.
• Understanding of data lake architectures, Parquet format, and columnar storage.
• Experience with ETL design, batch processing, and overwrite workflows.
• Familiarity with cloud storage systems (e.g., AWS S3, Azure Data Lake Storage).
• Knowledge of query optimization and performance tuning for large datasets.
Preferred Qualifications
• Experience integrating data workflows with FastAPI, Flask, or similar frameworks (a sketch follows this list).
• Background in data governance, metadata management, and schema evolution.
• Exposure to distributed systems and big data processing frameworks (Spark, Flink) is a plus.
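As one illustration of the FastAPI integration and caching points above, a hedged sketch: the endpoint path, table name, and catalog URI are hypothetical, and a production service would refresh the cached snapshot after each batch overwrite rather than hold it indefinitely.

from functools import lru_cache

import duckdb
from fastapi import FastAPI
from pyiceberg.catalog import load_catalog

app = FastAPI()

@lru_cache(maxsize=1)
def trips_arrow():
    # Cache the Arrow snapshot in process so each request does not re-scan
    # the Iceberg table; call trips_arrow.cache_clear() after a batch
    # overwrite lands to pick up the new snapshot.
    catalog = load_catalog("default", uri="http://localhost:8181")  # placeholder
    return catalog.load_table("analytics.trips").scan(
        selected_fields=("trip_date", "fare_amount"),
    ).to_arrow()

@app.get("/daily-fares")
def daily_fares():
    trips = trips_arrow()
    # DuckDB resolves "trips" to the in-scope Arrow table via replacement scan.
    rows = duckdb.sql(
        "SELECT trip_date, avg(fare_amount) AS avg_fare "
        "FROM trips GROUP BY trip_date ORDER BY trip_date"
    ).fetchall()
    return [{"trip_date": str(day), "avg_fare": fare} for day, fare in rows]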
If you are interested in this position, please click APPLY NOW. For other opportunities available at Akkodis, go to www.akkodis.com. If you have questions about the position, please contact Narendra Pratap at (213) 410-5211 or narendra.pratap@akkodis.com.
Equal Opportunity Employer/Veterans/Disabled
Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits, and a 401(k) plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State, or local law; and Holiday pay upon meeting eligibility criteria. Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs that are direct hires to a client.
To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit https://www.akkodis.com/en/privacy-policy
The Company will consider qualified applicants with arrest and conviction records.