Ampstek

Data Engineer (Python Enterprise Developer)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Python Enterprise Developer) in London, UK (Hybrid 4 days onsite/week), on a contract basis (Inside IR35), with a pay rate of "X". Requires 8+ years of experience, expertise in Python, SQL, and cloud services, preferably in energy trading.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
February 19, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#NumPy #Pandas #Lambda (AWS Lambda) #Scala #Azure DevOps #Apache Airflow #Azure #Data Engineering #GIT #S3 (Amazon Simple Storage Service) #Python #Airflow #Programming #Scripting #Azure Databricks #Agile #Scrum #DevOps #ETL (Extract, Transform, Load) #Spark (Apache Spark) #AWS (Amazon Web Services) #Jenkins #SQL (Structured Query Language) #Big Data #Databricks
Role description
About Us: AmpsTek – a global technology leader since 2013 – is transforming how businesses approach technology and staffing solutions. Founded by seasoned technology leaders across the UK, Europe, APAC, North America, and LATAM, and with registered offices in 30+ countries, we deliver exceptional service, scalable solutions, and measurable impact. With a portfolio of 200+ clients and millions of users across web and mobile platforms, we empower businesses to innovate, grow, and succeed. Join our team and be part of a dynamic, growth-oriented organization that values talent, creativity, and results.

Role: Data Engineer (Python Enterprise Developer)
Location: London, UK (Hybrid, 4 days onsite/week)
Contract: Inside IR35

Must-have skills: Python, PostgreSQL, Azure Databricks, AWS (S3), Git, Azure DevOps CI/CD, Apache Airflow
Preferred domain: Energy trading
Minimum relevant experience: 8+ years

Detailed job description:
• 8+ years of experience in Python scripting.
• Exposure to Python packages such as NumPy, pandas, Polars, Beautiful Soup, Selenium, and Requests.
• Experience building native Python ETL pipelines for scraping and processing.
• Experience processing big data is good to have.
• Proficient in SQL programming, particularly PostgreSQL; Spark knowledge is good to have.
• Knowledge of DevOps CI/CD tooling such as Azure Pipelines, Jenkins, and Git.
• Experience working with AWS services (S3, Lambda, etc.) and Azure Databricks.
• Experience delivering projects with Agile and Scrum methodologies.
• Able to coordinate with teams across multiple locations and time zones.
• Strong interpersonal and communication skills, with the ability to lead a team and keep it motivated.
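To illustrate the kind of work the description mentions ("building native Python ETL pipelines for scraping/processing"), here is a minimal sketch of a scrape-and-process pipeline. The HTML snippet and field names are hypothetical, and it uses only the standard library so it runs anywhere; in a real pipeline, Requests would fetch the page, Beautiful Soup would replace the hand-rolled parser, and pandas or Polars would handle the transform step.

```python
from html.parser import HTMLParser
import csv
import io

# Hypothetical stand-in for a page fetched with Requests.
RAW_HTML = """
<table>
  <tr><th>trade_id</th><th>price</th></tr>
  <tr><td>T-001</td><td> 42.50 </td></tr>
  <tr><td>T-002</td><td> 17.25 </td></tr>
</table>
"""

class TableExtractor(HTMLParser):
    """Extract step: pull rows of cell text out of an HTML table."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = None
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data

def run_pipeline(raw_html: str) -> list[dict]:
    # Extract: parse the table into header + body rows.
    parser = TableExtractor()
    parser.feed(raw_html)
    header, *body = parser.rows
    # Transform: strip whitespace and coerce prices to float.
    records = []
    for row in body:
        rec = dict(zip(header, (cell.strip() for cell in row)))
        rec["price"] = float(rec["price"])
        records.append(rec)
    return records

def load_to_csv(records: list[dict]) -> str:
    # Load: serialize to CSV (a stand-in for an S3 upload or DB insert).
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

records = run_pipeline(RAW_HTML)
print(records[0])  # {'trade_id': 'T-001', 'price': 42.5}
```

In production, the load step would typically push to S3 via boto3 and the whole pipeline would run as an Apache Airflow task, matching the stack listed above.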