

TMS, LLC
Control-M Developer with Data Engineering Experience
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Control-M Developer with Data Engineering experience, offering a 6+ month remote contract at an undisclosed pay rate. Candidates must have 5+ years of hands-on Control-M experience and strong SQL skills; experience with cloud platforms and CI/CD tools is preferred.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Data Warehouse #ADLS (Azure Data Lake Storage) #Scala #Snowflake #BigQuery #Hadoop #Cloud #Azure DevOps #Monitoring #Spark (Apache Spark) #Batch #Data Integration #Lambda (AWS Lambda) #Data Quality #Teradata #Databricks #S3 (Amazon Simple Storage Service) #Scripting #Python #Linux #SQL (Structured Query Language) #Oracle #Storage #Big Data #SQL Server #Data Ingestion #Jenkins #Data Pipeline #Azure #GCP (Google Cloud Platform) #DevOps #Shell Scripting #GIT #Unix #Deployment #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Automation #Data Processing #Redshift #Data Engineering
Role description
Job Description
Job Title: Control-M Developer – Data Engineering
Location: Remote
Experience: 10+ years
Duration: 6+ Months
Employment Type: Contract
Job Summary
We are seeking an experienced Control-M Developer with strong Data Engineering expertise to design, develop, and manage enterprise-scale batch scheduling and data pipeline workflows. The ideal candidate will have hands-on experience in Control-M automation, data integration, and end-to-end data pipeline orchestration across modern data platforms.
Key Responsibilities
Control-M & Scheduling
Design, develop, and maintain Control-M job flows for complex batch and data workflows
Create and manage job dependencies, calendars, conditions, and alerts
Monitor, troubleshoot, and optimize batch failures and performance issues
Implement job automation, reruns, recovery, and SLA management
Collaborate with application, data, and infrastructure teams to ensure seamless scheduling
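For context on the job-flow work above: Control-M flows are commonly defined as JSON through the Control-M Automation API. A minimal sketch of a two-job flow with an in-folder dependency (folder, job, host, and script names here are illustrative placeholders, not from this posting):

```python
import json

# Sketch of a Control-M Automation API folder definition with two
# command jobs. The "Flow" object makes LoadJob wait for ExtractJob.
flow = {
    "DemoFolder": {
        "Type": "Folder",
        "ExtractJob": {
            "Type": "Job:Command",
            "Command": "/opt/etl/extract.sh",
            "RunAs": "etluser",
            "Host": "etl-host",
        },
        "LoadJob": {
            "Type": "Job:Command",
            "Command": "/opt/etl/load.sh",
            "RunAs": "etluser",
            "Host": "etl-host",
        },
        # In-folder dependency: run ExtractJob, then LoadJob
        "Flow": {"Type": "Flow", "Sequence": ["ExtractJob", "LoadJob"]},
    }
}

print(json.dumps(flow, indent=2))
```

A definition like this would typically be validated and deployed with the Automation API's build/deploy services rather than edited by hand in production.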
Data Engineering
Develop and support data pipelines using ETL/ELT tools and scripting
Integrate Control-M with data platforms such as:
Data warehouses (Snowflake, Redshift, BigQuery, Teradata, Oracle, SQL Server)
Big data ecosystems (Hadoop, Spark)
Orchestrate workflows involving file transfers (SFTP, FTP), APIs, and cloud storage
Support data ingestion, transformation, validation, and downstream consumption
Ensure data quality, reliability, and performance across pipelines
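As a hedged illustration of the validation and data-quality work described above, a Control-M job step might invoke a small check script whose result decides whether downstream jobs run. Field names below are placeholders:

```python
def validate_batch(rows, required_fields=("id", "amount")):
    """Return a list of error strings; an empty list means the batch passes.

    Sketch of a post-load data-quality gate: the calling job step can
    treat a non-empty result as a failure and stop downstream consumption.
    """
    errors = []
    if not rows:
        errors.append("empty batch")
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            errors.append(f"row {i}: missing {missing}")
    return errors

# Demo: one clean row, one row with a null amount
sample = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
print(validate_batch(sample))  # → ["row 1: missing ['amount']"]
```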
Cloud & Automation
Schedule and monitor workloads in AWS / Azure / GCP environments
Integrate Control-M with cloud-native services (S3, ADLS, Lambda, Databricks, etc.)
Use shell scripting / Python for automation and data processing
Implement CI/CD best practices for job and pipeline deployments
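The file-transfer and automation scripting above often involves a file-watcher pre-step. A minimal stand-in sketch (in production, Control-M's built-in file watcher / MFT would normally be used instead of hand-rolled polling):

```python
import time
from pathlib import Path

def wait_for_file(path, timeout_s=60, poll_s=5):
    """Poll for a file's arrival; return True if it appears within timeout_s.

    Sketch of the file-watcher pattern a job step might script: the
    caller can convert a False result into a nonzero exit code so the
    scheduler marks the job failed and triggers rerun/alert handling.
    """
    deadline = time.monotonic() + timeout_s
    target = Path(path)
    while time.monotonic() < deadline:
        if target.exists():
            return True
        time.sleep(poll_s)
    return target.exists()
```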
Required Skills & Qualifications
Must-Have
5+ years of hands-on experience as a Control-M Developer / Scheduler
Strong experience in Data Engineering or ETL development
Proficiency in Unix/Linux shell scripting
Strong SQL skills for data validation and troubleshooting
Experience supporting enterprise batch processing environments
Nice-to-Have
Experience with Python, Spark, or Scala
Exposure to cloud-based data platforms (Snowflake, Databricks, BigQuery)
Knowledge of CI/CD tools (Git, Jenkins, Azure DevOps)
Experience with monitoring tools and SLA reporting
Control-M certification is a plus
Soft Skills
Strong analytical and troubleshooting skills
Ability to work in fast-paced, production-critical environments
Excellent communication and cross-team collaboration
Ownership mindset with attention to detail
Additional Information
All your information will be kept confidential according to EEO guidelines.






