

Ab Initio/Teradata Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for an Ab Initio/Teradata Developer in Charlotte, NC, for 12+ months at a competitive pay rate. Requires 10+ years of Data Engineering, 8+ years with Ab Initio and Teradata, 4+ years in GCP, and strong SQL/ETL skills.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 1, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Charlotte, NC
Skills detailed: #SQL (Structured Query Language) #Agile #Data Manipulation #Data Engineering #Data Processing #GCP (Google Cloud Platform) #Hadoop #Data Pipeline #SQL Queries #BigQuery #Data Ingestion #Data Lake #Cloud #Spark (Apache Spark) #Teradata #Ab Initio #Jira #Python #Storage #Java #ETL (Extract, Transform, Load) #Debugging
Role description
Ab Initio/Teradata Developer
Location: Charlotte, NC
Duration: 12+ months
Must Haves:
MUST have 10+ years of Data Engineering experience
8+ years of Ab Initio experience
8+ years of Teradata experience
4+ years of GCP experience
4+ years of experience with BigQuery
Expertise with SQL/ETL
4+ years of Agile and JIRA experience
Experience with technical stakeholder interactions
Enterprise level experience
EXCELLENT written and verbal communication skills
Day to Day:
Designing, coding, and testing new data pipelines using Ab Initio
Implementing ETL/ELT Processes
Writing, optimizing, and debugging complex SQL queries for data manipulation, aggregation, and reporting, particularly for data within Teradata and BigQuery
Data ingestion: developing and managing processes to ingest large volumes of data into BigQuery on GCP (see the sketch after this list)
Managing and monitoring the GCP resources used for data processing and storage
Optimizing cloud data workloads
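For illustration only, a minimal sketch of the kind of BigQuery ingestion work described above, using the google-cloud-bigquery Python client. The project, dataset, table, and bucket names are hypothetical placeholders, not part of the role description:

from google.cloud import bigquery

# Minimal sketch: batch-load newline-delimited JSON files from Cloud Storage
# into a BigQuery table. All resource names below are hypothetical placeholders.
client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # let BigQuery infer the schema from the source files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/landing/deposits/*.json",   # hypothetical source files
    "example-project.example_dataset.deposits",      # hypothetical target table
    job_config=job_config,
)
load_job.result()  # block until the load job finishes (raises on failure)

table = client.get_table("example-project.example_dataset.deposits")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")

In practice, comparable ingestion would typically be built into Ab Initio graphs or scheduled pipelines rather than run as a standalone script.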
Desired Experience:
Java experience highly desired
Python experience highly desired
Experience with Spark, Hadoop, MapR, and data lakes
Background in Banking/Financial Technology (Deposits, Payments, Cards domains, etc.)