Splunk Consultant || Melbourne FL, Dallas/Frisco TX, or Cary, NC

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a "Splunk Consultant" with a contract length of "unknown," offering a pay rate of "unknown." It requires experience in Splunk design, implementation, and support, particularly in service provider networks, along with strong UNIX/Windows knowledge and AWS skills.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas-Fort Worth Metroplex
🧠 - Skills detailed
#JavaScript #Python #Monitoring #Splunk #Databases #Macros #Hadoop #Cloud #EC2 #ETL (Extract, Transform, Load) #REST API #REST (Representational State Transfer) #Visualization #Indexing #SaaS (Software as a Service) #Logging #AWS (Amazon Web Services) #Deployment #API (Application Programming Interface) #Scripting #Linux #Unix
Role description
Hello, my name is Rajat, and I am a Technical Recruiter at K-Tek Resourcing. We are searching for professionals for the business requirements below for one of our clients. Please send me your updated resume at rajat.rathore@ktekresourcing.com. My number is 832 743 6754.

JD: Splunk L3

• Experience in both on-prem and SaaS design, implementation, and support of Splunk (indexers, forwarders, search-head setup, etc.).
• Experience implementing and administering Splunk.
• Experience in Splunk Core/ITSI implementation, preferably in a service provider network, with strong UNIX/Windows knowledge.
• Hands-on experience with multi-site clusters and all Splunk components.
• Hands-on experience with all log onboarding techniques in Splunk (monitoring, DB Connect, HEC, syslog, and REST API) for ITSI analysis; a hedged HEC sketch follows this description.
• Support large-scale deployments with data feeds from multi-tier, on-premises data center deployments.
• Develop reliable, efficient queries that feed custom alerts, dashboards, macros, and all kinds of knowledge objects; see the search sketch after this list.
• Build new Splunk heavy forwarders based on requirements, following standard procedure.
• Expertise in the data onboarding flow: inputs (inputs.conf), parsing (props and transforms), indexing (indexes.conf), and searching (props and transforms).
• Expertise in data summary creation (summary indexes, report acceleration, and data model acceleration); extensive use of most Splunk knowledge objects and components; implementation of platform best practices.
• Fluent with the Linux OS, including knowledge of applications such as rsyslog, syslog-ng, and net-snmp.
• Understanding of logging methods such as syslog and SNMP.
• Knowledge of scripting (Python, JavaScript, etc.) as needed to support data collection or integration.
• Hands-on experience with AWS (create EC2 instances and integrate all types of logs, such as CloudWatch and Kinesis, into Splunk); see the CloudWatch sketch after this list.

Nice to have:
• Splunk Admin certification.
• Experience with databases.
• Familiarity with server-side scripting.
• Broad experience from either a development or operations perspective.
• Expert understanding of data analytics, Hadoop, MapReduce, and visualization is a plus.
• Experience working in a software engineering environment is a plus.
• Drive complex deployments of Splunk dashboards and reports while working side by side with customers to solve their unique problems across a variety of use cases.
• Assist internal users of Splunk in designing and maintaining production-quality dashboards.
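
For context on the HEC and REST API onboarding work described above, here is a minimal, hedged sketch of pushing a single event into Splunk over the HTTP Event Collector. The host name, token, index, and sourcetype are placeholders for illustration, not details from this posting.

```python
# Minimal HEC onboarding sketch (hypothetical host, token, index, and sourcetype).
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

def send_event(event: dict, index: str = "main", sourcetype: str = "_json") -> None:
    """POST a single event to the Splunk HTTP Event Collector."""
    payload = {"event": event, "index": index, "sourcetype": sourcetype}
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
        verify=False,  # lab-only; verify TLS certificates in production
    )
    resp.raise_for_status()

if __name__ == "__main__":
    send_event({"message": "service check ok", "host": "app01"})
```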
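
Likewise, the query-development bullet is the kind of work that can be prototyped against Splunk's REST search API before wiring a query into an alert or dashboard. This is a sketch only; the host, credentials, and SPL string are illustrative assumptions.

```python
# Ad-hoc SPL search via the Splunk REST API export endpoint (hypothetical host, credentials, SPL).
import json
import requests

SEARCH_URL = "https://splunk.example.com:8089/services/search/jobs/export"  # placeholder host
SPL = "search index=main sourcetype=syslog error | stats count by host"     # illustrative query

resp = requests.post(
    SEARCH_URL,
    auth=("admin", "changeme"),  # placeholder credentials
    data={"search": SPL, "output_mode": "json", "earliest_time": "-15m"},
    stream=True,
    timeout=60,
    verify=False,                # lab-only; verify TLS certificates in production
)
resp.raise_for_status()

# The export endpoint streams one JSON object per line; print each result row.
for line in resp.iter_lines():
    if line:
        row = json.loads(line)
        print(row.get("result", row))
```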
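
Finally, the AWS bullet (EC2, CloudWatch, Kinesis log integration) can be illustrated with a small boto3 sketch that pulls recent CloudWatch Logs events and forwards them to HEC. The log group, region, HEC host, and token are assumptions used only for illustration.

```python
# Pull recent CloudWatch Logs events with boto3 and forward them to Splunk HEC
# (log group, region, HEC host, and token are placeholders).
import time
import boto3
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

logs = boto3.client("logs", region_name="us-east-1")                  # placeholder region

def forward_recent_events(log_group: str = "/aws/lambda/example", minutes: int = 15) -> None:
    """Read the last `minutes` of events from a CloudWatch log group and POST each to HEC."""
    start = int((time.time() - minutes * 60) * 1000)  # CloudWatch expects epoch milliseconds
    paginator = logs.get_paginator("filter_log_events")
    for page in paginator.paginate(logGroupName=log_group, startTime=start):
        for event in page.get("events", []):
            requests.post(
                HEC_URL,
                headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                json={
                    "event": event["message"],
                    "time": event["timestamp"] / 1000,  # HEC expects epoch seconds
                    "index": "aws",
                    "sourcetype": "aws:cloudwatchlogs",
                },
                timeout=10,
                verify=False,  # lab-only; verify TLS certificates in production
            ).raise_for_status()

if __name__ == "__main__":
    forward_recent_events()
```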