Freelance Kafka / Streaming Data Engineer
Company Description
ThreatXIntel is a startup cybersecurity company dedicated to protecting businesses and organizations from cyber threats. Our team offers a range of services, including cloud security, web and mobile security testing, and DevSecOps. We deliver customized, affordable solutions to meet the specific needs of our clients, ensuring that every business has access to high-quality cybersecurity services. At ThreatXIntel, we take a proactive approach, continuously monitoring and testing digital environments to identify vulnerabilities. Our mission is to provide exceptional cybersecurity services, giving clients peace of mind to focus on growing their business.
We are looking for an experienced Kafka / Streaming Data Engineer to join our team on a freelance/contract basis and support the launch of a high-volume data ingestion and fulfillment architecture. This is an exciting opportunity to work on real-time event streaming at scale, fine-tuning production systems and driving automation.
About the Role
Our current architecture leverages Ab Initio for CDC, Apache Kafka for event streaming, and Apache Flink for downstream fulfillment into databases. We are now moving into production readiness and require a Kafka expert to help stabilize, monitor, and optimize the ecosystem.
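For context, the fulfillment leg of this pipeline looks roughly like the minimal Flink sketch below: a job that consumes CDC events from a Kafka topic and lands them in Postgres over JDBC. The broker address, topic, table, and class names are illustrative placeholders, not our actual configuration.

    // Minimal sketch of a Flink fulfillment job: Kafka in, Postgres out.
    // Broker, topic, table, and connection details are placeholders.
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
    import org.apache.flink.connector.jdbc.JdbcSink;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CdcFulfillmentJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Read raw CDC events (JSON strings) from a Kafka topic.
            KafkaSource<String> source = KafkaSource.<String>builder()
                    .setBootstrapServers("onprem-kafka-1:9092")   // placeholder broker
                    .setTopics("cdc.orders")                      // placeholder topic
                    .setGroupId("fulfillment-consumer")
                    .setStartingOffsets(OffsetsInitializer.earliest())
                    .setValueOnlyDeserializer(new SimpleStringSchema())
                    .build();

            // Land each event in Postgres; a real job would parse, validate, and batch.
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-cdc-source")
               .addSink(JdbcSink.sink(
                       "INSERT INTO orders_raw (payload) VALUES (?)",  // placeholder table
                       (statement, value) -> statement.setString(1, value),
                       new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                               .withUrl("jdbc:postgresql://db-host:5432/fulfillment")
                               .withDriverName("org.postgresql.Driver")
                               .build()));

            env.execute("cdc-fulfillment");
        }
    }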
Responsibilities
- Manage and fine-tune Kafka clusters for high throughput, scalability, and reliability.
- Automate Kafka installation, configuration, and deployment using Ansible scripting.
- Build a production-ready solution for replicating data from on-premise to AWS cloud (see the replication sketch after this list).
- Monitor end-to-end pipelines, including Ab Initio CDC jobs, Kafka topics, and Flink consumers.
- Capture and report key metrics: lag, latency, throughput, and record ingestion/consumption.
- Configure and optimize Prometheus + Grafana dashboards for observability and alerting.
- Troubleshoot production issues, propose alternative solutions, and stabilize systems.
- Participate in UAT, load testing, and production readiness simulations.
- Drive automation for repetitive operational tasks and cluster tuning.
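For the on-premise-to-AWS replication item, one standard approach is Kafka's built-in MirrorMaker 2. The properties file below is a minimal sketch; the cluster aliases, bootstrap addresses, and topic pattern are assumptions for illustration, and the actual design will be agreed on during the engagement.

    # mm2.properties -- minimal MirrorMaker 2 sketch; hosts and topic pattern are hypothetical
    clusters = onprem, aws
    onprem.bootstrap.servers = onprem-kafka-1:9092,onprem-kafka-2:9092
    aws.bootstrap.servers = b-1.example-msk.amazonaws.com:9092

    # Replicate one direction only: on-premise -> AWS
    onprem->aws.enabled = true
    onprem->aws.topics = cdc\..*

    # Durability on the target cluster
    replication.factor = 3
    checkpoints.topic.replication.factor = 3
    heartbeats.topic.replication.factor = 3
    offset-syncs.topic.replication.factor = 3

    # Keep topic configs (e.g. retention) in sync with the source
    sync.topic.configs.enabled = true

Launched with bin/connect-mirror-maker.sh mm2.properties, this mirrors every topic matching the pattern into the AWS cluster, where MirrorMaker 2 by default prefixes replicated topics with the source alias (e.g. onprem.cdc.orders).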
Requirements
- Proven, hands-on expertise with Apache Kafka (cluster management, partitions, replication, retention policies, monitoring).
- Solid experience in streaming data pipelines and real-time processing.
- Strong skills with Prometheus, Grafana, and monitoring/alerting setups (an example alert rule follows this list).
- Knowledge of Apache Flink (or equivalent ETL/streaming frameworks).
- Familiarity with Postgres and CDC techniques.
- Ability to optimize, fine-tune, and troubleshoot large-scale, high-volume production data systems.
- Experience in production-grade environments (multi-billion-row datasets a plus).
- Problem-solving mindset: able to propose quick workarounds and long-term solutions.
- Excellent communication skills to collaborate across development, UAT, and fulfillment teams.
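To give a concrete flavor of the monitoring/alerting work, a consumer-lag alert of the kind this role would own might look like the rule below. It assumes the kafka_consumergroup_lag metric exposed by kafka_exporter; the threshold, duration, and labels are placeholders.

    groups:
      - name: kafka-consumer-lag
        rules:
          - alert: KafkaConsumerGroupLagHigh
            # kafka_consumergroup_lag comes from kafka_exporter; the threshold is illustrative
            expr: sum by (consumergroup, topic) (kafka_consumergroup_lag) > 100000
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "Consumer group {{ $labels.consumergroup }} is lagging on {{ $labels.topic }}"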
Engagement Details
- Type: Freelance / Contract (Remote)
- Duration: 3–6 months, extendable
- Commitment: Full-time preferred, flexible for the right candidate
- Location: Remote (global applicants welcome)
- Compensation: Competitive, based on experience