Spring REST, Java, Hibernate, Oracle RDBMS, Java REST Web API, Spring Core, Spring Boot, MySQL
Job Requirements
Key Responsibilities
• Build and support Java-based applications in the data domain.
• Design, develop, and maintain robust data pipelines in Hadoop and related ecosystems, ensuring data reliability, scalability, and performance.
• Implement ETL processes for batch and streaming analytics requirements.
• Optimize and troubleshoot distributed systems for ingestion, storage, and processing.
• Collaborate with data engineers, analysts, and platform engineers to align solutions with business needs.
• Ensure data security, integrity, and compliance throughout the infrastructure.
• Maintain documentation and contribute to architecture reviews.
• Participate in incident response and operational excellence initiatives for the data warehouse.
• Continuously learn and apply new Hadoop ecosystem tools and data technologies.

Required Skills and Experience
• Java + Spring Boot: Build and maintain microservices (a minimal sketch follows this list).
• Apache Flink: Develop and optimize streaming/batch pipelines (see the second sketch below).
• Cloud Native: Docker, Kubernetes (deploy, network, scale, troubleshoot).
• Messaging & Storage: Kafka; NoSQL key-value and document stores (Redis, Memcached, MongoDB, etc.).
• Python: Scripting, automation, data utilities.
• Ops & CI/CD: Monitoring (Prometheus/Grafana), logging, pipelines (Jenkins/GitHub Actions).
• Core Engineering: Data structures/algorithms, testing (JUnit/pytest), Git, clean code.
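As context for the Java + Spring Boot requirement, here is a minimal sketch of the kind of REST microservice the role describes. The class name and endpoint are illustrative assumptions, not details from this posting:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical service: a single-file Spring Boot app exposing one REST endpoint.
@SpringBootApplication
@RestController
public class DemoDataServiceApplication {

    public static void main(String[] args) {
        // Starts the embedded web server and the Spring application context.
        SpringApplication.run(DemoDataServiceApplication.class, args);
    }

    // Simple liveness endpoint of the kind a Kubernetes probe would call.
    @GetMapping("/health")
    public String health() {
        return "OK";
    }
}
```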
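Similarly, for the Apache Flink requirement, a minimal DataStream job using Flink's Java API. The bounded in-memory source is a stand-in assumption; a production pipeline like those described above would typically read from Kafka:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Hypothetical pipeline: a minimal Flink DataStream job.
public class DemoPipeline {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // fromElements keeps the sketch self-contained; in a real job this
        // would be a Kafka source feeding the same transformation chain.
        env.fromElements("event-a", "event-b", "event-a")
           .map(String::toUpperCase)  // stand-in for a transformation step
           .print();                  // stand-in for a real sink

        env.execute("demo-pipeline");
    }
}
```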