Lead Engineer - R01557124

Bangalore, India
Job ID: R01557124

Primary Skills
  • JMeter, application monitoring tools, performance reporting, HP LoadRunner

Job Requirements

Senior Performance Test Engineer (Experience: 7–12 years)
Number of positions: 4

Key Responsibilities
  • Design and implement performance test strategies for LLM-driven workflows, MCP-compliant tool orchestration, and streaming APIs
  • Develop modular, reusable test scripts using JMeter, Gatling, or Locust with Groovy/Bash/Python integration
  • Build polling loops and timing extractors to measure end-to-end latency across multiple layers (illustrative sketches follow at the end of this posting)
  • Simulate concurrent user flows across mobile, web, and desktop interfaces
  • Validate the scalability of components such as the Chat API, LLM Gateway, Planner Agent, Actor Agents, Tool Registry, and MCP Server
  • Integrate performance tests with CI/CD pipelines and APM tools (Dynatrace, AppDynamics, Grafana)
  • Analyze test results, identify bottlenecks, and provide actionable recommendations
  • Collaborate with engineering and DevOps teams to tune infrastructure and optimize response times
  • Document test plans, scenarios, and results with clarity and auditability
  • Mentor junior testers and contribute to performance testing best practices

Required Skills
  • Performance engineering: JMeter, Gatling, and Locust for load tests; experience with LoadRunner and NeoLoad for enterprise-grade performance testing
  • Streaming protocol testing: WebSockets, HTTP/2, SSE; strong understanding of HTTP/HTTPS, REST APIs, and WebSocket protocols
  • LLM performance and profiling: tool orchestration, streaming architectures, token latency, and context-window impact
  • MCP orchestration testing: multi-agent workflows, sandboxing
  • Observability and monitoring: Prometheus, Grafana, OpenTelemetry, or similar tools; experience with Dynatrace or AppDynamics
  • Cloud and container scaling: Kubernetes HPA, auto-scaling policies
  • Scripting and automation: Groovy/Bash/Python, Go (for custom load scripts)
  • Kafka performance tuning: consumer lag, partition balancing
  • CI/CD integration with Jenkins, GitLab, or Azure DevOps
  • Root-cause analysis and iterative debugging across distributed systems
  • Clear, concise documentation and audit-trail creation
  • Bias-aware collaboration and mentoring in diverse teams

Preferred Experience
  • Performance testing of LLM-based systems or AI orchestration platforms
  • Experience with MCP-compliant tool orchestration and sandboxed tool execution
  • Familiarity with streaming architectures and real-time message flow
  • Exposure to Azure-hosted services, AWS Bedrock, or private LLM deployments
  • Designing polling loops and timing extractors for asynchronous workflows
  • Deep understanding of concurrency, thread management, and load distribution
  • Experience with test data generation for prompt variability and tool invocation paths
  • Strong grasp of fairness, inclusion, and ethical testing practices in AI systems
  • Ability to collaborate across engineering, DevOps, and product teams to drive performance goals
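To illustrate the "polling loops and timing extractors" and streaming-API responsibilities above, here is a minimal, non-authoritative Locust sketch that measures time-to-first-token and end-to-end latency for a hypothetical SSE streaming chat endpoint. The /chat/stream path, payload fields, and metric names are illustrative assumptions, not part of this posting.

```python
# A minimal sketch of streaming-latency measurement with Locust.
# Assumptions (not from the posting): a /chat/stream SSE endpoint and a JSON
# payload with a "prompt" field; metric names are illustrative.
import time
from locust import HttpUser, task, between


class StreamingChatUser(HttpUser):
    wait_time = between(1, 3)  # think time between simulated user actions

    @task
    def stream_chat_completion(self):
        payload = {"prompt": "Summarise the quarterly report", "stream": True}
        start = time.perf_counter()
        first_token_at = None
        total_bytes = 0

        # stream=True keeps the connection open and yields SSE chunks as they arrive.
        with self.client.post("/chat/stream", json=payload, stream=True,
                              catch_response=True, name="chat_stream") as response:
            for chunk in response.iter_lines():
                if not chunk:
                    continue
                if first_token_at is None:
                    first_token_at = time.perf_counter()
                total_bytes += len(chunk)
            response.success()

        end = time.perf_counter()

        # Report time-to-first-token and end-to-end latency as separate entries
        # so they appear as distinct rows in the Locust statistics.
        if first_token_at is not None:
            self.environment.events.request.fire(
                request_type="SSE", name="chat_time_to_first_token",
                response_time=(first_token_at - start) * 1000,
                response_length=0, exception=None, context={})
        self.environment.events.request.fire(
            request_type="SSE", name="chat_end_to_end",
            response_time=(end - start) * 1000,
            response_length=total_bytes, exception=None, context={})
```

Run with the usual Locust CLI (e.g. `locust -f streaming_chat.py --host https://perf-env.example.com`); the host and file name here are placeholders.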
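Similarly, a hedged sketch of a polling loop with a timing extractor for an asynchronous workflow: submit a job, poll until it reaches a terminal state, and split end-to-end latency into per-stage components. The host, endpoint paths, and response fields are assumptions for illustration only.

```python
# A sketch of a polling loop plus timing extractor for an asynchronous workflow.
# Assumptions (not from the posting): a POST /jobs endpoint returning {"id": ...}
# and a GET /jobs/{id} endpoint returning {"state": "completed" | "failed" | ...}.
import time
import requests

BASE_URL = "https://perf-env.example.com"   # assumed test-environment host


def measure_async_workflow(prompt: str, poll_interval: float = 0.5,
                           timeout: float = 120.0) -> dict:
    t_submit = time.perf_counter()
    job = requests.post(f"{BASE_URL}/jobs", json={"prompt": prompt}, timeout=30).json()
    job_id = job["id"]
    t_accepted = time.perf_counter()

    # Poll until the job reports a terminal state or the overall timeout is hit.
    status = {}
    deadline = t_submit + timeout
    while time.perf_counter() < deadline:
        status = requests.get(f"{BASE_URL}/jobs/{job_id}", timeout=30).json()
        if status.get("state") in ("completed", "failed"):
            break
        time.sleep(poll_interval)
    t_done = time.perf_counter()

    # Timing extractor: break end-to-end latency into per-layer components.
    return {
        "submit_ms": (t_accepted - t_submit) * 1000,
        "processing_ms": (t_done - t_accepted) * 1000,
        "end_to_end_ms": (t_done - t_submit) * 1000,
        "final_state": status.get("state"),
    }
```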