June 27, 2024

3 Strategies to Supercharge Developer Operational Efficiency

Peter Barnard
Content @ Kong

Developer operational efficiency is crucial for streamlining API management processes and empowering development teams to work more effectively. In this blog post, we'll explore three key tips to unlock developer operational efficiency — leveraging API documentation and self-service credential management, automating API lifecycle management, and optimizing resources and performance — using Kong Konnect and Kong Kubernetes Ingress Controller (KIC).

1. Leverage API documentation and self-service credential management 

The first key benefit of self-service API documentation and testing is that they let developers get started with your APIs quickly, without any manual intervention. With Kong Konnect, you can publish detailed API documentation and specs directly to the developer portal using either the Kong Konnect UI or APIs. This lets developers explore the request and response structures, authentication methods, and error codes without needing to configure the portal itself.
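As a rough sketch, here's the kind of OpenAPI document you might publish to the portal so developers can see those structures at a glance. The API name, path, and responses are placeholders; the `apikey` header reflects Kong's default key-auth header name:

```yaml
# Hypothetical OpenAPI spec published to the developer portal.
# Documents request/response structures, the auth method, and error codes.
openapi: 3.0.3
info:
  title: Orders API          # example API product name
  version: 1.0.0
paths:
  /orders/{id}:
    get:
      summary: Fetch a single order
      security:
        - apiKeyAuth: []     # key issued via portal self-service
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
        "401":
          description: Missing or invalid API key
        "429":
          description: Rate limit exceeded
components:
  securitySchemes:
    apiKeyAuth:
      type: apiKey
      in: header
      name: apikey           # Kong's key-auth plugin reads this header by default
```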

Additionally, Kong Konnect provides a self-service experience where developers can register themselves and create API keys without any manual approval process. By configuring auto-approval for developers and applications, you can ensure a seamless onboarding experience. Developers can come to your developer portal, get started on their own, and make live calls to test the API endpoints without writing any code. This rapid feedback loop shortens onboarding significantly and lets developers get a feel for your APIs quickly.

2. Automate API lifecycle management 

Managing the lifecycle of your APIs is made easy with Kong Konnect and the Kong Kubernetes Ingress Controller (KIC). You can version and deprecate your API products directly within the developer portal using either the Kong Konnect UI or APIs, and the changes will be automatically reflected in the portal.

KIC takes automation a step further by allowing you to run the Kong Gateway as a Kubernetes ingress to handle inbound requests for your Kubernetes clusters. It can convert Kubernetes resources, such as an HTTPRoute or an Ingress, into valid Kong Gateway configurations. Any updates made to your Kubernetes resources will automatically trigger updates to your Kong Gateway configuration, eliminating the need for manual intervention.
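For instance, a minimal Ingress like the sketch below (service name and path are placeholders) is all KIC needs to generate the corresponding Kong route and service; editing the resource updates the gateway configuration automatically:

```yaml
# Hypothetical Ingress that KIC translates into a Kong route and service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
  annotations:
    konghq.com/strip-path: "true"   # optional Kong-specific behavior via annotation
spec:
  ingressClassName: kong            # hand this Ingress to KIC
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders        # an existing Kubernetes Service
                port:
                  number: 80
```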

By managing your API lifecycle through code and version control, you can version and duplicate configurations easily and fold them into your CI/CD pipeline. This approach ensures consistency, reproducibility, and easier collaboration among team members.
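One way to keep that configuration in version control is a declarative file that your pipeline applies on every change, for example in the format used by Kong's decK tool. The service, route, and URL below are purely illustrative:

```yaml
# Illustrative declarative gateway config kept in Git and applied from CI/CD
# (for example with Kong's decK tool). Names and URLs are placeholders.
_format_version: "3.0"
services:
  - name: orders-v1
    url: http://orders.default.svc.cluster.local:80
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting
        config:
          minute: 60
          policy: local
```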

3. Optimize resources and performance 

The Kong Kubernetes Ingress Controller (KIC) offers a range of capabilities to optimize your resources and performance out of the box. One such feature is the ability to set service weights for traffic distribution. By configuring these service weights, you can control how network requests are distributed among your services. For example, you can split traffic between two versions of a service in a two-to-one ratio.
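A sketch of that two-to-one split using a Gateway API HTTPRoute with weighted backends (the Gateway and Service names are assumptions):

```yaml
# Sketch: weighted traffic split across two Services via the Gateway API.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: orders-split
spec:
  parentRefs:
    - name: kong          # the Gateway managed by KIC (name is an assumption)
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - name: orders-v1
          port: 80
          weight: 2       # roughly two thirds of requests
        - name: orders-v2
          port: 80
          weight: 1       # roughly one third of requests
```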

Rate limiting is another essential aspect of resource optimization. With KIC, you can configure rate-limiting plugins without any external dependencies. Kong stores request counters in memory, and each node applies the rate-limiting policy independently. By creating and associating a rate-limiting plugin with your service, Kong will enforce the specified limits, blocking requests that exceed the threshold and returning a 429 "Too Many Requests" error.
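Here's roughly what that looks like with KIC: a KongPlugin resource configures the rate-limiting plugin with in-memory (local) counters, and an annotation attaches it to an existing Service. The names and limits are placeholders:

```yaml
# Sketch: rate limiting with no external datastore (counters kept in memory).
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-per-minute
plugin: rate-limiting
config:
  minute: 60        # allow 60 requests per minute
  policy: local     # each Kong node counts independently, in memory
---
apiVersion: v1
kind: Service
metadata:
  name: orders
  annotations:
    konghq.com/plugins: rate-limit-per-minute   # attach the plugin to this Service
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

Requests over the limit are rejected with the 429 "Too Many Requests" error described above.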

To ensure the stability and availability of your services, KIC allows you to set up passive health checks. When a passive health check is configured for a service running in your cluster, Kong monitors the traffic it proxies to each pod for errors. If a pod returns consecutive errors, Kong stops proxying further requests to that pod and returns a 503 status code instead. This short-circuiting mechanism helps prevent cascading failures and maintains the overall health of your system. Because a short-circuited pod no longer receives traffic, passive checks cannot mark it healthy again on their own; manual intervention is required to delete or scale the pod, or to change its health status using Kong's Admin API.
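One way to express a passive health check is shown in the sketch below, using the KongIngress CRD's upstream section (its fields mirror Kong's upstream health-check settings) applied to a Service via an annotation. The names and thresholds are illustrative:

```yaml
# Sketch: passive health check that short-circuits a failing pod.
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: passive-healthcheck
upstream:
  healthchecks:
    passive:
      unhealthy:
        http_failures: 3    # stop proxying to a pod after 3 consecutive HTTP failures
        http_statuses:
          - 500
          - 503
---
apiVersion: v1
kind: Service
metadata:
  name: orders
  annotations:
    konghq.com/override: passive-healthcheck   # apply the upstream settings to this Service
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```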

See Kong in action 

To see Kong Konnect and the Kong Developer Portal in action, check out the live demo video below for a step-by-step walkthrough of key features, including API lifecycle management, developer portal exploration, application creation, API key generation, and testing API endpoints.

KIC use cases

This video also covers three key use cases for the Kong Kubernetes Ingress Controller (KIC): 

  • KIC can be used to set service weights and distribute traffic to gateway services in the desired manner, enabling fine-grained traffic management and scenarios like canary deployments or A/B testing.
  • You can implement rate limiting using KIC without external dependencies, with Kong storing request counters in memory and each node applying the rate limiting policy independently to protect services from excessive traffic.
  • You can set up passive health checks using KIC to automatically short-circuit requests to misbehaving pods in a Kubernetes cluster, helping prevent cascading failures and maintain overall system health.

To learn more about these use cases and see them in action, check out the use case section of the video above.

Conclusion

This blog explored three tips for unlocking developer operational efficiency using Kong Konnect and Kong Kubernetes Ingress Controller. By leveraging API documentation and self-service credential management, automating API lifecycle management, and optimizing resources and performance, you can streamline your API management processes and empower your development teams to work more efficiently.

Want to unleash your true potential with Kong Konnect? Get started with Kong Konnect for free and claim the efficiency that you deserve!