How to Scale High-Performance APIs and Microservices
In this episode of Kongcast, Jeff Taylor, senior product manager at Okta, tells and shows us how to speed up microservices security and take the burden off developers by managing auth with an API gateway.
Check out the transcript and video from our conversation below, and be sure to subscribe to get email alerts for the latest new episodes.
Kaitlyn: Could you kick us off by giving us a quick rundown of what we will be talking about today and how it relates to common connectivity challenges?
Jeff: I work on our SDKs and dev tools at Okta, so helping our customers work in their customized scenarios is what brings me a lot of joy.
I was running a microservices group in my previous life, and we were in charge of revolutionizing how our company thought about building software. And what came across was how you could break up traditional monoliths into microservices.
So we’re going to talk about how Okta can help you do this with API gateways, specifically using Kong Konnect.
Kaitlyn: I’d love to start kind of right at the beginning where, especially for companies who are beginning with monoliths, what are some of the signs? How do you know you’re ready to break up your monolith into microservices?
Jeff: When I started, I was anti-monolith. Monoliths had been the bane of my existence. So I was like, “we’ve got to break everything up to start.” And then I found that when I was trying to go to market with new applications, that wasn’t reasonable.
So one of the things that I latched onto was this concept called an application continuum. I loved it because it was just a spectrum around the life cycle of an application. So you could go from building a monolith. And over time, you could break that up into different components within the code and then break that finally into separate microservices so that you can work a little bit more independently.
But again, the question becomes, “When do I start this? What is a good indicator?”
When you start to realize that there are sections of your code where you don't know what's going on, and you're avoiding those sections, it's usually a good indicator that the code base is getting big.
You start to see your functions and methods doing more than just one thing. So you begin to violate the single responsibility principle. They’re taking on a lot. It might be time to break those apart and provide sections.
Now what's great about the continuum is it doesn't say you have to go from zero to one immediately. You can start this as you're working on improving features and adding more to your codebase. You can take note of, "Oh, I haven't touched this in a while. Let me take a look here and see if I can separate it out."
So usually, when you start to fear areas of your code, that's when you probably want to think about breaking it out into its components or microservices.
Kaitlyn: One of the things that happens as you move from a monolith to all of these different services is that security comes into play. You have to start worrying about security across all of those individual services. How do you think about doing that right without burdening the developers?
Jeff: You have to do two main things. You have to figure out what the shape of your API and contract is going to be. That has to do with an object model – looking at things like REST, where we map resources and sub-resources. We’re mapping out our object model and its hierarchy inside of code through these APIs. That takes a lot of thought. So having to do that and, on top of that, figuring out how I can make sure that the right people have the right access to the right methods at the right time becomes exceedingly difficult. And so you look to offload some of those needs into other programs that can help you standardize it.
I use that term “standardize” because it’s a good signal for a need for maturity that happens when you’ve got to do this across multiple different applications or even software offerings that you have. But you have to figure out how you can scale it to do it quickly and respond to the changing customer needs. This is an indicator where you need to utilize another tool to help you get there.
And that’s where things like API gateways come in handy. They can take and offload the security. You can template it and make sure that all of your APIs that you’re developing are protected in the same way to ensure that you have security across the board.
It also allows your operators to work with your application teams to deliver a safe, cohesive solution and deliver value to the customers.
Kaitlyn: So talking about that API gateway layer, what you can do in that layer, as you said, is standardize your authentication. Can you define that for our audience who may be new to API gateway authentication or are trying to build this out right now? There are so many different ways we think about authentication across an application. What are the types of authentication that fit in this layer?
Jeff: Yeah, so let's separate this from the one you're probably familiar with. I'm sure everyone has used a website where you log in with a username and password, and you get access to an application. Well, that's one way of doing it. We'll set that to the side.
Now there's another thing when machines are connecting. They also have to identify themselves to ensure that they have the right authorization to request resources from another machine. The way that we do this is through a protocol we call OAuth. Instead of the two machines sharing credentials with each other directly, OAuth inserts an authorization server in the middle to broker the communication.
It allows a broker to sit between the two: the credentials go up to the broker, which creates a representation we call a bearer token. That token is passed over to the second application, which can then verify it independently with that authorization server. This removes the burden on each application of protecting the other's credentials while doing its operations. So in effect, it creates a seamless way for the applications to interact with each other without having to know that much about each other.
And the other part of this is if you want to bring users into this mix, we’ve got another protocol on top of that called OpenID Connect (OIDC) that performs that same action but allows the user context to come into those authentication modes. And this is the way that most of our service-to-service communication happens securely.
API gateways will allow you to instantiate that and help broker that conversation so you don’t have to do that inside of your own code.
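To make the bearer-token flow above concrete, here's a minimal sketch in Python. It is not a real OAuth implementation: the issuer URL, client and user names are hypothetical, and a shared HMAC secret stands in for the asymmetric keys a real authorization server would use. One function plays the authorization server minting a token; the other plays the downstream service (or gateway) verifying it without ever seeing the caller's credentials.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; real authorization servers sign with private keys
# and publish public keys for independent verification.
SECRET = b"demo-shared-secret"

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(client_id: str, subject: str, ttl: int = 300) -> str:
    """Authorization server: exchange validated client credentials for a bearer token."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({
        "iss": "https://auth.example.com",   # hypothetical issuer
        "client_id": client_id,
        "sub": subject,                      # OIDC-style user context rides along
        "exp": int(time.time()) + ttl,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> dict:
    """Receiving service or gateway: verify the token independently."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

claims = verify_token(mint_token("billing-service", "user-42"))
print(claims["sub"])  # prints "user-42"
```

The point Jeff makes lands in the last two functions: the verifying side never handles the calling application's credentials, only a signed, expiring token it can check on its own.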
Kaitlyn: One of the things that we talk a lot about at Kong is when introducing these new technologies or new layers into your application, it becomes a lot for your ops team to manage. And you mentioned building this in a way with empathy for the DevOps engineers. Can you talk about how to do that or what exactly that means?
Jeff: Yeah, and what it means here is on both sides, right? You have to create empathy for the application developers and focus on what they need to do. But for the operators, you have to allow them to scale their needs.
We see in operations that they deal with many requests, like “I have to set up this thing,” “I have to change this other thing.” And because we are looking to lock down who has access to protected or privileged resources, we only allow it to go through a certain group of people, and usually, that team is tiny. So how do you make that team more efficient?
Building standardization into the tooling allows you to automate a lot of these requests and bring new applications into the fold a lot easier by doing things in a standardized way.
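As a rough sketch of that kind of standardization, the function below templates the onboarding of a new service. This is not Kong's actual Admin API; the declarative shape, plugin names, and issuer URL are hypothetical stand-ins, loosely modeled on a gateway's service/route/plugin configuration. The idea it illustrates is that every new API inherits the operators' security template instead of re-implementing it.

```python
def onboard_service(name: str, upstream_url: str, paths: list[str]) -> dict:
    """Apply the team's standard template so every new API gets identical protection."""
    return {
        "name": name,
        "url": upstream_url,
        "routes": [{"name": f"{name}-route", "paths": paths}],
        "plugins": [
            # Security is standardized here once, not rebuilt per application.
            {"name": "openid-connect",
             "config": {"issuer": "https://auth.example.com"}},  # hypothetical issuer
            {"name": "rate-limiting", "config": {"minute": 60}},
        ],
    }

svc = onboard_service("orders", "http://orders.internal:8080", ["/orders"])
print(svc["plugins"][0]["name"])  # prints "openid-connect"
```

With a template like this behind a self-service tool, an application team can request a new endpoint without a ticket, and the operators stay in charge of the standard rather than of every individual route.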
Back in the old days, before we had these fancy tools, operations engineers were judged by how well they could script certain applications, through the command line or whatever they had. They would build these standard processes and ship them out as CLI methods that they could call. They might be the only ones calling them, but the scaling is happening behind the scenes.
By creating more UI and accessible tools, you’re allowing teams to self-service where the operators are in charge of setting up the standards by which they can make their endpoints available but not actually in charge of doing the route instrumentation every single time.
So this allows them to stay on the forefront and think forward about the security posture as certain things change. We’ve seen a lot of changes coming in the pandemic with spoofing and phishing and all of this. You want to keep your DevOps group at the forefront of that, making those changes and propagating them down to every application that needs them.
Kaitlyn: One more thing that comes into play and can become a lot to manage for folks is when you're dealing with different services, you're not necessarily dealing with a standard programming language across them. That's kind of the beauty, right? Everyone can work in their preferred programming language. But can you talk about how language agnosticity comes into play here with an API gateway, as you start thinking about authentication to standardize some of that stuff?
Jeff: Yeah, I think that’s a fundamental concept. I was really big on collecting as many value-driven services as possible back in my old job. Part of that means that you want to accept whatever a developer feels comfortable writing. You get better code that way because you’re asking someone to work in their strengths.
The burden then becomes that I have to support and standardize how they expose those different endpoints to maintain that high principle of language agnosticity. It becomes important that we figure out ways to connect these languages. Usually, that will come in a standard form to expose their APIs, and the API gateway can absorb that. It’s that age-old conversation of how you would integrate with another application. There are some opinions on both sides, and they have to sort of meet in the middle.
This is a great way to extend your operations team by inviting these other languages and frameworks to connect to your API gateway.
This unleashes the power of a full development force, instead of creating a small team and pushing the bottleneck downstream, where one team is in charge of translating all of these microservices to be exposed through an API gateway.
You want to avoid that, because all you're doing is pairing a highly scalable DevOps team with a low-scale microservices team that can't handle the flood of requests. So have you really increased efficiency across the board? I would say no. But again, this is exactly what you said: keeping it language agnostic and standardizing how you expose those endpoints through an API gateway can help you achieve that scale and diversity in what you offer.
I hope you’ll join us again on February 21 for our next Kongcast episode with Matt Stratton from Pulumi.
Until then, be sure to subscribe to Kongcast to get episodes sent to your inbox (and a chance to win cool SWAG)!