
Opening Keynote (Part 3): Enabling API Connectivity Within the Enterprise

Reza Shafii, VP of Product, Kong

VP of Product Reza Shafii dives into Kong’s latest product announcements that will enable our customers and community to excel at API connectivity within the enterprise, including Insomnia Projects, Kong Gateway 2.6, Kong Ingress Controller 2.0, Kong Mesh 1.5 and Okta support for Kong Konnect. Guest speakers from Comcast, Checkr, American Airlines and Vanguard also make an appearance to share their special journeys with Kong.

Opening Keynote (Part 3): Enabling API Connectivity Within the Enterprise
Speaker1: All right. So what does it take to make connectivity be available just like electricity? Imagine a world where your developers, your application teams are able to access an array, a toolkit of APIs that represent all of your organization’s functions, that represent, most of all, your organization’s applications, and leverage these APIs as building blocks to build new digital experiences at speed, consistently and at quality. And in doing so, also being able to expose new APIs that go back into the toolkit for others to leverage. That’s what it means for connectivity to be available just like electricity. Now, what does it take to make that world happen? To answer that question, we really figured it’s best to go back to you. To you, our community, to you, our users, to you, our customers, and see what you’ve done to make such a world happen. And in going back to you, what we’ve noticed are three patterns, three areas, three activities that really give energy to each other and enable a synergy that makes connectivity available just like electricity. These are: building for developers, so how do you enable developers to explore, design and test APIs seamlessly in a modern world? Then, when we come to operators, how do you enable them to run those APIs on a solid foundation that spans the modern infrastructure, across Kubernetes, across multiple clouds? And then last but not least, how do you build that toolkit, that catalog of microservices, that makes consumable services available? So today we’re going to do three things. First, we’re going to talk about the trends, the trends of success that we’ve noticed from our customers. Second, we’re going to bring our customers in and you’re going to hear firsthand from them on how they’ve been able to enable this. And third, we’re going to talk about our products and make some really exciting product announcements. Let’s do it.

We start with the build phase, and the trend of success in the build phase is all about moving from a governance-led model to a developer experience model. This doesn’t mean that governance is not important. What it means is that governance needs to be embedded, integrated into the developer experience flow, to enable developers to use the tools they’re most comfortable with and to enable governance to happen through a carrot-based model, so that they have access to the right tools and the right policies to make it happen. And our tool for making that happen is Insomnia. Insomnia is the number one open source tool out there for API exploration, API design and testing. And the numbers really speak for themselves. More than one point five million downloads. Ten thousand specs built since the beginning of the year. And the last number is really my favorite: three hundred and thirty plus community developer plugins contributed. This is the power of the community, and every one of these plugins enhances some subset of the developer’s experience, so together they enable an optimal developer experience.
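To give a flavor of what these community plugins look like: an Insomnia plugin is just an npm module that exports request or response hooks. A minimal sketch, with a purely illustrative header name, might look like this:

```javascript
// Minimal Insomnia plugin sketch: request hooks run before each request is sent.
// The header name below is illustrative, not from the keynote.
module.exports.requestHooks = [
  (context) => {
    // Stamp every outgoing request with a team-wide header.
    context.request.setHeader('X-Team', 'api-platform');
  },
];
```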

So let’s see Insomnia in action, and let’s see Stephanie Kalacis, our UX designer, show us how Insomnia comes together. Steph, it’s all yours.

Speaker2: Thanks, Reza. So today I’ll be demonstrating how developers can use Insomnia to design, debug and test their APIs. Starting off with the spec, I’ll copy this in and create a new document using the import from clipboard function. Clicking in here, you can see that this is an API that returns information about Kong’s products and all the sessions available at Summit this year. If I look into the paths over here, you can see that reflected in the preview pane to the right. So I’ll take a look at the GET sessions operation. If I click in here, scroll down to responses and go into the schema, I can see I get a list of responses back, each of which contains a title, description, list of presenters, and date. So now I’ve gone off and implemented the spec, and I’ll validate it using the Debug tab up at the top here. Looking into the sessions folder to the left, I can see the same paths that we saw in the spec have been generated into Insomnia requests. I can click on one of them, simply send the request and see that the server has responded with all of the Summit sessions available this year. Going into the Query tab, I can see that there’s a parameter pre-populated from the spec that includes the example value. Enabling this, I can send it over, and now you can see the response contains only the sessions available on this date. Now that it’s all debugged, I’ll go into the Test tab to write a unit test. I’ll create a new test suite, and inside that, I’ll create a test that ensures the response returns as expected. I’ll select that same request we’ve been looking at, open it up, and replace the boilerplate code with a test I’ve previously written. Running the test, I can see that it passed. Now I can intentionally break this test by changing the array to a string. Running it now, you can see that it failed. So this is a demonstration of the end to end life cycle of designing, debugging and testing an API using Insomnia. Thank you, Reza, I hand it back to you.
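As a rough sketch of what such a unit test looks like: Insomnia tests are JavaScript with Chai-style assertions, and the exact response shape below is assumed from the demo rather than shown in it.

```javascript
// Send the request selected for this test (e.g. GET /sessions).
const response = await insomnia.send();

// The server should respond successfully...
expect(response.status).to.equal(200);

// ...and the body should parse to an array of sessions.
const sessions = JSON.parse(response.data);
expect(sessions).to.be.an('array');
```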

Speaker1: All right, thank you, Steph. And what’s great is that all of the capabilities that Steph just showed us are available through Insomnia’s command line, inso. What that means is that one can insert commands into the CI pipeline to run those same tests and actually see them in action through an automation model. Now, what you noticed there was that Steph was alone, doing this all by herself. But of course, in the typical world, application teams consist of many, many developers. So how do developers collaborate together on the type of capabilities that Steph showed us? Well, I’m really excited to be announcing a new feature in Insomnia today, and that’s called Insomnia Projects. Insomnia Projects enables developers to share different collections of API calls and collaborate together seamlessly. And really, this is the beginning, the tip of the iceberg, in the type of collaboration capabilities that we’re planning on building into Insomnia in its journey to becoming the most popular API exploration, design, and testing tool out there.
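For example, a CI job might lint the design document and run the test suite with commands along these lines (the document and suite names here are placeholders, not from the demo):

```sh
# Lint the OpenAPI design document (identifier is the document's name or ID).
inso lint spec "Summit Sessions API"

# Run the unit test suite non-interactively, against a chosen environment.
inso run test "Sessions Suite" --env "CI" --ci
```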

Ok, let’s now move to the run phase and talk about the trend of success there. Really, this is a mega mega trend. This is a trend from centralized to distributed. And the reason why I say mega mega trend is because it’s backed by at least three megatrends. The infrastructure architecture, as we heard from Marco, is changing: it’s moving from a virtualized model to a containerized model. The application architecture is changing: it’s moving from the monolith to microservices. And on top of that, we’re dealing more and more with the reality of a multi-cloud world and a hybrid world. These are the megatrends that have made the Kong Gateway the de facto standard cloud native API gateway out there. The traits that make the Kong Gateway and its Kubernetes ingress controller, the Kong Ingress Controller, the number one API gateway out there are these: one hundred percent API and declarative config, fully Kubernetes native, and unparalleled performance. So let us hear from our customers, and I’m really excited to introduce John McCann and Harmon Dhillon to show us how they’re using Kong Gateway in Connected Living Technology at Comcast. John and Harmon, take it away.

Speaker3: Thanks, Reza. Hi, everyone. I’m John McCann, V.P. of Connected Living Technology at Comcast.

Speaker4: Hi, and I’m Harmon Dhillon, Executive Director of Platform Infrastructure and Tools at Comcast.

Speaker3: So Connected Living Technology is responsible for Comcast’s global broadband platform, including our WiFi product, our WiFi mesh extenders and many of our home security products, like the camera you see here. We use Kong API Gateway in Connected Living Technology to elevate the developer experience, so that all of the services that we run in our cloud can be developed independently by the various scrum teams, but published in a way that is consistent and coherent to the consumers. Kong API Gateway makes it easy for the developers to publish and manage the routes, monitor the important metrics around the operational health of their systems, and easily and consistently generate API documentation for the consumers. And we’re using Kong API Gateway to expand and achieve a true global broadband platform that we can syndicate to our partners across the United States and Canada, and even into Europe. With Kong API Gateway we’re able to deploy services in a way that makes them trivial to consume by our client teams and also our partners. And we’re able to deploy it in multiple regions and scale elastically as the workloads grow to meet the demands of the business.

Speaker4: So how are we using Kong within Comcast to power our customers’ broadband and home security experiences? The journey started for us about two years back when, as we were transitioning from monolithic applications to microservices, we had our engineering teams geographically dispersed, working in small groups or squads, creating these microservices in different programming languages, deployed across infrastructure workloads that varied from EC2 instances to functions as a service. We needed to provide a seamless platform to applications consuming the API layer exposed by these microservices, and that’s where Kong Enterprise API Gateway fit in for us. Over the last two years, we’ve grown quite a bit: we’re routing close to five thousand requests per second for seventy five production services configured on the Gateway. The Gateway is deployed in a fully redundant, active setup that is geographically separated, with auto scaling of instances, where we go up to twenty five instances at peak and run approximately nine instances at normal traffic patterns across the deployed regions. So as you can see, for the scale of our engineering team and the number of services, self-service capability is the key for us on the API Gateway platform. With regards to self-service capabilities for our teams, we have two key use cases from an enterprise perspective. The first is onboarding to the platform itself, which entails creating workspaces, appropriate roles and user assignments, and standard plugins, like CORS and Datadog for monitoring, that need to be enabled by default for all services.

Speaker4: For this, we’ve put together a few internal tools that are managed by declarative config, driven through a git workflow that creates the required entities on the API Gateway platform using the Admin API. The second aspect of self-service capability is the creation and update of the services and routes themselves. For this, we again rely heavily on the Admin API, which is exposed to the development teams for CI/CD workflow integration, and also on tools like decK, which help with ad hoc tasks of service synchronization and backups. Canary rollouts of services are another big use case for us, and that is again enabled with CI/CD tools using the Admin API. For additional details on how Comcast is managing self-service capabilities at scale, we have a breakout session by Tyler Rivera, which I would encourage you all to attend. So in closing, I would say Kong has been a great partner for us in our microservices transition. It’s been a collaboration where we’ve asked for feature requests and in parallel have also made active contributions to help improve the overall platform. We are now actively looking at the Developer Portal and Service Hub for documentation standardization. I will now pass back to John for any additional insights and closing thoughts.
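As an illustration of the two self-service paths described here, a minimal sketch (hostnames and entity names are placeholders):

```sh
# Ad hoc creation via the Admin API: a service, and a route on it.
curl -s -X POST http://kong-admin:8001/services \
  --data name=sessions-service \
  --data url=http://upstream.internal:8080
curl -s -X POST http://kong-admin:8001/services/sessions-service/routes \
  --data 'paths[]=/sessions'

# Git-driven workflow: reconcile the gateway against a declarative file kept in version control.
deck sync --state kong.yaml
```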

Speaker3: Yeah, that’s right. And I feel that Kong has really helped elevate our team’s maturity and ability to standardize and adhere to trivially consumable API semantics. So really appreciate the partnership and really are enjoying using the product and we are hiring across many positions, so please reach out if you’re interested. And with that, we will pass back to Reza.

Speaker1: Great. I love the scale at which Comcast is running Kong. Now, what I find really inspiring is that through the connected living capabilities, these are the routers, the extender devices for your internet, that are operating and being powered by Kong. So that means, for those of you who are in the U.S., that the internet connection carrying the discussion we’re having today is actually being powered by Kong through Comcast. All right. Well, now it’s time for a demo of the Kong Gateway, and I’m going to pass it to Melissa van der Hecht, our field CTO, who’s going to take that spec that Steph showed you in the previous demo and show you, through declarative configuration and our automation mechanisms, how to take that spec, enable it and deploy it onto a Kong Gateway. Melissa, all yours.

Speaker5: Thanks, Reza, and hi, everyone. I’m going to show you how easy it is to go from Steph’s API design, as you can see in Insomnia here, to a validated, deployed and governed API in the Kong Gateway. We’ll be automating this using inso and decK, two of the Kong CLIs. And ultimately, we’ll see our API end up here in Service Hub, the universal service catalog. I’m going to start off by showing you something really cool. We’re going to use inso, which is the Insomnia CLI, and this gives you a really easy way to integrate Insomnia’s capabilities into your CI/CD pipelines. For example, the ability to automatically generate declarative configuration for Kong based on the API design. This is super cool because, one, it’s declarative rather than imperative configuration, which is so much easier to set up and troubleshoot and manage. Secondly, this is generated from the API specification, so everything that gets deployed will be completely true to that design. And thirdly, this is automatically generated for me, so I don’t have to write a single thing to enable this automation. We’ve just output this file, this declarative configuration file that you can see here. I’ve got a few bits of data about the service itself. We’ve got three different routes that we can see on this service, and if I wanted, I could use this to deploy to the Kong Gateway immediately. It doesn’t actually need any input from me, but I’m not ready to deploy this yet because we don’t have a backend implementation ready for me to use.
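The command behind that generation step is roughly this (the design document name is assumed):

```sh
# Generate Kong declarative configuration straight from the API design.
inso generate config "Summit Sessions API" --type declarative --output kong.yaml
```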

So I’m going to turn on the Kong mocking service. Here I’m copying over the configuration for the Kong mocking plugin that I made earlier and pasting it into this declarative configuration file. You can see there’s a lot of configuration here; most of this is just the dummy data that the mocking service will need to return, but we’ve set its enabled status to true. Let me save this file. We go back to the CLI, and now we’re going to use something called decK, which stands for declarative configuration for Kong. This is pretty self-explanatory. It does some super cool things, one of which is the ability to sync the Kong gateways to follow what I have in my declarative configuration file. So as soon as I hit enter, we can see there’s a lot of changes going on: we’ve got a service created, we’ve got three routes created, we’ve enabled the mocking plugin, and we’ve created version one of our service called Summit. Let’s have a look at this in Service Hub. You can see suddenly we’ve got this service that’s just appeared. This has been deployed automatically. The details have been pulled through from the declarative config. And if we go through into the version one implementation, you can see our three routes, and our mocking plugin has been deployed and is enabled here. We can test this out in Insomnia. If we go to the Debug tab and I send a request to that endpoint, we’re getting back some realistic looking data. This talk from Michael Heap looks great, but if we have a look in the headers, you can see here that the mocking plugin is what’s generating this response. And this is great because it enables the team consuming this API to get going with building their application without waiting for this API to be ready.
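The declarative file has roughly this shape. This is a minimal sketch: the upstream URL is a placeholder, and the mocking plugin’s config block (which holds the dummy response data) is elided because its exact schema isn’t shown in the demo.

```yaml
_format_version: "1.1"
services:
  - name: summit
    url: http://upstream.example.com   # placeholder backend
    routes:
      - name: get-sessions
        paths:
          - /sessions
    plugins:
      - name: mocking
        enabled: true                  # flip to false once the real backend ships
        # config: ...                  # dummy response data for the mock lives here
```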

Speaker5: But guess what? I just had a notification that the API is ready to be deployed. The team was able to build it super fast because a lot of the stuff that they would have had to build originally came out of the box as Kong plugins. So let’s go back to our declarative configuration. We’re going to disable the mocking plugin just by switching that status to false. I’m actually going to add some governance to this service now, though, because we’re deploying a real implementation. So let me paste in the configuration for a rate limiting plugin. Here I’ve set it so you can only make three requests in any given minute. I’m saving the file, and we’re going to just redo that sync of the Kong Gateway according to the declarative config. And then back in Service Hub, you can see suddenly the mocking plugin is disabled and we’ve got this rate limiting plugin, which is enabled. Again, we can go back to Insomnia. Make a request, and we’re getting some different data coming back now; this is the real implementation. And if I hit send a couple more times, you can see I’ve hit that threshold and I’m not allowed to make any more calls. So I have shown you how we go from an API design in Insomnia to having a deployed implementation in the Kong Gateway using inso and decK. Doing this declaratively as part of your CI/CD pipelines means that you can deploy your APIs significantly faster and the operations are so much easier. Back to you, Reza.
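Those plugin changes amount to a small edit in the plugins section of that same service entry; the rate-limiting plugin’s `minute` setting is standard, while `policy: local` is an assumption here for the simplest counter storage:

```yaml
    plugins:
      - name: mocking
        enabled: false          # the real implementation is live now
      - name: rate-limiting
        enabled: true
        config:
          minute: 3             # at most three requests per minute, as in the demo
          policy: local         # where counters are kept; 'local' assumed
```

Followed by another `deck sync --state kong.yaml` to apply it.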

Speaker1: Thank you, Melissa. I’ve seen this type of demo many times now, and I never cease to be amazed by it. The reason for that is that even though Kong has a number of web UI capabilities, you’ll see us demo these always using declarative config. And that’s where the power lies. Every change that you saw Melissa make is done to a file that can be validated, tested, and only after that applied through automation. All right, it’s time now for some really exciting product announcements. I’m going to start with the Kong Gateway, and today we’re announcing the availability of Kong Gateway 2.6 in October, right around the corner. And there’s really three significant things about the Kong Gateway 2.6 release. Let’s talk about them. The first is speed. Now, remember when I was talking about the success trends? The performance of your gateway matters quite a bit in the cloud native world, and today I’m really happy to announce that the fastest API gateway in the world just became significantly faster: with Kong Gateway 2.6 we’ve increased throughput by 12 percent and decreased latency by 30 percent. And that means more performance for applications and more cost savings for everyone. The second capability we’re introducing in Kong Gateway 2.6 is transformations.

Speaker1: The Gateway is really at an ideal place for taking a look at all the APIs, all the traffic that is going through the Gateway, and morphing them, morphing them for appropriate usage by the backend and the frontend. And today we’ve really injected major power into the transformation capabilities of the Kong Gateway with the introduction of the jq plugin. jq is a popular framework out there for easily creating data transformations, using jq filters, piping them into each other and leveraging a wide set of transformation libraries. And today that is available at your fingertips through the power of customizations that can be applied through plugins. And third: streaming, event streaming. We hear about this all the time. It’s really a big part of the backbone of the applications that we use day to day, because there is a lot of data streaming through the enterprise, and Kafka is behind these streaming events. Now, a major pattern is to provide a gateway on top of these topics that processes the events so that they’re more easily accessible to different clients that may expect a REST API or a gRPC API or a GraphQL API. How do you make that happen easily and securely? Today we’re announcing that the Kafka plugin for the Kong Gateway now supports full-stack security, with authentication through LDAP, authorization through SASL, and full encryption through mTLS. And that further enables you to take your existing topics and expose them in really easily consumable interfaces that do not require, say, the usage of a language library. And last but not least: the Kong Ingress Controller, the Kubernetes ingress controller for the Kong Gateway and the most popular Kubernetes ingress controller out there, is getting a major version, 2.0. What that means is that we’ve made a number of architectural improvements that make it even more performant. And in doing so, we’ve also introduced one exciting new feature that I’m going to highlight here, among many, and that’s native Prometheus integration out of the box. That means out of the box, you’ll be able to monitor the controller itself and make sure it’s performing up to your standards. Now, my next guest is Ivan Rylach from Checkr. And what’s exciting here is that Checkr is an API based company, an API based unicorn startup. And we’re going to see how the API gateway from Kong is powering all of the capabilities behind those APIs. Ivan, all yours.
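To give a flavor of the jq plugin mentioned above: the filter below is plain jq syntax, while the plugin configuration around it is a sketch whose field name is an assumption, since the keynote doesn’t show the plugin’s schema.

```yaml
plugins:
  - name: jq
    enabled: true
    config:
      # Reshape each response body to just the fields clients need.
      # The field name below is an assumption; consult the plugin's schema.
      response_jq_program: '.sessions | map({title, date})'
```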

Speaker6: Hey everyone, my name is Ivan Rylach, and I work as a senior staff software engineer at Checkr. I will talk about how Kong enables Checkr to deliver a global API platform which empowers both external and internal developer teams. Let’s dive right into it. Checkr is the leading technology company in the background check industry. Checkr APIs allow companies to run background checks for employment, along with other background screenings, inside the platforms they use every day, to get more done in less time with less risk. Checkr’s background screening technology works behind the scenes to provide businesses with clear and actionable results in an efficient way. Artificial intelligence and machine learning help us to deliver background checks faster with lower risk. Checkr serves industry leading companies like Netflix, Airbnb, Uber, Lyft, Instacart and tens of thousands of customers, from small and medium businesses to Fortune 500 employers. To achieve the best possible result, Checkr delivers global availability for each API, and Kong Gateway is instrumental in enabling the Checkr platform. We chose Kong because of its high resilience and performance. Checkr’s API platform relies on Kong to facilitate ingress and egress traffic routing, enforce traffic security requirements, maintain the necessary level of service quality, and gain visibility by leveraging auditing and analytics capabilities. Additionally, an extensive plugin system allows us to build new and enhance existing behaviors of the API layer.

Now, how do we manage and keep Kong configurations consistent across various regions and environments? Each region runs its own data plane, which consists of API and egress gateways. Each regional data plane is managed by the global control plane. There are two types of configuration. The first is static, which does not change frequently and requires auditing and manual review; for example, ingress routing configurations and authentication methods. The second type of configuration is dynamic, which can change at runtime, like API keys. Static configurations are managed in a declarative way by using the Kong Kubernetes Ingress Controller and corresponding resources. Our API platform relies on code version control systems like git and the usual software development lifecycle to handle these declarations. The Kong Kubernetes Ingress Controller works with the Kubernetes API server in each region to process incoming changes and applies them to local instances of Kong Gateway. Dynamic configurations are managed ad hoc by customer developer teams, who should be able to make their first API call within seconds after getting their first API key. We have built a custom service called Consumers Controller, which is responsible for propagation of API key changes across all regions. How do the developers interact with the global control plane, you ask? Customer teams work with the Checkr Developer Portal to manage their API keys, while internal Checkr product teams use the APIOps model, which improves developer velocity by delivering self-service and automated tools. The APIOps approach helped us to create a fully automated management system, with Kong being at the heart of it. Kong’s declarative configuration helps developers to focus on building instead of operations. Its performance and scalability enable us to deliver the best results. So naturally, Kong became our favorite way of managing our API platform. Thank you for your time.
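For the static side described here, a declaration handled through the git workflow might look like this standard Kubernetes Ingress for the Kong Ingress Controller (the names, path and plugin reference are illustrative, not Checkr’s actual config):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checks-api                         # illustrative name
  annotations:
    kubernetes.io/ingress.class: kong      # hand this Ingress to the Kong controller
    konghq.com/plugins: checks-rate-limit  # attach a KongPlugin resource by name
spec:
  rules:
    - http:
        paths:
          - path: /v1/checks
            pathType: Prefix
            backend:
              service:
                name: checks-service
                port:
                  number: 80
```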

Speaker1: Thank you, Ivan. That’s really a great example of declarative configuration based change management in action. And what I love about the Checkr use case is that they’re using Kong as a gateway, as an ingress, but also as an egress. Watch that space.

All right. Let’s now talk about the third type of connectivity. We’ve talked so far about edge connectivity and cross-app connectivity, using the API gateway and the ingress. But we know that there is a third type of connectivity, and that connectivity is increasing in the number of use cases that it has. That’s because within our applications, the number of microservices is increasing more and more, from dozens to hundreds. That’s the trend. So how do you go about managing the internal communication between these microservices? And the answer to that, of course, is a mesh. Now, we introduced the Kuma mesh to the world two years ago, and the reason we did that was because we felt the world deserved an easy to use mesh that is also powerful. And so we built that mesh on Envoy technology, and we’re an active contributor to Envoy as well as NGINX. On top of this Envoy sidecar layer lies an automatic multi-zone propagation mechanism. That means that your meshes can span different geographies, different clouds and different environments, and all of the capabilities that you would expect from a mesh are available for application to all of these different zones. So routing, security, discovery.

Now, here’s the secret sauce, at least one of the secret sauces. We brought all of the learnings from the Kong Gateway and its powerful customization capabilities, in terms of plugins, to the mesh world, with the ability to create, simply and easily, policy based capabilities that you can inject into the mesh. And we did that by keeping in mind that the world doesn’t begin and end with Kubernetes. We know that Kubernetes is prominent, but there are still many virtual machine based architectures and applications out there, so the mesh needs to span them both and be able to operate in both worlds. And our product on top of the Kuma mesh, Kong Mesh, was introduced last year. It brings with it security, support and governance capabilities. And this product and this project have been very successful. We’ve had over four times year over year growth, with over a thousand organizations adopting Kuma so far.

Speaker1: So what are they using them for, you might ask? This is what we’ve seen. The first use case is zero trust security: the ability to leverage the mesh to manage CA certificates and the whole mTLS lifecycle, to make sure that the communication between all of your microservices is secure, is one of the number one use cases. The second is observability, and here Kong Mesh comes out of the box with dashboards that provide you full visibility into the inner workings of your application’s microservices. And the third, an important one, and we’re seeing this one more and more, is the ability to replace expensive load balancers, because what is being provided by the Kuma mesh is really a modern, self-healing load balancing capability that is implemented at the sidecar level. And what this allows you to do is get rid of expensive load balancing technologies in many cases, and not only save upfront costs at the seven figure level, but also gain on performance. So, speaking of cost saving and developer experience and performance, let me now introduce Jason Walker, who’s going to tell us how they’ve been leveraging Kong’s mesh capabilities to deliver their mesh at American Airlines. Jason, take it away.
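In Kuma and Kong Mesh, that zero trust use case starts with enabling mTLS on the mesh resource itself; on Kubernetes, a minimal example looks like this:

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1       # which backend issues certificates
    backends:
      - name: ca-1
        type: builtin          # the mesh manages the CA and rotates certs itself
```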

Speaker7: Thank you, Reza. My name is Jason Walker, I’m with American Airlines, and it’s great to be here at Kong Summit. I’d like to talk a little bit about what we’re doing in the developer experience at American Airlines. One of the things that we see as an advantage internally is being able to make it very easy for people to do the right, hard things, and to make the developer experience not only a product, but a practice. And we see Kuma as a great way for us to be able to incorporate three specific attributes around that developer experience: open standards, batteries included, and providing managed choice. With open standards, we don’t think of it as just open source. We think about it as a means for us to be able to provide less coupled or tightly integrated offerings to the developers at American Airlines. We see this standardization as a means for us to provide more simplified solutions overall, and from that simplification, to have a better sense of understanding about what’s going on in the environment and be able to measure different things, like deployments and our integrations, the way that applications are talking to each other. Around batteries included, this is one area where we want to focus in on the outcomes of what the application teams need to be able to provide great features to our flying customers. We don’t want to focus on just the outputs of the transactions that take place. We want to be able to provision an application, stand up a load balancer, be able to get a new feature out the door. We want to be able to provide a mechanism where teams are able to go down a path of code, test and deployments with security included, and all the right tags for all of our environments being applied to get the right routing over the internet in place. And this pivot is one of those areas where we see the American Airlines development community being able to incorporate a lot of the different feature sets that open source Kuma provides to us.

So the last piece, the last attribute, is around managed choice, and we do see our developer experience space as a place for us to assert a specific opinion about the way that we want to provide different sets of services to our American Airlines engineers. We seek solutions that adhere, in their own setup, to open standards, like being able to make use of OpenTracing and OpenTelemetry, being able to make use of non-proprietary protocols, and being able to interact with and complement the overall ecosystem that we’re building up in American Airlines through that managed choice and those opinionated pipelines. We also want to do things like reduce the amount of overhead and context that our developers need to understand. We kind of refer to this as the walls of YAML, if you think of Kubernetes. And because we are going down a path of open standards and providing a simpler interface, we’re able to minimize that wall of YAML that you may experience when it comes to things like making use of Kubernetes.

So why Kuma, when it comes down to the things that we’re looking to offer? We see our journey to the cloud as being something where we want to provide application security out of the box. We want to make sure that connectivity isn’t defined by or limited to a particular location. And we want to go beyond our internal customer expectations, to be able to leverage open standards to secure, to connect and to be able to measure the way that applications flow through. Later on at Kong Summit, Karl Heyworth, who is the principal engineer and technical lead for our developer experience product, is going to talk more about the way that we’re making use of Kuma to not only provide security for single instances of clusters in Kubernetes, but also how we’re able to extend that to multiple clusters and cross-region, to provide high availability, measurability and streamlined ways for application teams to be able to deploy. As for some of the outcomes and some of the pieces that we are really specifically looking at with Kuma: we want to be able to provide a set of easy to use controls. Kuma and its service mesh approach to traffic policies makes it very easy for us to not only do things like network policies at a Kubernetes level, but also to extend that in the event that we need to provide the same offering for on-premises, virtual machine types of implementations. We are doing all of our automation development and all of our developer experience work through code, and the fact that we’re able to streamline a lot of our own testing, our own deployments and our own offerings is a great way for us to be able to extend it to the developers that are using our platforms.

Speaker7: And because of that open source approach where Kuma is available, we’re able to also standardize against the interface and make it so that not only are our cloud deployments making use of modern platforms and implementation scenarios, but also legacy ones, being able to extend back into things like virtual machines and the data center when and if needed. A big thing around developer experience, and trying to make it advantageous for our developers, is around these three big areas of controls, consistency and self-service. We know we need to be able to provide a safe, secure environment for our developers to build applications. We want them to be able to have trust around consistency, and we want to provide as much autonomy to those teams as possible. And with open source Kuma, we’re able to drill into things like securing our network and connections. We’re able to provide high availability across different clusters, across different regions, across different implementations. And we’re able to get a wealth of observability, in its ability to show things like networks, connections and how applications are working together. One of the big things that we are really excited about is that not only are we at American Airlines focused on open source as a whole, but we’re really excited about the open source community in the Kuma space and looking forward to growing our relationship there. Thank you, Reza. Back to you.

Speaker1: Thank you, Jason. I found it to truly resonate that American Airlines’ choice of Kuma was based on open standards, open source and developer experience. These are values that obviously align so much with everything we’re doing at Kong.

Ok, are you all ready for the next big product release? This is a big one. Today, we’re announcing Kong Mesh 1.5. This is a minor release of Kong Mesh, and it provides some really exciting capabilities. Let us cover it quickly. First, we’re introducing Windows support. Today we support 12 distributions of the mesh, and Windows is our 13th distribution. This enables the control plane and data plane of Kong Mesh to be available on Windows. It also allows you to have mixed environments across Windows and other platforms, so that you can manage a truly hybrid environment, which is one of the trends we talked about, and all of the connectivity capabilities of the Kuma mesh and Kong Mesh apply to Windows as well. The second capability, and this is core, is the ability to have role-based access control. This is the ability to make sure that only the right people with the right access can perform the right action when operating the mesh, and of course, this is critical when you’re operating a mesh in the enterprise. And again, all of these capabilities apply across all of the different platforms that Kong Mesh supports. Now, nothing better than a demo to showcase the capabilities I just talked about, and I’m really excited to have our next speaker here, Felderi Santiago, to show us Kong Mesh in action. Fel, all yours.
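The role-based access control capability is expressed through access-role resources. A rough universal-mode sketch follows, with the caveat that the resource and field names are written from memory and should be treated as assumptions to check against the Kong Mesh 1.5 docs:

```yaml
# Applied with kumactl; schema sketched from memory, treat field names as assumptions.
type: AccessRole
name: mesh-operators
rules:
  - types: ["TrafficPermission", "TrafficRoute"]   # which policy types the role covers
    access: ["CREATE", "UPDATE", "DELETE"]          # which actions it may perform
---
type: AccessRoleBinding
name: mesh-operators-binding
subjects:
  - type: user
    name: jane@example.com    # hypothetical operator
roles:
  - mesh-operators
```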

Speaker8: Thanks, Reza. Appreciate it. Let’s now take a look at how Kong Mesh provides the observability and security needed to increase operational efficiency and compliance. In this environment, we have a blogging application consisting of a few microservices, including a graphical front end, a blogging service, a database and a language processor to score the sentiment of blogs. Let’s see how Kong Mesh provides observability and security for this application. The first thing we’re going to do is send some requests to our application. We’re going to use Insomnia to send a request to this application every second. And you can see we’re going to be sending a pretty positive blog post, “I’m happy and I know it,” and we can see the application returns a very nice sentiment score. Let’s take a look and see the visibility that Kong Mesh provides us over what’s happening with this application. We’re going to jump to Grafana, which we’re using to visualize the analytics and metrics for this particular application. We’re going to go into this Kuma mesh dashboard, and the first thing we’re going to notice is that in this dashboard, we can see a visual representation of our services and how they’re communicating with one another. So in our application, we can see the request coming into our blog portal service, our graphical front end. It then goes to the blogging service, which then sends the request to the NLP service to get that sentiment score. And then the request is stored in our Postgres database backend. We can very easily see that all these services are properly intercommunicating with one another, and our requests were successful. We get a visual representation of the throughput going through each service as well.

In this dashboard, we can also get a sense of all the traffic across the various services in our mesh. So if we wanted to drill into, say, the NLP service, we can absolutely do that here and get an understanding of how much traffic over time, and any error codes or latency concerns, may currently exist inside of the environment. So the mesh provides clear visibility to help drive operational efficiency across the different applications and services. The other big benefit Kong Mesh provides us with is the ability to enforce fine grained security policies across our applications and services. So here’s an overview of our mesh: there are 16 services in the mesh at the moment. And what we’re going to do is take a look at traffic permissions. Inside of traffic permissions, we’re applying a very broad policy that says that any service in the mesh can communicate with any other service. Let’s remove that permission and see what happens. So here’s our permission. We’re going to delete the permission, go back to Insomnia Designer, and you can see that the application immediately stops working. If we put that rule back, the application now starts working again. So hopefully you can see how Kong Mesh gives you the ability to secure and observe your applications and services in a very straightforward way. Back to you, Reza.
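The broad policy deleted and restored in this demo corresponds to Kuma’s default allow-all TrafficPermission, which on Kubernetes looks like this:

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all-default
spec:
  sources:
    - match:
        kuma.io/service: '*'    # any service in the mesh...
  destinations:
    - match:
        kuma.io/service: '*'    # ...may call any other service
```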

Speaker1: Thank you, Fel. I never cease to be impressed by those graphs and the service map capabilities of Kong Mesh.

All right. We just covered the build activity and the run activity, and we have one more to cover, and that’s the catalog activity. How do you enable a thriving ecosystem of consumable services? This is key. Remember that API building blocks analogy I used? Well, how are you going to find, how are you going to discover, these APIs if you don’t have a single place you can go to that is always accurate? And for accuracy, automation is the key. So the success trend here is from a manual model to an automated model. But really, to say that the success trend is from manual to automated is short selling things. Catalogs have existed for a long time, but the success trend for the modern microservices catalog is really everything else we’re talking about. How does your catalog integrate into the developer workflow, so that it is automatically maintained in the modern developer workflow? How does your catalog integrate with the runtimes, so that it is aware of those runtimes? And really, how does your catalog integrate with all of the tools for your operators and your developers, so that it provides them that building block set that they can go to to build their next application consistently, at quality and at speed?

And last year we announced Konnect Cloud. Konnect Cloud is the ability to run control planes in a hosted way, while you run your data planes anywhere you want. On top of these control planes, Konnect Cloud provides a number of applications: the Portal, analytics capabilities and, critically, Service Hub. Service Hub is the microservices catalog of record, and partly due to Service Hub, Konnect Cloud has had tremendous success. We have thousands of organizations leveraging Konnect Cloud and running millions of requests through it. Now, I want to make one point about the difference between the microservices catalog and the dev portal, because that’s a question I get quite a bit. What is the difference between a catalog and a portal? And this is an important one. Think of the microservices catalog as your catalog of record for all of your services. An analogy that really helps here is a grocery store. If you walk into a grocery store, you see the front facing products. That’s the dev portal. They’re beautifully set up, with perfect lighting, for everyone on the outside to be able to access them. But what about all the other products that are sitting in the supply chain? You need to have access to all of them as well, and actually, there’s a lot more of those. Your microservices catalog needs to reflect the entirety of your inventory of APIs, and Service Hub is that. And it’s tightly integrated with the Konnect developer portal, so you can publish the ones you want to your customers or your partners using that perfect lighting.

So let us now show you Service Hub and the Portal in action, with a demo from Thao Yeager, our product manager for both. Thao, take it away.

Speaker9: Thank you, Reza. Well, everyone, I’m here to show you how you can publish an API service in 10 clicks. My name is Thao Yeager, and today we’re going to talk about the Dev Portal and Service Hub. Looking here at the Dev Portal, this is your public facing API catalog of the services and versions you’ve made available to your external API consumers. But Service Hub is where you catalog all of your services into one system, as the single source of truth, where you document and manage and track every service in your entire architecture. And while Service Hub is natively integrated with the Dev Portal, the services in Service Hub do not need to be backed by Kong Gateway.

So let’s get started. I’m going to go to this summit demo service, and this is where you would see some basic vitals about the performance of your API, the request counts and such. You can also see that currently this service is unpublished. So I’m going to upload a service document, and this is a document that will tell the consumer a little bit about your service, including the maintainer and such things as authentication, for example. Then I’m going to go into a version and upload the version specs, and the OpenAPI specs are similar to what you saw in Stephanie’s demo, when an API was created in Insomnia.

Speaker9: So now I’ll go back to the main view of my service and, under actions, publish to the portal. Going over to the portal, you’ll see that the summit demo service is not there yet, but if I refresh, what I just published is going to appear right here, and an API consumer can go in and read a little bit about the service at the top. This content was from the markdown file that I uploaded. And then they can go into a particular API call and look at some basic details about the call, including the sample response. They can even click try it out and click execute, and the server response will be displayed down here, including the response code, 200, successful. So today we used a Dev Portal that is public, but you don’t have to. You can require developer registration to ensure that only those authorized to see the API can access the documentation, and Konnect provides an approval workflow for that. The Dev Portal can also handle application registration, which means you can identify who’s using your API and handle authentication automatically using the Kong Gateway. So, as you can see, the Dev Portal reduces the barrier to entry for potential consumers dramatically and provides both documentation and an interactive playground. Thank you for listening. Back to you, Reza.

Speaker1: Thank you, Thao. That was a great demonstration of Service Hub as a service catalog backing a portal-published API. Now, my next guest is Yoni Ryabinski, and we’ve worked with Yoni now for over a year. I’m delighted to see the journey that we’ve been on together. We started on a small use case with the Kong Gateway, then adoption in a much broader way. And now Yoni is working with us on a service catalog use case. So, Yoni, take it away.

Speaker10: Thank you, Reza. My name is Yoni. I’m the head of the Resilience Architecture Office in my current role at Vanguard. Our journey with Kong began when we were looking for a solution to provide an identity proxy for our modernized credentials-as-a-service tech, to perform authentication and authorization for external consumers. At that time, the state of our API economy was somewhat disparate: API gateways, technologies, inconsistent experiences for developers. And obviously we were trying to solve the problem of authentication and authorization, and Kong fit the bill. Since we discovered Kong, we’ve realized that it can actually perform a role for a lot of our other transformational activities on our path to modernization. So it performs a variety of URL and URI conversions for a variety of internal applications. And obviously, it fits the bill for our credentials-as-a-service use case, which performs authentication and authorization both for the end consumer and for things that consume APIs. What’s next for us? We are super excited about Kong Mesh. And from the beginning, we have chosen Kong Mesh for both of our EKS and ECS solutions. When Kong Konnect was announced, we were extremely happy and pleased, and the promise of not running our own infrastructure for the control plane is something that Kong Konnect delivers for us. We are currently piloting this technology in our environment. With that, I’m going to transfer it back to Reza. Thank you.

Speaker1: Thank you, Yoni. It’s really great to see that not only is Vanguard looking at using Service Hub as its microservices catalog, but they’re also leveraging Konnect Cloud to decrease their operational footprint and have Konnect Cloud abstract the operations of the control plane across all of their environments, of which they have dozens.

Ok, one last product announcement for this session. And that’s the availability of Okta support in Konnect Cloud. Now, I’ve heard folks say that authentication and authorization, and really the user management solutions where Okta is a leader, go hand in hand with API management. They’re so tightly related. And so today we’re announcing one-click integration of Konnect Cloud with Okta, with full role mapping. This makes leveraging Okta as a backend for single sign on just as easy as filling out a single screen. And this is really the tip of the iceberg. I really look forward to our partnership with Okta, and you can expect to see many more capabilities and integrations coming up.

Ok, let’s recap. So we talked about the patterns we see from you, our users, our community, our customers, that have made you successful. We saw the pattern of empowering developers as opposed to governing them with a stick based approach. Then, moving on to operators, how do you enable them with a modern, cloud native based environment? This is where we talked about the Kong Gateway and the Kong Ingress Controller. And then we went to enterprise architects and talked about the importance of a microservices catalog. And in doing so, we made some really exciting product announcements: Insomnia Projects for collaboration, Kong Gateway 2.6, Kong Mesh 1.5, the Kong Istio Gateway and Kong Ingress Controller 2.0. And last but not least, the Konnect Cloud Okta integration. That’s quite a few announcements. So what’s next? Well, we have a lot planned. In fact, we have a lot that’s being worked on as we speak. In the developer space, look for more capabilities that meet developers where they are. Remember, we talked about how it’s about meeting developers and integrating into their workflow. In the operator space, look for more customization capabilities that leverage the languages at hand. And in the cataloging space? Well, really, the sky’s the limit. The number of places that a catalog can integrate with to provide value is huge, and you can expect to see many of those coming up.
