Sharpening the Axe: Our Journey into Disruption with Kong
Jason Walker shares how Cargill is using Kong to transform legacy architecture with a “Cloud first, but not always” approach. Hear why Cargill chose Kong for their API gateway as part of their internal API platform, Capricorn, allowing Jason's small team to stay nimble while they administer decentralized deployments. In this talk from Kong Summit 2018, Jason shares how Kong routes traffic in Cargill's Kubernetes cluster. He also discusses how Kong fits in with Cargill’s architectural principles and strategies to maintain discrete controls over continuous deployment, and more.
More talks like this await at Kong Summit 2019!
Full Transcript
Hi, I'm Jason Walker. I've been at Cargill for about two years. Prior to that I was at a large retailer that has a dog as a mascot; I was there for a couple of years. Over the last couple of decades, one of the things I've been involved in is helping teams level up on their journey of disrupting whatever market or industry they're in, whether that's retail, banking, and now what we like to refer to as ag tech. If you're not familiar with the "sharpening the axe" quote, it's attributed to Abe Lincoln. I wasn't there, so I can't exactly quote the master, but to paraphrase: if you give me five hours to chop down a tree, I'm going to spend four hours sharpening the axe.
And so part of what a collection of us have done to bring some disruptive ideas into Cargill is to look at what we need to set up as a set of supportive platform services for the rest of Cargill to pull in, combine, and use to address some of the digital transformation challenges that we have.
A little bit about Cargill: has anyone heard of Cargill? Wow, that's actually impressive, because we usually say it's the largest company nobody has ever heard of. Cargill has been around for over 150 years. This is probably more anecdotal than anything else, but my understanding is that all of McDonald's eggs worldwide are supplied in some way by Cargill. So if you've ever had an Egg McMuffin, thank you very much; please keep doing that.
One of the big purposes we have is to nourish the world; that is our purpose. Our mission is to help people thrive, and we know we need to do that in safe, responsible, and sustainable ways. Technology is an enabler of that. Without technology, continuing to improve becomes more burdensome and more difficult, and essentially we end up losing out on things like the farmers of the new world and the capabilities that come from using technologies like Kong and cloud service providers to truly level up.
We are steeped in legacy. As I mentioned during the keynote, we have thousands of plants, doing anything from grinding corn to creating oil, and then there's some amount of disassembly of various types of protein; I'll leave it at that. Some of those locations, though, are very remote. In different parts of the world we have cocoa tree plantations or farms where there's really no connectivity, so we need to do some creative things, like flying drones to take pictures of trees to make sure the trees are healthy, and we can only do that by extending some of the technology and equipment that we have.
So the direction we're going is to establish a bedrock on which we can build a broader foundation, and we'll get into specifics around some of the different platforms. Colin Job is also here with me from Cargill and has a talk later today. We work in an organization within our global IT function called Cargill Digital Labs, and in Cargill Digital Labs we lean on our architectural principles. I made note of that during the keynote; there are eight in total, but the ones interleaved through this presentation are: simple and standard, loosely coupled, and provider agnostic.
When it comes to simple and standard, these are probably self-explanatory, but one of the things we want to do, and we'll get into more specifics later, is give our developers who are building apps and solutions for our customers a way to consume a set of services that are repeatable: what they do on their workstation is the same thing they do in dev, promote to stage, move to prod, and so on. For loosely coupled, we want to tightly align things like the way we manage security, but loosely couple that from the implementation itself.
We far too often make the tool the primary thing that delivers an implementation, as opposed to saying we have a set of capabilities and need to find tools that actually map to those capabilities. In the event we need to swap something out, going from one cloud service provider to another, or deploying and scaling across providers, we need to make sure we're able to do that. Provider agnostic enforces that and makes it very clear that the intellectual-property-type widgets we provide can be moved around and are not coupled to various technology bases.
So in the Digital Labs space, the things inside the triangle are essentially our digital foundations; right now there are three of them. The data platform is the oldest; that's where the big data work takes place, so there are analytics, reporting, and all the capabilities you would expect from a big data platform. The next oldest one is our cloud platform; there's been tons of great work in that space, giving us a very basic interface to consume cloud-oriented infrastructure, whether that's deploying to Kubernetes, database as a service, object storage, and so on. Those two feed into and provide a set of services for things like the API platform, and the API platform isn't just the gateway: we provide other components such as scaling, metrics, and the ability to extend and safely expose various data points to the right people at the right time.
For example, as we look to monetize some of the data we have for various operations, or other things we want to expose, the API platform is there to provide that. As we build more competency around IoT, one thing we're looking at is eventually standing up an IoT platform that sits closer to the edge but also consumes the other components in the digital foundations triangle. It isn't intended to look like the Illuminati; sometimes people see the triangle and, no. So the API platform is born from building up those digital foundations. These are four of the main things we wanted to articulate as part of why we're building an API platform and what we're able to create with it.
Dave Chucky is our product owner. Internally we call the data platform CDP, Cargill Data Platform, and the cloud platform CCP, Cargill Cloud Platform. The API platform was going to be CAP, then it turned into Cappy, and now, well, we've got a bunch of little Capricorn stickers; Dave just went with Capricorn, Cappy for short. We obviously have various strategies in play at Cargill, whether technology strategy or business strategy; in a 150-year-old company there are probably a dozen strategies, it's huge. But we want to incorporate the key components of our architecture principles into what the API platform provides. We also want to make sure that when it comes to road-mapping that platform, as well as other platforms, the services work well together.
We also want to be able to say that when it comes to deployment to production and lifecycling all of the moving parts, we can take a systems-thinking approach to delivering these services. And since we already have a place within our cloud platform to manage hosting, observability, and to a certain degree metrics, logs, and so on, we want to reuse those components. If we already have data points in our data platform, we want to reuse them and not recreate them. These are some of the high-level tenets, if you will, of the API platform.
Great, we've got all these things, so how does Kong fit? As I mentioned before, and this is not specific to Cargill, anyone who's been in a large enterprise and dealt with, I'll say, the bureaucracy of a large enterprise knows it would be very easy to say we already have a tool, we already have an incumbent in place, let's just use that big hammer (see, I bleeped myself) for all the problems that potentially exist. We can always make use of that one particular tool and always deploy it.
We wanted to take a step back and ask: what are some high-level criteria we would use, and let's evaluate the market. We already had incumbents in place when it came to providing gateway-type experiences, but they were limiting; there were different things we wanted to do against other aspects of our architecture principles, because as I mentioned in the keynote, we are cloud first, but not always.
Well, if we're cloud first but not always, is there an opportunity to introduce a requirement that says: let's go cloud first and, based on those capabilities, find tools that actually enable and empower the delivery? There were three capabilities we wanted to evaluate at a super-high level. We looked at the open source market and at things that could extend into commercial offerings, and nothing was removed from consideration, including the incumbent. We wanted to take this criteria, see what it would look like, and evaluate it against the various offerings out there.
The first one is cloud native implementation: we wanted something containerized, something that could use our cloud platform space. We're maturing and leveling up around Kubernetes, so this is something we'd be able to use and reuse. For choice on pipeline, we've been maturing our CI/CD pipeline, and we want to make sure a new technology base doesn't interfere with that maturation but instead integrates into the pipeline we're building. What we do there is pretty extensive, built over the last ten or so years of experience. We feel like we're actually pretty good at it; we know we can get better, and there are some things we'll talk about with that.
The last thing is making sure the developer experience, the "it works on my machine," is something developers can pull in, incorporate, and then promote: the same artifacts, the same configs, and the same experience moving upward, without something suddenly being different even though it worked on my Mac or my Windows machine.
Within the containers and Kubernetes space, our cloud platform is where the majority of artifacts are hosted. Kubernetes is one of the moving parts of the cloud platform, but there are also the other things you would expect from an abstraction layer over a cloud service provider: object storage, databases, security groups, firewalls, various points of ingress, DNS. All of those are built into the cloud platform, so the API platform is just able to consume them, using some declarative YAML to tell the cloud platform: here's what I need done in order to provide the API platform.
So DNS, ingress, and hosting are provided by our cloud platform. Dropping down to the data platform: data, reporting, analytics. We don't do transactional work within the data platform; the application teams own that, but they're able to use the cloud platform to consume their own stuff. And the API platform, at a super-high level, provides routing, authentication, and gateway-type capabilities.
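To make the hand-off concrete: the kind of declarative request the API platform makes of the cloud platform could be pictured as a standard Kubernetes manifest of that era. Everything below (the names, the host, and the use of a plain Ingress) is illustrative, not Cargill's actual configuration:

```yaml
# Hypothetical sketch: expose the Kong proxy behind cluster ingress.
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  selector:
    app: kong
  ports:
    - name: proxy
      port: 80
      targetPort: 8000          # Kong's default proxy port
---
apiVersion: extensions/v1beta1   # pre-1.14 Ingress API, current in 2018
kind: Ingress
metadata:
  name: kong-proxy
spec:
  rules:
    - host: api.example.com      # DNS is handled by the cloud platform
      http:
        paths:
          - backend:
              serviceName: kong-proxy
              servicePort: 80
```

The point is the division of labor: the cloud platform owns DNS, ingress, and hosting; the API platform just declares what it needs.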
For discrete controls over continuous deployment: again, I feel like we continue to mature and evolve. When we have conversations with other companies, we find we're pretty far along in this journey. As we identify a new way to do something, we get a bit of "hey, we just figured out a way to level up." I don't know if we're level 7 or whatever; I don't think there's necessarily a scale. But when you find something, we're able to socialize it internally, get a good fit and feel for it, see if it passes the smell test with other teams, and they go, "hey, that's kind of cool, we should do that too."
Here's a really high-level picture of some of the stuff we've got going on. To walk across it: we've got a build repository where we do all of our testing and build an artifact. We've put out the guidance, not quite a principle but guidance, that you can use whatever language you want, but you're going to build a Docker image; that part is a hard stop.
The build repository lets us do a basic fork-and-branch workflow, and that's the workflow we subscribe to. We don't do a Git workflow where you pull or clone from one repo and then push back to it; we want a fork. You fork, branch, and submit a pull request, which triggers some activity; a merge triggers more activity; and a tag ultimately creates a semantically versioned artifact that we present for deployment.
That doesn't mean it gets deployed; it's just now in a position to be deployed. We have a separate deployment repository that follows similar conventions as far as pull requests being submitted to it, but the environments are defined there, and we make use of the deployments APIs available through a lot of CI servers and SCM servers to say: we want to deploy this to our engineering environment, which is where my team can introduce breaking changes. If things go bad, we just crater the environment and rebuild it. Why? Because we've built all of our images and configs as tagged, semantically versioned images that we can deploy on demand.
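A tag-driven pipeline like the one described (test on every push, publish a semantically versioned image only when a tag is created) could be sketched in Drone 1.0-style YAML. The registry, repo, and build image below are hypothetical:

```yaml
# Hypothetical .drone.yml sketch, not Cargill's actual pipeline.
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: golang:1.12        # any language; the hard stop is the Docker image
    commands:
      - go test ./...

  - name: publish
    image: plugins/docker
    settings:
      repo: registry.example.com/cappy/my-service
      tags:
        - ${DRONE_TAG}         # e.g. v1.2.3, taken from the git tag
    when:
      event:
        - tag                  # only a tag produces a deployable artifact
```

The tag gate is what separates "built and tested" from "presented for deployment": merges exercise the pipeline, but only a semver tag publishes an immutable artifact.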
We also keep secrets separate. The underlying tooling for our CI server is Drone CI; we just use the open source version. Under the covers it leverages HashiCorp Vault to manage secrets. We push the secrets into Drone, and Drone is then able to push them into various environments, whether through something like kubectl, AWS Parameter Store, or into another Vault environment that can be consumed. That lets us keep these things discrete. We often timestamp the name of the secret, so in the event we need to introduce secret number two, secret number one can live out in the wild; we can track it, make sure secret two is the thing being consumed as we move forward through our environments, and then deprecate secret number one and throw it away.
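The timestamped-secret scheme can be sketched in a few lines of Python; the function names are illustrative, not Cargill's tooling:

```python
import time


def versioned_name(base, ts=None):
    """Name a secret generation with a timestamp suffix, e.g. db-password-1700000000."""
    ts = int(time.time()) if ts is None else ts
    return f"{base}-{ts}"


def active_secret(names):
    """The newest generation wins; older generations stay resolvable until deprecated."""
    return max(names, key=lambda n: int(n.rsplit("-", 1)[1]))
```

During a rotation, the new generation is pushed alongside the old one; once every environment consumes the new name, the old secret is deleted. There is never a big-bang redeploy.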
So we never have the big bang of: I need to rotate a secret, and now do I redeploy, try to forklift it, and hope it works, or do we just introduce something new and rotate through? Down here in the Drone box we have SCM, of course, and we do all of our testing: anything from unit tests to functional, integration, security, and performance tests, anything we can come up with. We package up the image and make sure that it will actually run and work, so we can push it into an environment that just verifies: we created this Docker image, but can I actually push it and consume it someplace?
In some cases we'll just run a smoke test against it, and then we have a bin repository with controls in it that do not allow us to overwrite an existing version, because there's nothing worse than having a 1.0 that was just released and then tomorrow 1.0 gets pushed again and you're like, wait, no, we cannot have different artifacts for 1.0. Am I right, or am I right? Thank you, yeah.
We call out delivery and deployment in separate boxes because, in our continuous delivery space, now that we've published that image, we want to do CVE scans. We do common vulnerability scanning, and we want to make sure any licensing packaged inside the image doesn't include anything that would give Cargill an issue if we release something. Way back when, there was an FTP client, sort of like PuTTY, and its license was "do good." What does that mean? You just have to do good. Okay, subjective, right? So we have to make sure the licenses actually pushed into those images adhere to things like Apache v2 or MIT. In some cases GPL v3 or other GPL licenses are okay, but in areas where we potentially want to sell something that has intellectual property, we need to keep those licenses out so we don't run into situations where we're forced to push upstream.
Then OODA: as part of the delivery cycle we build in some exploratory work. That's just observe, orient, decide, and act, an old 1970s military term for when you're going through a ton of data and need to make decisions. Then deployment: Captain is called out here; that's the framework our cloud platform team created, the name being a Kubernetes joke, and, you know, more stickers; I think maybe we have stickers for those too. Captain is essentially a YAML file that runs as a Drone plugin, and we're able to declare what an environment should look like, whether dev, stage, or prod. It's very similar to Docker Compose, totally different syntax, but a similar type of setup.
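Captain's actual syntax isn't shown in the talk, only that it is compose-like and declares environments. Purely as a hypothetical sketch of what such a Drone-plugin declaration could look like, every key and name below invented for illustration:

```yaml
# Hypothetical only: invented keys, not real Captain syntax.
gateway:
  image: registry.example.com/cappy/kong:1.2.3   # semver-tagged artifact
  environments:
    dev:
      replicas: 1
    stage:
      replicas: 2
    prod:
      replicas: 3
  ports:
    - 8000
```

The idea being conveyed is that dev, stage, and prod differ only in declared parameters, while the artifact reference stays an immutable semver tag.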
And of course in Drone we're able to post things into chat ops, post to other APIs, and incorporate additional things like ITSM controls and compliance, things we are (a) working on and (b) leveling up in the ITSM space. As a 150-year-old company with tons of legacy, our ITSM team said: what if we expose an API, and you push the changes you're doing over here into that API? We're like: hey, we happen to have a place where you can run that API. Just saying.
All right, so: declare local environments, batteries included. This is honestly just a really high-level view; there's an open source project out there that can visualize Docker Compose files, and that's all this is. It's to give you an idea that we're really just using Docker Compose so that, in one package, developers on their workstations can consume a gateway using Community Edition, a database, and an admin GUI. We're using Konga, I don't know if anyone's heard of Konga, as the admin for this particular scenario, exposing the necessary ports on the local machine. Via configuration, this file share is consumed when the gateway starts up, so it's able to put some sugar into the deployment and you get things like a login, that kind of thing.
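A "batteries included" local stack like the one described (Kong Community Edition, a database, and Konga as the admin GUI) can be sketched with Docker Compose. Image tags, credentials, and ports below are illustrative defaults, not Cargill's actual file:

```yaml
# Hypothetical local dev stack; not Cargill's actual compose file.
version: "3"

services:
  kong-database:
    image: postgres:11
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kong

  # One-shot container that prepares the schema before the gateway starts.
  kong-migrations:
    image: kong:1.0
    command: kong migrations bootstrap
    depends_on:
      - kong-database
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kong

  kong:
    image: kong:1.0          # Community Edition
    depends_on:
      - kong-migrations
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_PASSWORD: kong
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
    ports:
      - "8000:8000"          # proxy
      - "8001:8001"          # admin API

  konga:
    image: pantsel/konga     # community admin GUI
    depends_on:
      - kong
    ports:
      - "1337:1337"
```

The payoff is the repeatability principle from earlier: the gateway a developer runs on a workstation is configured the same way as the one promoted through dev, stage, and prod.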
So, next opportunities; okay, I'm good on time. One of the key things, and I mentioned Dave Chucky is our product owner, one of the things he continues to iterate on is: let's make sure we're giving people choice. When it comes down to the stick or the carrot in interacting with customers, we want to give a lot of carrots; I think we've all had enough stick. So we want to look at what we can do to internally level up our game in promoting a safe, secure, and sane set of platforms, the API platform included. CI/CD, self-service, and plugin development are three areas where we see what we want to do and move forward.
This is where we see Kong, and the approach Kong is taking, really starting to map very well. In CI/CD, as we've gone from Community Edition and now move into Enterprise Edition, and with what's in 1.0, there's a ton of opportunity for more automation: more ways to automate not only the deployment config, but we're also working on some things on the plugin side, like evaluating that a gateway actually adheres to what we expect it to look like, and that it maps to our compliance and risk teams' expectations. Things like: if there happens to be an HTTP port on an interface, it redirects to HTTPS, just some basic things like that.
Not necessarily pen tests, not necessarily full-fledged vulnerability scans, just some lightweight checks we can introduce as part of our CI/CD pipeline to give developers really quick feedback, but also a method for us to continue to level up and push safe code into the environment.
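One of the lightweight checks mentioned, HTTP redirecting to HTTPS, reduces to a small predicate the pipeline can assert against probe results. The function names here are illustrative, not part of any Kong or Cargill tooling:

```python
def redirects_to_https(status, location):
    """True if a plain-HTTP probe was answered with a redirect to an HTTPS URL."""
    return status in (301, 302, 307, 308) and (location or "").startswith("https://")


def non_compliant_routes(probes):
    """probes: iterable of (route, status, location) tuples gathered by the pipeline."""
    return [route for route, status, location in probes
            if not redirects_to_https(status, location)]
```

A CI step would probe each route over plain HTTP, collect status and Location headers, and fail the build if `non_compliant_routes` returns anything.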
I mentioned the carrot and the stick; we're always asking our customers: what do you want, how can we do better? If customers have ideas, just ask. Around self-service, we've developed a few tools and utilities that we promote as our API platform services. That includes things like encrypted JSON; there are libraries out there, NaCl ("salt") and libsodium, and basically it's the ability to encrypt text such that you can present the ciphertext through a change, like a pull request, and then use a public/private key pair to encrypt and decrypt.
The reason we do this is that we have application teams using authentication plugins, think OpenID and OpenID Connect, where they have a client ID and a client secret. As the platform team, we don't want to know their secret, but we need their plugin to be configured to allow the traffic to authenticate. So we have a platform services app with an API that talks back into the gateway, where people can register, essentially, a token: the gateway owns the private key, they get the public key, they encrypt with that public key, and only the gateway can decrypt.
So they're able to check their secrets into source code management "in the clear," because it's already ciphertext, and the only thing set up to decrypt it is the gateway. That means we don't have to know what the passwords are, and those teams can just use the pull request model to say: here's a change, here's an update; yep, we screwed up our fill-in-the-blank identity provider configuration and we need to update it, or we need to revoke and renew passwords.
For plugins, this is one area where Colin and I continue to riff on ideas. We'd love to do a secrets plugin, a metadata plugin; the ecosystem Kong is presenting, especially with the Plugin Development Kit, is going to let us start launching those, and, fingers crossed, as we get better at Cargill around open source, we'll be able to open source those things and participate in a better way. I mean, it would be great if anyone doing secrets management at all had a Kong plugin that could just talk to something like Vault. Am I right? Yeah, cool.
So it's these particular aspects of the Kong community and the Kong ecosystem. We saw it early when we were using Community Edition, and now as we move forward we see it as a great fit for the different things we're looking to do. The next steps are some things I was really excited to hear about, like the service mesh and so forth, because our first phase is centralizing the platform and the deployments. What we're doing is, in effect, creating a monolith of a cluster that can scale up and down, but having everything route through that set of gateways in that cluster.
What that allows us to do, with a very small team, is build up new features, enhance against them, stay as nimble as possible, and consume new Kong features in the same ways as they're released. And honestly, because we're constrained by having a small team, we don't over-invest in, I'll say, indulgent ideas that we think will become elegant, that everyone's going to love, and that never come to fruition because it's just too much. We keep things simple and iterate, in this centralized deployment, on the really important things we need to provide, promote, and execute against.
So the next phase for us is moving to decentralized deployments. Once we get the automation squared away and teams are able to self-serve without colliding on URIs and upstreams, we can be in a position to say we're no longer running a single central set of gateways; we'll have multiple clusters, and we'll push those clusters closer to the applications.
Let the app teams do the deployment of their own stuff. We'll end up with whatever the packaging is, it'll probably be Docker, but whatever the package is, teams will be able to consume it on demand through semantically versioned images and deployments, and then push it all down to where it just becomes part of the network via service mesh, as they adopt the pattern and the technology. I was so happy to hear about Layer 4, four to seven; I've been asking a couple of times, you're going to Layer 4, right? So that's really great to hear.
Because, as we talked about, cloud first but not always: we're absolutely going to have things that need to stay on premises. We refer to them as crown jewels, or as I keep mangling it, "clown drools"; yeah, that got worse, I doubled down on the crown jewels. But these are the big things every company has: regardless of the product you sell, there's an algorithm, a formula, some secret sauce that needs to stay protected and will probably never make it into a data center owned by someone else. There are just those things. But the more we can provide these services at the network layer, where it just acts like DNS, it's simply there, the more we'll be able to accelerate the development and delivery of the digital transformation Cargill is undergoing.
So those are the key things around how Kong fits within the API platform: the plugins, the direction we want to go with self-service, and the big architectural things coming out of Kong. Like I said, we had no advance information about what was being announced, and just to see the commonality; there was a bit of talking among ourselves in the back as we watched things go up, like, they're checking our boxes, right? It's all moving along very, very nicely.
We are hiring; this is the sort of obligatory we're-hiring slide. Between the API platform, the cloud platform, and various other areas, we have plenty of things going on. And with that, I think I'm early; I can't even tell what time it is. Nonetheless, I'll open it up to questions. Thank you. And if not, I'm not keeping you from lunch, so I don't feel bad. We don't have the little cubes?
Speaker 2: So Cargill is a 150-year-old company, and I'm interested in what your experience was migrating legacy. Did you have monoliths that you've started to break down, and what, in your opinion, were some of the challenges you faced in terms of API management or making APIs externally available?
Jason Walker: Yeah. I'm assuming everyone was able to hear the question; let me know if I miss the answer here. One of the things we haven't done yet is externalize all the APIs. The previous implementation of an API management strategy included lots of, I'll say, tightly-knit integrations: there was already an existing file-based integration to get data from point A to point B, and instead of the mindset of how do we decouple that, go data-centric, and provide a loosely coupled interface, it was basically let's just lift and land that integration. That meant the gateways and services were really just rinse and repeat; we simply changed out the tool. Which meant we actually weren't doing things like using, consuming, or building out the APIs.
That's not to say that was the exclusive approach, but when it came down to some of the lift and shift, if you will, the approach we've taken is that the API platform itself is an enabler and a way to expose some of the work and effort done around rationalizing and exposing the data itself. In the past, the data was disparate, it was duplicated, and there was no ownership.
The data platform is actually the thing that triggers the ability to say we can assign data ownership and make it very clear, regardless of canonical form, what the data model and domain model should look like, and then use the API platform to safely expose that to the right consumers in the right places with the right guards and controls in place. As we mature the API platform, we're building it in such a way that any API could be exposed externally; not that we would, for reasons. Does that... okay. And I saw two hands; she has a mic, she wins, she roshambo'd you.
Speaker 3: Thank you, this was very informational. I have two questions; you can pick whichever one you want to answer. One: I was really interested in knowing how you've done the hybrid; could you give a use case or something? And the second question: did you go from monoliths to microservices, or were you using a different API gateway before this and then decided to move to Kong? And if you went directly from monolith to Kong, did you evaluate other gateways before deciding on Kong, and why did you go with Kong?
Jason Walker: Wow, there was a lot in there. Did we have a previous gateway, a previous strategy, a previous set of tools? Yes, and they actually still live today. So when it comes down to the hybrid model, we actually keep them discrete today: Kong is intended for, and is used today for, our cloud-based deployments. We still have that investment that hasn’t fully lifecycled yet for the on-premise side. However, we made our decision to use Kong with the idea that as long as we’re able to consume something like a Kubernetes environment, then our deployment can remain consistent with the cloud even if it’s on-prem.
So there are some reasons to say that Kong and that overall architecture are something we can reuse on-prem; there’s some more work that would need to be done to make sure the right hosting environment is available on-prem. We probably have 12 to 18 months left on current investments, and in six months we’ll probably start to have a real conversation: as that lifecycle winds down, when do we actually start to migrate and make more use of what we’re doing in the API platform, Kong being a component of that?
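The portability Jason describes, the same Kong deployment targeting a cloud or an on-prem Kubernetes cluster, can be sketched roughly as follows. This is a hypothetical minimal manifest, not Cargill's actual configuration; the names, image tag, and the use of Kong's DB-less mode (`KONG_DATABASE=off` with a declarative config file) are illustrative assumptions.

```yaml
# Hypothetical sketch: one manifest that can be applied unchanged to a
# cloud-managed cluster or an on-prem Kubernetes install, which is what
# keeps the gateway deployment consistent across environments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-gateway            # illustrative name
  labels:
    app: kong
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kong
  template:
    metadata:
      labels:
        app: kong
    spec:
      containers:
        - name: kong
          image: kong:3.6           # pin a real version in practice
          env:
            - name: KONG_DATABASE
              value: "off"          # DB-less mode; routes/plugins come from the file below
            - name: KONG_DECLARATIVE_CONFIG
              value: /kong/declarative/kong.yml
          ports:
            - containerPort: 8000   # proxy traffic
            - containerPort: 8001   # admin API
          volumeMounts:
            - name: kong-config
              mountPath: /kong/declarative
      volumes:
        - name: kong-config
          configMap:
            name: kong-declarative-config   # holds the declarative kong.yml
```

Because the cluster abstracts away the underlying hosting, the only per-environment work is providing a conformant Kubernetes environment, which matches the "more work on the on-prem hosting side" caveat above.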
Speaker 3: Okay, so the on-prem one is not Kong and the cloud one is Kong, is what you are saying?
Jason Walker: Correct.
Speaker 3: And just really quick, the second question was: why did you decide to go with Kong when there are other API gateways out there in the market?
Jason Walker: Oh sure. We went back and forth a little bit on “why Kong,” but as for what the process looked like to get to the decision of “let’s try Kong”: literally, we just spun up different environments making use of other open-source gateways, half a dozen of them. And when we looked at the various components of other parts of our ecosystem, like our monitoring tool, when we looked at the monitoring and instrumentation and then at the integrations available right out of the box, there was a Kong button.
It was like, okay, the thing we already have in place will monitor the thing we’re already looking at; do the others already have that quick, easy “click the button and go”? No. So we looked at the ecosystem as a whole, and that’s just a sample of it, whether it’s open source, web scale, IoT scale, what have you. It became really clear that there was traction in the Kong community. Things like the number of stars on GitHub actually looked like a deciding factor: how many people are actually paying attention to this, how many people are contributing, how old is the last pull request?
We went through and treated it as if we were just doing open source. A secondary consideration was whether there was commercial support along that same path if we went open source, or whether we would have to go third party. Could we just mature and level up into an enterprise software license agreement, what have you, while dealing with the same relationship, and is that something we would see ourselves being able to fit into? Not only for ourselves as we start the API platform, but for Cargill Inc. globally to be able to scale it up. Does that help? Okay.
Speaker 3: Thank you, that was good.
Speaker 4: Testing, oh good. So I have a two-part question as well. Name your favorite color, no-
Jason Walker: Name my favorite … yeah, like, whoa.
Speaker 4: So, Capricorn, I didn’t quite hear the first part: is it the central API catalog that Cargill services all use to find and talk to each other, or is that the aspiration of Capricorn?
Jason Walker: It is; we’re down that path. We haven’t made the statement “you must put your stuff into Capricorn.” What we have done is go to customers and ask what that “you must” statement feels like to them. Because we know we’re not necessarily at a maturity level to have the global distribution and resiliency in place. We have some warts every time we roll up the sleeves, and we want to make sure we can address those and build an environment where Capricorn is so frictionless and easy to use that people ask why we would do anything else, rather than making the directive top-down. In an inner-source way, we want our internal API customers to gravitate towards this really simple, standard, easy way to deploy our APIs, and for it not to feel like it’s being done to them. I don’t know if I answered-
Speaker 4: That’s exactly it, because some of our customers are in the same place: they’re trying to build these central API services that the rest of the org would then move their stuff to and consume. And the answer I’m looking for is, since you can’t usually make them do that, how do you attract them? Where are you finding success in giving them a good reason to come? You talked a little bit about carrots; are there other carrots you’re looking for that would make the case more compelling for Capricorn?
Jason Walker: Yeah, so there are other influences those teams are interacting with. It may be a security team, or asset management, or software compliance, all of these different areas. One of the things in establishing these different platforms is having those customer interviews to say: hey, if we build these things, we can check off all these boxes, so when you go and talk to your security team you can say “I’m just using the API platform,” and they go “this meeting’s over, have a good day.” That’s part of the bureaucratic, big-company friction we can reduce.
The additional carrot, and how I think we’re actually attracting some of the internal customers, is by simply asking: what do you want it to do? And they actually start to say a lot of the things the security team would want them to say, like “well, we don’t want just anyone to be able to deploy code.” Okay, cool; let’s not get into implementation details, but that’s a great “what” statement. So if there’s some amount of governance and some amount of control, but the application team owns that, then the platform team is going to have to require that there be a named owner.
If the app team wants to own stuff, somebody has to own it. That’s cool; if we’re able to establish that balance, then, using that as an example, the way we’re able to automate and build in certain controls means the app team is actually getting what they asked for. The security team is probably getting what they’re asking for too, but now we’ve made it more of a carrot. Does that answer? Okay, cool. And I think, yep.
Speaker 5: You talked about the batteries-included local developer deployment. Do you have interdependencies between the services when you do that? So if you’re working on service A and it’s got, say, five dependencies, does your Docker Compose have all of those, and how are you configuring the local code for that?
Jason Walker: Not yet. What we have is an ability for people to pull in a config, so they’re able to interact, essentially be admin, and then import and push in certain configs and policies. It’s intended not to reproduce integration-type testing, but to get to a point where it’s more for unit-test-type development work, where you don’t have those dependencies. In fact, if you’re building those dependencies into your unit tests, there’s an opportunity there. I would say you’re doing it wrong, but that’s kind of a jerk thing to say. I don’t know if there was any …
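A local, batteries-included setup of the kind described here could look something like the sketch below. This is a hypothetical Docker Compose fragment, not Cargill's actual tooling; the file names and the choice of Kong's DB-less mode are illustrative assumptions. The point it demonstrates is that a developer gets a standalone Kong node, acts as admin on it, and imports the same route/plugin config used elsewhere, with no external service dependencies.

```yaml
# Hypothetical local developer setup: a single Kong node in DB-less
# mode, with the admin API exposed so the developer is effectively
# admin and can push in configs and policies.
version: "3"
services:
  kong:
    image: kong:3.6                               # pin a version in practice
    environment:
      KONG_DATABASE: "off"                        # no datastore needed locally
      KONG_DECLARATIVE_CONFIG: /kong/kong.yml     # config pulled in by the developer
      KONG_ADMIN_LISTEN: "0.0.0.0:8001"           # expose the admin API locally
    volumes:
      - ./kong.yml:/kong/kong.yml:ro              # routes/plugins checked into the repo
    ports:
      - "8000:8000"   # proxy
      - "8001:8001"   # admin API
```

Because nothing here reaches out to other services, it supports exactly the unit-test-style development Jason describes; if a test needs service A's five dependencies running, that is integration testing and belongs elsewhere.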
Speaker 6: So you mentioned the monitoring, and how with Kong you get monitoring out of the box. Are you using Prometheus?
Jason Walker: I’m so sorry, with that train I didn’t hear. I’m just going to come up to you.
Speaker 6: So you mentioned monitoring of APIs. Are you using Prometheus, and if not, what’s your experience with monitoring Kong?
Jason Walker: We actually consume the StatsD plugin, and as part of our deployment we prefix an API’s metrics with the environment and name of that API. So as it progresses up to the main monitoring tool, they’re able to drill in and get specifics on latency, or whatever types of details; at that step there are 15 or 20 different metrics that it publishes. Because we’re making use of Kubernetes, one of the things we’re able to do is use the horizontal pod autoscaler (HPA), and we keep our CPU threshold really low, so if there’s a lot of traffic we scale up early. We’re less concerned about memory footprint because we can go as wide as we technically want.
I bring that up because all those plugins and all those metrics just mean more in-memory consumption as things filter through, so we expand out to account for the overhead. For things like logs, we just do basic NGINX parsing on the output.
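The two pieces described above, StatsD metrics prefixed per environment and API, and an HPA with a deliberately low CPU target, might be sketched like this. These are hypothetical fragments: the prefix convention, hostnames, and threshold numbers are illustrative assumptions, not Cargill's real values. The first fragment is Kong declarative config; the second is a separate Kubernetes manifest.

```yaml
# Hypothetical Kong declarative config: emit metrics via the StatsD
# plugin, prefixed "<environment>.<api-name>" so the monitoring tool
# can drill into a specific API in a specific environment.
_format_version: "1.1"
plugins:
  - name: statsd
    config:
      host: statsd.monitoring.local   # illustrative collector address
      port: 8125
      prefix: prod.orders-api         # assumed "<env>.<api>" naming convention
---
# Hypothetical Kubernetes HPA (separate manifest, applied to the
# cluster): a low CPU utilization target makes the gateway scale out
# early under traffic, trading memory footprint for width.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kong-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kong-gateway        # illustrative deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 30   # kept low on purpose: scale wide, early
```

The design trade-off matches the talk: per-request plugin work (metrics, logging) costs memory in each proxy node, so rather than tuning each node, the platform simply adds replicas as soon as CPU climbs.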