Delivering the Goods with Chef

Abstract:

Michael Ducy, Chef, May 2016: The Chef Workflow with Artifactory accelerates the adoption of continuous delivery and encourages DevOps collaboration. It provides a proven, reproducible workflow for managing changes as they progress from a developer’s workstation, through a series of automated tests, and out into production. In this talk Michael Ducy will introduce you to Chef Delivery, Chef Compliance, and discuss how Chef is tackling application configuration with runtimes along with traditional infrastructure.

Talk Transcription:

All right. Good morning. I realized that I didn’t have my name on a slide, so I’d put it up there but it’s probably really, really hard for you to see. My name’s Michael Ducy. I work at Chef. And I’m gonna talk today — well, there was an abstract that was written. And I’m probably not going to follow the abstract to a T. But I’m gonna talk about kind of release pipelines from the operations side up. And I probably should stop the music, huh. There we go. I think the nice gentleman in the back took care of that for me.

So what’s interesting is — so first off. How many people use Chef? Or, okay, so a few of you. How many people know what Chef is? Even more of you. So how many people don’t know what Chef is at all. All right, a few of you. So, that’s kind of important.

So we went through this journey over the last couple of years. Chef started off primarily as a configuration management tool. And what’s interesting is that as we started as a configuration management tool and we practiced this thing called infrastructure as code, we started behaving more like developers. Right? And what’s super interesting is that Chef got started in web operations, large scale web operations, kind of, e-commerce sites and things like that. Started off in that world. And what’s happened is we’ve definitely evolved into managing enterprises, banks, other things like that. What’s interesting is when we started off in that web operations world, it fit very nicely as this new thing came about called DevOps, right.

So I’m going to talk a little bit about the journey. Basically how we got into the business of building release pipelines and how we kind of take a different approach to it and why we take a different approach to it.

So fundamentally, if you want to say what is Chef. If you want to define it in a sentence, you could say Chef is infrastructure as code. Right? So what Chef gives you is a DSL. So it gives you a language. It’s based upon Ruby. And what you can do with that language is you can programmatically provision and configure components. For whatever value of component — whatever component means to you. So whether it’s a file, whether it’s a service running on a system. But also it can be other things that you want to manipulate. Like API endpoints. Right? And a common one that we manipulate in our release pipelines is the Artifactory APIs. So we communicate out to the Artifactory API and we publish the RPMs that you would then go and get from downloads.chef.io. Right? Then, from an infrastructure as code perspective, the philosophy is that your infrastructure as code, or your Chef code, should be treated like any other code base. Right? And I’ll talk about that a little bit more here in a second.
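To make the DSL idea concrete, here is a minimal sketch of what a Chef recipe looks like. The package, service, and template names are illustrative assumptions, not something from the talk:

```ruby
# Sketch of a Chef recipe: you declare the state you want,
# and the chef-client converges the node to match it.
package 'nginx' do
  action :install
end

service 'nginx' do
  action [:enable, :start]
end

# Files are components too: render a config file from a template
# and reload the service when it changes.
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'
end
```

The same declarative style extends beyond files and services to anything with an API, which is how a pipeline step can push artifacts to something like Artifactory.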

And then the idea is that with infrastructure as code you should be able to reconstruct your business, or the application that runs your business, from three things. A code repository, that stores your application code and your infrastructure as code. A data backup. Because we don’t handle the data so you have to, you know, store the data somehow. And compute resources. So some arbitrary value of compute resources. Whether it’s in your own data center. Whether it’s just virtualization on VMware or whether you’re doing a private cloud. Or even in a public cloud environment. Amazon, Azure, and so forth. Right?

So, to put those three things together to talk about what Chef is holistically, so it’s infrastructure as code, programmatically provision and configure components, you treat it like any other code base and then you reconstruct — you should be able to reconstruct your application with those three things.

So let’s talk about this as code. Right? Because that has connotations. Right? So infrastructure as code should be treated like any other code base. So what do you do to any other codebase? Right? You have some software engineering practices around that codebase. You have software engineering practices about how changes get merged into that codebase. You have engineering practices about how that actual codebase gets released and an artifact gets published. Right? So what’s super interesting is that as operations people, we started down this path of going and configuring infrastructure, and the first time that we put Chef code into a Git repository, it completely changed the world of how, from our perspective, operations should work. Right? Now all of a sudden we have operations people working in Git on a regular basis. Right? Other things like that, and using practices and principles that developers are very used to and very comfortable with. The operations people started behaving more and more like developers.

Because if you’re going to treat it like any other codebase, you should store it in source control. You should have testing coverage. Right? And it should be part of — at the very least it should be part of your continuous integration pipelines. Right?

So let’s talk about testing here for a second. So has everyone seen this before? Some of you might have. If you have a developer background, you probably have seen this. So I found this super interesting. I saw this presented by someone from Yahoo and they were talking about their journey to continuous delivery. If you haven’t heard that talk, it’s actually a really, really good talk. Essentially Marissa Mayer, and why she got involved, I don’t know, but she basically made a directive that nothing else was getting shipped until they could continuously deliver changes and features. So they basically put every feature on hold until they were able to push things out in a continuous delivery way. And what’s interesting is they talked about this slide, because when you talk about this slide, you talk about all the stuff that goes into your release. Right? You really have to look at that entire release pipeline, or that release chain, and see how much manual testing you have in that process of getting an artifact out. Right?

And what we found at Chef when we were going down this path of infrastructure as code is that testing coverage was really, really super important from a couple of different perspectives. One, we wanted to make sure that we were writing sane Chef code. Right? So more of a unit test perspective. But then also we wanted to check to make sure that if I laid this infrastructure as code onto a live running system, does it create the live running system that I expect or that I want. Right? So we consider that integration testing. There’s tools like Serverspec and InSpec and other things like that that allow you to do that. And we built a lot of the tooling around how you do testing with Chef cookbooks. We built a lot of that tooling, our community built a lot of that tooling. And basically we tried to flip the picture to this state. Right? So this is what you want from an optimal perspective of software testing. Right? And when you think of software testing, you shouldn’t just think of the software that you’re releasing that runs your actual application, but also the infrastructure as code that’s required to actually go and build that application stack that you need. Right?
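As a sketch of what that integration testing looks like, here is a small InSpec-style example. Testing nginx is just an assumed example; the point is that you assert on the live running system the cookbook produced, not on the code itself:

```ruby
# InSpec integration test: after the cookbook converges a node,
# verify the node actually looks the way we intended.
describe package('nginx') do
  it { should be_installed }
end

describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
```

Tests like these run against a real or disposable instance, which is what separates them from the unit-level checks on the Chef code itself.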

And so what I’ve found super interesting as an operations person, or with an operations background, is what we used to do all the freaking time when we wrote a script. Right? Like you’d write a shell script, and how do you verify that the shell script worked? Well, you put it onto a live running system and you do manual testing. You’re like, ah, it works. Okay. And, you know, in shell scripts, -n and -x are always your friend and that’s kind of your testing. Right? But for an operations person going into the Chef world and beginning to think about release and how you do release properly, this is a really, really super important thing to realize: it has to be automated. And if the test isn’t automated, you should ask two things: do you actually need that test, and can you find a way to automate that test somehow. Cause if you can’t automate the test, then you’re going to be in that previous picture.

And so what we found is that as infrastructure as code evolved to where we are today, we started doing test driven development. Right? And so if you don’t know what test driven development is, it’s essentially you write the test first, and the test is gonna fail. So it’s also called red-green as well. The test is gonna fail, and then you go and write the actual code that makes the test pass. Right? So basically you’re writing to the result that you want to see. And as operations people, we started practicing test driven development with the series of tools that we have around infrastructure as code.
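A rough sketch of what that red-green loop looks like with ChefSpec, one of those tools. The cookbook name and resources here are hypothetical:

```ruby
# ChefSpec unit test, written first. It fails (red) until the
# recipe actually declares the nginx package and service (green).
require 'chefspec'

describe 'webserver::default' do
  let(:chef_run) do
    ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04')
                        .converge(described_recipe)
  end

  it 'installs nginx' do
    expect(chef_run).to install_package('nginx')
  end

  it 'enables and starts nginx' do
    expect(chef_run).to enable_service('nginx')
    expect(chef_run).to start_service('nginx')
  end
end
```

Because ChefSpec simulates the converge in memory, this loop runs in seconds, which is what makes writing the test first practical.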

And then the other really interesting thing is, is that we started building release pipelines as well for Chef cookbooks. Right? Or for this infrastructure as code. And as we started writing these release pipelines, we found, kind of, common patterns and practices that we saw our customers using over and over again. We also saw a lot of bad things that our customers were doing as well. I’ll talk about that here in a minute.

So it’s interesting though, when we got into this idea of release pipelines and we started talking about release pipelines, everyone in the industry has started gravitating towards this idea of continuous delivery. What’s super interesting is that I don’t think a lot of us are really thinking about why this is valuable. Right? Like why does this bring about this fundamental change in the way that we think in our organizations? Right?

And so there’s this idea called the three ways. Right? And these are the three ways. Does anyone know where the three ways come from?

[Audience]

DevOps, but where in particular?

[Audience] Phoenix Project.

The Phoenix Project. Right? So Gene Kim, and a couple of other people, basically did a rewrite of The Goal. That talks about, you know, systems level thinking and how do you troubleshoot — or not how you troubleshoot, but how do you optimize an entire system. And what’s interesting is that when you put things into a release pipeline, you get these same benefits of these three ways. Or you’re able to start to look at your problems from that perspective. Right?

So let’s take the first one, systems level thinking. So systems level thinking helps us from a couple different perspectives. So we avoid local optimization. So if you can see, from cradle to grave, or cradle to deploy, I guess, we’re not throwing away the stuff that we generate in our release pipeline. Right? Well, maybe you are. Eventually. Right? I think a lot of us would like a little more grave in some of the legacy applications that we have to support. Right? So it gives you the holistic picture of how that […] object gets built, from the instant that somebody checks in code to where it’s actually running on a live system. Right? So that’s an interesting aspect of continuous delivery.

And then the other thing is, is that you can also begin to understand the impact to up or downstream actors. So you can see how your release train is actually impacting other people that might be consuming you at various different points.

So release pipelines also help us from a perspective of amplifying feedback. Right? So when your release pipeline breaks, what happens? You get a message that the build is bad or that the deployment didn’t work or other various things like that. And you get information that says this test failed or this object failed to deploy correctly and all these other things. And you’re getting information back about whether or not what you’re going to deploy is going to work the way that you expect. Right? Which you think is kind of obvious, but what’s interesting is, when you begin to use that information in different ways, it can really help you, and I’ll talk about that here in a second.

And then the other thing is it gives you the ability to do continuous learning and improvement. Right? So if you’re doing continuous delivery or continuous deployment, you’re also doing, hopefully, continuous learning. So improvement is never done. Your practices and processes improve. The technology that you’re using improves. You know, everybody, of course, has all of their sites now running in Docker. Right? Come on, no chuckle, really. Is it too early? Tried to play some music to get you guys going.

And so, when you think of things holistically and you think of things from a systems perspective, you can really begin to build in this process of continuous improvement and continuous learning, because you can see the entire picture and you have those feedback loops going on.

And the other really important thing is that improvement in one area often requires improvement elsewhere. So if you optimize something locally, you’re probably going to impact something else, either further down or upstream. Then you have to go optimize other things in that, kind of, release process, or that release cycle. Right?

So this, basically. This is the model that came out of the book in 2010, Continuous Delivery. And this is essentially the optimal deployment pipeline model. Or, at least, what they proposed in that book. And what’s interesting, though, is you see the feedback loops. Right? Which we just kind of talked about. You see the entire system as well. And so you have an entire picture of what’s going on. So this is familiar to us. Everyone’s probably seen a continuous delivery pipeline.

And one thing that they talked about though in that book, or what Jez has also talked about afterwards, is this idea of four principles of low risk releases. So low risk releases are incremental. You want to decouple deployment and release. So building the artifact is not the same thing as deploying the artifact. Building the artifact is releasing it, and then deployment is actually putting it onto live running systems. And then you also want to focus on reducing the batch size.

Sorry, my wife’s texting me. I thought I turned that off. Let me take care of that real quick. There we go. Thank you.

And then you want to optimize for resilience. But what’s interesting is what we typically get is something like this. Right? Now this picture looks a whole hell of a lot different than that picture. Right? And so how do you even have a system level view of this? One, it’s way too big. There are too many steps. There’s many, many things wrong with this diagram. And this is actually somebody’s release pipeline. Right? At some point they branch out and they deploy to multiple environments all at the same time. Then they merge back in and all this other really, really odd stuff.

This is another example of one of their release pipelines as well. This is the same company; they will remain nameless. I blacked out most of the things up there so hopefully they don’t come and sue us later. But from the perspective of that previous diagram versus the optimal deployment pipeline that they talked about in Continuous Delivery, what we actually do in practice is much different than the theory, as the theory laid out what we should be doing. Or what we should be doing from an optimal perspective. Right?

So what should we probably be doing? I’ll propose to you that your release pipelines probably look something like this, or like this. And what you should be doing is throwing that release pipeline away. Cause you’re probably doing many things in that release pipeline wrong. There were reasons why you created this particular process at that particular time, two or three years ago when you started going down this route. There were reasons why you made the decisions you made at that time. But that doesn’t necessarily mean that those are applicable now. Right? So what you want to try and do is you want to try and rationalize the pipeline.

So a co-worker of ours at Chef, his name is Chris Webber. You can follow him on Twitter there. He wanted to make sure that I gave him a shout out for this, because he’s done a lot of work inside of Chef, and Seth as well, who’s in the back of the room, talking about how we release artifacts. And so how do we release the code that we actually give to you, to actually use our product. And so he had a couple thoughts. He actually wrote a very long essay on this. But two key thoughts came out of that essay that kind of struck me.

So the first one is, everything you do in a pipeline is in service of the promotable artifact that you create. Right? And so it’s interesting that when you start to think about how you build a pipeline, you have to think about the object that you’re trying to create and the end result that you end up wanting. And nothing else matters from, like, an external actor perspective. The only thing that matters really is that artifact and what you need to do to test that individual artifact. Not necessarily all the other actors. The other actors come in later and further down the pipeline.

And then the other thought that he had was that there’s no value created by a pipeline until a promoted artifact exits on the other end. So pipelines essentially don’t add a lot of value to the organization on their own. Right? And if they don’t add value until you actually have that artifact that a customer can actually use and consume, then what you don’t want is something like this. Where it takes a lot of time to actually get that artifact out and actually prove whether or not that artifact is adding value to the customer. Right? So you want to try and optimize this as much as possible. And what’s interesting is, if I zoomed in on this diagram and if I hadn’t blurred it out, there were times in all of these boxes. Right? And some of these times were like 14 minutes, 7 minutes, 8 minutes. And all of that work that’s being done, while it’s adding value from a testing perspective and giving us feedback, it’s not adding any value from the perspective of having an artifact that somebody can actually use and consume.

So how do you begin to kind of think about how to rationalize the pipelines? So the first thing is: focus on incremental changes. So typically what happens is, the reason why those pipelines are as big as they are is because we’re pushing too much stuff through the pipeline at one time. Right? Small changes are really important. Not only from an ease of development type perspective, but from a how do you know that what you’re going to release isn’t going to break other people type of perspective as well. Right? So incremental changes is something that’s preached over and over again, but I don’t think it’s practiced as much as it should actually be practiced in reality.

You want to have the minimal number of steps needed to verify the artifact, and verify that the artifact is good. The other interesting thing is that when you have a smaller pipeline, it’s easier to troubleshoot if something does actually go wrong. And if the artifact that you create has a problem with it, if you’ve done an incremental release, if you’ve done a very, very small release. And by small release it could be just a couple lines. If you’ve done it that way, then you actually know what you released in that particular build, and you know if something breaks, well then, you’ve only released one thing and therefore you have a very good idea of what is actually breaking the actual artifact. Right? I mean, if I add one thing in and the artifact’s broken, what’s the culprit? The thing I just added in. Right? See how easy that is.

Another thing about rationalizing the pipeline: what we’ve found through our, kind of, consulting engagements and the work that we did with customers is that a common pipeline shape helps tremendously. Right? So if everybody’s pipeline looks like this, or like this, and these are two different pipelines in the same organization, how can I go into this new group? Right? So if you changed teams, you changed groups and so forth, and you’re going to go from this to this, or go from this to this. How do you have any understanding of the system that is in front of you? How do you have any understanding of why this release pipeline exists the way that it does? Right? And so if you have a common shape for releasing your artifacts, it helps you tremendously from a couple of different perspectives.

So it gives you consistent processes across teams. It also gives you a common nomenclature as well. And Seth and I were talking about this on the ride up, and it’s really interesting how important nomenclature is. If I come up to Seth and I say that something is failing in the provision phase, he knows exactly where that would actually be failing. Right? If I tell him that my release pipeline is broken in this particular phase, because we have this common nomenclature. Right? All of our release pipelines look the same at Chef, and it’s because we use Chef Delivery, which gives you that ability. Right?

But it’s super interesting how the dialogue changes. Right? Versus having to go and explain this when you’re going to ask for help. If you had to try and explain that bigger diagram to somebody, it’s going to be much more difficult for them, and it’s also going to be more difficult to have that sharing going on.

Thus, if you have a common pipeline shape, you have common nomenclature, common processes, and thus you can have a common method of optimization for the entire organization. Right? You can look at things the same way, and thus when you want to think about how you release faster, you can use the same tooling across pipelines to optimize those pipelines. And I’ll talk about that here in a second.

It prevents tool thrashing. So another problem that we see is that you move from one team to another team and there’s a whole new release process. And then, of course, one team might use Jenkins, one team might use Chef Delivery or Bamboo or something like that. Right? So now you have a whole new set of tooling that you have to learn from a release pipeline perspective. Right?

And then the other thing is that we’re engineers and we like to debate what’s the most optimal way to actually do things, and we’ll debate things forever. And that’s typically what’s called bike shedding. Right? So, who doesn’t know what bike shedding is? I’ll tell you the story behind bike shedding. All right, a few people.

So there’s this guy, he’s British, I think he’s passed, by the name of Parkinson. And he had something called Parkinson’s Law. He actually has two laws. One is Parkinson’s Law and then one is Parkinson’s Law of Triviality. So bike shedding is Parkinson’s Law of Triviality. You should look up his other law as well, Parkinson’s Law, because it’s actually really interesting as well. I actually find it more interesting than the bike shedding one. But basically what he says is that if you have a committee, and if you put three things in front of the committee: what color you should paint a bike shed for employees, whether or not they should spend 20 million dollars on building a nuclear reactor, and then something a little bit between those two. Right?

So nobody understands a nuclear reactor, so they’ll probably spend like five minutes and everybody will rubber stamp it and then be like, okay, let’s go build a nuclear reactor. The middle item, people have a little bit more understanding about, but it doesn’t cost that much money, so maybe they’ll debate it for about an hour. But the bike shed, everybody can relate to the bike shed. Right? Everybody’s probably ridden a bike at one time. People have put their bike in a shed at some point in time. It doesn’t cost that much money, it’s like a hundred bucks to paint the bike shed. But everybody’s going to sit there and debate what color it should be because they have a favorite color. It’s something that’s relatable and something that they can grasp in their head. And so they’re going to debate it for hours and hours on end. Right?

And what happens from an engineering perspective is that when you think that you know something, and you think you have good ideas around it, as engineers, we’re just going to sit there and we’re going to debate it forever and we’re never going to make progress and move forward. Right? If you have a common pipeline shape and you say across your organization, this is the way the pipeline is going to look, everybody’s going to use the same pipeline, you end that debate right away. Right? That might be good and bad, and you can do it in some organizations, but you may not be able to do it in others.

But the other thing is that you can ensure that the process is being followed. Right? And what’s interesting about ensuring that the process is being followed is, if you have that common shape, you can also add in common steps that everybody must have in their pipeline. Right? Like compliance tests, or security tests, or other things like that. And somebody else can define what those tests look like, like the security team, or the compliance team, the auditors and so forth, and then you can consume those and pull those into your pipeline and use them, and then you know that as you’re releasing your artifact, or going about the process, you’re taking into account the concerns of those other teams. Because when you get it into production, it’s too late to worry about security or compliance or anything like that. You need to try and shift that, what we call shifting it to the left, as much as possible, and as early on in the deployment or delivery cycle as possible.
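One way to sketch that pattern is with InSpec profiles, where a central team publishes controls and each pipeline pulls them in. The profile and control names here are assumptions for illustration:

```ruby
# Pull in every control from a profile the security team owns,
# then layer on anything team-specific on top of it.
include_controls 'base-security'

control 'ssh-no-root-login' do
  impact 1.0
  title 'Disallow root login over SSH'
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end
```

Because the shared controls run as an ordinary pipeline phase, a failed compliance check blocks promotion the same way a failed unit test does, which is what shifting it to the left means in practice.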

So let’s talk about optimization of the pipeline. So, this is called a value stream map. This is, I think, a bike, no relation to the bike shed, but I think it’s, like, how do you produce a bike. Right?

And so you have materials coming in from the supplier, and they sit on the dock for five days. Then finally it gets into milling. There’s two people working in milling, and for every artifact that they produce it’s going to take two minutes. They actually add value for two minutes. Right? So basically they mill it into the right shape. And then it sits for 10 days, and then two people in welding finally have enough time to actually pick that artifact up. And then they’re going to weld it into the bike frame, and it takes them four minutes to weld it into a bike frame. And then it’s going to sit for 15 days, and then three people will pick it up and paint it, and it’s gonna take seven minutes to paint it. And then it’s gonna sit for eight days, and then it’ll get assembled into an actual bike that somebody wants. That takes two minutes. And then it will sit for 30 days before it’s actually shipped out to a customer. Right?

Does this look familiar? I mean, if you think about this, this is a pipeline, right? And what’s also interesting is this is a systematic view. And when you have this systematic view, you can begin to think about different things. So it takes 68 days from the raw material coming into this factory to actually having an artifact produced that a customer can get value out of. Right? But it only takes 15 minutes to produce something of value. All that other time in there is basically waste. Right? And so what’s interesting is that when you start to look at things from a systematic perspective, you can start to think about the areas where you can actually begin to optimize.
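The arithmetic behind those numbers is worth spelling out, because the ratio is so lopsided. A quick sketch, with the stage breakdown taken from the bike example above:

```ruby
# Value stream arithmetic for the bike example: wait time
# utterly dominates the time that actually adds value.
wait_days     = [5, 10, 15, 8, 30]  # dock, pre-weld, pre-paint, pre-assembly, pre-ship
value_minutes = [2, 4, 7, 2]        # milling, welding, painting, assembly

lead_time_minutes = wait_days.sum * 24 * 60 + value_minutes.sum
efficiency_pct    = 100.0 * value_minutes.sum / lead_time_minutes

puts "Total lead time:  #{wait_days.sum} days plus #{value_minutes.sum} minutes"
puts "Value-add time:   #{value_minutes.sum} minutes"
puts "Cycle efficiency: #{efficiency_pct.round(3)}%"
```

Roughly 0.015 percent of the lead time adds value, which is why attacking the wait states, not the 15 minutes of work, is where optimization pays off.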

So where can we begin to optimize this? Well, one, we have all these wait states. Right? We have basically 68 days minus 15 minutes of waste. Right? Where we’re just sitting there doing nothing. And that’s really where you can begin to optimize things, and I’ll talk about this here a little bit more.

But if you think about how we’re actually doing releases. Right? Thank you, ThoughtWorks, for the diagram. But these two diagrams look very, very similar. Right? And if you go back to that other diagram where we showed all the green boxes and all those times, that’s exactly the same thing that you see here. So we can begin to use these tools that already exist in the industry. And kind of what Captain Sully said last night, you know, a lot of us in different industries are trying to solve the exact same problems. So this diagram comes from the practice of lean. And lean manufacturing. And the Toyota Production System. Right? And that was all stuff that came out of the sixties and seventies, that came out of work at Toyota. Right?

So let’s talk about some lean principles and tools that we can use to optimize the pipeline. So, value stream mapping is one. I just showed that. What’s also interesting is that what we’re doing here in this diagram is really visualizing the work that we’re doing as well. Right? And when you visualize the work, it helps you in a couple different ways. You can see all of the waste that you have in the system. You can also tell what people are working on, so another lean principle that you might use is […]. And that helps you visualize the work to figure out what is the work in progress, what is somebody working on right now, what is everything that I have coming in, what am I getting completed. Right? And then the last one is Muda as well, which I’ll talk about in specifics here in a second.

So let’s talk about the removal of waste. So this is a great example of the diagram. And Muda basically means, I don’t know what it means word for word translating into Japanese, but it roughly means waste. Right? So if you hear someone use the term Muda, it’s basically about how do you remove the waste. And so basically what you want to try and do is find everything that’s a waste of time and leave that little sliver that’s not a waste of time. A lot of times this looks like our calendars. Right?

But going back to Adrian. So he was on a podcast that I do and he kind of proposed the question: if you were to release every day, how much of your time would actually be spent on the process? Right? And how many meetings would you have to have? Right? To do that release every day. And then ask yourself this: if you were to release 10 times a day, would you have enough time in the day to actually execute all of that process that you have to do to release 10 times? Because if you’re going to release 10 times a day, you’re going to have to go to the change advisory board, you’re going to have all of this work that needs to happen over and over again. And you can’t batch it all together, cause you’re doing small, incremental releases, remember. So basically you’re going to have to go to the change advisory board 10 times a day. Right? You’re going to have all these processes that are going to have to be followed. And what you’ll find is most of that process is going to look like this. Right? There’s going to be a lot of waste in the system that you can begin to remove.

So let’s talk about the waste in the system. So things to look for. So when you’re looking at that diagram, when you’re looking at your overall release process, where can you begin to optimize that overall release process?

So one, focusing on defects. So avoiding building stuff that’s bad to begin with. Right? And you can avoid building stuff that’s bad to begin with by giving your developers easier access to things like development environments that mimic production a little bit better. Right? And you have these capabilities now — Vagrant is a great tool for this. A lot of people use it so that developers can spin up an instance very quickly and begin to develop against that live running instance. And you know there’s consistency because you’re managing the image that they’re actually spinning things up from.
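As a rough illustration of that Vagrant workflow — the base box and the cookbook name below are assumptions for the sketch, not something from the talk — a team-managed Vagrantfile might look like:

```ruby
# Vagrantfile -- minimal sketch of a production-like dev environment.
# "bento/ubuntu-16.04" and "myapp::default" are hypothetical placeholders.
Vagrant.configure("2") do |config|
  # Team-managed base image, kept consistent with production.
  config.vm.box = "bento/ubuntu-16.04"

  # Converge the instance with the same Chef code production uses.
  config.vm.provision "chef_solo" do |chef|
    chef.add_recipe "myapp::default"
  end
end
```

With something like this checked into the repo, `vagrant up` gives every developer the same live running instance to develop against, and the team controls the image everyone spins up from.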

Overproduction of things not demanded by actual customers. So this is preventing things from getting into that whole release pipeline that nobody would want when you actually go and produce it at the end. And so what’s super interesting is that there’s a statistic — I don’t know how accurate it is, Jez Humble likes to throw it out — that basically two-thirds of the features that you actually release, nobody ever uses. And so you fix that by controlling what’s actually getting into the release pipeline in the first place.

And what’s interesting though is that when you can release quicker, and you’ve optimized that release pipeline, maybe you produce it, you turn it on — and this is why A/B testing is really important. You release it, you turn it on, you get feedback right away, you see if customers actually want it, and if they don’t want it, you can turn it off and then go focus on something else. Right? If they do want it, then you can actually start to make that feature better, and because you can release more quickly, you’re able to layer on features that they might want on top of that main feature much, much quicker. But the idea is to get feedback as quickly as possible. Right?
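That turn-it-on, turn-it-off loop is usually implemented with feature flags. A minimal sketch in plain Ruby — this is a hypothetical helper for illustration, not a Chef API:

```ruby
# Hypothetical feature-flag helper: release a feature dark, enable it
# for a slice of users, and kill it instantly if the feedback is bad.
class FeatureFlags
  def initialize
    @flags = {}
  end

  # Enable a feature for a percentage of users (0-100).
  def enable(name, percent: 100)
    @flags[name] = percent
  end

  # Kill switch: turn the feature off without a redeploy.
  def disable(name)
    @flags[name] = 0
  end

  # Deterministically bucket a user so they see a stable variant.
  def enabled_for?(name, user_id)
    percent = @flags.fetch(name, 0)
    (user_id.hash.abs % 100) < percent
  end
end

flags = FeatureFlags.new
flags.enable(:new_checkout, percent: 10)  # A/B test on 10% of users
flags.disable(:new_checkout)              # nobody wants it: turn it off
```

The deterministic bucketing matters for A/B testing: a given user always lands in the same variant, so the feedback you collect is consistent.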

Inventories awaiting further processing or consumption. So, things waiting for other people to pick them up. And usually that’s manual steps or manual tasks. And when you don’t look at things from a systems perspective, you don’t think of all the other actors that actually need to get created at various points in time. What you’re going to have is you’re going to be waiting on the team to actually go build the server for you. Right? And this still happens. I know we’re here in Silicon Valley, but I live in the Midwest and I work with a lot of companies where it still takes them six months to get a server. Right? He actually says 10 months. Oh wait, that’s 10 minutes. Okay, thank you.

Unnecessary over-processing. So a lot of times what happens is that you’re putting tests in because somebody broke something at some point in time. Right? You might have fixed the root cause of why that problem happened, but you’re constantly going to be running this test. And the test may not even be necessary anymore because the system has changed. And thus, you’re basically spinning up processes and using CPU cycles for something that’s not even relevant anymore.

Unnecessary motion of employees or meetings. Unnecessary transport and handling of goods. You can think of those as putting too many approvals into the process.

And then waiting for an upstream process to deliver, or for a machine to finish processing, or for a supporting function to be completed, or for an interrupted worker to get back to work. You can always think of this as compile time. Right?

So that’s lean and that’s Muda. Right? So we can use these tools to help us optimize how that release pipeline should be built.

So by my clock, I was going to end five minutes early. So I have about four minutes and so now the unapologetic product pitch.

So Chef Delivery, believe it or not, actually helps you with a lot of these problems. It’s funny how that works. Right? So what we’ve done is — and I love how Google Slides always messes up my fonts when I import things.

So that’s approve and that’s deliver. But basically, this is actually the release pipeline shape that we use. This is the only pipeline shape that you get with Delivery. What’s interesting is that we use this for all of the software that we’re building and shipping and delivering to you, the customers. And if you have any questions about it, I can answer them, or Seth is also in the room as well. I’m happy to talk more about how we do it.

And so basically you have some of the CI-type processes here, and then basically all of this is more of the CD-type processes. Right? And so we have standard lint, syntax, and unit checking. What happens is, somebody can go in — we practice what we call the rule of four eyes — and somebody can approve it and say this is actually something that we want to do. This is the change we want to ship.

Once that approval happens, a build takes place. And also, when that approval takes place, we merge to master. Right? So small, incremental changes, merging to master as fast as possible. Branches should be as short-lived as possible as well. Once that build takes place, we deploy it into an acceptance environment. And in the acceptance environment we actually deploy it onto live running hardware. You can create that live running hardware dynamically and then […] [Sound fades out] you can configure it using a tool like Chef or something like that. And then you can actually deploy that artifact onto it and run the smoke and functional tests to verify that the artifact is behaving the way that you expect. And then a product manager, a product owner, can actually go in and say, I want to deliver and ship this artifact out.

And so when that happens, it moves into a union phase. And so basically if you look at it from this perspective — it seems like they turned my audio off. Was I too loud?

So when you think of it from this perspective, if you have cookbooks and you have applications, and when you look at it like this, it’s really interesting cause everyone’s following the same process. Everybody’s following the same flow to get changes out into the production environment. Right? And so everybody’s speaking that common language and using the common tools and the common pipeline shape. And then what happens in union, is you can have all of these acceptance pipelines come together to actually say, can I actually deploy this out as a set into other environments. And then dependencies can get pulled in.

And then the other interesting thing is, because we have a common pipeline shape, when you go into union, if somebody declares you as a dependency, you can go and run their smoke and functional tests to actually see whether you’re going to break them when you do your release. And what’s super interesting though is, if you didn’t have that common pipeline shape, you wouldn’t know that these other actors or these other artifacts that are dependent on you even have smoke and functional tests. Right? Cause you don’t know what people have put into their pipeline. Right? And so it kind of gives you that common shape to make sure that you’re moving fast and you’re moving fast safely.

And so basically, these are the principles that we built into Delivery. A consistent pipeline across teams and projects, as we just saw. We’re really passionate about enforcing small batch size and enforcing incremental releases. You might have noticed a change if you use Chef and you go and download the Chef client: that release number is actually really, really high. And the reason why that release number is really, really high is because we’re doing a lot of this. Right? So the release number essentially becomes the build number now. Cause we’re constantly building these artifacts. We’re constantly testing. We’re constantly pushing things through the pipeline.

And then the other thing is consistent quality gates. Right? So you know that every pipeline is going to follow the same quality gates. What you can also do, if you don’t want to do everything at once, if you don’t want to do all of these phases as we call them, is turn them off. And the other thing is that what we found is that pipelines need to be accessible. Pipelines need to be easy to create and easy to consume. And so with essentially two commands, you can have a new pipeline up, and you have this shape. You get it by default, you’re always going to have it, and basically just by running delivery on it, you get a new pipeline created for you with this shape.
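For reference, the per-project pipeline definition in Delivery lived in a small config file checked into the repo; the sketch below is from memory of the Delivery config of that era, so treat the exact keys and values as assumptions rather than the definitive schema:

```json
{
  "version": "2",
  "build_cookbook": {
    "name": "build-cookbook",
    "path": ".delivery/build-cookbook"
  },
  "skip_phases": ["quality", "security"]
}
```

The `skip_phases` list is the "turn them off" knob mentioned above: the pipeline shape stays fixed, but a project can opt out of individual phases it isn’t ready for yet.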

And what’s interesting, going back to what Chris said, is that the first thing you need to do is work out the process to get your artifact published. Once you get the artifact being published, then you go and build in all of the tests that you need later. Right? You publish the artifact first, and then you figure out how to actually test it in the build phase and the deployment phase.

So with that, we got about four minutes left for questions. So what questions can I answer for you?