Easy Agile Podcast Ep.12 Observations on Observability
On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.
Wollongong has a thriving and supportive tech community, and in this episode we've brought together some of our locally based developers from Siligong Valley for a round-table chat on all things observability.
💥 What is observability?
💥 How can you improve observability?
💥 What's the end goal?

"This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""
Be sure to subscribe, enjoy the episode 🎧
Transcript
Jared Kells:
Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any aboriginal people listening with us today.
Jared Kells:
So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting that we've got a couple of primarily front end engineers and Angad and I are going to share some front end technical stuff and Jess and Jordan are going to be talking a bit about observability. So we'll start by introductions. So I'll pass it over to Jess.
Jess Belliveau:
Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?
Jordan Simonovski:
I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.
Angad Sethi:
Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.
Jared Kells:
Nothing fancy!
Jess Belliveau:
Don't sell yourself short.
Jared Kells:
Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?
Jess Belliveau:
Yeah, yeah. That's it, we'll wrap up!
Jared Kells:
Podcast over! Tell us about observability.
Jess Belliveau:
Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this and sort of for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience that maybe people don't know a lot about. And there's a lot of movements in the broad IT scope, I guess, that you could talk about. There's so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about, and it's something that you can give an introduction to without getting too technical. This is something you can go really deep into the weeds on, so we picked it as something that hopefully we can explain at a level that might interest the people at home listening as well.
Jess Belliveau:
Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.
Jordan Simonovski:
Okay!
Jess Belliveau:
So we thought we'd try and describe to you, first of all, what is observability. Because the term doesn't give you much of what it is. It gives you a little hint, but it'll be good to baseline what we're talking about when we say observability. And then why would a development team want observability? Why would a company want observability? Sort of high level, what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry buzzwords and commit to stuff that you might not need, or that sort of stuff.
Jared Kells:
Yep.
Jordan Simonovski:
Yep.
Jess Belliveau:
We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then we just thought, because we're not going to try and get too deep, we could just give a few pointers to some websites and some YouTube talks for further reading that people might want to do, and go from there. So yeah, Jordan you want to-
Jared Kells:
Sounds good.
Jess Belliveau:
Yeah, hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, if there's stuff that you think we don't cover or that you want to know more about, ask away.
Jordan Simonovski:
I guess to start with observability, it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop, or close a feedback loop, on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to maybe differentiate itself from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that. And I think they serve their own purpose and they're not in any way obsolete either, things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzz words.
Jordan Simonovski:
But even for something like a traditional monolith, observability really serves to help us ask new questions of our systems. So the way it tends to get explained is: monitoring exists for our known unknowns. With seniority comes the ability to predict, almost, in what way your systems will fail. The longer you're in the industry, you know this: a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.
Jared Kells:
I was going to say that!
Jordan Simonovski:
I'll try not to get too much into-
Jared Kells:
Runs out of memory!
Jordan Simonovski:
Yeah. So that's something that you're expecting to fail at some point. And that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about, you see, it's an unknown unknown of our system, that we want to find out about and ask new questions from. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? You want me to go any further into detail about this stuff? I can talk all day about this.
Jared Kells:
Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Is it kind of like, traditionally with a Java app, I might log memory usage, because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-
Jordan Simonovski:
Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually servicing some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone, all of these additional attributes. You can throw the user agent in there if you want to. The idea of observability is that you're not necessarily constrained by high cardinality data. High cardinality data being data sets that can change quite largely, in terms of the kinds of data they represent, or the combinations of data sets that you could have.
Jordan Simonovski:
So if you want to ship metrics on something on a per user basis and you want to look at how different users are affected by things, that would be considered a high cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now being considered new metrics. Whereas with observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that users from a particular time zone or a particular device would only be experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things. Or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
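To make the idea of wide, high-cardinality events concrete, here's a minimal TypeScript sketch. The `emitEvent` helper and the field names are hypothetical, not any vendor's schema; the point is one context-rich event per request, carrying attributes you can later slice by.

```ts
// Hypothetical helper: ship one wide, context-rich event per request.
// In a real system this would go to an observability backend; here it
// just prints structured JSON to stdout.
type WideEvent = Record<string, string | number | boolean>;

function emitEvent(event: WideEvent): void {
  console.log(JSON.stringify({ timestamp: new Date().toISOString(), ...event }));
}

emitEvent({
  name: "http.request",
  route: "/api/issues/:id",
  status: 500,
  durationMs: 1243,
  server: "web-04",                // low cardinality: a handful of values
  timezone: "Australia/Sydney",
  userId: "user-8675309",          // high cardinality: unique per user
  userAgent: "Mozilla/5.0 (X11; Linux x86_64) ...",
});
```

Because each event carries user, device and time-zone context together, you can later ask "are errors only coming from users in one time zone?" without having pre-declared a metric for it.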
Angad Sethi:
Would it be fair to say-
Jared Kells:
Yeah. It's [crosstalk 00:11:02].
Angad Sethi:
Oh, sorry Jared.
Jared Kells:
No you can-
Angad Sethi:
Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?
Jordan Simonovski:
Yeah.
Angad Sethi:
Oh.
Jess Belliveau:
And better equip you to find them. One of the things I find is a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words, it being a set of principles or best practices. It's sort of giving you some guidance around having good logging coming out of your application. So structured logs, so you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about, giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. So the good thing with metrics is we're turning things a bit around. I don't want to get too technical, but instead of doing black box monitoring, where we're on the outside trying to peer in with probes and checks, the idea with metrics is the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
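As one concrete illustration of an application emitting its own metrics rather than being probed from outside, here's a minimal Node/TypeScript sketch assuming the `express` and `prom-client` packages and a Prometheus-style scraper; the metric name and routes are illustrative.

```ts
import express from "express";
import client from "prom-client";

// The application describes its own state: a counter it increments
// itself, rather than an external probe guessing from the outside.
const httpRequests = new client.Counter({
  name: "http_requests_total",
  help: "Total HTTP requests, labelled by route and status code",
  labelNames: ["route", "status"],
});

const app = express();

app.get("/api/health", (_req, res) => {
  httpRequests.inc({ route: "/api/health", status: "200" });
  res.json({ ok: true });
});

// Expose everything for a scraper (e.g. Prometheus) to collect.
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3000);
```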
Jess Belliveau:
Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into this next point of why would a team want to implement it. If you want to start again, Jordan?
Jordan Simonovski:
Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.
Jess Belliveau:
Oh, I didn't sign up for that!
Jordan Simonovski:
I think why teams would want it is because, well, it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in house. But observability capabilities themselves, you won't achieve them just by buying a thing. Like you can't buy dev ops, you can't buy Agile, you can't buy observability either.
Jared Kells:
Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-
Jess Belliveau:
Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.
Jared Kells:
Yeah, sorry, sorry, yeah! Go on.
Jordan Simonovski:
You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start ups like LightStep and Honeycomb, which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time. And to build, I guess, good operational hygiene and good production excellence, as Liz Fong-Jones would put it, you need to be able to close that feedback loop. We have a whole bunch of tools already. So we have CICD systems in place. We have feature flags now, which help us, I guess, decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PMs now if you want to, with feature flags, which is great.
Jordan Simonovski:
But what you can also do now is completely close this loop. As you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for beta releases of our application," and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLOs in place as well. And SLOs-
Jared Kells:
What's an SLO?
Jordan Simonovski:
They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features or if you drop everything and work on reliability to make sure your users aren't affected. So it's this very user centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. This is how, I guess the 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser or this particular, or where we've deployed this app to," let's say if you have a global deployment of some kind, you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them." And you can roll it back before you impact 100% of your users.
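A rough sketch of the canary mechanics described here: bucket each user deterministically and release a code path to only a small percentage. Real feature-flag services handle this for you; `inCanary`, the flag name and the render functions below are hypothetical.

```ts
import { createHash } from "crypto";

// Deterministically bucket a user 0-99 for a given flag, so each user
// stays in the same cohort across requests.
function inCanary(userId: string, flag: string, percent: number): boolean {
  const hash = createHash("sha256").update(`${flag}:${userId}`).digest();
  return hash.readUInt16BE(0) % 100 < percent;
}

// Deploy is already done; "release" is just flipping the percentage.
// Watch the canary cohort's error rate before widening to 100%.
function renderRoadmap(userId: string): string {
  if (inCanary(userId, "new-roadmap-renderer", 10)) {
    return "new renderer"; // 10% of users, e.g. beta opt-ins
  }
  return "old renderer";   // everyone else until the canary looks healthy
}

console.log(renderRoadmap("user-8675309"));
```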
Jared Kells:
Yeah. I liked what you said about the test. I forgot the acronym, but actually testing the end user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is, now I don't really care," but can a user open up the app and drag an issue around? It's like-
Jess Belliveau:
Yeah, that's a really great example, right?
Jared Kells:
That's what I really care about.
Jess Belliveau:
The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know. Whereas if you're getting that deeper level info of the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.
Angad Sethi:
How do you do that? The SLOs bit, how do you know a user can log in and drag an issue?
Jordan Simonovski:
Yeah. I think that would come with good instrumenting-
Angad Sethi:
Good question?
Jordan Simonovski:
Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing, or writing tests for your code as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think, as a lot of Agile and dev ops principles are telling us now, we do want to own our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measuring your success metrics. So if you do know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.
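Instrumenting the "user can drag an issue" example might look something like this sketch; `trackInteraction` and `moveIssue` are hypothetical stand-ins for your event pipeline and the feature itself.

```ts
// Hypothetical stand-ins for the feature and the event pipeline.
async function moveIssue(issueId: string, target: string): Promise<void> {
  /* ... the actual feature ... */
}
function trackInteraction(e: { name: string; ok: boolean; durationMs: number }) {
  console.log(JSON.stringify(e)); // would ship to a backend in practice
}

// Wrap the user-facing interaction so every attempt emits one
// good/bad event -- exactly what an SLO like "users can drag issues
// 99% of the time" is computed from.
export async function onIssueDragged(issueId: string, target: string) {
  const start = Date.now();
  let ok = true;
  try {
    await moveIssue(issueId, target);
  } catch (err) {
    ok = false;
    throw err;
  } finally {
    trackInteraction({ name: "issue.drag", ok, durationMs: Date.now() - start });
  }
}
```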
Jared Kells:
I think that we have to build-
Jordan Simonovski:
Yeah?
Jared Kells:
Oh, sorry Jordan.
Jordan Simonovski:
No, you go.
Jared Kells:
I was just going to say we have to build our apps with integration testing in mind already. So doing browser based tests around new features. So it would be about building features with that and the same thing in mind but for testing and production.
Jess Belliveau:
Yeah, and the actual how, the actual writing code part, there's this really great project, the OpenTelemetry project, which provides all these APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" or, "How do I emit metrics?", they provide all these helpful libraries and includes that you can have. Because the last thing you want to do is have to roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
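By way of illustration, instrumenting against the vendor-neutral OpenTelemetry API looks roughly like this; the service name, attributes and `checkCredentials` helper are made up. Which backend the spans actually go to is decided by SDK and exporter configuration elsewhere, so this code doesn't change when you swap vendors.

```ts
import { trace, SpanStatusCode } from "@opentelemetry/api";

// Hypothetical credential check, standing in for real login logic.
async function checkCredentials(user: string, pass: string): Promise<boolean> {
  return user.length > 0 && pass.length > 0;
}

const tracer = trace.getTracer("login-service");

export async function login(user: string, pass: string): Promise<boolean> {
  return tracer.startActiveSpan("login", async (span) => {
    span.setAttribute("login.method", "password"); // context, never secrets
    try {
      return await checkCredentials(user, pass);
    } catch (err) {
      span.recordException(err as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```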
Jess Belliveau:
And then the other thing that comes to mind with something like OpenTelemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability, and you would push your metrics and your logs and your traces to them, and they give you a UI to display it all. If you choose something that's vendor agnostic, let's just use the example of Easy Agile. Let's say they start with Datadog, and then in six months' time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.
Jordan Simonovski:
I think NorthX.
Jess Belliveau:
Yeah. You can change your end point, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.
Jordan Simonovski:
Your data structures remain the same.
Jess Belliveau:
Yeah. So that you could almost do it seamlessly without the developers knowing. There's even companies in the past that I think have pushed to multiple vendors. So you could be consuming vendor A and then you want to do a proof of concept with vendor B to see what the experience is like and you just push your data there as well.
Jared Kells:
Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.
Jess Belliveau:
Yeah. That's sort of the big up sell, right. It's how you interact. That's where they want to get their hooks in, is making it easier for you to interpret that data and manipulate it to meet your needs and that sort of stuff.
Jordan Simonovski:
Observability suggests dashboards, right?
Jess Belliveau:
Yeah, perhaps. You used this term as well, Jordan, "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. For me, production excellence, or at Apptio we call it production readiness or operational readiness, is: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a real great idea, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, we've all probably been there: we've deployed code to production and we have no observability, we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix. And it's hard to learn from that, or you have to learn from that I guess, and protect the user against future stuff, yeah.
Jess Belliveau:
When I think easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging, which is really this idea that your application is logging, first of all. Quite important as a baseline starting point. But then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line, with a timestamp, an error message, whatever the developer decided to write to standard out, or to the error file or something like that. Now I think there's a general move to having JSON, an actual formatted blob with that known structure so you can look into it. Tracing's probably not an easy win. That's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff. Requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
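The before/after of that easy win, sketched with plain `console.log`; the field names are illustrative.

```ts
// Before: free-form text only a human can reliably read.
console.log("2021-08-12T03:14:15Z ERROR login failed for jkells");

// After: one JSON object per line, a known structure your log tooling
// can parse, filter and aggregate on ("level=error AND user=jkells").
console.log(
  JSON.stringify({
    timestamp: new Date().toISOString(),
    level: "error",
    message: "login failed",
    user: "jkells",
  })
);
```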
Jordan Simonovski:
I think at the start you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code might get more complex or your product gets a bit more complex, you can start adding things in there. But I think actually knowing, or surfacing, the things you know might break, those would probably be your quickest wins.
Jess Belliveau:
Well, let's mention some things for further reading, if you want to go get the whole picture. Observability really started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that; there's some great chapters on that. O'Reilly has an observability book, I think, just dedicated to observability now.
Jordan Simonovski:
I think that's still in early release, if people want to Google it.
Jess Belliveau:
The open telemetry stuff, we'll drop a link to that I think that's really handy to know.
Angad Sethi:
From [inaudible 00:26:12], which is my perspective as a developer: say I wanted to introduce observability. We use Datadog at Easy Agile, and I'm not very familiar, not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability in my direct job or at my workplace?
Jordan Simonovski:
I would lean, and I could be biased here, Jess, correct me or give your opinion on this, I would lean heavily towards SLOs for this. And you can have a quick read in the SRE-
Jess Belliveau:
What does SLO stand for, Jordan?
Jordan Simonovski:
Okay, sorry. Buzz words! SLO is a service level objective, not to be confused with a service level agreement. An agreement itself is contractual and you can pay people money if you do breach those. An SLO is something you set in your team, and you have a target of reliability, because we are getting to the point where we understand that all systems at any point in time are in some kind of degraded state. And yeah, reliability isn't necessarily binary; it's not unreliable or reliable. Most of the time, it's mostly reliable, and this gives us a better shared language, I guess. And you can have a read in the SRE handbook by Google, which is free online, which gives you a pretty good understanding of SLOs.
Jordan Simonovski:
Datadog, I think the last time I used it, had an SLO offering. But like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability; you'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring, you're measuring particular types of functionality. And you know when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user facing things are we implementing that we can start to measure?"
Jordan Simonovski:
Other things you could probably look at are, I think they're all covered in the book anyway, data freshness, in a way. You want to make sure the data users are being shown is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer like a ping check, where you're just saying, "Yes, HTTP, okay. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this if you do want to introduce it.
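The arithmetic behind an SLO is small enough to sketch directly; the event counts below are made up.

```ts
// "Users can perform this interaction 99% of the time."
const slo = 0.99;

const totalEvents = 1_204_000; // e.g. issue-drag attempts this month
const badEvents = 9_031;       // attempts that failed or were too slow

const sli = (totalEvents - badEvents) / totalEvents; // observed reliability
const errorBudget = 1 - slo;                         // 1% of events may fail
const budgetUsed = badEvents / totalEvents / errorBudget;

console.log(`SLI: ${(sli * 100).toFixed(3)}%`);                      // 99.250%
console.log(`Error budget used: ${(budgetUsed * 100).toFixed(1)}%`); // 75.0%
// Under 100% of budget used: keep shipping features.
// Over 100%: drop everything and work on reliability.
```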
Jared Kells:
We're going to talk a little bit about state, and how it works with these very front end heavy applications that we're building. So the applications we build basically run inside the browser, and the traditional state, as you would think about it, is just calling a very simple API that writes some things into the database, with some authentication and that sort of stuff. So in terms of reliability of the services, it's really reliable. Those tiny APIs just never have problems, because they're just so simple. And well, they've got plenty of monitoring around them. But all our state, when you say, "Observe the state of the system," for the most part, that's state in a browser. And how do we get observability into that?
Jess Belliveau:
A big thing is really, there's no one-size-fits-all as well. When we talk about the SLO stuff, it's understanding what is important to, not so much maybe your company, but your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge of front end stuff as well, but when we say we want to observe the state, we don't necessarily mean specifically just the state. You could want to understand, with each one of those APIs, when it's firing, what the request response time is for that API. So that might be an important metric, so you can start to see if one of those APIs is introducing latency, and so your user experience is degraded. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release and since then, now we're seeing it's in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something that the team then can look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
Jared Kells:
That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's something in this very complicated front end application not running in the same order as it used to, or there's some complex interaction we didn't think of, so it's requesting more data than expected. The APIs are returning; they're never slow, for the most part. But we have performance regressions that we may not know about without seeing them or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long it took for this particular interaction to happen.
Jess Belliveau:
Yeah. I've never done that side of things. The other thing, I guess, is you're dealing with the end user's environment as well; you could potentially be impacted by that. You could perceive-
Jared Kells:
Yeah sure.
Jess Belliveau:
... Greater performance on their laptop or something, or their ISP or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.
Jordan Simonovski:
Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you'll upload a source map of your minified JS to Sentry, deploy your code, and then if something does break or work in a fairly unexpected way, Sentry will tell you exactly which line this kind of stuff is happening on, and it's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-
Jared Kells:
Yeah, we use a similar tool and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end, not the actual state change, but just labels that represent that you updated an issue summary or you clicked this button, that sort of thing, and we send those with our crash reports. And it's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
Jess Belliveau:
Yeah, that's almost like, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're more shifting that. That's sort of all stuff in the browser interacting and just having that history of this is what the user did and how they've ended up-
Jared Kells:
In that state.
Jess Belliveau:
In that state, yeah.
Jordan Simonovski:
I guess even if you don't have a lot of microservices, if you're talking about particular, like you're saying for the most part your API requests are fine but sometimes you have particularly large payloads-
Jared Kells:
We actually have to monitor, I don't know, maybe you can help with this, we actually should be monitoring maybe who we're integrating with. It's actually much more likely that we'll have a performance issue on a Jira API rather than... We don't see it; the browser sees it, which is-
Jordan Simonovski:
Yeah, and tracing does surface all of those regressions for you. Most tracing libraries, like if you're running Node apps or whatever on your backend, and I can just tell you about Node, because I probably have the most experience writing Node stuff. You pretty much just drop in dd-trace, which is a Datadog library for tracing, into your backend, and it hooks itself into all of, I think, the common libraries that you'll tend to work with. Like if you're working with Express or a lot of the HTTP libraries, as well as a few AWS services, it will kind of hook itself into that. And you can actually pinpoint, it will show you on this pretty cool service map, exactly which services you're interacting with and where you might be experiencing a regression. And I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
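The drop-in is roughly this, going by the dd-trace docs at the time; the key detail is that `init()` must run before the libraries you want auto-instrumented are imported.

```ts
// tracer.ts -- import and initialise dd-trace before anything else,
// so it can patch Express, http, and other supported libraries.
import tracer from "dd-trace";

tracer.init(); // reads agent location etc. from DD_* environment variables

export default tracer;
```

Your app entry point then imports `./tracer` first, and requests flowing through supported libraries show up on the service map without per-route instrumentation code.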
Jess Belliveau:
It's funny, this is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third party providers as well. And something I think that's really important that sometimes gets missed: so many of us today are relying on third party providers, like AWS is a huge thing, a lot of people writing apps that require AWS services. And I think a lot of the time, people just assume AWS or Jira or whatever is 100% uptime, always available, and they don't write their code in such a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API and they don't implement exponential backoff. So they're basically trying to hit the AWS API, it fails or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience, have a retry mechanism automatically built in and that sort of stuff. It doesn't really tie into the observability thing, but it's something.
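A generic sketch of the retry-with-exponential-backoff pattern described here, with full jitter so throttled clients don't all retry in lockstep; this is a hand-rolled illustration, not any SDK's built-in retry.

```ts
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // out of retries: surface it
      // Random delay up to base * 2^attempt ("full jitter").
      const delayMs = Math.random() * baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: wrap the flaky upstream call instead of failing on first error.
// const result = await withBackoff(() => callAwsApi(params));
```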
Jared Kells:
And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.
Jess Belliveau:
Well, they're using your app. Exactly right. It reflects on you sort of thing, so it's in your interest to guard against an upstream failure, or at least inform the user when it's that case. Yeah.
Jared Kells:
Well, I think we're going to have to call it, this podcast, because it's been an hour. We were instructed max 45 minutes.
Jess Belliveau:
We could just keep going. We might need a part two! Maybe we can request [crosstalk 00:39:21].
Jared Kells:
Maybe! Yeah.
Jess Belliveau:
Or we'll just start our own podcast! Yeah.
Jared Kells:
So what were your biggest learnings today, given it's Angad and I who are just learning about observability? Angad, what was your biggest learning today about observability?
Angad Sethi:
My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the unknown unknowns. I don't know if that's a good takeaway, but...
Jess Belliveau:
Any takeaway is a good takeaway! What about you, Jared?
Jared Kells:
I think, because we were going to talk about state management, and part of it was how we have this ability at the moment, the way our front ends are architected, to capture the state of the app and get a customer to send us their state, basically. And we can load it into our app and just see exactly how it was, just the way our state's designed. But what might be even cooler is to build maybe some observability into that front end for support. I'm thinking, we have this button to send us your support information that sends us a bunch of the state, but instead of console logging to the browser console, we could be logging in our front end somewhere, so that when they click "send support information," our customers send us the actions that they performed.
Jared Kells:
Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.
Jess Belliveau:
Yeah, for sure. It'll probably be a lot less intrusive, as well, as some of the third party stuff that I've seen around.
Jared Kells:
Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.
Jess Belliveau:
Yeah
Jared Kells:
You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.
Jess Belliveau:
Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...
Jared Kells:
All thanks!
Jess Belliveau:
Thanks for having us, yeah.
Jared Kells:
Thanks all for your time, and thanks everybody for listening.
Jordan Simonovski:
Thanks everyone.
Angad Sethi:
That was [inaudible 00:41:55].
Jess Belliveau:
Tune in next week!
Related Episodes
Easy Agile Podcast Ep.32 Why Your Retrospectives Keep Failing (and How to Finally Fix Them)
In this insightful episode, we dive deep into one of the most common frustrations in engineering and dev teams: retrospectives that fail to drive meaningful change. Join Jaclyn Smith, Senior Product Manager at Easy Agile, and Shane Raubenheimer, Agile Technical Consultant at Adaptavist, as they unpack why retrospectives often become checkbox exercises and share practical strategies for transforming them into powerful engines of continuous improvement.
Want to put these insights into practice? We hosted a live, hands-on retro action workshop to show you exactly how to transform your retrospectives with practical tools and techniques you can implement immediately.
Key topics covered:
- Common retrospective anti-patterns and why teams become disengaged
- The critical importance of treating action items as "first-class citizens"
- How to surface recurring themes and environmental issues beyond team control
- Practical strategies for breaking down overwhelming improvement initiatives
- The need for leadership buy-in and organizational support for retrospective outcomes
- Moving from "doing agile" to "being agile" through effective reflection and action
This conversation is packed with insights for making your retrospectives more impactful and driving real organizational change.
About our guests
Jaclyn Smith is a Senior Product Manager at Easy Agile, where she leads the Easy Agile TeamRhythm product that helps teams realize the full benefits of their practices. With over five years of experience as both an in-house and consulting agile coach, Jaclyn has worked across diverse industries helping teams improve their ways of working. At Easy Agile, she focuses on empowering teams to break down work effectively, estimate accurately, and most importantly, take meaningful action to continuously improve their delivery and collaboration.
Shane Raubenheimer is an Agile Technical Consultant at Adaptavist, a global family of companies that combines teamwork, technology, and processes to help businesses excel. Adaptavist specializes in agile consulting, helping organizations deliver customer value through agile health checks, coaching, assessments, and implementing agile at scale. Shane brings extensive experience working across multiple industries—from petrochemical to IT, digital television, and food industries—applying agile philosophy to solve complex organizational challenges. His expertise spans both the technical and cultural aspects of agile transformation.
Transcript
This transcript has been lightly edited for clarity and readability while maintaining the authentic conversation flow.
Opening and introductions
Jaclyn Smith: Hi everyone, and welcome back to the Easy Agile Podcast. Today I'm talking to Shane Raubenheimer, who's with us from Adaptavist. Today we're talking about why your retrospectives keep failing and how to finally fix them. Shane, you and I have spent a fair amount of time together exploring the topic of retros, haven't we? Do you want to tell us a little bit about yourself first?
Shane Raubenheimer: Yeah, hello everyone. I'm Shane Raubenheimer from Adaptavist. I am an agile coach and technical consultant, and along with Jaclyn, we've had loads of conversations around why retros don't work and how they just become tick-box exercises. Hopefully we're going to demystify some of that today.
Jaclyn Smith: Excellent. What's your background, Shane? What kind of companies have you worked with?
Shane Raubenheimer: I've been privileged enough to work across multiple industries—everything from petrochemical to IT, to digital television, food industry. All different types of applied work, but with the agile philosophy.
Jaclyn Smith: Excellent, a big broad range. I should introduce myself as well. My name is Jaclyn. I am a Senior Product Manager here at Easy Agile, and I look after our Team Rhythm product, which helps teams realize the benefits of being agile. I stumbled there because our whole purpose at Easy Agile is to enable our customers to realize the benefits of being agile.
My product focuses on team and teamwork, and teamwork happens at every level as we know. So helping our customers break down work and estimate work, reflect—which is what we're talking about today—and most importantly, take action to improve their ways of working. I am an agile coach by trade as well as a product manager, and spent about five years in a heap of different industries, both as a consultant like you Shane, and as an in-house coach as well.
The core problem: When retrospectives become checkbox exercises
Jaclyn Smith: All right, let's jump in. My first question for you Shane—I hear a lot that teams get a bit bored with retros, or they face recurring issues in their retrospectives. Is that your experience? Tell me about what you've seen.
Shane Raubenheimer: Absolutely. I think often what should be a positive roll-up and actioning of a sequence of work normally turns out to become a checkbox exercise. There's a lot of latency in the things that get uncovered and discussed, and they just tend to perpetually roll over. It almost becomes a checkbox exercise from what I've seen, rather than the mechanism to actively change what is happening within the team—but more importantly, from influences outside the team.
I think that's where retros fail, because often the team does not have the capability to do any kind of upward or downstream problem solving. They tend to just mull about different ways to ease the issues within the team by pivoting the issues rather than solving them.
Jaclyn Smith: Yeah, I would agree. Something that I see regularly too is because they become that checkbox, teams get really bored of them. They do them because they're part of their sprint, part of their work, but they're not engaged in them anymore. It's just this thing that they have to do.
It also can promote a tendency to just look at what's recently happened and within their sphere of influence to solve. Whereas I think a lot of the issues that sometimes pop up are things that leadership need to help teams resolve, or they need help to solve. It can end up with them really focusing on "Oh well, there's this one bit in how we do our code reviews, we've got control over that, we'll try to fix that." Or as you say, the same recurring issues come up and they don't seem to get fixed—they're just the same complaints every time.
Shane Raubenheimer: Absolutely. You find ways that you put a band-aid on them just so you can get through to the next phase. I think the problem with that is the impact that broader issues have on teams is never completely solvable within that space, and it's no one else's mandate necessarily to do it. When an issue is relatable to a team, exposing why it's not a team-specific issue and it's more environmental or potentially process-driven—that's the bit that I feel keeps getting missed.
The pressure problem and overwhelming solutions
Jaclyn Smith: Yeah, I think so too. The other thing you just sparked for me—the recurring issue—I think that also happens when the team are under pressure and they don't feel like they have the time to solve the problems. They just need to get into the next sprint, they need to get the next bit of work done. Or maybe that thing that they need to solve is actually a larger thing—it's not something small that they can just change.
They need to rethink things like testing strategies. If that's not working for you, and it's not just about fixing a few flaky tests, but you need to re-look at how you're approaching testing—it seems overwhelming and a bit too big.
Shane Raubenheimer: Absolutely. Often environmental issues are ignored in favor of what you've been mandated to do. You almost retrofit the thing as best you can because it's an environmental issue. But finding ways to expose that as a broader-based issue—I think that should be the only output, especially if it's environmental and not team-based.
The problem of forgotten action items
Jaclyn Smith: Something I've also seen recently is that teams will come up with great ideas of things that they could do. As I said before, sometimes they're under pressure and they don't feel they have the capacity to make those changes. Sometimes those actions get talked about, everyone thinks it's a wonderful idea, and then they just get forgotten about. Teams end up with this big long backlog of wonderful experiments and things that they could have tried that have just been out of sight, out of mind. Have you seen much of that yourself?
Shane Raubenheimer: Plenty. Yes, and often teams err on the side of what's expected of them rather than innovate or optimize. I think that's really where explaining the retrospective concept to people outside fully-stacked or insular teams is the point here. You need, very much like in change management, somebody outside the constructs of teams to almost champion that directive—the same way as you would do lobbying for money or transformation. It needs to be taken more seriously and incorporated into not just teams being mini-factories supporting a whole.
You transform at a company level, you change-manage at a company level. So you should action retrospective influences in the same way. Naturally you get team-level ones, and that's normally where retrospectives do go well because it's the art of the possible and what you're mandated to do. I think bridging the gap between what we can fix ourselves and who can help us expose it is a big thing.
I see so much great work going to waste because it simply isn't part of the day job, or should be but isn't.
Making action items first-class citizens
Jaclyn Smith: Yeah, absolutely. I know particularly in the pre-Covid times when we were doing a lot of retros in person, or mostly in person with stickies on walls, I also found even if we took a snapshot of the action column, it would still end up on a Confluence board or something somewhere and get forgotten about. Then the next retro comes around and you sort of feel like you're starting fresh and just looking at the last sprint again. You're like, "Oh yeah, someone raised that last retro, but we still didn't do anything about that."
Shane Raubenheimer: I think Product Owners, Scrum Masters, or any versions of those kinds of roles need to treat environmental change or anti-pattern change as seriously as they treat grooming work—the actual work itself. Because it doesn't matter how good you are if the impediments that are outside of your control are not managed or treated with the same kind of importance as the actual work you're doing. That'll never change, it'll just perpetuate. Sooner or later you hit critical mass. There's no scenario where your predictability or velocity gets better if these things are inherent to an environment you can't control.
Jaclyn Smith: Yeah, that's true. We've talked about action items being first-class citizens and how we help teams do that for that exact reason. Because a retro is helpful to build relationships and empathy amongst the team for what's happening for each of them and feel a sense of community within their team. But the real change comes from these incremental changes that are made—the conversations that spark the important things to do to make those changes to improve how the team works.
That action component is really the critical part, or maybe one of two critical parts of a retro. I feel like sometimes it's the forgotten child of the retro. Everyone focuses a lot on engaging people in getting their ideas out, and there's not as much time spent on the action items and what's going to be done or changed as a result.
Beyond team-level retrospectives
Shane Raubenheimer: Absolutely, consistently. I think it's symptomatic potentially of how retros are perceived. They're perceived as an inward-facing, insular reevaluation of what a team is doing. But I've always thought, in the same way you have the concept of team of teams, or if you're in a scaled environment like PI planning, I feel retrospectives need the same treatment or need to be invited to the VIP section to become part of that.
Because retrospectives—yes, they're insular or introspective—but they need to be exposed at the same kind of level as things like managing your releases or training or QA, and they're not.
Jaclyn Smith: Yeah, I think like a lot of things, they've fallen foul of the sometimes contentious "agile" word. People tend to think, "Oh retros, it's just one of those agile ceremonies or agile things that you do." The purpose of them can get really lost in that, and how useful they can be in creating change. At the end of the day, it's about improving the business outcomes. That's why all of these things are in place—you want to improve how well you work together so that you can get to the outcome quicker.
Shane Raubenheimer: Absolutely. Outcome being the operative word, not successfully deploying code or...
Jaclyn Smith: Or ticking the retro box, successfully having a retro.
Shane Raubenheimer: Yeah, exactly. Being doing agile instead of being agile, right?
Expanding the scope of retrospectives
Jaclyn Smith: One hundred percent. It also strikes me that there is still a tendency for retros to be only at a team level and only a reflection of the most recent period of time. So particularly if a team are doing Scrum or some version of Scrum with sprints, to look back over just the most recent period. I think sometimes the two things—the intent of a retro but also the prime directive of the retro—gets lost.
In terms of intent, you can run a retro about anything. Think about a post-mortem when you have an incident and everyone gets together to discuss what happened and how we prevent that in the future. I think people forget that you can have a retro and look at your system of work, and even hone in on something like "How are we estimating? Are we doing that well? Do we need to improve how we're doing that?" Take one portion of what you're working on and interrogate it.
Understanding anti-patterns
Shane Raubenheimer: Absolutely. You just default to "what looks good, what can we change, what did we do, what should we stop or start doing?" That's great and all, but without some kind of trended analysis over a period of time, you might just be resurfacing issues that have been there all along. I think that's where the concept or the lack of understanding of anti-patterns comes in, because you're measuring something that's happened again rather than measuring or quantifying why is it happening at all.
I think that's the big mistake of retros—it's almost like an iterative band-aid.
Jaclyn Smith: Yeah. Tell me a little bit more about some of the anti-patterns that you have seen or how they come into play.
Shane Raubenheimer: One of them we've just touched on—I think the buzzword for it is the cargo cult culture for agile. That's just cookie-cutting agile, doing agile because you have to instead of being agile. Literally making things like your stand-up or your review or even planning just becomes "okay, well we've got to do this, so we've ticked the box and we're following through."
Not understanding the boundaries of what your method is—whether you like playing "wagile" or whether you're waterfall sometimes, agile at other times, and you mistake that variability as your agility. But instead, you don't actually have an identity. You're course-correcting blindly based on what's proportionate to what kind of fire you've got in your way.
Another big anti-pattern is not understanding the concept of what a team culture means and why it's important to have a team goal or a working agreement for your team. Almost your internal contracting. We do it as employees, right?
I think a lot of other anti-patterns come in where something's exposed within a team process, and because it's not interrogated or cross-referenced across your broader base of teams, it's not even recognized as a symptom. It is just a static issue. For me, that's a real anti-pattern in a lot of ways—lack of directive around what to do with retrospectives externally as well as internally. That's simply not a thing.
Jaclyn Smith: Yeah, I think that's a good call-out for anyone watching or listening. If you're not familiar with anti-patterns, they're common but ineffective responses to recurring problems. They may seem helpful initially to solve an immediate problem, but they ultimately lead to negative outcomes.
Shane, what you just spoke about there with retrospectives—an example of that is that the team feel disengaged with retrospectives and they're not getting anything useful out of it, or change isn't resulting from the retrospectives. So the solution is to not hold them as frequently, or to stop doing them, or not do them at different levels or at different times. That's a really good example of an anti-pattern. It does appear to fix the problem, but longer term it causes more problems than it solves.
Another one that I see is with breaking down work. The idea that spending time together to understand and gain a shared understanding of the work and the outcome that you need takes a lot of time, and breaking down that work and getting aligned on how that work is going to break down on paper can look like quite an investment. But it's also saving time at the other end, reducing risk, reducing duplication and rework to get a better outcome quicker. You shift the time spent—development contracts because you've spent a little bit more time discovering and understanding what you're doing.
A common anti-pattern that I see there is "we spent way too long looking at this, so we're going to not do discovery in the same way anymore," or "one person's going to look at that and break it down."
The budget analogy
Shane Raubenheimer: I always liken it to your budget. The retrospective is always the nice shiny holiday—it's always the first to go.
Jaclyn Smith: It's the contractor.
Shane Raubenheimer: Yeah. It's almost like exposing stuff that everybody allegedly knows to each other is almost seen as counterintuitive because "we're just talking about stuff we all know." It often gets conflated into "okay, we'll just do that in planning." But the reality is the concept of planning and how you amend what you've done in the retrospective—that's a huge anti-pattern because flattening those structures from a ceremonies perspective is what teams tend to do because of your point of "well, we're running out of daylight for doing actual development."
But it's hitting your head against the wall repeatedly and hoping for a different outcome without actually trying anything different. Use a different wall, even. I think it's because people are so disillusioned with retrospectives. I firmly believe it's not an internal issue. I believe that if those voices are heard at a budgeting level or at a management level, it will change the whole concept of the retrospective.
Solution 1: Getting leadership buy-in
Jaclyn Smith: I like it, and that's a good thread to move on to. So what do we do about it? How do we help change this? What are some of the practical tips that people can deploy?
Shane Raubenheimer: A big practical tip, and this is going to sound like an obvious one, is actual and sincere buy-in. What I mean by that is, as a stakeholder, if I am basing your performance and your effectiveness on the quality and output of the work that you're promising me, then I should be taking the repeating issues that you're having more seriously.
Because if you're course-correcting for five, six, or seven sprints and you're still not getting this increasing, predictable velocity, and if it's not your team size or your attitude, it's got to be something else. I often relate that to it being environmental.
Buying into the outputs for change the same way as you would into keeping everyone honest, managing budgets, and chasing deadlines—it should all be part of the same thing. They should all be sitting at the VIP table, and I think that's a big one.
Solution 2: Making patterns visible
Jaclyn Smith: I think so too. Something that occurs to me, and it goes back to what we were talking about right at the beginning, is sometimes identifying that there's a pattern there and that the same thing keeps coming up isn't actually visible, and that's part of the problem, right?
I know some things we've been doing in Easy Agile TeamRhythm around that recently, attempting to help teams with this. We've recently started surfacing all incomplete action items in retrospectives so people can see that big long list. Because they can convert their action items to Jira items or work items, they can also see where they've just been sitting and languishing in the backlog forever and a day and never been planned for anything to be done about them.
We've added a few features for sorting, that kind of thing. Coming in the future, and we've been asked about this a lot, is "what about themes? What about things that are bubbling up?" So that's definitely on our radar; that will be helpful.
I think that understanding that something has been raised—a problem getting support from another team, or with a broken tool or an outdated tool that needs to be replaced in the dev tooling or something like that—if that's been popping up time and time again and you don't know about it, then even as the leader of that team, you don't have the ammunition to then say "Look, this is how much it's slowed us down."
I think we live in such a data world now. If those actions are also where the evidence is that this is what needs to change and this is where the barriers are...
Solution 3: The power of trend analysis
Shane Raubenheimer: Certainly. I agree. Touching on the trend analytics approach—we do trend analysis on everything except what isn't happening or what is actually going wrong, because we just track the fallout of said lack of application. We don't actually trend or theme, to your point.
We theme everything when we plan, yet somehow we don't categorize performance issues, as an example. If everybody's having a performance issue, that's the theme. We almost need to categorize or expose themes that are outward-facing, not just inward-facing. Because it's all well and good saying "well, our automated testing system doesn't work", but what does that mean? Why doesn't it work?
I think it should inspire external investigation. When you do a master data cleanup, you don't just say "well, most of it looks good, let's just put it all in the new space." You literally interrogate it at its most definitive and lowest level. So why not do the same with theming and trending environmental issues that you could actually investigate, and that could become a new initiative that would be driven by a new team that didn't even know it was a thing?
Jaclyn Smith: Yeah, and you're also gathering data at that point to evidence the problem rather than "oh, it's a pain point that keeps coming up." It is, but it gives you the opportunity to quantify that pain point a little bit as well. I think that is sometimes really hard to do when you're talking about developer experience or team member experience. Even outside of product engineering teams, there are things in the employee experience that affect the ability for that delivery—whatever you're delivering—to run smoothly. You want to make that as slick as possible, and that's how you get the faster outcomes.
Solution 4: The human factor
Shane Raubenheimer: Absolutely. You can never underestimate the human factor as well. If everything I and every member of my team are doing is to the best of not just our capability, but the best of what we have available to us, and we're still hitting our heads against the same issue regardless of how often we pivot, we become jaded and frustrated. That can be very disillusioning, especially if it's not taken as seriously as our work output.
We run a week late for a customer delivery or a customer project, and we start complaining about things like money, budget overspend, over-utilization. But identifying systemic or environmental issues that you can actually quantify should be treated in exactly the same way. I feel very strongly about this.
Solution 5: Breaking down overwhelming action items
Jaclyn Smith: We tend to nerd out about this stuff, Shane, and you're in good company. You've also reminded me—we've put together a bit of a workshop to help teams and people understand how to get the most out of their retrospectives, not just in terms of making them engaging, but fundamentally how to leverage actions to make them meaningful and impactful.
We've spoken a lot about incremental change being the critical factor when it's something within the team's control, or close to it. That's how you get that expansion of impact: the slow, incremental change. We've also talked about how sometimes those action items seem overwhelming and too big. What's your advice if that's the scenario for a team? What do you see happen, and what can they do?
Shane Raubenheimer: I would suggest following the mantra of "if a story is too big, you don't understand enough about it yet, or it's not broken down far enough." Incremental change should be treated in exactly the same way. The "eat the elephant one bite at a time" analogy. If it's insurmountable, identify a portion of it that will make it a degree less insurmountable next time, and so on and so forth.
If we're iterating work delivery, problem-solving should be done in rapid iteration as well. That's my view.
Jaclyn Smith: I like it.
The "eat the elephant one bite at a time" analogy. If it's insurmountable, identify a portion of it that will make it a degree less insurmountable next time, and so on and so forth. If we're iterating work delivery, problem-solving should be done in rapid iteration as well.
Wrapping up: What's next?
Jaclyn Smith: I think we're almost wrapping up in terms of time. What can people expect from us if they join our webinar on July 10th, I believe it is, where we dive in and nerd out even more about this topic, Shane?
Shane Raubenheimer: I think the benefit of the webinar is going to be a practical showing of what we're waxing lyrical about. It's easy to speak and evangelize, but I think from the webinar we'll show turning our concepts into actual actions that you can eyeball and see the results of.
With the approach we took to our workshop, I think people will very quickly get the feeling of "this is dealing with cause and effect directly." To put that in one sentence: a practical, active demonstration of how to quantify and actually do what we've been waxing lyrical about.
Jaclyn Smith: Excellent. That was a lovely summation, Shane. If anyone is interested in joining, we urge you to do so. You can hear us talk more about this and get some practical help as well. There is a link to the registration page in the description below.
I think that's about all we have time for today. But Shane, as always, it's been amazing and lovely to chat to you and hear your thoughts on a pocket of the agile world and helping teams.
Shane Raubenheimer: Yeah, it's always great engaging with you. I always enjoy our times together, and it's been my pleasure. I live for this kind of thing.
Jaclyn Smith: It's wonderful! Excellent. Well, I will see you on the 10th, and hopefully we'll see everyone else as well.
Shane Raubenheimer: Perfect. Yeah, looking forward to it.
Jaclyn Smith: Thanks.
Ready to end the frustration of ineffective retrospectives?
Jaclyn Smith and Shane Raubenheimer also hosted a live, hands-on webinar designed to turn retrospectives into powerful engines for continuous improvement.
In this highly interactive session, they talked about how teams can:
- Uncover why retrospectives get stuck in repetitive cycles
- Clearly capture and assign actionable insights
- Identify and avoid common retrospective pitfalls and anti-patterns
- Get hands-on experience with Easy Agile TeamRhythm to streamline retrospective actions
- Take away practical tools, techniques, and clear next steps to immediately enhance retrospectives and drive meaningful team improvements
Easy Agile Podcast Ep.17 Defining a product manager: The idea of a shared brain
In this episode, I was joined by Sherif Mansour - Distinguished Product Manager at Atlassian.
We spoke about styles of product management and the traits that make a great product manager, before exploring the idea of a shared brain and the role of a product engineer.
Sherif has been in software development for over 15 years. During his time at Atlassian, he was responsible for Confluence, a popular content collaboration tool for teams.
Most recently, Sherif spends most of his days trying to solve problems across all of Atlassian’s cloud products. Sherif also played a key role in developing new products at Atlassian such as Stride, Team Calendars and Confluence Questions. Sherif thinks building simple products is hard and so is writing a simple, short bio.
Hope you enjoy the episode as much as I did. Thanks for a great conversation Sherif.
Easy Agile Podcast Ep.6 Chris Stone, The Virtual Agile Coach

What a great conversation this was with Chris Stone, The Virtual Agile Coach!
Chris shared some insights into the importance of sharing and de-stigmatising failures, looking after your own mental health, and why work shouldn't be stale.
Some other areas we discussed were why you should spend time in self-reflection (consider a solospective?) and asking "how did that feel?" when working as a team.
"I really enjoyed our chat. Plenty to ponder over the silly season, and set yourself up with a fresh perspective for 2021. Enjoy and Merry Christmas!"
Transcript
Sean Blake:
Hello, and welcome to another episode of the Easy Agile Podcast. It's Sean Blake here, your host today, and we're joined by Chris Stone. Chris is going to be a really interesting guest. I really enjoyed recording this episode. Chris is the Virtual Agile Coach. He's an agility lead, People First champion, blogger, speaker and trainer, who always seeks to gamify content and create immersive Agile experiences. An Agile convert all the way from back in 2012, Chris has since sought to broaden his experiences, escape his echo chamber and fearlessly challenge dysfunction and ask the difficult questions. My key takeaways from this episode were: it's okay to share your failures, the importance of recognizing our mental health, why it's important that work doesn't become stale, how to de-stigmatize failure, the importance of self-reflection and holding many self-retrospectives, and the origins of the word deadline. You'll be really interested to find out where that word came from and why it's a little bit troubling. So here we go. We're about to jump in. Here's the episode with Chris Stone on the Easy Agile Podcast. Chris, thanks so much for joining us and spending some time with us.
Chris Stone:
Hey there Sean, thank you for having me. It's a pleasure.
Sean Blake:
I have to mention you've got a really funky Christmas sweater on today. And for those people listening on the audio, they might have to jump over to YouTube just for a second to check out this sweater. Can you tell us a bit about where that came from?
Chris Stone:
So this sweater was a gift. It's a Green Bay Packers ugly Christmas jumper, as they call it. And I'm a fan of the Green Bay Packers; I've been out there a few times, to Green Bay, Wisconsin. It's so cold out there, in fact, that when you're holding a beer at minus 13 degrees, the beer starts turning to slush just from being outside in the cold air. It's a great place, very friendly, and the jumper was just a gift one Christmas from someone.
Sean Blake:
Love it. There's nothing better than warm beer is there? Okay. So Chris, I first came across you because of the content that you put out on LinkedIn. And the way that you go about it, it's so much fun and so different to really anything else I've seen in the corporate space, in the enterprise space, in the Agile space even, why have you decided to go down this track of calling yourself the virtual Agile coach, building a personal brand and really putting yourself out there?
Chris Stone:
Well, for me, it was an interesting one because COVID, this year, has forced a lot of people to convert to being virtual workers, remote workers, virtual coaches themselves. Now, what I realized this year is that the aspiration for many is those co-located teams; it's always what people desired. They say, "Oh, you have to work co-located, that's the best way." And I realized that in my whole Agile working life, I'd never really had that co-located team. There was always some element of distributed working, and for the two years prior to my current company, I was doing distributed scaled Agile across time zones including Trinidad and Tobago, Alaska, Houston, the UK and India, and it was all remote.
Chris Stone:
And I thought, all right, this is an opportunity to recognize the fact that I was a virtual Agile coach already, but to share with others, my learnings, my experiences, the challenges I've faced, the failures I've had with the wider community so they can benefit from it because obviously, everyone, or more many have had to make that transition very quickly. And there's lots of learnings there that I'm sure people would benefit from. And this year in particular, I guess the honest answer, the reason for me being, I guess out there and working more on that side of things, being creative is because it's an outlet for my mental health.
Chris Stone:
I suffer from depression, and one of my ways of coping with that is being creative, creating new content and sharing it. So I guess it's linked to that also, but the stories that people tell me afterwards motivate me to keep doing it. So when someone comes to me and says, "Hey, I did the Queen retrospective, the Queen rock band retrospective, and this program manager who never smiles connected to the content and admitted he liked Queen and smiled," and this was a first. Or when people come to me and say, "Hey, we did the Home Alone retrospective, one of your Christmas-themed ones, and people loved it. It was great. It was the most engaging retrospective we've had so far." Because the problem is work can become stale if you let it be so.
Chris Stone:
Retrospectives can become this, what did we do last time? What are we going to do next time? What actions can we do? Et cetera, et cetera. And unless you refresh it and try new things, people will get bored and they'll disconnect and they'll disengage, and you're less likely to get a good outcome that way. So for me, there's no reason you can't make work a little bit fun, with a little bit of creativity and a little bit of energy and passionate about it.
Sean Blake:
I love that. And do you think a lot of people come to work even when they're working in Agile co-located teams and it's just not fun, I mean what do you think the key reasons are that work isn't fun?
Chris Stone:
I think because it can become stale. All right. So let's reflect on where we are today. Today, we're in a situation where we're not face-to-face with one another. We don't have time for those water cooler chats. We don't connect over a coffee or a lunch. We don't have a chat about idle banter and things of that on the way to a meeting room, we didn't have any of that. And that forces people to look at each other and see themselves as an avatar behind a screen, just a name. Often in particular, people aren't even on video camera.
Chris Stone:
It forces them to think of people as a name on a screen, rather than a beating heart behind a laptop. And it can abstract people into just these entities, these names you talk to day in and day out, and that can force it to be this professional, non-personal interaction. And I'm a firm believer that we need to change that. We need to make things more fun, because it can, and in my experience does, result in much better outcomes. I'm very, very people first. We need to focus on people being people. People aren't resources. This is a common phrase I like to refer to.
Sean Blake:
I love that, people aren't resources. You spoke a little bit about mental health and your struggle with depression. Something that I hear come up time and time again is people talking about imposter syndrome. And I wonder, firstly, if you think that might be exacerbated by working remotely now. People are not so sure how they fit in, whether their role is still the same role that it was 12 months ago. And do you have any tips for people when they're dealing with imposter syndrome, especially in a virtual environment?
Chris Stone:
Well, yeah, I think this current environment, this virtual environment, the pandemic in particular, has led to a number of unhelpful behaviors. There are a lot more challenges with people's mental health and negativity, and that can only lead to, I guess, less desire, less confidence in doing things, maybe doubting yourself. There are some great visuals I've shared on this recently, and it's all about reframing those imposter thoughts you have, the unhelpful thinking, that thing that goes through your mind that says, "Oh, they're all going to think I'm a total fraud because maybe I don't have enough years of experience," or "I should already know this. I must get more training." There's lots of "shoulding" and "musting" in that. There's lots of jumping to conclusions in this.
Chris Stone:
And a couple of ways of getting around that is, so if you're thinking of the scenario where I'm a fraud think, "Oh, well I'm doing my best, but I can't predict what they might think." When you're trying to think about the scenario of do I need to get more training? Well, understand and acknowledge the reality that you can't possibly know everything. You continue to learn every single day and that's great, but it's unrealistic to know it all. There's a great quote I often refer to and it's, true knowledge is knowing that you know nothing. I believe it's a quote from Socrates.
Chris Stone:
And it's something that very much resonates with me. Over the years I've gone through this learning journey where, when I first finished university, for example, I thought I knew everything. I thought I've got it all. And I'd go out to clients and speak and I'm like, "Oh yeah, I know this. I've got this guys." And then the more involved I've become and the more deeper I've gone into the topic, the more I realized, actually there's so much that I don't know. And to me, true knowledge is knowing that you know nothing tells me there's so much out there that I must continuously learn, I must continuously seek to challenge myself each and every day.
Chris Stone:
Other people approach me and say, "You produce a lot of content. How do you put yourself out there?" And I say, "Well, I just do it." Let's de-stigmatize failure. If you put a post out there and it bombs, it doesn't matter; put another one out there. It's as simple as that. Learn from failure: chuck something out there, try it, and if it doesn't work, try something else. We coach Agile teams to do this all the time, to experiment. Have a hypothesis, test against it, verify the outcomes and do retrospectives. I do weekly solospectives. I reflect on my week: what worked, what hasn't worked, what I'm going to try differently. And there's no reason you can't do that also.
Sean Blake:
Okay. So weekly solospectives. What does that look like? And how do you be honest with yourself about what's working, what's not working and areas for yourself to improve? How do you actually start to have that time for self-reflection?
Chris Stone:
You've got to make time for self-reflection. One thing I've learned with mental health is you have to make time for your health before you're forced to make time for your illness. And it can become all too easy in this busy working world to not make time for your health, to not make time and focus on you. So you do just have to carve out that time, whether that's blocking some time in the diary on a Friday afternoon just to sit down and reflect, whether that's making time to go out for a walk, or setting a timer on your Alexa to have a five-minute stretching break. Whatever it is, there are things you can do, and you have to make time for yourself.
Chris Stone:
With regards to a solospective, the way I tend to do things is I journal on a daily basis. That's almost like my own daily standup with myself: what have I observed? What challenges have I faced in the past day? And that feeds into the weekly solospective, which is basically a retro for one, where I reflect on: what did I try? What did I want to achieve this week? What's gone well? What hasn't gone well? It's the same as a retrospective, just for one, and it allows me to aggregate my thoughts across the week rather than them being single events, so that I'm focusing more on the trajectory as opposed to any single outlier. Does that make sense?
Sean Blake:
It does. It does. So you've got this trajectory with your career. You're checking in each week to see whether you're heading in the right direction. I assume that you set personal goals as well along the way. I also noticed that you have personal values that you've published and you've actually published those publicly for other people to look at and to see. How important are those personal values in informing your life and personal and career goals?
Chris Stone:
So I'd say they are hugely important. For me, what I thought was: we see companies sharing their values all the time. You look on company websites and you can see their values quite prominently. And you could probably ask, do they often live up to their values? So many companies have customer centricity as a value, but how many of them actually focus on engaging with their customers regularly? How many have a metric where they track how often they engage with customers? Most of them are focusing on velocity and lead time. So I always challenge: are you really customer centric, or is that lip service? But I digress. I thought, companies have values, and obviously we do as well, so why don't we share them? So I created this visual showing what mine were and challenged a few others to share theirs also. And I had some good feedback from others, which was great.
Chris Stone:
But they hugely influence who I am and how I interact on a day-to-day basis. And I'll give you an example: one of my values is being open source, always. And what that means is nothing I create, no content I produce, would ever be behind a paywall. That's me being community driven. That's me sharing what I've learned with others. And how I've lived that is I've had lots of people come to me and say, "Hey, we love the things you do. Would you like to collaborate and create this course that people would pay for?" So often I've said, "If it's free, yes. But if it's going to be monetized, then no."
Chris Stone:
And I've had multiple people reach out to me for that purpose. And I've had to decline respectfully and say, "Look, I think what you're doing is great. You've got a great app, and I can see how having this Agile coaching gamification course on it would be of great value. But if it's behind a paywall, then I'm not interested, because it's in direct conflict with my own values, and therefore I wouldn't be interested in proceeding with it. But keep doing what you're doing." Being people first, #PeopleFirst: this is about me embodying the focus on people being beating hearts behind a laptop, rather than just avatars on a screen. And I have this little... the audio listeners won't be able to see this, but I'm holding up a baby Groot here. And he's like my people-first totem.
Chris Stone:
And the reason for that is I have a group called the Guardians of Agility, and we are people first. That's our emblem. And these are my transformation champions in my current company. I like to have Guardians of Agility, and I've got this totem reminding me to be people first in every interaction I have. So when, for example, I hear the term resources and I'm saying, well... As soon as I hear it, it almost triggers me. I almost hear like, "Oh, what do they mean by that?" And I'll wait a little moment and I'll say, "Hey, can you tell me what you mean by that?" And you tease it out a little bit. And often they meant, "Oh, it's people, isn't it?" If you're talking about people, can we refer to them as people?
Chris Stone:
Because people aren't resources. They're not objects or things you mine out the ground. They're not pens, paper or desks. They're not chairs in an office. They are people. And every time you refer to them as a resource, you abstract them. You make it easier to dehumanize them and think of them as lesser, you make it easier to make those decisions like, oh, we can just get rid of those resources or we can just move that resource from here to there and to this team and that team, whether they want to or not. So I don't personally like the language.
Chris Stone:
And the problem is it goes all the way back to how we're trained. You go to university and you take a business degree, and you learn about human resources. You take a course, Agile HR, Agile human resources, right, and it's so prevalent out there. And unless we challenge it, it won't change. So I will happily sit there in a meeting with a CTO, and he'll start talking about resources, and I'll say, "Hey, what do you mean by that?" And I'll challenge it and he'll go, "Yeah, I've done it again, have I not?" "Yes. Yes, you have." And it's gotten to the point now where I'll be on a big group call, for example, and someone will say it, and I'll just start waving on screen, and they'll go, "Did it again, didn't I?" "Yes, you did."
Sean Blake:
So some of these habits are so ingrained from our past experiences and education. And when you're working with teams for the first time, who've never worked in Agile before, they're using phrases like resources, they're doing things that sometimes we call anti-patterns. How do you start to even have that conversation and introduce them to some of these concepts that are totally foreign to people who've never thought the way that you or I might think about our teams and our work?
Chris Stone:
Sure. So I guess the first response to that is with empathy. I'm not going to blame someone or make out that they're a bad person for using words that are ingrained, that are normal. And this is part of the problem: that term, resource, is so ingrained in the working language nowadays, same as deadlines. Deadline is so ingrained, even though it came from a civil war scenario where it referred to a line that, if you went past it, you were shot. How did that land in the business language? I don't know. But resources, it's so ingrained, so entrenched in this language, that people do it without intending to. They often do it without meaning it in a negative way. And to be honest, the word itself isn't the issue; it's how people actually behave and how they treat people.
Chris Stone:
As I said, my first approach is empathy. Let's talk about this. Let's understand: "Hey, why did you use that term?" "Oh, I use it to mean this." "Okay." And not call them out publicly or things like that; do things with empathy. Now, I also often use gamification and training approaches, and Agile games, to introduce concepts. If someone's unfamiliar with a certain way of working, I'll often gamify it. I'll create something, a virtual Agile game, to demonstrate. The way I do that is I'm always looking to help people understand how it feels, not just to talk theory. And I'll give you an example. I'm a big fan of a game called the Virtual Name Game. It's a game about multitasking and context switching.
Chris Stone:
And I always begin the same way: I'll ask a group of people, "Hey guys, can you multitask?" And often they go, "Yeah, we can do that." And there'll be those stereotypical responses like, "Oh yeah, I'm a woman. I can do that." It happens, trust me. But one of the first things I do, if I'm face-to-face with them, is say, "Hey, hold your hands out like this." And people on the audio can't see me; I'm holding my hands out in front of me. With my left hand, we're going to play an endless game of rock, paper, scissors. And with my right hand, we're going to have a thumb war with each other. And you can challenge them: can they do those concurrently? No, they can't. They will fail, because you just can't focus on both at the same time.
Chris Stone:
Now, the Virtual Name Game, the way it works is you divide a group of people up into customers, primarily, and one developer. And I love to make the most senior person in the room that developer. I want them to see how it feels to be constantly context switching. So if you were the developer, you're the senior person, the HiPPO in this scenario, the highest paid person. I would say: Sean, in this game, these customers are trying to get their names written first on this virtual whiteboard, and we're going to time how long it takes for you to write everyone's name in totality. The problem is that they're all going to be shouting at you continuously, endlessly, trying to get your attention. So it's going to be Sean, Sean, write my name, write my...
Chris Stone:
And it's just going to be wow, wow, wow, who do I focus on? You won't know. And this replicates a scenario that I'm sure many people have experienced: he who shouts loudest gets what they want. Prioritization is often done by the person who shouts loudest, not necessarily "he". We then go into another round where you say: in this round, Sean, people are going to be shouting their names at you, but you're going to pay a little bit of attention to everyone. The way you're going to do that is you're going to write the first letter of one person's name, then move on to the first letter of the next person's name, and keep going around. The consequence of that is everyone gets a little bit of attention, but the result is it's really slow.
Chris Stone:
You're starting lots of things but not finishing them. And again, in each round, we're exploring how it feels. How did it feel to be in that round? Sean, you were being shouted at; how did that feel? Everyone else, you were shouting to get attention, and you had to shout louder than other people; how did that feel? And it's frustrating, it's demotivating, it's not enjoyable. In the final round, I would say, "Hey, Sean, in this round, I'm going to empower you to decide whose name you write first. And you can write the whole thing in order. And the guys are actually going to help you this time; they're not going to shout over each other, they're going to help you." And in this scenario, as I'm sure you can imagine, it feels far better. The result is people finish things, and you can measure the output: the number of names written in a timeframe.
Chris Stone:
It's a very quick and easy way of demonstrating how it feels to be constantly context switching, and the damage it can do if, for example, you've prioritized things into a sprint and there's lots of reordering of things and so on and so forth, and lots of pressure from external people who ideally should be shielded from influencing this and that, and how that feels and what the result is. Because you may start something and get changed onto something else; you've got to take your mindset off this and onto something else, and then the person who picks up the original thing might not even be the same person, so they've got to learn it all over again. There's just lots of waste and efficiency cost through that. And that's just an example of a game I use to bring that sort of thing to life.
Sean Blake:
That's great. That's fantastic. I love that. And I think we need to, at Easy Agile, start playing some of those games because there's a lot of lessons to be learned from going through those exercises. And then when you see it play out in real life, in the work that you're doing, it's easier to recognize it then. If you've done the training, you've done the exercise, that all seems like fun and games at the time, but when it actually rears its head in the work that you're doing, it's much easier to call it out and say, "Oh wait, we're doing that thing that we had fun playing, but now we realize it's occurring in real life and let's go a different direction." So I can see how that would be really powerful for teams to go through that so Chris [crosstalk 00:22:26].
Chris Stone:
I'd also add that every game I do, I construct using the four Cs approach. So I'm looking to connect people, firstly to each other, and then to the subject matter; so this game is about multitasking. Then to contextualize why that matters: why does context switching and multitasking matter in the world of work? Because it causes inefficiencies, because it causes frustration, demotivation, et cetera. Then we do some concrete practice: we play a game that emphasizes how it feels. And at the end we draw conclusions, and the idea is that the conclusions side of things is almost like a retrospective on the game. We say, "Hey, what did we learn? What challenges did we face? And what can we do differently in our working world?" And that hopefully leaves people with actionable takeaways. A lot of the content I share aims to leave people with actionable takeaways: not just talking about something, but reflecting on what you could do differently, what you could try, what experiment you might like to employ in your working life or with your team that might help improve a situation you're facing.
Sean Blake:
Okay. Yeah, that's really helpful. And you've spoken about this concept of Agile sins. We know that a lot of companies have these values; they might've committed to an Agile transformation. They might've even gone and trained hundreds or thousands of people at a company using similar tactics and coaching and educational experiences to those you provide. But we still sometimes see things go terribly wrong. And I wonder, what's this concept of Agile sins that you talk about, and how can we start to identify some of these sins that pop up in our day-to-day work with each other?
Chris Stone:
I guess the first thing I would emphasize about this is that "sin" is very dogmatic, religious language, and it's being used satirically rather than with any real intent. So I just like to get that across. I'm not a dogmatic person. I don't believe there is any utopian solution. I certainly don't believe there's any one-size-fits-all solution for anyone. So the idea that there are really any actual sins is... yeah, take that with a pinch of salt. The reason the Agile sins came up is because I'd done a podcast recently with a guy called Charles Lindsey, and he does this Agile confessional. It's about one coach confessing to another their mistakes, their sins, the things they've done wrong.
Chris Stone:
And I loved it because I'm all about de-stigmatizing failure. I'm all about sharing with one another, these war stories from one coach to another, because I've been a proponent of this in the past. I've shouted, "Hey there, I failed on this. I made this mistake. I learned from it." And I challenge others to do so as well and there's still this reluctance by many to share what went wrong. And it's because failure is this dirty word. It's got this stigma attached to it. No one wants to fail, leaders in particular. So the podcast was a great experience.
Chris Stone:
And it was interesting for me because that was the first time I'd given a confession. Because I'll be honest with you, I'm someone who is used to taking confession myself. I go to this hockey festival every year, and I got given, years ago, this Archbishop outfit, and I kind of made that role my own. I was drunk, and I said, "You're going to confess your sins to me." And if they haven't sinned enough, I tell them to go and do more. And I give them penance in the form of alcohol shots and things like that. And I've actually baptized people in a paddling pool whilst drunk. Anyway, again, I digress. But I wasn't used to confessing myself; usually I was taking confession. But I did so, and it was a good experience to share some of my failures. And my penance was to create, and it was my own idea, seven videos of my seven Agile sins. And again, this was just me sharing my mistakes and what I've learned from them, with the intent of helping others avoid those similar sins.
Sean Blake:
So you've spoken to a lot of other Agile coaches, you've heard about their failures, you confessed your own failures, is it possible for you to summarize down what are those ingredients that make someone a great coach?
Chris Stone: And that is a question, what makes someone a great coach? I think it's going to be entirely subjective, to be honest. And my personal view is that a great coach listens more than they speak. I guess that would be a huge starting point. When they listen more than they speak, because I've... and this is one of the things I've been guilty of in the past, is I've allowed my own biases to influence the team's direction. An approach I'd taken in the past was, "Hey, I'm working with this team and this has worked well in the past. We're going to do that." Rather than, "So guys, what have you done so far? What have you tried? What's worked well? What hasn't worked well? What can we create or what can we try next? That works for you guys. Let's have you make that decision and I'm here to guide you through that process to facilitate it, rather than to direct it myself."
Chris Stone:
Again, I find it's an approach that resonates more with people. They feel that they own that decision, as opposed to it being forced upon them, and there's far less, I guess, cognitive dissonance as a consequence. So listening more than speaking is, for me, a huge point for an Agile coach. Another thing for me nowadays is that there's too much copying and pasting. And what I mean by that is, the Spotify model came out years ago and everyone went, "Oh, this is amazing. We're going to adopt it. We're going to have tribes and chapters and guilds and squads, and it's going to work for us. That's our culture now."
Chris Stone:
I was like, "Well, let's just take a moment here. Spotify never intended for that to happen. They don't even follow that model themselves anymore. What you've done there is you've just tried to copy and paste another model." And people do it with SAFe as well. They just say, "Hey, we're going to take the whole SAFe framework and Chuck it into our system in this blueprint style cookie cutter." And the problem is that it doesn't take into account for me, the most important variable in any sort of transformation initiative, the people, what they want, and the culture there. So this is where another one of my values is, innovate, don't replicate. Work with people to experiment and find that Agile, what works for them rather than just copying and pasting things.
Chris Stone:
So tailor it to their needs rather than just coming in with some all-singing, all-dancing framework. The way I do it is I say, "Hey, well, SAFe is great. It's got lots of values and lots of great things about it, lots of benefits to it, but maybe not all of it works for us. Let's borrow a few tenets of that." Same with LeSS, same with Scrum at Scale, same with Scrum, similar with Kanban. There are lots of little things you can borrow from various frameworks, but there are also lots of things you can inject yourself, lots of things you can try that work for you guys, to ultimately come up with your own tailor-made solution. So innovate, don't replicate, would be another one for me.
Chris Stone:
Learning fast and learning often, and living and breathing that yourself. Another mistake I and other coaches have made is not making time for your own personal development, allowing day in, day out to just be busy, busy, busy, while at the same time going out there coaching teams: "Hey, you've got to learn all the time. You've got to try new things," but not making that time for yourself. So I always carve out time to do that: to attend conferences, to read books, to challenge myself and escape my echo chamber. Not just to speak to the same people I do all the time, but perhaps to go on a podcast with people I've never spoken to, to a different audience, maybe to connect with people who actually disagree with me, because I want that.
Chris Stone:
I don't want that homophilous thinking where everyone thinks exactly like I do, because then I don't get exposed to the perspectives that make me think differently. So I'm often asking, how can I attend a conference about something I know nothing about? Maybe it's a project management focused one. Project management and waterfall aren't dirty words either. There is no utopian system; traditional project management and waterfall have their benefits in certain environments. Environments with a less flexible scope or less frequently changing requirements work very well.
Chris Stone:
I always cite GDPR, which is EU legislation around data protection. That was a two-year thing in the making, and everyone knew exactly what was happening and when they had to do it by. That was a great example of something that can be done very well with a waterfall style, because the requirements weren't changing. But if you're trying to develop something for a customer base that changes all the time, and you've got lots of new things and lots of competitors and things like that, then the ability to iterate frequently and learn from it is going to be more beneficial, and this is where Agile comes in. So I guess to sum up, there's a few things: learning fast, learning often... I can't even remember the ones I've mentioned now; I've gone off on many tangents, and this is what I do.
Sean Blake:
I love it. It's great advice, Chris. It's really important and timely. And you mentioned, earlier on that the customer base that's always changing and we know that technology is always changing and things are only going to change more quickly, and disruptions are only going to be more severe going forward. Can you look into the future, or do you ever look into the future and say, what are those trends that are emerging in the Agile space or even in work places that are going to disrupt us in the way that we do our work? What does Agile look like in five or 10 years?
Chris Stone:
Now that again is a very big question. I can sit here and postulate and talk about what it might look like, but I'm going to draw upon what I think is a great example of what will shape the next five or 10 years. In February 2021, there's a festival called Agile 20 Reflect, I'm not sure if you've heard of it, and it's billed as a festival, not a conference; that's really important. It's modeled on the Edinburgh festival, and what it intends to be is a celebration of the past, the present and the future of Agile. It's a month-long series of events on Agile, and anyone can create an event and speak and share, and it will create this huge community-driven load of content that will be freely accessible and available.
Chris Stone:
Now, there are three of the original Agile manifesto signatories involved in this. Lyssa Adkins is involved, as are lots of big-name speakers attached to this festival. And I myself am running a series of retrospectives on the Agile manifesto. I've interviewed Arie van Bennekum, one of the original Agile manifesto signatories. There are going to be lots of events out there. And I think that festival will begin to shape, in some way, what Agile might look like, because there are lots of events, lots of speakers, lots of panel discussions coming up, bringing together lots of professionals and practitioners who will begin to shape what that looks like. So whilst I could sit here and postulate on it, I'm not the expert, to be honest, and there are far greater minds than I. I'd rather leverage the power of the wider community than suggest my own view at this time.
Sean Blake:
Nice. I like it. And what you've done there is you've made it impossible for us to clip this video and prove you wrong in the future when you predict something that doesn't end up happening. So that's very wise of you.
Chris Stone: Future proof myself.
Sean Blake: Exactly. Chris, I think we're coming almost to the end now, but I wanted to ask, given the quality of your Christmas sweater: do you have any tips for teams who are working over the holiday period, who are most likely burnt out after a really difficult 2020? What are some of the things you'd say to coaches on Agile teams as they come into this time, where hopefully people are able to take some time off and spend some time with their family? Do you have any tips or recommendations for how people can look after their mental health, look after their peers, and spend that time in self-reflection?
Chris Stone:
Sure. So there are a number of things that I'd definitely recommend. I'm currently producing and sharing this Agile advent calendar. The idea is that every day you get a new bite-size piece of Agile knowledge, or a template, or something wacky or zany, or a video, whatever it may be. There are lots of little things in there, and there have been retro templates with Christmas and festive themes. So there's a Home Alone one, a Die Hard one, an Elf one, there's all sorts. Perhaps try one of those as a fun, immersive way to reflect with your team on recent times as a squad, and perhaps come up with some things for the next year.
Chris Stone:
And there's, for example, the Die Hard one; it's based on quotes from the movie Die Hard, so that's what you'd be doing in there, celebrating... sending them to your team. Or there's one in there along the lines of "if this is how you celebrate Christmas, I can't wait for New Year," and that question is asking: what do we want to try next year? This year has been great; what do we want to try differently next year? So there are opportunities through those templates to reflect in a fun way, so give one of those a go. I've shared some festive Christmas Eve Zoom backgrounds, or Teams backgrounds; give those a go and make it a bit fun, make it a bit immersive. There are Christmas or festive icebreakers that you can use. What I tend to do is, in any meeting I facilitate, the first five minutes is just unadulterated chat about non-work things, and I often use icebreakers to do so. Whether that's a question like: if you could have the legs of any animal, what would you have and why? Sean, what would that be?
Sean Blake:
Probably a giraffe, because just thought the height advantage, it's got to be something that's useful in everyday life.
Chris Stone: Hard to take you on the ground maybe.
Sean Blake:
Yes. Yes, you would definitely need that. Although, I don't think I would fit in the lift on the way to work, so that would be a problem.
Chris Stone:
Yeah. That's just how I start. But yeah, that's just a question, because it's interesting to see what people come up with. But there are some festive ones too: what's your favorite Christmas flick? What would put you on the naughty list this year? Does your family have any weird or quirky Christmas traditions? Because I love hearing about those. Everyone's got their own little thing, whether it's "we have one Christmas present on Christmas Eve" or "every Christmas Day we get a pizza together." There are some really random ones that come out. I love hearing about those, and making time for that personal interaction, in a festive way, can help as well.
Chris Stone:
And then on the mental health side of things, I very much subscribe to the Pomodoro Technique from a productivity point of view. So I'll use that: I'll set myself a timer, I'll focus without distractions, do something, and then in that five-minute break I'll just get up and move away from my desk and stretch and get a coffee or whatever it may be. But then I'll also block out time. I know some companies in this remote working world at the moment are saying, "Hey, 1:00 to 2:00 PM is blocked-out time for you guys to go and have a walk." Some companies are doing that. I always make time to get out and away from my desk, because that makes me a little bit more productive and it breaks up my day a little bit. So I definitely recommend that. Getting some fresh air can do wonders for your mental health.
Sean Blake:
Awesome. Well, Chris, I've learnt so much from this episode and I really appreciate you spending some time with us today. We've talked about a lot of things: the importance of sharing your failures, the importance of looking after your mental health, checking in on yourself and your own development, but also tracking how you're feeling. I love that quote that you shared from, we think, Socrates: true knowledge is knowing that you know nothing. I think that's really important; every day is starting from day one, isn't it? De-stigmatizing failure. The origins of the word deadline: I did not know that, and that's really interesting. And just asking that simple question: how did that feel? How did that feel, working in this way? People screaming your name, work piling up, work's too busy, how does that feel? And is that a healthy feeling that everyone should have? Those are really important questions for me to reflect on, and I know our audience will really appreciate those questions as well. So thanks so much, Chris, for joining us on the Easy Agile Podcast.
Chris Stone:
Not a problem. Thank you for listening and a Merry Christmas, everyone.
Sean Blake:
Merry Christmas.

