Easy Agile Podcast Ep.35 Jeff Gothelf on Customer-Centric OKRs, Goal-Setting, and Leadership That Scales
TL;DR
Jeff Gothelf, renowned author of "Lean UX" and "Who Does What By How Much," discusses the evolution from output-based work to outcome-focused goal setting with OKRs. Key insights:

- Teams need to shift from "we're building a thing" to defining success as "who does what by how much" – meaningful changes in human behaviour that drive business results.
- The biggest barrier to agile ways of working is that people get paid to ship features, not deliver value.
- Leaders should change their questions from "what are you building?" to "what are you learning?"
- Psychological safety is critical – teams need to feel safe admitting when something isn't working.
- Start small by simply asking "what will people be doing differently when we ship this?"
- Rename teams around outcomes (the mobile revenue team) rather than outputs (the iPhone app team).
- Proactive transparency through weekly three-bullet-point updates builds trust with leadership.

Bottom line: OKRs, when done right, are the "Trojan horse" that enables all other agile practices to succeed.
Introduction
For years, agile practitioners have championed better ways of working – Lean UX, design thinking, continuous discovery, customer centricity. Yet despite widespread adoption of these practices, many teams still struggle with the same fundamental problem: they're rewarded for shipping features, not delivering value.
In this episode, our COO Mat Lawrence sits down with Jeff Gothelf to explore how this misalignment of incentives undermines even the best agile practices, and why customer-centric OKRs might be the missing piece that makes everything else click into place.
Jeff Gothelf is a renowned author, speaker, and consultant whose work has shaped how product teams approach collaboration and customer-centricity. Along with co-author Josh Seiden, Jeff wrote "Lean UX," which revolutionised how designers work in agile environments. Their follow-up book, "Sense and Respond," helped leaders understand how to manage in software-based businesses. Their latest book, "Who Does What By How Much," tackles the thorniest problem yet: how to align incentives and goals with customer outcomes.
This conversation traces Jeff's journey from helping designers work better in agile teams, to helping leaders create the conditions for success, to finally addressing the root cause – the goals and incentives that determine what gets celebrated, rewarded, and promoted in organisations. It's a masterclass in shifting from output thinking to outcome thinking, with practical advice for both team members and leaders navigating this transformation.
About Our Guest
Jeff Gothelf is an author, speaker, and organisational consultant who has spent over 15 years helping companies build better products through collaboration, learning, and customer-centricity. His work focuses on the intersection of agile software development, user experience design, and modern management practices.
Jeff is best known as the co-author (with Josh Seiden) of three influential books that have shaped modern product development practices. "Lean UX" (now in its third edition) began as a guide for designers working in agile environments but has evolved into a comprehensive framework for cross-functional collaboration and risk mitigation in product development. The book's core principle – moving from deliverables to outcomes – has influenced how thousands of teams approach their work.
Following "Lean UX," Jeff and Josh wrote "Sense and Respond," a book aimed at leaders and aspiring leaders. It makes the case that the overwhelming majority of businesses today are software businesses, and that managing software-based businesses requires fundamentally different approaches to team structure, management, and leadership. The book provides a roadmap for creating organisations where teams can actually practise the collaborative, customer-centric approaches described in "Lean UX."
Jeff's latest book, "Who Does What By How Much," represents the natural evolution of this work. After years of helping teams work better and leaders manage differently, Jeff and Josh identified that the real barrier to change was incentives and goals. Teams kept saying, "That's great, Jeff, but I get paid to ship features." This book tackles that problem head-on, showing how to use objectives and key results (OKRs) to create customer-centric goals that align with – rather than undermine – modern ways of working.
Beyond his books, Jeff has also authored "Forever Employable" and "Lean vs Agile vs Design Thinking," and he regularly speaks at conferences and consults with organisations on product strategy, team effectiveness, and organisational transformation. His approach is characteristically practical and rooted in real-world experience, making complex concepts accessible through clear frameworks and relatable examples.
Jeff's work continues to evolve as he helps organisations navigate the challenges of building products that customers actually want and need, whilst creating work environments where teams can thrive.
Transcript
Note: This transcript has been lightly edited for clarity and readability.
Why Write Another Book? The Journey from Lean UX to OKRs
Mat Lawrence: Well, Jeff, welcome. For our audience, I'm Mat Lawrence, COO at Easy Agile, and today I'm talking with Jeff Gothelf, a renowned author, speaker, and consultant. You've written a good few books, Jeff. I've been looking through the list – "Lean vs Agile vs Design Thinking," "Forever Employable," and a few co-authored titles, the latest being "Who Does What By How Much." I was just telling you in the intro how you've managed to get across a lot of the things that I care about when trying to build teams and get them to understand OKRs. I've already given the book to a few people and I'm definitely going to keep passing it around. So, Jeff, welcome.
Jeff Gothelf: Thank you so much, Mat. That's very kind of you on all of that stuff. I appreciate it. Thanks for having me.
Mat: I'd love to cover a little bit around the book and the concept you're trying to get across. So I suppose the first question I have is what problem are you hoping to solve with the book? Why did you write it?
Jeff: It's really interesting. I wrote a blog post about this a while back because somebody challenged me on LinkedIn – and I appreciate a good challenge. They said, "How can you write about all this stuff? There's no way you know enough about each one of these topics to write a book. You're spreading yourself way too thin."
I thought that was a really interesting challenge. No one had ever asked that question, and it got me thinking. The answer I came up with is that this book, "Who Does What By How Much" – a conversation about customer-centric objectives and key results – is the natural evolution of the work that Josh Seiden and I have been doing together for more than 15 years.
"We started with Lean UX, and Lean UX was a solution for designers helping them work more effectively in agile software development environments. The response to that book was, 'That's great, Jeff and Josh. We'd love to work this way. My company won't let me work this way.'"
So we wrote "Sense and Respond," which was a book for leaders and aspiring leaders to inspire them to manage differently, to recognise that the overwhelming majority of businesses today are software businesses, and that managing software-based businesses is different.
As we began to work with that material and talk about that, we kept bumping up against the same ceiling, and that ceiling was incentives and goals. No matter how hard we tried to convince people to be customer-centric, to learn continuously, to improve continuously, to work in short cycles, they said, "That's great, Jeff. But I get paid to ship features."
The goal, the measure of success, was shipped – preferably on time and on budget. That's what got celebrated and rewarded, incentivised and promoted. It was in the job descriptions and all that stuff. So it felt like we were really fighting a losing battle.
Objectives and key results have been gaining momentum for the last decade or so. To us, that felt like the perfect Trojan horse – and I know Trojan horse has a negative connotation, but I don't think of it that way in this case. It was the perfect way to have a conversation about goals in a customer-centric fashion that, if applied in the way we describe in the book, would enable everything else we've done to happen more easily.
"What Will People Be Doing Differently?" – The Question That Changes Everything
Mat: I love the evolution of it, Jeff. I've been working in tech now for about 15 years. Prior to that, I used to work in the arts and special effects, which in itself is a very agile industry where you're constantly building prototypes and figuring out what things need to do before they go on stage or be filmed.
When I entered into the tech world as an inexperienced founder and product developer, I was designing to solve problems, and I found the teams I was working with responded really well to that. "What are we trying to do? What are we trying to get here?" They used to give me feedback all the time on whether I was helping them see far enough ahead with the value we're actually trying to deliver.
When I joined Atlassian in 2014, as we were introducing OKRs there, I think we were facing a problem that you described really well in the book, which is people focusing on shipping their to-do list. They have a backlog that's predefined, full of great ideas, and they really want to get it out the door. Trying to change that conversation to "how do we know if this is any good?" – the answer was we just don't know.
I'd love to touch on how have you guided teams to move from that more traditional output-based metrics and shipping into that outcome approach? Maybe you could give an example of where that shift has led to some significant success.
Jeff: Sure. The title of the book is "Who Does What By How Much?" Overwhelmingly, the teams that we've worked on and with over the years have focused on delivering output, making stuff. The question that we tried to get them to understand is: if you do a great job – let's say when – when you do a great job with this feature, how will you know? What will people be doing differently?
That's the question that starts the mindset shift from outputs to outcomes. Outcomes, the way that we describe them, is a meaningful change in human behaviour that drives business results. The human that we're talking about is the human that consumes the thing that you create.
"The question is how will you know you delivered value to that human? Traditionally, it's been like, 'Well, we made the thing for them. There it is.' We made the Sharpie. Terrific. Did anybody need a Sharpie? Anybody looking for a Sharpie? How do we know? What are people doing now that the Sharpie is out there?"
The mindset shift starts with that question. Even in an organisation that just doesn't get this yet, it's a really safe question. I think it's a safe question to say, "Okay, we're gonna build the thing. What do we expect people to be doing differently once we ship this thing?" And when I say people, let's get specific about who. Which people? Who?
This is the evolution of the book title and how we teach this stuff. So what would people be doing differently before we start? Which people? Who? Okay, it's accountants in large accounting firms. Great. When we ship this new system to them, what are they gonna be doing differently than they're doing today? Well, they'll be entering their data more successfully and finishing their work in half the time.
Terrific. What are they doing? Who does what? And how much of that do we need to see to tell us that this was actually valuable? Well, today they're seeing at least a 30% error rate in data entry. Okay, great. What's meaningful? What's a meaningful improvement? If we cut that in half, that's a meaningful improvement. By how much?
All of a sudden, we've constructed the success criteria that has moved the team away from "we're building a thing" to "accountants in large accounting firms reduce their data entry errors by 50%." Who does what by how much. That begins the mindset shift in that conversation in a safe way because we're not saying let's set new goals, let's rewrite our incentives. We're just saying, "Look, I'm just asking a question."
Then once we start to build stuff, and especially once we start to ship stuff, you remember that conversation we had three months ago? We talked about who does what by how much. Is it happening? Do we know? Can we find out? And if it isn't, let's figure it out.
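For readers who want to pin the template down, here's a minimal sketch of a "who does what by how much" key result expressed as a data structure, using the accounting example above. The type and field names are our own illustration, not a schema from the book:

```typescript
// A "who does what by how much" key result, modelled as a simple type.
// Field names are illustrative only, not an official schema.
interface KeyResult {
  who: string;      // the specific humans whose behaviour should change
  does: string;     // the observable behaviour change (the outcome)
  baseline: number; // where that behaviour sits today
  target: number;   // how much change counts as meaningful
  unit: string;     // what the numbers measure
}

// Jeff's accounting example, expressed in the template:
const dataEntryErrors: KeyResult = {
  who: "accountants in large accounting firms",
  does: "reduce their data-entry error rate",
  baseline: 30, // roughly a 30% error rate today
  target: 15,   // cutting it in half is a meaningful improvement
  unit: "% of entries with errors",
};

// Reads back as a sentence: who does what by how much.
console.log(
  `${dataEntryErrors.who} ${dataEntryErrors.does} from ` +
  `${dataEntryErrors.baseline} to ${dataEntryErrors.target} ${dataEntryErrors.unit}`
);
```

The value of writing it down this way is that a key result with an empty "who," no baseline, or no target is immediately visible as a to-do list item in disguise.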
The Non-Profit That Changed Their Approach - From One Million Buses to Ten Iterations
Jeff: I'll give you an example. There was an organisation I worked with – I really loved working with them. They were a non-profit organisation that was looking to address major diseases in the developing world. They had three or four very specific diseases that they were targeting in very specific locations around the world, and I was thrilled to be working with them and helping them.
They managed everything with a task list. They were like, "We're gonna create this campaign and we're gonna put it on buses in China." And I was like, "Okay. How will you know? So what? If the campaign works, what will people be doing differently?"
"Well, they'll scan the QR code that's on the bus."
"Okay, alright. And then what?"
"They'll sign up for an appointment to get a cardiovascular check."
"And then what?"
"For those who need actual care, they'll sign up for care."
"All of a sudden, we've taken 'put an ad campaign on a bus' to 'who does what by how much.' When we started to think about it that way, they fundamentally were rethinking the level of effort."
Because, as you might imagine, the original plan was one million buses and hope that it works. Instead, they decided, "Hey, we're gonna do 100 of these in one locality, and we're gonna give it a week, and we're gonna not only see what happens, but find out if people saw the ad, if it speaks to them, if they understood what it said. Then based on that learning, we're gonna iterate on the campaign."
So instead of getting one giant shot at this advertising campaign to drive people to take better care of themselves, now they're gonna get ten iterations. I think that was massively impactful in helping that organisation do better work and help more people.
Mat: I love how you're bringing that back to the experimental and iterative approach that people so often want but really struggle to get to. I've seen so many occasions where OKRs end up describing something that takes three, four, five months to build and ship, and they're only trying to measure the big outcome at the end, whereas what you're talking about there is breaking it down, making it far more iterative and experimental.
Jeff: Reducing your risk. Imagine this organisation had, let's say, £100,000 for this campaign. Traditionally, they would spend that whole hundred grand and hope. The reality is there's no need to do that. They could spend 10 grand and learn, do a better job with the next 10, and a better job with the next 10 after that – and once they've de-risked it enough, take the last 50 and put it behind the thing they've actually validated.
It's a de-risking strategy as well. You're increasing the value you're delivering and reducing the risk of spending money on stuff that isn't gonna work. Feels like a no-brainer, doesn't it?
The Reverse Five Whys - Asking "So What?" to Find Your Outcome
Mat: You make it sound like everyone should be doing it, which I agree with. There was something that you did in the middle of that conversation which I really like, and it's kind of like the opposite of the five whys. You know, where you see the problem and you ask why, why, why and you go back to the root cause. Whereas you took that in the other direction there.
Jeff: Right. We were moving forward in time for the desired outcome.
Mat: Yeah, exactly. You said, "Okay, you want to put this thing on a bus. So what?" And you took that three or four steps forward to get to that ultimate outcome. I love that, and that's probably a tactical, practical approach that our audience can take.
I think some of the stuff that I've struggled with over the years is getting teams who are new to OKRs to understand how to move from writing their to-do list, writing their backlog, turning that into their key results, and actually getting it into the outcome base. I think that's one of the things that a lot of teams find hardest to grasp.
Jeff: And as I said at the start, if your entire career you've been rewarded for shipping, producing, and ticking off a to-do list, then it's really hard to break away from that without some form of leadership buy-in. That comes back to the incentives and performance management side of things. It's really hard to change because that's what people optimise for.
We can preach outcome-based work until we're blue in the face, as they say in America at least. But if you're paid to ship product, you're gonna optimise in most cases for what gets you paid. That's an important component of this that I think gets ignored a lot.
Two Audiences, Two Approaches - What Should Teams and Leaders Do Differently?
Mat: Let's talk practically around this. We're probably going to have different people listening to this. We could probably give two bits of advice. One is somebody who's in a team and they really want to try this, or maybe they've been trying this and struggling because the incentives don't match. The other group may be someone who's in leadership who is trying to change their organisation to move into this more outcome-based approach. What advice would you give to each of those people?
Jeff: Great question. Let's start with the folks trying to make this happen initially. In my opinion, one of the easiest ways to move this conversation forward in your organisation is to ask that question I mentioned: What will people be doing differently when we ship this?
Have that conversation. Position it any way you'd like, word it any way you'd like. But ultimately, you're not challenging the work. You're not saying "I'm not gonna do the work." You're not pushing back yet.
"All you're saying is, 'Look, we're gonna build this thing, and we're gonna do a great job. What do we hope people will do with this once we have it out there? What are we trying to see? Are we trying to see them increase average order value? Do we want them to abandon their shopping carts less? Are we trying to get them to sign up for a medical check-up at least once a year?'"
That starts it. That starts getting people to think about more than just "I am making a thing."
Mat: If you took that to leadership and said, "Yeah, we're gonna get this stuff out the door, but I want to check with you that you're happy that this is the outcome we're trying to get to, that this is the result if we get it right."
Jeff: I think that's great, and I think that you should come back to them after you ship and say, "Look, remember we met three, six, nine months ago and I said we're building this and we're hoping people will do this? Well, we built it as designed, on time, on budget, and so far we're not seeing the results that we anticipated. We talked to some customers, and here's why we think that is. What we'd like to do next..."
To me, that should be a safe conversation inside your organisation.
Mat: I can imagine people listening to this and getting some cold sweats at the concept of going to someone and saying, "I did everything that you expected from me, but it wasn't good enough."
Jeff: It's not that. What tends to happen in these situations is a lot of upfront planning and commitments, and then we execute. Regardless of all the work that's been done to convince people there are better ways of working, that's still, generally speaking, how people work. We did the thing, and guess what? It didn't work. It didn't work as we had hoped. It's not because we built it poorly. It works as designed. We did usability testing on it. People can use it, they can get through the workflow.
What we think is it's not solving a meaningful problem, or we decided to put it somewhere in the workflow that didn't make sense, or whatever the case is. I understand it's not a risk-free conversation. I'm not encouraging people to do things that are career-limiting per se, but at some point we've got to talk about this kind of stuff. Otherwise, we're just a factory. I don't think anybody wants to work in a factory.
It's Not About the Quality of Your Code, It's About Learning
Mat: I couldn't agree more, and I think that the heart of what I spend a lot of my time doing is helping people understand how to get the benefits out of being agile, that agility piece. What we've been discussing there is that key part of learning. You can plan and you can build, you can have alignment on those things, you can improve how you're building all the time and reach quality standards and pass usability testing. But ultimately, if you don't learn, you're never gonna get the insight that you need to adapt what you do next.
"Where a lot of people fall down with agility is they go through all of the motions up to that point, and then through fear, self-preservation, or they've just not seen anybody else around them do it before, they hesitate to say, 'This thing that we've all invested all this time and effort into isn't working as expected.' It does take some courage to do that."
Jeff: It does. I agree. But it's an evidence-based conversation. It's not "we did a crap job." We didn't. It's bug-free, it's high performance, it's scalable, it's usable. But you can build products like that and still fail – there are countless stories of products that were brilliantly executed but didn't meet a need, didn't solve a problem.
Mat: Yeah, I built one of those and had to close a business for it, so I know that all too well. If there's a lesson I learned through those years, which you touched on earlier, it's that by focusing on the outcomes you want to see – the behaviours you want to change – and de-scoping the work so you can experiment and iterate, you de-risk all of it. You'll learn much earlier whether you're on the right track, rather than getting that big bang at the end.
Jeff: Yeah. Again, you're reducing the risk of building something that people don't want. Let's just use round numbers because they're easy. If you have a million-pound budget to build something – a new product, a new feature, a new service – and you spend 100 of that million and find out that this isn't the right thing to make, it's not a real problem, for whatever reason, you've just saved the company £900,000.
They should hoist you up on their shoulders and sing your praises, parade you around the halls. That's how it should be. You're a hero, and now we can take that £900,000 and do something with it that will actually deliver value.
If You're a Leader: Stop Asking "When Will It Be Ready?" and Start Asking "What Are You Learning?"
Mat: The second half of that question was around if you're a senior leader in an organisation and you want to move to an outcome-based approach, maybe you start with celebrating the people who are trying to do that and positively reinforcing it in that way. But what advice would you give that person?
Jeff: Absolutely. Celebrate anybody – literally hoist them up on your shoulders and parade them around the halls and say, "Look, this team tried this, figured out it wasn't going to work, and pivoted, and saved the company a million pounds." That should be a regular conversation and a regular thing that the company celebrates.
What's interesting is that you can find yourself on a team with resistant leadership, and you can also find yourself in leadership with resistant teams. And for a variety of reasons, not the least of which is that they've never actually been allowed to work this way and don't believe you that you're gonna let them work this way.
"Without getting caught up in too much process or training or dogma, I think as a leader you start to soften the conversations around this stuff by changing the questions that you ask."
Normally, it's like, "Hey, what are you guys working on? When will it be ready? How much is it gonna cost me? What do you predict the ROI is gonna be?" That's a typical line of questioning for a product team.
Conversely, you can say, "Hey, folks. What are you learning this week? This sprint? This quarter? What did you learn?" You might get a bunch of blank stares initially. They'll say, "What do you mean, what did we learn? We're building what you told us to build."
"Okay, well, cool. Next quarter when we meet, I'd love for you folks – I'm gonna ask you this question again. What did you learn this quarter about the product, about the customer, about the value of the thing that we're delivering? If you don't know how to answer those questions, I can help. I can get training for you. I can get some folks who've done this in other parts of the company to show you how they're doing this work."
To me, you're not enforcing anything. One of the issues with just mashing process on top of an organisation is that folks don't understand why. Why are we doing this, and how is this supposed to make anything better? One of the ways to ease folks into a different way of working is to change your expectations of them and make that clear to them.
Instead of saying "What are you building? When will it be ready? What's the ROI?" say "What are you learning? Are we doing the right thing? How will we know?" And then if they don't know how to get the answers to that, don't make them feel stupid. Say, "Look, I'm gonna help you with that. I'll show you how the other teams are doing it. I'll get you some training. We'll work on this."
That's super powerful because you're changing the expectations that you have for your team, and you're making it explicit to them.
Navigating Conflicting Forces - Outcomes vs. Predictability
Mat: I've got this image in my head of people in a large organisation where they're on this journey that you've described with their team. Maybe they're a leader somewhere in the middle of the organisation, working with multiple teams, and they're starting to see some progress. The teams are on board, they trust that the questions you're asking are genuine and authentic, and they really want to understand the outcomes.
They're starting to come back with great questions themselves around who does what, what's the behaviour we're trying to change, how are we trying to change it, and whether we're successfully doing that or not. Whilst that starts to get some traction and momentum, this leader's also got other people in the organisation – maybe some more traditional executives, with investors and boards asking for KPIs to be met and for the efficiency and predictability they need to forecast.
They have jobs to do themselves, and they seek some predictability. How do you help guide that person to navigate those two conflicting forces?
Jeff: It's hard. I've seen it multiple times. I think there are a couple of ways to navigate those political challenges in an organisation. One is you have to model the behaviour that you want to see both in your teams and in your colleagues as well.
Every interaction that you have with your peers at leadership level should contain these types of conversations around the customer, around learning, around value, around risk mitigation, and continuing to model the behaviour you want to see.
Someone says, "Well, we just have to build the iPhone app."
"Okay, great. But why? Why do we have to build the iPhone app?"
"Because we have to increase mobile revenue."
"Why? What is it today? What are we hoping to get?"
The Power of Renaming Teams
There's a super simple trick I wrote about probably a decade ago. If you're in a leadership position and want the organisation to start thinking differently about how work gets done, it's simply changing the names of the teams.
For example, let's say you and I work on the iPhone app team. What's our mission? Build an iPhone app. Exactly. So that's the iPhone app team, and that's the CRM team and that's the Android app team, whatever.
"What if we change the name of that team? Same team, same people. But it's the mobile revenue team. All of a sudden, the purpose of the team has fundamentally changed. It's no longer 'build iPhone app.' It's 'increase revenue through the mobile channel.'"
That might be an iPhone app, might be an Android app, might be a better website, might be a million different things. But from a leadership perspective, one of the things that you can influence is the name of these teams, and how you name them determines what work they do. That's really powerful.
Prove the Model
The other thing that you can do as a leader is prove the model. There's a lot of "my idea is better than your idea" type of conversations at work. Instead of saying, "I think we should work this way," say, "Look, I've got a pilot team in my group that's been doing this for the last three months. Here's what the team looks like. Here's the work that they're doing. Here's how they work. Here's what they're producing. Here's their happiness score. Here's their productivity. Here's their efficiency. Here's the impact of the work that they're doing with the customer."
If you've got one or two of those teams working that way, that's a compelling argument for saying, "Look, let's give it a shot." You've got the evidence that says this is a better way of working. Proving the model is always a good way to go.
Team Autonomy and Empowerment
Mat: One of the things I'm picking up on in what you're saying leads to something I've seen within teams – autonomy and empowerment. Something I'm always trying to do in my role in organisations is make myself redundant. If the team don't need me anymore, I've done my job.
Right now at work, I've been very clear with the rest of the leadership team: I'm getting involved in way too many decisions, and I need to remove myself from those decisions because I'm slowing us down. If I have to have all of the context to get involved and help move us forward, then we're gonna go slower than we should.
We're very quickly removing me from decisions, and it's been a great journey. Terrifying for me because I don't know as much about what's going on. But I'm seeing the teams themselves equipped with questions like "who does what by how much?" – that's one tool from the OKRs. They're also equipped with other tools and ways of working, and usually it comes down to: are they asking the right questions? Are they applying the level of critical thinking needed to achieve those outcomes?
"Ultimately, if we can get teams to be more autonomous, leaders have a much better time of scaling themselves without burnout, without having to get really drawn in. When teams make decisions when you're not in the room that are fighting to achieve the outcome that you also want to achieve, that's when you really start to move quicker. That's when you start to really see the benefits of agility."
Have you got any thoughts on that that you'd like to share?
Jeff: It's a really tough sell. I see it all the time because I think that leaders have defined themselves – I don't want to speak in absolutes, so let's say the majority of leaders have defined themselves – in a way that says, "I tell people what to do. That's my job."
If you ask any kid – 10 years old, 12 years old, 9 years old – "What's a boss?" they'll say "A boss is someone who tells people what to do." I think we grow up with that, and I think leadership canon for the last hundred years has roughly said that, with the exception of the last 20 to 30 years where we've seen a lot of agile-themed, agility-themed leadership books and materials come out.
Still, I think the overwhelming majority of people believe that it's their job when they're in a leadership role to tell their teams what to do and to be keenly aware of every little detail. Because what if my boss comes to me and says, "Hey, what are your teams doing?" If the answer is "I don't know," that's probably a bad answer.
I agree with you. Day-to-day decision stuff – who better to make that decision than the teams doing the work day to day? They know far more about it than I do. They're with the work every day, they're with the customer every day, they're getting the feedback.
There's no reason for you to run these tiny things past the leader every day. It's exhausting for the leader, as you said, and the team knows more about it. Big strategic shifts, invalidated hypotheses, radical shifts in the market, new competitive threats – absolutely, let's talk about that.
The Two-Way Solution
I think there's a two-way solution here. Number one, leaders need to let go a little bit and understand that the most qualified people to make decisions about the day-to-day trivial stuff are the team doing the work.
David Marquet said this in "Turn the Ship Around." He ran the worst-performing nuclear submarine crew in the American Navy and turned it around to the best-performing crew. Basically, what he said was he pushed decision-making down as close to the work as possible. The only decision he kept for himself was whether or not to launch a nuclear missile, because people are gonna die and he didn't want that on anybody. That's his job as the leader.
Same thing here. You're gonna push decisions all the way down, and we've got to get folks to think about that.
Demand Proactive Transparency
To make that easier for people to swallow, people who are not used to this way of working, I think we have to demand greater proactive transparency from the teams.
Teams love to play the victim. "They don't let me work this way. My boss won't let me work this way. My boss doesn't get agility, doesn't get customer-centricity. She just comes down here and yells at us."
"What if on a weekly basis, without being asked for it, you sent your leader three bullet points in an email every week? Here's what we did this week. Here's what we learned. Here's what we're planning on doing next week."
If there's anything significant, you're gonna put that in there as well. But otherwise, just those three things. You're not even asking for a response. Weekly update, three bullet points, 15 minutes max of effort on your part.
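To make that concrete, here's a minimal sketch of such an update as a tiny bit of tooling. The three bullets follow the format Jeff describes; the type, function, and example content are our own illustration:

```typescript
// A minimal sketch of the weekly three-bullet update described above.
// The structure follows Jeff's format; names and wording are illustrative.
interface WeeklyUpdate {
  did: string;      // what we did this week
  learned: string;  // what we learned
  next: string;     // what we plan to do next week
  blocker?: string; // anything significant we need help with
}

function formatUpdate(u: WeeklyUpdate): string {
  const lines = [
    `- This week we ${u.did}.`,
    `- We learned that ${u.learned}.`,
    `- Next week we plan to ${u.next}.`,
  ];
  if (u.blocker) {
    lines.push(`- Heads up: ${u.blocker}`);
  }
  return lines.join("\n");
}

// Example usage:
console.log(formatUpdate({
  did: "shipped the new onboarding flow to 10% of users",
  learned: "activation is up, but drop-off at step three persists",
  next: "interview five users who abandoned step three",
}));
```

Whether it's an email, a chat message, or a script like this, the mechanism matters less than the cadence: three bullets, every week, without being asked.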
In my opinion and in my experience, what happens is leaders chill out. Because all of a sudden they know what's going on. They see that you're doing work, that you're making objective decisions, and that you're taking the time to keep them informed. When their boss comes to them and says, "Hey, what are your teams doing?" they can just look at that email and be like, "This is what Mat's team is doing, this is what Jeff's team is doing."
To me, if there's a role here – and it's not an insignificant one – for the teams to play to improve their ways of working or to improve the comfort level that leaders have with new ways of working, this is it.
Mat: I have had the privilege of being on the receiving end of those three-bullet-point emails while running 12 different product teams, trying to understand what was going on. You're right – the stress levels go down when you understand proactively what's going on. It became the first thing I would do on a Monday morning, knowing I had all that information.
It was something the teams were doing as part of their own weekly reviews anyway, and they just captured it and shared it, so there was no extra work for them. But it made a huge difference: suddenly I could see where I actually needed to spend my time to help, rather than chasing information or getting too involved in managing people who didn't need it because they had things in hand.
I was able to prioritise and think, "Oh, that team looks like they're struggling, so we're gonna go and ask them some questions, see how I can remove some blockers for them."
Jeff: And if there is a blocker, add it in there. "We've been trying for three months to get access to customers. The sales team keeps blocking us. Can really use your help here."
The Shift from Being Rewarded for Knowing to Being Rewarded for Learning
Mat: There's a shift I've observed over the years – in most organisations it takes a while before you actually start getting rewarded for it. In forward-thinking, very agile organisations it starts a lot earlier, and that's something I'd like to shift left, to get it happening earlier in people's careers.
It's this shift: you spend your entire career being rewarded for being knowledgeable, for being the expert, for knowing how to do something. You get promoted for that, you get a bonus for that, you get rewarded for it time after time. The more you learn, the more capable you become, the more experienced you are – you've got the answers for everything, you get promoted, you work your way up the career ladder.
Then you hit this tipping point where you hit a level where you realise there aren't many people around you at that point who are seeing the problems. Everyone's busy, everyone's focused on their thing. Then you realise that actually it's your job to call out that this thing isn't working. It becomes your responsibility to say, "There's a problem here we need to address as a company, as an organisation."
As an exec – Nick Muldoon is our CEO – we have an exec weekly, and the majority of that conversation is each of us saying what we don't understand, what we don't know, what we haven't figured out yet. We trust each other that all the rest of it's in hand and working beautifully. The things we really want to talk about is what don't we understand and what are we learning or what are we seeing that we need to try and figure out what to do with.
I see people struggle with that transition if they've not started it earlier in their career. Going back to the basics around sharing the learnings and are we actually achieving what we wanted to, are we seeing the behaviour shift, are we seeing it measured – if we're saying no, having the freedom to be able to call that out earlier, I think it makes that transition in life a lot more straightforward.
Jeff: Look, there's a level of seniority, and the subtheme here that we've been dancing around but haven't yet named is psychological safety. It's the feeling that I'm comfortable calling out things that go against the grain, that contradict the plan, that aren't working – things I keep seeing that nobody's addressing.
"I think there's a level of seniority that brings some psychological safety. But ultimately, organisational culture has to make it safe."
In other words, if leaders like you and your leadership team are consistently curious – "What do we not know? What are we not aware of? What's not working?" – your teams are going to feel comfortable calling those things out to you because you're asking those questions.
When they change the questions that they ask, it models psychological safety. It models the kinds of questions they want their teams to ask, and that's how change starts.
Building Psychological Safety - "If You Don't Know How, I'll Help You"
Mat: I couldn't agree more, Jeff. I think we've covered a lot of ground today, and psychological safety is one of those really hard intangible things for some people, particularly if they've never experienced it. We see it when we get new people joining our team. We're in a privileged environment where we have a lot of psychological safety.
When new people join from organisations that haven't had that, their behaviour is almost fighting against it. They hold on to their protected ways of working where they get a little bit territorial and they don't want to be vulnerable. It can take a good few months for people to settle in and relax into it.
There was a piece that I want to go back to, and maybe we wrap up on this. You talked earlier around a leader talking to their team and asking them questions to help them understand that it's okay to come back and say, "This thing that we've been developing, this product that we've been getting out the door, isn't having the desired impact." To look at it, question it, be curious, and come back to it.
The thing that you touched on there which I really love was that supportive nature of it. It's okay to do this, and if you don't know how to do it, I'll help you. If you were to give one last tip to our audience – how would you encourage people, leaders specifically, to move more into that space?
Jeff: I think it's a question of asking the right questions. I've been married a long time – half my life, it turns out. I did the maths the other day. If I've learned nothing in my 20-plus years of being married, I've learned that you don't start out immediately solving the problem. You listen and you ask questions. I've learned that. It took a long time.
I think that's our nature as leaders as well. The tendency is "let me solve that for you." Well, hang on. Before you jump to solutions, dig into the problem. What's the issue here? What's the problem? How can I best help you?
"Well, listen, we've set these customer-centric goals now. We've got great OKRs. Thanks for teaching us how to do that. Normally though, we're told what to do, and no one's telling us what to do now, and we don't know what to do. We have no idea how to figure that out. In the past, people have told us. Now I don't know what to do. Can you help us? How do we figure that out?"
To me, those are the kinds of answers you want to elicit from your teams. What's actually going on here?
This is where five whys comes in. "Well, you know, we keep hearing that we should be talking to customers. The reality is it's really difficult to get to our customers."
"Why is it difficult?"
"Well, because we're in a B2B space and we sell aeroplane engines."
"Okay, great. And why does that make it difficult to reach customers?"
"Well, because we have a sales team."
"Why does that make it difficult?"
"Well, because they guard their contacts and they don't want us messing with it."
"Okay, now I understand."
"I think if it's about asking the right questions as a leader, and then when you get to the root cause, you say, 'Well, listen, I can try to unblock it in this way. Do you think that would be helpful? Yes or no?' That becomes far more of a partnership than a hierarchical relationship."
Then you trust me to be honest with you about how well things are working and where things need help, and that's tremendous.
I run a very, very tiny business in the sense of number of people – it's three and a half people total. Even in a three-and-a-half-person business, people try to do good work and people don't want to bother you with what's going on. Sometimes people get overwhelmed, whether it's with work or personal stuff or a combination of the two, and then things start to slip.
The more you can foster that kind of transparency and trust – psychological safety – the less often you find out something is broken only once you're facing the consequences. Instead, you find out well in advance of anything actually happening.
Mat: I love that, Jeff. I think that's a great place to wrap up. I'm really grateful for your time, really enjoyed the conversation, and thank you for sharing your wisdom.
Jeff: My pleasure, Mat. Thanks so much for having me. This was fun.
---
Thank you to Jeff Gothelf for joining us on this episode of the Easy Agile Podcast. To learn more about Jeff's work and get your copy of "Who Does What By How Much," visit jeffgothelf.com. You can also find his other books, including "Lean UX" and "Sense and Respond," which provide the foundation for the customer-centric approach to OKRs discussed in this episode.
Subscribe to the Easy Agile Podcast on your favourite platform, and join us for more conversations about agile, product development, and building better teams.
Related Episodes
Easy Agile Podcast Ep.32 Why Your Retrospectives Keep Failing (and How to Finally Fix Them)
In this insightful episode, we dive deep into one of the most common frustrations in engineering and dev teams: retrospectives that fail to drive meaningful change. Join Jaclyn Smith, Senior Product Manager at Easy Agile, and Shane Raubenheimer, Agile Technical Consultant at Adaptavist, as they unpack why retrospectives often become checkbox exercises and share practical strategies for transforming them into powerful engines of continuous improvement.
Want to put these insights into practice? We hosted a live, hands-on retro action workshop to show you exactly how to transform your retrospectives with practical tools and techniques you can implement immediately.
Key topics covered:
- Common retrospective anti-patterns and why teams become disengaged
- The critical importance of treating action items as "first-class citizens"
- How to surface recurring themes and environmental issues beyond team control
- Practical strategies for breaking down overwhelming improvement initiatives
- The need for leadership buy-in and organizational support for retrospective outcomes
- Moving from "doing agile" to "being agile" through effective reflection and action
This conversation is packed with insights for making your retrospectives more impactful and driving real organizational change.
About our guests
Jaclyn Smith is a Senior Product Manager at Easy Agile, where she leads the Easy Agile TeamRhythm product that helps teams realize the full benefits of their practices. With over five years of experience as both an in-house and consulting agile coach, Jaclyn has worked across diverse industries helping teams improve their ways of working. At Easy Agile, she focuses on empowering teams to break down work effectively, estimate accurately, and most importantly, take meaningful action to continuously improve their delivery and collaboration.
Shane Raubenheimer is an Agile Technical Consultant at Adaptavist, a global family of companies that combines teamwork, technology, and processes to help businesses excel. Adaptavist specializes in agile consulting, helping organizations deliver customer value through agile health checks, coaching, assessments, and implementing agile at scale. Shane brings extensive experience working across multiple industries—from petrochemical to IT, digital television, and food industries—applying agile philosophy to solve complex organizational challenges. His expertise spans both the technical and cultural aspects of agile transformation.
Transcript
This transcript has been lightly edited for clarity and readability while maintaining the authentic conversation flow.
Opening and introductions
Jaclyn Smith: Hi everyone, and welcome back to the Easy Agile Podcast. Today I'm talking to Shane Raubenheimer, who's with us from Adaptavist. Today we're talking about why your retrospectives keep failing and how to finally fix them. Shane, you and I have spent a fair amount of time together exploring the topic of retros, haven't we? Do you want to tell us a little bit about yourself first?
Shane Raubenheimer: Yeah, hello everyone. I'm Shane Raubenheimer from Adaptavist. I am an agile coach and technical consultant, and along with Jaclyn, we've had loads of conversations around why retros don't work and how they just become tick-box exercises. Hopefully we're going to demystify some of that today.
Jaclyn Smith: Excellent. What's your background, Shane? What kind of companies have you worked with?
Shane Raubenheimer: I've been privileged enough to work across multiple industries—everything from petrochemical to IT, to digital television, food industry. All different types of applied work, but with the agile philosophy.
Jaclyn Smith: Excellent, a big broad range. I should introduce myself as well. My name is Jaclyn. I am a Senior Product Manager here at Easy Agile, and I look after our Team Rhythm product, which helps teams realize the benefits of being agile. I stumbled there because our whole purpose at Easy Agile is to enable our customers to realize the benefits of being agile.
My product focuses on team and teamwork, and teamwork happens at every level as we know. So helping our customers break down work and estimate work, reflect—which is what we're talking about today—and most importantly, take action to improve their ways of working. I am an agile coach by trade as well as a product manager, and spent about five years in a heap of different industries, both as a consultant like you Shane, and as an in-house coach as well.
The core problem: When retrospectives become checkbox exercises
Jaclyn Smith: All right, let's jump in. My first question for you Shane—I hear a lot that teams get a bit bored with retros, or they face recurring issues in their retrospectives. Is that your experience? Tell me about what you've seen.
Shane Raubenheimer: Absolutely. I think what should be a positive roll-up and actioning of a sequence of work usually turns into a checkbox exercise. There's a lot of latency in the things that get uncovered and discussed, and they tend to perpetually roll over. From what I've seen, it becomes a box to tick rather than a mechanism to actively change what's happening within the team and, more importantly, what's coming from influences outside the team.
I think that's where retros fail, because often the team does not have the capability to do any kind of upward or downstream problem solving. They tend to just mull about different ways to ease the issues within the team by pivoting the issues rather than solving them.
Jaclyn Smith: Yeah, I would agree. Something that I see regularly too is because they become that checkbox, teams get really bored of them. They do them because they're part of their sprint, part of their work, but they're not engaged in them anymore. It's just this thing that they have to do.
It also can promote a tendency to just look at what's recently happened and within their sphere of influence to solve. Whereas I think a lot of the issues that sometimes pop up are things that leadership need to help teams resolve, or they need help to solve. It can end up with them really focusing on "Oh well, there's this one bit in how we do our code reviews, we've got control over that, we'll try to fix that." Or as you say, the same recurring issues come up and they don't seem to get fixed—they're just the same complaints every time.
Shane Raubenheimer: Absolutely. You find ways that you put a band-aid on them just so you can get through to the next phase. I think the problem with that is the impact that broader issues have on teams is never completely solvable within that space, and it's no one else's mandate necessarily to do it. When an issue is relatable to a team, exposing why it's not a team-specific issue and it's more environmental or potentially process-driven—that's the bit that I feel keeps getting missed.
The pressure problem and overwhelming solutions
Jaclyn Smith: Yeah, I think so too. The other thing you just sparked for me—the recurring issue—I think that also happens when the team are under pressure and they don't feel like they have the time to solve the problems. They just need to get into the next sprint, they need to get the next bit of work done. Or maybe that thing that they need to solve is actually a larger thing—it's not something small that they can just change.
They need to rethink things like testing strategies. If that's not working for you, and it's not just about fixing a few flaky tests, but you need to re-look at how you're approaching testing—it seems overwhelming and a bit too big.
Shane Raubenheimer: Absolutely. Often environmental issues are ignored in favor of what you've been mandated to do. You almost retrofit the thing as best you can because it's an environmental issue. But finding ways to expose that as a broader-based issue—I think that should be the only output, especially if it's environmental and not team-based.
The problem of forgotten action items
Jaclyn Smith: Something I've also seen recently is that teams will come up with great ideas of things that they could do. As I said before, sometimes they're under pressure and they don't feel they have the capacity to make those changes. Sometimes those actions get talked about, everyone thinks it's a wonderful idea, and then they just get forgotten about. Teams end up with this big long backlog of wonderful experiments and things that they could have tried that have just been out of sight, out of mind. Have you seen much of that yourself?
Shane Raubenheimer: Plenty. Yes, and often teams err on the side of what's expected of them rather than innovate or optimize. I think that's really the point of explaining the retrospective concept to people outside fully-stacked or insular teams. Very much like in change management, you need somebody outside the constructs of teams to champion that directive, the same way you would lobby for funding or transformation. It needs to be taken more seriously and incorporated beyond teams just being mini-factories supporting a whole.
You transform at a company level, you change-manage at a company level. So you should action retrospective influences in the same way. Naturally you get team-level ones, and that's normally where retrospectives do go well because it's the art of the possible and what you're mandated to do. I think bridging the gap between what we can fix ourselves and who can help us expose it is a big thing.
I see so much great work going to waste because it simply isn't part of the day job, or should be but isn't.
Making action items first-class citizens
Jaclyn Smith: Yeah, absolutely. I know particularly in the pre-Covid times when we were doing a lot of retros in person, or mostly in person with stickies on walls, I also found even if we took a snapshot of the action column, it would still end up on a Confluence board or something somewhere and get forgotten about. Then the next retro comes around and you sort of feel like you're starting fresh and just looking at the last sprint again. You're like, "Oh yeah, someone raised that last retro, but we still didn't do anything about that."
Shane Raubenheimer: I think Product Owners, Scrum Masters, or any versions of those kinds of roles need to treat environmental change or anti-pattern change as seriously as they treat grooming work—the actual work itself. Because it doesn't matter how good you are if the impediments that are outside of your control are not managed or treated with the same kind of importance as the actual work you're doing. That'll never change, it'll just perpetuate. Sooner or later you hit critical mass. There's no scenario where your predictability or velocity gets better if these things are inherent to an environment you can't control.
Jaclyn Smith: Yeah, that's true. We've talked about action items being first-class citizens and how we help teams do that for that exact reason. Because a retro is helpful to build relationships and empathy amongst the team for what's happening for each of them and feel a sense of community within their team. But the real change comes from these incremental changes that are made—the conversations that spark the important things to do to make those changes to improve how the team works.
That action component is really the critical part, or maybe one of two critical parts of a retro. I feel like sometimes it's the forgotten child of the retro. Everyone focuses a lot on engaging people in getting their ideas out, and there's not as much time spent on the action items and what's going to be done or changed as a result.
Beyond team-level retrospectives
Shane Raubenheimer: Absolutely, consistently. I think it's symptomatic potentially of how retros are perceived. They're perceived as an inward-facing, insular reevaluation of what a team is doing. But I've always thought, in the same way you have the concept of team of teams, or if you're in a scaled environment like PI planning, I feel retrospectives need the same treatment or need to be invited to the VIP section to become part of that.
Because retrospectives—yes, they're insular or introspective—but they need to be exposed at the same kind of level as things like managing your releases or training or QA, and they're not.
Jaclyn Smith: Yeah, I think like a lot of things, they've fallen foul of the sometimes contentious "agile" word. People tend to think, "Oh retros, it's just one of those agile ceremonies or agile things that you do." The purpose of them can get really lost in that, and how useful they can be in creating change. At the end of the day, it's about improving the business outcomes. That's why all of these things are in place—you want to improve how well you work together so that you can get to the outcome quicker.
At the end of the day, it's about improving the business outcomes. That's why all of these things are in place—you want to improve how well you work together so that you can get to the outcome quicker.
Shane Raubenheimer: Absolutely. Outcome being the operative word, not successfully deploying code or...
Jaclyn Smith: Or ticking the retro box, successfully having a retro.
Shane Raubenheimer: Yeah, exactly. Doing agile instead of being agile, right?
Expanding the scope of retrospectives
Jaclyn Smith: One hundred percent. It also strikes me that there's still a tendency for retros to happen only at a team level and only as a reflection of the most recent period of time. So particularly if a team is doing Scrum, or some version of Scrum with sprints, they look back over just the last sprint. I think sometimes two things get lost: the intent of a retro, and the prime directive of the retro.
In terms of intent, you can run a retro about anything. Think about a post-mortem when you have an incident and everyone gets together to discuss what happened and how we prevent that in the future. I think people forget that you can have a retro and look at your system of work, and even hone in on something like "How are we estimating? Are we doing that well? Do we need to improve how we're doing that?" Take one portion of what you're working on and interrogate it.
You can run a retro about anything. I think people forget that you can have a retro and look at your system of work, and even hone in on something like "How are we estimating? Are we doing that well? Do we need to improve how we're doing that?" Take one portion of what you're working on and interrogate it.
Understanding anti-patterns
Shane Raubenheimer: Absolutely. You just default to "what looks good, what can we change, what did we do, what should we stop or start doing?" That's great and all, but without some kind of trended analysis over a period of time, you might just be resurfacing issues that have been there all along. I think that's where the concept, or the lack of understanding, of anti-patterns comes in, because you're measuring something that's happened again rather than measuring or quantifying why it's happening at all.
I think that's the big mistake of retros—it's almost like an iterative band-aid.
I think that's the big mistake of retros—it's almost like an iterative band-aid.
Jaclyn Smith: Yeah. Tell me a little bit more about some of the anti-patterns that you have seen or how they come into play.
Shane Raubenheimer: One of them we've just touched on—I think the buzzword for it is cargo cult agile. That's just cookie-cutter agile: doing agile because you have to instead of being agile. Things like your stand-up or your review or even planning just become "okay, well we've got to do this, so we've ticked the box and we're following through."
Another is not understanding the boundaries of your method—whether you're playing "wagile", waterfall sometimes and agile at other times, and you mistake that variability for agility. In reality you don't have an identity; you're course-correcting blindly based on whatever kind of fire is in your way.
Another big anti-pattern is not understanding the concept of what a team culture means and why it's important to have a team goal or a working agreement for your team. Almost your internal contracting. We do it as employees, right?
I think a lot of other anti-patterns come in where something's exposed within a team process, and because it's not interrogated or cross-referenced across your broader base of teams, it's not even recognized as a symptom. It is just a static issue. For me, that's a real anti-pattern in a lot of ways—lack of directive around what to do with retrospectives externally as well as internally. That's simply not a thing.
A lot of other anti-patterns come in where something's exposed within a team process, and because it's not interrogated or cross-referenced across your broader base of teams, it's not even recognized as a symptom. It is just a static issue. For me, that's a real anti-pattern in a lot of ways—lack of directive around what to do with retrospectives externally as well as internally.
Jaclyn Smith: Yeah, I think that's a good call-out for anyone watching or listening. If you're not familiar with anti-patterns, they're common but ineffective responses to recurring problems. They may seem helpful initially to solve an immediate problem, but they ultimately lead to negative outcomes.
Shane, what you just spoke about there with retrospectives—an example of that is that the team feel disengaged with retrospectives and they're not getting anything useful out of it, or change isn't resulting from the retrospectives. So the solution is to not hold them as frequently, or to stop doing them, or not do them at different levels or at different times. That's a really good example of an anti-pattern. It does appear to fix the problem, but longer term it causes more problems than it solves.
Another one that I see is with breaking down work. Spending time together to gain a shared understanding of the work and the outcome that you need, and getting aligned on how that work breaks down, can look like quite an investment on paper. But it saves time at the other end, reducing risk, reducing duplication and rework, to get a better outcome quicker. You shift where the time is spent: development contracts because you've spent a little bit more time discovering and understanding what you're doing.
A common anti-pattern that I see there is "we spent way too long looking at this, so we're going to not do discovery in the same way anymore," or "one person's going to look at that and break it down."
The budget analogy
Shane Raubenheimer: I always liken it to your budget. The retrospective is always the nice shiny holiday—it's always the first to go.
I always liken it to your budget. The retrospective is always the nice shiny holiday—it's always the first to go.
Jaclyn Smith: It's the contractor.
Shane Raubenheimer: Yeah. Exposing stuff that everybody allegedly already knows is seen as counterintuitive, because "we're just talking about stuff we all know." It often gets conflated into "okay, we'll just do that in planning." But flattening those ceremonies into each other is a huge anti-pattern, and it's what teams tend to do, because of your point of "well, we're running out of daylight for doing actual development."
But that's hitting your head against the wall repeatedly and hoping for a different outcome without actually trying anything different. Use a different wall, even. I think it's because people are so disillusioned with retrospectives. I firmly believe it's not an internal issue. I believe if the voices are heard at a budgeting level or at a management level, it will change the whole concept of the retrospective.
Solution 1: Getting leadership buy-in
Jaclyn Smith: I like it, and that's a good thread to move on to. So what do we do about it? How do we help change this? What are some of the practical tips that people can deploy?
Shane Raubenheimer: A big practical tip—and this is going to sound like an obvious one—is actual and sincere buy-in. What I mean by that is, as a stakeholder, if I am basing your performance and your effectiveness on the quality and output of the work that you're promising me, then I should be taking the issues that you keep having more seriously.
Because if you're course-correcting for five, six, or seven sprints and you're still not getting this increasing, predictable velocity, and if it's not your team size or your attitude, it's got to be something else. I often relate that to it being environmental.
Buy into the outputs of change the same way you would buy into keeping everyone honest, managing budgets, and chasing deadlines—it should all be part of the same thing. They should all be sitting at the VIP table, and I think that's a big one.
Buy into the outputs of change the same way you would buy into keeping everyone honest, managing budgets, and chasing deadlines—it should all be part of the same thing. They should all be sitting at the VIP table.
Solution 2: Making patterns visible
Jaclyn Smith: I think so too. Something that occurs to me, and it goes back to what we were talking about right at the beginning: sometimes the pattern itself, the fact that the same thing keeps coming up, isn't actually visible, and that's part of the problem, right?
I know some things we've been doing in Easy Agile TeamRhythm around that recently, attempting to help teams with this. We've recently started surfacing all incomplete action items in retrospectives so people can see that big long list. Because they can convert their action items to Jira items or work items, they can also see where they've just been sitting and languishing in the backlog forever and a day and never been planned for anything to be done about them.
We've recently started surfacing all incomplete action items in retrospectives so people can see that big long list. Because they can convert their action items to Jira items or work items, they can also see where they've just been sitting and languishing in the backlog forever and a day and never been planned for anything to be done about them.
We've added a few features to sort and that kind of thing. Coming in the future—and we've been asked about this a lot—is "what about themes? What about things that are bubbling up?" So that's definitely on our radar, and it will be helpful.
I think that understanding that something has been raised—a problem getting support from another team, or with a broken tool or an outdated tool that needs to be replaced in the dev tooling or something like that—if that's been popping up time and time again and you don't know about it, then even as the leader of that team, you don't have the ammunition to then say "Look, this is how much it's slowed us down."
I think we live in such a data world now. If those actions are also where the evidence is that this is what needs to change and this is where the barriers are...
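For readers who want to make this concrete, here is a minimal sketch of pushing a retro action item into Jira so it lives in the backlog rather than on a forgotten board. It uses the Jira Cloud REST API (v2, where the description is plain text); the site URL, credentials, project key, and label are hypothetical placeholders, not how Easy Agile TeamRhythm does it.

```typescript
// A minimal sketch: create a Jira issue from a retro action item.
// Runs on Node 18+ (global fetch). All identifiers below are illustrative.
const JIRA = "https://your-site.atlassian.net";
const AUTH =
  "Basic " + Buffer.from("me@example.com:api-token").toString("base64");

async function createActionItem(summary: string, retroDate: string) {
  const res = await fetch(`${JIRA}/rest/api/2/issue`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: AUTH },
    body: JSON.stringify({
      fields: {
        project: { key: "TEAM" },       // hypothetical project key
        issuetype: { name: "Task" },
        summary,
        description: `Retro action from ${retroDate}`,
        labels: ["retro-action"],       // lets JQL find stale items later
      },
    }),
  });
  if (!res.ok) throw new Error(`Jira returned ${res.status}`);
  return res.json(); // { id, key, self }
}
```

Stale items can then be surfaced with a JQL query such as `labels = retro-action AND statusCategory != Done`, which is exactly the "big long list" of languishing actions described above.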
Solution 3: The power of trend analysis
Shane Raubenheimer: Certainly. I agree. Touching on the trend analytics approach—we do trend analysis on everything except what isn't happening or what is actually going wrong, because we just track the fallout of said lack of application. We don't actually trend or theme, to your point.
We do trend analysis on everything except what isn't happening or what is actually going wrong, because we just track the fallout of said lack of application.
We theme everything when we plan, yet somehow we don't categorize things like performance issues. If everybody's having a performance issue, that's a theme. We almost need to categorize or expose themes that are outward-facing, not just inward-facing. Because it's all well and good saying "well, our automated testing system doesn't work"—but what does that mean? Why doesn't it work?
I think it should inspire external investigation. When you do a master data cleanup, you don't just say "well, most of it looks good, let's just put it all in the new space." You literally interrogate it at its most definitive and lowest level. So why not do the same with theming and trending environmental issues that you could actually investigate, and that could become a new initiative that would be driven by a new team that didn't even know it was a thing?
Jaclyn Smith: Yeah, and you're also gathering data at that point to evidence the problem rather than "oh, it's a pain point that keeps coming up." It is, but it gives you the opportunity to quantify that pain point a little bit as well. I think that is sometimes really hard to do when you're talking about developer experience or team member experience. Even outside of product engineering teams, there are things in the employee experience that affect the ability for that delivery—whatever you're delivering—to run smoothly. You want to make that as slick as possible, and that's how you get the faster outcomes.
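As an illustration of the theming and trending discussed here, this minimal sketch tallies how often each theme recurs across retros. The theme tags are invented for the example; the point is that a repeat offender becomes a visible count, and evidence, rather than a vague feeling.

```typescript
// A minimal sketch of trend analysis across retrospectives: count how often
// each theme recurs so persistent issues surface first.
interface RetroItem {
  sprint: string;
  theme: string; // e.g. "flaky-tests", "env-outage" (illustrative tags)
}

function themeTrends(items: RetroItem[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const item of items) {
    counts.set(item.theme, (counts.get(item.theme) ?? 0) + 1);
  }
  // Sort so the most persistent issues come first.
  return new Map([...counts].sort((a, b) => b[1] - a[1]));
}

const trend = themeTrends([
  { sprint: "42", theme: "env-outage" },
  { sprint: "43", theme: "env-outage" },
  { sprint: "43", theme: "flaky-tests" },
]);
console.log(trend); // Map { "env-outage" => 2, "flaky-tests" => 1 }
```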
Solution 4: The human factor
Shane Raubenheimer: Absolutely. You can never underestimate the human factor as well. Even if everything I'm doing, and every member of my team is doing, is to the best of not just our capability but of what we have available to us, you become jaded, you become frustrated. Because if you're hitting your head against the same issue regardless of how often you're pivoting, that can be very disillusioning, especially if it's not been taken as seriously as your work output.
Even if everything I'm doing, and every member of my team is doing, is to the best of not just our capability but of what we have available to us, you become jaded, you become frustrated.
If we run a week late for a customer delivery or a customer project, we start complaining about things like money, budget overspend, over-utilization. Identifying systemic or environmental issues that you can actually quantify should be treated in exactly the same way. I feel very strongly about this.
Solution 5: Breaking down overwhelming action items
Jaclyn Smith: We tend to nerd out about this stuff, Shane, and you're in good company. You've also reminded me—we've put together a bit of a workshop to help teams and people understand how to get the most out of their retrospectives, not just in terms of making them engaging, but fundamentally how to leverage actions to make them meaningful and impactful.
We've spoken a lot about the incremental change that is the critical factor when it is something that's within the team's control, or close to the team's control. That's how you get that expansion of impact—the slow incremental change. We've also talked about how sometimes those action items seem overwhelming and too big. What's your advice if that's the scenario for a team? What do you see happen, and what can they do?
Shane Raubenheimer: I would suggest following the mantra of "if a story is too big, you don't understand enough about it yet, or it's not broken down far enough." Incremental change should be treated in exactly the same way. The "eat the elephant one bite at a time" analogy. If it's insurmountable, identify a portion of it that will make it a degree less insurmountable next time, and so on and so forth.
If we're iterating work delivery, problem-solving should be done in rapid iteration as well. That's my view.
Jaclyn Smith: I like it.
The "eat the elephant one bite at a time" analogy. If it's insurmountable, identify a portion of it that will make it a degree less insurmountable next time, and so on and so forth. If we're iterating work delivery, problem-solving should be done in rapid iteration as well.
Wrapping up: What's next?
Jaclyn Smith: I think we're almost wrapping up in terms of time. What can people expect from us if they join our webinar on July 10th, I believe it is, where we dive in and nerd out even more about this topic, Shane?
Shane Raubenheimer: I think the benefit of the webinar is going to be a practical showing of what we're waxing lyrical about. It's easy to speak and evangelize, but I think from the webinar we'll show turning our concepts into actual actions that you can eyeball and see the results of.
With the approach we took to our workshop, I think people will very quickly get the feeling of "this is dealing with cause and effect in a cause-and-effect way." So, to put it in one sentence: a practical, active demonstration of how to quantify and actually do what we've been waxing lyrical about.
The benefit of the webinar is going to be a practical showing of what we're waxing lyrical about. It's easy to speak and evangelize, but I think from the webinar we'll show turning our concepts into actual actions that you can eyeball and see the results of.
Jaclyn Smith: Excellent. That was a lovely summation, Shane. If anyone is interested in joining, we urge you to do so. You can hear us talking more about that but get some practical help as well. There is a link to the registration page in the description below.
I think that's about all we have time for today. But Shane, as always, it's been amazing and lovely to chat to you and hear your thoughts on a pocket of the agile world and helping teams.
Shane Raubenheimer: Yeah, it's always great engaging with you. I always enjoy our times together, and it's been my pleasure. I live for this kind of thing.
Jaclyn Smith: It's wonderful! Excellent. Well, I will see you on the 10th, and hopefully we'll see everyone else as well.
Shane Raubenheimer: Perfect. Yeah, looking forward to it.
Jaclyn Smith: Thanks.
Ready to end the frustration of ineffective retrospectives?
Jaclyn Smith and Shane Raubenheimer also hosted a live, hands-on webinar designed to turn retrospectives into powerful engines for continuous improvement.
In this highly interactive session, they talked about how teams can:
- Uncover why retrospectives get stuck in repetitive cycles
- Clearly capture and assign actionable insights
- Identify and avoid common retrospective pitfalls and anti-patterns
- Get hands-on experience with Easy Agile TeamRhythm to streamline retrospective actions
- Take away practical tools, techniques, and clear next steps to immediately enhance retrospectives and drive meaningful team improvements
Easy Agile Podcast Ep.26 Challenging the status quo: Women in engineering
"It was great to be able to have this conversation with Maysa and have her share her story. So many great takeaways." - Nick Muldoon
Join Nick Muldoon, Co-founder and Co-CEO of Easy Agile as he chats with Maysa Safadi, Engineering Manager at Easy Agile.
For a woman growing up in the Middle East, being passionate about pursuing a career in the world of tech didn't exactly go hand in hand with her surroundings. Navigating her way through a very patriarchal society, Maysa talks about her career journey and how she got to where she is today.
Having the odds stacked against her, Maysa talks about challenging the status quo, the constant pressure to prove herself in a male-dominated industry, the importance of charting your own course and her hopes for the future of women in tech.
This is such an inspiring episode, we hope you enjoy it as much as we did.
Transcript
Nick Muldoon:
Hi, team. Nick Muldoon, co-founder and co-CEO at Easy Agile, and I'm joined today by Maysa Safadi, who's an engineering manager here at Easy Agile. We'll get into Maysa's story and journey in just a little bit, but before we do, I just wanted to say a quick acknowledgement of the traditional custodians of the land from which we are recording and broadcasting today, the people of the Dharawal speaking country just south of Sydney in Australia. We pay our respects to elders past, present, and emerging, and extend that same respect to all Aboriginal, Torres Strait Islander and First Nations people joining us and listening in today. Maysa, welcome. Thanks for joining us.
Maysa Safadi:
Thank you, Nick. Thank you for inviting me.
Nick Muldoon:
So, Maysa's on today. We're going to explore Maysa's journey on her career to this point, and I think one of the things that interests me in Maysa's journey is she's come from a fairly patriarchal society in the Middle East, and has overcome a lot of odds that some of her peers didn't overcome, and she's managed to come to Australia, start a family in Australia, has three beautiful children and is an engineering manager after spending so many years as a software engineer. So, Maysa, I'd love to learn a little bit about the early stages of your life and how you got into university.
Maysa Safadi:
I was born and raised in the United Arab Emirates. I am one of nine: I have three brothers and five sisters. I'm the middle child, actually. Dad and Mom were very focused on raising good, healthy kids, and more importantly on educating all of their kids, regardless of whether they were boys or girls. I started my education at schools there, and when I graduated from high school, I ended up getting enrolled in a college, like what you call TAFE here in Australia.
Education in the United Arab Emirates is not free, and I was one of nine, with my father holding that aim and goal of educating all of us. So when it came to education, two factors played a big part: could Dad afford sending me to that college or university, and after I finished, would I be able to find a job in that field? One of my dream jobs, I remember, was to be a civil engineer, and my older brother, the second oldest, told me, "It's good that you want to study civil engineering. But remember, you will not be able to find a job."
Nick Muldoon:
Tell me why.
Maysa Safadi:
The United Arab Emirates is a male-dominated country, and civil engineering is a male-dominated industry. If you go looking for a job after graduation, it is pretty much given to males, and Emirati males first. So it has to go a long way down the queue before it gets to me, and to be realistic, sometimes you give up your dreams because you know you are not going to have a chance later in life.
Nick Muldoon:
Oh, my gosh, this is demoralizing.
Maysa Safadi:
Unfortunately.
Nick Muldoon:
Okay,
Maysa Safadi:
So, the decision for me to get into engineering: again, I couldn't really go to university because it was too expensive. My older sister had a friend who told her about this institute that was teaching computers. When it came to Mom and Dad, they really told us, "Do whatever you want, study whatever you want. It is you who is going to study that field, and you need to like it and make sure you can make the most of it." With that institute, it was reasonably okay for my dad to pay my fees, and they were teaching computers. I thought, "Yeah, all right, computers, it is in the science field, right? Maybe I can't study civil engineering, but I'm really very interested to know more about computers."
Nick Muldoon:
Similar, close enough.
Maysa Safadi:
Close enough. I ended up getting enrolled, and I remember the very first subject was fundamentals of computers, or computer fundamentals, something like that, and I thought, "Yeah, all right, that is interesting." I really did finish my education there: after two years I ended up with a diploma in computer science.
Nick Muldoon:
So, was this a unique situation for you or were most of your girlfriends from high school also going on to college?
Maysa Safadi:
It's unique, actually, unique to my family. I'm not saying it's rare, you will find other families doing it, but it's not common. It is unique because, yes, most of the girls, if not all, go to school, it's compulsory in the United Arab Emirates, but a very small number of them pursue higher education. Pretty much, girls end up finishing school, and at the very first chance to get married, they end up getting married and starting their own family. I remember-
Nick Muldoon:
And you've chosen a different path because-
Maysa Safadi:
Oh, yeah.
Nick Muldoon:
... yes, you have a family today obviously, but you established your career, you didn't finish school and get married.
Maysa Safadi:
I think I really give so much credit to Mom and Dad in that sense. They told us education is more important than starting a family or getting married. They said, "Finish your degree, finish your education, then get married." The other thing they said was, "Do not even get married while you are studying, because for sure you won't be able to finish it. Maybe your husband wouldn't want you to finish it. Maybe you will become so busy with the kids and you will put it off." I actually remember so many times with my older sisters, it's traditional marriage there, when people came to propose for their hands, my dad always used to say, "No, finish your education first."
Nick Muldoon:
So, this is interesting because I think your eldest was born when you went and actually continued education and got your master's, is that correct?
Maysa Safadi:
Yes. I got a diploma in computer science. However, I always wanted a bachelor's degree. I knew that there was more to it. I fell in love with computing, but I wanted more, and I always had that perception in mind: "If I'm going to get a better opportunity, then I have to have a better certificate or education." So I thought getting a bachelor's degree was going to give me better chances. I was working in the United Arab Emirates and saving money, and Wollongong University had a branch there in Dubai. So I had my eyes on finishing my degree there, and eventually I ended up enrolling at Wollongong University, Dubai campus, to get my bachelor's degree in computer science.
Nick Muldoon:
So, just for folks that are listening along, Wollongong is the regional area of Australia where Maysa and I and many of our team live, and the University of Wollongong is our local university, which has a branch in Dubai.
Nick Muldoon:
So you were with University of Wollongong doing this bachelor degree, and how did you make the transition and move to Australia?
Maysa Safadi:
When I was studying at Wollongong University, Dubai campus, and working at the same time to be able to pay the fees, I met my husband at work, and it happened that he had a skilled migrant visa to come to Australia. A coincidence. So I was thinking, "All right, he is going to go to Australia, and he is a person that I really do see myself spending the rest of my life with. So how about I transfer my papers to Wollongong University here in Australia and finish my degree from here, while he gets the chance to live in the country, and then we can make up our minds: is it a place for us to continue our life? If not, it was a good experience. If so, that is another new experience and journey that we are going to take." So we ended up coming to Australia, and I finished my degree here.
Nick Muldoon:
What did you find when you arrived at Australia? How was it different from United Arab Emirates? How was it different for women? How was it different for women in engineering given what your brother had said about civil engineering in Dubai?
Maysa Safadi:
I had a culture shock when I came to Australia. I had gone from a male-dominated, third-world country with no opportunities for females, to a country where everything is so different. The way of living, the communication, the culture, everything was so different. When it comes to engineering, because I didn't finish my degree in the United Arab Emirates, I didn't even get the chance to work in engineering there. However, knowing the country and knowing the way they take talent in, I knew I had slim chances. Then, coming to Australia to finish my degree at the university, it was challenging. Someone from the Middle East, English as a second language, being in a computing degree and looking around me: "My god, where are the girls? I don't really see many of them around." And then, yeah, getting into that stereotype of an industry, of a field, where it is seen as only for males.
Nick Muldoon:
Yeah, so a bit of a culture shock coming across. I guess fast forward: you've spent a decade in software engineering and then progressed into engineering leadership. What was the change, and how did you perceive it, going from a team member to a people leader?
Maysa Safadi:
I graduated from Wollongong University and ended up getting a job at Motorola as a graduate software engineer. In the whole team there were three females.
Nick Muldoon:
How big was the team?
Maysa Safadi:
How big was the team? It was around 20.
Nick Muldoon:
Okay.
Maysa Safadi:
Yep. There was the network team, which had, I can't remember how many, but it was a different team. The team I was in was the development team, and there were three girls in there: one was another graduate who came into the program, and one had started a year before. Interestingly, those two females are not in IT anymore. I really loved the problem solving, and I really loved seeing the outcome of my work in people's hands, because I was developing features for mobile phones. So all that was on my mind then, as an IC, was how to become better at my work, how to learn more, how to prove to everyone that I'm as capable as any other male in the team.
Nick Muldoon:
Do you think, Maysa, that that's something that you've had to do throughout your career to prove yourself?
Maysa Safadi:
Yes, yes. It's a tough industry. Really not seeing many females makes it hard, because you look for role models who make you think, "Oh, she made it. I can make it. If she's still in there, then I can learn from her." I missed all of that. I never had a mentor in my career, or even a female manager, in any of the jobs I had before. So I was always dealing with males, always trying to navigate my way to show them the different perspective I could bring. Even the subtle interactions I used to have with them gave me that message: "You are not capable enough. You are not there yet. This is our territory. Why are you here?" All of these things, without you even thinking about it, really sink your self-esteem and your self-worth when you are in industries like this. Yeah.
Nick Muldoon:
So, I'm conscious you are in this position now, and you've kind of talked about how you can't be what you can't see. If you can't see a woman who's a people leader and you're not reporting to one, then it's hard to see how you can become that. But here you are, you have become that, and for our team here, you are one of the women leaders in the company, which is fantastic. So I guess, what are the sorts of activities you're undertaking to be present and visible as a woman people leader in the engineering field? I think it was earlier last month, perhaps, that you were at WomenHack in Sydney.
Maysa Safadi:
Yes, I've been-
Nick Muldoon:
What's WomenHack?
Maysa Safadi:
Okay. WomenHack is an organization that brings diverse, talented women in tech together, to support them, to educate them, and not just that, to try to connect them with companies that appreciate diversity and inclusion, and that are basically trying to recruit... Pretty much, it is about finding opportunities for women in tech, in companies that do value diversity.
Nick Muldoon:
Okay. So, I think it's interesting, I see these parallels here between your mom and dad that kind of went out on a limb and extended themselves financially to get six girls through a college and university education in the Middle East, and they were doing something that was perhaps fairly progressive at the time. You said it wasn't common. It sounds like WomenHack is bringing together more progressive companies these days, that are creating opportunities for women to get into leadership or even to accelerate their careers.
Maysa Safadi:
Yes, it is so pleasing to see the change that has happened over the years. When I reflect back on 2000, when I graduated and ended up working in IT, and all of the behaviors, there was no knowledge or awareness of how important diversity is. People were not even aware that females were quitting the field, or that not many females enrol in the first place in degrees like computing or engineering. Even in education through school, no awareness was there. Then you see now the progress that is happening: more awareness during school, and universities trying to make the degrees, the fields, more inviting for females and for diversity. They are trying to bridge the gaps. And many companies are taking action to make it easier for females to be in the field and to progress in the field.
Beyond WomenHack, there are so many other groups, like Women in Tech, and so many companies that are allies to females in tech as well, trying to support them and make their voices heard by other companies. As well, all of the research and the science is proving that having diversity in teams is more beneficial for companies and for teams, that teams are more engaged when they have these differences. So yeah, there is a lot of awareness happening at the moment, and so many companies are trying to do something about it. I wish that had been there early on.
Nick Muldoon:
Earlier in your career.
Maysa Safadi:
Earlier in my career, yes. So many times I felt so isolated. So many times I was sitting back and saying, "Is it worth the fight? Why do I always have to work twice as hard, just to prove that I'm capable? Why does it have to be this way? Why am I not equal?" That is what actually made me change my career from IC to people leader. Being a people leader wasn't only about females; it was for me to have a voice, to be able to help anyone in the field, regardless of whether they are male or female. More so, it is to lead by example, to be a role model for others, to show others that if I can make it then definitely you can as well, to provide the support, and to build that trust.
Nick Muldoon:
So, how can we as an industry, I guess, how do we change... I'm reflecting on Iran at the moment, and the activities that have taken place over the last 60 days in particular, but really just more media coverage of hundreds of years of oppression of women. What do you hope, as a people leader, a woman who's come from the Middle East, what do you hope for these young women and girls in Iran over their trajectory? If we're still making a journey here in Australia, in a male-dominated industry, what sort of hope do you have over the 20 years from here to 2040, for these women who are in the Middle East today and still haven't found a progressive society?
Maysa Safadi:
Politics. It's the game of power. What I'm really hoping is for the awareness to reach these females in locked countries, so they know there are better opportunities for them. They need to be stronger, they need to support each other, they need to empower each other. As easy as that is to say, it's not that easy to do. However, all of that frustration that is built up in them is surfacing from time to time. I'm really hoping, not just for Iranian women, I'm really hoping for every woman in the world, regardless of whether it is a third-world country or an advanced country like Australia, to always feel that they are worthy, to always feel that they can have a voice, they can be part of life, and they are doing meaningful things.
Now, if they are raised always being told you are second, always being told your role is only to get married and raise a family, they will believe it themselves. So it needs to come from women like us: leading by example, being role models, spreading the awareness. And media, we really need to use the media well so we can reach these people who are locked in their countries, thinking that this is normal. A lot of work needs to happen.
Nick Muldoon:
Well, that's an interesting observation. It is normalized for them, isn't it? So, look, reflecting on my own upbringing, I remember that my parents would always say you can achieve anything you put your mind to, but I could open up the newspaper, I could look on TV, and I could see a host of people who looked like me, that is, white Australian males, who were successful in business, and so I believed that I could do and be whatever I wanted to do and be. So I guess, how do we get this message out? How do we tell your story more broadly, to get out the message that you can do whatever you put your mind to, that you can achieve whatever you hope to achieve? There's something interesting for me to reflect on in the media piece that you're talking about.
Maysa Safadi:
Yeah, and I think the countries that are advanced, the countries that are recognizing women more and more, have more responsibility in spreading that awareness. They have to do more. Media is such an important thing; it is what people read every day and watch every day.
Nick Muldoon:
I guess, I'm conscious, like we're talking about half a world away in the Middle East, but you're actually involved in a community group here at home. What's that group that you're involved in and how's that helping women?
Maysa Safadi:
Yeah, I'm a board member for an organization called Women Illawarra. It is run by women, for women. Basically this organization helps women experiencing domestic violence and sets them on the right path. It gives them services, it educates them, and it even helps them with counselling and legal support so they can get out of these situations. It makes them believe that they can be part of this society, that they are an important voice in the society, in the community, and that they can really contribute and make an impact. So by providing this education and this support, it is empowering these women to take matters into their own hands and, again, to really set the path for their own life and their own success. They need to take control back, and yeah, it even helps their kids see that their moms are really doing the right thing.
Nick Muldoon:
It's this interesting thread that comes through in your entire life story and your journey, that mom and dad wanted you to have an education so that you were empowered to chart your own course in life-
Maysa Safadi:
Yes.
Nick Muldoon:
... and here you are today, giving back to other women, trying to help them get an education and feel empowered so that they can chart their own course in life. I think that's fantastic. Thank you, Maysa.
Maysa Safadi:
Thank you.
Nick Muldoon:
What is your hope for women over the next 10 years? Because it sounds like we're on a trajectory, we're making progress in some countries, we're not making as much progress in other countries. What's your hope for 2030? What does it look like?
Maysa Safadi:
My hope for 2030, or my hope for... I really hope it takes even five years, less than 10. My hope for 10 years is not to have conversations about how to reduce the gap between males and females, because by that time, it should just be the way everyone operates. My hope in 10 years' time is to have equal opportunities for anyone, regardless of their gender, background, the language they speak, or their physical abilities. It needs to be equal. Equity is such an important thing. Exposure to the same opportunities is so important, regardless of your abilities. And stereotyping, I need that to be totally erased from the world.
We are all human. We did not choose where we were born, who our parents are, what our upbringing was, what our financial situation was. It wasn't our choice, so why do we have to be penalized for it? We have a responsibility toward the world to help everyone. We are social people; we really thrive when we have good connections and good bonds, and we really need to tap into the things that make us better. We have so many talents that we can use for the benefit of the world. I know countries are always going to have fights and politics, and everyone is looking for power; that's not going to disappear. But us, as people who are part of this world, we really need to try to uplift and upskill everyone around us. I really hope for the females in all of the other countries to know that they are worth it, to know that they are as good as anyone else. They have the power; they don't realize how much strength and power they have. It comes from self-belief. Believe in yourself, and you will be surprised how much you will be able to achieve.
Nick Muldoon:
There you go. Believe in yourself and you'll be surprised with how much you are able to achieve. Maysa Safadi thank you so much for your time. Really appreciate it.
Maysa Safadi:
Thank you so much Nick. Thank you everyone.
Easy Agile Podcast Ep.12 Observations on Observability
On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.
Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.
💥 What is observability?
💥 How can you improve observability?
💥 What's the end goal?

"This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""
Be sure to subscribe, enjoy the episode 🎧
Transcript
Jared Kells:
Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any Aboriginal people listening with us today.
Jared Kells:
So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting. We've got a couple of primarily front end engineers, so Angad and I are going to share some front end technical stuff, and Jess and Jordan are going to be talking a bit about observability. So we'll start by introductions. I'll pass it over to Jess.
Jess Belliveau:
Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?
Jordan Simonovski:
I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.
Angad Sethi:
Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.
Jared Kells:
Nothing fancy!
Jess Belliveau:
Don't sell yourself short.
Jared Kells:
Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?
Jess Belliveau:
Yeah, yeah. That's it, we'll wrap up!
Jared Kells:
Podcast over! Tell us about observability.
Jess Belliveau:
Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this, and sort of, for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience, that maybe people don't know a lot about. And there are a lot of movements in the broad IT scope, I guess, that you could talk about. There are so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now, and it's a core part of my job and Jordan's job as well, so it's something easy for us to talk about, and something you can give an introduction to without getting too technical. This is a topic you can go really deep into the weeds on, so we picked it as something that hopefully we can explain at a level that might interest the people at home listening as well.
Jess Belliveau:
Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.
Jordan Simonovski:
Okay!
Jess Belliveau:
So we thought we'd try and describe to you, first of all, what observability is. Because the term doesn't give you much; it gives you a little hint, but it'll be good to baseline what we're talking about when we say "observability." And then, why would a development team want observability? Why would a company want it? Sort of high level, what benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry buzzwords and commit to stuff that you might not need, that sort of thing.
Jared Kells:
Yep.
Jordan Simonovski:
Yep.
Jess Belliveau:
We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then, because we're not going to try and get too deep, we could just give a few pointers to some websites and some YouTube talks for further reading, and go from there. So yeah, Jordan, you want to-
Jared Kells:
Sounds good.
Jess Belliveau:
Yeah, hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, if there's stuff that you think we don't cover or that you want to know more about, ask away.
Jordan Simonovski:
I guess to start with observability: it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop, or close a feedback loop, on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to differentiate it from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that, and I think they serve their own purpose; traditional monitoring tools are not in any way obsolete either. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzzwords.
Jordan Simonovski:
But even for things like a traditional kind of monolith, observability really serves to help us ask new questions of our systems. So the way it tends to get explained is: monitoring exists for our known unknowns. With seniority comes the ability to almost predict in what way your systems will fail. The longer you're in the industry, you know this: a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.
Jared Kells:
I was going to say that!
Jordan Simonovski:
I'll try not to get too much into-
Jared Kells:
Runs out of memory!
Jordan Simonovski:
Yeah. So that's something that you're expecting to fail at some point, and that's something you can consider a known unknown. But then the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about is as the unknown unknowns of our system, the things we want to find out about and ask new questions of. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? You want me to go any further into detail about this stuff? I can talk all day about this.
Jared Kells:
Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Traditionally with a Java app, I might monitor memory because I know JVMs run out of memory, and that's a thing that I monitor. But observability is more broad, like going almost over the top with what you monitor and log so that you can-
Jordan Simonovski:
Yeah. And I wouldn't necessarily say it's going over the top. I think it's adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work, and builds on top of the premise of traces, I guess. You're creating these events, and these events are different transactions that could be happening in your applications, usually serving some kind of request. And with that request, you can add a whole bunch of context: you can add which server this might be running on, which time zone, all of these additional attributes, and so on. You can throw user agents in there if you want to. The idea of observability is that you're not necessarily constrained by high cardinality data, high cardinality data being data sets that can change quite largely in terms of the kinds of data they represent, or the combinations of data sets that you could have.
Jordan Simonovski:
So if you want to ship metrics on something on a per-user basis, and you want to look at how different users are affected by things, that would be considered a high cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service; that's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now considered new metrics. Whereas with observability, we try to store our data and query it in a way that lets us handle pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that only users from a particular time zone or on a particular device are experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things, or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
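To make that concrete, here is a minimal sketch of the kind of wide, context-rich event being described: one record per request, with attributes (all illustrative) that let you slice errors by user, region, device, or deploy after the fact.

```typescript
// A minimal sketch of a wide, high-cardinality event. Every field here is
// an illustrative assumption; the point is one rich record per request.
interface RequestEvent {
  timestamp: string;
  route: string;
  status: number;
  durationMs: number;
  userId: string;      // high-cardinality: unique per user
  userAgent: string;
  region: string;
  buildSha: string;    // correlate errors with a specific deploy
}

function emit(event: RequestEvent): void {
  // Ship one JSON line per request; the backend indexes every field.
  process.stdout.write(JSON.stringify(event) + "\n");
}

emit({
  timestamp: new Date().toISOString(),
  route: "/api/issues/drag",
  status: 500,
  durationMs: 412,
  userId: "u-123",
  userAgent: "Mozilla/5.0 ...",
  region: "ap-southeast-2",
  buildSha: "3f9c2a1",
});
```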
Angad Sethi:
Would it be fair to say-
Jared Kells:
Yeah. It's [crosstalk 00:11:02].
Angad Sethi:
Oh, sorry Jared.
Jared Kells:
No you can-
Angad Sethi:
Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?
Jordan Simonovski:
Yeah.
Angad Sethi:
Oh.
Jess Belliveau:
And better equip you to find them. One of the things I find is that a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words, it being a set of principles or best practices. It's giving you some guidance, like having good logging coming out of your application. Structured logs, so you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about, giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. The good thing with metrics is we're turning things around a bit: instead of doing, and I don't want to get too technical, black box monitoring, where we're on the outside trying to peer in with probes and checks, the idea with metrics is that the application is actually emitting them to inform us what state it is in, thereby making it more observable.
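As a small illustration of that structured logging, here is a minimal sketch using pino, one of several structured loggers for Node.js; the service name and fields are invented for the example.

```typescript
// A minimal sketch of structured logging: instead of a free-text string,
// emit a consistent JSON shape that a log pipeline can index and query.
import pino from "pino";

const logger = pino({ name: "checkout-service" }); // illustrative name

logger.info(
  { userId: "u-123", cartSize: 3, region: "ap-southeast-2" }, // example fields
  "checkout started"
);
```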
Jess Belliveau:
Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into this next point of why would a team want to implement it. If you want to start again, Jordan?
Jordan Simonovski:
Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.
Jess Belliveau:
Oh, I didn't sign up for that!
Jordan Simonovski:
I think why teams would want it is because, well, it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in house. Observability capabilities themselves are not something you achieve just by buying a thing: you can't buy dev ops, you can't buy Agile, and you can't buy observability either.
Jared Kells:
Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-
Jess Belliveau:
Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.
Jared Kells:
Yeah, sorry, sorry, yeah! Go on.
Jordan Simonovski:
You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start-ups, like LightStep and Honeycomb, which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time. To build good operational hygiene and good production excellence, as Liz Fong-Jones would put it, you need to be able to close that feedback loop. We have a whole bunch of tools already. We have CI/CD systems in place. We have feature flags now, which help us decouple deployments from releases: you can deploy code without actually releasing code, and you can actually give that power to your PMs now if you want to, with feature flags, which is great.
Jordan Simonovski:
But what you can also do now is completely close this loop. As you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for beta releases of our application," and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer, and it does give you a better understanding of how you're affecting users as well. There are a whole bunch of tools that you can use to determine this stuff too. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLOs in place as well. And SLOs-
Jared Kells:
What's an SLO?
Jordan Simonovski:
Service level objectives. They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that, and know how users are being affected by the changes you're making, you can easily make decisions around whether you continue shipping features or drop everything and work on reliability to make sure your users aren't affected. So it's this very user-centric approach to doing things. In terms of closing the loop, observability gives us the data to say, "Yes, this is how users are being affected. The 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser, or where we've deployed this app to." Let's say you have a global deployment of some kind and you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them," and you can roll it back before you impact 100% of your users.
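For the curious, here is a minimal sketch of what measuring against an SLO can look like in code: count "good" versus total user interactions and see how much error budget remains. The 99.9% target and the numbers are illustrative assumptions.

```typescript
// A minimal sketch of an availability SLO check, assuming we can count
// "good" and "total" user interactions (e.g. successful vs all logins).
interface SloWindow {
  goodEvents: number;  // interactions that met the user expectation
  totalEvents: number; // all interactions in the window
}

const SLO_TARGET = 0.999; // illustrative: 99.9% of logins should succeed

function errorBudgetRemaining({ goodEvents, totalEvents }: SloWindow): number {
  const achieved = totalEvents === 0 ? 1 : goodEvents / totalEvents;
  const budget = 1 - SLO_TARGET; // allowed failure rate
  const burned = 1 - achieved;   // actual failure rate
  return (budget - burned) / budget; // 1 = untouched, <0 = budget blown
}

// e.g. 50 failures out of 100,000 logins burns half of a 0.1% budget:
console.log(errorBudgetRemaining({ goodEvents: 99_950, totalEvents: 100_000 })); // 0.5
```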
Jared Kells:
Yeah. I liked what you said about the test. I forget the acronym, but actually testing the end-user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool: "Oh, it's using 1% CPU like it always is." I don't really care. But can a user open up the app and drag an issue around? It's like-
Jess Belliveau:
Yeah, that's a really great example, right?
Jared Kells:
That's what I really care about.
Jess Belliveau:
The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know. Whereas if you're getting that deeper level of info about the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.
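To make Jess's point concrete: a CPU graph can sit flat while every response is an error. Here's a hedged sketch of an Express middleware that counts responses by status code so error rate is visible on a dashboard. hot-shots is just one common StatsD client, not necessarily what either team here uses, and the metric names are made up.

```typescript
import express from 'express';
import StatsD from 'hot-shots';

const metrics = new StatsD({ prefix: 'myapp.' });
const app = express();

// Count every response by status code, so a dashboard can show
// "% of requests that are 5xx" instead of a CPU graph that never moves.
app.use((req, res, next) => {
  res.on('finish', () => {
    metrics.increment('http.responses', 1, [
      `status:${res.statusCode}`,
      `route:${req.route?.path ?? 'unknown'}`,
    ]);
  });
  next();
});

app.get('/healthz', (_req, res) => res.send('ok'));
app.listen(3000);
```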
Angad Sethi:
How do you do that? The SLOs bit, how do you know a user can log in and drag an issue?
Jordan Simonovski:
Yeah. I think that would come with good instrumenting-
Angad Sethi:
Good question?
Jordan Simonovski:
Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing it, or writing tests for your code as you're writing the code as well. You want to think about how you can instrument something, and how you can understand how this particular feature is working in production. Because I think, as a lot of Agile and DevOps principles are telling us now, we do want our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where you do need to think about observability with a lot of this stuff, and actually measure your success metrics. So if you do know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.
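One way to picture "instrument the success metric alongside the feature", sketched with the OpenTelemetry metrics API (which comes up later in the conversation). The metric names and the persistence function are invented for illustration, and without an SDK configured these calls are harmless no-ops.

```typescript
import { metrics } from '@opentelemetry/api';

// Instrument the feature's success metric as part of building the feature.
// Names here are illustrative, not from the podcast.
const meter = metrics.getMeter('issue-board');
const dragCompleted = meter.createCounter('issue.drag.completed', {
  description: 'User successfully dragged an issue to a new column',
});
const dragFailed = meter.createCounter('issue.drag.failed');

async function onIssueDrop(issueId: string, column: string): Promise<void> {
  try {
    await saveIssuePosition(issueId, column); // hypothetical persistence call
    dragCompleted.add(1, { column });
  } catch (err) {
    dragFailed.add(1, { column });
    throw err;
  }
}

declare function saveIssuePosition(id: string, column: string): Promise<void>;
```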
Jared Kells:
I think that we have to build-
Jordan Simonovski:
Yeah?
Jared Kells:
Oh, sorry Jordan.
Jordan Simonovski:
No, you go.
Jared Kells:
I was just going to say we have to build our apps with integration testing in mind already, doing browser-based tests around new features. So it would be about building features with that same thing in mind, but for testing in production.
Jess Belliveau:
Yeah, and for the actual how, the actual writing-code part, there's this really great project, the OpenTelemetry project, which provides all these APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" or, "How do I emit metrics?", they provide all these helpful libraries and includes that you can use, because the last thing you want to do is roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
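Since OpenTelemetry comes up here, this is roughly what instrumenting one operation with its vendor-agnostic tracing API looks like. Exporter configuration, which decides whether spans end up in Datadog, Honeycomb, or anywhere else, is omitted, and the span and attribute names plus the credentials function are illustrative.

```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

// The tracer comes from the vendor-neutral API package; which backend
// receives the spans is decided by exporter configuration elsewhere,
// not by this instrumentation code.
const tracer = trace.getTracer('login-service');

async function login(username: string, password: string): Promise<boolean> {
  return tracer.startActiveSpan('user.login', async (span) => {
    span.setAttribute('user.name_length', username.length); // avoid logging PII
    try {
      const ok = await checkCredentials(username, password); // hypothetical
      span.setAttribute('login.success', ok);
      return ok;
    } catch (err) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}

declare function checkCredentials(u: string, p: string): Promise<boolean>;
```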
Jess Belliveau:
And then the other thing that comes to mind with something like OpenTelemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability. You would push your metrics and your logs and your traces to them, and they give you a UI to display it all. If you choose something that's vendor agnostic, let's just use the example of Easy Agile: let's say they start with Datadog, and then in six months' time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.
Jordan Simonovski:
I think NorthX.
Jess Belliveau:
Yeah. You can change your endpoint and push the same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.
Jordan Simonovski:
Your data structures remain the same.
Jess Belliveau:
Yeah. So you could almost do it seamlessly, without the developers knowing. There are even companies that, in the past, I think have pushed to multiple vendors. So you could be consuming vendor A, and then you want to do a proof of concept with vendor B to see what the experience is like, and you just push your data there as well.
Jared Kells:
Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.
Jess Belliveau:
Yeah. That's sort of the big upsell, right? It's how you interact. That's where they want to get their hooks in: making it easier for you to interpret that data and manipulate it to meet your needs, and that sort of stuff.
Jordan Simonovski:
Observability suggests dashboards, right?
Jess Belliveau:
Yeah, perhaps. You used this term as well, Jordan: "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. For me, production excellence, or at Apptio we call it production readiness or operational readiness, is: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a really great one, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, and we've all probably been there, we've deployed code to production with no observability, and we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix. It's hard to learn from that, or you have to learn from that, I guess, and protect the user against future stuff, yeah.
Jess Belliveau:
When I think easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging. Which starts with your application actually logging, first of all. Quite important as a baseline starting point. But then you have a structured log format, which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text: a line with a timestamp and an error message, whatever the developer decided to write to standard out, or to the error file, or something like that. Now I think there's a general move to JSON: an actual formatted blob with a known structure, so you can look into it. Tracing's probably not an easy win. That's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff, but it requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
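A quick sketch of the structured-logging shift Jess describes, from a free-form text line to one parseable JSON object per line. pino is used here only as an example of a Node library that does this out of the box; the field names are made up.

```typescript
import pino from 'pino';

// Before: console.log('user 42 failed login at ' + new Date());
// After: one JSON object per line, with a known structure that a
// log pipeline can parse, filter, and aggregate on.
const logger = pino({ name: 'auth-service' });

logger.info(
  { userId: 42, event: 'login.failed', reason: 'bad_password' },
  'login attempt rejected',
);
// => {"level":30,"time":...,"name":"auth-service","userId":42,"event":"login.failed",...}
```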
Jordan Simonovski:
I think at the start you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code gets more complex, or your product gets a bit more complex, you can start adding things in there. But I think actually knowing, or surfacing, the things you know might break: those would probably be your quickest wins.
Jess Belliveau:
Well, let's mention some things for further reading. If you want the whole picture: observability really started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that, so there are some great chapters on it. And O'Reilly has a book just dedicated to observability now, I think.
Jordan Simonovski:
I think that's still in early release, if people want to Google it for the chapters.
Jess Belliveau:
And the OpenTelemetry stuff, we'll drop a link to that. I think that's really handy to know.
Angad Sethi:
From [inaudible 00:26:12], which is my perspective as a developer: say I wanted to introduce observability. We use Datadog at Easy Agile, and I'm not very familiar with it, not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability at my day job, at my workplace?
Jordan Simonovski:
I would lean, and I could be biased here, Jess, correct me or give your opinion on this, but I would lean heavily towards SLOs for this. And you can have a quick read in the SRE-
Jess Belliveau:
What does SLO stand for, Jordan?
Jordan Simonovski:
Okay, sorry. Buzzwords! SLO is a service level objective, not to be confused with a service level agreement. An agreement itself is contractual, and you can pay people money if you do breach those. An SLO is something you set in your team, a target of reliability, because we are getting to the point where we understand that all systems, at any point in time, are in some kind of degraded state. And yeah, reliability isn't necessarily binary. It's not unreliable or reliable. Most of the time, it's mostly reliable, and this gives us a better shared language, I guess. And you can have a read of the SRE handbook by Google, which is free online, which gives you a pretty good understanding of SLOs.
Jordan Simonovski:
Datadog, I think the last time I used it, had an SLO offering. But I think, like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability; you'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring: you're measuring particular types of functionality. And you know that when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user-facing things are we implementing that we can start to measure?"
Jordan Simonovski:
Other things you could probably look at, and I think they're all covered in the book anyway, are things like data freshness. You want to make sure the data users are being shown is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer a ping check, where you're just saying, "Yes, HTTP 200 OK. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this, if you do want to introduce it.
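As a back-of-the-envelope illustration of the SLO mechanics Jordan describes, independent of any particular tool: if the objective is "99% of drag interactions succeed", the remaining 1% is the error budget you can spend before dropping feature work for reliability work. All numbers below are made up.

```typescript
// Hedged sketch: computing SLO compliance and remaining error budget
// from event counts over some window (say, 30 days).

interface SloWindow {
  good: number;  // interactions that met the objective
  total: number; // all interactions in the window
}

const objective = 0.99; // "users can drag an issue 99% of the time"

function sloReport({ good, total }: SloWindow) {
  const actual = total === 0 ? 1 : good / total;
  const allowedBad = total * (1 - objective); // the error budget
  const actualBad = total - good;
  return {
    actual,
    met: actual >= objective,
    budgetRemaining: allowedBad - actualBad, // negative => budget blown
  };
}

// e.g. 100,000 drags in the window, 99,450 of them succeeded:
console.log(sloReport({ good: 99_450, total: 100_000 }));
// => { actual: 0.9945, met: true, budgetRemaining: 450 }
```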
Jared Kells:
We're going to talk a little bit about state, and how it works with some of these very front-end-heavy applications that we're building. The applications we build basically run inside the browser, and the traditional state, as you would think about it, is just calling a very simple API that writes some things into the database, with some authentication and that sort of stuff. So in terms of reliability of the services, it's really reliable. Those tiny APIs just never have problems, because they're just so simple. And they've got plenty of monitoring around them. But all our state is actually... When you say, "Observe the state of the system," for the most part, that's state in a browser. And how do we get observability into that?
Jess Belliveau:
A big thing is really that there's no one-size-fits-all, as well. When we talk about the SLO stuff, it's understanding what is important to, not so much maybe your company, but your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge of front-end stuff, but when we say we want to observe the state, we don't necessarily mean specifically just the state. You could want to understand, for each one of those APIs, when it's firing, what the request-response time is. So that might be an important metric, so you can start to see if one of those APIs is introducing latency and degrading your user experience. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release, and since then we're seeing it's in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something that the team can then look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
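Jess's release-to-release comparison in sketch form: compute a latency percentile from raw samples and compare it across releases. The sample data is invented, and real systems would usually get percentiles from their metrics backend rather than computing them by hand like this.

```typescript
// Compute the p-th percentile latency (ms) from raw request durations.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(
    sorted.length - 1,
    Math.ceil((p / 100) * sorted.length) - 1,
  );
  return sorted[idx];
}

// Compare releases: flag a regression (or open a ticket) when p95 jumps,
// even if no user has complained yet.
const release3 = [120, 135, 140, 150, 160, 400];
const release4 = [150, 170, 180, 210, 260, 900];
console.log('release 3 p95:', percentile(release3, 95)); // 400
console.log('release 4 p95:', percentile(release4, 95)); // 900
```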
Jared Kells:
That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's something in this very complicated front-end application not running in the same order as it used to, or some complex interaction we didn't think of, so it's requesting more data than expected. The APIs themselves are fine. They're never slow, for the most part. But we have performance regressions that we may not know about without seeing them or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long this particular interaction took to happen.
Jess Belliveau:
Yeah. I've never done that sort of side of things. The other thing, I guess, is that you could potentially be impacted by end users' environments as well. You could see-
Jared Kells:
Yeah sure.
Jess Belliveau:
... varying performance because of their laptop or something, or their ISP, or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.
Jordan Simonovski:
Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you'll upload a source map of your minified JS to Sentry, deploy your code, and then if something does break, or works in a fairly unexpected way, that gets surfaced: Sentry will tell you exactly which line this kind of stuff is happening on, and it's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance, or-
Jared Kells:
Yeah, we use a similar tool, and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end. Not the actual state change, but just labels that represent that you updated an issue summary or you clicked this button, that sort of thing, and we send those with our crash reports. And it's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
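What Jared describes, logging action labels rather than full state and attaching them to crash or support reports, can be as small as a ring buffer. A hedged sketch, with all names invented:

```typescript
// A tiny "breadcrumb" buffer: keep the last N action labels in memory
// and attach them to crash/support reports. No third-party service needed.
const MAX_CRUMBS = 50;
const crumbs: string[] = [];

export function recordAction(label: string): void {
  crumbs.push(`${new Date().toISOString()} ${label}`);
  if (crumbs.length > MAX_CRUMBS) crumbs.shift(); // drop the oldest entry
}

export function buildSupportReport(state: unknown) {
  return {
    state,                      // the serialised app state Jared mentions
    recentActions: [...crumbs], // "updated issue summary", "clicked X", ...
  };
}

// Usage, e.g. from a Redux-style middleware or event handlers:
//   recordAction('issue/updateSummary');
//   recordAction('board/dragIssue');
```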
Jess Belliveau:
Yeah, that's almost like a form of tracing, I guess. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're shifting that. It's all stuff in the browser interacting, and you have that history of what the user did and how they've ended up-
Jared Kells:
In that state.
Jess Belliveau:
In that state, yeah.
Jordan Simonovski:
I guess even if you don't have a lot of microservices, like you're saying, for the most part your API requests are fine, but sometimes you have particularly large payloads-
Jared Kells:
We actually have to monitor, I don't know, maybe you can help with this, we actually should be monitoring who we're integrating with. It's actually much more likely that we'll have a performance issue on a Xero API rather than... We don't see it; the browser sees it, which is-
Jordan Simonovski:
Yeah, and tracing does surface all of those regressions for you. Most tracing libraries, like if you're running Node apps or whatever on your backend, and I can just tell you about Node, because I probably have the most experience writing Node stuff, you pretty much just drop dd-trace, which is a Datadog library for tracing, into your backend, and it hooks itself into all of, I think, the common libraries that you'll tend to work with. Like if you're working with Express, or a lot of the HTTP libraries, as well as a few AWS services, it will hook itself into those. And you can actually pinpoint it. It will show you, on this pretty cool service map, exactly which services you're interacting with and where you might be experiencing a regression. And I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
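For reference, the dd-trace drop-in Jordan mentions really is about this small. The operational detail worth knowing is that the tracer has to be initialised before the libraries it patches are imported; the service and env values below are illustrative, not from the podcast.

```typescript
// tracer.ts: import this module first thing in your entrypoint so
// dd-trace can instrument Express, http, AWS SDK clients, etc.
// before they are loaded.
import tracer from 'dd-trace';

tracer.init({
  service: 'my-backend', // illustrative service name
  env: 'production',     // illustrative environment tag
});

export default tracer;
```

Then the entrypoint starts with `import './tracer';` before any other imports.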
Jess Belliveau:
It's funny. This is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third-party providers as well. Something I think is really important, and sometimes gets missed, is that so many of us today are relying on third-party providers. AWS is a huge one; a lot of people are writing apps that require AWS services. And I think a lot of the time, people just assume AWS, or Jira, or whatever, is 100% uptime, always available, and they don't write their code in a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API without implementing exponential backoff. So they try to hit the AWS API, it fails, or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience and have a retry mechanism automatically built in, and that sort of stuff. It doesn't really tie into the observability thing, but it's something.
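Jess's exponential-backoff point in sketch form: retry a throttled or flaky upstream call with growing, jittered delays instead of failing the user on the first error. The attempt counts and delays are illustrative defaults, not prescriptions, and a production version would usually only retry errors known to be transient.

```typescript
// Retry a flaky upstream call (e.g. a throttled AWS API) with
// exponential backoff plus jitter, instead of surfacing the first
// 429/500 straight to the user.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // budget exhausted, surface it
      // Double the delay each attempt, with random jitter to avoid
      // synchronized retry stampedes.
      const delay = baseDelayMs * 2 ** (attempt - 1) * (0.5 + Math.random());
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: const queues = await withBackoff(() => sqs.listQueues({}));
```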
Jared Kells:
And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.
Jess Belliveau:
Well, they're using your app. Exactly right. It reflects on you, sort of thing, so it's in your interest to guard against an upstream failure, or at least inform the user when that's the case. Yeah.
Jared Kells:
Well, I think we're going to have to call it there, this podcast, because it's been an hour. We were instructed: max 45 minutes.
Jess Belliveau:
We could just keep going. We might need a part two! Maybe we can request [crosstalk 00:39:21].
Jared Kells:
Maybe! Yeah.
Jess Belliveau:
Or we'll just start our own podcast! Yeah.
Jared Kells:
So what were your biggest learnings today? Given that Angad and I are just learning about observability: Angad, what was your biggest learning today?
Angad Sethi:
My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the known unknowns. I don't know if that's a good takeaway, but...
Jess Belliveau:
Any takeaway is a good takeaway! What about you, Jared?
Jared Kells:
I think, because we were going to talk about state management, and part of it was how we have this ability at the moment, with the way our front ends are architected, to capture the state of the app and get a customer to send us their state, basically. And we can load it into our app and see exactly how it was, just because of the way our state's designed. But what might be even cooler is to build some observability into that front end for support. I'm thinking, instead of just having this button that says "send us your support information" and sends us a bunch of the state, and instead of console logging to the browser log, we could be logging somewhere in our front end, so that when they click "send support information," our customers are also sending us the actions that they performed.
Jared Kells:
Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.
Jess Belliveau:
Yeah, for sure. It'd probably be a lot less intrusive, as well, than some of the third-party stuff that I've seen around.
Jared Kells:
Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.
Jess Belliveau:
Yeah
Jared Kells:
You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.
Jess Belliveau:
Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...
Jared Kells:
Thanks all!
Jess Belliveau:
Thanks for having us, yeah.
Jared Kells:
Thanks all for your time, and thanks everybody for listening.
Jordan Simonovski:
Thanks everyone.
Angad Sethi:
That was [inaudible 00:41:55].
Jess Belliveau:
Tune in next week!