Easy Agile Podcast Ep.33 How to Align Teams Through Strategic Goal Setting

In this episode, we dive deep into the challenges of aligning teams with strategic goals across organisations of all sizes. From fast-growing startups to large enterprises, teams everywhere struggle with the same fundamental issue: translating high-level objectives into actionable work that drives real value.

Our guest Andreas Wengenmayer, Practice Lead for Enterprise Strategy and Planning at catworkx (the #2 Atlassian partner worldwide and #1 in EMEA), shares his 11 years of experience helping organisations bridge the gap between strategic vision and team execution.

Want to see these concepts in action? Andreas and Hayley hosted an interactive webinar where they demonstrated practical techniques for strategic goal alignment using Easy Agile Programs. Watch the recording here→

About Our Guest

Andreas Wengenmayer is the Practice Lead for Enterprise Strategy and Planning at catworkx, one of the leading Atlassian Platinum Solution Partners globally and the #1 in EMEA. With over a decade of hands-on experience helping enterprise teams scale agile effectively, Andreas specialises in bridging the gap between strategy and execution. His work focuses on guiding organisations through complex transformation programs, optimising portfolio planning practices, and helping teams adopt frameworks like SAFe with clarity and purpose. Known for blending pragmatic insight with systems thinking, Andreas brings stories from the field - ranging from fast-moving startups to complex, multinational enterprises.

Transcript

Note: This transcript has been lightly edited for clarity, readability, and flow while preserving the authentic conversation and insights shared.

Recognising the signs - when teams aren't aligned

Hayley Rodd: Awesome to have you here. So I'm going to start with a bit of a reality check. We've worked in organisations across the spectrum from really fast-growing startups to really big enterprises. From your experience, when you walk into a PI planning or quarterly planning session, and I'm sure they're pretty hectic, what are the telltale signs that teams aren't truly aligned on what success looks like?

Andreas Wengenmayer: That's a great question - one I hear frequently. You can imagine the scene, especially post-COVID, when teams returned to in-person planning sessions. Back in 2017, we'd have huge arenas with hundreds of people in one place. People are happy to see each other, excited to chat with colleagues from different locations. This became even more pronounced after COVID, when everyone had been working from home more frequently. That's a good sign - the mood is positive.

But you also notice some teams under pressure. They'd rather be working on actual deliverables. They know they have to be there, and it takes two full days to complete all the planning. Meanwhile, they're carrying a mental backlog - technical debt, unfinished work from the previous PI, catching up on delayed items.

This is what I often observe: teams get bogged down discussing minor details. People debate specifics, and you can see they're frustrated about something deeper - but they're not addressing the root cause. This creates its own negative momentum and can derail otherwise solid planning sessions.

Sometimes you have to step in and ask what's really underneath. What's the actual cause? People say, "Yeah, I have to be here because that's the format, but I'm not engaged." Maybe it didn't work well in the past and there's lingering skepticism.

The prevailing attitude then becomes: "This isn't really collaborative. Leadership plans from the top anyway. The outcomes are predetermined - here's the plan, just make it work and update your boards." When people feel they can't meaningfully contribute or influence direction, they simply go through the motions.

My favourite example happens at the end when teams must formulate their objectives. It becomes a box-checking exercise - create something that satisfies the coach or Release Train Engineer so everyone can "get back to real work."

What good alignment actually looks like

Hayley Rodd: You've touched on so many things there. I can imagine walking into that room and feeling the pressure. People getting caught up in minor details rather than talking about root causes, or not asking the five whys to get to that root cause. You also touched on getting buy-in across the organisation. When people are really nailing it, when alignment is really there, what does that room feel like?

Andreas Wengenmayer: Yes, I've fortunately experienced those environments, and they're actually more common than you might think. When companies genuinely commit to grassroots planning, truly investing the time it requires, and ensure teams aren't overwhelmed from the start with everything marked "priority zero," you create the foundation for successful planning.

You can see it immediately in people's body language and interactions. The energy in the room is palpable. If people appear resigned or intimidated, afraid to speak up, that's typically a red flag. The opposite creates magic.

Think about high-performing teams, like being a Scrum Master with an exceptional group. The best teams aren't just collections of highly skilled individuals in specific roles.

The best teams are those who communicate openly, genuinely enjoy each other's company, maintain positive energy, and actively support one another. This dynamic enables remarkable achievements. Maybe someone knows a contact in another tribe, release train, or department who can provide crucial answers and facilitate communication. Communication is absolutely fundamental.

That collaborative spirit is the hallmark of truly effective teams.

Hayley Rodd: Absolutely. We would know it in our day-to-day work, right? If your teams aren't communicating, if they're too overburdened as you said, it's not a good place to start. But if you can get that starting point right, if you can get that communication right, so many things will flow after that.

Andreas Wengenmayer: Absolutely. Looking back at any planning cycle, the real test is: did you plan the right things? You only know at the quarter's end whether you estimated capacity accurately.

Here's the crucial question: How does your organisation respond when goals aren't met? Do stakeholders focus on finding solutions? Do team members feel safe asking probing questions and seeking answers? Or does the blame game begin, searching for scapegoats?

When you're permitted - encouraged, even - to be genuinely open and honest, you become much better at assessing realistic capacity. What makes stakeholders universally happy is predictability. You want confidence that your plans will actually materialise, that your commitments will be fulfilled.

Success breeds success, creating a positive foundation for the next PI. It's a continuous cycle that can spiral upward toward excellence or downward toward dysfunction.

The startup vs. enterprise spectrum

Hayley Rodd: Let's talk about the two ends of the spectrum. You've got a lot of experience, so I love hearing about this. Small companies will often say, "We're agile, we can pivot quickly, we don't need formal goal setting." Then enterprises are going all out on OKRs, cascading objectives, saying they're aligned because they've got those things in place. Yet both struggle with the same core problem. What's really going on?

Andreas Wengenmayer: You're absolutely right. I've been in agile projects since 2014, 11 years now, and I've seen a lot of companies pre-COVID, post-COVID, different sizes.

Starting with the really small ones - what's really astonishing is that some very small startup companies tend to become overly complex. Some want solutions that are way too overblown. Basically, they need a sailing boat, but they're thinking about ordering an aircraft carrier.

Maybe that's part of startup CEO culture - where everyone's a CEO on LinkedIn, and they think, "We're corporate, we have to be like that." They mostly come to their senses in the end, but small companies tend to be overly complex and overblown when it comes to technology, tooling, and organisation.

On the other end, large corporations sometimes seem to try their best to become truly agile - living the values everywhere. Still, it's a challenge. In most cases, there's some kind of hybrid planning going on. There's still a roadmap, which is good, but at some level, some people still stick to classical approaches, have some waterfall going on in the back.

I personally have never seen, for example, a full SAFe organisation where it's done truly at every level. There's a good balance and it should be healthy, but it all comes down to execution.

I feel like mid-sized companies are often the healthiest when it comes to that.

There's a balance of method and tooling, but you still need a solid understanding of goal setting and tracking. This includes pivoting when goals aren't right and learning from how you did things in the past. The gap between management and teams isn't that huge, and it's easier to bridge.

Avoiding death by KPI

Hayley Rodd: You've touched on so many fundamental things around getting the method and tooling right, but also that cultural aspect. I love the insight around mid-size organisations often striking that balance well. When we're thinking about the enterprise risk - which could be "death by KPI" or OKR - do you agree? Can you paint a picture of what that looks like and how it actually makes teams less focused?

Andreas Wengenmayer: Absolutely. There is such a thing as "death by KPI." KPIs are important to get a clear picture - you do need metrics, and there's merit to it. But as always, it's about choosing the right KPIs, the right metrics.

My favourite example is comparing story points across teams or ARTs. You measure velocity, and I have to repeat again and again: it's only meaningful for one individual team. You shouldn't compare it to another team or across tribes or ARTs - that doesn't work because you're creating the wrong incentives.

You see what will happen: "Well, okay, my stakeholders want higher amounts of story points. Let's estimate the stories bigger." Of course, that's a continuous loop, but it doesn't give you anything. Story points as a metric are just guidance for a team to get a better feeling for estimations.

You want predictability - you want to meet a certain range. So it's not a great KPI when it comes to monitoring progress across teams. There are better KPIs for that.

Other metrics tend to create what I call bureaucracy. If you spend too much time creating reports, you have less time to create anything of value.

Hayley Rodd: I think there's so much in what you're saying about people being realistic and honest, open to pivoting or changing a goal if it's not the right one. Admitting to that is really difficult because no one wants to admit that what they set out to do is failing. But understanding that failure can sometimes be a benefit - you can learn from that. There's so much in that cultural aspect, right?

Andreas Wengenmayer: Absolutely. Coming back to goals rather than KPIs - KPIs are like being on a boat in your control room. You see what the engine is doing, the temperature - those are KPIs. Goals, on the other hand, are the course that you set.

You could be a small company like a startup - you're in a canoe, you're rowing. Or you're a large company - you're like a big freighter. Still, if you don't set the right course, the right goal, you will never reach your destination. Your team can be as proficient and well-functioning as possible. If the course isn't right, hopefully you have enough provisions on board to survive a long journey.

Where organisations get stuck in goal setting

Hayley Rodd: Where do organisations typically get stuck? Is it defining the goals, communicating the goals, or translating them into action - that execution point you made before?

Andreas Wengenmayer: It could be basically any one of those. If you have a smaller or mid-size company, it's easier to communicate - you don't have to bridge as huge a gap. But still, you have high-level goals that have to be translated into real work. Real value is created in the teams.

If you have a high-level goal that's highly abstract and sounds good on paper - "increase customer satisfaction," "create better products," "make the world a better place" - people still have to understand: What does that mean to my daily work? If I'm a developer, what's my stake in that? How can I contribute?

That's when communication and breaking down goals becomes really important. Breaking them down the right way, having them visible and transparent, and creating that feeling of contribution. You make it visible that you're not just working for yourself or your team, but you're really contributing. You understand what you're working on and why you're doing it. Purpose is critical.

Bridging the strategy-to-sprint gap

Hayley Rodd: That's a really good segue into the next question about translating strategic vision into team-level objectives that people can grab onto and execute. Leadership will often say something like "increase customer satisfaction," and teams are left going, "What does that mean for me in my sprint this week?" How does an organisation bridge that gap between the high-level leadership view and what we can do in our sprints as a team?

Andreas Wengenmayer: First of all, you as company management need to take the time. There have been, and still are, a lot of approaches with company values, putting posters on walls, creating marketing. Those are all values - that's what a company is like. Then you link it with your products, services - great services, customer satisfaction. Okay. Then comes the real challenge: we want to succeed and create the next service, software solution, or product.

The goal is clear on a high level, but how do we break it down? That's when the real work comes into play - breaking down the goals into smaller pieces.

It's like building a LEGO space station, like when I was a kid. You have the picture on the box in the beginning - 'Oh, that's what we're going to build.' Then you have to start putting together all the small pieces. You need a plan, you need the little pictures of the steps. You start with the big picture, then you're breaking it down one piece at a time. You create different parts, and they come together at the end. Same goes for goals.

Hayley Rodd: Nice. A colleague of mine often says you eat an elephant one bite at a time - similar thing, right? When you see that big goal, it's really overwhelming. But if you can break it down into those chunks and smaller pieces, it becomes so much more manageable and achievable. People can get behind that vision.

Managing moving targets

Hayley Rodd: In fast-moving environments, goals often shift. We're agile, we're always moving. How do you help teams stay connected to a moving target without either ignoring changes or constantly thrashing around?

Andreas Wengenmayer: Back in the nineties and early 2000s, there was a computer game that wasted a lot of office time, where you were shooting at geese in the Scottish Highlands. It was a big phenomenon because people were trying to get the next high score.

If you think of moving targets, it's a bit like that. Imagine you're doing your work - whether you're a hunter or developer doesn't matter - but you approach, you take aim, and the geese keep flying up. You miss the target. Same thing if you have moving goals.

It's harder to aim and approach them right. What you should avoid as a company or someone in charge is constant interference. Stick to your goals or objectives that were agreed upon during PI planning. Don't change them midterm during a PI.

That doesn't mean you can't learn from mistakes or wrong goals. You can adjust them, but you have to adjust them in the right place and time, which is during planning. Of course, if something security-related comes up, you have to act, but it has to be agreed upon, and you still have to communicate it and create understanding.

Keeping goals visible and actionable

Hayley Rodd: Even when goals are well-defined, keeping them visible and actionable throughout a PI is tough. What practices or tools have you found most effective for maintaining connection between daily work and high-level strategic objectives?

Andreas Wengenmayer: Good question. Having the goals present at all times helps a lot. If you just meet for planning, have your goals set, and never look back during the PI, it doesn't do you any good.

That could be a piece of paper on the wall like we had back in the day - and still could be if you're working in the office. Also, choose the right tools to track the goals and create acceptance for tools. Really use them. Look into them - whether it's an OKR tool or some other solution, even PI objectives. Are we still on track?

What really helps is if it's not static but shows progress, and especially shows the link of what you're contributing - like what you achieved in your last sprint and how it plays into the objectives or goals, progress moving ahead. There's always a good feeling - everybody loves a green bar moving ahead or a checklist.

It helps keep the vision and goals present.

Hayley Rodd: When I was a teenager in my final year of high school here in Australia, I wanted a specific score on my final exams. I had a big poster in front of my desk that I sat at for hours every day studying. Looking back, I didn't know what I was doing - I just wanted to visualise my goal, and I didn't know the psychology behind it. But I'm happy to report I got that mark and above.

I think it was as you were saying - that constant reminder, that piece of paper worked for me. In organisations, we're looking for something a bit more complex sometimes, but I like your "keep it simple" advice. It doesn't always have to be super complex. It can just be a checklist, progress bar, or piece of paper - something that helps you feel connected to the goal and reminds you of it often.

When good work doesn't align with goals

Hayley Rodd: Have you seen situations where teams were delivering lots of work - good work, but it wasn't clearly contributing to company goals? What tends to cause that disconnect?

Andreas Wengenmayer: Yeah, that happens quite a bit. I can think of one example with very technical teams, like in semiconductors. Very smart people - everyone has a PhD in physics or materials science. Smart people who tend to love their job. They're great, but they're also perfectionists who can always improve things and want to make them even better.

If you're in the business of producing machines used to produce semiconductors, for example, it's a complex task with a complex supply chain or value chain. You're creating lithography machines to create wafers used by other companies, and in the end, you have a customer planning the release of a new phone.

Your customer waits, the end customer waits, and you have to deliver on time. Sometimes this creates a challenge because teams still want to improve and make it even better. That's when economics come into play - the view of the big picture. You still have to communicate it. You shouldn't discourage such a great team, but you need to get the bigger perspective back to the teams and create acceptance instead of saying, "Hey, stop what you're doing, it's good enough." You don't want that. It all comes back to transparency and communication.

On the other spectrum, what you sometimes have is just too much workload on teams. Time for planning gets cut short, and if you don't take enough time to plan well, no wonder the results don't work out. It's just a lot of busy work - a lot of things getting done, but not necessarily the right things at the right time.

Hayley Rodd: If you don't do that planning at the start, you're setting yourself up for misalignments. If you're not communicating that plan regularly, you're setting yourself up for that busy work and people getting distracted. It's just so common. That planning part is so fundamental to getting it right.

One piece of advice for frustrated leaders

Hayley Rodd: We're on the home stretch now. If you could give one piece of advice to an engineering or product leader who's been frustrated because their teams seem to be going through the motions of PI planning or quarterly planning without real buy-in, what would it be?

Andreas Wengenmayer: I can resonate with that so well, and many can. I'd say: take the time to find out what's really going on. Investigate the root cause. It's like if you have a house and you're trying to fix a crack in the wall - you can look at the crack and do some superficial fixing or use a thick layer of paint, but you still have to find out what's causing that issue. Maybe something deeper.

You mentioned the five whys - that can be one way, but you have to have some understanding of the right way to approach people. You don't want to put anyone on the spot. Looking for a scapegoat doesn't help anybody.

We need to look at what's behind it, what's causing it. It all comes back to investing enough time for planning, but doing it with purpose. Not doing the whole planning like theatre, where everybody acts their part - that doesn't do you any good.

People have to understand why they're doing it. There has to be purpose and understanding - enough time, no distractions, and a positive atmosphere where everybody can contribute and be open.

You don't want people saying nothing because they don't dare to criticise or say no.

The connection between goal clarity and team motivation

Hayley Rodd: What's one thing you wish more organisations understood about the connection between goal clarity and team motivation?

Andreas Wengenmayer: We could get back to the boats we mentioned before. You want to arrive at your destination. If you're not clear about the destination, or maybe some people in your rowing boat don't want to go there, they might not join in the rowing. If your crew is not invested, it will take you longer to reach the destination, or you won't get there at all.

It's the same thing. Motivation is key, and I don't mean superficial motivation that just annoys everybody. Motivation is a positive environment where people rely on each other. They really like spending time with those people.

"Hey, I really like to go to lunch with you and talk to you" - not "I'd rather be home and not talk to anybody." You're not annoyed if your teammate asks you a question; you're happy to help. You're feeling safe that when you have a problem or question, you will get help.

That creates the right kind of motivation - that positive environment, and that can make a lot of things happen. It comes back to openness and transparency, not as buzzwords, but to get the clear picture. As a stakeholder, you get the correct current state because you get true answers.

I've seen strange situations in major corporations where people really didn't report what they were working on or show the right results. I've seen complete shadow Jira environments - one for internal use and one for external use with customers. There can be huge misalignments because people didn't dare to show real progress. In the long term, it will backfire. If you don't have trust in your environment, in your company, you will have a hard time.

Wrapping up

Hayley Rodd: There are so many key themes coming up throughout our conversation. You've talked about ongoing communication across teams, really planning with purpose, getting that context and buy-in to help with motivation, and allowing for radical candour - being really open if something's not working and being okay to call it out. So many cultural and communication elements are critical to the success of quarterly planning, PI planning, and organisations generally. Great takeaways.

We're going to end it there, but I want to end with a teaser for our interactive webinar that you and I are doing together on September 4th, which dives deeper and shows how to operationalise the ideas we've chatted about here using Easy Agile Programs and linking back to the fundamental services that catworkx provides organisations.

Andreas, it's been super wonderful to chat with you. I look forward to our webinar coming up on September 4th.

Andreas Wengenmayer: Thank you so much for having me. Looking forward to September 4th and seeing you again, talking more about tooling, boats, duck hunt, and anything in between.

Ready to transform your strategic planning?

The conversation doesn't end here. Andreas and Hayley hosted an interactive webinar where they showed how you can put these strategic alignment concepts into practice.

They spoke about:

  • Practical techniques for breaking down strategic goals into actionable team objectives
  • How to maintain goal visibility throughout your PI cycles
  • Real-world examples of successful alignment transformations

Watch the webinar recording here →

Related Episodes

  • Podcast

    Easy Agile Podcast Ep.12 Observations on Observability

    On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.  

    Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.

    💥 What is observability?
    💥 How can you improve observability?
    💥 What's the end goal?

    Angad Sethi

    "This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""

    Be sure to subscribe, enjoy the episode 🎧

    Transcript

    Jared Kells:

    Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to Elders past, present and emerging, and extend that same respect to any Aboriginal people listening with us today.

    Jared Kells:

    So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting. We've got a couple of primarily front end engineers - Angad and I are going to share some front end technical stuff - and Jess and Jordan are going to be talking a bit about observability. So we'll start by introductions. So I'll pass it over to Jess.

    Jess Belliveau:

    Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?

    Jordan Simonovski:

    I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.

    Angad Sethi:

    Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.

    Jared Kells:

    Nothing fancy!

    Jess Belliveau:

    Don't sell yourself short.

    Jared Kells:

    Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?

    Jess Belliveau:

    Yeah, yeah. That's it, we'll wrap up!

    Jared Kells:

    Podcast over! Tell us about observability.

    Jess Belliveau:

    Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this and sort of for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience that maybe people don't know a lot about. And there's a lot of movements in the broad IT scope, I guess, that you could talk about. There's so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about and it's something that you can give an introduction to without getting too technical. We don't want to get too far into the weeds - this is something you can go really deep on - so we picked it as something that hopefully we can explain to you both at a level that might interest the people at home listening as well.

    Jess Belliveau:

    Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.

    Jordan Simonovski:

    Okay!

    Jess Belliveau:

    So we thought we'd try and describe to you, first of all, what is observability. Because the term doesn't give you much of what it is. It gives you a little hint, but it'll be good to baseline what we're talking about when we say observability. And then why would a development team want observability? Why would a company want observability? Sort of high level, what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry hot buzz words and commit to stuff that you might not need, or that sort of stuff.

    Jared Kells:

    Yep.

    Jordan Simonovski:

    Yep.

    Jess Belliveau:

    We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then we just thought, because we're not going to try and get too deep, we could just give a few pointers to some websites and some YouTube talks for further reading that people want to do, and go from there. So yeah, Jordan you want to-

    Jared Kells:

    Sounds good.

    Jess Belliveau:

    Yeah. I hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, that's something we should, if there's stuff that you think we don't cover or that you want to know more, ask away.

    Jordan Simonovski:

    I guess to start with observability, it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop or close a feedback loop on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to maybe differentiate itself from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that. And I think they serve their own purpose and they're not in any way obsolete either. Things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzz words.

    Jordan Simonovski:

    But even for things like a traditional kind of monolith, observability really serves to help us ask new questions from our systems. So the way it tends to get explained is: monitoring exists for our known unknowns. With seniority comes the ability to predict, almost, in what way your systems will fail. So you'll know. The longer you're in the industry, you know this, like a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.

    Jared Kells:

    I was going to say that!

    Jordan Simonovski:

    I'll try not to get too much into-

    Jared Kells:

    Runs out of memory!

    Jordan Simonovski:

    Yeah. So that's something that you're expecting to fail at some point. And that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about is that these are the unknown unknowns of our system, that we want to find out about and ask new questions from. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? You want me to go any further into detail about this stuff? I can talk all day about this.

    Jared Kells:

    Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Is it kind of like, traditionally with a Java app, I might log memory usage because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-

    Jordan Simonovski:

    Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually submitting some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone - all of these additional attributes, and so on. You can throw user agents in there if you want to. The idea of observability is that you're not necessarily constrained by high cardinality data. High cardinality data being data sets that can change quite largely, in terms of the kinds of data they represent, or the combinations of data sets that you could have.

    Jordan Simonovski:

    So if you want to ship metrics on something on a per user basis and you want to look at how different users are affected by things, that would be considered a high cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now being considered new metrics. Whereas with observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool. We have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that users from a particular time zone or a particular device would only be experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things. Or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
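
    For readers who want to see what "adding context to your data" can look like in practice, here is a minimal sketch using the OpenTelemetry JavaScript API (@opentelemetry/api), which comes up later in the conversation. The span name, attributes and endpoint are illustrative assumptions, not something from the episode, and a tracer provider is assumed to be configured elsewhere.

    ```typescript
    import { trace, SpanStatusCode } from '@opentelemetry/api';

    // Assumes a TracerProvider has been registered elsewhere in the app.
    const tracer = trace.getTracer('issue-board-frontend'); // hypothetical name

    async function saveIssue(userId: string, timezone: string): Promise<void> {
      const span = tracer.startSpan('save-issue');
      // High-cardinality context: cheap on an event/span, expensive as metric tags.
      span.setAttribute('user.id', userId);
      span.setAttribute('user.timezone', timezone);
      span.setAttribute('app.release', '1.42.0'); // hypothetical release tag
      try {
        await fetch('/api/issues', { method: 'POST' }); // hypothetical endpoint
        span.setStatus({ code: SpanStatusCode.OK });
      } catch (err) {
        span.recordException(err as Error);
        span.setStatus({ code: SpanStatusCode.ERROR });
        throw err;
      } finally {
        span.end();
      }
    }
    ```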

    Angad Sethi:

    Would it be fair to say-

    Jared Kells:

    Yeah. It's [crosstalk 00:11:02].

    Angad Sethi:

    Oh, sorry Jared.

    Jared Kells:

    No you can-

    Angad Sethi:

    Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?

    Jordan Simonovski:

    Yeah.

    Angad Sethi:

    Oh.

    Jess Belliveau:

    And better equip you to find them. One of the things I find is a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words - it being a set of principles or best practices. It's sort of giving you some guidance around things like having good logging coming out of your application. So structured logs - you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about - giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. The good thing with metrics is we're turning things around a bit: instead of doing - and I don't want to get too technical - black box monitoring, where we're on the outside trying to peer in with probes and checks, the idea with metrics is that the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
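
    As a concrete illustration of "the application emitting metrics" rather than being probed from outside, here is a minimal sketch with the OpenTelemetry metrics API. The meter and counter names are assumptions, and a MeterProvider is assumed to be configured elsewhere in the app.

    ```typescript
    import { metrics } from '@opentelemetry/api';

    // Assumes a MeterProvider has been registered elsewhere in the app.
    const meter = metrics.getMeter('issue-board-frontend'); // hypothetical name

    const loginFailures = meter.createCounter('login.failures', {
      description: 'Failed login attempts, reported by the application itself',
    });

    function onLoginFailed(reason: string): void {
      // White-box: the app states what happened, instead of an external probe guessing.
      loginFailures.add(1, { reason });
    }
    ```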

    Jess Belliveau:

    Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into this next point of why would a team want to implement it. If you want to start again, Jordan?

    Jordan Simonovski:

    Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.

    Jess Belliveau:

    Oh, I didn't sign up for that!

    Jordan Simonovski:

    I think why teams would want it - it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in house. Observability capabilities themselves - you won't achieve them just by buying a thing, like you can't buy dev ops, you can't buy Agile, you can't buy observability either.

    Jared Kells:

    Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-

    Jess Belliveau:

    Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.

    Jared Kells:

    Yeah, sorry, sorry, yeah! Go on.

    Jordan Simonovski:

    You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start ups like LightStep and Honeycomb, which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time. And to build good operational hygiene and good production excellence, I guess as Liz Fong-Jones would put it, you need to be able to close that feedback loop. We have a whole bunch of tools already. So we have CICD systems in place. We have feature flags now, which help us, I guess, decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PM's now if you want to, with feature flags, which is great.

    Jordan Simonovski:

    But what you can also do now is completely close this loop, and as you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for beta releases or something of our application," and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLO's in place as well. And SLO's-

    Jared Kells:

    What's an SLO?

    Jordan Simonovski:

    They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features or if you drop everything and work on reliability to make sure your users aren't affected. So it's this very user centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. This is how, I guess the 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool. Users with this particular browser or this particular, or where we've deployed this app to," let's say if you have a global deployment of some kind, you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them." And you can roll it back before you impact 100% of your users.
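
    A minimal sketch of the canary pattern Jordan describes - the new code is already deployed, and a flag controls who sees it. The FlagClient interface and flag name here are hypothetical stand-ins for whatever feature flag service you use.

    ```typescript
    interface User {
      id: string;
      betaOptIn: boolean;
    }

    interface FlagClient {
      // Hypothetical: returns true for the canary cohort (e.g. 10% of beta opt-ins).
      isEnabled(flag: string, user: User): boolean;
    }

    const renderNewBoard = (): string => '<NewBoard/>'; // placeholder components
    const renderOldBoard = (): string => '<OldBoard/>';

    function renderBoard(flags: FlagClient, user: User): string {
      // The new board is deployed for everyone, but only released to the canary cohort.
      return flags.isEnabled('new-board', user) ? renderNewBoard() : renderOldBoard();
    }
    ```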

    Jared Kells:

    Yeah. I liked what you said about the test. I forgot the acronym, but actually testing the end user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is, now I don't really care," but can a user open up the app and drag an issue around? It's like-

    Jess Belliveau:

    Yeah, that's a really great example, right?

    Jared Kells:

    That's what I really care about.

    Jess Belliveau:

    The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know. Whereas if you're getting that deeper level info of the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.

    Angad Sethi:

    How do you do that? The SLO's bit, how do you know a user can log in and drag an issue?

    Jordan Simonovski:

    Yeah. I think that would come with good instrumenting-

    Angad Sethi:

    Good question?

    Jordan Simonovski:

    Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing, or writing tests for your code as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think, as a lot of Agile and dev ops principles are telling us now, we do own our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measuring your success metrics. So if you do know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.
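
    A minimal sketch of measuring the user-facing behaviour itself ("can the user drag an issue?") rather than CPU or memory. The event shape and the /telemetry endpoint are assumptions for illustration only.

    ```typescript
    type InteractionEvent = {
      interaction: string;
      ok: boolean;
      durationMs: number;
      release: string; // hypothetical build-time version tag
    };

    function reportInteraction(event: InteractionEvent): void {
      // sendBeacon keeps reporting off the critical path and survives page unloads.
      navigator.sendBeacon('/telemetry', JSON.stringify(event));
    }

    async function dragIssue(moveIssue: () => Promise<void>): Promise<void> {
      const started = performance.now();
      try {
        await moveIssue();
        reportInteraction({ interaction: 'drag-issue', ok: true, durationMs: performance.now() - started, release: '1.42.0' });
      } catch (err) {
        reportInteraction({ interaction: 'drag-issue', ok: false, durationMs: performance.now() - started, release: '1.42.0' });
        throw err;
      }
    }
    ```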

    Jared Kells:

    I think that we have to build-

    Jordan Simonovski:

    Yeah?

    Jared Kells:

    Oh, sorry Jordan.

    Jordan Simonovski:

    No, you go.

    Jared Kells:

    I was just going to say we have to build our apps with integration testing in mind already. So doing browser based tests around new features. So it would be about building features with that same thing in mind, but for testing in production.

    Jess Belliveau:

    Yeah and the actual how, the actual writing code part, there's this really great project, the OpenTelemetry project, which provides all these sorts of APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" Or, "How do I emit metrics?" They provide all these helpful libraries and includes that you can have, because the last thing you want to do is have to roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."

    Jess Belliveau:

    And then the other thing that comes to mind with something like OpenTelemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability. And you would push your metrics and your logs and your traces to them and they give you a UI to display them. If you choose something that's vendor agnostic, let's just use the example of Easy Agile. Let's say they start with Datadog and then in six months time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.

    Jordan Simonovski:

    I think NorthX.

    Jess Belliveau:

    Yeah. You can change your end point, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.
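
    A minimal sketch of the "change your endpoint" idea with the OpenTelemetry JavaScript SDK: spans go out over OTLP, and switching vendors is largely a configuration change rather than a code change. Package names and setup follow the 1.x SDK, the URL is a placeholder, and any vendor-specific headers are left out - check your vendor's docs for the exact details.

    ```typescript
    import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
    import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
    import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

    // Point this at vendor A today and vendor B (or your own collector) tomorrow -
    // the instrumentation in your application code stays the same.
    const exporter = new OTLPTraceExporter({
      url: 'https://collector.example.com/v1/traces', // placeholder endpoint
    });

    const provider = new WebTracerProvider();
    provider.addSpanProcessor(new BatchSpanProcessor(exporter));
    provider.register();
    ```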

    Jordan Simonovski:

    Your data structures remain the same.

    Jess Belliveau:

    Yeah. So that you could almost do it seamlessly without the developers knowing. There's even companies in the past that I think have pushed to multiple vendors. So you could be consuming vendor A and then you want to do a proof of concept with vendor B to see what the experience is like and you just push your data there as well.

    Jared Kells:

    Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.

    Jess Belliveau:

    Yeah. That's sort of the big up sell, right. It's how you interact. That's where they want to get their hooks in, is making it easier for you to interpret that data and manipulate it to meet your needs and that sort of stuff.

    Jordan Simonovski:

    Observability suggests dashboards, right?

    Jess Belliveau:

    Yeah, perhaps. You used this term as well, Jordan, "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. And for me, production excellence - or at Apptio we call it production readiness, operational readiness, that sort of stuff - is: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a real great idea, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, we've all probably been there, we've deployed code to production and we have no observability, we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix, and it's hard to learn from that - or you have to learn from that, I guess, and protect the user against future stuff, yeah.

    Jess Belliveau:

    When I think easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging, which really starts with the idea that your application is logging, first of all. Quite important as a baseline starting point, but then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line, with a timestamp, an error message - whatever the developer decided to write to standard out, or to the error file or something like that. Now I think there's a general move to having JSON, an actual formatted blob with that known structure so you can look into it. Tracing's probably not an easy win. That's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff. Requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
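
    A minimal sketch of the structured logging Jess describes - one JSON object per line with a fixed set of base fields, instead of free-form text. The field names here are a common convention, not something prescribed in the episode.

    ```typescript
    type LogLevel = 'info' | 'warn' | 'error';

    function logEvent(level: LogLevel, message: string, context: Record<string, unknown> = {}): void {
      // One parseable JSON object per line, always with the same base fields.
      console.log(JSON.stringify({
        timestamp: new Date().toISOString(),
        level,
        message,
        ...context, // e.g. requestId, userId, release
      }));
    }

    // "user 42 failed to log in" becomes machine-parseable:
    logEvent('error', 'login failed', { userId: '42', reason: 'invalid_password' });
    // => {"timestamp":"2024-01-01T00:00:00.000Z","level":"error","message":"login failed","userId":"42","reason":"invalid_password"}
    ```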

    Jordan Simonovski:

    I think at the start, you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code might get more complex or your product gets a bit more complex, you can start adding things in there. But I think actually knowing or surfacing the things you know might break - those would probably be your quickest wins.

    Jess Belliveau:

    Well, let's mention some things for further reading. If you want to get the whole picture - observability really started to get a lot of movement out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that; there's some great chapters on that. O'Reilly has an observability book, I think, just dedicated to observability now.

    Jordan Simonovski:

    I think that's still in early release, if people want to google chapters.

    Jess Belliveau:

    The OpenTelemetry stuff, we'll drop a link to that. I think that's really handy to know.

    Angad Sethi:

    From [inaudible 00:26:12], which is my perspective as a developer - say I wanted to introduce observability. We currently use Datadog at Easy Agile. I'm not very familiar, I'm not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability in, like, my direct job or at my workplace?

    Jordan Simonovski:

    I would lean, I could be biased here. Jess correct me or give your opinion on this, I would lean heavily towards SLO's for this. And you can have a quick read in the SRE-

    Jess Belliveau:

    What does SLO stand for, Jordan?

    Jordan Simonovski:

    Okay, sorry. Buzz words! SLO is a service level objective, not to be confused with a service level agreement. An agreement itself is contractual and you can pay people money if you do breach those. An SLO is something you set in your team and you have a target of reliability, because we are getting to the point where we understand that all systems at any point in time are in some kind of degraded state. And yeah, reliability isn't necessarily binary - it's not unreliable or reliable. Most of the time, it's mostly reliable, and this gives us a better shared language, I guess. And you can have a read in the SRE handbook by Google, which is free online, which gives you a pretty good understanding of SLOs.

    Jordan Simonovski:

    Datadog, I think, the last time I used it, had an SLO offering. But I think like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability. You'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring, you're measuring particular types of functionality. And you know when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user facing things are we implementing that we can start to measure?"

    Jordan Simonovski:

    Other things you could probably look at are, I think they're all covered in the book anyway, data freshness in a way. You want to make sure the data users are being displayed is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer like a ping check, that you're just saying, "Yes, HTTP, okay. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea, at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this if you do want to introduce it.
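
    To make the SLO idea concrete, here is a minimal sketch of checking a target like "99% of drag-issue interactions succeed" against recorded events, including the error budget. The target value and event shape are illustrative assumptions.

    ```typescript
    interface InteractionResult {
      ok: boolean;
    }

    function sloStatus(events: InteractionResult[], target = 0.99) {
      const total = events.length;
      const good = events.filter((e) => e.ok).length;
      const sli = total === 0 ? 1 : good / total;            // observed success ratio
      const errorBudget = 1 - target;                         // e.g. 1% of interactions may fail
      const budgetUsed = total === 0 ? 0 : (total - good) / (total * errorBudget);
      return { sli, met: sli >= target, budgetUsed };         // budgetUsed > 1 means the SLO is blown
    }

    // e.g. 990 good out of 1000 events: sli = 0.99, SLO just met, error budget fully used.
    ```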

    Jared Kells:

    We're going to talk a little bit about state and how with some of these, like our very front end heavy applications that we're building, so the applications we build just basically run inside the browser and the traditional state as you would think about it, is just pulling a very simple API that writes some things into the database with some authentication, and that sort of stuff. So in terms of reliability of the services, it's really reliable. Those tiny API's just never have problems, because it's just so simple. And well, they've got plenty of monitoring around it. But all our state is actually, when you say, "Observe the state of the system," for the most part, that's state in a browser. And how do we get observability into that?

    Jess Belliveau:

    A big thing is really that there's no one-size-fits-all. When we talk about the SLO stuff as well, it's understanding what is important, not so much to your company maybe, but to your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge of front-end stuff, but when we say we want to observe the state, we don't necessarily mean just the state itself. You could want to understand, for each one of those APIs, when it's firing and what the request-response time is. That might be an important metric, and then you can start to see if one of those APIs is introducing latency and degrading your user experience. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release, and since then we're seeing it's now in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something the team can then look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
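
    (Editor's note: as a rough sketch of the per-API latency idea Jess describes, the TypeScript below wraps fetch calls, records response times, and computes a percentile that can be compared release to release. The p95 choice and the 800 ms threshold are hypothetical.)

```typescript
// Record per-request latency for an API and compute a percentile,
// so a release-to-release regression shows up as a number rather than a hunch.
const latenciesMs: number[] = [];

async function timedFetch(url: string, init?: RequestInit): Promise<Response> {
  const start = performance.now();
  try {
    return await fetch(url, init);
  } finally {
    latenciesMs.push(performance.now() - start);
  }
}

function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return 0;
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// After a batch of calls, compare p95 against the previous release's baseline,
// e.g. flag a possible regression if percentile(latenciesMs, 95) > 800.
```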

    Jared Kells:

    That's a really good example, Jess. Performance issues for us are typically not an API performing poorly. It's something in this very complicated front-end application not running in the same order as it used to, or some complex interaction we didn't think of, so it's requesting more data than expected. The APIs themselves are never slow, for the most part, but we have performance regressions that we may not know about without seeing or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long it took for this particular interaction to happen.
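
    (Editor's note: one way to get that per-interaction timing in the browser - a sketch, not Easy Agile's actual setup - is the standard Performance API. The interaction name and the /metrics endpoint are placeholders.)

```typescript
// Time a single user-facing interaction in the browser with the Performance API.
// 'update-issue-summary' and '/metrics' are placeholder names.

function startInteraction(name: string): void {
  performance.mark(`${name}:start`);
}

function endInteraction(name: string): void {
  performance.mark(`${name}:end`);
  performance.measure(name, `${name}:start`, `${name}:end`);
  const entries = performance.getEntriesByName(name);
  const last = entries[entries.length - 1];
  if (last) {
    // Ship the duration somewhere it can be watched over time.
    navigator.sendBeacon('/metrics', JSON.stringify({ name, durationMs: last.duration }));
  }
}

// Usage around a complex front-end interaction:
startInteraction('update-issue-summary');
// ... run the interaction ...
endInteraction('update-issue-summary');
```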

    Jess Belliveau:

    Yeah. I've never really done that side of things. The other thing, I guess, is that you could potentially be impacted by the end user's environment as well. You could perceive-

    Jared Kells:

    Yeah sure.

    Jess Belliveau:

    ... Different performance because of their laptop or something, or their ISP, that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.

    Jordan Simonovski:

    Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you upload a source map for your minified JS to Sentry, deploy your code, and then if something does break or work in a fairly unexpected way, that gets surfaced and Sentry will tell you exactly which line it's happening on. It's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-
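
    (Editor's note: for anyone who hasn't used Sentry, the browser-side setup looks roughly like the sketch below; the DSN and release name are placeholders. The source-map upload Jordan mentions normally happens at build time, for example with Sentry's CLI tooling, so minified stack traces can be mapped back to original source lines.)

```typescript
// Rough sketch of a Sentry browser setup; the DSN and release values are placeholders.
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0',
  release: 'my-frontend@1.2.3', // should match the release the source maps were uploaded for
});

// With source maps uploaded for this release at build time, an exception from the
// minified bundle is reported against the original file and line that threw it.
```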

    Jared Kells:

    Yeah, we use a similar tool and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end - not the actual state change, but just labels that represent that you updated an issue summary or clicked this button, that sort of thing - and we send those with our crash reports. It's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
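
    (Editor's note: a minimal version of the approach Jared describes - keeping only action labels, never the state itself, and attaching them to crash reports - might look like this sketch. The action names and the /crash-reports endpoint are invented for illustration.)

```typescript
// Keep a small ring buffer of action labels (not state) and attach it to crash reports.
const MAX_ACTIONS = 50;
const recentActions: string[] = [];

function recordAction(label: string): void {
  recentActions.push(label); // e.g. 'updated-issue-summary', 'clicked-save'
  if (recentActions.length > MAX_ACTIONS) recentActions.shift();
}

function buildCrashReport(error: Error) {
  return {
    message: error.message,
    stack: error.stack,
    actions: [...recentActions], // the breadcrumb trail, no user data
  };
}

window.addEventListener('error', (event) => {
  const error = event.error instanceof Error ? event.error : new Error(String(event.message));
  // Send the report to whatever error endpoint the app already uses.
  navigator.sendBeacon('/crash-reports', JSON.stringify(buildCrashReport(error)));
});

// recordAction('updated-issue-summary') would be called from the app's action dispatcher.
```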

    Jess Belliveau:

    Yeah, that's almost, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're shifting that - it's all stuff in the browser interacting, and you have that history of what the user did and how they've ended up-

    Jared Kells:

    In that state.

    Jess Belliveau:

    In that state, yeah.

    Jordan Simonovski:

    I guess even if you don't have a lot of microservices - like you're saying, for the most part your API requests are fine, but sometimes you have particularly large payloads-

    Jared Kells:

    We actually have to monitor - I don't know, maybe you can help with this - we actually should be monitoring who we're integrating with. It's much more likely that we'll have a performance issue with a Xero API than with our own. And we don't see it; the browser sees it, which is-

    Jordan Simonovski:

    Yeah, and tracing does surface all of those regressions for you. With most tracing libraries - like, if you're running Node apps or whatever on your backend; I can just talk about Node, because I probably have the most experience writing Node stuff - you pretty much just drop dd-trace, which is Datadog's tracing library, into your backend, and it hooks itself into, I think, most of the common libraries that you'll tend to work with. If you're working with Express or a lot of the usual HTTP libraries, as well as a few AWS services, it will hook itself into those. And you can actually pinpoint things. It will show you, on this pretty cool service map, exactly which services you're interacting with and where you might be experiencing a regression. I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
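
    (Editor's note: the drop-in setup Jordan describes looks roughly like this for a Node/Express service; the service name, route, and port are placeholders. dd-trace needs to be initialised before the libraries it instruments are loaded, which is why it lives in its own module that is imported first.)

```typescript
// tracer.ts - imported before everything else so dd-trace can auto-instrument
// the libraries loaded afterwards (Express, http, some AWS SDK clients, etc.).
import tracer from 'dd-trace';
tracer.init({ service: 'my-backend', env: 'production' }); // placeholder names
export default tracer;

// server.ts
import './tracer'; // must come before other imports so instrumentation is in place
import express from 'express';

const app = express();

app.get('/issues/:id', (req, res) => {
  // Work done here shows up as spans under the request's trace,
  // so a slow downstream call is visible on the service map.
  res.json({ id: req.params.id });
});

app.listen(3000);
```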

    Jess Belliveau:

    It's funny. This is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third-party providers as well. Something I think is really important, and that sometimes gets missed, is that so many of us today are relying on third-party providers - AWS is a huge one, and a lot of people are writing apps that require AWS services. I think a lot of the time, people just assume AWS or Jira or whatever has 100% uptime and is always available, and they don't write their code in a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API without implementing exponential backoff. They're basically trying to hit the AWS API, it fails or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience and have a retry mechanism automatically built in, that sort of stuff. It doesn't really tie into the observability thing, but it's something.
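
    (Editor's note: the retry idea Jess mentions can be as simple as the generic sketch below - it isn't tied to any particular AWS SDK, and the attempt count and base delay are arbitrary choices.)

```typescript
// Generic retry with exponential backoff and jitter for throttled or transient failures.
async function withBackoff<T>(
  operation: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of retries: surface the failure
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100; // backoff + jitter
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('unreachable');
}

// Usage: wrap a call that might get throttled, instead of failing the user immediately.
// const result = await withBackoff(() => callThirdPartyApi(params));
```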

    Jared Kells:

    And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.

    Jess Belliveau:

    Well, they're using your app. Exactly right. It reflects on you, so it's in your interest to guard against an upstream failure, or at least inform the user when that's the case. Yeah.

    Jared Kells:

    Well, I think we're going to have to call it there for this podcast, because we're at the hour mark. We were told max 45 minutes.

    Jess Belliveau:

    We could just keep going. We might need a part two! Maybe we can request [crosstalk 00:39:21].

    Jared Kells:

    Maybe! Yeah.

    Jess Belliveau:

    Or we'll just start our own podcast! Yeah.

    Jared Kells:

    So what were your biggest learnings today? It's been Angad and I just learning about observability - Angad, what was your biggest learning about observability today?

    Angad Sethi:

    My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the known unknowns. I don't know if that's a good takeaway, but...

    Jess Belliveau:

    Any takeaway is a good takeaway! What about you, Jared?

    Jared Kells:

    I think - because we were going to talk about state management - part of it was how, with the way our front ends are architected, we can capture the state of the app and get a customer to send us their state, basically. We can load it into our app and see exactly how it was, just because of the way our state's designed. But what might be even cooler is to build some observability into that front end for support. I'm thinking, instead of just having this button that sends us a bunch of the state as support information, and instead of console logging to the browser log, we could be logging in our front end somewhere, so that when they click "send support information," our customers are also sending us the actions that they performed.

    Jared Kells:

    Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.

    Jess Belliveau:

    Yeah, for sure. It'll probably be a lot less intrusive, as well, than some of the third-party stuff I've seen around.

    Jared Kells:

    Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.

    Jess Belliveau:

    Yeah

    Jared Kells:

    You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.

    Jess Belliveau:

    Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...

    Jared Kells:

    All thanks!

    Jess Belliveau:

    Thanks for having us, yeah.

    Jared Kells:

    Thanks all for your time, and thanks everybody for listening.

    Jordan Simonovski:

    Thanks everyone.

    Angad Sethi:

    That was [inaudible 00:41:55].

    Jess Belliveau:

    Tune in next week!

  • Podcast

    Easy Agile Podcast Ep.7 Sarah Hajipour, Agile Coach

    Caitlin Mackie

    "I absolutely loved my conversation with Sarah, she shared some amazing advice that I can't wait to put into practice!"

    We spoke about the agile mindset beyond IT & development teams, how teams such as marketing and finance are starting to adopt the methodology and the benefits of doing so.

    In celebration of International Women's Day, we discussed the future of women in agile, and steps we should be taking to support one another towards an inclusive and enabling environment.

    Be sure to subscribe, enjoy the episode 🎧

    Transcript

    Caitlin Mackie:

    Hello everyone and welcome back to the Easy Agile Podcast for 2021. Each episode, we talk with some of the most interesting people in tech, in agile, and in leading businesses around the world to share fresh perspectives and learn from the wealth of knowledge each guest has to share. I'm Caitlin and I'm the Graduate Marketing Coordinator at Easy Agile and your host for this episode. We are thrilled to be back and have some amazing guests lined up this season. So to kick us off, I'm really excited to be talking with Sarah Hajipour.

    Caitlin Mackie:

    Sarah has so much rich and diverse experience in the agile space. She's an agile coach, a business transformation leader, a project and program manager, and more recently a podcast host and author. She's the jack of all trades and has been in the business agility space for over 10 years. In this episode, Sarah and I chat about the significance of goal setting and in particular goal setting in unpredictable times. We chat about her most recent projects, the Agility Podcast with Sarah Hajipour and her book on Agile Case Studies.

    Caitlin Mackie:

    And of course with International Women's Day coming up, Sarah shared some amazing advice and her thoughts on the way forward for women in agile. She highlighted the importance of raising your hand and asking for help when you need it, as well as embracing qualities that aren't always traditionally thought of in leaders. It was such a thoughtful and insightful discussion. I got a lot of value out of our conversation and received some great advice that I'm really looking forward to putting into practice. I know those listening will feel the same. Let's jump in.

    Caitlin Mackie:

    Sarah, thank you so much for joining us and spending some time with me today.

    Sarah Hajipour:

    Sure. Thanks for having me.

    Caitlin Mackie:

    So being our first guest for the year, I wanted to ask you about any new year's resolutions. Are you on track? Are you a believer in them or do you have a different type of goal setting process?

    Sarah Hajipour:

    That's a great question, because we discussed this with a couple of friends and we realized a new year's resolution is always going to be some kind of huge goal that we don't know whether we're going to meet or not. And thinking about business agility, and as an agile coach, I believe in having smaller goals and reviewing them every three months, every six months, and seeing where we're at - instead of looking at huge goals where we don't know what's going to happen, because there's always a lot of uncertainty, even in our personal lives, around the goals that we set for ourselves. So yeah, that's how I look at it. Quarterly personal goals. Let's say that.

    Caitlin Mackie:

    Yeah. Yeah. I love that. I think if the last year has taught us anything, we can all agree on how unpredictable things can become. So those original goals...

    Sarah Hajipour:

    That's true.

    Caitlin Mackie:

    Yeah. The original goals might have to take a couple of detours. So what would be your advice for setting career goals in uncertain times?

    Sarah Hajipour:

    That's a great question. For career goals, I believe it really matters that you do something you're at least interested in. If you still haven't found your passion, that's fine - especially for young professionals. It's okay if you haven't found your passion yet, but you can still follow a career path, starting with things that you like to do and enjoy, and you learn along the way.

    Sarah Hajipour:

    I was listening to one of the fashion icons on YouTube a couple of days ago, and the interviewer was asking her, "What was your career path? How did you get to this place you are now?" And I loved what she told everybody, the students, and that was: go and find a career, find a job, and learn. You first need to learn a lot of skills before you decide what you're actually good at. You understand what your weaknesses and your strengths are, right? Because not all of us have these amazing ideas all the time, and that's fine.

    Sarah Hajipour:

    I'm not very much in the camp of "everybody has to be a visionary" and everybody has to have big, shiny goals and ideas. I think it's perfectly fine to just find the kind of job or career path that you're comfortable with, and then sometimes get out of your comfort zone and discover as you go. Life is for exploring, not for pushing yourself into a corner all the time and comparing yourself with everybody else.

    Caitlin Mackie:

    Yeah. I love that. That's great advice. So you've recently added podcast host and author to your resume. Were they always career goals of yours?

    Sarah Hajipour:

    No, absolutely not. Well, I'm a little bit of an introverted person. So sitting in front of a camera, talking and having people hear me, was always like, "Oh my God, I know I need to talk about this, even with my teams and stuff, but I'll do it only if it's necessary." What got me into podcasting was that I figured there are a lot of questions I'm finding answers to when I'm having conversations at meetups and in the different professional groups that I'm in. And I wanted other people to hear those as well. I talk to people who have great insights and have been in this career way longer than me, so I'm learning at the same time. And I wanted to share that learning with everybody else. That's the reason I'm doing the podcast.

    Caitlin Mackie:

    Yeah, that's great. Yeah, I love that. And I think you kind of touched on this earlier, but I think being in the agile space, sometimes it can be a nice reminder for you to have a bit of a focus, but then reflect and understand sort of where to be more effective and adjust accordingly. I know you mentioned that with your career goals, do you think that those agile principles can be applied beyond the usual use case?

    Sarah Hajipour:

    I do. I believe agile is a very intuitive way of working and a way of thinking. That's why it has now expanded to other industries. It didn't stay with DevOps and IT and development. A lot of different industries are now adopting this, because it's a mindset change. It's not just using Scrum. It's not just using Kanban. It is about understanding how to reflect on and adapt to the faster changes that are happening in the world. And that applies to our personal lives as well.

    Sarah Hajipour:

    I mean, I used to have set goals when I was 18 years old - I'm going to be this at 30 - but did they happen? No. In some aspects I achieved much, much more, and in some aspects I just changed my goal. I think the changes happening in the world are more rapid, and that demands that we change as well. Yeah.

    Caitlin Mackie:

    Yeah. Awesome. So just to circle back a little bit there for your podcast just for our audience listening, what platforms can they access your podcast on?

    Sarah Hajipour:

    I'm on all of the main platforms. I'm on Apple Podcasts, I'm on Spotify, I'm on Amazon. Most of the prominent podcast platforms.

    Caitlin Mackie:

    Awesome. And then just again, for our audience, your podcast is called the Agility Podcast with Sarah Hajipour.

    Sarah Hajipour:

    That's correct. Yes.

    Caitlin Mackie:

    Awesome. That's great. What do you think has been the most valuable lesson you've learned from your podcast so far? Is it something a guest has shared or something you've learned along the way?

    Sarah Hajipour:

    I have learned a lot from the people that I interview, because I make sure that I talk to people who know more than me and have been in this field longer than me, and in different industries. The main thing I would say is that business agility is about mindset rather than tools and processes, and the fact that the world overall is moving towards a more human-centric way of working. So basically that's why I say agile is more intuitive, rather than just following ABCD. Yeah. This is the core, the main thing that I have learned from my interviewees.

    Caitlin Mackie:

    Yeah, amazing. You've also started writing a book at the moment. Can you tell us a little bit more about that? How did that project begin?

    Sarah Hajipour:

    I actually love this project. The way I started writing the book - the book came first and then the podcast happened. I attend a lot of meetups. For young professionals, and even for professionals who are very skilled in what they do, meetups are a great place to meet, expand your network, and learn from your peers. So I was attending all of these and learning from people. And then I decided I really wanted to have one-on-one conversations with them. And eventually I figured that a lot of the agile coaches, executives, and consultants have a lot to share, but I didn't see any platform that kind of unifies that.

    Sarah Hajipour:

    I said, "Okay what are the learnings that we can share?" A lot of the mistakes because of the meetups groups, people feel safe to share and be vulnerable. And I was in multiple meetups so I heard very similar stories from people, the mistakes that have been repeated by a coach somewhere else. So I thought that'd be a great idea to put these in agile cases. So it's going to be Agile Case Studies and share it with everyone so. Especially the young coaches or stepping into the business, there's a lot of unknowns. I don't want them to be afraid. I don't want them to think, "Okay, this is a huge task." There's always going to be a lot of unknowns.

    Sarah Hajipour:

    Yes, exactly. I kind of want to give that visibility - that everybody else is experiencing the same thing, even people with 25 years of experience, which is amazing, right?

    Caitlin Mackie:

    Yeah.

    Sarah Hajipour:

    And that's the reason I started writing the book. So I interview agile coaches and agile consultants who have been around at least five to ten years and have led agile transformation projects. And then from there, one of my interviewees once said, "You should do a podcast. I'd like to talk about this too." I'm like, "This is great," and the week after I was running around looking for tools to start my podcast.

    Caitlin Mackie:

    Oh, amazing. Sounds so good. What's the process been like? How have you found it, from ideation to where you are now, and then eventually publishing it?

    Sarah Hajipour:

    For the podcast?

    Caitlin Mackie:

    For the book.

    Sarah Hajipour:

    For the book, I go to these meetups and I listen to what the coaches and the executives are sharing. For the ones that are exciting or new for me, I will ask them - I connect with them over LinkedIn, and people are so open to sharing their experience with you. I've never had even one person tell me, "No, I don't want to talk about this." People want to share. So I approach them and I say, "Hey, I have a book outline or guideline. It's a two-pager." I send it to them and ask if they're interested in talking to me about it, they let me know, and then I'll set up a time.

    Sarah Hajipour:

    And the first session is about half an hour - a kind of brainstorming session. What are the key cases that they feel they want to share? Then we pick one, and in the session after that, they'll actually go through the case with me. I record it, draft it, and then we share it on Google Drive back and forth until we're happy with the outcome.

    Caitlin Mackie:

    Yeah. Awesome. Do you have a timeline at the moment? When can we expect to be able to read it?

    Sarah Hajipour:

    I'm aiming for around the end of 2021, because it's 100 cases and I think I'll have them by then.

    Caitlin Mackie:

    Yeah. Awesome. It's so exciting. Lots to look forward to.

    Sarah Hajipour:

    Thank you.

    Caitlin Mackie:

    Now, I also wanted to touch on International Women's Day, which is coming up. You've been in the agile space for a few years now, so I assume you've probably witnessed a bit of change in this space. Have there been any pivotal moments that have sort of led to where you are today?

    Sarah Hajipour:

    Well, I think a lot of women are being attracted to the agile practice and the different agile roles. I have seen a lot more women as scrum masters, as product owners, and as agile managers or agile project managers - a lot of different roles are kind of flourishing in this area - and I've seen a lot of women contribute. One of my goals, actually, in my book and on my podcast is to find these women and talk to them, regardless of where they are in the world. Yeah, I just feel that women can really grow in this area, in the agile mindset, because women bring more of the collaboration piece.

    Sarah Hajipour:

    I can't say we're less competitive - I haven't done research on that, but I have discussed it with people: do you think women are more collaborative rather than competitive? Because competition is great, but you need a lot of collaboration in agile, and a lot of nurturing. You need to have that nurturing feeling, the nurturing mindset - that's what a scrum master does. One of the key characteristics of a scrum master is that they have to have this nurturing perspective to bring to the team.

    Caitlin Mackie:

    It's funny you mention that, because I've actually read some things myself about women typically possessing more of that open leadership style, and open leadership seems to complement the agile space really nicely.

    Sarah Hajipour:

    That's exactly it, yeah.

    Caitlin Mackie:

    Yeah. Yeah. That's great, and I think there's a lot we can take from that - open leadership and direct leadership, men and women coming forward and finding that middle ground. I feel like agile is a great space to do that in.

    Sarah Hajipour:

    Yeah, I totally agree. Yeah.

    Caitlin Mackie:

    Yeah, yeah. So what drove your passion? I guess what made you want to pursue a career in this space?

    Sarah Hajipour:

    I love the collaboration piece and I love the vulnerability, because people are allowed to be vulnerable in the teams they work in. And it's a culture that is more human, rather than super strict - where we're not allowed to make mistakes, we're not allowed to be wrong, and leaders are supposed to know everything right off the bat. In reality, that's not the case. Leaders have to feel comfortable not knowing a lot of things that aren't even known yet. A lot of the time, I say we're in the unknown-unknown zone. And in that zone, even leaders are not supposed to know everything.

    Sarah Hajipour:

    One of the other things I learned from my interviewees is that it all starts with the leadership. In agile transformations, the leaders have to first create that atmosphere of collaboration, trust, and psychological safety among themselves. Only then can they help teams thrive in those kinds of atmospheres as well.

    Sarah Hajipour:

    Women in agile and women in leadership - what I see is a lot of men and women, both, changing their perspective from process- or tool-centric to people-centric, because it works better for everyone. And I see change really happening in all industries. I see it in retail, in construction, obviously in IT, in finance. There are men and women, hand-in-hand, trying to embrace this way of thinking and this way of working.

    Sarah Hajipour:

    And women are becoming more comfortable to grow and raise their hand and say, "Hey, I can take this on. I can take this role," because of that psychological safety. For ages, the workplace has been something that was mostly men, and we're gradually getting into the workforce, the business world, as females. So that psychological safety has allowed women to raise their hand and grow in different roles, and obviously leadership roles.

    Caitlin Mackie:

    Yeah, yeah. I couldn't agree more. Have there been any resources or networks, things like that, that have helped you along your journey?

    Sarah Hajipour:

    Learning from everybody else - creating a network, expanding my network, and being willing to come in and say, "Hey, I don't know. I want to know." There are all of these amazing things happening, and I like to understand how they work. And I remember it was one of these founders. Who's the founder of Apple? Oh my God. Don't tell me.

    Caitlin Mackie:

    Steve Jobs.

    Sarah Hajipour:

    I love this quote from Steve Jobs that says, "There has never been a time where I asked for help and people didn't help me." So just raise your hand and say, "I need help." And what is that help that I need? I need to know about this. What does it mean? What does Scrum mean to you? How does it work in your industry? How does it work? And really, I think that has been the key for me up until now: to connect with people, just be vulnerable, and let them teach me.

    Caitlin Mackie:

    Yeah. My next question is about how we amplify that diverse and empowered community of women, and our job in increasing the representation of women in agile. What do you think is key to achieving a supportive and enabling environment?

    Sarah Hajipour:

    What I have seen and realized is that women really need to be - and are being - more supportive of each other. There was a study in HBR, Harvard Business Review, in 2016 that said if there is only one woman in the pool of interviewees, there's basically zero chance of that woman getting the job, even if she's the best. So this calls for - and women are actually working great on this - not being the queen bee, but engaging and including other women. Because the more women there are in different roles, the more receptive those communities are going to be. That, I think, is key: that we understand that and support each other, help each other, and build communities around it.

    Sarah Hajipour:

    There is a community, Women in Agile, in different cities and different parts of the world, that I'm a member of as well, and it's doing a great job. It's not just women in those groups - I see men participating as well - but it's predominantly women trying to give each other insights on all aspects of agile practices and agile ways of working. Yeah.

    Caitlin Mackie:

    Yeah. So I think what's the way forward? I guess what's your prediction for women in agile? What do we need to do to continue that momentum?

    Sarah Hajipour:

    I think women will do great in anything and everything they put their minds to, regardless. We're human, bottom line, and we all have the potential to grow in whatever we put our mind and heart to, regardless of our gender. So I would love for women to get that holistic perspective: that regardless of their gender, they can do anything - and they are, we are.

    Sarah Hajipour:

    We read about other women who have been successful in fields you might have felt women couldn't go into - women astronauts, women physicists, women engineering leads, all of these roles that have been less common. The world is changing for the better, and that's great.

    Caitlin Mackie:

    Yeah, yeah. I absolutely love that.

    Sarah Hajipour:

    It's a great time to be alive.

    Caitlin Mackie:

    Yeah. That's exciting. Yeah, exactly.

    Sarah Hajipour:

    Yes.

    Caitlin Mackie:

    Yeah. I definitely think we are beginning to see a huge increase in the visibility of female role models across so many industries, so it's great to have that. But Sarah, this has been such a great conversation. I wanted to finish with a final question for you: if you could give one piece of advice to women just starting their career in their industry, what would it be?

    Sarah Hajipour:

    I would say maybe the best advice I can give is that we do have the power. Number one, we need to look beyond gender and have that belief that we can do anything we want. And second, don't be shy to open up and build your community - build a community, join a community of agile practitioners, of agile coaches - specifically people who know more than you.

    Sarah Hajipour:

    And don't be afraid to ask for help. Don't be afraid to say, "Hey, I'm new to this and I'd love to learn from you guys." Don't be afraid to put yourself out there - you're going to learn a lot that you wouldn't even expect, and you're going to hear things beyond what you expected. There's a lot of human potential that can be unleashed when you just put yourself out there and let others contribute to your growth.

    Caitlin Mackie:

    That's amazing. That's great advice, Sarah. Loved every minute of our conversation. So thank you so much for joining me today. I really appreciate it.

    Sarah Hajipour:

    My pleasure. Thank you so much for having me.

  • Podcast

    Easy Agile Podcast Ep.34 Henrik Kniberg on Team Productivity, Code Quality, and the Future of Software Engineering

    TL;DR

    Henrik Kniberg, the agile coach behind Spotify's model, discusses how AI is fundamentally transforming software development. Key takeaways: AI tools like Cursor and Claude are enabling 10x productivity gains; teams should give developers access to paid AI tools and encourage experimentation; coding will largely disappear as a manual task within 3–4 years; teams will shrink to 2 people plus AI; sprints will become obsolete in favour of continuous delivery; product owners can now write code via AI, creating pull requests instead of user stories; the key is treating AI like a brilliant intern – when it fails, the problem is usually your prompt or code structure, not the AI. Bottom line: Learn to use AI now, or risk being left behind in a rapidly changing landscape.

    Introduction

    Artificial intelligence is fundamentally reshaping how software teams work, collaborate, and deliver value. But with this transformation comes questions: How do we maintain team morale when people fear being replaced? What happens to code quality when AI writes most of the code? Do traditional agile practices like sprints still make sense?

    In this episode, I sit down with Henrik Kniberg to tackle these questions head-on. Henrik is uniquely positioned to guide us through this transition – he's the agile coach and entrepreneur who pioneered the famous Spotify model and helped transform how Lego approached agile development. Now, as co-founder of Abundly AI, he's at the forefront of helping teams integrate AI into their product development workflows.

    This conversation goes deep into the practical realities of AI-powered development: from maintaining code review processes when productivity increases 10x, to ethical considerations around AI usage, to what cross-functional teams will look like in just a few years. Henrik doesn't just theorise – he shares real examples from his own team, where their CEO (a non-coder) regularly submits pull requests, and where features that once took a sprint can now be built during a 7-minute subway ride.

    Whether you're a developer wondering if AI will replace you, a product owner looking to leverage these tools, or a leader trying to navigate this transformation, this episode offers concrete, actionable insights for thriving in the AI era.

    About Our Guest

    Henrik Kniberg is an agile coach, author, and entrepreneur whose work has shaped how thousands of organisations approach software development. He's best known for creating the Spotify model – the squad-based organisational structure that revolutionised how large tech companies scale agile practices. His work at Spotify and later at Lego helped demonstrate how agile methodologies could work at enterprise scale whilst maintaining team autonomy and innovation.

    Henrik's educational videos have become legendary in the agile community. His "Agile Product Ownership in a Nutshell" video, created over a decade ago, remains one of the most-watched and shared resources for understanding product ownership, with millions of views. His ability to distil complex concepts into simple, visual explanations has made him one of the most accessible voices in agile education.

    More recently, Henrik has turned his attention to the intersection of AI and product development. As co-founder of Abundly AI, he's moved from teaching about agile transformation to leading AI transformation – helping companies and teams understand how to effectively integrate generative AI tools into their development workflows. His approach combines his deep understanding of team dynamics and agile principles with hands-on experience using cutting-edge AI tools like Claude, Cursor, and GitHub Copilot.

    Henrik codes daily using AI and has been doing so for over two and a half years, giving him practical, lived experience with these tools that goes beyond theoretical understanding. He creates educational content about AI, trains teams on effective AI usage, and consults with organisations navigating their own AI transformations. His perspective is particularly valuable because he views AI through the lens of organisational change management – recognising that successful AI adoption isn't just about the technology, it's about people, culture, and process.

    Based in Stockholm, Sweden, Henrik continues to push the boundaries of what's possible when human creativity and AI capabilities combine, whilst maintaining a pragmatic, human-centred approach to technological change.

    Transcript

    Note: This transcript has been lightly edited for clarity and readability.

    Maintaining Team Morale and Motivation in the AI Era

    Tenille Hoppo: Hi there, team, and welcome to this new episode of the Easy Agile Podcast. My name is Tenille Hoppo, and I'm feeling really quite lucky to have an opportunity to chat today with our guest, Henrik Kniberg.

    Henrik is an agile coach, author, and entrepreneur known for pioneering agile practices at companies like Spotify and Lego, and more recently for his thought leadership in applying AI to product development. Henrik co-founded Abundly AI, and when he isn't making excellent videos to help us all understand AI, he is focused on the practical application of generative AI in product development and training teams to use these technologies effectively.

    Drawing on his extensive experience in agile methodologies and team coaching, Henrik seems the perfect person to learn from when thinking about the intersection of AI, product development, and effective team dynamics. So a very warm welcome to you, Henrik.

    Henrik Kniberg: Thank you very much. It's good to be here.

    Tenille: I think most people would agree that motivated people do better work. So I'd like to start today by touching on the very human element of this discussion and helping people maintain momentum and motivation when they may be feeling some concern or uncertainty about the upheaval that AI might represent for them in their role.

    What would you suggest that leaders do to encourage the use of AI in ways that increase team morale and creativity rather than risking people feeling quite concerned or even potentially replaced?

    Henrik: There are kind of two sides to the coin. There's one side that says, "Oh, AI is gonna take my job, and I'm gonna get fired." And the other side says, "Oh, AI is going to give me superpowers and give us all superpowers, and thereby give us better job security than we had before."

    I think it's important to press on the second point from a leader's perspective. Pitch it as: this is a tool, and we are entering a world where it's crucial to understand how to use it, in a similar way that everyone uses the Internet. We consider it obvious that you need to know how to use the Internet. If you don't know how to use the Internet, it's going to be hard.

    "I encourage people to experiment, give them access to the tools to do so, and encourage sharing. And don't start firing people because they get productive."

    I also find that people tend to get a little bit less scared once they learn to use it. It becomes less scary. It's like if you're worried there's a monster under your bed, maybe look under your bed and turn on the lights. Maybe there wasn't a monster there, or maybe it was there but it was kind of cute and just wanted a hug.

    Creating a Culture of Safe Experimentation

    Tenille: I've read that you encourage experimentation with AI through learning – I agree it's the best way to learn. What would you encourage leaders and team leaders to do to create a strong culture where teams feel safe to experiment?

    Henrik: There are some things. One is pretty basic: just give people access to good AI tools. And that's quite hard in some large organisations because there are all kinds of resistance – compliance issues, data security issues. Are we allowed to use ChatGPT or Claude? Where is our data going? There are all these scary things that make companies either hesitate or outright try to stop people.

    Start at that hygiene level. Address those impediments and solve them. When the Internet came, it was really scary to connect your computer to the Internet. But now we all do it, and you kind of have to, or you don't get any work done. We're at this similar moment now.

    "Ironically, when companies are too strict about restricting people, then what people tend to do is just use shadow AI – they use it on their own in private or in secret, and then you have no control at all."

    Start there. Once people have access to really good AI tools, then it's just a matter of encouraging and creating forums. Encourage people to experiment, create knowledge-sharing forums, share your own experiments. Try to role-model this yourself. Say, "I tried using AI for these different things, and here's what I learned." Also provide paths for support, like training courses.

    The Right Mindset for Working with AI

    Tenille: What would you encourage in team members as far as their mindset or skills go? Certainly a nature of curiosity and a willingness to learn and experiment. Is there anything beyond that that you think would be really key?

    Henrik: It is a bit of a weird technology that's never really existed before. We're used to humans and code. Humans are intelligent and kind of unpredictable. We hallucinate sometimes, but we can do amazing things. Code is dumb – it executes exactly what you told it to do, and it does so every time exactly the same way. But it can't reason, it can't think.

    Now we have AI and AI agents which are somewhere in the middle. They're not quite as predictable as code, but they're a lot more predictable than humans typically. They're a lot smarter than code, but maybe not quite as smart as humans – except for some tasks when they're a million times smarter than humans. So it's weird.

    You need a kind of humble attitude where you come at it with a mindset of curiosity. Part of it is also to realise that a lot of the limitation is in you as a user. If you try to use AI for coding and it wrote something that didn't work, it's probably not the model itself. It's probably your skills or lack of skills because you have to learn how to use these tools. You need to have this attitude of "Oh, it failed. What can I do differently next time?" until you really learn how to use it.

    "There can be some aspect of pride with developers. Like, 'I've been coding for 30 years. Of course this machine can't code better than me.' But if you think of it like 'I want this thing to be good, I want to bring out the best in this tool' – not because it's going to replace me, but because it's going to save me a tonne of time by doing all the boring parts of the coding so I can do the more interesting parts – that kind of mindset really helps."

    Maintaining Code Quality and Shared Understanding

    Tenille: Our team at Easy Agile is taking our steps and trying to figure out how AI is gonna work best for us. I put the question out to some of our teams, and there were various questions around people taking their first steps in using AI as a co-pilot and producing code. There are question marks around consistency of code, maintaining code quality and clean architecture, and even things like maintaining that shared understanding of the code base. What advice do you have for people in that situation?

    Henrik: My first piece of advice when it comes to coding – and this is something I do every day with AI, I've been doing for about two and a half years now – is that the models now, especially Claude, have gotten to the level where it's basically never the AI's fault anymore. If it does anything wrong, it's on you.

    You need to think about: okay, am I using the wrong tool maybe? Or am I not using the tool correctly?

    For example, the current market leader in terms of productivity tools with AI is Cursor. There are other tools that are getting close like GitHub Copilot, but Cursor is way ahead of anything else I've seen. With Cursor, it basically digs through your code base and looks for what it needs.

    But if it fails to find what it needs, you need to think about why. It probably failed for the same reason a human might have failed. Maybe your code structure was very unstructured. Maybe you need to explain to the AI what the high-level structure of your code is.

    "Think of it kind of like a really smart intern who just joined your team. They're brilliant at coding, but now they got confused about something, and it's probably your code – something in it that made it confused. And now you need to clarify that."

    There are ways to do that. In Cursor, for example, you can create something called cursor rules, which are like standing documents that describe certain aspects of your system. In my team, we're always tweaking those rules. Whenever we find that the AI model did something wrong, we're always analysing why. Usually it's our prompt – I just phrased it badly – or I just need to add a cursor rule, or I need to break the problem down a little bit.

    It's exactly the same thing as if you go to a team and give them this massive user story that includes all these assumptions – they'll probably get some things wrong. But if you take that big problem and sit down together and analyse it and split it into smaller steps where each step is verifiable and testable, now your team can do really good work. It's exactly the same thing with AI.
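
    (Editor's note: cursor rules are plain-text standing instructions that Cursor reads alongside your prompts. The file below is purely illustrative - the repo layout and rules are invented - but it shows the kind of structural context Henrik describes giving the model.)

```text
# Illustrative cursor rules (all paths and rules below are made up for this example)

- This repo is a monorepo: apps/web is the React front end, apps/api is the Node back end,
  and packages/shared holds the types used by both.
- New API endpoints live in apps/api/src/routes and need a matching integration test.
- Never edit generated files under packages/shared/generated.
- Prefer small, verifiable changes; ask for clarification before touching payments code.
```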

    Addressing the Code Review Bottleneck

    Tenille: One of our senior developers found that he was outputting code at a much greater volume and faster speed, but the handbrake he found was actually their code review processes. They were keeping the same processes they had previously, and that was a bit of a handbrake for them. What kind of advice would you have there?

    Henrik: This reminds me of the general issue with any kind of productivity improvement. If you have a value stream, a process where you do different parts – you do some development, some testing, you have some design – whenever you take one part of the process and make it super optimised, the bottleneck moves to somewhere else.

    If testing is no longer the bottleneck, maybe coding is. And when coding is instant, then maybe customer feedback – or lack of customer feedback – is the bottleneck. The bottleneck just keeps moving. In that particular case, the bottleneck became code review. So I would just start optimising that. That's not an AI problem. It's a process problem.

    Look at it: what exactly are we trying to do when we review? Maybe we could think about changing the way we review things. For example, does all code need to be reviewed? Would it be enough that the human who wrote it and the AI, together with the human, agree that this is fine? Or maybe depending on the criticality of that change, in some cases you might just let it pass or use AI to help in the reviewing process also.

    "I think there's value in code review in terms of knowledge sharing in a large organisation. But maybe the review doesn't necessarily need to be a blocking process either. It could be something you go back and look at – don't let it stop you from shipping, but maybe go back once per week and say, 'Let's look at some highlights of some changes we've made.'"

    We produce 10 times more code than in the past, so reviewing every line is not feasible. But maybe we can at least identify which code is most interesting to look at.

    Ethical Considerations: Balancing Innovation with Responsibility

    Tenille: Agile emphasises people over process and delivering value to customers. Now with AI in the mix, there's potential for raising some ethical considerations. I'm interested in your thoughts on how teams should approach these ethical considerations that come along with AI – things like balancing rapid experimentation against concerns around bias, potential data privacy concerns.

    Henrik: I would treat each ethical question on its own merits. Let me give you an example. When you use AI – let's say facial recognition technology that can process and recognise faces a lot better than any human – I kind of put that in the bucket of: any tool that is really useful can also be used for bad things. A hammer, fire, electricity.

    That doesn't have so much to do with the tool itself. It has much more to do with the rules and regulations and processes around the tool. I can't really separate AI in that sense. Treat it like any other system. Whenever you install a camera somewhere, with or without AI, that camera is going to see stuff. What are you allowed to do with that information? That's an important question. But I don't think it's different for AI really, in that sense, other than that AI is extremely powerful. So you need to really take that seriously, especially when it comes to things like autonomous weapons and the risk of fraud and fake news.

    "An important part of it is just to make it part of the agenda. Let's say you're a recruitment company and you're now going to add some AI help in screening. At least raise the question: we could do this. Do we want to do this? What is the responsible way to do it?"

    It's not that hard to come up with reasonable guidelines. Obviously, we shouldn't let the AI decide who we're going to hire or not. That's a bad idea. But maybe it can look at the pile of candidates that we plan to reject and identify some that we should take a second look at. There's nothing to lose from that because that AI did some extra research and found that this person who had a pretty weak CV actually has done amazing things before.

    We're actually working with a company now where we're helping them build some AI agents. Our AI agents help them classify CVs – not by "should we hire them or not," but more like which region in Sweden is this, which type of job are we talking about here. Just classifying to make it more likely that this job application reaches the right person. That's work that humans did before with pretty bad accuracy.

    The conclusion was that AI, despite having biases like we humans do, seemed to have less biases than the human. Mainly things like it's never going to be in a bad mood because it hasn't had its coffee today. It'll process everybody on the same merits.

    I think of it like a peer-to-peer thing. Imagine going to a doctor – ideally, I want to have both a human doctor and an AI doctor side by side, just because they both have biases, but now they can complement each other. It's like having a second opinion. If the AI says we should do this and the doctor says, "No, wait a second," or vice versa, having those two different opinions is super useful.

    Parallels Between Agile and AI Transformations

    Tenille: You're recognised as one of the leading voices in agile software development. I can see, and I'm interested if you do see, some parallels between the agile transformations that you led at Spotify and Lego with the AI transformations that many businesses are looking at now.

    Henrik: I agree. I find that when we help companies transition towards becoming AI native, a lot of the thinking is similar to agile. But I think we can generalise that agile transformations are not really very special either – it's organisational change.

    There are some patterns involved regardless of whether you're transitioning towards an agile way of working or towards AI. Some general patterns such as: you've got to get buy-in, it's useful to do the change in an incremental way, balance bottom-up with top-down. There are all these techniques that are useful regardless. But as an agilist, if you have some skills and competence in leading and supporting a change process, then that's going to be really useful also when helping companies understand how to use AI.

    Tenille: Are you seeing more top-down or bottom-up when it comes to AI transformations?

    Henrik: So far it's quite new still. The jury's not in yet. But so far it looks very familiar to me. I'm seeing both. I'm seeing situations where it's pure top-down where managers are like "we got to go full-out AI," and they push it out with mixed results. And sometimes just completely bottom-up, also with mixed results.

    Sometimes something can start completely organically and then totally take hold, or it starts organically and then gets squashed because there was no buy-in higher up. I saw all of that with agile as well. My guess is in most cases the most successful will be when you have a bit of both – support and guidance from the top, but maybe driven from the bottom.

    "I think the bottom-up is maybe more important than ever because this technology is so weird and so fast-moving. As a leader, you don't really have a chance if you try to control it – you're going to slow things down to an unacceptable level. People will be learning things that you can't keep up with yourself. So it's better to just enable people to experiment a lot, but then of course provide guidance."

    AI for Product Owners: From Ideation to Pull Requests

    Tenille: You're very well known for your guidance and for your ability to explain quite complex concepts very simply and clearly. I was looking at your video on YouTube today, the Agile Product Ownership in a Nutshell video, which was uploaded about 12 years ago now. Thinking about product owners, there's a big opportunity now with AI for generating ideas, analysing data, and even suggesting new features. What's your advice for product owners and product managers in using AI most effectively?

    Henrik: Use it for everything. Overuse it so you can find the limits. The second thing is: make sure you have access to a good AI model. Don't use the free ones. The difference is really large – like 10x, 100x difference – just in paying like $20 per month or something. At the moment, I can particularly strongly recommend Claude. It's in its own category of awesomeness right now. But that of course changes as they leapfrog each other. But mainly: pay up, use a paid model, and then experiment.

    For product owners, typical things are what you already mentioned – ideation, creating good backlog items, splitting a story – but also writing code. I would say as a PO, there is this traditional view, for example in Scrum, that POs should not be coding. There's a reason for that: because coding takes time, and then as PO you get stuck in details and you lose the big picture.

    Well, that's not true anymore. There are very many things that used to be time-consuming coding that is basically a five-minute job with a good prompt.

    "Instead of wasting the team's time by trying to phrase that as a story, just phrase it as a pull request instead and go to the team and demonstrate your running feature."

    That happened actually today. Just now, our CEO, who's not a coder, came to me with a pull request. In fact, quite often he just pushes directly to a branch because it's small changes. He wants to add some new visualisation for a graph or something in our platform – typically admin stuff that users won't see, so it's quite harmless if he gets it wrong.

    He's vibe coding, just making little changes to the admin, which means he never goes to my team and says, "Hey, can you guys generate this report or this graph for how users use our product?" No, he just puts it in himself if it's simple.

    Today we wanted to make a change with how we handle payments for enterprise customers. Getting that wrong is a little more serious, and the change wasn't that hard, but he just didn't feel completely comfortable pushing it himself. So he just made a PR instead, and then we spent 15 minutes reviewing it. I said it was fine, so we pushed it.

    It's so refreshing that now anybody can code. You just need to learn the basic prompting and these tools. And then that saves time for the developers to do the more heavyweight coding.

    Tenille: It's an interesting world where we can have things set up where anyone could just jump in and with the right guardrails create something. It makes Friday demos quite probably a lot more interesting than maybe they used to be in the past.

    Henrik: I would like to challenge any development team to let their stakeholders push code, and then find out whatever's stopping you from doing that and fix that. Then you get to a very interesting space.

    Closing the Gap Between Makers and Users

    Tenille: A key insight from your work with agile teams in the past has been to really focus on minimising that gap between maker and user. Do you think that AI helps to close that gap, or do you think it potentially risks widening it if teams are focusing too much on AI predictions and stop talking to their customers effectively?

    Henrik: I think that of course depends a lot on the team. But from what I've seen so far, it massively reduces the gap. Because if I don't have to spend a week getting a feature to work, I can spend an hour instead. Then I have so much more time to talk to my users and my customers.

    If the time to make a clickable prototype or something is a few seconds, then I can do it live in real time with my customers, and we can co-create. There are all these opportunities.

    I find that – myself, my teams, and the people I work with – we work a lot more closely with our users and customers because of this fast turnaround time.

    "Just yesterday I was teaching a course, and I was going home sitting on the subway. It was a 15-minute subway ride. I finally got a seat, so I had only 7 minutes left. There's this feature that I wanted to build that involved both front-end and back-end and a database schema change. Well, 5 minutes later it was done and I got off the subway and just pushed it. That's crazy."

    Of course, our system is set up and optimised to enable that kind of speed, and of course not everything will work that well. But every time it does – and I've been coding for 30 years – I feel like I wake up in some weird fantasy, wondering, "Can I really be this productive?" I never would have thought that was possible.

    Looking Ahead: The Future of Agile Teams

    Tenille: I'd like you to put your futurist hat on for a moment. How do you see the future of agile teamwork in, say, 10 to 15 years' time? If we were to have this conversation again in 2035, given the exponential growth of AI and the improvements over the last two to three years, what do you think would be the biggest change for software development teams in how they operate?

    Henrik: I can't even imagine 10 years. Even 5 years is just beyond imagination. That's like asking someone in the 1920s to imagine smartphones and the Internet. I think that's the level of change we're looking at.

    I would shorten the time a little bit and say maybe 3 or 4 years. My guess there – and I'm already seeing this transition happen – is that coding will just go away. It just won't be stuff that we humans do, because we're too slow and we hallucinate way too much.

    But I think engineering and the developer role will still be there, just that we don't type lines of code – in the same way that we no longer make punch cards or we no longer write machine code and poke values into registers using assembly language. That used to be a big part of it, but no longer.

    "In the future, as developers, a lot of the work will still be the same. You're still designing stuff, you're thinking about architecture, you're interacting with customers, and you're doing all the other stuff. But typing lines of code is something that we're gonna be telling our kids about, and they're not gonna believe that we used to do that."

    The other thing is smaller teams, which I'm already seeing now. The idea of a cross-functional team of 5 to 7 people was traditionally considered necessary in order to have all the different skills needed to deliver a feature in a product. But that's not the case anymore. If you skip ahead 2 or 3 years, when this knowledge has spread, I think most teams will be 2 people and an AI, because then you probably have all the domain knowledge you need.

    As a consequence of that, we'll just have more teams. More and smaller teams. Of course, then you need to collaborate between the teams, so cross-team synchronisation is still going to be an issue.

    Also – and I'm already seeing this now – there's the concept of sprints. The whole point of a sprint is to give a team some peace of mind to build something complex, because typically you'd need a week or two for that. But now, when a day and some good prompting can do what used to take a whole sprint, then the sprint is a day instead. If the sprint is a day, is there any difference between a sprint planning meeting and a daily standup? Not really.

    I think sprints will just kind of shrink into oblivion. What's going to be left instead is something a little bit similar – some kind of synchronisation point or follow-up point. Instead of a sprint where every 2 weeks we sit down and try to make a plan, I think it'll be very much continuous delivery on a day-to-day basis. But then maybe every week or two we take a step back and just reflect a little bit and say, "Okay, what have we been delivering the past couple of weeks? What have we been learning? What's our high-level focus for the next couple of weeks?" A very, very lightweight equivalent of a sprint.

    I feel pretty confident about that guess because my team and I are already there, and I think it'll become a bit of a norm.

    Final Thoughts: Preparing for the Future

    Henrik: No one knows what's gonna happen in the future, and those who say they do are kidding themselves. But there's one fairly safe bet: no matter what happens in the future with AI, if you understand how to use it, you'll be in a better position to deal with whatever that is. That's why I encourage people to get comfortable with it and get used to using it.

    Tenille: I have a teenage daughter who I'm trying to encourage to learn how to use AI, because when I was her age, the Internet was the thing that was just becoming mainstream. It completely changed the way we live – everything is online now. And I feel like AI is that piece for her.

    Henrik: Isn't it weird that the generation of small children growing up now are going to consider this to be normal and obvious? They'll be the AI natives. They'll be like, "Of course I have my AI agent buddy. There's nothing weird about that at all."

    Tenille: I'll still keep being nice to my coffee machine.

    Henrik: Yeah, that's good. Just in case, you know.

    ---

    Thank you to Henrik Kniberg for joining us on this episode of the Easy Agile Podcast. To learn more about Henrik's work, visit Abundly AI or check out his educational videos on AI and agile practices.

    Subscribe to the Easy Agile Podcast on your favourite platform, and join us for more conversations about agile, product development, and the future of work.