
Easy Agile Podcast Ep.16 Enabling high performing agile teams with Adaptavist


"Really enjoyed my conversation with William and Riz, I'm looking forward to implementing their recommendations with our team" - Angad Sethi

In this episode I spoke with William Rojas and Rizwan Hasan from Adaptavist about the ways we can enable high performing agile teams:

  • The significance of team alignment
  • When and where you should be using tools to assist with your team objectives
  • Prioritizing what conversations you need to be a part of
  • Advice for remote teams

Subscribe/Listen on your favorite podcasting app.

Thanks William & Rizwan!

Transcript

Angad Sethi:

Good afternoon/evening/morning everyone. How you guys going?

Rizwan Hasan:

Oh, good. Thanks Angad.

William Rojas:

Yeah. How are you?

Angad Sethi:

Yeah, really good. Really, really stoked to be having a chat with you guys. Should we start by introducing ourselves? Riz, would you like to take it?

Rizwan Hasan:

Sure. My name's Riz Hasan, I'm based in Brussels, Belgium. Very newly based here, actually used to be based in New York, not too far from William. We usually used to work together on the same team. My role here at Adaptavist is I'm a team lead for our consulting group in EMEA. So in the European region and in the UK. So day to day for me is a lot of internal management, but also working with customers and my consultants on how our customers are scaling agile and helping them with tool problems, process problems, people problems, all the above.

Angad Sethi:

Yeah. Yeah. Sounds awesome.

William Rojas:

As for myself, William Rojas. I'm actually based out of a little suburban town called Trumbull in Connecticut, which is about an hour plus northeast of New York, basically. And as Riz mentioned, yeah, we've worked together for a number of years; we were running an agile transformation and scaling adoption team for Adaptavist. My new role now is actually, I took on a presales principal, basically a presales principal consultant these days. It's actually a new role within Adaptavist, and what we do is, I think most of us are ex-consultants who support the pre-sales process, and work in between the sales team, and the delivery team, and all the other teams that support our clients at Adaptavist.

Angad Sethi:

Awesome, awesome.

William Rojas:

I help find solutions for clients, put together the proposals, and support them through delivery.

Angad Sethi:


I'm Angad, I'm a software developer and I'm working on Easy Agile Programs and Easy Agile Roadmaps, two of the products we offer on the Atlassian Marketplace. We're super excited to speak to you guys about how your teams are operating, like what's the day to day. Riz, would you like to answer that?

Rizwan Hasan:

Sure. Yeah. So apart from like the internal management stuff, I think what's particular to this conversation is how we walk clients through how to navigate planning at scale, right?

Angad Sethi:

Yeah.

Rizwan Hasan:

I'm working with a client right now who's based in the States, but they're acquiring other software companies left and right, which I think is also a trend that's happening within this SaaS ecosystem. And when that happens, they're trying to bring all that work in together. So we're talking through ways of how to visualize all that in an easy way that isn't too heavy upfront with identifying requirements or understanding what systems we want to pull in, but more so, what do you want to pull in? So really, right now, in this phase of the engagement with this client, it's really just those initial conversations about: what are you planning? What are you doing? What's important to you? So it's a lot of conversations about that.

Angad Sethi:

And so you mentioned it's a lot of internal management. Are some of your clients fellow workmates, or are they external clients?

Rizwan Hasan:

They're mostly internal because I manage a team, so I have different people who are working on different types of projects where they might be doing cloud migrations. They might be doing some scripting work. In terms of services, we cover everything within the Atlassian ecosystem, whether it be business related, process related, tool related. So it's a big mix of stuff at all times.

Angad Sethi:

Cool. And is it usually like you're speaking to all the team leads, and giving them advice on agile ceremonies, and pushing work through pipelines and stuff?

Rizwan Hasan:

Yeah, actually, so a story of when I first moved to Brussels, because we've... So professional services started at Adaptavist in the UK, and this was maybe like seven-eight years ago, and it's expanded and myself and William were part of like the first group of consultants who were in North America. That expanded really quickly, and now that we're in EMEA, it's almost like a different entity. It's a different way of working, and a lot of leadership has moved over to North America, so there's new systems and processes and ceremonies and then all that's happening. But because of time zones there's a conflict.


So what I started to do when we got here was to reintroduce some of those habits and consistent conversations, to really be on a much better planning cadence. So interacting with people who would be, say, bringing work to delivery in presales, so folks who work in a similar capacity to William over here in this region, and then also project managers who would be responsible for managing that work. Right? So the equivalent of a scrum master on an engagement, or an RTE on a big engagement. Right?

Angad Sethi:

Yep. Yep. That's awesome. Just one thing I really liked was your terminology. You used conversations over ceremonies, which speaks to the agile mindset in that sense, where you're not just pushing ceremonies on teams, you actually embody being agile. Well, I'm assuming you are from your conversation, but I guess we'll unpack that. What about you, William? What's your [crosstalk 00:06:32]

William Rojas:

I was going to say, one of the interesting challenges that we face is that Adaptavist has an entire branch that does product development, and there are product developers, and product managers, and product marketing, and all sorts of things like that. And they set plans and they focus on delivery and so forth, as you would expect a normal product organization to do. On the consulting side, one of the things that's very interesting is that we have to answer to two bosses, right? Our clients come in and say, "Hey, we need this," and we have to support them. In the meantime, we have a lot of internal projects, internal procedures and processes and things that we want to do as a company, as a practice, but at the same time, we still need to answer to our clients.

Angad Sethi:

I see.

William Rojas:

So that's actually one of the interesting challenges that, from an agile perspective, we're constantly facing: having to balance out between sometimes conflicting priorities. And that is definitely something that consulting teams at all different levels face. Right?

Angad Sethi:

Yeah.

William Rojas:

So as Riz mentioned, we're constantly bringing in more work, and it's like, "Okay, we need you to now adjust and re-plan to do something different," and then manage that. Yes. It's an ongoing challenge that's just part of this world, kind of thing.

Angad Sethi:

Yeah. Okay. I see. And so if I heard that correctly, so it's, I guess you're constantly recommending agile processes, but you may not necessarily get to practice it?

William Rojas:


But more so, we're both practicing it for ourselves as well as trying to help our clients practice it, or trying to adjust.

Angad Sethi:

I see, yeah.

William Rojas:

You know, a client comes in with needs and says, "Okay, now we have to re-plan," or we teach them how to do it, or re-accommodate their new emerging priorities as well. So we ultimately end up having to practice agile with and for our clients, as well as for ourselves. It's that constant rebalancing of having to weave client needs into internal needs, and then the constant re-prioritization that may come as a result of that.

Angad Sethi:

Yeah.

William Rojas:

And then we're constantly looking for like, how do we make this thing more efficient, more effective? How do we really be lean about how we do the work and so forth? That is definitely one thing that we practice. We try to practice that on a daily basis.

Angad Sethi:

Yeah. And I guess that's a very, a tricky space to be... not a tricky space. It can be tricky, I guess, but adding to the trickiness is remote work. Do you guys have a lot of clients who have transitioned to remote work? And I don't know, has it brought to light problems, which can be a good thing, or what's your experience been?

William Rojas:

So that's interesting, because I've been doing consulting for over a couple of decades, and traditionally I've done a lot of that travel warrior thing: every week you travel to the client to do your work, you travel back, and you do that again next week, and you do that month after month. Coming to Adaptavist, Adaptavist has historically always been a remote consulting company. So five years ago, clients would come to us saying, "Okay, we need you to do this." And we're like, "Yeah, we can deliver that. We may come in and do an onsite visit to introduce ourselves, but we can deliver all this work remotely." So we've always had that history.

Angad Sethi:

Okay.

William Rojas:

But nonetheless, when COVID hit and everybody went remote, we definitely saw a whole new set of companies that were now suddenly having to work remotely, and having to establish new processes and practices because they were basically forced to be remote. And I think we've had the fortune of, in a sense, having always been-

Angad Sethi:

Yep, remote from the start.

William Rojas:

... [inaudible].

Angad Sethi:

Yeah.

William Rojas:

I know whenever we bring people into the company, into consulting in particular, that's one of the things we always point out. Remote work is not the same as being in the office. It has its ups and downs. But we've always had that benefit. I think we've been able to assist some of our clients, like, "This is how it's done, this is how we do it." So we've been able to teach by example for some of the clients.

Angad Sethi:

There you go.

William Rojas:

Yeah.

Angad Sethi:

Awesome. That was actually going to be my next question: what's the working structure at Adaptavist, and what sort of processes do you use? I'm sure that it's a big company and therefore there'd be tools and processes particular to the teams themselves. Just from your experiences, what are some of the processes or tools you guys are using?

Rizwan Hasan:

So, in terms of planning and work management, because we started off as a remote first company, and since COVID, business is good. I'll be frank there, it's been good for us because we specialize in this market. We've had a huge hiring spurt in all these different areas, and one thing that I noticed internally, as well as problems that... I wouldn't say problems, but a trend that we're seeing with a lot of other clients is that because of this remote push, and the need for an enterprise to be able to give the teams the tools they need to do their work, there's a lot more flexibility in what they can use, which has pros and cons.

On the pro side, there's flexibility, the teams can work the way they want. On the con side, administration might be difficult, alignment might be difficult. So we're seeing a lot of that with customers and ours. So we're almost going on this journey with customers as we're scaling ourselves, and learning how to navigate this new reality of working in a hybrid environment.


William Rojas:

I think in terms of some of the tooling and so forth that we get to use: internally we're pretty much all in on the Atlassian stack, that is very much how we work every day. All our work is done using Atlassian tools. All our client work is tracked in JIRA, all our sales work, basically everything we do, we use JIRA and Confluence; we're really big on Confluence. We have a lot of customizations we've done to our instance over the years, things that we've just developed, and so that's internal.

I think the other aspect is, depending on the client that comes to us and the type of work that we're doing for that client, the types of tools that we use can pretty much run the full gamut. We do a lot of Atlassian work: a lot of work in JIRA with our clients, work in Confluence. Sometimes we're working on helping them scale, so we bring on some of the add-ons that support scaling practices within JIRA. We'll do a lot of JSM work. We often do DevOps work, and then we'll bring on a lot of the DevOps tool sets that you would expect to find, so things to support delivery pipelines.

So it really depends quite a bit on the client. We even do some agile transformation work, and there we do a lot of custom-built things, practices and so forth. And we bring in surveys and tools that we've been able to develop over the years to support that in particular. So a lot of the tools are often dictated by what the client and the specific engagement call for.

Angad Sethi:

In my personal experience recently with COVID, I find myself in a lot of meetings. We are experimenting with async decision making. Have you experimented with async decision making processes yet?

Rizwan Hasan:

I'll start by saying I hate meetings. I think most meetings are a waste of time, and I tell my team this. And I'm like, "If we don't need to meet, like we're not going to meet."

Angad Sethi:

Yeah. Awesome.

Rizwan Hasan:

And I think that really comes... yeah, awesome, for sure.

Angad Sethi:

I love it.

Rizwan Hasan:

But what it really comes down to is: when you do meet, are you having the right conversation? And I think a key component of being an agile team, quote-unquote, is having an understanding of what we all are doing collectively and what the priorities are. Which is tough to actually get. So when we talk about asynchronous decision making, with a team that has some degree of understanding of what the priorities are, what the goals are, it gets easier. And you can have more low impact interactions with people.


So we use Slack a lot and we have a lot of internal bots on our Slack to be able to present information and collect feedback asynchronously, because there are voting features, there are places where you can comment. And I think when we talk about teams that are growing across the globe, and time zones and flexible working, that's a real thing now. There's a practical question of how to do that, and we're starting to dig into what that looks like.
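
The kind of asynchronous, Slack-based voting Riz describes can be wired up with Slack's interactive buttons. Here's a rough sketch using the Bolt for JavaScript library; the channel name, action IDs, and proposal text are invented for illustration and are not Adaptavist's actual setup.

```typescript
// Sketch: post a proposal with vote buttons so the team can decide asynchronously.
// Assumes SLACK_BOT_TOKEN and SLACK_SIGNING_SECRET are set; channel and action IDs are made up.
import { App } from "@slack/bolt";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

const votes = new Map<string, "yes" | "no">(); // userId -> vote

async function postProposal(channel: string, proposal: string) {
  await app.client.chat.postMessage({
    channel,
    text: proposal,
    blocks: [
      { type: "section", text: { type: "mrkdwn", text: `*Decision needed:* ${proposal}` } },
      {
        type: "actions",
        elements: [
          { type: "button", text: { type: "plain_text", text: "👍 Yes" }, action_id: "vote_yes" },
          { type: "button", text: { type: "plain_text", text: "👎 No" }, action_id: "vote_no" },
        ],
      },
    ],
  });
}

// Record votes whenever someone clicks, no meeting required.
app.action(/vote_(yes|no)/, async ({ ack, body, action }) => {
  await ack();
  const choice = (action as any).action_id === "vote_yes" ? "yes" : "no";
  votes.set(body.user.id, choice);
});

(async () => {
  await app.start(3000);
  await postProposal("#team-decisions", "Adopt a fortnightly planning cadence for EMEA?");
})();
```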

Angad Sethi:

Do you find yourself in a million Slack groups?

Rizwan Hasan:

Yep.

Angad Sethi:

Yep. You do. Do you see any extra hurdles you've got to skip because of that? Because you maybe, do you find yourself hopping from conversation to conversation, whereas it would just be easier if everyone was in the same conversation? Does that happen a bit?

Rizwan Hasan:

Yeah. Yeah. All the time.

Angad Sethi:

I hear you, yeah, there you go. Okay. Cool.

William Rojas:

But I would say we have a lot of impromptu... I think we do have a lot of impromptu meetings. And sometimes we may be in Slack typing away, and it's like, you know what? [crosstalk 00:17:29]

Angad Sethi:

Just jump in a huddle.

William Rojas:

Jump into Zoom, or turn a Slack conversation into a face to face conversation, and just address it then and there. But I think we have been looking at, it's almost like a balance between the time spent on the meeting, the number of people that need to be in the meeting, and the benefit and value that comes out of that meeting. We had a daily meeting where people would pick up work or support requests from a sales perspective. It was very much necessary as part of the work coming into the consulting pipeline, but it felt very inefficient.

So that's one of the meetings, for example, we did away with, and it's now a completely asynchronous process by which work comes in and gets allocated, people pick it up, people support it, we deliver things, we track where things are and so forth. All of that is now basically done through Slack. So we did away with all the meetings around, "Hey, who can help with this?" In the meantime, we have another meeting where we're trying to get people onto projects, and that's very much something we need to negotiate on often. So that's a meeting that's still very much held.


Angad Sethi:

Yep.

William Rojas:

Everybody comes in, we all talk, we decide what we need to get done, people balance back and forth. So that trade off, I think, is really important: to really understand that there are meetings that are necessary, very valuable, and they should remain, and there are ones where Slack is a much better mechanism to be able to make those kinds of decisions.

Angad Sethi:

Yeah. Very true. Yeah. And does it well, sorry, firstly, pardon the location change. I'm sitting right next to the router now, so hopefully the iPhone holds. What sort of a scale are we speaking about here in your Slack? The reason I ask is with larger organizations, it can be harder to scale. Therefore I'm just trying to get a gauge of what scale your Slack is at.

Rizwan Hasan:

So we just hit, we're just over the 500 mark, in terms of employees. That's going basically by our #general channel, which seems to be, I don't want to say universal, but the standard across any organization that has Slack: #general is the best indicator of how many people you have logged on. So we're just about the 500 mark, which I would say is probably around mid-size, but it's definitely getting to the point where we're starting to see it's almost a little bit too much to disseminate information, find information, etc.
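
As an aside, the "#general as a headcount proxy" idea is easy to automate against the Slack Web API. A rough sketch, where the channel ID and token are placeholders:

```typescript
// Rough sketch: count members of #general as a proxy for active headcount.
// Uses the official Slack Web API client; C_GENERAL_ID and the token are placeholders.
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

async function countGeneralMembers(channelId: string): Promise<number> {
  let cursor: string | undefined;
  let total = 0;
  do {
    // conversations.members is paginated, so walk the cursor until it runs out.
    const page = await slack.conversations.members({ channel: channelId, cursor, limit: 200 });
    total += page.members?.length ?? 0;
    cursor = page.response_metadata?.next_cursor || undefined;
  } while (cursor);
  return total;
}

countGeneralMembers("C_GENERAL_ID").then((n) => console.log(`~${n} people in #general`));
```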

We're actually partners with Slack also, so we work with them pretty closely on some opportunities. [crosstalk 00:20:39] Yeah, exactly. And we're starting to talk with customers about the same problem: how much is too much, and when do you start to form communities around people that are delivering the same type of value, so those conversations are more aligned and there's not just a whole lot of chatter, with people getting confused when they read Slack, like, "Oh, is this the priority now? Or am I supposed to be doing this, or has the process changed?" That communication is harder now, I think. And this is what a lot of folks who are moving to this remote environment are struggling with: that alignment communication.

Angad Sethi:

Yeah. Very true.

William Rojas:

And it is, I would say, fairly organic, our channel proliferation. Even for a company of our size, we're pretty loose about how channels get created, who gets to create them, what they're for and so forth. But then it gives you the flexibility, based upon your interests or the context of what you need to communicate on, to either join a channel that supports it or create a channel if necessary to support it. So it is, in that sense, pretty organic. But it is true that there are hundreds, if not thousands, of Slack channels that we have, and so keeping track of which ones you should be on is definitely one of our biggest challenges.


Angad Sethi:

Yeah. Well, that just blows my mind just because like 500 people on a Slack. Our whole company is 35 people and I'm pulling my hair out being in too many Slacks. So well A, that blows my mind.

William Rojas:

It does allow us, for example, to have client specific Slack channels. So anybody, if you need to talk about, if you're working on a particular account, you're working for a client, then there's a channel for that. And if you're working on another client, there's another channel. The thing I find helpful about it is that it gives you that context of if I want to communicate with so and so, if I communicate with Riz on a particular account, I will go to the account channel. If I want to talk to Riz one-on-one, I go to a one-on-one chat.

Angad Sethi:

I see, yep, the flexibility.

William Rojas:

So we do have that benefit of where to put the information. But it does mean that I have probably over a hundred channels in my roster of things that I follow, and I'm always behind.

Angad Sethi:

Yeah.

William Rojas:

Well, yeah. So the next level of it is, then you begin to prioritize which channels you should really be notified about, and which ones are most important; I want to track those. And I try to keep that list to a minimum in terms of unread messages, and the rest is stuff I try to get to when I'm bored and have nothing else to do, but yeah.

Rizwan Hasan:

I've been leaving a lot of channels too. I've been just really cutting the cord with some channels. You know, I had some motivation to really help out there, but I just can't, and it's just too much noise. You've just got to cut the cord and say: if it's empty, if there's no conversation happening, or if it's slow, then move on.

Angad Sethi:

Yep.

William Rojas:

We also have the ability to, you can get added back in. So sometimes you leave and then somebody will put you back in, like, "I need you to talk about this." But it is pretty organic. I know we do leave it up to the individual to decide how best to manage that.


Rizwan Hasan:

Yeah.

Angad Sethi:

That's awesome.

Rizwan Hasan:

We had an instance today, actually, where there was an old, it was basically a sales opportunity, a customer who had reached out to us for a certain ask, and we hadn't heard from them for months, like eight or nine months. And someone posted, someone who I'm pretty close with on our sales team posted, "Hey, this is kicking back up again, but I don't have the capacity." And I just left immediately as I saw that message. I was like, "I can't help out. Sorry."

Angad Sethi:

Yeah. The old so-and-so has left the group is a bit of a stab in the heart, but yeah.

Rizwan Hasan:

Yeah.

Angad Sethi:

We will get over it. Just coming back to a point you mentioned, Riz, you used the words alignment and communication. When the two of you are consulting with clients, are those the two main themes you like to base your recommendations around?

Rizwan Hasan:

I'll give you a very consulting answer and say it depends.

Angad Sethi:

Yeah.

Rizwan Hasan:

But when we engage with a customer, one of the toughest parts of our job is understanding if there is even alignment in the group of people that we're talking to, because at the scale of projects that we sometimes work with, we might have 20 to 25 people on a call. And all of those people may have different motivations or objectives for what they want from their engagement with us. So I would say that's primarily what's driving us: what we're trying to do with them is get some alignment between the group and ourselves, and communicating that is not always easy.

Angad Sethi:

Yeah.


William Rojas:

Adding on to what Riz said, that also depends quite a bit on the specific engagement with that client. If an engagement is like, "Get me onto the cloud," okay, come in; often there's much better alignment for something like that. If the engagements are more about, "Hey, help us scale agile, help us get better at how we deliver," then the need for alignment, the need to make sure that we're all communicating correctly, we all understand, we all come to the meeting with the same objectives and so forth, is so much more critical.

Angad Sethi:

Yeah.

William Rojas:

So in those kinds of engagements, we're constantly realigning. Because it's not even like we had the alignment; it's like, yeah, okay, we have it, and next week it's gone. We've got to go back and get it again. So making sure that everybody's marching towards the same set of objectives, defining what those objectives are, letting them evolve as appropriate and so forth, all that becomes so much more critical.

Angad Sethi:

Yeah.

William Rojas:

And that's where the tools, things like JIRA, come in, and then again: how do we scale? How do we show what everybody's doing? And so forth. That's where it becomes that much more important, and in those kinds of engagements, the tooling becomes essential. Not that the tooling's going to answer it, but the tooling becomes a way by which it helps us communicate: yeah, this is what we all agree we're going to do. Okay, the tool says so because that's the decision we've made.

Angad Sethi:

Yeah.

Rizwan Hasan:

It's really interesting that you say cloud migration, William. When you say, "Okay, I'm moving to cloud," we know what the alignment is, but even then, what I'm finding, especially within the Atlassian ecosystem, because that's what we're exposed to all the time, is that when we're moving data from completely old infrastructure to something brand new, it's not going to be the same. And you have folks who are thinking, "Oh, we're just going to be taking all this stuff from here and putting it over there." But what usually doesn't come along with that is that you're going to have to also change the way you work slightly. There are going to be changes that you're not accounting for.

And that's where the alignment conversation really is important because we work with small companies who understand, okay, moving to the cloud will be completely different. We also work with legacy organizations like financial institutions that have a lot of red tape, and process, and security concerns, and getting that alignment and understanding with them first of what this means to move to a completely different way of working, is also part of that conversation. So it's a constant push and pull with that.

Angad Sethi:

Yeah, yeah. It's really heartwarming to hear the two of you deal with the JCMA, the Jira Cloud Migration Assistant.

Rizwan Hasan:

Quite a bit, yeah.

Angad Sethi:

That's awesome, because yeah, that's something we are working on currently as well. So I'll end with a super hard question, and I'll challenge you guys not to use the word depends in your answer. The question is: what's the number one piece of advice for remote teams practicing agile? Starting with you, Riz.

Rizwan Hasan:

Get to know each other.

Angad Sethi:

Yeah, okay.

Rizwan Hasan:

Keep it personal. I think one of the hardest things about this new reality is making that connection with someone, and when you have that, that builds trust, and when you have trust, everything's a lot easier. So I'd say that. People really aren't... The enemy. That's not the right word, but work shouldn't be a conflict. It should be more of like a negotiation, and if you trust each other, it's a lot easier to do that.

Angad Sethi:

Yeah.

Rizwan Hasan:

So yeah.

Angad Sethi:

That's awesome.

William Rojas:

It really is.

Angad Sethi:

I'm going to definitely take that back with me.

William Rojas:


Yeah. And just if I could quickly add to that: it's like looking for ways to replace the standing around, having a cup of coffee together. How do you replace that in a remote setting?

Rizwan Hasan:

Yeah.

Angad Sethi:

Yeah.

William Rojas:

How do you still have that personal interaction, where maybe there's an electronic medium in between, but there's still sort of that personal setting? I think that's one of the things you're looking for. Because yeah, it is very much about trust. And to that, I would also add, back to the alignment, right? Because in some ways that strong interaction helps build and maintain the alignment, because often it's not so much that you get alignment, it's that you stay aligned.

So it is this constant thing, and having those interactions, having that trust and so forth, is what in a sense allows us to stay aligned. Because we know each other, we know how to help each other, we support each other, so we stay in alignment. So the trust and so forth are a good way to help build and maintain the alignment itself that you're looking for. That's absolutely key. In a remote world, you don't have the benefit of seeing each other, the whiteboard; all those things are not the same.

Angad Sethi:

Very true. Getting a cup of coffee, yep.

William Rojas:

But we still need to stay in sync with what needs to get done. That's so important.

Angad Sethi:

Very true. And so would you guys want to drop any names of tools you're using to facilitate that trust between team members in a remote setting?

William Rojas:

So I would say, like I mentioned, from my role, one of the things that we do in the presales area is support some of our larger accounts, almost as more of a solution account manager, per se. So we come in and help make sure that the client is getting the solution that is meant to be delivered. So we work with the delivery teams, we work with the client, we sit in between.

There's one large client that we've been working with for years now, and they're moving towards some flavor of SAFe. I wouldn't call it fully SAFe, but they do have a lot of SAFe practices, and they do PI planning, and so we come in and join the PI planning. That's actually one of the, like I said, how do you stay aligned?

Angad Sethi:

That circle. Yeah. [crosstalk 00:33:15]


William Rojas:

You pull up your program definition, you look at what features you want to deliver in the PI, who's going to deliver each feature in the PI, and then in your readout, you go back to the tool and say, "Look, this is what we've agreed to." Others can ask questions and so forth, and you're constantly going back to it. For example, just last week, we're doing sprint planning and saying, "Actually, okay, this feature's going to drag on another sprint. Let me go back and readjust," and this client is using Easy Agile Programs, so we adjust the original plan: this feature is not going to take two sprints, but three sprints instead, for example.

So that habit of using the tool to communicate what we decided, and what we've just had to make changes to, means it becomes a communication vehicle, and that's really important. Yeah, they use Programs, they use the roadmap piece of Programs to help them do their PI planning, and stay in sync with what ultimately gets communicated out at the end of PI planning, and then during the sprints of the PI itself, and it's very helpful for them. Again, I think they have seven trains, and they all use that to help stay in sync, stay aligned.

Angad Sethi:

Awesome. Awesome.

William Rojas:

One other quick thing I'll say is, I think some of where we've gone will now become the status quo, become permanent. This has been a shift across the market, across the industry, across companies, in how people work. So the idea of remote work, the idea of using tooling to really establish communication and help facilitate communication, all of that, while it's been around, I think the big difference is that now you have no choice. Everybody has to do it.

Angad Sethi:

Has to. Yeah.

William Rojas:

And I think we've definitely seen a big shift across the entire industry because of that. That will now solidify and let's see what the next level brings. But I definitely think that we've reached a new stage of maturity and so forth pretty much globally, which is pretty cool.

Angad Sethi:

Yeah.

Rizwan Hasan:

Yeah.

Angad Sethi:

Yeah, it is. Thank you guys. I won't keep you too long. I think, has the sun set there, Riz? I can see the reflection going dark.


Rizwan Hasan:

Yeah. It is getting there. Yeah, for sure.

Angad Sethi:

Yeah. Yeah. I won't hold you guys for too long.

Rizwan Hasan:

All good.

Angad Sethi:

But thank you so much for the conversation. I honestly, I took a lot away from that. And yeah, I hope I can add you guys to my LinkedIn. I would love to be in touch still.

William Rojas:

Definitely.

Rizwan Hasan:

Yeah, sure.

Angad Sethi:

Yeah. Trying to establish a point of contact, not to add to one of your Slack channels, but yeah. Just so that we can be in conversation regarding the product and improving it.

Rizwan Hasan:

Yeah, sure. And we have a partner management channel. I know we've been talking to Haley a little bit.

Angad Sethi:

Awesome.

Rizwan Hasan:

She was reaching out, that's about some other stuff.

Angad Sethi:

Beautiful.

Rizwan Hasan:

Yeah, happy to. We engage with your product and it's in our white papers too, and we're going to put out another white paper this year where we're going to talk about Easy Agile too. So yeah. We'll stay in touch.

Angad Sethi:

Cool.

William Rojas:

I just gave you my details; my LinkedIn is under a different email, it's not with my work email, because that way I can keep the same account from place to place.

Angad Sethi:

Sounds good.

William Rojas:

Yeah. You can look me up on LinkedIn with that.

Angad Sethi:

Wicked awesome. Thanks guys.

William Rojas:

Awesome. All right.

Angad Sethi:

Have a good day.

Related Episodes


    Easy Agile Podcast Ep.12 Observations on Observability

    On this episode of The Easy Agile Podcast, tune in to hear developers Angad, Jared, Jess and Jordan, as they share their thoughts on observability.  

    Wollongong has a thriving and supportive tech community and in this episode we have brought together some of our locally based Developers from Siligong Valley for a round table chat on all things observability.

    💥 What is observability?
    💥 How can you improve observability?
    💥 What's the end goal?

    Angad Sethi

    "This was a great episode to be a part of! Jess and Jordan shared some really interesting points on the newest tech buzzword - observability""

    Be sure to subscribe, enjoy the episode 🎧

    Transcript

    Jared Kells:

    Welcome everybody to the Easy Agile podcast. My name's Jared Kells, and I'm a developer here at Easy Agile. Before we begin, Easy Agile would like to acknowledge the traditional custodians of the land from which we broadcast today, the Wodiwodi people of the Dharawal nation, and pay our respects to elders past, present and emerging, and extend that same respect to any Aboriginal people listening with us today.

    Jared Kells:

    So today's podcast is a bit of a technical one. It says on my run sheet here that we're here to talk about some hot topics for engineers in the IT sector. How exciting that we've got a couple of primarily front end engineers and Angad and I are going to share some front end technical stuff and Jess and Jordan are going to be talking a bit about observability. So we'll start by introductions. So I'll pass it over to Jess.

    Jess Belliveau:

    Cool. Thanks Jared. Thanks for having me on as well. So yeah, my name's Jess Belliveau. I work for Apptio as an infrastructure engineer. Yeah, Jordan?

    Jordan Simonovski:

    I'm Jordan Simonovski. I work as a systems engineer in the observability team at Atlassian. I'm a bit of a jack of all trades, tech wise. But yeah, working on building out some pretty beefy systems to handle all of our data at Atlassian at the moment. So, that's fun.

    Angad Sethi:

    Hello everyone. I'm Angad. I'm working for Easy Agile as a software dev. Nothing fancy like you guys.

    Jared Kells:

    Nothing fancy!

    Jess Belliveau:

    Don't sell yourself short.

    Jared Kells:

    Yeah, I'll say. Yeah, so my name's Jared, and yeah, senior developer at Easy Agile, working on our apps. So mainly, I work on programs and road maps. And yeah, they're front end JavaScript heavy apps. So that's where our experience is. I've heard about this thing called observability, which I think is just logs and stuff, right?

    Jess Belliveau:

    Yeah, yeah. That's it, we'll wrap up!

    Jared Kells:

    Podcast over! Tell us about observability.

    Jess Belliveau:

    Yeah okay, I'll, yeah. Well, I thought first I'd do a little thing of why observability, why we talk about this, and sort of, for people listening, how we got here. We had a little chat before we started recording to try and feel out something that might interest a broader audience that maybe people don't know a lot about. And there's a lot of movement in the broad IT scope, I guess, that you could talk about. There are so many different things now that are just blowing up. Observability is something that's been a hot topic for a couple of years now. And it's something that's a core part of my job and Jordan's job as well. So it's something easy for us to talk about, and it's something that you can give an introduction to without getting too technical. This is something where you can go really far into the weeds, so we picked it as something that hopefully we can explain at a level that might interest the people at home listening as well.

    Jess Belliveau:

    Jordan and I figured out these four bullet points that we wanted to cover, and maybe I can do the little overview of that, and then I can make Jordan cover the first bullet point, just throw him straight under the bus.

    Jordan Simonovski:

    Okay!

    Jess Belliveau:

    So we thought we'd try and describe to you, first of all, what is observability. Because the term doesn't give you much of a sense of what it is. It gives you a little hint, but it'll be good to baseline what we're talking about when we say observability. And then, why would a development team want observability? Why would a company want observability? Sort of high level, what sort of benefits you get out of it and who may need it, which is a big thing. You can get caught up in these industry hot buzzwords and commit to stuff that you might not need, that sort of thing.

    Jared Kells:

    Yep.

    Jordan Simonovski:

    Yep.

    Jess Belliveau:

    We thought we'd talk about some easy wins that you get with observability. So some of the real basic stuff you can try and get, and what advantages you get from it. And then, because we're not going to try and get too deep, we could just give a few pointers to some websites and some YouTube talks for further reading that people want to do, and go from there. So yeah, Jordan, you want to-

    Jared Kells:

    Sounds good.

    Jess Belliveau:

    Yeah. I hopefully, hopefully. We'll see how this goes! And I guess if you guys have questions as well, that's something we should, if there's stuff that you think we don't cover or that you want to know more, ask away.

    Jordan Simonovski:

    I guess to start with observability, it's a topic I get really excited about, because as someone that's been involved in the dev ops and SRE space for so long, observability's come along and promises to close the loop, or close a feedback loop, on software delivery. And it feels like it's something we don't really have at the moment. And I get that observability maybe sounds new and shiny, but I think the term itself exists to differentiate itself from what's currently out there. A lot of us working in tech know about monitoring and alerting and things like that. And I think they serve their own purpose, and they're not in any way obsolete either, things like traditional monitoring tools. But observability's come along as a way to understand, I think, the overwhelmingly complex systems that we're building at the moment. A lot of companies are probably moving towards some kind of complicated distributed systems architecture, microservices, other buzz words.

    Jordan Simonovski:

    But even for something like a traditional kind of monolith, observability really serves to help us ask new questions of our systems. So the way it tends to get explained is: monitoring exists for our known unknowns. With seniority comes the ability to almost predict in what way your systems will fail. So you'll know, the longer you're in the industry, that a Java server fails in x, y, z amount of ways, so we should probably monitor our JVM heap, or whatever it is.

    Jared Kells:

    I was going to say that!

    Jordan Simonovski:

    I'll try not to get too much into-

    Jared Kells:

    Runs out of memory!

    Jordan Simonovski:

    Yeah. So that's something that you're expecting to fail at some point, and that's something that you can consider a known unknown. But then, the promise of observability is that we should be shipping enough data to be able to ask new questions. So the way it tends to get talked about is, it's an unknown unknown of our system that we want to find out about and ask new questions of. And that's where I think observability gets introduced, to answer these questions. Is that a good enough answer? Do you want me to go any further into detail about this stuff? I can talk all day about this.

    Jared Kells:

    Is it like a [crosstalk 00:08:05]. So just to repeat it back to you, see if I've understood. Is it kind of like, traditionally with a Java app, I might monitor memory, because I know JVMs run out of memory and that's a thing that I monitor, but observability is more broad, like going almost over the top with what you monitor and log so that you can-

    Jordan Simonovski:

    Yeah. And I wouldn't necessarily say it's going over the top. I think it's maybe adding a bit more context to your data. So if any of you have worked with traces before, observability is very similar to the way traces work, and just builds on top of the premise of traces, I guess. So you're creating these events, and these events are different transactions that could be happening in your applications, usually submitting some kind of request. And with that request, you can add a whole bunch of context to it. You can add which server this might be running on, which time zone, all of these additional attributes and so on. You can throw the user agent in there if you want to. The idea of observability is that you're not necessarily constrained by high cardinality data; high cardinality data being data sets that can change quite largely in terms of the kinds of data they represent, or the combinations of data sets that you could have.

    Jordan Simonovski:

    So if you want to ship metrics on something on a per user basis, and you want to look at how different users are affected by things, that would be considered a high cardinality metric. And a lot of the time it's not something that traditional monitoring companies or metric providers can really give you as a service. That's where you'll start paying insanely huge bills on things like Datadog or whatever it is, because they're now being considered new metrics. Whereas with observability, we try and store our data and query it in a way that we can store pretty vast data sets and say, "Cool, we have errors coming from these kinds of users." And you can start to build up correlations on certain things there. You can find out that only users from a particular time zone or a particular device are experiencing that error. And from there, you can start building up, I think, better ways of understanding how a particular change might have broken things, or some particular edge cases that you otherwise couldn't pick up on with something like CPU or memory monitoring.
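
    As a concrete illustration of the "wide event with lots of context" idea Jordan is describing, a hand-rolled version might look something like the sketch below; the field names and the stdout emit are invented for the example.

    ```typescript
    // Sketch: emit one wide, context-rich event per request instead of a bare counter.
    // High-cardinality fields like userId are exactly what traditional metrics stores struggle with.
    interface RequestEvent {
      timestamp: string;
      service: string;
      route: string;
      durationMs: number;
      status: number;
      userId: string;      // high cardinality
      userAgent: string;   // high cardinality
      region: string;
      deployVersion: string;
      error?: string;
    }

    function emitEvent(event: RequestEvent): void {
      // In practice this would go to your observability backend; stdout keeps the sketch self-contained.
      console.log(JSON.stringify(event));
    }

    emitEvent({
      timestamp: new Date().toISOString(),
      service: "boards-api",
      route: "/api/issues/:id/move",
      durationMs: 182,
      status: 500,
      userId: "user-8812",
      userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X)",
      region: "eu-west-1",
      deployVersion: "2024.03.1",
      error: "OptimisticLockError",
    });
    ```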

    Angad Sethi:

    Would it be fair to say-

    Jared Kells:

    Yeah. It's [crosstalk 00:11:02].

    Angad Sethi:

    Oh, sorry Jared.

    Jared Kells:

    No you can-

    Angad Sethi:

    Would it be fair to say that, so, observability is basically a set of principles or a way to find the unknown unknowns?

    Jordan Simonovski:

    Yeah.

    Angad Sethi:

    Oh.

    Jess Belliveau:

    And better equip you to find them. One of the things I find is that a lot of people get caught up in thinking observability is a thing that you can deploy and have and tick a box, but I like your choice of words, it being a set of principles or best practices. It's sort of giving you some guidance around having good logging coming out of your application. So structured logs, so you're always getting the same log format that you can look at. Tracing, which Jordan talked a little bit about, giving you that ability to follow how a user is interacting with all the different microservices and possibly seeing where things are going wrong. And metrics as well. The good thing with metrics is we're turning things around a bit: instead of doing, and I don't want to get too technical, black box monitoring, where we're on the outside trying to peer in with probes and checks, the idea with metrics is that the application is actually emitting these metrics to inform us what state it is in, thereby making it more observable.
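
    To make the "application emits its own metrics" point concrete, here is a small illustrative sketch using the prom-client library for Node; the metric names, labels, and port are made up for the example.

    ```typescript
    // Sketch: white-box monitoring - the app itself emits metrics rather than being probed from outside.
    import http from "node:http";
    import client from "prom-client";

    const httpRequests = new client.Counter({
      name: "app_http_requests_total",
      help: "Total HTTP requests handled, by route and status code",
      labelNames: ["route", "status"],
    });

    const requestDuration = new client.Histogram({
      name: "app_http_request_duration_seconds",
      help: "Request latency in seconds",
      labelNames: ["route"],
    });

    // Inside a request handler you would record what actually happened:
    function recordRequest(route: string, status: number, seconds: number) {
      httpRequests.inc({ route, status: String(status) });
      requestDuration.observe({ route }, seconds);
    }

    // Expose everything for a scraper (e.g. Prometheus) to collect.
    http.createServer(async (_req, res) => {
      res.setHeader("Content-Type", client.register.contentType);
      res.end(await client.register.metrics());
    }).listen(9091);

    recordRequest("/login", 200, 0.123);
    ```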

    Jess Belliveau:

    Yeah, I like your choice of words there, Angad, that it's like these practices, this sort of guide of where to go, which probably leads into this next point of why would a team want to implement it. If you want to start again, Jordan?

    Jordan Simonovski:

    Yeah, I can start. And I'll give you a bit more time to speak as well, Jess in this one. I won't rant as much.

    Jess Belliveau:

    Oh, I didn't sign up for that!

    Jordan Simonovski:

    I think why teams would want it is because, it really depends on your organization and, I guess, the size of the teams you're working in. Most of the time, I would probably say you don't want to build observability yourself in house. Observability capabilities themselves, you won't achieve them just by buying a thing; like you can't buy dev ops, you can't buy Agile, you can't buy observability either.

    Jared Kells:

    Hang on, hang on. It says on my run sheet to promote Easy Agile, so that sounds like a good segue-

    Jess Belliveau:

    Unless you want to buy it. If you do want to buy Agile, the [crosstalk 00:13:55] in the marketplace.

    Jared Kells:

    Yeah, sorry, sorry, yeah! Go on.

    Jordan Simonovski:

    You can buy tools that make your life a lot easier, and there are a lot of things out there already which do stuff for people and do surface really interesting data that people might want to look at. I think there are a couple of start ups like LightStep and Honeycomb, which give you a really intuitive way of understanding your data in production. But why you would need this kind of stuff is that you want to know the state of your systems at any given point in time, and to build, I guess, good operational hygiene and good production excellence, I guess as Liz Fong-Jones would put it, is you need to be able to close that feedback loop. We have a whole bunch of tools already. So we have CICD systems in place. We have feature flags now, which help us, I guess, decouple deployments from releases. You can deploy code without actually releasing code, and you can actually give that power to your PM's now if you want to, with feature flags, which is great.

    Jordan Simonovski:

    But what you can also do now is completely close this loop, and as you're deploying an application, you can say, "I want to canary this deployment. I want to deploy this to 10% of my users, maybe users who are opted in for Beta releases or something of our application, and you can actually look at how that's performing before you release it to a wider audience. So it does make deployments a lot safer. It does give you a better understanding of how you're affecting users as well. And there are a whole bunch of tools that you can use to determine this stuff as well. So if you're looking at how a lot of companies are doing SRE at the moment, or understanding what reliable looks like for their applications, you have things like SLO's in place as well. And SLO's-

    Jared Kells:

    What's an SLO?

    Jordan Simonovski:

    They're all tied to user experiences. So you're saying, "Can my user perform this particular interaction?" And if you can effectively measure that and know how users are being affected by the changes you're making, you can easily make decisions around whether or not you continue shipping features, or if you drop everything and work on reliability to make sure your users aren't affected. So it's this very user centric approach to doing things. I think in terms of closing the loop, observability gives us that data to say, "Yes, this is how users are being affected. I guess the 99th percentile of our users are fine, but we have 1% who are having adverse issues with our application." And you can really pinpoint stuff from there and say, "Cool, users with this particular browser, or in this particular region we've deployed this app to." Let's say you have a global deployment of some kind and you've deployed to an island first, because you don't really care what happens to them. You can say, "Oh, we've actually broken stuff for them," and you can roll it back before you impact 100% of your users.
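
    The "release to 10% of users first, then watch and roll back" pattern Jordan mentions often reduces to a deterministic hash bucket under the hood. A toy sketch, not tied to any particular feature-flag product:

    ```typescript
    // Toy sketch of percentage-based canary rollout: hash the user ID into a stable bucket
    // so the same user always gets the same answer, then compare against the rollout percentage.
    import { createHash } from "node:crypto";

    function isInRollout(featureKey: string, userId: string, rolloutPercent: number): boolean {
      const digest = createHash("sha256").update(`${featureKey}:${userId}`).digest();
      const bucket = digest.readUInt32BE(0) % 100; // 0..99, stable per (feature, user)
      return bucket < rolloutPercent;
    }

    // Release the new code path to 10% of users, watch the error-rate dashboards,
    // then ramp up (or roll back) without redeploying.
    const userId = "user-8812";
    if (isInRollout("new-board-renderer", userId, 10)) {
      console.log("serve new code path");
    } else {
      console.log("serve existing code path");
    }
    ```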

    Jared Kells:

    Yeah. I liked what you said about the test. I forgot the acronym, but actually testing the end user behavior. That's kind of exciting to me, because we have all these metrics that are a bit useless. They're cool, "Oh, it's using 1% CPU like it always is, now I don't really care," but can a user open up the app and drag an issue around? It's like-

    Jess Belliveau:

    Yeah, that's a really great example, right?

    Jared Kells:

    That's what I really care about.

    Jess Belliveau:

    The 1% CPU thing, you could look at a CPU usage graph and see a deployment, and the CPU usage doesn't change. Is everything healthy or not? You don't know. Whereas if you're getting that deeper level of info on the user interactions, you could be using 1% CPU to serve HTTP 500 errors to 80% of the customer base, sort of thing.

    Angad Sethi:

    How do you do that? The SLO's bit, how do you know a user can log in and drag an issue?

    Jordan Simonovski:

    Yeah. I think that would come with good instrumenting-

    Angad Sethi:

    Good question?

    Jordan Simonovski:

    Yeah, it comes down to actually keeping observability in mind when you are developing new features, the same way you would think about logging a particular thing in your code as you're writing it, or writing tests for your code as you're writing code as well. You want to think about how you can instrument something and how you can understand how this particular feature is working in production. Because I think, as a lot of Agile and dev ops principles are telling us now, we do own our applications in production. And as developers, our responsibilities don't end when we deploy something. Our responsibility as a developer ends when we've provided value to the business. And you need a way of understanding that you're actually doing that. And that's where, I guess, you do need to think about observability with a lot of this stuff, and actually measure your success metrics. So if you know that your application is successful if your user can log in and drag stuff around, then that's exactly what you want to measure.

    Jared Kells:

    I think that we have to build-

    Jordan Simonovski:

    Yeah?

    Jared Kells:

    Oh, sorry Jordan.

    Jordan Simonovski:

    No, you go.

    Jared Kells:

    I was just going to say, we have to build our apps with integration testing in mind already, doing browser based tests around new features. So it would be about building features with that same thing in mind, but for testing in production.

    Jess Belliveau:

    Yeah, and for the actual how, the actual writing code part, there's this really great project, the OpenTelemetry project, which provides all these APIs and SDKs that developers can consume, and it's vendor agnostic. So when you talk about the how, like, "How do I do this? How do I instrument things?" or, "How do I emit metrics?", they provide all these helpful libraries and includes that you can use, because the last thing you want to do is have to roll a custom solution, because you're then just adding to your technical debt. You're trying to make things easier, but you're then relying on, "Well, I need to keep Jared Kells employed, because he wrote our login engine and no one else knows how it works."
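
    As a rough example of the vendor-agnostic instrumentation Jess is describing, the OpenTelemetry JavaScript API lets you wrap an operation in a span and attach context without caring which backend eventually receives it; the tracer name, span name, and attributes below are made up.

    ```typescript
    // Rough sketch using the vendor-neutral OpenTelemetry API: the SDK/exporter configured elsewhere
    // decides whether spans end up in Datadog, Honeycomb, Jaeger, etc.
    import { trace, SpanStatusCode } from "@opentelemetry/api";

    const tracer = trace.getTracer("boards-service");

    async function moveIssue(issueId: string, userId: string): Promise<void> {
      await tracer.startActiveSpan("issue.move", async (span) => {
        span.setAttribute("issue.id", issueId);
        span.setAttribute("user.id", userId); // high-cardinality context travels with the trace
        try {
          // ... the actual work goes here ...
        } catch (err) {
          span.recordException(err as Error);
          span.setStatus({ code: SpanStatusCode.ERROR });
          throw err;
        } finally {
          span.end();
        }
      });
    }

    moveIssue("EA-42", "user-8812");
    ```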

    Jess Belliveau:

    And then the other thing that comes to mind with something like open telemetry as well, and we talked a bit about Datadog. So Datadog is a SaaS vendor that specializes in observability. And you would push your metrics and your logs and your traces to them and they give you a UI to display. If you choose something that's vendor agnostic, let's just use the example of Easy Agile. Let's say they start Datadog and then in six months time, we don't want to use Datadog anymore, we want to use SignalFx or whatever the Splunk one is now.

    Jordan Simonovski:

    I think NorthX.

    Jess Belliveau:

    Yeah. You can change your end point, push your same metrics and all that sort of stuff, maybe with a few little tweaks, but the idea is you don't want to tie in to a single thing.

    Jordan Simonovski:

    Your data structures remain the same.

    Jess Belliveau:

    Yeah. So that you could almost do it seamlessly without the developers knowing. There's even companies in the past that I think have pushed to multiple vendors. So you could be consuming vendor A and then you want to do a proof of concept with vendor B to see what the experience is like and you just push your data there as well.

    Jared Kells:

    Yeah. I think our coupling to Datadog will be all the dashboards and stuff that we've made. It's not so much the data.

    Jess Belliveau:

    Yeah. That's sort of the big up sell, right. It's how you interact. That's where they want to get their hooks in, is making it easier for you to interpret that data and manipulate it to meet your needs and that sort of stuff.

    Jordan Simonovski:

    Observability suggests dashboards, right?

    Jess Belliveau:

    Yeah, perhaps. You used this term as well, Jordan, "production excellence." And when we talk about who needs observability, I was thinking a bit about that while you were talking. For me, production excellence, or at Apptio we call it production readiness or operational readiness, is about: we want to deploy something to production, so what sort of best practices do we want to have in place before we do that? And I think observability is a really great idea, because it's helping you in the future. You don't know what problems you're going to have down the line, but you're equipping your teams to be able to respond to those problems easily. Whereas, we've all probably been there, we've deployed code to production and we have no observability, and we have a huge outage. What went wrong? Well, no one knows, but we know this is the fix, and it's hard to learn from that, or you have to learn from that I guess, and protect the user against future stuff, yeah.

    Jess Belliveau:

    When I think of easy wins for observability, the first thing that really comes to mind is this whole idea of structured logging, which is really the idea that your application is logging, first of all. Quite important as a baseline starting point. But then you have a structured log format which lets you programmatically parse the logs as well. If you go back in time, maybe logging just looked like plain text with a line, with a timestamp, an error message, whatever the developer decided to write to standard out, or to the error file, or something like that. Now I think there's a general move to having JSON, an actual formatted blob with that known structure, so you can look into it. Tracing's probably not an easy win; that's a little bit harder. You can implement it with OpenTelemetry and libraries and stuff. It requires a bit more understanding of your code base, I guess, and where you want tracing to fire, and that sort of stuff, passing context through, things like that.
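
    For the structured-logging easy win Jess mentions, the difference is roughly the sketch below; pino is just one common structured logger for Node, and the fields are illustrative.

    ```typescript
    // pino is one widely used structured logger for Node; any JSON logger gives the same win.
    import pino from "pino";

    const logger = pino({ name: "auth-service" });

    // Unstructured: a free-form line only a human (or a fragile regex) can parse.
    console.log("2022-03-01T03:12:45Z ERROR login failed for user-8812 after 3 attempts");

    // Structured: the same information as JSON that a log pipeline can filter and aggregate on.
    logger.error(
      { userId: "user-8812", attempts: 3, route: "/login", requestId: "req-19af" },
      "login failed"
    );
    // => {"level":50,"time":...,"name":"auth-service","userId":"user-8812","attempts":3,...,"msg":"login failed"}
    ```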

    Jordan Simonovski:

    I think at the start, you probably just want to know that everything is okay, at a fairly superficial level. Maybe you just want to do some kind of uptime trend. And then as, I guess, your code gets more complex or your product gets a bit more complex, you can start adding things in there. But I think actually knowing, or surfacing, the things you know might break, those would probably be your quickest wins.

    Jess Belliveau:

    Well, let's mention some things for further reading. If you want to get the whole picture, observability really started to get a lot of momentum out of the Google SRE book from a few years ago. The Google SRE stuff covers the whole gamut of their site reliability engineering practice, and observability is a portion of that; there's some great chapters on that. O'Reilly has an observability book, I think, just dedicated to observability now.

    Jordan Simonovski:

    I think that's still in early release, if people want to go and read the early chapters.

    Jess Belliveau:

    The OpenTelemetry stuff, we'll drop a link to that. I think that's really handy to know.

    Angad Sethi:

    From [inaudible 00:26:12], which is my perspective, as a developer, say I wanted to introduce something like Datadog at Easy Agile. I'm not very familiar with it, I'm not very comfortable with it. I know how to navigate it, but what's a quick way for me to get started on introducing observability at my job, or at my workplace?

    Jordan Simonovski:

    I would lean... I could be biased here, Jess, correct me or give your opinion on this, but I would lean heavily towards SLOs for this. And you can have a quick read in the SRE-

    Jess Belliveau:

    What does SLO stand for, Jordan?

    Jordan Simonovski:

    Okay, sorry. Buzz words! SLO is a service level objective, not to be confused with a service level agreement. An agreement itself is contractual and you can pay people money if you do breach those. An SLO is something you set in your team and you have a target of reliability, because we are getting to the point where we understand that all systems at any point in time are in some kind of degraded state. And yeah, reliability isn't necessarily binary, it's not unreliable or reliable. Most of the time, it's mostly reliable, and this gives us a better shared language, I guess. And you can have a read in the SRE handbook by Google, which is free online, which gives you a pretty good understanding of them.

    Jordan Simonovski:

    And Datadog, I think, the last time I used it, had an SLO offering. But I think, like I was mentioning earlier, you set an SLO on particular functionalities or features of your application. You're saying, "My user can do this 99% of the time," or whatever other reliability target you might want to set. I wouldn't recommend five nines of reliability. You'll probably burn yourself out trying to get there. And you have this target set for yourself. And you know exactly what you're measuring, you're measuring particular types of functionality. And you know when you do breach these, users are being affected. And that's where you can actually start thinking about observability. You can think about, "What other features are we implementing that we can start to measure?" Or, "What user facing things are we implementing that we can start to measure?"

    Jordan Simonovski:

    Other things you could probably look at are, I think they're all covered in the book anyway, data freshness in a way. You want to make sure the data users are being displayed is relatively fresh. You don't want them looking at stale data, so you can look at measuring things like that as well. But you can pretty much break it down into most functionalities of a website. It's no longer like a ping check, that you're just saying, "Yes, HTTP, okay. My application is fine." You're saying, "My users are actually being affected by things not working." And you can start measuring things from there. And that should give you a better understanding, or a better idea, at least, of where to start with what you want to measure and how you want to measure it. That would be my opinion on where to get started with this if you do want to introduce it.
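
    As a rough illustration of the SLO idea Jordan outlines, here is a small sketch of checking a simple availability SLI against a 99% target over a window of requests. The 99% target, the event shape, and the function names are assumptions made up for the example.

    ```typescript
    // "Users can complete this operation successfully 99% of the time."
    interface RequestOutcome {
      ok: boolean; // did the user-facing operation succeed?
    }

    const SLO_TARGET = 0.99; // illustrative target, not a recommendation

    function sliCompliance(outcomes: RequestOutcome[]): number {
      if (outcomes.length === 0) return 1; // no traffic, nothing breached
      const good = outcomes.filter((o) => o.ok).length;
      return good / outcomes.length;
    }

    function errorBudgetRemaining(outcomes: RequestOutcome[]): number {
      // Error budget = allowed failure rate (1 - target) minus observed failure rate.
      const observedFailureRate = 1 - sliCompliance(outcomes);
      const allowedFailureRate = 1 - SLO_TARGET;
      return allowedFailureRate - observedFailureRate; // negative means the SLO is breached
    }

    // Example: 995 good requests out of 1000 stays inside a 99% SLO.
    const sampleWindow: RequestOutcome[] = Array.from({ length: 1000 }, (_, i) => ({ ok: i >= 5 }));
    console.log(sliCompliance(sampleWindow));        // 0.995
    console.log(errorBudgetRemaining(sampleWindow)); // 0.005 of allowed failure rate left
    ```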

    Jared Kells:

    We're going to talk a little bit about state and how, with some of these very front end heavy applications that we're building... So the applications we build basically just run inside the browser, and the traditional state as you would think about it is just calling a very simple API that writes some things into the database with some authentication, and that sort of stuff. So in terms of reliability of the services, it's really reliable. Those tiny APIs just never have problems, because they're just so simple. And well, they've got plenty of monitoring around them. But all our state is actually... when you say, "Observe the state of the system," for the most part, that's state in a browser. And how do we get observability into that?

    Jess Belliveau:

    A big thing is really, there's not one thing that fits all as well. When we talk about the SLO stuff as well, it's understanding what is important to, not so much maybe your company, but your team as well. If you're delivering this product, what's important to you specifically? So one SLO that might work for me at Apptio probably isn't going to work for Easy Agile. This is really pushing my knowledge as well of front end stuff, but when we say we want to observe the state as well, we don't necessarily mean specifically just the state. You could want to understand, with each one of those APIs, when it's firing, what the request response time is for that API firing. So that might be an important metric. So you can start to see if one of those APIs is introducing latency, and so your user experience is degraded. Like, "Hey, when we were on release three, when users were interacting with our service here, it would respond in this percentile latency. We've done a release and since then, now we're seeing it's in this percentile. Have we degraded performance?" Users might not be complaining, but that could be something that the team then can look into, add to a sprint. Hey, I'm using Agile terms now. Watch out!
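
    As a sketch of the latency idea Jess raises, here is one way a front end might time each API call so response-time percentiles can be compared from release to release. The `reportMetric` helper and the `/metrics` endpoint are placeholders for whatever pipeline you actually use; nothing here is specific to Apptio or Easy Agile.

    ```typescript
    // Wrap fetch so every API call records how long it took.
    async function timedFetch(input: RequestInfo, init?: RequestInit): Promise<Response> {
      const start = performance.now();
      try {
        return await fetch(input, init);
      } finally {
        const durationMs = performance.now() - start;
        reportMetric("api.request.duration", durationMs, { url: String(input) });
      }
    }

    function reportMetric(name: string, value: number, tags: Record<string, string>): void {
      // Placeholder transport: send to a hypothetical /metrics endpoint.
      // A real setup would batch these or hand them to your observability vendor's SDK.
      navigator.sendBeacon?.("/metrics", JSON.stringify({ name, value, tags, ts: Date.now() }));
    }
    ```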

    Jared Kells:

    That's a really good example, Jess. Performance issues for us are typically not an API that's performing poorly. It's something in this very complicated front end application that's not running in the same order as it used to, or there's some complex interaction we didn't think of, so it's requesting more data than expected. The APIs are returning; they're never slow, for the most part, but we have performance regressions that we may not know about without seeing them or investigating them. The observability is really at the individual user's browser level. Does that make sense? I want to know how long did it take for this particular interaction to happen.

    Jess Belliveau:

    Yeah. I've never really done that side of things. The other thing, I guess, that you could potentially be impacted by then is you're dealing with end users' environments as well. You could perceive-

    Jared Kells:

    Yeah sure.

    Jess Belliveau:

    ... Better or worse performance depending on their laptop or something, or their ISP or that sort of stuff. It'd be really hard to make sure you're not getting noise from that sort of thing as well.

    Jordan Simonovski:

    Yeah. There are tools like Sentry, I guess, which do exist to give you a bit more of an understanding of what's happening on your front end. The way Sentry tends to work with JavaScript is you'll upload a source map for your minified JS to Sentry, deploy your code, and then if something does break or work in a fairly unexpected way, that tends to get surfaced, and Sentry will tell you exactly which line this kind of stuff is happening on, and it's a really cool tool for that kind of stuff. I don't know if it'd give you the right type of insights, I think, in terms of performance or-

    Jared Kells:

    Yeah, we use a similar tool and it does work for crashes and that sort of thing. And on the observability front, we log actions like state mutations inside the front end, not the actual state change, but just labels that represent that you updated an issue summary or you clicked this button, that sort of thing, and we send those with our crash reports. And it's super helpful having that sort of observability. So I think I know what you guys are talking about. But I'm just [crosstalk 00:35:25], yeah.
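
    For readers curious what that pattern can look like, here is a minimal sketch of keeping a bounded trail of action labels (not the state itself) and attaching it to a crash report. The buffer size, endpoint, and function names are all illustrative, not a description of Easy Agile's actual implementation.

    ```typescript
    const MAX_BREADCRUMBS = 50;
    const breadcrumbs: { label: string; at: number }[] = [];

    // Call this from wherever state mutations or user actions are dispatched,
    // e.g. recordAction("updated issue summary") or recordAction("clicked save").
    export function recordAction(label: string): void {
      breadcrumbs.push({ label, at: Date.now() });
      if (breadcrumbs.length > MAX_BREADCRUMBS) breadcrumbs.shift(); // keep only recent actions
    }

    export function reportCrash(error: Error): void {
      void fetch("/crash-report", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          message: error.message,
          stack: error.stack,
          recentActions: breadcrumbs, // the trail of what the user did before the crash
        }),
      });
    }
    ```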

    Jess Belliveau:

    Yeah, that's almost like, I guess, a form of tracing. For me and Jordan, when we talk about tracing, we might be thinking about 12 different microservices sitting in AWS that are all interacting, whereas you're more shifting that. That's sort of all stuff in the browser interacting and just having that history of this is what the user did and how they've ended up-

    Jared Kells:

    In that state.

    Jess Belliveau:

    In that state, yeah.

    Jordan Simonovski:

    I guess even if you don't have a lot of microservices, if you're talking about particular, like you're saying for the most part your API requests are fine but sometimes you have particularly large payloads-

    Jared Kells:

    We actually have to monitor, I don't know, maybe you can help with this, we actually should be monitoring maybe who we're integrating with. It's actually much more likely that we'll have a performance issue with a Xero API rather than... We don't see it, only the browser sees it, which is-

    Jordan Simonovski:

    Yeah, and tracing does surface all of those regressions for you. Most tracing libraries, like if you're running Node apps or whatever on your backend... I can just tell you about Node, because I probably have the most experience writing Node stuff. You pretty much just drop in dd-trace, which is a Datadog library for tracing, into your backend, and it hooks itself into all of, I think, the common libraries that you'll tend to work with. Like if you're working with Express or a lot of the HTTP libraries, as well as a few AWS services, it will kind of hook itself into that. And you can actually pinpoint... It will show you, on this pretty cool service map, exactly which services you're interacting with and where you might be experiencing a regression. And I think traces do serve to surface that information, which is cool. So that could be something worth investigating.
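
    For anyone wanting to try this, here is roughly what wiring up dd-trace in a Node service looks like. The service and env values are placeholders, and the exact set of libraries that get auto-instrumented depends on your dd-trace version, so treat this as a sketch and check Datadog's docs for your setup.

    ```typescript
    // tracer.ts — import this before anything else in your entry point,
    // because dd-trace instruments libraries as they are loaded.
    import tracer from "dd-trace";

    tracer.init({
      service: "my-backend", // illustrative service name
      env: "production",
      logInjection: true,    // correlate traces with structured logs, if you emit them
    });

    export default tracer;
    ```

    With that in place, the libraries it supports (Express, common HTTP clients, some AWS services) get traced automatically, which is what surfaces the service map Jordan mentions.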

    Jess Belliveau:

    It's funny. This is a little bit unrelated to observability, but you've just made me think a bit more about how you're saying you're reliant on third party providers as well. And something I think that's really important that sometimes gets missed is so many of us today are relying on third party providers; like AWS is a huge thing. A lot of people are writing apps that require AWS services. And I think a lot of the time, people just assume AWS or Jira or whatever is 100% uptime, always available. And they don't write their code in such a way that deals with failures. And I think it's super important. So many times now I've seen people using the AWS API and they don't implement exponential backoff. And so they're basically trying to hit the AWS API, it fails or they might get throttled, for example, and then they just go into a fail state and throw an error to the user. But you could potentially improve that user experience, have a retry mechanism automatically built in and that sort of stuff. It doesn't really tie into the observability thing, but it's something.
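
    A generic retry-with-exponential-backoff helper for the situation Jess describes might look something like the sketch below. The attempt count, delays, and jitter are arbitrary example values, and in practice you would only retry errors that are actually retryable (throttling, timeouts), which is glossed over here.

    ```typescript
    async function withBackoff<T>(
      fn: () => Promise<T>,
      maxAttempts = 5,
      baseDelayMs = 200
    ): Promise<T> {
      let lastError: unknown;
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
          return await fn();
        } catch (err) {
          lastError = err;
          if (attempt === maxAttempts - 1) break; // no point sleeping after the final attempt
          // Exponential backoff with jitter: ~200ms, 400ms, 800ms... plus a random spread
          // so many clients don't retry in lockstep.
          const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
      throw lastError; // only now does the failure surface to the caller
    }

    // Usage: wrap the flaky third-party call, e.g.
    // const data = await withBackoff(() => thirdPartyClient.getThing(params));
    ```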

    Jared Kells:

    And the users don't care, right? No one cares if it's an AWS problem. It's your problem, right, your app is too slow.

    Jess Belliveau:

    Well, they're using your app. Exactly right. It reflects on you sort of thing, so it's in your interest to guard against an upstream failure, or at least inform the user when it's that case. Yeah.

    Jared Kells:

    Well, I think we're going to have to call it, this podcast, because it's been an hour. We were told max 45 minutes.

    Jess Belliveau:

    We could just keep going. We might need a part two! Maybe we can request [cross talk 00:39:21].

    Jared Kells:

    Maybe! Yeah.

    Jess Belliveau:

    Or we'll just start our own podcast! Yeah.

    Jared Kells:

    So what were your biggest learnings today? Angad and I have just been learning about observability. Angad, what was your biggest learning today about observability?

    Angad Sethi:

    My biggest learning was that observability does not equal Datadog. No, sorry! It was just very fascinating to learn about quantifying the known unknowns. I don't know if that's a good takeaway, but...

    Jess Belliveau:

    Any takeaway is a good takeaway! What about you, Jared?

    Jared Kells:

    I think, because we were going to talk about state management, and part of it was how we have this ability at the moment, the way our front ends are architected, to capture the state of the app and get a customer to send us their state, basically. And we can load it into our app and just see exactly how it was, just the way our state's designed. But what might be even cooler is to build maybe some observability into that front end for support. I'm thinking, instead of just having... we have this button to send us your support information that sends us a bunch of the state, but instead of console logging to the browser log, we could be logging in our front end somewhere, so that when they click "send support information," our customers could be sending us the actions that they performed.

    Jared Kells:

    Like, "Hey there's a bug, send us your support information." It doesn't have to be a third party service collecting this observability stuff. We could just build into our... So that's what I'm thinking about.

    Jess Belliveau:

    Yeah, for sure. It'll probably be a lot less intrusive, as well, than some of the third party stuff that I've seen around.

    Jared Kells:

    Yeah. It's pretty hard with some of these integrations, especially if you're developing apps that get run behind a firewall.

    Jess Belliveau:

    Yeah

    Jared Kells:

    You can't just talk to some of these third parties. So yeah, it's cool though. It's really interesting.

    Jess Belliveau:

    Well, I hope someone out there listening has learned something, and Jordan and I will send some links through, and we can add them, hopefully, to the show notes or something so people can do some more reading and...

    Jared Kells:

    All thanks!

    Jess Belliveau:

    Thanks for having us, yeah.

    Jared Kells:

    Thanks all for your time, and thanks everybody for listening.

    Jordan Simonovski:

    Thanks everyone.

    Angad Sethi:

    That was [inaudible 00:41:55].

    Jess Belliveau:

    Tune in next week!

    Easy Agile Podcast Ep.20 The importance of the Team Retrospective

    "It was great chatting to Caitlin about the importance of the Team Retrospective in creating a high performing cross-functional team" - Chloe Hall

    In this episode, I was joined by Caitlin Mackie - Content Marketing Coordinator at Easy Agile.

    In this episode, we spoke about:

    • Looking at the team retrospective as a tool for risk mitigation rather than just another agile ceremony
    • The importance of doing the retrospective on a regular cycle
    • Why you should celebrate the wins
    • Taking the action items from your team retrospective to your team sprint planning
    • Timeboxing the retrospective
    • Creating a psychologically safe environment for your team retrospective

    I hope you enjoy today's episode as much as I did recording it.

    Transcript

    Chloe Hall:

    Hi, everyone. Welcome to the Easy Agile Podcast. I'm Chloe, Marketing Coordinator at Easy Agile, and I'll be your host for today's episode. Before we begin, we'd like to acknowledge the traditional custodians of the land from which I am recording today, the Wodi Wodi people of the Dharawal speaking nation, and pay our respects to elders past, present, and emerging. We extend that same respect to all Aboriginal and Torres Strait Islander peoples who are tuning in today. So today, we have a bit of a different episode for you. I'm going to be talking with Easy Agile's very own Content Marketing Coordinator, Caitlin Mackie. Caitlin is the Product Owner of our Brand and Conversions Team. Now this team is a cross-functional team who have only been together for roughly six months. And within their first few months as a team, mind you they also had two brand new employees, they worked on a company rebrand.

    Chloe Hall:

    A new team, a huge task; the possibility of the team being high performing seemed unlikely at this point in time. The team was too new to have already formed that trust, strong relationships, and psychological safety, but somehow they came together, managed to work together, created a flow of continuous improvement and shipped this rebrand. So, I've brought Caitlin onto the podcast today to discuss the team's secret for success. Welcome to the podcast, Caitlin.

    Caitlin Mackie:

    Thanks, Chloe. It's a bit different sitting on this side. I'm used to being in your shoes. I feel [inaudible 00:01:45]. I feel uncomfortable. [inaudible 00:01:46].

    Chloe Hall:

    Yeah. It's my first time hosting as well, so very strange. Isn't it? How are you feeling today?

    Caitlin Mackie:

    Yeah. Good. I'm excited. I'm excited to chat about our experience coming together as a cross-functional Agile team, and hopefully share some of the things that worked for us with our listeners.

    Chloe Hall:

    Yes, I know myself, and I'm sure our audience is very excited to hear what your team's secret to success was. Did you want to start off by telling us what was this big secret that really helped you work together as a team?

    Caitlin Mackie:

    That's a great question, Chloe. And that's a big question. I'm not sure if there's one key thing, I suppose, that is that ultimate secret sauce or that one thing that led to the success. I'm sure we all want to hear what that is. I would also love to know if there's just this one key ingredient, but I think something for us, and probably one of the most memorable things that really worked for us, and there was a lot for us to benefit from doing this, was actually doing our retrospectives. So that's probably the first thing that comes to mind when it comes to what led to our success.

    Chloe Hall:

    Okay. Yeah. In the beginning, why did you start doing the retrospectives?

    Caitlin Mackie:

    So, we were a new forming team, like you mentioned before, and we saw retrospectives as another Agile ceremony, and we saw other teams doing it and they were having a lot of success from it, so we began to jump on that bandwagon. And I think with being a new forming team, there are so many things that come into play. So, you're trying to figure each other out, how we all like to work and communicate with each other, all of that. And we were the first ever team dedicated to owning and improving our website. And we also knew it was likely that we'd be responsible for designing and launching a rebrand.

    Caitlin Mackie:

    So when you try and stitch all of that together, and then consider all those elements, we knew that we needed to reserve some time to be able to quickly iterate and call out what works and what doesn't. And what we did understand is that retrospectives are a great opportunity for the whole team to get together and uncover any problematic issues and have an open discussion aimed at really identifying room for improvement, or calling out what's working well, so we can continue to do that. So, I think retros allowed us to understand where we can have the most impact and how to be a really effective cross-functional Agile team.

    Chloe Hall:

    Wow. That is already so insightful. Yeah, it sounds like the retrospectives really helped you to gain that momentum into finding who your team is, becoming a well-working, high-performing cross-functional team. So, how often were you doing the retro? Were you doing this on a regular cycle, or was it just, "Okay. We have a problem. Some blockers have come up, we need to do a retro"?

    Caitlin Mackie:

    Yeah. I think initially we kind of viewed retros as this thing where like, "Oh, we've done a few sprints now. We should probably do a retro and just reflect on how those few sprints went." It was always at the back of our mind, and we knew we needed to do it, but we weren't really sure about the cadence and the way to go about it. So now, we do retros on a Friday morning, which is the last day of our weekly sprint. And then we jump into sprint planning after that, after a bio break as well, to let the team digest everything we talked about in the retrospective. And then we come into sprint planning with all the topics that were discussed, and we have a really nice, fresh perspective.

    Chloe Hall:

    Yeah.

    Caitlin Mackie:

    So, I think this works really well for us because everything is happening in a timely manner. We've just had a discussion about the best things that happened in the sprint or what worked really well, so you want to make sure you can practice the same behavior in the following sprint, and vice versa for the improvements that you want to make. So, that list of action items that comes out of a retrospective provides really nice context. And you have them all in mind during sprint planning.

    Caitlin Mackie:

    So for example, in the previous sprint, it might have come up that you underestimated your story points or there wasn't enough detail on your user stories. So, with each story or task that you are bringing into the sprint, you're then asking the question, is everyone happy with the level of detail? What are we missing? Or, we've only story pointed this as a two, is it more likely to be a five? So, everything is really fresh in your mind, and I definitely think that helps create momentum, when you've got the whole team working to figure out how you can be more effective with every sprint.

    Chloe Hall:

    That's such a great point that you just made Caitlin. And I love how going from doing the team retrospective, that you actually can take those action items and go into your sprint and put them into place straight away. It's really good. Otherwise, I feel like if you do the sprint retrospective on the Friday, and you're like, "Okay, these are our action items," get to Monday sprint planning and you're just thinking of the weekend. That [inaudible 00:07:20]

    Caitlin Mackie:

    Yeah, a hundred percent. It's super fresh in everyone's mind. So, it might not work for every team, but we find it works really well for us, because we're being really deliberate with how we approach sprint planning.

    Chloe Hall:

    Yeah. And then with that, I could see how doing the retro, how it could easily go over time, but then your team has sprint planning scheduled after. So, it's like you can't go over time. How have you managed to kind of time box that retrospective?

    Caitlin Mackie:

    Yeah, that's a really, really good question. And it is on purpose as well that they are scheduled closely together. So, as mentioned above, the discussion you've had in the retrospective provides a nice momentum going into sprint planning, but it does mean we have to watch the clock. And initially, this can be quite awkward, because you want to make sure that everyone feels heard and that everybody has the same opportunity to contribute. And I think this responsibility falls on the scrum master, or the product owner, or whoever's facilitating the retrospective to call it out and make sure everyone has the chance to be heard. You'll naturally have people tell the longer story or add a lot of extra context before getting to the point. And then you'll have others that will be a lot more direct. And I'm a lot like the former. I struggle to get to the point, which doesn't work well when you're trying to time box a retrospective, right?

    Chloe Hall:

    And I can relate, same personality.

    Caitlin Mackie:

    Yes. So with this, I think it really comes down to communicating the expectation and the priority from the get go. With our team, and with any team, you want to figure out how you can perform really well and continually improve to exceed expectations and be better and learn and grow together. And I think you all share that same mindset going into the retrospective and acknowledge that it's a safe space to have difficult conversations. And as long as you're communicating with empathy, the team knows that it's never anything personal, and it's all in the best interest of the team. And that then helps the less direct communicators, like myself, address their point more concisely and really forces them to be more deliberate and succinct with their communication style.

    Caitlin Mackie:

    And that's really key to being able to stick to that time box, I think. And it does take practice, because it comes down to creating that psychological safety in your team. But once that's there, it's so much easier to call out when someone's going down a windy track, and bring the focus back and sort of say, "I hear you, what's the action item?" And just become a lot more deliberate.

    Chloe Hall:

    Wow. I couldn't even imagine how hard it would be, with the personalities that yourself and I have, just trying to be so direct and get rid of all the fluffy stuff. I mean, look at what it's done to form such an amazing team that we have. So, you mentioned that aspect of psychological safety before. How do you think, being in a new cross-functional team... Only six months together, you had those new employees, do you think you were able to create a psychologically safe space at any point?

    Caitlin Mackie:

    That's another fantastic question. And I feel like, honestly, it would be best to have a team discussion around this. It'd be interesting to hear everybody's perspectives around what contributes to that element of psychological safety and if everybody feels the same. So, I can't speak for the team, but my personal opinion on this or personal experience is that creating an environment of psychological safety really comes down to a mutual trust and respect. And at the end of the day, we all share the same goal. So, we all really, really respect what each other brings to the table and understand how all of these moving parts that we are working on individually all come together to achieve the goal. So, when we're having these open discussions in retros, or not even in retros, just communicating in general really, it's clear that we're asking questions in the best interest of the team and individual motives never come into play, or people aren't just offering their opinion when it's unwarranted or providing feedback, or being overly critical when they weren't asked to do so.

    Caitlin Mackie:

    So, none of those toxic behaviors happen, because we all respect that whatever piece of work is in question or the topic of discussion, the person owning that work, at the end of the day, is the expert. And we trust them, and we don't doubt each other for a second. And I think the other half of that is that we're also really lucky that if something doesn't go as we planned, we're all there to pick each other up and go again. So, this ties quite nicely into one of our values at Easy Agile, which is commit as a team. And this is all about acknowledging that we grow and succeed when we do it together, and to look after one another and engage with authenticity and courage. So, I may be biased, but I wholeheartedly believe that our team completely embraces that. And there's just such an admiration for what we all bring to the table, and I think that's really key to creating the psychological safety.

    Chloe Hall:

    I love that your team is really embracing our value, commit as a team, and putting it into place, because that's what we're all about at Easy Agile, and it's just so great to see it as well. I think the other thing that I wanted to address was... So again, within this cross-functional team, you've got design and dev. How do you think retros assisted you in working out what design and dev needed from each other?

    Caitlin Mackie:

    For sure. So, for some extra context for our listeners as well, in our team we've got two developers, Haley and David, a designer, Matt, and myself, who's in marketing. So, we're very much a cross-functional little mini team. We all have the same goal and that same focus, but we're also all working on these little individual components that we then stitch together. So, I think... by doing retros regularly, what we were able to identify was a really effective design and development cycle. So, we figured out a rhythm for what one another needed at certain points. For example, something we discovered really early was making sure that we didn't bring design and dev work into the same sprint. We needed to have a completely finished design file before dev starts working on it. And that might sound really obvious, but initially we thought, "Oh, well, if you have a half finished design file, dev can start working on that. And by the time that's done, the rest of the design file will be done."

    Caitlin Mackie:

    But what we failed to acknowledge is that by doing that, we weren't leaving enough capacity to iterate or address any issues or incorporate feedback on the first part of that design file. Or if dev started working on it and design then gets told, "Oh, this part right here, it's not possible," the designer is back working on the first part. And it just creates a lot of these roadblocks. So in retros, this came up and we were able to raise it and understand what design needed from dev and what dev needed from design in order to make sure we weren't blockers for each other. And the action item out of the retro was that we all agreed that a design file had to be completely finished before dev picks up the work.

    Chloe Hall:

    I think it's so great that you were able to identify these blockers early on. Do you think like doing the retro on a weekly reoccurring basis was able to bring up those blockers quickly, or do you think it wouldn't have made a difference?

    Caitlin Mackie:

    No, definitely. I, a hundred percent, think that retros allowed us to address the blockers in a way more timely and effective manner. And we kind of touched on that before, but yeah, retros let you address the blockers, unpack them, understand why they're happening and what we need to do to make sure they don't happen again. So, for sure.

    Chloe Hall:

    Yeah. Yeah. I guess I want to talk a little bit now about the wins, the very exciting part of the retro, the part that we all love. So, how important do you think the wins are within the retro?

    Caitlin Mackie:

    So important. So, so, so important. It's like, when you achieve something epic as a team, you have to call it out. Celebrate all the wins, big, small. Some weeks will be better than others, but embrace that glass half full mentality. And there's always something to be proud of and celebrate, so call it out amongst each other, share it with the whole company, publicly recognize it. Yeah, I think it's so important to embrace the wins. It just sort of creates a really positive atmosphere when you're in the team, makes everybody feel heard and recognized for their really positive contribution that they're making. And I think a big thing here as well is that if you've achieved something epic as a team, it's helpful for other teams to hear that as well, right? You figured out a cool new way to do something, share it. If it helped you as a team, it's most likely going to help another team.

    Caitlin Mackie:

    So I think celebrating the wins isn't even just reserved for work stuff either, right? If somebody's doing something amazing outside of work or hit a personal goal, get behind it.

    Chloe Hall:

    Yeah.

    Caitlin Mackie:

    So, celebrate all the wins, always.

    Chloe Hall:

    Yeah. And I think it's so good how you mentioned that it's vital to celebrate the wins of someone's personal life as well, because at the end of the day, we're all human beings. Yes, we come to work, but we do have that personal element. And knowing what someone's like outside of work as well is an element to creating that psychological safe space and team bonding, which is so vital to having a good team at the end of the day. Yeah.

    Caitlin Mackie:

    Yeah, a hundred percent. Yeah, you hit the nail on the head with that. We talked about psychological safety before, and I definitely think incorporating that, acknowledging that, yeah, we are ourselves at work, but we also have a whole other life outside of that too, so just being mindful of that and just cheering each other on all the time. That's what we've got to do, be each other's biggest cheerleaders.

    Chloe Hall:

    Yeah, exactly. That's the real key to success. Isn't it?

    Caitlin Mackie:

    Yeah, that's it. That's the key.

    Chloe Hall:

    So, you've been working really well as a new cross functional, high performing Agile team. How do you think... What is your future process for retros?

    Caitlin Mackie:

    We will for sure continue to do them weekly. It's part of the Agile manifesto that we want to focus on responding to change, and I think retros really allow us to do that. It's beneficial and really valuable for the team. And when you can set the team up for success, you're going to see the positive impact that has across the organization as a whole. So yeah, we've found a nice cadence and a rhythm that works for us. So, if it ain't broke, don't fix it.

    Chloe Hall:

    Yeah.

    Caitlin Mackie:

    Is that what they say? Is that the saying?

    Chloe Hall:

    I don't know. I think so, but let's just go with it. [inaudible 00:19:02], don't fix it.

    Caitlin Mackie:

    There we go. Yeah.

    Chloe Hall:

    You can quote Caitlin Mackie on that one.

    Caitlin Mackie:

    Quote me on that.

    Chloe Hall:

    Okay, Caitlin. Well, there's just one final thing that I want to address today. I thought end of the podcast, let's just have a little bit of fun, and we're going to do a little snippet of Caitlin's hot tip. So, for the audience listening, I want you to think of something that they can take away from this episode, an action item that they can start doing within their teams today. Take it away.

    Caitlin Mackie:

    Okay. Okay. All right. I would say always have the retrospective. Don't skip it. Even if there's minimal items to discuss, new things will always come up. And you have to regularly provide ways for the team to share their thoughts. And I'll leave you with, always promote positive dialogue and show value and appreciation for team ideas and each other. That's my-

    Chloe Hall:

    I love that.

    Caitlin Mackie:

    That's my hot tip.

    Chloe Hall:


    Thanks, Caitlin. Thanks for sharing. I really like how you said always promote positive dialogue. I think that is so great. Yeah. Well, thanks, Caitlin. Thanks for jumping on the podcast today and-

    Caitlin Mackie:

    Thanks, Chloe.

    Chloe Hall:

    Yeah. Sharing your team's experience with retrospectives and your new cross-functional team. It's been really nice hearing from you, and there's so much that our audience can take away from what you've shared with us today. And I hope that we've truly inspired everybody listening to get out there and implement the team retrospective on a regular basis. So, yeah, thank you.

    Caitlin Mackie:

    Thank you so much, Chloe. Thanks for having me. It was fun, fun to be on this side. And I hope everyone enjoys this episode.

    Chloe Hall:

    Thanks, Caitlin.

    Caitlin Mackie:

    Thanks. Bye.

    Easy Agile Podcast Ep.8 Gerald Cadden Strategic Advisor & SAFe Program Consultant at Scaled Agile Inc.

    Sean Blake

    Gerald shared that companies often face the same challenges over and over again when it comes to implementing agile, but the real and most crucial challenge is overcoming a fixed mindset.

    "Gerald helps massive companies work better together while keeping teams focused on people and on the customer. I'll be revisiting this episode."

    Gerald also highlights the difference between consultants & coaches, and the value of having good mentors + more

    I loved this episode and know you will too!

    Be sure to subscribe, enjoy the episode 🎧

    Transcript

    Sean Blake:

    Hello, and welcome to this episode of the Easy Agile Podcast. Sean Blake here with you today. And we've got a great guest for you: it's Gerald Cadden, a Strategic Advisor and SAFe Program Consultant Trainer at Scaled Agile, Inc. Gerald is an experienced business and IT professional, Strategic Advisor and SAFe Program Consultant Trainer (SPCT) at Scaled Agile. Thanks, Gerald. Welcome to the Easy Agile Podcast. It's really great to have you on as a guest today, and thank you for spending a bit of time with us and sharing your expertise with our audience on the Easy Agile Podcast.

    Sean Blake:

    So I'm really interested... I'm interested in this story, as I am for all the guests that we have on the podcast, but can you tell me a little bit about your career to date? I find that people find their way to these Agile roles or the Agile industry through so many diverse types of jobs in the past. Some people used to be plumbers or tradies, or they worked in finance or in banking. How did you find your way into working at somewhere like Scaled Agile?

    Gerald Cadden:

    Good morning, Sean. Thanks for having me here guys. I'm very happy to be here with you guys today. Career things are always an interesting question. I'm 53, and so when I look back I wonder, how did I get to where I am? And you can often look at just a series of fortunate events. I worked in retail shoe stores and then I decided to do something with my life. Did an IT diploma, then did a degree, and I started working on the IT side. I pretty much started as a developer because that was where the money was and so that's where you wanted to go. I didn't stay as a developer long. Okay. All right. I was a terrible developer, so I wasn't good at it. It was frustrating.

    Gerald Cadden:

    I moved into some pre-sales work and that led me to doing business analysis, and I really liked the BA work because I got to work with people and see changes. I could work with the developers and still got to work really directly with the customer, which was much more interesting for me. So I spent a lot of time in BA doing the development work, doing business process reengineering, then transitioned over to Rational Unified Process when it was around. I spent countless hours writing use cases, doing UML diagrams, convincing people on how to make the changes on those. And then Agile came along and I had to make a complete brain switch. So all of this stuff that I'd learned and depended on as a BA suddenly disappeared, because Agile didn't require that as an upfront way of working. It required that to be in the background if you wanted it, and it was more a collaboration.

    Gerald Cadden:

    So around 2004, 2005, I started working with Agile a lot more. By this time I was living in the U.S., so that's where I got my Agile experience, stayed there for a long time, got great experience, and then I moved over to working with SAFe around 2011. The catalyst for that was that I was working for a large financial firm in New York with a team there, and we were redesigning a large methodology for them to implement Agile at scale. I went to a seminar in 2011 at an Agile conference, saw Dean Leffingwell's presentation on SAFe, and just looked up and went, "Well, we can stop working on our methodology. It's done."

    Gerald Cadden:

    So shortly after that session I ran outside and tackled Dean Leffingwell, because I wanted him to look at my diagrams and everything and give me some affirmation that I was doing the right thing. Dean has got a very frank face, and he pulled his frank face and he looked at me and just said, "You know what? Just use SAFe." And I'm like, "Yeah, we will." And so I started my SAFe journey around that time, we implemented it at that financial company, and I've been on that journey ever since.

    Sean Blake:

    So take us back 10 years ago to 2011. You're working at this financial company, you've heard of this concept of SAFe really for the first time, and you started to implement it. How did the people at that company respond to you bringing in this new way of thinking, this new framework? It sounded like you already had the diagrams and the frameworks and the concepts forming in your mind. Did you find that an easy process? I think I already know the answer, but how complex was that, to try and introduce SAFe for the first time into an organization of that magnitude?

    Gerald Cadden:

    Yeah, this was a very large financial firm, a very old financial firm, so very traditional ways of working. So what's interesting is the same challenges SAFe comes up against today were present before SAFe even began. The same challenges of past management approaches trying to move to faster ways of working were still there. So as we were furiously drawing diagrams in Visio, trying to create models for people to understand, it was hard to create a continuum of knowledge and education that would get people to move from the mindset they had to the mindset we wanted them to have. And it was an evolving journey for myself and the team that I was working with. I worked with a really great guy, his name is Algona, a very, very smart man.

    Gerald Cadden:

    And so the two of us were always scratching our heads as to how to get the management to change their minds. And we focused on education, but it was still a big challenge. I finished on the project as they started with SAFe. I moved to a different management role in the company, and the work continued there. Michael Stump, he used to work for Scaled Agile, I think he works now at a different company, but he continued a lot of that work and did a really good job, and they did implement SAFe. They made changes, but they faced all the same challenges. The management mindset, overcoming moving away from the silos to a more network-structured organization. Just the tooling, just the simple things, was still a challenge and it's still a challenge today. So the nature of the organization is still evolving even in the modern day Agile world.

    Sean Blake:

    You mentioned there that part of the challenge is around mindset and education. Have you found any shortcuts into how you change a team's mindset? The way they approach their work, the way that they approach working with other teams in that organization? I assume the factor of success has a lot to do with, has the team changed their mindset on the way they were working before and now committed to this new way of working? And can you talk to us a little bit about how do you go about changing a team's mindset?

    Gerald Cadden:

    Maybe I'll change the direction of your question here, because what I've found is usually you don't have to work too hard to change the mindset of a team. Most of the teams are really eager to try new things and be innovative. You only come across some people in teams whose career path maybe has got them to a certain point where they're happy with the way the world is and they don't want to change. The mindset you really need to change is around that leadership space, and that's still true today. So the teams will readily adapt if management can create the environment that allows them to do it and if they can be empowered. But it's really... If you want to enable the team, it's getting the leadership around them to change their mindset, to change the structures that are constraining the teams from doing the best job they can.

    Gerald Cadden:

    And so that for me was the big discovery as you went along, and it's still true today. As Agile has been evolving, I've noticed that people don't always put leadership at the top of the list of challenges, but for me it's always been at the top of the list. A lot of people want to look at leadership and say unflattering things about them, but you have to remember these are human beings. And the best way to come to leadership is to really begin with a conversation, help them understand. They know the challenges, but we need to help them understand what's causing the issues that are creating those challenges.

    Gerald Cadden:

    As you work with them and educate them, you can open their minds up a little more. Does that mean they'll actually change? Not necessarily. Political motivations, ideologies, other things constrain leadership from moving. But conversations and education, I think, are the way to really approach leadership. And getting to know them as a person, take an interest in their challenges, take an interest in them as an individual. So creating that social bond is an important thing. As a consultant that was always hard to do, because as a consultant you're always seen as an external force, and it's hard to build that somewhat social relationship with that leadership and build that trust.

    Sean Blake:

    Yeah, that's so true. Isn't it. I remember on an Agile transformation that I was on previously, how the Agile coach really would spend just as much time with the leadership team as they would with us, the Agile team. And it seems strange that the coach was spending so much time trying to really coach the leadership team on how they should think about this new way of working, but you put it in the right context there. It's so important that they create that environment for their people and for their teams to feel safe in trying something new. Yeah, that's really important.

    Gerald Cadden:

    I think if you looked at how Agile evolved, when you look at the creation of the Agile manifesto and its principles, and then the following frameworks like Scrum, XP, et cetera, it evolved from a team perspective. So everybody made the assumption that we needed to create these things for the teams to follow, but as people worked with teams they found that it wasn't the teams at all; the teams adapt, but the management and the structures of the organizations were not adapting. And so that's really where it went.

    Gerald Cadden:

    I can't recall the number of countless Scrum implementations I worked on where you just hit that ceiling of organizational challenges. And it was always very frustrating for the teams. I think there's an opposite side to that too, which is that too many in the Agile world just look at the teams as the center of the world, and you can't approach it that way either; the teams are very important to delivering value to the customers, but it's the organization as a whole that delivers value. And I think you really have to sit back and just say, "The teams are part of that, how do we change the organization inclusive of the teams?"

    Sean Blake:

    Okay. That's really interesting. Gerald, you've spoken a bit about teams and mindset, when you go into an organization, a big auto manufacturer or a big airline or a financial services company and they're asking for your help, or they're asking for your training, how do you assess where that organization is up to? What's their level of maturity from an Agile point of view? Do you have organizations that are coming to you who have in their mind that they're ready to go SAFe and then you turn up on day one and it turns out no one has any real idea about what that type of commitment looks like?

    Gerald Cadden:

    Yeah, it's a good question. Because I think as I look back at the history of this, in 2011, 2012 when SAFe really got going, as you went forward, I mean, there was no concept of where to begin. Consultants were just figuring it out for themselves, and like most consulting or most methodologies, they got engaged in an IT space and at the team level. And people would try to grow from the team level upwards. And at some point... I've struggled a lot with this, because I was just trying to figure out where to start. So my consulting hat was always on to sit down, talk to people about their challenges, find a way to help figure out how to solve the challenges, whether it was going to be Scrum or SAFe or whatever was going to be right.

    Gerald Cadden:

    Those are just tools in the toolbox. But when Scaled Agile, as I was working with... Excuse me, as I was working with SAFe, Scaled Agile brought out the implementation roadmap. It produced so much more clarity. That came later in my time with SAFe and I wish it had come earlier, because it really began to help me clarify that initial thing that we call getting over the tipping point. How to work with the organization you're talking to, work with the right people, understand their challenges, help them understand what causes those problems, which is the more traditional ways of working, the traditional management mindset, help them connect SAFe as a way to overcome those challenges and begin to show them. If you looked at the roadmap it's this contiguous step-by-step thing, but what you find in reality is there are gaps between those steps, and in those gaps is the time you as a transition team are having lots of conversations with the management.

    Gerald Cadden:

    If you put them through a training class, they're not going to come out of the class going, "Oh, wow, that's it. We know what to do." It takes follow-up conversation. You have to have one-on-one and one-on-many conversations, cover topics again so you can remove the assumptions or, sorry, the misassumptions. So it's a lot of that kind of work. The roadmap is there, so for those who are implementing SAFe today, use it. It is one of the most helpful tools you'll have.

    Sean Blake:

    Awesome. Yeah. I think just acknowledging the difference between the tools in the toolbox and then the fact that you're dealing with humans, and you're dealing with attitudes and motivations and behaviors and habits, there's two very different things there really. It sounds like you need to take them all together on that journey.

    Gerald Cadden:

    Yeah. An aside to that: we train so many SPCs, SAFe Program Consultants. We're training them out of classes all the time, with us and our partners. The thing is, you can teach them about the framework, but you can't necessarily teach them how to be a good consultant or a good... I want to say I use the terms consultant and coach, right?

    Sean Blake:

    Yes.

    Gerald Cadden:

    Sometimes I like to say a good consultant can be a good coach, but a good coach can't necessarily be a good consultant, because there's another world of knowledge you need to have, like how do you sit down and talk to executives? How do you learn the patience and the kinds of questions you need to ask, how do you learn to build those relationships and understand how to work the politics? So there are things outside the knowledge of an SPC that they need to gain. So for young people coming in and rushing to do the SPC course, it won't prepare you for everything, but it gives you the foundations.

    Sean Blake:

    So when you're in an organization, or you're coaching people to go back to their organization, how do you teach them those coaching skills so that when they come in, they've got to learn the politics, they've got to identify the red flags, they've got to manage the dependencies, they've got to bring new teams onto the train. How do you go about equipping that more human and communications side of the toolbox, really?

    Gerald Cadden:

    I think you can obviously teach the fundamentals of the framework by running through the training courses. But mentoring, for me, is the way to go. Every time I teach a training class I make it very clear to people: when they go back and they're starting a transformation, don't go it alone. Find experienced people that have done this, and the experience shouldn't just be with SAFe; their experience should be having worked with large organizations, having experience with the portfolio level if necessary. Simply because there are skills that people develop over years of their career that they don't have at the beginning.

    Gerald Cadden:

    I mean, if I look back at some of the horrific things I had said in meetings and in front of executives, my boss would put his hands up in front of his face, because I was young and impulsive and immature, and I see that today. When I first came to the U.S. I worked with some younger BAs, and they would say things in a meeting and you quickly have to dance around some things like, "We didn't really want to say that right now." So I think mentoring is the skill. We can teach you the tactical skills, but teaching you the political skills, the human skills, is something that takes mentoring and time.

    Sean Blake:

    Mentoring so important in that context. Isn't it?

    Gerald Cadden:

    Yeah.

    Sean Blake:

    Okay. So let's rewind 12 months ago to March 2020, a month that's probably burned into a lot of people's minds as the month that COVID changed our lives for the foreseeable future. I know that Easy Agile had a lot of content out there, articles about how to do remote PI Planning, how to help your virtual teams work better together, and we didn't know that COVID was coming, we just saw this trend happening in the workforce and we had this content available.

    Sean Blake:

    And then I was checking out our website analytics and we had this huge spike in what I assume were people in these companies trying to work out for the first time how to do PI Planning virtually, how to keep, very literally, their release trains on the tracks at a time when people were either leaving the state or working from home for the first time. It's really like someone dropped a bomb in the middle of these release trains and people were scrambling: how are we going to do this virtually now? Did you have a lot of questions at the time on how are we going to do this? And how have you seen companies respond to those challenges?

    Gerald Cadden:

    Yeah. I remember being in Boulder, Colorado in January of 2020. I had just come back from vacation in Australia, and that's when COVID was coming around and you were hearing about things, in January 2020. I was talking with my colleagues and we were wondering how bad this was going to be; within two months the world was falling apart. And for us, I think a good way to tell that story is to look at what Scaled Agile did. We knew our business was very reliant on our partners' success, and it still is today. And so as we began to see the physical world of PI Planning and training, as we began to see that completely falling apart, the company had to quickly adapt.

    Gerald Cadden:

    We already had a set of priorities set for the PI, and we implement Scaled Agile internally in the company. At the time we were running the company as a train itself, because it's about 170 people. So they had to reprioritize the different epics, we pushed in new features, and it was all about what do we need to change now to keep our partners afloat by getting them online. There was a really good team at Scaled Agile and a really cross-company effort to get short-term online materials created to keep the partners upright so they could keep teaching. They could find ways to do this, to do PI Planning, to do their inspect and adapts, all online. And so we pushed out a lot of material, simply in the form of PowerPoint slides that they could then incorporate into tools like Mural or other tools. And SAFe Collaborate, we went about developing that, and we've been maturing it over time.

    Gerald Cadden:

    And so now we're in a world where we have a lot more stability. We saw a big dip like everybody else, but the question is, are you going to come out of that dip? And what we did notice, probably even within the second quarter of that year, or the tail end of it, is that we saw it starting to come up again, with our partners starting to teach more online. So the numbers told us that the materials we were producing were working. For us it was just a great affirmation that organizing ourselves the way we did, the quick way we could adapt, saved us. Scaled Agile could have gone the way of a lot of companies and not been able to survive, because our partners wouldn't have survived. We had the ability to adapt. So it's a great success story from my perspective.

    Sean Blake:

    Well, that's great. We're all glad you're still around to tell the story.

    Gerald Cadden:

    Yes we are.

    Sean Blake:

    And Gerald, whether you're reflecting on companies you've worked with in the past, or maybe even that internal Scaled Agile example you just touched on, are there specific meetings or ceremonies or check-in points that are really important as part of the Agile release train process? What are the things that for you are mandatory, or the most important elements that a company should really hold onto during that early setup stage of trying to move towards the Scaled Agile approach?

    Gerald Cadden:

    If I interpret your question correctly, I think for me, when you're implementing, the really important thing to focus on as a team first of all is the PI Planning. That is the number one thing. It's the first one people want to change, because it's two days long, everybody has to come, and it can cost companies quite a significant sum of money to run every 10 to 12 weeks. And so you will run very quickly, as I did in the past at the car company, into the financial controller who wants to understand why you're spending $40,000 a quarter on a big two-day meeting. And so they'll start questioning every item on the bill, but that's the most significant one.

    Gerald Cadden:

    PI Planning is significant. The inspect and adapt is the other one, simply because if you remove that feedback cycle at the end, what we call closing the loop, then we have no opportunities to improve. So those two events create the bookends: how we get started and how we close the loop. There are smaller events that happen in between, and the team events are obviously all important. But more significant for me are the events for the product management team or program management team,

    Gerald Cadden:

    who are going to need to get together on a regular basis. We call this the Sync, so this is the ART Sync or the POPM Sync. You need to make sure those are happening, because those are the more dynamic feedback loops that ensure the progress of good architectural requirements or good features coming through, so that when you get to PI Planning the teams have significant things to work on. So if you asked me for my top three events: PI Planning, inspect and adapt, and the ART Sync and POPM Sync.

    Sean Blake:

    Awesome. I know there's always that temptation for teams to find the shortcuts and find the workarounds where they don't have to do certain meetings or certain check-ins, but in terms of communication it must be terribly important for these teams to make sure they're still communicating, and that they don't use the framework as an excuse to stop meeting together and to stop collaborating.

    Gerald Cadden:

    Yeah. I mean, when I started implementing at the large car company in the U.S., I decided to rip the bandaid off. They had several teams working on projects and they weren't doing well. When I looked at the challenges and decided we're going to implement SAFe, some of the management were like, "Are you crazy? Why would you do this?" But they trusted me. And so we did rip the bandaid off, we formed them all into an ART and we launched it. And I remember at the end of the PI, some of the management who had had a lot of doubts were coming up after they sat through the PI Planning and they said they just couldn't believe how great that was.

    Gerald Cadden:

    Even though the first PI was a little chaotic, they understood the work, and the collaboration, the alignment, just the discussions that took place were far more powerful for them. And teams were happier, they were walking out into a different environment. So it changed the mood a great deal. For the teams, one of the most significant places where they get the ability to be heard is during PI Planning. They get that chance to be heard. They get that chance to participate, rather than just being at the end where they're told what to do.

    Sean Blake:

    Mm-hmm (affirmative). So it really empowers the team.

    Gerald Cadden:

    Yeah. Absolutely.

    Sean Blake:

    That's great. So as a company moves out of the implementation phase and becomes a little bit more used to the way of doing things, what's the best way for them to go about communicating that progress to the wider organization, and then really evangelizing this way of working to try and get more teams on board and more Agile release trains set up, so that it's really a whole-company approach?

    Gerald Cadden:

    Yeah, a good question. So I think first of all there's the system demo that we do. The regular system demos that take place are an event you can invite people to. So when you get to the end of the program increment, the eight, 10 or 12 weeks, and you're doing your PI system demo, that's a chance for you to invite people in the organization who are next on the list and are going to be doing this, or who are curious, or external suppliers who you're trying to get on board as part of the train. Have them come to these events so they can just participate. They can see what goes on and it takes away some of the fear of what that stuff is.

    Gerald Cadden:

    So the system demo, whether you do it during the PI, but definitely the PI system demo, you want that one. Then there are more ad hoc things, and one of the things I've seen organizations really fail to do is, when they're having success, the leadership around the train need to go out, and I hate the term evangelize, but go out and show the successes. Get out and talk about this at the next company meeting, present where they were and where they are now. But as part of that, don't share just the metrics that show greater delivery of value, show the human metrics: show how the team went from maybe a certain level of disgruntlement to feeling happier and getting better feedback, show how the business and technology have come closer together because they're able to collaborate and actually produce value together rather than being at odds because the system makes them at odds.

    Sean Blake:

    Awesome. Gerald, is there anything else you'd like to share with our audience before we wrap up the episode? Any tips or words of encouragement, or perhaps some advice for those who are considering scaling up their Agile teams?

    Gerald Cadden:

    I think the one piece of advice, again, reiterating the earlier point I made, is that as you're going through the implementation process and you're starting to launch your train and train your teams, figure out how you're going to support them when you launch. Putting people through an SPC class or through all the other classes, they won't come out SAFe geniuses. They'll have knowledge and they'll have enthusiasm, and some trepidation as well, but you need good coaching. So as you're beginning the implementation pattern, where you're designing the teams et cetera, figure out what your coaching pattern is going to be. Hire people with the knowledge and the experience, or work with a partner for that knowledge and experience. They shouldn't stay there forever if you work with consultants.

    Gerald Cadden:

    Their job should be to come in and empower you, not to stay there permanently, but without that coaching, and coaching over a couple of PIs, your teams tend to run into problems and go backwards. So to keep that momentum moving forward, for me it's figure out the coaching pattern. The only other one I would say is make sure that you get good collaboration between the people who are going to be in the product management role and architecture. Get rid of the grievances, have them work together, because those can stifle you. Get in and talk about the environments before you launch. You don't want surprise problems after you launch: "Oh, the architecture is terrible." Okay, let's talk about that before we launch. So just a couple of things that I think are really important to focus on before you launch the train.

    Sean Blake:

    Awesome. I really appreciate that, Gerald. I've actually learned a lot in our chat. The same challenges that you had 10 years ago are the same challenges that we have today, and even with COVID the real challenge is how you focus on the mindset change. We've talked about how the teams are eager to change, there might be a few grumbly voices along the way, but really it's about leadership providing a welcoming and safe environment to foster that change, and the difference between being a coach and a consultant, the importance of mentoring. Wow, we actually covered a lot of ground, didn't we?

    Gerald Cadden:

    I may get some hate mail for that comment, but...

    Sean Blake:

    Oh, we'll see. Time will tell. Thanks so much Gerald for joining us on the Easy Agile Podcast. We appreciate you sharing your expertise with us and the audience of the podcast. Thanks for coming on.

    Gerald Cadden:

    Happy to do it anytime. Thanks for having me here today.

    Sean Blake:

    Thanks Gerald.