How to Build Observability for Forge Applications

Observability is a crucial aspect of building and maintaining reliable applications, and at Easy Agile, we've dedicated significant effort to making our applications as observable as possible. In a recent talk at Atlas Camp 2025, Ryan Brown, our Staff Engineer, shared our experiences transitioning to Forge and the lessons learned along the way. Here are the key highlights from his talk.
What is observability and why it matters
Observability is about gathering and visualizing enough data to understand the health of your software systems and fix problems when they arise. Unlike traditional monitoring, which tells you what happened and when, observability goes a step further to explain why and how issues occur.
For us at Easy Agile, strong observability has:
- Helped us make better decisions about performance and infrastructure needs
- Drastically improved our customer support by letting us quickly pinpoint issues
- Let us adapt to the reality of modern, distributed applications
As Ryan put it during his talk: "When I was working on applications back in the day, we were running on two, three, maybe four data centers at most. Now we're talking about applications that grow at a global scale with people using them across the globe."
Modern applications are far more distributed than they used to be, with thick frontends becoming as complex as backends. This distribution makes the traditional approach of manually checking a few servers inadequate - we need sophisticated observability solutions to understand what's happening across our systems.
The three pillars of observability
Observability stands on three key pillars:
Metrics
These are the number-based measurements of your software - CPU usage, request times, error counts, and so on. But raw numbers alone aren't very helpful. Context is critical - knowing you have five errors is very different from knowing you have five errors from one specific user on one server over two minutes.
Metrics become particularly powerful when identifying trends over time, so you can pinpoint when performance issues began or correlate changes with system behavior.
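To make that concrete, here's a minimal sketch of recording a metric with context attached, using the OpenTelemetry JavaScript API. The meter name, metric name, and attribute keys are illustrative rather than anything we run in production, and it assumes a MeterProvider has been registered elsewhere.

```typescript
import { metrics } from '@opentelemetry/api';

// Assumes an OpenTelemetry MeterProvider has already been registered;
// otherwise these calls are no-ops.
const meter = metrics.getMeter('sprint-notes');

const errorCounter = meter.createCounter('app.errors', {
  description: 'Errors, tagged with enough context to be actionable',
});

// "Five errors" on its own is not very useful; five errors from one user
// on one server over two minutes is. Attribute names here are hypothetical.
errorCounter.add(5, {
  'user.id': 'user-123',
  'host.name': 'server-01',
  'error.type': 'api_bad_response',
});
```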
Logs
Logs are often the first observability signal engineers get hands-on experience with. They provide detailed records of events and are the cornerstone of understanding what's happening in your system. They give more information than metrics alone, capturing not just that something happened but the details of each event.
They're richer in information but also generate more data. For example, logs might tell you an error occurred because an API returned a bad response - but that alone still doesn't tell you why the downstream API failed.
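As an illustration, a structured log entry captures that extra detail alongside the message. The field names below are hypothetical; the idea is to record the context of the event, not just that it occurred.

```typescript
// A structured log line: the message plus the context needed to investigate it.
console.error(
  JSON.stringify({
    level: 'error',
    message: 'Failed to load sprint notes',
    event: 'sprint_notes.fetch_failed',
    upstreamStatus: 502, // the bad response the API gave us
    upstreamUrl: '/rest/api/notes',
    userId: 'user-123',
    traceId: 'c0ffee42', // ties this log line to a trace (see below)
    timestamp: new Date().toISOString(),
  }),
);
```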
Traces
Traces help you understand what's happening across all your systems working together. They track transactions as they flow through various services, showing you exactly what happens when a user clicks a button or performs an action.
"Tracing is surprisingly simple," Ryan explained. "You just have a unique identifier for a transaction that gets shared from one system to the next system to the next system."
Traces become especially valuable when you can expose that identifier to end users after something goes wrong. That ID shows you exactly what happened in their transaction, taking the guesswork out of troubleshooting.
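Here's a minimal sketch of that idea from a browser frontend, assuming a hypothetical endpoint and header name. The point is simply that the same identifier travels with every call and can be surfaced to the user if something fails.

```typescript
// One identifier per transaction, forwarded on every hop.
const transactionId = crypto.randomUUID();

async function saveSprintNote(note: string): Promise<Response> {
  return fetch('/api/sprint-notes', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Downstream services forward this same header on their own calls.
      'X-Transaction-Id': transactionId,
    },
    body: JSON.stringify({ note }),
  });
}

// If the request fails, show transactionId to the user so support can look up
// exactly what happened in that transaction.
```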
Observability on Connect vs. Forge
Our current Connect approach
At Easy Agile, we've implemented comprehensive observability across our Connect apps:
- Metrics, logs, and traces captured for all our applications
- All this data sent to a third-party observability service to get a centralized view
- Front-end components that also send observability data to the same service
This approach reduces cognitive load on our engineers and gives us end-to-end visibility into user journeys and what's happening across our systems. Ryan's recommendation: commercial services like Datadog or New Relic work well for this, and open-source options like SigNoz or elements of the ELK stack are viable alternatives.
Transition concerns for Forge
When we first looked at Forge, we had several questions:
- How well would tracing work with Forge's communication model?
- Could we define our own custom metrics?
- Would we be able to collect metrics and logs from both front-end and back-end?
- Could we identify specific user transactions for support purposes?
At the time of our evaluation, Forge functions were the primary compute option available, which raised questions about how standard tracing libraries would work in Forge's unique communication model.
Experimenting with a simple app
Instead of trying to migrate one of our bigger products, we created a simple "Sprint Notes" app to test observability on Forge. This let us:
- Quickly iterate through different Forge topologies
- Keep the experiment focused on observability
- Include all the necessary components (UI, API, database)
We took this app through several stages:
- Connect on Forge: Running our Connect app through Forge
- Forge with Remotes: Using two approaches - calling remotes through Forge functions, and calling remotes directly from the UI
- Forge Native: Completely rewriting the app to use Forge functions and entity store
Key findings
Connect on Forge
When running our Connect app through Forge:
- All our existing observability worked without modifications
- Front-end and back-end metrics continued to function as before
- Tracing identifiers propagated correctly
- Logs and metrics didn't appear in the Atlassian Developer Console (which makes sense since we weren't using Forge modules)
The observability story for Connect on Forge is pretty straightforward - it just works. As Ryan noted, "All the existing observability that we had in place on Connect worked just as it did before. There was no big change or issues with what we saw there."
We even experimented with different methods of sending data to our observability service, including using a proxy under our own domain, and everything worked consistently as long as the correct content security headers were set.
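For reference, the "content security headers" in question are a Content-Security-Policy whose connect-src includes wherever the frontend sends its telemetry. A hedged Express-style sketch, with placeholder domains:

```typescript
import type { Request, Response, NextFunction } from 'express';

// Allow the embedded frontend to send telemetry to our proxy and/or the
// observability vendor. Both domains below are placeholders.
export function observabilityCsp(_req: Request, res: Response, next: NextFunction): void {
  res.setHeader(
    'Content-Security-Policy',
    "connect-src 'self' https://telemetry.example.com https://intake.vendor.example",
  );
  next();
}
```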
Forge with remotes
For Forge with remotes, we found:
- Front-end and back-end observability mostly worked
- We had to set permissions.external.fetch.client to allow sending metrics from the front-end (see the sketch below)
- Adding these permissions required a major version upgrade
- Metrics didn't capture all API calls, particularly front-end calls to Atlas APIs
- Tracing identifiers weren't exposed to remote callers, limiting end-to-end tracing
"This is a little disappointing... we found that when we did calls to our remotes, Forge would generate tracing identifiers for use on the back-end, but unfortunately those identifiers weren't then being exposed to what was making those remote calls," Ryan explained.
While we could see logs in the Developer Console and some metrics were captured, the inability to propagate tracing identifiers back to callers meant we couldn't achieve full UI-to-database traceability or show transaction IDs to end users without implementing workarounds.
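For illustration, sending a metric from a Forge Custom UI frontend looks roughly like the sketch below. The endpoint is a placeholder, and it only works once that host is listed under permissions.external.fetch.client in the manifest, which is the major-version-upgrade change mentioned above.

```typescript
// Sketch: pushing a metric from a Forge Custom UI frontend to an external
// observability endpoint. The URL is illustrative and must appear in the
// manifest under permissions.external.fetch.client, e.g.
//
//   permissions:
//     external:
//       fetch:
//         client:
//           - 'observability.example.com'
export async function reportFrontendMetric(name: string, value: number): Promise<void> {
  await fetch('https://observability.example.com/v1/metrics', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, value, timestamp: Date.now() }),
  });
}
```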
Forge native
With a fully native Forge approach:
- Custom metrics and basic tracing are possible but require significant work
- We needed to implement custom handling with libraries like OpenTelemetry
- UI observability remained similar to the remotes implementation
- We could access correlation IDs through undocumented Forge API features
We managed to create a proof of concept showing traces from Forge functions to our observability service, though implementing complete data store tracing would require additional work.
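A stripped-down sketch of what that looks like conceptually is below. It is not our actual implementation: OpenTelemetry APIs differ between SDK versions, the exporter endpoint and names are placeholders, and the manifest would also need backend egress permission for that endpoint.

```typescript
import Resolver from '@forge/resolver';
import { BasicTracerProvider, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Wire spans straight to an OTLP endpoint (placeholder URL).
const provider = new BasicTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(new OTLPTraceExporter({ url: 'https://otel.example.com/v1/traces' })),
);
const tracer = provider.getTracer('sprint-notes-backend');

const resolver = new Resolver();

resolver.define('getSprintNotes', async () => {
  const span = tracer.startSpan('getSprintNotes');
  try {
    // ... load notes from the Forge entity store here ...
    return [];
  } finally {
    span.end();
    // Forge invocations are short-lived, so flush spans before returning.
    await provider.forceFlush();
  }
});

export const handler = resolver.getDefinitions();
```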
Common pitfalls and solutions
A key lesson from our experiments: be careful with data volume. As Ryan candidly shared, "I'm not going to admit how much money I've spent on observability by accident... *cough* 15,000 *cough*"
Other important considerations:
- Major version upgrades are required when adding permissions for observability
- The documentation for exporting logs and metrics is technically correct but easy to misinterpret
- Traces do not automatically propagate back to callers in remote implementations
Recommendations and best practices
Based on our findings, we recommend:
1. Establish observability baselines
Figure out what data you genuinely need instead of collecting everything. Observability services charge by volume, so collecting unnecessary data can quickly become expensive.
2. Use OpenTelemetry
It aligns with Forge's internal implementation and provides good standardization. While it may be harder to use than some out-of-the-box solutions, the long-term benefits are worth it.
3. Consider an observability proxy
This enables:
- Authentication for incoming metrics
- Adding additional context
- Data redaction for sensitive information
- Decoupling your implementation from specific providers
When combined with OpenTelemetry, this approach means your Forge components don't need to know what observability service you're using, avoiding permission changes if you switch providers.
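As a rough illustration of the shape this can take, the handler below authenticates the caller, redacts sensitive attributes, adds shared context, and forwards the payload. Every name, environment variable, and redaction rule here is hypothetical.

```typescript
interface TelemetryPayload {
  attributes: Record<string, unknown>;
  [key: string]: unknown;
}

const SENSITIVE_KEYS = ['user.email', 'account.name'];

export async function forwardTelemetry(authToken: string, payload: TelemetryPayload): Promise<void> {
  // Only accept telemetry from our own apps.
  if (authToken !== process.env.INGEST_TOKEN) {
    throw new Error('Unauthorised telemetry submission');
  }

  // Redact sensitive attributes before they leave our infrastructure.
  for (const key of SENSITIVE_KEYS) {
    if (key in payload.attributes) {
      payload.attributes[key] = '[REDACTED]';
    }
  }

  // Add shared context so individual Forge components don't have to.
  payload.attributes['deployment.environment'] = process.env.ENVIRONMENT ?? 'production';

  // Swapping observability providers only changes this destination, not the
  // Forge app or its permissions.
  await fetch(process.env.OBSERVABILITY_BACKEND_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}
```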
4. Plan permission updates strategically
Since they require major version upgrades, incorporate them early in your development.
Final thoughts
Achieving all three pillars of observability on Forge is possible, though it requires different approaches depending on your implementation strategy:
- Connect on Forge works seamlessly with existing observability
- Forge with remotes requires some additional configuration
- Forge native needs more custom implementation
These experiments are being prepared for open source, so stay tuned to Ryan's LinkedIn for updates.