- Engineering
How to Build Observability for Forge Applications
Observability is a crucial aspect of building and maintaining reliable applications, and at Easy Agile, we've dedicated significant effort to making our applications as observable as possible. In a recent talk at Atlas Camp 2025, Ryan Brown, our Staff Engineer, shared our experiences transitioning to Forge and the lessons learned along the way. Here are the key highlights from his talk.
What is observability and why it matters
Observability is about gathering and visualizing enough data to understand the health of your software systems and fix problems when they arise. Unlike traditional monitoring, which tells you what happened and when, observability goes a step further to explain why and how issues occur.
For us at Easy Agile, strong observability has:
- Helped us make better decisions about performance and infrastructure needs
- Drastically improved our customer support by letting us quickly pinpoint issues
- Let us adapt to the reality of modern, distributed applications
As Ryan put it during his talk: "When I was working on applications back in the day, we were running on two, three, maybe four data centers at most. Now we're talking about applications that grow at a global scale with people using them across the globe."
Modern applications are far more distributed than they used to be, with thick frontends becoming as complex as backends. This distribution makes the traditional approach of manually checking a few servers inadequate - we need sophisticated observability solutions to understand what's happening across our systems.
The three pillars of observability
Observability stands on three key pillars:
Metrics
These are the number-based measurements of your software - CPU usage, request times, error counts, and so on. But raw numbers alone aren't very helpful. Context is critical - knowing you have five errors is very different from knowing you have five errors from one specific user on one server over two minutes.
Metrics become particularly powerful when identifying trends over time, so you can pinpoint when performance issues began or correlate changes with system behavior.
Logs
Logs are often the first observability signal engineers gain experience with. They provide detailed records of events and are the cornerstone of understanding what's happening in your system. They give more information than metrics alone, capturing not just what happened but details about each event that occurred.
They're richer in information but also generate more data. For example, logs might tell you an error occurred because an API returned a bad response - but that alone still doesn't tell you why the downstream API failed.
Traces
Traces help you understand what's happening across all your systems working together. They track transactions as they flow through various services, showing you exactly what happens when a user clicks a button or performs an action.
"Tracing is surprisingly simple," Ryan explained. "You just have a unique identifier for a transaction that gets shared from one system to the next system to the next system."
Traces are particularly useful when you can expose that identifier to end users when something goes wrong. That ID can show you exactly what happened in that transaction, taking the guesswork out of troubleshooting.
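The mechanism Ryan describes is simple enough to sketch in a few lines. Everything below is illustrative — the x-trace-id header and the function names are not from any particular library (real systems typically use a standard header such as the W3C traceparent):

```typescript
import { randomUUID } from "crypto";

// The trace ID is minted once, at the edge of the system, and forwarded
// unchanged from one service to the next.
function startTransaction(): { traceId: string; headers: Record<string, string> } {
  const traceId = randomUUID();
  return { traceId, headers: { "x-trace-id": traceId } };
}

// Each downstream service reuses the incoming ID instead of minting its own,
// so every log line across every hop can carry the same identifier.
function handleIncoming(headers: Record<string, string>): string {
  return headers["x-trace-id"] ?? randomUUID();
}

const { traceId, headers } = startTransaction();
console.log(handleIncoming(headers) === traceId); // → true
```

Because the same ID appears at every hop, exposing it to the user turns "something went wrong" into a single searchable key across all your systems.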
Observability on Connect vs. Forge
Our current Connect approach
At Easy Agile, we've implemented comprehensive observability across our Connect apps.
- Metrics, logs, and traces captured for all our applications
- All this data sent to a third-party observability service to get a centralized view
- Front-end components that also send observability data to the same service
The approach reduces cognitive load on our engineers and gives complete end-to-end visibility into user journeys and what's happening across our systems. For this, Ryan recommends commercial services like Datadog or New Relic, or open-source options like SigNoz.
Transition concerns for Forge
When we first looked at Forge, we had several questions:
- How well would tracing work with Forge's communication model?
- Could we define our own custom metrics?
- Would we be able to collect metrics and logs from both front-end and back-end?
- Could we identify specific user transactions for support purposes?
At the time of our evaluation, Forge functions were the primary compute option available, which raised questions about how standard tracing libraries would work in Forge's unique communication model.
Experimenting with a simple app
Instead of trying to migrate one of our bigger products, we created a simple "Sprint Notes" app to test observability on Forge. This let us:
- Quickly iterate through different Forge topologies
- Keep the experiment focused on observability
- Include all the necessary components (UI, API, database)
We took this app through several stages:
- Connect on Forge: Running our Connect app through Forge
- Forge with Remotes: Using two approaches - calling remotes through Forge functions, and calling remotes directly from the UI
- Forge Native: Completely rewriting the app to use Forge functions and entity store
Key findings
Connect on Forge
When running our Connect app through Forge:
- All our existing observability worked without modifications
- Front-end and back-end metrics continued to function as before
- Tracing identifiers propagated correctly
- Logs and metrics didn't appear in the Atlassian Developer Console (which makes sense since we weren't using Forge modules)
The observability story for Connect on Forge is pretty straightforward - it just works. As Ryan noted, "All the existing observability that we had in place on Connect worked just as it did before. There was no big change or issues with what we saw there."
We even experimented with different methods of sending data to our observability service, including using a proxy under our own domain, and everything worked consistently as long as the correct content security headers were set.
Forge with remotes
For Forge with remotes, we found:
- Front-end and back-end observability mostly worked
- We had to set permissions.external.fetch.client to allow sending metrics from the front-end
- Adding these permissions required a major version upgrade
- Metrics didn't capture all API calls, particularly front-end calls to Atlas APIs
- Tracing identifiers weren't exposed to remote callers, limiting end-to-end tracing
"This is a little disappointing... we found that when we did calls to our remotes, Forge would generate tracing identifiers for use on the back-end, but unfortunately those identifiers weren't then being exposed to what was making those remote calls," Ryan explained.
While we could see logs in the Developer Console and some metrics were captured, the inability to propagate tracing identifiers back to callers meant we couldn't achieve full UI-to-database traceability or show transaction IDs to end users without implementing workarounds.
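For reference, the permission change described above lives in the app's manifest. A minimal sketch — the domain here is a placeholder, not a real endpoint:

```yaml
permissions:
  external:
    fetch:
      client:
        # Placeholder domain for wherever your front-end sends metrics
        - "metrics.example-observability.com"
```

Because this block is part of the manifest, changing it is what triggers the major version upgrade noted above.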
Forge native
With a fully native Forge approach:
- Custom metrics and basic tracing are possible but require significant work
- We needed to implement custom handling with libraries like OpenTelemetry
- UI observability remained similar to the remotes implementation
- We could access correlation IDs through undocumented Forge API features
We managed to create a proof of concept showing traces from Forge functions to our observability service, though implementing complete data store tracing would require additional work.
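As a rough illustration of what that custom handling involves, here is a hypothetical span wrapper in the spirit of what we built. It is a hand-rolled stand-in, not the OpenTelemetry API — a real implementation would obtain a tracer from @opentelemetry/api instead:

```typescript
// Illustrative stand-in for an OpenTelemetry-style span.
interface Span {
  traceId: string;
  name: string;
  startMs: number;
  durationMs: number;
}

// Wrap a resolver body in a span tagged with the invocation's correlation ID,
// then hand the finished span to an exporter that forwards it to your
// observability service (or proxy).
function withSpan<T>(
  traceId: string,
  name: string,
  fn: () => T,
  exportSpan: (span: Span) => void
): T {
  const startMs = Date.now();
  try {
    return fn();
  } finally {
    exportSpan({ traceId, name, startMs, durationMs: Date.now() - startMs });
  }
}

// Usage inside a hypothetical resolver ("corr-123" and "getSprintNotes"
// are made-up values for illustration):
const spans: Span[] = [];
const notes = withSpan("corr-123", "getSprintNotes", () => ["Sprint 12 notes"], (s) => spans.push(s));
console.log(notes, spans);
```

The try/finally shape matters: the span is exported even when the wrapped work throws, which is exactly when you most want the trace.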
Recommendations and best practices
Based on our findings, we recommend:
1. Establish observability baselines
Figure out what data you genuinely need instead of collecting everything. Observability services charge by volume, so collecting unnecessary data can quickly become expensive.
2. Use OpenTelemetry
It aligns with Forge's internal implementation and provides good standardization. While it may be harder to use than some out-of-the-box solutions, the long-term benefits are worth it.
3. Consider an observability proxy
This enables:
- Authentication for incoming metrics
- Adding additional context
- Data redaction for sensitive information
- Decoupling your implementation from specific providers
When combined with OpenTelemetry, this approach means your Forge components don't need to know what observability service you're using, avoiding permission changes if you switch providers.
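As a sketch of the per-event work such a proxy might do, here is a hypothetical transform applied before forwarding. The sensitive field names and the context values are illustrative, not a prescribed schema:

```typescript
type ObservabilityEvent = Record<string, unknown>;

// Fields the proxy should never forward verbatim (illustrative list).
const SENSITIVE_KEYS = ["emailAddress", "apiToken"];

function prepareForForwarding(
  event: ObservabilityEvent,
  appVersion: string
): ObservabilityEvent {
  // Redact sensitive values while keeping the rest of the event intact.
  const redacted: ObservabilityEvent = {};
  for (const [key, value] of Object.entries(event)) {
    redacted[key] = SENSITIVE_KEYS.includes(key) ? "[REDACTED]" : value;
  }
  // Stamp on context every event should carry; because this happens in the
  // proxy, the Forge components themselves never need to know about it.
  return { ...redacted, appVersion, forwardedBy: "observability-proxy" };
}

// e.g. prepareForForwarding({ emailAddress: "user@example.com", message: "save failed" }, "1.4.2")
```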
4. Plan permission updates strategically
Since they require major version upgrades, incorporate them early in your development.
5. Other important considerations
- Major version upgrades are required when adding permissions for observability
- The documentation for exporting logs and metrics is technically correct but easy to misinterpret
- Traces do not automatically propagate back to callers in remote implementations
Final thoughts
Achieving all three pillars of observability on Forge is possible, though it requires different approaches depending on your implementation strategy:
- Connect on Forge works seamlessly with existing observability
- Forge with remotes requires some additional configuration
- Forge native needs more custom implementation
These experiments are being prepared for open source, so stay tuned to Ryan's LinkedIn for updates.
- Engineering
How I got into web development
I fell into web development — that's how it feels without digging into the details. That does not sound like how you want to go about choosing a career, but in reality, it was years of small decisions and nudges that led me to work I really enjoy.
I grew up enjoying all things computers, but let's be honest, it was mostly video games. In high school, I took all of the subjects available that had anything to do with computing, except, ironically, for the software development subject. I think that was because I didn't know about the potential creative side; I thought it was all hard maths. There was one subject where my major project was a Batman Flash animation, and I was motivated in that class -- while others might have been bludging off and playing Flash games, I was focused on making my animation and enjoying every minute of it.
Then it came time to do something after high school. I still didn't have a good idea of what I wanted to do, so I chose a broad degree that involved computers — Information Technology at UOW. A mandatory programming subject in that degree gave me my first taste of software development, but it involved building programs whose output was just in the terminal, which didn't excite me much.
Then came a subject where everything started clicking together: Web Programming, I think it was called. It combined design and code, used some newer web technologies, and was taught by lecturers who were clearly passionate about the web. We did projects like redesigning the movies page, and I loved it. It also gave me a taste of making something useful; the programming subjects I'd done previously were just about outputting lines to the console.
Even after this experience, it didn't occur to me that I could get a job doing this; I didn't know anyone who was doing web development as a job. So after university, I went on a month-long holiday and tried not to think about what I was going to do when I got back.
When I got back I started applying for jobs, a few of them for a role called frontend developer, which was a new term to me. It was in the process of doing the take-home projects I was given as part of the interview processes that I realised I could get a job doing this thing I enjoyed.
It's been around 5 years since then, and time has gone fast. I was thrown into the deep end of many projects and have always finished them with more knowledge than when I started, thanks both to the talented people I have worked with and to the challenge of the projects themselves.
I'm looking forward to the next chapter here at Easy Agile!
- Engineering
Foo Bar Nah
Or why you should give meaningful names to example variables
I bent over my desk in frustration, suppressing the urge to scream so as not to upset the rhythmic clack-clack of my coworkers. I’d been frustrated all morning by a particularly nasty React infinite re-rendering issue that I just couldn’t get working. The urge to scream came when, my own toolbox exhausted, I turned to Google.
You see, it looked like someone else had come across the same issue and had decided to record a solution for posterity (and internet points). I eagerly scanned the page for the sample code that would save my morning. Finding it, my eyes were drawn to the dreaded fooBarBaz and I knew my morning was about to get a whole lot worse before it got better.
I actually love the history of programming and the little easter eggs fellow developers have passed down (my personal favourite - I am a teapot). These help to make this job interfacing with computers much more fun and human. I can appreciate that the practice of using fooBarBaz in naming example functions and variables has a long and storied tradition dating back at least to the Tech Model Railroad Club at MIT circa 1960. I acknowledge that the use of fooBarBaz is primarily not to introduce any distractions from the point which is being demonstrated. I also think that we should pretty much stop using them.
I am always awed by the amount of information my fellow developers have left out there for me on the internet. So many people in this field seem to have an innate need to help others, leading them to put in countless hours to fill Stack Overflow and blogs with useful information. I can only imagine that the people putting in their time and effort to this end are hoping that their efforts will help as many people as possible. fooBarBaz gets in the way of that.
Let me take off my developer hat for a second and put on my recently discarded, slightly misshapen and battered psychologist one. Interweaving complex facts into stories is a time tested technique which facilitates learning. Here in Australia, the technique has been used for tens of thousands of years by the Aboriginal and Torres Strait Islander peoples to help them to remember important and complex information such as the locations of waterholes across vast tracts of inhospitable desert. Our brains are networks of interconnected neurons so we are more likely to hold on to what we have learned when we are able to integrate new information into our current knowledge base. The modern term for this is associative learning.
Additionally, as I’m sure you’ll remember from school, keeping learning interesting has been demonstrated to be a powerful motivator which energises learning.
When we take all this time and effort to communicate with our fellow developers we can and should harness the advantage of associative learning and intrinsic motivation to make sure that the information we are putting out there is as useful as possible to as many people as possible. To this end I believe that we should give as much thought to meaningful naming when creating example code as we do in our own codebases.
Marijn Haverbeke’s Eloquent Javascript regularly comes at the top of lists of books you should read when learning Javascript (JS). It is no coincidence that he is also a master at using meaningful names to help people to better understand coding principles. When teaching new programmers about string comparison in JS he uses the following example:
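(Reconstructed from memory of the book, so treat this as an approximation of Marijn's snippet; the variables are typed loosely so the comparison also type-checks in TypeScript.)

```typescript
const itchy = "Itchy";
const scratchy: string = "Scratchy";

// Strings are compared by their content, so a mouse is never a cat.
console.log(itchy != scratchy);
// → true
```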
Marijn piggybacks off our existing knowledge about Springfield’s favourite cartoon characters to give extra meaning and interest to this example. We know that Itchy and Scratchy are a mouse and cat respectively and so most definitely not the same.
Consider the same example but rendered with the dreaded Foo/Bar instead:
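(Again a reconstruction rather than a quote:)

```typescript
const foo = "foo";
const bar: string = "bar";

// The same comparison, stripped of any meaning to anchor it.
console.log(foo != bar);
// → true
```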
To seasoned developers, this might be easy enough to parse: you’ve read hundreds of examples like this and so have learned the association between Foo and Bar and internalised it. But this creates a barrier for learning for new developers who have not yet internalised this rule and instead increases the mental load for them to understand the concept. It also misses out on creating that little spark of interest or joy to help pique the reader's interest and so increase their motivation to understand the underlying concept.
I am not saying there is absolutely no place for fooBarBaz (although I think their utility is limited). The best way to use these terms is to emphasise that anything could be put in a certain place. An example of this is when we’re talking about arguments and parameters in JS functions. You see, there is no type checking in vanilla JS and so if we have a function like the following that takes a parameter and simply logs its value to the console, it doesn’t matter what type of argument we pass in:
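(A sketch of such a function; logValue is my own name for it.)

```typescript
// Vanilla JS does no runtime type checking, so any argument type is accepted —
// which is exactly what the parameter name foo is signalling here.
function logValue(foo: unknown): void {
  console.log(foo);
}

logValue("a string");
logValue(42);
logValue({ literally: "anything" });
```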
I believe that these terms have the most utility in this case as their purpose is to emphasise that their type doesn’t matter. I would also add the caveat to this that using these terms in this way is only suitable when you are producing content for experienced developers who are going to have built a working understanding of these terms.
Even if this is aimed at experienced developers, I still believe that more meaningful names would be better in this example:
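(The parameter name below is my own invention, but it spells out the intent that foo leaves implicit:)

```typescript
// The name itself documents that any type is acceptable here.
function logValue(anyValueOfAnyType: unknown): void {
  console.log(anyValueOfAnyType);
}

logValue(["even", "an", "array"]);
```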
Another example where more meaningful variable names would be useful is in relation to metasyntactic variables. These variables are commonly found in source code and are intended to be modified or substituted before real-world usage. Whilst these variables are only placeholders, I believe that it is also better to use a variable name which offers more context to your developer comrade to assist them when they are reading and implementing the code in future.
We work in a wonderful profession with a rich history, where many people are willing to donate their time to helping to educate and mentor their fellow programmers. Using meaningful variable names in place of fooBarBaz is one way that we can ensure that this effort is worthwhile and helps as many people as possible. It lowers the barriers to entry for the profession, helping to create a more diverse and welcoming programming community.
So ditch the fooBarBaz (but not the Teapot) and go forth and spark joy!
- Engineering
4 hacks for writing frontend tests 10x faster (probably!)
We all know writing unit tests is important but sometimes it feels like it can take up more time than the feature work itself. I’ve found a couple of handy hacks that I feel have increased my speed when writing tests whilst also improving their quality and, being the kind fellow I am, I’m going to share those with you:
Hack 1: Use Testing Playground
I guess the first tip in this article is - use Testing Library. I didn’t make this its own point because it is already so popular. But if you are not using it yet, make sure you do!
Unit testing with Testing Library is really easy and intuitive. Despite this, it can still be challenging to find the right queries or to understand why an element isn't being matched.
Enter Testing Playground.
Testing Playground allows you to render a component in a sandbox providing you with direct visual feedback of the component. It also allows you to interact with the rendered component to come up with the best queries to select elements. And, like Testing Library, everything it does is with accessibility (a11y) in front of mind so it teaches you about the importance of a11y and best practices while you use it.
There are many ways you can use Testing Playground, including a Chrome extension and a browser-based app.
The best way I have found, and an absolute time saver for me, is invoking screen.logTestingPlaygroundURL() right from the test block itself. I usually do this as soon as I get the component rendering, just to get the lay of the land and work out which parts of it my test might like to interact with.
Hack 2: Use test.todo
Please don’t jump down my throat, but I have tried Test Driven Development and didn’t like it. Like anarchism, I think it sounds awesome in theory, but found that it actually slowed down my development cycle when I tried to implement it.
I still like the idea of getting some thinking about testing down before I finish building a feature though and have settled on a process that, for me, seems to work well and keep my development moving along.
I now use Jest’s test.todo to record what I am going to test as I plan and build out a feature (Big thanks to Karl for first introducing me to the idea!).
My usual process goes a bit like this. First I capture the requirements spelled out for me by my awesome Product Owner (Hi Biz!) in test.todo form, like a todo list. Then, as I am building and encounter other edge cases and important testing areas I add these as test.todo’s too. That way, when it comes to testing time, I have thought through a lot of what I am going to test and am less likely to miss testing edge cases or important functional requirements.
A simple example for a test.todo for the following <UserDetails /> component:
```tsx
import React from "react";

export interface User {
  firstName: string;
  lastName: string;
  username: string;
  emailAddress: string;
  SEN: string;
}

export interface ShowHideUserDetailsProps {
  showDetails: boolean;
  user: User;
}

const UserDetails = ({ showDetails, user }: ShowHideUserDetailsProps) => (
  <>
    {showDetails ? (
      <div>
        <h1>User Details</h1>
        <ul>
          <li>{user.firstName}</li>
          <li>{user.lastName}</li>
          <li>{user.username}</li>
          <li>{user.emailAddress}</li>
          <li>{user.SEN}</li>
        </ul>
      </div>
    ) : (
      <div>
        <h1>Privacy Protected</h1>
      </div>
    )}
  </>
);

export default UserDetails;
```
Might be as follows:
```typescript
describe('<UserDetails />', () => {
  test.todo('Should show user details when show details is true');
  test.todo('Should NOT show user details when show details is false');
});
```
Hack 3: Use builder functions
I used to find myself creating objects for each test to mock out values for testing. Then I wrote another component which used the same object and mocked it out again there. There’s got to be a better way, I thought. And Matt Smith at FinoComp introduced me to one.
I now use builder functions which return commonly used object types in testing and which allow properties to be overridden everywhere. There is certainly a little bit of extra time needed to set them up but I find that, once they are done, the next time you have to interact with that object you are so glad they are there.
```typescript
// Example type
interface User {
  firstName: string;
  lastName: string;
  username: string;
  emailAddress: string;
  SEN: string;
}

interface ShowHideUserDetailsProps {
  showDetails: boolean;
  user: User;
}

// Builder pattern to build a mock object for the
// ShowHideUserDetailsProps type
export const buildShowHideUserDetailsProps = (
  overrides?: Partial<ShowHideUserDetailsProps>
): ShowHideUserDetailsProps => {
  const defaultShowHideUserDetailsProps = {
    showDetails: false,
    user: {
      firstName: "Jazmyne",
      lastName: "Jacobs",
      username: "Kylee_Skiles37",
      emailAddress: "Rashawn13@gmail.com",
      SEN: "SEN-123456"
    }
  };

  return { ...defaultShowHideUserDetailsProps, ...overrides };
};
```
There are some limitations to this pattern however as they become less useful with deeply nested object types. Additionally, they do require some upkeep when object types change in the codebase which brings me to my next point...
Hack 4: Use a tool to mock your TypeScript types
Look, I’m going to be straight here. This is the part where I plug my own work but at least I left it for last, right?
Whenever I found myself creating another mock object for testing I kept looking at my TypeScript types and thinking, can’t something look at those and do it for me? This sent me on a search for a solution, and I was so stoked to find intermock, a command-line tool that does just that. While it is still a work in progress and has some limitations, I have found it super helpful when writing tests.
I did find using the combination of the CLI and copy/pasting from the terminal a little cumbersome though. How could I make this even easier I thought.
Enter my VSCode extension, Emulative. Simply select your type name, run a command through the Command Palette in VSCode, and you can use Emulative to produce TypeScript objects, JSON objects, or the aforementioned builder functions from TypeScript types. These are automatically copied to the clipboard, but the plain objects can also be sent to a new scratch file.
But wait, there’s more! Where I work at Easy Agile we have a bunch of properties we work with from Jira which aren’t accurately represented by string or number. With Emulative you can set up key/value pairs which will overwrite any matching properties which are found in your types.
Shout out to Easy Agile for giving me the time, resources and encouragement during an Inception Week to work on Emulative!
Well, that’s it for me, I hope you find some of these tips and tricks useful for speeding up front end testing.
Either way please feel free to sound off in the comments about nifty tricks you have found to improve your unit testing speed (or just how wrong I am about TDD).