19th March 2026
Author: Jamie Sawyer, Director, Echidna Solutions
In this series of articles, I will be doing a (very!) deep dive into Atlassian Cloud Migrations, and in so doing, trying to pass on as much of the context that I've learned in my 15 years in the Atlassian ecosystem as I can. If our goal is true understanding of Cloud Migration, we first need to build up a level of prerequisite knowledge on the history of Atlassian and their tools.
In our last part, we jumped into the Echidna Approach itself - a step-by-step guide on how it was developed, how it works, and why it is able to provide such great results.
In the final part of our deep-dive, it's time for story time with Jamie - a selection of personal anecdotes and experiences in the Cloud Migration space over the years, presented alongside the lessons that can be learned from them. In our first half, we explore tales on the fringes of Discoveries - enormous systems, complicated politics, and one of the biggest surprises Jamie has ever had!
With histories told and methodologies explained, it is finally time to report on how the Echidna Approach works in practice. In this section, we will review a number of real case studies which have been anonymised for the sake of privacy. Covering a variety of topics from across the engagements, we will be providing deep technical details alongside the tangible lessons that can be learned from these experiences.
On a personal note, writing these has been an incredibly nostalgic experience for me - the elation, generalised confusion, and fiery rage came flooding back as the words went into my editor. 15 years is a heck of a long time in this particular industry, and there are obviously stories - on other topics, or that didn't quite fit here - that I'll have to tell another time!
Interviewing end users from individual teams is a great idea in theory - you get direct engagement from those using the tools, build trust in the process (reducing change management overhead), and get to hear directly which parts of the tooling are really causing problems. This said, when you've got hundreds of teams using the system, how can you ensure coverage?
This was the reality during an early Cloud Migration discovery I was involved in, with one of our largest UK-based customers at the time. Initial discussions began with moving their entire Atlassian estate to the Cloud, but the sheer scale of this endeavour meant that a "big bang" approach was almost immediately sidelined as unviable. Instead, focus was put onto their primary development group within the business, and their ~10,000-user instance of the Atlassian Data Center tools. Even so, the scale of this instance alone was still enormous, with 300-400 teams utilising the platform - having a single interview with each one was obviously not going to be possible.
Normally in this situation we would discuss with the project team to identify groupings, and supplement this with a review of tool usage to find commonalities in configuration or Plugins. In the case of this customer, however, it wouldn't be this easy. We had been working with the company for a couple of years by this point, helping them to rationalise their footprint to reduce operating expenditures, merge instances together and migrate them to Data Center, and update their operational management approaches. The Jira instance that was in-scope for the Cloud Migration was therefore well known to us, and it had quite the history.
Administering an instance of almost 10,000 users can be a challenge, especially in a business made up of technical people who both know what they want and have a strong philosophy of developer-led change. With the potential workload of system administration being so high, the approach that was taken was to simply give people Jira Administrator access. When we first started working with the customer, they had over 150 Jira Administrators on the system, working from across the business - mostly team leads, Scrum Masters, and other engaged techies. The result was inevitable: every team ended up with their own configuration - individual workflows, custom fields, and ScriptRunner automations - and these individualised processes had become so solidly embedded over the years that in many cases no-one even understood their own configuration any more.
During one of our earliest engagements with this customer, this lack of operational management structure was our first target - much of our planned future work would become wildly more complex if configurations continued to move under our feet. Unsurprisingly, there was some resistance to centralisation efforts - a few teams were very much against losing administrative access, and particularly against having their ability to add or change custom fields removed. For the majority, however, we were surprised to find that the administrators within the teams really didn't want this level of access - they only had it because it was the only way of getting changes made in the system, and they'd far prefer a centralised support channel. Different teams were therefore managed in different ways. The few with serious technical understanding of the Atlassian platforms retained their administrative facilities, albeit on a shorter process-led leash, and received invitations to become members of the change board for the platforms. For others with more functionality-led reasons for remaining administrators (the most common being a need to regularly update custom field options), new processes were created to help them enact changes more quickly through the now-centralised support team, and they eventually ended up with some custom automations to enable self-service. Most, though, were simply glad to be rid of the burden!
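As an aside for the technically-minded: those self-service automations were built on the Data Center platform at the time, but the idea maps neatly onto Jira Cloud's REST API, which exposes custom field options directly. Here's a minimal sketch of the concept - site URL, credentials, and field ID are all hypothetical placeholders, not the customer's actual implementation:

```python
# Illustrative sketch only: add an option to a select-list custom field on
# Jira Cloud. Site, credentials, and field ID are hypothetical placeholders.
import requests

SITE = "https://your-site.atlassian.net"
AUTH = ("user@example.com", "api-token")   # basic auth: email + API token
FIELD_ID = "customfield_10123"             # hypothetical select-list field

def add_option(value: str) -> None:
    # Options belong to a field context; most fields have exactly one.
    contexts = requests.get(
        f"{SITE}/rest/api/3/field/{FIELD_ID}/context", auth=AUTH
    ).json()["values"]
    context_id = contexts[0]["id"]

    # Create the new option within that context.
    resp = requests.post(
        f"{SITE}/rest/api/3/field/{FIELD_ID}/context/{context_id}/option",
        auth=AUTH,
        json={"options": [{"value": value}]},
    )
    resp.raise_for_status()

add_option("New Business Unit")
```

Wrap something like this behind a request form or chat command, and teams can make their routine changes without holding Jira Administrator access at all.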
By the time the Cloud Migration discovery was due to start, these changes had been embedded for a couple of years. For the most part, this process was working well - most teams were happy, some of the "super users" had volunteered to drop their administrative access now change could be applied more easily, and newer teams at least were starting to utilise more shared configuration and taxonomies. This said, much of the technical debt still remained unpaid - most projects were unique in configuration, support load remained high, and teams still struggled to work together. This had recently been compounded by the merger of the company with an even larger American brand - getting teams from either side of this merger working more closely was a high priority, and the configuration of the Atlassian tools (particularly Jira) was a real sore point.
It was with this context that we came into our first planning meetings for the discovery process, and we knew from the off that this would be a challenge. With the recent merger aligning their vision of the future state, it was actually the current state of the platform that was the bigger unknown. With hundreds of teams in scope for the migration, and the history of unique implementations at the forefront of our minds, we searched for a way to get appropriate coverage without extending the project's timeline enormously.
The answer came from the super users mentioned previously - with administrative access being locked down, they had become oracles to many teams. After discussion with them, we identified our groupings - 4 interviews with the super users themselves, grouped by department, and another 4 with some key contacts from specific teams that the super users identified as being particularly unique.
With a week and a half of the interviews down, this approach was working well - as much as the implementation in the tooling was unique for many teams, the underlying processes bore a number of striking similarities. Most of the challenges looked to be around integrations with the smorgasbord of development tools that were in use¹, but some level of shared configuration was looking promising. It was then that our primary contact in the project team came to us with another team - a hardware team he'd just found out was notoriously unique - and asked us to add them to the interviews. Not a problem, just one more interview to schedule. A day later, there was another. And then another...
By the time the interviews had wrapped up, we had completed 27 of them, and our project plan had been adjusted to the point that it was unrecognisable. Each individual addition was no big deal, but it was very much a "boiling frog" situation unfolding. Beyond the timeline impact, the interviews were providing less value as time went on. That's not to say that there was no value - there would always be some new tidbit that would allow us to be better prepared during implementation - but it's fair to say that a point of diminishing returns was exceeded. Stakeholders and the project team were obviously made aware of the timeline as each change was made, and were given insight into the value derived from each session. This said, if I were to be critical looking back, it would be fair to say that there was a failure, primarily in our project management, in not aggregating and communicating the consolidated impact and value.
The end result of this exceptionally large discovery was an 80+ page current state report and a more detailed Gap Analysis than was probably necessary - but also, on the positive side, a smoother implementation phase than one would have expected going into the project², and overall a happy customer.
In terms of learnings from this engagement - there was definitely a point somewhere between the original 8 interviews and the resultant 27 where the value gained in implementation effort reduction was overtaken by the timeline impacts of scheduling and running additional interviews. From this point onwards, we enforced a hard cap of 20 interviews, and generally tried to keep the count below 12-15 or so. In cases where the complexity warranted more interviews than this, we would take alternative approaches - splitting the effort into waves, for example.
The next big learning was in ensuring a holistic view of the project is kept in sight and in mind throughout, especially as a project manager. Although each individual extra interview seemed justified in isolation, by the end the value derived from them simply didn't justify the effort.
Finally, on a more positive note - super users within the customer business can be absolute gold dust. In this case, we only became aware of their camaraderie when we spotted their shared Slack channel while discussing Jira at one member's desk. Today, it's common practice for us to find out who these internal "Jira Gurus" are, and to actively seek them out to discuss things - they'll often be huge sources of information!
¹Necessarily so, I might add - the age and nature of the business meant there were teams working on mobile, web, and differing types of desktop software, alongside teams working with embedded systems. Although there was some unnecessary crossover (particularly around some of their CI/CD tooling choices), the vast majority of variation was a direct consequence of the diversity of their tech stack.
²This is not to say it wasn't without its challenges - we came across occasional unique implementations within individual teams that weren't expected, had some tricky conversations with teams from another department that were reliant on data that came from direct database access on the Data Center instance, and ran into an interesting bug in agile board indexing on Cloud that I've not seen before or since!
Designing the future of an Atlassian estate can be fraught at times - the nature of the direction-setting means that there are many voices and opinions vying for attention. In general, this means that we want to keep our workshops as small as possible - it gives more space for people to be heard, and allows relationships to form between the users and the project team. In bigger businesses this can be challenging - especially when word gets out that a migration project is personally sponsored by a member of the executive team.
This was the situation we were presented with during the discovery of a migration project for a large tech-focused business in the UK. Our primary sponsor for the project was the CIO of the company, and they had been very heavily engaged from initial contact through to the end of current state analysis. Everything had been running smoothly so far, but as the future state planning session was booked in, things started to feel a little off - the CIO was getting less vocal in stand-ups, and was becoming more difficult to get in touch with.
The reasons for this revealed themselves during our scheduling session for the future state workshops. The CIO arrived at the meeting with an entourage, including the CISO, the CTO, and a handful of other people I recognised as being members of the development and IT security leadership teams. There was confusion on the faces of the customer's project team leadership, and the tension in the air was palpable.
The CISO began, and made it clear that, this being such a strategic migration with such a large security footprint, it was imperative that IT Security be integrated into all elements of the future state design.
The CTO then followed up by making it clear that such a strategic migration would have such a large impact on their software development processes that it was imperative the Development teams be integrated into all elements of the future state design.
Stealing a glance at the CIO at this point, I could see they were looking harangued - it was obvious that there were some politics in play within the exec team that we were not aware of.
The CIO brushed themselves down and explained that ever since these requirements had become evident, the team had been exploring options for ensuring that all voices were heard, and that there was an obvious solution - the upcoming quarterly big room planning session they had scheduled. With teams from across the business coming to the conference site near their headquarters, there was a unique opportunity to engage with everyone that we would need to - but we would only have a single day after the big room planning was complete before everyone started travelling back home. Oh, and the session was of course next week - the week before we had been planning to commence the individual workshops.
I will admit that the remainder of that meeting is something of a blur to me now - I spent much of it scribbling notes and trying to come up with a plan, while others discussed how strategically important everything was.
Coming out of the meeting, the CIO pulled me aside and apologised for springing this on me - there had apparently been some animosity brewing between the CISO and CTO for a while, and when the CIO had been providing a quick update on the project's progress, the proverbial and the fan came into rapid proximity. The escalation was threatening to delay everything by months, putting the entire project in jeopardy, so the CIO had suggested in the heat of the moment that the sessions be run at the big room planning event instead. Not an ideal situation for sure, but one we now found ourselves having to manage.
So, knowing already that big sessions would be wildly less efficient than the smaller workshops we would normally run, plus having our workshop time reduced from our provisional four 2-hour sessions to a single 7-hour monster (with lunch provided, of course!), how on earth was this going to work?
After the initial shock dispersed, the project team came together to plan. We fell back on the fundamentals of problem solving - understand where you are now, know what your goal is, and plot a path from A to B. We knew where we were¹; the thing that was less clear was where we wanted to go.
To some extent, our main goal was a question we'd implicitly answered during our planning for the original workshops - to come to agreement on the proposed future state across four main areas² - something that's eminently possible in a larger setting. We'd also been given a new goal by the exec: ensuring visibility and input from various areas across all components of the future state design - not ideal, but viable. The problem we were wrestling with was that there were other, less well-defined goals normally present as well - giving people the opportunity to feel ownership of the plan, engaging teams in the art of the possible to broaden their horizons, and ensuring concerns are properly understood, even if they can't be fully mitigated. In the moment, these ancillary goals felt unachievable given the limitations on both time and session sizes - we needed to come up with something quickly if this was going to be viable.
We got the list of invitees through the following day: a total of 35-40 people on the calendar invite. Leaders from across the Development, IT Security, Infrastructure, and Testing teams would be there, alongside the CISO and CTO. With a project team of just 5 people (myself and a colleague as external consultants, the CIO, and two people from their team), we knew it would be all hands on deck to make this work.
Accounting for expected dropout rates, the team suggested we'd probably land at about 25-35 people in the session in reality. We'd actually talked to the majority of the people on the list before - whether in formal interviews or in passing conversations about the project - so at least introductions shouldn't be necessary, and we already had good relationships with most of them.
Our first instinct was to split the session into 4 parallel groups, with one of the team facilitating each. This would effectively be like running 4 normal workshops in parallel, and the scale would be broadly manageable at about 7-8 people per group. If we ensured a mix of departments in each group, we'd have coverage from all departments on all strategic elements. The CIO highlighted the issue here though - what about the CISO and CTO? Would they be happy not having full individual visibility? They were of course right - we needed to adjust the approach, but in the stress of the moment we were coming up empty. We needed to stretch, clear our heads, and regroup after lunch.
My consultant colleague and I were chatting at the coffee machine, and an idea began to crystallise - how do we manage parallelisation in the world of computer science? We prepare for parallelisation, we fork, perform our processes, join the threads back together, and continue our primary work. Could we take this idea and use the pre- and post-parallelisation points to satisfy the execs?
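For the programmers in the audience, the shape we were reaching for is the classic fork/join pattern. A minimal, purely illustrative sketch in Python, using the four topic areas from this engagement as stand-ins for the breakout groups:

```python
# Purely illustrative: the fork/join shape we were groping towards.
# Plenary first, fork into parallel breakouts, join, then a shared readback.
from concurrent.futures import ThreadPoolExecutor

TOPICS = ["process", "App estate", "security", "operational management"]

def plenary(topics):
    # Pre-fork: everyone sees the full preliminary design together.
    print(f"Presenting preliminary future state: {', '.join(topics)}")

def breakout(topic):
    # Forked work: one facilitator per topic, running in parallel.
    return f"{topic}: decisions made, questions raised, resolutions agreed"

def readback(summaries):
    # Post-join: everyone sees everything again before sign-off.
    for summary in summaries:
        print(f"Facilitator reports - {summary}")

plenary(TOPICS)
with ThreadPoolExecutor(max_workers=len(TOPICS)) as pool:  # fork
    results = list(pool.map(breakout, TOPICS))             # join on completion
readback(results)
```

The pre-fork and post-join points are where everyone - execs included - gets the full picture; the parallel section is where the real work happens at a manageable scale.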
Returning to the project team, we had the seeds of an idea. In our previous meeting, we'd been looking at these parallel sessions in the same way as we would look at normal workshops, just being run at the same time. This would obviously not satisfy the execs as they wouldn't have visibility over everything. We explained what we'd been discussing over lunch³, and how we could exploit the pre-fork and post-join points to ensure all participants had a broad picture. After a couple of minutes of blank-looking faces while we tried to explain the minutiae of CPU threading and parallelisation, one of the project leads lit up and said "oh, yeah, like a breakout session!".
So, it's fair to say that this portion of the story was a little embarrassing to write in hindsight. In the moment, pressure was high, and the whole team were struggling. It turns out our massive "Eureka!" moment was just using breakout sessions in the same way as you would in any other meeting. The fact that no-one in the project team had considered this sooner just goes to show how poorly everyone can perform when put under stress!
So, with that basic realisation under our belt, we started to plan out our agenda.
We'd kick off with everyone in the main meeting room, presenting an introduction to the project as a whole, the highlights of our findings from the current state analysis⁴, and a short Q&A - 45 minutes would suffice. Next, we'd present our preliminary future state design to everyone - 20-30 minutes for each of the four areas would fill the 3-hour morning session nicely with breaks for coffee.
Lunch would then lead into our breakout sessions in the smaller meeting rooms. Each group would be facilitated by a member of the project team, and I would jump between sessions to answer questions and have discussions with attendees. These parallel sessions would run similarly to those we'd already been planning, but with less time on the current state and preliminary future state design, since both would have already been covered. With more attendees, though, we would retain the ~2-hour sessions to ensure everyone could engage. Grouping attendees would be easy enough - we had enough representatives to ensure 3-4 SMEs per area, similar to the original plan, and could then augment each group with representatives from other parts of the business to ensure coverage.
A short afternoon break later, and we'd be back in the main room with about 2 hours of the day left. Each facilitator would prepare 15-20 minutes of material to present the decisions that were made, the questions that were raised, and any resolutions that were agreed to. The final 30-45 minutes of the day could then be allocated to any remaining Q&A, and we could all head to the pub.
We agreed: it was a good plan. It would no doubt be a stressful day, but it looked like it would hit all our goals. The CIO presented the agenda to their peers, everyone was happy (or at least satisfied), and we had the green light.
You likely have questions at this point - did the meetings go well? Was the approach signed off? How was implementation? Did the tensions get so high between the exec that there was a full-on fist-fight on the meeting room floor?
This is sadly where storytelling runs headlong into reality. The sessions went fine - a fair bit more stressful to run, but we achieved all of our goals and got a large group of people engaged and feeling partial ownership of the project. They weren't the most efficient workshops that I've ever seen, there were a few personalities asking irrelevant questions, and there was a little follow-up required on the App estate⁵, but really not all that different to the usual. One big thing that worked in our favour was the absence of the CISO and CTO - there was an exec meeting booked in relatively last-minute so they sent their apologies to us, and they didn't really show that much interest later in the project either. The only proper wrinkle that we had to deal with was the absence of the CIO (due to the same exec meeting), but that just meant I had to run their breakout session and couldn't roam between sessions - annoying, but not the end of the world.
So, what lessons can we glean from this? Most obviously, people don't perform at their best in high-stress situations - previously noted, but worth reiterating. Secondly, future state really does end up as a hotspot for company politics to come into play - engaging widely and early in the process is always a good call. On this point, I don't think we missed any realistic engagement in this project, at worst I think we were not mentally prepared enough for the sudden injection of stress! The final lesson is that running workshops in larger groups can work, but it's a much more stressful and complicated endeavour - it's certainly not something I would recommend attempting, but it is just about viable if necessary.
¹"Feeling like we were in a whole heap of trouble" is probably a good summary...
²For this customer, these topic areas were (broadly): process, App estate, security, and operational management.
³In far too much technical detail, which is probably not surprising given the document you're reading!
⁴All attendees would also get a copy of the current state report and our preliminary future state design later that afternoon so they were able to get some pre-reading done.
⁵Just getting a couple of App vendors to agree to some security-related terms that weren't particularly out of the ordinary.
Time has moved on since the previous anecdotes, as it is wont to do, and with that, instances have grown, gained history, and built up technical debt. More recently, we engaged with a customer wanting to move their two-decade-old instances of Jira and Confluence Data Center onto the Cloud - we went in prepared for the worst, and even then, what we found surprised us.
At the time of our first discussions, the customer was actually preparing for an xCMA-based engagement that they had just been quoted by a large Atlassian Consultancy. The proposal that they had received was substantial, and quoted for an "Optimise and Shift" approach as opposed to a "Lift and Shift" - and with that, they provided a long list of areas that required preparation prior to the migration taking place¹. The customer was interviewing potential Atlassian Admin contractors for a 6-month contract, simply to cover the preparatory tasks prior to the main migration effort starting, and expected to extend the contract by 3-6 months as required to provide customer-side assistance during the migration itself.
During our first call, we were discussing the situation they found themselves in, and it quickly became clear that they weren't convinced by the traditional migration approach that had been proposed. It was looking to be an expensive and time-consuming process, and they didn't really have a vision of what their platform would look like at the end of it.
The project team explained on these early calls that their Atlassian systems were about 20 years old, and over that time had grown quite organically. As the company itself had grown, the Atlassian tools had moved from being a useful dev tool spun up "under the desk" by a keen development team, to being a strategic and critical piece of core infrastructure². They admitted that, at least partially due to this history, the centralised knowledge of the instances was pretty limited, meaning that the understanding of how individual elements of the platform had been configured was spread across the company, or had been lost during two decades of staff turnover.
Although the Jira instance had around 500 Projects, many hadn't even been looked at in years, and yet more were inactive and only required for occasional audit purposes. They had actually managed to take a first pass at quantifying the inactive Projects already, finding that just 70-80 Projects had been updated in the last couple of years. This said, the lack of centralised understanding of the platforms meant that there was some uncertainty as to the accuracy of this data.
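For the curious, verifying a list like this is straightforward to sketch - counting recently-updated issues per Project via the Data Center REST API. Base URL, credentials, and the two-year window here are placeholders, not the customer's specifics:

```python
# Sketch: sanity-check "updated in the last two years" per Project by counting
# recently-updated issues. Base URL and credentials are placeholders.
import requests

BASE = "https://jira.example.com"   # hypothetical Data Center base URL
AUTH = ("svc-account", "password")  # or a personal access token on newer versions

def recently_updated(project_key: str, weeks: int = 104) -> int:
    resp = requests.get(
        f"{BASE}/rest/api/2/search",
        auth=AUTH,
        params={
            "jql": f'project = "{project_key}" AND updated >= -{weeks}w',
            "maxResults": 0,  # we only want the total count, not the issues
        },
    )
    resp.raise_for_status()
    return resp.json()["total"]

projects = requests.get(f"{BASE}/rest/api/2/project", auth=AUTH).json()
active = [p["key"] for p in projects if recently_updated(p["key"]) > 0]
print(f"{len(active)} of {len(projects)} Projects touched in the last two years")
```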
Within those Projects, there was a high degree of variability - as they had been developed by different groups over the years, there was no configuration sharing, and JQL autocomplete was barely usable due to the high number of custom fields performing the same or similar functions.
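A first-pass audit of that duplication is equally quick to sketch - grouping custom fields by normalised name flags the exact duplicates, though the "similar function, different name" cases still need a human eye:

```python
# Sketch: flag custom fields whose names collide once normalised - a quick way
# to quantify duplication across an instance.
from collections import defaultdict
import requests

BASE = "https://jira.example.com"   # hypothetical Data Center base URL
AUTH = ("svc-account", "password")

fields = requests.get(f"{BASE}/rest/api/2/field", auth=AUTH).json()
by_name = defaultdict(list)
for field in fields:
    if field["custom"]:
        by_name[field["name"].strip().lower()].append(field["id"])

for name, ids in sorted(by_name.items()):
    if len(ids) > 1:
        print(f"'{name}' exists {len(ids)} times: {', '.join(ids)}")
```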
As the company had grown over the years, there was a drive to get more consolidated data out of the system. Progress against OKRs was managed manually by team leads in spreadsheets, and the effort that went into company-wide reporting on a quarterly basis was both immense and fraught - with inconsistent usage and the lack of common taxonomies being the primary causes.
As we were discussing the platforms in play, it became obvious that the "clean-up" proposed in Jira was certainly appealing, but the prospect of performing it on the existing system felt wildly overwhelming - and even then, it had the potential to be ineffective. The phrase "just burn it all down" was said at one point! Well, that might be an option, I responded - what if we built out a fresh instance of Jira on the Cloud, then looked at the data as an ETL problem? This was very interesting to the customer, and a month or so later, the Strategic Migration project began.
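To make the ETL framing concrete, here's a minimal sketch under heavy assumptions - a hypothetical mapping that collapses two legacy "team" fields into one target field, extracting via the REST search API and emitting a CSV for Jira's external system importer. The real pipeline was considerably more involved, and every name below is a placeholder:

```python
# Heavily-assumed sketch of the ETL framing: extract issues from the legacy
# instance, collapse duplicated legacy fields onto a shared target taxonomy,
# and emit a CSV for Jira's external system importer.
import csv
import requests

SOURCE = "https://jira.example.com"  # hypothetical legacy Data Center instance
AUTH = ("svc-account", "password")

# Transform step: two legacy "team" fields collapse onto one target field.
FIELD_MAP = {
    "customfield_10001": "Team",  # legacy "Squad"
    "customfield_10088": "Team",  # legacy "Delivery Team"
}

def extract(jql: str):
    # Page through the legacy instance's search API.
    start = 0
    while True:
        page = requests.get(
            f"{SOURCE}/rest/api/2/search",
            auth=AUTH,
            params={"jql": jql, "startAt": start, "maxResults": 100,
                    "fields": "summary," + ",".join(FIELD_MAP)},
        ).json()
        if not page["issues"]:
            return
        yield from page["issues"]
        start += len(page["issues"])

with open("import.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["Issue Key", "Summary", "Team"])
    writer.writeheader()
    for issue in extract("project = LEGACY ORDER BY key"):
        row = {"Issue Key": issue["key"], "Summary": issue["fields"]["summary"]}
        for legacy_id, target in FIELD_MAP.items():
            value = issue["fields"].get(legacy_id)
            if value:  # select-list values come back as objects, not strings
                row[target] = value["value"] if isinstance(value, dict) else value
        writer.writerow(row)
```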
We were preparing ourselves for significant complexity in our build-out - with such a large diversity of configuration and the sheer history present on the source system, we came into the project prepared for the worst.
After getting set up with system access, we started to look into the back ends. First impressions aligned with the messaging we'd been given - there were very few shared configurations, custom fields with the same or similar names were common, and most of the instance was something of a digital graveyard, with many Projects sat unused for years.
This said, there were still a few things that came as a surprise. We had been told that ScriptRunner was installed, so we were expecting a depth of customisation through code, but the Plugin was actually disabled and hadn't been used in years. The Plugin estate overall was quite slim - outside of a couple of vendor-supplied integration tools and a simple reporting tool, there really wasn't anything there. Looking into the workflows, many were evidently forks of the Jira Software simplified workflow, and in most cases minimal further configuration had been performed. As much as these came as a relief from a migration complexity perspective, we remained somewhat guarded - the age and complexity of the system was still at the forefront of our minds.
It was when we moved on to the interviews that our original expectations came crumbling down. The first interview progressed quite normally - a development team, a relatively rational process, frustration at a few manual tasks that had to be performed on a regular cadence, nothing out of the ordinary. We sketched out their process on the whiteboard, and even at this point I was spotting some areas for enhancement on the Jira side - integrations with their Source Code Management and chat systems would really help in a few places. The next interview came around, and it was the same story - another development team, another relatively rational process, and similar frustrations. As we walked through their process and wrote it on the whiteboard, I noticed it bore remarkable similarities to the previous interview - a few differences in what they called the various steps and such, but broadly the same picture.
That evening, I opened my notebook, and with the photos of both process flows in front of me, built out a sketch Jira workflow that would fit both teams equally - a remarkably easy job given that the two teams interviewed were worlds apart in their current configuration. In our third interview, we went through the agenda as normal, but when we got to their process, on a hunch, I asked them to simply describe it at a high level first. I pulled out my notebook as they were talking, and traced their description through the workflow I'd put together the previous day - it fit, bar one minor addition. I showed them the previously-designed flow, and it became clear: a few years prior, the development teams had got together and defined a global process to be used. They had Agile experts in place across the development teams, and had aligned their processes significantly - and none of this had been reflected in their Jira configurations.
That afternoon, now that I understood the global process in use, I went through each Project that was in-scope for migration. All but 3 Projects appeared to be aligned to it, and those 3 were owned by two teams, one of which I was interviewing the following day. What we had believed to be a highly-complex Jira migration with large numbers of custom flows to be designed ended up being a small handful of configurations that would be widely shared between teams. Don't get me wrong - it didn't trivialise the whole thing - many individual teams would still require custom automations for their particular technical integrations and such, but the scale had dropped dramatically.
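That alignment pass is easy enough to sketch in code - comparing each in-scope Project's status set against the canonical global process via the project statuses endpoint. The status names here are illustrative, not the customer's:

```python
# Sketch of the alignment pass: compare each in-scope Project's status set
# against the canonical global process. Status names here are illustrative.
import requests

BASE = "https://jira.example.com"   # hypothetical Data Center base URL
AUTH = ("svc-account", "password")

GLOBAL_PROCESS = {"Backlog", "In Progress", "In Review", "Ready for Release", "Done"}

def project_statuses(key: str) -> set:
    # Returns every status reachable by any issue type in the Project.
    issue_types = requests.get(
        f"{BASE}/rest/api/2/project/{key}/statuses", auth=AUTH
    ).json()
    return {status["name"] for it in issue_types for status in it["statuses"]}

for key in ["PROJ1", "PROJ2", "PROJ3"]:  # placeholder in-scope Project keys
    extra = project_statuses(key) - GLOBAL_PROCESS
    print(f"{key}: {'aligned' if not extra else 'diverges on ' + str(sorted(extra))}")
```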
Even with engagements that went better than expected, there are still lessons to be learned. First of all, this anecdote really highlights the value of interviewing individual teams - in just a week's worth of discussions, we'd not only simplified the migration process enormously, but had also gained a huge amount of understanding of how their process had developed over the years, and how their current tooling really wasn't supporting them. The second lesson is to not be too tied to a particular agenda or process - you have to be flexible in your structures so that when surprises crop up (and they do, regularly!), you can adjust accordingly. In this case, we changed the structure of our third interview to test a theory, and rewrote the agendas entirely for subsequent interviews given our new understanding of the process. The final lesson is one I've mentioned previously - the information gleaned from simply understanding where you are now can be hugely valuable. In this case, the process standardisation was not well-known outside of the development teams themselves, and ended up being revolutionary information for the project team, the IT Operations teams, and even, later on, the finance team that reported on dev team data!
¹The proposal documents themselves were visibly Confluence exports, and I recognised the format as a customised version of the standard one Atlassian supplies to Partners to assist with migration assessments - likely automated from presales data - giving high-level information on the clean-ups required across the board before progressing.
²If I had a penny for every time I'd heard this origin story, even in the largest customers...
In the second half of our review of Strategic Migrations in practice, we relay some tales about the implementation phase for large migrations, and the technical detail of what went into them.