19th March 2026
Author: Jamie Sawyer, Director, Echidna Solutions
In this series of articles, I will be doing a (very!) deep dive into Atlassian Cloud Migrations, sharing as much as I can of the context I've picked up over my 15 years in the Atlassian ecosystem. If our goal is true understanding of Cloud Migration, we first need to build up a level of prerequisite knowledge on the history of Atlassian and their tools.
In our last part, we investigated how tooling used to support Cloud Migrations has evolved over the years, and how this evolution has impacted the approach to these migrations taken by many Atlassian Consultancies.
In part 4 of our deep-dive, we are detailing the entire end-to-end process that we use here at Echidna Solutions - every single step. As with the previous part, we are splitting this part into 3 pages to aid readability - in our first post we explored the origins of the approach and gave a brief overview, and in our second post we jumped head-first into the Discovery and Planning phase. In this final post, we will close out our investigation by looking into Implementation, and reviewing the approach against a more traditional migration engagement.
As mentioned previously, the approach for implementation is, by necessity, bespoke. As such, unlike the discovery and planning process, I'm unable to detail a step-by-step guide here. Instead, in this section, I will be providing some overall guidance for the implementation process, and I will provide a number of real-life examples of Implementations in Part 5.
As a first note on implementation, I would always recommend having a regular cadence of meetings with the overall project team to update schedules and project plans as needed. In most cases I've found either weekly or twice-weekly works well, and ensures that any project team members working in parallel are kept abreast of changes that could impact them. In these cadence meetings, make sure to review the project plan itself and adjust any dates or resourcing accordingly - it should not be surprising to see plans change at times, whether because scheduling conflicts pop up for individual teams, or because some unexpected complexity in networking or user directories surfaces mid-project. Regardless of the reason, it's imperative that the plan remains flexible, that teams are impacted as little as possible, and that any impacted teams are informed as soon as possible.
For the early stages of Development Migrations, the process should be firmly focused on tool design and viability testing. I have some standard tooling that we've developed at Echidna Solutions to give me a baseline for CSV- and JSON/REST-based migration paradigms, and it's this code that I fork and customise for each customer's unique requirements.
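To make that concrete, below is a minimal sketch of what a CSV-to-REST baseline of this kind might look like. To be clear, this is illustrative rather than Echidna's actual tooling - the site URL, credentials, CSV column names, and field mapping are all assumptions for the example.

```python
import csv
import requests

# All placeholders for the example, not real values.
CLOUD_URL = "https://your-site.atlassian.net"
AUTH = ("migration-bot@example.com", "api-token")  # Cloud basic auth: email + API token

def transform(row: dict) -> dict:
    """Turn one exported CSV row into a Jira Cloud issue payload.

    This mapping layer is the part that gets forked and customised per
    customer - statuses, users, and custom fields all land here.
    """
    return {
        "fields": {
            "project": {"key": row["Project key"]},
            "issuetype": {"name": row["Issue Type"]},
            "summary": row["Summary"],
            # v3 of the Cloud REST API expects Atlassian Document Format here
            "description": {
                "type": "doc",
                "version": 1,
                "content": [{"type": "paragraph", "content": [
                    {"type": "text", "text": row["Description"] or " "},
                ]}],
            },
        }
    }

def load(payload: dict) -> str:
    """Create the issue in Cloud and return its new issue key."""
    resp = requests.post(f"{CLOUD_URL}/rest/api/3/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]

with open("export.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        print(row["Issue key"], "->", load(transform(row)))
```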
In many cases during the Development phase, it's easiest to simply use a copy of the source system alongside a sandbox of the destination for debugging purposes. At other times, however, faster iteration loops may be required, and mock systems incorporating just the complex areas of interest may be better. A word of warning - although on the source side (BTF) it's easy enough to run a local environment, on the target (Cloud) side the "free" tier is notably missing a number of features, including permission configuration. Make sure you confirm that what you're trying to test is available via the Atlassian website (e.g. the Jira Pricing page), and if not, consider other options such as the sandbox included with Premium or Enterprise, pushing data to differently-named Projects or Spaces per iteration, or even using the Production system if it's not yet live.
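On the "differently-named Projects per iteration" option - something as simple as stamping each dev run with its own project key keeps earlier runs from polluting later ones. A hedged sketch against the Jira Cloud REST API, where the URL, credentials, and lead account ID are all placeholders:

```python
from datetime import date
import requests

CLOUD_URL = "https://your-site.atlassian.net"   # placeholder
AUTH = ("migration-bot@example.com", "api-token")

def create_iteration_project(run: int) -> str:
    """Create a throwaway target project for one dev run, e.g. MIG7."""
    key = f"MIG{run}"
    resp = requests.post(
        f"{CLOUD_URL}/rest/api/3/project",
        auth=AUTH,
        json={
            "key": key,
            "name": f"Migration dev run {run} ({date.today():%d %b})",
            "projectTypeKey": "software",
            "leadAccountId": "5b10a2844c20165700ede21g",  # hypothetical account ID
        },
    )
    resp.raise_for_status()
    return key

# Point the migration tooling at the fresh project, run, inspect, repeat.
print("Loading into", create_iteration_project(run=7))
```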
It's worth keeping a clear delineation between the Development and Staging phases of the project - don't simply give your last Development instance to UAT testers and ask them to get testing! Instead, clear down all your dev environments and do a fresh end-to-end run, including rebuilding any mappings that have been developed and producing fresh exports from the source system. This allows you to time the process (necessary to establish realistic downtime windows), to review the impact of every step on both the teams and the source and target systems, and to ensure that any issues raised are not knock-on effects of earlier development runs.
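The timing exercise needn't be sophisticated, either - a throwaway harness that runs each runbook step once and records elapsed time is enough to size the downtime window. A sketch, with the step names and scripts purely illustrative:

```python
import subprocess
import time

# Each entry is one runbook step: a label and the command that performs it.
RUNBOOK = [
    ("Enable read-only mode", ["./lockdown.sh"]),
    ("Export from source", ["./export_source.sh"]),
    ("Transform + load to Cloud", ["python", "migrate.py"]),
    ("Post-migration fixes", ["python", "post_fixes.py"]),
]

total = 0.0
for name, cmd in RUNBOOK:
    start = time.monotonic()
    subprocess.run(cmd, check=True)   # abort the run if any step fails
    elapsed = time.monotonic() - start
    total += elapsed
    print(f"{name}: {elapsed / 60:.1f} min")

print(f"Total (candidate downtime window): {total / 3600:.1f} h")
```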
Regarding the source system for Staging runs - ideally this should be a 1:1 clone of the Production source system, including the same hardware capabilities, surrounding infrastructure such as databases and networking tooling, and a recent cut of data. This enables you to test all components of the migration, including any read-only implementation that you are performing on source. The one area this does not replicate well is the performance impact of the exports on a live source system, since the clone won't carry Production's concurrent user load - that said, the impact here is typically minimal on any modern system that doesn't have existing major performance issues¹. Sometimes system replication can't be achieved, or at least can't be achieved as cleanly as described here. In those cases, you'll need to plan some exceptions - perhaps perform the lockdown test on a non-representative test instance, then perform the extract process from the Production system itself during a low-usage period to establish timings.
Regarding UAT - my biggest single piece of advice is to not give users too long to perform testing. This may seem counterintuitive, but from running what must be hundreds of UATs with customers, the reality is that the majority of users will inevitably leave any testing to the last few days of a scheduled window anyway. Generally speaking, I try to schedule a week for any given UAT period, and keep a further week of buffer in the plan in case a re-run is required. More important than the length of the test is the amount of notice given to users - if your planning is tight, and you've given yourself sufficient buffer, these static dates can be communicated to teams the moment planning is complete. As a final note on this topic, I'd also suggest inviting users to calendar events alongside your communication of UAT windows - the testing is far more likely to stay front of mind while they're busy with their day jobs!
Issue reports in UAT can be something of a mixed bag. For Cloud Migrations, in my experience the vast majority tend to be issues on the user side (rather than the migration process or Cloud platform side) - I more often than not find myself talking users through where their boards are in Cloud, what their new shared workflow status means to their team, or where they can find a particular custom field that's hidden behind a collapsed section. This said, there will still be issues that crop up that require action rather than education. When fixing issues, try to find a post-migration fix as a first option - these can be implemented directly during UAT, confirmed as resolved by the user, and added to the runbook. In cases where a change needs to be made to the migration process itself, you should ideally re-run the process and provide users with additional UAT time, but reality dictates that this will be a judgement call in the moment.
Once UAT is signed off, your plan should leave as small a gap as possible before the Production migration. If you have a configuration lockdown in place, a short gap reduces the lockdown's impact on users; if you don't, it reduces the window in which configuration changes can negatively impact the migration process.
For Production migrations themselves, particularly if they're being run out-of-hours, I would always recommend not running them entirely solo - have a second person available throughout. This ensures there is someone to discuss issues or challenges with, and, in the case of a major incident, someone who can offer a perspective from outside the thick of it. Most migrations are trouble-free by this point, but I've been involved in migrations where customer data centres have gone offline mid-export, where VPNs have dropped out, and in one case, where the whole source system unceremoniously destroyed itself in the wee hours of the night².
Finally, we have our active support period after the Production migration - it's always worth keeping some time in the schedule dedicated exclusively to this task, particularly when doing a multi-wave migration. In my experience, with a well-managed project, high-priority issues tend to be few and far between, and similarly to UAT, the majority of issues are more about user education in the new platform than anything explicitly technical. Even so, being available and jumping in to help users early really helps to ensure the system is used effectively and genuinely supports them in their jobs.
¹It's at this point that I will say - if you're trying to migrate from an extremely old version, or from an instance which is on its last legs, you're going to need to be really careful here. I have used approaches similar to what's described here to migrate from Confluence 3.5 (from ~2011) to the Cloud before, and it worked, but it came with a whole host of additional complexities (not least the fact that pages were stored in a completely different format). All of that is wildly outside the scope of this document - come and chat with Echidna Solutions if you're considering something similar.
²This one was a customer who had something of an erratic IT team, and the failure was actually caused by a hardware fault in their DC. The story I heard was that the physical drive-failure indicator lights on their NAS had been installed upside-down, resulting in healthy disks being swapped out of their RAID array instead of the failed ones. I can't be sure of the veracity of that report, however - the impact of the failure was pretty massive, and I only heard the story from a contact after the company went under a year or two later.
With all the implementation complete, let's review the process and where the customer now sits.
First and most obviously - the customer is on the Atlassian Cloud. Not only this, but the customer is using configuration and processes which have been built for the Cloud platform, are based on their current working practices, and have been implemented across all teams holistically rather than piecemeal over many years. This has a transformative impact on teams - they're no longer working around decades-old configuration that was in the way, inter-team communication is more consistent due to shared language or configuration, and they have access to modern Cloud tooling like Rovo or Jira Automations in a manner that they didn't before. As Anthony De Silva, Senior Product Manager at Rightmove, said in our recent case study - "[taking this approach] helped us realise there were other more efficient ways to achieve the same Jira outcomes [and that] we're clearly a better prod-dev department because of these changes".
In a traditional lift-and-shift migration, although the customer would be "on the Cloud", they wouldn't have experienced any changes from the source in workflow or implementation - which for many teams means they're still not being effectively supported by the tools. Worse still, due to the differences between BTF and Cloud environments, some areas will perform worse or fail entirely. Automations remain aligned to BTF paradigms and may be missing functionality, Apps have different functionality to their BTF predecessors, and the configuration of the platform itself isn't optimised for Cloud-specific functionality such as Rovo, Teams, or Goals.
Secondly, we should consider how much time has been spent to get to this point. For most customers, although the process as described above may seem very involved, it actually ends up being remarkably quick. For an experienced team performing discovery and implementation, and certainly for the team at Echidna Solutions, implementing a greenfield configuration can be significantly faster than migrating existing configuration (due to the platform differences discussed previously). And although the discovery process can be time-consuming when dealing with many stakeholders, the trust built up through it has a significant positive impact on longer-term change management, and reduces the effort required later in the migration to a similar degree.
In more traditional migration processes, there are two slightly hidden areas that can impact the elapsed time of the project - mandatory platform clean-up before the migration, and the post-migration support efforts. With most Atlassian Consultancies, any migration quotation will require the customer to "clean up" their instance prior to the migration process itself. This will involve removing or archiving old data (at both the Project/Space and Issue/Page levels), removing or changing Plugins (to those that have supported migration paths), and cleaning up configuration. The list of changes requested as a prerequisite can be immense - most customers with older systems and 1,000+ users will end up spending at least 6 months performing these tasks, even with a dedicated resource. Post-migration support is often overlooked, but ends up being a long tail of the process with many traditional migrations - with less bespoke preparation, it's not surprising that there are more issues to be dealt with post-migration. Most customers that I've seen going through this will spend at least a couple of months with escalated support effort caused by an xCMA migration - I've even seen some still dealing with fallout a year later.
Next up, we need to consider the actual migration time itself, as this effectively defines downtime windows. It is true that techniques such as JSON/REST and CSV migrations take longer to execute than an xCMA migration of the same data - the transformation step alone is work that an xCMA migration simply doesn't perform. However, with discovery and planning being such an important part of the Echidna Approach, this comparison is not really fair. The reality is that with an ETL-focused process, you will be defining the scope of the extract carefully, prioritising business value and business continuity over data integrity alone. This means that in most cases, the migration being performed is smaller for ETL migrations than for equivalent xCMA migrations. In my experience, downtime windows between the two options end up pretty similar - neither approach has a "problem" in this area.
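To illustrate the scoping point, on a Server/DC source this can be as simple as driving the extract from a JQL filter rather than exporting the instance wholesale. A sketch against the standard search API, where the project keys, the two-year cutoff, and the credentials are all assumptions for the example:

```python
import requests

SOURCE_URL = "https://jira.internal.example.com"  # hypothetical BTF source
AUTH = ("svc-migration", "password")              # placeholder credentials

# Scope: only the in-scope projects, only issues touched in the last ~2 years.
JQL = "project in (ENG, OPS) AND updated >= -104w ORDER BY key"

def extract(jql: str):
    """Page through the source search API, yielding only in-scope issues."""
    start = 0
    while True:
        resp = requests.get(
            f"{SOURCE_URL}/rest/api/2/search",
            auth=AUTH,
            params={"jql": jql, "startAt": start, "maxResults": 100},
        )
        resp.raise_for_status()
        page = resp.json()
        if not page["issues"]:
            break
        yield from page["issues"]
        start += len(page["issues"])
        if start >= page["total"]:
            break

print(sum(1 for _ in extract(JQL)), "issues in scope for this wave")
```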
The elephant in the room is cost. For obvious reasons, it's difficult for me to state explicitly how much a generic migration costs, since there are so many variables involved. What I can do is look, from the perspective of a customer purchasing a migration service from an Atlassian Consultancy, at the effort expended by the Consultancy, and at any additional costs the customer will incur.
For Consulting effort itself, the two approaches weight effort quite differently. The Echidna Approach expends significantly more effort in discovery and planning than the xCMA path. On the other hand, the traditional approach requires much more technical effort during the migration process itself, in order to fix the myriad issues caused by exceptions. For a migration of sufficient size or complexity (which covers most Data Center instances), these effort sinks largely cancel each other out, so overall Consulting effort is typically similar for both.
The real cost that can be overlooked, however, is the non-Consulting cost involved. As discussed previously, most Atlassian Consultancies will require significant up-front cleanup to be performed - typically 6 months or more of effort. For many customers, there simply won't be capacity in existing teams to perform this function on top of their normal jobs, forcing them to hire a contractor to fill the gap - in the UK in 2026, an experienced Atlassian contractor will typically run you upwards of £750 a day, so across 6 months (roughly 120 working days) that's upwards of £90,000. Post-migration also carries significant costs in an xCMA-led migration - the increased support load is perhaps more difficult to price, but even an average-sized migration will run up a couple of resource-months of migration-induced support effort. Most other non-Consulting costs incurred by the traditional migration path are shared by the ETL-led approach - project teams will be required, time will be spent on UAT, and the costs of the resultant platforms will (typically) be the same³.
There is some non-Consulting cost present in the ETL process that doesn't exist in the traditional path - in particular, the effort expended by customer teams in the discovery and planning phase. If we assume the average ETL migration has 8 interviews of 3 people, each running 2 hours plus another 2 hours per person for prep and minutes review, we end up with 8 × 3 × 4 = 96 hours (or 12 working days) of effort expended. Obviously this will scale with the customer's individual needs, but the overall cost impact is relatively minimal.
Overall, the primary goal of the Echidna Approach is always a system that is both built for the Cloud and able to support the teams using it. If this goal is achieved, teams are able to work more effectively, making full use of the features that Cloud provides. And as shown above, even with these lofty goals, these migrations can be delivered on a shorter timeline than a comparable traditional migration, and at a lower overall cost to the business.
The recent project that Echidna Solutions delivered with Rightmove provides evidence of this - the project was completed within 20 weeks elapsed, compared to the 9-12 months of elapsed time estimated for the traditional approach by another Atlassian Consultancy. The overall cost to Rightmove also ended up lower - without the need to hire a contractor for instance preparation and migration support, Rightmove spent less than half of the budget they had allocated based on a quotation for an xCMA-led migration. The migration itself was a roaring success - no P1/2/3 issues were raised after any of the waves, and the teams were thrilled with the result. Obviously we can't compare this result directly to a traditional approach, but from my personal experience of similar-sized companies taking the xCMA-focused path, this outcome would be exceptionally rare!
At this point, you've got a pretty good picture of the Echidna Approach, and have a solid understanding of how it compares to the more traditional xCMA-focused approaches that are commonplace in the industry. Fair warning though - the process itself can only ever take you so far, and at Echidna Solutions we've been building up the tooling, techniques, and experience to make these migrations a success for years - that's not something that I can simply put down on paper! This said, to help steer you through some of these elements, in our next and final part we will be exploring some real-life examples of challenges in migrations, and how the team members at Echidna Solutions worked through them.
³I have been involved in some migrations where the target system ends up being smaller than expected due to findings in discovery, but that isn't a typical outcome - just a result of more thorough investigation and more time spent on goal definition.
In the fifth and final part of the series, I will be recounting a selection of real-life scenarios that the team at Echidna Solutions have encountered in customer migrations, and how we approached these challenges.
Part 5 will be released on Monday 23rd March.