19th March 2026
Author: Jamie Sawyer, Director, Echidna Solutions
In this series of articles, I will be doing a (very!) deep dive into Atlassian Cloud Migrations, and along the way sharing as much as I can of the context I've built up over my 15 years in the Atlassian ecosystem. If our goal is true understanding of Cloud Migration, we first need to build up a level of prerequisite knowledge of the history of Atlassian and their tools.
In our last part, we investigated how tooling used to support Cloud Migrations has evolved over the years, and how this evolution has impacted the approach to these migrations taken by many Atlassian Consultancies.
In part 4 of our deep-dive, we are detailing the entire end-to-end process that we use here at Echidna Solutions - every single step. Similarly to the previous part, we are splitting this one into 3 pages to aid readability - in our previous post we explored the origins of the approach, and gave a brief overview. Here, we dig into the largest single phase - Discovery and Planning.
In a more traditional xCMA migration, Discovery and Planning is typically quite a cursory exercise, often performed within the opening couple of days of an engagement or within the presales process. The approach most often taken reduces planning to an (at least partially) automated exercise - simple data points are extracted from the systems¹, and pushed through a calculator to get a standardised price and project plan. Given the runbook-driven nature of these engagements, this is no bad thing. It ensures that estimates and plans are accurate and aligned to the reality of a repeatable process - at least, for the majority of migrations on smaller or less complex systems.
The problem the xCMA migration approach has is dealing with exceptions. The majority of migrations, at least during the Server EoL era, were from small, simple Server instances with limited complexity - generally speaking, these map cleanly enough to the world of the Cloud. The moment the source instance became larger or more complex, however, cracks in the xCMA migration approach began to appear. In my experience, although the majority of small migrations were smooth enough, large or complex migrations going wildly over schedule and over budget were not at all uncommon. This is all to say nothing of the usability of the newly-created site - with data parity being the primary (or even only) success metric, teams often found themselves struggling in the post-migration world.
In the Echidna Approach, we give Discovery and Planning the time and effort that is really required to deal with scale or complexity. As our implementation is fundamentally bespoke by design, it is of paramount importance to understand the teams, processes, and tools that are in-scope, in order to use this knowledge to craft a customised environment which allows users to take full advantage of the Cloud tooling.
Although our approach can and will adjust somewhat based on customer need, in broad terms there are 8 key stages that we will execute:
Kick-off
Technical Discovery
Team Interviews
Current State Review
Preliminary Future State Design
Future State Design Workshops
Gap Analysis
Collaborative Project Scheduling
¹Stats extracted typically include number of Issues, Projects, pages, custom fields, user macros, and Plugins, while more advanced systems also attempt to extract information regarding usage of various Plugins (although effectiveness is variable here).
Our first port of call in any Consultancy engagement is our kick-off. Typically attended by the Echidna team alongside the major stakeholders and primary project team at the customer, this event is an unsurprisingly standard affair. Individual introductions, digging into the planned process, and understanding and aligning goals are all on the cards here.
In terms of advice - a prepared slide deck that states the goals up front and lists the individual topics to cover will obviously help to keep things on track, and remember that the tone and style established here (particularly with more senior stakeholders) can easily set the course for the rest of the engagement.
Out of the kick-off, our next step is to dig into the Atlassian tools themselves. Typically with support from the customer's Atlassian administration team, we start to look into the tooling that's already in place, in order to understand the Current State of the customer's Atlassian estate.
As a rule, during this discovery, we will have a list of questions we want to answer about the estate and usage profiles on the system. A non-exhaustive list would include:
How many of the Projects/Spaces in the system are actively being used?
Do any of the Projects/Spaces appear to be used by the same subset of the user base?
Are there any obvious categories of usage that can be seen at a glance?
How much similarity is there between configuration of Projects/Spaces in the same category?
How heavy is the customer's usage of BTF features that differ in Cloud (e.g. Conditions in workflows, multi-team and/or multi-Project boards, or Script Fragments and custom Groovy in ScriptRunner)?
How much storage is used by attachments, and what's the profile over creation date (histograms are handy here)?
What does the admin process look like, is it documented, and how reactive or consultative is it?
Which teams do we need to be "higher-touch" with (either due to complexity of process, importance of stability, or simply due to attitude!)?
Obviously, the exact questions will differ based on customer Plugin usage, their goals, and your own experiences¹. Some elements will be targeted for direct data extraction², while others will be manual review, or based on discussions with the estate's Administration team.
I would generally advise performing a good portion of the technical discovery prior to any Team Interviews - the contextual information you're able to gather from this process can help guide some of the questioning. That said, you will always need to perform some further technical investigation after the interviews, based on the findings you get from the teams - it's not one-and-done!
¹Honestly, we add new things to the list of questions every engagement based on the ever-evolving tooling surrounding the process!
²These days I tend to get most data over the REST interfaces, using Python's jira library for extraction and Jupyter notebooks for analysis.
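Several of the questions above lend themselves to direct extraction followed by lightweight analysis in the notebook. As a minimal sketch of the attachment-profile question (the helper name and data shape here are my own illustration, not a standard API), assuming attachment metadata has already been pulled over REST as per the footnote above:

```python
from collections import defaultdict
from datetime import datetime

def attachment_histogram(attachments):
    """Bin attachment storage by creation year.

    `attachments` is an iterable of (created_iso, size_bytes) tuples,
    e.g. as collected from Jira's REST attachment metadata.
    Returns a dict of year -> total bytes stored.
    """
    bins = defaultdict(int)
    for created_iso, size_bytes in attachments:
        year = datetime.fromisoformat(created_iso).year
        bins[year] += size_bytes
    return dict(bins)

# Illustrative data: three attachments across two years
sample = [
    ("2021-03-14T09:26:00", 1_048_576),
    ("2021-11-02T16:05:00", 524_288),
    ("2023-07-19T11:00:00", 2_097_152),
]
print(attachment_histogram(sample))
# {2021: 1572864, 2023: 2097152}
```

In practice we'd feed the result straight into a bar chart in the notebook, but even the raw per-year totals are enough to spot where the bulk of storage sits.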
With a baseline understanding of the systems and their implementation in place, it's time to chat with the teams. This is the single most important point of the process in a lot of ways, and works both to shape large swathes of the ongoing design, as well as to build up trust between the teams working on the systems and the migration team as a whole. This trust will help to improve communications between the users and the project team, enable additional flexibility in scheduling or process adjustments, and reduce the overall change management costs by improving the team members' perception of the project as a whole.
The number of interviews can vary wildly from one customer to another based on needs - from just 2 to 3 in smaller, simpler businesses, up to 15 to 20 interviews in large or complex situations. How interview attendee groupings are decided can also vary significantly - some businesses find it better to interview by team or department, while others group teams with similar requirements together. As such, the scheduling of this ends up being a collaborative effort between Echidna Solutions and the customer, following 3 main guidelines:
Each interview should ideally remain small in terms of attendees - beyond 3 or 4 users and it starts to be challenging to ensure all voices are heard, and to keep the interview effective.
Attendees in a single interview share broadly similar work practices and tool usage - there's little point in a session attended by both a change management team that have a unique approval flow and heavy documentation, and an Infrastructure Support team that focus entirely on a service desk!
The set of all attendees should, as much as is viable, represent voices from across the business - ideally covering the majority of approaches and processes used in the system. In very large businesses, with unique and complex approaches throughout, full coverage may not always be possible - in these cases I would strongly recommend separating related groups into "waves" for both discovery and migration, keeping the total number of interviews to maintain coverage for each wave below 20 or so.
The interviews themselves are structured in order to ensure we are able to understand 3 primary points:
What the interviewees do in their jobs, and how they go about doing that (from a business value and process perspective)
How the interviewees currently integrate their usage of Atlassian tooling into this process
What pain points the interviewees are experiencing with their current usage of the tooling
To make sure we don't spend the entire session focused on one area¹, the interviews have a timetabled agenda which is sent out to participants prior to the session to give the team time to prepare their thoughts. Interviews are typically 1.5 to 2 hours long (depending on topics and scheduling), but often prompt ad-hoc follow-up conversations, whether over Slack or over a coffee in the kitchen - there's a lot to think about! After the session, Echidna send out a summary of findings to attendees, ensuring there's no confusion and that everyone is aligned.
A typical agenda for the interview of a development team might look like:
Introduction to the Migration Project and Team (5 minutes, led by project team)
What your team does, and how it fits into the business (15 minutes, led by development team)
How your team works in the existing Atlassian tools (15 minutes, led by development team)
Company-specific topics as needed - this is obviously variable in time and scope, but example topics could be how build and deployment management is integrated, how testers within the team perform their tasks, or the team's usage of a self-developed integrated tool
What challenges do you have using existing tooling, and what workarounds are currently used (15 minutes, led by development team)
Vision of the future platform (15 minutes, led by project team) - typically an "art of the possible" discussion, often including ideas from other teams to gauge interest
Free-form discussions and meeting close (15 minutes)
¹By which I typically mean the pain points - folks love to get into the weeds on that topic!
One of the more interesting refinements made to this process as time has gone on is an explicit feedback step to the customer's project team prior to any designing of the future state. It became obvious early on that, even for project teams that know their business inside out, the results of the discovery process can be surprising.
It is not uncommon to find that, as someone external to the business, we are able to see situations differently to the customer's own teams (who are sat right in the middle of it all). Common themes here include unnoticed commonality in process between different teams, shared frustrations presenting in different ways, or different teams' workarounds for the same issue producing data that cannot be compared between them.
The structure of this feedback can vary depending on the customer's preferences and the content involved - documented summaries of findings, casual discussions over coffee, or formal project meetings can all work.
Based on what we know of the current state of the platforms and usage, alongside the goals discussed in the kick-off, the team at Echidna Solutions will begin to pull together a draft design for the future state of the platform. The structure of the design can flex based on customer need, but will often contain these elements:
Process design - based on the process flows described in team interviews, we formalise these ideas into designs that can be used to generate workflows in Jira, or information architecture for Confluence, for example. Automation design can sometimes be a large enough component of this to warrant its own section altogether.
Infrastructure and Network design - although the majority of infrastructure in the Atlassian Cloud is a "black box" to the customer, elements such as federation of environments in Cloud Enterprise, integrations into internal systems, and VPN access will often require careful consideration.
User Access design - in many cases, at its surface this might be simple - connect to our Azure AD, all good - but there's often nuance that may be overlooked. At the very least, on- and off-boarding processes, external users, and outlier teams (such as recent acquisitions) should be considered here.
App Estate design - with Plugins and Apps being so different, App estates post-migration will typically look very different to those on the source system. Selecting appropriate Apps based on business need here is key, and picking the right Apps can be challenging given the scale of Atlassian's Marketplace¹.
The design at this point is not expected to be perfect, but is instead built to be used as a jumping-off point for workshopping a finalised design. Many elements of the document will have become self-evident based on the discussions in presales, kick-off, and from our earlier Discovery stages. For others, however, final decisions will still need to be made, and options will remain open. In some cases, extreme options are presented in the preliminary design, not necessarily with the expectation that they will be the correct choice, but more to spur discussion or to open eyes to alternate options that would have otherwise remained obscured.
¹Remember also that there's a security element here - most Atlassian Connect Apps host customer data in the vendor's own infrastructure, rather than within Atlassian's infrastructure, which can be a concern for IT Security teams.
With our preliminary design constructed, we need to refine and confirm all of these different areas with both the customer's project team and with the business as a whole. The number, structure, and scheduling of these workshops is defined collaboratively by the project team, based on the individual requirements of the business.
Similarly to Team Interviews, on average individual workshops sit in the 1.5 to 2 hour range, and should be kept relatively tight - 4 or fewer customer attendees is ideal. Unlike Team Interviews however, the size and length of individual workshops is much more variable, depending on the customer, attendees, and topic. In the past I've hosted everything from all-day sessions attended by C-suite members and representatives from IT Operations, IT Security, Development and Testing teams just to cover off Infrastructure, User Access and App Estate designs¹, all the way down to a casual 30 minute catch-up with a primary stakeholder over lunch to solidify plans for Apps in the future.
The goal of these sessions is to review all parts of the Preliminary Future State Design. The most effective way to achieve this is to separate by topic area. Workshops tend to be less formally agenda-driven than interviews, so structure tends to ebb and flow based on need. Individual and project introductions will obviously initiate the session, but content varies dramatically depending on topic. This said, there are some key considerations we should be aware of:
Ensure the entire section of the document is discussed - it's easy to miss a topic, and key questions should be communicated to attendees before the session to ensure everyone is fully prepared.
When designing process, visual tools (whiteboards, virtual whiteboards, pen and paper) are really useful to ensure the group comes to a collective understanding.
Key topics for discussion should ideally be visible to all participants throughout the session, and checked off as discussed.
Once each workshop session is over, provide an updated "final" version of the design to attendees to comment on or amend as appropriate.
Although difficult, try not to get sidetracked by current practices in existing tools - approaching topics in as greenfield a manner as possible will help ensure your target system is as effective as possible to the business.
As a "gotcha" based on experience more than anything else - make sure you engage with IT Security as early as possible - the sooner they have sight of the plans, the easier those plans will be to approve.
Once all workshops are complete, we present a final version of the full design to the customer's project team for final review and approval.
Finally, a word of warning for anyone doing this themselves - it's typically at this point that internal politics tend to rear up. It's not uncommon for tensions to rise between groups with different opinions on future direction - whether between IT Security teams and users or admins over use of Apps, or between teams wanting to move in different directions. This can be somewhat challenging to navigate at times, even after many years of experience, but in many cases being a relatively neutral third party can help to de-escalate!
¹Not necessarily the most efficient workshop I've ever been involved in, but given the changes being proposed and the challenges that existed internally, it was quite reasonably deemed necessary.
With our current state documented, and our future state agreed, all that remains in the way of scheduling the implementation is defining the path between these two states. Detailed advice on how to approach Gap Analysis is effectively impossible to provide, since the approach is necessarily unique depending on the customer's state, goals, and limitations.
In the interest of providing at least some guidance, at Echidna Solutions we generally follow these rules when creating a Gap Analysis:
Use any and all tools at your disposal. As discussed previously, there's an abundance of tooling available to support migrations, augmented by any tools or skills that you may bring to the table¹.
Don't assume that one approach will fit all content in any given system - most migration techniques aren't all-encompassing, so considering simpler paths for less complex Projects or Spaces is a reasonable approach.
Don't forget that manual effort is an option - manual implementation of configurations is a strong approach to ensure they're aligned to Cloud paradigms, and do consider manually adjusting the JQL used for extraction to target only data that brings real business value (e.g. only data updated in the last 3 years).
When designing your approach, actively try to challenge yourself to consider approaches that you don't typically use - it's easy to fall into using the same tools every time and become blinded to what the best tool for the job actually is.
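To make the JQL-scoping point above concrete, here's a tiny sketch (the function name is hypothetical) that builds a per-project extract filter limited to recently updated issues - Jira's relative-date syntax counts in weeks, so 3 years becomes roughly 156w:

```python
def scoped_export_jql(project_key: str, years: int = 3) -> str:
    """Build a JQL filter limiting an extract to recently updated issues.

    Illustrative helper only: JQL relative dates use week units, so the
    year horizon is converted to an approximate week count (52 per year).
    """
    weeks = years * 52
    return (
        f"project = {project_key} "
        f"AND updated >= -{weeks}w ORDER BY created ASC"
    )

print(scoped_export_jql("OPS"))
# project = OPS AND updated >= -156w ORDER BY created ASC
```

The same shape works per Project or per category of Projects, which pairs nicely with the earlier advice about not assuming one approach fits all content.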
The approach defined in the Gap Analysis is then presented to the customer's project team, giving them opportunity to question and clarify elements of the approach, in order to ensure alignment for scheduling.
¹As previously mentioned, although I've had a varied background in terms of programming languages, I'll typically build tools interacting with APIs using Python, for two main reasons. First, the jira library is unreasonably effective, and makes auth loops for both BTF and Cloud so much easier, meaning I can interact with both source and target in a consistent manner. Secondly, since Python is typically installed (or at least trivially installable) on most platforms, I don't need to worry about installing any additional tooling (such as ScriptRunner) on the Atlassian platforms themselves.
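That "consistent manner" point can be sketched without even installing the jira package - a hypothetical helper that derives the keyword arguments you'd pass to jira.JIRA(), assuming Cloud sites sit on *.atlassian.net and authenticate with an email + API token, while Server/DC uses a personal access token:

```python
def jira_client_kwargs(base_url, credential):
    """Build the kwargs to pass to jira.JIRA() so the same extraction
    code can drive both a Server/DC source and a Cloud target.

    Assumptions: Cloud sites live on *.atlassian.net and take basic
    auth as an (email, api_token) pair; Server/DC here authenticates
    with a personal access token via token_auth. The jira package
    itself isn't imported, keeping this sketch stdlib-only.
    """
    host = base_url.split("//", 1)[-1].split("/", 1)[0]
    if host.endswith(".atlassian.net"):
        # Cloud: basic auth with Atlassian account email + API token
        return {"server": base_url, "basic_auth": tuple(credential)}
    # Server/DC: bearer-style personal access token
    return {"server": base_url, "token_auth": credential}

# Values illustrative - in real use: JIRA(**jira_client_kwargs(...))
source = jira_client_kwargs("https://jira.internal.example", "server-pat")
target = jira_client_kwargs("https://example.atlassian.net",
                            ("me@example.com", "api-token"))
```

With this in place, the extraction code never branches on which platform it's talking to. A Cloud site fronted by a custom domain would of course need an explicit override.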
With our planned approach in hand, we can finally move on to scheduling in all the elements of the migration itself. Prior to starting this process, it's critical to ensure that every element of the migration is appropriately estimated - building out a mock plan is a good way to confirm that nothing in the process has been missed.
The collaboration itself should be focused on planning out timelines based on the estimates you've put together - taking into consideration things like availability of teams for user testing, grouping related teams together (if taking a waved approach), or any important calendar events for the customer like major software releases.
It is also worth considering whether to implement any additional change control on system configuration at any point in the process. As a minimum, I would recommend having the Administrators communicate any changes to the project team to keep them abreast. If it's viable in the schedule, a better approach would be to have project team approval prior to change (with some categories of change being auto-approved), and a complete configuration change lockdown being implemented prior to Staging. Customer appetite for these kinds of restrictions can have a big impact here, so if it's not viable, ensure that you consider the schedule impact of any changes that might occur, and the likelihood of those changes, when doing your planning.
On a related note, one big piece of advice I would give you is to ensure that you include some breathing room in your schedule. This allows for unexpected events to occur, but for user-impacting dates (such as UAT or Prod Migrations) to remain static. Given how important getting those dates right is, I would normally say "the more buffer the better" - just don't go overboard with it, as you don't want the fundamental structure of the platform to shift between Development and Staging for instance!
Overall, at times this process can be remarkably easy, with details falling naturally from previous conversations - being more of a simple solidification of the existing plan than fresh project planning. Sometimes though, this can be a maddening jigsaw puzzle, trying to satisfy countless stakeholders and their unique and conflicting schedules. The best advice I can give you is to allocate more time to this process than you expect - it's better to have the time and not need it than the opposite!
We continue our exploration of the Echidna Approach in our next page - Implementation and Review.