Enterprises are typically very concerned about how long a data migration project will take. After all, prolonged migrations can cost organizations hundreds of thousands of dollars. As a result, many data migration solution providers have prioritized speed over all other factors. Unfortunately, rushing a migration can end up costing the enterprise more money in the long run if the project is not done correctly. As the story of the tortoise and the hare teaches us: the fastest does not always win the race.
Datadobi has rightfully earned a reputation for speed. Over the years, we have completed thousands of migrations between nearly every possible source and target storage device, both on-premises and in the cloud. However, we also recognize that there are times when being methodical during a data migration can speed up the project overall.
You might be asking yourself, “How does slowing down the migration make the project go faster?” Let’s take a closer look at the process to see what I mean.
Managing Bottlenecks Enables Faster Data Migrations
Datadobi has had many enterprise customers who needed to migrate file content that had been relocated by an archiving appliance. This type of migration creates a natural bottleneck because the content to migrate is split between high-performance primary storage and lower-performance secondary storage.
Between the two storage tiers lies the archiving appliance, which traditionally has to be employed to recall the previously archived data. The recall process is slow and error-prone, since large-scale recall is not the use case for which these archive appliances are designed. Their primary use case is the opposite: periodically recalling only small amounts of data that were relocated to the secondary platform. There is also an assumption that the bulk of the archived data will expire before a migration becomes necessary. Unfortunately, many organizations’ governance policies do not allow data to be deleted. The archived data never expires, and a migration then requires all of it to be recalled through the original archiving appliance.
This puts us in a situation I refer to as ‘managing bottlenecks’. In this scenario, reading too quickly could easily overload the recall process, resulting in massive recalls and poor observed performance. Worse still, corrupted data could be returned and copied to the destination.
The Solution to Fast and Effective Data Migration
The better solution is to manage the process efficiently and intelligently by eliminating the bottleneck: the archiving appliance itself. Here, pure speed must be balanced against running extra checks to ensure the source system is not overloaded, and that valid data is read into the migration stream directly from either primary or secondary storage. Organizations should be able to complete the process without requiring massive recalls through the archiving appliance. When Datadobi manages the bottlenecks, we manage both the speed and the accuracy of the migration while the source systems remain in production, servicing their existing workloads.
Our software, being the most enterprise-ready tool on the market, allows organizations to manage bottlenecks in migrations. Our QoS feature in DobiMigrate is second to none when dealing with the above scenario. DobiMigrate can limit the speed and impact on either the source or target to ensure it will not go faster than either device can handle. We can limit the number of threads, proxies, and iterations, and even cap how fast DobiMigrate can read or write in megabytes or gigabytes per second. Additionally, we can raise or lower these limits in real time.
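The idea of a throughput cap that can be raised or lowered while transfers are in flight can be illustrated with a token-bucket throttle. The sketch below is a minimal, hypothetical illustration of that concept; the class name and units are assumptions for the example and are not DobiMigrate’s actual implementation.

```python
import threading
import time

class AdjustableThrottle:
    """Token-bucket throughput cap that can be changed at runtime.

    Hypothetical sketch for illustration; not the DobiMigrate internals.
    """

    def __init__(self, limit_mb_per_s: float):
        self.limit = limit_mb_per_s
        self._tokens = limit_mb_per_s   # allow up to one second of burst
        self._last = time.monotonic()
        self._lock = threading.Lock()

    def set_limit(self, limit_mb_per_s: float) -> None:
        """Raise or lower the cap while transfers are in flight."""
        with self._lock:
            self.limit = limit_mb_per_s

    def acquire(self, mb: float) -> None:
        """Block until `mb` megabytes may be transferred under the cap."""
        while True:
            with self._lock:
                now = time.monotonic()
                # Refill tokens in proportion to elapsed time, up to one
                # second's worth of the current limit.
                self._tokens = min(self.limit,
                                   self._tokens + (now - self._last) * self.limit)
                self._last = now
                if self._tokens >= mb:
                    self._tokens -= mb
                    return
                wait = (mb - self._tokens) / self.limit
            time.sleep(wait)
```

A copy worker would call `acquire()` before each chunk it reads, while an operator (or a monitoring loop) calls `set_limit()` to tighten or loosen the cap without restarting the migration.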
DobiMigrate Puts Enterprises in Control of Data Migration
We recommend customers start slower than they believe the storage device can handle and gradually raise the speed or thread count. This way, enterprises reach maximum speed without outpacing the source or target. Using native monitoring tools, DobiMigrate can monitor how the source and target are performing.
Even in the middle of an incremental copy, DobiMigrate can increase the speed. A slow increase ensures the source can keep up with rehydrating the data and the target can archive it fast enough.
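The "start slow and raise gradually" approach amounts to a simple control loop: increase the cap while the source looks healthy, back off when it does not. The sketch below shows one such loop; the function names, thresholds, and the latency signal are all illustrative assumptions, not part of any product API.

```python
def ramp_up(set_limit, read_latency_ms, start=50.0, step=50.0,
            ceiling=1000.0, budget_ms=20.0, rounds=20):
    """Gradually raise a throughput cap (MB/s) while the source stays healthy.

    Hypothetical sketch: `set_limit` applies a new cap, `read_latency_ms`
    returns the source's current read latency. Thresholds are assumptions.
    """
    limit = start
    set_limit(limit)
    for _ in range(rounds):
        if read_latency_ms() <= budget_ms:
            # Source is keeping up: additive increase toward the ceiling.
            limit = min(ceiling, limit + step)
        else:
            # Source is straining: back off sharply, but never below start.
            limit = max(start, limit / 2)
        set_limit(limit)
    return limit
```

The additive-increase, multiplicative-decrease shape is deliberately conservative: it converges on the fastest sustainable rate without ever holding the source above its comfort zone for long.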
Playing with a combination of threads, proxies, and throughput limits allows us to find the sweet spot where the migration runs as fast as the bottlenecks allow. Properly managing these challenges lets enterprises squeeze all of the speed out of the source and target, regardless of how the data is archived.
Just like in the story of the tortoise and the hare, this is how Datadobi “tortoises” its way to winning the data migration race. While other tools go as fast as they can and knock over the source and target, forcing them to start over again and again, we determine the optimal speed to make sure Datadobi crosses the finish line first and completes the migration faster, and more thoroughly, than our competitors.