Question : Transferring Large Amount of Data from Windows to Linux
Greetings!
Our company currently has a server running Windows Server 2003 that contains a rather large amount of data (close to 500 gigabytes), made up of tens of thousands of very small files, mostly pictures. We are in the midst of converting a great deal of our server infrastructure to the Ubuntu Linux platform, and need to transfer all of the data on this server to the new, Linux-based server that will take its place.
The problem I have encountered in the past, during any attempt to transfer this server's data from one location to another, is that although a transfer of 500 gigs of data should not in itself take an unreasonably long time, the fact that this data consists of tens of thousands of small files causes the process to proceed exceedingly slowly, taking days or even weeks to complete. I have tried numerous methods of file transfer, such as the built-in file copy capabilities of the OS and external utilities such as RichCopy, but none have made a significant difference in transfer time. (This is all occurring over a gigabit Ethernet connection, BTW.)
To make matters worse, this server will be live and receiving updates during the transfer process, so the data on the old server and the data on the new server will somehow need to be synced, with any changes made during the transfer reflected on the new server.
We do back up this server on a nightly basis using the Macrium Reflect imaging software, and restoring the image is a considerably faster process, since it operates on a block level rather than a file level, but that is not workable for this particular transfer, as we need to move the data from the NTFS file system to a Linux file system. Also, I cannot simply remove the hard drive from the source server and place it in the destination server to do a local copy, both because the source server needs to remain live during the process and because the drive is part of a RAID 5 array.
Any suggestions on the best way to accomplish this would be most appreciated.
Thanks!
- Tom
Answer : Transferring Large Amount of Data from Windows to Linux
The reason copying large numbers of small files takes so much longer is that each file close operation results in a directory write to the disk -- this causes both a seek operation and a full rotation of the disk for the write. With a Windows-to-Windows copy you could drastically reduce this time by ensuring that write caching was enabled on the drive you're copying to, which would defer the writes (they'd be "written" to a memory cache) and DRAMATICALLY speed up the copies.
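The effect is easy to see for yourself; here's a rough sketch (paths and file counts are arbitrary placeholders, and absolute timings will vary with hardware and caching):

```shell
# Create 1,000 small (1 KB) files, then one 1 MB file of the same
# total size, to compare per-file overhead.
# (/tmp paths are placeholders; absolute timings vary by machine.)
mkdir -p /tmp/manyfiles
time sh -c 'i=0; while [ "$i" -lt 1000 ]; do
  head -c 1024 /dev/zero > "/tmp/manyfiles/file$i"
  i=$((i+1))
done'
time head -c 1048576 /dev/zero > /tmp/onefile
```

On most systems the single large write finishes far faster than the thousand small ones, even though the total number of bytes written is identical -- the difference is all per-file overhead.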
However, I'm not a "Linux person", so I don't know how you set this option on your Linux box -- but it's almost certain that there is such an option. I'd ask a "How do I enable write caching?" question in the Linux OS zone -- if you set that option, your copies will be MUCH faster. The write cache will still need to be flushed occasionally, but that means FAR fewer independent directory writes than the one per file you're seeing now.
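To fill in the Linux side: drive-level write caching is typically inspected and toggled with hdparm (a sketch only; /dev/sda is a placeholder for the actual target disk, and the commands require root):

```shell
# Show the drive's current write-cache setting
# (/dev/sda is a placeholder for the real target disk)
sudo hdparm -W /dev/sda

# Turn the drive's write-back cache on
sudo hdparm -W1 /dev/sda
```

Note also that Linux already buffers filesystem writes in RAM by default (the page cache, with the usual async mount option), so copies onto an ext3/ext4 volume are normally write-cached at the OS level regardless of the drive setting.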