NTFS-3G slow on large file

Joined: Wed Oct 05, 2011 16:13
Posts: 3
NTFS-3G slow on large file
Hello,

I have a 160 GB image file of a broken NTFS disk as the source, stored on a local EXT4 drive. The target is a new 500 GB WD Passport USB drive, factory-formatted as NTFS. The USB drive is mounted with ntfs-3g, version 2010.10.2 (integrated FUSE 27). Loop-mounting the image and doing "cp -a ..." gave decent copying rates, as expected for a USB 3.0 drive connected to a USB 2.0 port. However, copying the image file directly was slow:
dd if=/spare/bon/alex.img bs=4M | dd bs=4M of=alex.img
38156+1 records in
38156+1 records out
160039240192 bytes (160 GB) copied, 44073.2 s, 3.6 MB/s
38155+3 records in
38155+3 records out
160039240192 bytes (160 GB) copied, 44074.1 s, 3.6 MB/s

Copying the big file started at the expected rate, but after a few GB it slowed down to about 3.2 MB/s. I also tried dd if=/spare/bon/alex.img of=alex.img bs=4M and saw the same behaviour.

Is this expected behaviour?


Wed Oct 05, 2011 16:23
NTFS-3G Lead Developer

Joined: Tue Sep 04, 2007 17:22
Posts: 1286
Re: NTFS-3G slow on large file
Hi

Quote:
Is this expected behaviour?


Copying big files from USB to USB is generally slow, regardless of the file systems involved. You should make an intermediate copy on a non-USB device.
http://forums.fedoraforum.org/showthrea ... 193&page=2
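
A minimal sketch of the intermediate-copy idea, with example paths (not from the thread):
Code:
cp /mnt/usb-source/big.img /home/user/big.img
cp /home/user/big.img /mnt/usb-target/big.img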

Regards

Jean-Pierre


Wed Oct 05, 2011 21:32

Joined: Wed Oct 05, 2011 16:13
Posts: 3
Re: NTFS-3G slow on large file
I was copying from a local file to USB.


Thu Oct 06, 2011 12:28
NTFS-3G Lead Developer

Joined: Tue Sep 04, 2007 17:22
Posts: 1286
Re: NTFS-3G slow on large file
Hi,

Quote:
I was copying from a local file to USB.

Then the first thing to check is whether your target device is too fragmented. A frequent cause of fragmentation is a device that is nearly full (over 85%). You can easily check that with ntfsinfo:
Code:
ntfsinfo -vF alex.img UNMOUNTED-DEVICE | grep '0x.*0x' | wc
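# one matched line per data run, so the first number printed by wc is the fragment count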

If you get hundreds of fragments, then that is the cause. If so, you will also see poor performance when updating files within the loop-mounted alex.img.

Note: the big block size (4M) you used for dd does not help. The FUSE library, on which ntfs-3g is based, uses 4K buffers by default. With a more recent ntfs-3g you can use bigger buffers, but there is a system limit, generally set at 128K.
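
If your ntfs-3g build and kernel support the FUSE big_writes mount option (an assumption about the setup, not something this 2010.10.2 version necessarily has), mounting with it allows writes up to that 128K limit:
Code:
mount -t ntfs-3g -o big_writes /dev/sdb1 /mnt/passport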

Regards

Jean-Pierre


Thu Oct 06, 2011 15:18

Joined: Wed Oct 05, 2011 16:13
Posts: 3
Re: NTFS-3G slow on large file
Well, as I wrote, the disk was new (500 GB). I unpacked the 160 GB image, then tried to copy the image file itself. I aborted several tries, all of which slowed down after some time. The numbers I gave above are from a run that went overnight.

# ntfsinfo -vF alex.img /dev/sdb1 | grep '0x.*0x' | wc
10389 31168 317666
mount...
# df |grep passp
/dev/sdb1 488352764 316714828 171637936 65% /mnt/passport

So at least the disk is not filled over 85%.

The image file was created with ntfsclone --rescue from a disk with read errors to an ext3 disk. Could this be the cause of the many fragments?


Fri Oct 07, 2011 10:22
NTFS-3G Lead Developer

Joined: Tue Sep 04, 2007 17:22
Posts: 1286
Re: NTFS-3G slow on large file
Hi,

Quote:
# ntfsinfo -vF alex.img /dev/sdb1 | grep '0x.*0x' | wc
10389 31168 317666

So there are over 10,000 fragments; this is the cause of the bad throughput. Note that recent versions of ntfs-3g have been improved at creating such fragmented files.
Quote:
The image file was created with ntfsclone --rescue from a disk with read errors to an ext3 disk. Could this be the cause of the many fragments?

Probably, yes. You must have created the image without the --save-image option, so the image is a sparse file which contains only the clusters in use, and there is at least one fragment per run of consecutive used clusters. The only way to avoid this is to fill the holes with zeroes (using cp with the option --sparse=never)... but this is likely to require more space on the target device, which is another cause of fragmentation.
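
For example, a hole-free copy of the image could be made like this (the target path is only an example):
Code:
cp --sparse=never alex.img /mnt/passport/alex.img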

If you saved your original partition with --save-image and are now doing a --restore-image, you also get a sparse file, with the same consequences.
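
For reference, that save/restore pair looks roughly like this (the device name is a placeholder):
Code:
ntfsclone --save-image --output backup.img /dev/sdX1
ntfsclone --restore-image --output restored.img backup.img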

You can check whether alex.img is sparse by comparing the outputs of:
Code:
du alex.img
du --apparent-size alex.img
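
If the first command reports far less than the second, the file is sparse.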


You should probably extract and copy the individual files out of alex.img, and use the big sparse file only as a source to extract from.
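
A minimal sketch of that approach, assuming mount accepts the loop option together with ntfs-3g (mount points are examples):
Code:
mount -t ntfs-3g -o ro,loop alex.img /mnt/image
cp -a /mnt/image/. /mnt/passport/data/
umount /mnt/image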

Regards

Jean-Pierre


Fri Oct 07, 2011 14:09