


Problem with safe capacity for big writes of Ntfs-3g? 

Joined: Mon Dec 08, 2014 09:52
Posts: 1
Post Problem with safe capacity for big writes of Ntfs-3g?
Hi,
Line 64 of param.h contains the following:
#define SAFE_CAPACITY_FOR_BIG_WRITES 0x100000000LL
This means the minimum safe capacity for big writes in NTFS-3G is 4 GB.

My question is: can SAFE_CAPACITY_FOR_BIG_WRITES be changed to 1 GB (0x40000000LL) or lower?
If I change the minimum safe capacity, what will happen, and how can I test this situation?

Thank you!


Tue Dec 09, 2014 09:49
NTFS-3G Lead Developer

Joined: Tue Sep 04, 2007 17:22
Posts: 1286
Post Re: Problem with safe capacity for big writes of Ntfs-3g?
Hi,

Quote:
My question is: can SAFE_CAPACITY_FOR_BIG_WRITES be changed to 1 GB (0x40000000LL) or lower?
If I change the minimum safe capacity, what will happen, and how can I test this situation?

This is related to how the cluster allocator is designed: if it cannot allocate in a single pass, the file system is declared full (ENOSPC). If you want to allocate many clusters (a big_write buffer of 128K is usually 32 clusters) and the file system is small and fragmented, this situation is likely to happen. Your data is not at risk, but you might not be able to fill the file system completely.

Regards

Jean-Pierre


Tue Dec 09, 2014 14:32

