[NTLUG:Discuss] Maximum files per directory

MadHat madhat at unspecific.com
Mon May 19 20:01:20 CDT 2003


On Sun, 2003-05-18 at 23:12, Vaidya, Harshal (Cognizant) wrote:
> Hey Richard,
> 
>   Files can be created as long as inodes can be allocated from the
> filesystem.  The inodes are allocated from the filesystem's inode
> tables.  It doesn't matter whether the files are created in a single
> directory or in different directories.
> 

There are limits on the number of links a single directory can hold,
and the exact figure depends on the OS version, its tweaks, and the
file system you have installed.  It is not limitless, and it depends
on available resources.  To give you an idea: by default EXT2 and EXT3
allow 32000 (in theory you could raise that to 65535), JFS allows
65535, MINIX 250, MINIX2 65535, UFS 32000, and ReiserFS 65535.  This
is not a limitation of Linux; the same kind of limit exists in all OSs
and the file systems they use.  Do a search for "max link count".
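If you want to check the limit on your own box, pathconf(3) exposes it
as LINK_MAX.  A quick sketch in Python (any Unix should work; querying
the root filesystem here is just an example):

```python
import os

# Ask the kernel for the maximum link count on the filesystem
# holding "/".  On ext2/ext3 this commonly reports 32000; other
# filesystems report their own limits.
link_max = os.pathconf("/", "PC_LINK_MAX")
print(link_max)
```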

> As Steve Baker mentioned, there are performance implications if you
> have an excessively large number of files in a directory.  This is
> because of the way file references are stored in the directory.  A
> directory, as you may be aware, is essentially a simple table mapping
> filenames to inode numbers.
> 
> Whenever a file is opened, the directory that contains it is searched
> for that file's inode number.  The inode is then read from the
> filesystem's inode tables.  If there are a huge number of files in
> the same directory, this search for the inode number takes a long
> time, resulting in slow throughput from the OS.
> Regards,
> Harshal Vaidya.
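To illustrate Harshal's point, here is a toy model (not real kernel
code) of an unindexed directory as a flat filename-to-inode table,
where every lookup is a linear scan:

```python
# Toy model of a classic unindexed directory: a flat table of
# (filename, inode-number) pairs that must be scanned linearly.
directory = [("file%06d" % i, 1000 + i) for i in range(10000)]

def lookup(name):
    # Every open() pays this O(n) scan in an unindexed directory.
    for entry_name, inode in directory:
        if entry_name == name:
            return inode
    raise FileNotFoundError(name)

print(lookup("file009999"))  # worst case: scans all 10000 entries
```

This is why hashed directory indexes (like ext3's dir_index) or, failing
that, splitting files across subdirectories help so much.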
> 
> 
> -----Original Message-----
> From: Steve Baker [mailto:sjbaker1 at airmail.net] 
> Sent: Sunday, May 18, 2003 3:22 AM
> To: rwolfe at rwolfe.com; NTLUG Discussion List
> Subject: Re: [NTLUG:Discuss] Maximum files per directory
> 
> 
> Richard Wolfe wrote:
> > Is there a limit to the number of files I can put in a single
> > directory? And even if there's no limit, are there practical reasons
> > why I wouldn't want to exceed a certain number of files? Is 10,000 ok?
> > How about 100,000? I'm running 2.4.18 (RedHat 8.0), and the filesystem
> > is ext3, if any of that makes a difference.
> 
> A lot of things start to slow down quite a bit when you have a lot of
> files in one directory.  We had a system that put 6,400 files in one
> directory and the time taken to open a file went WAY up.
> 
> I'd strongly advise you to find a way to split it up into at least a
> couple of levels of hierarchy.
> 
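One common way to follow Steve's advice is to hash each filename into a
couple of short subdirectory names.  A sketch (the root path and hash
width here are just illustrative, not anything from this thread):

```python
import hashlib
import os.path

def shard_path(root, filename, levels=2, width=2):
    # Spread files across a multi-level hierarchy by deriving the
    # subdirectory names from a hash of the filename.
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

print(shard_path("/var/data", "example.txt"))
```

With two hex characters per level, each directory holds at most 256
subdirectories, which keeps lookups fast even with huge file counts
overall.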
> ---------------------------- Steve Baker -------------------------
> HomeEmail: <sjbaker1 at airmail.net>    WorkEmail: <sjbaker at link.com>
> HomePage : http://www.sjbaker.org
> Projects : http://plib.sf.net    http://tuxaqfh.sf.net
>             http://tuxkart.sf.net http://prettypoly.sf.net
> -----BEGIN GEEK CODE BLOCK-----
> GCS d-- s:+ a+ C++++$ UL+++$ P--- L++++$ E--- W+++
> N o+ K? w--- !O M- V-- PS++ PE- Y-- PGP-- t+ 5 X R+++ tv b++ DI++ D G+
> e++ h--(-) r+++ y++++
> -----END GEEK CODE BLOCK-----
> 
> 
> _______________________________________________
> https://ntlug.org/mailman/listinfo/discuss
> 
-- 
MadHat at Unspecific.com
`But I don't want to go among mad people,' Alice remarked.
`Oh, you can't help that,' said the Cat: `we're all mad here...'
   -- Lewis Carroll - _Alice's_Adventures_in_Wonderland_