2009/10/12 John Dong <jdong@xxxxxxxxxx>:
>
> On Oct 12, 2009, at 12:16 PM, jim owens wrote:
>
>> Pär Andersson wrote:
>>>
>>> I just ran into the max hard link per directory limit, and remembered
>>> this thread. I get EMLINK when trying to create more than 311 (not 272)
>>> links in a directory, so at least the BUG() is fixed.
>>>
>>> What is the reason for the limit, and is there any chance of increasing
>>> it to something more reasonable as Mikhail suggested?
>>>
>>> For comparison I tried to create 200k hardlinks to the same file in
>>> the same directory on btrfs, ext4, reiserfs and xfs:
>>
>> what real-world application uses and needs this many hard links?
>>
>> jim
>
> I don't think that's a good counterargument for why this is not a bug.
>
> Can't think of any off the top of my head for Linux, but definitely in
> OS X Time Machine can easily create 200+ hardlinks.

As a lurker, I've actually got a real-world example of something I do that
would probably hit this. It was hinted at before: web URLs are sometimes
ridiculously long.

I run a web archiver on my router box that saves every HTTP URL I hit to a
file named after its URL with a date appended. But then I periodically run
a de-duplicator on the saved files, which hard-links together all files
with the same contents (except empty ones). I bet there are lots of
examples that would exceed this limit within those directories.

--
Brian_Brunswick____brian@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx!Shortsig_rules!
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
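
For anyone who wants to reproduce the EMLINK behaviour Pär describes, a
small probe like the following works on any Linux filesystem. This is only
a sketch, not anything from the thread: the function name, the temporary
directory layout, and the safety cap are my own choices.

```python
#!/usr/bin/env python3
"""Probe how many extra hard links to one file a directory accepts."""
import errno
import os
import tempfile


def probe_link_limit(cap=1000):
    """Create hard links to one file until EMLINK (or until `cap`).

    Returns the number of links successfully created. `cap` keeps the
    probe from running forever on filesystems with very high limits.
    """
    with tempfile.TemporaryDirectory() as d:
        target = os.path.join(d, "target")
        open(target, "w").close()  # the inode all links will point at
        created = 0
        for i in range(cap):
            try:
                os.link(target, os.path.join(d, "link%06d" % i))
            except OSError as e:
                if e.errno == errno.EMLINK:
                    break  # hit the filesystem's per-inode link limit
                raise      # any other error is unexpected
            created += 1
        return created


if __name__ == "__main__":
    print("extra links created before EMLINK (or cap):",
          probe_link_limit())
```

On the btrfs of this era you would expect the loop to stop around the 311
links reported above (the limit depends on how long the link names are,
since the back-references share one metadata item); ext4, reiserfs, and
xfs should all run into the cap instead.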
