Find the Largest Open Files and Their Owner(s) on Linux with lsof

In a previous article we covered finding the largest files on a file system. That often doesn't tell the whole story, though, when you're scrambling to clear a filled volume: for as long as a process exists which has opened a given file, that file will - even if apparently deleted - continue to occupy disk space until the process releases it.

This behaviour has numerous benefits, not the least of which is the ability to upgrade libraries and binaries in place. Software which relies on those libraries continues to run with the versions it was started with, preventing crashes due to version mismatches and giving you time to update the binaries themselves before restarting, for minimal downtime.

Unfortunately, it also makes things slightly confusing. You may have deleted a 400MB log file expecting to free 400MB immediately, but df still reports that the file system is full. If you know which process "owns" that file it's usually a simple matter of restarting the corresponding software. You won't always know, however, and that's where some clever use of the lsof command comes in handy.
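You can reproduce this deleted-but-still-open state yourself in a small sandbox. The sketch below (file names and the 5MB size are just illustrative) creates a file, has a background process hold it open, deletes it, and then shows that lsof still lists it, marked "(deleted)":

```shell
# Illustrative sandbox: a deleted file that still occupies space
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=5 2>/dev/null  # create a 5MB file
tail -f "$tmpfile" >/dev/null & holder=$!                # a process holds it open
rm "$tmpfile"                                            # "delete" it
lsof -p "$holder" | grep deleted                         # still listed, marked (deleted)
kill "$holder"
```

On Linux you can also peek at /proc/PID/fd directly; the symlink for the held descriptor has its target suffixed with "(deleted)".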

lsof spits out the size and owner of every open file. If you already know which file you're looking for, it's as simple as grepping the output:

# lsof | grep "/var/log/zimbra.log"
COMMAND     PID    USER   FD      TYPE             DEVICE    SIZE/OFF       NODE NAME
rsyslogd   1285    root    6w      REG             202,17    65125680    2031675 /var/log/zimbra.log

If we pipe lsof's output through awk and sort we can get some useful, human-readable information. The following command gives us the 10 largest currently open files on the root file system (the / argument limits lsof to files on that mount), their sizes in megabytes and the name of the process using each:

# lsof / | awk '{if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1}' | sort -n -u | tail

For example:

498.804MB /var/log/zimbra.log zimbra
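To see what the awk step is doing, here it is applied to a single captured line of lsof output (the rsyslogd sample line from earlier). Field $7 is lsof's SIZE/OFF column in bytes; dividing by 1048576 converts it to megabytes, and the same expression in the if() drops anything under 1MB:

```shell
# One captured line of lsof output piped through the same awk filter;
# 65125680 bytes comes out as roughly 62.1MB.
echo 'rsyslogd 1285 root 6w REG 202,17 65125680 2031675 /var/log/zimbra.log' \
  | awk '{if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1}'
```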

To view more or fewer than 10 results, add -n X to the tail command, where X is the number of lines you would like to see.
