
gen_initramfs_list.sh: Cannot open '/usr/share/v86d/initramfs'

karma

If you encounter this error while compiling a newer kernel:

  CC      init/calibrate.o
  LD      init/built-in.o
  HOSTCC  usr/gen_init_cpio
  /usr/src/linux-3.2.12-gentoo/scripts/gen_initramfs_list.sh: Cannot open '/usr/share/v86d/initramfs'
make[1]: *** [usr/initramfs_data.cpio] Error 1
make: *** [usr] Error 2

Emerge v86d:

# emerge v86d
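
This works because CONFIG_INITRAMFS_SOURCE in the kernel config points at /usr/share/v86d/initramfs (typically because uvesafb support is enabled), and emerging v86d provides that file. Assuming your kernel source lives at /usr/src/linux, you can confirm where your config points with:

# grep INITRAMFS_SOURCE /usr/src/linux/.config
CONFIG_INITRAMFS_SOURCE="/usr/share/v86d/initramfs"

If you don't actually need uvesafb, clearing that option is the other way out of the error.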

Credit due: big my secret: Problem of make kernel about v86d initramfs

Delete All Entries for a Given Criterion in ip_conntrack Table

karma

You may find yourself in a position where it is necessary to remove all the entries in Netfilter's connection tracking table (ip_conntrack) for a particular criterion, like the source or destination IP.

For example, I recently detected a user on one of my networks engaged in what was likely a TCP denial of service attack against root name servers (despite the odd fact that the destination port was 80). Because the user was NATted, all of their connections were being tracked. With the default timeout for tracking an established connection being 5 days, simply disconnecting the user at layer 2 would not have relieved the congestion on my routers within an acceptable time frame.

#  cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
98636

#  cat /proc/net/ip_conntrack | grep "xxx.xxx.xxx.xxx"
tcp      6 416408 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.5.147 sport=58967 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.5.147 dst=yyy.yyy.yyy.yyy sport=80 dport=58967 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 416406 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.9.239 sport=58967 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.9.239 dst=yyy.yyy.yyy.yyy sport=80 dport=58967 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 416400 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.11.231 sport=58968 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.11.231 dst=yyy.yyy.yyy.yyy sport=80 dport=58968 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 416387 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.14.37 sport=58968 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.14.37 dst=yyy.yyy.yyy.yyy sport=80 dport=58968 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 416381 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.11.103 sport=58968 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.11.103 dst=yyy.yyy.yyy.yyy sport=80 dport=58968 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 416275 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.9.57 sport=58967 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.9.57 dst=yyy.yyy.yyy.yyy sport=80 dport=58967 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 415776 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.1.52 sport=58967 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.1.52 dst=yyy.yyy.yyy.yyy sport=80 dport=58967 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 417319 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.43.60 sport=58967 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.43.60 dst=yyy.yyy.yyy.yyy sport=80 dport=58967 packets=0 bytes=0 mark=0 secmark=0 use=1
tcp      6 417319 ESTABLISHED src=xxx.xxx.xxx.xxx dst=192.168.43.13 sport=58968 dport=80 packets=1 bytes=40 [UNREPLIED] src=192.168.43.13 dst=yyy.yyy.yyy.yyy sport=80 dport=58968 packets=0 bytes=0 mark=0 secmark=0 use=1

...

It is possible to clear tracking entries en masse by removing and then reloading the iptables rules that require them to be tracked in the first place, but on a production gateway this is even less acceptable than waiting for them to expire. Fortunately, we can interact with the ip_conntrack table via conntrack-tools.

Against common sense, conntrack-tools is not available in the repositories of ClearOS (at least as of version 5.2), my favourite router distro. I grabbed a couple of recent versions in RPM form but they didn't feel like playing ball, so I ended up with version 0.9.5 from ftp://ftp.pbone.net/mirror/archive.fedoraproject.org/fedora/linux/updates/8/i386.newkey/conntrack-tools-0.9.5-3.fc8.i386.rpm

Apparently newer versions allow one to use -D intuitively, i.e.:

# conntrack -D -s xxx.xxx.xxx.xxx

But this is not the case for versions up to and including 0.97, at least - these require the -d, --dport, -s and --sport flags.

This wonderful person provides a way to pipe the output of conntrack -L (which lists entries by the same criterion I want to delete them by, i.e. -s only) through sed, which breaks the lines up into fields, and into awk, which runs conntrack -D on each entry with the appropriate flags. I had to do some cleanup to get it to work due to the way their blog software mangles punctuation (a lot of my first posts here are mangled in the same way - pobody's nerfect!):

 conntrack -L -s xxx.xxx.xxx.xxx | sed 's/=/ /g'| awk '{system("conntrack -D -s "$6" -d "$8" -p "$1" --sport="$10" --dport="$12)}'

It should be pretty clear how conntrack -L -s can be modified to work with the destination address or more complicated pattern matching.
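
For example, assuming your build accepts -d as a list filter the same way it does -s, sweeping by destination looks like:

 conntrack -L -d yyy.yyy.yyy.yyy | sed 's/=/ /g' | awk '{system("conntrack -D -s "$6" -d "$8" -p "$1" --sport="$10" --dport="$12)}'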

Now we can see the ip_conntrack table is at a more reasonable level:

[root@router ~]#  cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
46676

ip_conntrack: table full, dropping packet.

karma

Connections into and out of your network are working only sporadically. Your router's dmesg is flooded with "ip_conntrack: table full, dropping packet." What do you do?

This condition occurs when the connection tracking table has reached its limit. Connection tracking is a function of Netfilter that stores information like the source and destination IP addresses, port numbers, protocol type, state and timeout of a two-way connection. This facility lets us write sophisticated, informed Netfilter rules based on state that cannot be derived accurately from packet headers alone.

The conntrack table is held in kernel memory; if there were no constraint on its size it could conceivably grow until it started knocking off userspace processes (e.g. under DoS conditions). Entries in the conntrack table expire either when their timeout has been reached or when the connection has been properly closed. In cases where connections are not being closed according to protocol (poor network connectivity, DoS, spoof attack, etc.) the table can fill rapidly, causing an intermittent denial of service condition on your network.

The most prominent symptom of a full connection tracking table is that your old, running connections (e.g. secure shell sessions) will continue to function while it becomes impossible to establish new ones. Worse, as entries continue to time out and the table keeps filling back up you may "get lucky" and establish a new connection here and there, making the situation much more confusing.

Depending on the situation you may have one or two options. If you have gobs and gobs of RAM available, or the attack (for example) is low-volume, you can raise the table's entry limit. First, check what the current limit is:

# cat /proc/sys/net/ipv4/ip_conntrack_max
65536

You can see how full the table currently is by running:

# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
62168

ip_conntrack_max defaults to a value derived from how much RAM the system boots up with, but generally tops out at 65536 regardless. You may find that this isn't enough for a high volume network even under normal conditions. We can raise the limit temporarily thus:

# echo 131072 > /proc/sys/net/ipv4/ip_conntrack_max

If this turns out to be your magic bullet and you're sure no other actions need to be taken to mitigate your particular situation add the following line to /etc/sysctl.conf:

net.ipv4.netfilter.ip_conntrack_max = 131072

To load the value from sysctl.conf run:

# sysctl -p
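
If the table is filling with established connections that will never be torn down cleanly, another knob worth a look (assuming your kernel still exposes the older ip_conntrack names, as above) is the established-connection timeout, which defaults to five days (432000 seconds):

# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established
432000
# echo 86400 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established

Entries already in the table keep the timer they were handed; the new value applies as entries are created or refreshed.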

If you don't have the option of throwing more RAM at the problem you may be forced to make an executive decision in the interest of preserving network services for legitimate clients. You can decrease the load on the conntrack table by removing rules that use stateful logic (e.g. those containing "-t nat" or "-m state"). The brute force option is to rmmod the ip_conntrack module:

# rmmod ip_conntrack

However this may not be possible in all environments. The other option is to flush your rules and set the default policy to allow:

# iptables -P INPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -P OUTPUT ACCEPT
# iptables -F

This is also typically the effect of

# /etc/init.d/iptables stop
or
# /etc/init.d/firewall stop

Parsing and Embedding Twitter Feeds in PHP

karma

Twitter feeds can be obtained in XML with this URL scheme:

http://twitter.com/statuses/user_timeline/$username.xml?count=$count

Where $username is the twitter user's name and $count is how far back you want to go.

If allow_url_fopen is enabled in your environment you can load the whole thing directly into an object with simplexml_load_file().
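
A minimal sketch of that approach (using the $username and $count variables we will define further down) might look like:

$feed = "http://twitter.com/statuses/user_timeline/$username.xml?count=$count";
$twitter_xml = simplexml_load_file($feed);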

However, portability is good and allow_url_fopen is arguably very dangerous. For the sake of brevity let's rip this class from the PHP manual's fopen() comments:

class HTTPRequest
{
    var $_fp;        // HTTP socket
    var $_url;        // full URL
    var $_host;        // HTTP host
    var $_protocol;    // protocol (HTTP/HTTPS)
    var $_uri;        // request URI
    var $_port;        // port
   
    // scan url
    function _scan_url()
    {
        $req = $this->_url;
       
        $pos = strpos($req, '://');
        $this->_protocol = strtolower(substr($req, 0, $pos));
       
        $req = substr($req, $pos+3);
        $pos = strpos($req, '/');
        if($pos === false)
            $pos = strlen($req);
        $host = substr($req, 0, $pos);
       
        if(strpos($host, ':') !== false)
        {
            list($this->_host, $this->_port) = explode(':', $host);
        }
        else
        {
            $this->_host = $host;
            $this->_port = ($this->_protocol == 'https') ? 443 : 80;
        }
       
        $this->_uri = substr($req, $pos);
        if($this->_uri == '')
            $this->_uri = '/';
    }
   
    // constructor
    function HTTPRequest($url)
    {
        $this->_url = $url;
        $this->_scan_url();
    }
   
    // download URL to string
    function DownloadToString()
    {
        $crlf = "\r\n";
       
        // generate request
        $req = 'GET ' . $this->_uri . ' HTTP/1.0' . $crlf
            .    'Host: ' . $this->_host . $crlf
            .    $crlf;
       
        // fetch
        $this->_fp = fsockopen(($this->_protocol == 'https' ? 'ssl://' : '') . $this->_host, $this->_port);
        fwrite($this->_fp, $req);
        $response = '';
        while(is_resource($this->_fp) && $this->_fp && !feof($this->_fp))
            $response .= fread($this->_fp, 1024);
        fclose($this->_fp);
       
        // split header and body
        $pos = strpos($response, $crlf . $crlf);
        if($pos === false)
            return($response);
        $header = substr($response, 0, $pos);
        $body = substr($response, $pos + 2 * strlen($crlf));
       
        // parse headers
        $headers = array();
        $lines = explode($crlf, $header);
        foreach($lines as $line)
            if(($pos = strpos($line, ':')) !== false)
                $headers[strtolower(trim(substr($line, 0, $pos)))] = trim(substr($line, $pos+1));
       
        // redirection?
        if(isset($headers['location']))
        {
            $http = new HTTPRequest($headers['location']);
            return($http->DownloadToString());
        }
        else
        {
            return($body);
        }
    }
}

We also need to give the links anchors, so we'll modify Jonathan Sampson's clever little function to not shorten URLs since twitter already does this for us these days:

function auto_link_text($text)
{
   $pattern  = '#\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))#';
   $callback = create_function('$matches', '
       $url       = array_shift($matches);
       $url_parts = parse_url($url);

       $text = parse_url($url, PHP_URL_HOST) . parse_url($url, PHP_URL_PATH);
       $text = preg_replace("/^www\./", "", $text);

       $last = -(strlen(strrchr($text, "/"))) + 1;

       return sprintf(\'<a rel="nofollow" href="%s">%s</a>\', $url, $text);
   ');

   return preg_replace_callback($pattern, $callback, $text);
}

Here we go:

$username = "username";
$count = 5;
$feed = "http://twitter.com/statuses/user_timeline/$username.xml?count=$count";

$r = new HTTPRequest($feed);
$tweets = $r->DownloadToString();
$twitter_xml = simplexml_load_string($tweets);

$tweet_data = '';
foreach($twitter_xml->status as $status_object)
{
	$tweet_epoch = strtotime($status_object->created_at);
	$tweet_data .= "<div class=\"tweet\"><span class=\"tweet_date\">".date("F j g:ia", $tweet_epoch)."</span><div class=\"tweet_message\">".auto_link_text($status_object->text)."</div></div>";
}

print($tweet_data);

Find and Delete the Largest Files on a File System

karma

Using Find on the CLI

The fastest way to find large or run-away files on a whole filesystem or specific directory is to run:
find /path -xdev -type f -follow | xargs ls -lsh | sort -rhk 6,6 | head -20

Where /path is the target and 20 is the number of results you would like to see (sparing yourself a flooded terminal buffer). The output looks something like:

16M -rwxr-xr-x 1 user group 16M Jan 6 06:02 ./static/files/windows/bootdist.zip
2.0M -rw-r--r-- 1 user group 2.0M Jun 23 2022 ./static/files/2022/06/23/sonic-codebg.png
2.0M -rw-r--r-- 1 user group 2.0M Jan 21 2022 ./static/files/2022/01/21/laptopbags.jpg
1.6M -rw-r--r-- 1 user group 1.6M Jan 12 2020 ./static/files/2020/01/12/remainindoors.gif
1.4M -rw-r--r-- 1 user group 1.4M Jan 19 2022 ./static/files/2022/01/19/wunderland.mp4
1.4M -rw-r--r-- 1 user group 1.4M Dec 15 2021 ./static/files/2021/12/15/tradesecrets.png
988K -rw-r--r-- 1 user group 985K Jul 19 2022 ./static/files/2022/07/19/me_bitlockerreset.png
904K -rw-r--r-- 1 user group 904K Jun 13 2023 ./static/files/2023/06/13/img_20230612_114518.jpg
660K -rw-r--r-- 1 user group 657K Jun 20 2019 ./static/files/2019/06/20/foxpaws_2019.png
…

NOTE: Using find with the -xdev argument ignores other filesystems mounted under the given path, i.e: letting you search from / (root) but avoiding the loops and pitfalls of /proc, /sys, /dev etc. If, for example, your /home folder is on a different partition and you use the -xdev option, it won't be searched. You will need to execute the search again on that specific path.
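
For example, assuming /home is mounted on its own partition, a second pass would look like:

find /home -xdev -type f -follow | xargs ls -lsh | sort -rhk 6,6 | head -20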

Using du from the CLI

du recursively summarizes disk usage of the given path. To list the 20 largest files under a given tree run:

du -ah /path | sort -nr | head -n 20
1016K /nsm/repo/transmission-gtk-3.00-14.el9.x86_64.rpm
1016K /nsm/repo/lvm2-libs-2.03.21-3.el9.x86_64.rpm
1012K /nsm/repo/lvm2-libs-2.03.17-7.el9.x86_64.rpm
1012K /nsm/mysql/playbook/issues.ibd
1008K /nsm/repo/cockpit-ws-300.3-1.0.1.el9_3.x86_64.rpm
1008K /nsm/repo/cockpit-ws-300.1-1.0.1.el9_3.x86_64.rpm
1008K /nsm/repo/bluez-5.64-2.el9.x86_64.rpm
1008K /nsm/elasticsearch/indices/mRlZOMbzTuydO8jY_0qayQ/0/index/_kv.cfs
1000K /nsm/repo/python3-pillow-9.1.1-4.el9.x86_64.rpm
1000K /nsm/repo/ibus-typing-booster-2.11.0-5.el9.noarch.rpm
1000K /nsm/repo/gnome-keyring-40.0-3.el9.x86_64.rpm
1000K /nsm/repo/exiv2-0.27.5-2.el9.x86_64.rpm
992K /nsm/repo/stix-fonts-2.0.2-11.el9.noarch.rpm
988K /nsm/repo/urw-base35-p052-fonts-20200910-6.el9.noarch.rpm
988K /nsm/repo/btrfs-progs-5.15.1-0.el9.x86_64.rpm
988K /nsm/repo/annobin-12.12-1.el9.x86_64.rpm
980K /nsm/repo/glibc-langpack-en-2.34-60.0.2.el9.x86_64.rpm
972K /nsm/elasticsearch/indices/1Jf3pLWrTvOQ5E_Nx1VDcw/0/index/_29s.cfs
968K /nsm/repo/xorg-x11-server-Xwayland-22.1.9-2.el9.x86_64.rpm
968K /nsm/docker-registry/docker/registry/v2/blobs/sha256/7d

To list the 20 largest directories by the total size of their contents (useful when seeking out profuse collections of small files):

du -aBm /nsm | sort -nr | head -n 20
56692M /nsm
34486M /nsm/elasticsearch
34483M /nsm/elasticsearch/indices
32647M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w
21549M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w/1/index
21549M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w/1
11098M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w/0/index
11098M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w/0
8310M /nsm/repo
6280M /nsm/docker-registry/docker
6280M /nsm/docker-registry
6256M /nsm/docker-registry/docker/registry/v2
6256M /nsm/docker-registry/docker/registry
6254M /nsm/docker-registry/docker/registry/v2/blobs/sha256
6254M /nsm/docker-registry/docker/registry/v2/blobs
5138M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w/1/index/_oqm.fdt
3660M /nsm/backup
2798M /nsm/elastic-fleet/artifacts
2798M /nsm/elastic-fleet
2748M /nsm/elasticsearch/indices/aTbHOx-WS6e5D1VrmooK9w/1/index/_oqm_Lucene90_0.dvd

Using Graphical Utilities in the GUI

[attachment-lDtttU]
Baobab (Gnome Disk Analyzer)

There is a much cooler, though decidedly less efficient, way: graphical file explorers that represent disk usage in a more immediate and visually intuitive fashion. Directories are inset within their parents and sized in proportion to the space they occupy. There are a number of options:

  • FSView or File Size View on KDE Plasma's Konqueror. You may need to install the konqueror-plugins or konqueror-plugin-fsview package, depending on your flavour, if it is not already available.
  • The same functionality is provided by standalone wrappers with a smaller dependency footprint: KDirStat and qfsview.
  • Baobab, also styled as Gnome Disk Analyzer on some distributions.
  • Filelight
  • qdirstat

opensource.com > 3 open source GUI disk usage analyzers for Linux takes a deep dive into the latter three options, while the preceding are covered (in French) at Coagul.org > Outils pour analyser l’espace disque et visualiser l'occupation d'un disque.

[attachment-WG6U6L]
KDE Plasma's Konqueror > FSView

FSView does not work over kio abstractions (ssh/sftp/fish/ftp etc.) but does work (though it tends to perform excruciatingly slowly) over NFS.

[attachment-a5wWkL]
KDirStat (original source: Outils… by Coagul.org)

Open File Locks

Your woes may not be over. Even after files are deleted they can stick around until every process that is using them has terminated and/or released its handle on the file's inode. Continue reading the follow-up, Find the Largest Open Files and Their Owner(s) on Linux with lsof, if you are experiencing problems with ghost files and/or goblins.
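
In the meantime, a quick way to spot such ghosts (assuming lsof is available) is to list open files whose on-disk link count has dropped below one - that is, files that have been deleted but are still held open by some process:

# lsof +L1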