Posts Tagged ‘vm’
qemu-img convert -O raw diskimage.qcow diskimage.raw
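And should you ever need to go back the other way, the same tool obliges – the output format and filename here are just an example:

qemu-img convert -O qcow2 diskimage.raw diskimage.qcow2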
I noticed one of my new Xen dom0s was coughing up our friend, the ip_conntrack: table full, dropping packet message, today. If you like to get your money’s worth out of your dedis, the RAM available to dom0 is probably limited – meaning a correspondingly low default ip_conntrack_max. I’m sure you can see how this might be a problem, even more so if it’s lower than the ip_conntrack_max of your virtual machines.
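A quick way to see how close you are to the ceiling before deciding whether to raise it or kill the tracking altogether – assuming the old ip_conntrack module and RHEL/CentOS 5-era /proc paths here; newer kernels moved to nf_conntrack and renamed everything:

# live entries vs. the ceiling
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
cat /proc/sys/net/ipv4/ip_conntrack_max

# raise the ceiling until the next reboot (add it to /etc/sysctl.conf to persist)
sysctl -w net.ipv4.ip_conntrack_max=65536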
None of my previous CentOS dedis had NAT/conntrack modules loaded by default, and this dom0 had no need for NAT – being a fully bridged configuration routing only public IPs. My first guess was that this dedi’s redhatty initrd loaded the modules through the typical mash-everything-against-the-kernel-and-see-what-sticks approach, so I tried removing the NAT and connection tracking related modules:
# rmmod iptable_nat
ERROR: Module iptable_nat is in use
OK, let’s take a look at the tables:
[root@cl-t067-252cl ~]# iptables-save
# Generated by iptables-save v1.3.5 on Sat Jul 21 21:27:40 2012
*nat
:PREROUTING ACCEPT [931:50495]
:POSTROUTING ACCEPT [446:25128]
:OUTPUT ACCEPT [7:502]
-A POSTROUTING -s 192.168.122.0/255.255.255.0 -d ! 192.168.122.0/255.255.255.0 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/255.255.255.0 -d ! 192.168.122.0/255.255.255.0 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.168.122.0/255.255.255.0 -d ! 192.168.122.0/255.255.255.0 -j MASQUERADE
COMMIT
It seems I have a subnet I was not aware of…
virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
Who put that there? libvirt, apparently. According to that article not only is our problem ip_conntrack_max, but:
However, NAT slows down things and only recommended for desktop installations.
Seems highly logical to me. Their solution didn’t look very permanent, so I first deleted the symlink in the autostart directory for “default”:
# cd /etc/libvirt/qemu/networks/autostart/
# ls -lsah
total 16K
8.0K drwx------ 2 root root 4.0K Jul 21 21:17 .
8.0K drwx------ 3 root root 4.0K May 14 09:18 ..
   0 lrwxrwxrwx 1 root root   14 Jul 21 21:17 default.xml -> ../default.xml
# rm default.xml
# cd ..
# cp default.xml ~/
# /etc/init.d/libvirtd restart
That didn’t do anything at all. Still had virbr0, still had the iptables rules and still had the kernel modules.
So I rebooted. Apparently that was the wrong thing to do. All of my interfaces, bridges, etc. seemed to come back up (except virbr0) and the NAT/conntrack modules were missing, but not a single VM was routing.
On to their method:
# virsh net-destroy default
# virsh net-undefine default
# service libvirtd restart
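If you want to make sure the network is actually gone before celebrating, virsh will confirm it – “default” should no longer be listed:

# virsh net-list --all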
Everything looks great. You still have the NAT/conntrack modules loaded but we should be able to take those out one by one.
# lsmod | grep nat
iptable_nat            40517  0
ip_nat                 52973  2 ipt_MASQUERADE,iptable_nat
ip_conntrack           91749  4 ipt_MASQUERADE,iptable_nat,ip_nat,xt_state
nfnetlink              40457  2 ip_nat,ip_conntrack
ip_tables              55329  2 iptable_nat,iptable_filter
x_tables               50377  7 xt_physdev,ipt_MASQUERADE,iptable_nat,xt_state,ipt_REJECT,xt_tcpudp,ip_tables
Boned again. Now default.xml is missing (I’m assuming that’s what net-undefine does) – good thing we made a backup first!
# cd /etc/libvirt/qemu/networks/
# cp ~/default.xml ./
# ln -s ../default.xml autostart/
# reboot
OK. Screw it. We’ll do it the hard way.
#!/bin/bash
ifconfig virbr0 down
iptables -t nat -D POSTROUTING -s 192.168.122.0/255.255.255.0 -d ! 192.168.122.0/255.255.255.0 -p tcp -j MASQUERADE --to-ports 1024-65535
iptables -t nat -D POSTROUTING -s 192.168.122.0/255.255.255.0 -d ! 192.168.122.0/255.255.255.0 -p udp -j MASQUERADE --to-ports 1024-65535
iptables -t nat -D POSTROUTING -s 192.168.122.0/255.255.255.0 -d ! 192.168.122.0/255.255.255.0 -j MASQUERADE
iptables -D INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
iptables -D INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
iptables -D INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
iptables -D INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
iptables -D FORWARD -d 192.168.122.0/255.255.255.0 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -D FORWARD -s 192.168.122.0/255.255.255.0 -i virbr0 -j ACCEPT
iptables -D FORWARD -i virbr0 -o virbr0 -j ACCEPT
iptables -D FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
iptables -D FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
rmmod iptable_nat
rmmod ipt_MASQUERADE
rmmod ip_nat
rmmod xt_state
rmmod ip_conntrack
HOW DO YOU LIKE ME NOW?!
I recently mentioned that using APC with threaded Apache and PHP was incredibly unstable. For a few days I moved back to the prefork (process-based) Apache MPM to see if it was worth bringing APC back. This eventually brought the load average on the SQL VM to about 40 with what I would normally consider an appropriate MaxClients, with the odd spike to about 10 even when restricted to 128. Page load time was also noticeably increased, so I decided to give XCache a whirl (thanks to LiteStar for mentioning it in the comments of that article).
XCache is a stable and (so far, it seems) genuinely thread-safe opcode cache and datastore. It is developed by mOo, a lighttpd developer, and hosted under the lighttpd domain. Despite this, it is built as a PHP extension in the same manner as APC, meaning it can be used with mod_php, FastCGI, etc. Like APC, XCache features shared-memory variable storage functions with a simple name/value scheme in addition to the opcode cache. I was able to quickly write an SQL wrapper using these that reduced the database hits in one of my apps by a factor of about 10,000 – making virtually the whole thing run out of RAM.
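To give you an idea of the shape of it – this is not my actual wrapper, just a minimal sketch of the idea, with cached_query, $db (a mysqli handle, assuming mysqlnd’s fetch_all) and the TTL all made up for illustration – the variable-store API really is this simple:

<?php
// Minimal sketch: hash the query, check XCache's variable store,
// and only hit the database on a miss.
function cached_query(mysqli $db, $sql, $ttl = 300) {
    $key = 'sql:' . md5($sql);       // simple name/value scheme
    if (xcache_isset($key)) {
        return xcache_get($key);     // served straight out of shared memory
    }
    $rows = $db->query($sql)->fetch_all(MYSQLI_ASSOC);
    xcache_set($key, $rows, $ttl);   // cache the result set for $ttl seconds
    return $rows;
}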
Also like APC, XCache comes with bundled admin scripts which – while perhaps not as pretty as APC’s – I certainly find more insightful. Some folks find it faster; I find it works, so it looks like I’ll be sold for some time.
Cheers, Litey. And cheers, mOo.
UPDATE Or not… After a few weeks of faithful service XCache finally segfaulted on me:
apache2: segfault at 332e352e ip b5ba90f7 sp a4b4cb70 error 4 in xcache.so[b5b96000+1d000]
apache2: segfault at 4320526d ip b5ba4297 sp aab5ad60 error 4 in xcache.so[b5b96000+1d000]
apache2: segfault at 43205445 ip b5ba90f7 sp aab58b70 error 4 in xcache.so[b5b96000+1d000]
apache2: segfault at 2 ip b5ba90f7 sp a534db70 error 4 in xcache.so[b5b96000+1d000]
apache2: segfault at 362e3335 ip b5ba90f7 sp ac35bb70 error 4 in xcache.so[b5b96000+1d000]
I’m doing my best at this moment to reproduce the situation.
UPDATE It turns out this might actually be a bug in PHP. mOo was kind enough to respond in detail:
A PHP 5.3 bug No.62432 ReflectionMethod random corrupt memory on high
concurrent was fixed that cause PHP itself unstable, bogus reported as
XCache bug. It is false positive to reproduce with XCache
loaded/enabled only, just because XCache make PHP run faster than
with/without other opcode cacher. PHP up to 5.3.14 is affected. It is
highly suggested to upgrade to newer PHP
so, upgrade to 5.4 if applicable. upgarde to
http://xcache.lighttpd.net/wiki/Release-2.0.1 if you still have
problem with PHP 5.4 + XCache 1.3.2
Which I actually realized while reading this thread as he was composing his reply: https://groups.google.com/forum/?fromgroups#!topic/xcache/pZHjUu3Dq3k
PHP 5.3.14 is unstable, Please upgrade to new version. You have been warned I’ve tried hard to make it stable even in 2.0 big jump, yet anything that can go wrong goes wrong.
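If you’re wondering whether a given box falls in the affected range (anything up to 5.3.14, going by the above), a one-liner will tell you:

php -r 'echo version_compare(PHP_VERSION, "5.3.14", "<=") ? "affected\n" : "ok\n";'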
Just goes to show I need to do more reading before jumping the gun :p