Tag Archives: linux

CentOS 8 + AqBanking + HBCI + Commerzbank

As of 01/2021, this is a working method:

dnf install epel-release
dnf install dnf-plugins-core
dnf config-manager --set-enabled PowerTools
dnf install aqbanking
(This will install AqBanking 6.1.4, which is recent enough)

gct-tool create -t ohbci -n cb.medium.2021

aqhbci-tool4 adduser -t ohbci -n cb.medium.2021 --context=1 -b 50040000 -u TEILNEHMERNUMMER -c TEILNEHMERNUMMER -s hbci.commerzbank.de -N SomeIdentifierDoesntMatter --rdhtype=10 --cryptmoderah --hbciversion=300

aqhbci-tool4 getkeys -u 1
aqhbci-tool4 createkeys -u 1
aqhbci-tool4 sendkeys -A -u 1
aqhbci-tool4 iniletter -u 1

At this point, send the INI letter to your Commerzbank guy for activation. After activation, check that everything works as expected:

aqhbci-tool4 getsysid -u 1
aqhbci-tool4 getaccounts -u 1
aqbanking-cli listaccs -b 50040000
aqbanking-cli request --transactions -b 50040000 -c test.txt
aqbanking-cli export –exporter=csv -c test.txt -o transactions.csv
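
If you want to fetch transactions regularly, the last two commands can be wrapped into a small cron script. A sketch (file names are made up; for non-interactive runs you'll probably also want a PIN file, see aqbanking-cli --help):

#!/bin/bash
# Hypothetical daily export: fetch new transactions into a context file
# and convert them to CSV. Paths are examples only.
CTX=/root/cb-`date +%F`.ctx
CSV=/root/transactions-`date +%F`.csv
aqbanking-cli request --transactions -b 50040000 -c "$CTX"
aqbanking-cli export --exporter=csv -c "$CTX" -o "$CSV"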

galera: Won’t start if there’s only one node

Stupid Galera fucks up every now and then. If there’s only one node left and it doesn’t start with:

[ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)

Then you need to “bootstrap” it:

In grastate.dat (in the MySQL datadir, typically /var/lib/mysql), this line:

safe_to_bootstrap: 0

must be 1, not 0!

And then just do:

service mysql start --wsrep-new-cluster
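
The whole dance in one block, roughly (the datadir path is the usual default; if more than one node is down, bootstrap the one with the highest seqno):

# Check the recovered position first:
grep seqno /var/lib/mysql/grastate.dat
# Mark this node as safe to bootstrap, then start it as a new cluster:
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
service mysql start --wsrep-new-cluster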

Unstable crap…

CentOS 6: Switching from mod_php to fastcgi (Link only)

This short tutorial for Apache 2.2 worked:

https://www.kutukupret.com/2016/06/29/centos-6-httpd-2-2-and-php-fpm/

Actually, the following should have worked too; there's no mention of the need to install fastcgi at all. I assumed php-fpm by itself would be enough, but that doesn't seem to be the case. Maybe an Apache 2.4 thing?

https://developers.redhat.com/blog/2017/10/25/php-configuration-tips/
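
For reference: on Apache 2.4 (with mod_proxy_fcgi, which ships with the standard httpd package) handing PHP off to php-fpm boils down to something like this, assuming php-fpm listens on its default 127.0.0.1:9000:

<FilesMatch \.php$>
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>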

Finding the 100% CPU culprit in multi-threaded applications

Wow, after many years with Linux I just stumbled upon some ultra-useful functionality, and now I feel kind of stupid because I didn’t know about it all that time, lol. :)

I have a multi-threaded application – namely Asterisk, a software PBX – that was always at 250% CPU for many months without a visible reason. Googling for Asterisk + high CPU brought me to this site: https://moythreads.com/wordpress/2009/05/06/why-does-asterisk-consume-100-cpu

Basically everything is already explained there, but I’ll give another example. Here’s sample output from top:

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
16780 asterisk -11   0 4681m 115m  10m S 207.5  3.0 649:34.98 asterisk

So, Asterisk is going berserk at 207% CPU. Let’s install pstack (it ships with the gdb package on CentOS):

yum install gdb

Let’s look at the threads of Asterisk, PID 16780, using the magic -LlFm parameters to ps, which show all threads of that process:

ps -LlFm 16780

The output will be something like the following. Look for the C column, which shows CPU usage:

F S UID        PID  PPID   LWP  C NLWP PRI  NI ADDR SZ WCHAN    RSS PSR STIME TTY        TIME CMD
[...]
1 S asterisk     -     - 32498  0    -  49   - -     - futex_     -   7 04:15 -        00:00:00 -
1 S asterisk     -     - 32499  0    -  49   - -     - poll_s     -   1 04:15 -        00:00:00 -
1 S asterisk     -     -   418  0    -  49   - -     - inotif     -   4 04:33 -        00:00:00 -
1 R asterisk     -     -  3967 99    -  49   - -     - -          -   0 05:59 -        12:14:44 -
1 R asterisk     -     -  4367 99    -  49   - -     - -          -   1 06:05 -        12:08:27 -
1 S asterisk     -     - 22668  0    -  49   - -     - poll_s     -   4 16:40 -        00:00:19 -
1 S asterisk     -     - 23627  0    -  49   - -     - poll_s     -   7 17:20 -        00:00:12 -
1 S asterisk     -     - 23641  0    -  49   - -     - poll_s     -   2 17:20 -        00:00:11 -
[...]

Notice those two entries with 99 CPU: LWP 3967 and 4367! Let’s look at these little f*ckers in more detail:

pstack 16780 > /tmp/asterisk.stack.txt

Let’s look into /tmp/asterisk.stack.txt and search for our two LWPs 3967 and 4367…

Thread 35 (Thread 0x7ff211a9b700 (LWP 3967)):
#0  0x00007ff248cb96ec in recv () from /lib64/libc.so.6
#1  0x00007ff23c07b8b1 in ooSocketRecv () from /usr/lib64/asterisk/modules/chan_ooh323.so
#2  0x00007ff23c06461f in ooH2250Receive () from /usr/lib64/asterisk/modules/chan_ooh323.so
#3  0x00007ff23c064fba in ooProcessCallFDSETsAndTimers () from /usr/lib64/asterisk/modules/chan_ooh323.so
#4  0x00007ff23c06518e in ooMonitorCallChannels () from /usr/lib64/asterisk/modules/chan_ooh323.so
#5  0x00007ff23c14ed95 in ooh323c_call_thread () from /usr/lib64/asterisk/modules/chan_ooh323.so
#6  0x000000000057a1a8 in dummy_start ()
#7  0x00007ff2476eaaa1 in start_thread () from /lib64/libpthread.so.0
#8  0x00007ff248cb893d in clone () from /lib64/libc.so.6
Thread 34 (Thread 0x7ff21195e700 (LWP 4367)):
#0  0x00007ff248cb96ec in recv () from /lib64/libc.so.6
#1  0x00007ff23c07b8b1 in ooSocketRecv () from /usr/lib64/asterisk/modules/chan_ooh323.so
#2  0x00007ff23c06461f in ooH2250Receive () from /usr/lib64/asterisk/modules/chan_ooh323.so
#3  0x00007ff23c064fba in ooProcessCallFDSETsAndTimers () from /usr/lib64/asterisk/modules/chan_ooh323.so
#4  0x00007ff23c06518e in ooMonitorCallChannels () from /usr/lib64/asterisk/modules/chan_ooh323.so
#5  0x00007ff23c14ed95 in ooh323c_call_thread () from /usr/lib64/asterisk/modules/chan_ooh323.so
#6  0x000000000057a1a8 in dummy_start ()
#7  0x00007ff2476eaaa1 in start_thread () from /lib64/libpthread.so.0
#8  0x00007ff248cb893d in clone () from /lib64/libc.so.6

Aha! Something about ooh323, the module responsible for H.323, which we really don’t need in 2019. Let’s deactivate the culprit by putting this into /etc/asterisk/modules.conf:

noload => chan_ooh323.so
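
For context, that line goes into the [modules] section; a minimal modules.conf would look like this (autoload=yes is the default and loads everything else):

[modules]
autoload=yes
noload => chan_ooh323.so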

Restart Asterisk and voilà:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25893 asterisk -11   0 4601m  64m  10m S 14.3  1.7   3:16.92 asterisk

14% CPU is about what Asterisk really consumes given the load. Problem solved. pstack and ps to the rescue!
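
Bonus tip: instead of eyeballing the C column, you can have ps sort the threads by CPU right away:

ps -L -o lwp,pcpu,comm -p 16780 --sort=-pcpu | head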

galera: State not recoverable

Once a year or so Galera on one of my 3 nodes breaks down and in the log you can find something like:

Failed to open channel ... State not recoverable

In my case, I end up with an empty gvwstate.dat file, and that’s the problem. I delete the file and restart MySQL; Galera then syncs with the other nodes and everything is fine again.
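
As a recipe (assuming the default datadir /var/lib/mysql):

service mysql stop
# gvwstate.dat caches the last primary component view; if it's empty, toss it:
rm /var/lib/mysql/gvwstate.dat
service mysql start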

Link to another article: https://github.com/codership/galera/issues/354

No MySQL root access when installing Galera from binaries on CentOS 7

EDIT: I AM the one who is stupid! Arrrrgh. Just installed Galera on yet another box and saw this passing by in yum:

A RANDOM PASSWORD HAS BEEN SET FOR THE MySQL root USER !
You will find that password in '/root/.mysql_secret'.

Arrrrghhhh. Apologies to the RPM packagers. EDIT END.

Stupid RPM packagers screwed up. If you follow this how-to (http://galeracluster.com/documentation-webpages/gettingstarted.html), after a while you end up at a point where it says: “In the database client, run the following query: SHOW STATUS LIKE 'wsrep_cluster_size';”. So you try mysql -p, only to find that you don’t have access. WTF. There’s apparently already a password set by the RPM packagers, but we don’t know it.

So we try our usual --skip-grant-tables thing and then SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass'); but this only results in: “ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement”. WTF again.

When you’re used to resetting MySQL root passwords, you usually run the password change first (SET or ALTER or UPDATE or whatever) and then FLUSH PRIVILEGES. The trick here is to do the opposite. First enter FLUSH PRIVILEGES, then change the password:

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');
Query OK, 0 rows affected (0.00 sec)
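
For reference, the usual way to get into the client without the password in the first place looks roughly like this (service and script names vary by packaging):

service mysql stop
# Start the server without the grant tables, then connect passwordless:
mysqld_safe --skip-grant-tables &
mysql -u root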

Found this here after hours of troubleshooting: http://galeracluster.com/community/?place=msg%2Fcodership-team%2Fw8NEekKipwY%2FgGlkSQNOedMJ
F*CKERS!

OpenNebula & CentOS: OneFlow doesn’t start

If OneFlow doesn’t start and you find this in your /var/log/one/oneflow.error:

/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- treetop (LoadError)
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from /usr/lib/one/oneflow/lib/models/role.rb:17
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from /usr/lib/one/oneflow/lib/models.rb:26
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from /usr/lib/one/oneflow/oneflow-server.rb:49

Then do:

gem install treetop polyglot parse-cron

Although treetop is in EPEL (as rubygem-treetop.noarch), you get this when you try to install it:

Error: Package: rubygem-treetop-1.4.10-1.el6.noarch (epel)
Requires: rubygem(polyglot)

It depends on polyglot, but polyglot is not in EPEL (how stupid is that?) and nowhere else to be found. Some other 3rd-party repos claim to have it, but adding another repo just for a single Ruby gem? No thanks.

And you also need parse-cron, sigh.
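
A quick sanity check that the gem actually loads now (Ruby 1.8, hence the explicit rubygems):

ruby -rrubygems -e "require 'treetop'; puts 'treetop OK'"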

CentOS: Fix broken yum repo metadata

Happened because I added EPEL, I believe. Very strange: on another, identically set-up machine (same steps, 1:1) it worked fine, but on this box this suddenly appeared when trying to yum install something:

[...]
--> Processing Dependency: libnuma.so.1()(64bit) for package: libvirt-0.10.2-29.el6_5.11.x86_64
--> Processing Dependency: libnl.so.1()(64bit) for package: libvirt-0.10.2-29.el6_5.11.x86_64
--> Processing Dependency: libnetcf.so.1()(64bit) for package: libvirt-0.10.2-29.el6_5.11.x86_64
--> Processing Dependency: libgnutls.so.26()(64bit) for package: libvirt-0.10.2-29.el6_5.11.x86_64
--> Processing Dependency: libavahi-common.so.3()(64bit) for package: libvirt-0.10.2-29.el6_5.11.x86_64
--> Processing Dependency: libavahi-client.so.3()(64bit) for package: libvirt-0.10.2-29.el6_5.11.x86_64
---> Package qemu-kvm.x86_64 2:0.12.1.2-2.415.el6_5.10 will be installed
http://mirror2.hs-esslingen.de/centos/6.5/updates/x86_64/repodata/607e7e1f0586f3b6c3478b8b07debbb174be378c0b45f30836e74aaaf3919b5e-filelists.sqlite.bz2: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
[...]
Error: failure: repodata/607e7e1f0586f3b6c3478b8b07debbb174be378c0b45f30836e74aaaf3919b5e-filelists.sqlite.bz2 from updates: [Errno 256] No more mirrors to try.
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Fixed by googling and running this:

yum clean metadata
yum clean dbcache
yum update
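
If that’s not enough, the bigger hammer is to throw away the whole cache and rebuild it:

yum clean all
rm -rf /var/cache/yum
yum makecache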

Backup storage: CentOS & QNAP & iSCSI

I’m using a CentOS 6 box as a backup server running BackupPC. Until a couple of days ago I had a Thecus N7700PRO NAS with 4x 3 TB disks configured as RAID5, which was accessed via iSCSI from the CentOS box. Then two hard drives died at the same time (or not; at least that’s what the Thecus reported at that point). Thecus support said the system couldn’t see drive 1 (one of the apparently failed ones) anymore, although the Thecus still showed it, and suggested I install a new hard drive in place of drive 1 and try to duplicate all data from drive 2 to the new drive 1 using dd. When I rebooted the Thecus, hoping it might just magically work again, the whole RAID5 was gone. Like it never existed.

I said screw it and bought a QNAP 19″ 1U TS-412U off of eBay (new) for about 580 EUR, along with four 4 TB enterprise drives from different vendors, picked according to QNAP’s hard drive compatibility list. Here are the required steps to get the backup server back in business (the CentOS-side commands are recapped in one block after the steps):

1. Insert the hard drives, power on, do the initial setup. The QNAP will get an IP via DHCP instead of the 169.254.100.100 that’s mentioned in the quick start guide.

2. Download the latest firmware from here and upload it when prompted to.

3. Set up RAID5 or whatever you prefer. You can only choose ext3 or ext4, but don’t get confused by that: it’s just the lower layer that QNAP uses internally; on top of it we will later build our own XFS filesystem and use LVM.

4. Configure the iSCSI target & LUN according to this QNAP link (sorry, German only, but the pictures should be sufficient to figure it out); I chose Instant Allocation. You may have to wait for the RAID5 to get built before you can choose the LUN location. Also, after you create the target it may take a while before it becomes available (you can check the progress under “iSCSI Target List” > “Alias” > “id:0 …” > “Status”).

5. The QNAP is directly connected to eth1 on the CentOS box without a switch. eth1 has IP 192.168.1.1, the QNAP has 192.168.1.100.

6. On the CentOS box, delete the old Thecus from the iSCSI initiator database:
iscsiadm -m node -o delete

7. Make sure node startup is set to automatic in /etc/iscsi/iscsid.conf:
node.startup = automatic

8. Discover the new QNAP:
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260

9. Make sure it’s there:
iscsiadm -m node
This should output something like:
192.168.1.100:3260,1 iqn.2004-04.com.qnap:ts-412u:iscsi.raid5.c8af3e

10. If you want, reboot the box and confirm that it’s still there when you execute iscsiadm -m node after the reboot.

11. In dmesg something like this should have popped up now:
scsi5 : iSCSI Initiator over TCP/IP
scsi 5:0:0:0: Direct-Access QNAP iSCSI Storage 3.1 PQ: 0 ANSI: 5
sd 5:0:0:0: Attached scsi generic sg1 type 0
sd 5:0:0:0: [sdb] 23017373696 512-byte logical blocks: (11.7 TB/10.7 TiB)
sd 5:0:0:0: [sdb] Write Protect is off
sd 5:0:0:0: [sdb] Mode Sense: 2f 00 00 00
sd 5:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sdb: unknown partition table
sd 5:0:0:0: [sdb] Attached SCSI disk

12. I like LVM, it’s not necessary, but maybe it will be of use later on. Check with pvdisplay that /dev/sdb is there (maybe reboot or run pvscan if it’s not):
--- Physical volume ---
PV Name /dev/sdb

13. I created PV, VG and LV and assigned 100% of the available space to the LV:
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate --name rz-nas01 -l100%FREE data

14. Now, you should have /dev/mapper/data-rz--nas01 or /dev/data/rz-nas01 which are just links to a /dev/dm-x device. If you don’t, you can try restarting /etc/init.d/lvm2-monitor or just reboot. Run lvdisplay to check the LV is there and “LV Status” is “available”.

15. Create a filesystem on the LV, I chose XFS:
mkfs.xfs /dev/mapper/data-rz--nas01
This could take a few moments.

16. If you want the storage to get mounted automatically on boot, use something like this in /etc/fstab:
/dev/mapper/data-rz--nas01 /var/lib/BackupPC xfs defaults,_netdev 0 0

17. Mount it with mount /dev/mapper/data-rz--nas01 /mnt if you want to test first, otherwise you can just do mount -a.

18. Done!
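
For reference, here’s the whole CentOS-side command sequence from the steps above in one block (your LUN may show up as something other than /dev/sdb, check dmesg):

# Drop the old target(s), then discover and verify the new one:
iscsiadm -m node -o delete
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
iscsiadm -m node
# LVM on top of the LUN, then XFS, then mount:
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate --name rz-nas01 -l100%FREE data
mkfs.xfs /dev/mapper/data-rz--nas01
mount /dev/mapper/data-rz--nas01 /var/lib/BackupPC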

CentOS & yum-cron: Automatic reboot after kernel update

If you’re using yum-cron and want to automatically reboot your CentOS box whenever the kernel gets updated, you can add this code to /etc/cron.daily/0yum.cron, right before the exit 0 at the end:

# Index (0-based) of the default boot entry in grub.conf:
entry=`cat /boot/grub/grub.conf | grep '^default' | cut -d '=' -f2`
# Make it 1-based for tail:
entry=`expr $entry + 1`
# Pull the kernel version out of the matching title line, e.g.
# "title CentOS (2.6.32-754.el6.x86_64)" -> "2.6.32-754.el6.x86_64",
# and reboot if it differs from the running kernel:
if [ "`cat /boot/grub/grub.conf | grep '^title' | tail -n +$entry | head -1 | sed -e 's/.*(\(.*\)).*/\1/'`" != "`uname -r`" ]; then
  sleep 10 ; reboot
fi

This was taken from here, with a tiny correction.
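
If you’re on CentOS 7 with GRUB2, the same idea works with grubby instead of parsing grub.conf; a sketch (untested here):

# grubby prints the default kernel image, e.g. /boot/vmlinuz-3.10.0-957.el7.x86_64
if [ "`grubby --default-kernel`" != "/boot/vmlinuz-`uname -r`" ]; then
  sleep 10 ; reboot
fi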

Depending on your version of CentOS you may want to adjust /etc/anacrontab or /etc/crontab to set the times when cron.daily (and therefore yum-cron) runs, to avoid reboots due to a kernel update in the middle of the day.