Fix Zimbra stats graph/cron jobs

If your Zimbra scheduled tasks are not running, or if you don’t see the stats graphs in the admin panel, the first thing you should check is Zimbra’s cron jobs. When we reinstall or move a Zimbra installation, we often miss the cron job setup Zimbra requires.

To fix this, go to the Zimbra crontabs directory at /opt/zimbra/zimbramon/crontabs.
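For reference, on a typical install the directory holds the per-service crontab fragments used below; a quick listing might look something like this (file names can vary slightly between Zimbra versions):

# cd /opt/zimbra/zimbramon/crontabs
# ls
crontab  crontab.ldap  crontab.logger  crontab.mta  crontab.store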

Now, let’s put all the cron jobs into a single file (just to make the job easier) as follows:

# cat crontab >> crontab.zimbra
# cat crontab.ldap >> crontab.zimbra
# cat crontab.logger >> crontab.zimbra
# cat crontab.mta >> crontab.zimbra
# cat crontab.store >> crontab.zimbra

Finally, load the crontab.zimbra file into the crontab as follows (Zimbra’s scheduled jobs normally live in the zimbra user’s crontab, so run this as the zimbra user, or add -u zimbra when running it as root):

# crontab crontab.zimbra

Voila, that’s it. Wait a few minutes to start seeing the graphs. You can also verify the cron jobs using ‘crontab -l’.
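As an aside, the concatenate-and-load steps can also be done in a single line; a quick sketch, assuming you are in /opt/zimbra/zimbramon/crontabs and running as the same user as above:

cat crontab crontab.ldap crontab.logger crontab.mta crontab.store | crontab -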

Fix: OpenVZ, iptables and CSF errors

CSF has been my first choice for years for securing servers with an easy-to-use iptables manager. Beyond that, it works as “A Stateful Packet Inspection (SPI) firewall, Login/Intrusion Detection and Security application for Linux servers”.

While using csf on OpenVZ VPS systems we often run into issues with iptables modules. If you’re a sysadmin managing hardware nodes the cause may not be obvious, even though the fix is usually quite simple. I’m pasting a typical instance of the issue here for reference so we can recheck what we normally overlook.

Error received during csf test:

:~# /etc/csf/csftest.pl
Testing ip_tables/iptable_filter…OK
Testing ipt_LOG…OK
Testing ipt_multiport/xt_multiport…OK
Testing ipt_REJECT…OK
Testing ipt_state/xt_state…OK
Testing ipt_limit/xt_limit…OK
Testing ipt_recent…FAILED [Error: iptables: No chain/target/match by that name.] - Required for PORTFLOOD and PORTKNOCKING features
Testing xt_connlimit…FAILED [Error: iptables: No chain/target/match by that name.] - Required for CONNLIMIT feature
Testing ipt_owner/xt_owner…FAILED [Error: iptables: No chain/target/match by that name.] - Required for SMTP_BLOCK and UID/GID blocking features
Testing iptable_nat/ipt_REDIRECT…OK
Testing iptable_nat/ipt_DNAT…OK

RESULT: csf will function on this server but some features will not work due to some missing iptable modules

Now, the quick remedy for this issue is to enable all the required iptables modules in /etc/vz/vz.conf on the OpenVZ hardware node as follows:

IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ipt_state iptable_nat ip_nat_ftp ipt_recent ipt_owner ipt_conntrack ipt_helper ipt_REDIRECT"

and restart the vz service with "service vz restart" to activate the modules for all VPS systems.
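For reference, a minimal sketch of that change on the hardware node (the backup step is just a precaution; the paths are the standard OpenVZ locations used above):

cp /etc/vz/vz.conf /etc/vz/vz.conf.bak   # keep a backup of the global config
vi /etc/vz/vz.conf                       # set the IPTABLES="..." line shown above
service vz restart                       # note: this restarts every container on the node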

The other option we have is to enable these modules for a specific VPS/VM as follows (here 100 is the VPS ID):

vzctl set 100 --iptables ipt_REJECT --iptables ipt_tos --iptables ipt_TOS --iptables ipt_LOG --iptables ip_conntrack --iptables ipt_limit --iptables ipt_multiport --iptables iptable_filter --iptables iptable_mangle --iptables ipt_TCPMSS --iptables ipt_tcpmss --iptables ipt_ttl --iptables ipt_length --iptables ipt_state --iptables iptable_nat --iptables ip_nat_ftp --iptables ipt_owner --iptables ipt_recent --save

If you run the above command with the --setmode restart option added, the container is restarted and the modules are applied immediately.

Or you can add the IPTABLES line mentioned earlier (the vz.conf entry) to the VPS configuration file (/etc/vz/conf/100.conf for the example VPS above) and restart the VPS with the following command to make it effective:

vzctl restart 100
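After the restart, you can confirm the modules are recorded in the container’s configuration (path from above):

grep -i iptables /etc/vz/conf/100.conf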

This is the simple and well-known answer. But I have found it doesn’t always go so smoothly, and that is usually because of one simple reason:

We keep removing old kernel packages and installing new ones to keep the system secure with the latest kernel and packages. During this process we can end up losing the iptables modules needed by the running kernel. I figured this out once again while working on my hardware node simply by running a find under /lib:

find /lib -name '*ipt*'

I was expecting the iptables modules to be listed under my current kernel, but to my surprise I didn’t find them there. I found them only under one of the very old kernels that had been installed on the box. Crazy. So I decided to jump in and quickly reinstall the iptables packages on the machine:

yum reinstall iptables-devel.i686 iptables-devel.x86_64 iptables-ipv6.x86_64 iptables.i686 iptables.x86_64

Here is the final output of my csf test after restarting my VM:

~# /etc/csf/csftest.pl
Testing ip_tables/iptable_filter…OK
Testing ipt_LOG…OK
Testing ipt_multiport/xt_multiport…OK
Testing ipt_REJECT…OK
Testing ipt_state/xt_state…OK
Testing ipt_limit/xt_limit…OK
Testing ipt_recent…OK
Testing xt_connlimit…OK
Testing ipt_owner/xt_owner…OK
Testing iptable_nat/ipt_REDIRECT…OK
Testing iptable_nat/ipt_DNAT…OK

RESULT: csf should function on this server

That’s it. Now you know why iptables may not work even though you haven’t changed anything recently on the hardware node (and have forgotten that the last reboot brought up a new kernel). Verify the iptables modules with lsmod and reinstall the packages if required.
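A quick sanity check on the hardware node might look like this (just a sketch; module names vary between ipt_* and xt_* depending on the kernel version):

lsmod | grep -E 'ipt_|xt_'                                         # modules currently loaded
find /lib/modules/$(uname -r) -name '*recent*' -o -name '*owner*'  # module files for the running kernel
find /lib /lib64 -name 'libipt_*' -o -name 'libxt_*' 2>/dev/null   # iptables userspace extensions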

RAID1 – Boot from second drive after disk failure

RAID1 gives you a redundant setup: after a disk failure you can bring the system back up from the mirrored drive.

Let us look at a disk failure on one of the Linux machines.

Run

cat /proc/mdstat

This will show the current RAID status as follows:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
4594496 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
144448 blocks [2/1] [_U]

unused devices: <none>

The output shows that the primary drive has gone bad (observe the [_U]: the underscore marks the missing first member of each mirror).

You can investigate this further using the mdadm command (-D is the short form of --detail):

# mdadm --detail /dev/md0

# mdadm -D /dev/md0

The output will confirm which drive has gone bad.

If your server is unstable, you might want to remove the bad drive and temporarily boot from the second drive in its place. For this you should ensure that GRUB is installed on the second drive as well, so that it boots without any trouble. It is best practice to install GRUB on both drives right after configuring RAID1. If that was not done, it is not an issue; it is not too late to do it before rebooting the machine for the disk removal. Even otherwise, GRUB can easily be installed from rescue mode.

To install GRUB while you’re on a working server:

With GRUB v1.x (legacy GRUB), start the grub shell as root to get the prompt
grub>
Check for existing GRUB setups using the find command
grub> find /grub/stage1
If GRUB is already installed, find will list the partition(s), for example
(hd0,0)
In any case, to put GRUB on the MBR of both drives, continue with the setup as follows:
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)

The above lines set up GRUB on the MBR of both drives. Depending on the drives currently present in the machine and the status of your RAID, you can follow the above instructions to recover GRUB while troubleshooting a RAID1 setup.

If you’re on GRUB v2.x, the command grub-install /dev/sdX should do all the work (X in /dev/sdX is the drive letter; for example, to install GRUB on the first drive, sda, replace X with a).

Once you have GRUB installed on the drive, you can remove the bad drive from the RAID array using mdadm commands.

In our case (from the initial mdstat output), we mark the bad drive’s partition as failed and remove it from the array as follows:
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1

Repeat these commands for the other arrays too, as shown below.
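For the layout in the mdstat output above, that means sda2 and sda3 for md1 and md2 respectively (adjust the device names to your own partitioning):

mdadm --manage /dev/md1 --fail /dev/sda2
mdadm --manage /dev/md1 --remove /dev/sda2
mdadm --manage /dev/md2 --fail /dev/sda3
mdadm --manage /dev/md2 --remove /dev/sda3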

Now you’re good to shut down the system and remove the drive. If you have a replacement drive, it’s better to add it before rebooting and then follow the usual steps to rebuild the RAID arrays.
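As a rough sketch of that rebuild, assuming the replacement disk comes up as /dev/sda, MBR partition tables and the same layout as before (adjust device names to your system):

sfdisk -d /dev/sdb | sfdisk /dev/sda      # copy the partition table from the healthy disk
mdadm --manage /dev/md0 --add /dev/sda1   # re-add each partition; md resyncs automatically
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3
cat /proc/mdstat                          # watch the resync progress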

OpenXenManager

I have been searching for the right alternative for managing my Xen VMs and hosts. It’s a pain to boot a Windows VM just to access XenCenter. You can download OpenXenManager from http://sourceforge.net/projects/openxenmanager/ or install it on your Ubuntu machine using the following command:

sudo apt-get install openxenmanager

PS: Running it from the source downloaded from the SF.net link didn’t work for me on Ubuntu 12.04, but it worked fine after installing via apt-get.

Once installed, it’s very similar to working with the Windows version of Citrix XenCenter.
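To start it from a terminal after the apt-get install (the launcher name here is an assumption based on the package name; list the package contents if it differs on your release):

dpkg -L openxenmanager | grep bin   # confirm where the launcher was installed
openxenmanager &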

 

Hope you find it useful too.

 

Dell OpenManage Express Install

Dell OpenManage Server Administrator (OMSA) makes it easy to check server health, storage management, hardware status and more on Dell servers from anywhere.

If you’re running RHEL5, CentOS, Scientific Linux, RHEL4+yum, or SLES+yum, you can easily install Dell OpenManage Server Administrator (OMSA) from the official Dell yum repository. Read this link for more information: Dell Hardware Repo latest.

Set up this repository:

wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash
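Before installing, you can optionally confirm that the bootstrap script dropped a repo definition in place (the exact file name may differ between repository versions):

ls /etc/yum.repos.d/ | grep -i dell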

Install OpenManage Server Administrator

yum install srvadmin-all

Once the installation is over, the OpenManage services can be started using the following command:

srvadmin-services.sh start

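To confirm the services came up, the same script can be run with the status argument (per its usual start/stop/status convention):

srvadmin-services.sh status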

Then access OpenManage via a browser at https://<server-ip>:1311

NOTE: OMSA will not install on unsupported systems. If you receive a message during installation that the system is not supported, the install will fail. This is most common on SC-class systems, as OMSA is completely unsupported on them.

Firewall Rule:

To open port 1311, you can add the following rule to /etc/sysconfig/iptables on Red Hat based systems:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 1311 -j ACCEPT

Save the file and restart iptables:

service iptables restart
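If the page still doesn’t load, confirm that the OMSA web server is actually listening on port 1311, for example:

netstat -tlnp | grep 1311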

You can also find direct links to DELL support on OMSA interface.

I recently used the same instructions to install OMSA on Citrix XenServer, which is based on CentOS.

Other Links:

DELL Linux Community-supported repository

Dell OpenManage Server Administrator Version 6.1 documentation