VMware Workstation 11 cannot be installed under Ubuntu 15.04

A VMware Communities thread once again provides a quick fix for this issue.

Find the solution below:

 

Step 1: Log in as root (e.g. sudo -s).

Step 2: Enter your root password.

Step 3: Run these commands:

 

curl http://pastie.org/pastes/9934018/download -o /tmp/vmnet-3.19.patch

cd /usr/lib/vmware/modules/source

tar -xf vmnet.tar

patch -p0 -i /tmp/vmnet-3.19.patch

mv vmnet.tar vmnet.tar.SAVED

tar -cf vmnet.tar vmnet-only

rm -r vmnet-only

vmware-modconfig --console --install-all

 

VMware will now compile the vmnet module for kernel 3.19 (please make sure you have DKMS installed).
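Once the build completes, a quick sanity check helps confirm the module actually loaded; this is a minimal sketch, and the /etc/init.d/vmware service path is an assumption based on a standard Workstation install:

# check that the patched vmnet module compiled and is loaded
lsmod | grep vmnet

# if nothing shows up, restarting the VMware services usually loads it
/etc/init.d/vmware restart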

VMware Workstation 10.x patch for Linux kernel 3.13

VMware Workstation breaks if you try to use upcoming Linux kernel releases; at the same time, the VMware community moves fast to push quick patches for those who dare to run a cutting-edge beta OS on their machines.

WoodyZ on https://communities.vmware.com provides a patch which just works for Linux Kernel 3.13.

Here is the patch provided by WoodyZ for your quick reference.

Apply the patch to /usr/lib/vmware/modules/source/vmnet.tar (extract the archive, apply the patch with the patch command, pack the files back into vmnet.tar) and run VMware Workstation again; a step-by-step sketch follows the patch below.

 

--- vmnet-only/filter.c 2013-10-18 15:11:55.000000000 -0400
+++ vmnet-only/filter.c 2013-12-21 20:15:15.000000000 -0500
@@ -27,6 +27,7 @@
 #include "compat_module.h"
 #include <linux/mutex.h>
 #include <linux/netdevice.h>
+#include <linux/version.h>
 #if COMPAT_LINUX_VERSION_CHECK_LT(3, 2, 0)
 #   include <linux/module.h>
 #else
@@ -203,7 +204,11 @@
 #endif

 static unsigned int
+#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
 VNetFilterHookFn(unsigned int hooknum,                 // IN:
+#else
+VNetFilterHookFn(const struct nf_hook_ops *ops,        // IN:
+#endif
 #ifdef VMW_NFHOOK_USES_SKB
                  struct sk_buff *skb,                  // IN:
 #else
@@ -252,7 +257,12 @@

    /* When the host transmits, hooknum is VMW_NF_INET_POST_ROUTING. */
    /* When the host receives, hooknum is VMW_NF_INET_LOCAL_IN. */
-   transmit = (hooknum == VMW_NF_INET_POST_ROUTING);
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 13, 0)
+   transmit = (hooknum == VMW_NF_INET_POST_ROUTING);
+#else
+   transmit = (ops->hooknum == VMW_NF_INET_POST_ROUTING);
+#endif

    packetHeader = compat_skb_network_header(skb);
    ip = (struct iphdr*)packetHeader;
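For quick reference, here is a minimal sketch of that extract/patch/repack workflow, mirroring the 3.19 steps above (the file name /tmp/vmnet-3.13.patch is just an assumption for wherever you saved the patch):

# save the patch above as /tmp/vmnet-3.13.patch (hypothetical name), then:
cd /usr/lib/vmware/modules/source
cp vmnet.tar vmnet.tar.SAVED               # keep a backup of the original archive
tar -xf vmnet.tar                          # extract vmnet-only/
patch -p0 -i /tmp/vmnet-3.13.patch         # apply WoodyZ's patch
tar -cf vmnet.tar vmnet-only               # repack the patched sources
rm -r vmnet-only
vmware-modconfig --console --install-all   # rebuild the modules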

OpenVZ & Digital Ocean read/write test results

Here is a quick look at the read and write test results on a Digital Ocean server and an OpenVZ VPS:

Write Operation with Digital Ocean Server
~# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 7.93444 s, 271 MB/s

Read Operation with Digital Ocean Server
~# dd if=test of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 3.39856 s, 632 MB/s

Write Operation with OpenVZ Server
~# dd if=/dev/zero of=test bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 20.3058 s, 106 MB/s

Read Operation with OpenVZ Server
~# dd if=test of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 2.26675 s, 947 MB/s

The above results show that the Digital Ocean SSD makes disk writes faster, while reads are better on the OpenVZ VPS that I own. This difference might change depending on the overhead of the hardware node serving my VPS/cloud VPS. More to dig out in the coming days.
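To repeat the comparison more fairly, here is a minimal sketch of the same dd test with cache effects reduced; the conv=fdatasync flag and the explicit cache drop are additions of mine, not part of the runs above:

# write test: fdatasync flushes data to disk before dd reports a speed
dd if=/dev/zero of=test bs=1048576 count=2048 conv=fdatasync

# drop the page cache so the read test hits the disk, not RAM
# (inside an OpenVZ container this may only be possible on the hardware node)
sync && echo 3 > /proc/sys/vm/drop_caches

# read test, then clean up
dd if=test of=/dev/null bs=1048576
rm -f test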

Fix: openvz, iptables, csf and errors

CSF has been my first choice for years to secure servers with an easily usable iptables manager. More than that, it works as "A Stateful Packet Inspection (SPI) firewall, Login/Intrusion Detection and Security application for Linux servers".

While using csf on OpenVZ VPS systems, we end up facing lots of issues with the iptables modules. If you're a sysadmin managing hardware nodes, the fix may not seem easy even though the cause is usually quite simple. I'm pasting a typical issue here for reference, so we can recheck what we normally overlook.

Error received during csf test:

:~# /etc/csf/csftest.pl
Testing ip_tables/iptable_filter…OK
Testing ipt_LOG…OK
Testing ipt_multiport/xt_multiport…OK
Testing ipt_REJECT…OK
Testing ipt_state/xt_state…OK
Testing ipt_limit/xt_limit…OK
Testing ipt_recent…FAILED [Error: iptables: No chain/target/match by that name.] - Required for PORTFLOOD and PORTKNOCKING features
Testing xt_connlimit…FAILED [Error: iptables: No chain/target/match by that name.] - Required for CONNLIMIT feature
Testing ipt_owner/xt_owner…FAILED [Error: iptables: No chain/target/match by that name.] - Required for SMTP_BLOCK and UID/GID blocking features
Testing iptable_nat/ipt_REDIRECT…OK
Testing iptable_nat/ipt_DNAT…OK

RESULT: csf will function on this server but some features will not work due to some missing iptable modules

Now, the quick remedy for this issue is to enable all the iptables modules in /etc/vz/vz.conf on the OpenVZ hardware node as follows:

IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ipt_state iptable_nat ip_nat_ftp ipt_recent ipt_owner ipt_conntrack ipt_helper ipt_REDIRECT ipt_recent ipt_owner"

and restart all VMs using "service vz restart" to activate the modules for all VPS systems.

The other option is to enable these modules for a specific VPS/VM as follows (here 100 is the VPS ID):

vzctl set 100 --iptables ipt_REJECT --iptables ipt_tos --iptables ipt_TOS --iptables ipt_LOG --iptables ip_conntrack --iptables ipt_limit --iptables ipt_multiport --iptables iptable_filter --iptables iptable_mangle --iptables ipt_TCPMSS --iptables ipt_tcpmss --iptables ipt_ttl --iptables ipt_length --iptables ipt_state --iptables iptable_nat --iptables ip_nat_ftp --iptables ipt_owner --iptables ipt_recent --save

If you run the above command with the --setmode option, the modules will be applied.

Or you can add the IPTABLES line mentioned earlier (the vz.conf entry) to the VPS configuration file (/etc/vz/conf/100.conf for the example VPS above) and restart the VPS with the following command to make it effective.

vzctl restart 100
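After the restart, rerunning the csf test inside the container shows whether the modules are now visible (a quick check, assuming container ID 100 as in the example above):

# run csf's module test inside container 100 from the hardware node
vzctl exec 100 perl /etc/csf/csftest.pl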

This is a simple and well-known answer. But I have always found that it doesn't go so smoothly, and that might be because of one simple reason that I quote here:

We keep removing old kernel packages and installing new ones to ensure we have a secure system with the latest kernel and packages. During this process we can end up losing the kernel modules that iptables needs. I figured this out once again while working on my hardware node, just by running the following find command under /lib:

find /lib -name '*ipt*'

I was expecting the iptables modules to be listed under my current kernel, but to my surprise I didn't find them there. I found them under one of the very old kernels installed on the box. Crazy. So I decided to jump in and quickly reinstall the iptables packages on the machine:

yum reinstall iptables-devel.i686 iptables-devel.x86_64 iptables-ipv6.x86_64 iptables.i686 iptables.x86_64
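To confirm that the modules now exist for the kernel you are actually running, a check along these lines can help (a sketch assuming a standard RHEL/CentOS module layout):

# the netfilter modules should live under the running kernel's directory
uname -r
find /lib/modules/$(uname -r) -name '*ipt*' -o -name 'xt_*.ko*'

# rebuild the module dependency map after reinstalling the packages
depmod -a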

Here is the final output of my csf test after restarting my VM.

~# /etc/csf/csftest.pl
Testing ip_tables/iptable_filter…OK
Testing ipt_LOG…OK
Testing ipt_multiport/xt_multiport…OK
Testing ipt_REJECT…OK
Testing ipt_state/xt_state…OK
Testing ipt_limit/xt_limit…OK
Testing ipt_recent…OK
Testing xt_connlimit…OK
Testing ipt_owner/xt_owner…OK
Testing iptable_nat/ipt_REDIRECT…OK
Testing iptable_nat/ipt_DNAT…OK

RESULT: csf should function on this server

That's it. Now you know why iptables may not work even though you haven't changed anything recently on the hardware node (and have forgotten that you restarted the system into a new kernel). Verify the iptables modules with lsmod, as sketched below, and reinstall the packages if required.
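A minimal check along those lines on the hardware node (the module names are the ones the csf test flagged; newer kernels may use xt_recent instead of ipt_recent):

# list the netfilter modules that are currently loaded
lsmod | egrep 'iptable_|ipt_|xt_'

# try loading the ones the csf test flagged as missing
modprobe -a ipt_recent xt_connlimit xt_owner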

Citrix Xen – Server Pool unavailable

If, after a power failure, your Citrix Xen slave server doesn't come back online in the pool, this might help:

When the Xen environment changes the pool master and a slave cannot reach the new master, the slave goes into a failed state. To fix this, edit the pool.conf file (vi /etc/xensource/pool.conf) and change the pool master IP address from 10.174.XX.XX to 10.174.XX.YY (the correct new pool master address). The following shows the change as represented in the .conf file:

slave:10.174.20.155

to

slave:10.174.20.157

Once the change is complete, run xe-toolstack-restart. This restarts the management interfaces; as a precaution, a full server restart can also help. On reboot, you will see the slave server joining the pool without any trouble.
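Put together, the fix on the failed slave looks roughly like this (a sketch using the example addresses above; substitute your real pool master IP):

# point pool.conf at the new pool master (back up the file first)
cp /etc/xensource/pool.conf /etc/xensource/pool.conf.bak
sed -i 's/^slave:10.174.20.155$/slave:10.174.20.157/' /etc/xensource/pool.conf

# restart the toolstack; reboot if the slave still does not rejoin
xe-toolstack-restart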

Reference: http://forums.citrix.com/thread.jspa?threadID=242210&tstart=30

Citrix XenServer: ballooning daemon is not running

If you're having issues starting VMs on your Citrix XenServer due to a ballooning error like the one below:

$ sudo xe vm-start name-label=vm1
The server failed to handle your request, due to an internal error. The given message may give details useful for debugging the problem.
message: Failure(“The ballooning daemon is not running”)

Then try this command and restart the VM to find it working:

$ sudo xe-toolstack-restart
Stopping xapi: .. [ OK ]
Stopping the v6 licensing daemon: [ OK ]
Stopping the memory ballooning daemon: [FAILED]
Stopping perfmon: [FAILED]
Stopping the fork/exec daemon: [ OK ]
Stopping the multipath alerting daemon: [ OK ]
Starting the multipath alerting daemon: [ OK ]
Starting the fork/exec daemon: [ OK ]
Starting perfmon: [ OK ]
Starting the memory ballooning daemon: [ OK ]
Starting the v6 licensing daemon: [ OK ]
Starting xapi: ……start-of-day complete. [ OK ]
done.
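Once the memory ballooning daemon reports as started, retrying the original command should bring the VM up (same example VM name as above):

# retry starting the VM now that the ballooning daemon is running
sudo xe vm-start name-label=vm1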

Thanks for the pointers from http://www.krzywanski.net/archives/919.