Wind River Support Network

Status: Fixed

OVP-215 : [selinux] create storage domain failed while selinux enforcing on ovirt-node side

Created: Sep 26, 2013    Updated: Mar 11, 2016
Resolved Date: Oct 28, 2013
Found In Version: 5.0.1
Fix Version: 5.0.1.9
Severity: Severe
Applicable for: Wind River Linux 5
Component/s: Kernel

Description

Problem Description
======================
Creating a storage domain fails while SELinux is enforcing on the ovirt-node side.

Expected Behavior
======================
Creating a storage domain succeeds while SELinux is enforcing on the ovirt-node side.

Observed Behavior
======================
With SELinux in permissive mode on the ovirt-node side, creating a storage domain works; with SELinux enforcing, it fails. A quick way to confirm the mode is sketched below.
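
A minimal sketch for checking and toggling the SELinux mode on the ovirt-node (getenforce and setenforce are the standard SELinux utilities; their presence on the target image is assumed):

# Check the current SELinux mode on the ovirt-node
getenforce        # prints "Enforcing", "Permissive", or "Disabled"

# Switch to permissive mode for the current boot only (not persistent)
setenforce 0

# Switch back to enforcing mode
setenforce 1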

Logs
======================
vdsm.log
----------------
Thread-537::DEBUG::2013-09-26 09:34:34,223::task::568::TaskManager.Task::(_updateState) Task=`a57d26bc-8ef6-4fc5-9840-4c74b3173dfa`::moving from state init -> state preparing
Thread-537::INFO::2013-09-26 09:34:34,224::logUtils::41::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '128.224.158.244:/exports/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-537::DEBUG::2013-09-26 09:34:34,232::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 128.224.158.244:/exports/data /windriver/data-center/mnt/128.224.158.244:_exports_data' (cwd None)
Thread-537::ERROR::2013-09-26 09:34:34,267::hsm::2212::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2208, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 302, in connect
    return self._mountCon.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 208, in connect
    fileSD.validateDirAccess(self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/mount.py", line 260, in getRecord
    (self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of `128.224.158.244:/exports/data` at `/windriver/data-center/mnt/128.224.158.244:_exports_data` does not exist
Thread-537::INFO::2013-09-26 09:34:34,269::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-537::DEBUG::2013-09-26 09:34:34,270::task::1151::TaskManager.Task::(prepare) Task=`a57d26bc-8ef6-4fc5-9840-4c74b3173dfa`::finished: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-537::DEBUG::2013-09-26 09:34:34,270::task::568::TaskManager.Task::(_updateState) Task=`a57d26bc-8ef6-4fc5-9840-4c74b3173dfa`::moving from state preparing -> state finished
Thread-537::DEBUG::2013-09-26 09:34:34,271::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-537::DEBUG::2013-09-26 09:34:34,271::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-537::DEBUG::2013-09-26 09:34:34,272::task::957::TaskManager.Task::(_decref) Task=`a57d26bc-8ef6-4fc5-9840-4c74b3173dfa`::ref 0 aborting False
Thread-538::DEBUG::2013-09-26 09:34:36,332::BindingXMLRPC::908::vds::(wrapper) client [128.224.158.244]::call volumesList with () {}
MainProcess|Thread-538::DEBUG::2013-09-26 09:34:36,333::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-538::DEBUG::2013-09-26 09:34:36,412::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-538::DEBUG::2013-09-26 09:34:36,413::BindingXMLRPC::915::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {}}
Thread-540::DEBUG::2013-09-26 09:34:41,432::BindingXMLRPC::161::vds::(wrapper) [128.224.158.244]
Thread-540::DEBUG::2013-09-26 09:34:41,433::task::568::TaskManager.Task::(_updateState) Task=`ee4b2494-2dcc-4daa-9bb2-aab5205fdac4`::moving from state init -> state preparing
Thread-540::INFO::2013-09-26 09:34:41,434::logUtils::41::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '128.224.158.244:/exports/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}], options=None)
Thread-540::DEBUG::2013-09-26 09:34:41,434::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /bin/umount -f -l /windriver/data-center/mnt/128.224.158.244:_exports_data' (cwd None)
Thread-540::ERROR::2013-09-26 09:34:41,450::hsm::2292::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2288, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 308, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 221, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 242, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 230, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (1, ';umount: /windriver/data-center/mnt/128.224.158.244:_exports_data: not found\n')
Thread-540::DEBUG::2013-09-26 09:34:41,451::misc::1054::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-540::DEBUG::2013-09-26 09:34:41,451::misc::1056::SamplingMethod::(__call__) Got in to sampling method
Thread-540::DEBUG::2013-09-26 09:34:41,452::misc::1054::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-540::DEBUG::2013-09-26 09:34:41,452::misc::1056::SamplingMethod::(__call__) Got in to sampling method
Thread-540::DEBUG::2013-09-26 09:34:41,452::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-540::DEBUG::2013-09-26 09:34:41,464::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-540::DEBUG::2013-09-26 09:34:41,465::misc::1064::SamplingMethod::(__call__) Returning last result 

Workaround

Any defect caused by SELinux policy enforcement can be worked around by disabling policy enforcement; another example is described in the workaround of the older defect WIND00419948.

Disable policy enforcement on the kernel command line with "selinux=1 enforcing=0" or in the /etc/selinux/config file. This is called "permissive" mode; it allows the test cases to proceed while recording the policy violations in /var/log/audit/audit.log (some violations during boot are logged to dmesg instead). Defect reports for SELinux policy violations should include the failure entries from this file, so it is good to get into the habit of examining it. A sketch of both settings follows.
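
A sketch of the two persistent settings and of collecting the recorded violations (the config path and the ausearch tool are standard SELinux userland; their availability on the ovp-ovirt-node rootfs is an assumption):

# /etc/selinux/config -- persistent setting, takes effect on the next boot
SELINUX=permissive
SELINUXTYPE=targeted        # the policy name may differ on the target image

# Alternatively, append to the kernel command line:
#   selinux=1 enforcing=0

# After reproducing the failure in permissive mode, collect the recorded
# violations for the defect report:
ausearch -m avc -ts recent              # if the audit userland is installed
grep denied /var/log/audit/audit.log    # fallback: scan the raw audit log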

Please mention this workaround whenever you address a defect caused by policy enforcement; it is good information to share and remind people about.

Steps to Reproduce

1) /lpg-build/cdc/fast_prod/wrlinuxovp/wrlinux-x/wrlinux/configure --enable-jobs=32 --enable-parallel-pkgbuilds=32 --enable-kernel=preempt-rt --enable-addons=wr-ovp --enable-rootfs=ovp-ovirt-node --enable-board=intel_xeon_core --with-rcpl-version=0

2) make fs

3) deploy images

4) enable selinux

5) boot the target and bring it up in ovirt-engine

6) create storage domain

Check the behavior and the logs. The failing mount can also be reproduced by hand, as sketched below.
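
A sketch for reproducing the failure manually on the ovirt-node, using the exact mount command recorded in vdsm.log above (run with SELinux enforcing; the export and mount point are taken from the log):

# Expected to fail as in the vdsm.log excerpt while SELinux is enforcing
sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 \
    128.224.158.244:/exports/data \
    /windriver/data-center/mnt/128.224.158.244:_exports_data

# Look for the corresponding AVC denial
grep denied /var/log/audit/audit.log | tail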