Wind River Support Network

Resolution: Not to be fixed

OVP-1329 : kvm userspace : vdsm glusterVolumeRebalanceStart behaves unexpectedly

Created: Jul 5, 2013    Updated: Mar 11, 2016
Resolved Date: May 5, 2014
Found In Version: 5.0.1
Severity: Severe
Applicable for: Wind River Linux 5
Component/s: Userspace

Description

Problem Description
======================
glusterVolumeRebalanceStart behaves unexpectedly.
The same rebalance jobs work correctly when run through the native 'gluster' commands,
so this appears to be an issue in vdsm-gluster.
Behavior:
1. Create a gluster volume and start it.
2. Mount the volume and create many files (about 100).
3. Set the volume option cluster.rebalance-stats to value=on.
4. Add a new brick to the volume.
5. Query glusterVolumeRebalanceStatus; the status shows 'not started'.
6. Run glusterVolumeRebalanceStart.
Issue: Nothing is written to the new brick, and glusterVolumeRebalanceStatus reports the job as completed with 0 bytes copied.

root@localhost:/media/sda1/export# vdsClient -s 0 glusterVolumeBrickAdd volumeName="ssme" bricks=128.224.165.233:/media/sda1/export/brick3
Done
root@localhost:/media/sda1/export# vdsClient -s 0 glusterVolumeSet volumeName="ssme" option=cluster.rebalance-stats value=on
Done
root@localhost:/media/sda1/export# vdsClient -s 0 glusterVolumeRebalanceStatus ssme
{'message': '                                    Node Rebalanced-files          size       scanned      failures         status run time in secs\n                               ---------      -----------   -----------   -----------   -----------   ------------   --------------\n                               localhost                0        0Bytes             0             0    not started             0.00\nvolume rebalance: ssme: success: ',
 'rebalance': 'UNKNOWN',
 'status': {'code': 0, 'message': 'Done'}}
Done
root@localhost:/media/sda1/export# vdsClient -s 0 glusterVolumeRebalanceStart "ssme"
{'status': {'code': 0, 'message': 'Done'}}
Done
root@localhost:/media/sda1/export#
root@localhost:/media/sda1/export# gluster volume rebalance ssme status
                                    Node Rebalanced-files          size       scanned      failures         status run time in secs
                               ---------      -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0      completed             0.00
volume rebalance: ssme: success: 
root@localhost:/media/sda1/export# 
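A side-by-side check right after the vdsm call makes the mismatch easy to see (same commands as in the transcript above, repeated here only as a convenience):
    $ vdsClient -s 0 glusterVolumeRebalanceStatus ssme    # rebalance status as reported through vdsm
    $ gluster volume rebalance ssme status                # rebalance status as reported by the native CLI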


Expected Behavior
======================
After RebalanceStart, some files should be copied to the newly added brick.
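Whether that happened can be confirmed by counting entries on each brick directory (a minimal check, assuming the brick paths used elsewhere in this report):
    $ ls -lA /media/sda1/export/brick1 | wc -l
    $ ls -lA /media/sda1/export/brick2 | wc -l
    $ ls -lA /media/sda1/export/brick3 | wc -l    # should grow once the rebalance really runs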

Observed Behavior
======================
The native gluster commands work well; running the commands below produces the expected result.
$ gluster volume rebalance "ssme"  start
$ gluster volume rebalance "ssme"  status

Logs
======================
Attached. 

Steps to Reproduce

1) /lpg-build/cdc/fast_prod/wrlinuxovp/dvd_install/lv15_13sp/wrlinux-5/wrlinux/configure --enable-jobs=8 --enable-parallel-pkgbuilds=4 --enable-kernel=preempt-rt --enable-addons=wr-ovp --enable-rootfs=ovp-ovirt-node --enable-board=intel_xeon_core
2) Start vdsmd service
3) Run the commands below:
    $ vdsClient -s 0 glusterVolumeCreate volumeName="ssme" bricks=128.224.165.233:/media/sda1/export/brick1,128.224.165.233:/media/sda1/export/brick2
    $ vdsClient -s 0 glusterVolumeStart volumeName="ssme"
    $ mount -t glusterfs 128.224.165.233:/ssme /mnt/
    $ for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i ; done
    $ ls -lA brick1/ | wc -l
    $ ls -lA brick2/ | wc -l
    $ vdsClient -s 0 glusterVolumeBrickAdd volumeName="ssme" bricks=128.224.165.233:/media/sda1/export/brick3
    $ vdsClient -s 0 glusterVolumeSet volumeName="ssme" option=cluster.rebalance-stats value=on
    $ vdsClient -s 0 glusterVolumeRebalanceStart "ssme"
    $ vdsClient -s 0 glusterVolumeRebalanceStatus "ssme"
    $ ls -lA brick3/
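As an optional convenience (not part of the original steps), the per-brick entry counts can be collected in one pass to compare the file distribution before and after the rebalance attempt:
    $ cd /media/sda1/export
    $ for b in brick1 brick2 brick3; do echo -n "$b: "; ls -lA $b | wc -l; done    # entry count per brick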