Fixed
Created: Mar 6, 2018
Updated: Dec 3, 2018
Resolved Date: Jun 7, 2018
Found In Version: 6.0.0.24
Fix Version: 6.0.0.37
Severity: Severe
Applicable for: Wind River Linux 6
Component/s: BSP
jffs2 gc thread takes huge CPU resources on PowerPC target with WRLinux 6.0
After analysis and testing, the customer found that the dirent node is never actually deleted from the list; its raw node pointer is only set to NULL:
/*************************************************/
int jffs2_do_link (struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f, uint32_t ino, uint8_t type, const char *name, int namelen, uint32_t time)
	...
	mutex_lock(&c->alloc_sem);
	mutex_lock(&dir_f->sem);
	for (fd = dir_f->dents; fd; fd = fd->next) {
		if (fd->nhash == nhash &&
		    !memcmp(fd->name, name, namelen) &&
		    !fd->name[namelen]) {
			jffs2_mark_node_obsolete(c, fd->raw);
			/* We don't want to remove it from the list immediately,
			   because that screws up getdents()/seek() semantics even
			   more than they're screwed already. Turn it into a
			   node-less deletion dirent instead -- a placeholder */
			fd->raw = NULL;
			fd->ino = 0;
			break;
		}
	}
	mutex_unlock(&dir_f->sem);
/*************************************************/
They tried setting the fd node's ino to - and removing the fd node from the list when fd->raw is set to NULL; the detailed modification snapshot is in the attachment for reference.
After this modification, the issue was no longer observed.
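For reference, a minimal sketch of what such a modification could look like is shown below. This is only an illustration pieced together from the description above (the customer's actual change is in the attached snapshot); it assumes the standard jffs2_full_dirent list layout and the existing jffs2_mark_node_obsolete()/jffs2_free_full_dirent() helpers:
/*************************************************/
	/* Illustrative sketch only -- not the customer's actual patch.
	 * Instead of leaving a node-less placeholder dirent behind
	 * (fd->raw = NULL), unlink the matching entry from
	 * dir_f->dents and free it so GC has nothing left to revisit. */
	struct jffs2_full_dirent *fd, **prev = &dir_f->dents;

	while ((fd = *prev) != NULL) {
		if (fd->nhash == nhash &&
		    !memcmp(fd->name, name, namelen) &&
		    !fd->name[namelen]) {
			jffs2_mark_node_obsolete(c, fd->raw);
			*prev = fd->next;            /* drop the dirent from the list */
			jffs2_free_full_dirent(fd);  /* release it immediately */
			break;
		}
		prev = &fd->next;
	}
/*************************************************/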
${WRL_DIR}/wrlinux-6/wrlinux/configure --enable-board=${LOCAL_CPU_TYPE} --enable-kernel=standard --enable-rootfs=glibc_small --with-template=feature/build_libc --enable-jobs=12 --enable-parallel-pkgbuilds=8 --with-layer=${PATCH_DIR}/osc-patch/osc-glibc-patch-layer --with-rcpl-version=${RCPL_VERSION}
So far this issue has only occurred on the customer side; it happens randomly in their lab when running the following script:
/***********************************************/
while [ True ]; do
	touch /opt/vrpv8/var/log/`cat /dev/urandom | head -n 10 | md5sum | head -c 10`.html
	let x++
	sleep 1
	rm -rf /opt/vrpv8/var/log/*.html
	sleep 1
	if [ 10000 -eq $x ]; then
		exit
	fi
done
/***********************************************/