Running heapscope against a parallel application that spawns many threads massively degrades the application's performance, because all of its threads are pinned to a single CPU. This is a regression relative to the original WRL7 GA, introduced by the fix for LIN7-1142. heapscope needs a command-line option to disable CPU pinning so such cases can be handled: users should be able to trade the risk of false-positive leak reports for system performance if they choose. The documentation for the switch should note that disabling CPU pinning degrades the accuracy of leak detection; minimal documentation stating that the switch exists is sufficient.
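For illustration, the pinning behaviour described above corresponds to restricting a process's CPU affinity mask to one CPU, which on Linux can be done with the `sched_setaffinity` family of calls. The sketch below is not heapscope's actual implementation (its internals are not shown in this ticket); it only demonstrates the mechanism and how an opt-out would restore the original mask. The helper names `pin_to_single_cpu` and `restore_affinity` are hypothetical, and the example assumes a Linux host (where `os.sched_setaffinity` is available).

```python
import os

def pin_to_single_cpu(pid):
    """Restrict the process's affinity mask to one CPU.

    This mimics the serializing effect the ticket describes: all
    threads end up competing for a single CPU.  Hypothetical helper,
    not heapscope's API.
    """
    original = os.sched_getaffinity(pid)      # remember the full mask
    os.sched_setaffinity(pid, {min(original)})  # pin to one CPU
    return original

def restore_affinity(pid, cpus):
    """Undo the pinning, i.e. what a --no-cpu-pinning-style opt-out
    would effectively do, trading leak-detection accuracy for
    parallel throughput.  The flag name is illustrative only."""
    os.sched_setaffinity(pid, cpus)

if __name__ == "__main__":
    saved = pin_to_single_cpu(0)              # 0 = current process
    assert len(os.sched_getaffinity(0)) == 1  # now confined to one CPU
    restore_affinity(0, saved)                # opt out of pinning
    assert os.sched_getaffinity(0) == saved   # full mask restored
```

On a multi-core host, the pinned state is exactly the condition reported here: a many-threaded workload time-slicing on one CPU instead of running in parallel.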