Overnight I left a loop running that moved the data from my LVM2 LVs to a new raid1 array; I also wanted to reorder the LVs physically at the beginning of the disk, so the loop moved them one at a time, in order.
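It was nothing fancy, roughly the sketch below, where /dev/sdb1 and /dev/md0 stand in for my actual source and destination PVs:

> for lv in root usr var opt repos home download kvmpool; do sudo pvmove -n "$lv" /dev/sdb1 /dev/md0; done

I came back in the morning to see that one of the moves had hung. I killed it, and the loop kept going. Unfortunately something had already gone wrong, and I had to issue pvmove --abort to get things in order. And that almost worked, except I started getting these: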

> sudo lvs --all
Number of segments in active LV pvmove0 does not match metadata
Number of segments in active LV pvmove0 does not match metadata
LV          VG     Attr   LSize   Origin Snap%  Move Log Copy%  Convert 
download    lionvg -wi-ao 200.00g
home        lionvg -wi-ao 100.00g
kvmpool     lionvg -wi-ao  50.00g
opt         lionvg -wi-ao  15.00g
[pvmove0]   lionvg p-C-a-      0
repos       lionvg -wi-ao   2.00g
root        lionvg -wi-ao   4.00g 
usr         lionvg -wi-ao  20.00g
var         lionvg -wi-ao  20.00g

I was also unable to remove or deactivate the stale pvmove0 logical volume:

> sudo lvchange -an lionvg/pvmove0
  Unable to change pvmove LV pvmove0
  Use 'pvmove --abort' to abandon a pvmove

The usually recommended fix, restoring the metadata from an automatic backup with vgcfgrestore, was not an option for me: I had already modified my LVs before deciding to tackle this problem, and restoring an older backup would have discarded those changes.
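If you want to check exactly how stale those backups are, vgcfgrestore can list the archived metadata copies together with their timestamps:

> sudo vgcfgrestore --list lionvg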

The solution, however, is hidden in that very advice: use vgcfgbackup to take an up-to-date backup of the metadata, edit the file to delete the section describing the stale pvmove0, and restore the edited backup with vgcfgrestore.
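Concretely, that came down to something like the following; the file path is arbitrary, and the section to delete should be the pvmove0 block under logical_volumes:

> sudo vgcfgbackup -f /tmp/lionvg.conf lionvg
(edit /tmp/lionvg.conf and delete the whole pvmove0 { ... } block)
> sudo vgcfgrestore -f /tmp/lionvg.conf lionvg

Editing a freshly taken backup, rather than an archived one, means the only thing you lose is the stale pvmove0 entry; every change made to the LVs since the last archive is preserved.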