If you're copying data from an NFS mount, the local root user on your NFS client does not have unrestricted access to the data (the server typically squashes root to an unprivileged user), so if the permissions deny access to "other", i.e. a mode such as rw-rw---- or similar (ending in --- rather than r--), then even root will fail to copy some files.
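For example (illustrative paths and output only), a file whose mode ends in --- gives the squashed root user, who is treated as "other", no read access:

ls -l /nfs/data/report.dat
-rw-rw----   1 5013     100      1048576 Jan 10 09:14 /nfs/data/report.dat
cp /nfs/data/report.dat /local/copy/
cp: cannot open /nfs/data/report.dat: Permission denied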
To capture the outstanding files after the initial rsync run as root, you'll need to determine the UIDs of the owners of the failed files, create dummy users for those UIDs, and perform subsequent rsync runs su'd to those dummy users. You won't get read access any other way.
The following shell script examines the log file of failures generated by rsync -au /src/* /dest/ 2> rsynclog and lists the UIDs of the accounts that have read access to the data that failed to copy. (Note: when using rsync, appending a * will miss .hidden files. Lose the * and use a trailing slash to capture all files, including hidden files and directories.)
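The trailing-slash form of the initial copy would therefore be:

rsync -au /src/ /dest/ 2> rsynclog

The failure lines the script searches for in that log look roughly like the following (illustrative paths; the exact wording can vary between rsync versions):

rsync: opendir "/src/private" failed: Permission denied (13)
rsync: send_files failed to open "/src/private/secret.dat": Permission denied (13)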
Subsequent rsync operations can be run by each of these users in turn to catch the failed data. This requires the users to be created on the system performing the copy, e.g. useradd -o -u <UID> -g0 -d /home/dummyuser -s /bin/bash dummyuser
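Once a dummy user exists for a given UID, the follow-up copy can be run as that user, for example (paths and the dummyuser name are placeholders):

su dummyuser -c "/usr/local/bin/rsync -au /source_dir/ /destination_dir/"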
This could also easily be incorporated into the script of course (a sketch of that follows the script below).
#!/usr/bin/bash

# Variables Section
SRC="/source_dir"
DEST="/destination_dir"
LOGFILE="/tmp/rsynclog"
# For reference only: the initial rsync run (as root) that produced ${LOGFILE}
RSYNCCOMMAND="/usr/local/bin/rsync -au ${SRC}/* ${DEST} 2> ${LOGFILE}"
FAILEDDIRLOG="/tmp/faileddirectorieslog"
FAILEDFILELOG="/tmp/failedfileslog"
UIDLISTLOG="/tmp/uidlistlog"
UNIQUEUIDS="/tmp/uniqueuids"

# Code Section

# Create a secondary list of all the failed directories
grep -i opendir ${LOGFILE} | grep -i failed | cut -d\" -f2 > ${FAILEDDIRLOG}

# Create a secondary list of all the failed files
grep -i "send_files failed" ${LOGFILE} | cut -d\" -f2 > ${FAILEDFILELOG}

# You cannot determine the UID of the owner of a directory, but you can for a file
# Remove any existing UID list log file prior to writing a new one
if [ -f ${UIDLISTLOG} ]; then
        rm ${UIDLISTLOG}
fi

# Create a list of UIDs for failed file copies
cat ${FAILEDFILELOG} | while read EACHFILE; do
        # ls -n prints the numeric UID rather than the mapped user name
        ls -ln "${EACHFILE}" | awk '{print $3}' >> ${UIDLISTLOG}
done

# Sort and remove duplicates from the list
sort ${UIDLISTLOG} | uniq > ${UNIQUEUIDS}
cat ${UNIQUEUIDS}
exit
Don’t forget to chmod +x a script before executing it on a Linux/UNIX system.
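As mentioned earlier, creating the dummy users and re-running rsync as each of them could be folded into the script itself. A minimal sketch, assuming the /tmp/uniqueuids list produced above and the same placeholder source and destination paths:

#!/usr/bin/bash
SRC="/source_dir"
DEST="/destination_dir"

# For each UID that still owns uncopied data, create a throwaway user
# sharing that UID, re-run rsync as that user, then remove the user.
for EACHUID in $(cat /tmp/uniqueuids); do
        useradd -o -u ${EACHUID} -g0 -d /home/dummy${EACHUID} -s /bin/bash dummy${EACHUID}
        su dummy${EACHUID} -c "/usr/local/bin/rsync -au ${SRC}/ ${DEST}"
        userdel dummy${EACHUID}
done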