
May 18

Using Linux commands on Windows

Wouldn’t it be nice if you could pipe the output from Windows commands into non-Windows commands like grep, cut, awk and sort that are available to you on Unix-based operating systems?

 

Download and install GNUWin32 from here, along with the CoreUtils package here and Grep here; that should do it.  There are more packages available here too.

Once installed, add the path to the bin directory to your Windows System Environment Variable Path

(screenshot: adding the bin directory to the Path environment variable)

A few useful commands will now be available on the command line.  My favourite is comm, which compares files and is quite flexible with its output: the -1, -2 and -3 switches suppress lines unique to file1, lines unique to file2, or lines common to both files respectively.  You can also combine them, e.g. -12, -13 or -23, so that only the desired output remains.  This takes a bit of playing around with, but is very powerful and very simple.  So much so, that it is my number 1 go-to tool for file comparison.  Examples are shown in the screenshots below.
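A quick, self-contained example of the behaviour described above (the file names and contents here are invented; note that comm needs its input files sorted):

```shell
# comm compares two sorted files, column by column
cd "$(mktemp -d)"
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n'  > file2
comm file1 file2        # three columns: unique to file1, unique to file2, common to both
comm -12 file1 file2    # suppress columns 1 and 2: lines common to both files
comm -13 file1 file2    # lines unique to file2
comm -23 file1 file2    # lines unique to file1
```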

(screenshots: comm help output, comm -3, and comm running on Windows)

Note:  Some Windows tools such as icacls export text in a Unicode format rather than ANSI.  When viewed using Notepad or Notepad++, all appears fine, but if you cat them, you’ll see there are effectively spaces between each character, meaning grep won’t work.  Such text files will need to be saved in ANSI format first.  You can do this using Notepad++.  After selecting Encode in ANSI, save it, then retry grep for a more successful pattern match!
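If you have iconv available (it is standard on Linux and also available among the GNUWin32 packages), the conversion can be scripted rather than done in Notepad++.  This is a sketch: the filename and the sample content are my own, and it assumes the export is UTF-16LE, which is typical of icacls output:

```shell
cd "$(mktemp -d)"
# Simulate a UTF-16 export like the ones icacls produces (hypothetical content)
printf 'BUILTIN\\Users:(OI)(CI)(RX)\n' | iconv -f UTF-8 -t UTF-16LE > perms.txt
# Convert back to plain single-byte text so grep can match it
iconv -f UTF-16LE -t ASCII//TRANSLIT perms.txt > perms-ansi.txt
grep Users perms-ansi.txt
```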

(screenshot: Notepad++ Encode in ANSI)

 

Aug 08

Dell BIOS updates w/o Windows

If, like me, you have a Dell laptop running Linux and you want to bring your firmware up to date, you’ll realise that the executables downloadable from Dell’s support site require a Windows OS to run.  Or do they?  The good news is no, they don’t.

OK, so they won’t run on Linux either, but they will run from a FreeDOS command line.

Long story short,  download SystemRescueCD

Create a bootable USB Stick using THESE instructions…

mkdir -p /tmp/cdrom

sudo mount -o loop,exec ~/Downloads/systemrescuecd-x86-4.5.4.iso /tmp/cdrom         #your version may be newer!

plug in the usb stick      #be prepared to lose everything on it!

cd /tmp/cdrom

sudo bash ./usb_inst.sh

Create a folder on the USB stick called Dell for example, and copy the BIOS update for your computer into it.

Boot the computer with the USB stick and choose the FreeDOS option (it can be found in one of the menus), otherwise it’ll boot into the default linux command line environment, and you don’t want that for this.

At the FreeDOS command prompt A:>, type C: to switch to the USB stick, then dir to view the files on it.

You should see the Dell directory you created.  cd into the Dell directory and run the executable BIOS upgrade program.

Reboot into your Linux OS.  The following commands show the firmware level and other info for your computer.

You may need to install libsmbios first

sudo apt-get install libsmbios-bin

(screenshot: dell firmware info commands)

 

Nov 30

Manipulating Files in Linux

RHCE 2: Manipulating files in Linux.  The following blog post is a concise summary of how one can interact with files on a Linux system.  In fact, the information contained herein applies to any Linux or UNIX system.  If you learn everything contained in this one post of the many posts on my blog, you’ll be well on your way when it comes to turning your hand to any UNIX or Linux system.  Basic but essential knowledge.

Creating a file

“Everything is a file”.  You’ll hear that said about UNIX and/or Linux.  Unlike Windows, there is no registry, just the filesystem.  As such, everything is represented by a file somewhere in the filesystem.  More on the different types of file later.

cat, touch, vi, vim, nano, tee and > (after a command) are all used to create files.  tee is special: when you pipe a command into it, it writes its standard input to a file as well as to the screen (using > would hide the output from the screen, redirecting it to a file instead).

Listing files

ls, ll (a common alias for ls -l) and ls -i (displays the inode of the file) are used with many possible switches to display directory listings, e.g. ls -al (long listing showing permissions, including hidden files) and ls -lart (the same, but sorted in reverse date order too) are common uses of the ls command.

Display contents of a file

cat, more, less, head, tail, view, vi, vim, nano, uniq and strings are all commands used to display files in similar but slightly different ways, i.e. in their entirety, a page at a time, the top lines, the bottom lines, in an editor, just unique lines of the file and just ascii text (not binary information) contained within a file.

Copy or Rename a file

cp, rsync, tar, cpio, mv   -can all be used to copy files, move files or rename files.  In Linux, you don’t rename a file, you move it.

Remove a file

rm, unlink, rmdir (if it’s a directory, though rm -r will recurse through the tree removing subdirectories as well as files contained beneath the specified starting point.  This is dangerous, especially when used with rm -rf to force it.)

Ownership and Permissions

Like Windows, files have an owning user, they also have an owning group attribute as well as permissions that dictate what level of access the owning user, owning group and everyone else has to the file (or directory).  This is slightly different to Windows, whereby permissions can be set on multiple groups added to the ACL (access control list) of a file (or directory) and takes some getting used to.

To change owner or group use the chown and chgrp commands, or just the chown user:group command to do both in one go.

To change the permissions, use the chmod command.

-rwxrwxrwx    where – means regular file (more on different file types later), then the first rwx is read, write, execute permissions of the owner, the second rwx is the same for the group and the third rwx is everyone else.  Each permission bit has a value

– 421 421 421

So to set permissions of owner full access, group read, everyone read, i.e. rwxr--r--, would be 4+2+1, 4, 4, i.e. 744, so chmod 744 filename.  Full access for everyone would be chmod 777 filename.
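The arithmetic is easy to sanity-check on the command line (the filename is arbitrary; stat -c is the GNU coreutils form):

```shell
cd "$(mktemp -d)"
touch filename
chmod 744 filename              # owner rwx (4+2+1=7), group r (4), other r (4)
ls -l filename                  # shows -rwxr--r--
stat -c '%a %A' filename        # prints the octal and symbolic forms together
chmod u=rwx,go=r filename       # the symbolic equivalent of 744
```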

Types of file

Regular   (ascii or binary)

Executable   (allowed to execute)

Directory   (contains one or more files)

Symlink   (hard or soft link to another file.  A hard link shares the inode of the file it links to; a soft link has its own inode and points to the target file by name.  ln -s realfile linkfile is a common use.  It’s common to get the order of the arguments the wrong way around: the real file comes first.)

Device   (character/raw or block special files are used to send streams of data to kernel modules which controls the sending of the data stream to hardware, e.g. a volume group has a character special file, a disk device has a block special file)

Named Pipe   (fifo, first in first out, used to send one-way streams of data to other processes (inter-process communication or IPC))

Socket    -a two-way named pipe.  Used for system services for example, whereby information is received and transmitted.
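The hard/soft link inode behaviour mentioned under Symlink above is easy to verify (the names are illustrative):

```shell
cd "$(mktemp -d)"
echo data > realfile
ln realfile hardlink        # hard link: shares realfile's inode
ln -s realfile softlink     # soft link: its own inode, points at realfile by name
ls -li realfile hardlink softlink   # first column is the inode number
# realfile and hardlink show the same inode; softlink shows a different one
```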

File attributes

Besides permissions that control access to a file, files on a Linux system can also have attributes applied to them that control what can and can’t be done to the file – even by the root user.

stat   -Display statistics about a file.

wc    -Word count a file (can also be used with wc -l to count lines in a file, or wc -c to count characters)

lsattr    -List attributes of a file.

chattr     -Change attributes of a file.

a   -Can only be appended to

A   -Access time not updated

c    -Auto compress

d    -cannot be backed up by the dump command

D   -contents of the directory are written synchronously to disk

i    -is immutable (cannot be changed or deleted)

j    -is added to the journal before being written to disk on journalling file systems

s    -is securely deleted, i.e. actual data blocks are wiped too

S   -file is synchronously written to disk

u   -undeletable
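stat, wc and lsattr from the list above in action on a throwaway file (the filename and contents are my own; lsattr only works on filesystems that support attributes, hence the fallback):

```shell
cd "$(mktemp -d)"
printf 'alpha\nbeta\ngamma\n' > demo.txt
wc -l demo.txt                        # 3 lines
wc -c demo.txt                        # byte count
stat demo.txt                         # size, inode, permissions and timestamps
lsattr demo.txt 2>/dev/null || true   # attribute flags (ext2/3/4 only)
```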

Pattern matching

The famous grep command is used to simply match lines of text contained in a file, or more cleverly lines containing patterns of text (defined by regular expressions) in a file or files.  More on Regular Expressions will be covered later.

grep -l pattern file1 file2 file3   -lists the names of the files among file1, file2 and file3 that contain a match for pattern

grep -n pattern file1    -find the pattern and displays the line numbers where the matches occur.

grep -v     -anything but the pattern matches

grep ^pattern   or    grep pattern$  matches the patterns when they occur at the beginning or the end of the line only.

grep -i   ignores case (because Linux is case sensitive of course)

egrep or grep -E 'pattern1|pattern2' file1    -displays lines matching either pattern
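The switches above, demonstrated on a throwaway log file (the filename and contents are invented):

```shell
cd "$(mktemp -d)"
printf 'Error: disk full\nall ok\nerror: timeout\nwarning: slow\n' > app.log
grep -n error app.log            # line numbers; case sensitive, so only line 3
grep -i error app.log            # -i matches Error and error
grep -v error app.log            # everything except lines containing "error"
grep '^all' app.log              # only where the pattern starts the line
grep 'slow$' app.log             # only where the pattern ends the line
grep -E 'error|warning' app.log  # either pattern
```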

Comparing files

diff, comm and grep are used to compare two files and print matching lines and differing lines, e.g. diff -c file1 file2   displays the output in 3 sections.   comm with -1, -2 and/or -3 (e.g. comm -12 file1 file2) is very similar to diff -c, except columns 1, 2 and/or 3 are suppressed instead of displayed.  Column 1 contains lines unique to file1, column 2 contains lines unique to file2 and column 3 contains lines common to both.  Use of comm takes some getting used to, so read the man page to be sure you’re getting the results you’re after and not something else, or just use diff -c.  comm is a very cool tool though, and I find myself using it more than diff.  A new favourite is grep -Fxv -f decommissioned backupclients, which lists any lines in backupclients that are not found in the decommissioned list.
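The grep -Fxv -f trick above, sketched with made-up host lists:

```shell
cd "$(mktemp -d)"
printf 'server1\nserver2\nserver3\n' > backupclients
printf 'server2\n'                   > decommissioned
# Lines of backupclients that do not appear (as exact whole lines) in decommissioned
grep -Fxv -f decommissioned backupclients    # prints server1 and server3
```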

Finding files

The find command in UNIX/Linux is fantastic, but like Linux itself, it has a reputation for having a steep learning curve.  I’ll try to make it easy by keeping this short and sweet.

find path option action   where option and action have values and commands specified respectively, i.e. find path option value action command

e.g. find ./ -size -1G -exec ls -al {} \;   will find files from the present working directory down that are less than 1GB and will long-list any matches

other options are

-name     match names (can also use regular expressions like grep)

-atime     last accessed time

-user       owning user is

-mtime    last modified time

-ctime     change time

-group     owning group

-perm     permissions are e.g. 744

-inum     inode number is

-exec can be replaced with -ok or -print to keep the command simpler for simpler finding requirements.  -exec can execute any command upon the files found that match the specified matched conditions, e.g. ls, cp, mv or rm (very dangerous).
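A self-contained illustration of the above (filenames are my own; note that GNU find rounds sizes up to the given unit, so byte (c) suffixes are more predictable for small files):

```shell
cd "$(mktemp -d)"
mkdir logs
printf 'tiny' > logs/small.log
truncate -s 2M logs/big.log            # create a 2MiB file (GNU coreutils)
find . -name '*.log'                   # match by name
find . -name '*.log' -size +1M         # only logs larger than 1MiB
find . -name '*.log' -size -100c -exec ls -al {} \;   # long-list logs under 100 bytes
```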

The locate command can also be used to find files.  For executable binaries, it might be quicker to use which or whereis to display the path of the binary that would be executed if the full path were not specified (these rely upon the PATH environment variable to locate and prioritise).  Also check for any command aliases in your ~/.profile and ~/.bashrc if whereis or which turns nothing up, as a command alias by one name may be calling a binary by another name.  But I begin to digress!

Sorting files

sort

sort -k2 -n    -sort on column 2, numerically (useful if the file contains columns of data).  Can also be used to sort by month, e.g. ls -al | sort -k 6M  and use -o outputfile to write results to a file rather than > or >>
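For example, on a throwaway file where column 2 is numeric (the filename and data are invented):

```shell
cd "$(mktemp -d)"
printf 'cherry 30\napple 2\nbanana 10\n' > fruit.txt
sort -k2 -n fruit.txt                  # numeric sort on column 2
sort -k2 -n -o sorted.txt fruit.txt    # -o writes the result to a file
head -1 sorted.txt                     # apple 2
```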

Extracting data from a file

cut and awk can be used to extract delimited lines of data from a file or columns of data from a file respectively, e.g.

cut -d, -f3 filename     -displays the third field in a comma delimited file

awk '{print $3}' filename    -displays the third column in a file
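Both in action on a throwaway comma-delimited file (the filename and data are my own; awk is given the comma separator with -F):

```shell
cd "$(mktemp -d)"
printf 'tom,smith,it\njan,jones,hr\n' > staff.csv
cut -d, -f3 staff.csv             # third comma-delimited field: it, hr
awk -F, '{print $2}' staff.csv    # second field via awk: smith, jones
echo 'one two three' | awk '{print $3}'   # default separator is whitespace: three
```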

Translating data in a file

sed and tr are stream editors for filtering and transforming text, and for translating or deleting characters, respectively.  Many great examples of sed are to be found on the internet.

A simple example of sed would be echo day | sed s/day/night/ to convert the first occurrence of day on each line into night; add the g flag (s/day/night/g) to convert all occurrences.

A similar, simple example of tr would be tr 'day' 'night' < input.txt > output.txt, though note that tr translates individual characters rather than words: this maps d to n, a to i and y to g.
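The difference between the two is worth seeing side by side: sed substitutes patterns, tr maps individual characters:

```shell
echo 'day after day' | sed 's/day/night/'    # first match per line: night after day
echo 'day after day' | sed 's/day/night/g'   # g flag, all matches: night after night
echo 'abcabc' | tr 'abc' 'xyz'               # character-for-character: xyzxyz
echo 'Hello World' | tr '[:upper:]' '[:lower:]'   # hello world
```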

 

May 22

Data Migration using robocopy

As a complement to my recent post "Data Migration using emcopy"

http://www.cyberfella.co.uk/2014/05/02/emcopy/

I thought it only fair to follow up with an equivalent post for good ol’ robocopy.  This has mainly come about having discovered an annoying bug in emcopy whereby it doesn’t ignore the directories specified by more than one /xd exclusion – it always excludes the last one specified, but none of the others?!

Robocopy on the other hand, does allow exclusion of more than one directory, each one specified using the /xd switch and can be a full path (to exclude very specific directories) or just one word (to exclude any directories with that name anywhere in the directory tree).

The switch worth mentioning the most though, is the /FFT switch.  Update:  AND THE /B SWITCH (use backup rights). Uh, and come to think of it, the /XO switch too (especially if you’re running a repeated backup of data to a USB HDD).

Note: Use /TIMFIX with /B to correct non-copying of datestamps on files, resulting in 02/01/1980 datestamps on all files copied with /B Backup rights.

When migrating data into a Celerra/VNX/NetApp CIFS Server, the act of copying data from a NTFS volume on a Windows Server to a Linux based Filesystem on a NAS is enough to throw the timestamps on the files out just enough to make robocopy think that the source file is newer, even when it’s not.  This means that subsequent copies of the changed files take just as long as the initial copy.

By appending /FFT to the long list of switches used in your robocopy command, it allows for a discrepancy of up to 2 seconds – enough to provide a convenient workaround to this problem.

In practice, this brought troublesome 36 hour copy operations that required a weekend cut-over to be arranged down to just over 1 hour – cue cliché – saving time and money.

An example command is given below…

robocopy d:\source e:\dest *.* /xd "System Volume Information" d:\Migration homedirs profiles wtsprofiles /e /np /fft /xo /r:1 /w:1

There are many more switches available in robocopy, including the ability to use multiple threads in newer versions (highly recommended).  Just type robocopy /? from the Windows command line to see the other options.

In practice I found emcopy to be inconsistent in copying ACEs across to large filesystems, completely skipping some folders when creating an empty folder structure using the /xf * /create method.  This means that file data (and the missing subfolders) subsequently copied into place with /nosec would be forced to inherit the parent permissions.  Most likely not a problem, but if the data has lots of bespoke permissions then it becomes a huge problem, as data is generally more "open" at the parent levels.

To re-sync permissions, the following command was useful.

for /f "delims=" %%f IN ('dir g:\root\ /ad /b') DO robocopy /E /Copy:S /IS /IT q:\%%f g:\root\%%f

This has since been updated.  To re-sync folder perms between source and dest trees, this works…

robocopy s:\ d:\ /lev:3 /MIR /SEC /SECFIX /V /B /TIMFIX /xo /xn /xc /r:1 /w:1

Due to folders being missed, I never deal with a file system using a single command.  I always break it up and handle each top level folder as an individual job by placing the command in a for loop as shown above.

Alternatively use the following to replicate changed files and their security, and also set the security on unchanged files.  The /V shows the unchanged files being fixed.

for /f "delims=" %%f IN ('dir g:\root\ /ad /b') DO robocopy q:\%%f g:\root\%%f /MIR /SEC /SECFIX /V /B /TIMFIX /r:1 /w:1

I found myself fighting for a day or two with an apparently intermittent problem copying NTFS security when robocopying data from NAS to NAS.  Despite using all the methods described above, sometimes the NTFS permissions just weren’t being copied across.  I have since discovered that using the /B switch with every other method mentioned already fixes this annoying problem and the ACEs come across perfectly.

I’ve since encountered odd behaviour using for loops that has resulted in a mistrust of them, so I code each top level folder as an individual line in a batch file.  The problems were encountered where there were spaces in the folder names; irrespective of using "delims=", robocopy didn’t always get it right thereafter.

Robocopy doesn’t let you include only certain folders.  It lets you exclude certain folders, but that’s not much use if you only want to copy folders starting with u6* for example.  In this situation, e.g. migrating all users whose usernames begin with u6 to a separate filesystem, you need to use a for loop.

for /f "delims=" %f IN ('dir s:\root\u6* /ad /b') DO robocopy s:\root\%f t:\root\%f /COPYALL /R:1 /W:1 /ZB /NP /L /FFT /LOG+:D:\cyberfellaltd\u6mig.log

Update: 28/2/2017  Real World Example: Migration of a subset of users to new filesystem.  Two passes, two different approaches.  One does initial copy of just usernames beginning with u5, the second generates a list of missing users after the first pass and does a second pass targeting the missing users.  Note that this is a hybrid Bash/Batch script and requires the installation of GNUWin32 on Windows in order to work.  This is covered here.

for /f "delims=" %%f IN ('dir s:\root\u5* /ad /b /o') DO robocopy s:\root\%%f t:\root\%%f /COPYALL /R:1 /W:1 /ZB /NP /FFT /LOG+:d:\mattb\u5mig.log   (does first pass on all u5 users)

dir /ad /b /o s:\root\u5* | tr '[:upper:]' '[:lower:]' | tee t:\src.txt | wc -l      (counts 2113 and writes list of all u5 users to src.txt)

dir /ad /b /o t:\root\u5* | tr '[:upper:]' '[:lower:]' | tee t:\dest.txt | wc -l    (counts 2113 and writes list of all u5 users to dest.txt)

comm -23 t:\src.txt t:\dest.txt | tee t:\missing.txt | wc -l  (counts 0 differences and writes list of any missing u5 users to missing.txt)

for /f "delims=" %%f IN ('cat t:\missing.txt') DO robocopy s:\root\%%f t:\root\%%f /COPYALL /R:1 /W:1 /ZB /NP /FFT /LOG+:d:\mattb\u5mig.log   (does 2nd pass on any missing users only)

If you have folders containing ampersand characters in the name, your copies can fail.  This post here covers a way to deal with it using variables.

Aug 28

Networker Cheatsheet

Here is a handy cheatsheet for troubleshooting failing backups and recoveries using EMC’s NetWorker, all taken from real-world experience (and regularly updated).

If it’s helped you out of a pinch, and is worth a dollar, then please consider donating to help maintain this useful blog.

Is backup server running?

nsrwatch -s backupserver        -Gives a console version of the NMC monitoring screen

Check the daemon.raw is being written to…

cp /nsr/logs/daemon.raw ~/copyofdaemon.raw

nsr_render_log -l ~/copyofdaemon.raw > ~/copyofdaemon.log

tail -10 ~/copyofdaemon.log

You may find that mminfo and nsradmin commands are unsuccessful.  The media database may be unavailable, and/or you may receive a "program not registered" error, which usually implies the NetWorker daemons/services are not running on the server/client.  This can also occur during busy times, such as when clone groups are running (even though this busy-ness is not reflected in the load averages on the backup server).

Client config.

Can you ping the client / resolve the hostname or telnet to 7937?

Are the static routes configured (if necessary).

Can the client resolve the hostnames for the backup interfaces? have connectivity to them?

Does the backup server appear in the /nsr/res/servers file?

Can you run a save -d3 -s /etc on the client?

From the backup server (CLI)…

nsradmin -p 390113 -s client

Note:  If the name field is incorrect according to nsradmin (happens when machines are re-commissioned without being rebuilt) then you need to stop nsrexecd, rename /nsr/nsrladb folder to /nsr/nsrladb.old, restart nsrexecd, and most importantly, delete and recreate the client on the networker backup server, before retrying a savegrp -vc client_name group_name

Also check that all interface names are in the servers file for all interfaces on all backup servers and storage nodes likely to back the client up.

Can you probe the client?

savegrp -pvc client groupname

savegrp -D2 -pc client groupname (more verbose)

Bulk import of clients

Instead of adding clients manually one at a time in the NMC, you can perform an initial bulk import.

nsradmin -i bulk-import-file

where the bulk-import-file contains many lines like this

create type: NSR Client;name:w2k8r2;comment:SOME COMMENT;aliases:w2k8r2,w2k8r2-b,w2k8r2.cyberfella.co.uk;browse policy:Six Weeks;retention policy:Six Weeks;group:zzmb-Realign-1;server network interface:backupsvrb1;storage nodes:storagenode1b1;

Use Excel to form a large csv, then use Notepad++ to remove the commas.  Be aware there are commas in the aliases field, so use an alternative character in Excel to represent those, then replace it with a comma once all the other commas have been removed from the csv.
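That last find-and-replace step can also be scripted with the GNUWin32 tools installed earlier.  This is a sketch: the pipe character as the stand-in for the commas inside the aliases field, and the filename, are my own choices:

```shell
cd "$(mktemp -d)"
# One line of the bulk import file as exported, with | standing in for commas
echo 'create type: NSR Client;name:w2k8r2;aliases:w2k8r2|w2k8r2-b|w2k8r2.cyberfella.co.uk;' > bulk-import-file
sed -i 's/|/,/g' bulk-import-file    # restore the commas (GNU sed in-place edit)
cat bulk-import-file
```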

Add user to admin list on bu server

nsraddadmin -u user=username, host=*     

where username is the username minus the domain name prefix (not necessary).

Reset NMC Password (Windows)

The default administrator password is administrator.  If that doesn’t work, check to see that the GST service is started using a local system account (it is by default), then in Computer Management, Properties, Advanced Properties, create a System Environment Variable; GST_RESET_PW=1

Stop and start the GST Service and attempt to logon to the NMC using the default username and password pair above.

When done, set GST_RESET_PW=<null>

Starting a Backup / Group from the command line

On the backup server itself:  savegrp -D5 -G <group_name>

Ignore the index save sets if you are just testing a group by adding  -I

Just backing up the :index savesets in a group: savegrp -O -G <group_name>

On a client: save -s <backup_server_backupnic_name> <path>

Reporting with mminfo

List names of all clients backed up over the last 2 weeks (list all clients)

mminfo -q "savetime>2 weeks ago" -r 'client' | sort | uniq

mminfo -q 'client=client-name,level=full' -r 'client,savetime,ssid,name,totalsize'

in a script with a variable, use double quotes so that the variable gets evaluated, and to sort on the american date column…

mminfo -q client=${clientname},level=full -r 'client,savetime,ssid,level,volume' | sort -k 2.7,2.10n -k 2.1,2.5n -k 2.4,2.5n

mminfo -ot -c client -q "savetime>2 weeks ago"

mminfo -r "ssid,name,totalsize,savetime(16),volume" -q "client=client_name,savetime>10/01/2012,savetime<10/16/2012"

List the last full backup ssid’s for subsequent use with the recover command (unix clients)

mminfo -q 'client=server1,level=full' -r 'client,savetime,ssid'

Is the client configured properly in the NMC? (see diagram above for hints on what to check in what tabs)

How many files were backed up in each saveset (useful for counting files on a NetApp, which is slow using the find command at host level)

sudo mminfo -ot -q 'client=mynetappfiler,level=full,savetime<7 days ago' -r 'name,nfiles'

name                         nfiles

/my_big_volume          894084

You should probably make use of the ssflags option in the mminfo report too, which adds an extra column regarding the status of the saveset, displaying one or more of the characters CvrENiRPKIFk, whose meanings are listed below.

C Continued, v valid, r purged, E eligible for recycling, N NDMP generated, i incomplete, R raw, P snapshot, K cover, I in progress, F finished, k checkpoint restart enabled.

Check Client Index

nsrck -L7 clientname

Backing up Virtual Machines using Networker,VCentre and VADP

To back up virtual machine disk files on vmfs volumes at the vmware level (as opposed to individual file level backups of the individual VMs), networker can interface with the vCenter servers to discover which VMs reside on the ESXi clusters managed by them, and their locations on the shared vmfs LUN.  For this to work, the shared LUNs also need to be presented/visible to the VADP Proxy (a Windows server with the Networker client and/or server running as a storage node) in the fc switch fabric zone config.

The communication occurs as follows:

The backup server starts backup group containing vadp clients.

The vadp proxy asks vCenter which physical esxi host has the vm, and where its files reside on the shared storage luns.

The vadp proxy / networker storage node then tells the esxi host to maintain a snapshot of the vm while the vmdk files are locked for backup.

the vmdk files are written to the storage device (in my example, a data domain dedup device)

when the backup is complete, the client index is updated on the backup server, and the changes logged by the snapshot are applied to the now unlocked vmdk and then the snapshot is deleted on the esxi host.

Configuring Networker for VADP Backups via a VADP Proxy Storage Node

The VADP Proxy is just a storage node with fibre connectivity to the SAN and access to the ESXi DataStore LUNs.

In Networker, right click Virtualisation, Enable Auto Discovery

(screenshot: Enable Auto Discovery)

Complete the fields, but notice there is an Advanced tab.  This is to be completed as follows…  not necessarily like you’d expect…

(screenshot: Auto Discovery Advanced tab)

Note that the Command Host is the name of the VADP Proxy, NOT the name of the Virtual Center Server.

Finally, Run Auto Discovery.  A map of the infrastructure should build in the Networker GUI

(screenshot: virtualisation map in the Networker GUI)

Ensure the vc, proxy and networker servers all have network comms and can resolve each other’s names.

You should now be ready to configure a VADP client.

Configuring a VADP client (Checklist)

GENERAL TAB

(screenshot: VADP client General tab)

IDENTITY
COMMENT
application_name – VADP
VIRTUALIZATION
VIRTUAL CLIENT
(TICK)
PHYSICAL HOST
client_name
BACKUP
DIRECTIVE
VCB DIRECTIVE
SAVE SET
*FULL*
SCHEDULE
Daily Full

APPS AND MODULES TAB

(screenshot: VADP client Apps and Modules tab)

BACKUP
BACKUP COMMAND
nsrvadp_save -D9
APPLICATION INFORMATION
VADP_HYPERVISOR=fqdn_of_vcenter (hostname in caps)
VADP_VM_NAME=hostname_of_vm (in caps)
VADP_TRANSPORT_MODE=san
DEDUPLICATION
Data Domain Backup
PROXY BACKUP
VMWare
hostname_of_vadp_proxy:hostname_of_vcenter.fqdn(VADP)

GLOBALS 1 OF 2 TAB
ALIASES
hostname
        hostname.fqdn
        hostname_backup
        hostname_backup.fqdn
        ip_front
        ip_back

GLOBALS 2 OF 2 TAB
REMOTE ACCESS
user=svc_vvadpb,host=hostname_vadp_proxy
        user=SYSTEM,host=hostname_vadp_proxy
        *@*

OWNER NOTIFICATION
  /bin/mail -s "client completion : hostname_client" nwmonmail

Recovery using recover on the backup client

sudo recover -s backup_server_backup_interface_name

Once in recover, you can cd into any directory irrespective of permissions on the file system.

Redirected Client Recovery using the command line of the backup server.

Initiate the recover program on the backup server…
sudo recover -s busvr_interface -c client_name -iR -R client_name

or use…  -iN (No Overwrite / Discard)
-iY (Overwrite)

-iR (Rename ~ )

Using recover> console

Navigate around the index of recoverable files just like a UNIX filesystem

Recover>    ls, pwd, cd

Change Browsetime
Recover>    changetime yesterday
1 Nov 2012 11:30:00 PM GMT

Show versions of a folder or filename backed up
Recover>      versions     (defaults to current folder)
Recover>    versions myfile

Add a file to be recovered to the “list” of files to be recovered
Recover>    add
Recover>     add myfile

List the marked files in the “list” to be recovered
Recover>    list

Show the names of the volumes where the data resides
Recover>    volumes

Relocate recovered data to another folder
Recover>    relocate /nsr/tmp/myrecoveredfiles

Recover>  relocate "E:\\Recovered_Files"     (for redirected Windows client recovery from a Linux server)

View the folder where the recovered files will be recovered to
Recover>    destination

Start Recovery
Recover>    recover

SQL Server Recovery (database copy) on a SQL Cluster

First, rdc to cluster name and run command prompt as admin on cluster name (not cluster node)
nsrsqlrc -s <bkp-server-name> -d MSSQL:CopyOfMyDatabase -A <sql cluster name> -C MyDatabase_Data=R:\MSSQL10_50.MSSQLSERVER\MSSQL\Data\CopyOfMyDatabase.mdf,MyDatabase_log=R:\MSSQL_10_50\MSSQLSERVER\MSSQL\Data\CopyOfMyDatabase.ldf MSSQL:MyDatabase

Delete the NSR Peer Information of the NetWorker Server on the client/storage node.

Please follow the steps given below to delete the NSR peer information on NetWorker Server and on the Client.

1. At NetWorker server command line, go to the location /nsr/res

2. Type the command:

nsradmin -p nsrexec
print type:nsr peer information; name:client_name
delete
y

Delete the NSR Peer Information for the client/storage node from the NetWorker Server.

Specify the name of the client/storage node in the place of client_name.

1. At the client/storage node command line, go to the location /nsr/res

2. Type the command:

nsradmin -p nsrexec
print type:nsr peer information
delete

y

VADP Recovery using command line

Prereqs to a successful VADP restore are that the virtual machine be removed from the Inventory in VCenter (right click vm, remove from Inventory), and the folder containing the virtual machines files in the vmware datastore be renamed or removed. If the vm still exists in vmware or in the datastore, VADP will not recover it.

Log onto the backup server over ssh and obtain the save set ID for your VADP "FULLVM" backup.

mminfo -avot -q "name=FULLVM,level=full"

Make a note of the SSID for the vm/backup client (or copy it to the cut/paste buffer)

e.g. 1021210946

Log onto the VADP Proxy (which has SAN connectivity over fibre necessary to recover the files back to the datastore using the san VADP recover mode)

recover.exe -S 1021210946 -o VADP:host=VC_Svr;VADP:transmode=san

Note that if you want to recover a VM back to a different vCenter,Datastore,ESX host and/or different resource pool, you can do that from the recover command too, rather than waiting to do it using the vsphere client.  this can be used if your vm still exists in vmware and you don’t want to overwrite it.  You can additionally specify VADP:host=  VADP:datacenter=  VADP:resourcepool=  VADP:hostsystem= and VADP:datastore= fields in the recover command, separated by semicolons and no spaces.

I’ve found that whilst the minimal command above may work in some environments, others demand a far more detailed recover.exe command with all VADP parameters set before it’ll communicate with the vCenter.  A working example is shown below (each VADP parameter is separated onto a new line for readability – you’ll need to put it back onto a single line and remove any spaces between the parameters).

recover.exe -S 131958294 -o

VADP:host=vc.fqdn;

VADP:transmode=san;

VADP:datacenter=vmware-datacenter-name;

VADP:hostsystem=esxihost.fqdn;

VADP:displayname=VM_DISPLAYNAME;

VADP:datastore="config=VM_DataStore#Hard disk 2=VM_DataStore_LUN_Name#Hard disk 1=VM_DataStore_LUN_Name";

VADP:user=mydomain\vadp_user;

VADP:password=vadp_password

Creating new DataDomain Devices in Networker

In Networker Administrator App from NMC Console, Click Devices button at the top.
Right click Devices in the Left hand pane, New Device Wizard (shown)

Select Data Domain, Next, Next

 Use an existing data domain system
Choose a data domain system in the same physical location to your backup server!
Enter the Data Domain OST username and password

Browse and Select
Create a New Folder in sequence, e.g. D25, tick it.

Highlight the automatically generated Device Name, Copy to clipboard (CTRL-C), Next

Untick Configure Media Pools (label device afterwards using Paste from previous step), Next

Select Storage Node to correspond with device locality from “Use an existing storage node”, Next

Agree to the default SNMP info (unless reconfiguration for custom monitoring environment is required), Next

Configure, Finish

Select new device (unlabelled, Volume name blank), right click, Label

Paste Device Name in clipboard buffer (CTRL-V)
Select Pool to add the Device into, OK.

Slow backups of large amounts of data to DataDomain deduplication device

If you have ridiculously slow backups of large amounts of data, check in the Networker NMC to see the name of the storage node (Globals2 tab of the client configuration), then connect to the DataDomain and look under the Data Management, DD Boost screen for “Clients”, of which your storage node will be one.  Check how many CPUs and how much memory it has, e.g. guess which one is the slow one (below).

Then SSH to the storage node and check what processes are consuming the most CPU and Memory (below)

In this example (above), despite dedicating a storage node to backing up a single large application’s data, the fact that it only has 4 CPUs and is scanning every file that DD Boost is attempting to deduplicate means a huge bottleneck is introduced.  This is a typical situation whereby decommissioned equipment has been re-purposed.

Networker Server

ssh to the Networker server and issue the nsrwatch command.  It’s a command-line equivalent of connecting to the Enterprise app in the NMC and looking at the Monitoring screen.  Useful if you can’t connect to the NMC.

Blank / Empty Monitoring Console

If your NMC is displaying a blank monitoring console, try this before restarting the NMC…

Tick or Un-tick and Re-tick Archive Requests.

monitoring-refresh

Tape Jukebox Operations

ps -ef | grep nsrjb     -It may be necessary to kill off any pending nsrjb processes before new ones will work.

nsrjb -C | grep <volume>    -Identify the slot that contains the tape (volume)

nsrjb -w -S <slot>      -Withdraw the tape in slot <slot>

nsrjb -d       -Deposit all tapes in the cap/load port into empty slots in the jukebox/library.

Note:  If you are removing and replacing tapes, you should take note of which pools the removed tapes belong to, and allocate the new blank tapes deposited into the library to the same pools, to avoid backups running out of tapes.
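The slot lookup above can be scripted.  A minimal sketch follows – note the inventory lines are made up for illustration; real nsrjb -C output varies by library and Networker version, so check the format on your own server before relying on the sed expression:

```shell
# Hypothetical 'nsrjb -C' inventory lines (real output differs per site/library):
inventory='slot  5: A00005          LTO
slot  6: A00006          LTO'

# Pull out the slot number holding volume A00006...
slot=$(printf '%s\n' "$inventory" | grep 'A00006' | sed 's/slot *\([0-9]*\):.*/\1/')
echo "$slot"    # prints 6

# ...then, on a real backup server, withdraw that tape with:
# nsrjb -w -S "$slot"
```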

Exchange Backups

The application options of the backup client (an Exchange server in DAG1) would be as follows:

NSR_SNAP_TYPE=vss

NSR_ALT_PATH=C:\temp

NSR_CHECK_JET_ERRORS=none

NSR_EXCH2010_BACKUP=passive

NSR_EXCH_CHECK=no

NSR_EXCH2010_DAG=GB-DAG1

NSR_EXCH_RETAIN_SNAPSHOTS=no

NSR_DEVICE_INTERFACE=DATA_DOMAIN

NSR_DIRECT_ACCESS=no

Adding a NAS filesystem to backup (using NDMP)

Some pre-reqs on the VNX need to be satisfied before NDMP backups will work.  This is explained here

General tab

general-tab

The exported fs name can be determined by logging onto the VNX as nasadmin and issuing the following command

server_mountpoint server_2 -list

Apps and Modules tab

apps_modules_tab

Application Options that have worked in testing NDMP Backups.

Leave Data Domain unticked in Networker 8.x and ensure you’ve selected a device pool other than Default, or Networker may just sit waiting for a tape while you’re wondering why NDMP backups aren’t starting!

HIST=y
UPDATE=y
DIRECT=y
DSA=y
SNAPSURE=y
#OPTIONS=NT
#NSR_DIRECT_ACCESS=NO
#NSR_DEVICE_INTERFACE=DATA_DOMAIN

Backup Command: nsrndmp_save -s backup_svr -c nas_name -M -T vbb -P storage_node_bu_interface (omit -P if the Backup Server acts as the storage node).

To back up an NDMP client to a non-NDMP device, use the -M option.

The value for the NDMP backup type depends on the type of NDMP host. For example, NetApp, EMC, and Procom all support dump, so the value for the Backup Command attribute is:

nsrndmp_save -T dump

Globals 1 tab

globals1

Globals2 tab

globals2

List the full paths of the VNX filesystems required for configuring the NDMP save client in Networker (run on the VNX via SSH):

server_mount server_2

e.g. /root_vdm_2/CYBERFELLA_Test_FS

Important:  If the filesystem being backed up contains more than 5 million files, set the timeout attribute to zero in the backup group’s properties.

Command line equivalent to the NMC’s Monitoring screen

nsrwatch

Command line equivalent to the NMC’s Alerts pane

printf “show pending\nprint type:nsr\n” | /usr/sbin/nsradmin -i-

Resetting Data Domain Devices

Running this in one go if you’ve not done it before is not advised.  Break it up into individual commands (separated here by pipes) and ensure the output is what you’d expect, then re-join commands accordingly so you’re certain you’re getting the result you want.  This worked in practice though.  It will only reset Read Only (.RO) devices so it won’t kill backups, but will potentially kill recoveries or clones if they are in progress.

nsr_render_log -lacedhmpty -S "1 hour ago" /nsr/logs/daemon.raw | grep -i critical | grep RO | awk '{print $10}' | while read eachline; do nsrmm | grep $eachline | cut -d, -f1 | awk '{print $7}'; done | while read eachdevice; do nsrmm -HH -v -y -f "${eachdevice}"; done
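To see what the first stage of that pipeline is doing, here’s a toy run against made-up log lines.  The real nsr_render_log output differs from site to site – the one-liner assumes the .RO device name lands in field 10 of the critical messages, which is an assumption you should verify against your own daemon.raw:

```shell
# Made-up rendered daemon.raw lines; field 10 of the critical line is the .RO device.
log='01/01/20 10:00:01 nsrd NSR critical error on device path rd=sn1:/dd/D25.RO
01/01/20 10:00:02 nsrd NSR info nothing to see here'

# Isolate critical messages mentioning a read-only (.RO) device and print field 10:
printf '%s\n' "$log" | grep -i critical | grep RO | awk '{print $10}'
# prints rd=sn1:/dd/D25.RO
```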

Identify OS of backup clients via CLI

The NMC will tell you what the client OS is, but it won’t elaborate – e.g. it will say Solaris, not Solaris 11, or Linux, not Linux el6.  Also, as useful as the NMC is, it continually drives me mad how you can’t export the information on the screen to Excel.  (If someone figures this out, leave a comment below.)

So, here’s how I got what I wanted using the good ol’ CLI on the backup server.  Luckily for me the backup server is Linux.
Run the following command on the NetWorker server, logging the putty terminal output to a file:

nsradmin
. type: nsr client
show client OS type
show name
show os type
p

This should get you a list of client names and what OS they’re running according to Networker in your putty.log file.  Copy and paste that output into a new file called mylist.  Extract just the Solaris hosts…

grep -i -B1 solaris mylist > solarisentries
grep name solarisentries | cut -d: -f2 | cut -d\; -f1 > mysolarislist

sed 's/^ *//' mysolarislist | grep -v \\-bkp > solarislist

You’ll now have a nice clean list of solaris networker client hostnames.  You can remove any backup interface names by using

grep -v b$

to remove all lines ending in b.

One liner…

grep -i -B1 solaris mylist | grep name | cut -d: -f2 | cut -d\; -f1 | sed 's/^ *//' | grep -v \\-bkp | grep -v b$ | sort | uniq > solarislist
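As a sanity check, here’s the same name-extraction pipeline run against a couple of fabricated nsradmin records – the indentation and field layout below are assumptions, so compare them against your own putty.log before trusting the cuts:

```shell
# Fabricated nsradmin output for two clients, one Solaris and one Linux:
sample='                        name: sunbox1;
              client OS type: Solaris;
                        name: linbox1;
              client OS type: Linux;'

# Keep the name line preceding each Solaris OS line, strip the field label,
# the trailing semicolon and the leading whitespace:
printf '%s\n' "$sample" | grep -i -B1 solaris | grep name | cut -d: -f2 | cut -d\; -f1 | sed 's/^ *//'
# prints sunbox1
```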

Now this script will use that list of hostnames to ssh to them and retrieve more OS detail with the uname -a command.  Note that if SSH keys aren’t set up, you’ll need to enter your password each time a new SSH session is established.  This isn’t as arduous as it sounds.  Use PuTTY’s right-click to paste the password each time, reducing the effort to a single mouse click.
#!/bin/bash

cat solarislist | while read eachhost; do
echo "Processing ${eachhost}"
ssh -n -l cyberfella -o StrictHostKeyChecking=no ${eachhost} 'uname -a' >> solaris_os_ver 2>&1
done

This generates a file solaris_os_ver that you can just grep for ^SunOS and end up with a list of all the networker clients and the full details of the OS on them.

grep ^SunOS solaris_os_ver | awk '{print $1, $3, $2}'
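For example, with a fabricated uname -a line (hostname and patch level made up), and with commas in the awk so the fields come out space-separated, the OS name and release are pulled in front of the hostname:

```shell
# uname -a fields: 1=OS name, 2=hostname, 3=release; reorder for readability.
echo 'SunOS sunbox1 5.10 Generic_150400-26 sun4v sparc' | awk '{print $1, $3, $2}'
# prints SunOS 5.10 sunbox1
```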

Comments Off on Networker Cheatsheet
Apr 04

Disk Recovery and Forensics

Who doesn’t love the word “Forensics”?  It’s a word that brings out the inner geek in all of us, yet the reality is usually pretty grim – like when your only hard drive containing all your important files and photos fails.

The first thing you should do if you suspect your hard drive is failing (or has failed) is stop writing to it: if necessary, hard-shut the machine down asap by pushing and holding the power button on your PC.  Any further writes could kill the drive for good, making recovery impossible.  In other words: STOP!!

Anyway, here are some notes from recent tinkerings with Ubuntu Rescue Remix (Google it, download it).  It’s a bootable live CD which boots a computer into a command-line-only Linux environment and, for the remaining 2% who are still reading, provides you with a good handful of tools that stand you the best chance of recovering data from a failing hard disk.

Assuming you’ve just booted it and your hard disk(s) are attached, the first thing to do is identify which disk corresponds to which device name in /dev.  This can be done using lshw or fdisk -l

lshw > /tmp/hardware

cat /tmp/hardware | less

The next step is to clone the dodgy disk to either another disk, or to an image file or both.  You choose.

ddrescue /dev/sda /dev/sdc

or (restartable clone to an image file)

ddrescue --direct --retrim --max-retries=3 /dev/sda imagefile logfile

If you’ve cloned to another healthy disk, you should fsck /dev/sdc1 to fix any errors, then attempt to mount it with mount /dev/sdc1 /mnt/mydisk and see if you can read any data on it.  You may be as good as done at this point, with no further need to employ other, more targeted tools for recovering data off an unmountable drive.  Failing that, try to stay calm (really – it helps), clone the disk to an imagefile as best you can, then read on.  If you can’t stay calm, run testdisk and benefit from a more intuitive, menu-driven interface to various recovery options.

testdisk

Or if you’re enjoying this new found challenge of getting the photos back before the missus finds out, read on about using foremost and other similar, powerful recovery commands.

sudo foremost -i imagefile -o /recovery/foremost -w       (list recoverable files only)

sudo foremost -i imagefile -o /recovery/foremost -t jpg           (recover jpg files only)

If you suspect that the partitioning information on the drive is gone, you can replace it using gpart, which guesses what the previous partitioning scheme was based upon what’s on the drive.  This is good if you’re an overzealous techy who blanked the drive to install the latest OS without thinking about who else had an account on the computer and what they may have had stored.  Not good.  Don’t do it again.

sudo gpart /dev/sda

Or instead of using foremost, you could try scalpel.  Like foremost, but configurable and well, a bit better.

vi /etc/scalpel/scalpel.conf     (to configure options)

sudo scalpel imagefile -o /recovery/scalpel/

Or maybe try magicrescue on the cloned disk if there are multiple file types to be recovered (it requires the presence of recipes for the filetypes to be recovered).

/usr/share/magicrescue/recipes

Enable DMA on the cloned disk first to speed things up.

hdparm -d 1 -c 1 -u 1 /dev/hdc

sudo magicrescue  -r gzip -r png  -d /recovery/magicrescue /dev/sdc

If it’s specifically photos you want to recover, then there are two tools to choose from: photorec and recoverjpeg.

sudo photorec imagefile         (imagefile is the disk imagefile, not an image as in picture)

sudo recoverjpeg /dev/sdc1       (recovers any obvious jpeg files on partition /dev/sdc1)

If the files you want to recover were deleted on the original drive, and the drive came from a Windows computer formatted with NTFS, you can use ntfsundelete to recover the deleted files.

ntfsundelete -s /dev/sdc1     (scans for inodes of deleted files which can be subsequently recovered)

ntfsundelete /dev/sdc1 -u -i 3689 -o work.doc -d /recovered/ntfsundelete

If you want to recover old files previously written to a disk that now contains a new FAT filesystem, then you’re into using autopsy, plus dls, fls, icat and sorter from sleuthkit – to create a secondary image of the unallocated blocks contained in the image, list the inodes of files apparently contained within them, recover those files, and optionally sort them by filetype, respectively.

sudo autopsy -d /media/disk/autopsy 192.168.0.1      (use your local ip address)

dls imagefile > imagefile_deletedblocks        (create secondary, smaller imagefile)

fls imagefile_deletedblocks -r -f fat -i raw      (list inode numbers of any deleted files found)

icat -r -f fat -i raw imagefile_deletedblocks inode_number > myfile.doc    (recover a file)

sudo sorter -h -s -i raw -f fat -d out -C /usr/share/sleuthkit/windows.sort /imagefile

This just touches upon ways you can recover lost data, with a few useful examples, but remember each command in its own right has a multitude of options which can be perused using the man command and reading the accompanying manual.  You can also google "man sorter", for example, and read the man page in a web browser.  I hope you get some data back!
