Windows

Robocopy

Robocopy is a very powerful copying program that ships with Windows. However, you will need to know which switches to use to make it run optimally.

Here are some of the common ones you will need.

/e
Copies subdirectories. Note that this option includes empty directories. (/MIR is equivalent to /e plus /purge.)

/zb
Uses restartable mode; if access is denied, this option falls back to Backup mode.

/r:5
Specifies the number of retries on failed copies (/r:N). The default value of N is 1,000,000 (one million retries), so you will almost always want to lower it.

/w:10
Specifies the wait time between retries, in seconds (/w:N). The default value of N is 30 (a 30-second wait).

/copy:DT
Specifies the file properties to be copied (/copy:copyflags). The valid flags are D (data), A (attributes), T (time stamps), S (NTFS access control list), O (owner information), and U (auditing information). The default value for copyflags is DAT (data, attributes, and time stamps).

/v
Produces verbose output, and shows all skipped files.

/nfl
Specifies that file names are not to be logged.

/np
No progress: does not display the progress of the copying operation (the percentage copied so far).

/LOG+:C:\HPMA\Logs\FileCopy.log
Writes the status output to the log file. Include the + to append to the log file; without it, the existing log file is overwritten.

/rh:hhmm-hhmm
Specifies run times when new copies may be started.

/MT[:N]
Creates multi-threaded copies with N threads. N must be an integer between 1 and 128; the default value for N is 8. /MT cannot be used with the /IPG and /EFSRAW parameters. Redirect output using the /LOG option for better performance.
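Putting several of these switches together, a typical command might look like the following sketch (the source and destination paths here are just placeholders):

```shell
REM Mirror C:\Source to D:\Dest: restartable/backup mode, 5 retries with
REM 10-second waits, copy data and time stamps only, append status to a log
robocopy C:\Source D:\Dest /MIR /zb /r:5 /w:10 /copy:DT /np /LOG+:C:\HPMA\Logs\FileCopy.log
```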

Specifying a Copy Schedule in Robocopy

You can use the built-in scheduling capability of Robocopy to specify a copy schedule instead of resorting to the Windows Task Scheduler to perform copies. There are actually a few different ways to use a copy schedule. When you specify the /MON:n switch Robocopy stays running and continually monitors the source directory for changes. When it detects that “n” or more changes have occurred to the source directory it implements these changes in the destination. (That is, when files get created in the source, the files are automatically copied to the destination.)

C:\> ROBOCOPY C:\Temp1 C:\Temp3 /MON:1

You exit the running of Robocopy by pressing the CTRL+C combination.

Similar behavior exists by specifying the /MOT:m switch. In this case, Robocopy stays running and performs another copy (if necessary) in “m” minutes’ time if things have changed.

C:\> ROBOCOPY C:\Temp1 C:\Temp3 /MOT:1

So, with this command line, Robocopy looks for changes once every minute, and if there are any they are implemented. As before, press CTRL+C to stop Robocopy from running.

A third way of scheduling a copy is to use the /RH:hhmm-hhmm switch. This tells Robocopy that it can only copy files between the hours/minutes of the first “hhmm” and the second “hhmm”. There are, of course, three scenarios here. If the timeframe specified with /RH has already passed, Robocopy will remain paused until the time occurs the next day. If the current system time is within the boundaries established with /RH, then the copy occurs immediately. Finally, if the timeframe specified with /RH is in the future, Robocopy remains paused until the time occurs, and then the copy is performed. As an example:

C:\> ROBOCOPY C:\Temp1 C:\Temp3 /RH:1300-1400

This tells Robocopy to do its copying between the hours of 1300 and 1400 (1:00 pm and 2:00 pm).
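The run-hours window combines with the other switches covered earlier; for example, a sketch of an overnight multi-threaded mirror (the paths and log file name are placeholders):

```shell
REM Mirror only between 01:00 and 05:00, using 16 copy threads
robocopy C:\Temp1 C:\Temp3 /MIR /MT:16 /rh:0100-0500 /LOG+:C:\HPMA\Logs\Nightly.log
```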


Using Microsoft DiskSpd to Test Your Storage Subsystem

Problem

Whether you are implementing a new storage infrastructure or performing an upgrade to your existing hardware, it’s always good to have a tool you can use to get a baseline measurement of storage subsystem performance, so you can compare this baseline with the performance after you’ve made your changes. This tip will look at using the DiskSpd utility to gather these performance metrics.

Solution

There are many different tools you could use to gather these performance metrics, but the DiskSpd utility is good because it’s well documented and really easy to use. It’s a command line tool, so it makes it really easy to run multiple tests with different parameters if that is something you require.

The link to download is at https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223

Example 1


Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat

The above will do the following:

Set the block size to 8K (-b8K), run the test for 60 seconds (-d60), disable all hardware and software caching (-h), measure and display latency statistics (-L), use 2 overlapped I/Os (-o2) and 4 threads per target (-t4), perform random I/O (-r) with 30% writes and 70% reads (-w30), and create a 50MB test file at c:\io.dat (-c50M).

On my X270 laptop with an SSD, here are the results:


C:\Users\Paul\Downloads\Diskspd-v2.0.17\amd64fre>Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat

Command Line: Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat

Input parameters:

timespan: 1
 -------------
 duration: 60s
 warm up time: 5s
 cool down time: 0s
 measuring latency
 random seed: 0
 path: 'c:\io.dat'
 think time: 0ms
 burst size: 0
 software cache disabled
 hardware write cache disabled, writethrough on
 performing mix test (read/write ratio: 70/30)
 block size: 8192
 using random I/O (alignment: 8192)
 number of outstanding I/O operations: 2
 thread stride size: 0
 threads per file: 4
 using I/O Completion Ports
 IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time: 60.00s
thread count: 4
proc count: 4

CPU | Usage | User | Kernel | Idle
-------------------------------------------
 0| 19.74%| 2.34%| 17.40%| 80.26%
 1| 17.24%| 2.21%| 15.03%| 82.76%
 2| 21.41%| 2.60%| 18.80%| 78.59%
 3| 18.80%| 2.45%| 16.35%| 81.20%
-------------------------------------------
avg.| 19.30%| 2.40%| 16.89%| 80.70%

Total IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
 0 | 882876416 | 107773 | 14.03 | 1796.21 | 1.111 | 1.501 | c:\io.dat (50MB)
 1 | 889380864 | 108567 | 14.14 | 1809.44 | 1.103 | 1.511 | c:\io.dat (50MB)
 2 | 871604224 | 106397 | 13.85 | 1773.28 | 1.125 | 1.647 | c:\io.dat (50MB)
 3 | 871333888 | 106364 | 13.85 | 1772.73 | 1.126 | 1.598 | c:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 3515195392 | 429101 | 55.87 | 7151.66 | 1.116 | 1.565

Read IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
 0 | 618479616 | 75498 | 9.83 | 1258.30 | 0.256 | 0.442 | c:\io.dat (50MB)
 1 | 623673344 | 76132 | 9.91 | 1268.86 | 0.251 | 0.344 | c:\io.dat (50MB)
 2 | 610189312 | 74486 | 9.70 | 1241.43 | 0.258 | 0.559 | c:\io.dat (50MB)
 3 | 608403456 | 74268 | 9.67 | 1237.80 | 0.261 | 0.475 | c:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 2460745728 | 300384 | 39.11 | 5006.38 | 0.256 | 0.461

Write IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
 0 | 264396800 | 32275 | 4.20 | 537.91 | 3.110 | 1.165 | c:\io.dat (50MB)
 1 | 265707520 | 32435 | 4.22 | 540.58 | 3.101 | 1.290 | c:\io.dat (50MB)
 2 | 261414912 | 31911 | 4.16 | 531.85 | 3.150 | 1.568 | c:\io.dat (50MB)
 3 | 262930432 | 32096 | 4.18 | 534.93 | 3.127 | 1.486 | c:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total: 1054449664 | 128717 | 16.76 | 2145.28 | 3.122 | 1.385

%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
 min | 0.030 | 1.105 | 0.030
 25th | 0.081 | 2.429 | 0.093
 50th | 0.144 | 2.888 | 0.280
 75th | 0.304 | 3.374 | 2.334
 90th | 0.570 | 4.113 | 3.194
 95th | 0.744 | 4.936 | 3.699
 99th | 1.137 | 7.254 | 5.430
3-nines | 3.808 | 17.071 | 10.830
4-nines | 17.809 | 45.939 | 34.079
5-nines | 36.691 | 63.048 | 53.159
6-nines | 63.540 | 65.086 | 65.086
7-nines | 63.540 | 65.086 | 65.086
8-nines | 63.540 | 65.086 | 65.086
9-nines | 63.540 | 65.086 | 65.086
 max | 63.540 | 65.086 | 65.086

C:\Users\Paul\Downloads\Diskspd-v2.0.17\amd64fre>

Example 2

-b : Block size for reads/writes. For this test we will use 64K, since this is mainly what SQL Server would use to read data. You could run multiple tests using different block sizes to simulate other SQL Server read/write operations.
-d : Test duration in seconds.
-Suw : Disables software buffering and hardware write caching.
-L : Gathers disk latency statistics.
-t : Number of threads per target. I keep this value at the number of cores on my server.
-W : Warm-up duration; the number of seconds the test runs before gathering statistics.
-w : Percentage of write requests, i.e. if set to 30, the other 70% of the I/O test will be reads.
-c : Creates a test file of the specified size.
> diskperf.out : Output file to save the generated statistics. If omitted, statistics are displayed on your screen.

diskspd -b8K -d30 -o4 -t8 -h -r -w25 -L -Z1G -c20G T:\iotest.dat > DiskSpeedResults.txt

This example command line will run a 30 second random I/O test using a 20GB test file located on the T: drive, with a 25% write and 75% read ratio, and an 8K block size. It uses eight worker threads, each with four outstanding I/Os, and a 1GB write-source buffer of random data (-Z1G) so that writes are not trivially compressible. It saves the results of the test to a text file called DiskSpeedResults.txt. This is a pretty good set of parameters for a SQL Server OLTP workload.
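Because DiskSpd is command-line driven, it is also easy to script a series of runs; for instance, a batch-file sketch (the output file names are illustrative) that repeats the test at several block sizes:

```shell
REM From a .cmd batch file (use a single % instead of %% at an interactive prompt)
for %%b in (8K 64K 512K) do (
  diskspd -b%%b -d30 -o4 -t8 -h -r -w25 -L -c20G T:\iotest.dat > DiskSpeedResults_%%b.txt
)
```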

Interpreting the Results

The first section of the output displays the command used to initiate this test along with a summary of the input parameters.

Command Line: diskspd.exe -b64K -d600 -Suw -L -t8 -W30 -w20 -c10G C:\diskperftestfile.dat

Input parameters:

timespan: 1
-------------
duration: 600s
warm up time: 30s
cool down time: 0s
measuring latency
random seed: 0
path: 'C:\diskperftestfile.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 80/20)
block size: 65536
using sequential I/O (stride: 65536)
number of outstanding I/O operations: 2
thread stride size: 0
threads per file: 8
using I/O Completion Ports
IO priority: normal

This next section gives you an overview of your CPU usage during the test. This will tell you whether a CPU bottleneck of some sort, rather than storage, is preventing you from getting the best performance out of your storage subsystem. That is definitely not the issue in our case, as you can see the CPU is ~98% idle.


actual test time: 600.00s
thread count: 8
proc count: 8

CPU | Usage | User | Kernel | Idle
-------------------------------------------
 0| 1.65%| 0.61%| 1.04%| 98.35%
 1| 1.40%| 0.49%| 0.91%| 98.60%
 2| 1.64%| 0.61%| 1.02%| 98.36%
 3| 4.11%| 0.32%| 3.79%| 95.89%
 4| 1.48%| 0.44%| 1.04%| 98.52%
 5| 1.23%| 0.27%| 0.96%| 98.77%
 6| 1.43%| 0.43%| 1.00%| 98.57%
 7| 1.19%| 0.31%| 0.88%| 98.81%
-------------------------------------------
avg.| 1.77%| 0.44%| 1.33%| 98.23%

This next section is the meat and potatoes of the report. Here we get a breakdown of the IOPS, MB/s and latency statistics for each thread in our test along with a summary. Note there is a separate table for Reads, Writes and Total IO.


Total IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
----------------------------------------------------------------------------------------------------------------------------
 0 | 6983254016 | 106556 | 11.10 | 177.59 | 11.259 | 2.446 | C:\diskperftestfile.dat (10240MB)
 1 | 6918569984 | 105569 | 11.00 | 175.95 | 11.364 | 2.598 | C:\diskperftestfile.dat (10240MB)
 2 | 6918766592 | 105572 | 11.00 | 175.95 | 11.364 | 2.770 | C:\diskperftestfile.dat (10240MB)
 3 | 7065501696 | 107811 | 11.23 | 179.68 | 11.128 | 2.204 | C:\diskperftestfile.dat (10240MB)
 4 | 7078346752 | 108007 | 11.25 | 180.01 | 11.108 | 2.184 | C:\diskperftestfile.dat (10240MB)
 5 | 6915096576 | 105516 | 10.99 | 175.86 | 11.370 | 2.332 | C:\diskperftestfile.dat (10240MB)
 6 | 6920536064 | 105599 | 11.00 | 176.00 | 11.361 | 2.417 | C:\diskperftestfile.dat (10240MB)
 7 | 6908018688 | 105408 | 10.98 | 175.68 | 11.381 | 2.377 | C:\diskperftestfile.dat (10240MB)
----------------------------------------------------------------------------------------------------------------------------
total: 55708090368 | 850038 | 88.55 | 1416.73 | 11.291 | 2.424

Read IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
----------------------------------------------------------------------------------------------------------------------------
 0 | 5587075072 | 85252 | 8.88 | 142.09 | 11.042 | 2.391 | C:\diskperftestfile.dat (10240MB)
 1 | 5529665536 | 84376 | 8.79 | 140.63 | 11.184 | 2.562 | C:\diskperftestfile.dat (10240MB)
 2 | 5520490496 | 84236 | 8.77 | 140.39 | 11.179 | 2.837 | C:\diskperftestfile.dat (10240MB)
 3 | 5650841600 | 86225 | 8.98 | 143.71 | 10.913 | 2.176 | C:\diskperftestfile.dat (10240MB)
 4 | 5655363584 | 86294 | 8.99 | 143.82 | 10.854 | 2.091 | C:\diskperftestfile.dat (10240MB)
 5 | 5527568384 | 84344 | 8.79 | 140.57 | 11.202 | 2.345 | C:\diskperftestfile.dat (10240MB)
 6 | 5527896064 | 84349 | 8.79 | 140.58 | 11.186 | 2.440 | C:\diskperftestfile.dat (10240MB)
 7 | 5529731072 | 84377 | 8.79 | 140.63 | 11.218 | 2.379 | C:\diskperftestfile.dat (10240MB)
----------------------------------------------------------------------------------------------------------------------------
total: 44528631808 | 679453 | 70.78 | 1132.42 | 11.096 | 2.414

Write IO
thread | bytes | I/Os | MB/s | I/O per s | AvgLat | LatStdDev | file
----------------------------------------------------------------------------------------------------------------------------
 0 | 1396178944 | 21304 | 2.22 | 35.51 | 12.125 | 2.468 | C:\diskperftestfile.dat (10240MB)
 1 | 1388904448 | 21193 | 2.21 | 35.32 | 12.079 | 2.616 | C:\diskperftestfile.dat (10240MB)
 2 | 1398276096 | 21336 | 2.22 | 35.56 | 12.094 | 2.350 | C:\diskperftestfile.dat (10240MB)
 3 | 1414660096 | 21586 | 2.25 | 35.98 | 11.986 | 2.103 | C:\diskperftestfile.dat (10240MB)
 4 | 1422983168 | 21713 | 2.26 | 36.19 | 12.114 | 2.256 | C:\diskperftestfile.dat (10240MB)
 5 | 1387528192 | 21172 | 2.21 | 35.29 | 12.037 | 2.155 | C:\diskperftestfile.dat (10240MB)
 6 | 1392640000 | 21250 | 2.21 | 35.42 | 12.056 | 2.193 | C:\diskperftestfile.dat (10240MB)
 7 | 1378287616 | 21031 | 2.19 | 35.05 | 12.036 | 2.250 | C:\diskperftestfile.dat (10240MB)
----------------------------------------------------------------------------------------------------------------------------
total: 11179458560 | 170585 | 17.77 | 284.31 | 12.066 | 2.305

Finally, we have a summary table of per-percentile latencies from our test. The higher nine-percentile rows will sometimes repeat the same value, as they do for the 6-nines row and above in this example. This happens when the test did not generate enough I/Os to differentiate those percentiles.


%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.135 | 1.112 | 0.135
25th | 9.696 | 10.647 | 9.861
50th | 10.940 | 11.912 | 11.122
75th | 12.260 | 13.275 | 12.490
90th | 13.563 | 14.634 | 13.832
95th | 14.415 | 15.507 | 14.726
99th | 16.426 | 17.637 | 16.794
3-nines | 32.362 | 28.716 | 31.543
4-nines | 58.255 | 45.651 | 56.318
5-nines | 141.810 | 130.510 | 140.661
6-nines | 189.805 | 183.253 | 189.805
7-nines | 189.805 | 183.253 | 189.805
8-nines | 189.805 | 183.253 | 189.805
9-nines | 189.805 | 183.253 | 189.805
max | 189.805 | 183.253 | 189.805

Now that you have a good baseline to refer back to, you can rerun the same test any time you suspect there might be an issue with your storage hardware, or after any sort of storage subsystem maintenance, to confirm whether the performance has changed.


Create Self-Signed Cert

Use this PowerShell script to create a self-signed cert for testing purposes.


New-SelfSignedCertificate -Subject "TestPullServer" -DnsName DSCSVR1,www.contoso.com

This example creates a self-signed SSL server certificate in the computer MY store with the Subject Alternative Name set to DSCSVR1 and www.contoso.com, and the Subject and Issuer name set to TestPullServer.
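New-SelfSignedCertificate also returns the certificate object, so you can capture it and, for example, export it with its private key for use on another test machine. A sketch (the password and output path are placeholders):

```powershell
# Create the certificate and keep the returned object
$cert = New-SelfSignedCertificate -Subject "TestPullServer" -DnsName DSCSVR1,www.contoso.com

# Export it, including the private key, to a password-protected PFX file
$pfxPwd = ConvertTo-SecureString -String "P@ssw0rd!" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath C:\Temp\TestPullServer.pfx -Password $pfxPwd
```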

For more advanced settings, go to https://gallery.technet.microsoft.com/scriptcenter/Self-signed-certificate-5920a7c6


List of critical ADFS events to monitor

ADFS has two event logs: the ADFS Admin event log and the ADFS Tracing debug log. The debug log should normally remain disabled, and be enabled only while troubleshooting an ADFS service issue.
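Toggling the Tracing debug log can be done from an elevated command prompt; for example, assuming the standard "AD FS Tracing/Debug" channel name:

```shell
REM Enable the ADFS Tracing debug log only while troubleshooting
wevtutil sl "AD FS Tracing/Debug" /e:true
REM ...reproduce the issue, then disable it again
wevtutil sl "AD FS Tracing/Debug" /e:false
```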

Within the ADFS Admin event log, here is the list of critical events for the ADFS service.

Event ID 324

The Federation Service could not authorize token issuance for caller ‘defined’ to relying party ‘defined’.

Event ID 411

Token validation failed. See inner exception for more details.

Event ID 413

An error occurred during processing of a token request. The data in this event may have the identity of the caller (application) that made this request. The data includes an Activity ID that you can cross-reference to error or warning events to help diagnose the problem that caused this error.

Event ID 500

More information for the event entry with Instance ‘Error’. There may be more events with the same Instance ID with more information.

Event ID 501

More information for the event entry with Instance ‘Error’. There may be more events with the same Instance ID with more information.

Others

Other common event IDs, such as error 364 or error 342, indicate only that a user attempted to authenticate with ADFS using an incorrect username or password, so they are not critical at the ADFS service level.
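To pull just the critical events listed above out of the Admin log, a PowerShell sketch (assuming the channel name "AD FS/Admin" used on recent Windows Server versions):

```powershell
# Most recent 50 occurrences of the critical ADFS Admin events
Get-WinEvent -FilterHashtable @{ LogName = 'AD FS/Admin'; Id = 324,411,413,500,501 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message
```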

At the service level, we can monitor the ADFS services on the ADFS server and on the WAP server (if one is deployed).

For ADFS health monitoring, we can also monitor this endpoint and ensure it returns an HTTP 200 code:
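The health endpoint URL itself is environment-specific (it is not given above), but checking that a URL returns a 200 status code can be sketched in PowerShell like this; the $url value is a placeholder:

```powershell
# Placeholder URL - substitute your ADFS health endpoint here
$url = 'https://adfs.example.com/'
$response = Invoke-WebRequest -Uri $url -UseBasicParsing
if ($response.StatusCode -eq 200) { 'Endpoint healthy' } else { "Unexpected status: $($response.StatusCode)" }
```

Note that Invoke-WebRequest throws on non-success status codes by default, so a real monitoring script should wrap the call in try/catch.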