Using ndisk64 to test new disk set-ups like Flash Systems
nagger | June 4 2013 | Comments (4) | Visits (4706)
This is a typical request from IBMers in services or directly from customers wanting to confirm a
good disk setup or confirm the disks deliver as promised. This time: "... a test of SAN disk
performance where LUNs are carved from an IBM Flash System 820 with SVC. The primary
objective is to show the IBM Flash System 820 can do hundreds of thousands of IOPS with
response times <1 ms with good throughput. This is for an Oracle RDBMS using ASM."
Wow!! Those new Flash Systems are really the next generation of disks. Well, disk testing is not
trivial and you should allow a good number of days - although Flash drives are much simpler to test than those
ancient brown spinning platters!!! Remember, 2013 is the year the Winchester disk started to be phased out. In
2010, roughly 900 Power Systems users at Technical Universities voted Solid State Drives the technology most likely
to be "business as usual" in five years' time - that time is NOW - because prices are down and sizes are
up enough for the change to Flash Systems. They even reduce the size of the machine needed to drive the I/O, so you can trade a
little server cost against the Flash cost.
Back to ndisk64 - My response was:
I assume you have typed ./ndisk64 and carefully read the output and the worked examples.
To get high I/O rates you need to run many threads (-M, start at 32 or 64) against many files, one per thread, so set lots
of them up before you start (-F filelist, with the filenames in a file called filelist). It will, of course, trash the files you have if
you write to them, so don't go using a real database's files!
Smaller block sizes mean higher transaction rates, but never go below 4 KB (-b 4k).
Larger block sizes mean higher bandwidth (-b 128k or larger).
Most databases generate random I/O (-R) these days - even a table scan generates random I/O.
Make sure you run the workload for a while to get a stable rate - at least 5 minutes (-t 300).
Note: 1 minute is enough for quick tests while getting the options right.
Read and write - the ratio is up to you. I would do pure read, pure write, and mixed with a read:write ratio of 80:20, as this is
typical.
Avoid the AIX file system cache, as it is cheating (or, put another way, it is so fast that everyone knows you can't be doing real
I/O) and confusing, so use logical volumes - I'm not sure what ASM presents to you!
For logical volumes, you have to specify the size: -s 2G
With previous SSDs, I hit the maximum I/O rates on the third attempt.
Best of luck.
Then I added, as with any benchmark:
Make sure you have plenty of CPU time available - monitor it with nmon. I would start with 16 CPUs just to be safe.
And don't forget to make sure you have up-to-date firmware and the latest AIX TL plus service packs.
Don't forget the disk queue depth setting.
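As a rough sketch, one way to check and raise the queue depth on AIX (hdisk2 and the value 64 are only placeholders for illustration - use the value your storage vendor recommends):
lsattr -El hdisk2 -a queue_depth          # show the current queue_depth for the disk
chdev -l hdisk2 -a queue_depth=64 -P      # stage a larger value; it takes effect the next time the disk is configured (e.g. after a reboot)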
I hope this helps. Cheers, Nigel Griffiths.
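Putting those options together, here is a minimal sketch of preparing the files and running a mixed random test (the file system path, file count, and sizes are made-up placeholders, not from the original post):
# create 32 files of 2 GB each and put their names in a file called filelist
rm -f filelist
i=1
while [ $i -le 32 ]
do
  dd if=/dev/zero of=/testfs/ndisk_file$i bs=1m count=2048
  echo /testfs/ndisk_file$i >> filelist
  i=$((i + 1))
done
# 32 threads, random I/O, 4 KB blocks, 80% read / 20% write, 5 minutes
./ndisk64_75 -F filelist -R -M 32 -b 4k -r 80 -s 2G -t 300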
Here is the latest ndisk64 binary, version 7.5: I will put it on the AIX wiki nstress page.
Here is the help output, so you don't need to install and run it to see the options:
Usage: ./ndisk64_75 version 7.5
Complex Disk tests - sequential or random read and write mixture
./ndisk64_75 -S Sequential Disk I/O test (file or raw device)
-R Random Disk I/O test (file or raw device)
-t <secs> Timed duration of the test in seconds (default 5)
-f <file> use "File" for disk I/O (can be a file or raw device)
-f <list> list of filenames to use (max 16) [separators :,+]
example: -f f1,f2,f3 or -f /dev/rlv1:/dev/rlv2
-F <file> <file> contains list of filenames, one per line (upto 2047 files)
-M <num> Multiple processes used to generate I/O
-s <size> file Size, use with K, M or G (mandatory for raw device)
examples: -s 1024K or -s 256M or -s 4G
The default is 32MB
-r <read%> Read percent min=0,max=100 (default 80 =80%read+20%write)
example -r 50 (-r 0 = write only, -r 100 = read only)
-b <size> Block size, use with K, M or G (default 4KB)
-O <size> first byte offset use with K, M or G (times by proc#)
-b <list> or use a colon separated list of block sizes (31 max sizes)
example -b 512:1k:2K:8k:1M:2m
-q flush file to disk after each write (fsync())
-Q flush file to disk via open() O_SYNC flag
-i <MB> Use shared memory for I/O MB is the size(max=256 MB)
-v Verbose mode = gives extra stats but slower
-l Logging disk I/O mode = see *.log but slower still
-o "cmd" Other command - pretend to be this other cmd when running
Must be the last option on the line
-K num Shared memory key (default 0xdeadbeef) allows multiple programs
Note: if you kill ndisk, you may have a shared memory
segment left over. Use ipcs and then ipcrm to remove it.
-p Pure = each Sequential thread does read or write not both
-P file Pure with separate file for writers
-C Open files in Concurrent I/O mode O_CIO
-D Open files in Direct I/O mode O_DIRECT
-z percent Snooze percent - time spent sleeping (default 0)
Note: ignored for Async mode
To make a file use dd, for 8 GB: dd if=/dev/zero of=myfile bs=1M count=8196
Asynchronous I/O tests (AIO)
-A switch on Async I/O use: -S/-R, -f/-F and -r, -M, -s, -b, -C, -D to determine I/O types
(JFS file or raw device)
-x <min> minimum outstanding Async I/Os (default=1, min=1 and min<max)
-X <max> maximum outstanding Async I/Os (default=8, max=1024)
see above -f <file> -s <size> -R <read%> -b <size>
For example:
dd if=/dev/zero of=bigfile bs=1m count=1024
./ndisk64_75 -f bigfile -S -r100 -b 4096:8k:64k:1m -t 600
./ndisk64_75 -f bigfile -R -r75 -b 4096:8k:64k:1m -q
./ndisk64_75 -F filelist -R -r75 -b 4096:8k:64k:1m -M 16
./ndisk64_75 -F filelist -R -r75 -b 4096:8k:64k:1m -M 16 -l -v
For example:
./ndisk64_75 for Asynch compiled in version
./ndisk64_75 -A -F filelist -R -r50 -b 4096:8k:64k:1m -M 16 -x 8 -X 64
Comments (4)
Permalink 1 PrernaAwasthi commented June 17 2013
Hi. Are there any generic sizing guidelines for RAMSAN storage? As we know, RAMSAN can
support more than 3 lakh (300,000) IOPS. Is there any specific parameter we need to tune with NPIV settings,
and are there any specific processor requirements? I went through some of the available
documents, however I couldn't find any sizing-specific guidelines for RAMSAN storage.
Permalink 2 nagger commented June 17 2013
Hello, not sure why an IBMer is asking this question via a public blog comment! I don't have specific
information, but the release notes that come with the product might have recommended settings. Obviously
a large queue depth is needed. As for CPU requirements, that is a rather upside-down question. It will be
the same CPU cycle requirement as for hard disks (you will still be running a device driver to the adapter) -
it's just that the disks will respond much more quickly. I seriously doubt that most applications could demand
all the IOPS available - the application has to create or manipulate the data and that takes CPU time. The
point is that the time wasted waiting for the disks to respond becomes nearly zero, so your application runs
faster, i.e. zero disk bottlenecks. This could drive up your CPU use, but only because the application no
longer waits. Best of luck, Nigel
Permalink 3 Mike_Pete commented Jan 9
I've been using ndisk (ndisk64_75 actually) to do some comparisons between regular SAN backed vscsi
and SSP (VIO 2.2.3.1) but I have hit some sort of a problem. ndisk will run just fine several times but then will
just stop working. Once broken, I can't get it to work again without rebooting. I've tried using different options
(read vs write and others), changing files, logging back in, and changing users but nothing seems to clear
this error. Anyone have any ideas? The error is below, hopefully it pastes correctly.
ERROR: File exists
Assert failure: SYSCALL retured -1:errno=17
File=ndisk64.c, Function=main, Line=1430,
Operation="shmid_parallel = shmget(shm_user_key, shm_size, 0020000 | 0000400 | 0000200 | 0002000)"
Memory fault(coredump)
Permalink 4 nagger commented Jan 9
You might have Control-D'd the run and left an IPC resource behind, as ndisk64 did not complete and tidy up. Run:
ipcs -m
Look for entries with a KEY of 0xdeadbeef or 0xbeefdead. Then use ipcrm to remove them
- but only them!! Thanks Nigel Griffiths
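For reference, a rough sketch of that cleanup (the segment ID 12345678 is only a placeholder - use the IDs that ipcs actually reports):
ipcs -m | grep -i dead        # list shared memory segments; look for a KEY of 0xdeadbeef or 0xbeefdead
ipcrm -m 12345678             # remove a leftover segment by the ID shown in the ipcs output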