RE: ORION num_disks

From: Allen, Brandon <>
Date: Fri, 12 Sep 2008 14:04:10 -0700
Message-ID: <04DDF147ED3A0D42B48A48A18D574C450D542BEE@NT15.oneneck.corp>

You could be right; I'm really not sure, and I've come to most of my current conclusions through trial and error. One thing I've noticed, at least on Linux (OEL 4 & 5), is that orion returns pretty consistent results regardless of how high I push the load in a single execution. For example, even if I run with num_small 50 (I usually focus more on IOPS since I work with OLTP systems) and/or num_disks 50, I get about the same throughput as if I run with 5 or 10. I also never see it spawn multiple processes or threads at the OS level, so it seems to be doing AIO from a single process. I've found that I can push the system much harder if I run multiple orion processes concurrently, so what I'll usually do is something like this:

  1. Create four 4 GB test files with dd.
  2. Create four .lun files, e.g. test1.lun, test2.lun, test3.lun and test4.lun, each pointing to one of the four test files.
  3. Put four orion commands in a script so that all four run in the background:

       orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest1 -num_disks 1 &
       orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest2 -num_disks 1 &
       orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest3 -num_disks 1 &
       orion -run advanced -matrix point -num_large 0 -num_small 5 -testname mytest4 -num_disks 1 &
  4. Run the script
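Put together, steps 1-4 might look something like the script below. This is only a sketch of what I described: the directory path is made up, it assumes the orion binary is on your PATH, and the 4 GB dd runs will take a while. Orion picks up its .lun file by matching the -testname value (mytest1 reads mytest1.lun, and so on).

```shell
#!/bin/sh
# Sketch of the four-step procedure above.
# Assumes orion is on PATH and DIR is writable -- adjust as needed.

DIR=/u01/oriontest    # hypothetical test directory
N=4

# Step 1: create four 4 GB test files with dd
for i in $(seq 1 $N); do
  dd if=/dev/zero of=$DIR/test$i.dbf bs=1M count=4096
done

# Step 2: create one .lun file per test file, each listing a single path
for i in $(seq 1 $N); do
  echo "$DIR/test$i.dbf" > mytest$i.lun
done

# Steps 3-4: launch the four orion runs in the background, then wait
for i in $(seq 1 $N); do
  orion -run advanced -matrix point -num_large 0 -num_small 5 \
        -testname mytest$i -num_disks 1 &
done
wait
```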

I'll repeat the above test, increasing the number of concurrent executions until I find the peak performance. Maybe I'm just doing something wrong with the standard load-setting parameters, but this seems to be the only way I can get orion to max out my systems.
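The ramp-up could be scripted as well. A rough sketch (hypothetical wrapper; it assumes the test files and .lun files from the steps above already exist, and that each concurrency level should finish before the next starts):

```shell
#!/bin/sh
# Run 1, then 2, then 3, then 4 concurrent orion processes,
# waiting for each batch to complete before starting the next.
for conc in 1 2 3 4; do
  for i in $(seq 1 $conc); do
    orion -run advanced -matrix point -num_large 0 -num_small 5 \
          -testname mytest$i -num_disks 1 &
  done
  wait
  echo "completed batch of $conc concurrent orion runs"
done
```

Comparing the summary files from each batch then shows where aggregate IOPS stops scaling.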

-----Original Message-----

From: Greg Rahn []

I believe num_disks controls the number of I/O threads that are spawned, and num_large the number of outstanding I/Os that orion targets to keep issued.

