Regression Analysis for profiling L-Store metadata throughput handling

From ReddNet
Revision as of 10:03, 23 June 2006 by 129.59.170.97 (talk) (→Results)

Objective

As part of the initial testing of L-Store, these tests attempt to establish a baseline reference model for metadata handling. Specifically, we are interested in whether the latency profile follows a linear or non-linear fit as file sizes increase from very small to large.

Parameters

1. File Size: 1KB, 500KB, 1MB, 50MB, 100MB, 150MB, 200MB, 250MB, 500MB, 1000MB

2. Number of Files: 30 files

3. Number of threads: 10 threads

Results

  • Number of Threads: 10
  • Block Size: 1MB (each slice is 1MB)
  • Number of Files: 30
  • Current Status: In progress
  • Time of Completion:


Type of Test    Number of Files   Avg File Size (MB)   Avg Transfer Time (sec)   Avg Throughput (MB/sec)   Median Transfer Time (sec)   Max Transfer Time (sec)   Std Dev of Transfer Time (sec)
profile_upload  30                0.001                2.8                       0.0003                    2.7                          11.0                      0.63
profile_upload  30                0.5                  3.0                       0.16                      3.0                          10.0                      0.45
profile_upload  30                1.0                  3.5                       0.29                      3.4                          4.5                       0.22
profile_upload  30                50.0                 8.4                       5.95                      8.3                          10.0                      0.49
profile_upload  30                100.0                13.0                      7.69                      13.0                         16.0                      0.8
profile_upload  30                150.0                18.0                      8.33                      18.0                         19.0                      0.87
profile_upload  30                200.0                23.0                      8.69                      23.0                         27.0                      1.3
profile_upload  30                250.0                27.0                      9.26                      27.0                         47.0                      1.2
profile_upload  30                500.0                54.0                      9.26                      53.0                         78.0                      4.8
profile_upload  30                1000.0               100.0                     10                        100.0                        120.0                     4.0
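To check the linear-versus-non-linear question posed in the objective, an ordinary least-squares fit of average transfer time against file size can be run over the measurements in the table above. This is a minimal sketch (plain Python, no external libraries); the variable names are illustrative and not part of L-Store:

```python
# Least-squares fit of average transfer time vs. file size, using the
# measurements from the table above (file size in MB, time in seconds).
sizes = [0.001, 0.5, 1.0, 50.0, 100.0, 150.0, 200.0, 250.0, 500.0, 1000.0]
times = [2.8, 3.0, 3.5, 8.4, 13.0, 18.0, 23.0, 27.0, 54.0, 100.0]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(times) / n

# Slope and intercept of the ordinary least-squares line y = a + b*x.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
sxx = sum((x - mean_x) ** 2 for x in sizes)
b = sxy / sxx            # marginal cost in seconds per MB
a = mean_y - b * mean_x  # fixed per-file overhead (metadata cost), seconds

# Coefficient of determination, to judge how linear the profile is.
syy = sum((y - mean_y) ** 2 for y in times)
r2 = sxy ** 2 / (sxx * syy)

print(f"time ~= {a:.2f} + {b:.4f} * size_MB  (R^2 = {r2:.4f})")
```

On this data the fit is very close to linear (R² above 0.99), with an intercept of roughly 3 seconds of fixed per-file overhead; the reciprocal of the slope also matches the asymptotic throughput of about 10 MB/sec seen in the largest-file rows.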