Tests
Test Documentation
- For each test:
- description of what you are testing
- estimated duration and what you expect to happen
- description of your test method (what you did)
- result
Proposed Tests
- Stress test on SFASU
- What happens when depots start to get full? (Harold and Dan)
- depots should perform low-level reclamation/resource recovery (Dan and Alan will run this test later this week)
Current Tests
TSI simulation data using LoDN tools - May '08 (Blondin/Sellers)
Ongoing Usage Tests - Spring '08 (Sheldon)
These tests are continuous, although they can be stopped (just ask me).
Results are updated hourly.
Description of Tests
Currently all of these tests run on a handful of systems in the Vanderbilt Physics department. Our connection out is limited to 1 Gb/s, which is one potential bottleneck on the tests I can run (the other is the 622 Mb/s bandwidth of Vanderbilt's external connection).
- Downloads of a large 1.1 GByte file
- I have loaded several 1.1 GByte files into the full REDDnet deployment of depots (40 depots or so).
- My tests continuously download this file and compare the checksum of the downloaded file to the original to check for bit rot (among other things).
- As soon as one download completes, another is started.
- A cron job is used to keep this process going.
- Several of these jobs are run on different systems in the physics cluster on the 9th floor of Stevenson Center (a sketch of one download-and-verify cycle appears below).
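
Below is a minimal Python sketch of what one download-and-verify cycle could look like. It is an illustration under stated assumptions, not the actual test script: the lors_download command, the exNode name, and the reference MD5 are all placeholders for whatever the real deployment uses.

    #!/usr/bin/env python
    """One download-and-verify cycle (invoked repeatedly from cron).

    Placeholder assumptions: the LoDN download tool is invoked as
    `lors_download <exnode> <outfile>` and the MD5 of the seeded
    1.1 GByte file is known in advance.
    """
    import hashlib
    import subprocess
    import sys

    EXNODE = "bigfile-1.1GB.xnd"        # hypothetical exNode name
    LOCAL_COPY = "/tmp/bigfile-1.1GB"   # where the download lands
    REFERENCE_MD5 = "0123456789abcdef0123456789abcdef"  # checksum of the original

    def md5sum(path, blocksize=1 << 20):
        """Stream the file through MD5 so 1.1 GBytes never sit in memory."""
        digest = hashlib.md5()
        with open(path, "rb") as fh:
            for block in iter(lambda: fh.read(blocksize), b""):
                digest.update(block)
        return digest.hexdigest()

    def main():
        rc = subprocess.call(["lors_download", EXNODE, LOCAL_COPY])
        if rc != 0:
            sys.exit("download failed (exit %d)" % rc)
        if md5sum(LOCAL_COPY) != REFERENCE_MD5:
            sys.exit("checksum mismatch: possible bit rot")
        print("download OK")

    if __name__ == "__main__":
        main()

A cron entry runs the script back-to-back so a new download starts as soon as the previous one finishes; a lock file can keep runs from overlapping.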
- Uploads/downloads of a large 1.1 GByte file
- I initially download one of the several files that I seeded REDDnet with.
- Once I verify that the download checksum is equal to that of the original file, I upload it back to REDDnet, giving it a new name so it doesn't overwrite the original.
- After each upload/download cycle, the file's checksum is compared to the original.
- This process is kept going by a cron job.
- I count and plot the number of times files are successfully uploaded and downloaded, and add the number of downloads from the previous test to get the total number of 1.1 GByte files moved each hour. This is plotted on the results webpage, along with the number of move errors and the average upload and download times for the 1.1 GByte files. (A sketch of one such cycle appears below.)
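
The same idea extended to the upload/download cycle, again as a hedged Python sketch: lors_download/lors_upload, the file names, and the reference checksum are placeholder assumptions, and the printed timings correspond to the kind of per-transfer times the results page averages.

    #!/usr/bin/env python
    """One upload/download cycle with checksum verification.

    Placeholder assumptions: transfers use `lors_download`/`lors_upload`,
    and the MD5 of the seeded original is known in advance.
    """
    import hashlib
    import subprocess
    import sys
    import time

    ORIGINAL = "bigfile-1.1GB.xnd"      # hypothetical name of a seeded file
    LOCAL = "/tmp/bigfile-1.1GB"
    REFERENCE_MD5 = "0123456789abcdef0123456789abcdef"

    def md5sum(path, blocksize=1 << 20):
        digest = hashlib.md5()
        with open(path, "rb") as fh:
            for block in iter(lambda: fh.read(blocksize), b""):
                digest.update(block)
        return digest.hexdigest()

    def timed(cmd):
        """Run a command; return (succeeded, elapsed seconds)."""
        t0 = time.time()
        return subprocess.call(cmd) == 0, time.time() - t0

    def main():
        ok, dl_secs = timed(["lors_download", ORIGINAL, LOCAL])
        if not ok or md5sum(LOCAL) != REFERENCE_MD5:
            sys.exit("download failed or checksum mismatch")
        # Upload under a fresh name so the seeded original is not overwritten.
        copy_name = "copy-%d.xnd" % int(time.time())
        ok, ul_secs = timed(["lors_upload", LOCAL, copy_name])
        if not ok:
            sys.exit("upload failed")
        # Fetch the new copy back and verify it still matches the original.
        ok, _ = timed(["lors_download", copy_name, LOCAL + ".roundtrip"])
        if not ok or md5sum(LOCAL + ".roundtrip") != REFERENCE_MD5:
            sys.exit("round-trip checksum mismatch")
        print("cycle OK: download %.1fs, upload %.1fs" % (dl_secs, ul_secs))

    if __name__ == "__main__":
        main()

A cron-driven wrapper could tally successful cycles, move errors, and the printed timings into the hourly statistics.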
- Recursive uploads/downloads of a directory.
- I use a recursive download to fetch a directory that was originally seeded on the REDDnet depots. I compare this directory to the original; if it matches, I upload it to a new temporary directory, download it again, and compare again.
- The upload/download cycle is repeated several times, and is kept going by a cron job.
- Statistics for these movements are shown on my results webpage (a sketch of one recursive cycle appears below).
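
Finally, a sketch of the recursive round trip under the same placeholder assumptions (a hypothetical -r flag on the lors_* commands for recursive transfers, plus a pristine local copy of the seeded directory to compare against):

    #!/usr/bin/env python
    """One recursive upload/download/compare cycle.

    Placeholder assumptions: a `-r` flag on the lors_* tools performs
    recursive transfers, and a pristine local copy of the seeded
    directory is available for comparison.
    """
    import filecmp
    import os
    import subprocess
    import sys
    import time

    SEEDED = "testdir"                    # directory originally seeded on REDDnet
    PRISTINE = "/data/testdir-original"   # pristine local copy to compare against
    SCRATCH = "/tmp/testdir-download"

    def trees_match(a, b):
        """Byte-for-byte recursive comparison of two directory trees."""
        cmp = filecmp.dircmp(a, b)
        if cmp.left_only or cmp.right_only or cmp.funny_files:
            return False
        _, mismatch, errors = filecmp.cmpfiles(a, b, cmp.common_files, shallow=False)
        if mismatch or errors:
            return False
        return all(trees_match(os.path.join(a, d), os.path.join(b, d))
                   for d in cmp.common_dirs)

    def main():
        if subprocess.call(["lors_download", "-r", SEEDED, SCRATCH]) != 0:
            sys.exit("recursive download failed")
        if not trees_match(SCRATCH, PRISTINE):
            sys.exit("downloaded tree differs from the original")
        # Round trip: upload under a temporary name, fetch it back, compare again.
        tmp_name = "tmpdir-%d" % int(time.time())
        if subprocess.call(["lors_upload", "-r", SCRATCH, tmp_name]) != 0:
            sys.exit("recursive upload failed")
        back = SCRATCH + "-roundtrip"
        if subprocess.call(["lors_download", "-r", tmp_name, back]) != 0:
            sys.exit("second download failed")
        print("round trip OK" if trees_match(back, PRISTINE) else "round trip MISMATCH")

    if __name__ == "__main__":
        main()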
Motivations
- provide an ongoing "heartbeat" for REDDnet (is it working?)
- test for bit rot in depots (bit rot would show up as regular checksum failures).
- provide some feedback on performance (performance problems would show up as changes in download/upload times).
- provide some feedback on failure rates.
Results
- Results are updated hourly.