ReddNet - User contributions (Tacketar)

Downloads (revised 2008-06-11 by Tacketar)
<hr />
=== IBP ===

* [http://loci.cs.utk.edu/lors/distributions/ibp-1.4.0.4.tar.gz UTK IBP 1.4.0.4, released Jan. 2007]
* [http://www.lstore.org/pwiki/uploads/Download/ibp_server-accre.tgz ACCRE IBP server]

=== IBP benchmarking tools ===

* [http://www.lstore.org/pwiki/uploads/Download/ibp_perf.tgz ibp_perf]

=== L-Store ===

* [http://www.lstore.org/pwiki/uploads/Docs/lstcp.jar latest L-Store client]

=== LoRS ===

* [http://loci.cs.utk.edu/lors/distributions/lors-0.82.1.tar.gz LoRS 0.82.1, released Oct. 2004]

=== LoDN ===

* ???

Main Page (revised 2008-04-16 by Tacketar)
<hr />
[[Image:reddnetmap.gif|right|550px]]

== REDDnet Science (Research Projects Using REDDnet) ==

* [http://www.americaview.org/ AmericaView] - satellite remote sensing data and technologies in support of applied research, K-16 education, workforce development, and technology transfer.

* [http://cms.cern.ch/ CMS] - elementary particle physics at the [http://public.web.cern.ch/ CERN] Large Hadron Collider.

* Structural Biology - image reconstruction of large macromolecular assemblies through a collaborative effort of Vanderbilt and Lawrence Berkeley National Laboratory researchers.

* [http://www.phy.ornl.gov/tsi/ Terascale Supernova Initiative] - a multidisciplinary collaboration to develop models for core-collapse supernovae and related enabling technologies.

* [http://www.ngda.org/ National Geospatial Digital Archive] (NGDA) - a collecting network for archiving geospatial images and data.

* [http://www.vanderbilt.edu/americas/English/pagemanager.php?page=Merin.php Retinopathy] - diabetic eye disease screening in Peru and Bolivia.

== REDDnet Documentation ==

=== Online Documentation ===

* Documentation for all aspects of REDDnet needs to be completed and added or linked to this wiki. Each item below lists the topic, the core institution and person assigned to it, and the deadline:
** How to get started with L-Store / VU-person / March 31
** How to get started with LoDN / UTK-person / March 31
** L-Store / VU-person / March 31
** LoDN / VU-person / March 31
** IBP / VU-Alan / March 31
** Standard IO / UTK-person / March 31
** Data and Directory Services / VU-person / March 31

=== REDDnet RT Helpdesk ===

* A Request Tracker instance for REDDnet is to be set up by VU/Mat by April 30.

=== Past Documentation ===

* [http://events.internet2.edu/2007/spring-mm/sessionDetails.cfm?session=3160&event=267 Network Storage Virtualization for Data Intensive Collaboration] track session at the [http://events.internet2.edu/2007/spring-mm/ Spring 2007 Internet2 Meeting] in Arlington, VA
** [http://mimir.accre.vanderbilt.edu/cgi-bin/public/DocDB/DisplayMeeting?conferenceid=8 Agenda and Talks]

* [http://mimir.accre.vanderbilt.edu/cgi-bin/public/DocDB/ShowDocument?docid=86 Three-slide summary of L-Store, IBP, and REDDnet]

* [http://mimir.accre.vanderbilt.edu/cgi-bin/public/DocDB/ShowDocument?docid=73 REDDnet NSF MRI Proposal]

* [[L-Store Usage Instructions]] - setup, uploading, downloading, and several other L-Store options explained (includes [[LoRS Instructions]])

* [http://mimir.accre.vanderbilt.edu/cgi-bin/public/DocDB/ShowDocument?docid=84 L-Store presentation at the University of Sao Paulo, July 2006]

* [http://mimir.accre.vanderbilt.edu/cgi-bin/public/DocDB/ShowDocument?docid=82 L-Store presentation at LBNL, Sept. 9, 2006]

* '''The Vanderbilt/ACCRE booth at [[SC06]] will highlight REDDnet technology'''

* '''Sign up for the [http://lists.accre.vanderbilt.edu/cgi-bin/mailman/listinfo/reddnet REDDnet Mailing List]'''

== Logistical Networking Software Development ==

* [[Protocol Standardization Efforts]] and development ideas

== Component Technologies and Partners ==

* [http://www.lstore.org/pwiki/pmwiki.php L-Store], the Logistical Storage project at ACCRE (Vanderbilt)

* [http://loci.cs.utk.edu/ LoCI], the Logistical Computing and Internetworking Laboratory at the University of Tennessee

* the [http://www.ultralight.org/ UltraLight] project, an ultrascale information system for data-intensive research

* the Vanderbilt [http://www.vanderbilt.edu/americas/ Center for the Americas]

== REDDnet@Work ==

* [[REDDnet@I2: REDDnet Activities meeting, 21April08]] at the [http://events.internet2.edu/2008/spring-mm/ Spring 2008 Internet2 Member Meeting], Washington, DC
* [[REDDnet at Work Page]] - organization, [[REDDnet Meetings and Minutes Page|meeting notes]], work plans, events, etc.
* [http://mgmt.reddnet.org:8080/storcore/jsp/depotsPrintView.jsp?external=true REDDnet Depot Status Page]
* [[REDDnet Tools and Applications Meeting Spring 2007]]
* [[REDDnet Tools and Applications Meeting 2006]] - December 4, 8:00am-5:00pm, Hyatt Regency McCormick Place, Chicago, IL, in coordination with the [http://events.internet2.edu/2006/fall-mm/index.html Fall 2006 Internet2 Member Meeting]

== Collaborators ==

=== Core Institutions ===

<table width="600px" border="0" cellspacing="0" cellpadding="0">
<tr>
<td>[[Image:vubw.jpg|center|Vanderbilt]]</td>
<td>[[Image:utorange.gif|70px|center|Tennessee]]</td>
<td>[[Image:SFA.gif|70px|center|Stephen F. Austin]]</td>
<td>[[Image:nevoa.png|60px|center|Nevoa]]</td>
<td>[[Image:NCstate.gif|50px|center|N. C. State]]</td>
<td>[[Image:udel.gif|55px|center|Delaware]]</td>
</tr>
<tr>
<td align="center">Vanderbilt</td>
<td align="center">Tennessee</td>
<td align="center">S. F. Austin</td>
<td align="center">Nevoa Networks</td>
<td align="center">N. C. State</td>
<td align="center">Delaware</td>
</tr>
</table>

=== Collaborating Host Institutions ===

<table width="700px" border="0" cellspacing="0" cellpadding="0">
<tr>
<td>[[Image:usp.gif|90px|center|USP]]</td>
<td>[[Image:uerj.jpg|70px|center|UERJ]]</td>
<td>[[Image:michigan.jpg|60px|center|Michigan]]</td>
<td align="center">[[Image:fermilab.gif|55px|center|Florida]]</td>
<td align="center">[[Image:fnal.gif|55px|center|Fermilab]]</td>
<td align="center">[[Image:citlogo.gif|55px|center|Caltech]]</td>
<td align="center">[[Image:AMPATH.gif|55px|center|AMPATH]]</td>
<td>[[Image:FIU.gif|55px|center|FIU]]</td>
</tr>
<tr>
<td align="center">S&atilde;o Paulo</td>
<td align="center">Rio de Janeiro</td>
<td align="center">Michigan</td>
<td align="center">Florida</td>
<td align="center">Fermilab</td>
<td align="center">Caltech</td>
<td align="center">AMPATH</td>
<td align="center">FIU</td>
</tr>
</table>

=== Survey for Collaborators/Application Community ===

* The survey linked below was developed to understand the needs of the application community, so that attainable expectations can be established between core institutions, collaborators, and other members of the application community.

* Goal: have the survey completed by all members of the application community by May 30
** [http://www.reddnet.org/REDDnet_Survey.doc Click here for the survey]

== Support ==

[[Image:NSF.gif|50px]] <B>This work is supported by NSF Grant PHY-0619847 and by the Vanderbilt [http://www.vanderbilt.edu/americas/ Center for the Americas].</B>

Presentation information (revised 2008-01-24 by Tacketar)
<hr />
<div>= Proposed changes =
:[[Development ideas | Follow link for details]]

= Development timeline =
* Existing protocol documentation (real soon)
* Add a web page for ACCRE IBP depot software and tools releases (real soon).
** It is probably better suited to binary distribution since there is no autotools support yet.
** ACCRE IBP Depot
** ibp_perf - IBP server benchmarking tool. Benchmarks:
*** allocation creates/sec
*** allocation removals/sec
*** upload performance (MB/s)
*** download performance (MB/s)
*** mixed upload/download performance; the upload/download ratio is a user-controlled parameter (MB/s)
*** The user can specify depot, port, RID, number of simultaneous threads, allocation count, upload/download count, and block size.
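As a rough illustration of what an allocations-per-second benchmark along these lines measures, here is a minimal sketch in Python. It is not the real ibp_perf tool: `create_allocation` is a hypothetical stand-in for an IBP client call, and the depot address is made up; only the timing-over-worker-threads structure is the point.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an IBP allocation request; the real tool
# would contact a depot over the network.
def create_allocation(depot, size):
    return {"depot": depot, "size": size}

def bench_creates_per_sec(depot, count, threads):
    """Time `count` allocation creates spread across `threads` workers."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        caps = list(pool.map(lambda _: create_allocation(depot, 1024),
                             range(count)))
    elapsed = time.perf_counter() - start
    return len(caps) / elapsed  # allocation creates per second

rate = bench_creates_per_sec("depot1.example.org:6714", count=1000, threads=8)
print(f"{rate:.0f} creates/sec")
```

The same harness shape, with the stand-in swapped for real upload/download calls and the byte count recorded, yields the MB/s numbers listed above.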
** ibp_sperf - benchmarks FIFO or depot-to-depot copy performance along a chain of depots.
*** Provides the one-way transfer rate.
*** The user controls the number of simultaneous transfers and the block sizes.
*** The data path can also include the client (upload a file or dummy data) or just the depots (dummy data only).
*** Depot-to-depot method: uses two allocations per depot and IBP_copy() calls to leapfrog the data down the path.
*** FIFO method: the endpoints of the path have a fixed-size "normal" allocation as specified by the user. The intermediate depots each use a single FIFO-type allocation whose size is also user-controlled. A single IBP_copy() command is used for each internal depot. This results in data transfers larger than the allocation size, which is perfectly valid for FIFO allocations.
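To make the "leapfrog" copy down a chain of depots concrete, here is a schematic sketch in Python. It models each depot's allocation as a list and uses `ibp_copy` as a hypothetical stand-in for an IBP_copy()-style hop; it illustrates the data path only, not the real protocol.

```python
# Hypothetical stand-in for an IBP_copy()-style transfer: move a block
# from one depot's allocation into the next depot's allocation.
def ibp_copy(src, dst, block):
    dst.append(block)
    return len(block)

def leapfrog(chain, blocks):
    """Push each block down a chain of per-depot allocations, hop by hop."""
    hops = 0
    for block in blocks:
        chain[0].append(block)           # source depot receives the block
        for i in range(len(chain) - 1):  # leapfrog it along the path
            ibp_copy(chain[i], chain[i + 1], chain[i].pop())
            hops += 1
    return hops

chain = [[] for _ in range(4)]           # a path of 4 depots
hops = leapfrog(chain, [b"x" * 512] * 3) # 3 blocks, 3 hops each
```

In this model every block makes one hop per link in the chain, which is why the depot-to-depot method needs an allocation on each side of every hop, while the FIFO method can stream through a single bounded allocation per intermediate depot.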
* Re-implement the C IBP client (4/1/08 - it's a long time out, but I have to work around ACCRE downtime in mid-March)
* 1st draft: create a minimal Linux/Mac StdIO/FUSE module using the directory and data service API and protocol (6/1/08)
** This will be a draft implementation which will probably be thrown out.
** How fast this gets done really depends on coming up with the minimal set needed for the directory and data services API and protocol. It also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?
** Minimal functionality: fopen, fclose, fread, fwrite, stat, fstat, lstat, truncate, fseek, ftell, rewind, fgetpos, fsetpos, opendir, closedir, readdir, and rewinddir. (Am I missing something?)
* Create an asynchronous IBP client API - no protocol change, just client-side optimizations.
* A minimal StdIO/FUSE module which supports async IBP for improved performance</div>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases(real soon). <br />
** It's probably more suited to binary distribution since their is no autotool support yet.<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance. upload/download ratio is a user controlled parameter (MB/s)<br />
*** User can specify depot, port, RID, number of simultaneous threads allocation count, upload/download count, and blocksize.<br />
** ibp_sperf - Benchmarks FIFO or depot-depot copy performance along a chain of depots. <br />
*** Provides one way transfer rate<br />
*** User controls number of simultaneous transfers and block sizes<br />
*** The data path can also include the client(upload a file or dummy data) or just the depots(dummy data only).<br />
*** depot-depot method: Uses 2 allocations/depot and uses IBP_copy() calls to frog jump the data down the path.<br />
*** FIFO method: The end points of the path have a fixed size "normal" allocation as specified by the user. The intermediate depots use a single FIFO type allocation. The FIFO allocation size is controlled by the user. A single IBP_copy() command is used for each internal depot. This results in data tranfers > allocation size which is perfectly valid with FIFO allocations.<br />
* Re-implement C IBP client (4/1/08 - it's a longtime but I have to work around ACCRE downtime in mid-march)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done is really dependent on coming up with the minimal set needed for the directory+data services api and protocol. Also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change, just add client side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3164Presentation information2008-01-24T23:51:51Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases(real soon). It's probably more suited to binary distribution since their is no autotool support yet.<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance. upload/download ratio is a user controlled parameter (MB/s)<br />
*** User can specify depot, port, RID, number of simultaneous threads allocation count, upload/download count, and blocksize.<br />
** ibp_sperf - Benchmarks FIFO or depot-depot copy performance along a chain of depots. <br />
*** Provides one way transfer rate<br />
*** User controls number of simultaneous transfers and block sizes<br />
*** The data path can also include the client(upload a file or dummy data) or just the depots(dummy data only).<br />
*** depot-depot method: Uses 2 allocations/depot and uses IBP_copy() calls to frog jump the data down the path.<br />
*** FIFO method: The end points of the path have a fixed size "normal" allocation as specified by the user. The intermediate depots use a single FIFO type allocation. The FIFO allocation size is controlled by the user. A single IBP_copy() command is used for each internal depot. This results in data tranfers > allocation size which is perfectly valid with FIFO allocations.<br />
* Re-implement C IBP client (4/1/08 - it's a longtime but I have to work around ACCRE downtime in mid-march)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done is really dependent on coming up with the minimal set needed for the directory+data services api and protocol. Also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change, just add client side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3163Presentation information2008-01-24T23:50:27Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases(real soon). It's probably more suited to binary distribution since their is no autotool support yet.<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance. upload/download ratio is a user controlled parameter (MB/s)<br />
** ibp_sperf - Benchmarks FIFO or depot-depot copy performance along a chain of depots. <br />
*** Provides one way transfer rate<br />
*** User controls number of simultaneous transfers and block sizes<br />
*** The data path can also include the client(upload a file or dummy data) or just the depots(dummy data only).<br />
*** depot-depot method: Uses 2 allocations/depot and uses IBP_copy() calls to frog jump the data down the path.<br />
*** FIFO method: The end points of the path have a fixed size "normal" allocation as specified by the user. The intermediate depots use a single FIFO type allocation. The FIFO allocation size is controlled by the user. A single IBP_copy() command is used for each internal depot. This results in data tranfers > allocation size which is perfectly valid with FIFO allocations.<br />
* Re-implement C IBP client (4/1/08 - it's a longtime but I have to work around ACCRE downtime in mid-march)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done is really dependent on coming up with the minimal set needed for the directory+data services api and protocol. Also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change, just add client side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3162Presentation information2008-01-24T23:35:36Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases(real soon). It's probably more suited to binary distribution since their is no autotool support yet.<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance. upload/download ratio is a user controlled parameter (MB/s)<br />
** ibp_sperf - Benchmarks FIFO performance along a chain of depots. Provides one way transfer rate.<br />
<br />
* Re-implement C IBP client (4/1/08 - it's a longtime but I have to work around ACCRE downtime in mid-march)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done is really dependent on coming up with the minimal set needed for the directory+data services api and protocol. Also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change, just add client side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3161Presentation information2008-01-24T23:35:04Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases(real soon)<br />
It's probably more suited to binary distribution since their is no autotool support yet.<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance. upload/download ratio is a user controlled parameter (MB/s)<br />
** ibp_sperf - Benchmarks FIFO performance along a chain of depots. Provides one way transfer rate.<br />
<br />
* Re-implement C IBP client (4/1/08 - it's a longtime but I have to work around ACCRE downtime in mid-march)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done is really dependent on coming up with the minimal set needed for the directory+data services api and protocol. Also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change, just add client side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3160Presentation information2008-01-24T23:34:52Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases(real soon)<br />
:It's probably more suited to binary distribution since their is no autotool support yet.<br />
<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance. upload/download ratio is a user controlled parameter (MB/s)<br />
** ibp_sperf - Benchmarks FIFO performance along a chain of depots. Provides one way transfer rate.<br />
<br />
* Re-implement C IBP client (4/1/08 - it's a longtime but I have to work around ACCRE downtime in mid-march)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done is really dependent on coming up with the minimal set needed for the directory+data services api and protocol. Also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change; just add client-side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3159Presentation information2008-01-24T23:34:31Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases (real soon)<br />
:It's probably more suited to binary distribution since there is no autotool support yet.<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance; the upload/download ratio is a user-controlled parameter (MB/s)<br />
** ibp_sperf - Benchmarks FIFO performance along a chain of depots. Provides the one-way transfer rate.<br />
<br />
* Re-implement C IBP client (4/1/08 - it's a long time, but I have to work around ACCRE downtime in mid-March)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done really depends on coming up with the minimal set needed for the directory and data services API and protocol. It also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change; just add client-side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3158Presentation information2008-01-24T23:33:46Z<p>Tacketar: </p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software and tools releases (real soon)<br />
** ACCRE IBP Depot<br />
** ibp_perf - IBP server benchmarking tool. Benchmarks:<br />
*** allocation creates / sec<br />
*** allocation removals / sec<br />
*** Upload performance (MB/s)<br />
*** Download performance (MB/s)<br />
*** Mixed upload/download performance; the upload/download ratio is a user-controlled parameter (MB/s)<br />
** ibp_sperf - Benchmarks FIFO performance along a chain of depots. Provides the one-way transfer rate.<br />
** It's probably more suited to binary distribution since there is no autotool support yet.<br />
* Re-implement C IBP client (4/1/08 - it's a long time, but I have to work around ACCRE downtime in mid-March)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done really depends on coming up with the minimal set needed for the directory and data services API and protocol. It also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
** Minimal functionality: open, close, fread<br />
* Create Asynchronous IBP client API - no protocol change; just add client-side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3157Presentation information2008-01-24T23:26:12Z<p>Tacketar: /* Proposed changes */</p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software releases (real soon)<br />
** It's probably more suited to binary distribution since there is no autotool support yet.<br />
* Re-implement C IBP client (4/1/08 - it's a long time, but I have to work around ACCRE downtime in mid-March)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done really depends on coming up with the minimal set needed for the directory and data services API and protocol. It also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
<br />
* Create Asynchronous IBP client API - no protocol change; just add client-side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Presentation_information&diff=3156Presentation information2008-01-24T23:26:01Z<p>Tacketar: New page: = Proposed changes = : Follow link for details = Development timeline = * Existing protocol documentation (real soon) * Add web page for ACCRE IBP depot software ...</p>
<hr />
<div>= Proposed changes =<br />
:[[Development ideas | Follow link for details]]<br />
<br />
= Development timeline =<br />
* Existing protocol documentation (real soon)<br />
* Add web page for ACCRE IBP depot software releases (real soon)<br />
** It's probably more suited to binary distribution since there is no autotool support yet.<br />
* Re-implement C IBP client (4/1/08 - it's a long time, but I have to work around ACCRE downtime in mid-March)<br />
* 1st draft: Create minimal Linux/Mac StdIO/FUSE module using directory and data service API and protocol (6/1/08)<br />
** This will be a draft implementation which will probably be thrown out<br />
** How fast this gets done really depends on coming up with the minimal set needed for the directory and data services API and protocol. It also depends on Larry's time getting similar support into L-Store. Micah/Chris, what about LoDN supporting it also?<br />
<br />
* Create Asynchronous IBP client API - no protocol change; just add client-side optimizations.<br />
* Minimal StdIO/FUSE module which supports Async IBP for improved performance</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=All_Hands_Meeting,_Feb_1,_2008&diff=3155All Hands Meeting, Feb 1, 20082008-01-24T23:11:55Z<p>Tacketar: /* 09:15-10:00 --- IBP */</p>
<hr />
<div>==Agenda for All-Hands==<br />
<br />
"What needs to happen to make REDDnet happen?"<br />
<br />
Meeting Goals:<br />
* lay out a roadmap and vision for REDDnet<br />
* specify tasks and jobs and prioritize<br />
* identify and make necessary policy decisions<br />
<br />
The roadmap should be driven by the application groups: CMS, TVNA, AmericaView, Facit, ...<br />
<br />
We need a scribe for each session to write down that session's action items and major decisions.<br />
<br />
====List of questions we know should be addressed====<br />
<br />
* titles of the working groups probably need to be changed?<br />
* opt-in policy for new groups<br />
* contributed nodes policy (do we want to require contributors to be collaborators?)<br />
* what services does REDDnet provide?<br />
<br />
===Morning Session 9-12:30===<br />
<br />
====09:00-09:15 --- Management Overview====<br />
<br />
====09:15-10:00 --- IBP====<br />
:[[Presentation information]]<br />
<br />
====10:00-10:45 --- Software Integration and Interoperability====<br />
* standardizing on an exnode - what is current state of exnode interoperability?<br />
* how should checksum issue be resolved?<br />
<br />
====10:45-11:00 --- 15 minute coffee break====<br />
<br />
====11:00-11:45 --- LoCI Tools====<br />
<br />
====11:45-12:30 --- L-Store Tools====<br />
<br />
===Afternoon Session 1:30-5:00===<br />
<br />
====01:30-02:15 --- Infrastructure Maintenance and Operations====<br />
<br />
====02:15-03:00 --- Application Liaison Activities====<br />
<br />
* how do new groups join?<br />
<br />
====03:00-03:15 --- 15 minute coffee break====<br />
<br />
====03:15-04:00 --- Applications====<br />
<br />
====04:00-05:00 --- The Big Picture====</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=All_Hands_Meeting,_Feb_1,_2008&diff=3154All Hands Meeting, Feb 1, 20082008-01-24T23:11:25Z<p>Tacketar: /* 09:15-10:00 --- IBP */</p>
<hr />
<div>==Agenda for All-Hands==<br />
<br />
"What needs to happen to make REDDnet happen?"<br />
<br />
Meeting Goals:<br />
* lay out a roadmap and vision for REDDnet<br />
* specify tasks and jobs and prioritize<br />
* identify and make necessary policy decisions<br />
<br />
The roadmap should be driven by the application groups: CMS, TVNA, AmericaView, Facit, ...<br />
<br />
We need a scribe for each session to write down that session's action items and major decisions.<br />
<br />
====List of questions we know should be addressed====<br />
<br />
* titles of the working groups probably need to be changed?<br />
* opt-in policy for new groups<br />
* contributed nodes policy (do we want to require contributors to be collaborators?)<br />
* what services does REDDnet provide?<br />
<br />
===Morning Session 9-12:30===<br />
<br />
====09:00-09:15 --- Management Overview====<br />
<br />
====09:15-10:00 --- IBP====<br />
[[Presentation information]]<br />
<br />
====10:00-10:45 --- Software Integration and Interoperability====<br />
* standardizing on an exnode - what is current state of exnode interoperability?<br />
* how should checksum issue be resolved?<br />
<br />
====10:45-11:00 --- 15 minute coffee break====<br />
<br />
====11:00-11:45 --- LoCI Tools====<br />
<br />
====11:45-12:30 --- L-Store Tools====<br />
<br />
===Afternoon Session 1:30-5:00===<br />
<br />
====01:30-02:15 --- Infrastructure Maintenance and Operations====<br />
<br />
====02:15-03:00 --- Application Liaison Activities====<br />
<br />
* how do new groups join?<br />
<br />
====03:00-03:15 --- 15 minute coffee break====<br />
<br />
====03:15-04:00 --- Applications====<br />
<br />
====04:00-05:00 --- The Big Picture====</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Directory_and_data_management_services_APIs&diff=3153Directory and data management services APIs2008-01-24T22:21:31Z<p>Tacketar: Directory and data management services APIs moved to Directory and data management services API and protocol</p>
<hr />
<div>#REDIRECT [[Directory and data management services API and protocol]]</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Directory_and_data_management_services_API_and_protocol&diff=3152Directory and data management services API and protocol2008-01-24T22:21:31Z<p>Tacketar: Directory and data management services APIs moved to Directory and data management services API and protocol</p>
<hr />
<div>This page lists all of the functions that we can think of that may be made interoperable.<br />
Functional areas can be added, as can discussions of how each function applies to the different frameworks (L-Store, LoDN, ...).<br />
<br />
- Directory manipulation - opendir-like interface<br />
(create, delete, move, walk)<br />
<br />
- Open/close; obtain a handle / release a handle<br />
<br />
- locking (whole-file & byte-range)<br />
<br />
- Specify lun + upload, download<br />
<br />
- get file mappings<br />
<br />
- replication<br />
<br />
- fault tolerance<br />
<br />
- permissions<br />
<br />
- auth/authz<br />
<br />
- allocate space<br />
<br />
- data migration<br />
<br />
- soft links - internal and external<br />
<br />
- recover failed mappings<br />
<br />
- encryption<br />
<br />
- add arbitrary metadata, a la XFS<br />
<br />
- object integrity<br />
<br />
- fsck/format/default options<br />
<br />
- object life times<br />
<br />
- advanced resource allocation<br />
<br />
- workflows<br />
<br />
- data-pathing<br />
<br />
- registration of a micro-services framework<br />
<br />
- quotas<br />
<br />
- versioning<br />
<br />
- file semantics - versioning/COW/locking, etc.<br />
<br />
<br />
<br />
- other ideas ???</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3132Development ideas2008-01-23T16:11:00Z<p>Tacketar: /* Validation along the entire data path */</p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
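As a rough illustration (a hypothetical parser sketch, not the actual depot code), the two parsing strategies can be contrasted like this:<br />
<br />
```python
# Hypothetical sketch of the IBP_STATUS parsing issue; not the actual depot code.

def parse_status_current(line):
    """Current format: the sub-command's position varies, so the parser must
    read the whole line and count tokens before it knows which command it has."""
    tokens = line.split()
    # tokens[0] = version, tokens[1] = "IBP_STATUS"
    if len(tokens) == 4:            # version IBP_STATUS IBP_ST_RES TIMEOUT
        return {"sub": tokens[2], "rid": None}
    # version IBP_STATUS RID IBP_ST_INQ|IBP_ST_CHANGE password TIMEOUT
    return {"sub": tokens[3], "rid": tokens[2]}

def parse_status_proposed(line):
    """Proposed format: the sub-command always immediately follows IBP_STATUS,
    so dispatch can happen as soon as the third token is read."""
    tokens = line.split()
    sub = tokens[2]
    rid = tokens[3] if sub in ("IBP_ST_INQ", "IBP_ST_CHANGE") else None
    return {"sub": sub, "rid": rid}
```
<br />
The proposed layout lets a streaming parser dispatch on one token instead of buffering and counting the whole request line.<br />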
<br />
== Change in ''RID'' format ==<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture-dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
== IBP_MCOPY current status ==<br />
The documentation for this command is sparse. It looks like numerous different multicast methods were implemented but there is very little documentation describing them. Should this command be dropped?<br />
<br />
== NFU ==<br />
There is very little documentation describing the NFU implementation in the current LoCI depot, and the documentation provided has errors and is not fully supported. The concept of the NFU is very powerful, and I wonder if it should be split out as a separate specification altogether. Hunter's Java implementation is quite elegant. In his implementation the NFU calls are actually Java JAR files stored as allocations. These allocations are then registered with the NFU manager, with hooks for checksums for data integrity. Having the NFU call operate in a Java container is extremely appealing. Java can sandbox the NFU call to limit its resource consumption (memory, CPU, threads, etc.), making it much more difficult for an NFU call to inadvertently or maliciously take down the depot or NFU manager. Also, because of Java's portability, deploying new NFU calls becomes trivial.<br />
<br />
= Security =<br />
<br />
== Add support for SSL ==<br />
Self-explanatory<br />
<br />
== Auth/AuthZ for IBP_ALLOC command ==<br />
This command has the potential for abuse and could result in a "Denial of Space" attack on the depot. If the concept of an "account" is added, one could then come up with additional methods to share resources, for example by adding the concept of an account quota. It also provides a tracking mechanism on who is *creating* allocations.<br />
<br />
== Virtual Capabilities (vcap) ==<br />
The current implementation only allows a single set of caps for an allocation, so once a user has access to a cap it can never be revoked. Virtual caps are designed to solve this problem. The idea is that a user presenting the IBP_MANAGE cap could request that the depot issue a new set of caps with a shorter duration. These new vcaps could then be provided to a 3rd party. At any time the original cap owner can revoke access to the allocation by simply using the IBP_MANAGE command to delete the vcap. Another useful feature to consider is restricting the vcap to a specific byte range of the original cap.<br />
<br />
== IBP "Accounts" ==<br />
In order for several of these ideas to work a new set of commands would need to be added to manage the accounts.<br />
<br />
<br />
= Data Integrity =<br />
<br />
== Validation along the entire data path ==<br />
Dan has added a bunch of text on this subject elsewhere so I'm not going to go into detail.<br />
<br />
The current implementation allows for validation at the end points only. This is accomplished by having the data originator calculate a checksum before uploading the data. This checksum can be appended to the uploaded data, or it can be stored externally in the exnode. The consumer can then download the data, calculate the checksum, and compare it to what is stored. This approach is not well suited to live data streams, since the raw data will have to be buffered until the consumer can download the data to verify it.<br />
<br />
An alternative approach would be to standardize on a checksum algorithm and have the client calculate the checksum as the data is being streamed to the depot, while the depot simultaneously calculates the checksum as it receives the data. The sender would pass on its checksum for validation by the receiver. Any discrepancy occurring during the network transfer would be immediately detected while the data is still in the sender's original buffer. The depot could then store this checksum as part of the allocation for later use. Most operating systems will immediately detect a write failure, but not necessarily bit rot when reading, unless the disk is part of a RAID array. Likewise, when a reader requests data, the reverse process can occur: the depot and the receiver both calculate the checksum as the data is being sent. The depot would additionally compare the stored checksum with what was just calculated in order to detect disk errors. If no errors occurred, the depot would go ahead and send the checksum down to the receiver for validation. This process is computationally efficient, since the data is never re-read; the checksum is just part of the transfer pipeline.<br />
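The pipelined checksumming described above can be sketched as follows (illustrative only; the protocol does not yet standardize an algorithm, so SHA-1 is an assumption here, and the socket is stood in for by simple callables):<br />
<br />
```python
# Sketch of checksumming inside the transfer pipeline (assumed SHA-1).
import hashlib

def send_stream(chunks, sock_write):
    """Sender: update the checksum as each chunk leaves the original buffer."""
    md = hashlib.sha1()
    for chunk in chunks:
        md.update(chunk)      # checksum computed in-line with the transfer
        sock_write(chunk)     # data is never re-read
    return md.hexdigest()     # passed on to the receiver for validation

def recv_stream(sock_reads):
    """Receiver (depot): checksum simultaneously as the data arrives."""
    md = hashlib.sha1()
    data = bytearray()
    for chunk in sock_reads:
        md.update(chunk)
        data.extend(chunk)
    return bytes(data), md.hexdigest()  # digest stored with the allocation
```
<br />
If the sender's digest and the receiver's digest disagree, the transfer was corrupted and the sender can retry immediately from its still-intact buffer.<br />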
<br />
Building this validation procedure into the protocol simplifies the data-integrity work that higher-level tools require. These checksums could be used by higher-level tools to verify replicated copies and detect data changes. The checksums should be treated as opaque strings and could be accessed by additional IBP_MANAGE sub-commands:<br />
<br />
* IBP_GET_CHECKSUM - Return the allocation's checksum<br />
* IBP_VALIDATE_CHECKSUM - Re-calculate the checksum<br />
<br />
Using a single checksum for an entire allocation is not efficient if random I/O on an allocation is allowed. In this case, changing a single byte of a 10MB allocation would require re-processing the entire allocation. Another option would be to specify that a checksum is generated for every 64KB of data (I picked this out of the blue, so feel free to suggest something different). This means each allocation could have multiple checksums, and if a single byte was changed only 64KB of data would have to be re-processed. If the checksum field on the client is treated as an opaque string, then having one or multiple checksums is irrelevant; both cases can be treated the same.<br />
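A minimal sketch of the per-block scheme (64KB block size and SHA-1 are the assumptions discussed above, both open to change) shows that a single-byte write only touches one block's checksum:<br />
<br />
```python
# Sketch of per-block allocation checksums (assumed 64KB blocks, SHA-1).
import hashlib

BLOCK = 64 * 1024

def block_checksums(data):
    """One checksum per 64KB block of the allocation."""
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def write_byte(data, sums, offset, value):
    """Modify one byte and refresh only the checksum of its block."""
    data[offset] = value
    b = offset // BLOCK
    sums[b] = hashlib.sha1(data[b * BLOCK:(b + 1) * BLOCK]).hexdigest()
```
<br />
Only 64KB is re-hashed per write instead of the whole allocation, while a client that treats the checksum field as an opaque string never notices the difference.<br />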
<br />
= Miscellaneous =<br />
<br />
== Support UDP transfers ==<br />
What about using the UDT implementation, since it can mimic FAST, web100, and other TCP congestion control methods?</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3131Development ideas2008-01-23T01:38:09Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture-dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
== IBP_MCOPY current status ==<br />
The documentation for this command is sparse. It looks like numerous different multicast methods were implemented but there is very little documentation describing them. Should this command be dropped?<br />
<br />
== NFU ==<br />
There is very little documentation describing the NFU implementation in the current LoCI depot, and the documentation provided has errors and is not fully supported. The concept of the NFU is very powerful, and I wonder if it should be split out as a separate specification altogether. Hunter's Java implementation is quite elegant. In his implementation the NFU calls are actually Java JAR files stored as allocations. These allocations are then registered with the NFU manager, with hooks for checksums for data integrity. Having the NFU call operate in a Java container is extremely appealing. Java can sandbox the NFU call to limit its resource consumption (memory, CPU, threads, etc.), making it much more difficult for an NFU call to inadvertently or maliciously take down the depot or NFU manager. Also, because of Java's portability, deploying new NFU calls becomes trivial.<br />
<br />
= Security =<br />
<br />
== Add support for SSL ==<br />
Self-explanatory<br />
<br />
== Auth/AuthZ for IBP_ALLOC command ==<br />
This command has the potential for abuse and could result in a "Denial of Space" attack on the depot. If the concept of an "account" is added, one could then come up with additional methods to share resources, for example by adding the concept of an account quota. It also provides a tracking mechanism on who is *creating* allocations.<br />
<br />
== Virtual Capabilities (vcap) ==<br />
The current implementation only allows a single set of caps for an allocation, so once a user has access to a cap it can never be revoked. Virtual caps are designed to solve this problem. The idea is that a user presenting the IBP_MANAGE cap could request that the depot issue a new set of caps with a shorter duration. These new vcaps could then be provided to a 3rd party. At any time the original cap owner can revoke access to the allocation by simply using the IBP_MANAGE command to delete the vcap. Another useful feature to consider is restricting the vcap to a specific byte range of the original cap.<br />
<br />
== IBP "Accounts" ==<br />
In order for several of these ideas to work a new set of commands would need to be added to manage the accounts.<br />
<br />
<br />
= Data Integrity =<br />
<br />
== Validation along the entire data path ==<br />
Dan has added a bunch of text on this subject elsewhere so I'm not going to go into detail.<br />
<br />
The current implementation allows for validation at the end points only. This is accomplished by having the data originator calculate a checksum before uploading the data. This checksum can be appended to the uploaded data, or it can be stored externally in the exnode. The consumer can then download the data, calculate the checksum, and compare it to what is stored. This approach is not well suited to live data streams, since the raw data will have to be buffered until the consumer can download the data to verify it.<br />
<br />
An alternative approach would be to standardize on a checksum algorithm and have the client calculate the checksum as the data is being streamed to the depot, while the depot simultaneously calculates the checksum as it receives the data. Any discrepancy occurring during the network transfer would be immediately detected while the data is still in the sender's original buffer. The depot could then store this checksum as part of the allocation for later use. Most operating systems will immediately detect a write failure, but not necessarily bit rot when reading, unless the disk is part of a RAID array. Likewise, when a reader requests data, the reverse process can occur: the depot and the receiver both calculate the checksum as the data is being sent. The depot would additionally compare the stored checksum with what was just calculated in order to detect disk errors. If no errors occurred, the depot would go ahead and send the checksum down to the receiver for validation. This process is computationally efficient, since the data is never re-read; the checksum is just part of the transfer pipeline.<br />
<br />
Building this validation procedure into the protocol simplifies the data-integrity work that higher-level tools require. These checksums could be used by higher-level tools to verify replicated copies and detect data changes. The checksums should be treated as opaque strings and could be accessed by additional IBP_MANAGE sub-commands:<br />
<br />
* IBP_GET_CHECKSUM - Return the allocation's checksum<br />
* IBP_VALIDATE_CHECKSUM - Re-calculate the checksum<br />
<br />
Using a single checksum for an entire allocation is not efficient if random I/O on an allocation is allowed. In this case, changing a single byte of a 10MB allocation would require re-processing the entire allocation. Another option would be to specify that a checksum is generated for every 64KB of data (I picked this out of the blue, so feel free to suggest something different). This means each allocation could have multiple checksums, and if a single byte was changed only 64KB of data would have to be re-processed. If the checksum field on the client is treated as an opaque string, then having one or multiple checksums is irrelevant; both cases can be treated the same.<br />
<br />
<br />
= Miscellaneous =<br />
<br />
== Support UDP transfers ==<br />
What about using the UDT implementation, since it can mimic FAST, web100, and other TCP congestion control methods?</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3130Development ideas2008-01-23T01:37:53Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture-dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
== IBP_MCOPY current status ==<br />
The documentation for this command is sparse. It looks like numerous different multicast methods were implemented but there is very little documentation describing them. Should this command be dropped?<br />
<br />
== NFU ==<br />
There is very little documentation describing the NFU implementation in the current LoCI depot, and the documentation provided has errors and is not fully supported. The concept of the NFU is very powerful, and I wonder if it should be split out as a separate specification altogether. Hunter's Java implementation is quite elegant. In his implementation the NFU calls are actually Java JAR files stored as allocations. These allocations are then registered with the NFU manager, with hooks for checksums for data integrity. Having the NFU call operate in a Java container is extremely appealing. Java can sandbox the NFU call to limit its resource consumption (memory, CPU, threads, etc.), making it much more difficult for an NFU call to inadvertently or maliciously take down the depot or NFU manager. Also, because of Java's portability, deploying new NFU calls becomes trivial.<br />
<br />
= Security =<br />
<br />
== Add support for SSL ==<br />
Self-explanatory<br />
<br />
== Auth/AuthZ for IBP_ALLOC command ==<br />
This command has the potential for abuse and could result in a "Denial of Space" attack on the depot. If the concept of an "account" is added, one could then come up with additional methods to share resources, for example by adding the concept of an account quota. It also provides a tracking mechanism on who is *creating* allocations.<br />
<br />
== Virtual Capabilities (vcap) ==<br />
The current implementation only allows a single set of caps for an allocation, so once a user has access to a cap it can never be revoked. Virtual caps are designed to solve this problem. The idea is that a user presenting the IBP_MANAGE cap could request that the depot issue a new set of caps with a shorter duration. These new vcaps could then be provided to a 3rd party. At any time the original cap owner can revoke access to the allocation by simply using the IBP_MANAGE command to delete the vcap. Another useful feature to consider is restricting the vcap to a specific byte range of the original cap.<br />
<br />
== IBP "Accounts" ==<br />
In order for several of these ideas to work a new set of commands would need to be added to manage the accounts.<br />
<br />
<br />
= Data Integrity =<br />
<br />
== Validation along the entire data path ==<br />
Dan has added a bunch of text on this subject elsewhere so I'm not going to go into detail.<br />
<br />
The current implementation allows for validation at the end points only. This is accomplished by having the data originator calculate a checksum before uploading the data. This checksum can be appended to the uploaded data, or it can be stored externally in the exnode. The consumer can then download the data, calculate the checksum, and compare it to what is stored. This approach is not well suited to live data streams, since the raw data will have to be buffered until the consumer can download the data to verify it.<br />
<br />
An alternative approach would be to standardize on a checksum algorithm and have the client calculate the checksum as the data is being streamed to the depot while the depot simultaneously calculates the checksum as it receives the data. Any discrepancy occurring during the network transfer would be detected immediately, while the data is still in the sender's original buffer. The depot could then store this checksum as part of the allocation for later use. Most operating systems will immediately detect a write failure but not necessarily bit rot when reading, unless the disk is part of a RAID array. Likewise, when a reader requests data the reverse process can occur: the depot and receiver both calculate the checksum as the data is being sent. The depot would additionally compare the stored checksum with what was just calculated in order to detect disk errors. If no errors occurred, the depot would go ahead and send the checksum down to the receiver for validation. This process is computationally efficient since the data is never re-read; the checksum is just part of the transfer pipeline.<br />
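The streaming scheme above can be sketched in a few lines. This is an illustration under the assumption of SHA-256 as the standardized algorithm; the function names and chunked transfer loop are not part of IBP.

```python
import hashlib

CHUNK = 64 * 1024

def stream_with_checksum(chunks, send):
    """Sender side: forward each chunk and fold it into a running checksum."""
    h = hashlib.sha256()        # assumed standardized algorithm
    for chunk in chunks:
        h.update(chunk)         # checksum is part of the transfer pipeline;
        send(chunk)             # the data is never re-read
    return h.hexdigest()

def receive_with_checksum(recv_chunks):
    """Depot side: calculate the checksum simultaneously while receiving."""
    h = hashlib.sha256()
    data = bytearray()
    for chunk in recv_chunks:
        h.update(chunk)
        data.extend(chunk)
    return bytes(data), h.hexdigest()

# A mismatch between the two digests means the data was corrupted in flight,
# while the sender still holds the original buffer and can retransmit.
```

On read, the same two functions swap roles: the depot streams and checksums, and additionally compares against the digest it stored at allocation time to detect disk errors.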
<br />
Building this validation procedure into the protocol simplifies the data-integrity work required of higher-level tools. These checksums could be used by higher-level tools to verify replicated copies and detect data changes. The checksums should be treated as opaque strings and could be accessed by additional IBP_MANAGE sub-commands:<br />
<br />
* IBP_GET_CHECKSUM - Return the allocation's checksum<br />
* IBP_VALIDATE_CHECKSUM - Re-calculate the checksum<br />
<br />
Using a single checksum for an entire allocation is not efficient if random I/O on an allocation is allowed. In this case changing a single byte of a 10MB allocation would require re-processing the entire allocation. Another option would be to generate a checksum for every 64KB of data (I picked this out of the blue so feel free to suggest something different). This means each allocation could have multiple checksums. In this case if a single byte were changed only 64KB of data would have to be re-processed. If the checksum field on the client is treated as an opaque string then having one or multiple checksums is irrelevant; both cases can be treated the same.<br />
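The per-block idea above might look like this minimal sketch; the 64KB granularity comes from the proposal, while the helper names and SHA-256 choice are illustrative assumptions.

```python
import hashlib

BLOCK = 64 * 1024   # per-block checksum granularity suggested above

def block_checksums(data):
    """One checksum per 64KB block of the allocation."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def update_byte(data, checksums, offset, value):
    """Random write of one byte: only the containing block is re-processed."""
    data = bytearray(data)
    data[offset] = value
    b = offset // BLOCK
    checksums[b] = hashlib.sha256(data[b * BLOCK:(b + 1) * BLOCK]).hexdigest()
    return bytes(data), checksums
```

Changing one byte of a 10MB allocation then re-hashes 64KB rather than all 10MB, and a client that treats the joined checksum list as a single opaque string cannot tell the two cases apart.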
<br />
<br />
= Miscellaneous =<br />
<br />
== Support UDP transfers ==<br />
What about using the UDT implementation, since it can mimic FAST, web100, and other TCP congestion control methods?</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3129Development ideas2008-01-23T01:23:00Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
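The parsing difference can be seen in a short sketch. The first function mirrors the current count-the-arguments behavior; the second shows the proposed sub-command-first layout, where dispatch is purely positional. (Token values and dictionary keys are illustrative; the IBP_ST_CHANGE variant's second data line is omitted.)

```python
def parse_current(line):
    """Current format: the sub-command's position varies, so the whole line
    must be read and the arguments counted before dispatching."""
    args = line.split()
    if len(args) == 4:        # version IBP_STATUS IBP_ST_RES TIMEOUT
        return {"sub": args[2], "rid": None, "timeout": args[3]}
    # version IBP_STATUS RID <sub-command> password TIMEOUT
    return {"sub": args[3], "rid": args[2], "timeout": args[5]}

def parse_reordered(line):
    """Proposed format: the sub-command always immediately follows
    IBP_STATUS, so the variant is known after reading three tokens."""
    args = line.split()
    sub = args[2]
    if sub == "IBP_ST_RES":   # version IBP_STATUS IBP_ST_RES TIMEOUT
        return {"sub": sub, "rid": None, "timeout": args[3]}
    # version IBP_STATUS <sub-command> RID password TIMEOUT
    return {"sub": sub, "rid": args[3], "timeout": args[5]}
```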
<br />
== Change in ''RID'' format ==<br />
<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
= Security =<br />
<br />
== Add support for SSL ==<br />
Self-explanatory<br />
<br />
== Auth/AuthZ for IBP_ALLOC command ==<br />
This command has the potential for abuse and could result in a "Denial of Space" attack on the depot. If the concept of an "account" were added, one could then devise additional methods to share resources, for example an account quota. An account also provides a mechanism for tracking who is *creating* allocations.<br />
<br />
== Virtual Capabilities(vcap) ==<br />
The current implementation only allows a single set of caps for an allocation, so once a user has access to a cap it can never be revoked. Virtual caps are designed to solve this problem. The idea is that a user presenting the IBP_MANAGE cap could request that the depot issue a new set of caps with a shorter duration. These new vcaps could then be provided to a 3rd party. At any time the original cap owner can revoke access to the allocation by simply using the IBP_MANAGE command to delete the vcap. Another useful feature to consider is restricting the vcap to a specific byte range of the original cap.<br />
<br />
== IBP "Accounts" ==<br />
In order for several of these ideas to work a new set of commands would need to be added to manage the accounts.<br />
<br />
<br />
= Data Integrity =<br />
<br />
== Validation along the entire data path ==<br />
Dan has added a bunch of text on this subject elsewhere so I'm not going to go into detail.<br />
<br />
The current implementation allows for validating at the end points only. This is accomplished by having the data originator calculate a checksum before uploading the data. This checksum can be appended to the uploaded data or it can be stored externally in the exnode. The consumer can then download the data, calculate the checksum, and compare it to what is stored. This approach is not well suited to live data streams since the raw data would have to be buffered until the consumer can download the data to verify it.<br />
<br />
An alternative approach would be to standardize on a checksum algorithm and have the client calculate the checksum as the data is being streamed to the depot while the depot simultaneously calculates the checksum as it receives the data. Any discrepancy occurring during the network transfer would be detected immediately, while the data is still in the sender's original buffer. The depot could then store this checksum as part of the allocation for later use. Most operating systems will immediately detect a write failure but not necessarily bit rot when reading, unless the disk is part of a RAID array. Likewise, when a reader requests data the reverse process can occur: the depot and receiver both calculate the checksum as the data is being sent. The depot would additionally compare the stored checksum with what was just calculated in order to detect disk errors. If no errors occurred, the depot would go ahead and send the checksum down to the receiver for validation. This process is computationally efficient since the data is never re-read; the checksum is just part of the transfer pipeline.<br />
<br />
Building this validation procedure into the protocol simplifies the data-integrity work required of higher-level tools. These checksums could be used by higher-level tools to verify replicated copies and detect data changes. The checksums should be treated as opaque strings and could be accessed by additional IBP_MANAGE sub-commands:<br />
<br />
* IBP_GET_CHECKSUM - Return the allocation's checksum<br />
* IBP_VALIDATE_CHECKSUM - Re-calculate the checksum<br />
<br />
Using a single checksum for an entire allocation is not efficient if random I/O on an allocation is allowed. In this case changing a single byte of a 10MB allocation would require re-processing the entire allocation. Another option would be to generate a checksum for every 64KB of data (I picked this out of the blue so feel free to suggest something different). This means each allocation could have multiple checksums. In this case if a single byte were changed only 64KB of data would have to be re-processed. If the checksum field on the client is treated as an opaque string then having one or multiple checksums is irrelevant; both cases can be treated the same.<br />
<br />
<br />
= Miscellaneous =<br />
<br />
== Support UDP transfers ==<br />
What about using the UDT implementation, since it can mimic FAST, web100, and other TCP congestion control methods?</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3128Development ideas2008-01-23T01:20:24Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
= Security =<br />
<br />
== Add support for SSL ==<br />
Self-explanatory<br />
<br />
== Auth/AuthZ for IBP_ALLOC command ==<br />
This command has the potential for abuse and could result in a "Denial of Space" attack on the depot. If the concept of an "account" were added, one could then devise additional methods to share resources, for example an account quota. An account also provides a mechanism for tracking who is *creating* allocations.<br />
<br />
== Virtual Capabilities(vcap) ==<br />
The current implementation only allows a single set of caps for an allocation, so once a user has access to a cap it can never be revoked. Virtual caps are designed to solve this problem. The idea is that a user presenting the IBP_MANAGE cap could request that the depot issue a new set of caps with a shorter duration. These new vcaps could then be provided to a 3rd party. At any time the original cap owner can revoke access to the allocation by simply using the IBP_MANAGE command to delete the vcap. Another useful feature to consider is restricting the vcap to a specific byte range of the original cap.<br />
<br />
== IBP "Accounts" ==<br />
In order for several of these ideas to work a new set of commands would need to be added to manage the accounts.<br />
<br />
<br />
= Data Integrity =<br />
<br />
== Validation along the entire data path ==<br />
Dan has added a bunch of text on this subject elsewhere so I'm not going to go into detail.<br />
<br />
The current implementation allows for validating at the end points only. This is accomplished by having the data originator calculate a checksum before uploading the data. This checksum can be appended to the uploaded data or it can be stored externally in the exnode. The consumer can then download the data, calculate the checksum, and compare it to what is stored. This approach is not well suited to live data streams since the raw data would have to be buffered until the consumer can download the data to verify it.<br />
<br />
An alternative approach would be to standardize on a checksum algorithm and have the client calculate the checksum as the data is being streamed to the depot while the depot simultaneously calculates the checksum as it receives the data. Any discrepancy occurring during the network transfer would be detected immediately, while the data is still in the sender's original buffer. The depot could then store this checksum as part of the allocation for later use. Most operating systems will immediately detect a write failure but not necessarily bit rot when reading, unless the disk is part of a RAID array. Likewise, when a reader requests data the reverse process can occur: the depot and receiver both calculate the checksum as the data is being sent. The depot would additionally compare the stored checksum with what was just calculated in order to detect disk errors. If no errors occurred, the depot would go ahead and send the checksum down to the receiver for validation. This process is computationally efficient since the data is never re-read; the checksum is just part of the transfer pipeline.<br />
<br />
Building this validation procedure into the protocol simplifies the data-integrity work required of higher-level tools. These checksums could be used by higher-level tools to verify replicated copies and detect data changes. The checksums should be treated as opaque strings and could be accessed by additional IBP_MANAGE sub-commands:<br />
<br />
* IBP_GET_CHECKSUM - Return the allocation's checksum<br />
* IBP_VALIDATE_CHECKSUM - Re-calculate the checksum<br />
<br />
Using a single checksum for an entire allocation is not efficient if random I/O on an allocation is allowed. In this case changing a single byte of a 10MB allocation would require re-processing the entire allocation. Another option would be to generate a checksum for every 64KB of data (I picked this out of the blue so feel free to suggest something different). This means each allocation could have multiple checksums. In this case if a single byte were changed only 64KB of data would have to be re-processed. If the checksum field on the client is treated as an opaque string then having one or multiple checksums is irrelevant; both cases can be treated the same.<br />
<br />
<br />
<br />
<br />
<br />
= Miscellaneous =</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3127Development ideas2008-01-23T00:49:17Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
= Security =<br />
<br />
== Add support for SSL ==<br />
<br />
== Auth/AuthZ for IBP_ALLOC command ==<br />
This command has the potential for abuse and could result in a "Denial of Space" attack on the depot. If the concept of an "account" were added, one could then devise additional methods to share resources, for example an account quota. An account also provides a mechanism for tracking who is *creating* allocations.<br />
<br />
== Virtual Capabilities(vcap) ==<br />
The current implementation only allows a single set of caps for an allocation, so once a user has access to a cap it can never be revoked. Virtual caps are designed to solve this problem. The idea is that a user presenting the IBP_MANAGE cap could request that the depot issue a new set of caps with a shorter duration. These new vcaps could then be provided to a 3rd party. At any time the original cap owner can revoke access to the allocation by simply using the IBP_MANAGE command to delete the vcap. Another useful feature to consider is restricting the vcap to a specific byte range of the original cap.<br />
<br />
== IBP "Accounts" ==<br />
In order for several of these ideas to work a new set of commands would need to be added to manage the accounts.<br />
<br />
<br />
= Data Integrity =<br />
<br />
== Data ==<br />
<br />
= Miscellaneous =</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3126Development ideas2008-01-22T23:18:20Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.<br />
<br />
= Security =<br />
<br />
= Data Integrity =<br />
<br />
= Miscellaneous =</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3125Development ideas2008-01-22T23:17:01Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==<br />
<br />
No explicit interfaces are provided for any of the various IBP data structures. A more flexible approach would be to add API calls to manipulate these structures indirectly.</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3124Development ideas2008-01-22T23:14:25Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
<br />
The current definition of an ''RID'' is an integer as defined in ''struct ibp_depot''. The definition of an integer is architecture dependent and hence not portable. An alternative definition would be to define the ''RID'' as a character string. This would provide flexibility in its implementation and use. The current IBP client libraries already treat the ''RID'' as an opaque character string for all commands except ''IBP_Allocate()''.<br />
<br />
== Provide interface to IBP data structures ==</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3122Development ideas2008-01-22T23:10:13Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.<br />
<br />
== Change in ''RID'' format ==<br />
<br />
<br />
== Provide interface to IBP data structures ==</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3120Development ideas2008-01-22T21:40:37Z<p>Tacketar: </p>
<hr />
<div>= Suggested changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Development_ideas&diff=3119Development ideas2008-01-22T21:40:17Z<p>Tacketar: New page: = Changes to existing protocol = == Re-order parameters in IBP_STATUS command == The existing IBP v1.4 implementation is: :''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \...</p>
<hr />
<div>= Changes to existing protocol =<br />
<br />
== Re-order parameters in IBP_STATUS command ==<br />
The existing IBP v1.4 implementation is:<br />
<br />
:''version IBP_STATUS '''RID''' IBP_ST_INQ password TIMEOUT \n''<br />
:''version IBP_STATUS '''RID''' IBP_ST_CHANGE password TIMEOUT \n max_hard max_soft max_duration \n''<br />
:''version IBP_STATUS IBP_ST_RES TIMEOUT \n''<br />
<br />
Notice that two of the commands have a primary command, ''IBP_STATUS'', a resource ID (''RID''), followed by a sub-command (''IBP_ST_INQ, IBP_ST_CHANGE'') and the last version has no ''RID'', just a sub-command, ''IBP_ST_RES''. The current implementation can only be parsed by first reading the whole line in and then counting the number of arguments. The argument count is then used to determine which command is actually being issued. A more natural version of the commands would always have the sub-command immediately follow the IBP_STATUS command.</div>Tacketarhttps://www.reddnet.org/mwiki/index.php?title=Protocol_Standardization_Efforts&diff=3118Protocol Standardization Efforts2008-01-22T20:44:14Z<p>Tacketar: </p>
<hr />
<div>'''SUMMARY''' - Given the increasing number of logistical networking software components, the REDDnet community is actively seeking to establish a series of standard protocols that would ensure the interoperability between these components and the services they need and/or the clients they serve. Although the Internet Backplane Protocol is already documented, it is also under revision by members of the REDDnet community, some of whom are the original authors.<br />
<br />
----<br />
<br />
This diagram illustrates the different projects that need to inter-operate within the standards being developed here.<br />
<br />
[[Image:Reddnet_design2.png|diagram]] <br />
<br />
*TSSP is the higher-level spec including use of protocols/standards IBP and exMSP<br />
*REDDnet software is the set of implementations shown here<br />
<br />
A minimum requirement is for the 3 main areas to interoperate<br />
*(but not initially among projects within each area)<br />
*ibp 1.4 compatibility<br />
*exMSP currently means<br />
**LoRS reads a base-exnode-schema (LoRS schema)<br />
**Lstcp reads a base-schema (or just IBP calls) from the L-Server<br />
<br />
The data-management systems L-Store and LoDN currently have no additional interoperability.<br />
<br />
----<br />
<br />
The following is the list of protocols under deliberation/refinement:<br />
<br />
* Transfer and Storage Services Protocol<br />
** [[TSSP Framework]]<br />
** [[TSSP Procedures]]<br />
** [[TSSP Messaging]]<br />
<br />
* Resource Discovery Protocol (see [[Resource Discovery Standardization]])<br />
** The Discovery Service<br />
** [[RDP Messages]]<br />
<br />
* Exnode Management Services Protocol<br />
** [[Exnode specification]]<br />
** The Metadata Hosting Service<br />
** [[exMSP Messages]]<br />
<br />
* Internet Backplane Protocol<br />
** [http://www.reddnet.org/mwiki/index.php/Internet_Backplane_Protocol Version 1.4]<br />
** [[1.4 Revision]]<br />
** [[Development ideas]]<br />
<br />
* Network Functional Unit (NFU)<br />
**[http://loci.cs.utk.edu/ibp/files/LoNC-FDNA03.pdf LOCI NFU doc]<br />
**[[NFU Specification]]</div>Tacketar