Workpackage III (WP-III): Distributed data-management
In this context, data refers to both input and output of applications. Typically, the size of the managed data is very large (several TB). The methods for accessing the data, as well as for providing it for transport, must be reliable and efficient. These methods must also account for cluster nodes that, in contrast to the login/frontend nodes, may have no direct connection to the Internet.
Another aspect of the data sets is that requests will access both temporal and spatial slices of the data, followed by computational or graphical analysis. Furthermore, each individual data format might require special filters to split the data into slices. How these filters can be integrated efficiently must therefore be considered.
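The filter idea described above can be sketched as a small interface: a slice request combines a temporal range with a spatial region, and one filter implementation per data format performs the actual cut. All names (`SliceRequest`, `GridSliceFilter`) and the toy in-memory data layout are illustrative assumptions, not part of the work package's design.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class SliceRequest:
    t_start: float            # temporal slice: inclusive start time
    t_end: float              # temporal slice: inclusive end time
    region: tuple             # spatial slice: (x0, x1, y0, y1) bounding box

class SliceFilter(Protocol):
    """One implementation per data format performs the format-specific cut."""
    def cut(self, dataset, request: SliceRequest): ...

class GridSliceFilter:
    """Hypothetical filter for regular grid data held as {time: 2-D list}."""
    def cut(self, dataset, request):
        x0, x1, y0, y1 = request.region
        return {
            t: [row[x0:x1] for row in frame[y0:y1]]
            for t, frame in dataset.items()
            if request.t_start <= t <= request.t_end
        }
```

New formats would plug in by providing another `SliceFilter` implementation, keeping the request description independent of the storage layout.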
If the output data of a program cannot, or may not, be processed directly after the program has finished execution, this data must be saved to intermediary storage. Different storage resources can provide access to their storage in different ways. Therefore, a software layer should be designed that provides a unified interface for access to a storage resource, independent of its local configuration.
Distributed processing (for example, parallel numerical simulations) may use and/or store data at different locations. Data access can therefore be optimized by replicating data to several locations. Optimization of data access, in this case, concerns server resilience (or, more specifically, data availability) and data-transfer latency. In both cases, managing where replicas are placed is necessary. By decoupling the data identification from the data location (which is assigned by the replica management), the access methods can be enhanced to automatically retrieve the most favorable copy.
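Selecting "the most favorable copy" could, under the two criteria named above (availability and transfer latency), look like the following minimal sketch. The `Replica` record and the way latency and availability are obtained are assumptions for illustration; the work package leaves the actual selection policy open.

```python
from dataclasses import dataclass

@dataclass
class Replica:
    url: str                  # physical location of this copy
    latency_ms: float         # measured transfer latency to this copy
    available: bool           # is the hosting server currently reachable?

def best_replica(replicas):
    """Return the reachable copy with the lowest latency; None if all are down."""
    candidates = [r for r in replicas if r.available]
    return min(candidates, key=lambda r: r.latency_ms, default=None)
```

Because unavailable servers are filtered out first, the same selection step covers both optimization goals: resilience (skip dead copies) and latency (prefer the fastest remaining one).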
Responsible for the work package: Mikael Högqvist (ZIB)
Technical contact persons:
- Thomas Radke (AEI)
- Detlef Elstner (AIP)
- Hans-Martin Adorf (MPA)
- Wolfgang Voges (MPE)
- Angelika Reiser (TUM)
- Stefan Jordan (ZAH)
- Thomas Röblitz (ZIB)
- Mikael Högqvist (ZIB)
Requirement specification and design of the architecture
Access methods (data cuts, firewalls, etc.), unified storage management, replica management, staging methods; consideration of interfaces to other work packages, particularly regarding the management of metadata.
Development of access methods
Initially, methods for accessing data slices in local data sets; in a second step, these methods are extended to access remote data sets.
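The two-step plan above (local access first, remote access later) could be realized by a reader that applies a cut to a locally available file and otherwise stages the data set from a remote resource. The `fetch_remote` callback stands in for the second-step remote transfer and is purely illustrative.

```python
import os

def read_slice(path, cut, fetch_remote=None, cache_dir="/tmp/stage"):
    """Apply `cut` to a data set, staging it from remote storage if needed.

    `cut` is a callable applied to a locally readable file (step one).
    `fetch_remote(path, dest_dir)` downloads the data set and returns the
    path of the local copy (step two); both names are assumptions.
    """
    if os.path.exists(path):
        return cut(path)                      # step one: local access
    if fetch_remote is None:
        raise FileNotFoundError(path)
    os.makedirs(cache_dir, exist_ok=True)
    local_copy = fetch_remote(path, cache_dir)  # step two: remote staging
    return cut(local_copy)
```

The slicing logic itself stays unchanged between the two steps; only the staging layer is added, which matches the incremental development described above.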
Development of management of data storage
Implementation of a flexible interface for access to the local data storage management. Furthermore, the components must be easily adaptable to differences in local configurations.
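One way to keep the components adaptable to differing local configurations is an adapter pattern: a fixed abstract interface, with one small subclass per site-specific storage setup. The sketch below shows the idea with an adapter for plain POSIX directory storage; all class and method names are hypothetical.

```python
from abc import ABC, abstractmethod
import os
import shutil

class StorageAdapter(ABC):
    """Unified interface to a storage resource; one subclass per local setup."""
    @abstractmethod
    def put(self, name, src_path):
        """Store the file at src_path under the given name."""
    @abstractmethod
    def get(self, name, dst_path):
        """Retrieve the named object into dst_path."""

class PosixDirAdapter(StorageAdapter):
    """Adapter for storage that is simply a POSIX directory."""
    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)
    def put(self, name, src_path):
        shutil.copy(src_path, os.path.join(self.root, name))
    def get(self, name, dst_path):
        shutil.copy(os.path.join(self.root, name), dst_path)
```

Supporting a new local configuration (e.g. a tape archive or a GridFTP endpoint) would then mean writing one further subclass, without touching the callers of the unified interface.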
Development of replica management
The replica management provides an information service that maps logical data identifiers to physical storage locations. The first version provides a central information service alongside methods for replica registration and for access to data via logical identifiers (where a replica is selected at random). The second version uses a distributed information service and implements optimal replica selection.
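The first-version behavior described above (central service, registration, random replica selection) can be sketched as a small in-memory catalog. The class name, the `lfn://` identifier scheme, and the `gsiftp://` URLs below are illustrative assumptions only.

```python
import random

class ReplicaCatalog:
    """Central information service: logical data IDs -> physical locations.

    First-version sketch: registration plus random replica selection; the
    second version would replace random.choice with optimal selection.
    """
    def __init__(self):
        self._locations = {}  # logical id -> list of physical URLs

    def register(self, logical_id, physical_url):
        """Record one more physical replica of a logical data set."""
        self._locations.setdefault(logical_id, []).append(physical_url)

    def lookup(self, logical_id):
        """Return all known physical locations for a logical identifier."""
        return list(self._locations.get(logical_id, []))

    def select(self, logical_id):
        """Pick one replica at random, as in the first project version."""
        replicas = self._locations.get(logical_id)
        if not replicas:
            raise KeyError(logical_id)
        return random.choice(replicas)
```

Because callers only ever see logical identifiers, the later switch to a distributed information service and to optimal selection would change the catalog's internals, not its interface.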
Test of the developed components through adaptation of community applications
Selected community applications are gradually integrated as functionality is finalized. These tests assure a higher quality of the software.