Investigating the Latency Performance of Distributed Storage Systems: An Analysis Based on Amazon S3 Using Real-World Service Simulation Techniques
DOI: https://doi.org/10.15379/ijmst.v10i5.3791

Keywords: Hierarchical Tree structure, data retrieval, data nodes

Abstract
In the erasure codes currently in use, the generation of parity nodes depends strongly on data nodes. Increasing the number of parity nodes raises the fault tolerance and improves the chance of recovering the original data. However, it also increases the storage overhead and the repair load on the data nodes, because data nodes are queried frequently to help repair parity nodes. In LRC [25, 26], for example, repairing a failed global parity node requires reading all of the data nodes. Under these increasing demands on the data nodes, the time needed to serve read requests on them grows, which latency-sensitive applications such as Google search cannot tolerate. If parity nodes are instead generated from both data nodes and other parity nodes, the latter can take over part of the repair work normally performed by the former, reducing the time spent waiting. In other words, the read load on data nodes does not increase regardless of whether a parity node has failed. At first glance, such parity nodes appear to incur additional storage cost; however, with a proper design, generating parity nodes from existing parity nodes can reduce access latency without increasing the storage requirements. We demonstrate this in the sections that follow.
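As a rough illustration of the trade-off described above, the following Python sketch contrasts the two repair paths. It uses a hypothetical XOR-only layout (six data nodes, two local parities), not the coding scheme proposed in this paper: a parity computed directly from the data nodes forces a repair to read every data node, while a parity computed from other parity nodes can be rebuilt without touching the data nodes at all.

```python
# Toy XOR-only layout (illustrative assumption, not the paper's scheme):
# six data nodes split into two local groups of three.
import os
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [os.urandom(16) for _ in range(6)]          # data nodes d0..d5
local_p = [xor_blocks(data[0:3]), xor_blocks(data[3:6])]  # local parities
global_p = xor_blocks(data)                        # parity computed from all data
parity_of_parity = xor_blocks(local_p)             # parity computed from parities

# (a) Repairing the data-derived parity: every data node must be read.
rebuilt_a = xor_blocks(data)
data_reads_a = len(data)                           # 6 reads hit data nodes

# (b) Repairing the parity-derived parity: only the two local parities are read.
rebuilt_b = xor_blocks(local_p)
data_reads_b = 0                                   # data-node read load unchanged

# In this XOR toy the two parities hold the same value, so no extra storage
# is needed; only the repair traffic is redirected away from the data nodes.
assert rebuilt_a == global_p and rebuilt_b == parity_of_parity
print("data-node reads during repair:", data_reads_a, "vs", data_reads_b)
```

The point of the sketch is the last comparison: both repairs reconstruct the same parity content, but path (b) leaves the data nodes free to serve ordinary read requests, which is the latency benefit the abstract claims for parity nodes generated from parity nodes.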