Search results for: memory de duplication
Number of results: 1,802,836
Cloud computing makes IT more efficient and cost-effective in today's world. It acts as a virtual server that users can access via the internet on demand, eliminating the need for companies to host their own servers and purchase expensive software. On the other hand, many new types of cyber theft have arisen. The main concerns in cloud computing are data integrit...
Duplicate and near-duplicate web pages hinder the operation of search engines. As a consequence of duplicates and near-duplicates, a common problem for search engines is the growth of indexed storage pages. This high storage demand slows down processing, which in turn increases the serving cost. Finally, duplication also arises while gathering the required data from the var...
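One widely used technique for detecting near-duplicate web pages (not necessarily the method this abstract proposes) is word shingling combined with Jaccard similarity. The sketch below is a minimal illustration; the function names, the shingle size k, and the 0.8 threshold are assumptions chosen for the example.

# Minimal sketch: near-duplicate detection via word shingles and Jaccard similarity.

def shingles(text: str, k: int = 5) -> set:
    """Return the set of k-word shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_near_duplicate(doc1: str, doc2: str, threshold: float = 0.8) -> bool:
    return jaccard(shingles(doc1), shingles(doc2)) >= threshold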
We propose an unsupervised approach for linking records across arbitrarily many files, while simultaneously detecting duplicate records within files. Our key innovation involves the representation of the pattern of links between records as a bipartite graph, in which records are directly linked to latent true individuals, and only indirectly linked to other records. This flexible representation...
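The bipartite-linkage idea described above can be illustrated with a small sketch: each record points to a latent "true individual", and two records refer to the same person exactly when they share the same latent individual. The data layout and names below are assumptions for illustration, not the authors' implementation.

from collections import defaultdict

# linkage[(file_id, record_id)] = latent_individual_id
linkage = {
    ("file_A", 0): 17,
    ("file_A", 1): 17,   # duplicate within file_A
    ("file_B", 0): 17,   # the same person appears in file_B
    ("file_B", 1): 42,
}

def clusters(linkage):
    """Group records by their latent individual, yielding co-reference clusters."""
    groups = defaultdict(list)
    for record, individual in linkage.items():
        groups[individual].append(record)
    return dict(groups)

print(clusters(linkage))   # {17: [...three co-referent records...], 42: [...]}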
The rapid adoption of cloud storage services has created the problem that many duplicate copies of files are stored on remote storage servers, which not only wastes communication bandwidth on duplicate file uploads but also increases the cost of secure data management. To solve this problem, client-side deduplication was introduced to prevent the client from uploading files already e...
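A minimal sketch of client-side deduplication, under assumed class and function names: the client fingerprints the file and asks the server whether that fingerprint is already stored before deciding to upload. The security issues that the abstract alludes to (e.g. proving ownership of a claimed file) are deliberately ignored here.

import hashlib

class StorageServer:
    def __init__(self):
        self.blobs = {}          # fingerprint -> file bytes

    def has(self, fingerprint: str) -> bool:
        return fingerprint in self.blobs

    def upload(self, fingerprint: str, data: bytes) -> None:
        self.blobs[fingerprint] = data

def client_store(server: StorageServer, data: bytes) -> str:
    fp = hashlib.sha256(data).hexdigest()
    if not server.has(fp):       # upload only when the server lacks the content
        server.upload(fp, data)
    return fp                    # the client keeps the fingerprint as a reference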
This paper proposes an efficient algorithm for de-duplication based on demographic information consisting of two name strings, viz. GivenName and Surname, of individuals. The algorithm consists of two stages: enrolment and de-duplication. In both stages, all name strings are reduced to generic name strings with the help of phonetic-based reduction rules. Thus there may be several name strings havin...
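To illustrate the phonetic-reduction idea, the sketch below reduces both name strings to phonetic codes and treats records whose (GivenName, Surname) codes collide as candidate duplicates. It uses a simplified Soundex, which may differ from the reduction rules actually used in the paper; all names here are illustrative assumptions.

SOUNDEX_MAP = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
               **dict.fromkeys("dt", "3"), "l": "4",
               **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(name: str) -> str:
    """Simplified Soundex: first letter plus up to three digits."""
    name = "".join(c for c in name.lower() if c.isalpha())
    if not name:
        return "0000"
    encoded = [SOUNDEX_MAP.get(c, "0") for c in name]   # "0" marks vowels/h/w/y
    collapsed = [encoded[0]]
    for d in encoded[1:]:
        if d != collapsed[-1]:                          # collapse runs of the same digit
            collapsed.append(d)
    digits = [d for d in collapsed[1:] if d != "0"]
    return (name[0].upper() + "".join(digits) + "000")[:4]

def candidate_duplicates(records):
    """Group (GivenName, Surname) records whose phonetic codes collide."""
    buckets = {}
    for given, surname in records:
        key = (soundex(given), soundex(surname))
        buckets.setdefault(key, []).append((given, surname))
    return {k: v for k, v in buckets.items() if len(v) > 1}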
Data deduplication is a technique for eliminating duplicate copies of data and has been widely used in cloud storage to reduce storage space and upload bandwidth. Promising as it is, an emerging challenge is to perform secure deduplication in cloud storage. Although convergent encryption has been extensively adopted for secure deduplication, a critical issue in making convergent encryption prac...
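The core property of convergent encryption is that the key is derived from the plaintext itself, so identical files encrypt to identical ciphertexts and the server can deduplicate them without ever seeing the key. The sketch below is a toy, stdlib-only illustration of that property; the XOR keystream is NOT a secure cipher, and the function names are assumptions.

import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode (for illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()          # key depends only on the content
    stream = _keystream(key, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, stream))
    return key, ciphertext                            # same plaintext -> same ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    stream = _keystream(key, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))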
A major problem that arises from integrating different databases is the existence of duplicates. Data cleaning is the process of identifying two or more records within a database that represent the same real-world object (duplicates), so that a unique representation of each object is adopted. Existing data cleaning techniques rely heavily on full or partial domain knowledge. This paper pr...
Congenital defects are those abnormalities present at birth that result from errors arising during development (Noden and de la Hunta, 1985). Congenital duplications are interesting among congenital defects because they are composed of two individuals. Multiple births most frequently result from fertilization of separately ovulated female gametes. However, complete or partial separation of clea...
Many people now store large quantities of personal and corporate data on laptops or home computers. These often have poor or intermittent connectivity, and are vulnerable to theft or hardware failure. Conventional backup solutions are not well suited to this environment, and backup regimes are frequently inadequate. This paper describes an algorithm which takes advantage of the data which is co...
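The abstract is cut off before describing the algorithm, but a common design for deduplicating backup in this setting is to split files into chunks, identify each chunk by a cryptographic digest, and transfer only chunks the backup store has not seen. The sketch below is an assumed design for illustration, not the paper's algorithm; CHUNK_SIZE and the function names are choices made for the example.

import hashlib

CHUNK_SIZE = 64 * 1024

def backup(path: str, store: dict) -> list:
    """Back up one file into `store` (digest -> chunk), returning its chunk manifest."""
    manifest = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:        # transfer only unseen chunks
                store[digest] = chunk
            manifest.append(digest)
    return manifest                        # enough to reconstruct the file later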