Many attempts have been made to develop optimized algorithms and to introduce new models for skyline queries. The skyline had been studied as the maximum-vector problem before Borzsony and colleagues [23] introduced skyline queries for database applications. Various algorithms have been provided for skyline computation, including progressive skyline calculation using auxiliary structures, the nearest-neighbor algorithm for processing skyline queries, the branch-and-bound skyline (BBS) algorithm, the sort-filter skyline (SFS) algorithm that exploits pre-sorted lists, and the linear elimination-sort for skyline (LESS) algorithm. Recently, there has been a growing interest in addressing the problem of "large dimensions" for
In [28], it is shown that the proposed method is not correct. Sharifzadeh and colleagues [29] have proposed another version of the method that eliminates the flaws of the previous version, but it is computationally costly. Other methods for calculating the skyline have been suggested as well. A multi-objective optimization process for the skyline based on a genetic algorithm has been presented by Özyer et al. [30]. Fuzzy skylines have been proposed to make the skyline more flexible. In particular, a fuzzy skyline is considered in [31], where a flexible dominance relation is provided. This relation makes it possible to extend the skyline with points that are not strongly dominated by any other point (even if they are weakly dominated). Still other methods have arisen to improve the skyline. A variety of fuzzy skyline methods are studied in [32]. In that paper, five motivations are given for fuzzifying the skyline. The first motivation is to refine the skyline by introducing an ordering of points so as to find the best skyline points. The second is to make the skyline more flexible: points whose values are close to those of skyline points are also considered members of the skyline. In the third case, the skyline can be simplified in different ways. For example, by
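The dominance relation at the core of all of these methods can be sketched as follows; a minimal block-nested-loop skyline in Python, assuming every dimension is to be minimized (the hotel data — price and distance — is purely illustrative):

```python
def dominates(p, q):
    """p dominates q if p is <= q in every dimension and strictly
    better in at least one (smaller is better in all dimensions)."""
    return (all(pi <= qi for pi, qi in zip(p, q))
            and any(pi < qi for pi, qi in zip(p, q)))

def skyline(points):
    """Block-nested-loop skyline: keep every point that no other
    point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Illustrative data: (price, distance-to-beach) pairs for hotels.
hotels = [(50, 2.0), (80, 0.5), (60, 1.0), (90, 1.5)]
print(skyline(hotels))  # [(50, 2.0), (80, 0.5), (60, 1.0)]
```

The point (90, 1.5) is excluded because (60, 1.0) is both cheaper and closer; the fuzzy variants discussed above relax exactly this crisp `dominates` test.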
such as Web 2.0 sites [20], the growing number of internet users, as well as the
One main point presented in the article is that the author's algorithm is not only more
Since the 1960s, the need for efficient data management and retrieval has been a persistent issue, driven by growing demands in business and academia. To address these issues, a number of database models have been created. Relational databases allow data storage, retrieval, and manipulation using the standard Structured Query Language (SQL). Until recently, relational databases were the optimal enterprise storage choice. However, with the growth in the volume of stored and analyzed data, relational databases have displayed a variety of limitations, including limitations in scalability, storage, and query efficiency under large volumes of data [1][2].
This vast source of human-generated data is being used for various applications such as traffic analysis and management, disaster analysis and recovery, urban development, the spread of diseases, event
In this section, the CPP method is presented to reduce the 5DOF system to the 2DOF system. In
By taking the minimum values in the degree of possibility sets created from Equation 17, it was
Yet the output of the FCA method is computed according to numerical similarity, with no support for hardware constraints or network-service priorities. Indeed, FCA clusters generally satisfy a single cardinality constraint (min or max), leading to the notion of an iceberg lattice. A solution to this drawback is to tune FCA methods by searching within the set of nodes for subsets that provide the best trade-off between high-priority service lookup time and storage. Once the subsets are selected, a lookup key is chosen for each subset; here, too, the choice is guided by storage minimization, which
Abstract—Graph-based ranking models have been widely applied in information retrieval. In this paper, we focus on a well-known graph-based model: ranking on data manifolds, or manifold ranking (MR). It has been successfully applied to content-based image retrieval because of its outstanding ability to discover the underlying geometrical structure of a given image database. However, manifold ranking is computationally very expensive, which significantly limits its applicability to large databases, especially for cases where the query is outside the database (out-of-sample). We propose a novel scalable graph-based ranking model called Efficient Manifold Ranking (EMR), which addresses the shortcomings of MR from two main perspectives: scalable graph construction and efficient ranking computation. Specifically, we build an anchor graph on the database instead of a traditional k-nearest-neighbor graph, and design a new form of adjacency matrix to speed up the ranking. An approximate method is adopted for efficient out-of-sample retrieval. Experimental results on a number of large-scale
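The classical MR iteration that EMR accelerates can be sketched as follows; a toy Python version on a small similarity graph, assuming the standard update f ← αSf + (1 − α)y with a symmetrically normalized adjacency matrix (the anchor-graph and low-rank speedups of EMR are not shown):

```python
def normalize(W):
    """Symmetric normalization S = D^{-1/2} W D^{-1/2} of a
    (nonnegative, connected) similarity matrix W."""
    n = len(W)
    d = [sum(row) ** -0.5 for row in W]  # D^{-1/2} diagonal
    return [[d[i] * W[i][j] * d[j] for j in range(n)] for i in range(n)]

def manifold_rank(W, y, alpha=0.5, iters=200):
    """Iterate f <- alpha * S f + (1 - alpha) * y until (numerical)
    convergence; y marks the query node(s), f holds ranking scores."""
    S = normalize(W)
    n = len(W)
    f = list(y)
    for _ in range(iters):
        f = [alpha * sum(S[i][j] * f[j] for j in range(n))
             + (1 - alpha) * y[i] for i in range(n)]
    return f

# Toy 4-node chain graph, query at node 0; scores decay with
# graph distance from the query.
W = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
scores = manifold_rank(W, [1, 0, 0, 0])
```

On a real image database this dense iteration is exactly what becomes prohibitive, which motivates EMR's anchor-graph construction.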
Computational studies were performed using the Gaussian 94 computational package with an ab initio method, and the complete basis set amended by Peterson and co-workers was used for the calculations.^2 The
The main benefit of the analytical methods is that they avoid massive calculation times and make it simple to integrate additional assumptions and complex mathematical techniques [5, 6]. The cumulant method combined with the Gram-Charlier expansion has been proposed to solve the O-PLF in many manuscripts [7-9].
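To illustrate the idea, a textbook Gram-Charlier A-series density built from the first four cumulants can be sketched as follows (this is the generic expansion, not necessarily the exact formulation used in [7-9]):

```python
import math

def gram_charlier_pdf(x, mean, var, k3, k4):
    """Gram-Charlier A-series density from the first four cumulants:
    a Gaussian base corrected by Hermite-polynomial terms weighted
    by standardized skewness and excess kurtosis."""
    s = math.sqrt(var)
    z = (x - mean) / s
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)  # standard normal
    g1 = k3 / s ** 3              # standardized skewness
    g2 = k4 / s ** 4              # standardized excess kurtosis
    he3 = z ** 3 - 3 * z          # Hermite polynomial He_3
    he4 = z ** 4 - 6 * z ** 2 + 3 # Hermite polynomial He_4
    return phi / s * (1 + g1 * he3 / 6 + g2 * he4 / 24)
```

With k3 = k4 = 0 the correction vanishes and the expression reduces to the plain normal density, which is why cumulant-based load-flow methods recover the Gaussian case for free.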
In the proposed method, we first analyze the performance of data processing individually on relational databases and on the Hadoop framework, using a collection of sample datasets. After evaluating the performance of each system, we will work on a new method of data processing that combines the computational power of both the RDBMS and the Hadoop framework. We will use the same experimental setup and configuration for analyzing the data.
A distributed quantile-filter-based algorithm was proposed in (Chen, Liang & Yu, 2014) to answer top-k queries in a wireless sensor network. It evaluates top-k queries in wireless sensor networks so as to maximize the network lifetime. In addition, an online algorithm was proposed for answering time-dependent top-k queries with different values of k.
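For reference, the centralized answer that such distributed filters must reproduce (while saving transmissions) can be sketched as follows; the sensor values and node IDs are illustrative:

```python
import heapq

def top_k(readings, k):
    """Return the k largest (value, node_id) readings, best first.
    In the distributed setting each node would filter locally by a
    quantile threshold before transmitting; this sketch only shows
    the centralized result those filters must agree with."""
    return heapq.nlargest(k, readings)

# Illustrative readings collected at the sink.
readings = [(21.5, 'n1'), (30.2, 'n2'), (27.8, 'n3'), (19.0, 'n4')]
print(top_k(readings, 2))  # [(30.2, 'n2'), (27.8, 'n3')]
```

The quantile filters trade a small amount of local computation for a large reduction in radio traffic, which is what extends network lifetime.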
In some of the literature, multiplicative noise has been used. Spinello and Stilwell [7] have presented a recursive algorithm using the maximum-likelihood estimation method coupled with the Gauss-Newton algorithm, in which a state-dependent
The nearest-neighbor algorithm is a lazy learning algorithm that stores many training data samples and relates every test data sample to the closest matching training sample to produce an output [12]. In this algorithm a
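A minimal 1-nearest-neighbor classifier along these lines can be sketched as follows (the training points and labels are illustrative):

```python
import math

def nearest_neighbor_predict(train, test_point):
    """1-NN: return the label of the training sample closest to the
    test sample under Euclidean distance. `train` is a list of
    (feature_tuple, label) pairs."""
    features, label = min(train,
                          key=lambda s: math.dist(s[0], test_point))
    return label

# Illustrative labeled training data.
train = [((0.0, 0.0), 'A'), ((1.0, 1.0), 'A'), ((5.0, 5.0), 'B')]
print(nearest_neighbor_predict(train, (4.0, 4.5)))  # 'B'
```

Being "lazy," the algorithm does no work at training time: all distance computation is deferred to prediction, which is cheap to implement but costly per query on large training sets.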
A local-improvement heuristic based on swapping centers in and out is presented, yielding a (9 + ϵ)-approximation algorithm. The paper also shows, by providing an example, that any approach based on performing a fixed number of swaps achieves an approximation factor of at least (9 − ϵ) in all sufficiently high dimensions.
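A single-swap local-search heuristic of this kind can be sketched for 1-D k-means as follows (a simplified illustration of the swap idea, not the paper's exact algorithm; single swaps give a weaker constant-factor guarantee, and multi-swaps tighten it toward (9 + ϵ)):

```python
def cost(points, centers):
    """k-means cost: sum of squared distances to the nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def single_swap_local_search(points, k):
    """Repeatedly swap one open center for one closed candidate
    whenever the swap strictly lowers the cost; stop at a local
    optimum where no single swap helps."""
    centers = set(points[:k])  # arbitrary initial centers
    improved = True
    while improved:
        improved = False
        for out_c in list(centers):
            for in_c in points:
                if in_c in centers:
                    continue
                trial = (centers - {out_c}) | {in_c}
                if cost(points, trial) < cost(points, centers):
                    centers = trial
                    improved = True
                    break
            if improved:
                break
    return centers

pts = [0.0, 1.0, 10.0, 11.0]  # two obvious clusters
centers = single_swap_local_search(pts, 2)
print(cost(pts, centers))  # 2.0 for this data
```

The lower-bound construction in the paper works by exhibiting instances whose swap-local optima are almost a factor of 9 worse than the true optimum, so no fixed number of swaps can beat that factor in general.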