Scavenging Information Associated with a Particular Person

Abstract.
Scavenging information associated with a particular person through search engines is one of the most common activities on the Internet. The result contains a number of Web pages, which may be relevant to different persons who share the queried name. Human languages are inherently ambiguous. Text referring to the city "Roanoke" can mean "Roanoke, Virginia" or "Roanoke, Texas", depending on the surrounding context. Organizations and companies often have multiple nicknames, name variations, or common misspellings. Famous persons ("Amitabh Bachchan") often share a name with many non-famous individuals. In this paper, we propose a similarity-measure system to solve this problem using cosine similarity based on TF and IDF. Web pages having …
1. Changing the infrastructure of the current Web to the Semantic Web.
2. Keeping keyword-based search engines as the base and modifying them to consider the query and Web-page context in order to improve their efficiency.

Realizing the first idea poses a major problem: the current Web already contains many millions of documents whose structure would need considerable modification to express their content in RDF and RDFS. That is why our proposed architecture follows the second strategy.
The goal of measuring the similarity of Web pages using cosine similarity is to find the similarity between Web pages based on their extracted entities. To compute the cosine similarity between Web pages, we extract entities for each URL using the Alchemy API and then compute the TF-IDF weight of each entity for every URL.
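The pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the entity lists have already been extracted per URL (the Alchemy API call is not reproduced), and it uses a standard TF-IDF weighting with smoothed-free IDF purely for demonstration.

```python
import math
from collections import Counter

def tf_idf_vectors(entity_lists):
    """Compute a sparse TF-IDF vector (dict) for each document's entity list."""
    n_docs = len(entity_lists)
    # Document frequency: in how many documents each entity appears
    df = Counter()
    for entities in entity_lists:
        df.update(set(entities))
    vectors = []
    for entities in entity_lists:
        tf = Counter(entities)
        total = len(entities)
        # TF = relative frequency in this document; IDF = log(N / df)
        vec = {e: (count / total) * math.log(n_docs / df[e])
               for e, count in tf.items()}
        vectors.append(vec)
    return vectors

def cosine_similarity(a, b):
    """Cosine similarity between two sparse TF-IDF vectors (dicts)."""
    dot = sum(w * b.get(e, 0.0) for e, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical entity lists for three URLs (illustrative data only)
entity_lists = [
    ["Amitabh Bachchan", "actor", "Bollywood"],
    ["Amitabh Bachchan", "actor", "film"],
    ["Roanoke", "Virginia", "city"],
]
vecs = tf_idf_vectors(entity_lists)
print(cosine_similarity(vecs[0], vecs[1]))  # pages sharing entities: > 0
print(cosine_similarity(vecs[0], vecs[2]))  # no shared entities: 0.0
```

Pages whose entity vectors point in similar directions score close to 1, while pages about unrelated referents of the same name share few entities and score near 0, which is what lets the system cluster pages by the person they actually describe.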


Many different approaches have been applied to the basic problems of person-name disambiguation and document ranking; they are as follows.
1. Using dependency structure for prioritization of functional test suites [5]: in this paper, the authors proposed a new test-case prioritization technique that uses the dependency information from the test suites to prioritize. The dependency-structure prioritization technique includes four algorithms for prioritizing. Open-dependency prioritization proved to have lower execution cost, and closed-dependency prioritization achieved better fault detection.