Search results for: WebCrawler

Number of results: 35

2000
Brian Pinkerton

WebCrawler: Finding What People Want

Journal: Information Systems and Services
Nahid Khoshian, MSc in Knowledge and Information Science, Shiraz University

Objective: The present study examines eight prominent metasearch engines, namely WebCrawler, Depperweb, Metagopher, Ask Jeeves, Clusty, GoTo, MetaLib, and PolyMeta, in terms of how well they answer specialized reference questions in Knowledge and Information Science. Method: Sixteen specialized questions from this field were searched in each metasearch engine, and the first ten results from each engine were evaluated and compared for relevance, precision, and fallout. In total, 1,280 searches were performed. In the analys...

Journal: IJIMAI 2009
J. Aguilera Collar R. González-Cebrián Toba C. Cortés Velasco

The Forex market is a very interesting one, and a suitable tool to forecast currency behavior would be of great interest. A 100% reliable tool is almost impossible to find; like any other market, Forex is unpredictable. We nevertheless developed a tool that combines a webcrawler, data mining, and web services to offer forecasting advice to any user or broker.

1999
Jean Marie Buijs Michael S. Lew

Learning visual concepts is an important tool for automatic annotation and visual querying of networked multimedia databases. It allows the user to express queries in his own vocabulary instead of the computer’s vocabulary. This paper gives an overview of our current research directions in learning visual concepts for use in our online visual webcrawler, ImageScape. We discuss using the Kullbac...

2012
Divakar Yadav A. K. Sharma Sonia Sanchez-Cuadrado Jorge Morato

The World Wide Web (WWW) is a huge repository of interlinked hypertext documents known as web pages, which users access via the Internet. Since its inception in 1990, the WWW has grown manyfold in size; it now contains more than 50 billion publicly accessible web documents distributed all over the world on thousands of web servers, and it is still growing at an exponential rate. It is very...

Journal: Journal of AI and Data Mining 2014
Deepika Koundal

The enormous growth of the World Wide Web in recent years has made it necessary to perform resource discovery efficiently. For a crawler, it is not a simple task to download only domain-specific web pages, and an unfocused approach often yields undesired results. Therefore, several new ideas have been proposed; among them, a key technique is focused crawling, which is able to crawl particular topical...
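The focused crawling described in this abstract can be sketched as a best-first search over the link graph: URLs wait in a priority queue ordered by an estimated topical relevance, and off-topic branches are pruned. The link graph, page texts, and scoring function below are invented for illustration (a real crawler would fetch pages over HTTP); this is a sketch of the idea, not the authors' implementation.

```python
import heapq

# Toy link graph and page texts standing in for the live web (illustrative only).
LINKS = {"seed": ["a", "b"], "a": ["c"], "b": ["d"], "c": [], "d": []}
TEXT = {
    "seed": "web crawling search engines",
    "a": "focused web crawling for topical search",
    "b": "cooking recipes and baking",
    "c": "crawler frontier scheduling for search",
    "d": "gardening tips",
}
TOPIC = {"web", "crawling", "crawler", "search"}

def relevance(url):
    """Fraction of topic terms appearing in the page text (a stand-in scorer)."""
    words = set(TEXT[url].split())
    return len(words & TOPIC) / len(TOPIC)

def focused_crawl(seed, threshold=0.3):
    """Best-first crawl: always expand the most promising URL, skip low scorers."""
    frontier = [(-relevance(seed), seed)]  # max-heap via negated scores
    seen, visited = {seed}, []
    while frontier:
        neg_score, url = heapq.heappop(frontier)
        if -neg_score < threshold:
            continue  # prune off-topic branches entirely
        visited.append(url)
        for nxt in LINKS[url]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-relevance(nxt), nxt))
    return visited

print(focused_crawl("seed"))  # → ['seed', 'a', 'c']; 'b' and 'd' are pruned
```

The priority queue is what distinguishes this from an unfocused breadth-first crawl: the crawler spends its download budget on the most topical pages first.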

2016
Steffen Remus Christian Biemann

This work presents a straightforward method for extending or creating in-domain web corpora by focused webcrawling. The focused webcrawler uses statistical N-gram language models to estimate the relatedness of documents and weblinks and needs as input only N-grams or plain texts of a predefined domain and seed URLs as starting points. Two experiments demonstrate that our focused crawler is able...
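The N-gram relatedness idea above can be illustrated with a simple coverage score: build the set of word N-grams from in-domain text and score a candidate document by the fraction of its N-grams that the domain model contains. This is a simplified stand-in for the statistical N-gram language models the paper uses, and all texts here are invented.

```python
def ngrams(text, n=2):
    """Set of word n-grams in the lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def relatedness(domain_model, document, n=2):
    """Fraction of the document's n-grams that appear in the domain model."""
    doc = ngrams(document, n)
    return len(doc & domain_model) / len(doc) if doc else 0.0

# In-domain seed text builds the "language model" (here just a bigram set).
domain = ngrams("focused web crawling builds in-domain corpora "
                "focused crawling follows topical web links")

on_topic = "focused web crawling follows topical web links"
off_topic = "the recipe calls for two eggs and flour"

print(relatedness(domain, on_topic))   # high coverage
print(relatedness(domain, off_topic))  # zero coverage
```

In the focused crawler this score would rank the links extracted from each fetched page, so the frontier stays close to the predefined domain.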

2012
Vineet Singh Ayushi Srivastava Suman Yadav

A focused crawler is a web crawler that attempts to download only web pages relevant to a pre-defined topic or set of topics. Focused crawling also assumes that some labeled examples of relevant and non-relevant pages are available. The topic can be represented by a set of keywords (we call them seed keywords) or example URLs. The key to designing an efficient focused crawler is how to ju...
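The relevance judgment from labeled examples that this abstract mentions can be sketched with a tiny multinomial Naive Bayes text classifier. The training snippets and labels below are made up, and Naive Bayes is one common choice for this step, not necessarily the classifier the authors use.

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and doc counts."""
    counts = {lbl: Counter() for _, lbl in examples}
    docs = Counter(lbl for _, lbl in examples)
    for text, lbl in examples:
        counts[lbl].update(text.lower().split())
    return counts, docs

def classify(counts, docs, text):
    """Multinomial Naive Bayes with add-one smoothing; returns the best label."""
    vocab = {w for c in counts.values() for w in c}
    total_docs = sum(docs.values())
    best, best_lp = None, float("-inf")
    for lbl, c in counts.items():
        lp = math.log(docs[lbl] / total_docs)        # class prior
        denom = sum(c.values()) + len(vocab)         # smoothed denominator
        for w in text.lower().split():
            lp += math.log((c[w] + 1) / denom)       # smoothed word likelihood
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

# Hypothetical labeled pages (relevant vs. irrelevant to the crawl topic).
examples = [
    ("focused crawler downloads topical web pages", "relevant"),
    ("web crawling and search engine indexing", "relevant"),
    ("holiday travel photos and recipes", "irrelevant"),
    ("sports scores and movie reviews", "irrelevant"),
]
counts, docs = train(examples)
print(classify(counts, docs, "crawler indexing web pages"))  # → relevant
```

A crawler would run this classifier over each fetched page and follow outgoing links only from pages judged relevant.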
