Introduction to the Nutch Search Engine (1): Using Nutch

王朝java/jsp · Author: Anonymous · 2006-01-09

Tutorial

Requirements

Java 1.4.x; either the Sun or IBM JVM on Linux is preferred. Set NUTCH_JAVA_HOME to the root of your JVM installation.
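For example, assuming a Sun JDK unpacked under /usr/java/j2sdk1.4.2 (the path is illustrative; substitute your own installation), you might add the following to your shell profile:

export NUTCH_JAVA_HOME=/usr/java/j2sdk1.4.2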

Apache's Tomcat 4.x.

On Win32, cygwin, for shell support. (If you plan to use CVS on Win32, be sure to select the cvs and openssh packages when you install, in the "Devel" and "Net" categories, respectively.)

Up to a gigabyte of free disk space, a high-speed connection, and an hour or so.

Getting Started

First, you need to get a copy of the Nutch code. You can download a release from http://www.nutch.org/release/. Unpack the release and connect to its top-level directory. Or, check out the latest source code from CVS and build it with Ant.
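For example, assuming you downloaded a release tarball (the file name and version here are illustrative; use whatever you actually downloaded):

tar xzf nutch-0.7.tar.gz
cd nutch-0.7

If you checked the sources out of CVS instead, run ant in the top-level directory of the checkout to build.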

Try the following command:

bin/nutch

This will display the documentation for the Nutch command script.

Now we're ready to crawl. There are two approaches to crawling:

Intranet crawling, with the crawl command.

Whole-web crawling, with much greater control, using the lower level inject, generate, fetch and updatedb commands.

Intranet Crawling

Intranet crawling is more appropriate when you intend to crawl up to around one million pages on a handful of web servers.

Intranet: Configuration

To configure things for intranet crawling you must do two things (a concrete sketch of both steps follows the list):

Create a flat file of root urls. For example, to crawl the nutch.org site you might start with a file named urls containing just the Nutch home page. All other Nutch pages should be reachable from this page. The urls file would thus look like:

http://www.nutch.org/

Edit the file conf/crawl-urlfilter.txt and replace MY.DOMAIN.NAME with the name of the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.org domain, the line should read:

+^http://([a-z0-9]*\.)*nutch.org/

This will include any url in the domain nutch.org.
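As a concrete sketch of both steps (the sed invocation assumes GNU sed and is just one way to make the edit; any text editor works equally well):

echo 'http://www.nutch.org/' > urls

sed -i 's/MY.DOMAIN.NAME/nutch.org/' conf/crawl-urlfilter.txt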

Intranet: Running the Crawl

Once things are configured, running the crawl is easy. Just use the crawl command. Its options include:

-dir dir names the directory to put the crawl in.

-depth depth indicates the link depth from the root page that should be crawled.

-delay delay determines the number of seconds between accesses to each host.

-threads threads determines the number of threads that will fetch in parallel.

For example, a typical call might be:

bin/nutch crawl urls -dir crawl.test -depth 3 >& crawl.log

Typically one starts testing one's configuration by crawling at low depths, and watching the output to check that desired pages are found. Once one is more confident of the configuration, then an appropriate depth for a full crawl is around 10.
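Putting those options together, a fuller invocation for a full crawl of the kind just described might look like this (the output directory, depth, and thread count are illustrative):

bin/nutch crawl urls -dir crawl.test -depth 10 -threads 4 >& crawl.log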

Once crawling has completed, one can skip to the Searching section below.

Whole-web Crawling

Whole-web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines.

Whole-web: Concepts

Nutch data is of two types:

The web database. This contains information about every page known to Nutch, and about links between those pages.

A set of segments. Each segment is a set of pages that are fetched and indexed as a unit. Segment data consists of the following types:

a fetchlist is a file that names a set of pages to be fetched

the fetcher output is a set of files containing the fetched pages

the index is a Lucene-format index of the fetcher output.

In the following examples we will keep our web database in a directory named db and our segments in a directory named segments:

mkdir db

mkdir segments

Whole-web: Bootstrapping the Web Database

The admin tool is used to create a new, empty database:

bin/nutch admin db -create

The injector adds urls into the database. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+MB file, so this will take a few minutes.)

wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz

gunzip content.rdf.u8.gz

Next we inject a random subset of these pages into the web database. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We inject one out of every 3000, so that we end up with around 1000 URLs:

bin/nutch inject db -dmozfile content.rdf.u8 -subset 3000

This also takes a few minutes, as it must parse the full file.

Now we have a web database with around 1000 as-yet unfetched URLs in it.

Whole-web: Fetching

To fetch, we first generate a fetchlist from the database:

bin/nutch generate db segments

This generates a fetchlist for all of the pages due to be fetched. The fetchlist is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:

s1=`ls -d segments/2* | tail -1`

echo $s1

Now we run the fetcher on this segment with:

bin/nutch fetch $s1

When this is complete, we update the database with the results of the fetch:

bin/nutch updatedb db $s1

Now the database has entries for all of the pages referenced by the initial set.

Next we run five iterations of link analysis on the database in order to prioritize which pages to next fetch:

bin/nutch analyze db 5

Now we fetch a new segment with the top-scoring 1000 pages:

bin/nutch generate db segments -topN 1000

s2=`ls -d segments/2* | tail -1`

echo $s2

bin/nutch fetch $s2

bin/nutch updatedb db $s2

bin/nutch analyze db 2

Let's fetch one more round:

bin/nutch generate db segments -topN 1000

s3=`ls -d segments/2* | tail -1`

echo $s3

bin/nutch fetch $s3

bin/nutch updatedb db $s3

bin/nutch analyze db 2
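Since each round repeats the same steps, the generate/fetch/updatedb/analyze cycle above can be sketched as a small shell loop (the round count, -topN value, and analyze iterations simply mirror the later rounds of the example):

for round in 1 2 3; do
  bin/nutch generate db segments -topN 1000
  s=`ls -d segments/2* | tail -1`
  bin/nutch fetch $s
  bin/nutch updatedb db $s
  bin/nutch analyze db 2
done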

By this point we've fetched a few thousand pages. Let's index them!

Whole-web: Indexing

To index each segment we use the index command, as follows:

bin/nutch index $s1

bin/nutch index $s2

bin/nutch index $s3
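If you would rather not track segment names in shell variables, an equivalent sketch is to index every segment directory under segments/:

for segment in segments/*; do
  bin/nutch index $segment
done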

Then, before we can search a set of segments, we need to delete duplicate pages. This is done with:

bin/nutch dedup segments dedup.tmp

Now we're ready to search!

Searching

To search you need to put the nutch war file into your servlet container. (If instead of downloading a Nutch release you checked the sources out of CVS, then you'll first need to build the war file, with the command ant war.)

Assuming you've unpacked Tomcat as ~/local/tomcat, then the Nutch war file may be installed with the commands:

rm -rf ~/local/tomcat/webapps/ROOT*

cp nutch*.war ~/local/tomcat/webapps/ROOT.war

The webapp finds its indexes in ./segments, relative to the directory where you start Tomcat. So if you've done intranet crawling, connect to your crawl directory first; if you've done whole-web crawling, stay where you are. Then give the command:

~/local/tomcat/bin/catalina.sh start
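For the intranet-crawl case, for example, that amounts to (the crawl directory name matches the earlier example):

cd crawl.test
~/local/tomcat/bin/catalina.sh start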

Then visit http://localhost:8080/ and have fun!

 
 
 