Peking University Tianwang Search Engine (TSE) Source Code

Date: 2014-03-15 07:32:35
[File Attributes]:

File name: Peking University Tianwang Search Engine (TSE) Source Code

File size: 152KB

File format: ZIP

Updated: 2014-03-15 07:32:35

Keywords: Peking University, Tianwang, source code, TSE, search engine

TSE (Tiny Search Engine)
=======================
(Temporary) Web home: http://162.105.80.44/~yhf/Realcourse/

TSE is a free utility for non-interactive download of files from the Web. It supports HTTP. According to a query word or URL, it retrieves results from crawled pages. It can follow links in HTML pages and create output files in Tianwang (http://e.pku.edu.cn/) format or ISAM format. Additionally, it provides link structures which can be used to rebuild the web graph.
---------------------------
Main functions in TSE:
1) normal crawling, named SE, e.g. crawling all pages within the PKU scope, and retrieving results from crawled pages according to a query word or URL;
2) crawling images and their corresponding pages, named ImgSE.
---------------------------
INSTALL:
1) execute "tar xvfz tse.XXX.gz"
---------------------------
Before running the program, note:
The program defaults to normal crawling (SE). For ImgSE, you should:
1. Change the code as follows:
   1) In the "Page.cpp" file, find the two identically named functions "CPage::IsFilterLink(string plink)". One is for ImgSE, whose URLs must include "tupian", "photo", "ttjstk", etc.; the other is for normal crawling. For ImgSE, comment out the SE paragraph and choose the right "CPage::IsFilterLink(string plink)". For SE, uncomment that paragraph and choose the right "CPage::IsFilterLink(string plink)".
   2) In the "Http.cpp" file:
      i. find "if( iPage.m_sContentType.find("image") != string::npos )" and comment out the right paragraph.
   3) In the "Crawl.cpp" file:
      i. find "if( iPage.m_sContentType != "text/html" " and comment out the right paragraph;
      ii. find "if(file_length < 40)" and choose the right line;
      iii. find "iMD5.GenerateMD5( (unsigned char*)iPage.m_sContent.c_str(), iPage.m_sContent.length() )" and comment out the right paragraph;
      iv. find "if (iUrl.IsImageUrl(strUrl))" and comment out the right paragraph.
2. Run "sh Clean" (note: do not remove link4History.url; comment out the "rm -f link4History.url" line first), then use "link4History.url" as the seed file. "link4History.url" is produced during normal crawling (SE).
---------------------------
EXECUTION:
execute "make clean; sh Clean; make".
1) for normal crawling and retrieving:
   ./Tse -c tse_seed.img
   then, to retrieve results from crawled pages according to a query word or URL:
   ./Tse -s
2) for ImgSE:
   ./Tse -c tse_seed.img
   After moving the Tianwang.raw.* data to a secure place, execute:
   ./Tse -c link4History.url
---------------------------
Detailed functions:
1) supports multithreaded page crawling
2) persistent HTTP connections
3) DNS cache
4) IP blocks
5) filters unreachable hosts
6) parses hyperlinks from crawled pages
7) recursively crawls pages
8) outputs Tianwang format or ISAM format files
---------------------------
Files in the package:
Tse --- TSE executable
tse_unreachHost.list --- unreachable hosts according to the PKU IP block
tse_seed.pku --- PKU seeds
tse_ipblock --- PKU IP block
...
Directories in the package:
hlink, include, lib, stack, uri --- parse links from a page
---------------------------
Please report bugs in TSE to MAINTAINERS: YAN Hongfei
* Created: YAN Hongfei, Network Lab of Peking University.
* Created: July 15 2003. version 0.1.1
*   # can crawl web pages with a single process
* Updated: Aug 20 2003. version 1.0.0 !!!!
*   # can crawl web pages with multiple threads
* Updated: Nov 08 2003. version 1.0.1
*   # more classes in the code
* Updated: Nov 16 2003. version 1.1.0
*   # integrated a new version of the link parser provided by XIE Han
*   # according to the MD5 values of page content, a new page is stored only for pages not seen before
* Updated: Nov 21 2003. version 1.1.1
*   # record all duplicate URLs in terms of content MD5


[File Preview]:
tse
----Main.cpp(1KB)
----Url.cpp(12KB)
----DataEngine.h(505B)
----Md5.cpp(9KB)
----tfind.cpp(2KB)
----Crawl.h(2KB)
----Clean.sh(329B)
----Http.cpp(21KB)
----Rules.make(511B)
----tse_seed.robots(32B)
----tse_seed.pku.bak(1KB)
----tse_seed.pku(9KB)
----Url.h(2KB)
----Stat.cpp(1KB)
----FileEngine.cpp(558B)
----mt.txt(58B)
----CommonDef.h(899B)
----Md5.h(1KB)
----Page.h(3KB)
----IsamFile.cpp(3KB)
----seed(1KB)
----StrFun.cpp(2KB)
----DataEngine.cpp(145B)
----StrFun.h(858B)
----Http.h(567B)
----include()
--------uri.h(2KB)
--------hlink.h(598B)
--------stack.h(1KB)
--------misc.h(840B)
--------list.h(6KB)
----IsamFile.h(527B)
----Page.cpp(24KB)
----TianwangFile.h(391B)
----pku.hosts(7KB)
----Link4SEFile.cpp(1KB)
----remind.txt(1KB)
----tse_seed.img(385B)
----Tse.h(4KB)
----hlink()
--------hlink.h(598B)
--------hlink.l.0(7KB)
--------lex.hlink.c(344KB)
--------hlink.l(8KB)
--------hlink.l.bak(8KB)
--------Makefile(241B)
----Link4SEFile.h(383B)
----seeds(1KB)
----TianwangFile.cpp(1KB)
----Search.h(252B)
----Res.cpp(2KB)
----tfindForeign.cpp(1KB)
----FileEngine.h(553B)
----Search.cpp(5KB)
----DatabaseEngine.cpp(138B)
----README(4KB)
----uri()
--------uri.h(2KB)
--------.uri.h.swp(12KB)
--------uri.l(31KB)
--------Makefile(204B)
----Crawl.cpp(36KB)
----tse_unreachHost.list(1KB)
----tse_ipblock(0B)
----Design-doc.txt(249B)
----tse_seed.net(234B)
----DatabaseEngine.h(301B)
----tse_seed.gh(18B)
----Makefile(2KB)
----lib()
--------stack.h(1KB)
--------misc.h(840B)
--------list.h(6KB)
--------stack.c(4KB)
--------Makefile(196B)
--------misc.c(226B)
----stack()
--------stack.h(1KB)
--------stack.c(4KB)
--------Makefile(132B)

User Comments

  • Really impressive; still figuring it out.
  • Very impressive code, great for learning.
  • Good content, worth a look for reference.
  • Worth a look; fairly simple.
  • Seems to have a few minor issues, but still good for learning.
  • The code has some bugs and needs debugging!
  • Very practical; helps in understanding the concrete implementation of a search engine.
  • A bit old, but still well worth studying!
  • The content is incomplete, but thanks for sharing!
  • I find it a bit hard to understand.
  • There are errors; I'm debugging now.
  • There are four errors; fix them yourself and it works. Overall, not bad.
  • Good content, useful as a reference, worth downloading.
  • There are two small errors that cause compile failures; follow the error messages and it's easy to get working. Usable, not bad.
  • Seems incomplete; I couldn't get it to run!