
Java Crawler Code: Implementing a Crawler in Java

How to crawl job listings with a Java crawler

1. Approach


Identify the information to crawl

Analyze the page structure

Design the crawl workflow

Optimize

2. Identify the information to crawl

Job title

Salary

Job description

Company name

Company homepage

Detail page URL


3. Target site: Lagou (拉勾网)

The site delivers its listing data as JSON, so the first step is to analyze the JSON response and pick out the key fields the crawler needs; the JsonPath expressions in the code below read exactly those fields ($.content.positionResult.totalCount and $.content.positionResult.result[*].positionId).

Then work out where each required field sits in the detail page, and use Jsoup to parse the HTML.

4. Crawl workflow

1. Collect every positionId, build the detail-page URLs, and store them in a List<String> joburls.

2. Fetch each detail page and parse it into a Job object (a sketch of the Job class follows below), giving a List<Job> jobList.

3. Write the List<Job> jobList into an Excel spreadsheet.

Writing the Excel file from Java uses the jxl library.
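The original post never shows the Job class itself; a minimal data holder consistent with the getters and setters used in the snippets below would look like this (the field names are inferred from the code, not taken from the original source):

// Minimal Job POJO inferred from the accessors used below; treat it as a sketch.
public class Job {
    private String jobname;     // job title
    private String salary;      // salary range
    private String jobdesc;     // job description (includes requirements)
    private String company;     // company name
    private String companysite; // company homepage
    private String jobdsite;    // detail page URL

    public String getJobname() { return jobname; }
    public void setJobname(String jobname) { this.jobname = jobname; }
    public String getSalary() { return salary; }
    public void setSalary(String salary) { this.salary = salary; }
    public String getJobdesc() { return jobdesc; }
    public void setJobdesc(String jobdesc) { this.jobdesc = jobdesc; }
    public String getCompany() { return company; }
    public void setCompany(String company) { this.company = company; }
    public String getCompanysite() { return companysite; }
    public void setCompanysite(String companysite) { this.companysite = companysite; }
    public String getJobdsite() { return jobdsite; }
    public void setJobdsite(String jobdsite) { this.jobdsite = jobdsite; }
}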

5. Key code

public List<String> getJobUrls(String gj, String city, String kd) {
    // NOTE: the request URLs were stripped when this post was published; the
    // empty string literals below mark where the base URLs belong.
    // pn is a page-counter field defined elsewhere in the class.
    String pre_url = "";      // detail-page URL prefix (stripped)
    String end_url = ".html";
    String url;
    if (gj.equals("")) {
        url = ";city=" + city + "&needAddtionalResult=false&first=false&pn=" + pn + "&kd=" + kd;
    } else {
        url = "" + gj + "&px=default&city=" + city + "&needAddtionalResult=false&first=false&pn=" + pn + "&kd=" + kd;
    }
    String rs = getJson(url);
    System.out.println(rs);
    int total = JsonPath.read(rs, "$.content.positionResult.totalCount"); // total number of positions
    int pagesize = total / 15; // 15 positions per page
    if (pagesize >= 30) {      // cap at 30 pages
        pagesize = 30;
    }
    System.out.println(total);
    List<Integer> posid = JsonPath.read(rs, "$.content.positionResult.result[*].positionId"); // ids on the first page
    for (int j = 1; j <= pagesize; j++) { // collect the ids from the remaining pages
        pn++; // advance the page number
        url = "" + gj + "&px=default&city=" + city + "&needAddtionalResult=false&first=false&pn=" + pn + "&kd=" + kd;
        String rs2 = getJson(url);
        List<Integer> posid2 = JsonPath.read(rs2, "$.content.positionResult.result[*].positionId");
        posid.addAll(posid2); // append to the first list
    }
    // build the detail-page URL list
    List<String> joburls = new ArrayList<>();
    for (int id : posid) {
        joburls.add(pre_url + id + end_url);
    }
    return joburls;
}

public Job getJob(String url) { // fetch one detail page and parse it into a Job
    Job job = new Job();
    Document document = Jsoup.parse(getJson(url));
    job.setJobname(document.select(".name").text());
    job.setSalary(document.select(".salary").text());
    // HtmlTool.tag is a project-local helper that strips HTML tags
    String joball = HtmlTool.tag(document.select(".job_bt").select("div").html());
    job.setJobdesc(joball); // the description includes the requirements
    job.setCompany(document.select(".b2").attr("alt"));
    Elements elements = document.select(".c_feature");
    job.setCompanysite(elements.select("a").attr("href")); // company homepage
    job.setJobdsite(url);
    return job;
}
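Both methods above call getJson(url), a helper the original post never shows. A minimal sketch, assuming it simply performs an HTTP GET and returns the response body as a String (the method name comes from how it is used; the real implementation presumably also sets whatever cookies and headers the site requires):

// Hedged sketch of the missing getJson helper: plain HTTP GET returning the body.
// Requires: java.io.BufferedReader, java.io.InputStreamReader, java.io.IOException,
// java.net.HttpURLConnection, java.net.URL, java.nio.charset.StandardCharsets
private String getJson(String urlString) {
    try {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setRequestProperty("User-Agent", "Mozilla/5.0");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
            return sb.toString();
        }
    } catch (IOException e) {
        e.printStackTrace();
        return ""; // swallow errors so callers without a throws clause still compile
    }
}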

void insertExcel(List<Job> jobList) throws IOException, BiffException, WriteException {
    int row = 1;
    Workbook wb = Workbook.getWorkbook(new File(JobCondition.filename));
    WritableWorkbook book = Workbook.createWorkbook(new File(JobCondition.filename), wb);
    WritableSheet sheet = book.getSheet(0);
    for (int i = 0; i < jobList.size(); i++) { // write one row per job
        sheet.addCell(new Label(0, row, jobList.get(i).getJobname()));
        sheet.addCell(new Label(1, row, jobList.get(i).getSalary()));
        sheet.addCell(new Label(2, row, jobList.get(i).getJobdesc()));
        sheet.addCell(new Label(3, row, jobList.get(i).getCompany()));
        sheet.addCell(new Label(4, row, jobList.get(i).getCompanysite()));
        sheet.addCell(new Label(5, row, jobList.get(i).getJobdsite()));
        row++;
    }
    book.write();
    book.close();
}
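insertExcel opens an existing workbook at JobCondition.filename, so the file must already exist with a header row. A minimal sketch that creates it with jxl (the sheet name and column titles are assumptions, chosen to match the columns written above):

// Hedged sketch: create the initial workbook that insertExcel expects.
// JobCondition.filename comes from the original code; everything else is assumed.
void createExcel() throws IOException, WriteException {
    WritableWorkbook book = Workbook.createWorkbook(new File(JobCondition.filename));
    WritableSheet sheet = book.createSheet("jobs", 0);
    String[] headers = {"Job title", "Salary", "Description", "Company", "Company site", "Detail URL"};
    for (int col = 0; col < headers.length; col++) {
        sheet.addCell(new Label(col, 0, headers[col])); // header row at row 0
    }
    book.write();
    book.close();
}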

How to write a crawler in Java


import java.io.File;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;
import java.util.UUID;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DownMM {
    public static void main(String[] args) throws Exception {
        // out is the output directory; note that it must end with \\
        String out = "D:\\JSP\\pic\\java\\";
        try {
            File f = new File(out);
            if (!f.exists()) {
                f.mkdirs();
            }
        } catch (Exception e) {
            System.out.println("no");
        }
        String url = "-"; // base URL of the listing pages (stripped from the original post)
        Pattern reg = Pattern.compile("<img src=\"(.*?)\"");
        for (int j = 0, i = 1; i <= 10; i++) { // crawl pages 1 through 10
            URL uu = new URL(url + i);
            URLConnection conn = uu.openConnection();
            conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko");
            Scanner sc = new Scanner(conn.getInputStream());
            // read the whole response, then regex out every image URL and save it
            Matcher m = reg.matcher(sc.useDelimiter("\\A").next());
            while (m.find()) {
                Files.copy(new URL(m.group(1)).openStream(), Paths.get(out + UUID.randomUUID() + ".jpg"));
                System.out.println("Downloaded: " + j++);
            }
        }
    }
}

Writing a web crawler in Java: asking for code and a workflow (urgent)

import java.awt.*;
import java.awt.event.*;
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.regex.*;
import javax.swing.*;
import javax.swing.table.*;

// A web crawler ("crawling" here means the same thing as fetching or scraping)
public class SearchCrawler extends JFrame {
    // options for the maximum number of URLs to crawl
    private static final String[] MAX_URLS = {"50", "100", "500", "1000"};
    // cache of per-site robots.txt disallow lists
    private HashMap disallowListCache = new HashMap();
    // search GUI controls
    private JTextField startTextField;
    private JComboBox maxComboBox;
    private JCheckBox limitCheckBox;
    private JTextField logTextField;
    private JTextField searchTextField;
    private JCheckBox caseCheckBox;
    private JButton searchButton;
    // crawl-status GUI controls
    private JLabel crawlingLabel2;
    private JLabel crawledLabel2;
    private JLabel toCrawlLabel2;
    private JProgressBar progressBar;
    private JLabel matchesLabel2;
    // table listing the search matches
    private JTable table;
    // flag: is the crawler currently crawling?
    private boolean crawling;
    // writer for the match log file
    private PrintWriter logFileWriter;

    // constructor
    public SearchCrawler() {
        // set the application title bar
        setTitle("Search Crawler");
        // set the window size
        setSize(600, 600);
        // handle the window-close event
        addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                actionExit();
            }
        });
        // set up the File menu
        JMenuBar menuBar = new JMenuBar();
        JMenu fileMenu = new JMenu("File");
        fileMenu.setMnemonic(KeyEvent.VK_F);
        JMenuItem fileExitMenuItem = new JMenuItem("Exit", KeyEvent.VK_X);
        fileExitMenuItem.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                actionExit();
            }
        });
        fileMenu.add(fileExitMenuItem);
        menuBar.add(fileMenu);
        setJMenuBar(menuBar);
        // ... the rest of the constructor and the crawl logic were cut off in the original answer
    }
}

How do you implement a web crawler in Java?

A web crawler is a program that automatically fetches web pages; it downloads pages from the web for a search engine and is one of a search engine's core components.

A traditional crawler starts from the URLs of one or more seed pages, extracts the URLs found on them, and keeps pulling new URLs off the current page into a queue as it crawls, until some stop condition is met. For vertical search, a focused crawler, i.e. one that selectively crawls pages on a specific topic, is the better fit.

Below is the core of a simple crawler implemented in Java:

public void crawl() throws Throwable {
    while (continueCrawling()) {
        CrawlerUrl url = getNextUrl(); // next URL from the crawl queue
        if (url != null) {
            printCrawlInfo();
            String content = getContent(url); // fetch the page body
            // a focused crawler only keeps pages relevant to its topic;
            // a simple regex match is used here
            if (isContentRelevant(content, this.regexpSearchPattern)) {
                saveContent(url, content); // save the page locally
                // extract the links on the page and enqueue them
                Collection urlStrings = extractUrls(content, url);
                addUrlsToUrlQueue(url, urlStrings);
            } else {
                System.out.println(url + " is not relevant ignoring ...");
            }
            // delay to avoid being blocked by the target site
            Thread.sleep(this.delayBetweenUrls);
        }
    }
    closeOutputStream();
}

private CrawlerUrl getNextUrl() throws Throwable {
    CrawlerUrl nextUrl = null;
    while ((nextUrl == null) && (!urlQueue.isEmpty())) {
        CrawlerUrl crawlerUrl = this.urlQueue.remove();
        // doWeHavePermissionToVisit: may we visit this URL? A polite crawler
        //   follows the rules the site publishes in its robots.txt
        // isUrlAlreadyVisited: has the URL been visited before? Large search
        //   engines deduplicate with a BloomFilter; a HashMap is enough here
        // isDepthAcceptable: have we hit the depth limit? Crawlers usually go
        //   breadth-first; some sites build crawler traps (auto-generated dead
        //   links that pull a crawler into an endless loop), which a depth
        //   limit avoids
        if (doWeHavePermissionToVisit(crawlerUrl)
                && (!isUrlAlreadyVisited(crawlerUrl))
                && isDepthAcceptable(crawlerUrl)) {
            nextUrl = crawlerUrl;
            // System.out.println("Next url to be visited is " + nextUrl);
        }
    }
    return nextUrl;
}

private String getContent(CrawlerUrl url) throws Throwable {
    // the HttpClient 4.1 API differs from earlier versions
    HttpClient client = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(url.getUrlString());
    StringBuffer strBuf = new StringBuffer();
    HttpResponse response = client.execute(httpGet);
    if (HttpStatus.SC_OK == response.getStatusLine().getStatusCode()) {
        HttpEntity entity = response.getEntity();
        if (entity != null) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(entity.getContent(), "UTF-8"));
            String line = null;
            if (entity.getContentLength() > 0) {
                strBuf = new StringBuffer((int) entity.getContentLength());
                while ((line = reader.readLine()) != null) {
                    strBuf.append(line);
                }
            }
        }
        if (entity != null) {
            entity.consumeContent();
        }
    }
    // mark the URL as visited
    markUrlAsVisited(url);
    return strBuf.toString();
}

public static boolean isContentRelevant(String content,
        Pattern regexpPattern) {
    boolean retValue = false;
    if (content != null) {
        // does the content match the regular expression?
        Matcher m = regexpPattern.matcher(content.toLowerCase());
        retValue = m.find();
    }
    return retValue;
}

public List extractUrls(String text, CrawlerUrl crawlerUrl) {
    Map urlMap = new HashMap();
    extractHttpUrls(urlMap, text);
    extractRelativeUrls(urlMap, text, crawlerUrl);
    return new ArrayList(urlMap.keySet());
}

private void extractHttpUrls(Map urlMap, String text) {
    // the pattern field name is garbled in the original post; by symmetry with
    // extractRelativeUrls below it is presumably an httpRegexp pattern
    Matcher m = httpRegexp.matcher(text);
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("<a href=\"");
        for (String term : terms) {
            // System.out.println("Term = " + term);
            if (term.startsWith("http")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                urlMap.put(term, term);
                System.out.println("Hyperlink: " + term);
            }
        }
    }
}

private void extractRelativeUrls(Map urlMap, String text,
        CrawlerUrl crawlerUrl) {
    Matcher m = relativeRegexp.matcher(text);
    URL textURL = crawlerUrl.getURL();
    String host = textURL.getHost();
    while (m.find()) {
        String url = m.group();
        String[] terms = url.split("<a href=\"");
        for (String term : terms) {
            if (term.startsWith("/")) {
                int index = term.indexOf("\"");
                if (index > 0) {
                    term = term.substring(0, index);
                }
                String s = "http://" + host + term;
                urlMap.put(s, s);
                System.out.println("Relative url: " + s);
            }
        }
    }
}

public static void main(String[] args) {
    try {
        String url = ""; // seed URL (stripped from the original post)
        Queue urlQueue = new LinkedList();
        String regexp = "java";
        urlQueue.add(new CrawlerUrl(url, 0));
        NaiveCrawler crawler = new NaiveCrawler(urlQueue, 100, 5, 1000L,
                regexp);
        // boolean allowCrawl = crawler.areWeAllowedToVisit(url);
        // System.out.println("Allowed to crawl: " + url + " " + allowCrawl);
        crawler.crawl();
    } catch (Throwable t) {
        System.out.println(t.toString());
        t.printStackTrace();
    }
}
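The snippet calls several helpers that are never shown (isUrlAlreadyVisited, markUrlAsVisited, isDepthAcceptable). A minimal sketch consistent with the in-memory dedup and depth limit the comments in getNextUrl describe; the field names and the CrawlerUrl.getDepth() accessor are assumptions, not the original code:

// Hedged sketch of the unshown helpers; all names below are assumptions.
private final Map<String, CrawlerUrl> visitedUrlMap = new HashMap<>(); // assumed field
private final int maxDepth = 5;                                        // assumed field

private boolean isUrlAlreadyVisited(CrawlerUrl url) {
    return visitedUrlMap.containsKey(url.getUrlString());
}

private void markUrlAsVisited(CrawlerUrl url) {
    visitedUrlMap.put(url.getUrlString(), url);
}

private boolean isDepthAcceptable(CrawlerUrl url) {
    // assumes CrawlerUrl records the depth at which it was enqueued
    return url.getDepth() <= maxDepth;
}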

How do you implement a web crawler in Java?

1. In Internet Explorer, click the gear icon at the top right of the window and choose "Internet Options".

2. In the Internet Options window that opens, switch to the Security tab and click "Custom level...".

3. In the "Security Settings - Internet Zone" dialog, locate "Scripting of Java applets" and "Active scripting", set both options to "Disable", and click OK.


