HTML::RobotsMETA(3) User Contributed Perl Documentation HTML::RobotsMETA(3)

NAME
HTML::RobotsMETA - Parse HTML For Robots Exclusion META Markup

SYNOPSIS

  use HTML::RobotsMETA;
  my $p = HTML::RobotsMETA->new;
  my $r = $p->parse_rules($html);
  if ($r->can_follow) {
    # follow links here!
  } else {
    # can't follow...
  }

DESCRIPTION
HTML::RobotsMETA is a simple HTML::Parser subclass that extracts robots exclusion information from meta tags. There's not much more to it ;)

Currently HTML::RobotsMETA understands the following directives:
ALL
NONE
INDEX
NOINDEX
FOLLOW
NOFOLLOW
ARCHIVE
NOARCHIVE
SERVE
NOSERVE
NOIMAGEINDEX
NOIMAGECLICK
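
As an illustration, a page whose robots META tag combines several of these directives should produce a rules object whose accessors reflect each one. The sketch below is only a guess at the accessor names beyond the SYNOPSIS: can_follow is shown above, while can_index and can_archive are assumed to follow the same naming pattern and are not documented on this page.

  use strict;
  use warnings;
  use HTML::RobotsMETA;

  # A document marked NOINDEX,FOLLOW,NOARCHIVE: links may be followed,
  # but the page itself should not be indexed or archived.
  my $html = join "\n",
      '<html><head>',
      '  <meta name="robots" content="NOINDEX,FOLLOW,NOARCHIVE">',
      '</head><body><a href="/next">next page</a></body></html>';

  my $rules = HTML::RobotsMETA->new->parse_rules($html);

  print "follow links\n" if $rules->can_follow;    # accessor shown in SYNOPSIS
  print "index page\n"   if $rules->can_index;     # assumed accessor name
  print "archive page\n" if $rules->can_archive;   # assumed accessor name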

METHODS

new

Creates a new HTML::RobotsMETA parser. Takes no arguments.

parse_rules($html)

Parses an HTML string for META tags, and returns an HTML::RobotsMETA::Rules object, which you can use in conditionals later.
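
As a rough usage sketch, parse_rules can sit between fetching a page and deciding whether to queue its links. This assumes LWP::Simple for the fetch and a placeholder URL; only the can_follow accessor from the SYNOPSIS is used.

  use strict;
  use warnings;
  use HTML::RobotsMETA;
  use LWP::Simple qw(get);

  my $parser = HTML::RobotsMETA->new;
  my $url    = 'http://www.example.com/';               # placeholder URL
  my $html   = get($url) or die "could not fetch $url";

  # Extract robots exclusion rules from the fetched document.
  my $rules = $parser->parse_rules($html);

  if ($rules->can_follow) {
      # safe to extract links from $html and queue them for crawling
  }
  else {
      # following was disallowed; skip this page's links
  }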

Two further methods return, respectively, the HTML::Parser instance to use and the callback specs to be used in the HTML::Parser constructor.

CAVEATS
Tags that specify a particular crawler name (e.g. <META NAME="Googlebot">) are not handled yet.

There may also be more obscure directives that I'm not aware of.

AUTHOR
Copyright (c) 2007 Daisuke Maki <daisuke@endeworks.jp>

SEE ALSO
HTML::RobotsMETA::Rules, HTML::Parser

LICENSE
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

See http://www.perl.com/perl/misc/Artistic.html

2007-11-30 perl v5.32.1
