NAME
    AI::Perceptron - example of a node in a neural network.

SYNOPSIS
    use AI::Perceptron;

    my $p = AI::Perceptron->new
                          ->num_inputs( 2 )
                          ->learning_rate( 0.04 )
                          ->threshold( 0.02 )
                          ->weights([ 0.1, 0.2 ]);

    my @inputs  = ( 1.3, -0.45 );  # input can be any number
    my $target  = 1;               # output is always -1 or 1
    my $current = $p->compute_output( @inputs );
    print "current output: $current, target: $target\n";

    $p->add_examples( [ $target, @inputs ] );

    $p->max_iterations( 10 )->train
        or warn "couldn't train in 10 iterations!";

    print "training until it gets it right\n";
    $p->max_iterations( -1 )->train;  # watch out for infinite loops

DESCRIPTION
    This module is meant to show how a single node of a neural network
    works. Training is done by the Stochastic Approximation of the
    Gradient-Descent model.

MODEL
    Model of a Perceptron:

              +---------------+
     X[1] o---|W[1]        T  |
              |               |
     X[2] o---|W[2] +---------+           +-------------------+
       .      | .   |  ___    |___________|  __  Squarewave   |_______\  Output
       .      | .   |  \      |     S     | __|  Generator    |       /
       .      | .   |  /__    |           +-------------------+
     X[n] o---|W[n] |   Sum   |
              +-----+---------+

     S      = T + Sum( W[i]*X[i] )   as i goes from 1 -> n
     Output = 1 if S > 0; else -1

    Where "X[n]" are the perceptron's inputs, "W[n]" are the weights
    applied to the corresponding inputs, and "T" is the threshold. The
    squarewave generator just turns the result into a positive or
    negative number.

    So, in summary: when you feed the perceptron some numeric inputs,
    you get either a positive or negative output, depending on the
    inputs' weights and the threshold.

TRAINING
    Usually you have to train a perceptron before it will give you the
    outputs you expect. This is done by giving the perceptron a set of
    examples containing the output you want for some given inputs:

        -1 => -1, -1
        -1 =>  1, -1
        -1 => -1,  1
         1 =>  1,  1

    If you've ever studied boolean logic, you should recognize that as
    the truth table for an "AND" gate (ok, so we're using -1 instead of
    the commonly used 0; same thing, really).

    You train the perceptron by iterating over the examples and
    adjusting the weights and threshold by some value until the
    perceptron's output matches the expected output of each example:

        while some examples are incorrectly classified
            update weights for each example that fails

    The value each weight is adjusted by is calculated as follows:

        delta[i] = learning_rate * (expected_output - output) * input[i]

    This is known as a negative feedback loop: it uses the current
    output as an input to determine what the next output will be. Note
    that this also means you can get stuck in an infinite loop; it's
    not a bad idea to set a maximum number of iterations to prevent
    that.
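    To make the update rule concrete, here is a minimal, self-contained
    Perl sketch of the same procedure, written independently of this
    module (compute_output() and train_perceptron() below are
    illustrative helpers, not part of AI::Perceptron's interface):

        use strict;
        use warnings;

        # S = T + Sum( W[i]*X[i] ); squash to -1 or 1 (the squarewave).
        sub compute_output {
            my ($t, $w, $x) = @_;   # threshold, weights, inputs (arrayrefs)
            my $sum = $t;
            $sum += $w->[$_] * $x->[$_] for 0 .. $#$w;
            return $sum > 0 ? 1 : -1;
        }

        # Loop over the examples, nudging the weights (and threshold)
        # until every example is classified correctly or we give up.
        sub train_perceptron {
            my ($t, $w, $examples, $rate, $max_iter) = @_;
            for my $pass (1 .. $max_iter) {
                my $failures = 0;
                for my $ex (@$examples) {
                    my ($expected, @inputs) = @$ex;
                    my $output = compute_output($t, $w, \@inputs);
                    next if $output == $expected;
                    $failures++;
                    # delta[i] = learning_rate * (expected - output) * input[i]
                    $w->[$_] += $rate * ($expected - $output) * $inputs[$_]
                        for 0 .. $#$w;
                    $t += $rate * ($expected - $output);
                }
                return ($t, $w, $pass) if $failures == 0;
            }
            return;   # stuck: no convergence within $max_iter passes
        }

        # The AND-gate examples from above: [ expected, input1, input2 ]
        my @examples = ( [-1,-1,-1], [-1,1,-1], [-1,-1,1], [1,1,1] );
        my ($t, $w, $passes) =
            train_perceptron( 0.02, [ 0.1, 0.2 ], \@examples, 0.04, 100 )
                or die "couldn't train in 100 passes!";
        print "converged after $passes passes: T=$t, W=[@$w]\n";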
CONSTRUCTOR
    new() constructs a new AI::Perceptron object; as the SYNOPSIS above
    shows, the perceptron's attributes are then set through the
    chainable accessors below.

ACCESSORS
    The accessors exercised in the SYNOPSIS are num_inputs(),
    learning_rate(), threshold(), weights() and max_iterations(); the
    chained calls there imply that each returns the object itself when
    used as a setter.
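    A minimal sketch of that chainable-setter pattern (illustrative
    only, not the module's actual source) looks like this:

        sub learning_rate {
            my $self = shift;
            if (@_) {                          # setter: store the value...
                $self->{learning_rate} = shift;
                return $self;                  # ...and return $self to chain
            }
            return $self->{learning_rate};     # getter
        }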
METHODS
    compute_output( @inputs )
        Calculates the perceptron's output (-1 or 1) for the given
        inputs, using the current weights and threshold as described
        under MODEL.

    add_examples( [ $target, @inputs ] )
        Adds a training example of the form shown under TRAINING.

    train()
        Runs the training loop described under TRAINING; as the
        SYNOPSIS shows, it returns false if the perceptron could not be
        trained within max_iterations() passes.
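EXAMPLE
    As a worked example, the following sketch trains a perceptron on
    the AND truth table from the TRAINING section, using only the
    interface exercised in the SYNOPSIS (it assumes add_examples() may
    be called once per example, as its name and signature suggest):

        use strict;
        use warnings;
        use AI::Perceptron;

        my $p = AI::Perceptron->new
                              ->num_inputs( 2 )
                              ->learning_rate( 0.04 )
                              ->threshold( 0.02 )
                              ->weights([ 0.1, 0.2 ]);

        # The AND-gate truth table: [ expected_output, input1, input2 ]
        $p->add_examples( [ -1, -1, -1 ] );
        $p->add_examples( [ -1,  1, -1 ] );
        $p->add_examples( [ -1, -1,  1 ] );
        $p->add_examples( [  1,  1,  1 ] );

        $p->max_iterations( 100 )->train
            or die "couldn't train in 100 iterations!";

        for my $inputs ( [ -1, -1 ], [ 1, -1 ], [ -1, 1 ], [ 1, 1 ] ) {
            printf "AND(%2d,%2d) = %2d\n",
                   @$inputs, $p->compute_output( @$inputs );
        }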
AUTHOR
    Steve Purkis <spurkis@epn.nu>

COPYRIGHT
    Copyright (c) 1999-2003 Steve Purkis. All rights reserved.

    This package is free software; you can redistribute it and/or
    modify it under the same terms as Perl itself.

REFERENCES
    Machine Learning, by Tom M. Mitchell.

THANKS
    Himanshu Garg <himanshu@gdit.iiit.net> for his bug-report and
    feedback. Many others for their feedback.

SEE ALSO
    Statistics::LTU, AI::jNeural, AI::NeuralNet::BackProp,
    AI::NeuralNet::Kohonen