Running DiscoverEd


This page contains raw documentation, some of which is only applicable to our DiscoverEd deployment. It will be massaged into more general docs in the fullness of time.

Instructions for running a crawl

Tips:

  • For long aggregates and crawls, run inside a 'screen' session so the job survives a dropped connection (see the example below).
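
For example (the session name "crawl" is just an illustration):

$ screen -S crawl       # start a named screen session; run the commands inside it
# press Ctrl-a d to detach and leave the job running
$ screen -r crawl       # reattach later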

Three phases to the process of updating the index:

  1. Aggregation (polling feeds, old and new)
  2. Crawling
  3. Merging (combining the new index with the existing one)

Set up environment

Execute these commands to set up your environment for running the tools; they also place you in the discovered user's account.

$ sudo su - discovered
$ cd code

Managing Feeds

The feeds script (./bin/feeds) allows you to add curators or feeds. Running it without parameters shows the subcommands. Feeds and curators are identified by URL (and yes, it's picky -- http://example.org is not the same as http://example.org/).

Notes

  • For each feed listed in an OPML feed, the curator is set from the feed title. The OPML consumer will only add the curator/feed if the feed isn't already in the system.
  • If you add a feed that already exists, you'll just overwrite the old one (the backing store is a triple store, and the URI is the identifier). The same goes for curators; they're also identified by URI. Duplicate curators are more likely, but as long as you're dealing with the same feed URL you won't get duplicate feeds.

Aggregation

Aggregation polls the feeds and adds new resources to the triple store. It will also poll any OPML feeds and add the new feeds it finds.

$ ./bin/feeds aggregate

Crawl

Before you crawl, you need to make a seed list, which tells the crawler what to retrieve.

If the directory "seed/" does not exist, create it with:

mkdir seed

Then create the seed list of URLs:

$ ./bin/feeds seed > ./seed/crawl-urls.txt

When the crawl runs it will look in ./seed/ and open every file it finds there, expecting to find one URL per line (so remove files when you don't want them to be crawled).
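
A seed file is just a plain-text list of URLs. For example (hypothetical URLs):

http://example.org/courses/physics-101
http://example.org/courses/intro-biology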

To run the actual crawl do:

$ ./bin/crawl-and-merge.sh

Finally, restart Tomcat (the Java app server) to make sure the new index is being used:

$ sudo /etc/init.d/tomcat6 restart

Managing curators and feeds

On discovered.labs.creativecommons.org in the $HOME/code directory, running ./bin/feeds with no parameters shows the list of subcommands:

  listfeeds        list all feeds
  listcurators     list all curators
  addfeed          add a feed
  resetfeed        reset the last aggregation date for a feed
  addcurator       add a curator
  rmfeed           remove a feed
  setcurator       set the curator for a feed
  aggregate        poll the feeds and add new resources to the triple store
  dump
  seed             print the seed list of crawl URLs, one per line

Each one is run as an argument to the feeds script (i.e. ./bin/feeds [command] [parameter1] [parameter2]...)
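
For example, to see everything currently in the store:

$ ./bin/feeds listfeeds
$ ./bin/feeds listcurators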

addfeed

addfeed [feed_type] [feed_url] [curator_url]

Assuming you've added the curator for this feed with addcurator, passing the curator URL here sets the curator for the feed (so you don't need to set it again with setcurator).

Feed type notes: "rss" is a parser that does RSS/Atom sniffing.
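
For example, to add an RSS feed (hypothetical URLs), assuming the curator http://example.org/ has already been added with addcurator:

$ ./bin/feeds addfeed rss http://example.org/feed.rss http://example.org/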

addcurator

addcurator [curator_name] [curator_url]

Curator names with spaces should be surrounded by quotation marks (e.g. addcurator "CC Open Textbook Project" http://www.collegeopentextbooks.org/)
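
As a full command line, that example looks like:

$ ./bin/feeds addcurator "CC Open Textbook Project" http://www.collegeopentextbooks.org/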

setcurator

setcurator [feed_url] [curator_url]
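
For example (hypothetical URLs):

$ ./bin/feeds setcurator http://example.org/feed.rss http://example.org/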

Deploying new WARs

To deploy a new WAR, do this:

$ sudo rm -rf /var/lib/tomcat6/webapps/search/    # clear the existing app to force redeployment
$ sudo cp nutch-1.1.war /var/lib/tomcat6/webapps/search.war
$ sudo /etc/init.d/tomcat6 restart

Things the server administrator should know

JAVA_HOME

Many of our scripts require a JAVA_HOME environment variable to be set. For our convenience, we configured discovered.labs.creativecommons.org to have JAVA_HOME set for every user. We did that by adding this to /etc/profile:

JAVA_HOME=/usr/lib/jvm/java-6-openjdk/ ; export JAVA_HOME
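
To confirm it is set (log in again after editing /etc/profile):

$ echo $JAVA_HOME
/usr/lib/jvm/java-6-openjdk/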

Maximum open files

Tomcat and Nutch sometimes have problems opening files. This happens when the process exceeds the limit on the number of files a single process may have open at once.

To address this, we added this to /etc/security/limits.conf:

### For Tomcat etc.
*               soft    nofile          4096
*               hard    nofile          4096
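
These limits apply to new login sessions. To check the limit in effect for the current shell (it should report 4096 once the new limits apply):

$ ulimit -n
4096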