COMP309J – Information Retrieval
Programming Assignment
This assignment is worth 20% of the final grade for the module.
Before you begin, download and extract the file "Cranfield_Collection.zip" from Moodle. This
contains several files that you will need to complete this assignment.
Part 1: BM25 Model
For this assignment you are required to implement the BM25 Model of Information Retrieval. You
must create a program (using Python) that can do the following.
1. Extract the documents contained in the Cranfield document collection. These are
contained in the cranfield_collection.txt file. You should divide the documents into terms in
an appropriate way. The strategy should be documented in your source code comments.
2. Perform the common stopword removal pre-processing step. A list of stopwords to use is
contained in the stopword_list.txt file that is on Moodle.
3. Perform the common stemming pre-processing step. For this task, you may use the code
that has been posted on Moodle (porter.py).
4. The first time your program runs, it should create an appropriate index so that IR using the
BM25 method may be performed. This will require you to calculate the appropriate weights
and do as much pre-calculation as you can. This should be stored in an external file in some
human-readable format. Do not use database systems (e.g. MySQL, SQL Server, etc.) for this.
5. The other times your program runs, it should load the index from this file, rather than
processing the document collection again.
6. Accept a query on the command line and return a list of the 15 most relevant documents,
according to the BM25 IR Model, sorted beginning with the highest similarity score. The
output should have three columns: the rank, the document's ID, and the similarity score. A
sample run of the program is contained later in this document. The user should continue to
be prompted to enter further queries until they type "QUIT".
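As an illustrative sketch only (not a required implementation), the per-document BM25 score for a query could be computed from a pre-built index as shown below. The index layout, the function name, and the parameter values k1 = 1.2 and b = 0.75 are assumptions here (those values are conventional defaults, not values mandated by this assignment):

```python
import math

# Conventional BM25 parameter defaults; this assignment does not mandate
# specific values, so treat these as an illustrative choice.
K1, B = 1.2, 0.75

def bm25_score(query_terms, doc_id, index, doc_lengths, avg_len, n_docs):
    """Sum the BM25 contribution of each query term for one document.

    index maps term -> {doc_id: term frequency in that document}.
    doc_lengths maps doc_id -> document length in terms.
    """
    score = 0.0
    for term in query_terms:
        postings = index.get(term, {})
        if doc_id not in postings:
            continue
        f = postings[doc_id]          # term frequency in this document
        n = len(postings)             # number of documents containing the term
        idf = math.log((n_docs - n + 0.5) / (n + 0.5) + 1.0)
        norm = f * (K1 + 1) / (f + K1 * (1 - B + B * doc_lengths[doc_id] / avg_len))
        score += idf * norm
    return score
```

Ranking is then a matter of scoring every candidate document for the (stopped and stemmed) query terms, sorting by score in descending order, and keeping the top 15.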
It is ESSENTIAL that this can be run as a standalone program, without requiring an IDE such
as IDLE, etc. You can assume that any files you need from Moodle will be in the same
directory as your program (which will be the working directory) when it is run. Do not use
absolute paths in your code.
Also, non-standard libraries (other than the Porter stemmer provided) may not be used.
Part 2: Evaluation
For this part, your program should use the standard queries that are part of the Cranfield
collection to evaluate the effectiveness of the BM25 approach. The user should be able to
select this mode by changing how they run the program.
For example, if users want to enter queries manually, they should be able to run (assuming
your program is called search.py):
python search.py -m manual
Or to run the evaluations:
python search.py -m evaluation
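One possible way to handle the -m flag, using only the standard library (the helper name parse_mode is an assumption for illustration; argparse itself is part of the Python standard library, so it does not violate the library restriction above):

```python
import argparse

def parse_mode(argv=None):
    """Parse the -m flag; defaults to manual mode when the flag is absent."""
    parser = argparse.ArgumentParser(
        description="BM25 search over the Cranfield collection")
    parser.add_argument("-m", "--mode", choices=["manual", "evaluation"],
                        default="manual",
                        help="run mode: manual or evaluation")
    return parser.parse_args(argv).mode
```

With argv left as None, argparse reads sys.argv, so `python search.py -m evaluation` yields "evaluation" and running with no flag yields "manual".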
For the evaluations, the standard queries should be read from the "cranfield_queries.txt"
file. An output file should be created (named "evaluation_output.txt"). Each line in this file
should have three fields (separated by spaces), as follows:
1. Query ID.
2. Document ID.
3. Rank (beginning at 1 for each query).
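As an illustrative sketch (the function name and the shape of the results argument are assumptions, not part of the specification), the three fields could be written out like this:

```python
def write_evaluation_file(results, path="evaluation_output.txt"):
    """Write one space-separated line per retrieved document.

    results maps query_id -> list of doc_ids, ordered by decreasing score,
    so the rank is just the 1-based position in each list.
    """
    with open(path, "w") as f:
        for query_id, doc_ids in results.items():
            for rank, doc_id in enumerate(doc_ids, start=1):
                f.write(f"{query_id} {doc_id} {rank}\n")
```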
A sample of what this file should look like is shown below.
After creating this file, your program should calculate and print the following evaluation
metrics:
- Precision
- Recall
- P@10
- MAP
- NDCG@10 (Hint: You will need to make some adjustments to the relevance
judgment scores that are in the collection so that they can work effectively with
NDCG).
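A hedged sketch of the metrics above for a single query follows. The binary-relevance treatment for precision/recall and the gains mapping for NDCG are illustrative choices; in particular, how Cranfield judgment scores are adjusted into NDCG gains is left to you, as the hint notes. P@10 is simply precision computed over the first 10 retrieved documents, and MAP is the mean of average_precision over all queries:

```python
import math

def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant (binary relevance)."""
    if not retrieved:
        return 0.0
    return sum(1 for d in retrieved if d in relevant) / len(retrieved)

def recall(retrieved, relevant):
    """Fraction of relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    return sum(1 for d in retrieved if d in relevant) / len(relevant)

def average_precision(retrieved, relevant):
    """Average of the precision values at each rank holding a relevant doc."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def ndcg_at_k(retrieved, gains, k=10):
    """gains maps doc_id -> non-negative gain (your adjusted judgment score)."""
    dcg = sum(gains.get(d, 0) / math.log2(i + 1)
              for i, d in enumerate(retrieved[:k], start=1))
    ideal = sorted(gains.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 1) for i, g in enumerate(ideal, start=1))
    return dcg / idcg if idcg > 0 else 0.0
```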
What you should submit
Submission of this assignment is through Moodle. You should submit a single .zip archive
containing the following:
- The source code for your program. This should be contained in one file only.
- A README.txt file with brief instructions on how to use your program (examples
below).
Sample README file
Name: Zhang San
UCD Student ID: 13123452
By default, this system searches the Cranfield collection using BM25, with queries
supplied by the user.
A mode can be selected by using the -m flag. Possible options are "manual" and "evaluation".
Selecting "evaluation" will run and evaluate the results for all queries in the
cranfield_queries.txt file.
Contact
If this specification is unclear, or you have any questions, please contact me by email
().
Sample Run (Manual)
$ python search.py -m manual
Loading BM25 index from file, please wait.
Enter query: aircraft wing heat
Results for query [aircraft wing heat]
1 928 0.991997
2 1109 0.984280
3 1184 0.979530
4 309 0.969075
5 533 0.918940
6 710 0.912594
7 388 0.894091
8 1311 0.847748
9 960 0.845044
10 717 0.833753
11 77 0.829261
12 1129 0.821643
13 783 0.817639
14 1312 0.804034
15 423 0.795264
Enter query: QUIT
Note: In all of these examples, the results and similarity scores were generated at random for
illustration purposes, so they are not correct scores.
Sample Run (Evaluation)
$ python search.py -m evaluation
Loading BM25 index from file, please wait.
Sample Evaluation Output File (First 15 lines)
1 408 1
1 1151 2
1 679 3
1 889 4
1 1068 5
1 1031 6
1 464 7
1 1185 8
1 292 9
1 751 10
1 117 11
1 94 12
1 1238 13
1 115 14
1 959 15