How to Compute Affinity Scores

Using the Python client to request scores from our Expertise API


Introduction

Important: For large conferences with 2000 submissions or more, please contact info@openreview.net to have scores between users and submissions computed for you.

However, manually computing user-to-user scores for these large venues, such as for any combination of Area Chairs and Senior Area Chairs, is still allowed.

Affinity, or similarity, scores are numbers between 0 and 1 that are used to create matches between different OpenReview objects. Scores between users and submissions are usually computed implicitly as part of the step for setting up automatic assignments. However, there may be times when you would like to compute scores in a way that this step does not offer, such as scores between two groups of users, or between a group of users and desk-rejected submissions. This can be done by querying our Expertise API through the Python client.

Note: This is only accessible to the Program Chairs, as they are the only role with access to all of a venue's information.

Requesting Scores

First, log into OpenReview using the Python client:

import openreview

# Connect to the API v2 endpoint with your OpenReview credentials
client = openreview.api.OpenReviewClient(
    baseurl='https://api2.openreview.net',
    username=username,
    password=password
)

Next, request a job using the request_expertise function of the client. In the following example, we'll request a job that computes user-to-user scores among the members of the Senior Area Chairs group:

response = client.request_expertise(
    name='AC_Scores',
    group_id='Conference/Year/Senior_Area_Chairs',
    venue_id=None,
    alternate_match_group='Conference/Year/Senior_Area_Chairs',
    model='specter+mfr',
)

name, group_id, and venue_id are required arguments.

The list below describes the arguments for the function call above:

  • name is a string to describe the type of scores you are computing.

  • group_id is the ID of the group whose members' publications will be retrieved.

  • venue_id is the value of the venueid property for each submission. This value distinguishes between active, withdrawn, and desk-rejected submissions.

In this case, venue_id is None because we are computing scores between users rather than between users and submissions (a user-to-submission variant is sketched after this list).

  • alternate_match_group is the ID of the group that you would like to score against the first group. If this value is not None, the Expertise API will perform a user-to-user score computation.

  • model is the name of the model that will be used to compute the scores. Several models are supported and described in the Paper Matching Setup documentation, but we strongly suggest that you use specter+mfr.
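
For reference, here is a minimal sketch of the user-to-submission case; the group ID and venueid value below are placeholders for illustration, so replace them with your venue's actual IDs. Setting venue_id and leaving alternate_match_group as None asks the Expertise API to score each member of the group against the matching submissions:

# Sketch only: compute scores between a group of users and the venue's
# active submissions. Both IDs below are placeholders.
submission_scores_response = client.request_expertise(
    name='Reviewer_Submission_Scores',
    group_id='Conference/Year/Reviewers',
    venue_id='Conference/Year/Conference',
    alternate_match_group=None,
    model='specter+mfr',
)

The returned value follows the same job ID flow described next.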

The response variable will contain a dictionary with your job ID, for example: {'jobId': '12345'}. This is the value that you will use to query the status and results of your scores.
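
If you do not want to block while the scores are computed, you can keep the job ID and check on the job later. The sketch below assumes that get_expertise_status is available as the status-check helper in your version of the openreview-py client; if it is not, simply use get_expertise_results with wait_for_complete=True as shown in the next section:

# Keep the job ID returned by request_expertise
job_id = response['jobId']

# Assumption: get_expertise_status exists in your openreview-py version and
# reports whether the expertise job has finished.
status = client.get_expertise_status(job_id)
print(status)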

Retrieving The Scores

You can use the function get_expertise_results to fetch your scores. Here is an example of the function call:

results = client.get_expertise_results(
    '12345',
    wait_for_complete=True
)

The first argument is the jobId that you received from your call to request_expertise. When wait_for_complete is True, the client will keep querying the Expertise API automatically until the scores are complete. Once they are, get_expertise_results will return a dictionary with the following schema:

{
    'metadata': {
        'no_profile': string[],
        'no_profile_submission': string[],
        'no_publications': string[],
        'no_publications_count': int,
        'submission_count': int
    }, 
    'results': Object[]
}

The scores for our example will be stored in results['results']. Since we are doing a user-to-user computation, the objects in this array may look like: {'match_member': '~UserA1', 'score': 0.61, 'submission_member': '~UserB1'}, where match_member is a member of the group whose ID is group_id and submission_member is a member of the group whose ID is alternate_match_group.
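
As a quick illustration, here is a minimal sketch, based on the schema above, that retrieves the scores and writes them to a CSV file (the output filename is arbitrary):

import csv

# Fetch the scores using the job ID from the earlier request
results = client.get_expertise_results(response['jobId'], wait_for_complete=True)

# Write one row per scored pair
with open('affinity_scores.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['match_member', 'submission_member', 'score'])
    for entry in results['results']:
        writer.writerow([entry['match_member'], entry['submission_member'], entry['score']])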
