
How to set up paper matching by calculating affinity scores and conflicts

Setup Matching

Setup Matching is the first step needed to make assignments between Senior Area Chairs, Area Chairs, Reviewers, and submissions. Once this step is complete, you can run the Matching by following the instructions here.

You can calculate affinity scores and conflicts for your venue using OpenReview's 'Paper Matching Setup' feature. Paper Matching Setup is enabled for any venue and the button is activated once the Submission Deadline (or Abstract Deadline, if there is one) has passed. This feature allows Program Chairs to compute or upload affinity scores and/or compute conflicts.


Calculating affinity scores can be a lengthy process depending on the size of your venue. Therefore, only one Paper Matching Setup can be run at a time.

You can find the 'Paper Matching Setup' button on your venue request form. The button will become available after the submission deadline.

Clicking it should bring up the following form. The 'Matching Group' is a dropdown menu of the groups you can use in the matcher (Reviewers, Area Chairs, Senior Area Chairs), depending on which ones you selected for your venue. You can select whether you would like affinity scores and/or conflicts computed. Alternatively, you can compute and upload your own affinity scores using the OpenReview expertise API.


It is important that all Senior Area Chairs, Area Chairs, and Reviewers have a complete and up-to-date Profile, including their publications. Users can import publications from DBLP by editing their Profile. This will improve the quality of the affinity scores and conflicts.

Selecting Senior Area Chairs as the Matching Group

Senior Area Chair assignments can be done in two different ways (specified in the venue request form):

  • Senior Area Chairs are assigned submissions directly: This will compute affinity scores and conflicts between Senior Area Chairs and submissions.

  • Assignment to submissions through Area Chairs: Senior Area Chairs are assigned to Area Chairs, and they inherit the submissions assigned to their Area Chairs. For this reason, when assigning Area Chairs to submissions, the corresponding Senior Area Chair conflicts need to be transferred to the Area Chairs. This guarantees that there are no conflicts between the submission and the assigned Area Chair or Senior Area Chair. Selecting this option will compute affinity scores between Senior Area Chairs and Area Chairs, and conflicts between Senior Area Chairs and submissions. It is also required that the assignments between Senior Area Chairs and Area Chairs are done before starting the matching between Area Chairs and submissions.

Selecting Area Chairs as the Matching Group


If your venue has Senior Area Chairs and you are performing Senior Area Chair assignments based on their Area Chairs, make sure that the Area Chairs already have assigned Senior Area Chairs. This is needed so that the Senior Area Chair conflicts are transferred to the Area Chairs too. Refer to the Selecting Senior Area Chairs as the Matching Group section for more information.

This will compute affinity scores and conflicts between Area Chairs and submissions.

Selecting Reviewers as the Matching Group

This will compute affinity scores and conflicts between Reviewers and submissions.

Conflict Detection Policy

Conflict detection uses information from the users' Profiles and from the coauthors of their publicly visible publications in OpenReview. Therefore, the more complete and accurate the Profile information is, the better the conflict detection.

The sections of the Profile used for conflict detection are the Emails section, the Education & Career History section, and the Advisors, Relations & Conflicts section.

Another parameter that can be controlled is the number of years to consider when looking for conflicts. For example, suppose User A and User B both worked at Company C at some point, but User A left Company C ten years ago while User B just started working there. If the number of years is set to 5, no conflict will be detected between User A and User B, because only the history, relations, and publications from the past 5 years are taken into consideration. By default, all relations, history, and publications are considered for conflict detection.


Since many users use email services such as gmail.com, a list of common email domains is used to filter them out before conflicts are computed.

There are two policies when computing conflicts: Default and NeurIPS.

Default Information Extraction Policy

  1. Uses the domains and computes subdomains from the Education & Career History section.

  2. Uses the domains and computes subdomains from the emails listed in the Advisors, Relations & Conflicts section.

  3. Uses the domains and computes subdomains from the emails listed in the Emails section.

  4. Uses the publication ids in OpenReview that the user authored.

NeurIPS Information Extraction Policy


Note that emails do not have a range of dates for when they were valid in the user's Profile. The NeurIPS policy addresses this issue.

  1. Uses the domains and computes subdomains from the Education & Career History section. All intern positions are ignored.

  2. Uses the domains and computes subdomains from the emails listed in the Advisors, Relations & Conflicts section, if the relation is that of a Coworker or a Coauthor.

  3. Uses the domains and computes subdomains from the emails listed in the Emails section, if and only if no domains were extracted from the Education & Career History and Advisors, Relations & Conflicts sections.

  4. Uses the publication ids in OpenReview that the user authored.
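Both policies reduce each institution or email address to a domain plus its parent subdomains. A minimal sketch of that computation (extract_domains is a hypothetical helper, not part of the OpenReview API):

```python
def extract_domains(email_or_domain):
    """Return the domain of an email (or a bare domain) plus all of its
    parent subdomains, e.g. 'a@cs.example.edu' -> {'cs.example.edu', 'example.edu'}.
    Bare top-level domains like 'edu' are excluded."""
    domain = email_or_domain.split('@')[-1].lower()
    parts = domain.split('.')
    # Keep every suffix that still has at least two labels.
    return {'.'.join(parts[i:]) for i in range(len(parts) - 1)}
```

Under this sketch, two users who list addresses at cs.example.edu and ee.example.edu would still collide on the shared parent domain example.edu.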

Conflict Detection

Depending on the value you selected for 'Compute Conflicts N Years', part or all of the Profile information will be considered when computing conflicts. For example, if you use the value 5, only the most recent 5 years of the Profile's data will be used to compute conflicts.
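As a rough illustration of the N-years cutoff (this is hypothetical code, not the actual expertise-service logic), a Profile history entry might be kept or discarded like this:

```python
from datetime import date

def counts_for_conflicts(entry_end_year, n_years, today=None):
    """Decide whether a Profile history entry is recent enough to be used.

    entry_end_year: year the position ended, or None if it is ongoing.
    n_years: the 'Compute Conflicts N Years' value, or None to use all history.
    """
    if n_years is None or entry_end_year is None:
        return True  # ongoing positions and 'use everything' always count
    current_year = (today or date.today()).year
    return current_year - entry_end_year <= n_years
```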

Once all the information is extracted from the users' Profiles, the following rules apply to find a conflict between User A and User B:

  • If any of the domains/subdomains from the Education & Career History section of User A matches at least one of the domains/subdomains of the same section of User B, then a conflict is detected.

  • If any of the domains/subdomains from the Advisors, Relations & Conflicts section or the Emails section of User A matches at least one of the domains/subdomains of the same sections of User B, then a conflict is detected.

  • If any of the publications of User A is also a publication of User B, in other words, if User A and User B are coauthors, then a conflict is detected.
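These three rules amount to simple set intersections. A sketch (has_conflict and the dict layout are illustrative only; the real implementation lives in the OpenReview expertise service):

```python
def has_conflict(user_a, user_b):
    """Apply the three conflict rules to two users' extracted Profile data.

    Each user is a dict of sets: 'history' (institution domains/subdomains),
    'relations' and 'emails' (domains/subdomains), and 'publications' (ids).
    Common webmail domains are assumed to be filtered out beforehand.
    """
    if user_a['history'] & user_b['history']:
        return True  # shared institution domain
    if (user_a['relations'] | user_a['emails']) & (user_b['relations'] | user_b['emails']):
        return True  # shared relation or email domain
    if user_a['publications'] & user_b['publications']:
        return True  # coauthors of at least one publication
    return False
```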

Compute Affinity Scores

OpenReview has different models available to compute affinity scores between users and submissions. The current available models are:

  • specter+mfr

  • specter2

  • scincl

  • specter+scincl

If you want to learn more about the models, you can find the open source GitHub repository here.

You may also choose to upload your own scores by selecting 'No' for computing affinity scores. Directions for how to format uploaded scores are in the description of the upload field in the Paper Matching Setup form.

Troubleshoot Paper Matching

Running the paper matching setup should post a comment on your venue request page. If any members are missing profiles or publications, the message will identify them and say 'Affinity scores and/or conflicts could not be computed for these users. Please ask these users to sign up in OpenReview and upload their papers. Alternatively, you can remove these users from the Reviewers group.' This message does not mean that the process failed, only that those members were excluded from the calculations. You have two options:

  1. Remove reviewers without profiles from the reviewers group.

  2. Remind the reviewers that they need OpenReview profiles and wait for them to create them. You can run the Paper Matching Setup as many times as you need until all users have complete profiles.

Note that when a reviewer creates a profile, the email address in the reviewers' group will not automatically be replaced by their profile ID. The matcher will still treat email addresses as users without profiles, so any email addresses need to be either removed or replaced with tilde IDs. Re-running Paper Matching Setup does this automatically.
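Because profile IDs in OpenReview start with a tilde, you can spot any remaining raw email addresses in a group's members list with a simple check (split_members is a hypothetical helper, not an OpenReview API):

```python
def split_members(group_members):
    """Partition a group's members into tilde profile IDs and raw emails."""
    profile_ids = [m for m in group_members if m.startswith('~')]
    emails = [m for m in group_members if '@' in m]
    return profile_ids, emails
```

Anything in the second list will be skipped by the matcher until it is removed or replaced.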

You can confirm that the affinity scores were computed by checking if an invitation for the scores was created: https://api.openreview.net/edges?invitation=your/venue/id/role/-/Affinity_Score. Next, you should be able to run a paper matching from the ‘Paper Assignments’ link in your PC console.
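If you prefer to build that URL programmatically, for example to check several roles, a small sketch (the venue id and role below are placeholders; the invitation path must be URL-encoded):

```python
import urllib.parse

def affinity_score_query(venue_id, role):
    """Build the edges query URL for a role's Affinity_Score invitation."""
    invitation = f"{venue_id}/{role}/-/Affinity_Score"
    return ("https://api.openreview.net/edges?"
            + urllib.parse.urlencode({"invitation": invitation}))

print(affinity_score_query("Example.cc/2024/Conference", "Reviewers"))
```

Opening the resulting URL while logged in should return the score edges, or an empty list if none were computed.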


    How to do automatic assignments

    If you are using the new venue UI, please refer to the assignments section of the workflow here.

    Venues that selected 'Automatic' for 'Submission Reviewer Assignment' in the venue request form will have the option to use automatic assignment by doing the following:

    1. Make sure your submission deadline has passed.

    2. Calculate affinity scores and conflicts using Paper Matching Setup.

    3. Run a matching.

    4. Modify and finalize the proposed assignments.

    5. Deploy the proposed assignments.


    You will have to perform all these steps for Senior Area Chairs, then for Area Chairs and Reviewers. Depending on your venue, the names of these groups may vary.


    How to deploy the proposed assignments


    If you don't want reviewers to see their assignments before the Review period starts, either wait to deploy the assignments or add the reviewers as submission readers after the start of the review period.

    From the assignment page, click ‘Deploy assignments’ next to the matching configuration of your choice. Note that this will not notify group members of their assignments. You can contact different roles either through their group consoles or through the python client.

    How to modify the proposed assignments

    The edge browser is a tool for visualizing edges, or assignments, created by OpenReview’s automatic paper matching algorithm. You can use it to browse, sort, search, and create proposed assignments between Senior Area Chairs and Area Chairs, Area Chairs and submissions, or Reviewers and submissions before deploying them.

    When you first open the edge browser, all papers will appear in a column on the left. You can click on a certain paper to see a second column of reviewers pop up to the right. Similarly, if you click on a Reviewer (Senior Area Chair or Area Chair), all of their assignments will pop up in another column to the right, and so on.

    The color of each item represents the relationship between that item and the one selected at left:

    1. Light green means that the item is assigned to the item selected at left.

    2. Light red means that the item has a conflict with the item selected at left.

    3. Light orange means that the item both has a conflict with and is assigned to the item selected at left.

    4. White means that the item is not assigned to and has no conflict with the item selected at left.

    Each item will display various edges calculated by the matcher and used to make assignments, such as the Bid, Affinity, Aggregate scores, and Conflicts. The trashcan button can be used to remove an edge. You can create new assignments using the ‘Assign’ button.

    'Assignments' tells you how many papers are assigned to a given reviewer. You may also see 'Custom Max Papers' here if certain reviewers requested a specific max number of papers. You can filter out reviewers who have met their quota with the checkbox 'Only show reviewers with fewer than max assigned papers.' Once a reviewer has hit their quota, the 'Assign' button will be disabled and you will only be able to assign them additional papers using the 'Invite Assignments' button after deployment.

    You can search for specific papers by paper title or number at the top of the first column. At the top of the subsequent columns you can also search for specific reviewers by profileID, name, or email. You can sort subsequent columns on the right by whatever edges are displayed, such as Assignment, Aggregate Score, Bid, Affinity Score, and/or Conflict, using the 'Order By' dropdown.

    You can copy, edit, and create matching configurations as many times as you want until deployment. You can also use the 'View Statistics' button on the assignment page to view a breakdown of paper assignments. When you are happy with your assignments, you can deploy them.

    How to run a paper matching

    Running Paper Matching

    In order to run the matching for Senior Area Chairs, Area Chairs, and Reviewers, you must first run Setup Paper Matching from the venue request form.

    Once that is done, a link for 'Paper Assignments' should appear on your Program Chair console.

    Clicking on one of the assignment links will bring you to the assignment page, where you can create a new matching configuration. If members of your Senior Area Chair, Area Chair or Reviewer group have profiles without publications, you will need to select ‘Yes’ for ‘Allow Zero Score Assignments’ in order to obtain a solution. Please note that all members of a group must have OpenReview profiles in order for the automatic assignment algorithm to run. Any members without profiles must be removed from the group before this step.

    You can learn more about our automatic paper matching algorithm from its GitHub repository. To create a new matching, click 'New Assignment Configuration'. This will bring up a form with some default values pertaining to your matching settings:

    User Demand
    • The number of users that should be assigned to each paper

    Max Papers
    • The maximum number of papers that can be assigned to each reviewer

    Min Papers
    • The minimum number of papers that can be assigned to each reviewer

    Alternates
    • How many alternate reviewers should be saved per paper

    Paper Invitation
    • Invitation of the submissions that will be assigned in this matching

    • Defaults to venue_id/-/Submission for single blind venues and venue_id/-/Blind_Submission for double blind venues

    Match Group
    • The group ID of users to be assigned to submissions

    Scores Specification
    • JSON providing further details and customization to scores.

    • Each key represents an edge invitation (affinity score, bid, etc.). Weight can be added to a given score value with the numerical field 'Weight'. 'Default' is a numerical value that is used when there is not an edge for a specific reviewer-paper pair. Finally, 'translate_map' is a map function that translates an edge label value into a number.

    Conflicts Invitation
    • Invitation for storing conflicts between users and papers

    • Defaults to venue_id/user_group/-/Conflict

    Custom User Demand Invitation
    • If certain papers require a custom number of assigned users, Program Chairs can create edges recording these requests and provide the invitation used for those edges here.

    • Defaults to venue_id/user_group/-/Custom_User_Demands

    Custom Max Papers Invitation
    • Some reviewers may submit requests to only have a certain number of assigned papers. The matcher will convert those requests into edges. This determines the invitation that will be used for those edges.

    • Defaults to venue_id/user_group/-/Custom_Max_Papers

    Solver
    • Which algorithm (MinMax, Fairflow, or Randomized) will be used to generate automatic assignments. Defaults to MinMax.

      • MinMax: Optimizes the scores while respecting the min and max quotas for each paper and reviewer. You can read more about MinMax here.

      • Fairflow: Tries to make every match have at least some minimum affinity. You can read more about Fairflow here.

      • Randomized: Generates randomized assignments and selects the assignment that maximizes expected total affinity without breaking the probability limits. You can read more about the Randomized solver here.

    • You can read more about all solver options here.

    Allow Zero Score Assignments
    • Whether or not assignments with scores of 0 should be allowed. If a reviewer does not have any publications listed on their profile, they will have an affinity score of 0 with all submissions. Therefore, if you have users without publications, you will need to select "yes" in order to get a solution.

    Randomized Probability Limits
    • If you select "Randomized" for the solver, it will select a random assignment that maximizes expected total affinity, subject to the probability limit provided here. What this means is that for each reviewer-paper assignment, the probability of that match being generated in a random assignment is limited to this value. This should make the outcome of the matching more difficult to predict.

    After filling out the matching configuration form and hitting submit, you should see the following:

    You can view, edit or copy the values you filled out in the matching form. When you are happy with your configuration, you should hit 'Run Matcher' and wait until its status is 'Complete'.

    Modifying Proposed Assignments

    Once the Matching algorithm completes, the proposed assignments will be generated, with options to browse them, view statistics, or deploy the proposed assignments. If you click 'Browse Assignments' you will be brought to the edge browser, where you can browse, edit, and create proposed assignments. You can read more about modifying proposed assignments here.


    Be careful before deploying proposed assignments! Do not deploy them unless you are sure you are satisfied with them. Undoing a deployment is difficult and in most cases requires intervention from the OpenReview staff, which can only be done during office hours.

    Troubleshooting Matcher

    If you get "No Solution" after running the matcher, you can view the configuration to see the entire error message. If the message is something like the following:

    • Error Message: Total demand (150) is out of range when min review supply is (34) and max review supply is (100)

    that means that your constraints require more reviewers or area chairs than you currently have. The total demand is equal to (number of submissions * user demand) + (number of submissions * alternates). The max review supply is the number of reviewers available * max papers, and the min review supply is the number of reviewers available * min papers. Your total demand must fall within this range in order to obtain a solution.
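The feasibility condition in that error message can be checked ahead of time (check_feasibility is an illustrative helper, not part of the matcher):

```python
def check_feasibility(num_submissions, user_demand, alternates,
                      num_reviewers, min_papers, max_papers):
    """Check that total demand falls between min and max review supply."""
    total_demand = num_submissions * (user_demand + alternates)
    min_supply = num_reviewers * min_papers
    max_supply = num_reviewers * max_papers
    return min_supply <= total_demand <= max_supply
```

With 50 submissions, a demand of 3 reviewers each, no alternates, and 20 reviewers capped at 5 papers, the demand of 150 exceeds the maximum supply of 100, so no solution exists.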

    Note that completion of this step does not make assignments, it only creates a proposed assignment configuration. Those assignments will need to be deployed before Reviewers or Area Chairs will see them.

    In the example below, the aggregate score used by the optimizer is: weight * (affinity score) + weight * (translate_map(bid))

    {
        "Example_Venue/2022/Conference/Reviewers/-/Affinity_Score": {
            "weight": 1,
            "default": 0
        },
        "Example_Venue/2022/Conference/Reviewers/-/Bid": {
            "weight": 1,
            "default": 0,
            "translate_map": {
                "Very High": 1,
                "High": 0.75,
                "Neutral": 0,
                "Low": -0.5,
                "Very Low": -1
            }
        }
    }
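Given a specification like the one above, the aggregate score for a reviewer-paper pair combines each edge's value, translated and weighted, with the default filling in for missing edges. A hypothetical sketch of that computation (aggregate_score is illustrative, not the matcher's actual code):

```python
def aggregate_score(spec, edge_values):
    """Combine edge values for one reviewer-paper pair per a scores spec.

    spec: dict mapping invitation id -> {'weight', 'default', 'translate_map'}.
    edge_values: dict mapping invitation id -> raw edge value (number or label).
    """
    total = 0.0
    for invitation, params in spec.items():
        value = edge_values.get(invitation, params.get("default", 0))
        translate_map = params.get("translate_map") or {}
        if value in translate_map:
            value = translate_map[value]  # e.g. 'High' -> 0.75
        total += params.get("weight", 1) * value
    return total
```

For an affinity score of 0.8 and a bid of 'High' under the specification above, this yields 1 * 0.8 + 1 * 0.75 = 1.55.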

    How to modify assignments after deployment

    If you selected "Manual" for Submission Reviewer Assignment, instead follow the guide on how to do manual assignments.

    There are two ways to assign a role after assignments have been deployed:

    1. The option 'Assign' directly assigns the reviewer to the paper and sends an email notification.

    2. The option 'Invite Assignment' sends an invitation email to the member with an accept/decline link.

    Any changes made after deployment are immediately visible to the assigned Reviewers or Area Chairs, and it is not necessary to deploy again.

    The reviewer can then respond to the invitation. If you want to invite a reviewer from outside the reviewer pool, you can do so by searching for their email address or profileID in the search bar of the second column and clicking 'Invite Assignment'. If they are in conflict with that paper, a banner will alert you with an error. Otherwise, they will receive an email notifying them of their invitation with the option to accept or reject the assignment. Their status will change according to their response to your invitation ('Declined', 'Pending Sign Up', 'Accepted', or 'Conflict Detected').