OpenReview ECCV 2020 Summary Report
Andrew McCallum (Professor, UMass Amherst; Director OpenReview project)
Melisa Bok (Lead Developer, OpenReview project)
Thomas Brox (Professor, U. Freiburg; ECCV 2020 Program Co-chair)
Rene Vidal (Professor, JHU; Computer Vision Foundation Board member)
In 2020 the organizers of ECCV (one of the flagship conferences in computer vision) decided to move from CMT to OpenReview. This report provides a summary of the ECCV 2020 workflow, the OpenReview services provided, the system performance, and enhancements planned for the next ECCV.
(The Computer Vision Foundation, CVF, has the long-term goal of unifying the CVPR conference workflow tools under one integrated infrastructure. Seeing the success of OpenReview for ICLR over the past seven years, CVF has been providing the OpenReview Foundation with a multi-year financial gift towards this new software development. CVPR is hoping to move to OpenReview in the future.)
The ECCV 2020 workflow was not fundamentally different from that of previous years: double blind, closed reviewing, with area chairs, closed reviewer discussion, author responses, and meta-reviews.
Workflow details and timing were planned extensively with shared Google Docs and three video conference meetings with the OpenReview team. Throughout the submission and reviewing process, OpenReview technical staff provided 24/7 support to the ECCV program chairs, including rapid responses and custom work over weekends and evenings.
Below is a summary of key workflow steps and services. (Detailed workflow is described here.)
Reviewer recruiting. Based on a list provided by ECCV PCs, OpenReview invited over 5k reviewers, and automatically gathered their responses. We also coordinated with ICLR to invite additional reviewers from ICLR’s 2020 reviewer pool.
Reviewer & author registration. OpenReview already had profiles for approximately 100k researchers. For ECCV we added an additional ~3k reviewer profiles, and incorporated their papers from DBLP, running our own version of author coreference, augmented by verification performed by OpenReview staff. ECCV required all authors to register with OpenReview (mostly for the purposes of conflict-of-interest resolution, and gathering multiple email addresses per person); this resulted in ~12k additional profiles being created.
Conflicts-of-interest gathering. Author and reviewer profiles include not only current institution domain names, but also DBLP URL, Google Scholar URL, past advisors, and other non-institutional conflicts. OpenReview could in the future also create conflicts based on paper co-authorship within the last N years; in the future, ECCV may use this feature as well.
Reviewer expertise modeling. Expertise models were built for all reviewers, using modern deep learning embedding methods run on the titles and abstracts of reviewers’ papers. In the future, OpenReview expertise modeling will also use paper full text and citation graphs. ECCV 2020 decided to use a combination of OpenReview reviewer-paper affinities and those from TPMS.
Paper submissions. As requested by ECCV 2020 PCs, draft paper titles and abstracts were submitted one week before the full-paper deadline. OpenReview received 7646 paper submissions. In the 24 hours before the final deadline, OpenReview received over 24k submission updates, and had over 19k active users (over 3.7k active simultaneous users during the last hour of submissions). The OpenReview multi-server system never surpassed 50% CPU usage, and maintained smooth operation with rapid system response throughout. (In contrast, the ECCV static web server simply providing submission deadline information became unresponsive.) In addition, during the submission period over 55k email messages were sent to authors (sent to each author for each update).
Paper double-submission check. ECCV used the OpenReview service that checks for double submissions against ICML 2020 and NeurIPS 2020.
Bidding. Both ACs and reviewers bid on papers, assigned as a “task” that was not complete until a given number of bids had been entered. During reviewer bidding, ACs and reviewers were able to search submissions by keyword.
Paper-reviewer assignment. Paper-reviewer affinities included: the OpenReview reviewer expertise model, TPMS affinity scores, area chair reviewer suggestions, reviewer bids, and conflicts of interest. During area chair reviewer suggestions, candidate reviewers could be shown ordered by various criteria, including OpenReview affinity, TPMS affinity, and reviewer bids (and custom reviewer loads). Optimization of paper-reviewer matching was performed by both Min-Cost-Flow and FairFlow [Kobren et al., 2019]. The optimizer’s meta-parameters can be easily tuned, and the ECCV 2020 program chairs ran the optimizer many times (with ~30 minute turn-around time). Each resulting match comes with various requested summary statistics about the quality of the match. The results of a paper-reviewer match could be browsed by PCs and ACs using OpenReview’s “Edge Browser,” which provides MacOS-Finder-“column-view”-like nested browsing, as well as extensive searching and the ability to make suggested edits to the assignment, while seeing reviewer loads and meta-data for reviewers, including their institution, job title, and link to profile. (The same paper matching system was used to do secondary area chair assignment, and emergency reviewer assignment during the reviewing stage.)
Specialized consoles: OpenReview provided specialized consoles for reviewers, area chairs, and program chairs, including functionality such as task lists, reviewing status, search, reviewer re-assignment, aggregate statistics, bid status for each reviewer, review-completion status, sending reminder emails to reviewers, and the ability to dump data as downloadable CSV files.
Reviewing and discussion. Reviews were entered directly into the OpenReview system, visible immediately to the ACs, then visible to authors and reviewers of the same paper after the reviewing deadline. As an enhancement specially requested by ECCV, OpenReview implemented Markdown support in time for the entry of author responses. (LaTeX formula rendering has already been available since Spring 2019.) OpenReview processed 15,152 reviews, 4,117 meta reviews and 2,752 secondary meta reviews. In addition, 9,506 confidential comments and 10,874 rebuttal comments were entered.
Review rating. ECCV PCs requested that area chairs be able to rate the quality of each review on the scale -1, 0, 1, 2. From this, reviewers were assigned an aggregate rating, which also included information about their tardiness. These aggregated reviewer ratings are stored (privately) in OpenReview, so that they will be easily (and programmatically) available to future ECCV program chairs. (We are also hoping to encourage private sharing of these ratings across conferences.)
Paper ranking. ECCV program chairs requested that ACs be able to enter a ranking of their assigned papers.
Decisions. PCs downloaded various CSV files into Google Sheets, including AC decisions. Some decisions were modified by the PCs. Then OpenReview emailed and posted the decision based on this Google Sheet. (In future, OpenReview may provide browsing, sorting, and editing directly through its UI; avoiding the need for Google Sheets. Alternatively, we may more closely embrace Google Sheets––leveraging its features––with live bi-directional data updates between OpenReview and the Google Sheet.)
Camera-ready revisions. OpenReview created additional upload invitations and tasks for accepted paper authors, including copyright form, supplementary materials (including videos), camera-ready LaTeX zip file.
Conference track formation. OpenReview also provided affinity scores between accepted papers, as input to paper clustering, for conference track assignments.
Feedback from Thomas Brox, ECCV 2020 Program Co-chair: Very happy with how OpenReview worked, and would recommend it to future program chairs. Particularly liked: (a) very stable and reliable system, (b) great response time and availability of the team, (c) excellent custom service (even implementing custom features we needed), (d) expressive conflict management (this was a primary impetus for moving to OpenReview), (e) reviewer assignment tools. Improvements that would be helpful for next year: a feature allowing program chairs to impersonate another user (as CMT allows); additional reviewer-assignment constraints limiting the number of reviewers from the same institution assigned to one paper; and several of the new features listed below.
OpenReview team’s plans for improvement, including
new system features:
Allow program chairs to impersonate another user (as CMT allows), for purposes of understanding reviewer and area chair questions.
Additional reviewer-assignment constraints limiting the number of reviewers from the same institution assigned to one paper.
new UI features:
Reviewer re-assignment directly from the convenient OpenReview “Edge Browser” interface (without the need to visit the PC console).
Faster load times of the PC console when there are >5k submitted papers.
Improved UI and organization of the “forum” page containing per-paper reviews and discussion: Easier way to read one-to-one discussion and distinguish between different types of replies: reviews, comments, rebuttals. More self-documenting “idiot-proof” UI widget for discussion participants to select the readers of the comments they enter.
Allow ACs to download all their assigned paper files in a zip file.
Add the ability for ACs and reviewers to bid “in blocks,” for example, bidding (positively or negatively) on all submissions containing a keyword, or in an area.
Add additional UI options for filtering papers, area chairs, or reviewers by various criteria, and then taking actions (such as sending email) on those objects satisfying the criteria.
new data gathering features:
Improved expertise data, by automatically gathering the most recent computer vision conference publications that are not yet in DBLP. Improved expertise model based on the full text and citations of each reviewer’s papers.
Provide summary statistics of the number of past computer vision publications authored by each reviewer.
simple, alternative configuration for the next ECCV (no new system features needed):
Restrict the list of papers shown to reviewers during the bidding stage: only the top N relevant submissions to each reviewer (rather than allowing reviewers to see all submissions).
Allow the reviewer to edit the review after the rebuttal stage without showing the change to the authors until final decisions are released.
The timeline for submission withdrawals is determined by the Program Chairs of each venue. Therefore, there is not a single overarching OpenReview withdrawal policy. If you have questions about when in a venue workflow you will be able to withdraw your submission, you should reach out to the Program Chairs of that venue directly.
Authors are able to withdraw their paper at any time after the submission deadline or after Post Submission Stage has been run. You can optionally restrict the withdrawal window from your venue request form. You can also use the venue request form to configure the visibility of withdrawn and desk-rejected papers, as well as the identities of their authors.
The OpenReview Documentation is divided into 3 main sections:
Getting Started: Contains the FAQ, how to create a Venue, how to create a profile, and how to interact with the API.
How-to Guides: Mainly for Venue organizers who want to set up different parts of the workflow.
Reference: Contains a technical reference on how to use more advanced features of OpenReview.
You can use the Search bar in the top right corner to search for keywords in the entire OpenReview documentation.
OpenReview NeurIPS 2021 Summary Report
Andrew McCallum (Professor, UMass Amherst; Director OpenReview project)
Melisa Bok (Lead Developer, OpenReview project)
Alina Beygelzimer (Senior Research Scientist, Yahoo Research; NeurIPS 2021 Program Co-chair)
In 2021 the organizers of NeurIPS (one of the flagship conferences in machine learning) decided to move from CMT to OpenReview. This report provides a summary of the NeurIPS 2021 workflow, the OpenReview services provided, the system performance, and enhancements planned for the next NeurIPS.
The volume of NeurIPS paper submissions has been increasing dramatically over the past decade. NeurIPS also has a history of innovation in peer review.
The NeurIPS 2021 workflow was very similar to that of previous years: double blind, closed reviewing, with area chairs and senior area chairs, and meta-reviews, with the addition in 2021 of rolling reviewer discussion with authors (rather than a single author response).
Workflow details and timing were planned extensively with the OpenReview team, and coordinated through Google Docs, several video conference meetings, and conversations through a shared Slack channel. Throughout the submission and reviewing process OpenReview technical staff provided 24/7 support to the NeurIPS program chairs, including rapid responses and custom work.
Below is a summary of key workflow steps and services. (Detailed workflow is described here.)
Reviewer recruiting. NeurIPS PCs invited over 13k reviewers, 1k area chairs and 155 senior area chairs. With the permission of ICLR, OpenReview also shared with the PCs the list of accepted authors of the previous ICLR conference from 2016 until 2021.
Reviewer & author registration. OpenReview already had profiles for approximately 228k researchers. During reviewer recruiting and paper submission, 11k new profiles were created; we incorporated these users’ papers from DBLP, running our own version of author coreference, augmented by verification performed by OpenReview staff. NeurIPS required all authors (not just submitting authors) to register with OpenReview (mostly for the purposes of conflict-of-interest resolution, and gathering multiple email addresses per person). During the month of May, 12,537 new user profiles were created, more than in any month of OpenReview’s history.
Conflicts-of-interest gathering. Author and reviewer profiles include not only current institution domain names, but also a DBLP URL (from which authors imported all their publications), a Google Scholar URL, and extensive conflict-of-interest information, including institutional history, advisors, other collaborators, social connections, and other non-institutional conflicts. As requested by NeurIPS, we also added the ability to record private conflicts (not shown on the public web site). For NeurIPS review matching, OpenReview computed conflicts based on institution history, all conflict relations listed above, and paper co-authorship within the last 3 years.
Reviewer expertise modeling. Expertise models were built for all reviewers, using OpenReview’s own modern deep learning embedding methods run on titles and abstracts of reviewers’ papers. NeurIPS decided to use only our expertise model instead of TPMS or Semantic Scholar.
Paper submissions. As requested by NeurIPS 2021 PCs, draft paper titles and abstracts were submitted one week before the full-paper deadline. OpenReview received 11,729 paper submissions. In the 24 hours before the final deadline, OpenReview received over 42k submission updates, and had over 28k active users (over 2.3k active simultaneous users during the last hour of submissions). The OpenReview multi-server system never surpassed 50% CPU usage, and maintained smooth operation with rapid system response throughout. In addition, during the submission period over 110k email messages were sent to authors (sent to each author for each update).
Bidding. SACs bid on ACs and both ACs and reviewers bid on papers, assigned as a “task” that was not complete until a given number of bids had been entered. During reviewer bidding, SACs, ACs and reviewers were able to sort the ACs/papers by affinity scores or search by metadata.
Paper-reviewer assignment. Paper-reviewer affinities included: the OpenReview reviewer expertise model, reviewer bids, and conflicts of interest. Optimization of paper-reviewer matching was performed by both Min-Cost-Flow and FairFlow [Kobren, et al, 2019]. The optimizer’s meta-parameters can be easily tuned, and the NeurIPS 2021 program chairs ran the optimizer many times (with ~60 minute turn-around time). Each resulting match comes with various requested summary statistics about the quality of the match. The results of a paper-reviewer match could be browsed by PCs and ACs using OpenReview’s “Edge Browser,” which provides a MacOS-Finder-“column-view”-like nested browsing, as well as extensive searching, and the ability to make suggested edits to the assignment (including inviting new reviewers not already in the NeurIPS reviewing pool), while seeing reviewer loads, and meta-data for reviewers (including their institution, job title, and link to profile). The same paper matching system was used to do secondary area chair assignment, and emergency reviewer assignment during the reviewing stage.
Specialized consoles: OpenReview provided specialized custom consoles for reviewers, area chairs, senior area chairs, ethics reviewers, ethics chairs, and program chairs––including functionality such as task lists, reviewing status, filtering entries with a filtering language (such as “papers with missing reviews” or “papers where the average rating is higher than 3”), keyword search, reviewer re-assignment, aggregate statistics, bid status for each reviewer, review-completion status, sending email to remind reviewers, and the ability to dump data as downloadable CSV files.
Reviewing and discussion. Reviews were entered directly into the OpenReview system, visible immediately to the ACs, then visible to authors and reviewers of the same paper after the reviewing deadline. As an enhancement created specially at the request of NeurIPS, OpenReview implemented multiple tabs in the discussion forum of a paper (author discussion, committee discussion, all reviewing discussion, post-reviewing public discussion). OpenReview processed 37,284 reviews, 8,103 meta reviews and 452 ethics reviews. In addition, 101,112 confidential comments were posted.
Review rating. NeurIPS PCs requested that area chairs be able to rate the quality of each review. The PCs also allowed authors of the submissions to provide review feedback. The ratings and feedback were only visible to the Program Chairs.
Ethics reviews. As requested by NeurIPS, for the first time OpenReview added configuration to handle ethics reviews. The Ethics Review Chairs assigned ethics reviewers to papers flagged with ethical issues. The OpenReview expertise matching system was used to suggest reviewers with the appropriate topical expertise.
Decisions. OpenReview provides the ability to download various CSV files, which PCs downloaded into Google Sheets, including AC decisions. Some decisions were modified by the PCs. Then OpenReview emailed and posted the decision based directly on this Google Sheet. (In future, OpenReview may provide browsing, sorting, and editing directly through its UI; avoiding the need for Google Sheets. Alternatively, we may more closely embrace Google Sheets––leveraging its features––with live bi-directional data updates between OpenReview and the Google Sheet.)
Camera-ready revisions. OpenReview created additional upload invitations and tasks for accepted paper authors, including copyright form, supplementary materials (including videos), camera-ready LaTeX zip file.
Conference track formation. OpenReview also provided affinity scores between accepted papers, as input to paper clustering, for conference track assignments.
System Responsiveness
Throughout the submission period, the OpenReview system provided smooth service, with rapid response times and uninterrupted availability.
Peer Review Experiments:
With the help and guidance of the team at OpenReview, NeurIPS 2021 ran the following experiments:
Consistency experiment: In 2014, NeurIPS ran an experiment in which 10% of submissions were reviewed by two independent program committees to quantify the randomness in the review process. Since then, the number of annual NeurIPS submissions has increased more than fivefold. To check whether decision consistency has changed as the conference has grown, we ran a variant of this experiment again in 2021. The results of this experiment are reported here: https://blog.neurips.cc/2021/12/08/the-neurips-2021-consistency-experiment/
Resubmission experiment: To discourage resubmissions without substantial changes, authors were asked to declare whether a previous version of their submission had been rejected from another peer-reviewed venue. Like the year before, authors of resubmissions were asked to describe the improvements made. This information was entered into OpenReview during the submission process. To evaluate resubmission bias, resubmission information was made visible to reviewers and area chairs for only a randomly chosen 50% of submissions. While the experiment allowed us to rule out a large bias, we cannot confidently ascertain that there is none.
Author perception experiment: OpenReview implemented a two-part author survey to help NeurIPS understand how well authors’ perception of their submissions agrees with reviewing outcomes. The results of this experiment are forthcoming.
Releasing the data to the public:
Submissions under review were visible only to assigned program committee members, and we did not solicit comments from the general public during the review process. After the notification deadline, accepted papers were made public and open for non-anonymous public commenting, along with their anonymous reviews, meta-reviews, and author responses.
By default, rejected submissions were not made public, but authors of rejected submissions were given 2 weeks to opt in to make their de-anonymized papers public and open for commenting in OpenReview. If they chose to do so, this also opened up the reviews, meta-reviews, and any discussion with the authors for these papers. This policy does give authors a mechanism to publicly flag and expose potential problems with the review process. In the end, only about 2% of rejected papers opted in.
Feedback from Alina Beygelzimer, NeurIPS 2021 Program Co-chair:
“As Program Chairs for NeurIPS 2021, we decided to shift the entire reviewing workflow to OpenReview. OpenReview is a flexible platform that allows heavy customization, and will be easy to adapt as the needs of the conference evolve. It brings a number of infrastructural improvements including persistent user profiles that can be self-managed, accountability in conflict-of-interest declarations, and improved modes of interaction during the discussion process. NeurIPS has a long history of experimentation with the goal of informing and improving the review process (e.g., the widely known “NeurIPS Consistency Experiment” of 2014). This year we took full advantage of the great flexibility of OpenReview’s workflow configuration to run several key experiments (including a version of the noise audit that hasn’t been done since 2014). We are grateful to the OpenReview team for supporting all requested experimentation.
Our experience with OpenReview has been a delight. Not only did the paper deadline proceed smoothly (with sub-second system response time throughout the arrival of thousands of submissions just before the submission deadline), but OpenReview gracefully handled more than 20K authors accessing the system roughly at the same time to read and respond to preliminary reviews, and enabled 10K reviewers and Area Chairs and 20K authors to engage in discussions in the weeks that followed. The feedback we received from our authors and program committee members has been overwhelmingly positive.
I hope that NeurIPS will continue to work with OpenReview for years to come. We are hugely grateful to the OpenReview team, for their unparalleled level of support to everyone involved in the review process. OpenReview has also supported the Data & Benchmarks track (new this year) as well as the Ethics Review process for both the main conference and the Data & Benchmarks track. It is also notable that over 20 of the NeurIPS workshops have chosen to use OpenReview for their reviewing workflow this year.”
OpenReview team’s plans for improvement. The OpenReview system is ready for re-use for future NeurIPS conference reviewing needs. The OpenReview team continues to make improvements and add new features. Current work likely to be ready for NeurIPS 2022 includes:
We are currently designing a new version of the paper reviewing discussion forum, and would be eager for feedback and feature requests. NeurIPS concerns about “rolling discussions” could be addressed here.
Further improvements to the reviewer-paper matching system.
Deployment of a new API providing (1) additional flexibility for fine-grained per-field control of visibility, (2) ease of changing readership permission of content, (3) better storage and access of the history of changes to a paper, review, or comment, (4) creation of “CRON”-jobs for automated sending of reminders.
In future, we will also have support for synchronous chat-style communication among reviewers, area chairs, and program chairs––which we hope will encourage more interactive, open, scientifically-flexible communication during the reviewing period. We are also building support for live conferences, integrated into the OpenReview reviewing platform.
You can report a bug or request a feature by going to our issue tracker repository. Before you do so, please read the guidelines so that we can understand and address your bug report or request. Once you are familiar with the procedure, you can make use of one of these templates to help us better understand your issue or feature request.
Reviewers will see their assigned papers in their Reviewer console and in their Task list. Depending on what method you used to assign Reviewers to papers, they may have already been notified of their assignments:
If you assigned reviewers manually using the dropdown in the Program Chair console, they would have received an email for each assignment.
If you deployed an assignment configuration from the edge browser, Reviewers would not have received a notification of their assignments.
If you use the "Assign" button to add a Reviewer to a paper after having deployed assignments, the Reviewer will receive a notification of their new assignment.
Reports from previous conferences
The author must make sure that the email address associated with the submission is added to their profile and confirmed.
If you removed the rating or confidence fields from your Official Review form, the PC console will show an average rating and/or confidence of 0 for each paper. If you replaced them with custom values, you can customize your PC console to show the average of those values instead.
From your venue request form, click Review Stage.
Select which field you want to be used in place of rating and/or confidence. It must be in the "Additional Review Form Options" field. The options reviewers can select for that field must follow the format "number: description", for example "1: Very Poor".
Enter the field name for the "Review Rating Field Name" (or "Review Confidence Field Name"). It must match the case as it is entered in Additional Review Form Options.
Click "Submit".
OpenReview’s goal is to provide scientific communication to the entire global community. Initially OpenReview was not available to users in Iran, Cuba, Syria, and several other countries because our cloud service provider blocked access as a simple approach to comply with United States economic sanctions and trade laws.
Recently the OpenReview staff has done extra work to make OpenReview available in all countries, including those above. In order to comply with relevant laws, OpenReview must guarantee that none of our registered users appear on the Specially Designated Nationals (SDN) list of the Office of Foreign Assets Control (OFAC) of the U.S. Department of the Treasury. We primarily accomplish this by looking for first-last-name matches in the list. When there is a name match, we ask for additional information that would disambiguate the OpenReview user from the person on the SDN list.
If you were notified that your profile has been limited, this means that the name on your profile matches that of a person on the SDN list. While in a limited state, you can log in to OpenReview, edit your profile, and view the same amount of data as before, but you will be unable to author any notes on the OpenReview system. This means you will be prevented from performing most common actions, including submitting to venues or posting reviews, meta-reviews, or comments. In order to reactivate your profile, you will need to do the following:
Log in to OpenReview at https://openreview.net/login
Navigate to your Profile at https://openreview.net/profile
Click the Edit button.
Find the field where you can enter your year of birth, and enter the information.
Click Save at the bottom of the page.
OpenReview will routinely check for SDN matches, and if your birth year has been updated, your profile will return to active. Note that this change will not be immediate. If you believe you have been waiting longer than expected and your profile has not been reactivated, you can reach out to info@openreview.net to request that your profile be updated.
A vulnerability is a defect (bug) in the system that compromises the integrity of the application or its data. If you found one, please contact us at info@openreview.net as soon as possible so that it can be fixed. Please do NOT post about the vulnerability elsewhere.
Due date is the advertised deadline.
Expiration date is the hard deadline.
The best way to get help with a specific issue is to contact the program chairs or organizers of the venue you are participating in. Contact info can usually be found on the venue's OpenReview page.
For general inquiries, you can contact the OpenReview team by emailing info@openreview.net or use the feedback form linked in the website's footer. We are most responsive from 9AM - 5PM EST Monday through Friday. With the exception of urgent issues, requests made on weekends or US holidays can expect to receive a response on the next business day.
The max file size for uploads is 100 MB.
There are currently two APIs supported. Venue organizers can decide which API version to use before the venue is deployed.
While most operations will work on both APIs, pay careful attention when that is not the case.
Depending on the API version that your venue is using, there are different types of fields that can be defined:
1. You will need to install the openreview-py client.
2. Create a client object with your OpenReview credentials. If you do not yet have an OpenReview profile, you will need to make one now.
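A minimal sketch of both steps, assuming an API-1 venue (credentials are placeholders):

    import openreview

    # API 1
    client = openreview.Client(
        baseurl='https://api.openreview.net',
        username='your_email@example.com',
        password='your_password')

    # API-2 venues use the api2 base URL instead:
    # client = openreview.api.OpenReviewClient(
    #     baseurl='https://api2.openreview.net',
    #     username='your_email@example.com',
    #     password='your_password')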
Assignments are stored as edges. You can view all of the assignment edges for a user in two ways:
1. In your browser search bar, enter the following. If you want to view assignments for an Area Chair, change 'Reviewers' to 'Area_Chairs'. Replace your_venue_id with your full venue id. This will bring up all of the assignment edges for all of your venue's reviewers.
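A sketch of the query URL, assuming an API-1 venue:

    https://api.openreview.net/edges?invitation=your_venue_id/Reviewers/-/Assignment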
2. Now we want to filter by a particular reviewer. If you look at a specific edge, you will see that the field "tail" corresponds to the profile ID of the assigned reviewer. So to see only the assignments for reviewer ~User_One1, change your search query to the following:
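For example:

    https://api.openreview.net/edges?invitation=your_venue_id/Reviewers/-/Assignment&tail=~User_One1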
1. If you have not done so, you will need to install and instantiate the openreview-py client.
2. Set a variable 'tail' to the profile ID of the user you are interested in, for example:
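A one-line sketch, using the hypothetical reviewer id from this guide:

    tail = '~User_One1'  # hypothetical profile id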
3. Set a variable 'invitation' to the invitation of the edges you are trying to view. If your user of interest is a reviewer, the invitation would be in the format <your_venue_id>/Reviewers/-/Assignment.
4. Retrieve all of the edges posted to that invitation for the user of interest.
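A sketch of steps 3 and 4, assuming the variables above (replace your_venue_id):

    invitation = 'your_venue_id/Reviewers/-/Assignment'
    edges = client.get_all_edges(invitation=invitation, tail=tail)
    for edge in edges:
        print(edge.head)  # the forum id of an assigned submission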
In general, conflicts can and should be computed using Paper Matching Setup. However, there may be cases where you do not want to re-compute conflicts for all reviewers and papers; for example, if an author requested to add new coauthors after reviewers were already assigned to their paper. You can use the python client to manually check for conflicts between the reviewers and the new authors like so:
1. If you have not done so, you will need to install and instantiate the openreview-py client.
2. Get the note that you are interested in computing conflicts for. If you have a double blind venue, go to api.openreview.net/notes?id=<submission_forum> and get the id listed for "original". If you have a single blind venue, you can pass in the forum.
3. Get the profiles of the authors of the submission and the reviewers. The reviewer group id should be something like your conference id/Paper#/Reviewers, for example robot-learning.org/CoRL/2022/Conference/Paper99/Reviewers if the submission of interest is paper number 99.
4. Compute the conflicts. If no conflicts are found, an empty array will be printed. Otherwise, the array will contain the shared groups that put them in conflict with each other.
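A sketch of steps 2 through 4, using the hypothetical CoRL group id from above and a placeholder note id:

    import openreview

    # 2. Get the (original) submission note
    submission = client.get_note('<original_note_id>')

    # 3. Profiles of the authors and of the assigned reviewers
    author_profiles = openreview.tools.get_profiles(client, submission.content['authorids'])
    reviewers = client.get_group('robot-learning.org/CoRL/2022/Conference/Paper99/Reviewers')

    # 4. Compute conflicts for each reviewer; an empty list means none were found
    for member in reviewers.members:
        reviewer_profile = openreview.tools.get_profiles(client, [member])[0]
        conflicts = openreview.tools.get_conflicts(author_profiles, reviewer_profile)
        print(member, conflicts)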
If you are using automatic matching, you can generate conflicts automatically with Paper Matching Setup. Sometimes Program Chairs additionally want to create custom conflicts between certain users and certain papers. This can be done by posting an edge to the conflict edge invitation for your venue.
If you have not done so, you will need to install and instantiate the openreview-py client.
Determine the conflict invitation you will be using. If you are creating a custom conflict edge for a reviewer, it would be <your_venue_id>/Reviewers/-/Conflict. If it is for an Area Chair, it would be <your_venue_id>/Area_Chairs/-/Conflict.
Set a variable 'tail' to the user you are generating the conflict for. Set a variable 'head' to the forum of the submission you want to create the conflict for. For example:
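For instance, with placeholder values:

    head = '<submission_forum_id>'  # the paper's forum id
    tail = '~User_One1'             # hypothetical user in conflict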
Set the readers of this conflict. In general, this should include the tail of the edge and anyone who might be making assignments for this group, such as Area Chairs or Senior Area Chairs. You can confirm who the readers should be by going to openreview.net/invitation/edit?id=<conflict_invitation> and checking the readers in the reply field.
Optionally set a label for your custom conflicts. This can help you query and retrieve them later.
Finally, create and post an edge with a weight of -1 between the user and the paper. This will make the conflict a hard constraint.
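Putting the steps together, a sketch for a reviewer conflict on an API-1 venue (group ids and readers are examples; confirm the readers against your conflict invitation):

    conflict_invitation = 'your_venue_id/Reviewers/-/Conflict'

    edge = openreview.Edge(
        invitation=conflict_invitation,
        head=head,
        tail=tail,
        weight=-1,                  # -1 makes the conflict a hard constraint
        label='Custom Conflict',    # optional label for later retrieval
        readers=['your_venue_id', 'your_venue_id/Area_Chairs', tail],
        writers=['your_venue_id'],
        signatures=['your_venue_id/Program_Chairs'])
    client.post_edge(edge)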
Bids, assignments, affinity scores, conflicts, etc. are saved as Edges in OpenReview.
Simply speaking, Edges are links between two OpenReview entities (Notes, Groups, Profiles, etc.).
Besides the fields that define user permissions, an Edge would usually contain these fields: head, tail, weight, label.
For example, an OpenReview affinity score edge for a paper-reviewer pair may have the paper’s id set in the edge.head field, the reviewer’s Profile id set in the edge.tail field, and the OpenReview affinity score set in the edge.weight field.
All Edges respond to some OpenReview Invitation, which specifies the possible content and permissions that an Edge is required to contain.
Because the number of Edges can be quite large, depending on the query used, you need to pass at least one of the following parameters: id, head, or tail.
Consider the following example which gets the first 10 Edges representing the “Personal” conflicts in ICLR 2020:
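A sketch, assuming the conflicts respond to an invitation like the one below (the exact ICLR 2020 invitation id may differ):

    edges = client.get_edges(
        invitation='ICLR.cc/2020/Conference/Reviewers/-/Conflict',
        label='Personal',
        limit=10)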
Note that since conflict data is sensitive, you may not have permissions to access conflict edges mentioned in the above example.
By default, get_edges will return up to the first 1000 corresponding Edges (limit=1000). To retrieve Edges beyond the first 1000, you can adjust the offset parameter, or use the function get_all_edges which returns a list of all corresponding Edges:
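Continuing the same assumed invitation:

    all_edges = client.get_all_edges(invitation='ICLR.cc/2020/Conference/Reviewers/-/Conflict')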
Since edges usually are very large in numbers, it is possible to get just the count of edges by using the function client.get_edges_count. Note that this only returns the count of edges visible to you.
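For example:

    count = client.get_edges_count(invitation='ICLR.cc/2020/Conference/Reviewers/-/Conflict')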
Since most of the common tasks performed using Edges require Edges to be grouped, it’s also possible to query for already grouped Edges. Consider the following example that gets all reviewers grouped by papers they have conflicts with for the ICLR 2020 Conference.
Note that in this case it's not necessary to pass id, head, or tail as parameters. This is the advantage of using the get_grouped_edges method.
Consider the following example that gets all papers grouped by reviewers they are in conflict with for the ICLR 2020 Conference. It returns a list of lists of the {head, weight, label} of each conflict edge for that tail.
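A sketch of both grouped queries, using the same assumed invitation id as above:

    # Reviewers grouped by the papers they conflict with (one group per head)
    grouped_by_paper = client.get_grouped_edges(
        invitation='ICLR.cc/2020/Conference/Reviewers/-/Conflict',
        groupby='head',
        select='tail')

    # Papers grouped by the reviewers they conflict with (one group per tail)
    grouped_by_reviewer = client.get_grouped_edges(
        invitation='ICLR.cc/2020/Conference/Reviewers/-/Conflict',
        groupby='tail',
        select='head,weight,label')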
To group Edges, one must already know what the edge.head and edge.tail represent in an Edge and that information can be seen from the Edge’s invitation.
The program committee is represented as groups, like Reviewers, Area Chairs, Action Editors, etc. Each of these groups has a property, members, that can contain a set of groups. Email addresses and profile ids are also represented as groups. If you want to build your own group of reviewers, you can simply add them as members of the group.
You have two options to edit the Reviewers group:
Using the group editor. You can find the group editor using the url: https://openreview.net/group/edit?id=group_id
Using our Python library:
client.add_members_to_group(group, ['melisa@mail.com', '~Melisa_Bok1'])
You can get a list of all of the venues in OpenReview like so:
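A one-line sketch; the members of the special group 'venues' are the venue ids:

    venues = client.get_group(id='venues').members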
Some venues with multiple deadlines a year may want to reuse the same reviewer and area chairs from cycle to cycle. In those cases, it is useful to use the python client to copy group members from one group to another rather than recruiting the same people each time.
1. If you have not done so, you will need to install and instantiate the openreview-py client.
2. Get the group you are taking members from. You can get an individual group by its id. If you are copying reviewers from one venue iteration to another, for example, do the following:
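For example, with hypothetical venue ids:

    old_group = client.get_group('TheVenue/2022/Conference/Reviewers')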
3. Each group has a field 'members' which is a list of profile IDs or emails belonging to the members of that group. Extract the members:
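For example:

    members = old_group.members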
4. Finally, you can use add_members_to_group to add those members to your new Reviewers group.
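For example:

    client.add_members_to_group('TheVenue/2023/Conference/Reviewers', members)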
While a support request form can most easily be submitted through the UI, some venues that have multiple deadlines a year and need to submit multiple venue requests with the same settings may find it easier to do this programmatically through the API.
1. If you have not done so, you will need to install and instantiate the openreview-py client.
2. Familiarize yourself with the venue request form. Note which fields are required, and which are optional.
3. Choose your program chairs, and create a list of their email addresses.
4. Now it is time to choose the settings for your venue. These make up the 'content' field of your note. Go through the fields on the form in the UI, and cross-reference the JSON of the invitation to make sure each key matches that in the invitation exactly. For the Program Chairs field, you can enter your program_chair_emails list (see the sketch after these steps).
6. Post your note.
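A sketch of steps 4 and 6, assuming API 1; the content keys shown are examples only and must be cross-checked against the Request_Form invitation JSON:

    program_chair_emails = ['pc1@example.com', 'pc2@example.com']  # hypothetical

    request = openreview.Note(
        invitation='OpenReview.net/Support/-/Request_Form',
        readers=['OpenReview.net/Support'] + program_chair_emails,
        writers=[],
        signatures=['~Your_Profile1'],
        content={
            # Keys must match the Request_Form invitation exactly; only a few shown here.
            'title': 'My Venue 2024',
            'Official Venue Name': 'My Venue 2024',
            'Abbreviated Venue Name': 'MV 2024',
            'Official Website URL': 'https://myvenue.example.com',
            'program_chair_emails': program_chair_emails,
            'contact_email': 'pc1@example.com',
            # ...remaining required fields from the invitation...
        })

    posted = client.post_note(request)
    print(posted.id)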
1. Get all submissions for your venue. You can do this by passing your venue's submission invitation into get_all_notes. You should also pass in details = "directReplies" to obtain any notes that reply to each submission.
2. For each submission, add any replies with the Official Review invitation to a list of Reviews.
3. The list reviews now contains all of the reviews for your venue.
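A sketch of all three steps, assuming a double-blind API-1 venue (single-blind venues use the Submission invitation instead):

    submissions = client.get_all_notes(
        invitation='Your/Venue/ID/-/Blind_Submission',
        details='directReplies')

    reviews = []
    for submission in submissions:
        for reply in submission.details['directReplies']:
            if reply['invitation'].endswith('Official_Review'):
                reviews.append(reply)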
Most data in OpenReview are represented as Notes. Each Note responds to an Invitation, which specifies the possible content and permissions that the Note is required to contain.
Users can query notes using the ID of the Invitation that they respond to.
Consider the following example which gets the public Notes that represent the 11th through 20th submissions to ICLR 2019:
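A sketch; offset=10 with limit=10 selects submissions 11 through 20:

    notes = client.get_notes(
        invitation='ICLR.cc/2019/Conference/-/Blind_Submission',
        limit=10,
        offset=10)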
By default, get_notes will return up to the first 1000 corresponding Notes (limit=1000). To retrieve Notes beyond the first 1000, you can adjust the offset parameter, or use the function client.get_all_notes which returns a list of all corresponding Notes:
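For example, continuing with the same invitation:

    all_notes = client.get_all_notes(invitation='ICLR.cc/2019/Conference/-/Blind_Submission')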
It’s also possible to query for Notes that are replies to other Notes. All reviews, decisions, and comments posted to a particular submission will share that submission's forum. To get all of the notes posted to a particular submission forum, you can do the following:
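A minimal sketch, with a placeholder forum id (the returned list includes the submission itself):

    forum_notes = client.get_notes(forum='<submission_forum_id>')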
If you would like to get all types of replies for a Conference, like all Reviews, you can use details = replies. Consider the following example that gets all Official_Comments for the ICLR 2021 conference:
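A sketch, filtering the replies of each blind submission by invitation:

    blind_notes = client.get_all_notes(
        invitation='ICLR.cc/2021/Conference/-/Blind_Submission',
        details='replies')

    comments = []
    for note in blind_notes:
        for reply in note.details['replies']:
            if reply['invitation'].endswith('Official_Comment'):
                comments.append(reply)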
This code returns public comments left on ICLR 2021 Blind Submissions with invitations such as ICLR.cc/2021/Conference/Paper1234/-/Official_Comment.
Invitation IDs follow a loose convention that resembles the one in the example above: the ID of the conference (e.g. ICLR.cc/2019/Conference) and an identifying string (e.g. Blind_Submission), joined by a dash (-). Older conferences often use the format ConferenceID/-/Paper#/Official_Comment, whereas newer venues use the format ConferenceID/Paper#/-/Official_Comment.
Invitations can be queried with the get_invitations function to find the Invitation IDs for a particular conference. The following example retrieves the first 10 Invitations for the ICLR 2019 conference:
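For example:

    invitations = client.get_invitations(regex='ICLR.cc/2019/Conference/.*', limit=10)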
Like get_notes, get_invitations will return up to the first 1000 Invitations (limit=1000). To retrieve Invitations beyond the first 1000, you can adjust the offset parameter, or use the function tools.iterget_invitations:
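For example:

    for invitation in openreview.tools.iterget_invitations(client, regex='ICLR.cc/2019/Conference/.*'):
        print(invitation.id)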
The data of a note is stored in the "content" of that note. For example, the actual decision is stored in the content of decision notes and can be accessed like this:
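A sketch, assuming decisions were posted as direct replies to each (blind) submission:

    decisions = []
    for submission in client.get_all_notes(
            invitation='Your/Venue/ID/-/Blind_Submission', details='directReplies'):
        for reply in submission.details['directReplies']:
            if reply['invitation'].endswith('Decision'):
                decisions.append(reply)
                print(reply['content']['decision'])  # e.g. 'Accept (Poster)'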
If you are testing your venue on the dev site, you may want to generate some test submissions. This can be accomplished manually through the UI, but it may be faster to do it with python using the guide below.
If you have not already, you will need to create a username and password on the dev site: https://dev.openreview.net/signup. Note that sending emails through the dev site is not supported. Therefore, you will need to request the activation link at info@openreview.net.
Instantiate the python client using your dev profile credentials:
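A minimal sketch, assuming the API-1 dev base URL (devapi.openreview.net is an assumption; confirm with OpenReview support):

    import openreview

    client = openreview.Client(
        baseurl='https://devapi.openreview.net',
        username='your_dev_email@example.com',
        password='your_dev_password')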
Familiarize yourself with the submission invitation. Every content field of your note will need to be in the format specified by the submission invitation, and any fields marked as "required" will need to be present in the note.
Create your note. If you were submitting a note through the API, it would look something like this:
Invitation: The ID of your venue's submission invitation. It will take the form of your venue id + /-/Submission.
Readers/Writers: A list of the venue ID + all author profile IDs. After the submission deadline, these values will be automatically updated to match the visibility settings selected in the venue request form.
Signatures: Your profile ID.
Content: The actual content of the submission, which must match the invitation. If the invitation calls for any file uploads, such as a pdf or zip file, you can build the url to the file using put_attachment and entering the path, the invitation ID, and the name of the field. You can then add that field to the note's content. The "authorids" field of the note's content should contain a list of the authors' profile IDs or emails, in the same order as the authors' names. If you registered dummy users, you can find their profile IDs by going to their profile page and copying the ID parameter of that page's url. For example, dev.openreview.net/profile?id=~Test_Author2 --> ~Test_Author2.
Post your note and output the resulting note's ID:
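A sketch, assuming an API-1 venue with id 'Your/Venue/ID' and a dummy author ~Test_Author1:

    note = openreview.Note(
        invitation='Your/Venue/ID/-/Submission',
        readers=['Your/Venue/ID', '~Test_Author1'],
        writers=['Your/Venue/ID', '~Test_Author1'],
        signatures=['~Test_Author1'],
        content={
            'title': 'Test Submission',
            'abstract': 'A test abstract.',
            'authors': ['Test Author'],
            'authorids': ['~Test_Author1'],
        })

    # Upload a file and attach it to the note's content
    note.content['pdf'] = client.put_attachment('paper.pdf', 'Your/Venue/ID/-/Submission', 'pdf')

    posted = client.post_note(note)
    print(posted.id)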
You can now view your note in the UI by going to dev.openreview.net/forum?id=<note_id>.
Instantiate the python client using your dev profile credentials:
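A minimal sketch for the API-2 dev server (the base URL is an assumption; confirm with OpenReview support):

    import openreview

    client = openreview.api.OpenReviewClient(
        baseurl='https://devapi2.openreview.net',
        username='your_dev_email@example.com',
        password='your_dev_password')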
Familiarize yourself with the submission invitation. Every content field of your note will need to be in the format specified by the submission invitation, and all fields that do not contain optional: true are mandatory.
Invitation: The ID of your venue's submission invitation. It will take the form of your venue id + /-/Submission.
Readers/Writers: A list of the venue ID + all author profile IDs. After the submission deadline, these values will be automatically updated to match the visibility settings selected in the venue request form.
Signatures: Your profile ID.
Content: The actual content of the submission, which must match the invitation. If the invitation calls for any file uploads, such as a pdf or zip file, you can build the url to the file using put_attachment and entering the path, the invitation ID, and the name of the field. You can then add that field to the note's content. The "authorids" field of the note's content should contain a list of the authors' profile IDs or emails, in the same order as the authors' names. If you registered dummy users, you can find their profile IDs by going to their profile page and copying the ID parameter of that page's url. For example, dev.openreview.net/profile?id=~Test_Author2 --> ~Test_Author2.
Post your note and output the resulting note's ID:
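A sketch for an API-2 venue, assuming id 'Your/Venue/ID' and the dummy author ~Test_Author1; note the {'value': ...} field format:

    edit = client.post_note_edit(
        invitation='Your/Venue/ID/-/Submission',
        signatures=['~Test_Author1'],
        note=openreview.api.Note(content={
            'title': {'value': 'Test Submission'},
            'abstract': {'value': 'A test abstract.'},
            'authors': {'value': ['Test Author']},
            'authorids': {'value': ['~Test_Author1']},
        }))
    print(edit['note']['id'])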
You can now view your note in the UI by going to dev.openreview.net/forum?id=<note_id>.
5. Now create your openreview note. You need to include the invitation for the request form, as well as signatures, readers, writers, and content. Your signature should be your OpenReview profile ID that is linked to the email address you entered in the program_chair_emails.
7. You can check for your support request here: https://openreview.net/group?id=OpenReview.net/Support
You can retrieve an individual's OpenReview profile object by their name or email:
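For example (profile id and email are hypothetical):

    profile = client.get_profile('~Melisa_Bok1')
    profile = client.get_profile('melisa@mail.com')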
If you want to query more than one profile at a time, you can use our tools module:
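For example:

    import openreview
    profiles = openreview.tools.get_profiles(client, ['~Melisa_Bok1', 'melisa@mail.com'])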
If you want to get all the profiles and their publications, you can use the previous call and add the parameter with_publications=True.
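For example:

    profiles = openreview.tools.get_profiles(
        client,
        ['~Melisa_Bok1', 'melisa@mail.com'],
        with_publications=True)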
Relations can be extracted in two ways: (1) from the Profile object itself, or (2) from coauthored Notes in the system.
Getting stored relations:
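A sketch; relations live in the profile's content (the exact field layout may vary):

    profile = client.get_profile('~Melisa_Bok1')
    for relation in profile.content.get('relations', []):
        print(relation.get('relation'), relation.get('name'))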
Getting coauthorship relations from Notes:
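A sketch, collecting coauthors from the author lists of the user's papers:

    papers = client.get_all_notes(content={'authorids': '~Melisa_Bok1'})
    coauthors = {author for paper in papers for author in paper.content.get('authorids', [])}
    coauthors.discard('~Melisa_Bok1')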
The submission deadline set through the venue request form is actually only the advertised due date. The true deadline is the expiration date, which is set to 30 minutes after the submission deadline. If you would like to change the expiration date to allow the authors more or less wiggle room around the submission deadline, you can do so using the python client.
If you have not done so, you will need to install and instantiate the openreview-py client.
Choose your desired expiration date. You will need to convert it into epoch time in milliseconds using an epoch time converter. For example:
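For example, computing a hypothetical deadline in Python instead of a web converter:

    import datetime

    # 2024-05-31 23:59 UTC, in epoch milliseconds
    expdate = int(datetime.datetime(
        2024, 5, 31, 23, 59, tzinfo=datetime.timezone.utc).timestamp() * 1000)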
Depending on the API version that your venue is using, you will need to update the expdate value differently.
Retrieve your invitation:
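For example, for a hypothetical API-1 submission invitation:

    invitation = client.get_invitation('Your/Venue/ID/-/Submission')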
Set the expiration date, or expdate.
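For example, using the epoch-millisecond value computed above:

    invitation.expdate = expdate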
Post your changes.
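For example (API-2 venues update the expdate through an invitation edit instead):

    client.post_invitation(invitation)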
Create an Invitation Edit
Depending on the Invitation used to create the Invitation Edit, some other fields may be required. To read more about how Invitations work, refer to the Invitations section.
To get all meta-reviews for a venue, you can do the following:
1. Get all submissions for your venue. You can do this by passing your venue's submission invitation into get_all_notes. You should also pass in details = "directReplies" to obtain any notes that reply to each submission.
2. For each submission, add any replies with the Meta-Review invitation to a list of meta-reviews.
3. The metareviews list now contains all of the meta-reviews for your venue.
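Continuing the pattern used for reviews above, a minimal sketch:

    metareviews = []
    for submission in client.get_all_notes(
            invitation='Your/Venue/ID/-/Blind_Submission', details='directReplies'):
        for reply in submission.details['directReplies']:
            if reply['invitation'].endswith('Meta_Review'):
                metareviews.append(reply)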
To get all decisions for a venue, you can do the following:
1. Get all submissions for your venue. You can do this by passing your venue's submission invitation into get_all_notes. You should also pass in details = "directReplies" to obtain any notes that reply to each submission.
2. For each submission, add any replies with the Decision invitation to a list of decisions.
3. The decisions list now contains all of the decisions for your venue.
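The same pattern again, a minimal sketch:

    decisions = []
    for submission in client.get_all_notes(
            invitation='Your/Venue/ID/-/Blind_Submission', details='directReplies'):
        for reply in submission.details['directReplies']:
            if reply['invitation'].endswith('Decision'):
                decisions.append(reply)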
Say you want to export all of the reviews for a given venue into a csv file.
1. If you have not done so, you will need to install and instantiate the openreview-py client.
2. Retrieve all of the Reviews. Reviews generally follow invitations of the form Your/Venue/ID/Paper#/-/Official_Review. We can retrieve them by getting all of the direct replies to each submission and finding those with invitations ending in Official_Review.
Single blind venues can do this like so:
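A sketch for a single-blind venue:

    submissions = client.get_all_notes(
        invitation='Your/Venue/ID/-/Submission',
        details='directReplies')

    reviews = []
    for submission in submissions:
        for reply in submission.details['directReplies']:
            if reply['invitation'].endswith('Official_Review'):
                reviews.append(reply)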
whereas double blind venues should replace "Submission" in the invitation with "Blind_Submission":
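Only the submission invitation changes:

    submissions = client.get_all_notes(
        invitation='Your/Venue/ID/-/Blind_Submission',
        details='directReplies')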
3. Next, get the super review invitation. This is the overall review invitation on which each of the Paper#/-/Official_Review invitations is based, and it follows the format Venue/ID/-/Official_Review.
4. Generate a list of the fields in the content in the Review invitation. For reference, this is what the default review invitation content looks like in JSON:
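A sketch of the default API-1 review form (the exact defaults may differ for your venue):

    {
        "title": {
            "order": 1,
            "value-regex": ".{0,500}",
            "description": "Brief summary of your review.",
            "required": true
        },
        "review": {
            "order": 2,
            "value-regex": "[\\S\\s]{1,200000}",
            "description": "Please provide an evaluation of the quality, clarity, originality and significance of this work.",
            "required": true
        },
        "rating": {
            "order": 3,
            "value-dropdown": [
                "10: Top 5% of accepted papers, seminal paper",
                "9: Top 15% of accepted papers, strong accept",
                "8: Top 50% of accepted papers, clear accept",
                "7: Good paper, accept",
                "6: Marginally above acceptance threshold",
                "5: Marginally below acceptance threshold",
                "4: Ok but not good enough - rejection",
                "3: Clear rejection",
                "2: Strong rejection",
                "1: Trivial or wrong"
            ],
            "required": true
        },
        "confidence": {
            "order": 4,
            "value-radio": [
                "5: The reviewer is absolutely certain that the evaluation is correct",
                "4: The reviewer is confident but not absolutely certain",
                "3: The reviewer is fairly confident that the evaluation is correct",
                "2: The reviewer is willing to defend the evaluation, but likely did not understand central parts of the paper",
                "1: The reviewer's evaluation is an educated guess"
            ],
            "required": true
        }
    }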
so we would expect a list like ["title", "review", "rating", "confidence"]. This is how we get the list:
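A sketch, reading the field names from the invitation's reply content:

    review_invitation = client.get_invitation('Your/Venue/ID/-/Official_Review')
    keylist = list(review_invitation.reply['content'].keys())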
5. If you haven't already, import csv. Then iterate through the list of reviews stored in 'reviews' and for each one, append the values associated with the keys in your keylist. If a value does not exist for that key, put an empty string in its place.
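A sketch, using the 'reviews' and 'keylist' variables from the previous steps:

    import csv

    with open('reviews.csv', 'w', newline='') as outfile:
        writer = csv.writer(outfile)
        writer.writerow(keylist)
        for review in reviews:
            writer.writerow([review['content'].get(key, '') for key in keylist])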
6. The previous example only exports the content fields of each review. You may also want to know which submission each review is associated with. You can get the forum of each review, which corresponds to the forum page of its associated submission. For example, if a review's forum is aBcDegh, you could find that submission at https://openreview.net/forum?id=aBcDegh. To create a csv that includes the review forums, do this:
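The same loop, with the forum id prepended to each row:

    import csv

    with open('reviews.csv', 'w', newline='') as outfile:
        writer = csv.writer(outfile)
        writer.writerow(['forum'] + keylist)
        for review in reviews:
            writer.writerow(
                [review['forum']] + [review['content'].get(key, '') for key in keylist])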
7. There should now be a csv of exported reviews in the directory in which you are working.
If you want to change your submission deadline, you can do so using the 'Revision' button on your venue request form. Note that you can only have one submission deadline for all submissions, and it is not possible to have different deadlines for different types of papers, or to extend the deadline only for a subset of authors.
If you used an abstract registration deadline and want to change the submission deadline after the abstract deadline has passed, you will need to rerun the Post Submission stage after changing the Submission Deadline.
Submitting a venue request form is the first step towards hosting a venue on OpenReview. Go to https://openreview.net/group?id=OpenReview.net/Support and click 'OpenReview Support Request Form'.
If you want to create a Venue Instance for testing purposes, you should use the following link instead: https://dev.openreview.net/group?id=OpenReview.net/Support.
Note that sending emails through the dev site is not supported. If you need to create test profiles/accounts in the dev site, you will need to request the activation link at info@openreview.net.
This is where you will select many of the settings for your venue. The settings for readership permissions can be overwritten at later stages; if you initially make submissions private, you can override the submission readers later on with the Post Submission Stage or the Post Decision Stage.
After you submit the form, our team will review it and deploy it, making your venue live. You can then edit some of your selected settings from the venue request form. Note that if you do not enter a submission start date, submissions will open immediately upon venue deployment.
If you are following a journal-like workflow where you will be posting several venue request forms a year with the same settings, you may find it more efficient to submit your request forms programmatically.
You can add supplementary material to the submission form by clicking on the 'Revision' button and adding the following JSON under Additional Submission Options:
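A sketch of the JSON for an API-1 venue (field details may need adjusting for your venue):

    {
        "supplementary_material": {
            "description": "Supplementary material (e.g. code, videos). All supplementary material must be self-contained and zipped into a single file.",
            "order": 10,
            "value-file": {
                "fileTypes": ["zip"],
                "size": 50
            },
            "required": false
        }
    }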
This will add a supplementary material field to upload zipped files of size up to 50 MB. You can also enable a Submission Revision Stage to allow a separate deadline for Supplementary Material.
If your venue is using the new API (api_version = "2") then you should use the following JSON example:
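A sketch of the equivalent API-2 field (the reader group ids and substitution syntax are examples; confirm them against your venue's invitations):

    {
        "supplementary_material": {
            "value": {
                "param": {
                    "type": "file",
                    "extensions": ["zip"],
                    "maxSize": 50,
                    "optional": true
                }
            },
            "description": "Supplementary material (e.g. code, videos). All supplementary material must be self-contained and zipped into a single file.",
            "order": 10,
            "readers": [
                "Your/Venue/ID",
                "Your/Venue/ID/Submission${4/number}/Authors"
            ]
        }
    }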
The field readers is optional and can be used to restrict the readers of the field; if you don't specify the readers, then all readers of the submission will be able to see the supplementary material. Make sure you use the right group ids to specify the readers.
You can customize your venue’s submission form using the Revision button on your venue request form. New fields can be entered in JSON format, surrounded by a single set of curly braces, as shown below:
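A sketch with a hypothetical custom field (API-1 format):

    {
        "preferred_presentation_format": {
            "description": "Your preferred presentation format.",
            "order": 11,
            "value-radio": ["Oral", "Poster"],
            "required": true
        }
    }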
To remove fields, enter a comma-separated list of lowercase field names in the ‘Remove Submission Options’ field. To learn more about accepted field types, refer here.
After your venue is deployed, you will have access to multiple pages in OpenReview:
You can find a link to your PC console under ‘Active Venues’ on the OpenReview homepage.
The venue request form contains the settings that were selected in your venue’s support request form. You can get to the venue request form using the ‘Full venue configuration’ link at the bottom of your PC console. Almost all customizations, such as review form modifications, deadline changes, and workflow stages, should be made through the venue request form.
Each type of committee member (Reviewers, Area Chairs, and Senior Area Chairs) for your venue will have their own console. You can access these consoles through each of the links under ‘Venue Roles’ at the bottom of your PC console.
There are two ways you can set an abstract registration deadline:
When you submit your Support Request Form
After your venue request form has been submitted, add an Abstract Registration deadline using the 'Revision' button on your venue request form.
Until the Abstract Registration deadline, authors will have the option to modify their submissions using the ‘edit’ button in the top right corner of their submission forum page.
After the Abstract Registration deadline passes, you will need to run the 'Post Submission Stage' from your venue request form. This creates the 'Revision' button and paper groups, which allow authors to access and edit their submissions. It also creates blinded copies to anonymize submissions, if applicable.
If you change the Submission deadline after the Abstract Registration deadline, you will need to re-run the ‘Post Submission Stage’ in order to update the revision deadline. After the Submission deadline, authors will lose the ability to revise their submissions.
To create a profile, go to https://openreview.net/signup. Entering your full name might bring up a variety of potential options, depending on whether accounts under your name already appear in our system:
This option will appear if there is an existing profile in OpenReview with your name that does not yet have a password, often because it was pre-created by OpenReview. If you see a profile that you believe belongs to you, you can click ‘Claim Profile’. A text field will then appear for your email address. Enter your email, click claim, and then enter a password. You will receive an email with instructions about next steps. If you no longer have access to the email associated with the profile you need to claim, contact OpenReview support.
If you created a profile that has not yet been activated, you will see this option. If you enter your email address and click ‘Send Activation Link’, you will receive an email with a link that will bring you to OpenReview to complete your profile activation.
If you forgot your password associated with a profile, enter your email address and click ‘Reset Password’.
If no existing profiles belong to you and you would like to create a new one, you can fill in your email address in the text field next to the ‘Sign Up’ option and click the button. You will then be prompted to enter a new password and send a confirmation email. Clicking the link in the confirmation email will bring you to the registration page.
The profile registration page is where you can input your personal and professional information. After clicking ‘Register for OpenReview’, your profile will either be activated immediately or sent to moderation. Providing an institutional email address and a valid personal homepage, such as a LinkedIn, GitHub, or Google Scholar profile, will increase your chances of being quickly activated. If your profile is rejected, you can return to the signup page, enter your name, and click ‘Resend Activation Link’ next to the email address you previously attempted to register with.
Go to your profile page at https://openreview.net/profile
Click 'Edit Profile'.
Locate the 'Emails' section and click the blue plus sign underneath your name.
Enter your email and then click 'Confirm'. An email will be sent to your new address.
Click the link in the confirmation email to confirm the email to your profile.
If you would like to make that email preferred, you can do so by clicking ‘Edit profile’ once more and selecting ‘Make Preferred’ next to your desired email. The option to make an email preferred will not appear until you have confirmed the new email to your profile.
OpenReview uses email addresses associated with current or former affiliations for profile deduplication, conflict detection, and paper coreference. For this reason, OpenReview prefers to keep all current and former institutional email addresses on each user's profile. OpenReview will only send messages to the address marked as “Preferred”. OpenReview only displays obfuscated emails (****@umass.edu) and never releases full email addresses to the public.
If there is an email address you want removed from your profile, please reach out to info@openreview.net.
Your OpenReview profile ID is a unique string made up of a tilde concatenated with your full name and a number, for example ~First_Last1. If you go to your OpenReview profile, your ID will be at the end of the url (for example, https://openreview.net/profile?id=~Your_Id1).
To add institutional data to your OpenReview profile, go to your profile page at https://openreview.net/profile and click 'Edit profile'. You must enter at least one position under 'Education & Career History' for your profile to be saved. You can choose one position from the dropdown, which includes the most commonly used ones. If none of the positions in the dropdown reflect the position you are entering, you can type your own.
Next, enter a valid institution name (e.g. University of Massachusetts, Amherst) and domain (e.g. umass.edu) from the dropdown or type it in if not present. You can leave the 'End' field empty if you are currently in that position, or you can enter when you are expected to leave that position.
You can click on the 'Login' button on the right of the navigation menu, or go to https://openreview.net/login
Click on your name on the right of the navigation menu and click 'Profile' from the dropdown options, or go to https://openreview.net/profile.
Click 'Edit Profile'.
Locate the 'DBLP URL' text field under the 'Personal Links' section.
You will need to get the 'Persistent DBLP URL' from your DBLP homepage. To do so, hover over the share icon to the right of your name in the DBLP page heading and copy the persistent URL from the hover menu.
Paste this persistent url into the DBLP URL field.
4. Click the "Add DBLP Papers to Profile" button
If your persistent DBLP url was valid, the option to 'Add DBLP Papers to Profile' will appear. Click the button and your DBLP publications will appear in a modal window.
Use the checkbox in front of each paper to select those which you would like to import to your OpenReview profile.
Click the 'Add to your Profile' button at the bottom of the modal window to import the selected papers.
If you get an error that says "please ensure the provided DBLP URL is yours", make sure that the name (or one of the names) in your OpenReview profile matches exactly with the name used in DBLP publications. If it does not, you can add a new name to your profile, click 'Save Profile Changes', and try again to import your papers.
Go to your profile page and click 'Edit Profile'. Scroll to the bottom of the page and look for the 'Publications' section. All publications associated with your profile will be listed here, but those imported from DBLP will have a minus icon displayed after the title.
You can use this minus button to remove a DBLP publication from your profile. If you mistakenly remove a publication, you can click the icon again to reverse it.
When you are finished, click 'Save Profile Changes' in order to remove the selected papers from your profile.
If one or more publications are not present in your DBLP homepage, you can use our direct upload feature to manually upload your missing publications. Please go to the OpenReview.net Archive page and follow the instructions there.
To locate your Semantic Scholar URL, go to https://semanticscholar.org and search for the name you publish by.
If Semantic Scholar has your data, an author tile with your name will appear under the search bar. If your name is not immediately one of the top tiles, click the "Show All Authors" link to expand the tile section. Click on the author tile.
Once you have identified your author page with the associated papers, the URL in the browser address bar is the Semantic Scholar URL that you can use on the OpenReview profile edit page.
If you would like to edit your Semantic Scholar author page or add additional metadata (e.g. affiliation data) you may use the "Claim Author Page" button located under your name at the top left of your Semantic Scholar author page.
After you have claimed your page and the claim has been approved, you will receive an email from Semantic Scholar with instructions to edit and update your author page. You will have the option to edit or add metadata, remove papers or add additional papers to your claimed Semantic Scholar author page (in case there are multiple author pages with your name).
Learn about the differences and new features of the forum page.
OpenReview is releasing a major update of the forum page for venues using the new API (v2). While the interface will look familiar, it should be faster, more flexible, and offer new ways to quickly find the content you are looking for.
The new forum UI is enabled by default. In case you face unexpected issues, you can switch back to the original forum UI by changing the URL of any existing TMLR forum page from /forum to /forum-original. For example: https://openreview.net/forum-original?id=8HuyXvbvqX.
Every post on the new forum page contains the following information:
Title: title may be provided by the author of the post or it may be a generic title. Clicking the link icon next to the title copies a direct link to this post to your clipboard.
Reply Type: represents the type of forum reply, and comes from the name of the invitation that was used to create the post.
Signature: shows the identity of the user who posted the reply. If the reply comes from a group and you have permission to see the members of the group, their identities will be shown in parentheses next to the signature.
Creation Date: shows when the reply was posted and when it was last modified
Readers: shows who this post is visible to
Revisions Link: see a list of all edits made (for more see "Editing Posted Content" below)
Edit & Delete: allows you to modify or hide the reply (for more see "Editing Posted Content" below)
Content: shows the complete content of the forum reply
Collapse Toggles: allows you to show more or less of the content of the note. The top button will collapse everything down to a single line displaying just basic information, the middle button will only show the first 5-10 lines of content, and the bottom button will display the entire contents of the note.
Reply Buttons: show all the available options for replying to this post. Clicking one of the buttons will open a form that allows you to submit your reply.
The new forum page provides advanced controls for sorting and filtering replies such as comments, reviews, private responses, and PC decisions. These controls include:
Invitation Filter: show only replies of a certain type. Can select multiple invitations (types) to show replies matching any of those invitations.
Author Filter: show only replies signed by the selected user or group. Can select multiple authors to show all replies that include any one of the selections as a signature.
Keyword Filter: show all replies that match the phrase. Matches can come from the content of the reply, the title, or the invitation.
Sort Control: change the order of replies shown to either most recent first or oldest first
Layout Control: change the level of nesting of the replies. The three options are Linear, Threaded, and Nested
Collapse Control: change how much of each reply is shown. By default the entire contents of the reply are visible, which corresponds to the three-lines button. Selecting the middle button (two lines) will show only the first 5-10 lines of content, and selecting the left button (one line) will condense the replies down to a single line displaying just basic information.
Link Button: copy a URL to your clipboard that includes all the currently selected filter and sort options. This is useful for sharing specific views of a forum page with other people or bookmarking for later.
Readers Filter: show all the replies that have the selected users or groups listed as a reader. Clicking a button twice will turn the button red – this means that only the replies that DO NOT include that group as a reader will be shown.
Preset Views: venues can configure sets of filters and layout options that are displayed as tabs above the filter form. Clicking on a given tab will switch the current filters over to those settings.
As mentioned in the section above, there are currently three layout modes available for forum pages. These are:
Linear: all replies are shown at the same level of nesting (no indentation). This is useful to quickly see a chronological feed of all forum replies. If a given post is a reply to another reply (not a general reply to the submission note) the title of that reply will be shown in gray above the title. Clicking on this gray title will scroll the page to that reply.
Threaded: one level of nesting is shown. This layout is useful to group conversations into overarching topics.
Nested: two levels of nesting are shown. This is useful for breaking larger conversations down into sub threads.
If you are logged into OpenReview and have permission to modify the content of a submission or a forum reply (aka a Note) you will see a dropdown button labeled Edit to the right of the title. Clicking this button will display a list of all the available ways to modify the note (aka edit Invitations). For a submission note this might include options to revise the submission or withdraw the submission, and for a reply it might include the option to edit the content of the reply.
You can see a list of all the edits of a given note by clicking on the Revisions link below the title.
We hope that you find the new functionality useful. If there is anything you would like to see changed or added please send an email to info@openreview.net with the subject "New Forum Page Feedback".
On the request form for your venue, click on the ‘Revision’ button and modify the Homepage Override field, which expects valid JSON.
The instruction field of the JSON accepts HTML, allowing the formatting of the instructions section to be fully customized. All HTML should be validated to ensure that it will not break the layout of the page: https://validator.w3.org/#validate_by_input
Example:
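(The original example is shown as an image; the following sketch uses hypothetical values and assumes keys such as location and instructions are accepted by the Homepage Override field.)

```json
{
    "location": "Virtual",
    "instructions": "<h3>Call for Papers</h3><p>Please read the <a href=\"https://yourvenue.org/cfp\">submission guidelines</a> before submitting.</p>"
}
```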
which will be displayed as formatted HTML in the instructions section of your venue's homepage.
Once decisions have been posted, you will see a Post Decision Stage button on the request form for your venue. Once you click on this button, you will be able to specify the name of each tab you want to include in the homepage in the form of a dictionary.
Note that each key must be a valid decision option. For example, a venue with the decision options Accept (Oral), Accept (Spotlight), Accept (Poster) and Reject could choose to hide the rejected papers tab as follows:
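A sketch of such a dictionary, mapping each decision option to the tab name to display; omitting the Reject key hides that tab:

```json
{
    "Accept (Oral)": "Oral Presentations",
    "Accept (Spotlight)": "Spotlight Presentations",
    "Accept (Poster)": "Poster Presentations"
}
```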
To recruit reviewers, use the 'Recruitment' feature located on the venue request form; clicking this button will allow you to send emails to potential reviewers. For more information, please see our documentation on Reviewer Recruitment and Reminders.
Although OpenReview can support multiple groups for each role, it cannot support distinct workflows, different deadlines, or different review forms for multiple groups within the same venue.
Contact OpenReview support at info@openreview.net to request the ability to support multiple groups for a particular role.
After we have customized your venue to support multiple groups, you can recruit members directly into each group with the 'Recruitment' button.
When it is time to assign the group members to submissions, you will need to run separate matchings for each group. When you deploy your assignments, the members of each group will be moved into the individual paper groups. For example, if a reviewer from Group A and Group B are each assigned to paper 1, they would both be moved into the Paper1/Reviewers group.
From this point forward the reviewers will be treated identically, regardless of their original group.
Test venues and profiles are not allowed on the live OpenReview site. You can test your venue workflow using the dev site: dev.openreview.net. In order to do so, create a profile and submit a venue request form, just as you would on the live site.
Note that sending emails through the dev site is not supported. If you need to create a profile in the dev site, you will need to request the activation link at info@openreview.net.
It is not possible to support multiple different workflows for different types of papers within a single venue. If, however, you want to accept distinct types of submissions that will all follow the same workflow, you can add a ‘track’ field to your submission form. If you reach out to OpenReview support at info@openreview.net, we can then customize your PC console to allow you to filter and sort by track.
On the request form for your venue, click on the ‘Post Submission’ button to make submissions available according to the settings selected in the fields ‘Author and Reviewer Anonymity’ and ‘Submission Readers’ of the venue request. This means that:
If submissions are double blind, blind copies of submissions will be created (make sure to select Force=True). You can also choose which fields are kept hidden (author names are automatically hidden).
If you select the option ‘Everyone (submissions are public)’ in ‘Submission Readers’, then all submissions will be public.
If submissions should be private, then they can be released to the assigned program committee (only assigned reviewers, for example), to the entire program committee (all reviewers), or to PCs and authors only.
This option is only available for Single Blind, public venues.
In order to begin reviewing while still accepting submissions, you will need to run the ‘Post Submission Stage’ with 'Force: True' in order to create paper groups. Then you can run the Review Stage. You will need to repeat these two steps regularly until the Submission deadline has passed, in order to generate the paper groups and review invitations for newly received submissions.
On the request form for your venue, click on the ‘Recruitment’ button to recruit reviewers (and ACs if applicable). You can use the 'Remind Recruitment' button to send a reminder to Reviewers who have not yet accepted or declined your invitation.
Make sure to pay close attention to the Invitee Details on the Recruitment form. Invitees must be formatted in a specific way; otherwise the messages will fail to send and will return a status error.
Enter a list of invitees with one user per line. Either tilde IDs (~Captain_America1), emails (captain_rogers@marvel.com), or email, name pairs (captain_rogers@marvel.com, Captain America) are expected. If only an email address is provided for an invitee, the recruitment email is addressed to "Dear invitee". Do not use parentheses in your list of invitees.
All invited reviewers will appear in the group venue_id/Reviewers/Invited
Reviewers who accept the invitation will be added to the group venue_id/Reviewers
Reviewers who decline the invitation will be added to the group venue_id/Reviewers_Declined.
You can find links to these groups on your PC console.
At any point during your venue's workflow, you can click on the ‘Post Submission’ button and use the field ‘Submission Readers’ to change the readers for all submissions:
All program committee (all reviewers, all area chairs, all senior area chairs if applicable): all papers are private and only released to all reviewers, area chairs and senior area chairs (if your venue has them)
Assigned program committee (assigned reviewers, assigned area chairs, assigned senior area chairs if applicable): all papers are private and only released to each paper's assigned reviewers, area chairs and senior area chairs (if your venue has them)
Program chairs and paper authors only: papers are private and released only to program chairs and paper authors
Everyone (submissions are public): papers are released to the public
You can use this field to change the submission readers as many times as needed.
If you would like to restrict when authors can withdraw their submissions, you can do so from your venue request form. Go to your venue request form, click the "Revision" button, and enter a date and time in GMT for the field "Withdraw Submission Expiration".
Select "Yes, our venue has Ethics Chairs and Reviewers" from the venue request form. This can be done when submitting a new support request form, or after your venue has been deployed by using the "Revision" button. This will generate an "Ethics Review Stage" button from the venue request form as well as other necessary customizations.
Recruit Ethics Chairs and Ethics Reviewers using the "Recruitment" button from the Venue Request form.
Once you have identified which papers need ethics review, create a comma-separated list of their paper numbers.
From the venue request form, run the "Ethics Review Stage". This will allow you to customize the Ethics Review form and pass in your list of papers that require review.
Now Ethics Chairs will be able to assign Ethics Reviewers to the flagged papers from their Ethics Chairs console. Ethics Reviewers will see the option to post Ethics Reviews to their assigned papers.
You can enable comments using the Comment Stage.
The review, meta review, and decision forms should be modified only through the venue request form. Do not edit the review, meta review, or decision invitations directly from the invitation editor, or your changes will be overwritten.
Example: In order to edit the review form, run the Review Stage from the venue request form. Any additional fields can be added in valid JSON to the ‘Additional Review Form Options’ field, and any fields can be removed using the ‘Remove Review Form Options’ field. A similar process can be followed for meta-reviews and decisions using the Meta Review and Decision stages.
You can hide certain fields of the submission form from Reviewers using the Post Submission Stage. From the venue request form, click 'Post Submission Stage'. In the 'Hide Fields' section, enter a comma-separated list of fields that you want hidden from Reviewers. Author identities are hidden by default. Double blind venues will need to wait to do this until their submission deadline has passed; single blind venues can do this at any time.
You can use the Invitation editor to preview the changes made to your forms.
View your invitation in the invitation editor. There are two ways to open the invitation editor:
Go to your venue homepage, click 'Edit Group', and click on your invitation of choice under 'Related Invitations'.
Go to https://openreview.net/invitation/edit?id=Your/Venue/Id/-/Invitation
Edit the Reply Parameters field. Enter and remove fields in valid JSON as you choose. Do not hit 'Save Invitation'.
View your changes in the 'Preview' tab.
Once you are happy with your JSON, use the venue request form to make your desired changes to the form. You can copy the JSON of new fields into the 'Additional ___ Options' field, and remove fields using the 'Remove ___ Options' field.
OpenReview does not provide a dedicated rebuttal stage in which authors can respond exactly once to their reviews. Venues typically mimic rebuttal functionality using the comment stage, which allows authors to address the reviews posted to their submissions but does not limit them to a single response.
Release reviews to authors, if they are not released already.
Run the 'Comment Stage' with your desired participants, start, and end dates.
When you are ready to release the reviews, run the Review Stage from the venue request form and update the visibility settings to determine who should be able to see them. This will change the readers of all existing reviews in bulk.
Please note that if you want to release the reviews to the public, you will first need to make all submissions public; selecting public readership for reviews will not work while the submissions themselves are still private. If your decision stage has passed, you can use the 'Post Decision Stage' to release submissions. If you need to make submissions public but have not yet posted the decisions, contact OpenReview support at info@openreview.net for assistance.
Once decisions have been posted, you will see a ‘Post Decision Stage’ button on the request form for your venue. Click on this button to choose who should have access to submissions.
Withdrawn submissions can be restored by deleting the withdrawal note. On the forum of the withdrawn submission there will be a note recording the withdrawal.
Using the trashcan button to delete this note will restore the submission.
From the venue request form, click the ‘Submission Revision Stage’ button to set up camera-ready revisions for papers. To view all camera-ready versions submitted to your venue, refer here.
Once decisions have been posted, you will see a ‘Post Decision Stage’ button on the venue request form. Click on this button to choose between revealing identities of authors of all papers or only accepted papers to the public.
If you chose to manually assign reviewers from the PC console, that means that assignments are based solely on group membership. For example, anyone in the Paper1/Reviewers group will have reviewer permissions for Paper 1. If you removed them from that group, they would be effectively unassigned from Paper 1 and would lose access to those permissions. Note that the following steps should only be used for venues that used manual assignment through the PC console. Do not do this if you used the edge browser.
Locate the reviewer group for a particular paper. You can build the url to this group like so: https://openreview.net/group/info?id= + your venue ID + /Paper + the paper's number + /Reviewers
For example, if you wanted to assign a reviewer to Fake Conference paper 1, you could go to https://openreview.net/group/info?id=Fake.cc/2022/Conference/Paper1/Reviewers.
Click "Edit group".
Add a profile ID or email address to the group to assign a new reviewer. Or, remove a group member to un-assign them.
Venues that selected a value for 'Paper Matching' in the venue request form will have the option to use automatic assignment by doing the following:
Make sure your submission deadline has passed. Unless your venue is single blind and public, assignments cannot be made until after the submission deadline.
Run the Post Submission stage to reveal submissions to reviewers.
Run the Review Stage.
Before calculating affinity scores and conflicts, you should make sure that your submission deadline has passed and that you have run either the ‘Post Submission Stage’ or the ‘Review Stage’.
You can calculate affinity scores and conflicts for your venue using OpenReview's 'Paper Matching Setup' feature. Paper Matching Setup is enabled for any venue that selected an option for the 'Paper Matching' question on the venue request form. This feature allows Program Chairs to compute or upload affinity scores and/or compute conflicts.
You can find the 'Paper Matching Setup' button on your venue request form next to 'Remind Recruitment'.
Clicking it will bring up the paper matching setup form. The 'Matching Group' is a dropdown menu of the groups you can use in the matcher (Reviewers, Area Chairs, Senior Area Chairs), depending on which roles you selected for your venue. You can select whether you would like affinity scores and/or conflicts computed. Alternatively, you can compute and upload your own affinity scores using the OpenReview expertise API: https://github.com/openreview/openreview-expertise
Conflict detection uses information from each user's Profile, as well as from their coauthors on publicly visible publications in OpenReview. Therefore, the more complete and accurate the information in the Profile is, the better the conflict detection.
The sections of the Profile used for conflict detection are the Emails section, the Education & Career History section, and the Advisors, Relations & Conflicts section.
Another parameter that can be controlled is the number of years to consider when looking for conflicts. For example, suppose two users both worked at Company C at some point: User A worked there ten years ago, while User B just started working there. If the number of years is set to 5, no conflict will be detected between User A and User B, because only the history, relations, and publications from the past 5 years are taken into consideration. By default, all relations, history, and publications are considered for conflict detection.
Since a lot of users use email services such as gmail.com, a list of common domains is used to filter them out before conflicts are computed.
There are two policies for computing conflicts: default and neurips.
The default policy:
Uses the domains and computes subdomains from the Education & Career History section.
Uses the domains and computes subdomains from the emails listed in the Advisors, Relations & Conflicts section.
Uses the domains and computes subdomains from the emails listed in the Emails section.
Uses the ids of the publications in OpenReview that the user authored.
Note that emails in the user's Profile do not have a range of dates for when they were valid; the neurips policy addresses this issue.
The neurips policy:
Uses the domains and computes subdomains from the Education & Career History section.
Uses the domains and computes subdomains from the emails listed in the Advisors, Relations & Conflicts section.
Uses the domains and computes subdomains from the emails listed in the Emails section, if and only if no domains were available in the Education & Career History and Advisors, Relations & Conflicts sections.
Uses the ids of the publications in OpenReview that the user authored.
Once all the information is extracted from the users' Profiles, the following rules apply to find a conflict between User A and User B:
If any of the domains/subdomains from the Education & Career History section of user A matches at least one of the domains/subdomains of the same section of User B, then a conflict is detected.
If any of the domains/subdomains from the Advisors, Relations & Conflicts section or the Emails section of user A matches at least one of the domains/subdomains of the same sections of User B, then a conflict is detected.
If any publication of User A is the same as a publication of User B (in other words, if User A and User B are coauthors), then a conflict is detected.
Running the paper matching setup should output a comment on your venue request page. If there were members missing profiles or publications, the message will identify them and say 'Affinity scores and/or conflicts could not be computed for these users. Please ask these users to sign up in OpenReview and upload their papers. Alternatively, you can remove these users from the Reviewers group.' This message does not mean that the process failed, but that those members were excluded from the calculations. You have two options:
Remove reviewers without profiles from the reviewers group.
Remind the reviewers that they need OpenReview profiles and wait for them to create them. You can run the Paper Matching Setup as many times as you want or until all users have completed profiles.
Note that when a reviewer creates a profile, their email address will not be automatically updated to their profile ID in the reviewers' group. The matcher will still detect email addresses as users without profiles, so any email addresses will either need to be removed or replaced with tilde IDs. This can be done automatically by re-running Paper Matching Setup.
You can confirm that the affinity scores were computed by checking if an invitation for the scores was created: https://api.openreview.net/edges?invitation=your/venue/id/role/-/Affinity_Score. Next, you should be able to run a paper matching from the ‘Paper Assignments’ link in your PC console.
In order to automatically assign Reviewers and Area Chairs, you must:
Enable the 'Review' or 'Post Submission' stage from your venue request form. This can only be done AFTER the submission deadline has passed.
The Review Stage sets the readership of reviews.
The Post Submission stage sets readership of submissions.
Use the 'Paper Matching Setup' button on your venue request form to calculate affinity scores and conflicts.
After you complete these steps, a link for 'Paper Assignments' should appear on your Program Chair console.
Clicking on one of the assignment links will bring you to the assignment page, where you can create a new matching configuration. If members of your reviewer or area chairs group have profiles without publications, you will need to select ‘Yes’ for ‘Allow Zero Score Assignments’ in order to obtain a solution. Please note that all members of a group must have OpenReview profiles in order for the automatic assignment algorithm to run. Any members without profiles must be removed from the group before this step.
You can learn more about our automatic paper matching algorithm from its github repo: https://github.com/openreview/openreview-matcher. To create a new matching, click the 'New Assignment Configuration' button. This will pull up a form with some default values pertaining to your matching settings.
After filling out the matching configuration form and hitting submit, a new configuration entry will appear on the assignment page.
You can view, edit or copy the values you filled out in the matching form. When you are happy with your configuration, you should hit 'Run Matcher' and wait until its status is 'Complete'. This generates proposed assignments, with options to browse assignments, view statistics or deploy matching. If you click ‘Browse Assignments’ you will be brought to the edge browser, where you can browse, edit, and create proposed assignments.
If you get "No Solution" after running the matcher, you can view the configuration to see the entire error message. If the message is something like the following:
Error Message: Total demand (150) is out of range when min review supply is (34) and max review supply is (100)
that means that your constraints require more reviewers or area chairs than you currently have. The total demand is equal to (number of submissions * user demand) + (number of submissions * alternates). The max review supply is the number of available reviewers * max papers, and the min review supply is the number of available reviewers * min papers. Your total demand must fall within this range in order to obtain a solution.
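For instance (an illustrative calculation, not the venue behind the error message above): 50 submissions requiring 3 reviews each with no alternates give a total demand of 50 * 3 = 150. With 50 reviewers, min papers of 1, and max papers of 2, the supply range is 50 to 100; since 150 > 100, no solution exists until max papers is raised to at least 3 or more reviewers are recruited.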
Note that completion of this step does not make assignments, it only creates a proposed assignment configuration. Those assignments will need to be deployed before Reviewers or Area Chairs will see them.
The edge browser is a tool for visualizing edges, or assignments, created by OpenReview’s automatic paper matching algorithm. You can use it to browse, sort, search, and create proposed assignments between reviewers and papers before deploying them.
When you first open the edge browser, all papers will appear in a column on the left. You can click on a certain paper to see a second column of reviewers pop up to the right. Similarly, if you click on a reviewer, all of their assigned papers will pop up in another column to the right, and so on.
The color of each item represents the relationship between that item and the one selected at left:
Light green means that the item is assigned to the item selected at left.
Light red means that the item has conflict with the item selected at left.
Light orange means that the item both has conflict and is assigned to the item selected at left.
White means that the item is not assigned to and has no conflict with the item selected at left.
Each item will display various edges calculated by the matcher and used to make assignments, such as the Bid, Affinity, Aggregate scores, and Conflicts. The trashcan button can be used to remove an edge. You can create new assignments using the ‘Assign’ button.
'Assignments' tells you how many papers are assigned to a given reviewer. You may also see 'Custom Max Papers' here if certain reviewers requested a specific max number of papers. You can filter out reviewers who have met their quota with the checkbox 'Only show reviewers with fewer than max assigned papers.' Once a reviewer has hit their quota, the 'Assign' button will be disabled and you will only be able to assign them additional papers using the 'Invite Assignments' button after deployment.
You can search for specific papers by paper title or number at the top of the first column. At the top of the subsequent columns you can also search for specific reviewers by profileID, name, or email. You can sort subsequent columns on the right by whatever edges are displayed, such as Assignment, Aggregate Score, Bid, Affinity Score, and/or Conflict, using the 'Order By' dropdown.
You can copy, edit, and create matching configurations as many times as you want until deployment. You can also use the ‘View Statistics’ button on the assignment page to view a breakdown of paper assignments. When you are happy with your assignments, you can deploy them.
From the assignment page, click ‘Deploy assignments’ next to the matching configuration of your choice. Note that this will not notify group members of their assignments. You can contact different roles either through their group consoles or through the python client.
There are two ways to assign members after assignments have been deployed:
The option 'Assign' directly assigns the reviewer to the paper and sends an email notification.
The option 'Invite Assignment' sends an invitation email to the reviewer with an accept/decline link; the reviewer can then respond to the invitation. If you would like to use this option, you will need to contact info@openreview.net to enable its functionality.
Any changes made after deployment are immediately visible to the assigned Reviewers or Area Chairs, and it is not necessary to deploy again.
If you want to invite a reviewer from outside the reviewer pool, you can do so by searching for their email address or profileID in the search bar of the second column and clicking 'Invite Assignment'. If they are in conflict with that paper, a banner will alert you with an error. Otherwise, they will receive an email notifying them of their invitation with the option to accept or reject the assignment. Their status will change according to their response to your invitation ('Declined', 'Pending Sign Up', 'Accepted', or 'Conflict Detected').
If you chose to use affinity scores, conflicts, bids, or reviewer recommendation scores, you will not be able to use the following options; instead, follow the guide on how to do automatic assignments.
If you did not specify you wanted to use the OpenReview matcher to assign reviewers to papers, you will be able to manually assign them using the PC console.
Make sure your submission deadline has passed. Unless your venue is single blind and public, assignments cannot be made until after the submission deadline.
Run the review stage by clicking on the Review Stage button on the request form for your venue.
Under the 'Paper Status' tab in the PC console, click on 'Show Reviewers' next to the paper you want to assign reviewers to.
To assign reviewers from the reviewer pool, you can choose a reviewer from the dropdown. Here, you can also search reviewers in the reviewer pool by name or email. After finding the reviewer you want to assign, click on the 'Assign' button.
To assign reviewers from outside the reviewer pool, you should type the reviewer's email or OpenReview profile ID (e.g., ~Alan_Turing1) in the text box and then click on the 'Assign' button. A reviewer does not need to have an OpenReview profile in order to be assigned to a paper.
Note that assigning a reviewer to a paper through the PC console automatically adds that reviewer to the reviewers pool and sends them an email notifying them about their new assignment.
Area Chairs: Unfortunately, assigning ACs is not available through the PC console, but manual AC assignments can be made through the Python library, as sketched below. (You can check out the docs for our Python API here.)
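The snippet below is a sketch of this approach, not the original example: it assumes the openreview-py helper openreview.helpers.get_conference and the Conference.set_assignment method behave as in recent versions of the library, so verify against the current openreview-py docs. The credentials, request form id, paper number, and user id are placeholders.

```python
import openreview

# Initialize the client with your own OpenReview credentials
client = openreview.Client(
    baseurl='https://api.openreview.net',
    username='your_email@example.com',
    password='your_password'
)

# Reconstruct the conference object from your venue request forum id
conference = openreview.helpers.get_conference(client, 'request_form_id')

# Assign the user as area chair of the paper with the given number
conference.set_assignment('~Alan_Turing1', 1, is_area_chair=True)
```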
You will need to use your own OpenReview credentials to initialize the Client object.
request_form_id (string) refers to the forum id of the venue request for your venue, (e.g., https://openreview.net/forum?id=r1lf10zpw4)
paper_number (int) is the number of the paper you want to assign an area chair to (you can find this in the 'Paper Status' tab of the PC console)
user_id (string) is the email address or OpenReview profile ID (e.g., ~Alan_Turing1) of the user you want to assign
Note that assigning an area chair using python does not send an email to that user. For information on how to contact Area Chairs through the UI, click here. For information about how to contact Area Chairs using python, click here.
The edge browser is a tool for visualizing edges, or matches, created by OpenReview's automatic paper matching algorithm. You can use it to browse, sort, search, and create new assignments between reviewers and papers until you are happy with the assignments generated.
Navigating the Edge Browser
If the option to modify assignments is available for Area Chairs in your venue, you should find a link to do so in your Area Chair console that will bring you directly to the edge browser.
All of your assigned papers will appear in a single column on the left. Clicking on a paper in the list will pull up a second column containing all reviewers, colored by their relationship to the selected paper:
Light green means that the reviewer is assigned to the paper selected at left.
White means that the reviewer is not assigned to and has no conflict with the paper selected at left.
You can search for specific papers by paper title or number at the top of the first column. At the top of the subsequent column you can also search for specific reviewers by their name, email or profile id. You can sort the column on the right by whatever edge types are shown, such as Assignment, Aggregate Score, Bid, or Affinity Score, using the 'Order By' dropdown. The second column will show the total number of assignments for the selected reviewer.
Creating and Removing Assignments Using the Edge Browser
You can delete an assignment using the trash can button on a certain Reviewer.
There are two ways to create assignments:
Using the 'Assign' button. This assigns a reviewer to a paper and notifies them by email. The assignment becomes automatically available in the Reviewer and AC consoles.
Using the 'Invite Assignment' button. This is only available if it has been enabled by the Program Chairs. This sends an invitation to the reviewer with an accept/decline link. The reviewer can then respond to the invitation. If you want to invite a reviewer from outside the reviewer pool, including another Area Chair, you can do so by searching for their email address or profileID in the search bar of the second column and clicking 'Invite Assignment'. If they are in conflict with that paper, a banner will alert you with an error. Otherwise, they will receive an email notifying them of their invitation with the option to accept or reject the assignment. Their status will change according to their response to your invitation ('Declined', 'Pending Sign Up', 'Accepted', or 'Conflict Detected').
Some reviewers have a custom reduced paper load which appears in the edge browser as 'Custom Max Papers'. You cannot directly assign a reviewer to more papers than their custom max papers, but you can 'Invite' reviewers if that option is enabled for you. You can also filter out reviewers who have met their quota with the checkbox 'Only show reviewers with fewer than max assigned papers.'
Program Chairs can message any venue participants through the group consoles. Clicking any of the links under 'Venue roles' on your PC console will bring you to a console for that group. If you click 'Edit group', you will see the option to email those group members. You can customize the emails using the backend tags. Note that you will not be able to use the author group to message authors until after the submission deadline.
You can use the Paper Status, Reviewer Status, and Area Chair Status tabs of the PC console to message selected reviewers or area chairs. You will also have the option to message only those with incomplete reviews/metareviews, only those with completed reviews/metareviews, or only those assigned to particular papers. You can customize the emails using either the frontend or backend tags.
You can send messages through OpenReview using the python client post_message function. You will first need to install and set up the python client. Your recipients could be a list of OpenReview profile IDs, a list of email addresses, or an OpenReview group ID. The important thing is that whichever ID you use for the recipient, it has to be a member of a group you are a writer of. This property is called the parentGroup; it is how OpenReview gives organizers permission to send emails. If you do not know what the corresponding group for your recipient is, you can find it with the following query:
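The original query is not reproduced here. One plausible form, assuming the groups endpoint accepts a member filter and a details parameter (an assumption worth verifying against the current API docs), is:

```
GET https://api.openreview.net/groups?member=your_recipient_id&details=writable
```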
This will return a JSON with the properties id and details.writable. If a group has details.writable=true, that group id can be used as your parentGroup.
To send a message to all of your venue's authors, for example:
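A minimal sketch, assuming your venue id is Your/Venue/Id and the authors group follows the standard naming convention:

```python
import openreview

client = openreview.Client(
    baseurl='https://api.openreview.net',
    username='your_email@example.com',
    password='your_password'
)

# {{fullname}} is a backend template tag replaced per recipient
client.post_message(
    subject='An update from Your Venue',
    recipients=['Your/Venue/Id/Authors'],
    message='Dear {{fullname}},\n\nThank you for your submission.'
)
```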
If you wanted to include a link to each author's paper in the email, you could instead iterate through each submission and send an email with the papers' authorids fields as recipients. If your venue is single blind, replace /-/Blind_Submission with /-/Submission:
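A sketch under the same assumptions; in a double-blind venue, the blind note's authorids field typically resolves to the per-paper authors group:

```python
blind_submissions = client.get_all_notes(invitation='Your/Venue/Id/-/Blind_Submission')

for submission in blind_submissions:
    client.post_message(
        subject='A message about your paper',
        recipients=submission.content['authorids'],
        message='Your paper: https://openreview.net/forum?id={}'.format(submission.forum)
    )
```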
If your venue is set up to use API 1 and the submission deadline has not passed, then you can send a message to submission authors by doing the following:
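A sketch under the same assumptions; before the deadline the per-paper author groups do not exist yet, so the authorids listed on each submission are used as recipients:

```python
submissions = client.get_all_notes(invitation='Your/Venue/Id/-/Submission')

for submission in submissions:
    client.post_message(
        subject='A message about your submission',
        recipients=submission.content['authorids'],
        message='Thank you for submitting to our venue.'
    )
```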
If you don't know which API version you're using, you can check by viewing your venue request form and looking for the field "API Version". If there is no such field on your venue request form, you are using API 1 by default.
Emails can be sent to users of your venue programmatically using the API or from certain pages like the Program Chair console. They can be personalized to include the recipient's name or other information using email template tags.
There are two types of email template tags: Tags that are handled on the backend by the OpenReview API, and tags that are replaced on the frontend (in the browser).
Backend tags can be used anywhere, including when sending messages directly using the API or via the openreview-py library. Frontend tags can only be used on specific pages, such as the Area Chair console and the Program Chair console. A list of all available tags is below:
Backend tags: {{fullname}}, {{firstname}}
Frontend tags: [[SUBMIT_REVIEW_LINK]]
If you want to include further customizations, such as links to papers or reviews, you can send the messages programmatically with the python client instead.
The Author group is not populated until after the submission deadline, so until then, there is not a way to message authors through the User Interface. You can send messages to authors through OpenReview using the python client.
In API 2, the Author groups will be populated before the deadline.
Users can keep multiple names in their profiles and select one as preferred, which will be used for author submission and identity display. Names can be replaced by new names in the profile and in some submissions as long as the organizers of the venue allow it.
Go to your profile page at https://openreview.net/profile
Click 'Edit Profile'.
Locate the 'Names' section and click the blue plus sign underneath your name.
Enter your name and then click ‘Save Profile Changes’ at the bottom of the page.
If you would like to make that name preferred, you can do so by clicking ‘Edit profile’ once more and selecting ‘Make Preferred’ next to your desired name. The option to make a name preferred will not appear until you have saved the new name to your profile.
Submissions can be updated with the user's preferred name if the organizers of the venue that the user submitted to allow it.
Go to your profile page at https://openreview.net/profile
Click 'Edit Profile'.
Locate the 'Names' section and make sure you have a name marked as preferred that you want to keep.
Click on the button 'Request Deletion' next to the name you want to remove.
In the new window, explain why you want that name to be removed and click 'Submit'
The name removal process will take some time, as each request must be reviewed and accepted. You will receive email updates regarding the status of the name removal. Once the process is complete, the name will be removed and any publications with the previous name will be updated with the one you marked as preferred in your profile.
OpenReview doesn't take any action on PDF files, so changing the name inside the PDF itself must also be authorized by the organizers of the venue.
After the decision stage closes, you will see the Post Decision Stage option on your venue request form. You will be able to use this stage to send bulk decision notifications to authors.
Select "Yes, send an email notification to the authors" for the "Send Decision Notifications" field.
Customize the Email Content Fields. There should be one per decision type for your venue. Any fields in curly braces will be populated with the information for each paper. You can customize anything in the message so long as you do not remove the curly braces.
Click submit and notification emails will be sent to all authors. Note that if you run the Post Decision Stage multiple times, it will send the decision notifications each time that "Yes, send an email notification to the authors" is selected. If you need to make changes to the Post Decision Stage without sending out emails each time, make sure to select "No, I will send the emails to the authors".
Go to https://openreview.net/messages. You should see any messages sent from your venue.
Filter messages by Parent Group to narrow your search:
If you sent a message through a group console, enter that group ID.
If you sent a message through the python client and specified the parent group, search by that parent group.
If you sent a message through the PC console, there will not be a parent group.
You can also filter by recipient, email status, and subject.
Under the ‘Overview’ tab of the PC console for your venue, you will find a ‘Venue Roles’ section. Click on the ‘Accepted’ link next to ‘Authors’ to be taken to the Accepted Authors group. On this page, click 'Edit group'. You will then have the option to email members of the group.
Since the Submissions do not contain the decisions, we first need to retrieve all the Decision notes, filter the accepted ones, and use their forum IDs to locate the corresponding Submissions. We break down these steps below.
Retrieve Submissions and Decisions:
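A sketch using the python client, assuming a single-blind venue whose submission invitation is Your/Venue/Id/-/Submission and whose decision invitations follow the per-paper Paper<number>/-/Decision pattern:

```python
import openreview

# Use your own OpenReview credentials
client = openreview.Client(
    baseurl='https://api.openreview.net',
    username='your_email@example.com',
    password='your_password'
)

submissions = client.get_all_notes(invitation='Your/Venue/Id/-/Submission')
decisions = client.get_all_notes(invitation='Your/Venue/Id/Paper.*/-/Decision')
```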
It is convenient to place all the submissions in a dictionary with their id as the key so that we can retrieve an accepted submission using its id.
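A one-line sketch of that dictionary:

```python
# Index submissions by note id so each decision's forum id can be looked up directly
submissions_by_id = {submission.id: submission for submission in submissions}
```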
We then filter the Decision notes that were accepted and use their forum ID to get the corresponding Submission:
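A sketch, assuming the decision form stores its value in a 'decision' field containing the word 'Accept' for accepted papers:

```python
accepted_submissions = [
    submissions_by_id[decision.forum]
    for decision in decisions
    if 'Accept' in decision.content.get('decision', '')
]
```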
You can then message the author ids of each accepted submission.
This is very similar to the previous example. The only difference is that we need to get the blind notes with the added details parameter to get the Submission.
Retrieve Submissions and Decisions:
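The same retrieval, now fetching blind notes with details='original' so that the de-anonymized submission (including authorids) is available; invitation names are placeholders as above:

```python
blind_submissions = client.get_all_notes(
    invitation='Your/Venue/Id/-/Blind_Submission',
    details='original'
)
submissions_by_id = {note.id: note for note in blind_submissions}
decisions = client.get_all_notes(invitation='Your/Venue/Id/Paper.*/-/Decision')
```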
We then filter the Decision notes that were accepted and use their forum ID to get the corresponding Submission:
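The filtering step is unchanged; the authorids are then read from the original note under details (a sketch under the same assumptions):

```python
accepted_submissions = [
    submissions_by_id[decision.forum]
    for decision in decisions
    if 'Accept' in decision.content.get('decision', '')
]

for note in accepted_submissions:
    authorids = note.details['original']['content']['authorids']
    # authorids can now be used as recipients in client.post_message(...)
```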
You can then message the author ids of each accepted submission.