An important trait of the artificial intelligence community is its “intentional openness”. Researchers make their code publicly available on websites such as https://github.com/ for others to build on, saving them the time and effort of implementing the framework they are seeking to improve or extend. The review process for some of the most prestigious conferences in AI is handled entirely through the open review platform https://openreview.net/. There, the exchange between authors and reviewers is open to anyone curious about the research. The platform also allows non-reviewers to weigh in on a contribution’s value and provide additional insights that would have been missed in the traditional reviewing process. Admittedly, sometimes too many opinions can lead to uncomfortable conflict, as was the case with LipNet (https://openreview.net/forum?id=BkjLkSqxg).
This “intentional openness” adds transparency, accountability, and a chance at reproducibility to the peer review process and, in my opinion, leads to faster review times and better-quality manuscripts that have an immediate impact on the interested community. These papers pile up citations and extensions while still fresh off the press because the code has been available and discussions have taken place even before the paper was accepted (a double-edged sword in its own right). Oh, and did I mention that the final papers are publicly available without any paid journal subscriptions? That’s right, no paywall!
The Operations Research community, on the other hand, has been less “intentionally open”. We’ve been accustomed to the traditional route of submitting papers without code, going through a review process involving only AEs, reviewers, and authors (with reviewers unable to test reproducibility at all), and then receiving a final decision from the AE. The entire process can take several months, with the result being either a published paper or a resubmission to the closest, less prestigious journal because, at the end of the day, it only counts on our C.V. if it’s in a top journal; all of which sit behind paywalls.
That’s not to say there haven’t been attempts to be more “intentionally open”. The COIN-OR foundation (https://www.coin-or.org/), for example, has been around since the year 2000 and has made a significant push for open-source solvers. Many journals also encourage authors to make their code available on GitHub or personal webpages. Unfortunately, an open-source culture has yet to become the norm in our field. Why? For different reasons, depending on who you ask.
There has also been an increased focus on reproducibility. The journal Mathematical Programming Computation (https://link.springer.com/journal/12532), launched in 2009 by the Mathematical Optimization Society (http://www.mathopt.org/), has made reproducibility one of its main requirements for publication. More recently, our flagship journal “Operations Research” has somewhat followed suit and made code and data submission mandatory upon acceptance (not submission) (https://pubsonline.informs.org/page/opre/submission-guidelines). However, this remains the exception rather than the norm.
Finally, although most journals offer open access options (at a fee), and repositories such as Optimization Online (http://www.optimization-online.org/) and arXiv (https://arxiv.org/list/math.OC/recent) are gaining popularity, we have yet to make the open access culture pervasive in our community. The creation of the Open Journal of Mathematical Optimization (https://ojmo.centre-mersenne.org/) represents a huge step toward open access. We have yet to see how this initiative pans out, but with an editorial team composed of a who’s-who in optimization, it may be our best chance at becoming more “intentionally open”, provided that is what we want.