Genotype and transcriptional data for the cell lines were available as part of the 1000 Genomes Project (www.1000genomes.org), and structural attributes for the compounds were also provided. From such comparisons, lessons can be derived about which types of methods are more suitable, which features are good predictors regardless of the method, and so on. Importantly, the challenge results remain a resource for the community, representing a snapshot of the state of the art and an aid to further method development and benchmarking. In the context of computational biology, there have been several such initiatives, including CASP, CAFA, CAPRI, FlowCAP 1, CAGI, and the Dialogue for Reverse Engineering Assessment and Methods (DREAM; www.dreamchallenges.org) 2. The DREAM challenges started with a focus on the field of biomolecular network inference 3–5 but now cover questions ranging from prediction of transcription factor sequence specificity 6, to toxicity of chemical compounds 7, the progression of Amyotrophic Lateral Sclerosis (ALS) patients 8, and survival of breast cancer patients 9. Since 2013, DREAM has partnered with Sage Bionetworks, and challenges are hosted on Sage's Synapse platform. Each challenge has a dedicated project space in Synapse where the description, training data set, gold standard, and scoring methodology are provided. The scored predictions are also available on a public leaderboard. A fundamental step in DREAM challenges, or any other collaborative competition, is to assess how well the different predictions fare against the gold standard. This may seem obvious at first glance; for example, for the question of predicting a set of numbers, one can compute the sum of the squared differences between predicted and observed values and identify the submission for which this sum is the smallest.
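As a minimal sketch of this naive scoring idea, the following ranks submissions by the sum of squared differences against the gold standard (the function and variable names here are illustrative, not taken from any challenge's actual code):

```python
def sum_squared_error(predicted, observed):
    """Sum of squared differences between two equal-length sequences."""
    if len(predicted) != len(observed):
        raise ValueError("predicted and observed must have the same length")
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))


def best_submission(submissions, gold_standard):
    """Return the name of the submission with the smallest squared error."""
    return min(submissions,
               key=lambda name: sum_squared_error(submissions[name], gold_standard))


# Toy example: two submissions against a three-value gold standard.
gold = [1.0, 2.0, 3.0]
subs = {"team_a": [1.1, 2.2, 2.9], "team_b": [0.0, 0.0, 0.0]}
print(best_submission(subs, gold))  # team_a
```

As the next paragraph explains, this simplest-possible scheme glosses over important details such as measurement confidence and the significance of differences between submissions.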
However, multiple aspects have to be taken into account, such as the fact that the confidence in the different measured values is often not the same, or that the differences between submissions may or may not be large enough to declare one method superior to another. Over the years, these questions have been addressed within the DREAM challenges, leading to the generation of multiple scoring methods. Scoring methods developed by challenge organizers are reported in the publications that describe the challenges, but the corresponding code is typically provided only as pseudo-code or, at best, as a script in an arbitrary language (R, Python, Perl, …) and syntax by different developers, leading to a heterogeneous set of code. In addition, templates and gold standards need to be retrieved manually. All of these factors are obstacles to maximizing the scientific value of DREAM challenges as a framework for evaluating a method's performance against those used in the challenges. Similarly, reuse of scoring code for future challenges becomes complicated, when at all possible. To facilitate the use of the challenges' resources by the scientific community, we have gathered the DREAM scoring functions within a single software package that provides a single entry point to them. We also provide a standalone executable for end-users and the ability to share and re-use existing code within a common framework, to ease the development of new scoring functions for future challenges. The package does not provide code to generate the data or to manage leaderboards (which happens within Synapse), but focuses on the scoring functions. Note that organizers interested in setting up automatic scoring and publishing of leaderboards should instead refer to the section "Create a Scoring application" of the Synapse project 2453886. Currently, the package covers about 80% of the past challenges.
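The two refinements mentioned above, weighting residuals by measurement confidence and deciding whether one submission is robustly better than another, can be illustrated with a short sketch. This is not the package's actual code; the confidence-weighted error and the bootstrap comparison are generic assumed techniques:

```python
import random


def weighted_sse(pred, obs, weights):
    """Squared error where each residual is scaled by a confidence weight."""
    return sum(w * (p - o) ** 2 for p, o, w in zip(pred, obs, weights))


def bootstrap_win_fraction(pred_a, pred_b, obs, weights, n_boot=1000, seed=0):
    """Fraction of resampled datasets on which submission A beats submission B.

    Values near 0.5 suggest the two submissions are not distinguishable.
    """
    rng = random.Random(seed)
    n, wins = len(obs), 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        a = weighted_sse([pred_a[i] for i in idx],
                         [obs[i] for i in idx], [weights[i] for i in idx])
        b = weighted_sse([pred_b[i] for i in idx],
                         [obs[i] for i in idx], [weights[i] for i in idx])
        wins += a < b
    return wins / n_boot
```

For example, a submission that matches the gold standard exactly beats a uniformly offset one in every bootstrap resample, giving a win fraction of 1.0.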
For the few challenges where integration was not possible, references to external resources are provided. Here, we first describe the framework used in the software from the point of view of both an organizer/developer and an end-user (see Figure 1). We then review the challenges and the scoring functions that are available so far.

Figure 1. Library framework. DREAM challenges are described on the DREAM website (http://dreamchallenges.org), where researchers can get an overview of past and current challenges. Each challenge has its own project page in the Synapse platform (http://synapse.org), where information about the challenge can be found. The final leaderboard, showing benchmarks achieved at the end of the challenge, is also shown in the Synapse project. The package offers a Python library that allows researchers to retrieve a template for each closed challenge and to quickly score a prediction/template against the gold standard. In a few lines of
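Although the excerpt cuts off here, the workflow it describes, one object per challenge exposing a template and a scoring method against its gold standard, can be sketched in a self-contained way. All class and method names below are hypothetical placeholders, not the library's actual API:

```python
class Challenge:
    """Base class: one subclass per (sub-)challenge, a common interface."""

    def gold_standard(self):
        raise NotImplementedError

    def template(self):
        """A valid baseline submission that users can start from."""
        raise NotImplementedError

    def score(self, prediction):
        raise NotImplementedError


class ToyRegressionChallenge(Challenge):
    """Tiny example challenge scored by mean squared error."""

    def gold_standard(self):
        return [1.0, 2.0, 3.0]

    def template(self):
        return [0.0, 0.0, 0.0]

    def score(self, prediction):
        gold = self.gold_standard()
        return sum((p - g) ** 2 for p, g in zip(prediction, gold)) / len(gold)


challenge = ToyRegressionChallenge()
print(challenge.score(challenge.template()))
```

A common base class like this lets a scoring pipeline treat every challenge uniformly: fetch a template, fill it with predictions, and call `score` without knowing challenge-specific details.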