The sbv IMPROVER project created strong methodological foundations and demonstrated that crowdsourcing is a viable strategy for verifying scientific methods and concepts in an industrial context. The first phase of the project, to develop and prove the methodology, was carried out as a collaboration between Philip Morris International and IBM Research. The development phase of sbv IMPROVER came to a successful conclusion on 31 March 2014, and as of 1 April 2014 this website reflects the end of that phase.
sbv IMPROVER at a Glance
sbv IMPROVER stands for Systems Biology Verification combined with Industrial Methodology for Process Verification in Research. This approach aims to provide a measure of quality control of industrial research and development by verifying the methods used. The sbv IMPROVER approach was initially developed by Philip Morris International (PMI) and IBM Research in 2011-2013 and is now a collaborative effort led and funded by PMI Research and Development. For more information please see Nature Biotechnology (2011) or Bioinformatics (2012).
It differs from other scientific crowdsourcing approaches in that it focuses on the verification of processes in an industrial context, rather than on basic scientific questions alone. The sbv IMPROVER approach allows an organization to benchmark its methods and industrial processes.
Today, the scope of sbv IMPROVER is the verification of methods and concepts in systems biology research. However, it could be extended to the verification of research processes in other industries, such as pharmaceuticals, biotechnology, nutrition and environmental safety.
A complex research program is typically built from research projects (“building blocks”) that synergistically support each other towards a final goal. A building block is a standalone research process within a complex workflow: it has a defined input that results in a defined output.
Self-Assessment of Computational Methods Often Leads to Bias
Common examples of this bias are:
- selective reporting of performance
- choosing only the single most favorable metric
- parameter tinkering
Proposed solution for an unbiased assessment:
- conduct scientific challenges
- make predictions on unseen data
- apply crowdsourcing
The following diagram shows the fraction of papers in which the authors rank their method using self-assessment or independent assessment. In all 57 papers reviewed, self-assessed methods ranked top by at least one metric.
Wisdom of Crowds Applied to Solve Challenges
A challenge is a scientific problem presented to the community. Some of its basic elements are:
- a “Gold Standard”, i.e. a known solution to the challenge; each prediction is compared to the Gold Standard
- guidelines on the metrics used to evaluate predictions, published before any predictions are received
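As a minimal sketch of how a pre-declared metric works, the evaluation step can be written as a simple scoring function. The data and names below are hypothetical; real challenges use domain-specific metrics, and here the metric is plain accuracy over binary labels.

```python
def accuracy(predictions, gold_standard):
    """Fraction of predicted labels that match the Gold Standard."""
    assert len(predictions) == len(gold_standard)
    matches = sum(p == g for p, g in zip(predictions, gold_standard))
    return matches / len(gold_standard)

# Hypothetical example: the solution is withheld from participants,
# and each submitted prediction is scored against it.
gold_standard = [1, 0, 1, 1, 0]   # the withheld solution
submission    = [1, 0, 0, 1, 0]   # one participant's prediction

print(accuracy(submission, gold_standard))  # → 0.8
```

Because the metric is fixed and announced before submissions arrive, participants cannot select, after the fact, whichever metric flatters their method.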
Crowdsourcing is characterized by:
- contributions from many participants
- participants produce independent methods and submit different solutions that tackle different aspects of a complex problem
- the combination of solutions often outperforms the best performing submissions
- this phenomenon is often referred to as the “Wisdom of Crowds”
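The aggregation step behind the “Wisdom of Crowds” can be illustrated with a small sketch. The participant values below are synthetic and chosen for illustration; real challenges use richer aggregation schemes, and averaging does not always win, though it often does when errors are independent.

```python
def mean_absolute_error(pred, gold):
    """Average absolute deviation of a prediction from the Gold Standard."""
    return sum(abs(p - g) for p, g in zip(pred, gold)) / len(gold)

gold = [1.0, 2.0, 3.0, 4.0]       # hypothetical Gold Standard values

# Three independent submissions, each wrong in a different way.
submissions = [
    [1.4, 1.8, 3.5, 3.6],
    [0.6, 2.4, 2.6, 4.5],
    [1.2, 1.5, 3.2, 4.4],
]

# Aggregate by averaging the submissions position-wise.
aggregate = [sum(col) / len(col) for col in zip(*submissions)]

individual_errors = [mean_absolute_error(s, gold) for s in submissions]
aggregate_error = mean_absolute_error(aggregate, gold)

# With these numbers, the averaged prediction has a lower error than
# any single submission, because the independent errors partly cancel.
```

The cancellation of independent errors is exactly why the combination of solutions can outperform the best single submission.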
The approach has the following advantages:
- it nucleates a community around a given scientific problem
- it allows easy comparison of the performance of different methods on the same data set
- it establishes the state of the art in a field and identifies complementary methods to solve a problem