The advent of genome sequencing ushered in an exciting era of genetics through the '80s and '90s. In 2003, the NIH officially declared the Human Genome Project complete, which produced a torrent of activity. However, one of the major issues researchers faced was what to do with all the data. At the time, we lacked significant computing power, and we had not yet grasped that fully understanding important discoveries would require both phenotypic and genotypic data.
Fast forward to today: we have vast computing power, more sophisticated software, and better ways of managing data. However, the industry still faces a significant problem: the coordination and cost required to reach these discoveries. Researchers confront more options and vendors than ever before, with exploding costs that slow research considerably. On top of that, they must learn what to do with all of the interlacing data, sifting through a torrent of potential correlations to find the causations.