The Personalized/Precision Medicine Blog
The official blog of the Annual Personalized and Precision Medicine Conference provides readers with information, insight and analysis regarding the field of personalized and precision medicine, genomics, genomic interpretation and the evolution of healthcare in the post-genomic era.
Precision Medicine: The Start of a Whole New Era
LIMITS TO PROGRESS
Nearly 30 years after planning for the Human Genome Project began, 25 years after the Project launched, and 12 years after its completion was announced, genomic research is becoming impactful. Lives are being radically changed for the better thanks to more precise diagnostics and targeted treatments. Patients are starting to get the right medicine in the right dose at the beginning of treatment, not halfway through. However, although tumor profiling is regularly done to determine which available cancer treatment is most appropriate, and pharmacogenetic/pharmacogenomic testing is more accessible thanks to an increasingly crowded market, personalized medicine has yet to help the majority of patients, e.g., those suffering from common and complex diseases.
This is largely because the US government does not yet have large studies linking genomic to clinical data. Numerous other countries do: the British 100,000 Genomes Project, the Saudi Arabian Genome Mapping Project, and the Genome of the Netherlands, with similar efforts starting up in Belgium and other European countries, are cases in point. Many reasons for the US lag have been cited, including the lack of data management standards and system interoperability. Notably, the private sector is leading the way. The largest current repository is privately owned by 23andMe (800,000 samples); Craig Venter is spearheading an ambitious aging project fueled by microbes and metabolites, which will sequence a million genomes; and some pharmaceutical companies are establishing in-house sequencing projects (e.g., “The Search for Exceptional Genomes”).
Recognizing the power of personalized medicine to prevent, treat, and in some cases cure disease, President Obama announced that the US will get into the game with a $215 million precision medicine initiative to expand initial successes into a large-scale effort. While the initiative will use data from existing cohort studies, widespread participation, including the public and the full breadth of stakeholders, is vital to the second core objective: creating a research database of 1 million genomes, with related clinical and other types of information, that will generate a new taxonomy of disease based on what the National Academy of Sciences (http://www.nap.edu/catalog/13284/toward-precision-medicine-building-a-knowledge-network-for-biomedical-research) calls an information commons and knowledge network of disease and treatments.
Excitement is high, and planning began within weeks of the initiative’s announcement. The NIH hosted a two-day workshop (http://www.nih.gov/precisionmedicine/workshop.htm) where experts representing a broad range of disciplines shared white papers, presentations, and thoughts about opportunities, challenges, and strategies for successful implementation; 2,500 viewers engaged through WebEx. Regulatory officials recently announced plans to use the expected $10 million to build full or hybrid cloud storage, open-source data-sharing platforms, and Google-like search tools.
It is easy to get excited about this new initiative, particularly because it promises to propel precision medicine efficiently and effectively. But combining and mining data from disparate third-party sources will be no small feat, particularly considering the slow adoption of electronic medical records and existing interoperability problems in linking databases. Cautious optimism may be prudent, given that existing smaller-scale sequencing projects are moving at a snail’s pace or stalling out. The Million Veteran Program (http://www.whitehouse.gov/sites/default/files/microsites/ostp/kupersmith.pdf), launched in 2011, intends to combine individual genetic information with health data to determine associations between genetics and health status, in order to better screen, diagnose, and predict the course of disease and to develop targeted treatments; it awarded the genomic analysis contract only recently (less than 6 months ago). As of this time last year, i.e., three years into the project, only 200,000 veterans had enrolled. To be fair, though, a recent report indicates that the Program already has 343,000 samples and has partially analyzed 200,000 participants (http://www.technologyreview.com/news/534591/us-to-develop-dna-study-of-one-million-people/). Progress has been, in other words, slow. If you are unaware of the project’s status, that could be because the government has said little. Another relatively large-scale project, the National Children’s Study, which was designed to collect 100,000 genomes at birth and was slated to start next year, was shut down at the end of last year based on a report that found its design was not feasible. Congressional battles over funding did not help matters. These events will hopefully inform planning efforts so that the current initiative does not succumb to the same fate.
Pundits have expressed concern about the government’s ability to manage such a large dataset, given the technological failures of HealthCare.gov. The insurance exchange site crashed and shut down for 5 hours upon launch, a technological glitch related to income verification prevented an unknown number of members of the public from submitting applications, and a hacker claimed to have obtained 70,000 records containing personally identifying information.
A separate concern is whether the initiative will permit broad-based data sharing. Currently, while publicly funded research is deposited into publicly accessible databases, as a practical matter only research institutions with the resources to file laborious applications can hope to gain access. Small labs and institutions with far lower operating budgets cannot devote scarce resources to securing access to data, and thus their research is severely limited. Without data access, they simply cannot pursue certain research, and talented researchers are beginning to redirect their careers to plant and animal genomes, which are more readily accessible (http://www.sciencedirect.com/science/journal/22120661/3/4).
Public engagement is clearly vital, and media coverage may be greasing the skids for success. After all, when the President of the United States announces a biomedical initiative in his (or perhaps some day her) State of the Union address, many take notice (http://www.whitehouse.gov/the-press-office/2015/01/30/fact-sheet-president-obama-s-precision-medicine-initiative). When USA Today reports on the details of the initiative, it’s clear that precision medicine is trying to make its way into every home in America (http://www.usatoday.com/story/news/nation/2015/01/30/obama-precision-medicine-initiative-white-house/22547019/).
If the precision medicine initiative can overcome both of these impediments, it will have achieved a great deal. Good planning is a start, and timely execution is important. If the initiative succeeds in being transparent, engaging the public, and enabling broad-based data sharing, then a new paradigm of open and collaborative research will have been established. Such a paradigm would propel precision medicine significantly forward and ideally support precision medicine efforts underway around the globe.
The opportunity for thoughtful public input is great, given the plethora of upcoming precision medicine conferences. Hopefully, the NIH and the FDA will invite public comment, because the discussions at those conferences are likely to generate valuable insights.