
News and Stories

Shirley Luckhart awarded Modeling Access Grant

Title: DNA and protein modeling for Plasmodium falciparum dhfr and dhps sequences derived from an existing study in Kenya

Project Team: Shirley Luckhart, Dharmesh Patel, JT Van Leuven

Start Date: June 2019

The UI, USUHS, and USAMRU-K teams have collected human blood samples from study-enrolled Kenyan adults for an NIH-funded project focused on defining the impacts of HIV-malaria co-infection and HIV treatment.

We have completed deep sequencing, cleaning and clustering of dhfr and dhps genotypes and have identified frequencies for the dhfr and dhps SNPs known to be associated with sulfadoxine-pyrimethamine drug resistance.

However, given the AT-rich codon bias and mutation frequency of P. falciparum per asexual parasite cycle, we have noted other nucleotide substitutions in our sequences that we would like to explore with modeling.

We seek to classify these nucleotide changes as:
(1) synonymous substitutions that affect codon frequency but not the encoded amino acid,
(2) non-synonymous substitutions that are conservative, or
(3) non-synonymous substitutions that are non-conservative,
all of which could be predicted to alter protein structure or function.
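
As a rough illustration of this classification task, the sketch below bins a single codon substitution into one of the three classes. It is not the project's actual analysis pipeline: it assumes Biopython is available for translation, and the physicochemical amino-acid groups used to call a change "conservative" are one common scheme chosen here purely for illustration.

```python
# Minimal sketch of classifying a codon substitution; not the study's pipeline.
# Assumes Biopython is installed for codon translation.
from Bio.Seq import Seq

# Illustrative physicochemical groups; other schemes (e.g. BLOSUM-based) exist.
GROUPS = [
    set("AVLIMFWPG"),   # nonpolar / hydrophobic
    set("STCYNQ"),      # polar, uncharged
    set("DE"),          # acidic
    set("KRH"),         # basic
]

def classify_codon_change(ref_codon: str, alt_codon: str) -> str:
    """Classify a codon substitution as synonymous, conservative, or non-conservative."""
    ref_aa = str(Seq(ref_codon).translate())
    alt_aa = str(Seq(alt_codon).translate())
    if ref_aa == alt_aa:
        return "synonymous (codon-frequency change only)"
    same_group = any(ref_aa in g and alt_aa in g for g in GROUPS)
    return "non-synonymous, conservative" if same_group else "non-synonymous, non-conservative"

if __name__ == "__main__":
    # Hypothetical codons for illustration, not actual dhfr/dhps positions.
    for ref, alt in [("GAT", "GAC"), ("GAT", "GAA"), ("GAT", "GGT")]:
        print(ref, "->", alt, ":", classify_codon_change(ref, alt))
```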

2019 Research Report: Inspired Discoveries

“It’s time to celebrate the impact we make as Idaho’s land-grant university,” says Dr. Janet Nelson, Vice President for Research and Economic Development.

This year’s research report summarizes record-breaking expenditures and innovative contributions. It shows just a sliver of the impact made by U of I research, scholarship and creative activities.

Download and enjoy!

Which comes first, the chicken or the egg?

Experiments worldwide have been conducted to understand the relationships among various genes. However, it remains a challenge to identify genomic causes and effects directly from the data, especially within a network. It’s the classic chicken-and-egg question: Which comes first, the chicken or the egg? In other words, how do you know which genes regulate which other genes?

Correlation between the expression of two genes is symmetrical. Therefore, scientists cannot infer which of the two genes is the regulator and which is the target. Similar levels of correlation can arise from different causal mechanisms. For example, between two genes with correlated expression levels, it is plausible that one gene regulates the other gene; it is also plausible that they do not regulate each other directly, but are regulated by a common genetic variant.
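
A small simulation (not taken from the paper) makes this point concrete: a causal chain in which a variant V regulates G1, which in turn regulates G2, and a confounded pair in which V regulates both genes with no direct G1–G2 link, can produce comparable and equally symmetric correlations between G1 and G2. All effect sizes below are arbitrary illustrative choices.

```python
# Toy simulation: two different causal structures, similar G1-G2 correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

V = rng.binomial(2, 0.3, n)                 # genotype coded 0/1/2

# Scenario A: causal chain  V -> G1 -> G2
G1_a = 1.0 * V + rng.normal(size=n)
G2_a = 0.8 * G1_a + rng.normal(size=n)

# Scenario B: common cause  G1 <- V -> G2  (no direct G1-G2 regulation)
G1_b = 2.0 * V + rng.normal(size=n)
G2_b = 2.0 * V + rng.normal(size=n)

def cor(x, y):
    return np.corrcoef(x, y)[0, 1]

print("chain:      cor(G1, G2) =", round(cor(G1_a, G2_a), 2))
print("confounded: cor(G1, G2) =", round(cor(G1_b, G2_b), 2))
# Both scenarios yield a sizeable, symmetric G1-G2 correlation, so correlation
# by itself cannot say which gene (if either) is the regulator.
```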

Audrey Fu, Assistant Professor in the Department of Statistical Science, and Postdoctoral Researcher Md. Bahadur Badsha recently published a paper introducing a novel machine learning algorithm. “Our new method, namely the MRPC algorithm, can tease apart which correlation may suggest causality and which correlation is just indirect association through many other genes,” said Fu.

Figure 2. The MRPC algorithm. The MRPC algorithm consists of two steps. In Step I, it starts with a fully connected graph shown in (1), and learns a graph skeleton shown in (2), whose edges are present in the final graph but are undirected. In Step II, it orients the edges in the skeleton in the following order: edges involving at least one genetic variant (3), edges in a v-structure (if v-structures exist) (4), and remaining edges, for which MRPC iteratively forms a triplet and checks which of the five basic models under the PMR is consistent with the triplet (5). If none of the basic models matches the triplet, the edge is left unoriented (shown as bidirected). (A) An example illustrating the algorithm. (B) The pseudocode of the algorithm.
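
The following toy sketch (a schematic illustration only, not the authors' MRPC implementation) walks a three-node example through the two steps in the caption: a skeleton is learned by dropping edges between (conditionally) independent pairs, the edge touching the genetic variant is oriented away from the variant under the PMR, and the remaining edge is oriented by checking which structure is consistent with the data. The correlation threshold below stands in for a proper statistical independence test, and the simulated data are arbitrary.

```python
# Schematic two-step illustration (skeleton, then orientation) on toy data.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
V = rng.binomial(2, 0.3, n)                 # genetic variant (eQTL genotype)
G1 = 1.0 * V + rng.normal(size=n)           # true model: V -> G1 -> G2
G2 = 0.8 * G1 + rng.normal(size=n)

def cor(x, y):
    return np.corrcoef(x, y)[0, 1]

def partial_cor(x, y, z):
    rxy, rxz, ryz = cor(x, y), cor(x, z), cor(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

THRESH = 0.05   # crude cutoff standing in for a conditional-independence test

# Step I: skeleton. Start fully connected; drop an edge when the pair is
# (conditionally) independent.
edges = {("V", "G1"), ("V", "G2"), ("G1", "G2")}
if abs(partial_cor(V, G2, G1)) < THRESH:
    edges.discard(("V", "G2"))              # V independent of G2 given G1

# Step II: orientation.
directed = []
# (1) Edges involving the variant: expression cannot cause genotype, so V -> G1.
if ("V", "G1") in edges:
    directed.append(("V", "G1"))
# (2) Remaining edge G1-G2: if it pointed G2 -> G1, then V -> G1 <- G2 would be
# a v-structure and V, G2 would be marginally independent; they are not,
# so orient G1 -> G2.
if ("G1", "G2") in edges and abs(cor(V, G2)) > THRESH:
    directed.append(("G1", "G2"))

print("skeleton:", sorted(edges))
print("directed edges:", directed)          # expected: V->G1, G1->G2
```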

Reproducibility Does Not Equal Truth

The CMCI Reproducibility in Sciences working group, or SciRep for short, has been meeting since the fall of 2015. Today, their most recent publication, “Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity,” was published in PLOS ONE. Congratulations!

Fig 2. A transition of our process of scientific discovery for an epistemically diverse population with replicator. A scientist (Bo) is chosen uniformly randomly from the population (1). Given the global model, the set of proposal models and their probabilities (given in percentage points inside models) are determined. In this population with no replicator, Bo proposes only models formed by adding an interaction (2). The proposed model selected (3) and the data generated from the true model (4) are used with the model comparison statistic (SC or AIC) to update the global model (5).
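
To make the update step concrete, here is a stripped-down sketch of that loop: a proposed model competes against the current global model on freshly generated data, and whichever has the lower AIC becomes the new global model. The linear true model, candidate terms, and sample sizes below are arbitrary toy choices, not the models studied in the paper.

```python
# Toy sketch of the propose / generate-data / compare / update loop.
import numpy as np

rng = np.random.default_rng(2)

def aic_linear(y, X):
    """AIC (up to an additive constant) of an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def design(x1, x2, terms):
    cols = [np.ones_like(x1)]
    if "x1" in terms: cols.append(x1)
    if "x2" in terms: cols.append(x2)
    if "x1:x2" in terms: cols.append(x1 * x2)   # interaction term
    return np.column_stack(cols)

TRUE_TERMS = {"x1", "x1:x2"}                 # the "true model" generating data
global_model = {"x1"}                        # community's current consensus

for step in range(20):
    # A scientist proposes a model, here by adding one term to the global model.
    candidates = [{"x2"}, {"x1:x2"}]
    proposal = global_model | candidates[rng.integers(len(candidates))]

    # Fresh data are generated from the true model.
    x1, x2 = rng.normal(size=(2, 200))
    y = 1.0 * x1 + 0.7 * x1 * x2 + rng.normal(size=200)

    # Model comparison: the lower-AIC model becomes the new global model.
    if aic_linear(y, design(x1, x2, proposal)) < aic_linear(y, design(x1, x2, global_model)):
        global_model = proposal

# The consensus the loop settles on need not equal the true model.
print("final global model terms:", sorted(global_model), "| true:", sorted(TRUE_TERMS))
```

Even in this toy version, the consensus the loop converges on need not match the true model exactly, which echoes the article's broader point that agreement and reproducibility are not the same as truth.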

The American Council on Science and Health also picked up the story with their article, “Reconsidering The ‘Replication Crisis’ In Science.”

The following article was written by Leigh Cooper, U of I Science and Content Writer.


U of I Study Finds Scientific Reproducibility Does Not Equate to Scientific Truth

MOSCOW, Idaho — May 15, 2019 — Reproducible scientific results are not always true and true scientific results are not always reproducible, according to a mathematical model produced by University of Idaho researchers. Their study, which simulates the search for scientific truth, was published today, May 15, in the journal PLOS ONE.

Independent confirmation of scientific results — known as reproducibility — lends credibility to a researcher’s conclusion. But researchers have found the results of many well-known science experiments cannot be reproduced, an issue referred to as a “replication crisis.”


“Over the last decade, people have focused on trying to find remedies for the ‘replication crisis,’” said Berna Devezer, lead author of the study and U of I associate professor of marketing in the College of Business and Economics. “But proposals for remedies are being accepted and implemented too fast without solid justifications to support them. We need a better theoretical understanding of how science operates before we can provide reliable remedies for the right problems. Our model is a framework for studying science.”

Devezer and her colleagues investigated the relationship between reproducibility and the discovery of scientific truths by building a mathematical model that represents a scientific community working toward finding a scientific truth. In each simulation, the scientists are asked to identify the shape of a specific polygon.

The modeled scientific community included multiple scientist types, each with a different research strategy, such as performing highly innovative experiments or simple replication experiments. Devezer and her colleagues studied whether factors like the makeup of the community, the complexity of the polygon and the rate of reproducibility influenced how fast the community settled on the true polygon shape as the scientific consensus and how long that consensus persisted.

Within the model, the rate of reproducibility did not always correlate with the probability of identifying the truth, how fast the community identified it, or whether the community stuck with it once identified. These findings indicate reproducible results are not synonymous with finding the truth, Devezer said.

Compared to other research strategies, highly innovative research tactics resulted in a quicker discovery of the truth. According to the study, a diversity of research strategies protected against ineffective research approaches and optimized desirable aspects of the scientific process.

Variables including the makeup of the community and the complexity of the true polygon influenced how quickly scientists discovered the truth and how long it persisted as the consensus, suggesting that irreproducible results should not be automatically blamed on questionable research practices or problematic incentives, Devezer said. Both have been pointed to as drivers of the “replication crisis.”


“We found that, within the model, some research strategies that lead to reproducible results could actually slow down the scientific process, meaning reproducibility may not always be the best — or at least the only — indicator of good science,” said Erkan Buzbas, U of I assistant professor in the College of Science, Department of Statistical Science, and a co-author on the paper. “Insisting on reproducibility as the only criterion might have undesirable consequences for scientific progress.”