Carnegie Mellon University researchers have created the first robotically driven system for determining the effects of a large number of drugs on many proteins. The system could reduce the number of necessary experiments by 70 percent.
Armaghan Naik, lead author of the study and a Lane Fellow in Carnegie Mellon University’s Computational Biology Department, said that balancing experiments whose outcomes can be predicted confidently against those whose outcomes cannot is a challenge that requires reasoning over an enormous number of hypothetical outcomes. In other words, it demands a great deal of thought and expense.
To address this problem, the team applied a machine learning approach called “active learning”. The model makes accurate predictions of the interactions between new drugs and their targets, helping reduce the cost of drug discovery. It enables a computer to repeatedly choose which experiments to do next, which are then carried out using liquid-handling robots and an automated microscope.
The model studied the possible interactions between 96 drugs and 96 cultured mammalian cell clones with distinct, fluorescently tagged proteins. A total of 9,216 experiments were possible, each consisting of acquiring images for a given cell clone in the presence of a given drug. The challenge for the algorithm was to learn how proteins were affected in each of the experiments, without performing all of them.
The first round of experiments began by collecting images of each clone for one of the drugs; the images were represented by numerical features that captured each protein’s location in the cell. The algorithm repeated the process for 30 rounds, completing 2,697 of the 9,216 possible experiments. As it performed the experiments, it identified more phenotypes and more patterns in how sets of proteins were affected by sets of drugs.
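The choose-experiments-then-update loop described above can be sketched as follows. This is a hypothetical toy illustration, not the authors' actual system: the `measure` callback stands in for the robotic imaging step, and the simple coverage-based uncertainty score is a placeholder for a real model's predictive uncertainty.

```python
# Toy sketch of an active-learning experiment loop (illustrative only).
# A real system would train a predictive model on image features and
# select the experiments whose outcomes the model is least sure about.

def run_active_learning(drugs, clones, measure, n_rounds, batch_size):
    """Repeatedly pick the least-covered (drug, clone) experiments."""
    observed = {}  # (drug, clone) -> measured phenotype
    pool = [(d, c) for d in drugs for c in clones]
    for _ in range(n_rounds):
        # Placeholder uncertainty: prefer drugs/clones we know least about.
        def coverage(pair):
            d, c = pair
            seen_d = sum(1 for (dd, _) in observed if dd == d)
            seen_c = sum(1 for (_, cc) in observed if cc == c)
            return seen_d + seen_c  # lower = less explored
        pool.sort(key=coverage)
        batch = [p for p in pool if p not in observed][:batch_size]
        for pair in batch:  # robots would run these in parallel
            observed[pair] = measure(*pair)
    return observed
```

With 96 drugs, 96 clones, and 30 rounds of roughly 90 experiments each, such a loop would cover about 29 percent of the 9,216 possible experiments, matching the proportion reported in the study.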
The team determined that the algorithm learned a 92 percent accurate model of how the 96 drugs affected the 96 proteins from only 29 percent of the possible experiments.
Scientists from the Perelman School of Medicine at the University of Pennsylvania developed a mathematical model that explains variability in mutation rates in the human genome. The study, titled “An expanded sequence context model broadly explains variability in polymorphism levels across the human genome”, is published in Nature Genetics.
Senior author of the study, Benjamin F. Voight, Ph.D., explained that the team developed a mathematical model to estimate mutation rates as a function of the nucleotides in the human genome. Voight is also an assistant professor in the departments of Systems Pharmacology and Translational Therapeutics and of Genetics. “This new model not only provides clues into the process of mutation, but also helps discover possible genetic risk factors that influence complex human diseases, such as autism spectrum disorder,” he said.
The study focused on the probability that any given nucleotide in the human genome is changed. Most such changes (called single nucleotide polymorphisms, or SNPs) are not dangerous to the human body, but Voight examined why some sequences are more likely to mutate than others.
“The crux of the paper examines the dependency of mutation rate on which nucleotides are one, two, or three bases away from either side of a SNP,” Voight said. “We already know about one situation in which this placement matters: DNA sequences in the genome where methyl groups are attached to the cytosine nucleotide, also known as CpG sites, are hotspots for mutation. But are there other types of local sequences that matter beyond these?”
To answer this question, the team developed a mathematical model applicable to human SNP data. They used publicly available data from the 1000 Genomes Project. Knowing the three nucleotides flanking either side of a given SNP (seven nucleotides in total) predicted up to 93% of the variability in the chance of finding a SNP in a given sequence among people whose genome sequences are in the 1000 Genomes Project database. Their model also revealed several distinctive local nucleotide sequences that were not previously known to be prone to mutation.
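The core idea of conditioning on flanking sequence can be illustrated with a toy sketch. This uses a simple per-context frequency estimate rather than the full published model: for every 7-nucleotide window (three bases on either side of a centre position), count how often that context occurs and how often its centre carries a SNP.

```python
from collections import Counter

def context_polymorphism_rates(sequence, snp_positions, k=3):
    """Toy estimate of P(SNP | 7-mer context): how often the centre of
    each 7-nucleotide window is polymorphic, divided by how often that
    window occurs.  Illustrative only; the published model fits these
    context effects jointly rather than by raw counting."""
    snp_set = set(snp_positions)
    totals, hits = Counter(), Counter()
    for i in range(k, len(sequence) - k):
        ctx = sequence[i - k:i + k + 1]  # 3 bases either side = 7-mer
        totals[ctx] += 1
        if i in snp_set:
            hits[ctx] += 1
    return {ctx: hits[ctx] / n for ctx, n in totals.items()}
```

Contexts with unusually high rates under such an estimate are the kind of mutation-prone local sequences, such as methylated CpG sites, that the study set out to find.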
Computational predictive measurements like these are used to help prioritize rare or new gene variants for follow-up investigation. The team focused on a set of autism sequencing studies, looking for genes with an excess of new mutations in children with autism that were not found in their parents. When they applied their model to these data, they found an improvement over existing methods for predicting which rare or new mutations were linked with human disease.
Surgeries are painful and carry many risks. Patients under anesthesia may require a potent dose to withstand the pain involved. Even doctors find it hard to accept that there is no reliable way to know whether patients are conscious during an operation.
However, researchers have found a way to manage these risks. New research has found ways to measure electrical impulses in the brain during various stages of consciousness. Network signatures can help determine the point at which loss of consciousness occurs. The same measures can be used to find the dose needed for patients to lose consciousness and remain unconscious throughout surgery. The research was conducted in Australia, where doctors perform more than 6,000 surgeries daily. Anesthesia is an important component of any operation. A patient's gender and weight are the main determinants of how much sedation is needed; physical movement, blood pressure, and heart rate are then monitored to gauge the patient's level of consciousness.
Although only about two in every 1,000 patients regain consciousness during surgery, such cases can top 2,000 per year. Doctors recognize such events when patients start talking during their surgeries or show signs of exceptional pain; patients may also develop mental disorders and traumatic memories of their hospital stay. The new techniques, based on brain mapping, brain imagery, and bioinformatics, offer a potent way to evaluate a patient's history while using a traditional mathematical basis to arrive at an exact dosage for the required time frame. Neuroscience indicates that the brain looks for signals in order to stay conscious; when such signals are absent, it stops sending the signals that keep a patient alert or able to feel pain.
Researchers at Rice and Rutgers Universities have been trying to solve an age-old mystery: why biochemical networks do not always perform as intended. To investigate, they studied the bacterium that causes tuberculosis. According to Oleg Igoshin, principal investigator and associate professor of bioengineering at Rice University, researchers have spent decades trying to understand the biochemical networks that govern human cells. Studies have helped explain how network dynamics shape biological responses, but dynamic responses over time remain poorly understood. Among other results, the new theorem the team is working on provides a way to understand these dynamic responses in living cells. Eduardo Sontag, a distinguished professor of mathematics and quantitative biology at Rutgers University, also worked on the theorem.
Combining control theory, mathematics, biogenetics, bioengineering, and quantitative biology makes it possible to work on such issues. The theorem formulates conditions under which specified biochemical networks display non-monotonic dynamics in response to monotonic triggers. It further states that a non-monotonic response is only possible if the input acts on the output through conflicting paths. The World Health Organization reports that although one-third of the global population is infected with tuberculosis, only a fraction of the infected die. The bacterium is hard to kill because it can reactivate itself, a consequence of the non-monotonic response it undergoes. Moreover, the bacterium grows slowly: E. coli divides in about 20 minutes, while an M. tuberculosis generation may take up to 24 hours. The theorem helps explain the mechanisms that keep tuberculosis patients infected; mathematical expressions coupled with the theorem helped the researchers understand how biochemical interactions worked in the missing underlying networks.
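The "conflicting paths" condition can be illustrated with a hypothetical incoherent feedforward loop, a textbook network in which the input both activates the output directly and, through an intermediate repressor, inhibits it. The equations and rate constants below are illustrative choices, not taken from the paper; they show how a step increase in a monotonic input produces a non-monotonic pulse in the output.

```python
# Illustrative incoherent feedforward loop, simulated with forward Euler.
# Input x activates output z directly, but also builds up a repressor y
# that shuts z back down -- so z rises, peaks, and then decays.

def simulate_ifl(x=1.0, dt=0.01, t_end=20.0):
    """Return the output trajectory z(t) after a step input x."""
    y = z = 0.0  # repressor and output concentrations
    trace = []
    for _ in range(int(t_end / dt)):
        dy = 0.5 * (x - y)              # repressor accumulates slowly
        dz = x / (1.0 + 5.0 * y) - z    # input activates z; y represses it
        y += dt * dy
        z += dt * dz
        trace.append(z)
    return trace
```

Because the direct activating path and the delayed repressing path conflict, the output overshoots its final steady state, exactly the kind of pulse-like, non-monotonic behavior the theorem ties to opposing input paths.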
The National Institutes of Health supported the research, and National Science Foundation supercomputers administered by Rice’s Ken Kennedy Institute for Information Technology helped process the data.