MIT researchers have developed a computational model of the brain’s face-recognition mechanism that captures aspects of human neurology that previous models have missed. The researchers designed a machine-learning system and trained it to recognize specific faces by feeding it a battery of sample pictures.
They discovered that the trained system includes an intermediate processing step that represents a face’s degree of rotation (say, 45 degrees from centre) but not the direction of rotation (left or right). This property was not built into the system; it emerged from the training process, duplicating an experimentally observed feature of the human face-processing mechanism and suggesting that the system and the human brain do something similar.
According to Tomaso Poggio, director of the Center for Brains, Minds and Machines (CBMM) and a professor of cognitive sciences at MIT, the results need to be examined further. He added that models are cartoons of reality and that the outcome of the research was not proof that they understood what was going on.
The research includes a mathematical proof that the specific type of machine-learning system used inevitably yields intermediate representations that are indifferent to the angle of rotation. The researchers believe that the brain produces similarly “invariant” representations of faces, meaning representations that are indifferent to an object’s angle in space, its location in the visual field, and its distance from the viewer.
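The idea of a representation that encodes the magnitude of rotation but not its direction can be illustrated with a toy sketch. The tuning curves and mirror-pooling scheme below are illustrative assumptions, not the published model: each hypothetical unit is tuned to one view angle, and pooling a unit with its mirror-tuned partner yields an output that is identical for a face turned 45 degrees left or 45 degrees right.

```python
import numpy as np

# Hypothetical sketch (not the MIT model): view-tuned units with Gaussian
# tuning curves, pooled with mirror-tuned partners so the representation
# encodes the magnitude of rotation but not its direction.

def unit_response(view_angle, preferred_angle, width=15.0):
    """Response of a unit tuned to preferred_angle (angles in degrees)."""
    return np.exp(-((view_angle - preferred_angle) ** 2) / (2 * width ** 2))

def mirror_symmetric_representation(view_angle, preferred_angles):
    """Pool each unit with its mirror partner tuned to the opposite angle."""
    return np.array([
        unit_response(view_angle, a) + unit_response(view_angle, -a)
        for a in preferred_angles
    ])

angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
left = mirror_symmetric_representation(-45.0, angles)   # face turned left
right = mirror_symmetric_representation(45.0, angles)   # face turned right
print(np.allclose(left, right))  # True: direction is lost, magnitude is kept
```

A 30-degree view still produces a different vector than a 45-degree view, so the representation remains informative about how far the face has rotated.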
Researchers from Carnegie Mellon University have developed a robotically driven experimentation system that determines the effects of numerous drugs on a large number of proteins, reducing the number of required experiments by 70 percent.
Biomedical researchers have invested considerable time and effort in making experiments cheaper and faster to perform, but scientists still cannot run an experiment for every possible combination of biological conditions, such as cell type and genetic mutation. Traditionally, researchers have had to choose a few targets or conditions to test thoroughly, picking the experiments themselves. To address this, the scientists applied a machine-learning approach known as “active learning.”
In active learning, a computer repeatedly chooses which tests to perform so that it learns efficiently from the patterns it detects in the data. While earlier active-learning approaches had only been validated against previously acquired data, the present system goes further by letting the computer select which experiments to actually run.
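The loop of training a model, picking the most informative untested experiment, and retraining can be sketched minimally. This is not the CMU system; it assumes a synthetic pool of candidate "experiments," a simple logistic model, and uncertainty sampling (query the point whose predicted outcome is closest to 50/50):

```python
import numpy as np

# Minimal pool-based active-learning sketch (illustrative, not the CMU
# system): each round, query the unlabeled experiment whose predicted
# outcome is most uncertain instead of testing every combination.

rng = np.random.default_rng(0)

def predict_proba(X, w):
    """Logistic model: probability that an experiment shows an effect."""
    return 1.0 / (1.0 + np.exp(-X @ w))

def fit(X, y, lr=0.5, steps=200):
    """Fit logistic-regression weights by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (predict_proba(X, w) - y) / len(y)
    return w

# Synthetic pool of 200 candidate experiments with hidden binary outcomes.
X_pool = rng.normal(size=(200, 2))
y_true = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(float)

labeled = list(range(10))                  # small seed set of "run" experiments
for _ in range(20):                        # budget: 20 more experiments
    w = fit(X_pool[labeled], y_true[labeled])
    p = predict_proba(X_pool, w)
    uncertainty = -np.abs(p - 0.5)         # closest to 0.5 = most uncertain
    uncertainty[labeled] = -np.inf         # never re-run an experiment
    labeled.append(int(np.argmax(uncertainty)))

accuracy = np.mean((predict_proba(X_pool, w) > 0.5) == y_true)
print(f"pool accuracy after {len(labeled)} of 200 experiments: {accuracy:.2f}")
```

The key design choice is the query criterion: by spending its budget on the experiments the current model finds most ambiguous, the learner can characterize the whole pool while running only a fraction of it, which is the same economy the CMU system reports.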
The experiments are carried out using an automated microscope and liquid-handling robots. As the system gradually performed the tests, it detected more phenotypes and patterns in how sets of proteins are affected by various drugs.
Scientists have created the first viable mathematical model of a vital cellular defence mechanism against Salmonella, according to a study in PLOS Computational Biology. Globally, the bacterium Salmonella is responsible for numerous infections and many deaths every year. When Salmonella enters a human cell, it is targeted by a process known as xenophagy. Understanding how a cell defends itself against the bacterium is essential for developing treatments, but the process is not yet well understood.
In the new study, scientist Ivan Dikic and a team of bioinformaticians combined knowledge of molecular interactions with a computer-science technique known as a Petri net to create a mathematical model of xenophagy.
To examine the model, the scientists studied what happens when certain proteins in the xenophagy process are virtually disabled, a method called an in silico knockout. The results of these computer-based perturbations were consistent with data from laboratory experiments in which the same proteins were perturbed, showing that the model accurately reproduces the known parts of the xenophagy process.
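A Petri net models a pathway as places (molecular species) and transitions that fire when all their input places are marked; an in silico knockout removes every transition that depends on the knocked-out protein. The toy cascade below is an invented illustration, not the published xenophagy model, and uses a boolean abstraction in which tokens are not consumed:

```python
# Toy Petri-net sketch of a signaling cascade (illustrative only, not the
# published xenophagy model). Places hold tokens for molecular species;
# a transition fires when all its input places are marked; an in silico
# knockout deletes the transitions that require the knocked-out protein.

def run_petri_net(transitions, marking, steps=10):
    """Fire enabled transitions until quiescent or the step limit is hit."""
    marking = set(marking)
    for _ in range(steps):
        fired = False
        for inputs, outputs in transitions:
            if inputs <= marking:       # all input places marked -> fire
                marking |= outputs
                fired = True
        if not fired:
            break
    return marking

# Hypothetical xenophagy-like sequence: detection -> ubiquitin coat ->
# receptor recruitment -> autophagosome formation.
transitions = [
    ({"salmonella"}, {"ubiquitin_coat"}),
    ({"ubiquitin_coat", "receptor"}, {"receptor_bound"}),
    ({"receptor_bound", "lc3"}, {"autophagosome"}),
]

wild_type = run_petri_net(transitions, {"salmonella", "receptor", "lc3"})
print("autophagosome" in wild_type)   # True: the cascade completes

# In silico knockout of the receptor: drop transitions that depend on it.
knockout = [t for t in transitions if "receptor" not in t[0]]
ko_result = run_petri_net(knockout, {"salmonella", "lc3"})
print("autophagosome" in ko_result)   # False: the defence mechanism fails
```

Comparing which downstream species are reachable in the wild-type net versus each knockout net is the kind of prediction that can then be checked against laboratory perturbation data.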
The researchers also proposed a possible new mechanism for one of the proteins involved in the xenophagy process. Each hypothesis generated by the in silico knockout studies can be tested in laboratory experiments.
Researchers at Massachusetts General Hospital (MGH) have developed new ways of measuring and mapping solid stress within tumors, an achievement that may lead to a better understanding of those forces and their consequences, as well as to novel treatment strategies.
The scientists discovered evidence of solid stress in tumors in 1997 and made the first measurements of it in 2012. In a series of studies, the team has shown that compression of lymphatic and blood vessels by solid stress promotes tumor progression by reducing oxygen supply, which lowers the effectiveness of immunotherapy, chemotherapy, and radiation treatment. More recently, they found that applying stress to tumors in living animals stimulates pathways involved in tumor initiation and migration. Alleviating solid stress by depleting hyaluronic acid and collagen, two major stress-carrying components of the extracellular matrix, has led to new methods for improving the results of conventional therapies.
The investigators developed mathematical and experimental frameworks that provide two-dimensional maps of solid stress in tumors. Applying these approaches in mouse models revealed that stored elastic energy and solid stress may differ between primary and metastatic tumors, since both depend on the tumor cells and the surrounding microenvironment. The study also showed that solid stress increases as tumors grow larger, and that the normal tissue neighbouring a tumor contributes substantially to the solid stress.