Protein dynamics is a complex process, and the current challenge is to break its complexity down into elementary processes acting on different time scales and length scales. We integrate femtosecond spectroscopy, molecular biology techniques, and computational simulations to follow system evolution in real time and thus elucidate these complex dynamics in unprecedented detail. We report on two important biological systems: protein surface hydration and light-driven DNA repair by the photoenzyme photolyase. With femtosecond temporal and single-residue spatial resolution, we mapped out global water motion in the hydration layer, using tryptophan residues introduced by site-directed mutagenesis to scan the protein surface. The results reveal the ultrafast nature of surface hydration dynamics and provide a molecular basis for protein conformational flexibility, an essential determinant of protein function. By mutating chemically and structurally important residues of photolyase, we identified key residues in the catalytic reactions and followed the entire functional evolution of DNA repair. We resolved a series of ultrafast processes, including active-site solvation, energy harvesting and transfer, and electron hopping and tunneling. These results elucidate the crucial role of ultrafast dynamics in the efficiency of biological function and lay bare the molecular mechanism of DNA repair at the atomic scale.
Prions are infectious proteins hypothesized to be the causative agents of diseases such as Creutzfeldt-Jakob disease in humans, scrapie in sheep, and bovine spongiform encephalopathy (mad cow disease) in cows. This hypothesis is controversial, because prion populations are capable of proliferation even though prions contain neither DNA nor RNA. A mathematical model is analyzed to explain prion proliferation. The model consists of a system of nonlinear ordinary and partial differential equations. An analysis of the model is given, and model simulations are compared to experimental data.
Locally stationary time series were formally introduced by Dahlhaus (1997). A simple example is provided by an autoregressive process with time-varying parameters. We show how such models can be applied successfully to the problem of discriminating seismographic readings of earthquakes from those of explosions. Discrimination here is based on functionals of estimated time-varying variance functions. Our method has the advantage that no alignment of the underlying time series is required. Estimating the variance functions is accomplished via a minimum-distance approach. We utilize prior knowledge about our target problem by introducing shape constraints into the estimation process. Some justification for our method, in the form of large-sample results, will be presented, and the method is illustrated using simulations and a real data application.
This presentation is based on joint work with G. Chandler and R. Dahlhaus.
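As a hedged illustration of the locally stationary AR setting above (not the estimator from the talk), the sketch below simulates an AR(1) process whose coefficient and innovation scale depend on rescaled time and computes a crude moving-window variance function; the function names, parameter values, and the naive windowed estimator are all illustrative, not the minimum-distance approach described.

```python
import numpy as np

def simulate_tvar1(a, sigma, n, seed=0):
    """Simulate a locally stationary AR(1): X_t = a(t/n) X_{t-1} + sigma(t/n) e_t.

    Following Dahlhaus's framework, the coefficient a(.) and innovation
    scale sigma(.) are functions of rescaled time u = t/n.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        u = t / n
        x[t] = a(u) * x[t - 1] + sigma(u) * rng.standard_normal()
    return x

def local_variance(x, window):
    """Crude time-varying variance estimate via a moving window."""
    n = len(x)
    v = np.empty(n)
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window + 1)
        v[t] = np.var(x[lo:hi])
    return v
```

For an innovation scale that grows with rescaled time, the estimated variance function increases accordingly; functionals of such variance curves are the kind of feature on which discrimination can be based, with no alignment of the series required.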
Honey bee swarms perform a nest-site selection task that involves search, nest-site assessment, and group agreement before the swarm flies to its new home. Swarm cluster elements can be identified that have close analogs to known components and structures in neuron-based brains of animals that perform perception-attention-choice tasks. These elements include an interconnection of communicating units, group-level memory, parallel and converging paths, and identifiable early and late processing. To provide justification that this swarm cognition perspective is more than just an extended analogy, we first conduct a series of behavioral tests on an experimentally validated simulation of the nest-site selection process. These tests demonstrate the ability of a swarm (i) to discriminate between site qualities even in the presence of significant individual bee nest-site assessment noise, (ii) to avoid being misled by multiple inferior distractor nest sites and simultaneously focus on the best site, and (iii) to order the percentage of choices for each site according to relative nest-site qualities and thereby avoid negative context-dependent effects on choice performance. Next, it is shown that (i) swarm cognition mechanism parameters have been tuned by natural selection to provide a balance between speed and accuracy of choice, and (ii) the key component of swarm cognition, accurate group memory, is a result of this same balance. Our analysis at multiple levels, from mechanisms and behavioral levels to the adaptation level, serves to solidify connections between neuroscience, sociobiology, and cognitive ecology that we hope will have implications in the study of robust group decision making for other species.
Game-theoretic models are extensively used in the study of animal behavior. These models are used to predict optimal strategies for a variety of animal interactions, such as fighting, foraging, or signaling. In order to be of predictive value, a strategy must be evolutionarily stable (an ESS), which means that a population of animals following an ESS must be resistant to invasion by mutants who follow a different strategy. The first part of this talk will give a very brief introduction to evolutionary game theory and the ESS concept. Then a new game-theoretic model will be introduced that makes predictions about which contestant (the likely winner or the likely loser) can be expected to initiate escalation in a contest. Next, computer simulation studies on whether the ESSs predicted for this game can actually evolve in a finite population that initially behaves randomly will be presented. Finally, some experiments about representing strategies in computer simulations will be reported, and the relevance of the findings for the study of the genotype-phenotype map will be discussed.
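As an illustrative sketch of the ESS concept introduced above (using the classic Hawk-Dove game rather than the new escalation model from the talk), Maynard Smith's stability conditions for a symmetric matrix game can be checked directly; the function name and payoff values are hypothetical:

```python
import numpy as np

def is_ess(payoff, i, tol=1e-9):
    """Check whether pure strategy i is an ESS of a symmetric game.

    Maynard Smith's conditions: for every mutant j != i, either
    E(i, i) > E(j, i), or E(i, i) == E(j, i) and E(i, j) > E(j, j),
    where E(r, c) = payoff[r, c] is row player r's payoff against c.
    """
    n = payoff.shape[0]
    for j in range(n):
        if j == i:
            continue
        if payoff[i, i] > payoff[j, i] + tol:
            continue                       # strict first condition holds
        if abs(payoff[i, i] - payoff[j, i]) <= tol and payoff[i, j] > payoff[j, j] + tol:
            continue                       # tie broken by second condition
        return False
    return True

# Hawk-Dove game with resource value V and fight cost C (V < C);
# rows/columns are the strategies Hawk, Dove.
V, C = 2.0, 6.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])
```

For V < C neither pure strategy is an ESS here; the ESS of Hawk-Dove is the mixed strategy playing Hawk with probability V/C, which is why simulation studies of whether predicted ESSs actually evolve in finite populations are informative.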
We will discuss the advantages of using irreducible representations of su(2) and so(3) in describing various interactions in NMR. We will show how to use these representations to analyze the NMR of half-integer quadrupolar nuclei. We will also make use of these representations to develop the theory of pulse sequences, and analyze a few pulse sequences used in the NMR of biological samples. The talk will introduce all the concepts required, and will be accessible to a broad audience.
In the study of various aspects of cell metabolism, in particular folate and methionine metabolism, new mathematical models are developed. The new models are used to better understand the effects of temporal variations in methionine and folate input on the other metabolite concentrations. Sensitivity analysis of the model is performed to better understand the molecular mechanisms underlying the complexity of the cycles. More specifically, we were interested in how the qualitative behavior of the model depends on the precise choices of parameter values. This is ongoing work; we ultimately aim to develop a visualization project designed to be used by scientists as a testbed for exploring and evaluating folate and methionine metabolism. Using advanced computer imaging techniques, the folate cycle and the methionine cycle may be reconstructed from model parameters. The computer reconstructions created through the visualization project will permit the cycles to be explored interactively for presentation purposes, while providing an additional modality for data exploration and analysis.
PS 1: The folic acid cycle plays a central role in cell metabolism. Among the important functions of the folate cycle are the synthesis of pyrimidines and purines and the delivery of one-carbon units to the methionine cycle for use in methylation reactions. Dietary folate deficiencies, as well as mutations in enzymes of the folate cycle, are associated with megaloblastic anemia; cancers of the colon, breast, and cervix; affective disorders; cleft palate; neural tube defects; Alzheimer's disease; Down syndrome; preeclampsia; and early pregnancy loss. Several enzymes in the cycle are the targets of anti-cancer drugs.
PS 2: The methionine cycle is important for the regulation of homocysteine, an important risk factor for heart disease, and for the control of DNA methylation. Both hyper- and hypomethylation have been proposed as crucial steps in chains of events that turn normal cells into cancerous cells.
Despite the fact that all terrestrial plants require the same essential resources (such as mineral nutrients, water, and sunlight), we often observe multiple plant species successfully living closely together. In this talk, we investigate the role of canopy partitioning (or vertical leaf placement) as a possible mechanism by which clonal plant species with different competitive abilities may coexist.
I will begin by showing how plant competition for sunlight fits within the mathematical framework of resource competition. Next, I will present an analytic model of clonal plant population growth that emphasizes the role of light capture by leaves at different heights. This model's extension to two species competition is realized by a system of Kolmogorov integro-differential equations that are coupled through the species' vertical leaf density functions. I will then describe some mathematical methods that we use to determine the outcome of competition between two model species in the interesting but difficult case that they possess overlapping vertical leaf profiles. If time permits, I will also indicate some ways in which the biological realism of this model can be increased without altering its qualitative conclusions. This work is in collaboration with Richard R. Vance of the University of California, Los Angeles.
Articular cartilage is the primary load-bearing soft tissue in joints such as the knee, shoulder, and hip. Multiphasic continuum mixture models have been used to describe the relative contribution of effects due to solid, fluid, and ionic phases in cartilage. This research is motivated by the need to quantify differences between the normal and osteoarthritic mechanical and physico-chemical states in the tissue. In this talk, I will present numerical methods and mathematical models pertaining to the cells and extracellular matrix of articular cartilage. Two problems will be described. The first problem is the development of an accelerated numerical method for the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage deformation. The second problem is the formulation and application of a triphasic mechano-chemical model to analyze osmotic loading experiments for the chondron, which is the functional cell-matrix unit in cartilage.
The emerging field of systems biology is focused on the integration of biological information into predictive mathematical models. One primary approach in the systems-biology paradigm is to build models from time series of experimental data, obtained by measuring the response of a biological system to perturbations. Referred to as reverse engineering, this approach is used to elucidate features of such systems, including their structure and dynamics. Of particular relevance for reverse engineering are the design of biological experiments that are suitable for modeling and the identification of perturbations that will reveal salient features of the system.
In this talk I will introduce a collaborative project, in which one objective is to generate appropriate time series data for reverse engineering a stress-response network in yeast. I will present a modeling approach that uses algorithmic tools from computational algebra to build the set of all possible discrete models that fit time series data and to select minimal models from this set. In this setting, discrete models are given by systems of polynomial functions over a finite field. As it is important to identify which perturbations are best suited to build accurate models, properties of the data that make them appropriate for the discrete modeling method will be discussed.
SELEX experiments allow one to extract, from an initially random pool of DNA, those oligomers with high affinity for a given DNA-binding protein. We address what constitutes a suitable experimental and computational procedure for inferring parameters of protein-DNA interaction from SELEX experiments. To answer this, we use a biophysical model of protein-DNA interactions to quantitatively model SELEX and show that the standard procedure is unsuitable for obtaining the interaction parameters. However, we show that a suitably modified experiment allows robust generation of an appropriate data set. Based on our quantitative model, we propose a novel bioinformatic method of data analysis. Our method results in a significantly improved false-positive/false-negative trade-off compared to the standard information-theory-based method.
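As a toy sketch of the kind of biophysical selection process being modeled (emphatically not the quantitative model from the talk), one SELEX round can be caricatured as Boltzmann-weighted retention by binding energy; the species, energies, and frequencies below are entirely illustrative:

```python
import numpy as np

def selex_round(pool_freq, energies, beta=1.0):
    """One toy round of selection: oligomer i is retained with weight
    proportional to exp(-beta * E_i), where lower E_i means tighter binding.

    A real quantitative SELEX model must also account for protein
    concentration, washing stringency, and PCR amplification bias.
    """
    w = pool_freq * np.exp(-beta * energies)
    return w / w.sum()

# Toy pool: three oligomer species; the tightest binder starts out rare.
energies = np.array([-2.0, 0.0, 1.0])
freq = np.array([0.1, 0.6, 0.3])
for _ in range(5):
    freq = selex_round(freq, energies)
```

After a handful of rounds the lowest-energy binder dominates the pool, which is precisely why naive analysis of late-round pools loses information about the full spectrum of interaction parameters.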
In the second part of the talk, I will discuss the analysis of gene expression strategies of a virulent bacteriophage. Most genes of the virulent bacteriophage Xp10 are organized similarly to those of lambdoid phages, which rely only on the host RNA polymerase for their development. However, unlike the lambdoid phages, Xp10 encodes its own RNA polymerase. We perform global transcription profiling, kinetic modeling, and bioinformatics analyses in order to understand the roles of both the host and phage RNA polymerases in Xp10 gene expression. Our analysis yields quantitative estimates of the contributions of both RNA polymerases to the transcription rates of all Xp10 genes, and identifies a previously unknown promoter sequence for the Xp10 RNA polymerase. The data-analysis methods developed here can be used to efficiently infer the transcription strategies of other novel bacterial viruses.
The talk will introduce the basics of NMR. The various spin interactions present in biological samples will be discussed, along with the commonly used methodology to obtain high resolution solid-state NMR spectra for these samples.
Abstract: Hantaviruses are rodent-borne zoonotic agents that cause hantavirus pulmonary syndrome and hemorrhagic fever with renal syndrome in humans. We formulate and analyze some multi-host and multi-patch epidemic models for rodent populations and determine conditions under which the disease can emerge. The basic reproduction number is computed and shown to increase with the number of hosts that can be infected.
Despite considerable evidence showing that landscape heterogeneity induces asymmetric processes in metapopulation systems, most metapopulation models assume such processes to be symmetric. With individual-based models, we investigated the effects of (i) asymmetry in colonization, (ii) different patterns of disturbance, and (iii) various dispersal strategies on metapopulation viability and connectivity. The most important results obtained are: (i) if a model assuming symmetric dispersal is used when dispersal is actually asymmetric, metapopulation persistence is estimated wrongly in more than 50% of the cases; (ii) the extinction probability is larger for spatially aggregated disturbances than for spatially random disturbances; and (iii) metapopulation connectivity and dispersal success depend strongly on the focal organism's properties (including its mobility and cognitive abilities). Additionally, we investigated gene-flow asymmetry in metapopulations induced by a sex-reversal gene, and analyzed the conditions under which such a gene can invade a metapopulation system.
Since 1999, West Nile virus has spread spatially from the East to the West coast of North America. A partial differential equation model for this spatial spread is developed and analyzed. The model has cross infection between mosquitoes and birds, with diffusion terms describing their movement. Using a simplified version of the model, the cooperative nature of the cross-infection dynamics is used to prove the existence of traveling waves and to give an expression for the spatial spread of infection. A comparison theorem is used to show that this spread rate may provide an upper bound for the spread rate of the more realistic model.
All organisms are composed of multiple chemical elements such as carbon, nitrogen, and phosphorus. Element cycling and energy flow are two fundamental and unifying principles in ecosystem theory; however, population models rarely take advantage of the former. Instead, they assume chemical homogeneity of all populations by concentrating on a single constituent, generally an equivalent of energy. In this talk, we examine the ramifications of an explicit assumption that both predator and prey are chemically heterogeneous. Using stoichiometric principles, we construct a 2D Lotka-Volterra predator-prey type model in which both populations are composed of two essential elements: carbon and phosphorus. The analysis shows that indirect competition between the two populations for phosphorus can shift predator-prey interactions from a (+, -) type to an unusual (-, -) class. This leads to complex dynamics with multiple positive equilibria, where bistability and deterministic extinction of the predator are possible. Rosenzweig's paradox of enrichment holds only in the part of the phase plane where the predator is energy (food quantity) limited; a new phenomenon, the paradox of energy enrichment, arises in the other part, where the predator is phosphorus (food quality) limited. Subsequent laboratory experiments validated the outcomes of this model.
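A minimal numerical sketch of a stoichiometric producer-grazer model of the type described above (in the spirit of such models, with illustrative rather than fitted parameter values) can be integrated with a simple Euler scheme:

```python
def lke_step(x, y, dt, b=1.2, K=0.75, c=0.8, a=0.25, e=0.8, d=0.25,
             P=0.025, theta=0.03, q=0.0038):
    """One Euler step of a stoichiometric producer-grazer sketch.

    x: producer carbon; y: grazer carbon. Total phosphorus P is split
    between the grazer (fixed P:C quota theta) and the producer, whose
    quota Q varies; grazer growth efficiency drops when Q falls below
    theta (food-quality limitation). All parameter values illustrative.
    """
    p_free = max(P - theta * y, 1e-9)     # phosphorus left for the producer
    Q = p_free / x                        # producer P:C quota
    f = c * x / (a + x)                   # Holling type II ingestion rate
    dx = b * x * (1 - x / min(K, p_free / q)) - f * y
    dy = e * min(1.0, Q / theta) * f * y - d * y
    return x + dt * dx, y + dt * dy

x, y = 0.5, 0.25
for _ in range(20000):                    # integrate 10 time units
    x, y = lke_step(x, y, 5e-4)
```

The `min(1, Q/theta)` factor is what converts phosphorus scarcity into reduced grazer production, turning the classical (+, -) interaction into the (-, -) regime when both populations compete for phosphorus.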
The development of drug-resistant strains of bacteria is an increasing threat to society, especially in hospital settings. Many antibiotics that were formerly effective in combating bacterial infections in hospital patients are no longer effective due to the evolution of resistant strains, and this evolution compromises medical care worldwide. In this article, we formulate a two-level population model to quantify key elements of nosocomial infections. At the bacterial level, infected patients generate both nonresistant and resistant bacteria. At the patient level, susceptible patients are infected by infected patients at rates proportional to the total bacterial load of each strain present in the hospital. The objectives are to analyze the dynamics of nonresistant and resistant bacterial strains in epidemic populations in hospital environments and to provide an understanding of measures to avoid the endemicity of antibiotic-resistant strains.
Knowledge of haplotypes is useful for understanding block structure and disease risk associations. Direct measurement of haplotypes in the absence of family data is presently impractical; hence several methods have been developed previously for reconstructing haplotypes from population data. We have developed a new population-based method using a hidden Markov model (HMM) for the source of the ancestral haplotype segments. For the ancestral haplotypes, a higher-order Markov model has been used to account for linkage disequilibrium. Our model includes parameters for the genotyping error rate, the mutation rate, and the recombination rate at each position. Parameters of the model are inferred by Bayesian methods, specifically Markov chain Monte Carlo (MCMC) methods. Crucial to the efficiency of the Markov chain sampling is the use of a forward-backward algorithm for summing over all possible state sequences of the HMM. We have used the model to reconstruct the haplotypes of 129 children in the data set of Daly et al. (2001) and of 30 children in the CEU and YRI data of the HapMap project. For these data sets, the family-based reconstructions were found using Merlin (Abecasis et al. 2002). Our haplotype reconstruction method does not require division into small blocks of loci. It produces results that are quite close to the family-based reconstructions and comparable to the state-of-the-art PHASE program of Stephens et al. (2001, 2003). The recombination rates inferred from our model can help to estimate recombination hotspots, such as in the data set of Daly et al. (2001) and in the YRI data of the HapMap project.
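The forward-backward summation mentioned above can be sketched for a generic discrete-observation HMM (a textbook version, not the talk's full model with error, mutation, and recombination parameters):

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Posterior state probabilities for a discrete HMM.

    Sums over all hidden state sequences in O(n * k^2) time, which is
    what makes sampling over the remaining parameters tractable.
    A: k x k transition matrix; B: k x m emission matrix; pi: initial dist.
    """
    n, k = len(obs), len(pi)
    alpha = np.zeros((n, k))
    beta = np.zeros((n, k))
    alpha[0] = pi * B[:, obs[0]]                       # forward pass
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0                                     # backward pass
    for t in range(n - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                               # joint, then normalize
    return gamma / gamma.sum(axis=1, keepdims=True)
```

For long sequences a scaled or log-space version is needed to avoid underflow; the unscaled recursion shown here is only for clarity.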
The talk will introduce the theory and applications of monotone dynamics. Examples from ordinary, delay and parabolic partial differential equations arising from biology will be featured. New results and lines of research will be discussed.
The focus of this talk is a 3-dimensional, stochastic, rule-based model of immune response to viral pathogens. In its present form, the PathSim model focuses on Epstein-Barr virus (EBV) infection of Waldeyer's tonsillar ring. EBV is a ubiquitous and sometimes pathogenic human herpesvirus that establishes a life-long infection in B cells despite an aggressive immune response. EBV is an ideal model system for studying persistent infection because: 1) sites of infection are accessible; 2) levels of infected cells, viral shedding, anti-viral antibody, and T cell responses can be measured in parallel; and 3) infection can be studied from an extreme state of perturbation (mononucleosis) into persistence. The mechanisms underlying the establishment and maintenance of persistence are complex and, given the lack of animal models, we seek to better understand them using modeling strategies. A multi-scale anatomical viewer helps to visualize infection model dynamics. Preliminary results qualitatively match clinical data. Furthermore, simulations reveal that persistence appears to depend strongly on the access of latently infected B cells to the circulation; when this access is blocked, the infection is cleared. One factor that dramatically affects the course of infection is the percentage of latently infected cells triggered to begin viral replication upon returning from the blood.
Rule-based models are well-suited for the simulation of dynamics resulting from a large number of spatially distributed, interacting entities, such as virions and immune cells. One of their shortcomings, however, is the relative lack of mathematical tools available to analyze model dynamics and, in particular, to formulate and solve optimal control problems. We will describe an approach to develop a mathematical foundation for PathSim, which allows the development of control theoretic methods.
When standing up, blood is pooled in the legs due to the effect of gravity, resulting in a drop in systemic arterial pressure and a widening of the blood flow velocity profile. This can be modeled by increasing the blood pressure in the compartments representing the lower body. To restore blood pressure and blood flow velocity, a number of regulatory mechanisms are activated. The most important mechanisms are autonomic reflexes, mediated by the sympathetic nervous system, and cerebral autoregulation, mediated by changes in the concentrations of oxygen and carbon dioxide. The autonomic response to standing is an increase in nervous activity, which results in increased heart rate and cardiac contractility, vasoconstriction of the systemic arterioles, and changes in unstressed volume and venous compliance. The response of cerebral autoregulation is to dilate the arterioles in the cerebral vascular bed. It is not clear how autonomic reflexes and autoregulation interact; one theory suggests that vasoconstriction resulting from increased sympathetic activity has an effect throughout the body, but that cerebral vasoconstriction is overridden (possibly with a significant delay) by autoregulation, resulting in a net vasodilatation of the cerebral vascular bed. In this work we demonstrate how mathematical modeling can be used to predict the interaction between autonomic reflexes and autoregulation, and how methods from optimal control theory can be used to identify model parameters that make the model patient specific. We will show that our models can be used to predict the response for healthy young people, healthy elderly people, and hypertensive people.
The application of cable theory to the study of electrical signaling in neurons has a long history. In this talk a numerical method of direct computation of a neuronal circuit will be described. Then this method will be used as a tool for computational analysis of two very different mechanisms: 1) the contribution of the glial Muller cell to the extracellular mass response of the eye to light (electroretinogram), and 2) the generation of directionally selective light responses by the starburst amacrine cell.
The mathematical tools used in protein structure determination from nuclear magnetic resonance (NMR) data are distance geometry and discrete differential geometry, dealing with distance and orientational constraints, respectively. We give an introduction to these methods and their use in determining protein structures from solid-state NMR experiments.
Micro/nanofabrication methods from the electronics industry exist for producing miniature devices in silicon and glass. However, the properties of these materials (poor impact strength/toughness, poor biocompatibility) are inappropriate for many biomedical devices. In contrast, polymeric materials possess many attractive properties such as high toughness and recyclability. Some possess excellent biocompatibility, are biodegradable, and can provide various biofunctionalities. Proper combinations of polymers and biomolecules can offer tailored properties for various medical devices, but the ability to process them at the nanoscale is still largely underdeveloped. We have developed non-cleanroom, affordable, environmentally and biologically benign nanoengineering techniques using biocompatible polymers, biomolecules, and nanoparticles as building blocks as well as nanofluidic surface transport as a mechanism to design, synthesize, and fabricate bioMEMS/NEMS devices. Applications of polymer nanoengineering and nanofluidics for enzyme immunoassays, drug delivery and gene therapy will be discussed.
Spatial moment equations are an alternative approach to understanding population and community dynamics in a heterogeneous spatial environment. In contrast to typical spatial PDEs, which track population densities at every point in the habitat, spatial moment equations focus on the spatial correlations in density between nearby points. I will give a brief and highly non-rigorous derivation of spatial moment equations, followed by an application to the evolution of dispersal distance and shape in a heterogeneous environment.
Sequence alignment is the most prevalent computational method for functionally annotating newly found genes. A crucial problem in its application is to distinguish biologically significant similarities between the query sequence and a database sequence from spurious ones. Current numerical methods for assessing the statistical significance of local alignments with gaps are time consuming, and analytical solutions thus far have been limited to specific cases. Here, we present a new line of attack on the problem of statistical significance assessment. We combine this new approach with known properties of the dynamics of the global alignment algorithm and with high-performance numerical techniques to obtain a novel method for assessing the significance of gapped alignments within practical time scales. The new methods compare very well against established methods, at drastically less computational effort.
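For context, the extreme-value statistics that such significance assessments build on can be sketched via the Karlin-Altschul formula, which is rigorous for ungapped local alignments; for gapped alignments the parameters lambda and K must be estimated numerically, which is where the computational cost discussed above arises. The parameter values below are illustrative, not fitted:

```python
import math

def alignment_evalue(score, m, n, lam, K):
    """Expected number of chance local alignments scoring >= `score`
    between sequences of lengths m and n: E = K * m * n * exp(-lam * score)."""
    return K * m * n * math.exp(-lam * score)

def alignment_pvalue(score, m, n, lam, K):
    """P-value under the Poisson approximation: P = 1 - exp(-E)."""
    return 1.0 - math.exp(-alignment_evalue(score, m, n, lam, K))

# Illustrative parameters: a 300-residue query against a 1e8-residue database.
p_weak = alignment_pvalue(40, 300, 1e8, lam=0.27, K=0.04)
p_strong = alignment_pvalue(80, 300, 1e8, lam=0.27, K=0.04)
```

Because the score enters exponentially, modest score differences translate into many orders of magnitude in the E-value, which is why accurate estimates of lambda and K matter so much for gapped alignment statistics.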
The central ideas of Formal Concept Analysis revolve around the notions of a formal context and a formal concept. Of interest is the duality, called a Galois connection, that arises naturally in different contexts. This duality is often observed between sets whose elements are related, such as objects and their attributes. In a Galois connection between two sets, an increase in the size of one set corresponds to a decrease in the size of the other, and vice versa. For example, an increase in the number of search terms used in a Google query corresponds, in general, to a decrease in the number of hits.
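The search-engine example above can be made concrete with the two derivation operators of a formal context; the toy context below (documents as objects, query terms as attributes) is hypothetical:

```python
def intent(objects, context):
    """Attributes shared by all given objects (one derivation operator)."""
    result = set().union(*context.values()) if context else set()
    for o in objects:
        result &= context[o]
    return result

def extent(attributes, context):
    """Objects possessing all given attributes (the dual operator)."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# Toy formal context: documents and the terms they contain.
context = {
    "doc1": {"bee", "swarm", "choice"},
    "doc2": {"bee", "nest"},
    "doc3": {"bee", "swarm", "nest"},
}
```

Adding the term "swarm" to the query {"bee"} shrinks the hit set from three documents to two, illustrating the antitone behavior; composing the operators, extent(intent(S)) is the Galois closure of an object set S, and the fixed points of this closure are exactly the formal concepts.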
I will introduce the fundamentals of Formal Concept Analysis and demonstrate how we applied ideas from the field to problems in microarray analysis. In this work we integrate biological attributes related to genes with their expression values obtained from a microarray experiment. The integrated data are represented as a partially ordered set that respects the Galois connections inherent in the data. Metrics are applied to the representations of multiple samples to discover biological similarities.
The neural coding of the direction of stimulus motion, which is a classic example of local neural computation, is a common feature of the nervous system. In the vertebrate retina, the mechanisms that underlie the computation of the direction of image motion remain unresolved. Recent evidence indicates that directionally-selective light responses occur first in the dendrites of a retinal interneuron, the starburst amacrine cell, and that these responses are highly sensitive to the activity of Na-K-2Cl (NKCC) and K-Cl (KCC), two types of chloride cotransporter that determine whether the neurotransmitter GABA depolarizes or hyperpolarizes neurons, respectively. By measuring the GABA reversal potential in different starburst dendritic compartments and by mapping NKCC2 and KCC2 antibody staining on these dendrites, we have recently found that the localization of NKCC2 and KCC2 in different dendritic compartments results in a GABA-evoked depolarization and hyperpolarization at the NKCC2 and KCC2 compartments, respectively, and underlies the directionally-selective responses of starburst dendrites. Computational analysis of light-evoked voltage changes at the starburst cell body and dendritic tip suggests that directionally-selective light responses similar to those we have observed experimentally can be generated if there is a chloride gradient along starburst dendrites, due to the differential compartmentalization of the chloride cotransporters, and if the GABA-evoked increase in the chloride conductance is relatively long-lasting. Experimental measurements indicate that GABA produces long-lasting responses in starburst cells. The functional compartmentalization of interneuron dendrites may be an important means by which the nervous system computes complex information at the subcellular level.
This talk will describe several combinatorial descriptions of RNA secondary structure topologies. I will review some of the older models and describe in detail a new model using permutations. This most recent (permutation) model gives an exact description of the permutations involved, and relates classical statistics on permutations to information on the secondary structures.
Humans are notoriously susceptible to large biases in judgment, called cognitive illusions. Nonhuman animals are thought to be immune to such illusions because their decision-making has been shaped by natural selection. However, our research reveals that, like human consumers, gray jays show irrational preferences when choosing between options varying in quality and price. Standard models of choice assume decision makers evaluate options on relevant dimensions, assign fixed fitness-related values to options, and then make rational choices based on these values. If this were true, then an animal that prefers option a to b, and b to c, must prefer a to c. Likewise, the animal's preference for a over b should be unaffected by the introduction of a third, least preferable option. However, we have found clear violations of these and related predictions. I will give an overview of our experimental findings of economically irrational choice behavior. Throughout, I will describe our modeling attempts to uncover evolutionary explanations for this seemingly maladaptive behavior.
Mathematical biologists have built on variants of the Lotka-Volterra equations and in almost all cases have adopted the physical sciences' single-currency (energy) approach to understanding population dynamics. However, biomass production requires more than just energy; it is crucially dependent on the chemical compositions of both the consumer species and its food resources. In this review-style talk, we explore how depicting organisms as built of more than one thing (for example, C and an important nutrient, such as P) in stoichiometrically explicit models results in qualitatively different and more realistic predictions about the resulting dynamics. Stoichiometric models incorporate both food-quantity and food-quality effects in a single framework, appear to stabilize predator-prey systems while simultaneously producing rich dynamics with alternative domains of attraction and occasionally counterintuitive outcomes, such as the coexistence of more than one predator species on a single prey item and decreased herbivore performance in response to increased light intensity experienced by the autotrophs. We conclude that stoichiometric theory has tremendous potential for both quantitative and qualitative improvements in the predictive power of mathematical population models in the study of both ecological and evolutionary dynamics.
Determining the long-time behavior of large dynamical systems has proved to be a remarkably difficult problem. And yet the robustness and stability of molecular networks in biology seem to indicate a certain underlying structure that does not change under (some) small changes in the topology or the parameter values. Using the theory of monotone systems, we have tried to highlight some of the relevant stability features of certain potentially high-dimensional systems. In this talk, I give sufficient qualitative and quantitative conditions for global attractivity and multistability, even for systems that are not themselves monotone, with applications to delay differential equations arising in molecular biology.
Linear reaction-hyperbolic equations arise in the transport of neurofilaments and membrane-bound organelles in axons. The profile of the solution was shown by simulations to be approximately that of a traveling wave; this was also suggested by formal calculations. In this talk I will describe a rigorous proof of such results.
Addictive drugs have been hypothesized to access the same neurophysiological mechanisms as natural learning systems. These natural learning systems can be modeled through temporal-difference reinforcement learning (TDRL), which requires a reward-error signal that has been hypothesized to be carried by dopamine. TDRL learns to predict reward by driving that reward-error signal to zero. By adding a noncompensable drug-induced dopamine increase to a TDRL model, a computational model of addiction is constructed that overselects actions leading to drug receipt. The model provides an explanation for important aspects of the addiction literature and provides a theoretical viewpoint with which to address other aspects. I will present how this model explains important aspects of cocaine addiction.
These models, however, have a problem modeling behavioral extinction (in which a response that once led to reward no longer leads to reward). Because TDRL models are generalizations of associative models, they do not differentiate learning from unlearning: a missing reward produces delta < 0, which decreases the value (expectation of reward) and thus decreases selection of that action. We propose instead that acquisition and extinction are driven by separate processes: acquisition entails the development of an association, is based on phasic increases in dopamine, and is learned through increases in the value-estimate. Once this association has been learned, it is permanently stored and cannot be unlearned. Extinction entails the development of a new state space, which has no associated value-estimate. I will discuss how this model provides a potential path to understanding problem gambling.
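The core mechanism of the drug-modified TDRL model can be sketched as follows. This is a minimal two-option bandit illustration with assumed reward and learning parameters, not the full model from the talks; the key ingredient is the floor on the prediction error for the drug option.

```python
import random

def train_tdrl(n_trials=2000, alpha=0.05, r_natural=1.0,
               r_drug=0.5, dopamine_boost=0.2, seed=0):
    """Tabular TDRL values for a two-option bandit in which the drug
    option produces a noncompensable dopamine signal: its prediction
    error is floored at `dopamine_boost`, so its value estimate keeps
    growing and eventually overtakes a larger natural reward. All
    numerical parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    value = {"natural": 0.0, "drug": 0.0}
    for _ in range(n_trials):
        action = rng.choice(("natural", "drug"))
        if action == "natural":
            delta = r_natural - value[action]        # ordinary TD error
        else:
            # drug-induced dopamine cannot be predicted away
            delta = max(r_drug + dopamine_boost - value[action],
                        dopamine_boost)
        value[action] += alpha * delta
    return value
```

Even though `r_drug` is smaller than `r_natural`, `value["drug"]` ends up far above `value["natural"]`, so any value-based action selector overselects the drug action.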
An agent-based computer simulation has been created to study the complex network behavior of the immune system in a way that is not possible using a living system. It includes agent and signal representations of all of the basic elements of the immune system, and emulates both normal and pathological immune system behavior in a viral infection scenario. By designating the agents (cells) as nodes and meaningful interactions between the agents as links, the data generated during the simulated immune response demonstrate behavior like that of a scale-free network, with dendritic cell agents as hubs. The average number of links for each agent type also correlates with that type's contribution to the success of the immune response. Modeling the immune system as a scale-free network opens a new window onto information that can be used to develop new strategies for manipulating the immune response.
There is growing interest in proteins that lack a stable and well-defined three-dimensional structure, often referred to as intrinsically disordered proteins, but have functionally important properties that depend on the lack of structure. It has been shown that these proteins possess a range of important properties and functions that derive from being disordered. In this talk, I explore the properties of intrinsically disordered proteins with both computational and experimental methods. First, I present a support vector machine (SVM) trained on naturally occurring disordered and ordered proteins, which is used to examine the contribution of various parameters to recognizing proteins that contain disordered regions. I show that an SVM that incorporates only amino acid composition has a recognition accuracy of 87 +/- 2%. This result suggests that composition alone is sufficient to accurately recognize disorder. Interestingly, SVMs using reduced sets of amino acids based on chemical similarity preserve high recognition accuracy; a set as small as four retains an accuracy of 84 +/- 2%. This result suggests that general physicochemical properties, rather than specific amino acids, are the important factors contributing to protein disorder. Second, I build on the SVM analysis by examining the relationship of disorder propensity to sequence complexity. I graph the distributions of 40-residue peptides from both ordered and disordered proteins in disorder-complexity space. An analysis of the Swiss-Prot database shows that most peptides are of high complexity and relatively low disorder. However, there is also an appreciable number of low-complexity, high-disorder peptides in the database. In contrast, there are no low-complexity, low-disorder peptides. A similar analysis for peptides in the Protein Data Bank (PDB) reveals a much narrower distribution, with few peptides of low complexity and high disorder.
In the case of the PDB, the bounds of the disorder-complexity distribution are well defined, and might be used to evaluate the likelihood that a peptide can be crystallized with current methods. I also examine disorder-complexity distributions of individual proteins and sets of proteins grouped by function. Among individual proteins, there are a variety of distributions that in some cases can be rationalized with regard to function. Groups of functionally related proteins are found to have distributions that are similar within each group, but show notable differences between groups. In addition, I use a pattern matching algorithm to search for proteins with particular disorder-complexity distributions. The results suggest that this approach might be used to identify relationships between otherwise dissimilar proteins.
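The two quantities underlying these plots can be sketched with the standard library alone. The SVM itself is omitted; the Shannon-entropy complexity measure below is one common choice for sequence complexity and an assumption here, not necessarily the talk's exact definition.

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide):
    """Fraction of each of the 20 amino acids in the peptide: the
    feature vector a composition-only SVM would be trained on."""
    counts = Counter(peptide)
    n = len(peptide)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

def shannon_complexity(peptide):
    """Shannon entropy of the composition, in bits: 0 for a
    homopolymer, up to log2(20) ~ 4.32 for a peptide using all 20
    residues equally. Low values flag the low-complexity end of
    disorder-complexity space."""
    return -sum(p * math.log2(p) for p in composition(peptide) if p > 0)
```

For a 40-residue window, `shannon_complexity("A" * 40)` returns 0.0, while a window cycling through all 20 residues reaches the maximum of log2(20) bits.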
The chemostat is a biological reactor used to study the dynamics of species competing for nutrients. If there are n > 1 competitors and a single nutrient, then at most one species survives, provided the control variables of the reactor are constant. This result is known as the competitive exclusion principle. I will review what happens if one of the control variables, the dilution rate, is treated as a feedback variable. Several species can coexist for appropriate choices of the feedback. Also, the dynamical behavior can be more complicated, exhibiting oscillations or bistability.
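At constant dilution rate, competitive exclusion can be seen in a minimal two-species Monod chemostat sketch. The parameter values and unit yields below are illustrative assumptions, not taken from the talk.

```python
def simulate_chemostat(D=0.3, S_in=1.0, dt=0.01, t_end=500.0):
    """Forward-Euler simulation of two species with Monod growth
    competing for one nutrient at constant dilution rate D. Species 1
    has the lower break-even nutrient concentration D*K/(mu_max - D),
    so the competitive exclusion principle predicts it drives
    species 2 out. Parameters and unit yields are illustrative."""
    mu_max = (1.0, 1.2)
    K = (0.1, 0.5)                    # half-saturation constants
    S, x = 0.5, [0.1, 0.1]
    for _ in range(int(t_end / dt)):
        mu = [mu_max[i] * S / (K[i] + S) for i in range(2)]
        S += dt * (D * (S_in - S) - mu[0] * x[0] - mu[1] * x[1])
        x = [x[i] + dt * (mu[i] - D) * x[i] for i in range(2)]
        S = max(S, 0.0)               # keep the nutrient nonnegative
    return S, x
```

The run ends with species 2 essentially extinct and the nutrient near species 1's break-even value of about 0.043; making D a feedback function of the state is what reopens the door to coexistence.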
In recent studies, we have applied a diffusion model to examine differences in processing between college students and older adults. The results show that in several paradigms, namely signal detection, brightness discrimination, recognition memory, and lexical decision, the rate of accumulation of evidence in the decision process is not significantly different for the two groups. Longer response times for the older adults come from more conservative decision criteria and from a small increase in the nondecision components of processing. In contrast, in letter discrimination, the older adults' longer response times come from a reduced rate of accumulation of evidence, as well as more conservative decision criteria and a small increase in the nondecision components of processing. In this talk, we review these results, present data from the same group of subjects tested on four of these tasks, and apply other sequential sampling models to the data from the five paradigms to determine whether the results obtained using the diffusion model are specific to that model or general across the class of models. I will also work through the models and show how they account for correct and error RT distributions and accuracy, and how their parameters are associated with different experimental effects.
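The basic diffusion-model mechanism can be sketched with a short Monte-Carlo simulation. The parameter values here are illustrative, not the fitted values from these studies.

```python
import math
import random

def simulate_ddm(drift, boundary, n_trials=1000, dt=0.002,
                 sigma=1.0, nondecision=0.3, seed=1):
    """Accumulate noisy evidence until it crosses +boundary (correct)
    or -boundary (error); returns (accuracy, mean RT in seconds,
    including a fixed nondecision time). Illustrative parameters."""
    rng = random.Random(seed)
    n_correct, total_rt = 0, 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        if x >= boundary:
            n_correct += 1
        total_rt += t + nondecision
    return n_correct / n_trials, total_rt / n_trials
```

Raising `boundary` at fixed `drift` mimics a more conservative decision criterion: accuracy rises and mean RT lengthens, the pattern attributed to the older adults in most of the paradigms above, whereas lowering `drift` slows responses while hurting accuracy.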
Neonates are more susceptible to Sudden Infant Death Syndrome than other babies, mainly because their nervous system is not fully developed at birth. In the Amiens hospital, pediatricians started a new protocol (30 babies treated so far): they administer caffeine to stimulate heart activity, betting that this treatment keeps the babies in a better condition so that they do not forget to breathe. On the other hand, a caffeine overdose may threaten the babies' lives through possible tachycardia events. Unfortunately, it is not technically feasible to monitor caffeine concentration in a neonate's blood online, so a protocol that keeps the caffeine concentration within therapeutic bounds has to be designed and adapted to each baby. Sandrine Micallef and Billy Amzal, two PhD students of mine, have been working on a Bayesian analysis of such a caffeine treatment for neonates. I will first describe a population model that has been set up to describe caffeine elimination in a baby's body. We have designed a nonstationary compartment model that takes into account the rapid growth of babies during their first weeks. Second, I will focus on the design of the caffeine protocol, which has to be optimized under uncertainty, and on how it can be updated when a new case enters the study.
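The nonstationarity can be illustrated with a crude one-compartment sketch in which the baby's weight, and hence the volume of distribution, grows over time. All parameter values below are illustrative assumptions, not the study's population estimates.

```python
import math

def caffeine_conc(dose_mg, t_h, w0_kg=1.5, growth_kg_day=0.03,
                  cl=0.01, vd=0.9):
    """Caffeine plasma concentration (mg/L) t_h hours after a dose in
    a one-compartment model whose volume of distribution grows with
    body weight. If clearance (cl, L/h/kg) and volume (vd, L/kg)
    both scale with weight, the elimination rate k = cl/vd stays
    constant while dilution increases. Illustrative parameters."""
    k = cl / vd                                # per-hour elimination rate
    weight = w0_kg + growth_kg_day * t_h / 24.0
    amount = dose_mg * math.exp(-k * t_h)      # drug remaining in body
    return amount / (vd * weight)              # concentration = A / V(t)
```

A dosing protocol would then choose dose sizes and intervals so that this concentration stays within therapeutic bounds; the Bayesian machinery updates the parameter estimates as each new baby's data arrive.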
An important component of plant water transport is the design of the vascular network, including the size and shape of water-conducting elements, or xylem conduits. Despite the development of a number of competing theories of hydraulic design, empirical data have rarely been assembled to assess the whole-plant hydraulic architecture of woody plants as they age and grow. In this talk, I present an analysis of the scaling of plant hydraulic architecture within a single white ash tree over 18 years of growth and 12 meters of height. The qualitative form for the scaling of vessel radii agrees remarkably well with simple power laws, implying the existence of an ontogenetically stable hydraulic design, i.e. a design that scales in the same manner as a tree grows in height and diameter. I discuss the implications of the present finding for optimal theories of hydraulic design, its relevance to work on cavitation, and comparison to recent empirical findings on other species.
We consider longitudinal clinical data for HIV patients undergoing treatment interruptions. Leveraging a statistically based censored-data method, together with inverse problem techniques, we estimate parameters in a biologically based nonlinear dynamical model to fit each patient's data. The predictive ability of the model is demonstrated by fitting it to half of each patient's longitudinal data and then using the estimated parameters to simulate the model over the full longitudinal time span. For many patients, the model accurately predicts the full longitudinal data set.
Gonadotropin-releasing hormone (GnRH) is a small neuropeptide that regulates pituitary release of luteinizing hormone (LH) and follicle-stimulating hormone (FSH). These gonadotropins are essential for the regulation of reproductive function. GnRH is not released continuously, but rather in episodic pulses that are essential for reproduction. The identity of the neuronal substrate that produces pulsatile GnRH release, and therefore comprises the GnRH "pulse" generator, is unknown. The intermittent stimuli for GnRH release may arise from input to the GnRH cells and reflect synaptic interactions between GnRH neurons and a secondary network. Alternatively, pulsatile release of GnRH may be a consequence of spontaneous activity of the GnRH neurons themselves. These two hypotheses are not mutually exclusive; the GnRH pulse generator is likely derived from a combination of intrinsic properties and synaptic interactions. I will present evidence from electrophysiological experiments in single GnRH neurons and compartmental models of GnRH neurons that supports a role for excitatory synaptic input as a key regulator of repetitive firing in GnRH neurons.
The Drosophila Genome Project is a large scale research effort whose aim is to sequence, compare and contrast 12 Drosophila genomes with the goal of significantly advancing comparative genomics methods. We will provide an overview of our recent work on annotation and alignment of the genomes, which focuses on the related problems of transposable element identification, gene finding and multiple sequence alignment. In particular, we emphasize the importance of robust alignment methods, and their relevance for identifying functional elements in genomes. A key concept is the alignment polytope, which will be explained and illustrated.
Ever since the discovery of the double-helical DNA structure by Watson and Crick, it has been apparent that the survival and reproduction of a cell require the solution of a number of problems, ranging from efficient packaging of DNA to the untangling of DNA strands during replication and transcription. A theoretical understanding of these problems has required the use of concepts from topology and differential geometry, and has prompted the development of new approaches to solving open problems in the mechanics of slender elastic bodies. Presented will be an introduction to the main concepts in the theory of DNA topology and elasticity, and an overview of the results obtained in recent years on (i) equilibrium configurations of DNA segments with the effects of impenetrability and self-contact forces taken into account and (ii) the effects of sequence-dependence of elastic properties on configurations of DNA minicircles and the probability of DNA closure.
Rabies, the most important viral zoonotic disease worldwide, has been undergoing epidemic expansion along the eastern seaboard of the United States since the mid-1970s, following an accidental introduction of rabid raccoons from a source of endemic infection in the southeastern US. Using data submitted by US states to the Centers for Disease Control and Prevention, we have constructed stochastic simulations of the spatial dynamics of rabies as it has spread into new geographic regions. The simulation was constructed as an interaction network, with the nodes of the network defined by township and county centroids. Interaction strengths along specific connections were sensitive to local geographic conditions and were parameterized against reported data on the time and spatial location of detected rabid animals. The parameterized model has proven to be a valuable tool for strategic planning for disease emergence and for directing the development of spatial control strategies.
Although influenza occurs annually, unique characteristics particular to each influenza season make forecasting difficult. Each year the geographical locations, rates of increase and decline, duration, and size of each outbreak vary considerably. Statistical models using historical data may accurately describe the typical pattern for a particular year, but they do not predict departures from the norm. However, it is the deviations that are of the most concern and, therefore, the most important to predict. Nurses, physicians, epidemiologists, pharmacists and microbiologists all have access to unique data that could help predict future influenza activity. However, because of the disparate nature of this information, standard research and statistical methods cannot be used to aggregate and analyze it rapidly enough to ensure clinical relevance. In order to address this shortcoming, we ran an influenza prediction market in Iowa for the 2004-05 influenza season. Traders, who included a diverse mix of healthcare workers, were each given a $100 grant with which to buy and sell contracts that reflected their views on short-term future influenza activity. By aggregating expert opinion, we predicted the epidemic curve, up to 4 weeks in advance, more accurately than forecasts based on historical data. We believe that these methods would be of use in predicting the course and timing of other infectious disease outbreaks.
A Boolean dynamical system is a dynamical system whose state space consists of vectors of fixed finite length of Boolean values 0 and 1. Such systems have important applications in mathematical biology as models of gene regulatory networks. In studying these models, one would like to have an efficient algorithm for deducing the dynamical properties of the system from the formula for the updating function. In particular, one would like to know whether all attractors of the system are steady states. Efficient algorithms for this problem are known if the updating function is linear or each of its components is a monomial. In this talk we will see that if the set of permissible updating functions is only slightly broadened, the problem becomes computationally intractable; more precisely, it becomes NP-hard.
In this talk we will present Boolean dynamical systems as models of gene regulatory networks and will review the basics of NP-hardness. Then we will state our main results and illustrate the technique of proving NP-hardness of a given combinatorial problem with one of our proofs.
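For small networks, the attractor question can always be settled by brute force, which makes the contrast with the NP-hardness result concrete. The toy updating function below is a hypothetical example, not one from the talks.

```python
from itertools import product

def attractors(update, n):
    """Enumerate all attractors of the Boolean map `update` on
    {0,1}^n by brute force: follow every state until a state repeats,
    then record the resulting cycle. This exhaustive search is
    feasible only for small n; deciding whether all attractors are
    steady states is NP-hard for broad enough function classes."""
    found = set()
    for start in product((0, 1), repeat=n):
        seen = {}
        state = start
        while state not in seen:
            seen[state] = len(seen)
            state = update(state)
        first = seen[state]                 # where the cycle begins
        cycle = frozenset(s for s, i in seen.items() if i >= first)
        found.add(cycle)
    return found

# Hypothetical toy system (not from the talks): x1' = x2, x2' = x1.
swap = lambda s: (s[1], s[0])
```

For `swap` on {0,1}^2, the search finds three attractors: the steady states (0,0) and (1,1), plus the 2-cycle {(0,1), (1,0)}, so not all attractors of this system are steady states.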
A weekly discussion of published mathematical models of cell-fate regulation (proliferation and the cell cycle, cell death and apoptosis, cell differentiation, etc.) will be held at the Mathematical Biosciences Institute on Wednesdays at 3:30 pm (venue to be announced). The discussions will be organized by an MBI long-term visitor, Baltz Aguda, whose office is in Rm 234, Math Bldg. The first meeting will be held on May 24th at 3:30 pm on the topic of "Cell Death and Apoptosis". Reading materials and more information about the organization of the journal club will be emailed at least a week before the meeting to those who are planning to attend. Please email Baltz at firstname.lastname@example.org if you are attending or if you have further questions.
This work is based on experimental observations of the migration of neurons in the presence of a signalling molecule known as Slit, found in the migratory path. In vitro experiments by Ward et al. [J. Neurosci., 2003, 23(12):5170-5177] involved a circular tissue explant containing many neurons, placed in the proximity of a Slit source. In the absence of a Slit source, the neurons migrated away from the explant in a radially symmetric fashion. When Slit was present, asymmetric distributions of neurons were observed over time, pointing to a possible inhibitory or repulsive role for Slit. We have used population models and individual nearest-neighbour random walk models to match experimental observations of the cell distributions and individual cell tracks. This talk describes preliminary results of one- and two-dimensional models.
Numerous cell migration processes exhibit travelling waves, from tumour cell invasion to wound healing. Using a wound healing assay, we model contact-inhibited cell motility and cell proliferation with both continuum and discrete techniques. Imaging analysis shows that cells at the healing wavefront tend to be more motile than the cells behind the wavefront. This work has applications to the modelling of cell migration where diffusion and proliferation are the dominant mechanisms. We use both a modified Fisher equation and an interacting population model to match simulation outputs with experimental data. Discrete simulations of reaction-diffusion equations using continuous-time random walkers will also be discussed.
Work done in collaboration with Kerry A. Landman and Barry D. Hughes
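The continuum side of such wound healing models can be sketched with the classical (unmodified) Fisher equation, whose travelling front has the well-known minimum speed 2*sqrt(r*D). The finite-difference scheme and parameter values below are illustrative assumptions, not the authors' implementation.

```python
def fisher_front_speed(D=1.0, r=1.0, L=100.0, nx=500,
                       dt=0.01, t_end=24.0):
    """Explicit finite-difference solution of the Fisher equation
    u_t = D*u_xx + r*u*(1 - u), a classical continuum model for an
    invading cell front. Returns an estimated front speed, which
    approaches the analytical value 2*sqrt(r*D) from below. The grid
    satisfies the stability condition D*dt/dx**2 <= 1/2."""
    dx = L / nx
    u = [1.0 if i * dx < 10.0 else 0.0 for i in range(nx)]

    def front(v):
        # position where the profile first drops below one half
        for i, val in enumerate(v):
            if val < 0.5:
                return i * dx
        return L

    steps = int(t_end / dt)
    half = steps // 2
    x_mid = 0.0
    for n in range(steps):
        lap = [0.0] * nx
        for i in range(1, nx - 1):
            lap[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        lap[0], lap[-1] = lap[1], lap[-2]      # zero-flux boundaries
        u = [u[i] + dt * (D * lap[i] + r * u[i] * (1.0 - u[i]))
             for i in range(nx)]
        if n + 1 == half:
            x_mid = front(u)                   # front at mid-run
    return (front(u) - x_mid) / ((steps - half) * dt)
```

With r = D = 1 the measured speed comes out a little below the asymptotic value 2, as expected for finite times and a discrete grid.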
It is now known that the majority of human genes have more than one promoter. In other words, the process of transcription can initiate at more than one place in the gene, and each of these locations has its own regulatory mechanisms including unique transcription factor binding sites. If we are to truly understand gene expression in different tissues and diseases, we must investigate the activity of individual promoters within the gene. To do this, we are creating a custom microarray in which the probes are specifically designed to measure the activity of alternative promoters in human genes. I will discuss the approach we're taking in the design of these probes.