Thursday, 5 December 2013
Instantons- Topology of the Vacuum
In QFT, the initial and final states are the vacuum. We can envision an instanton as a sort of path that correlates or links initial and final states that have different topological winding numbers, and since those winding numbers run over all the integers, the vacuum is not only the state of lowest energy but also an aggregation of an infinite number of apparently identical yet topologically distinct vacua. The lawn-mowing analogy is helpful: the power lead trailing over the tops of trees and shrubs acts as a barrier to the movement of the electric lawn-mower, and the number of times the lead winds around an obstacle plays the role of a winding number. In field theory the barrier is an energy barrier, and instantons surpass it via quantum tunnelling, linking one distinct topological state to another (the mixing is measured by the θ-parameter). But how do instantons solve the U(1) problem? We could just invoke a respectable particle to account for the broken symmetry, like the η meson; but it is already spoken for as a Goldstone boson of the octet, and the next candidate up in mass, the η′, is far too heavy. Instantons, like Goldilocks, give just the right symmetry disturbance: a massless swirl of gluons that inverts right-handed quarks into left-handed ones. Such an inversion of handedness breaks the chiral symmetry and deals with the additional U(1) symmetry without the need for an extra particle.
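In symbols (a standard textbook summary rather than anything derived in this post), the true vacuum is the θ-vacuum, a superposition of the winding-number vacua |n⟩, and the tunnelling amplitude between neighbouring |n⟩ is suppressed by the one-instanton action:

```latex
\lvert \theta \rangle = \sum_{n=-\infty}^{\infty} e^{i n \theta}\, \lvert n \rangle ,
\qquad
A_{\text{tunnel}} \sim e^{-S_{\text{inst}}} = e^{-8\pi^{2}/g^{2}} ,
```

where g is the gauge coupling; the smaller the coupling, the more strongly the tunnelling is suppressed.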
Saturday, 12 October 2013
Networks- Physics of the Web
The Web remains an untamed beast. Ever since its inception, routers and lines have been added continuously, without bounds, in an uncontrolled and decentralised manner; the very embodiment of digital anarchy. But is this network of networks inherently random? Nope. So how then does order emerge from the entropy of millions of links and nodes? Let's examine the Internet and the Web in the light of network theory and statistical mechanics.

The most fundamental qualitative feature of any network is its degree distribution. The degree of a node is the number of edges connected to it. Much of the Internet is an aggregation of low-degree nodes with a few high-degree hubs. An intriguing pattern arises with the degree distribution of the Internet at large: it follows roughly a straight line when plotted on a logarithmic scale, implying that the fraction p(k) of nodes with degree k obeys a power law p(k) ∝ k^-a. The present value of a for the Internet is around 2.2. If the edges of a network were placed between nodes arbitrarily, the resulting degrees would instead obey a Poisson distribution (in which a majority of nodes have degrees fairly close to the mean value and high-degree hubs are absent), much like the Erdős–Rényi random graph. So the fact that the Internet follows a power law makes it far from random and hence 'scale-free'. Citation networks, where edges represent citations of one paper by another and nodes symbolise the papers themselves, are also scale-free. So why do the Web and the Internet both tend to form similar scale-free networks? Conventional graph theory assumes that the number of nodes in a network is static and that links are randomly distributed. Such assumptions fail given that the Internet continually evolves with new routers and the Web with new pages, and that real networks feature 'preferential attachment' (new nodes are more likely to connect to nodes that already have many links).

Now imagine that some nodes in a network are abruptly removed or disappear. Around 3% of Internet routers fail at any given time, so what percentage of nodes would need to be removed to affect network performance? We can perform one test by removing nodes uniformly at random and another by deliberately removing the nodes with the highest degree. It turns out that for a scale-free network, random node removal has little to no effect, whereas targeting hubs can be destructive; the sketch below illustrates both.
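A minimal sketch of that experiment (pure Python, hypothetical parameters, not tied to any real Internet map): grow a graph by preferential attachment, then compare how random failures and targeted hub removal fragment it.

```python
import random
from collections import Counter, defaultdict

def preferential_attachment(n, m=3, seed=1):
    """Grow a scale-free graph: each new node links to m existing nodes
    chosen with probability proportional to their current degree."""
    random.seed(seed)
    edges = set()
    targets = list(range(m))       # the first few nodes seed the process
    repeated = []                  # node list weighted by degree
    for new in range(m, n):
        for t in set(targets):     # duplicates collapse to a single edge
            edges.add((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = random.sample(repeated, m)   # degree-biased choice
    return edges

def largest_component(nodes, edges):
    """Size of the largest connected component (simple depth-first search)."""
    adj = defaultdict(set)
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b); adj[b].add(a)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop(); comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        best = max(best, comp)
    return best

N = 5000
edges = preferential_attachment(N)
deg = Counter()
for a, b in edges:
    deg[a] += 1; deg[b] += 1

# Remove 5% of nodes at random versus the 5% highest-degree hubs.
k = N // 20
random_removed = set(random.sample(range(N), k))
hubs_removed = {node for node, _ in deg.most_common(k)}
all_nodes = set(range(N))
print("largest component after random failures:",
      largest_component(all_nodes - random_removed, edges))
print("largest component after targeting hubs :",
      largest_component(all_nodes - hubs_removed, edges))
```

The random-failure network keeps a giant connected component, while removing the same number of hubs shatters it far more effectively.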
The concept of 'six degrees of separation', proposed by Stanley Milgram, suggests that anyone in the world can be connected to anyone else by a chain of five or six acquaintances. Does the Internet follow this trend seen in social networks (small separation of nodes and a high degree of clustering)? Since we don't have a complete copy of the entire Web (even search engines cover only around 16% of it), we can use a small finite sample to make an inference about the whole. Using 'finite size scaling', you can quantify the mean shortest distance between two nodes (the number of clicks to get from one page to another). Given that around 1 billion nodes make up the Web, this brings the 'small world' effect to about 19 'clicks of separation'. Not all pairs of nodes can be interconnected, given that the Web is a directed network: a link leading from one page to another does not mean an inverse link exists, so a path of 19 clicks is not guaranteed.

In most complex networks, nodes compete for links. We can model this by giving each node a 'fitness factor' which quantifies its ability to compete; energy levels can then be assigned to each node to produce a Bose gas (its lowest energy level representing the fittest node). The Bose gas evolves with time, adding new energy levels, which corresponds to the addition of new nodes to the network. Two different outcomes can arise depending on the distribution of energy-level selection: (1) 'fit get rich', where occupation falls off as the energy level increases; or (2) Bose–Einstein condensation, where the fittest node gains a large percentage of all links and manifests itself as a highly populated lowest energy level. Perhaps the Web is just another Bose–Einstein condensate?
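As a rough check on the '19 clicks' figure mentioned above, the finite-size-scaling fit reported for the Web's average shortest path, d ≈ 0.35 + 2.06 log10(N), can be evaluated for a billion pages (the coefficients are quoted from memory from the original study, so treat them as approximate):

```python
import math

def mean_click_distance(n_pages):
    """Empirical finite-size-scaling fit for the Web's average shortest path:
    d ~ 0.35 + 2.06 * log10(N)."""
    return 0.35 + 2.06 * math.log10(n_pages)

print(round(mean_click_distance(1e9)))   # ~19 'clicks of separation'
```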
Thursday, 3 October 2013
Homology- A Unified Definition
Homology is 'a word ripe for burning'. But how should it be defined? Superficially, it is often identified as similarity in morphology reflecting a common evolutionary origin; but can we give a more rigorous approach? Like 'species', the many definitions of homology fall into two basic forms: developmental and taxic. The developmental approach is based on ontogeny: two characters are homologous if they share an identical set of developmental constraints. The taxic definition is based on cladistics and identifies a homologue as a synapomorphy (a trait that characterises a monophyletic group). Some complications arise with structural homology. For instance, the wings of bats and birds can be considered convergent as they are differently arranged (and lack common ancestry), but they can be considered homologous at the level of the organism (because they evolved from the same pattern of vertebrate forelimb, traceable to a common ancestor). Circular reasoning also arises with structural homology, in that it is used to build a phylogeny and that phylogeny is subsequently used to infer homology; a phylogeny must first be constructed on independent evidence before homology is proposed. What about evo-devo? This is also unhelpful for a working definition of homology, since different pathways of development can converge on the same adult form, such as the various methods of gastrulation and the many routes of developmental regeneration in the hydroid Tubularia. Even embryonic developmental origin is unhelpful, as it relies on subsequent interactions between cells and fails to guarantee a conserved adult morphology. Molecular markers such as genes succumb to hierarchical disconnect (whereby homologous genes can produce non-homologous traits). A classic example is the gene PAX6 in eye development, which is found and transcribed in species as diverse as insects, humans, squids and even primitive eyed nemertines and platyhelminths.
Experiments grafting Drosophila PAX6 into Drosophila limbs or wings can place eyes in incorrect positions; and when mouse PAX6 is inserted into Drosophila, the ectopic eyes that form are fly-like rather than mouse-like. These grafting tests indicate that the adjustability for change lies not so much in the genes as in the regulatory network of genes that controls expression. The need to redefine homology at different hierarchical levels is also indicated by other characters. For a long time, arthropod compound eyes had been thought to have evolved independently of the vertebrate simple eye; this now seems improbable given the immense similarity between cephalopod and vertebrate eyes (commonly attributed to convergence). In essence, the gene starting eye formation is homologous, but its expression is not necessarily homologous. Hierarchical disconnect in the form of non-homologous processes producing homologous characters is also noted. With the exception of urodele amphibians, all tetrapods develop tissue between their primordial digits which later undergoes apoptosis. But in newts and salamanders there is no such apoptosis, and the digits take a separate developmental pathway. The evolutionary hypothesis is that salamanders and newts (or one of their ancestral species) lost interdigital apoptosis, and that differential growth is the derived process. Novel genes exchanged for older ones can also produce the same homologous morphology (co-option of genes during evolution for very distinct functions).
Saturday, 28 September 2013
Relativistic Chemistry- Why Mercury is Liquid
Firstly, the p3/2 orbital contracts to a lesser degree than the s1/2 and p1/2 orbitals (which contract a lot). Secondly, this contraction causes an outward expansion of the d and f orbitals (relative to the s and p orbitals). And thirdly, the relativistic splitting of the p, d and f orbital energies manifests itself as spin-orbit coupling. These three effects cause the energy gap between the 5d5/2 and 6s1/2 orbitals to shrink. More to the point, we can explain the colours of Au and Ag: the colour of gold is caused by the absorption of blue light, which excites 5d electrons to the 6s level, whereas the corresponding absorption in silver lies in the UV, so it appears colourless. The relativistically contracted 6s orbital in Hg is filled, so unlike in Au the two 6s electrons play little role in metal-metal bonding, which is why mercury is liquid at room temperature.
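A back-of-the-envelope sketch of why the 6s orbital contracts (a hydrogen-like estimate, not a proper relativistic quantum-chemistry calculation): the innermost s electron moves at roughly Z/137 of the speed of light, its effective mass grows by the Lorentz factor, and the orbital radius scales inversely with that mass.

```python
import math

def relativistic_contraction(Z):
    """Hydrogen-like estimate: v/c ~ Z*alpha for the innermost s electron,
    the effective mass grows by gamma, and the orbital radius shrinks as 1/gamma."""
    alpha = 1 / 137.035999          # fine-structure constant
    beta = Z * alpha                # v/c
    gamma = 1 / math.sqrt(1 - beta**2)
    return beta, 1 / gamma          # fraction of the non-relativistic radius

for Z, name in [(47, "Ag"), (79, "Au"), (80, "Hg")]:
    beta, shrink = relativistic_contraction(Z)
    print(f"{name}: v/c = {beta:.2f}, radius shrinks to {shrink:.0%} of the"
          " non-relativistic value")
```

For mercury this crude estimate gives a contraction of around 20%, enough to pull the filled 6s shell well below the 5d levels.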
Sunday, 8 September 2013
Universal Common Ancestry- A Test
So are the three superkingdoms of life (archaea, bacteria, eukarya) united by a common ancestor? Douglas Theobald recently performed a test in which 23 proteins conserved across the three domains had evolutionary trees (or networks) built around their sequences, and then compared the likelihoods of a range of ancestry hypotheses. But does this imply that life originated only once, around 3.5 BYA? Not at all! It just implies that one of the primordial (original) forms of life has extant descendants; life may well have arisen more than once, but the conclusion requires that all extant life shares at least one common ancestor: a last universal common ancestor (LUCA). A problem, however, is that a phylogenetic tree can be built on virtually any set of data; we need to demonstrate agreement between trees built from different datasets. And such agreement could also be explained by other biological processes, so the Akaike Information Criterion (AIC) may be applied to compare and contrast a range of hypotheses.

So what signature feature of sequence data allows us to give quantitative evidence for UCA? In a nutshell, the site-specific correlations in the amino acids across a range of species; such correlations fade away as we go back in time through a lineage and species converge (but with enough data, the progressive accumulation of correlations becomes statistically significant). On the other hand, if a pair of extant species have absolutely distinct origins, the site-specific amino acid correlations between the two species disappear.
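A sketch of how the AIC comparison mentioned above works, with made-up parameter counts and log-likelihoods purely for illustration (Theobald's actual numbers are not reproduced here): AIC = 2k − 2 ln L, lower is better, and Akaike weights turn AIC differences into relative support.

```python
import math

def aic(k_params, log_likelihood):
    """Akaike Information Criterion: penalise goodness of fit by model complexity."""
    return 2 * k_params - 2 * log_likelihood

# Hypothetical example: a single-ancestry model versus an independent-origins model.
models = {
    "single common ancestry": aic(k_params=120, log_likelihood=-10000.0),
    "two independent origins": aic(k_params=150, log_likelihood=-10200.0),
}
best = min(models.values())
weights = {name: math.exp(-(a - best) / 2) for name, a in models.items()}
total = sum(weights.values())
for name, a in models.items():
    print(f"{name}: AIC = {a:.0f}, Akaike weight = {weights[name]/total:.3g}")
```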
Friday, 6 September 2013
Graphene- One Carbon Thick
In non-relativistic quantum mechanics, the probability that an electron tunnels through a potential barrier decreases exponentially with the height of the barrier. The Klein paradox of QED is that for relativistic particles a tall barrier becomes almost transparent; paradoxically, the transmission probability increases with barrier height (since a potential barrier that repels electrons will also attract positrons). Chiral symmetry breaking may also be illuminated by graphene: in graphene the right- and left-handed fermions behave the same, unlike neutrinos, which are strictly left-handed. But graphene is too conductive, and to lower its conductivity we can take advantage of carbon's adaptability. In diamond, each carbon is bound to four others (involving all four valence electrons), in contrast to graphene, where one electron is left over (making it a good conductor). The most basic way of removing this spare electron is to add hydrogen (just like the conversion of ethene to ethane), turning graphene into graphane. The σ-electrons that bind the carbon atoms in graphene form a band structure with an energy gap between the highest occupied and lowest vacant states, but the delocalised π-electrons cause fully occupied and vacant bands to touch one another. In graphane, the π-electrons are strongly attached to hydrogen atoms, opening an energy gap between the lowest vacant band and the highest occupied band. Bizarrely, annealing causes the hydrogen to disperse, leaving the graphene backbone whole.
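Returning to the Klein paradox for a moment, here is a sketch with illustrative numbers only: an ordinary Schrödinger electron, whose transmission is exponentially suppressed by a barrier, versus a massless Dirac fermion in graphene, which at normal incidence passes with probability 1 however tall the barrier is made.

```python
import math

hbar = 1.054571817e-34   # J s
m_e  = 9.1093837e-31     # electron mass, kg
eV   = 1.602176634e-19   # J

def schrodinger_tunnelling(E_eV, V0_eV, width_m):
    """Rough WKB-style estimate T ~ exp(-2*kappa*d) for E < V0 (non-relativistic)."""
    kappa = math.sqrt(2 * m_e * (V0_eV - E_eV) * eV) / hbar
    return math.exp(-2 * kappa * width_m)

d = 100e-9   # a 100 nm wide barrier (illustrative)
for V0 in (0.2, 0.5, 1.0):                       # barrier heights in eV
    T = schrodinger_tunnelling(E_eV=0.1, V0_eV=V0, width_m=d)
    print(f"Schrodinger electron, V0 = {V0} eV: T ~ {T:.2e}")
print("Massless Dirac fermion at normal incidence: T = 1 (Klein tunnelling)")
```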
Sunday, 25 August 2013
Topological Insulators- The New Physics
We have all heard of conductors and insulators. And indeed some of us are more familiar with magnets and semiconductors, or even superconductors; they are all manifestations of electronic band structure. But what about the topological insulator? They conduct on the outside but insulate on the inside, much like a plastic wire wrapped with a metallic layer. Weirder still, they create a 'spin current', where the conducting electrons sort themselves into spin-down electrons moving in one direction and spin-up electrons moving in the other. Such a topological insulator is an exotic state resulting from quantum mechanics: the spin-orbit interaction and invariance (symmetry) under time reversal. What's more, the topological insulator has a topologically protected surface state which is immune to impairment by impurities. So how can we understand this 'new' physics? The insulating state has a conductivity of exactly zero near absolute zero, owing to the energy gap segregating the vacant and occupied electron states. The quantum Hall state (QHS) near absolute zero has a quantised Hall conductance (the ratio of current to the voltage orthogonal to the current flow). Unlike other materials such as ferromagnets, which have order arising from a broken symmetry, topologically ordered states are made distinct by the wound-up quantum states of their electrons (and this protects the surface state). The QHS (the most basic topologically ordered state) happens when electrons confined to a 2-D interface between a pair of semiconductors encounter a strong magnetic field. This field causes the electrons to 'feel' an orthogonal Lorentz force, making them move around in circles (like electrons confined to an atom). Quantum mechanics replaces these circular movements with discrete energies, causing an energy gap to segregate the vacant and occupied states, as in an insulator.
However, at the boundary of the interface, the electron's circular motion can rebound off the edge, creating so-called 'skipping orbits'. At the quantum scale, such skipping orbits create electronic states that spread along the boundary in a one-way manner with energies that are not discrete; this state can conduct owing to the lack of an energy gap. In addition, the one-directional flow creates perfect electric transport: electrons have no option but to move forward because there are no backward-moving modes. Dissipationless transport emerges because the electrons don't scatter and hence no energy is lost (this also explains the quantised transport). But topological insulators occur without a magnetic field, unlike the quantum Hall effect; the job of the magnetic field is taken over by spin-orbit coupling (the interplay between the electron's orbital motion through space and its spin). Relativistic electrons arise in atoms with high atomic numbers and thus produce strong spin-orbit forces; any electron then experiences a strong spin- and momentum-dependent force that plays the part of the magnetic field (when the spin changes, the direction of the force changes). Such a comparison between a spin-dependent magnetic field and spin-orbit coupling allows us to introduce the most basic 2-D topological insulator: the quantum spin Hall state. This happens when the spin-up and spin-down electrons experience equal but opposite 'magnetic fields'.
Just as in a regular insulator, there exists an energy gap, but there are edge states in which the spin-up and spin-down electrons propagate in opposite directions to one another. Time-reversal invariance exchanges both the direction of spin and the direction of propagation, hence swapping the two oppositely-propagating modes. But the 3-D topological insulator can't be explained by a spin-dependent magnetic field. The surface state of a 3-D topological insulator allows electrons to move in any direction, but the direction of motion determines the spin direction. The relation between momentum and energy has a Dirac-cone structure, as in graphene.
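For a sense of scale, the quantised Hall conductance mentioned earlier comes in integer multiples of e²/h, a combination of fundamental constants only; a quick sketch of the numbers:

```python
e = 1.602176634e-19      # electron charge, C
h = 6.62607015e-34       # Planck constant, J s

G0 = e**2 / h            # conductance quantum per conducting mode, siemens
print(f"e^2/h = {G0*1e6:.2f} microsiemens")
print(f"h/e^2 = {1/G0:.1f} ohms (the quantum of resistance)")
for nu in (1, 2, 3):     # filling factors: plateaus of the quantum Hall state
    print(f"nu = {nu}: Hall conductance = {nu*G0:.3e} S")
```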
Wednesday, 21 August 2013
Gravitational Waves- Einstein's Final Straw
Like ripples through a rubber sheet, they squeeze and stretch spacetime and move outwards at the speed of light. Gravitational waves are still up for grabs: an exotic prediction of general relativity yet to be observed directly, yet with profound implications for cosmology and astrophysics. If we picture a star in a relativistic orbit around a supermassive black hole, it may continue so for thousands of years, but never forever. Even neglecting drag due to gas, the orbit would gradually lose energy until the star spiralled into the hole; the reason for this plunge is the emission of gravitational radiation. We know that if the shape or size of an object is altered, so is the gravity surrounding it; Newton realised the sphere was an exception, since the gravitational field outside it remains the same if it merely expands or contracts. Changes in the gravitational field can't spread out instantly, because this would convey information about the shape and size of an object at superluminal speeds (forbidden by relativity). If the sun were somehow to alter its shape and the gravitational field around it, about 8 minutes would elapse before the effect was 'felt' on Earth, and at very large distances this is evident as radiation (a wave of changing gravity) moving away from its source. This is analogous to the manner in which fluctuations in an electric field produce electromagnetic waves (a rotating bar with charged ends produces a field that differs from when the bar sits end-on or sideways-on). But there are two main distinctions to be made between gravitational and electromagnetic waves. Firstly, gravitational waves are especially weak (except when very large masses are involved); diatomic molecules are great emitters of electromagnetic radiation but terrible at emitting gravitational waves. Because there is no such thing as negative mass (negative gravitational charge) to neutralise (or cancel out) positive ones, as there is in electricity, gravity wins out over electromagnetism on large scales. This lack of negative gravitational charge gives gravity an advantage over electromagnetism, but it also implies a deep irony: it weakens an object's ability to produce gravitational radiation. Which brings us to the second difference between gravitational and electromagnetic waves:
The most productive (i.e. efficient) way of making electromagnetic radiation is for the 'centre of electric charge' to stagger or wobble in relation to the centre of mass. Dipole radiation is an example of this, where a spinning bar is positively charged at one end and negatively charged at the other. But the Equivalence Principle (which dictates that gravitation is indistinguishable from acceleration, much like how a rising elevator makes you feel heavier while a descending one makes you feel lighter) also says that everything has a gravitational mass equal to its inertial mass, so at any point in spacetime all bodies experience the same gravitational acceleration. Translating into English: the 'centre of gravitational charge' is really just the centre of mass, and since the former can't wobble relative to the latter, dipole gravitational radiation can't exist. We can compare gravitational radiation to the spinning bar by envisaging positive charges at both ends, so that the 'centre of charge' remains fixed at the centre; only a small amount of radiation is then produced, owing to the changing quadrupole moment (the only quantity that changes; it describes the distribution of shape and charge). Due to gravitational radiation, binary systems lose energy and their orbital period shrinks progressively, causing the component stars to coalesce; when two black holes meet, their event horizons combine into a larger one which, in accordance with the 'no hair' theorems, settles into a state described by the Kerr metric (a hole with only mass and spin).
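That energy loss is governed by the standard quadrupole-formula result for a circular binary, P = (32/5)(G⁴/c⁵)(m₁m₂)²(m₁+m₂)/r⁵; the sketch below evaluates it for a hypothetical pair of 1.4-solar-mass neutron stars at a few illustrative separations (not any particular observed system).

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def gw_power_circular_binary(m1, m2, r):
    """Quadrupole-formula luminosity of two point masses in a circular orbit."""
    return (32.0 / 5.0) * G**4 / c**5 * (m1 * m2)**2 * (m1 + m2) / r**5

m1 = m2 = 1.4 * M_sun                 # hypothetical neutron-star pair
for r in (1e9, 1e8, 1e7):             # orbital separations in metres
    print(f"r = {r:.0e} m: P = {gw_power_circular_binary(m1, m2, r):.2e} W")
```

The steep 1/r⁵ dependence is why the radiation only becomes ferocious in the final stages of the inspiral.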
But the detection of such gravitational radiation (or waves) is causing a stir; it is Einstein's final straw. Any object in the path of a gravitational wave experiences a tidal gravitational force that acts transverse (perpendicular) to the direction in which the wave travels. If a circular hoop intercepts a gravitational wave head-on, it will be distorted into an ellipse. In Louisiana, the LIGO detector uses laser interferometry: a laser beam is split and reflected off mirrors attached to test masses kilometres away along two perpendicular arms (an L shape). If a gravitational wave were to arrive, it would cause the two arm lengths, X and Y, to change. To be continued...
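For a sense of the scale involved: a passing wave of strain h changes an arm of length L by ΔL = h·L. Taking a typical astrophysical strain of order 10⁻²¹ and LIGO's roughly 4 km arms (illustrative numbers):

```python
h = 1e-21          # typical dimensionless strain of a passing gravitational wave
L = 4e3            # LIGO arm length in metres (approximate)

delta_L = h * L    # change in arm length the interferometer must resolve
print(f"arm length change ~ {delta_L:.0e} m (a small fraction of a proton radius)")
```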
Sunday, 18 August 2013
Y Chromosome- An Evolutionary Curiosity
While the early Y still tended to exchange segments with the X, a portion of its DNA experienced an inversion (effectively turning the sequence upside down) relative to the X, and since a prerequisite for recombination is that analogous sequences are aligned, any inversion would prevent interaction between the two regions. Comparative genomics reveals that the autosomal precursors of the X and Y were unbroken (intact) in reptilian species before the mammalian lineage began. Monotremes like the platypus were among the earliest mammals to branch off, and the SRY gene has been dated back to around 300 million years. X-inactivation followed (in which cells of female embryos randomly shut down a majority of the genes in one of the two X chromosomes) to compensate for the degeneration. If we reduce the whole human population to two people (one man and one woman), together this couple carries four copies of each autosome, three X chromosomes and a single Y. The effective population size of the Y can therefore be predicted to be similar to that of the haploid mtDNA: one third that of the X and one quarter that of any autosome. Hence we can expect much lower diversity in the Y than in any other region of the nuclear genome. We can also predict it to be more subject to genetic drift (random changes in the frequency of haplotypes), and such drift would act as a catalyst for differentiation between pools of Y chromosomes in different populations.
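The copy-counting behind those 1/3 and 1/4 ratios is simple enough to spell out (a sketch of the arithmetic, nothing more):

```python
# One man + one woman: count the chromosome copies carried by the couple.
autosome_copies = 2 * 2      # two people, two copies of each autosome apiece
x_copies = 2 + 1             # the woman carries XX, the man carries XY
y_copies = 1                 # only the man carries a Y (like the haploid mtDNA)

print("Y copies relative to an autosome:", y_copies / autosome_copies)  # 1/4
print("Y copies relative to the X      :", y_copies / x_copies)         # 1/3
```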
Saturday, 17 August 2013
Molecular Clocks- Timing the Gene Pool
It certainly doesn't tick. And it has no hands either. But the molecular clock is more than a faceless clock. It is a fairly new technique, employing a relatively constant rate of evolution to date almost anything from the divergence of taxa or species to the appearance of a viral epidemic. The tool rests on an incredibly simple observation: the amount of DNA difference between species is essentially a function of the time since their divergence. Though the practical applications may seem subtle, molecular clocks put the final nail in the coffin of claims that HIV was first propagated by tainted polio vaccines made using SIV (simian immunodeficiency virus) in the 1950s, by dating the strain back to the 1930s. Essentially, the modern molecular clock has shown that a given protein has a characteristic rate of molecular evolution, while different genes differ in their characteristic rates, and that molecular evolution per se better fits a neutralist rather than a selectionist view. Emile Zuckerkandl and Linus Pauling reported a range of constant rates of evolution for different proteins (histones are characteristically slow, cytochrome c is a little quicker yet slower than haemoglobin, and fibrinopeptides are quickest of all). Motoo Kimura and Tomoko Ohta explained this fairly constant characteristic rate for each protein by positing that most amino acid changes are effectively neutral: the change has no influence on overall fitness, and as a result the rate of change is not governed by natural selection. On average, beneficial mutations were predicted to be rare, deleterious ones would be quickly wiped out by natural selection, and a large fraction of amino acid changes would be effectively neutral. The rate of fixation of neutral mutations would be shaped only by the mutation rate (and would be fairly constant, provided the underlying mutation rate remained unchanged). This predicts that, within a species, the long-term rate of neutral molecular evolution is equal to the neutral mutation rate in individuals. But why do different proteins have different characteristic rates of evolutionary change? We may explain these variations by assuming that proteins differ in the proportion of amino acid positions that are neutral (so that altering an amino acid has zero selective effect) or constrained (so that any mutation is probably deleterious).
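The core of Kimura and Ohta's argument fits in two lines: a diploid population of size N receives 2Nμ new neutral mutations per generation, and each has a fixation probability of 1/(2N), so the substitution rate is simply μ, independent of population size (the N and μ below are illustrative only):

```python
N  = 10_000      # diploid effective population size (illustrative)
mu = 1e-8        # neutral mutation rate per site per generation (illustrative)

new_neutral_mutations = 2 * N * mu    # neutral mutations entering the population
fixation_probability  = 1 / (2 * N)   # chance a new neutral allele drifts to fixation
substitution_rate = new_neutral_mutations * fixation_probability

print(f"substitution rate = {substitution_rate:.3e} (mutation rate = {mu:.3e})")
```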
Summing up: the greater the proportion of neutral sites, the more rapid the rate of molecular evolution. So, in accordance with the neutral theory, the rate at which genes evolve is determined by the overall mutation rate and the proportion of neutral sites. The neutral theory also accounts for two observed phenomena: the steady rate of fixation of mutations and the high level of polymorphism within populations. And the amount of divergence between genes tends to increase with time since their evolutionary separation. But molecular clocks themselves may vary, either as a result of 'sloppiness' in the tick rate or of variation in the mutation rate, since the clock is probabilistic (ticks come at irregular intervals described by a Poisson distribution). Where does this variation stem from? An important source is the influence of population size on the rate of fixation of mutations. Ohta expanded the neutral theory (into the nearly-neutral theory) by acknowledging the important role of effective population size: smaller populations are more severely influenced by fluctuations in allele frequency, so genetic drift can overwhelm selection on alleles with small selection coefficients. In effect, the fixation of nearly-neutral alleles of small selective effect is predicted to be greatest in the smallest populations; if a population shrinks, this may coincide with a wave of fixation of nearly-neutral alleles, so population flux can increase the sloppiness of molecular clocks. Another application of molecular clocks is to the Hawaiian Islands, where the phylogeny of endemic birds and fruit flies is confirmed by molecular dates that follow a linear correlation between divergence and time when DNA distance is compared against island age. Since viruses leave behind no fossil record, we can also reassemble the history of viral outbreaks using viral lineages (a viral molecular clock). In the case of endogenous retroviruses (ERVs), dates of origin can be fine-tuned by comparing the pair of long terminal repeats (LTRs) that flank the genome.
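The LTR trick works because the two LTRs are identical at the moment of insertion and then diverge independently, so the age is the observed divergence divided by twice the substitution rate (the numbers below are purely illustrative, not measurements from any particular ERV):

```python
def erv_age(ltr_divergence, subs_per_site_per_myr):
    """Age of an ERV insertion: the paired LTRs were identical on arrival,
    so the observed divergence accumulated along two independent branches."""
    return ltr_divergence / (2 * subs_per_site_per_myr)

# Illustrative numbers only:
divergence = 0.024            # 2.4% difference between the paired LTRs
rate = 0.002                  # substitutions per site per million years
print(f"estimated insertion age ~ {erv_age(divergence, rate):.1f} Myr")
```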
Saturday, 10 August 2013
Homochirality- Left-Handed Life
Life is anything but ambidextrous. And the problem of life's origins is compounded by a basic asymmetry in the configuration of amino acids and sugars. Amino acids are molecules consisting of both an amino group (NH2) and a carboxylic acid group (COOH); in the alpha amino acids both groups are attached to the same central carbon atom. The famous Miller-Urey experiment showed how at least 22 amino acids could be produced in a spark-discharge tube simulating a prebiotic environment containing water (H2O), methane (CH4), ammonia (NH3), molecular hydrogen (H2) and very little oxygen (O2). But an anomaly arises when trying to reconcile such a result with the chirality of the amino acids: just as your left and right hands can't be superimposed on one another, the mirror image of one hand does match the other, so hands possess mirror symmetry. Chirality can also be probed with circularly polarised light: right- and left-circularly polarised light behave very differently when they pass through a medium consisting of molecules of a single chirality. All amino acids naturally occurring in proteins on Earth are left-handed (except glycine, which is non-chiral), but the Miller-Urey experiment produced racemic mixtures (equal numbers of left- and right-handed amino acids), so how did the amino acids get left-handed? Such homochirality is critical to protein function: if proteins made of L-amino acids had random incorporations of their D-enantiomers, they would have varying conformations. Sugars also possess homochirality; they are classified as D-sugars based on the arrangement of the chiral centre furthest from the carbonyl group, so sugars are essentially right-handed. But why? The Murchison meteorite that landed in Australia in 1969 contained five alpha-methyl amino acids with an excess of L-enantiomers; these translate as S-enantiomers (a configuration where a methyl group is attached where the hydrogen atom would normally be in the L-amino acids). So why such an excess? The discovery that our interstellar neighbourhood contains an excess of right-circularly polarised light hints at several possible explanations for homochirality. Among them is processing by ultraviolet photons in outer space: their polarisation is highest when starlight scatters through dense nebulae, and such photons could destroy molecules of one chirality while preserving the other.
The weak interaction of beta decay is the only force with the potential to produce a chiral bias, owing to its parity violation. Conservation of parity would mean that the mirror image of a process behaves identically to the process itself; since the weak interaction violates parity, it could distort the balance between right- and left-handed molecules. One way this could be achieved is via the electrons produced in beta decay, which have spins antiparallel to their direction of motion (longitudinal polarisation); more energetic, relativistic electrons are almost entirely longitudinally polarised and produce Bremsstrahlung photons, which interact with molecules to cause chiral discrimination. A related hypothesis involves amplification via catalytic reactions: an agent that acts as a catalyst for its own synthesis and as an inhibitor of the synthesis of its chiral opposite. Imagine a left-handed molecule L and a right-handed molecule R (both made from constituents A and B); once synthesised, they trigger 'autocatalysis', driving the synthesis of new molecules of their own handedness from A and B, while an L and an R can also merge to form an inactive by-product, destroying one R and one L molecule.
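That amplification scheme is essentially Frank's classic model; a minimal sketch (hypothetical rate constants, simple forward-Euler integration) shows how a tiny initial excess of L grows once autocatalysis and mutual destruction compete:

```python
def frank_model(L, R, A, k_auto=1.0, k_kill=1.0, dt=0.01, steps=2000):
    """Frank-style autocatalysis: each enantiomer catalyses its own production
    from an achiral feedstock A, while L and R destroy each other on meeting."""
    for _ in range(steps):
        dL = (k_auto * A * L - k_kill * L * R) * dt
        dR = (k_auto * A * R - k_kill * L * R) * dt
        dA = -k_auto * A * (L + R) * dt          # the feedstock is consumed
        L, R, A = L + dL, R + dR, A + dA
    return L, R

def ee(L, R):
    """Enantiomeric excess."""
    return (L - R) / (L + R)

L0, R0, A0 = 0.0100, 0.0099, 10.0                # a ~0.5% initial excess of L
L, R = frank_model(L0, R0, A0)
print(f"initial ee = {ee(L0, R0):.2%}, final ee = {ee(L, R):.2%}")
```

A fraction-of-a-percent head start for L ends up amplified to a large excess, which is the qualitative point of such models.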
An approach from astrobiology involves the interplay between neutrinos, amino acids and supernovae. 14N (nitrogen-14) is a constituent common to all amino acids and has a non-zero nuclear spin. The recently described 'Buckingham effect' arises because the interaction of a nuclear magnetic moment with the magnetic moment of the electrons (via the Faraday effect) behaves differently in a right-handed molecule than in a left-handed one. So the non-zero spin of the 14N nucleus, coupled with a strong magnetic field, could provide a mechanism for chiral discrimination. The SNAAP (Supernova Neutrino Amino Acid Processing) model proposes that supernovae produce carbon, nitrogen, oxygen and a racemic assortment of amino acids (synthesised in supernova nebulae). Neutrinos from other supernovae, together with the magnetic field of a neutron star or black hole, make the racemic mixture enantiomeric by selectively destroying one chirality of 14N-based molecules. Subsequently, chemical evolution quickly amplifies the enantiomeric excess, and more L-amino acids are produced as the galaxy is permeated with molecular clouds.
Wednesday, 7 August 2013
Nucleosynthesis- Making the Elements
The recipe for the making of the elements reads like a cookbook. In the first 3 minutes following the universe's fiery birth, very little was produced during big bang nucleosynthesis (BBN) owing to some nuclear anomalies; there are no stable mass-5 or mass-8 nuclides, making it almost impossible to make anything other than 2H, 3He, 4He and 7Li (which is difficult to produce in abundance). Let's see how the light elements were first synthesised. Firstly, a neutron coalesces with 1H to produce a deuteron (2H) and a gamma ray; the deuteron acts as a bottleneck for the rest of the fusion events. Since a free neutron is unstable (half-life of ~10 min), it decays into a proton, an electron and an antineutrino; so by the time nucleosynthesis gets going, a sizeable fraction of the original neutrons has been lost (those that remain get captured into nuclei). 3He is produced when a proton is captured onto a deuteron, and is converted to 4He via either neutron capture or a reaction in which a deuteron tosses its neutron to the 3He and releases its proton. In another set of reactions, neutron capture on a deuteron produces 3H (a triton), which is converted to 4He via either proton capture or a reaction in which a deuteron gives up its proton and frees a neutron. That's pretty much what was produced during BBN, aside from the fact that 7Li was made in minuscule amounts via the combination of 4He and 3H (at low baryon density) or through the fusion of 4He and 3He to produce 7Be, which then captured an electron (emitting a neutrino) to become 7Li. However, WMAP data agree with the theoretical calculations for 2H and 4He but not for 7Li (the prediction for lithium is about three times higher than actually observed). The 'lithium problem' might be addressed by short-lived hypothetical particles such as axions that bind to nuclei; assuming the particle was negatively charged, it would reduce the Coulomb barrier between nuclei as the universe cooled to a certain point, triggering a revival of nucleosynthesis. So now that hydrogen, helium and a little bit of lithium were produced via BBN, the rest of the elements from carbon to lead, and even as far as thorium and uranium, were synthesised by nuclear reactions in stars.
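Before moving on to stars, the headline number of BBN follows from one line of bookkeeping: if essentially every surviving neutron ends up inside 4He, the primordial helium mass fraction is fixed by the neutron-to-proton ratio, roughly 1/7 by the time nucleosynthesis starts:

```python
def helium_mass_fraction(n_over_p):
    """If all remaining neutrons are locked into 4He (2 neutrons + 2 protons),
    the helium mass fraction is Y = 2(n/p) / (1 + n/p)."""
    return 2 * n_over_p / (1 + n_over_p)

print(f"n/p = 1/7  ->  Y_4He ~ {helium_mass_fraction(1/7):.2f}")   # ~0.25
```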
Stellar nucleosynthesis begins with the initial stage of hydrogen burning, in which hydrogen is converted to helium. In each of the three pp (proton-proton) chains, four protons are effectively fused into a 4He nucleus. In the pp-I branch, six protons actually go into the chain but two re-emerge in the final reaction alongside the 4He nucleus (so the net number of protons consumed is four). In the pp-II branch, the final reaction produces two 4He nuclei, but one of them was put in earlier to keep the chain going (the net yield is one). The pp-III branch begins with 7Be (formed from a 4He and a 3He nucleus); the proton that then enters the chain yields one net 4He nucleus when the resulting 8Be decays. The CNO (carbon/nitrogen/oxygen) cycle handles hydrogen burning in more massive stars and uses 12C as a catalyst. Next, the triple-alpha and alpha processes of helium burning are rather simple: two 4He are fused to form 8Be, 8Be is fused with 4He to produce 12C, and 12C combines with 4He to make 16O. Hoyle predicted a resonance (an excited energy level) in the carbon nucleus at 7.7 MeV, which compensates for the instability of 8Be (which lives for only ~10^-16 s). Subsequent nuclear reactions involve silicon burning following oxygen burning; the temperature is then high enough that photons can interact with 28Si to make 24Mg and a 4He nucleus. Other photons can interact with 24Mg to make 20Ne and 4He nuclei; moreover, the light 4He nuclei can be captured by other 28Si to make 32S and then 36Ar (very simplified), so nuclei around nickel and iron are the end products of silicon burning.
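Back at the hydrogen-burning stage, whichever pp branch is taken the net reaction is 4 ¹H → ⁴He, and the energy released follows from the mass difference (atomic masses, so the electron and positron bookkeeping cancels):

```python
m_H1  = 1.007825    # atomic mass of hydrogen-1, in unified mass units
m_He4 = 4.002602    # atomic mass of helium-4, in unified mass units
u_to_MeV = 931.494  # energy equivalent of one mass unit

delta_m = 4 * m_H1 - m_He4                  # mass lost when 4 protons -> 4He
print(f"energy released per 4He made: {delta_m * u_to_MeV:.1f} MeV")  # ~26.7
```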
But the picture of nucleosynthesis is not complete without a mechanism for making the elements heavier than iron and nickel. Most of these are produced via the s-process (slow neutron capture) and the r-process (rapid neutron capture); the s-process happens during helium burning and makes around half the nuclei heavier than iron. The process continues until it encounters closed neutron shells, which make it difficult to capture an additional neutron. The s-process abundance peaks sit around strontium, barium and lead, but the heaviest nuclide made is 209Bi: add another neutron and the product beta-decays to 210Po, which then emits a 4He nucleus and ends up at 206Pb. The favoured site for the r-process is core-collapse supernovae; as the ejecta cool, seed nuclei capture neutrons to form nuclides all the way up to uranium, plutonium and beyond.
Wednesday, 31 July 2013
Superfluidity- Going with the Flow
Cool things happen when you cool liquid helium below 2.2 K. Below this temperature (the lambda point), superfluidity takes over and the viscosity drops radically, resulting in frictionless flow. Place it in a container and it will creep as a thin film up around the edges and flow through microscopic pores in the walls. This anomaly is difficult to understand using classical fluid mechanics: Poiseuille's law dictates that the flow rate of a fluid is proportional to the pressure difference across the capillary and to the fourth power of the capillary radius. But below the lambda point, the flow rate of liquid helium was not only high, it was independent of both the capillary radius and the pressure; evidently, this is not within the explanatory scope of classical theory. When liquid helium is cooled (by pumping away its vapour), the liquid ceases to boil, because its thermal conductivity increases so sharply that the temperature stays homogeneous.

Let's start our exploration from the ground up. The reason liquid helium never solidifies, no matter how far you cool it, is that the weak van der Waals forces between the atoms are not strong enough to overcome the zero-point motion associated with confining a helium atom to a site on a lattice. The nature of the superfluid is therefore necessarily quantum mechanical. London suggested superfluidity was an expression of a Bose-Einstein condensate (BEC), but the issue is that BECs occur in ideal gases, where particles do not interact with each other, whereas helium atoms attract weakly at a distance and repel strongly when close. Feynman's path-integral approach led to the realisation of two important yet subtle notions. Firstly, helium atoms are bosons, and Bose symmetry ensures the wave-function is unchanged when any two helium atoms swap places. Secondly, if an atom moves slowly along its trajectory, the adjacent atoms have to move out of the way; this act of 'making room' increases the kinetic energy of the helium atoms and adds to the action. The overall effect is that we must revise what we usually regard as the mass of the helium atom: because more than one atom has to move when it does, the trajectories that contribute most to the sum over paths are those of a particle with a somewhat increased effective mass.

But what keeps a superfluid superfluid? Landau suggested that near the coherent BEC state at low temperatures there are no available low-energy states into which fluctuations could push the quantum fluid. A classical fluid has viscosity (resistance to flow) because individual atoms bounce off other atoms and molecules and any debris in the container; these excitations alter the motion of the particles and dissipate energy from the fluid to the container. But if there are no states left to be filled (as in Landau's suggestion), particles can't alter their motion and continue to flow without dissipating energy. Feynman wanted to put this on a quantum mechanical footing: essentially, because helium atoms repel each other at short distances, the ground state (lowest energy) of the liquid will have roughly uniform density. You can imagine each atom in the system as confined to a 'cage' formed by the surrounding atoms; at higher densities, the cage enclosing each atom would be smaller.
The Uncertainty principle teaches us that as a result of confining the atom to a smaller space, its energy is raised; so the ground state is achieved when all atoms are as far apart from each other as possible.
Imagine a ground-state configuration with uniform density, and envision that we can create a state that differs from it only over large regions, so that any 'wiggles' in the wave-function are not closely spaced (a requirement of the Uncertainty principle). Moving one atom a long way to a new position leaves the system invariant, thanks to Bose symmetry, so the wave-function does not register such atomic displacements. This can be interpreted as saying that the extra wiggles in the wave-function needed to describe a genuinely new state can be no longer than the average spacing between the individual atoms. Since wiggles of this size correspond to excited states, their energies are higher than the random thermal perturbations available at 2.2 K or below. This hints at the fact that there are no low-hanging energy states above the ground state that could be readily accessed by particle motion, so nothing is available to act as a resistance to the flow. As with superconductivity, the 'superflow' continues provided the total energy of the system stays below the 'energy gap' between the ground state and the lowest-energy excited state. Tisza proposed a 'two-fluid' model in which, at absolute zero, all of the liquid helium enters the superfluid state, and as the fluid gains heat, excitations dissipate energy and the normal portion comes to permeate the whole volume. But what would happen to a container or bucket of superfluid if one spun it around? Given the configuration of the ground state and the energy needed for excitations above it, the superfluid should refuse to rotate. And what about making the entire fluid rotate by spinning its container? Feynman suggested that small regions, on the order of several atoms across, would rotate around a pivot; these pivots or central regions would form so-called vortex lines (which tangle and twist around each other). Such vortices don't need to extend from the top of the container to the bottom but may form rings; this also connects to the minimum energy of a roton (the lowest-energy excitation), where the roton is a local domain moving at a different speed to the background fluid. Hence, for the quantisation of angular momentum to still apply, the fluid needs to flow back around somewhere else, as in a vortex.
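Landau's criterion can be turned into a number using that roton minimum: superflow survives as long as the flow speed stays below the minimum of ε(p)/p over the excitation spectrum. The roton parameters below are commonly quoted approximate values, so treat the result as an order-of-magnitude estimate:

```python
k_B  = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s

# Approximate roton minimum of superfluid helium-4:
delta = 8.6 * k_B            # roton energy gap, ~8.6 K
p0 = 1.9e10 * hbar           # roton momentum, ~1.9 inverse angstroms

v_c = delta / p0             # Landau critical velocity set by the roton branch
print(f"Landau critical velocity ~ {v_c:.0f} m/s")
```

Real superfluid helium loses its superflow at much lower speeds because vortex rings provide cheaper excitations, which is precisely Feynman's point above.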
Saturday, 27 July 2013
Genome- The Code of Us
The genome is like a page of printed music: the page is a material object, but the notes and scores are capable of realisations in space and time, in a range of ways but a limited range of ways. Our species is one amongst many and the code of us is what the genome contains; in the beginning was the word, and the word was with our genes. The typical inventory of a human genome contains both moderately repetitive and highly repetitive DNA, namely functional classes of dispersed gene families (e.g. globin, actin) and tandem gene family arrays (ribosomal, histone and tRNA genes). It also contains highly repetitive minisatellites (making up most of the heterochromatin around centromeres), telomeres and microsatellites (distributed throughout the genome). Components with unknown function or vestigiality include the long and short interspersed elements (LINEs and SINEs) as well as pseudogenes. Many of the variations between individual genomes come in the form of single-nucleotide polymorphisms (SNPs), which are single-base differences; SNPs in recombination-poor regions such as mtDNA and the Y-chromosome tend to remain together and define a person's haplotype (a set of co-inherited genetic polymorphisms). Another important haplotype region is the major histocompatibility complex (MHC); the MHC proteins, clustered on chromosome 6, are especially polymorphic and help the host to identify foreign proteins and activate immune responses via T-cell receptors. Dynamic components of the genome (those that can move around within it) are found in all organisms: retrotransposons (class I) copy themselves through an RNA intermediate and include many degenerate retroviruses, while transposons (class II) encode a transposase that cuts and pastes them somewhere else in the sequence; they contain inverted repeats at their ends which are the targets of the cut-and-paste process. Interestingly, retroviruses (which use reverse transcriptase) are prone to errors in the copying process, which can inactivate them and leave them integrated in the host genome; such endogenous retroviral insertions (ERVs) are specific to species, and if two species share the same ERVs at identical insertion points, it is evidence of common ancestry. Other examples of shared mutations include the pseudogene responsible for the silencing of the enzyme L-gulonolactone oxidase (involved in vitamin C synthesis), which is evidence for common descent between humans and other simians. Repeated copying of sections of the genome can produce large families of homologues, such as the super-family of G-protein-coupled receptors (GPCRs), around 700 of which are in the human genome. But what makes us different from chimpanzees is more subtle: differing in our sequences by around 13 Mb, we diverge in transcription factors like FOXP2 (implicated in language). And this is a classic paradox, given that yeast can survive the sacrifice of 80% of their genes while the 13 Mb (roughly 4%) variation between humans and chimpanzees causes a profound change in phenotype. Let's examine the progress made in genome sequencing: Sanger's method uses DNA polymerase to create a new strand of DNA; the polymerase needs a supply of nucleotide triphosphates, which it adds one at a time to a growing primer strand. A related use of the polymerase is the polymerase chain reaction (PCR), which amplifies small quantities of DNA.
The whole-genome shotgun approach involves sequencing random fragments of the DNA and putting them back in the right order again; this avoids the tedious construction of a physical map as the basis for assembling the partial sequences.
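As a toy illustration of the shotgun idea (not the algorithms the real genome projects used), a greedy strategy can repeatedly merge the two fragments with the longest exact overlap; the reads below are invented for the example:

```python
# Toy greedy shotgun assembly: repeatedly merge the pair of reads with the
# longest exact suffix/prefix overlap. Real assemblers are far more elaborate.
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        # find the pair with the largest overlap
        n, i, j = max(((overlap(a, b), i, j)
                       for i, a in enumerate(reads)
                       for j, b in enumerate(reads) if i != j),
                      key=lambda t: t[0])
        if n == 0:
            break  # no overlaps left; just concatenate what remains
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return "".join(reads)

# hypothetical reads covering the made-up sequence ATGGCGTACGCTTGA
print(assemble(["ATGGCGTAC", "CGTACGCTT", "GCTTGA"]))  # -> ATGGCGTACGCTTGA
```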
Tuesday, 23 July 2013
Cold Fusion- A Modern Heresy
Fusion is a hot topic these days. The stakes are higher than ever for a source of sustainable energy and many still want a piece of the action. But cold fusion has fallen by the wayside into fringe physics as a modern-day heresy. The basic idea is whether it may be possible to recreate the power of the sun (which fuses atomic nuclei at around 10^7 K) at or near room temperature. In 1989, Fleischmann and Pons claimed they could create such a process on earth at room temperature using a simple electrolysis cell. Using heavy water (D2O), in which the hydrogen atoms have been replaced by hydrogen's heavier isotope, deuterium, they used a palladium cathode as an electrode and passed a current through the water, allegedly causing large quantities of heat to be produced. Such a 'cold fusion' reaction would be nothing short of miraculous: firstly, the nucleus of every deuterium atom carries a positive charge, preventing the nuclei from coming close enough to fuse (the Coulomb barrier). The sun overcomes the Coulomb barrier through enormous temperatures that send nuclei colliding at great speeds to fuse and release energy. Even more miraculous, the Fleischmann-Pons experiment didn't produce the lethal doses of radiation normally expected from fusion reactions. To explain the phenomenon, it was proposed that neutrons were being exchanged between the atomic nuclei (releasing heat in the process); others believed that deep within the lattice of palladium atoms, an exotic clustering of electron clouds allowed the deuterium nuclei to come close enough to fuse. Another proposal was that spontaneous fractures in the palladium cathode effectively fired the deuterons together. The experiment itself was quantified in terms of the current put into the cell compared with the heat loss and temperature rise of the whole set-up; but was this really cold fusion? Apart from the lack of experimental reproducibility, a strong theoretical argument can be made as a final 'nail in the coffin' against the feasibility of cold fusion. Leggett and Baym calculated the maximum degree to which the Coulomb barrier can be lowered (assuming thermodynamic equilibrium), using the binding energies of electrons in hydrogen and helium together with the affinity of the metallic lattice for an atom (the energy released when an atom is placed in the crystal and permitted to occupy the lowest energy state). Such parameters are well defined, except for the last, because there is no precise measurement of the affinity of palladium or titanium for helium; but one can be reasonably assured the value is small, since helium is released readily from such metals at room temperature. Other cold fusion scenarios, such as a deuteron-metal system in a transient state away from thermodynamic equilibrium, are questionable in their efficacy.
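A rough sketch of the scales that make room-temperature fusion so implausible; the 2 fm separation below is an assumed typical value rather than anything from the Fleischmann-Pons set-up:

```python
# Rough comparison of the d-d Coulomb barrier with available thermal energy.
# e^2 / (4 pi eps0) is about 1.44 MeV*fm (a standard shortcut).
k_coulomb_MeV_fm = 1.44      # MeV * fm, for two unit charges
r_fm = 2.0                   # assumed separation at which fusion becomes likely, fm

barrier_MeV = k_coulomb_MeV_fm / r_fm        # ~0.7 MeV
kT_room_eV = 8.617e-5 * 300                  # Boltzmann constant (eV/K) at 300 K
kT_sun_eV = 8.617e-5 * 1e7                   # at 10^7 K

print(f"Coulomb barrier ~ {barrier_MeV * 1e6:.0f} eV")
print(f"kT at room temperature ~ {kT_room_eV:.3f} eV")
print(f"kT at 10^7 K ~ {kT_sun_eV:.0f} eV")
# Even solar-core thermal energies sit far below the barrier; fusion there
# relies on the high-energy tail of the distribution plus quantum tunnelling.
```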
Wednesday, 17 July 2013
Particle Creation- The Ultimate Free Lunch
You can't get something from nothing; there is simply no such thing as a free lunch. But considering the beginning of the universe, where did all the particles come from? Common sense tells us that some breach of natural law such as energy conservation (the heat added to a system equals the increase in internal energy plus the work done by the system) was necessary for the universe to begin in a state of maximum entropy and zero energy. Such intuition fails at the level of quantum mechanics and relativity, where matter may be created and destroyed via E = mc2 and the Uncertainty principle permits 'accidental' violations of energy conservation to occur spontaneously. But such a state of nothing should not be confused with 'absolute nothingness', because the laws of physics are presupposed to exist beforehand. We live in a zero-energy universe, where the negative contribution of the energy of the gravitational field cancels the matter and energy to give a null value; so really it's just a case of nothing-for-nothing. The issue of particle creation is an example of that free lunch, revisiting some ideas from inflation and cosmology. The false vacuum that ignited inflation was very different from any typical expanding gas (which has positive pressure, performing work on the external environment and reducing its internal energy if no heat is added); we can think of the false vacuum as a curved but empty space-time with a constant negative pressure, performing work on itself and increasing its total internal energy via an adiabatic process as it inflates. This provides a starting point for the creation of elementary particles. The original inflationary model had the inflaton trapped in a metastable minimum of its potential, with quantum tunnelling as the mechanism for ceasing the exponential expansion. Since tunnelling ends inflation by bubble nucleation, bubbles emerged but could not collide often enough to distribute their energy in a homogeneous fashion (the so-called 'graceful exit problem'). This poses a big complication for particle creation, because the energy (which is trapped in the bubble walls) can only be freed by the collision of many such bubbles; the graceful exit problem means that the bubbles remain in inhomogeneous clusters. This difficulty in early inflationary models was resolved by the concept of 'slow-roll' inflation, whereby the field starts on a nearly flat plateau of its potential and rolls gradually downhill (the universe inflates during this time) until it reaches the true vacuum and inflation ends. But the universe becomes far too cold after this exponential expansion for any particles or radiation to form, so a theory of reheating is required; during this epoch the inflaton field slowly decayed and transferred its energy to create particles. Firstly, coherent oscillations of a scalar field occur and may last for some time if no rapid decays happen, so the particle decay time may be much longer than the Hubble time. Next, when the Hubble time (here the age of the universe) reaches the decay time, the slow case allows only fermionic decays to occur; but when bosonic particles are produced, parametric resonance (like a child on a swing momentarily standing and squatting to increase the amplitude of the oscillation) can take over. Such parametric resonance drives a fairly rapid decay termed 'preheating' to differentiate it from ordinary reheating.
Occupation numbers (quantities that determine the degree to which a quantum state is filled with particles) produced via parametric resonance are large, so the bosons are formed far from equilibrium (maximum entropy); they also explain why preheating does not occur if the only decay pathway is fermionic, in accordance with the Pauli exclusion principle. Finally, once high occupation numbers have formed through parametric resonance, reheating can continue in the ordinary way: the bosons interact and decay, eventually reaching a state of equilibrium.
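The swing analogy can be made concrete with a toy Mathieu-type oscillator whose frequency is modulated at twice its natural value; this is only a sketch of the resonance mechanism, with an illustrative modulation strength not taken from any particular inflaton potential (it assumes NumPy and SciPy are available):

```python
# Toy parametric resonance: x'' + w0^2 * (1 + eps*cos(2*w0*t)) * x = 0.
# Modulating the frequency at twice the natural frequency pumps the amplitude
# exponentially, the same mechanism invoked for bosonic preheating.
import numpy as np
from scipy.integrate import solve_ivp

w0, eps = 1.0, 0.2     # natural frequency and assumed modulation strength

def rhs(t, y):
    x, v = y
    return [v, -w0**2 * (1.0 + eps * np.cos(2.0 * w0 * t)) * x]

sol = solve_ivp(rhs, (0.0, 100.0), [1e-3, 0.0],
                max_step=0.05, rtol=1e-8, atol=1e-12)
growth = np.max(np.abs(sol.y[0])) / 1e-3
# growth comes out well over an order of magnitude: exponential amplification
# rather than the bounded oscillation of an undriven oscillator.
print(f"amplitude grew by a factor of about {growth:.0f}")
```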
Friday, 12 July 2013
Active Galaxies- Of Quasars and Kin
Something weird is going on in the centres of many galaxies. Often, intense aggregates of 'blue' light with characteristics distinct from the radiation associated with stars or gas are produced. Galaxies possessing such centres are 'active galaxies' and their central sources are active galactic nuclei (AGN). The optical spectrum of a typical galaxy is a composite of contributions from H II regions and stars; elliptical galaxies mirror the spectrum of a star, while spiral galaxies are akin to both a star and an H II region (partially ionised gas clouds). The optical spectrum of an active galaxy is a combination of the spectrum of a typical galaxy and extra radiation that features strong emission lines. While the common denominator of all active galaxies is an AGN, there are many types of active galaxy: Seyferts are spiral galaxies containing very bright point-like nuclei whose brightness varies; quasars look like far-away Seyferts with bright nuclei; radio galaxies are made distinctive by their massive radio lobes powered by relativistic jets; and blazars are quasars viewed from a particular angle, with a stellar appearance and continuous spectra. The central question that concerns astrophysicists is how a volume so small can generate such intense luminosities; the central engine of an AGN is thought to be a supermassive black hole around which an accretion disk forms from infalling material, converting gravitational energy to radiated heat. Jets are believed to be discharged orthogonally to the accretion disk. Such a paradigm leads to a standard model of an AGN, summarised as an accreting supermassive black hole (the central engine) encircled by a broad-line region contained inside an obscuring torus of infrared-emitting dust, together with a narrow-line region. Unification is an emerging means of modelling AGN according to the viewer's position relative to the axis of the accretion disk; one unification scheme links so-called Type 1 and Type 2 AGN depending on whether the observer has a clear view of the black hole (Type 1) or is prevented from viewing it by the opaque dusty torus (Type 2). In Type 2 AGN, the observer can't see the source of ultraviolet radiation or even the emission lines, only 'mirror images' of such properties reflected off adjacent clouds of gas. Another unification scheme applies to the roughly one tenth of AGN that have intense jets (radio-loud); an observer viewing along the axis of the jet sees a blazar, while looking away from the axis makes the AGN appear much less intense, and it would be discerned as either a radio-loud quasar or a radio galaxy.
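For a rough sense of the central engine's energetics, one can compare the Eddington luminosity (where radiation pressure on infalling gas balances gravity) with the accretion rate needed to supply it; the black-hole mass and 10% radiative efficiency below are assumed, illustrative values:

```python
# Eddington luminosity L_Edd = 4*pi*G*M*m_p*c / sigma_T, and the accretion
# rate needed to radiate at that level with efficiency eta (L = eta*Mdot*c^2).
import math

G, c = 6.674e-11, 2.998e8            # SI units
m_p, sigma_T = 1.673e-27, 6.652e-29  # proton mass (kg), Thomson cross-section (m^2)
M_sun = 1.989e30                     # kg

M = 1e8 * M_sun                      # assumed black-hole mass
eta = 0.1                            # assumed radiative efficiency

L_edd = 4 * math.pi * G * M * m_p * c / sigma_T     # ~1e39 W (~1e46 erg/s)
mdot = L_edd / (eta * c**2)                         # kg/s

print(f"L_Edd ~ {L_edd:.2e} W")
print(f"required accretion rate ~ {mdot / M_sun * 3.156e7:.1f} M_sun per year")
```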
Structure Formation- Revisiting Dark Matter
Looking up at the night sky and into our interstellar neighbourhood, we see matter clumped into galaxies, clusters and superclusters. This is quite distinct from the early universe, where the low level of anisotropy in the cosmic microwave background (CMB) serves as a cue to the smooth distribution of baryonic matter at recombination. The evolution of large-scale structure is thought to have arisen from the gravitational instability and collapse of regions initially denser than average; such regions expand more slowly than the average expansion of the universe. The minute perturbations laid down quantum mechanically produce relative density fluctuations whose growth depends on the balance between two effects: firstly, the self-gravitation of matter in the over-dense region, which tends to cause collapse, and secondly, the pressure that maintains hydrostatic equilibrium and serves to prevent collapse. A key cosmological parameter is the Jeans mass, which plays the role of a border between these two effects: if a region exceeds the Jeans mass, it will collapse. Similarly, the horizon distance (at any moment in the chronology of the universe, the maximum distance a signal could have traversed in the time that had passed up to that moment) plays a critical role in stability against collapse; an overdense region larger than the horizon distance can't support itself. Something interesting happens at recombination, at about 3000 K and 300,000 years after the big bang: the Jeans mass falls sharply to about the mass of a globular cluster. Before recombination, the interaction of photons with free electrons contributed to the overall pressure; after the recombination epoch, when the electrons cease to interact with photons, the only protection against collapse comes from the internal pressure of the gas. However, gravitational collapse of cold (non-relativistic) dark matter had already amplified density fluctuations before recombination; the dominant influence on baryons is then the gravitational attraction of regions which have acquired over-densities of cold dark matter. This means that baryons were drawn into those collapsing clouds of dark matter, kick-starting galaxy formation: a hierarchical (bottom-up) process whereby cold dark matter drew condensations together against the overall expansion of the universe at redshifts between roughly 100 and 40. Cool gas drawn into dark matter halos settled into well-defined disks to produce the first spiral galaxies; this also provides an intuitive reason why ellipticals have no young stars, because only where gas can collect and coalesce can star formation proceed. After recombination, the decoupling of photons from baryons allowed them to travel unhindered, making the universe transparent and ushering in a period of darkness (the dark ages). The dark ages ended some 400 million years later with reionisation, driven by the first generations of galaxies and other objects (quasars and Pop. III stars) that emit UV radiation, forming an initial ionised fraction of cosmic gas which increased until the complete ionisation of hydrogen. This highlights a point where ionised gas became just as important as cold dark matter in structure formation.
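A minimal sketch of the Jeans criterion itself; the temperature and density below are illustrative choices for a cool gas cloud and are not meant to represent the recombination epoch:

```python
# Jeans length and Jeans mass for a uniform gas cloud:
#   lambda_J = c_s * sqrt(pi / (G * rho)),  M_J = (4/3)*pi*rho*(lambda_J/2)^3.
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
k_B = 1.381e-23          # J/K
m_H = 1.673e-27          # kg

T = 100.0                # assumed gas temperature, K
n = 1e9                  # assumed number density, m^-3
rho = n * m_H            # mass density, kg/m^3

c_s = math.sqrt(k_B * T / m_H)                   # isothermal sound speed
lam_J = c_s * math.sqrt(math.pi / (G * rho))     # Jeans length, m
M_J = (4.0 / 3.0) * math.pi * rho * (lam_J / 2.0) ** 3

M_sun = 1.989e30
print(f"Jeans length ~ {lam_J / 3.086e16:.1f} pc, Jeans mass ~ {M_J / M_sun:.0f} M_sun")
# Regions more massive than M_J can no longer be held up by pressure and collapse.
```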
Wednesday, 10 July 2013
Magnetic Monopoles- Anomaly or Possibility?
Like poles repel and opposites attract. Such is an elementary rule of thumb and one of the basic properties of magnetism: a magnet always has two 'inseparable' poles, north and south. Yet there is no fundamental reason why this should be the case. Why does a magnet always have two poles? Why can't the field lines of the magnetic field have a terminating end? Why can't a magnet have only one pole? Since electric field lines terminate on electric charges, it seems as though there are simply no magnetic charges. Case closed? Not quite. In classical electrodynamics, Maxwell's equations have an elegant symmetry, the electric-magnetic duality (under which the electric and magnetic fields behave identically). This symmetry appears broken because no magnetic charges have been found, but the existence of monopoles would resolve this anomaly and restore the elegant symmetry. In quantum mechanics, the forces of electromagnetism are expressed in terms of scalar and vector potentials rather than electric and magnetic fields, and their introduction seems to break the duality. Because electromagnetism has an abelian U(1) symmetry, one can perform a gauge transformation using an unlimited number of potentials that give rise to the same fields; however, the vector potential seems to prohibit magnetic charges because the divergence of the curl of a vector field vanishes identically. Dirac devised a way to use a vector potential to construct a monopole, in a manner reminiscent of Faraday, who used a long magnet in a mercury-filled vessel arranged so that one of the poles was beneath the surface while the pole above acted as a monopole. The existence of monopoles would explain the quantisation of electric charge; Dirac envisaged a semi-infinite solenoid whose end possesses a non-zero divergence (and thus acts as the monopole), the solenoid itself being a Dirac string (an infinitely thin flux tube that can connect two monopoles). Moving on to GUTs, the Weinberg-Salam unification incorporates a U(1) x SU(2) symmetry which is broken by the Higgs field at low energies; a simpler analogue is the Georgi-Glashow SO(3) model. 't Hooft and Polyakov found that a solution to such a model exists that carries both electric and magnetic charge; their topologically stable solution involves a Higgs field of fixed length whose direction varies from point to point. For the field to be continuous, the point-like flaw at the origin of the field can't be a vacuum state; thus the origin is a clump of energy corresponding to a massive particle (since the Higgs field vanishes at the origin, the SO(3) symmetry is left unbroken there). Interestingly, such a particle possesses magnetic charge: because electromagnetism is generated by oscillations around the Higgs field direction, one can compute the resulting magnetic field, and the 't Hooft-Polyakov solution turns out to be a monopole. Even though monopoles haven't been directly observed, let alone discovered, they play an important role in modern physics, especially in explaining the phenomenon of quark confinement in QCD. At extremely low temperatures, some materials become superconductors and allow current to flow without resistance but expel any magnetic flux (the Meissner effect); if we could put a monopole-antimonopole pair into a superconductor, what would happen? Since magnetic flux is expelled, the resolution is that an Abrikosov-Gorkov flux tube forms between the pair, so the flux is restricted to this tube.
And since the flux tube carries a nonzero energy per unit length, the energy needed to separate the pair increases linearly with distance. Finally, monopoles are important in cosmology because GUTs predict they were produced in the early universe; the Kibble mechanism is a likely candidate for how that happened. It proposes that the early universe contained domains with arbitrary yet internally uniform field directions; the Higgs field interpolates continuously between neighbouring domains, but wherever it is forced to vanish, a topological defect forms. Where several such domains meet in the right way, a monopole can be made.
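The charge-quantisation argument can be stated compactly; in Gaussian units the Dirac condition reads (a standard result, quoted here rather than derived):

```latex
% Dirac quantisation condition (Gaussian units):
e\,g \;=\; \frac{n\hbar c}{2}, \qquad n \in \mathbb{Z},
% so the smallest allowed magnetic charge is
g_{\min} \;=\; \frac{\hbar c}{2e} \;=\; \frac{e}{2\alpha} \;\approx\; 68.5\,e,
% which is why a single monopole anywhere would force every electric charge
% to come in integer multiples of a basic unit.
```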
Sunday, 7 July 2013
Astrometry- The Cosmic Distance Ladder
Van Gogh's painting, 'The Starry Night', not only resembles the
Whirlpool galaxy but also proves its astronomical worth in a number of
ways. For starters, one can deduce it was painted in the predawn hours
due to the inclination of the moon to the horizon and that the brightest
of its 'stars' is in fact Venus, attesting to the fact that the planets
are usually the first to emerge in the evening. But what makes this
painting sacred is not what it shows but what it represents: astrometry.
Our obsession with the heavens means that we can measure distances with
greater rigour and precision, from gnomons and sundials to standard
candles and gravitational lensing; such are the rungs of the cosmic
distance ladder. The earth is the first rung of that ladder. Aristotle
and others provided the first indirect arguments that the earth is round
using the moon. He knew that lunar eclipses happened when the moon was
directly opposite the sun (opposite constellation of the Zodiac), so
eclipses happen because the moon falls into the earth's shadow. But in a
lunar eclipse, the shadow of the earth on the moon is always a circular
arc, and since the only shape that always produces such a shadow is the sphere,
he inferred the earth was round. If the earth were circular yet flat like
a disk, the shadows would sometimes be elliptical. Similarly, Eratosthenes
calculated the radius of the earth at around 40,000 stadia. Having read of a
well in Syene that reflected the overhead sun at noon of the summer
solstice (June 21) because of its location on the tropic of cancer; he
used a gnomon (in Alexandria) to measure the deviation of the sun from
the vertical as about 7 degrees. Knowing the distance from Alexandria to Syene
to be some 5000 stadia was enough to compute the earth's radius.
Aristotle also argued the moon was a sphere (rather than a flat disk)
because the terminator (boundary of the sun's light on the moon) was
always an elliptical arc. The only shape with such a property is the
sphere; were the moon a flat disk, no such terminator would appear.
Aristarchus determined the distance from the earth to the moon as 60
earth radii (57-63 earth radii in actuality). He also computed the radius of
the moon as 1/3 the radius of the earth. Aristarchus had knowledge of
lunar eclipses being caused by the moon passing through the earth's
shadow and since the earth's shadow is 2 earth radii wide (diameter) and
the maximum lunar eclipse lasted for 3 hours, it meant that it took 3
hours for the moon to cross 2 earth radii. And it also takes around 28
days for the moon to go around the earth, which was sufficient to compute the
distance to the moon. In addition, the radius of the moon in terms of the distance
to the moon was determined from the time it takes to set (about 2 minutes) compared
with the time it takes to make a full apparent rotation (roughly 24 hours). Next,
the Sun's radius was measured by Aristarchus by relying on the moon.
Having computed the radius of the moon as 1/180 the distance to the
moon, he knew that during a solar eclipse the moon covers the sun
almost perfectly; using similar triangles, he inferred that the radius of
the sun was also 1/180 the distance to the sun. But to determine the
distance to the sun, he knew that half moons happened when the moon
makes a right angle between the earth and sun, full moons occurred when
the moon was directly opposite the sun and new moons occurred when the
moon was between the earth and sun. This meant that half moons occur
slightly closer to new moons than to full moons. Simple trigonometry
could then be used to compute the distance to the sun, which Aristarchus
put at about 20 times further than the moon; in fact the relevant timing offset
is only about half an hour, and the actual distance is around 390 times
the earth-moon distance. This
also led to the conclusion that the Sun was enormously larger than the
earth and the first heliocentric proposal, later adapted by Copernicus.
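The arithmetic of this first rung is easy to check; a minimal sketch of the Eratosthenes calculation using the figures quoted above, leaving the answer in stadia since the modern value of the stadion is uncertain:

```python
# Eratosthenes: a ~7 degree shadow angle at Alexandria, 5000 stadia from Syene,
# gives the earth's circumference and radius in stadia.
import math

angle_deg = 7.0          # deviation of the noon sun from the vertical at Alexandria
arc_stadia = 5000.0      # quoted Alexandria-Syene distance

circumference = arc_stadia * 360.0 / angle_deg     # ~257,000 stadia
radius = circumference / (2.0 * math.pi)           # ~41,000 stadia

print(f"circumference ~ {circumference:.0f} stadia, radius ~ {radius:.0f} stadia")
# Close to the ~40,000 stadia figure for the radius quoted above.
```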
Continuing our trek up the cosmic distance ladder, the rung of the planets and the speed of light is quite a story. The ancient astrologers realised that all the planets lie close to a plane, the ecliptic, because they only move through the Zodiac (the set of 12 constellations around the Earth). Ptolemy produced inaccurate results with his geocentric model, while Copernicus reached highly accurate conclusions, initially poring over the annals of the ancient Babylonians, who knew that the synodic period of Mars repeated itself every 780 days. The heliocentric model allowed Copernicus to calculate the actual angular velocity as 1/170 and, knowing that the earth takes 1 year to go around the sun, to subtract the implied angular velocities and find that the sidereal period of Mars was 687 days. Copernicus determined the distance of Mars from the sun to be about 1.5 AU (astronomical units) by assuming circular orbits and using measurements of Mars' location in the Zodiac across various dates. Brahe made similar measurements, but they deviated from the Copernican scheme; Kepler maintained that this was because the orbits were elliptical and not the perfect circles Copernicus had assumed. Kepler set out to compute the orbits of the earth and Mars simultaneously, and since Brahe's data only gave the direction of Mars from the earth and not the distance, he had to figure out the orbit of the earth using Mars. Working under the assumption that Mars was effectively fixed (returning to the same point in its orbit every 687 days) while the earth moved, Kepler used triangulation on observations taken 687 days apart to compute the earth's orbit relative to any position of Mars. This later allowed a more precise calculation of the AU by parallax (measuring the same object from two different locations on earth), especially during transits of Venus across the sun observed from multiple places (including Cook's voyage). But the anomaly of the precession of Mercury (where the points of aphelion and perihelion slowly rotate around the sun) could not be reconciled with Newtonian mechanics, so general relativity was invoked. The first attempt at accurately measuring the speed of light (c) was made by Rømer, who observed Io, one of Jupiter's moons, which completes an orbit every 42.5 hours. Rømer noticed that when the earth was closest to Jupiter the eclipses of Io ran slightly ahead of schedule, but when the earth was on the far side of its orbit they lagged by around 20 minutes. Huygens inferred that this was because of the extra distance (2 AU) that the light had to travel from Jupiter, so light travels 2 AU in about 20 minutes; hence the speed comes out at a few hundred thousand kilometres per second, the right order of magnitude for the modern value of 300,000 km/s.
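The Rømer-Huygens estimate is just as easy to reproduce; a minimal sketch using the modern value of the astronomical unit and the roughly 20-minute lag quoted above:

```python
# Speed of light from Io's eclipse delays: the ~20 minute lag corresponds to
# light crossing the extra 2 AU between closest and farthest approach.
AU_km = 1.496e8          # modern value of the astronomical unit, km
delay_s = 20 * 60        # the ~20 minute lag quoted above, in seconds

c_estimate = 2 * AU_km / delay_s
print(f"estimated speed of light ~ {c_estimate:.0f} km/s")
# ~250,000 km/s: the right order of magnitude for the modern 299,792 km/s.
```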