Cool things happen when you cool liquid helium to 2.2 K. Below this temperature (the lambda point), superfluidity takes over and viscosity decreases radically, resulting in frictionless flow. Place it in a container and it will creep up the walls as a thin film and flow through pores in the container walls. This anomaly is difficult to understand using classical fluid mechanics; Poiseuille's law dictates that the flow rate of a fluid is proportional to the pressure difference across the capillary and to the fourth power of the capillary radius. But below the lambda point, the flow rate of liquid helium was not only high, it was independent of both the capillary radius and the pressure; evidently, this is not within the explanatory scope of classical theory. When liquid helium is cooled (by pumping away its vapour), the liquid ceases to boil because its thermal conductivity increases so sharply that the temperature stays homogeneous throughout. Let's start our exploration from the ground up: the reason liquid helium never solidifies, no matter how far you cool it, is that the weak Van der Waals forces between the atoms are too weak to overcome the zero-point motion associated with confining a helium atom to a site on a lattice. The nature of the superfluid is therefore necessarily quantum mechanical. London suggested superfluidity is an expression of a Bose-Einstein condensate (BEC), but the issue is that BEC was derived for ideal gases whose particles do not interact, whereas helium atoms attract weakly at a distance and repel strongly when close. Feynman's path integral approach led to the realisation of two important yet subtle notions: firstly, helium atoms are bosons, and Bose symmetry ensures the wave-function is not affected by any two helium atoms exchanging positions.
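The classical expectation can be made concrete. A minimal sketch of Poiseuille's law, Q = pi * dP * r^4 / (8 * mu * L), with illustrative (hypothetical) values, showing the strong fourth-power dependence on radius that superfluid helium flagrantly ignores:

```python
import math

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Volumetric flow rate through a capillary (Poiseuille's law):
    proportional to the pressure difference and to radius**4."""
    return math.pi * delta_p * radius**4 / (8 * viscosity * length)

# Illustrative values (hypothetical): water-like viscosity, 1 cm capillary.
q1 = poiseuille_flow(delta_p=100.0, radius=1e-4, viscosity=1e-3, length=0.01)
q2 = poiseuille_flow(delta_p=100.0, radius=2e-4, viscosity=1e-3, length=0.01)
print(q2 / q1)  # doubling the radius multiplies the flow by 2**4 = 16
```

A classical fluid forced through a capillary half as wide should flow sixteen times more slowly; superfluid helium does not.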
And secondly, if an atom is moving slowly along its trajectory, the adjacent atoms have to move to get out of the way; this act of 'making room' increases the kinetic energy of those helium atoms and adds to the action. The overall effect is that we must revise what we usually perceive as the mass of the helium atom: when it moves, more than one atom has to make way, so the trajectories that contribute most to the sum over paths are those of a particle with a somewhat increased effective mass. But what keeps a superfluid helium superfluid? Landau suggested that at low temperatures there are no available low-energy states near the coherent BEC state into which fluctuations could push the quantum fluid. A classical fluid has viscosity (resistance to flow) because separate atoms bounce off other atoms, molecules and any debris in the container; these excitations alter the motion of the particles and dissipate energy from the fluid to the container. But if there are no states left to fill (as in Landau's suggestion), particles can't alter their motion and they continue to flow without dissipating energy. Feynman wanted to derive this within a quantum mechanical framework: essentially, because helium atoms repel each other at short distances, the ground state (lowest energy) of the liquid will have a roughly uniform density. You can imagine each atom in the system as confined to a 'cage' formed by the surrounding atoms; at high densities, the cage enclosing each atom is smaller. The Uncertainty principle teaches us that confining the atom to a smaller space raises its energy; so the ground state is achieved when all atoms are as far apart from each other as possible.
Imagine a ground-state configuration with uniform density and envision that we can create a state that differs from it, but only over large regions, so any 'wiggles' in the wave-function will not be closely spaced (a requirement of the Uncertainty principle). Now, moving one atom some distance to a new position leaves the system invariant due to Bose symmetry, so the wave-function does not register simple atomic rearrangements. This can be interpreted as follows: the largest extra wiggles in the wave-function describing a new state can't be longer than the average spacing between the individual atoms. Since wiggles of this magnitude correspond to excited states, their energies are higher than random thermal perturbations can supply at 2.2 K or below. Therefore, this hints at the fact that there are no low-hanging energy states above the ground state that could be readily accessed by particle motion so as to act as a resistance to the flow. Like superconductivity, the 'superflow' would continue provided the energy of the flow was lower than the 'energy gap' between the ground state and the lowest-energy excited state. Tisza proposed a 'two-fluid' model whereby at absolute zero all of the liquid helium would be in the superfluid state, and as the fluid gained heat, excitations would dissipate energy and the normal portion would come to permeate the whole volume. But what would happen to a container or bucket of superfluid if one spun it around? Due to the configuration of the ground state and the energy needed for excitations above it, the superfluid had to have no rotation. And what about making the entire fluid rotate by spinning its container? Feynman suggested that small regions on the order of several atoms across would rotate around a pivot; these pivots or central regions would form so-called vortex lines (which tangle and twist around each other).
Such vortices need not extend from the top of the container to the bottom but may form rings; this also corresponds to the minimum energy of a roton (the lowest-energy excitation), where the roton is a local domain moving at a different speed to the background fluid. And hence, for the quantisation of angular momentum to still apply, the fluid needs to flow back somewhere else, as in a vortex.
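Landau's criterion can be put in numbers. A rough sketch using textbook roton parameters (assumed values, not taken from this post): the roton minimum sits at an energy gap of Delta/k_B ~ 8.65 K and momentum p0/hbar ~ 1.92 per angstrom, and flow slower than roughly Delta/p0 cannot create rotons at all:

```python
# Rough estimate of Landau's critical velocity from assumed textbook
# roton parameters for liquid helium (not figures from the post itself).
K_B = 1.381e-23      # Boltzmann constant, J/K
HBAR = 1.055e-34     # reduced Planck constant, J s

delta = 8.65 * K_B   # roton energy gap, joules
p0 = 1.92e10 * HBAR  # roton momentum, kg m/s

# Landau criterion: below v_c = min(E(p)/p) ~ Delta/p0, no excitation
# can be created, so the flow cannot dissipate energy.
v_critical = delta / p0
print(v_critical)  # ~59 m/s
```

Flow below this critical velocity simply has no excitation to hand its energy to, which is the quantitative content of "no low-hanging energy states".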
Wednesday, 31 July 2013
Saturday, 27 July 2013
Genome- The Code of Us
The genome is like a page of printed music: the page is a material object, but the notes and score only become real when realised in space and time, in a range of ways, albeit a limited range. Our species is one amongst many and the code of us is what the genome contains; in the beginning was the word, and the word was with our genes. The typical inventory of a human genome contains both moderately repetitive and highly repetitive DNA, namely functional classes of dispersed gene families (e.g. globin, actin) and tandem gene family arrays (ribosome, histone and tRNA genes). It also contains the highly repetitive minisatellites (most of the heterochromatin around centromeres), telomeres and microsatellites (distributed throughout the genome). Components with unknown function or vestigiality include the long and short interspersed elements (LINEs and SINEs) as well as pseudogenes. Many of the variations between individual genomes come in the form of single-nucleotide polymorphisms (SNPs), which are single-base differences; SNPs in recombination-poor regions such as mtDNA and the Y-chromosome tend to remain together and define a person's haplotype (a set of co-inherited genetic polymorphisms). Another important haplotype is the major histocompatibility complex (MHC): the MHC genes, clustered on chromosome 6, are especially polymorphic and help the host identify foreign proteins, presenting them to T-cell receptors to activate adaptive immunity. Dynamic components of the genome (those that can move around in it) are found in all organisms: retrotransposons (class I) copy themselves through an RNA intermediary and include many degenerate retroviruses; transposons (class II) encode a transposase that cuts the element out and pastes it somewhere else in the sequence, and they carry inverted repeats at their ends which are the targets of the cut-and-paste process.
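The idea of a SNP as a single-base difference between aligned sequences can be sketched in a few lines; the sequences below are toy examples, not real genomic data:

```python
# Toy illustration: find single-base differences (SNP-like positions)
# between two aligned sequences of equal length.
def snp_positions(seq_a, seq_b):
    """Return the 0-based positions where two aligned sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned and equal length"
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

hap1 = "ATGCGTACGT"   # hypothetical haplotype fragments
hap2 = "ATGAGTACCT"
print(snp_positions(hap1, hap2))  # -> [3, 8]
```

Real SNP calling of course involves alignment, sequencing error models and quality scores; the point here is only what "single-base difference" means.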
Interestingly, reverse transcription (used by retroviruses) is error-prone; such errors can inactivate an integrated provirus, leaving it stranded in the host genome. These endogenous retroviral insertions (ERVs) are specific to species, and if two species share the same ERVs at identical insertion points, it is evidence of common ancestry. Other examples of shared mutations include the pseudogene responsible for the silencing of the enzyme L-gulonolactone oxidase, which is involved in vitamin C synthesis; this is evidence for common descent between humans and other simians. Repeated copying of sections of the genome can produce large families of homologues, such as the super-family of G-protein-coupled receptors (GPCRs), some 700 of which are in the human genome. But what makes us different from chimpanzees is more subtle: differing in our sequences by around 13 Mb, we diverge in terms of transcription factors like FOXP2 (implicated in language). And this is a classic paradox, given that yeast can survive the sacrifice of 80% of their genes while the 13 Mb (4%) variation between humans and chimpanzees causes profound change in the phenotype. Let's examine the progress made in genome sequencing. Sanger's chain-termination method uses DNA polymerase to create a new strand of DNA: the polymerase needs a supply of nucleotide triphosphates, which the enzyme adds to the growing primer strand. A related technique, the polymerase chain reaction (PCR), amplifies small quantities of DNA through repeated cycles of copying. The whole-genome shotgun approach involves sequencing random fragments of the DNA and putting them back in the right order again, which overcomes the tedium of making a map as the basis for assembling partial sequences.
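The amplifying power of PCR is just exponential doubling; a minimal sketch under the idealised assumption that every cycle copies every template perfectly:

```python
def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Copies after n PCR cycles; each cycle multiplies the template by
    (1 + efficiency), i.e. ideal doubling when efficiency = 1."""
    return initial_copies * (1 + efficiency) ** cycles

# One starting molecule, 30 ideal cycles: 2**30, over a billion copies.
print(pcr_copies(1, 30))
# Real reactions run below perfect efficiency, e.g. 90% per cycle:
print(pcr_copies(1, 30, efficiency=0.9))
```

This is why vanishingly small DNA samples can be amplified into workable quantities in a few dozen thermal cycles.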
Tuesday, 23 July 2013
Cold Fusion- A Modern Heresy
Fusion is a hot topic these days. The stakes are higher than ever for a source of sustainable energy and many still want a piece of the action. But like most such feats, it has fallen by the wayside of fringe physics as a modern-day heresy. The basic idea is whether it may be possible to recreate the power of the sun (which fuses atomic nuclei at around 10^7 K) at or near room temperature. In 1989, Fleischmann and Pons claimed they could create such a process on earth at room temperature using a simple electrolysis cell. Using heavy water (D2O), in which the hydrogen atoms are replaced by hydrogen's heavier isotope deuterium, they employed a palladium cathode and passed a current through the water, allegedly causing large quantities of heat to be produced. Such a 'cold fusion' reaction is nothing short of miraculous: firstly, the nucleus of every deuterium atom carries a positive charge, preventing the atoms from coming close enough to fuse (the Coulomb barrier). The sun overcomes the Coulomb barrier with enormous temperatures that send nuclei accelerating at great speeds, colliding to fuse and release energy. Even more miraculous, the Fleischmann-Pons experiment didn't produce lethal doses of radiation, as would be expected from fusion reactions. To explain the phenomenon, it was proposed that neutrons were being exchanged between the atomic nuclei (releasing heat in the process); others believed that deep within the lattice of palladium atoms, an exotic clustering of electron clouds allowed the deuterium nuclei to come close enough to fuse. Another proposal was that spontaneous fractures in the palladium cathode effectively fired the deuterons together. The experiment itself was quantified in terms of the current put into the cell compared with the heat loss and temperature rise over the entire run; but was this really cold fusion?
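The scale of the Coulomb barrier is easy to estimate. A back-of-the-envelope sketch (the ~3 fm separation is an assumed, order-of-magnitude nuclear distance): the electrostatic energy of two unit charges at that range comes out at a few hundred keV, roughly ten million times the thermal energy scale at room temperature:

```python
# Rough estimate of the deuteron-deuteron Coulomb barrier, U = k*e^2/r,
# at an assumed nuclear-scale separation of ~3 femtometres.
K_COULOMB = 8.9875517923e9   # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602176634e-19   # elementary charge, C

def coulomb_barrier_ev(separation_m):
    """Electrostatic potential energy of two unit charges, in eV."""
    joules = K_COULOMB * E_CHARGE**2 / separation_m
    return joules / E_CHARGE

barrier = coulomb_barrier_ev(3e-15)
print(barrier)  # a few times 1e5 eV (a few hundred keV)

# Thermal energy scale at room temperature, for comparison: ~0.025 eV.
print(barrier / 0.025)
```

Quantum tunnelling softens this picture somewhat, but the mismatch of roughly seven orders of magnitude is the core of why room-temperature fusion is so hard to credit.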
Apart from the lack of experimental reproducibility, a strong theoretical argument can be made as a final 'nail in the coffin' against the feasibility of cold fusion. Leggett and Baym maintained that in calculating the maximum degree to which the Coulomb barrier can be lowered (presuming equilibrium), in addition to the binding energies of electrons in both hydrogen and helium, one must also consider the affinity of the metallic lattice for an atom (the energy released when an atom is put in the crystal and permitted to occupy the lowest energy state). These parameters are well defined, except for the last, because there is no precise measurement of the affinity of palladium or titanium for helium; but one can be assured that the value must be small, given that helium escapes readily from such metals at room temperature. Other cold fusion scenarios, such as a deuteron-metal system in a transient state outside thermodynamic equilibrium, are questionable in their efficacy.
Wednesday, 17 July 2013
Particle Creation- The Ultimate Free Lunch
You can't get something from nothing; there is simply no such thing as a free lunch. But considering the beginning of the universe, where did all the particles come from? Common sense tells us that some breach of natural law such as energy conservation (the heat added to a system equals the increase in its internal energy plus the work it performs) would be necessary for the universe to begin. Such intuition fails at the level of quantum mechanics and relativity, where matter may be created and destroyed via E = mc^2 and the Uncertainty principle permits 'accidental' violations of energy conservation to occur spontaneously. But such a state of nothing should not be confused with 'absolute nothingness', because the laws of physics are presupposed to exist beforehand. We appear to live in a zero-energy universe, where the negative contribution of the energy of the gravitational field cancels out that of matter and radiation to give a null value; so really it's just a case of nothing-for-nothing. The issue of particle creation is an example of that free lunch, revisiting some ideas from inflation and cosmology. The false vacuum that ignited inflation was very different from a typical expanding gas (which has positive pressure, performing work on the external environment and reducing its internal energy if no heat is added); we can think of the false vacuum as a curved but empty space-time with a constant negative pressure, performing work on itself and increasing its total internal energy as it inflates. This provides a starting point for the creation of elementary particles: the original inflationary model included an inflaton trapped in a metastable field potential, with quantum tunnelling as the mechanism for ceasing the exponential expansion.
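The odd-sounding claim that a negative-pressure vacuum gains energy as it expands is just bookkeeping with the first law. A toy check in units where c = 1, assuming a constant vacuum energy density rho: since p = -rho, each increment of volume does work -p*dV = +rho*dV on the system, so total energy tracks volume exactly:

```python
# Toy first-law bookkeeping for a false vacuum (units with c = 1, assumed
# constant energy density rho): pressure p = -rho, so dU = -p*dV = +rho*dV
# and the total energy U = rho*V grows in step with the volume.
def expand(rho, volume, steps, dv):
    u = rho * volume
    for _ in range(steps):
        u += -(-rho) * dv    # dU = -p dV with p = -rho
        volume += dv
    return u, volume

u, v = expand(rho=1.0, volume=1.0, steps=1000, dv=0.01)
print(u, v, u / v)  # energy tracks volume: u/v stays equal to rho
```

An ordinary gas (p > 0) run through the same loop would lose internal energy as it expands; the sign of the pressure is the whole story.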
Since tunnelling ends inflation by bubble nucleation, bubbles emerged, but sufficient collisions couldn't occur to distribute energy in a homogeneous fashion (the so-called 'graceful exit problem'). This poses a big complication for particle creation, because the energy (which is trapped in the bubble walls) can only be freed by the collision of many such bubbles; the graceful exit problem means that the bubbles remain in inhomogeneous clusters. However, this difficulty in early inflationary models was resolved by the concept of a 'slow roll', whereby the inflaton starts on the plateau of a field potential and rolls gradually downhill (the universe inflates during this time) until it reaches the true vacuum and inflation ends. But the universe becomes too cold after this exponential expansion for any particles or radiation to form, so a theory of reheating is required: during this epoch the inflaton field slowly decayed and transferred its energy to create particles. Firstly, coherent oscillations of the scalar field occur and may last for some time if no rapid decays happen; thus the particle decay duration may be much longer than the Hubble time. Next, when the Hubble time (here, the age of the universe) reaches the decay time, the slow case allows only fermionic decays to occur; but when bosonic particles are produced, parametric resonance (like a child on a swing momentarily standing and squatting to increase the magnitude of the oscillation) can take over. Such parametric resonance promotes a fairly rapid decay termed preheating, to differentiate it from the initial stage.
Occupation numbers (quantities that determine the degree to which a quantum state is filled with particles) produced via parametric resonance are large, so that bosons are formed far from equilibrium; they also give a reason why preheating does not occur if the only decay pathway is fermionic, in accordance with the Pauli exclusion principle. Finally, following the formation of high occupation numbers by parametric resonance, reheating can continue as usual: the bosons interact and decay, eventually reaching equilibrium.
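The swing analogy can be made concrete with the simplest equation that exhibits parametric resonance, the Mathieu equation x'' + (1 - 2q*cos(2t))x = 0; the parameters below are illustrative, not drawn from any inflationary model. Modulating the oscillator's frequency at twice its natural frequency pumps the amplitude exponentially, which is the mechanism invoked for the rapid inflaton decay of preheating:

```python
import math

def integrate_mathieu(q, t_end, dt=1e-3):
    """Integrate x'' + (1 - 2*q*cos(2*t)) * x = 0 and return the peak |x|.

    q is the strength of the frequency modulation; q = 0 is a plain
    harmonic oscillator.
    """
    x, v, t = 1.0, 0.0, 0.0
    peak = abs(x)
    while t < t_end:
        accel = -(1.0 - 2.0 * q * math.cos(2.0 * t)) * x
        v += accel * dt          # semi-implicit (symplectic) Euler step
        x += v * dt
        t += dt
        peak = max(peak, abs(x))
    return peak

print(integrate_mathieu(q=0.2, t_end=40.0))  # modulated: grows far beyond 1
print(integrate_mathieu(q=0.0, t_end=40.0))  # unmodulated: stays near 1
```

The exponential growth of the amplitude is the classical analogue of the explosively large occupation numbers: bosons pile into the resonant modes far faster than ordinary perturbative decay would allow.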
Friday, 12 July 2013
Active Galaxies- Of Quasars and Kin
Something weird is going on in the centres of many galaxies. Often, intense aggregates of 'blue' light are produced, with characteristics distinct from the radiation associated with stars or gas. Galaxies possessing such centres are 'active galaxies' and their central sources are active galactic nuclei (AGN). The optical spectrum of a typical galaxy is a composite of contributions from H II regions and stars; elliptical galaxies mirror the spectrum of a star while spiral galaxies resemble a combination of a star and an H II region (partially ionised gas clouds). The optical spectrum of an active galaxy is a combination of the spectrum of a typical galaxy and extra radiation featuring strong emission lines. While the common denominator of all active galaxies is an AGN, there are many types of active galaxy: namely Seyferts, which are spiral galaxies containing very bright point-like nuclei whose brightness varies. Quasars look like far-away Seyferts with bright nuclei, while radio galaxies are made distinctive by their massive radio lobes powered by relativistic jets. Blazars have a stellar appearance and produce continuous spectra; they are thought to be quasars viewed from a special angle. The central question that concerns astrophysicists is how a volume so small can generate such intense luminosities; the central engine of an AGN is thought to be driven by a supermassive black hole, around which an accretion disk forms from infalling material that converts gravitational energy to radiating heat. Jets are believed to be discharged orthogonally to the accretion disk. Such a paradigm leads to a standard model of an AGN (pictured), summarised as an accreting supermassive black hole (the central engine) encircled by a broad-line region contained inside an obscuring torus of infrared-emitting dust, plus a narrow-line region.
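The luminosity question yields to a one-line estimate. An order-of-magnitude sketch (the 10% radiative efficiency and one-solar-mass-per-year accretion rate are assumptions, chosen as typical textbook values): an accretion disk converts a fraction eta of the rest-mass energy of infalling matter into radiation, L = eta * Mdot * c^2:

```python
# Order-of-magnitude accretion luminosity, L = eta * Mdot * c^2, with an
# assumed radiative efficiency eta ~ 0.1 typical of disk accretion.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
YEAR = 3.156e7       # seconds per year
L_SUN = 3.828e26     # solar luminosity, W

def accretion_luminosity(mdot_solar_per_year, eta=0.1):
    """Luminosity in watts for a given accretion rate in solar masses/yr."""
    mdot = mdot_solar_per_year * M_SUN / YEAR
    return eta * mdot * C**2

lum = accretion_luminosity(1.0)   # one solar mass per year
print(lum, lum / L_SUN)  # ~6e38 W, i.e. over 1e12 solar luminosities
```

Swallowing a single solar mass per year at 10% efficiency outshines an entire galaxy of stars, which is how so small a volume can dominate its host.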
Unification is an emerging means of modelling AGN according to the viewer's position relative to the axis of the accretion disk. One unification regime links the so-called Type 1 and Type 2 AGN depending on whether the observer has a clear view of the black hole (Type 1) or is prohibited from viewing it by an opaque dusty torus (Type 2). In Type 2 AGN, the observer can't see the source of ultraviolet radiation or even the emission lines, but only 'mirror images' of such features reflected off adjacent clouds of gas. Another unification regime applies to the roughly one tenth of AGN that have intense jets (radio-loud): an observer viewing along the axis of the jet will see a blazar, while looking away from the axis makes the AGN appear much less intense, and it would be discerned as either a radio-loud quasar or a radio galaxy.
Structure Formation- Revisiting Dark Matter
Looking up at the night sky and into our interstellar neighbourhood, we see matter clumped into galaxies, clusters and superclusters. This is quite distinct from the early universe, where the relatively low level of anisotropies in the cosmic microwave background (CMB) serves as a cue to the smooth and sleek distribution of baryonic matter at recombination. The evolution of large-scale structure is thought to have arisen from the gravitational instability and collapse of regions initially denser than average; such regions expand more slowly than the universe as a whole. The minute perturbations laid down quantum mechanically produce relative density fluctuations, whose fate depends on the balance between two effects: firstly, the self-gravitation of matter in the over-dense region, which tends to cause collapse, and secondly, the pressure support of hydrostatic equilibrium, which serves to prevent collapse. A key cosmological parameter is the Jeans mass, which plays the role of a border between these two effects: if a region exceeds the Jeans mass, it will collapse. Similarly, the horizon distance (at any moment in the chronology of the universe, the maximum distance a signal could traverse in the time that had passed up to that moment) plays a critical role in stability against collapse; an over-dense region exceeding the horizon distance can't support itself. Something interesting happens at recombination, at about 3000 K and 300,000 years after the big bang: the Jeans mass falls sharply to about the mass of a globular cluster, because before recombination the interaction of photons with free electrons contributed to the overall pressure. After the recombination epoch, when the electrons cease to interact with photons, the only protection against collapse comes from the internal pressure of the gas.
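The post-recombination Jeans mass can be estimated directly. A sketch using one standard convention for the formula, M_J = (5kT / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2), with an assumed post-recombination temperature of 3000 K and a baryon density of order 1e-18 kg/m^3 (both illustrative figures, not from the post):

```python
import math

# Jeans mass in one common convention: gas hotter or more tenuous than
# this threshold resists gravitational collapse.
K_B = 1.381e-23      # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.673e-27      # hydrogen atom mass, kg

def jeans_mass(temperature_k, density_kg_m3, mu=1.0):
    """Jeans mass in kg for gas of mean molecular weight mu."""
    thermal = (5 * K_B * temperature_k / (G * mu * M_H)) ** 1.5
    geometric = (3 / (4 * math.pi * density_kg_m3)) ** 0.5
    return thermal * geometric

# Illustrative post-recombination values: T ~ 3000 K, rho ~ 1e-18 kg/m^3.
m_sun = 1.989e30
print(jeans_mass(3000, 1e-18) / m_sun)  # of order 1e5-1e6 solar masses
```

The answer lands around the mass of a globular cluster, which is exactly the sharp drop at recombination described above: once photons stop propping up the pressure, far smaller clumps can collapse.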
However, gravitational collapse of cold (non-relativistic) dark matter seems to have seeded density fluctuations before recombination; the dominant influence on baryons is the gravitational attraction of regions which had acquired over-densities of cold dark matter. This means that baryons were drawn into those collapsing clouds of dark matter, kick-starting galaxy formation: a hierarchical (bottom-up) process whereby cold dark matter drew condensations together against the overall expansion of the universe, at redshifts between the orders of 100 and 40. Cool gas was drawn into dark matter halos and settled into well-defined disks to produce the first spiral galaxies; this provides an intuitive reason why ellipticals have no young stars, because only where gas can collect and coalesce can star formation proceed. After recombination, the decoupling of photons from baryons allowed photons to travel unhindered, making the universe transparent and ushering in a period of darkness (the dark ages). The dark ages ceased some 400 million years later (reionisation) with the inception of the first generations of galaxies and other objects (quasars and Pop. III stars) that emit UV radiation, forming an initial ionised portion of cosmic gas which grew until the complete ionisation of hydrogen. This highlights a point where ionised gas became just as important as cold dark matter in structure formation.
Wednesday, 10 July 2013
Magnetic Monopoles- Anomaly or Possibility?
Like poles repel and opposites attract. Such is an elementary rule of thumb and one of the basic properties of magnetism: a magnet always has two 'inseparable' poles, north and south. Yet there is no fundamental reason why this should be the case. Why does a magnet always have two poles? Why can't the field lines of the magnetic field have a terminating end? Why can't a magnet have only one pole? Since electric field lines terminate on electric charges, it seems as though there are simply no magnetic charges. Case closed? Not quite. In classical electrodynamics, Maxwell's equations have an elegant symmetry: the electric-magnetic duality (ensuring the electric and magnetic fields behave identically). This symmetry appears broken because no magnetic charges have been found, but the existence of monopoles would resolve this anomaly and restore the elegant symmetry. In quantum mechanics, electromagnetism is formulated in terms of scalar and vector potentials rather than electric and magnetic fields, and their introduction seems to break the duality. Because electromagnetism has an abelian U(1) symmetry, one can perform gauge transformations to obtain an unlimited number of potentials giving rise to the same fields; however, the vector potential seems to prohibit magnetic charges because the divergence of the curl of a vector field vanishes identically. Dirac devised a means of applying a vector potential to construct a monopole, a means similar in spirit to Faraday, who used a long magnet in a mercury-filled vessel so that one pole was beneath the surface while the pole above acted as a monopole. The existence of monopoles can explain the quantisation of electric charge, and hence Dirac envisaged a semi-infinite solenoid with an end that possessed a non-zero divergence (thus acting as the monopole) and Dirac strings (infinitely thin flux tubes connecting two monopoles).
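The link between monopoles and charge quantisation is Dirac's condition; in Gaussian units it reads e*g = n*hbar*c/2, so the smallest allowed magnetic charge is g = hbar*c/(2e) = e/(2*alpha). A quick sketch of the arithmetic (the statement of the condition is standard; the units convention is an assumption here):

```python
# Dirac quantisation condition in Gaussian units: e*g = n*hbar*c/2.
# Using alpha = e^2/(hbar*c), the minimal magnetic charge is
# g = hbar*c/(2e) = e/(2*alpha).
ALPHA = 1 / 137.035999  # fine-structure constant (dimensionless)

g_over_e = 1 / (2 * ALPHA)
print(g_over_e)  # ~68.5: a monopole's charge dwarfs the electron's
```

A single monopole anywhere in the universe would force every electric charge to be an integer multiple of a fundamental unit, which is why Dirac found the idea so compelling.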
Moving on to GUTs: the Weinberg-Salam unification incorporates a U(1) x SU(2) symmetry which is broken by the Higgs field at low energies; a simpler analogue is the Georgi-Glashow SO(3) model. 't Hooft and Polyakov found that this model admits a solution carrying both electric and magnetic charge; their topologically stable solution involves a Higgs field of fixed magnitude whose direction varies from point to point. For the field to remain continuous, the point-like defect at the origin of the field can't be in the vacuum state; thus the origin is a clump of energy corresponding to a massive particle (since the Higgs field vanishes at the origin, the SO(3) symmetry is left unbroken there). Interestingly, such a particle possesses magnetic charge: because electromagnetism arises from oscillations around the Higgs field vector, one can compute the magnetic field, and the 't Hooft-Polyakov solution turns out to be a monopole. Even though monopoles haven't been directly observed, let alone discovered, they play an important role in modern physics, especially in explaining the phenomenon of quark confinement in QCD. At extremely low temperatures, some materials become superconductors, allowing current to flow without resistance but expelling any magnetic flux (the Meissner effect); if we could put a monopole-antimonopole pair into a superconductor, what would happen? Since magnetic flux is expelled, the resolution is that an Abrikosov flux tube forms between the pair, so the flux is confined to this tube. And since the flux tube has a nonzero energy per unit length, the energy needed to separate the pair increases linearly with distance. Finally, monopoles are important in cosmology because GUTs predict they were produced in the early universe; the Kibble mechanism is a likely candidate for how that happened.
It proposes that the early universe contained domains with arbitrary yet uniform field directions; the Higgs field tries to interpolate continuously between neighbouring domains, but in some configurations it is forced to vanish at a point, producing a topological defect. Where several such domain walls meet, a monopole can be made.
Sunday, 7 July 2013
Astrometry- The Cosmic Distance Ladder
Van Gogh's painting, 'The Starry Night', resembles not only the
Whirlpool galaxy but it proves its astronomical worth in a number of
ways. For starters, one can deduce it was painted in the predawn hours
due to the inclination of the moon to the horizon and that the brightest
of its 'stars' is in fact Venus, attesting to the fact that the planets
are usually the first to emerge in the evening. But what makes this
painting sacred is not what it shows but what it represents: astrometry.
Our obsession with the heavens means that we can measure distances with
greater rigour and precision, from gnomons and sundials to standard
candles and gravitational lensing; such are the rungs of the cosmic
distance ladder. The earth is the first rung of that ladder. Aristotle
and others provided the first indirect arguments that the earth is round
using the moon. He knew that lunar eclipses happened when the moon was
directly opposite the sun (opposite constellation of the Zodiac), so
eclipses happen because the moon falls into the earth's shadow. But in a
lunar eclipse, the shadow of the earth on the moon is always a circular
arc, and since the only shape that always casts a circular shadow is the
sphere, he inferred the earth was round. If the earth were circular yet flat like
a disk, the shadows would be elliptical. Similarly, Eratosthenes
calculated the radius of the earth to be 40000 stadia. Having read of a
well in Syene that reflected the overhead sun at noon of the summer
solstice (June 21) because of its location on the tropic of Cancer, he
used a gnomon (in Alexandria) to measure the deviation of the sun from
the vertical as 7 degrees. Knowing the distance from Alexandria to Syene
to be some 5000 stadia, he had all he needed to compute the earth's radius.
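Eratosthenes' arithmetic, redone with the figures quoted above (a 7 degree shadow angle and 5000 stadia between the two cities):

```python
import math

# The 7 degree arc between Alexandria and Syene spans 5000 stadia, so the
# full 360 degree circumference follows by proportion.
angle_deg = 7.0
arc_stadia = 5000.0

circumference = arc_stadia * 360.0 / angle_deg
radius = circumference / (2 * math.pi)
print(round(radius))  # ~40900 stadia, in line with the quoted 40000
```
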
Aristotle also argued the moon was a sphere (rather than a flat disk)
because the terminator (boundary of the sun's light on the moon) was
always an elliptical arc. The only shape with such a property is the
sphere; were the moon a flat disk, the terminator would instead appear as a straight line.
Aristarchus determined the distance from the earth to the moon as 60
earth radii (57-63 earth radii in actuality). He also computed the radius of
the moon as 1/3 the radius of the earth. Aristarchus had knowledge of
lunar eclipses being caused by the moon passing through the earth's
shadow and since the earth's shadow is 2 earth radii wide (diameter) and
the maximum lunar eclipse lasted for 3 hours, it meant that it took 3
hours for the moon to cross 2 earth radii. And it also takes around 28
days for the moon to go around the earth, sufficient to compute the moon's
distance. In addition, the radius of the moon in terms of the distance to the
moon was determined by comparing the time the moon takes to set (2 minutes)
with the time it takes to make a full apparent rotation (roughly 24 hours). Next,
the Sun's radius was measured by Aristarchus by relying on the moon.
Having computed the radius of the moon as 1/180 the distance to the
moon, he knew that during a solar eclipse that the moon covered the sun
almost perfectly; using similar triangles, he inferred that the radius of
the sun was also 1/180 the distance to the sun. But to determine the
distance to the sun, he knew that half moons happened when the moon
makes a right angle between the earth and sun, full moons occurred when
the moon was directly opposite the sun and new moons occurred when the
moon was between the earth and sun. This meant that half moons occur
slightly closer to new moons than to full moons. Simple trigonometry
could then be used to compute the distance to the sun as 20 times
further than the moon; but correcting the timing by a discrepancy of about
1/2 hour shows the actual distance is 390 times the distance from the
earth to the moon. This
also led to the conclusion that the Sun was enormously larger than the
earth and the first heliocentric proposal, later adapted by Copernicus.
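Aristarchus' eclipse reasoning above can be redone in a few lines with the quoted figures (shadow 2 earth radii wide, crossed in 3 hours, one orbit in about 28 days); the crude inputs land in the right range of the quoted 60 earth radii:

```python
import math

# The moon crosses the earth's shadow (taken as 2 earth radii wide) in
# 3 hours, and one orbit takes about 28 days; the orbit's circumference
# and hence its radius follow from the implied speed.
shadow_width_re = 2.0            # earth radii
crossing_hours = 3.0
orbit_hours = 28 * 24.0

speed = shadow_width_re / crossing_hours     # earth radii per hour
circumference = speed * orbit_hours          # orbit length in earth radii
distance = circumference / (2 * math.pi)
print(round(distance))  # ~71 earth radii: the right scale (true value ~60)
```

That such rough inputs get within 20% of the truth is a testament to how far careful geometry can stretch naked-eye astronomy.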
Continuing our trek up the cosmic distance ladder, the rung of the planets and the speed of light is quite a story. The ancient astrologers realised that all the planets lie on the ecliptic (a plane), because they only ever move through the Zodiac (the set of 12 constellations around the earth). Ptolemy produced inaccurate results with his geocentric model, while Copernicus reached highly accurate conclusions, initially poring over the annals of the ancient Babylonians, who knew that the synodic period of mars repeated itself every 780 days. The heliocentric model allowed Copernicus to recover mars' own angular velocity: knowing that the earth takes 1 year to go around the sun, he could subtract the implied angular velocities (1/365 - 1/780 of a revolution per day) to find that the sidereal period of mars was 687 days. Copernicus also determined the distance of mars from the sun to be about 1.5 AU (astronomical units) by assuming circular orbits and using measurements of mars' location in the Zodiac across various dates. Brahe's more precise observations deviated from the Copernican predictions; Kepler maintained that this was because the orbits were elliptical, not the perfect circles Copernicus had assumed. Kepler would attempt to compute the orbits of the earth and mars simultaneously, and since Brahe's data only gave the direction of mars from the earth and not the distance, he needed to figure out the orbit of the earth using mars. Working under the assumption that mars was effectively fixed at 687-day intervals while the earth moved in its orbit, Kepler used triangulation on Brahe's data to compute the earth's orbit relative to any position of mars. This allowed the more precise calculation of the AU by parallax (measuring the same object from two different locations on earth), especially during the transits of Venus across the sun, observed from multiple places (including Cook's voyage).
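The subtraction of angular velocities described above is a one-liner: the earth gains a full lap on mars every synodic period (780 days), so mars' own (sidereal) angular velocity is the earth's minus the relative one:

```python
# Sidereal period of mars from its synodic period: the earth laps mars
# once per synodic period, so 1/T_sidereal = 1/T_earth - 1/T_synodic.
earth_year_days = 365.25
synodic_days = 780.0

mars_angular_velocity = 1 / earth_year_days - 1 / synodic_days  # revs/day
sidereal_days = 1 / mars_angular_velocity
print(round(sidereal_days))  # ~687 days, as quoted
```
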
But the anomaly of the precession of Mercury (in which the points of aphelion and perihelion progressively wind around in a circular manner) could not be reconciled with Newtonian mechanics, so general relativity was invoked. The first attempt at accurately measuring the speed of light (c) was by Rømer, who observed Io, one of Jupiter's moons, completing an orbit every 42.5 hours. Rømer noticed that when the Earth was closest to Jupiter the orbit ran slightly ahead of schedule, but when the Earth was on the opposite side of its orbit it lagged by around 20 minutes. Huygens inferred that this was because of the extra distance (2 AU) the light from Jupiter had to travel, so light crosses 2 AU in about 20 minutes; from this a speed on the order of 300,000 km/s can be computed.
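Huygens' estimate is easy to reproduce with the modern value of the AU (the 2 AU distance and 20-minute lag are from the argument above; the AU in kilometres is a modern figure, not one Huygens had):

```python
# Sketch of Huygens' speed-of-light estimate from Roemer's ~20-minute lag.
AU_km = 1.496e8    # kilometres per astronomical unit (modern value)
lag_s = 20 * 60    # the ~20-minute delay, in seconds

c_est = 2 * AU_km / lag_s  # light crosses 2 AU (Earth's orbital diameter)
print(f"{c_est:.0f} km/s")  # ~250,000 km/s: the right order of magnitude
```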
Thursday, 4 July 2013
Entropy- Life and The Second Law
What do life, gravity and the second law of thermodynamics have in common? Entropy, of course! The ground rule that prohibits perpetual motion machines and one hundred percent energy efficiency is often dubbed 'the degeneracy principle', but its implications for physics and biology are worth a closer look. Entropy roughly equates to the loss of useful energy, or the degree of disorder present in a system: when a piston oscillates in a cylinder, its motion represents useful energy, while the residual heat is disordered because it is the random motion of particles. The second law posits that in a closed or isolated system the total entropy cannot decrease, nor does it rise without limit; there is a state of 'maximum chaos', or maximum entropy, that represents thermodynamic equilibrium. Once a system has reached that stage, it is a point of no return. Think of two bodies, one hot and one cold: heat moves from the hot body to the cold until both reach a uniform temperature (equilibrium). The initial state can be considered more organised, and thus of lower entropy, than the final state, in which the heat has been dispersed as chaotically as possible among the molecules (heat always flows from hot to cold).
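The hot-to-cold example can be checked with the Clausius formula dS = Q/T (the numbers below are purely illustrative):

```python
# Toy check that heat flowing hot -> cold raises total entropy (dS = Q/T).
Q = 100.0        # joules transferred (illustrative)
T_hot = 400.0    # kelvin
T_cold = 300.0   # kelvin

dS_hot = -Q / T_hot    # the hot body loses entropy
dS_cold = Q / T_cold   # the cold body gains more, since T_cold < T_hot
dS_total = dS_hot + dS_cold
print(dS_total > 0)    # True: the second law is respected
```

Running the flow the other way (cold to hot) would make `dS_total` negative, which is exactly what the second law forbids in an isolated system.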
Coming to living systems, there is no contradiction with the second law, just as a fridge moves heat from cold (its interior) to hot (the kitchen): the fridge is an open system, and so is the living cell. Life can go on evolving, filtering out deleterious mutations via natural selection and exporting its accumulated entropy, as long as the environment provides free energy. But unlike a state of equilibrium (maximum entropy), a state of disequilibrium is unstable, and natural processes constantly push entropy towards its maximum; there are, however, means of circumventing this tendency. Imagine a mixture of air and fuel vapour: it does not have maximum entropy, and igniting it would release heat and increase entropy, yet left alone it can persist indefinitely. Such a system, stable until triggered yet poised to release its energy, is an example of a 'metastable' state. Living systems rely on metastable sources of energy and exploit enzymes and catalysis to circumvent the obstacles to releasing that energy from inorganic systems.
The question of the origin of the biological information embedded in DNA and the associated nucleic acids may be framed via Shannon's information theory, which treats information as a form of 'negative entropy' and random noise and interference as disorder itself; the second law may then be reformulated as an increase in entropy and a decrease in the information content of a system. But, as mentioned, living cells, like fridges, are not closed systems, so in principle the information content of a cell can increase by drawing on information present in its surroundings (the source of biological information is the environment). Thus processes central to life and its continuity, such as metabolism, reproduction and locomotion, are based on the flow of information between the living system and the environment, and exothermic heat (body heat) can be thought of as a means of exporting entropy.
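Shannon's measure makes the "noise = disorder" claim concrete: a maximally random source has maximum entropy and tells you least about what comes next. A small sketch (the example distributions are my own, for illustration):

```python
import math

# Shannon entropy H = -sum(p * log2(p)): a uniform ("noisy") source has
# maximum entropy, a predictable source has low entropy (high information
# per observed deviation from expectation).
def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4               # maximally disordered 4-symbol source
skewed = [0.97, 0.01, 0.01, 0.01]  # highly predictable source

print(shannon_entropy(uniform))    # 2.0 bits, the maximum for 4 symbols
print(shannon_entropy(skewed))     # well under 1 bit
```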
But the very origin of information itself is still in question: if it spontaneously appeared one day, that would be tantamount to a reduction in the total entropy of the universe, and thus a violation of the second law. So the information must have been there from the very beginning. Yet the cosmic microwave background (CMB) has an incredibly uniform spectrum, which corresponds to a state of thermodynamic equilibrium; that is a state of maximum entropy, which equates to minimum information. How can that be, if the second law forbids the total information content of the universe from increasing with time? Where did the present 'extra' information come from? In other words, if the universe began at maximum entropy (equilibrium), how did it reach its current phase of disequilibrium?
The answer is surprisingly gravity!
If you put gas in a box and leave it there, it will reach a state of equilibrium, but gas in interstellar space is subject to gravitational forces and gradually forms stars via accretion, releasing free energy (negative entropy). So just because the CMB is uniform does not mean the early universe was in equilibrium. When the large-scale structure of our universe was forming, the gravitational 'clumping' that drove star and galaxy formation opened an 'entropy gap' ∆S = Smax − Sact, the difference between the maximum possible entropy and the actual entropy; stars like our Sun are steadily filling this gap with their light. All sources of negative entropy, or free energy, can therefore be traced back to the entropy gap that gravity created.
Tuesday, 2 July 2013
Singularity Theorems- A Crash Course
In high school, we learn to prove triangles congruent from first principles; similarly, in cosmology, indirect yet powerful arguments can be made on the basis of little or no dynamics. A singularity in general relativity is characterised by 'geodesic incompleteness', where time-like and null geodesics cannot be extended to the infinite past or future but terminate at a boundary after a finite proper time. The Friedmann equations hold that in a homogeneous and isotropic universe (such as ours, on large scales), there was once a time when the distances between particles were zero. And since Einstein's field equations relate a given distribution of energy and momentum to the geometric properties of spacetime (particularly its curvature), the infinite density implied by that initial zero volume causes the curvature to become infinite. Crucial to understanding the singularity is the notion of a geodesic: the worldline of a particle subject only to gravitation (the curvature of spacetime) and no other force. Because such a worldline is unaccelerated, a geodesic may also be defined as a worldline along which the four-acceleration is zero (in the language of special relativity). The Moon orbiting the Earth is simply tracing the shortest available path through curved spacetime, i.e. a geodesic. A geodesic is to a sphere what a straight line is to a flat surface; both are the shortest distance between two points, but unlike straight lines, geodesics on a sphere are not infinite in length (they are closed and circle back on themselves) and they are never parallel. On a globe, the equator and the lines of longitude are geodesics (great circles), whereas the other circles of latitude are not. If we follow the path of a particle back in time to the point of infinite density, the geodesic terminates: the particle ceases to exist and is no longer part of spacetime. Identifying such inextendable geodesics is how singularity theorems identify singularities.
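The globe analogy can be made computational: the geodesic between two points on a unit sphere is an arc of a great circle, whose length follows from the spherical law of cosines. A small sketch (the test points are my own illustration):

```python
import math

# Central angle of the great-circle (geodesic) arc between two points on a
# unit sphere, via the spherical law of cosines. Latitudes/longitudes are
# in radians; multiply the angle by the sphere's radius for a distance.
def geodesic_angle(lat1, lon1, lat2, lon2):
    return math.acos(
        math.sin(lat1) * math.sin(lat2)
        + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1)
    )

# A quarter of the equator subtends 90 degrees of arc
angle = geodesic_angle(0.0, 0.0, 0.0, math.pi / 2)
print(math.degrees(angle))  # ~90.0
```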
So if the universe is expanding, indeed accelerating in its expansion, does it follow that we can extrapolate back to a point when all galaxies and clusters were packed into a single point? In the 1960s it was thought that any form of matter would obey the strong energy condition, ρc² + 3p ≥ 0 (where ρ is the energy density and p the pressure), which guarantees that gravity is attractive and encourages geodesics to converge. And because the universe is not perfectly homogeneous and isotropic as the Friedmann-Robertson-Walker metric assumes (it is only so on large scales), it seemed theoretically possible to avoid singularities if particles of mass missed each other when their worldlines were traced back in time (a 'big bounce'). Hawking and Penrose, assuming the strong energy condition (plus causality conditions on the global structure, such as no closed time-like curves, to rule out time travel), proved that geodesic incompleteness occurs even in spacetimes that are not completely homogeneous and isotropic; singularities are thus extremely generic, appearing in black holes as well. But inflationary spacetimes do not obey the strong energy condition, in fact they violate it, so the Hawking-Penrose singularity theorems do not apply to them.
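The violation mentioned above is easy to see with the inequality itself. A minimal sketch in units with c = 1 (the example equations of state are standard textbook cases, chosen by me for illustration):

```python
# Checking the strong energy condition rho*c^2 + 3p >= 0 (units with c = 1).
def satisfies_sec(rho, p):
    return rho + 3 * p >= 0

dust = (1.0, 0.0)        # pressureless matter: gravity attracts
de_sitter = (1.0, -1.0)  # inflationary vacuum, p = -rho: gravity repels

print(satisfies_sec(*dust))       # True
print(satisfies_sec(*de_sitter))  # False: inflation violates the SEC
```

This is precisely why the Hawking-Penrose theorems, which assume the condition holds, say nothing about inflationary spacetimes.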
The kinematic incompleteness theorem of Borde, Guth and Vilenkin proved that inflationary spacetimes are not past geodesically complete, which under general relativity indicates an initial singularity. They showed that an integral of the expansion rate taken along a geodesic from the distant past to now is bounded, the only condition assumed being that the average Hubble rate is greater than zero; the argument is purely geometric and assumes no dynamics such as energy conditions. The momentum of a test particle travelling on a geodesic is red-shifted in an expanding universe, so extrapolating into the past blue-shifts it; their theorem showed that the blue-shift reaches the speed of light within a finite proper time (or affine parameter, for photons), demonstrating the trajectory to be geodesically incomplete. Inflation is an exponential phase of de Sitter expansion, and full de Sitter space cannot be past-eternal: it would have to be preceded by a contracting phase in which tiny perturbations would grow and prevent the future expansion of the universe. Eternal inflation and cyclic models (which lead to heat death) are both past geodesically incomplete, and the emergent model (which assumes a closed, static universe in the asymptotic past) can collapse quantum mechanically, so none of them can be extended infinitely into the past; even higher-dimensional brane models cannot. So it's safe to say the universe had a beginning... that is, if you don't consider the subtleties.
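The blueshift argument can be illustrated with a toy calculation: in exponential (de Sitter) expansion a test particle's momentum scales as 1/a, so tracing it back in time the momentum explodes after only a handful of Hubble times. This is only a numerical cartoon of the behaviour the theorem formalises, with an arbitrary Hubble rate:

```python
import math

# Toy blueshift in de Sitter expansion a(t) = exp(H*t): a particle's
# momentum scales as p ~ 1/a, so going back in time (t < 0) it blue-shifts
# exponentially, hinting the geodesic cannot be extended indefinitely.
H = 1.0        # Hubble rate in arbitrary units (assumption for illustration)
p_now = 1.0    # momentum today, arbitrary units

def momentum_at(t):
    return p_now / math.exp(H * t)

print(momentum_at(-10))  # ~22026: enormous blueshift a mere 10 Hubble times back
```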
Neutrinos- Massive After All
The name is Italian for 'little neutral one', and the recent discovery that neutrinos have mass comes as a revelation for particle physics. They are produced by the trillions in our Sun, play a subtle role in continental drift, and are released in vast numbers by dying stars in supernovae. Pauli first proposed them in the 1930s to account for energy conservation in beta decay (whereby a neutron becomes a proton and simultaneously emits an electron), suggesting they carry spin ħ/2. Then in the 1950s, Goldhaber measured the negative helicity of the neutrino (its spin along its direction of motion, also called 'handedness') using electron capture in europium-152, which decays into a neutrino and samarium-152, the latter emitting a gamma ray; by conservation of angular momentum, comparing the neutrino and the gamma ray emitted along the line of flight reveals the left-handedness of the neutrino. According to special relativity, an observer moving faster than a massive neutrino (which must travel below light speed) could overtake it and see it spinning in the opposite sense, i.e. right-handed; since right-handed neutrinos were never detected, neutrinos were inferred to be massless. Or so it seemed... Particles gain mass by interacting with the Higgs, and quantum field theory teaches that the seemingly 'vacuous' vacuum is in fact permeated by the Higgs field; when a particle interacts with the Higgs (a spinless, scalar field), it flips its handedness. Popular science writers like to describe the Higgs field as a sort of 'molasses' that slows particles to endow them with mass, but that is a flawed analogy: fields do not slow particles, and the quantum vacuum has no 'stickiness'. It is natural to suppose that the known left-handed neutrinos could interact with the Higgs, become massive and turn right-handed; but again, since no right-handed neutrinos had been detected, it was inferred that neutrinos are massless.
But recent results on neutrino oscillations (whereby electron, muon and tau neutrinos convert into one another as they travel) from Japan's Super-K observatory show that neutrinos do have mass. Since particles can behave like waves, an oscillating neutrino is a mixture of three neutrino waves (flavours), and these can only oscillate if the component waves combine to form 'beats' in the waveform; such beats are the outcome of mass, so if we see neutrinos oscillating (and we do), they have mass. But the standard model runs into trouble accommodating massive neutrinos, so the theory must be extended; enter the Dirac and Majorana neutrinos. The Dirac picture posits that right-handed neutrinos are elusive because their interactions are weaker by about 30 orders of magnitude; a related idea from string theory has right-handed neutrinos stuck in extra, compactified dimensions. Majorana neutrinos instead require a lack of differentiation between matter and antimatter (neutrino and antineutrino are the same particle), and so do not rely on feeble interactions to explain the mass. Going back to the fast-moving onlooker: what if overtaking a neutrino reveals a right-handed antineutrino? Then neutrinos can acquire mass via a 'see-saw mechanism': when a left-handed neutrino interacts with the Higgs, it becomes a heavy right-handed neutrino of mass M, apparently violating energy conservation; but by the uncertainty principle, ∆t ~ ħ/Mc², such a quantum state may last for a time ∆t, after which it becomes a left-handed neutrino again by interacting with the Higgs once more.
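The "beats require mass" point is captured by the standard two-flavour oscillation formula, where the oscillation amplitude depends on the squared-mass difference ∆m²: set it to zero and the oscillation vanishes. A sketch with roughly atmospheric-neutrino numbers (the specific values are illustrative, not from the post):

```python
import math

# Two-flavour survival oscillation: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km, E in GeV (the conventional units).
def osc_prob(theta, dm2, L, E):
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L / E) ** 2

theta = math.radians(45)  # near-maximal mixing (approximate atmospheric value)
dm2 = 2.5e-3              # eV^2, approximate atmospheric mass splitting

print(osc_prob(theta, dm2, L=500, E=1.0))  # large: massive neutrinos oscillate
print(osc_prob(theta, 0.0, L=500, E=1.0))  # 0.0: no mass difference, no beats
```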
This has many implications for big bang cosmology, especially leptogenesis and the lepton-number violation needed to create the lepton asymmetry: as the early universe cooled, massive right-handed neutrinos ceased to transform into light left-handed ones, and since Majorana neutrinos are simultaneously their own antiparticles, they decayed into right-handed antineutrinos and left-handed neutrinos along with Higgs bosons.
Sex- A Gene's Eye View
Speaking of the birds and the bees, sex is ubiquitous. What remains elusive is why: why it persists as such a 'popular' means of reproduction and an engine of evolutionary change. The standard Darwinian picture is that reproduction yields more progeny than the environment can sustain while intraspecific diversity is abundant, resulting in the 'struggle for existence'. Descent with inherited modification follows, and speciation creates new populations that interbreed within themselves yet are reproductively isolated from others (species). The phylogenetic prevalence of sexual reproduction demands an explanation: what evolutionary advantages does sex offer over asexual breeding? From a gene's eye view, accumulated mutation, genetic drift and negative epistasis should all be taken into account; yet superficially, the disadvantages and costs of sex are easier to recognise than the actual evolutionary benefits. Maynard Smith illustrated the so-called 'two-fold cost' of sex: in sexual species, males merely contribute genes to the progeny, so a mutation causing females to breed asexually should ultimately triumph, its prevalence doubling every generation. Secondly, sex requires the union of male and female gametes, which can be ecologically taxing when population densities are low, a burden asexual organisms do not face. Progeny of sexual reproduction also suffer recombinational load, whereby mutually adapted genes are separated by recombination, reducing overall fitness. So what are the benefits of sex? In brief, the evolutionary advantages are both direct and indirect, and recognising them is complicated by sexual selection and anisogamy (the difference in size between male and female gametes).
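Maynard Smith's doubling argument can be seen in a toy calculation (the population sizes are made up; the point is the relative growth rate):

```python
# Minimal sketch of the 'two-fold cost' of sex: an asexual mutant's lineage
# is all reproducing daughters, so its prevalence doubles each generation
# relative to sexual lineages, which "spend" half their offspring on males.
sexual = 1000.0  # sexual females: at replacement, half their offspring are
                 # males who contribute genes but bear no young themselves
asexual = 1.0    # a single parthenogenetic mutant female

for _ in range(10):
    asexual *= 2  # every offspring is a reproducing daughter

print(asexual)    # 1024.0: the mutant lineage overtakes in about 10 generations
```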
Nevertheless, the filamentous fungus Emericella nidulans shows a slower decline in fitness than its asexual counterparts when its progeny are exposed to elevated rates of mutation accumulation, an example of a 'selection arena' in which strict selection against offspring carrying lethal mutations is maintained. Volvox carteri, a multicellular green alga, can be induced into sexual reproduction by raising the levels of reactive oxygen species (ROS), which may hint at a relationship between free radicals and the early origin of sex in eukaryotic life forms. The indirect advantages of sex seem to reinforce the response to directional selection, where phenotypic change is linear; the Fisher-Muller hypothesis holds that sexual recombination accelerates a species' ability to adapt by stringing together beneficial mutations of different origins. Sex also protects against 'mutational meltdown' (in which asexual life forms accumulate irreparable deleterious mutations, the so-called 'Muller's ratchet') via background selection, separating deleterious alleles from beneficial mutations; a similar alleviation of the load of deleterious alleles is observed in segmented bacteriophages. Finally, the negative epistasis exposed by sexual recombination, in which two deleterious alleles together lower fitness more than each would alone, allows such alleles to be purged from the population more efficiently, conferring a further advantage on sex and shaping the architecture and intricacy of the genome.