Thursday, December 5, 2013

stats: the covariance matrix

The covariance matrix is the N-dimensional generalisation of the scalar variance in 1 dimension. Here's a simple explanation of it, which I needed in order to code up a test case for fitting a 2D Gaussian with intrinsic scatter.
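As a minimal sketch of such a test case (all numbers invented): draw correlated samples from a chosen covariance matrix and recover the matrix from the data with np.cov.

```python
import numpy as np

# A 2x2 covariance matrix: variances on the diagonal,
# the covariance between the two dimensions off the diagonal.
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])
mean = np.zeros(2)

# Draw many samples from the corresponding 2D Gaussian...
rng = np.random.RandomState(0)
samples = rng.multivariate_normal(mean, cov, size=100000)

# ...and estimate the covariance matrix from the data.
# np.cov expects variables in rows, hence the transpose.
estimated = np.cov(samples.T)
print(estimated)
```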

astroML: calling hist

If you're trying to use astroML.plotting.hist and it spits blood saying something like:
slice1[axis] = slice(1, None)
IndexError: list assignment index out of range

check the namespace: astroML's hist should simply be called as hist(data), whereas ax.hist() and plt.hist() are the ordinary Matplotlib methods, which choke on astroML's extra options.

Wednesday, December 4, 2013

lit: The Habitable Epoch of the Early Universe

by A. Loeb.
That's a fantastically crazy paper. A. Loeb is one of the best-known names in areas as diverse as gravitational microlensing, black hole evolution, reionisation and the 21 cm signal, high-z GRBs, the Event Horizon Telescope, and many others. He has written several papers about wild ideas before: exploring the (sad and very distant) future of observational cosmology, biomarkers in the atmospheres of white dwarf planets, planets of hypervelocity stars, searches for artificially-illuminated objects in and beyond the Solar System, and cosmology measurements from hypervelocity stars, which show that not all is lost for future cosmologists.
In this paper he looks at the dawn of the Universe, when it was only ~15 million years old. Loeb points out that the temperature of the cosmic microwave background was roughly 0-30 C then, and therefore liquid water could have existed on any solid surface, meaning there might have been conditions suitable for life as we know it. In order to form any rocky planet, one first needs some massive stars to explode (to produce heavier elements like oxygen, iron, silicon, etc., which are what rocky planets are made of). At that epoch, that leaves the very first stars, formed in the very first, tiny dark matter haloes in the far tail of the density distribution. Calculations show that the number of such protogalaxies was incredibly small at these redshifts.
However, if we assume that the initial density distribution was not perfectly Gaussian (there are many theories explaining why that might have been the case, although Planck and other observations have found no evidence of non-Gaussianity yet), there might have been some haloes that had formed massive stars by that time.
It's a marvelous article, although AFAIK it takes much longer than a few Myr for rocky planets to assemble from the accretion disks of stars (and to cool down, given the heating from decaying radioactive elements...). But think of it: at some time in the history of the Universe, outer space was warm (and at least a million times denser than now).

Python: bootstrapping with sklearn

sklearn.cross_validation.Bootstrap returns indices of random bootstrap samples from your data. I am not yet familiar with the use of bootstrapping for two-sample comparison, so I'm using the means as a way to characterise the two distributions (that's better than a 2D K-S test, which doesn't exist anyway!). This is of course an incomplete statistic (I have reason to suspect that two of the samples I am working with have different skewnesses). I'm working with three sets of absolute r magnitudes (M_r) here.
Here's how it looks in practice:

import numpy as np
from sklearn import cross_validation

#let's call our sample array data. It can be N-dimensional.

#len(data) -- total number of data points in the dataset,
#nBoot -- number of bootstrap samples,
#train_size -- bootstrap sample size (a proportion of the whole sample, or just a number)

#First, an empty array where we will store the means of the bootstrapped samples:

means = np.empty(nBoot)

#Then we get the class instance, drawing half of the sample with replacement every time:

bs = cross_validation.Bootstrap(len(data), nBoot, train_size=0.5, random_state=0)

#Fill the means array while iterating over the bootstrap samples (indexed by train_index):

for i, (train_index, test_index) in enumerate(bs):
    means[i] = np.mean(data[train_index])



I've repeated it for all three distributions I'm interested in, and here is a histogram of all the bootstrap sample means. The difference is just what I expected: basically, some faint galaxies (with M_r > -20 or so) were manually rejected from the green and red distributions, so their mean is shifted towards brighter absolute magnitudes. It remains to be seen how important this is. If we think it's worth it, I'll try using other distribution statistics some other time.
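For the record, the same resampling can be sketched without sklearn, using numpy's random integer indices directly (the two M_r samples below are invented):

```python
import numpy as np

rng = np.random.RandomState(0)

def bootstrapMeans(data, nBoot, sampleSize):
    # Means of nBoot samples drawn from data with replacement
    means = np.empty(nBoot)
    for i in range(nBoot):
        idx = rng.randint(0, len(data), size=sampleSize)
        means[i] = np.mean(data[idx])
    return means

# Two made-up absolute magnitude samples with slightly different means:
blue = rng.normal(-20.5, 1.0, size=400)
red = rng.normal(-21.0, 1.0, size=400)

blueMeans = bootstrapMeans(blue, 1000, 200)
redMeans = bootstrapMeans(red, 1000, 200)
# Histogramming blueMeans and redMeans shows whether the
# distributions of the means overlap.
```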

Wednesday, November 20, 2013

lit: Yegorova et al. 2007

A detailed investigation of the dependence of the slope of TFR on the radius where the velocity is measured. They find that the scatter is smallest at R = 0.6 R_opt (see fig.5).

Tuesday, November 19, 2013

lit: Some aspects of measurement error in linear regression of astronomical data, Kelly 2007

Kelly 2007:
I describe a Bayesian method to account for measurement errors in linear regression of astronomical data. The method allows for heteroscedastic and possibly correlated measurement errors and intrinsic scatter in the regression relationship. The method is based on deriving a likelihood function for the measured data, and I focus on the case when the intrinsic distribution of the independent variables can be approximated using a mixture of Gaussian functions. I generalize the method to incorporate multiple independent variables, non-detections, and selection effects (e.g., Malmquist bias).

LaTeX: \include vs. \input

I was annoyed by the page break that appears whenever I \include{a portion of my document}. Using \input{} instead solves this.

Monday, November 18, 2013

lit: Tonini et al. 2013, on physically meaningful radius for velocity measurements

They define a 'corrected' disk scale-length that includes the bulge potential. Pay attention to this article anyway -- they add a dynamical classifier dimension to the TFR, which is a little bit akin to what I'm doing.

Thursday, November 14, 2013

Daily paper: The Tully-Fisher relations of early-type spiral and S0 galaxies

I gave a talk at our group meeting yesterday; this was one of several useful suggestions and references I got: Williams et al. 2010, The Tully-Fisher relations of early-type spiral and S0 galaxies.
Offsets which are ascribed to fading or brightening could in fact be due to systematic biases that differ between the measures of rotation used. For typical slopes of the TFR, a relatively small systematic difference in velocity of ∼0.1 dex introduces an offset in the TFR that is indistinguishable from a ∼ 1mag difference in luminosity.
Our conclusions in this section have three consequences for previous works that measure the TFR of S0 galaxies using stellar kinematics corrected for asymmetric drift (e.g. Neistein et al. 1999; Hinz et al. 2001, 2003, BAM06): (i) the asymmetric drift correction they use yields similar results to detailed Jeans modelling; (ii) the drift correction does not seem to introduce a systematic bias between spirals and S0s and, if applied to samples of both spirals and S0s, can indeed be used to compare the TFRs of the two classes; (iii) however, a TFR derived from asymmetric drift-corrected stellar kinematics cannot be directly compared to TFRs derived from global HI line widths or resolved emission line PVDs.
==== There's a really interesting section on the ADC, and a nice comparison of gas/HI/AD-corrected stellar kinematics. I should also check their section on TFR fitting, the intrinsic scatter in it, etc. In addition, I ought to take a look at the references therein, especially the introduction.

Tuesday, November 12, 2013

SQLITE: nested query for MoG

SELECT avg(gc.r_mag) FROM luminosity_errors2 AS gc WHERE gc.califa_id IN (SELECT t.califa_id FROM tidal_influence AS t WHERE t.Ftop > -0 AND t.Ftop < 98)

Thursday, November 7, 2013

lit: Face-on TFR with IFU

lit: TFR at z=2.2 with SINFONI IFU

Cresci et al. 2013 -- they also use genetic algorithms for velocity field modelling.

lit: The power spectrum dependence of dark matter halo concentrations

lit: The baryonic Tully–Fisher relation and galactic outflows

Dutton and van den Bosch 2009, also see Dutton et al. 2011.

lit: testing CDM with low-mass TFR

lit: The baryonic Tully-Fisher relation and its implication for dark matter halos

Trachternach et al 2009: baryonic TFR as constraint of elongation of DM haloes.

lit: IFU observations of galaxies wrt TFR

Courteau 2003 is important: they observe 14 barred galaxies with the SparsePak IFU in order to calibrate long-slit samples. To be discussed in talk/article!

lit: Skewness and kurtosis in BTFR

McGaugh 2012:
A persistent problem with interpreting astronomical data is whether Gaussian statistics actually apply. One reason for this is that systematic errors often dominate random ones. We can check whether this might be an issue here by examining higher order statistics like the skew (lopsidedness) and kurtosis (pointiness) of the distribution in the deviations perpendicular to the BTFR. If the data are well behaved (i.e., distributed normally), the skew and kurtosis should be small.
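This check is one line per statistic with scipy (a sketch on invented, normally distributed residuals; real BTFR residuals would replace the mock array):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.RandomState(42)
# Mock perpendicular deviations from the BTFR:
residuals = rng.normal(0.0, 0.2, size=500)

# For a normal distribution both should be near zero
# (scipy's kurtosis is the excess kurtosis by default).
s = skew(residuals)
k = kurtosis(residuals)
print(s, k)
```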

Monday, November 4, 2013

The difference between MCMC and simulations

from PSU astrostatistics lecture notes:
MCMC is a form of simulation, because MCMC generates random numbers and bases the inference on them. However, there are two important differences between MCMC and the simulation that we shall discuss here.
a) In MCMC we do not simulate from the target distribution directly. Instead, we simulate a Markov Chain that converges to the target distribution. Thus, MCMC performs approximate simulation.
b) The primary use of MCMC is in Bayesian statistics where we seek to simulate from the posterior distribution of the parameters. Classical simulation on the other hand simulates fresh data.
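Point (a) can be made concrete with a toy random-walk Metropolis sampler (my own sketch, targeting a standard Gaussian): we never draw from the target directly, only from a chain that converges to it.

```python
import numpy as np

rng = np.random.RandomState(1)

def logTarget(x):
    # Log-density of the target, a standard Gaussian (up to a constant).
    return -0.5 * x * x

nSteps = 20000
chain = np.empty(nSteps)
x = 0.0
for i in range(nSteps):
    # Propose a random step...
    proposal = x + rng.normal(0.0, 1.0)
    # ...and accept it with probability min(1, target(proposal)/target(x)):
    if np.log(rng.uniform()) < logTarget(proposal) - logTarget(x):
        x = proposal
    chain[i] = x

# After discarding burn-in, the chain approximates draws from N(0, 1):
samples = chain[1000:]
print(samples.mean(), samples.std())
```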

Monday, October 28, 2013

Uncertainties are distributions, not errorbar values

This clicked for me while preparing for a talk, though it's an obvious thing: too often we are content with just plotting errorbars and not specifying what they mean (for instance: 1-sigma uncertainties of a Gaussian distribution). This makes propagating the uncertainties (and using the data for model selection) practically impossible.
To put it another way: we don't have datapoints, we operate with slippery, diffuse clouds of probability. My current task is to take the full 2D uncertainty distributions of my datapoints and reuse them in the subsequent steps of a hierarchical analysis.
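A sketch of what this means in practice (all numbers invented): represent a datapoint by Monte Carlo samples of its uncertainty distribution and push the samples, not the errorbar, through a nonlinear step of the analysis.

```python
import numpy as np

rng = np.random.RandomState(3)

# A 'datapoint': a velocity with a 1-sigma Gaussian uncertainty.
v, vErr = 200.0, 10.0

# Represent the point by samples from its uncertainty distribution...
vSamples = rng.normal(v, vErr, size=100000)

# ...and propagate them through a nonlinear transformation,
# e.g. the log10 used on the TFR velocity axis.
logV = np.log10(vSamples)

# The propagated distribution, with no Gaussian approximation needed:
print(logV.mean(), logV.std())
```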

Wednesday, September 25, 2013

lit: Precise TFRs without galaxy inclinations

Precise TFRs without galaxy inclinations, by Obreschkow et al. A really interesting read about why photometric (axis-ratio-based) inclination values are not optimal, what the TFR is good for, fitting the TFR, etc. The main point of the article is recovering the TFR for samples with limited or no inclination data, but I haven't read it in detail yet.

Tuesday, September 17, 2013

SQLITE: nested query

SELECT r.califa_id, u.isoA, g.isoA, r.isoA, i.isoA FROM gc2_r_sky as r, gc2_u_sky as u, gc2_g_sky as g, gc2_i_sky as i where r.califa_id = u.califa_id and r.califa_id = g.califa_id and r.califa_id = i.califa_id and r.califa_id in (select r.califa_id from gc2_r_sky as r order by r.isoA desc limit 10) order by r.isoA desc

sorting .csv files

cat input.csv | sort -ug -t',' > output.csv worked better than sort -n option, which I had been using.

Monday, August 26, 2013

Fitting the Tully-Fisher relation

There's one thing I can't quite wrap my head around while reading the literature on the Tully-Fisher relation: why is the fit always performed in magnitude-log(velocity) space? It is certainly easier to fit a straight line y = mx + b than a power law y = c*x**z, but one still has to deal with asymmetric errors in two dimensions.
I'm currently trying to figure out a way to do the fit in semi-log space, namely magnitude-velocity space, using errors in both directions and MCMC.
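The two parameterisations can be compared on mock data (a toy sketch with invented numbers, ignoring the measurement errors that make the real problem hard): a straight-line fit in log-log space and a direct power-law fit recover the same exponent.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.RandomState(7)

# Mock power-law data, y = c * x**z with z = 4 (a TF-like slope),
# with some multiplicative scatter:
x = rng.uniform(100.0, 300.0, size=200)
y = 1e-3 * x**4 * rng.lognormal(0.0, 0.05, size=200)

# Straight line in log-log space: log y = z * log x + log c
slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)

# Direct power-law fit in linear space:
popt, pcov = curve_fit(lambda x, c, z: c * x**z, x, y, p0=[1e-3, 4.0])

print(slope, popt[1])  # both exponents come out close to 4
```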

Thursday, August 22, 2013

Math: Directional statistics, calculating average of angles in Python

How does one calculate the average of a distribution of angles? The mean of 360 deg and 0 deg is certainly not 180 deg.
I was quite certain that, while it is possible to get around the problem programmatically, there should be a host of mathematical methods for functions with circular domains. That field is directional statistics, which deals with statistics in certain non-Euclidean spaces.
In a nutshell, here's how I calculated the average of angles in Python (in radians!):

import math
import numpy as np

def getCircularMean(angles):
    #Mean direction: atan2 of the mean sine and the mean cosine
    sineMean = np.mean(np.sin(angles))
    cosineMean = np.mean(np.cos(angles))
    return math.atan2(sineMean, cosineMean)
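A quick sanity check of the idea, converting degrees to radians and back (the angles are arbitrary):

```python
import numpy as np

# Angles whose naive mean is misleading: (10 + 20 + 350)/3 = 126.7 deg.
anglesDeg = np.array([10.0, 20.0, 350.0])
angles = np.radians(anglesDeg)

# Circular mean: atan2 of the mean sine and the mean cosine.
meanRad = np.arctan2(np.mean(np.sin(angles)), np.mean(np.cos(angles)))
meanDeg = np.degrees(meanRad) % 360.0
print(meanDeg)  # ~6.7 deg, on the short arc between 350 and 20 deg
```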


Wednesday, June 19, 2013

talk: assembly bias

I had volunteered to give a presentation on 'Detection of assembly bias' by Wang, Weinmann, de Lucia and Yang. It went pretty well today, here are the slides. It explains the difference between galaxy bias, halo bias and assembly bias. As one of my supervisors puts it, 'not good news'.

Friday, June 14, 2013

A colloquium by S. White

This morning I attended a talk on the Planck results by S. White, who contributed a lot to the current theories of cosmology and structure formation (White and Rees 1978, the NFW paper, etc.).
It was a brilliant talk, here are some things that stuck:
-- the strongest evidence we have for the existence of the dark matter does not come from rotation curves, but from CMB measurements instead!
-- the Planck team was able to do background subtraction very well because they had data from several channels: maps of synchro emission at radio frequencies and dust maps at FIR. Only 3% of the sky had to be interpolated.
-- Planck has the best dust maps of all time due to the previous point.
-- the mass map of the Universe from gravitational lensing measurements is incredible, and strongly correlates with cluster locations from the S-Z effect.
-- the spectral index n of the primordial power spectrum is not equal to 1 (this was news to me, I thought it should be very close), but is near 0.96.
-- sigma_8 is less than 1 too (0.83, if I remember correctly)
-- there are issues w/ Planck-WMAP-SPT inter-calibration
-- the CMB is the edge of the visible Universe -- this is obvious, but I hadn't thought of it this way before.
-- there still are anomalies: the cold spot, differences in the amount of structure between the hemispheres, and some structure that is compatible with a rotating Universe

Thursday, June 13, 2013

lit: Where do the dwarf galaxies go?

I've just attended a colloquium by J. Navarro of the NFW fame, which was really related to my current reading list. I'm writing an introductory chapter on hierarchical galaxy assembly for my thesis at the moment.
One unsolved problem of LCDM cosmology is the apparent lack of dwarf galaxies: simulations and statistical frameworks (such as EPS) predict that we should see a large number (hundreds) of dwarf galaxies around each bright galaxy such as the Milky Way, but we only know of a dozen. The same goes for Andromeda.
J. Navarro also showed some puzzling properties of the local dwarfs, for example the fact that their star formation histories are wildly different. Some formed all their stars 10 or so Gyr ago, some had a recent starburst, and some have double-peaked SFHs. He presented the work of his student which could explain it: the CLUES simulation shows that a combination of interaction with the collapsing cosmic web, mergers of dwarfs and reionisation stripping can in principle explain this. Some dwarfs lost their gas simply because they moved too fast through the cosmic web. The remnants of such galaxies may be just too faint for us to see.

lit: NFW halo profile properties, relation to TFR

I have a perhaps naive idea: to try to obtain the TFR from the halo-stellar mass distribution. Apparently that's an old thought, but back then they didn't have such a good halo-stellar mass relation, 'chewed' by AGN and stellar feedback processes at both ends! That's cool.

Sunday, June 9, 2013

Daily Paper #13: the σ_8 parameter, explained!

I'm writing the introduction for my thesis and reading the Book, 'Galaxy Formation and Evolution' by Mo et al. There is a clear and accessible explanation of what the σ_8 parameter really means:
In the linear regime the density field ρ is Gaussian, characterised by the scale-invariant Harrison-Zel'dovich spectrum, in which the power of a mode is P(k) ∝ k^n with n = 1. This ensures that the gravitational potential diverges neither on small nor on large scales.
In order to fully specify the density field in the early Universe we have to observationally constrain the amplitude of the H-Z spectrum. This amplitude sets the amount of clustering of structure, frequently estimated from the two-point correlation function.
When someone measures σ_8 from galaxy clustering, it means that they have calculated the variance (the deviation from the background density) of galaxy number counts within randomly placed spheres of radius R = 8h^-1 Mpc. For galaxies of Milky Way luminosity this variance is ≈ 1; however, it decreases sharply as the radii of the spheres approach 30h^-1 Mpc. This means that the Universe is inhomogeneous on small scales, but approaches homogeneity on larger ones.
Note that the cosmological parameter σ_8 and the observed galaxy number variance in spheres of radius 8h^-1 Mpc are not equivalent. The structures we observe today are very far in the non-linear regime.
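The counts-in-spheres measurement itself is easy to sketch. Here is a toy version on an unclustered (uniform random) catalogue, where the fractional variance reduces to pure shot noise (all numbers are arbitrary; real galaxies in 8 h^-1 Mpc spheres would give a value of order unity):

```python
import numpy as np

rng = np.random.RandomState(5)

# A toy 'galaxy catalogue': uniform, unclustered points in a box.
boxSize = 200.0
nGal = 20000
galaxies = rng.uniform(0.0, boxSize, size=(nGal, 3))

R = 8.0          # sphere radius
nSpheres = 3000
expected = nGal * (4.0 / 3.0) * np.pi * R**3 / boxSize**3

# Count galaxies in randomly placed spheres (centres kept away
# from the box edges for simplicity):
centres = rng.uniform(R, boxSize - R, size=(nSpheres, 3))
counts = np.empty(nSpheres)
for i in range(nSpheres):
    d2 = np.sum((galaxies - centres[i])**2, axis=1)
    counts[i] = np.sum(d2 < R * R)

# Variance of the fractional overdensity in the spheres;
# for an unclustered field this is just shot noise, ~1/expected.
fracVar = np.var(counts / expected - 1.0)
print(expected, fracVar)
```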

Thursday, June 6, 2013

lit: CDM Models, halo structure and rotation of spirals

The references therein might be useful. There is a link to Shu (1982, I think), where it is claimed that the Tully-Fisher relation holds if the surface brightness times the mass-to-light ratio squared is constant.

Daily Paper #12 -- The I-Band TF relation for Cluster Galaxies

Title: The I Band Tully-Fisher Relation for Cluster Galaxies: a Template Relation, its Scatter and Bias Corrections.
Authors:Giovanelli, Haynes et al.
Year:1997, http://adsabs.harvard.edu/abs/1997AJ....113...53G
This is a 'classic' Tully Fisher paper, still being cited often. Their TF relation is also widely used as the reference.
It is the seventh paper in a series on the same topic, presenting the corrections applied and the TF relation fitting procedure. They use velocity widths and I magnitudes of 784 field and cluster galaxies.
They discuss the biases that occur and the TF scatter extensively, and there are quite a few enlightening observations. For example, while discussing the total observed scatter, they name three main causes for it:
  • Raw measurement errors (inclinations, apparent magnitudes, velocities)
  • Uncertainties arising from corrections (extinction, inclination correction for velocities, asymmetric drift correction, turbulence, etc.)
  • 'Cosmic scatter' -- velocity disk distortions, disk distortions, photometric asymmetries, variations in mass-to-light ratio, variation in disk-to-bulge-ratio, etc.
I think that's a nice distinction to have in mind.
They discuss the error propagation and the relative importance of various error sources. One important, and in retrospect quite obvious, fact is that the relative importance of internal extinction correction errors depends on the inclination and intrinsic rotation velocity of a given galaxy: they are largest for highly inclined, fast-rotating galaxies. Conversely, velocity errors are most significant for nearly face-on, slowly rotating galaxies. That might be an interesting plot to show.
They discuss the implications of TF scatter too, citing Franx and de Zeeuw (1992), who claim that TF scatter poses a strong constraint on the ellipticity of the potential in the disk plane. Rix and Zaritsky (1995) find that departures from disk axisymmetry can contribute as much as ~0.15 mag to the TF scatter (optical observations). Eisenstein and Loeb (1996) find that the TF scatter due to different formation histories of galaxies can be larger than 0.3 mag in some cosmological scenarios (I remember reading that one: the intrinsic scatter of TF is surely smaller than that, and they try to find out why that is so; there must be some regularising factor at work).
They report a slight TF offset dependence on morphology -- this is, of course, band-dependent. They also discuss a number of biases present, such as the incompleteness bias (which they correct for by fitting the LF), the Malmquist bias, the edge-of-catalogue bias, and effects of the cluster environment (which they find to be negligible).
I do not understand some of the corrections applied here, namely the morphological type correction: they add a term to the magnitudes of early-type galaxies in order to obtain a single straight-line TF relation. Why do they do it? The extinction corrections are likely to be different, and what would such a TF relation be useful for?
The residuals of the TFR can show if there are uncorrected selection effects -- i.e. if there is a dependency of them on inclination. However, I would not want to try to cram all galaxies into the same line, if they are intrinsically different. That'd be losing information, in my opinion.

Tuesday, June 4, 2013

Daily Paper #11: The Origin of Disks and Spheroids in Simulated Galaxies

Title: The Origin of Disks and Spheroids in Simulated Galaxies
Authors:Sales, Navarro et al.
Year:2012, http://arxiv.org/abs/1112.2220
I picked this article because it challenges the standard idea of galaxy formation I had always been taught: that disks were assembled quite recently in rotating, quiescent haloes, whereas spheroidals are results of mergers (or primordial collapse, as my supervisor pointed out). In other words, current morphology is strongly dependent on the assembly history of the haloes.
They use the GIMIC simulations I wrote about before to obtain a sizeable sample of galaxies in the Milky Way mass range. Using a kinematic morphology criterion (\kappa_rot, the fraction of kinetic energy corresponding to ordered rotation), they look for the origin of the morphology of a given galaxy. The simulations show that disks can form in haloes of high and low spin parameter (Bullock 2001), in haloes that collapse early or late, and in haloes that go through merger events.
They find that disks are preferably formed by slow accretion of gas from a hot corona, as opposed to cold flows (many people in Jerusalem claimed those were important at high redshift). Disks assemble slowly and gradually (see fig.6: A galaxy is a spheroid, D galaxy is a disk, BC -- intermediate steps), and hot corona gas accretion might explain that.
Gas accretion history is not the whole story. The authors claim that spin alignment of the accreted gas and the galaxy itself is more important: disks form when the two spin directions are similar, whereas spheroidals form when they are misaligned. Hot corona gas spends a lot of time inside the halo, it is forced to align its rotation with that of the halo and the galaxy itself. Spheroids form when a galaxy accretes a lot of cold gas via misaligned filaments, i.e. when the angular momenta of the galaxy and the gas are not aligned. Star formation in spheroidals proceeds in a bursty manner (due to a different channel of gas acquisition, not the slow and gradual corona gas accretion), leaving behind stars with different ages and angular momenta. Intermediate galaxies form when the spins are aligned, but the gas accreted is mostly cold.
The kinematic morphology, at least, is determined by the interplay of these two factors: in order to form a disk, a galaxy needs a lot of shock-heated gas in its corona, unless the spin misalignment between the galaxy and the gas is too large. That may be only part of the whole picture, as mergers may play a more important role in other mass ranges; they mention halo triaxiality and feedback as other potential factors shaping morphology. I think it is an interesting article, even if not directly related to TF. It reasserts what I repeatedly heard in Jerusalem: that inflows and outflows are very important, and we do not know much about them. The results may be testable observationally: would it be possible to do something like fig.6 with CALIFA data?

Python: splitting a list into a number of roughly equal chunks

I used this solution; however, since I wanted a set number of chunks as opposed to chunks of a set length, the function ended up slightly different:

def splitList(longList, chunkNumber):
    #Split longList into exactly chunkNumber chunks whose lengths
    #differ by at most one element (the original floor-based chunk
    #size could return more or fewer chunks than requested)
    k, m = divmod(len(longList), chunkNumber)
    return [longList[i * k + min(i, m):(i + 1) * k + min(i + 1, m)]
            for i in range(chunkNumber)]

Monday, June 3, 2013

LaTeX: draft class

Yesterday, while doing some unrelated stuff, I found out that it is sometimes useful to put \documentclass[a4paper,draft]{article} into the preamble: it shows the blocks of the document (figures become placeholder boxes, overfull lines are marked) and can be really helpful if the layout is messed up.

Friday, May 24, 2013

Daily Paper #10: Reconstructed density and velocity fields from the 2MASS Redshift Survey

Today's article was referenced to in the WISE TF paper I summarised yesterday. I wanted to find out how the peculiar velocities and density fields were reconstructed.
Title: Reconstructed density and velocity fields from the 2MASS Redshift Survey
Authors: Erdoğdu, Lahav et al.
Year: 2006, http://mnras.oxfordjournals.org/content/373/1/45.full
I have to admit I did not understand the mathematics of this article, and will read Peebles 1973 and Fisher 1995 to understand the spherical harmonics and Fourier-Bessel functions in detail. In order to transform from redshift space to real space, they decompose the redshift-space distribution into radial and angular components. Peculiar velocities affect only the radial component, so it is easier to deconvolve them.
The relationship between mass and light distribution is usually assumed to be linear with some proportionality constant (there is an article by Vivienne arguing that this is not the case).
They provide density and velocity maps up to cz = 16,000 km/s, which covers the CALIFA redshift range. I could ask them for the velocity data and obtain peculiar-velocity/infall-corrected distances for the sample. I think it would be useful for me later, when working with the density fields and spin alignments. We would also have these distances and their errors for all the galaxies, not only the ones identified as belonging to structures.
Previously I used the Virgo-GA-Shapley infall corrected redshifts from NED (obtained using this model: http://iopscience.iop.org/0004-637X/529/2/786/fulltext/). It's a simpler model, taking only these three attractors into account.

Thursday, May 23, 2013

Daily Paper #9: WISE TF: A Mid-infrared, 3.4-micron Extension of the Tully-Fisher Relation Using WISE Photometry

Here's the recent WISE paper my supervisor had sent me a link to.
Title:WISE TF: A Mid-infrared, 3.4-micron Extension of the Tully-Fisher Relation Using WISE Photometry
Authors: Lagattuta, Mould et al.
Year: 2013, http://arxiv.org/abs/1305.4637v1
In a way, this is a classic, straightforward observational TFR paper. They pick the shortest wavelength (3.4 um) WISE band, where dust emission is not very important, use archival HI linewidths from Hyperleda and Cornell HI archive and WISE photometry (w1gmag magnitudes). They also take morphological classifications from CfA ZCAT and axis ratios from V-band measurements in NED. They reject galaxies with low quality of radio observations and inclinations (from b/a) less than 45 deg, as well as all E galaxies.
What was interesting for me here was that they did a number of corrections in a thorough way, especially the peculiar velocity correction. Since their redshift range overlaps with CALIFA's, we might have to give more thought to this correction, as it can reduce the scatter in the resulting TFR significantly. They use the Erdoğdu et al. 2006 model, interpolating over the velocity grid where necessary (I'll read it for tomorrow).
The intrinsic extinction corrections are small in the mid-IR, but not so in the SDSS r-band. I'll check the Masters 2003 correction prescription too; maybe it's better than the one I'm using.
They also find a bend in their TFR, due to different morphological types -- like we do (the slopes for late and early spirals are different), which is worth mentioning! However, they normalise the magnitudes of the early types and force them to fit the late-type spirals relation. Since our sample is dominated by early-type spirals, I would not want to do that (and I'm not sure what physical basis there is for this correction).
There is a reference to a 2013 paper by Sorce et al., where it is claimed that redder galaxies lie above the TF relation, and the bluer ones lie below, that's interesting.
They also discuss the influence of a bulge to the scatter of TFR, which is larger at the IR and for massive galaxies -- I'll remember that.
What made me think is their conclusion that the mid-IR W1 and 2MASS K bands trace the same stellar populations, a significant fraction of which are in the central parts of the galaxies (thus, probably bulges, esp. in CALIFA sample). That is interesting, in part because we make the inclination and extinction corrections assuming the galaxies are thin disks. Also, we have to keep in mind that when we plot the TFR for different bands, we are relating disk rotation velocities with different morphological components of a galaxy, which do not necessarily correspond to the disk only!

Wednesday, May 22, 2013

Daily Paper #8: Constraints on the relationship between stellar mass and halo mass at low and high redshift

Title: Constraints on the relationship between stellar mass and halo mass at low and high redshift
Authors: Moster, Somerville et al.
Year: 2009, http://arxiv.org/abs/0903.4682
They give a nice breakdown of the methods used to link halo properties with those of galaxies. It is possible to obtain these relations from observations (kinematics, lensing, X-rays), from bottom-up modelling (N-body simulations or SAMs), or via a statistical approach: obtaining the probability that a halo of a certain mass hosts a certain number of galaxies with given properties (luminosity, type, etc.). This way we don't have to make assumptions about feedback and other poorly understood processes that plague the SAMs. I'll review the lecture recordings from the Jerusalem School; I think the formalism used was explained in detail there.
The goal of this paper is to construct the conditional stellar mass function (as opposed to the conditional luminosity function). Such a function provides the average number of galaxies in a certain stellar mass range that reside in a halo with a mass M -- it can also be viewed as the stellar mass function for the haloes of mass M.
They use an N-body simulation to obtain a population of DM haloes and their merger tree information. The ratio of the halo mass and the stellar mass is not constant (due to different feedback processes), so they use a parameterisation of it to populate the haloes with galaxies and obtain the stellar mass-halo mass relation.
In order to obtain the conditional mass function, they divide the galaxies into centrals and satellites, parameterising their distribution in two different ways. The total CMF is the combination of the two.
They also obtain the M* -- M_halo relationship for higher redshifts, claiming that 'large halos accrete dark matter faster than large galaxies grow in stellar mass, while the growth of low mass halos is slower than that of the central galaxies they harbor', (see the attached picture).

This paper does not mention the TF relation, however, I wanted to find out more about the M*-Mhalo relation, the ways to determine it and its evolution. It's interesting that the high-mass end of this relation does not evolve much since z=4.

Daily Paper #7: Galaxy groups in the 2dFGRS: the number density of groups

Title: Galaxy groups in the 2dFGRS: the number density of groups
Authors: Eke, Baugh, Cole et al.
Year: 2006, http://adsabs.harvard.edu/abs/2006MNRAS.370.1147E
The authors determine the galaxy group mass and luminosity functions from the 2dFGRS Percolation-Inferred Galaxy Group (2PIGG) catalogue (getting about 29,000 groups). The catalogue and the mock catalogues used to determine and correct for biases are described in the Eke et al. 2004 paper.
They obtain dynamical masses of the clusters (I think they use the words 'cluster' and 'group' interchangeably here) from their velocity dispersions, and obtain their luminosities by simply adding up the members' luminosities with weighting to account for spectroscopic incompleteness and galaxies below the flux limit. Then they correct the mass and luminosity functions using the Vmax method. However, they claim that a more robust way to determine the group mass function is deriving it from the luminosity function by multiplying it by a mass-to-light ratio. They also show that the total group luminosity function depends on the group finding algorithm used.
The main goal of the article is to determine the \sigma_8 parameter, but they make a comparison between halo velocities and circular velocities, claiming that the two are similar. The authors claim it is possible to convert a halo luminosity to its circular rotation velocity directly, then convert group luminosities (which are very similar to those of the brightest galaxy of the group) to circular velocities using Bell & de Jong B-band TF relation, and compare the two (see the attached figure).


I think it is an important article, because it is observational, and worth citing when discussing the halo/galaxy velocities issue (the ratio of v_c/v_200 is around 1.2, at least in simulations, not close to 1 as they find here). Anyway, I'll remember this article in case I have enough time to work on the velocity function.

Wednesday, May 15, 2013

Daily Paper #6: A physical model of cosmological simulations of galaxy formation

I picked this article for the group meeting presentation, and actually read it, because it is rather relevant.
Title: A physical model of cosmological simulations of galaxy formation
Authors: Vogelsberger, Genel, Sijacki et al.
Year: 2013, http://arxiv.org/pdf/1305.2913v1.pdf
They present galaxy formation simulations using the AREPO code, a moving-mesh hydro code claimed to combine the advantages of Lagrangian (SPH) and Eulerian (grid-based) approaches. What is important, however, is that they include, among other things:
-- BH seeding and growth
-- 3 modes of AGN feedback (thermal, kinetic (radio jets) and EM radiation). The discrepancy between the high-mass end of McCarthy, Schaye et al. TFR and observations I discussed yesterday is probably due to lack of AGN feedback prescriptions.
-- realistic cooling (including metals) with self-shielding
-- realistic enrichment timescales and yields
-- different modes of outflows

The article itself is quite long and rather technical, dealing with the details of the implementation of physics. However, there are some interesting observations. For instance, they discuss the stellar mass function and its relation to feedback, halo mass function and the cosmic SFH. The stellar mass function is 'chewed' by stellar feedback (SN and stellar winds) at the low halo mass end, and by AGN feedback at the higher mass end. It can be viewed as the convolution of the halo mass function with the efficiency with which stars form in these haloes (i.e. feedback mechanisms).

I think this relation is really important -- actually, it shows why the TFR exists, and why there is a knee at the high-mass end. _Why_ the slope, zeropoint and scatter are such are different questions -- but the answer to that should be some combination of the halo concentration (i.e. response to baryons, mostly), details of feedback (which determine the shape of the M* -- Mhalo relation) and the initial halo mass function (i.e. cosmology). The latter also sets the angular momentum of galaxies (Vvir -- Mvir are related, see the description of Dutton 2011).
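A toy version of that picture -- with all functional forms and numbers invented purely for illustration -- weights a schematic halo mass function by a feedback-shaped efficiency peaking at intermediate halo masses:

```python
import numpy as np

log_mhalo = np.linspace(10, 15, 200)

def halo_mass_function(log_m):
    # schematic power law with an exponential high-mass cutoff (made up)
    return 10 ** (-(log_m - 10)) * np.exp(-10 ** (log_m - 14.5))

def efficiency(log_m, log_m_peak=12.0, width=0.8):
    # star formation efficiency suppressed by stellar feedback at low
    # masses and by AGN feedback at high masses (made-up Gaussian shape)
    return 0.2 * np.exp(-0.5 * ((log_m - log_m_peak) / width) ** 2)

# weighting the halo mass function by the efficiency produces a
# stellar-mass-function-like curve with a knee
mstar_weighting = halo_mass_function(log_mhalo) * efficiency(log_mhalo)
```

Plotting mstar_weighting against log_mhalo shows the characteristic break where feedback becomes inefficient.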

They reproduce the stellar mass-halo relation, stellar mass function, evolution of the cosmic SFR density, the cosmic stellar mass density, mass-metallicity relation, SDSS luminosity function, M_BH -- M* relation and the Tully-Fisher relation (in the Vcirc -- M* sense, see the other attached plot). They claim (by citing McCarthy & Schaye, and experimenting with different radii) that the precise radius of the velocity measurement is not very important.

They conclude that TFR is not very sensitive to feedback parameters (shown in the plot). However, feedback must be included, otherwise it is not possible to reproduce TFR correctly. They find that the stellar mass in a halo is primarily set by the stellar feedback and radio-mode AGN feedback.

Todo: Eulerian vs. Lagrangian

Daily Paper #4: Dark halo response and the stellar initial mass function in early-type and late-type galaxies

Our group meeting ate most of my day, so here's a delayed review.
Title: Dark halo response and the stellar initial mass function in early-type and late-type galaxies
Authors: Dutton, Conroy, van den Bosch et al., http://arxiv.org/abs/1012.5859
Year: 2011
The authors try to find the origins of the TF and Faber-Jackson relations, using a big sample of observed galaxies from SDSS, a set of observational scaling relations describing the distribution of baryons in galaxies, and constraints on DM halo structure from simulations.
They claim in the introduction that the origin of TFR and FJR is the relation between the halo virial velocity and its virial mass, which scales as v_vir \propto M^(1/3). The relation between v_opt (*) and v_vir depends on 3 factors:
-- the contribution of baryons to v_opt
-- the structure of pristine (DM-only) haloes. The halo profiles are well-enough defined by the NFW profile.
-- and the halo response to the disk formation (contraction/expansion)
The biggest source of uncertainty for the first factor is the IMF, as most of the baryonic mass in galaxies (except the low-mass star-forming ones) is in stars.
The third factor, the halo response, is quite uncertain. It can be adiabatic or non-adiabatic contraction, or expansion due to a number of different processes.
They use a large number of observed relations (TF and FJ relations too) to constrain or predict unknown quantities in their analytic model, modelling the DM halo, bulges, disks, gas disks, constructing 3D models of their SDSS sample galaxies (~100 000 LTGs, ~160 000 ETGs).
They construct mass models for all those galaxies using various IMFs and assumptions about the halo response. Then they compare them with the observed circular velocity -- stellar mass relations (I do not understand this at this point -- they already used the TFR as a constraint while solving for the halo contraction at a given IMF, so is it built into their model somehow?).
Nevertheless, they do not reproduce the zero-point of the TFR with the standard Chabrier IMF and the standard adiabatic halo contraction model, as the velocity is overpredicted. They claim that it is possible to fix that by either assuming a different IMF, changing the halo response, or having different halo concentrations (set by formation time or \sigma_8; they claim that neither way solves the zero-point problem).
However, they find that early-type and late-type galaxies cannot have the same IMF and halo response. For any given halo response model, ETGs and LTGs must have different IMF normalisations to reproduce the TFR. If we assume a universal IMF, early-type galaxies require stronger halo contraction.
They look into the model circular velocity profiles (see attached) for models that reproduce the TF and FJRs with a Chabrier IMF. The velocity profiles look rather flat at larger radii, reaching the maximum at ~0.3 R_vir. (For the ETGs, they convert the velocity dispersion to v_circ, picking a rather simple conversion factor between sqrt(2) and sqrt(3).) They claim that the flatness of the rotation curves does not depend strongly on the IMF and the halo response model, and 'is a natural consequence for galaxies embedded in LCDM haloes'.
Then they discuss possible mechanisms that would account for differences in IMF or halo response for ETGs and LTGs. The halo contraction/expansion may depend on galaxy morphology via a number of processes:
-- adiabatic contraction due to smooth accretion -- results in disk growth and SF, important for LTGs
-- wet major mergers -- result in halo contraction because gas settles in the center of a galaxy, transforms LTGs to ellipticals
-- dry major mergers ( for ETGs) -- halo contraction due to violent relaxation which mixes stars and DM
-- clumpy accretion/minor mergers: halo expansion due to dynamical friction. Clumps must be baryonic, so that's likely to occur at high redshifts (A. Dekel was arguing in Jerusalem that it must be the case).
-- feedback causes halo expansion, more important to LTGs and for higher redshifts.
-- bars -- halo expansion or contraction, not clear.

They sum it all up by saying that the key formation event that distinguishes ETGs from LTGs is a major merger (dry or wet), which causes halo contraction. For LTGs, haloes could expand due to a combination of clumpy cold accretion and intensive feedback.
The IMF may plausibly vary with redshift due to its alleged dependence on ISM temperature via the Jeans mass. Since ETGs form their stars earlier than LTGs, their IMF might be different. They discuss the van Dokkum & Conroy 2008 article on the different IMF at the cores of massive ellipticals, and conclude that this IMF would not fit for the other ETGs well.
All in all, they show that there is a degeneracy between the IMF and the halo response to disk formation, and we cannot match the two for ETGs and LTGs if we want to reproduce the TFR. However, I'm puzzled here -- as in the earlier Mo, Mao & White article, they do not model galaxy evolution in a self-consistent way, and a model cannot show things that are not included in it. Mo, Mao & White include one set of parameters in their article, and get dependences on those parameters (like the disk formation time). Here Dutton et al. model rotation curves as functions of other parameters, and get dependences on another set of parameters. I think it's tricky to understand which parameters are really important, because in astrophysics we often have many coupled parameters, and it's not difficult to find correlations between them. It's more difficult to find out which ones are fundamental, which really affect the TFR.
-- a question: what does it mean "TFR is an edge-on projection of the fundamental plane" (p. 323)?
(*) v2.2 for late-type galaxies and some measure of velocity proportional to dispersion at the effective radius for early types

Tuesday, May 14, 2013

Daily Paper #3: Rotation rates, sizes and star formation efficiencies of a representative population of simulated disc galaxies

I was a bit late today, because I went to see a couple of talks at the HETDEX meeting. A couple of interesting points from there:
-- The LAE people are worried about selection bias arising due to galaxy spins' alignment with the large scale structure (http://arxiv.org/abs/1004.3611)
-- Cluster growth depends not only on the proto-cluster environment (i.e. the immediate neighbourhood), but on super-halo scales as well (it's probably better defined here: http://arxiv.org/abs/1109.6328)

...On to the paper:
Title: Rotation rates, sizes and star formation efficiencies of a representative population of simulated disc galaxies
Authors: McCarthy, Schaye et al.
Year: 2012, http://arxiv.org/abs/1204.5195
They use resimulated (SPH) Millennium sub-regions with selected representative mean densities, including baryonic physics. They include a background UV/X-ray radiation field that 'reionises' the simulations at z = 9, and the CMB. They model star formation using an effective prescription reproducing the Kennicutt relation and assuming a Chabrier IMF, as well as the yields and timing of nucleosynthesis and SN feedback.
Then they fit the surface brightness profiles of galaxies in the 9 - 11.5 log M* range with a Sersic function and classify the galaxies into disc- and spheroid-dominated using n = 2.5 as the cut.
-- The stellar mass - rotation velocity relation:
They examine this relation for their disk galaxy population (see the attached picture) and compare it with Reyes 2011 observations, claiming that this is the first cosmological hydrodynamical simulation that produces a population of galaxies consistent with the observed TFR. The slope is slightly higher, but in general the agreement is really good. They use v80 (the velocity at the 80% light radius) and Vmax as the velocity measures. The velocities at the high-mass end are too high, and they claim that the lack of an AGN feedback prescription may explain that.
The authors also discuss the scatter in TF relation, claiming that it is not possible to directly compare observed and simulated TF scatter without careful analysis of selection criteria and observational uncertainties.
Simulated spheroidal galaxies follow the TFR well in this mass range, except for the most massive ones. The rotation curves of the galaxies with M* > 10.6 are not realistic (due to overcooling and too-efficient SF in the highest-mass haloes). Their simulated galaxies are also compatible with the observed M* - size and star formation efficiency -- M* relations, up to the higher mass range.
One more quantity the simulations are compared with is the ratio of v_circ to v_200. If there is no significant halo expansion/contraction, this ratio should be roughly 1.2 (Duffy 2010 writes more on the halo expansion/contraction issue and its impact on this ratio). Again, their results agree with this quite well up to log(M*) ~ 10.5.
They try to reproduce the main galaxy sequence at z = 0, comparing their results with SFRs from GALEX. Except for the lowest-mass galaxies, where the resolution is insufficient, it works well.
Section 4 is an interesting discussion of the connection between SF efficiency, the stellar mass function and the TF relation. They show that by assuming power-law shapes for the halo mass function and the stellar mass function in the stellar mass range where this is valid, it is possible to match haloes (=velocities) and stellar masses using the abundance matching method, and to reproduce the TFR and the SF efficiency-M200 relation well (and maybe even constrain the faint end of the IMF). The way I understand it, the TFR arises from the linear (in log-space) match between the halo mass function (set by cosmology) and the stellar mass function (set by the IMF, SF efficiency, and thus feedback, etc.). I'll read Mo, van den Bosch and White 2010 for more discussion of this matter.
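The abundance matching idea -- the most massive haloes host the most massive galaxies, rank by rank -- can be sketched in a few lines; the mass distributions below are made up for illustration only:

```python
import numpy as np

# toy abundance matching: rank haloes by mass and galaxies by stellar mass,
# then pair them rank by rank so the cumulative number densities match
rng = np.random.default_rng(0)
halo_masses = rng.pareto(1.0, 10000) * 1e11     # hypothetical halo sample
stellar_masses = rng.pareto(1.5, 10000) * 1e9   # hypothetical galaxy sample

halo_sorted = np.sort(halo_masses)[::-1]
star_sorted = np.sort(stellar_masses)[::-1]
# halo_sorted[i] hosts star_sorted[i]: the most massive halo gets the most
# massive galaxy, so the implied M*(M_halo) relation is monotonic by construction
```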

Monday, May 13, 2013

Daily Paper #2: The Tully-Fisher Zero Point Problem

Today's is a short paper (conference proceedings, just 4 or so pages), dealing with the halo contraction/expansion issue.

Title: The Tully-Fisher Zero Point Problem
Authors: Dutton, van den Bosch & Courteau
Year: 2008, http://arxiv.org/abs/0801.1505

The authors claim that CDM-based disk formation models can reproduce the slope and scatter of the TFR; however, its zeropoint has not been matched. This problem gets even worse if additional constraints (disk sizes, number densities) are imposed.
The TFR zeropoint can be matched in models if halo contraction or disk self-gravity is not taken into account, or, as is shown in the article, if halo contraction is counteracted by baryonic feedback. They model an exponential disk in an NFW halo, obtain its luminosity from the observed TFR and its scale length from the size-luminosity relation. Assuming WMAP3 cosmology and a concentration parameter value from simulations, they solve for the virial mass and obtain rotation curves with and without halo contraction (see the attached plot). The halo-contraction case is in conflict with both theory and observations, implying a halo that is too small and has too high a spin parameter.
They suggest 3 ways to solve this problem:
  • -- lower the stellar mass-to-light ratio (but there is no known IMF that could provide that). Also, they claim that baryons in the disk and bulge account for at least half of v2.2 -- I didn't know that (Courteau 1999, Weiner 2001)!
  • -- lower the initial concentration (then the virial radius of a halo is increased) -- but that's incompatible with WMAP3 cosmology.
  • -- turn off or reverse halo contraction: disk galaxies do not form in isolated haloes by smooth cooling. Halo contraction can be reversed by:
    • a) Feedback during adiabatic disk formation, if a large fraction of its mass is removed. It would also explain why galaxy formation is so inefficient (I think they mean that the gas is not converted into stars rapidly).
    • b) Dynamic friction between baryons and DM due to bars or large baryonic clumps (do they mean violent disk instabilities here?). They do not explain the details of such a process here, but I think what happens is that baryons transfer some of their angular momentum to the halo due to formation of massive, concentrated, rapidly moving structures.
The interesting message of the article is that the TFR zeropoint can provide clues into baryon feedback (which should also explain the core-cusp problem of DM haloes, at least that was what A. Dekel and J. Primack maintained at the Jerusalem Winter School). That's important.

Friday, May 10, 2013

Daily Paper: The formation of galactic discs

My supervisor and I agreed that I would read a paper a day and write it up, so I'll be copying the summaries here.

Title: The formation of galactic discs (http://adsabs.harvard.edu/abs/1998MNRAS.295..319M)
Authors: Mo, Mao, White
Year: 1998
It's actually a really important paper -- I'm glad not to have missed it. They model galactic disk formation (of pure exponential disks, though they address the influence of a simple bulge model later) in 3 CDM cosmologies. The authors make 4 important assumptions:
1) The mass of a disk is a fixed fraction of its halo
2) The angular momentum of the disk is also a fixed fraction of that of the DM halo (later they find that the specific angular momenta of the disk and halo are similar, so these two ratios should be equal).
3) The disk is thin, has an exponential density profile and is centrifugally supported
4) Real galaxies are dynamically stable disks, i.e. they reject unstable models.

They calculate the halo mass function using the Press-Schechter formalism (see Appendix), assume a halo angular momentum distribution determined from N-body simulations, a constant mass-to-light ratio, and an NFW profile. They calculate the total energy of a truncated (at r_200) NFW halo using the virial theorem, and the corresponding rotation curve (after accounting for adiabatic halo contraction and the disk's gravitational influence).
See the attached picture (Mo1998_rotcurves.png) for their results. They claim that the shape of a rotation curve depends on 3 parameters only: the halo concentration parameter c (see *), the fraction of mass in the disk m_d, and the spin parameter.
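For reference, the rotation curve of a pure NFW halo (no disk contribution, no contraction) follows directly from the enclosed-mass profile; this is only a sketch with arbitrary example values of v200 and c, not their full model:

```python
import numpy as np

def v_circ_nfw(r_over_r200, c, v200=200.0):
    # circular velocity of a pure NFW halo in km/s;
    # c is the concentration, r is given in units of r_200
    x = np.asarray(r_over_r200, dtype=float)
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)  # shape of the enclosed mass
    return v200 * np.sqrt(mu(c * x) / (x * mu(c)))

print(v_circ_nfw(1.0, c=10.0))   # 200.0 -- returns v200 at r_200 by construction
```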

There is a lot more in this paper, but they devote a significant section of it to Tully-Fisher relation, explaining the origins of its slope, zeropoint and intrinsic scatter.
  • -- slope: They claim that it is a generic prediction of hierarchical structure formation models with CDM-like fluctuation spectra (which set the halo formation times and thus their concentration c distribution).
  • -- zeropoint: mostly set by the disk mass fraction, the mass-to-light ratio and the redshift at which a given disk was assembled. They claim that disks must have effectively formed at redshifts close to zero (?), unless we assume an unphysical mass-to-light ratio.
  • -- scatter: they analyse possible sources of intrinsic scatter of the disks, and find that:
    • --- a small amount of scatter comes from the spin parameter distribution
    • --- scatter in the concentration c also adds scatter to the TF relation (disks in more concentrated haloes spin faster).
    • --- bulge is not included in their model, so bulge luminosity contribution is not accounted for -- that's a source of scatter as well
    • --- mass-to-light ratio may vary, adding some more scatter
    • --- formation times may vary as well, adding even more scatter.

The intrinsic scatter because of the first two factors cannot be smaller than 0.15 mag.
In conclusion, they show that it is possible to derive a realistic TF relation theoretically; however, I'm not sure if that is the only way it could arise. They do not include bulges, which are important for our sample, too. I am also unsure what they mean by 'effective disk assembly redshift' at z <= 1.

* c == r_200/r_s, where r_200 is the limiting radius of a halo, defined so that the mean density inside this radius is 200 times the critical density, and r_s is the scale radius of the NFW profile.
There is also a brief explanation of the \sigma_8 parameter -- see attached.

Monday, May 6, 2013

lit: Baryonic T-F relation

S. McGaugh et al.: 'somehow the baryons know how many stars to form'.
There is also a reference to Mo 1998, which may provide insights into the origin of the T-F relation.

Friday, April 26, 2013

Batch downloading SDSS image cutouts

I remember I've done that before, but couldn't find the script. Here is one I quickly put together, based on Min-Su Shin's code.

import os
import db
from utils import *  # provides dbDir, among other things

ids = db.dbUtils.getFromDB('id', dbDir + 'MyDB.sqlite', 'mothersample')

for gal_id in ids:  # 'gal_id' rather than 'id', to avoid shadowing the builtin
    gal_id = str(gal_id)
    ra = db.dbUtils.getFromDB('ra', dbDir + 'MyDB.sqlite', 'mothersample', ' where id = ' + gal_id)[0]
    dec = db.dbUtils.getFromDB('dec', dbDir + 'MyDB.sqlite', 'mothersample', ' where id = ' + gal_id)[0]
    name = "data/cutouts/" + gal_id + ".png"
    url = ("http://casjobs.sdss.org/ImgCutoutDR7/getjpeg.aspx?ra=" + str(ra) +
           "&dec=" + str(dec) + "&scale=0.5020920&width=150&height=150"
           "&opt=GS&query=&Grid=on&PhotoObjs=on")
    os.system("wget '" + url + "' -O " + name)




Thursday, April 25, 2013

Sunday, April 14, 2013

SQL: selecting B/D decomposition data

SELECT califa_id, re_1, re_2, n_1, n_2, rmag1, rmag2, rmag2/rmag1, Reduced_chi2 FROM galfit_2comp where n_1 < n_2 and re_1 > 20 and n_2 > 2
#1st component is disk:
id, disk_re, bulge_re, disk_n, bulge_n, disk_mag, bulge_mag, bulge/disk ratio
where the 1st component is a sizeable disk (lower Sersic n) and we haven't simply split the disk into two components.
And the reverse case:
SELECT id, re_2, re_1, n_2, n_1, rmag2, rmag1, rmag1/rmag2, Reduced_chi2 FROM galfit_2comp where n_2 < n_1 and re_2 > 20 and n_1 > 2

Saturday, April 13, 2013

Matplotlib: plotting a bar plot with errorbars from data

This didn't make any sense for what I was trying to do (a histogram), but it might be useful someday:

import math
import numpy as np
import matplotlib.pyplot as plt

y, binEdges = np.histogram(vel_22, bins=15)
errArray = np.zeros(y.shape)
# mean velocity uncertainty of the points falling into each bin
for i in range(1, len(binEdges)):
  in_bin = np.where((vel_22 <= binEdges[i]) & (vel_22 > binEdges[i - 1]))[0]
  mean_err = np.mean(vel_err[in_bin])
  errArray[i - 1] = 0. if math.isnan(mean_err) else mean_err
width = 12
bincenters = 0.5*(binEdges[1:]+binEdges[:-1])
plt.bar(bincenters, y, color='white', edgecolor='k', width=width, yerr=errArray)

Monday, April 8, 2013

Numpy set operations

Numpy set routines are a treasure. I'd never heard of them before, and used to convert arrays to sets and back to achieve precisely what they do.
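A few of the routines I mean, for the record:

```python
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([4, 5, 6, 7])

print(np.intersect1d(a, b))   # [4 5]
print(np.union1d(a, b))       # [1 2 3 4 5 6 7]
print(np.setdiff1d(a, b))     # [1 2 3] -- elements of a not in b
print(np.in1d(a, b))          # boolean mask: which elements of a appear in b
```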

Sunday, April 7, 2013

How MCMC works

Last week my advisor came back from a long trip and encouraged me to re-do my modelling of galaxy rotation curves using MCMC [Markov chain Monte Carlo] instead of a Levenberg-Marquardt implementation I've been using.
I had dropped this line of approach before because I didn't understand why MCMC algorithms such as Metropolis-Hastings work. It's quite difficult to find a simple, intuitive, non-Bourbaki-ish explanation of MCMC principles, and even more difficult to understand it without writing code in parallel.
MCMC is, like many things we take for granted today, a by-product of nuclear weapons research and the Cold War. It merges Monte Carlo methods, i.e. repeated random sampling, and Markov chains -- a prescription for processes whose next state depends only on the current state and is not correlated with the preceding ones.
MCMC methods are used for solving a wide variety of problems, frequently for model selection too. A crucial concept here is the parameter space -- i.e. the set of all possible parameters one's model can have. For instance, if your model is a straight line, described by an equation y = mx+b, x and y are the independent and dependent variables of your model, whereas m and b are the model parameters. The (m, b) plane is your two-dimensional parameter space, representing all slope m and intercept b combinations a straight line can have. Here's a nice literary explanation.
If you have any data you are trying to model at your disposal, your model becomes constrained by it: not all (m, b) pairs describe your data equally well. There are locations at the parameter space where the model cannot describe the data by any means, let's say, the line is way above the data and goes away to infinity because the slope is wildly inadequate. There are locations that describe pretty decent models -- let's say, with a slightly wrong offset or slightly wrong slope. The goodness of model in parameter space can look similar to this picture, taken from a pedagogical article by D. Hogg et al., here the darker, denser regions show better combinations of m and b:
Evaluating how well a model fits the data is a separate branch of science, but it is often helped by a quantity called chi-squared, or Chi2. It's simply the squared difference between your data values (y') and the values (y) your model predicts, divided by the squared uncertainties on y' you know from your data. This way datapoints with large uncertainties do not influence your model selection much. You can think of chi-squared as a measure of distance between your model and your data -- you sure want to make it as small as possible.
The crux of MCMC model selection methods is random sampling from model parameter distribution in such a manner that the sample is proportional to the real parameter distribution -- you pick more points from denser regions in the parameter space, and just a few points from the areas with unlikely parameter combinations. What you end up with are the distributions of your model parameters, given the data -- not only a (probably local) minimum given by simple fitting, but a full distribution, from which you can estimate the uncertainties of your model, see its limitations, etc.
You probably don't need MCMC to fit a straight line to data, but if you have several parameters in your model (I've got 7 or 5) with non-trivial interrelations, your parameter space becomes high-dimensional, with a complicated landscape of better/worse fitness peaks and troughs.
So, how does MCMC work? I'll go through the Metropolis algorithm step by step. Imagine you have the straight line model y = mx+b and some data you are trying to fit with it.
  1. You randomly select a pair of initial m, b values, m0, b0, and calculate how good the model y = m0*x + b0 is (by calculating its log-likelihood, which is simply a function of the chi-squared for that model).
  2. You make a small random step in parameter space, obtaining a m1, b1 pair.
  3. Then you check if the model y = m1*x + b1 is better than the previous model, y = m0*x + b0, by looking at the ratio of their likelihoods. If this model describes the data better, you keep this pair of parameters (save it into an array) and go to step 2. If it is worse, you go to the next step.
  4. You draw a random number between 0 and 1 again. If the ratio of likelihoods from step 3 is lower than this number, you reject this (m1, b1) pair and go back to (m0, b0), trying step 2 again with a new random step. If it is higher, you keep (m1, b1) and repeat step 2 from this new position.
I couldn't understand this last step for quite a long time. What it does -- and why you draw a second random number -- is selecting the less likely parameter pairs with a probability proportional to their likelihood. You still have some points in highly unlikely regions of parameter space, but not many -- because you reject most of the steps as the likelihood ratio is so low, you rarely draw a random number lower than that. Conversely, you get many more points from the denser, 'better' regions in parameter space, because the likelihood ratio is closer to 1 and you keep most of the points you draw there. This is magic for two reasons:
-- in the end you can simply draw histograms of m and b and see how they are distributed in the parameter space, characterise those distributions, calculate the standard deviations, see if they are multimodal (have multiple local minima), etc.. When you use the common fitting algorithms, you frequently get just a few numbers characterising (possibly local) minima, and not the full distributions of your parameters.
-- The fact that some low-likelihood points are still picked up allows exploring all the parameter space, thus you don't get stuck in a local minimum.
There's one more reason! Imagine you don't care about one or more of the parameters and only want to explore the distribution of the other ones (say, you only care about the slope). If you run MCMC, as the end result you obtain the full distribution of the parameter(s) of interest, using the entire, realistic distribution of the unimportant ones. That's called marginalisation of the nuisance parameters and may involve some nasty integrals otherwise.
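The four steps above can be condensed into a minimal Metropolis sampler for the straight-line model. This is a bare-bones sketch on synthetic data with arbitrarily chosen proposal widths, not a production sampler:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic data drawn from a known line, for illustration
true_m, true_b, sigma = 2.0, 1.0, 1.0
x = np.linspace(0, 10, 50)
y = true_m * x + true_b + rng.normal(0, sigma, x.size)

def ln_like(m, b):
    # Gaussian log-likelihood: -chi^2 / 2
    return -0.5 * np.sum(((y - (m * x + b)) / sigma) ** 2)

m, b = 0.0, 0.0                        # step 1: an arbitrary starting point
current = ln_like(m, b)
chain = []
for _ in range(20000):
    m_new = m + rng.normal(0, 0.05)    # step 2: a small random jump
    b_new = b + rng.normal(0, 0.05)
    proposed = ln_like(m_new, b_new)
    # steps 3-4: always accept an improvement; accept a worse point
    # with probability equal to the likelihood ratio
    if np.log(rng.uniform()) < proposed - current:
        m, b, current = m_new, b_new, proposed
    chain.append((m, b))

samples = np.array(chain[5000:])       # discard burn-in
print(samples.mean(axis=0))            # close to (true_m, true_b)
```

Histogramming samples[:, 0] and samples[:, 1] gives exactly the parameter distributions discussed above.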

I wrote my own MCMC code in order to understand the algorithm in detail and develop intuition, then switched to emcee by D. Foreman-Mackey et al., which is a well-designed, parallelised, fast code that makes life easier (and I don't have to worry too much about things like burn-in, initial position, autocorrelation time, thinning and step size selection). Here's the scholarly article and some highly useful wrapper scripts by Neil Crighton for setting up and running emcee.

Here is what I get, fitting the stellar rotation of a galaxy (CALIFA IFU observations) with a model having 5 free parameters. Inclination and velocity amplitude (the upper left panel) are strongly coupled, and that's evident!

Tuesday, March 26, 2013

sqrt(1/2), integers, floats

That may be Python 101, but np.sqrt(1/2) = 0, whereas np.sqrt(0.5) = 0.707... I thought integers were automatically cast to floats in such operations.
Except they aren't (in Python 2) -- that's what the mysterious line from __future__ import division is supposed to remedy! http://stackoverflow.com/questions/4841732/python-convert-fraction-to-decimal
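A quick check of the two workarounds (a float literal, or forcing float division):

```python
import numpy as np

# in Python 2, 1/2 floor-divides to 0, so np.sqrt(1/2) silently returns 0.0;
# either write a float literal or make the division explicitly floating-point
print(np.sqrt(0.5))      # 0.7071067811865476
print(np.sqrt(1. / 2))   # same value, via explicit float division
```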

Thursday, March 14, 2013

Fitting a model to data: lit review

Yesterday I showed some preliminary TF results at our group meeting, and it was rightly noted that the straight-line fit I made was wrong. I had only included errors in one variable (velocity, obtained by bootstrapping) in the fit, and made a mess of that by mixing up the variables on the plot.
I've been thinking about how to do a proper fit when the data has uncertainties in both directions, and been revisiting this paper by Hogg et al.:
In the astrophysics literature (see, for example, the Tully-Fisher literature) there is a tradition, when there are uncertainties in both directions, of fitting the "forward" and "reverse" relations, that is, fitting y as a function of x and then x as a function of y, and then splitting the difference between the two slopes so obtained, or treating the difference between the slopes as a systematic uncertainty. This is unjustified.
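One pragmatic tool for errors in both coordinates -- not the full generative-model approach Hogg et al. actually advocate -- is orthogonal distance regression, available as scipy.odr; a sketch with synthetic data:

```python
import numpy as np
from scipy import odr

def line(beta, x):
    # straight-line model y = m*x + b, in scipy.odr's (beta, x) convention
    return beta[0] * x + beta[1]

# synthetic data with scatter in both coordinates
rng = np.random.default_rng(1)
x_true = np.linspace(0, 10, 30)
sx, sy = 0.3, 0.5
x_obs = x_true + rng.normal(0, sx, x_true.size)
y_obs = 2.0 * x_true + 1.0 + rng.normal(0, sy, x_true.size)

data = odr.RealData(x_obs, y_obs, sx=sx, sy=sy)
fit = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()
print(fit.beta)   # fitted (slope, intercept), close to (2.0, 1.0)
```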

Monday, March 11, 2013

Monday, February 25, 2013

Importing large .csv files via sqlite3 command line interface

If they are too big to import via the GUI manager, that's one way to go (I created a database of emission line data):
CREATE TABLE "elines" ("CALIFA_id" text, "morph" text, "bar" text,
"radius" float, "xpos" float, "ypos" float,
"Hbeta_flux" float, "Hbeta_eflux" float,
"Hgamma_flux" float, "Hgamma_eflux" float,
"Hdelta_flux" float, "Hdelta_eflux" float,
"Halpha_flux" float, "Halpha_eflux" float,
"OIII5007_flux" float, "OIII5007_eflux" float,
"OI6300_flux" float, "OI6300_eflux" float,
"NII6584_flux" float, "NII6584_eflux" float,
"SII6717_flux" float, "SII6717_eflux" float,
"SII6730_flux" float, "SII6730_eflux" float,
"vel" float, "evel" float, "disp" float, "edisp" float,
"Ha_cont" float)

.separator ","

.nullvalue -999

.import elines.csv elines
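The same import can also be scripted with Python's built-in sqlite3 module, which is handy when the command-line tool isn't around. A minimal sketch with a shortened schema; the inlined rows stand in for what csv.reader would yield from elines.csv:

```python
import sqlite3

# In-memory database and a shortened version of the schema above.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE elines ("CALIFA_id" text, "radius" float, "Halpha_flux" float)')

# Stand-in for csv.reader(open("elines.csv")); values are made up.
rows = [("IC1256", 2.5, 1.3e-16), ("NGC0001", 0.8, 4.1e-16)]
conn.executemany("INSERT INTO elines VALUES (?, ?, ?)", rows)
conn.commit()

print(conn.execute("SELECT count(*) FROM elines").fetchone()[0])  # 2
```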


Thursday, February 21, 2013

Python string to boolean

I wanted to pass True/False as boolean values from the command line; this is a nice way to do it.
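The linked recipe isn't reproduced here; a common pattern is an argparse type function (str2bool and the --overwrite flag are my own names):

```python
import argparse

def str2bool(s):
    """Map common command-line spellings to a bool; raise on anything else."""
    truthy = {"true", "t", "yes", "y", "1"}
    falsy = {"false", "f", "no", "n", "0"}
    s = s.lower()
    if s in truthy:
        return True
    if s in falsy:
        return False
    raise argparse.ArgumentTypeError("expected a boolean, got %r" % s)

parser = argparse.ArgumentParser()
parser.add_argument("--overwrite", type=str2bool, default=False)
args = parser.parse_args(["--overwrite", "True"])
print(args.overwrite)  # True
```

The point of raising ArgumentTypeError (rather than just `bool(s)`, which is True for any non-empty string) is that argparse turns it into a proper usage error message.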

Tuesday, February 19, 2013

Centering and clipping image, sigma-clipping (ds9 like) in Matplotlib

I wanted to cut out the galaxies I ran Galfit on, then clip the pixel values and save to an image (which is picked up by LaTeX). Here's the source.
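The linked source isn't shown here, but the cutout and clipping steps can be sketched in plain numpy (function names and the test image are mine; the clipped stamp could then be written out with plt.imsave or similar):

```python
import numpy as np

def clip_image(image, sigma=3.0):
    """Clip pixel values to mean +/- sigma*std, ds9-style, for nicer display."""
    mean, std = image.mean(), image.std()
    return np.clip(image, mean - sigma * std, mean + sigma * std)

def cutout(image, x, y, half_size):
    """Square cutout centred on pixel (x, y); no bounds checking for brevity."""
    return image[y - half_size:y + half_size, x - half_size:x + half_size]

# Fake sky background with one very bright pixel standing in for a galaxy core.
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (200, 200))
img[100, 100] = 1e4

stamp = clip_image(cutout(img, 100, 100, 25))
```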

Matplotlib: strings as axis labels

I wanted to make a plot showing some galaxy properties vs. their Hubble subtype. The subtypes were stored as strings in my database; I selected them and converted them back to numbers. Why back? Because some colleagues of mine, who think in words, classified all 900 galaxies by their Hubble types; the classifications were then converted into numbers, means and probabilities were calculated, and the results converted back to types.
Anyway, I wanted to have the types on my axis, so I did it thus. Here's the plot:
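A minimal sketch of the trick (the type ordering and property values are made up): plot against the numeric codes, then relabel the ticks with the strings.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

types = ["E", "S0", "Sa", "Sb", "Sc"]   # hypothetical Hubble-type ordering
codes = np.arange(len(types))           # the numeric codes from the database
values = [2.1, 1.7, 1.2, 0.9, 0.5]      # some property per type (made up)

fig, ax = plt.subplots()
ax.plot(codes, values, "o")
ax.set_xticks(codes)
ax.set_xticklabels(types)               # show the strings, plot the numbers
fig.savefig("types.png")
```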

Matplotlib: colorbar too large

If the colorbar is too big, setting the 'shrink' property helps:

  p = ax2.imshow(image, cmap=cm.hot, interpolation='nearest')
  ax2.xaxis.set_visible(False)
  ax2.yaxis.set_visible(False) 
  cb = plt.colorbar(p, shrink=0.455)

Monday, February 18, 2013

Parallel processing in Python

I just wrote a series of scripts that run batch GALFIT fits. Since it's boring to parallelise this manually (e.g. splitting the list of input IDs into smaller lists, then running the same script separately for each sublist, or setting the chunks of IDs from the command line), I wrote a simple multiprocessing routine. It's quite trivial, as the parallel processes here don't have to communicate (unless I see some terrible race conditions arising). The ID lists are made by hand, but it'll be easy to split the list into smaller chunks of desired length by looping through it, or by using a list comprehension.
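multiprocessing.Pool can also do the chunking automatically; a minimal sketch, where run_galfit is a hypothetical stand-in for the actual batch-fit call (the worker must be a top-level function so it can be pickled):

```python
import multiprocessing

def run_galfit(galaxy_id):
    """Stand-in for one GALFIT batch fit on a single galaxy."""
    return galaxy_id, "done"

if __name__ == "__main__":
    ids = ["587722981741363294", "587722981741363295", "587722981741363296"]
    # Pool.map splits the ID list across workers and preserves the input order.
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.map(run_galfit, ids)
    print(results)
```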

ds9: an alias for multi-extension data cubes

Since I chronically cannot remember the command-line options to ds9, I added this to my ~/.bashrc: alias meds9='ds9 -scale mode 99.5 -cmap Heat -medatacube'

Friday, February 15, 2013

lit: Intrinsic properties of SDSS galaxies

Maller et al. It's important in many other ways, but here is an interesting excerpt on why the TF relation is different:
However, with the notable exception of the Tully-Fisher relation these distributions and relations are traditionally measured in terms of the observed properties of galaxies. That is, the measurements used are K-corrected and corrected for foreground dust extinction, but no correction is attempted to compensate for the viewing angle from which the galaxies are observed. In contrast, the Tully-Fisher relation is not a relationship between a galaxy's observed luminosity and rotation velocity, but a relation between a galaxy luminosity and rotation velocity corrected for inclination. The inclination correction attempts to recover the intrinsic properties of a galaxy and not properties that are measured because of the particular angle from which the galaxy is viewed. For a long time there was great controversy over whether spiral galaxies are optically thin or optically thick (e.g., Holmberg 1958; Disney et al. 1989), but in the early 1990s it was established that galaxies become redder and fainter as they are inclined (e.g., Burstein et al. 1991).

Wednesday, February 13, 2013

Pyfits: quickly overwriting a .fits header

I had some .fits files that lost their header information during analysis. Here's a quick kludge I coded up to retrieve the original header and overwrite it for a single file.

import pyfits
import sys

fileName = sys.argv[1]

# retrieve the header from the original (compressed) SDSS frame
originalFile = "../data/SDSS/r/" + fileName[:-4] + "fit.gz"
originalHDUList = pyfits.open(originalFile)
originalHeader = originalHDUList[0].header

# overwrite the header in place and write the change back to disk
HDUList = pyfits.open(fileName, mode='update')
HDUList[0].header = originalHeader
print HDUList[0].header
HDUList.flush()

Monday, February 11, 2013

Thursday, February 7, 2013

SDSS: a query for clean photometry galaxies with redshifts

SELECT g.objid, g.ra, g.dec, g.petroMag_u, g.petroMag_g, g.petroMag_r,
       g.petroMag_i, g.petroMag_z, s.z, g.petroR50_r, g.petroR90_r,
       g.rho, g.isoA_r, g.isoB_r, g.isophi_r,
       dbo.fPhotoFlagsN(flags) as flags
INTO mydb.galaxies
FROM Galaxy as g
   JOIN SpecObj as s ON g.objId = s.bestObjId
WHERE ((flags & 0x10000000) != 0)       -- detected in BINNED1
AND ((flags & 0x8100000c00a0) = 0)      -- none of the fatal photometry flags
AND (((flags & 0x400000000000) = 0) or (psfmagerr_g <= 0.2))  -- no DEBLEND_NOPEAK, unless the PSF error is small
AND (((flags & 0x100000000000) = 0) or (flags & 0x1000) = 0)  -- no INTERP_CENTER, or no COSMIC_RAY

Monday, February 4, 2013

lit review: The shape of the dark matter halo in the early-type galaxy NGC 2974

The shape of the dark matter halo in the early-type galaxy NGC 2974, by Weijmans, Krajnovic et al. They use kinemetry to analyse gas and stellar kinematics, with a clear description of the AD correction. (Thanks, Mariya!)

lit review: a generalization of photometry to the higher moments of the line-of-sight velocity distribution

by Krajnovic et al. This will probably be one of the important papers when writing up: an interesting breakdown and discussion of velocity map moments, also many useful references (esp. Schoenmakers et al. 1997), and the degeneracy between q, the centre and the systemic velocity v0.
We have presented a generalization of surface photometry to the higher-order moments of the LOSVD, observed with integral-field spectrograph. We call our method kinemetry. For even moments of the LOSVD, kinemetry reduces to photometry. For odd moments, kinemetry is based on the assumption that it is possible to define an ellipse such that the kinematic profile extracted along the ellipse can be well described by a simple cosine law. This assumption is satisfied in the case of simple axisymmetric rotators.

Friday, January 25, 2013

IDL # operator

I'm playing around with my old MSc code, as I have an idea where it might be useful. Here's something I forgot:
The # operator computes array elements by multiplying the columns of the first array by the rows of the second array. The second array must have the same number of columns as the first array has rows. The resulting array has the same number of columns as the first array and the same number of rows as the second array.
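As a sanity check on the dimension bookkeeping in that quote: the second array must have as many columns as the first has rows, and the result has the first array's column count and the second array's row count. In numpy shape terms that reads as result = B @ A (my reading; IDL's row/column printing conventions differ from numpy's, so verify against a known case before relying on it):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)   # 2 rows, 3 columns
B = np.arange(8).reshape(4, 2)   # 4 rows, 2 columns (= A's row count)

# Result shape: (B's rows, A's columns) = (4, 3), matching the quoted rule.
result = B @ A
print(result.shape)  # (4, 3)
```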

Thursday, January 24, 2013

lit: Galactic Dynamics

I'm reading Binney & Tremaine's section on LSR, asymmetric drift and its correction, as I plan to do the corrections for my stellar velocity fields. Some definitions:
LSR is the hypothetical local standard of rest: the velocity of imaginary stars on precisely circular orbits at the Solar radius.
Characteristic thickness is the ratio of a population's surface density to its volume density at the galactic plane; it differs between stellar populations (as do their vertical scale heights), arising from interactions with molecular clouds and/or spiral arms.
The collisionless Boltzmann equation states that the flow of stellar phase points through phase space is incompressible: the phase-space density around the phase point of a given star remains constant. A nice example of such incompressible flow is a marathon race where all participants run at constant speeds: at the beginning the density of runners is large, but their speed distribution is wide; at the end of the race the density is low, but the speeds of all runners passing a point are very similar. The coordinates of stars in phase space are (x, v).
Asymmetric drift is the difference between the LSR (local circular speed) and the mean rotation velocity of a population. Velocity dispersion and asymmetric drift: see eq. 4.32-4.34 for the correction formulae; also Neistein, Maoz, Rix et al. for their application.

python: catch multiple exceptions in one line
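The linked recipe isn't shown, but the idiom is a single except clause taking a tuple of exception types (parse is my own example function):

```python
def parse(value):
    """Return value as a float, or None if it can't be converted."""
    try:
        return float(value)
    except (ValueError, TypeError):  # one clause, several exception types
        return None

print(parse("3.5"))  # 3.5
print(parse("N/A"))  # None  (ValueError)
print(parse(None))   # None  (TypeError)
```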

lit: Asymmetric Drift and the Stellar Velocity ellipsoid

Asymmetric Drift and the Stellar Velocity Ellipsoid by Westfall, Bershady et al.:
The shape of the stellar velocity ellipsoid, defined by \sigma_R, \sigma_{\phi}, and \sigma_z, provides key insights into the dynamical state of a galactic disk: \sigma_z:\sigma_R provides a measure of disk heating and \sigma_{\phi}:\sigma_R yields a check on the validity of the epicycle approximation (EA). Additionally, \sigma_R is a key component in measuring the stability criterion and in correcting rotation curves for asymmetric drift (AD), while \sigma_z is required for measuring the disk mass-to-light ratio.
See also p.2 on asymmetric drift corrections for stellar velocities.

Monday, January 21, 2013

Wednesday, January 16, 2013

Probability vs. likelihood

Probability -- a function of the outcome given a fixed parameter value. (What's the probability that 50 heads will come up, given that I toss a fair coin (--> parameters) 50 times?)
Likelihood -- a function of the parameters given a fixed outcome. (What's the likelihood a coin is fair if 50 heads came up out of 100 (--> outcome)?)
The likelihood of a set of parameter values given some observed outcomes is equal to the probability of those observed outcomes given those parameter values.
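The coin example can be made concrete with the binomial pmf (math.comb needs Python 3.8+; using the 50-heads-in-100-tosses outcome from the likelihood question above):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(k heads in n tosses | per-toss heads probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability: fix the parameter (a fair coin), ask about an outcome.
p_50_of_100 = binom_pmf(50, 100, 0.5)   # ~0.0796

# Likelihood: fix the outcome (50 heads in 100), vary the parameter.
grid = [i / 100 for i in range(1, 100)]
likelihood = [binom_pmf(50, 100, p) for p in grid]
p_hat = grid[likelihood.index(max(likelihood))]
print(p_hat)  # 0.5 -- the maximum-likelihood estimate, k/n
```

Note the same function is evaluated in both cases; only what is held fixed changes, which is exactly the statement in the last sentence above.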

Monday, January 14, 2013

Jerusalem WS lecture notes: 16. Signatures of Inflows and Outflows

By R. Dave, slides here.
  • metals in diffuse IGM -- outflows!
  • at high z, SF can't keep up with inflow: the gas accumulation phase: gas can even turn into molecular phase, but due to low efficiency it won't all be processed into stars (equilibrium breaks down), RD 2012
  • eta scaling ~ M_{halo} ^{-1/3} -- z_{equil} ~ 5 for massive galaxies
  • equilibrium model code at http://ursa.as.arizona.edu/~/rad/work/equil.c
  • 'thesis back in the stone age'
  • constraining outflow parameters: direct observation and direct modelling are challenging
  • direct observation of preventive feedback:
    • mass in CGM: gas in absorption, X-ray emission, soft X-ray bg: COS (Cosmic Origins Spectrograph): OVI high ionisation line, 2 components: photo- and collisional excitation: SF galaxies are probing 300 000 K gas: http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=49663, Anderson 2012
    • direct observations of ejective feedback: outflowing ISM lines: LBG tomography
    • direct observations of wind recycling: _metallicity_ of inflowing gas. Disk outskirts: \alpha_z ~ 0.3
  • indirect constraints:
    • counting statistics are not as good as scaling relations
    • galaxies to 0th order are a 1-parameter family (stellar or halo mass) --> good scaling parameters
    • conversion efficiency peaks at 10^{12} M_{\odot}
    • if you don't have winds, you have overcooling problems
    • cosmic SFR efficiency: SFR vs. Halo mass infall rate: a simple combination of model parameters (Behroozi et al), simulations are hard
    • SFR vs. M_{star}: the galaxy MS is the parameter to compare SF galaxies across redshifts, not SFR -- present starbursts and past normal SF galaxies have identical SFRs
    • MS evolution is currently difficult to model
    • SFR -- metallicity relation
    • M_{star} -- feedback relation
    • M_{star} function -- Baldry 2008 simulations -- reproduces the lower mass GSMF well, arbitrary quenching at logM > 11, governed by differential recycling which flattens the relation at intermediate masses --> inflection
    • gas fractions and X_{co}
  • DLAs: kinematics favor ejection, not prevention, CDM has many low-mass halos, if they have many HI, too many narrow DLAs result

Jerusalem WS lecture notes: 15. Star-Forming Galaxies

By R. Genzel, slides here
  • ways to look back in time:
    • look-back imaging surveys
    • local stellar archaeology
    • detailed, resolved in situ observations
    • pathologies: Hanny's Voorwerp: radiation echo --> timescales of AGN evolution
  • SF in MW:
    • shielded regions, pressure comparable to diffuse ISM, density far above MW mean
    • highly structured, velocity dispersion ~5 km/s (supersonic motions), increases with scale
    • unstable to fragmentation
    • in large scales -- virialised, in small scales -- not --> inefficient SF:
    • MW SFR ~ 3 M_{\odot}/yr
    • Krumholz 2005
    • gas depletion time way smaller than Hubble time -- why stars are still forming: MHD pressure (Alfven) prevents collapse, GMCs are magnetically supercritical, but highly supersonic --> theory and observations tend to support the second explanation
  • SF occurs in clusters (70-90%, Lada & Lada 1993), super star clusters with M ~ 10^6 M_{\odot} -- from the collapse of the whole MC
  • SF tracers: Kennicutt 1998
    • H\alpha: traces Lyman continuum, issues w/ extinction correction
    • UV continuum: IR fine structure lines, C+ --> easily collisionally excited, in areas that are exposed to UV, OI line
    • mid and far IR: calorimetric method: UV heats dust, reprocessing: FIR is currently the best, esp. due to Herschel
    • radio: FIR-radio relation, works if there is no AGN, evolutionary changes
    • SFH uncertainties: SF tracers are no better than 0.3 dex
  • pictures of extinction:
    • 'screen' of dust and gas surrounding SF region
    • 'mixed' -- dust and stars are mixed, high and low extinction regions: you can observe short and long wavelengths together, Calzetti 2000
  • dust extinction: 'greyer' -- less dependent on wavelength
  • mass tracers (see the slides):
    • SED fitting: live stellar mass
    • rotcurves: dynamical mass: HI not detectable beyond z = 0.1-0.2 until at least SKA
    • velocity dispersion
    • CO -- variation of X
    • lensing
    • submm dust luminosity (ALMA)
    • uncertain up to at least factor of 2: SFH, IMF, extinction, spatial distribution, kinematics, conversion factors
  • gas-SF formation:
    • K-S relation (empirical, simple physical motivation (see slides)), Kennicutt & Evans 2012 review
    • conversion factor: MW factor not appropriate for metal poor galaxies
    • classic papers on SF: 'very good papers were written beyond our event horizon': Kennicutt 1983: infall required for SF over cosmic time
    • spatially resolved SF relation: KS law between H_2 and SFR is more linear -- can we ignore HI?
    • HCN traces very dense regions
    • detection thresholds: stellar surface density
    • KS relation breaks down in scales < 500 pc -- CO and H\alpha are not correlated, due to local evolutionary effects: larger areas are necessary to 'smear out'
    • 'J. Gunn in suit'
  • VLT: very thin, large mirror supported by adaptive optics pads
  • LBT: thin faceplate on light honeycomb ceramic structure
  • Concepts for > 20 m telescopes:
    • aplanatic Gregorian optics
    • 30 m telescope (TMT), EELT (10" FoV, 5 mirrors)
  • JWST: 6m, 'Keck in space'
  • AO:
    • wavefront sensor
    • deformable mirror
    • guide star | artificial guide stars
    • feedback w/ computer
  • IFUs: KMOS
  • mm interferometry of cold gas:
    • IRAM Plateau de Bure -- exploration of molecular gas in high redshift
    • ALMA: 0.005" resolution, R = 30,000,000