

Supernovae and the Hubble diagram


Author: Eduardo Martín Calleja

The phenomenon of the expanding universe, which separates from each other all cosmic objects not bound by the force of gravitation, has been known since Hubble's observations in 1929. The rate of this expansion is governed by the so-called Hubble's law, which relates the redshift observed in the spectra of distant galaxies to the distance that separates us from them.

Hubble's law is usually shown by a diagram plotting this redshift, in the form of a quantity with units of speed (\(c \, z\)) in \(km \, s^{-1}\), against distances measured in megaparsecs. Clearly the biggest challenge in producing a diagram of this type is determining the distances to such remote objects. To do this, the objects known as supernovae play, as we shall see, a crucial role. Once drawn, this diagram shows an approximately linear relationship between the two variables, with the proportionality constant \(H_0\) called the Hubble constant. The subscript \(_0\) indicates that we refer here to its present value, as it can vary over time depending on the model of evolution of the Universe that is adopted. The current best value of this constant is \(H_0 = 67.8 \, km \, s^{-1} \; Mpc^{-1}\).

The purpose of this post is to construct a Hubble diagram and get an approximate value of the Hubble constant.

Imports and references

The objects we will build this diagram with are the Type Ia supernovae obtained in the SDSS Supernova Survey. You can consult the main page of this survey. A summary with links to relevant documentation for this exercise can be found here.

In particular, this is the table of supernovae found by the SDSS in this survey.

A Wikipedia article about supernovae and the usefulness of Type Ia supernovae in the calculation of cosmic distances can be found here.

This is a poster, at a very accessible level, with a purpose similar to this post's: to make a Hubble diagram with data from the SDSS supernova survey.

In [1]:
%matplotlib inline

from __future__ import division

import re    # Regular expressions
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import quantities as pq
import scipy.stats as s

# This IPython magic generates a table with version information
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information re, numpy, matplotlib, pandas, quantities, scipy
Out[1]:
Software          Version
Python            2.7.9 64bit [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
IPython           2.3.1
OS                Linux 3.13.0-45-generic x86_64 with debian jessie/sid
re                2.2.1
numpy             1.9.1
matplotlib        1.4.2
pandas            0.15.2
quantities        0.10.1
scipy             0.15.1
Tue Feb 24 20:13:41 2015 CET
In [2]:
# Definition of new constants:
earth_radius = 6378160.0 * pq.m
moon_radius = 1740000.0 * pq.m
sun_radius = 695000000.0 *pq.m
earth_mass =  5.97219e24 * pq.kg
sun_mass = 332946 * earth_mass
Hubble_constant = 67.80 * (pq.km/pq.s)/(1e6*pq.pc)

# Definition of new units
Mly = pq.UnitQuantity('megalight_year', 1e6*pq.ly, symbol = 'Mly')
Mpc = pq.UnitQuantity('megaparsec', 1e6*pq.pc, symbol = 'Mpc')
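As a quick sanity check on the Hubble constant just defined, its reciprocal gives the so-called Hubble time, a rough timescale for the age of the universe. A minimal sketch using plain floats (conversion factors hard-coded, rather than the quantities package used above):

```python
# Hubble time: the reciprocal of H0, converted from (km/s)/Mpc to years.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_YEAR = 3.156e7   # seconds in one year (approximate)

H0 = 67.8                        # km/s per Mpc
H0_per_s = H0 / KM_PER_MPC       # H0 expressed in s^-1
hubble_time_gyr = 1.0 / H0_per_s / SEC_PER_YEAR / 1e9

print("Hubble time: %.1f Gyr" % hubble_time_gyr)   # roughly 14.4 Gyr
```

The result is comparable to the accepted age of the universe, as expected from a roughly constant expansion rate.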

A little bit of theory

The homogeneous and isotropic expansion of the universe

A feature of the expansion of the universe is that at large scales it is homogeneous (same at all points) and isotropic (same in all directions). An expansion of this type maintains the shape of the objects. For example, a triangle would become, after a certain time, a scaled one, but similar to the first triangle. The similarity ratio is called scale factor and depends on the instant \(t\) considered, and therefore is designated by \(a(t)\).

Let's consider now two galaxies, and let \(\vec r(t_0)\) be the vector that joins them at time \(t_0\), which we can take for example to be the present time. At another instant \(t\) the vector joining these two objects will be

\[\vec r(t) = a(t) \, \vec r(t_0)\]

The speed at which the two galaxies move apart will therefore be (a dot denoting the derivative with respect to time):

\[\vec v_r(t) = \dot a(t) \, \vec r(t_0) = \dot a(t) \frac{\vec r(t)}{a(t)}\]

That is:

\[ \vec v_r = \frac{\dot a}{a} \vec r \]

And calling \(H(t) = \dot a(t) / a(t)\):

\[\vec v_r = H \vec r\]

Or, removing the vector notation: \(v = H \, r\)

That is, the speed at which galaxies move apart as a result of the expansion of the universe is proportional to the distance that separates them. The factor of proportionality, \(H\), can vary over time, but at each instant it is the same at all points of the universe. \(H = H(t)\) is known as the Hubble parameter and, as mentioned before, it is not constant since it may change over time. Most important, however, is to note that this relationship between speed and distance is a fundamental, exact property: a mathematical consequence of a homogeneous and isotropic expansion of the universe, not to be confused with Hubble's law, seen below, which has an observational nature.

Cosmic redshift

When we look at the light emitted at any wavelength by a distant object, we observe a difference between the wavelength measured by the observer and the wavelength emitted by the object. This discrepancy is known as redshift and is characterized by the dimensionless quantity \(z\), defined as:

\[z = \frac{\lambda_{\text{obs}}-\lambda_{\text{emit}}}{\lambda_{\text{emit}}} = \frac{\Delta \lambda}{\lambda_{\text{emit}}}\]

Which can also be written as:

\[ z + 1 = \frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}} \]

Now I will consider only the redshift which results from the expansion of the universe. If we imagine the universe as an expanding grid, we would be in the case where both the emitting source and the observer occupy nodes of this grid. That is, we assume that the objects have no peculiar motions of their own (in reality they always do, but at the large distances considered their effect is negligible).

Well, in this case it can be demonstrated that the instantaneous wavelength \(\lambda(t)\) is proportional to the scale factor, and we can write:

\[\lambda = \text{const} \cdot a \Rightarrow \lambda \propto a\]

That is, as the universe expands, the wavelength expands at the same rate, and by measuring the received wavelength and comparing it with the emitted one, it is possible to find out how much the universe has expanded while the light was traveling to reach us. It is as if the expanding universe "stretched" the light waves traveling through it.

For example, if at time \(t_{emit}\) a light pulse is emitted and we receive it at time \(t_{now}\), we can write:

\[1 + z = \frac{\lambda_{now}}{\lambda_{emit}} = \frac{a(t_{now})}{a(t_{emit})}\]

This redshift is not a manifestation of the Doppler effect, but results solely from the expansion of the universe and is, as we have seen, closely related to the expansion scale factor. It is called the cosmological redshift, and that is how we will refer to it hereafter in this post.
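The relation \(1 + z = a(t_{now})/a(t_{emit})\) has a direct physical reading: the measured redshift tells us by what factor the universe has expanded while the light was in flight, and by how much any wavelength has been stretched. A minimal sketch (the H-alpha rest wavelength is used purely as an illustration; it plays no role later in the post):

```python
# Cosmological redshift: wavelengths stretch with the scale factor.
H_ALPHA_NM = 656.3   # rest wavelength of the H-alpha line, in nm (illustrative)

def expansion_factor(z):
    """Return a(t_now)/a(t_emit) = 1 + z for a cosmological redshift z."""
    return 1.0 + z

def observed_wavelength(lam_emit, z):
    """lambda_obs = (1 + z) * lambda_emit."""
    return expansion_factor(z) * lam_emit

z = 0.42   # close to the largest redshift in the SDSS supernova sample
print("Expansion factor since emission: %.2f" % expansion_factor(z))
print("H-alpha emitted at %.1f nm is observed at %.1f nm"
      % (H_ALPHA_NM, observed_wavelength(H_ALPHA_NM, z)))
```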

Hubble Law

As mentioned, in 1929 Hubble formulated his famous law, which states the proportionality between the redshift measured in the spectrum of the light we receive from galaxies and their distance, obtained by various methods, but mainly by using the distance modulus resulting from the comparison between the absolute magnitude and the apparent magnitude of the object:

\[ z = \frac{H_0}{c}d \]

At the time, Hubble interpreted the redshift of galaxies as a manifestation of the Doppler effect due to their moving away from us. On the other hand, for small values of \(z\) the Doppler effect admits the approximation \(v \approx cz\), where \(v\) is the velocity of the source relative to the observer, so Hubble's law is often written as \(v = H_0 d\), although, to clarify how this velocity is obtained, it is preferable to write Hubble's law as:

\[ c \, z = H_0 \, d \]

A number of clarifications are worth making here:

  • Unlike the theoretical law \(v = H(t) \, r\), Hubble's law \(c \, z = H_0 \, d\) is purely experimental, being the result of multiple observations. In fact, this linear relationship no longer holds for high values of z.

  • The distance \(r\) that appears in the theoretical relationship between recession velocities and distances is the proper, exact distance that separates us from the object, measured in an inertial frame of the observer. That is to say, you would measure this distance by laying end to end rulers that do not expand with the universe.

  • Conversely, \(d\) (note the change in notation) is the distance estimated using various observational methods. Most often \(d\) is the "luminosity distance" \(d_L\), calculated through the distance modulus seen in this post. The luminosity distance is a good approximation to the proper distance \(r\) for nearby objects, but deviates from it for the most distant ones.

  • The law \(v = H(t) \, r\) relates speeds and distances in the "present", i.e., they are speeds and distances now, at the time of the observation, measured in the observer's proper time. Instead, \(z\) and \(d\) refer to the time when the light was emitted by the observed object.

  • Due to the coincidence of the velocity \(v\) with the quantity \(c \, z\) for small values of \(z\), we can write \(H(t_{now}) = H_0\). The Hubble constant is the value taken, at the present moment, by the Hubble parameter.

An excellent explanation of all these subtleties concerning Hubble's law can be found in the notes of the Extragalactic Astronomy course by Mark Whittle.
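Putting numbers into \(c \, z = H_0 \, d\) helps fix ideas: for small redshifts the law gives a quick distance estimate. A small sketch (the constants are the usual values; the sample redshift is the smallest one appearing later in the survey data):

```python
# Distance from redshift via Hubble's law (low-z approximation).
C_KM_S = 299792.458   # speed of light in km/s
H0 = 67.8             # Hubble constant in km/s per Mpc

def hubble_distance_mpc(z):
    """d = c z / H0, in Mpc; only meaningful for small z."""
    return C_KM_S * z / H0

z = 0.01306   # smallest redshift found in the survey data
print("cz = %.0f km/s  ->  d = %.1f Mpc" % (C_KM_S * z, hubble_distance_mpc(z)))
```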

After this theoretical introduction, I'm going to obtain the corresponding Hubble diagram from the Type Ia supernovae of the SDSS survey.

Calculation of the distance to a supernova

As has been mentioned, we will focus on Type Ia supernovae. Their main features, with a view to their use in building a Hubble diagram, are:

  • Their maximum absolute magnitude in the blue (B) and visual (V) filters of Johnson's UBV photometric system is:

    \[M_B \approx M_V = -19.3 \pm 0.3\]

  • Their apparent magnitude \(m\) is determined by examining their light curve, which is a curve that represents the evolution of \(m\) as a function of time.

  • As their absolute magnitude \(M\) is (approximately) constant, we say that Type Ia supernovae are "standard candles".

  • Once we have the maximum apparent magnitude \(m\), we can calculate the distance using the distance modulus \(\mu\), which expresses that the distance \(d_{Mpc}\) in megaparsecs is given by:

    \[\mu = m - M = 5 \, \log_{10}d_{Mpc} + 25\]

In [3]:
# Example: suppose that in the light curve of a Ia supernova
# we measure a maximum apparent magnitude of 19.5 with the V filter
m = 19.5
M = -19.3
k = (m-M)/5 - 5
d = 10**k
print "The luminosity distance is %d Mpc" %d
The luminosity distance is 575 Mpc

To calculate the maximum magnitude, additional corrections can be applied, the most important being the Phillips relation, which we will not use in this post.

Reading the light curves files

To progress in our study, we need the light curves of Type Ia supernovae identified by the SDSS survey. These have been published in the following article: "The Sloan Digital Sky Survey-II: Photometry and Supernova Ia Light Curves from the 2005 data" Holtzman et al 2008, AJ, 136, 2306.

In fact, what will interest us most are the light curves in electronic format, which can be downloaded from the page of the SDSS Supernova Survey.

So, the first thing to do is to download all these files into a subdirectory of our working directory. In my case this subdirectory will be called "SDSS_light_curves".

In [4]:
# Get a list with the names of all the downloaded files in the subdirectory
lc_files = !ls ../SDSS_light_curves
lc_files = list(lc_files.n.split('\n'))
lc_files = lc_files[1:]    # Delete the ReadMe file from the list
In [5]:
#A sample of the names...
print lc_files[0:3]
['SN10028.snphot-5.04.sum', 'SN10096.snphot-5.04.sum', 'SN10106.snphot-5.04.sum']

Opening any of these files with a text editor shows that the first 18 rows contain a description of the different data fields. Line 19 is a header line with the names of the data columns, then there are a number of lines with data separated by spaces.

I will then read the first of the files creating a Pandas dataframe with the fields that interest us. Once created the dataframe with the data of the first file, I'll go about all files in the subdirectory adding to our data dataframe the "light curve" data of each supernova.

In [6]:
filename = lc_files[0]
df = pd.read_table('../SDSS_light_curves/'+filename, 
     skiprows=18, sep=r'\s+') #Fields separated by spaces
df.columns    # List the columns of the dataframe
Out[6]:
Index([u'#FLAG', u'MJD', u'FILT', u'MAG', u'MERR', u'MSKYERR', u'MGALERR', u'FLUX', u'FLUXERR', u'SKYERR', u'GALERR', u'NPRE', u'TELE', u'RUN', u'STRIP'], dtype='object')
In [7]:
# Let's use just the four first columns
df = df.ix[:,0:4]

# Change the name of the first column...
df = df.rename(columns={'#FLAG':'FLAG'})
df.head()
Out[7]:
FLAG MJD FILT MAG
0 0 53616.363706 1 25.659
1 32 53616.360389 2 22.739
2 0 53616.361218 3 27.020
3 32 53616.362877 4 21.498
4 4 53616.362048 0 25.049

Then I will extract the value of the redshift "Z" found in the second line of the file:

In [8]:
with open('../SDSS_light_curves/'+filename) as f:
    f.readline()
    scndline = f.readline()
resp = re.search(r'[\d.]+', scndline)  # Using a regular expression
Z = resp.group()
print scndline
print type(Z), Z
# Z:  0.06533 ZERR:  0.00016 SDSS TYPE:  120

<type 'str'> 0.06533

And add a new colum with the Z value, converted to float:

In [9]:
df['Z'] = float(Z)

We also have to include in the dataframe the id of the supernova, which can be extracted from the file name:

In [10]:
print filename
SN10028.snphot-5.04.sum

In [11]:
# Extract the supernova id from the filename using a regular expression:
resp = re.search(r'\w+', filename)
SN = resp.group()
print SN     # See if it works...
SN10028

Then I'll add a column to the dataframe with the id of the supernova:

In [12]:
df['SN'] = SN
In [13]:
df.head() # Look at a sample of the data
Out[13]:
FLAG MJD FILT MAG Z SN
0 0 53616.363706 1 25.659 0.06533 SN10028
1 32 53616.360389 2 22.739 0.06533 SN10028
2 0 53616.361218 3 27.020 0.06533 SN10028
3 32 53616.362877 4 21.498 0.06533 SN10028
4 4 53616.362048 0 25.049 0.06533 SN10028

This is exactly the model we want for our dataframe. Now let's loop over the names of the remaining light-curve files, appending their rows to it.

In [14]:
for filename in lc_files[1:]:
    dftemp = pd.read_table('../SDSS_light_curves/'+filename, 
                           skiprows=18, sep=r'\s+')
    dftemp = dftemp.ix[:,0:4]
    dftemp = dftemp.rename(columns={'#FLAG':'FLAG'})
    with open('../SDSS_light_curves/'+filename) as f:
        f.readline()
        scndline = f.readline()
    resp = re.search(r'[\d.]+', scndline)
    Z = resp.group()
    dftemp['Z'] = float(Z)
    resp = re.search(r'\w+', filename)
    SN = resp.group()
    dftemp['SN'] = SN
    df = pd.concat([df, dftemp], ignore_index=True)
In [15]:
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 16708 entries, 0 to 16707
Data columns (total 6 columns):
FLAG    16708 non-null int64
MJD     16708 non-null float64
FILT    16708 non-null int64
MAG     16708 non-null float64
Z       16708 non-null float64
SN      16708 non-null object
dtypes: float64(3), int64(2), object(1)
memory usage: 913.7+ KB

In [16]:
df[100:110] # Show some of the rows...
Out[16]:
FLAG MJD FILT MAG Z SN
100 32 53704.265479 1 18.955 0.06533 SN10028
101 1184 53704.262162 2 18.825 0.06533 SN10028
102 32 53704.262991 3 19.032 0.06533 SN10028
103 160 53704.264650 4 19.148 0.06533 SN10028
104 32 53704.263820 0 20.735 0.06533 SN10028
105 8 53616.395290 1 25.260 0.07775 SN10096
106 0 53616.391972 2 26.469 0.07775 SN10096
107 32 53616.392802 3 23.493 0.07775 SN10096
108 32 53616.394460 4 21.202 0.07775 SN10096
109 128 53616.393631 0 26.155 0.07775 SN10096

As indicated in the header of the light curve data files, measurements with a flag of 1024 or greater are unreliable. Let's clean up our dataframe by removing values with a flag \(\geq\) 1024.

In [17]:
b = df['FLAG'] < 1024
df = df[b]
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 15838 entries, 0 to 16707
Data columns (total 6 columns):
FLAG    15838 non-null int64
MJD     15838 non-null float64
FILT    15838 non-null int64
MAG     15838 non-null float64
Z       15838 non-null float64
SN      15838 non-null object
dtypes: float64(3), int64(2), object(1)
memory usage: 866.1+ KB

Plotting a light curve

Although not essential for the purpose of plotting a Hubble diagram, I will now generate a graphical representation of the light curve of one of the supernovae whose data have been read, which will help to better understand the whole process.

In [18]:
# Create a list with all Supernovae id
list_SN = list(df['SN'].unique())

# We choose one of them
SNid = list_SN[30]

# Select filter 'g' values (filter 1)
b = (df['SN'] == SNid) & (df['FILT']==1)
df_g = df[b].sort(columns = 'MJD')
# And next filter 'r' values (filter 2)
b = (df['SN'] == SNid) & (df['FILT']==2)
df_r = df[b].sort(columns = 'MJD')

Now we are ready to plot the light curves corresponding to filters "g" and "r"

In [19]:
f, (ax1, ax2) = plt.subplots(1,2, sharex=True)
f.set_size_inches(12, 5)
f.suptitle(SNid, fontsize=20)
f.subplots_adjust(hspace=0)
ax1.plot(df_g['MJD'], df_g['MAG'],'o-g')
ax1.invert_yaxis()
ax1.set_title('Filter g', fontsize=15)
ax1.set_xlabel('MJD')
ax1.set_ylabel('MAG')
ax2.plot(df_r['MJD'], df_r['MAG'],'o-r')
ax2.invert_yaxis()
ax2.set_title('Filter r', fontsize=15)
ax2.set_xlabel('MJD')
ax2.set_ylabel('MAG');

These plots show very well how the apparent brightness of the supernova increases to a peak, from which it decreases more slowly. It is at this maximum that the absolute magnitude reaches a known value, which allows us to estimate the distance from us.

Dates are in the MJD (Modified Julian Date) format. The units are days, so we can also get an idea of the duration of the outburst of a supernova. The sharp increase in apparent brightness followed by its less abrupt fall takes place over a period of only one month. The total recorded time period is 90 days.
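Since MJD counts days from a fixed epoch (midnight on 1858-11-17), converting to a calendar date is easy with the standard library, which helps place these observations in time. A minimal sketch:

```python
import datetime

MJD_EPOCH = datetime.date(1858, 11, 17)   # MJD = 0 corresponds to this date

def mjd_to_date(mjd):
    """Convert a Modified Julian Date to a calendar date (day precision)."""
    return MJD_EPOCH + datetime.timedelta(days=int(mjd))

# The first observation read above was at MJD 53616:
print(mjd_to_date(53616))   # -> 2005-09-03
```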

Getting the minimum apparent magnitudes

Having created the dataframe with the data from all the light curves, we will build the Hubble diagram for these supernovae. A rigorous determination of the distances would require comparing the available light curves with a set of "standard" curves and applying corrections to determine the maximum absolute magnitude. I won't do that. I will just create a dataframe selecting, for each supernova, the observation with the greatest apparent brightness (i.e., the minimum value of the corresponding apparent magnitude) in the "g" and "r" filters, along with the value of Z and the identification of the supernova.

In [20]:
b = df['FILT'] == 1    # Select "g" filter
dftemp1 = df.ix[b,['MAG', 'Z', 'SN']]
dftemp1.columns = ['g', 'Z', 'SN']
grouped = dftemp1.groupby('SN') # Group by supernova id
df_g = grouped.min() #Select minimum values

b = df['FILT'] == 2    # Select "r" filter
dftemp2 = df.ix[b,['MAG', 'SN']]
dftemp2.columns = ['r', 'SN']
grouped = dftemp2.groupby('SN') # Group by supernova id
df_r = grouped.min()  #Select magnitude minimum values

# Finally we concatenate the columns
# of the last two dataframes
# Pandas will handle aligning the values
# which correspond to a single supernova
df_gr = pd.concat([df_g, df_r], axis=1)
df_gr['g'] = df_gr['g'].astype(np.float)
df_gr['r'] = df_gr['r'].astype(np.float)
df_gr = df_gr[['Z', 'g', 'r']] #Reordering columns
df_gr.head()
Out[20]:
Z g r
SN
SN10028 0.06533 18.292 18.354
SN10096 0.07775 20.323 19.904
SN10106 0.14720 20.844 20.665
SN1032 0.12975 20.410 20.205
SN10434 0.10430 19.432 19.369

Extinction correction

The phenomenon of interstellar extinction is the darkening of the light coming from celestial objects, due to absorption and scattering by more or less dense clouds of cosmic dust in the interstellar medium. Consequently celestial objects seem less bright, and therefore more distant, than corresponds to the distance at which they are actually located.

Therefore, we should apply a correction to the values of the apparent magnitudes in each of the filters, subtracting in each case a constant \(A_{\lambda} > 0\). Its effect will be a smaller magnitude, i.e., one corresponding to a brighter object, which is the apparent magnitude we would observe if there were no extinction. We can then apply the distance modulus formula to this new magnitude to get a better approximation of the distance that separates the supernova from us.

Here's an example: the supernova SN722 presents its lowest apparent magnitude in the bandpass filter \(g\), with a value of g = 19.394. The extinction correction for its host galaxy at the wavelength of this filter is \(A_g\) = 0.10. Estimate its distance, without and with application of the correction.

In [21]:
df_gr.ix['SN722',:]
Out[21]:
Z     0.08658
g    19.39400
r    19.19400
Name: SN722, dtype: float64
In [22]:
M = -19.3
m = 19.394
k = (m-M)/5 - 5
d = 10**k    # this is the distance in Mpc
print "Distance without correction: %.2f Mpc" %d
A = 0.10
m -= A
k = (m-M)/5 - 5
d = 10**k
print "Distance with correction: %.2f Mpc" %d
Distance without correction: 548.02 Mpc
Distance with correction: 523.36 Mpc

We can be sure that these corrections have not already been applied to the data, because the header of each light curve file contains the comment "NO extinction correction applied to mags or fluxes".

The problem we have to solve now is how to get the values of the extinction corrections for each supernova and band. Fortunately these are included in the header of each file. On line 11 (counting from 1) there is a text like:

# SFD extinctions [ugriz] = [ 0.13 0.10 0.07 0.05 0.04 ]

So I will read each file and obtain the extinction correction values for each supernova

In [23]:
lc_files = !ls ../SDSS_light_curves
lc_files = list(lc_files.n.split('\n')) # Create a list with filenames
lc_files = lc_files[1:]    # Remove the ReadMe file from the list
In [24]:
#Create a list with supernovae names
# Obtained from the list of file names
SN = lambda filename: re.search(r'\w+', filename).group()

#Create two lists with the values of the index and the column names
# of the new dataframe to be created with the extinctions corrections 
# for each SN and filter
index = [SN(filename) for filename in lc_files]
columns = ['Au', 'Ag', 'Ar', 'Ai', 'Az']

#Create an empty dataframe with the right structure
df_ext = pd.DataFrame(index=index, columns=columns)
df_ext.head()
Out[24]:
Au Ag Ar Ai Az
SN10028 NaN NaN NaN NaN NaN
SN10096 NaN NaN NaN NaN NaN
SN10106 NaN NaN NaN NaN NaN
SN1032 NaN NaN NaN NaN NaN
SN10434 NaN NaN NaN NaN NaN
In [25]:
# Finally all files are read
# Extracting the values of the extinctions
# and populating the dataframe df_ext

for filename in lc_files:
    id = SN(filename)  # Extract supernova id
    with open('../SDSS_light_curves/'+filename) as f:
        for linea in f:
            if 'extinctions' in linea: # This is the line containing extinctions
                l_campos = linea.split()
                
                # Write extinctions in the dataframe
                df_ext.ix[id] = [float(a) for a in l_campos[6:11]]
                break
In [26]:
# Let's check that everything seems OK
df_ext.head()
Out[26]:
Au Ag Ar Ai Az
SN10028 0.13 0.1 0.07 0.05 0.04
SN10096 0.14 0.1 0.07 0.06 0.04
SN10106 0.38 0.28 0.2 0.15 0.11
SN1032 0.38 0.28 0.2 0.15 0.11
SN10434 0.48 0.36 0.26 0.2 0.14

Once we have the extinction corrections, we can correct the apparent magnitudes for the filters \(g\) and \(r\) in the dataframe "df_gr", which contains the minimum apparent magnitude values for each supernova in each of these two filters. The correction simply consists of subtracting, from each apparent magnitude, the value of the correction for that filter and supernova.

In [27]:
# Let's first do a deep copy of the dataframe
# i.e., copying also the data
df_gr_ext = df_gr.copy()

# apply extinction corrections
df_gr_ext['g'] = df_gr['g'] - df_ext['Ag']
df_gr_ext['r'] = df_gr['r'] - df_ext['Ar']
In [28]:
# Check in a sample we got the right values
df_gr_ext.head()
Out[28]:
Z g r
SN
SN10028 0.06533 18.192 18.284
SN10096 0.07775 20.223 19.834
SN10106 0.14720 20.564 20.465
SN1032 0.12975 20.13 20.005
SN10434 0.10430 19.072 19.109

Correction due to the photometric system used

There is still another correction to be carried out. It has been said that we will use M = -19.3 as the value of the maximum absolute magnitude of type Ia supernovae. But this is a valid value only for the filters B and V of the UBV photometric system, while here we are using the SDSS filter system, which is different.

One way to take this into account is to convert between the SDSS "ugriz" magnitudes and the magnitudes in the UBV system. Several algorithms for this can be found on this page.

For example, I will use the Lupton (2005) transformation, although it is not certain that this is the most appropriate one in this case. Carrying out this conversion has an additional advantage: the calculation of the apparent magnitude in the "V" filter involves the apparent magnitudes in the SDSS "g" and "r" filters, thereby taking a sort of "average" between the two.

Here is the formula that will be used:

\[V = g - 0.5784 (g - r) - 0.0038\]

Now I will expand the previous df_gr_ext dataframe, which includes the extinction corrections, creating a new 'V' column calculated with the above formula:

In [29]:
df_gr_ext['V'] = df_gr_ext['g'] - 0.5784 * (df_gr_ext['g']-df_gr_ext['r']) - 0.0038
df_gr_ext.head()
Out[29]:
Z g r V
SN
SN10028 0.06533 18.192 18.284 18.24141
SN10096 0.07775 20.223 19.834 19.9942
SN10106 0.14720 20.564 20.465 20.50294
SN1032 0.12975 20.13 20.005 20.0539
SN10434 0.10430 19.072 19.109 19.0896

Obtaining the Hubble diagram

Determining recessional velocities

Let us first examine the range of z values of supernovae in the SDSS survey

In [30]:
df_gr_ext['Z'].min(), df_gr_ext['Z'].max()
Out[30]:
(0.01306, 0.42199999999999999)

This tells us that there is at least one observation with a value of z = 0.42, i.e., if we used the approximation \(v = z \, c\) in this case we would have an object moving at 42% of the speed of light, which is already a relativistic speed. This means that the approximation is not valid in cases like this if we interpret the result as the actual recession speed of the supernova with respect to us.

However, as we have seen, even for large values of the parameter z, the Hubble constant continues to be defined as:

\[H_0 = \frac{c \, z}{d} \]

I will therefore use, as usual, \(c \, z\) in constructing the Hubble diagram, although this quantity should not in general be interpreted as the actual recession speed of the supernova, but only as an approximation to that speed for small \(z\). The exact relationship between the recession velocity due to the expansion of the universe and the redshift \(z\) depends on the model of the universe under consideration, and requires the use of general relativity. In this regard you can consult the article: Expanding Confusion: common misconceptions of cosmological horizons and the superluminal expansion of the Universe
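To quantify how the linear approximation degrades, we can compare \(v = c\,z\) with the special-relativistic Doppler formula \(v/c = ((1+z)^2 - 1)/((1+z)^2 + 1)\). Neither is the exact general-relativistic recession velocity, which as noted depends on the cosmological model, but the comparison shows how far the naive estimate drifts at the survey's highest redshift:

```python
# Compare v = cz with the special-relativistic Doppler velocity.
C_KM_S = 299792.458   # speed of light in km/s

def v_linear(z):
    """Naive low-z approximation v = c z."""
    return C_KM_S * z

def v_rel(z):
    """Special-relativistic Doppler velocity for a redshift z."""
    f = (1.0 + z) ** 2
    return C_KM_S * (f - 1.0) / (f + 1.0)

for z in (0.01306, 0.42):   # extreme redshifts of the survey
    print("z = %.5f:  cz = %8.0f km/s,  v_rel = %8.0f km/s"
          % (z, v_linear(z), v_rel(z)))
```

At z = 0.01306 the two agree to well under one percent, while at z = 0.42 the linear formula overestimates the Doppler velocity by roughly 25%.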

In [31]:
z = df_gr_ext['Z']

#Pandas series without any attached units. The velocities are in Km/s
v = z * pq.c.rescale(pq.km/pq.s).magnitude

Calculation of distances

For the distances we will use the following expression of the distance modulus, where distances \(d\) are measured in Mpc:

\[\mu = m - M = 5 \, \log_{10}d + 25\]

Taking \(M = -19.3\) as the maximum absolute magnitude of any Type Ia supernova, and with \(m\) the value of the maximum apparent magnitude (the lowest value) taken from the dataframe:

In [32]:
M = -19.3
#m = df_gr_ext['V']
m = df_gr_ext['V'].astype(np.float)
k = (m-M)/5 - 5

# d is a Pandas Series with the distance in Mpc
d = 10**k

Getting \(H_0\) and the Hubble diagram

Now I will do a linear least squares fitting to the values of distances and speeds

In [33]:
slope, intercept, r_value, p_value, std_err = s.linregress(d,v)
print 'The coefficient of determination is r^2 = %.2f' %r_value**2
print 'H0 = %.2f' %slope
The coefficient of determination is r^2 = 0.86
H0 = 60.71

In [34]:
f, ax = plt.subplots(1)
f.set_size_inches(12,8)
ax.scatter(d, v, edgecolors='none')
ax.set_xlim(0, 2500)
ax.set_ylim(0,140000)


ax.set_title('Hubble diagram of SDSS-II survey Type Ia supernovae',
             fontsize=20)
ax.set_xlabel('Distances in Mpc')
ax.set_ylabel('$c \, z \; Km \, s^{-1}$')
ax.xaxis.label.set_fontsize(15)
ax.yaxis.label.set_fontsize(20)

ax.plot([0, 3000], [intercept, slope * 3000 + intercept], '-k', lw=2)
ax.text(1000, 80000,'Value obtained $H0 = %.2f \; Km \; s^{-1} \; Mpc^{-1}$'
        %slope, fontsize=15, rotation = 39)
ax.plot([0, 3000], [intercept, 67.80 * 3000 + intercept], ':r', lw=2)
ax.text(500, 130000,'Best current value $H0 = 67.80 \; Km \; s^{-1} \; Mpc^{-1}$',
        fontsize=15, rotation = 41, color='r')
ax.grid();

As can be seen, we have obtained a value approximately 10% lower than the current best value, which can be considered a good approximation given the very basic method used to obtain the apparent magnitude corresponding to the moment of maximum radiation of the supernovae under study. In this regard it may be interesting to read the eventful history of the determination of \(H_0\) in The Ups and Downs of the Hubble Constant. It has taken about 75 years to get a value of \(H_0\) with an uncertainty lower than 10%!

This post is already very long, so I will postpone other possibilities for improving the results, such as using more sophisticated ways of obtaining the peak apparent magnitude, taking into account other corrections like the Phillips relation, or studying the details of the outliers in this graph. Maybe for another post...

Well, that concludes this post. I hope you enjoyed it. Until next time!
