{ "cells": [ { "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [
 "# Ambient Seismic Noise Analysis\n",
 "## Cross Correlation\n",
 "\n",
 "In this tutorial you will try to reproduce one of the figures in Shapiro _et al._ (2005). To see which one, have a look at the next section.\n",
 "\n",
 "Reference: *High-Resolution Surface-Wave Tomography from Ambient Seismic Noise*, Nikolai M. Shapiro, et al. **Science** 307, 1615 (2005);\n",
 "DOI: 10.1126/science.1108339\n",
 "\n",
 "##### Authors:\n",
 "* Celine Hadziioannou\n",
 "* Ashim Rijal\n",
 "---\n" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "### In this notebook\n",
 "We will reproduce figure B of Shapiro *et al.* (2005). This figure compares: \n",
 "1) the seismogram from an event near station MLAC, recorded at station PHL (top)\n",
 "2) the \"Green's function\" obtained by correlating noise recorded at stations MLAC and PHL (center and bottom)\n",
 "\n",
 "All bandpassed for periods between 5 and 10 seconds. \n",
 "\n",
 "---" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "slide" } }, "outputs": [], "source": [
 "# Configuration step (please run it before the code!)\n",
 "\n",
 "# Matplotlib\n",
 "%matplotlib inline\n",
 "import matplotlib.pyplot as plt\n",
 "plt.style.use('ggplot')                  # Matplotlib style sheet - nicer plots!\n",
 "plt.rcParams['figure.figsize'] = 12, 8\n",
 "\n",
 "# NumPy\n",
 "import numpy as np\n",
 "\n",
 "# ObsPy\n",
 "from obspy.core import UTCDateTime, read\n",
 "from obspy.clients.fdsn import Client\n",
 "try:    # depends on the obspy version; this is for v1.1.0\n",
 "    from obspy.geodetics import gps2dist_azimuth as gps2DistAzimuth\n",
 "except ImportError:\n",
 "    from obspy.core.util import gps2DistAzimuth\n",
 "\n",
 "# ignore warnings from filters\n",
 "import warnings\n",
 "warnings.filterwarnings('ignore')" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [
 "### 1. Read in noise data \n",
 "Read the noise data for station MLAC into a stream. \n",
 "\n",
 "Then, **read in** noise data for station PHL.\n",
 "Add this to the stream created above.\n",
 "\n",
 "These two data files contain 90 days of vertical component noise for each station.\n",
 "#### If you need data for more than 90 days, it can be downloaded from the IRIS database. ####" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [
 "# Shapiro et al. use noise data from MLAC and PHL stations\n",
 "\n",
 "num_of_days = 90    # number of days of data; change if more than 90 days are required\n",
 "if num_of_days <= 90:\n",
 "    # get noise data for station MLAC\n",
 "    stn = read('https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/noise.CI.MLAC.LHZ.2004.294.2005.017.mseed')\n",
 "    # get noise data for station PHL and add it to the previous stream\n",
 "    stn += read('https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/noise.CI.PHL.LHZ.2004.294.2005.017.mseed')\n",
 "    # if you have the data stored locally, comment the stn = and stn += lines above,\n",
 "    # then uncomment the following 3 lines and adapt the paths:\n",
 "    # stn = read('./noise.CI.MLAC.LHZ.2004.294.2005.017.mseed')\n",
 "    # stn += read('noise.CI.PHL.LHZ.2004.294.2005.017.mseed')\n",
 "    # ste = read('event.CI.PHL.LHZ.1998.196.1998.196.mseed')\n",
 "else:\n",
 "    # download data from the IRIS database\n",
 "    client = Client(\"IRIS\")    # client specification\n",
 "    t1 = UTCDateTime(\"2004-10-20T00:00:00.230799Z\")    # start UTC date/time\n",
 "    t2 = t1 + (num_of_days * 86400)                    # end UTC date/time\n",
 "    stn = client.get_waveforms(network=\"CI\", station=\"MLAC\", location=\"*\", channel=\"*\",\n",
 "                               starttime=t1, endtime=t2)    # get data for MLAC\n",
 "    stn += client.get_waveforms(network=\"CI\", station=\"PHL\", location=\"*\", channel=\"*\",\n",
 "                                starttime=t1, endtime=t2)   # get data for PHL and add it to the previous stream" ] },
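{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [
 "As a quick (optional) sanity check, you can print the stream: it should now contain a vertical-component (LHZ) trace for each of the two stations, MLAC and PHL." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# quick check of the stream contents: one LHZ trace per station is expected\n",
 "print(stn)" ] },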
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [
 "### 2. Preprocess noise ###\n",
 "***Preprocessing 1***\n",
 "* To be sure to keep a 'clean' original stream, first **copy** the noise stream with [st.copy()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.copy.html). The copied stream is the one you will work with from now on. \n",
 "\n",
 "* In order to test the preprocessing without it taking too long, it is also useful to first **trim** this copied noise stream to just one or a few days. This can be done with [st.trim()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.trim.html), after defining your start and end time. \n",
 "\n",
 "Many processing functionalities are included in ObsPy. For example, you can remove any (linear) trends with [st.detrend()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.detrend.html), and taper the edges with [st.taper()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.taper.html). \n",
 "Different types of filters are also available in [st.filter()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.filter.html). \n",
 "\n",
 "* First **detrend** the data. \n",
 "* Next, apply a **bandpass filter** to select the frequencies with the most noise energy. The secondary microseismic peak lies roughly between 0.1 and 0.2 Hz, the primary microseismic peak between 0.05 and 0.1 Hz. Make sure to use a zero-phase filter! *(specify the argument zerophase=True)*" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "subslide" } }, "outputs": [], "source": [
 "# Preprocessing 1\n",
 "\n",
 "stp = stn.copy()              # copy stream\n",
 "t = stp[0].stats.starttime\n",
 "stp.trim(t, t + 4 * 86400)    # shorten stream for quicker processing\n",
 "\n",
 "stp.detrend('linear')                                              # remove linear trends\n",
 "stp.taper(max_percentage=0.05, type='cosine')                      # taper the edges\n",
 "stp.filter('bandpass', freqmin=0.1, freqmax=0.2, zerophase=True)   # filter all traces in the stream" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "***Preprocessing 2***\n",
 "\n",
 "Some additional useful processing **functions** are provided below.\n",
 "\n",
 "* For each trace in the stream, apply **spectral whitening** on the frequency range you chose before (either [0.1, 0.2] Hz or [0.05, 0.1] Hz), using the function **``whiten``**. \n",
 "\n",
 "* For the **time normalization**, the simplest option is to use the one-bit normalization provided in the function **``normalize``**. \n",
 "\n",
 "* *Optional: play around with different normalization options, such as clipping to a certain number of standard deviations, or using the running absolute mean normalization (see the short example after the function definitions below).*" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "A brief *description of the individual* **functions** (see the code cells below) is as follows:\n",
 "\n",
 "1) **whiten**:\n",
 "\n",
 "    spectral whitening of trace `tr` using a cosine tapered boxcar between `freqmin` and `freqmax`\n",
 "    (courtesy Gaia Soldati & Licia Faenza, INGV)\n",
 "\n",
 "2) **normalize**:\n",
 "\n",
 "    temporal normalization of trace `tr`; `norm_method` selects one-bit normalization (`1bit`), clipping (`clipping`), iterative clipping (`clipping_iter`) or running-absolute-mean normalization (`ramn`, which also needs a window length `norm_win` in seconds)\n",
 "    " ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0, 3, 46, 57, 93, 130, 150 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "def whiten(tr, freqmin, freqmax):\n",
 "\n",
 "    nsamp = tr.stats.sampling_rate\n",
 "\n",
 "    n = len(tr.data)\n",
 "    if n == 1:\n",
 "        return tr\n",
 "\n",
 "    frange = float(freqmax) - float(freqmin)\n",
 "    nsmo = int(np.fix(min(0.01, 0.5 * (frange)) * float(n) / nsamp))\n",
 "    f = np.arange(n) * nsamp / (n - 1.)\n",
 "    JJ = ((f > float(freqmin)) & (f < float(freqmax))).nonzero()[0]\n",
 "\n",
 "    # signal FFT\n",
 "    FFTs = np.fft.fft(tr.data)\n",
 "    FFTsW = np.zeros(n) + 1j * np.zeros(n)\n",
 "\n",
 "    # apodization to the left with cos^2 (to smooth the discontinuities)\n",
 "    smo1 = np.cos(np.linspace(np.pi / 2., np.pi, nsmo + 1)) ** 2\n",
 "    FFTsW[JJ[0]:JJ[0] + nsmo + 1] = smo1 * np.exp(1j * np.angle(FFTs[JJ[0]:JJ[0] + nsmo + 1]))\n",
 "\n",
 "    # boxcar in between: unit amplitude, original phase\n",
 "    FFTsW[JJ[0] + nsmo + 1:JJ[-1] - nsmo] = np.exp(1j * np.angle(FFTs[JJ[0] + nsmo + 1:JJ[-1] - nsmo]))\n",
 "\n",
 "    # apodization to the right with cos^2 (to smooth the discontinuities)\n",
 "    smo2 = np.cos(np.linspace(0., np.pi / 2., nsmo + 1)) ** 2\n",
 "    FFTsW[JJ[-1] - nsmo:JJ[-1] + 1] = smo2 * np.exp(1j * np.angle(FFTs[JJ[-1] - nsmo:JJ[-1] + 1]))\n",
 "\n",
 "    # back to the time domain; only positive frequencies were kept, so take twice the real part\n",
 "    whitedata = 2. * np.fft.ifft(FFTsW).real\n",
 "    tr.data = np.require(whitedata, dtype='float32')\n",
 "\n",
 "    return tr\n",
 "\n",
 "\n",
 "def normalize(tr, clip_factor=6, clip_weight=10, norm_win=None, norm_method=\"1bit\"):\n",
 "\n",
 "    if norm_method == 'clipping':\n",
 "        lim = clip_factor * np.std(tr.data)\n",
 "        tr.data[tr.data > lim] = lim\n",
 "        tr.data[tr.data < -lim] = -lim\n",
 "\n",
 "    elif norm_method == \"clipping_iter\":\n",
 "        lim = clip_factor * np.std(np.abs(tr.data))\n",
 "\n",
 "        # as long as there are still values above the waterlevel, scale them down by clip_weight\n",
 "        while len(tr.data[np.abs(tr.data) > lim]) > 0:\n",
 "            tr.data[tr.data > lim] /= clip_weight\n",
 "            tr.data[tr.data < -lim] /= clip_weight\n",
 "\n",
 "    elif norm_method == 'ramn':\n",
 "        lwin = int(tr.stats.sampling_rate * norm_win)\n",
 "        st = 0       # starting point\n",
 "        N = lwin     # ending point\n",
 "\n",
 "        while N < tr.stats.npts:\n",
 "            win = tr.data[st:N]\n",
 "            w = np.mean(np.abs(win)) / (2. * lwin + 1)\n",
 "\n",
 "            # weight center of window\n",
 "            tr.data[st + lwin // 2] /= w\n",
 "\n",
 "            # shift window\n",
 "            st += 1\n",
 "            N += 1\n",
 "\n",
 "        # taper edges\n",
 "        taper = get_window(tr.stats.npts)\n",
 "        tr.data *= taper\n",
 "\n",
 "    elif norm_method == \"1bit\":\n",
 "        tr.data = np.sign(tr.data)\n",
 "        tr.data = np.float32(tr.data)\n",
 "\n",
 "    return tr" ] },
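{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "If you would like to try the alternative normalization options mentioned above, they can be applied to a copy of a trace as in the following sketch. The parameter values (`clip_factor=3`, `norm_win=10`) are arbitrary choices for illustration, not recommended settings." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# examples of the alternative time-normalization options (parameter values are only illustrative)\n",
 "tr_clip = normalize(stp[0].copy(), norm_method='clipping', clip_factor=3)\n",
 "\n",
 "# the running-absolute-mean option also needs a window length (in s) and uses get_window(),\n",
 "# which is defined a few cells further down -- run that cell first, then uncomment:\n",
 "# tr_ramn = normalize(stp[0].copy(), norm_method='ramn', norm_win=10)" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "The cross-correlation and plotting helpers used in the rest of the notebook (**``correlateNoise``**, **``plotStack``** and **``plotXcorrEvent``**) are defined below as minimal sketches of the interfaces described later on: the two traces are cut into slices of ``corrwin`` seconds, each pair of slices is correlated with ``np.correlate``, and the individual correlations are averaged into a stack. The small helper ``lagtimes`` (not part of the interface) just builds the lag-time axis. The plotting routines simply display the stack, and the event record, within ``maxlag`` seconds; they are deliberately kept simple rather than tuned to reproduce the published figure exactly." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "def correlateNoise(st, stations, corrwin):\n",
 "    # cross-correlate the noise recorded at two stations, slice by slice\n",
 "    #   st       : stream containing the preprocessed noise of both stations\n",
 "    #   stations : list with the two station names, e.g. ['MLAC', 'PHL']\n",
 "    #   corrwin  : slice length in seconds\n",
 "    #   returns  : array with one correlation per slice, and their average (the stack)\n",
 "    tr1 = st.select(station=stations[0])[0]\n",
 "    tr2 = st.select(station=stations[1])[0]\n",
 "    fs = tr1.stats.sampling_rate\n",
 "    nwin = int(corrwin * fs)                   # samples per slice\n",
 "\n",
 "    xcorr = []\n",
 "    nslices = int(min(tr1.stats.npts, tr2.stats.npts) / nwin)\n",
 "    for k in range(nslices):\n",
 "        d1 = tr1.data[k * nwin:(k + 1) * nwin]\n",
 "        d2 = tr2.data[k * nwin:(k + 1) * nwin]\n",
 "        # full cross-correlation of the two slices (lags from about -corrwin to +corrwin)\n",
 "        xcorr.append(np.correlate(d1, d2, mode='full'))\n",
 "\n",
 "    xcorr = np.array(xcorr)\n",
 "    stack = xcorr.mean(axis=0)                 # linear stack over all slices\n",
 "    return xcorr, stack\n",
 "\n",
 "\n",
 "def lagtimes(stack, fs):\n",
 "    # lag-time axis (in s) for a 'full' correlation of odd length 2*nwin - 1\n",
 "    half = len(stack) // 2\n",
 "    return np.arange(-half, half + 1) / float(fs)\n",
 "\n",
 "\n",
 "def plotStack(st, stack, maxlag):\n",
 "    # plot the stacked cross-correlation between -maxlag and +maxlag seconds\n",
 "    lags = lagtimes(stack, st[0].stats.sampling_rate)\n",
 "    sel = np.abs(lags) <= maxlag\n",
 "    plt.figure()\n",
 "    plt.plot(lags[sel], stack[sel], 'k')\n",
 "    plt.xlabel('lag time [s]')\n",
 "    plt.ylabel('stacked cross-correlation')\n",
 "    plt.title('noise correlation %s - %s' % (st[0].stats.station, st[1].stats.station))\n",
 "    plt.show()\n",
 "\n",
 "\n",
 "def plotXcorrEvent(ste, stn, stack, maxlag):\n",
 "    # compare the event recorded at PHL (top) with the noise-correlation stack (bottom),\n",
 "    # both normalized to their maximum amplitude\n",
 "    lags = lagtimes(stack, stn[0].stats.sampling_rate)\n",
 "    sel = np.abs(lags) <= maxlag\n",
 "\n",
 "    ev = ste[0]\n",
 "    t_ev = np.arange(ev.stats.npts) / ev.stats.sampling_rate\n",
 "    sel_ev = t_ev <= maxlag\n",
 "\n",
 "    plt.figure()\n",
 "    plt.subplot(2, 1, 1)\n",
 "    plt.plot(t_ev[sel_ev], ev.data[sel_ev] / np.abs(ev.data[sel_ev]).max(), 'k')\n",
 "    plt.ylabel('event at %s' % ev.stats.station)\n",
 "    plt.subplot(2, 1, 2)\n",
 "    plt.plot(lags[sel], stack[sel] / np.abs(stack[sel]).max(), 'r')\n",
 "    plt.xlabel('time / lag time [s]')\n",
 "    plt.ylabel('noise correlation')\n",
 "    plt.show()" ] },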
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "3) **get_window**:\n",
 "\n",
 "    returns a Tukey window of length N\n",
 "    N: length of the window\n",
 "    alpha: alpha parameter of the Tukey window\n",
 "    0 -> rectangular window\n",
 "    1 -> cosine taper\n",
 "    returns: window (np.array)\n",
 "\n",
 "See also the documentation of [scipy.signal.get_window](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.get_window.html)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0, 3, 46, 57, 93, 130, 150 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "def get_window(N, alpha=0.2):\n",
 "\n",
 "    window = np.ones(N)\n",
 "    x = np.linspace(-1., 1., N)\n",
 "    ind1 = (abs(x) > 1 - alpha) & (x < 0)\n",
 "    ind2 = (abs(x) > 1 - alpha) & (x > 0)\n",
 "    window[ind1] = 0.5 * (1 - np.cos(np.pi * (x[ind1] + 1) / alpha))\n",
 "    window[ind2] = 0.5 * (1 - np.cos(np.pi * (x[ind2] - 1) / alpha))\n",
 "    return window" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [
 "### Actual preprocessing happens here -- this can take a while!" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# Preprocessing 2\n",
 "st = stp.copy()    # copy stream\n",
 "\n",
 "for tr in st:\n",
 "    tr = normalize(tr, norm_method=\"1bit\")\n",
 "    tr = whiten(tr, 0.1, 0.2)\n",
 "print('done!')" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "#### Cross-correlation ####\n",
 "Once you're happy with the preprocessing, you can calculate the **cross-correlation** using the **``correlateNoise``** function. The cross-correlations are computed in slices of a few hours each (specified by *corrwin*). \n",
 "\n",
 "**For the correlateNoise function**\n",
 "* input: stream, list of stations (here: ['MLAC', 'PHL']), slice length in seconds\n",
 "* output: all individual correlations, stack\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# Cross-correlate\n",
 "xcorr, stack = correlateNoise(st, ['MLAC', 'PHL'], 7200)" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "The resulting stack can be **plotted** with the **``plotStack``** function. Since it doesn't make much sense to look at a 2-hour-long correlation signal, you can decide to plot only the central part by specifying a ``maxlag`` (in seconds). " ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# Plotting\n",
 "\n",
 "plotStack(st, stack, 400)" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [
 "If you're only working with a few days of noise (after trimming), this plot probably doesn't look very nice. You could go back to the code block named 'Preprocessing 1' and keep a longer noise record (10 days already works quite well). " ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "slide" } }, "source": [
 "#### Compare to event trace ####\n",
 "In 1998, an M = 5.1 event occurred next to station MLAC. This event was recorded at PHL, and we will now read these data.\n",
 "\n",
 "* **read** the event data into a separate stream" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "ste = read('https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/event.CI.PHL.LHZ.1998.196.1998.196.mseed')\n",
 "# if the data is stored locally, uncomment the following line and comment the line above:\n",
 "# ste = read('./event.CI.PHL.LHZ.1998.196.1998.196.mseed')" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "subslide" } }, "source": [
 "#### Preprocess event ####\n",
 "\n",
 "The event signal should be processed in a similar way to the noise. \n",
 "\n",
 "* **detrend** in the same way as before\n",
 "* **bandpass filter** for the same frequencies as chosen above\n",
 "* apply **spectral whitening** to each trace in the event stream, to ensure its spectrum is comparable to that of the noise. " ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# Preprocessing\n",
 "\n",
 "ste.detrend('linear')\n",
 "ste.filter('bandpass', freqmin=0.1, freqmax=0.2, zerophase=True)\n",
 "for tr in ste:\n",
 "    tr = whiten(tr, 0.1, 0.2)" ] },
{ "cell_type": "markdown", "metadata": { "slideshow": { "slide_type": "fragment" } }, "source": [
 "#### Plot! ####\n",
 "\n",
 "A plotting function, **``plotXcorrEvent``**, is provided to plot both signals alongside each other. \n",
 "\n",
 "* input: event stream, noise stream, stack, maxlag" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "code_folding": [ 0 ], "slideshow": { "slide_type": "fragment" } }, "outputs": [], "source": [
 "# Plotting\n",
 "\n",
 "plotXcorrEvent(ste, stn, stack, 400)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ],
 "metadata": { "celltoolbar": "Slideshow", "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 2 }