diff --git a/ObsPy/00_Introduction.ipynb b/ObsPy/00_Introduction.ipynb index ac58067..fc3e850 100644 --- a/ObsPy/00_Introduction.ipynb +++ b/ObsPy/00_Introduction.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "
\n", + "
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", @@ -61,7 +61,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.3" + "version": "3.10.14" } }, "nbformat": 4, diff --git a/ObsPy/01_File_Formats.ipynb b/ObsPy/01_File_Formats.ipynb index 60d8620..cccc4e0 100644 --- a/ObsPy/01_File_Formats.ipynb +++ b/ObsPy/01_File_Formats.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Introduction to File Formats and read/write in ObsPy
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["This is oftentimes not taught, but fairly important to understand, at least at a basic level. This also teaches you how to work with these in ObsPy."]}, {"cell_type": "markdown", "metadata": {}, "source": ["**This notebook aims to give a quick introductions to ObsPy's core functions and classes. Everything here will be repeated in more detail in later notebooks.**"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## SEED Identifiers\n", "\n", "According to the [SEED standard](www.fdsn.org/seed_manual/SEEDManual_V2.4.pdf), which is fairly well adopted, the following nomenclature is used to identify seismic receivers:\n", "\n", "* **Network code**: Identifies the network/owner of the data. Assigned by the FDSN and thus unique.\n", "* **Station code**: The station within a network. *NOT UNIQUE IN PRACTICE!* Always use together with a network code!\n", "* **Location ID**: Identifies different data streams within one station. Commonly used to logically separate multiple instruments at a single station.\n", "* **Channel codes**: Three character code: 1) Band and approximate sampling rate, 2) The type of instrument, 3) The orientation\n", "\n", "This results in full ids of the form **NET.STA.LOC.CHAN**, e.g. 
**IV.PRMA..HHE**.\n", "\n", "\n", "---\n", "\n", "\n", "In seismology we generally distinguish between three separate types of data:\n", "\n", "1. **Waveform Data** - The actual waveforms as time series.\n", "2. **Station Data** - Information about the stations' operators, geographical locations, and the instrument's responses.\n", "3. **Event Data** - Information about earthquakes.\n", "\n", "Some formats have elements of two or more of these."]}, {"cell_type": "markdown", "metadata": {}, "source": ["## Waveform Data\n", "\n", "![stream](images/Stream_Trace.svg)\n", "\n", "There are a myriad of waveform data formats but in Europe and the USA two formats dominate: **MiniSEED** and **SAC**\n", "\n", "\n", "### MiniSEED\n", "\n", "* This is what you get from datacenters and also what they store, thus the original data\n", "* Can store integers and single/double precision floats\n", "* Integer data (e.g. counts from a digitizer) are heavily compressed: a factor of 3-5 depending on the data\n", "* Can deal with gaps and overlaps\n", "* Multiple components per file\n", "* Contains only the really necessary parameters and some information for the data providers"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["import obspy\n", "\n", "# ObsPy automatically detects the file format.\n", "st = obspy.read(\"data/example.mseed\")\n", "print(st)\n", "\n", "# Fileformat specific information is stored here.\n", "print(st[0].stats.mseed)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st.plot()"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# This is a quick interlude to teach you basics about how to work\n", "# with Stream/Trace objects.\n", "\n", "# Most operations work in-place, e.g. they modify the existing\n", "# objects. 
We'll create a copy here.\n", "st2 = st.copy()\n", "\n", "# To use only part of a Stream, use the select() function.\n", "print(st2.select(component=\"Z\"))\n", "\n", "# Stream objects behave like a list of Trace objects.\n", "tr = st2[0]\n", "\n", "tr.plot()\n", "\n", "# Some basic processing. Please note that these modify the\n", "# existing object.\n", "tr.detrend(\"linear\")\n", "tr.taper(type=\"hann\", max_percentage=0.05)\n", "tr.filter(\"lowpass\", freq=0.5)\n", "\n", "tr.plot()"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# You can write it again by simply specifing the format.\n", "st.write(\"temp.mseed\", format=\"mseed\")"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### SAC\n", "\n", "* Custom format of the `sac` code.\n", "* Simple header and single precision floating point data.\n", "* Only a single component per file and no concept of gaps/overlaps.\n", "* Used a lot due to `sac` being very popular and the additional basic information that can be stored in the header."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st = obspy.read(\"data/example.sac\")\n", "print(st)\n", "st[0].stats.sac.__dict__"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st.plot()"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# You can once again write it with the write() method.\n", "st.write(\"temp.sac\", format=\"sac\")"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## Station Data\n", "\n", "![inv](images/Inventory.svg)\n", "\n", "Station data contains information about the organziation that collections the data, geographical information, as well as the instrument response. It mainly comes in three formats:\n", "\n", "* `(dataless) SEED`: Very complete but pretty complex and binary. Still used a lot, e.g. 
for the Arclink protocol\n", "* `RESP`: A strict subset of SEED. ASCII based. Contains **ONLY** the response.\n", "* `StationXML`: Essentially like SEED but cleaner and based on XML. Most modern format and what the datacenters nowadays serve. **Use this if you can.**\n", "\n", "\n", "ObsPy can work with all of them but today we will focus on StationXML."]}, {"cell_type": "markdown", "metadata": {}, "source": ["They are XML files:"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["!head data/all_stations.xml"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["import obspy\n", "\n", "# Use the read_inventory function to open them.\n", "inv = obspy.read_inventory(\"data/all_stations.xml\")\n", "print(inv)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["You can see that they can contain an arbirary number of networks, stations, and channels."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# ObsPy is also able to plot a map of them.\n", "inv.plot(projection=\"local\");"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# As well as a plot the instrument response.\n", "inv.select(network=\"IV\", station=\"SALO\", channel=\"BH?\").plot_response(0.001);"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# Coordinates of single channels can also be extraced. 
This function\n", "# also takes a datetime arguments to extract information at different\n", "# points in time.\n", "inv.get_coordinates(\"IV.SALO..BHZ\")"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# And it can naturally be written again, also in modified state.\n", "inv.select(channel=\"BHZ\").write(\"temp.xml\", format=\"stationxml\")"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## Event Data\n", "\n", "![events](./images/Event.svg)\n", "\n", "Event data is essentially served in either very simple formats like NDK or the CMTSOLUTION format used by many waveform solvers:"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["!cat data/GCMT_2014_04_01__Mw_8_1"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Datacenters on the hand offer **QuakeML** files, which are surprisingly complex in structure but can store complex relations."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# Read QuakeML files with the read_events() function.\n", "cat = obspy.read_events(\"data/GCMT_2014_04_01__Mw_8_1.xml\")\n", "print(cat)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(cat[0])"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["cat.plot(projection=\"ortho\");"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# Once again they can be written with the write() function.\n", "cat.write(\"temp_quake.xml\", format=\"quakeml\")"]}, {"cell_type": "markdown", "metadata": {}, "source": ["To show off some more things, I added a file containing all events from 2014 in the GCMT catalog."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["import obspy\n", "\n", "cat = obspy.read_events(\"data/2014.ndk\")\n", "\n", "print(cat)"]}, {"cell_type": "code", 
"execution_count": null, "metadata": {}, "outputs": [], "source": ["cat.plot();"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["cat.filter(\"depth > 100000\", \"magnitude > 7\")"]}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Introduction to File Formats and read/write in ObsPy
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"markdown","metadata":{},"source":["This is oftentimes not taught, but fairly important to understand, at least at a basic level. This also teaches you how to work with these in ObsPy."]},{"cell_type":"markdown","metadata":{},"source":["**This notebook aims to give a quick introductions to ObsPy's core functions and classes. Everything here will be repeated in more detail in later notebooks.**"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["## SEED Identifiers\n","\n","According to the [SEED standard](www.fdsn.org/seed_manual/SEEDManual_V2.4.pdf), which is fairly well adopted, the following nomenclature is used to identify seismic receivers:\n","\n","* **Network code**: Identifies the network/owner of the data. Assigned by the FDSN and thus unique.\n","* **Station code**: The station within a network. *NOT UNIQUE IN PRACTICE!* Always use together with a network code!\n","* **Location ID**: Identifies different data streams within one station. Commonly used to logically separate multiple instruments at a single station.\n","* **Channel codes**: Three character code: 1) Band and approximate sampling rate, 2) The type of instrument, 3) The orientation\n","\n","This results in full ids of the form **NET.STA.LOC.CHAN**, e.g. **IV.PRMA..HHE**.\n","\n","\n","---\n","\n","\n","In seismology we generally distinguish between three separate types of data:\n","\n","1. 
**Waveform Data** - The actual waveforms as time series.\n","2. **Station Data** - Information about the stations' operators, geographical locations, and the instrument's responses.\n","3. **Event Data** - Information about earthquakes.\n","\n","Some formats have elements of two or more of these."]},{"cell_type":"markdown","metadata":{},"source":["## Waveform Data\n","\n","![stream](images/Stream_Trace.svg)\n","\n","There are a myriad of waveform data formats, but in Europe and the USA two formats dominate: **MiniSEED** and **SAC**.\n","\n","\n","### MiniSEED\n","\n","* This is what you get from datacenters and also what they store, thus the original data\n","* Can store integers and single/double precision floats\n","* Integer data (e.g. counts from a digitizer) are heavily compressed: a factor of 3-5 depending on the data\n","* Can deal with gaps and overlaps\n","* Multiple components per file\n","* Contains only the really necessary parameters and some information for the data providers"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["import obspy\n","\n","# ObsPy automatically detects the file format.\n","st = obspy.read(\"data/example.mseed\")\n","print(st)\n","\n","# File-format-specific information is stored here.\n","print(st[0].stats.mseed)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st.plot()"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# This is a quick interlude to teach you basics about how to work\n","# with Stream/Trace objects.\n","\n","# Most operations work in-place, i.e. they modify the existing\n","# objects. We'll create a copy here.\n","st2 = st.copy()\n","\n","# To use only part of a Stream, use the select() function.\n","print(st2.select(component=\"Z\"))\n","\n","# Stream objects behave like a list of Trace objects.\n","tr = st2[0]\n","\n","tr.plot()\n","\n","# Some basic processing. 
Please note that these modify the\n","# existing object.\n","tr.detrend(\"linear\")\n","tr.taper(type=\"hann\", max_percentage=0.05)\n","tr.filter(\"lowpass\", freq=0.5)\n","\n","tr.plot()"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# You can write it again by simply specifying the format.\n","st.write(\"temp.mseed\", format=\"mseed\")"]},{"cell_type":"markdown","metadata":{},"source":["### SAC\n","\n","* Custom format of the `sac` code.\n","* Simple header and single precision floating point data.\n","* Only a single component per file and no concept of gaps/overlaps.\n","* Used a lot due to `sac` being very popular and the additional basic information that can be stored in the header."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st = obspy.read(\"data/example.sac\")\n","print(st)\n","st[0].stats.sac.__dict__"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st.plot()"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# You can once again write it with the write() method.\n","st.write(\"temp.sac\", format=\"sac\")"]},{"cell_type":"markdown","metadata":{},"source":["## Station Data\n","\n","![inv](images/Inventory.svg)\n","\n","Station data contains information about the organization that collects the data, geographical information, as well as the instrument response. It mainly comes in three formats:\n","\n","* `(dataless) SEED`: Very complete but pretty complex and binary. Still used a lot, e.g. for the Arclink protocol.\n","* `RESP`: A strict subset of SEED. ASCII based. Contains **ONLY** the response.\n","* `StationXML`: Essentially like SEED but cleaner and based on XML. Most modern format and what the datacenters nowadays serve. 
**Use this if you can.**\n","\n","\n","ObsPy can work with all of them, but today we will focus on StationXML."]},{"cell_type":"markdown","metadata":{},"source":["They are XML files:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!head data/all_stations.xml"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["import obspy\n","\n","# Use the read_inventory function to open them.\n","inv = obspy.read_inventory(\"data/all_stations.xml\")\n","print(inv)"]},{"cell_type":"markdown","metadata":{},"source":["You can see that they can contain an arbitrary number of networks, stations, and channels."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# ObsPy is also able to plot a map of them.\n","inv.plot(projection=\"local\");"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# As well as plot the instrument response.\n","inv.select(network=\"IV\", station=\"SALO\", channel=\"BH?\").plot_response(0.001);"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Coordinates of single channels can also be extracted. 
This function\n","# also takes a datetime argument to extract information at different\n","# points in time.\n","inv.get_coordinates(\"IV.SALO..BHZ\")"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# And it can naturally be written again, also in modified state.\n","inv.select(channel=\"BHZ\").write(\"temp.xml\", format=\"stationxml\")"]},{"cell_type":"markdown","metadata":{},"source":["## Event Data\n","\n","![events](./images/Event.svg)\n","\n","Event data is typically served either in very simple formats like NDK or in the CMTSOLUTION format used by many waveform solvers:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!cat data/GCMT_2014_04_01__Mw_8_1"]},{"cell_type":"markdown","metadata":{},"source":["Datacenters, on the other hand, offer **QuakeML** files, which are surprisingly complex in structure but can store complex relations."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Read QuakeML files with the read_events() function.\n","cat = obspy.read_events(\"data/GCMT_2014_04_01__Mw_8_1.xml\")\n","print(cat)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(cat[0])"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["cat.plot(projection=\"ortho\");"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Once again they can be written with the write() function.\n","cat.write(\"temp_quake.xml\", format=\"quakeml\")"]},{"cell_type":"markdown","metadata":{},"source":["To show off some more things, I added a file containing all events from 2014 in the GCMT catalog."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["import obspy\n","\n","cat = 
obspy.read_events(\"data/2014.ndk\")\n","\n","print(cat)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["cat.plot();"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["cat.filter(\"depth > 100000\", \"magnitude > 7\")"]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2} diff --git a/ObsPy/02_UTCDateTime.ipynb b/ObsPy/02_UTCDateTime.ipynb index bb64eb7..9ec3dd7 100644 --- a/ObsPy/02_UTCDateTime.ipynb +++ b/ObsPy/02_UTCDateTime.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Handling Time
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["This is a bit dry but not very difficult and important to know. It is used everywhere in ObsPy!\n", "\n", "\n", "* All absolute time values are consistently handled with this class\n", "* Based on a double precision POSIX timestamp for accuracy\n", "* Timezone can be specified at initialization (if necessary)\n", "* In Coordinated Universal Time (UTC) so no need to deal with timezones, daylight savings, ...\n", "\n", "---"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["---\n", "\n", "## Features of **`UTCDateTime`**\n", "\n", "#### Initialization"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import UTCDateTime\n", "\n", "print(UTCDateTime(\"2011-03-11T05:46:23.2\")) # mostly time strings defined by ISO standard\n", "print(UTCDateTime(\"2011-03-11T14:46:23.2+09:00\")) # non-UTC timezone input\n", "print(UTCDateTime(2011, 3, 11, 5, 46, 23, 2))\n", "print(UTCDateTime(1299822383.2))"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# Current time can be initialized by leaving out any arguments\n", "print(UTCDateTime())"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Attribute Access"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["time = 
UTCDateTime(\"2011-03-11T05:46:23.200000Z\")\n", "print(time.year)\n", "print(time.julday)\n", "print(time.timestamp)\n", "print(time.weekday)\n", "# try time."]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Handling time differences\n", "\n", "* \"**`+`**/**`-`**\" defined to add seconds to an **`UTCDateTime`** object\n", "* \"**`-`**\" defined to get time difference of two **`UTCDateTime`** objects"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["time = UTCDateTime(\"2011-03-11T05:46:23.200000Z\")\n", "print(time)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# one hour later\n", "print(time + 3600)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["time2 = UTCDateTime(2012, 1, 1)\n", "print(time2 - time)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercises\n", "\n", "Calculate the number of days passed since the Tohoku main shock (the timestamp used above)."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Make a list of 10 UTCDateTime objects, starting today at 10:00 with a spacing of 90 minutes."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Below is a list of strings with origin times of magnitude 8+ earthquakes since 2000 (fetched from IRIS). Assemble a list of interevent times in days. 
Use matplotlib to display a histogram."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["times = [\"2001-06-23T20:33:09\",\n", " \"2003-09-25T19:50:07\",\n", " \"2004-12-23T14:59:00\",\n", " \"2004-12-26T00:58:52\",\n", " \"2005-03-28T16:09:35\",\n", " \"2006-06-01T18:57:02\",\n", " \"2006-06-05T00:50:31\",\n", " \"2006-11-15T11:14:14\",\n", " \"2007-01-13T04:23:23\",\n", " \"2007-04-01T20:39:56\",\n", " \"2007-08-15T23:40:58\",\n", " \"2007-09-12T11:10:26\",\n", " \"2009-09-29T17:48:11\",\n", " \"2010-02-27T06:34:13\",\n", " \"2011-03-11T05:46:23\",\n", " \"2012-04-11T08:38:36\",\n", " \"2012-04-11T10:43:10\",\n", " \"2013-05-24T05:44:48\"]"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Handling Time
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"markdown","metadata":{},"source":["This is a bit dry but not very difficult and important to know. It is used everywhere in ObsPy!\n","\n","\n","* All absolute time values are consistently handled with this class\n","* Based on a double precision POSIX timestamp for accuracy\n","* Timezone can be specified at initialization (if necessary)\n","* In Coordinated Universal Time (UTC) so no need to deal with timezones, daylight savings, ...\n","\n","---"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["---\n","\n","## Features of **`UTCDateTime`**\n","\n","#### Initialization"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import UTCDateTime\n","\n","print(UTCDateTime(\"2011-03-11T05:46:23.2\")) # mostly time strings defined by ISO standard\n","print(UTCDateTime(\"2011-03-11T14:46:23.2+09:00\")) # non-UTC timezone input\n","print(UTCDateTime(2011, 3, 11, 5, 46, 23, 2))\n","print(UTCDateTime(1299822383.2))"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# Current time can be initialized by leaving out any arguments\n","print(UTCDateTime())"]},{"cell_type":"markdown","metadata":{},"source":["#### Attribute Access"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["time = 
UTCDateTime(\"2011-03-11T05:46:23.200000Z\")\n","print(time.year)\n","print(time.julday)\n","print(time.timestamp)\n","print(time.weekday)\n","# try time."]},{"cell_type":"markdown","metadata":{},"source":["#### Handling time differences\n","\n","* \"**`+`**/**`-`**\" defined to add seconds to an **`UTCDateTime`** object\n","* \"**`-`**\" defined to get time difference of two **`UTCDateTime`** objects"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["time = UTCDateTime(\"2011-03-11T05:46:23.200000Z\")\n","print(time)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# one hour later\n","print(time + 3600)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["time2 = UTCDateTime(2012, 1, 1)\n","print(time2 - time)"]},{"cell_type":"markdown","metadata":{},"source":["### Exercises\n","\n","Calculate the number of days passed since the Tohoku main shock (the timestamp used above)."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Make a list of 10 UTCDateTime objects, starting today at 10:00 with a spacing of 90 minutes."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Below is a list of strings with origin times of magnitude 8+ earthquakes since 2000 (fetched from IRIS). Assemble a list of interevent times in days. 
Use matplotlib to display a histogram."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["times = [\"2001-06-23T20:33:09\",\n"," \"2003-09-25T19:50:07\",\n"," \"2004-12-23T14:59:00\",\n"," \"2004-12-26T00:58:52\",\n"," \"2005-03-28T16:09:35\",\n"," \"2006-06-01T18:57:02\",\n"," \"2006-06-05T00:50:31\",\n"," \"2006-11-15T11:14:14\",\n"," \"2007-01-13T04:23:23\",\n"," \"2007-04-01T20:39:56\",\n"," \"2007-08-15T23:40:58\",\n"," \"2007-09-12T11:10:26\",\n"," \"2009-09-29T17:48:11\",\n"," \"2010-02-27T06:34:13\",\n"," \"2011-03-11T05:46:23\",\n"," \"2012-04-11T08:38:36\",\n"," \"2012-04-11T10:43:10\",\n"," \"2013-05-24T05:44:48\"]"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2} diff --git a/ObsPy/03_waveform_data.ipynb b/ObsPy/03_waveform_data.ipynb index 86a936f..bd048fa 100644 --- a/ObsPy/03_waveform_data.ipynb +++ b/ObsPy/03_waveform_data.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Handling Waveform Data
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["\n", "\n", "* read waveform data is returned as a **`Stream`** object."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import read\n", "st = read(\"./data/waveform_PFO.mseed\", format=\"mseed\")\n", "print(st)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- UNIX wildcards can be used to read multiple files simultaneously\n", "- automatic file format detection, no need to worry about file formats\n", "\n", " - currently supported: **mseed, sac, segy, seg2, gse1/2, seisan, sh, datamark, css, wav, y, Q (keeps growing...)**\n", " - more file formats are included whenever a basic reading routine is provided (or e.g. 
sufficient documentation on data compression etc.)\n", " - custom user-specific file formats can be added (through plugin) to filetype autodiscovery in local ObsPy installation by user"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import read\n", "st = read(\"./data/waveform_*\")\n", "print(st)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- for MiniSEED files, only reading short portions of files without all of the file getting read into memory is supported (saves time and memory when working on large collections of big files)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import UTCDateTime\n", "\n", "t = UTCDateTime(\"2011-03-11T05:46:23.015400Z\")\n", "st = read(\"./data/waveform_*\", starttime=t + 10 * 60, endtime=t + 12 * 60)\n", "print(st)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### The Stream Object\n", "\n", " - A Stream object is a collection of Trace objects"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import read\n", "st = read(\"./data/waveform_PFO.mseed\")\n", "print(type(st))"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(st)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(st.traces)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(st[0])"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- More traces can be assembled using **`+`** operator (or using the `.append()` and `.extend()` methods)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st1 = read(\"./data/waveform_PFO.mseed\")\n", "st2 = read(\"./data/waveform_PFO_synthetics.mseed\")\n", "\n", "st = st1 + st2\n", "print(st)\n", "\n", "st3 = read(\"./data/waveform_BFO_BHE.sac\")\n", "\n", 
"st += st3\n", "print(st)"]}, {"cell_type": "markdown", "metadata": {}, "source": [" - convenient (and nicely readable) looping over traces"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["for tr in st:\n", " print(tr.id)"]}, {"cell_type": "markdown", "metadata": {}, "source": [" - Stream is useful for applying the same processing to a larger number of different waveforms or to group Traces for processing (e.g. three components of one station in one Stream)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### The Trace Object\n", "\n", "- a Trace object is a single, contiguous waveform data block (i.e. regularly spaced time series, no gaps)\n", "- a Trace object contains a limited amount of metadata in a dictionary-like object (as **`Trace.stats`**) that fully describes the time series by specifying..\n", " * recording location/instrument (network, station, location and channel code)\n", " * start time\n", " * sampling rate"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st = read(\"./data/waveform_PFO.mseed\")\n", "tr = st[0] # get the first Trace in the Stream\n", "print(tr)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr.stats)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- For custom applications it is sometimes necessary to directly manipulate the metadata of a Trace.\n", "- The metadata of the Trace will **stay consistent**, as all values are derived from the starttime, the data and the sampling rate and are **updated automatically**"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr.stats.delta, \"|\", tr.stats.endtime)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["tr.stats.sampling_rate = 5.0\n", "print(tr.stats.delta, \"|\", tr.stats.endtime)"]}, {"cell_type": "code", "execution_count": null, 
"metadata": {}, "outputs": [], "source": ["print(tr.stats.npts)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["tr.data = tr.data[:100]\n", "print(tr.stats.npts, \"|\", tr.stats.endtime)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- convenience methods make basic manipulations simple"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["tr = read(\"./data/waveform_PFO.mseed\")[0]\n", "tr.plot()"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr)\n", "tr.resample(sampling_rate=100.0)\n", "print(tr)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr)\n", "tr.trim(tr.stats.starttime + 12 * 60, tr.stats.starttime + 14 * 60)\n", "print(tr)\n", "tr.plot()"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["tr.detrend(\"linear\")\n", "tr.taper(max_percentage=0.05, type='cosine')\n", "tr.filter(\"lowpass\", freq=0.1)\n", "tr.plot()"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# try tr. for other methods defined for Trace\n", "tr.detrend?"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- Raw data available as a [**`numpy.ndarray`**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) (as **`Trace.data`**)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr.data[:20])"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- Data can be directly modified e.g. 
..\n", "\n", "..by doing arithmetic operations (fast, handled in C by NumPy)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr.data ** 2 + 0.5)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["..by using [**`numpy.ndarray`** builtin methods](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) (also done in C by NumPy)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(tr.data.max())\n", "print(tr.data.mean())\n", "print(tr.data.ptp())\n", "# try tr.data. for a list of numpy methods defined on ndarray"]}, {"cell_type": "markdown", "metadata": {}, "source": ["..by using **`numpy`** functions (also done in C by NumPy)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["import numpy as np\n", "print(np.abs(tr.data))\n", "# you can try np. but there is a lot in there\n", "# try np.a"]}, {"cell_type": "markdown", "metadata": {}, "source": ["..by feeding pointers to existing C/Fortran routines from inside Python!\n", "\n", "This is done internally in several places, e.g. for cross correlations, beamforming or in third-party filetype libraries like e.g. libmseed.\n", "\n", "- Trace objects can also be manually generated with data in a [**`numpy.ndarray`**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) (e.g. 
when needing to parse waveforms from non-standard ascii files)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import Trace\n", "\n", "x = np.random.randint(-100, 100, 500)\n", "tr = Trace(data=x)\n", "tr.stats.station = \"XYZ\"\n", "tr.stats.starttime = UTCDateTime()\n", "\n", "tr.plot()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- Stream objects can be assembled from manually generated Traces"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import Stream\n", "\n", "tr2 = Trace(data=np.random.randint(-300, 100, 1000))\n", "tr2.stats.starttime = UTCDateTime()\n", "tr2.stats.sampling_rate = 10.0\n", "st = Stream([tr, tr2])\n", "\n", "st.plot()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Builtin methods defined on **`Stream`** / **`Trace`**\n", "\n", "- Most methods that work on a Trace object also work on a Stream object. They are simply executed for every trace. 
[See ObsPy documentation for an overview of available methods](http://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.html) (or try **`st.`**).\n", " - **`st.filter()`** - Filter all attached traces.\n", " - **`st.trim()`** - Cut all traces.\n", " - **`st.resample()`** / **`st.decimate()`** - Change the sampling rate.\n", " - **`st.trigger()`** - Run triggering algorithms.\n", " - **`st.plot()`** / **`st.spectrogram()`** - Visualize the data.\n", " - **`st.attach_response()`**/**`st.remove_response()`**, **`st.simulate()`** - Instrument correction.\n", " - **`st.merge()`**, **`st.normalize()`**, **`st.detrend()`**, **`st.taper()`**, ...\n", "- A **`Stream`** object can also be exported to many formats, so ObsPy can be used to convert between different file formats."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st = read(\"./data/waveform_*.sac\")\n", "st.write(\"output_file.mseed\", format=\"MSEED\")\n", "!ls -l output_file*"]}, {"cell_type": "markdown", "metadata": {}, "source": [""]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Trace Exercises\n", " - Make a **`numpy.ndarray`** with zeros (e.g. use **`numpy.zeros()`**) and put an ideal pulse somewhere in it\n", " - Initialize a **`Trace`** object with your data array\n", " - Fill in some station information (e.g. 
network, station, ..)\n", " - Print trace summary and plot the trace\n", " - Change the sampling rate to 20 Hz\n", " - Change the starttime of the trace to the start time of this session\n", " - Print the trace summary and plot the trace again"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- Use **`tr.filter(...)`** and apply a lowpass filter with a corner frequency of 1 Hertz.\n", "- Display the preview plot; there are a few seconds of zeros that we can cut off."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- Use **`tr.trim(...)`** to remove some of the zeros at the start and at the end\n", "- show the preview plot again"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- Scale up the amplitudes of the trace by a factor of 500\n", "- Add standard normal Gaussian noise to the trace (use [**`np.random.randn()`**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html))\n", "- Display the preview plot again"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": 
"markdown", "metadata": {}, "source": ["#### Stream Exercises\n", "\n", "- Read all Tohoku example earthquake data into a stream object (\"./data/waveform\\_\\*\")\n", "- Print the stream summary"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- Use **`st.select()`** to only keep traces of station BFO in the stream. Show the preview plot."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- trim the data to a 10 minute time window around the first arrival (just roughly looking at the preview plot)\n", "- display the preview plot and spectrograms for the stream (with logarithmic frequency scale, use `wlen=50` for the spectrogram plot)"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- remove the linear trend from the data, apply a tapering and a lowpass at 0.1 Hertz\n", "- show the preview plot again"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at 
end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Handling Waveform Data
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Tobias Megies ([@megies](https://github.com/megies))\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["\n","\n","* read waveform data is returned as a **`Stream`** object."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read\n","st = read(\"./data/waveform_PFO.mseed\", format=\"mseed\")\n","print(st)"]},{"cell_type":"markdown","metadata":{},"source":["- UNIX wildcards can be used to read multiple files simultaneously\n","- automatic file format detection, no need to worry about file formats\n","\n"," - currently supported: **mseed, sac, segy, seg2, gse1/2, seisan, sh, datamark, css, wav, y, Q (keeps growing...)**\n"," - more file formats are included whenever a basic reading routine is provided (or e.g. 
sufficient documentation on data compression etc.)\n"," - custom user-specific file formats can be added (via plugins) to the file-type autodiscovery of a local ObsPy installation"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read\n","st = read(\"./data/waveform_*\")\n","print(st)"]},{"cell_type":"markdown","metadata":{},"source":["- for MiniSEED files, reading only short portions of a file is supported, without the whole file being read into memory (saves time and memory when working on large collections of big files)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import UTCDateTime\n","\n","t = UTCDateTime(\"2011-03-11T05:46:23.015400Z\")\n","st = read(\"./data/waveform_*\", starttime=t + 10 * 60, endtime=t + 12 * 60)\n","print(st)"]},{"cell_type":"markdown","metadata":{},"source":["#### The Stream Object\n","\n"," - A Stream object is a collection of Trace objects"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read\n","st = read(\"./data/waveform_PFO.mseed\")\n","print(type(st))"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(st)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(st.traces)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(st[0])"]},{"cell_type":"markdown","metadata":{},"source":["- More traces can be assembled using the **`+`** operator (or the `.append()` and `.extend()` methods)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st1 = read(\"./data/waveform_PFO.mseed\")\n","st2 = read(\"./data/waveform_PFO_synthetics.mseed\")\n","\n","st = st1 + st2\n","print(st)\n","\n","st3 = read(\"./data/waveform_BFO_BHE.sac\")\n","\n","st += st3\n","print(st)"]},{"cell_type":"markdown","metadata":{},"source":[" - convenient (and nicely 
readable) looping over traces"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["for tr in st:\n"," print(tr.id)"]},{"cell_type":"markdown","metadata":{},"source":[" - Stream is useful for applying the same processing to a larger number of different waveforms or to group Traces for processing (e.g. three components of one station in one Stream)"]},{"cell_type":"markdown","metadata":{},"source":["#### The Trace Object\n","\n","- a Trace object is a single, contiguous waveform data block (i.e. regularly spaced time series, no gaps)\n","- a Trace object contains a limited amount of metadata in a dictionary-like object (as **`Trace.stats`**) that fully describes the time series by specifying..\n"," * recording location/instrument (network, station, location and channel code)\n"," * start time\n"," * sampling rate"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st = read(\"./data/waveform_PFO.mseed\")\n","tr = st[0] # get the first Trace in the Stream\n","print(tr)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr.stats)"]},{"cell_type":"markdown","metadata":{},"source":["- For custom applications it is sometimes necessary to directly manipulate the metadata of a Trace.\n","- The metadata of the Trace will **stay consistent**, as all values are derived from the starttime, the data and the sampling rate and are **updated automatically**"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr.stats.delta, \"|\", tr.stats.endtime)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["tr.stats.sampling_rate = 5.0\n","print(tr.stats.delta, \"|\", tr.stats.endtime)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr.stats.npts)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["tr.data = tr.data[:100]\n","print(tr.stats.npts, 
\"|\", tr.stats.endtime)"]},{"cell_type":"markdown","metadata":{},"source":["- convenience methods make basic manipulations simple"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["tr = read(\"./data/waveform_PFO.mseed\")[0]\n","tr.plot()"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr)\n","tr.resample(sampling_rate=100.0)\n","print(tr)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr)\n","tr.trim(tr.stats.starttime + 12 * 60, tr.stats.starttime + 14 * 60)\n","print(tr)\n","tr.plot()"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["tr.detrend(\"linear\")\n","tr.taper(max_percentage=0.05, type='cosine')\n","tr.filter(\"lowpass\", freq=0.1)\n","tr.plot()"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# try tr. for other methods defined for Trace\n","tr.detrend?"]},{"cell_type":"markdown","metadata":{},"source":["- Raw data available as a [**`numpy.ndarray`**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) (as **`Trace.data`**)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr.data[:20])"]},{"cell_type":"markdown","metadata":{},"source":["- Data can be directly modified e.g. ..\n","\n","..by doing arithmetic operations (fast, handled in C by NumPy)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr.data ** 2 + 0.5)"]},{"cell_type":"markdown","metadata":{},"source":["..by using [**`numpy.ndarray`** builtin methods](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) (also done in C by NumPy)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(tr.data.max())\n","print(tr.data.mean())\n","print(tr.data.ptp())\n","# try tr.data. 
for a list of numpy methods defined on ndarray"]},{"cell_type":"markdown","metadata":{},"source":["..by using **`numpy`** functions (also done in C by NumPy)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["import numpy as np\n","print(np.abs(tr.data))\n","# you can try np. but there is a lot in there\n","# try np.a"]},{"cell_type":"markdown","metadata":{},"source":["..by feeding pointers to existing C/Fortran routines from inside Python!\n","\n","This is done internally in several places, e.g. for cross correlations, beamforming or in third-party filetype libraries like e.g. libmseed.\n","\n","- Trace objects can also be manually generated with data in a [**`numpy.ndarray`**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html) (e.g. when needing to parse waveforms from non-standard ascii files)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import Trace\n","\n","x = np.random.randint(-100, 100, 500)\n","tr = Trace(data=x)\n","tr.stats.station = \"XYZ\"\n","tr.stats.starttime = UTCDateTime()\n","\n","tr.plot()"]},{"cell_type":"markdown","metadata":{},"source":["- Stream objects can be assembled from manually generated Traces"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import Stream\n","\n","tr2 = Trace(data=np.random.randint(-300, 100, 1000))\n","tr2.stats.starttime = UTCDateTime()\n","tr2.stats.sampling_rate = 10.0\n","st = Stream([tr, tr2])\n","\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["#### Builtin methods defined on **`Stream`** / **`Trace`**\n","\n","- Most methods that work on a Trace object also work on a Stream object. They are simply executed for every trace. 
[See ObsPy documentation for an overview of available methods](http://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.html) (or try **`st.`**).\n"," - **`st.filter()`** - Filter all attached traces.\n"," - **`st.trim()`** - Cut all traces.\n"," - **`st.resample()`** / **`st.decimate()`** - Change the sampling rate.\n"," - **`st.trigger()`** - Run triggering algorithms.\n"," - **`st.plot()`** / **`st.spectrogram()`** - Visualize the data.\n"," - **`st.attach_response()`**/**`st.remove_response()`**, **`st.simulate()`** - Instrument correction.\n"," - **`st.merge()`**, **`st.normalize()`**, **`st.detrend()`**, **`st.taper()`**, ...\n","- A **`Stream`** object can also be exported to many formats, so ObsPy can be used to convert between different file formats."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st = read(\"./data/waveform_*.sac\")\n","st.write(\"output_file.mseed\", format=\"MSEED\")\n","!ls -l output_file*"]},{"cell_type":"markdown","metadata":{},"source":[""]},{"cell_type":"markdown","metadata":{},"source":["#### Trace Exercises\n"," - Make a **`numpy.ndarray`** with zeros (e.g. use **`numpy.zeros()`**) and put an ideal pulse somewhere in it\n"," - Initialize a **`Trace`** object with your data array\n"," - Fill in some station information (e.g. 
network, station, ..)\n"," - Print trace summary and plot the trace\n"," - Change the sampling rate to 20 Hz\n"," - Change the starttime of the trace to the start time of this session\n"," - Print the trace summary and plot the trace again"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- Use **`tr.filter(...)`** and apply a lowpass filter with a corner frequency of 1 Hertz.\n","- Display the preview plot; there are a few seconds of zeros that we can cut off."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- Use **`tr.trim(...)`** to remove some of the zeros at the start and at the end\n","- show the preview plot again"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- Scale up the amplitudes of the trace by a factor of 500\n","- Add standard normal Gaussian noise to the trace (use [**`np.random.randn()`**](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html))\n","- Display the preview plot again"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Stream Exercises\n","\n","- Read all Tohoku example earthquake data into a stream object 
(\"./data/waveform\\_\\*\")\n","- Print the stream summary"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- Use **`st.select()`** to only keep traces of station BFO in the stream. Show the preview plot."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- trim the data to a 10 minute time window around the first arrival (just roughly looking at the preview plot)\n","- display the preview plot and spectrograms for the stream (with logarithmic frequency scale, use `wlen=50` for the spectrogram plot)"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- remove the linear trend from the data, apply a tapering and a lowpass at 0.1 Hertz\n","- show the preview plot again"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2} diff --git a/ObsPy/04_Station_metainformation.ipynb b/ObsPy/04_Station_metainformation.ipynb index 9fa82c3..b1da1e8 100644 --- a/ObsPy/04_Station_metainformation.ipynb +++ b/ObsPy/04_Station_metainformation.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, 
"source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Handling Station Metadata
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- for station metadata, the de-facto standard of the future (replacing SEED/RESP) is [FDSN StationXML](http://www.fdsn.org/xml/station/)\n", "- FDSN StationXML files can be read using **`read_inventory()`**"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import read_inventory\n", "# real-world StationXML files often deviate from the official schema definition\n", "# therefore file-format autodiscovery sometimes fails and we have to force the file format\n", "inventory = read_inventory(\"./data/station_PFO.xml\", format=\"STATIONXML\")\n", "print(type(inventory))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- the nested ObsPy Inventory class structure (Inventory/Station/Channel/Response/...) 
is closely modelled after FDSN StationXML\n", ""]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["!head data/station_BFO.xml"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(inventory)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["network = inventory[0]\n", "print(network)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["station = network[0]\n", "print(station)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["channel = station[0]\n", "print(channel)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(channel.response)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import read\n", "st = read(\"./data/waveform_PFO.mseed\")\n", "print(st)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["inv = read_inventory(\"./data/station_PFO.xml\", format=\"STATIONXML\")"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(st[0].stats)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- the instrument response can be deconvolved from the waveform data using the convenience method **`Stream.remove_response()`**\n", "- evalresp is used internally to calculate the instrument response"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st.plot()\n", "st.remove_response(inventory=inv)\n", "st.plot()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- several options can be used to specify details of the deconvolution (water level, frequency domain prefiltering), output units (velocity/displacement/acceleration), demeaning, tapering and to specify if any response stages should be omitted"]}, {"cell_type": 
"code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["st = read(\"./data/waveform_PFO.mseed\")\n", "st.remove_response(inventory=inv, water_level=60, pre_filt=(0.01, 0.02, 8, 10), output=\"DISP\")\n", "st.plot()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- station metadata not present in StationXML yet but in Dataless SEED or RESP files can be used for instrument correction using the `.simulate()` method of Stream/Trace in a similar fashion"]}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Handling Station Metadata
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["- for station metadata, the de-facto standard of the future (replacing SEED/RESP) is [FDSN StationXML](http://www.fdsn.org/xml/station/)\n","- FDSN StationXML files can be read using **`read_inventory()`**"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read_inventory\n","# real-world StationXML files often deviate from the official schema definition\n","# therefore file-format autodiscovery sometimes fails and we have to force the file format\n","inventory = read_inventory(\"./data/station_PFO.xml\", format=\"STATIONXML\")\n","print(type(inventory))"]},{"cell_type":"markdown","metadata":{},"source":["- the nested ObsPy Inventory class structure (Inventory/Station/Channel/Response/...) 
is closely modelled after FDSN StationXML\n",""]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!head data/station_BFO.xml"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(inventory)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["network = inventory[0]\n","print(network)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["station = network[0]\n","print(station)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["channel = station[0]\n","print(channel)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(channel.response)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read\n","st = read(\"./data/waveform_PFO.mseed\")\n","print(st)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["inv = read_inventory(\"./data/station_PFO.xml\", format=\"STATIONXML\")"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(st[0].stats)"]},{"cell_type":"markdown","metadata":{},"source":["- the instrument response can be deconvolved from the waveform data using the convenience method **`Stream.remove_response()`**\n","- evalresp is used internally to calculate the instrument response"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st.plot()\n","st.remove_response(inventory=inv)\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["- several options can be used to specify details of the deconvolution (water level, frequency domain prefiltering), output units (velocity/displacement/acceleration), demeaning, tapering and to specify if any response stages should be omitted"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st = 
read(\"./data/waveform_PFO.mseed\")\n","st.remove_response(inventory=inv, water_level=60, pre_filt=(0.01, 0.02, 8, 10), output=\"DISP\")\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["- station metadata not present in StationXML yet but in Dataless SEED or RESP files can be used for instrument correction using the `.simulate()` method of Stream/Trace in a similar fashion"]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2} diff --git a/ObsPy/05_Event_metadata.ipynb b/ObsPy/05_Event_metadata.ipynb index 1a3610b..5247744 100644 --- a/ObsPy/05_Event_metadata.ipynb +++ b/ObsPy/05_Event_metadata.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Handling Event Metadata
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- for event metadata, the de-facto standard is [QuakeML (an xml document structure)](https://quake.ethz.ch/quakeml/)\n", "- QuakeML files can be read using **`read_events()`**"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import read_events\n", "\n", "catalog = read_events(\"./data/event_tohoku_with_big_aftershocks.xml\")\n", "print(catalog)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- **`read_events()`** function returns a **`Catalog`** object, which is\n", "a collection of **`Event`** objects."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(type(catalog))\n", "print(type(catalog[0]))"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["event = catalog[0]\n", "print(event)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- Event objects are again collections of other resources.\n", "- the nested ObsPy Event class structure (Catalog/Event/Origin/Magnitude/FocalMechanism/...) 
is closely modelled after QuakeML\n", ""]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(type(event.origins))\n", "print(type(event.origins[0]))\n", "print(event.origins[0])"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["print(type(event.magnitudes))\n", "print(type(event.magnitudes[0]))\n", "print(event.magnitudes[0])"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["# try event. to get an idea what \"children\" elements event has"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- The Catalog object contains some convenience methods to make\n", "working with events easier.\n", "- for example, the included events can be filtered with various keys."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["largest_magnitude_events = catalog.filter(\"magnitude >= 7.8\")\n", "print(largest_magnitude_events)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- There is a basic preview plot using the matplotlib basemap module."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["catalog.plot(projection=\"local\");"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- a (modified) Catalog can be output to file (currently there is write support for QuakeML only)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["largest_magnitude_events.write(\"/tmp/large_events.xml\", format=\"QUAKEML\")\n", "!ls -l /tmp/large_events.xml"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- the event type classes can be used to build up Events/Catalogs/Picks/.. 
from scratch in custom processing work flows and to share them with other researchers in the de facto standard format QuakeML"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import UTCDateTime\n", "from obspy.core.event import Catalog, Event, Origin, Magnitude\n", "from obspy.geodetics import FlinnEngdahl\n", "\n", "cat = Catalog()\n", "cat.description = \"Just a fictitious toy example catalog built from scratch\"\n", "\n", "e = Event()\n", "e.event_type = \"not existing\"\n", "\n", "o = Origin()\n", "o.time = UTCDateTime(2014, 2, 23, 18, 0, 0)\n", "o.latitude = 47.6\n", "o.longitude = 12.0\n", "o.depth = 10000\n", "o.depth_type = \"operator assigned\"\n", "o.evaluation_mode = \"manual\"\n", "o.evaluation_status = \"preliminary\"\n", "o.region = FlinnEngdahl().get_region(o.longitude, o.latitude)\n", "\n", "m = Magnitude()\n", "m.mag = 7.2\n", "m.magnitude_type = \"Mw\"\n", "\n", "m2 = Magnitude()\n", "m2.mag = 7.4\n", "m2.magnitude_type = \"Ms\"\n", "\n", "# also included could be: custom picks, amplitude measurements, station magnitudes,\n", "# focal mechanisms, moment tensors, ...\n", "\n", "# make associations, put everything together\n", "cat.append(e)\n", "e.origins = [o]\n", "e.magnitudes = [m, m2]\n", "m.origin_id = o.resource_id\n", "m2.origin_id = o.resource_id\n", "\n", "print(cat)\n", "cat.write(\"/tmp/my_custom_events.xml\", format=\"QUAKEML\")\n", "!cat /tmp/my_custom_events.xml"]}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Handling Event Metadata
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["- for event metadata, the de-facto standard is [QuakeML (an xml document structure)](https://quake.ethz.ch/quakeml/)\n","- QuakeML files can be read using **`read_events()`**"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read_events\n","\n","catalog = read_events(\"./data/event_tohoku_with_big_aftershocks.xml\")\n","print(catalog)"]},{"cell_type":"markdown","metadata":{},"source":["- **`read_events()`** function returns a **`Catalog`** object, which is\n","a collection of **`Event`** objects."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(type(catalog))\n","print(type(catalog[0]))"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["event = catalog[0]\n","print(event)"]},{"cell_type":"markdown","metadata":{},"source":["- Event objects are again collections of other resources.\n","- the nested ObsPy Event class structure (Catalog/Event/Origin/Magnitude/FocalMechanism/...) 
is closely modelled after QuakeML\n",""]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(type(event.origins))\n","print(type(event.origins[0]))\n","print(event.origins[0])"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(type(event.magnitudes))\n","print(type(event.magnitudes[0]))\n","print(event.magnitudes[0])"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# try event. to get an idea what \"children\" elements event has"]},{"cell_type":"markdown","metadata":{},"source":["- The Catalog object contains some convenience methods to make\n","working with events easier.\n","- for example, the included events can be filtered with various keys."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["largest_magnitude_events = catalog.filter(\"magnitude >= 7.8\")\n","print(largest_magnitude_events)"]},{"cell_type":"markdown","metadata":{},"source":["- There is a basic preview plot using the matplotlib basemap module."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["catalog.plot(projection=\"local\");"]},{"cell_type":"markdown","metadata":{},"source":["- a (modified) Catalog can be output to file (currently there is write support for QuakeML only)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["largest_magnitude_events.write(\"/tmp/large_events.xml\", format=\"QUAKEML\")\n","!ls -l /tmp/large_events.xml"]},{"cell_type":"markdown","metadata":{},"source":["- the event type classes can be used to build up Events/Catalogs/Picks/.. 
from scratch in custom processing work flows and to share them with other researchers in the de facto standard format QuakeML"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import UTCDateTime\n","from obspy.core.event import Catalog, Event, Origin, Magnitude\n","from obspy.geodetics import FlinnEngdahl\n","\n","cat = Catalog()\n","cat.description = \"Just a fictitious toy example catalog built from scratch\"\n","\n","e = Event()\n","e.event_type = \"not existing\"\n","\n","o = Origin()\n","o.time = UTCDateTime(2014, 2, 23, 18, 0, 0)\n","o.latitude = 47.6\n","o.longitude = 12.0\n","o.depth = 10000\n","o.depth_type = \"operator assigned\"\n","o.evaluation_mode = \"manual\"\n","o.evaluation_status = \"preliminary\"\n","o.region = FlinnEngdahl().get_region(o.longitude, o.latitude)\n","\n","m = Magnitude()\n","m.mag = 7.2\n","m.magnitude_type = \"Mw\"\n","\n","m2 = Magnitude()\n","m2.mag = 7.4\n","m2.magnitude_type = \"Ms\"\n","\n","# also included could be: custom picks, amplitude measurements, station magnitudes,\n","# focal mechanisms, moment tensors, ...\n","\n","# make associations, put everything together\n","cat.append(e)\n","e.origins = [o]\n","e.magnitudes = [m, m2]\n","m.origin_id = o.resource_id\n","m2.origin_id = o.resource_id\n","\n","print(cat)\n","cat.write(\"/tmp/my_custom_events.xml\", format=\"QUAKEML\")\n","!cat /tmp/my_custom_events.xml"]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2} diff --git a/ObsPy/06_FDSN.ipynb b/ObsPy/06_FDSN.ipynb index cf122ac..d28a8e6 100644 --- a/ObsPy/06_FDSN.ipynb +++ b/ObsPy/06_FDSN.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Downloading Data
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["ObsPy has clients to directly fetch data via...\n", "\n", "- FDSN webservices (IRIS, Geofon/GFZ, USGS, NCEDC, SeisComp3 instances, ...)\n", "- ArcLink (EIDA, ...)\n", "- Earthworm\n", "- SeedLink (near-realtime servers)\n", "- NERIES/NERA/seismicportal.eu\n", "- NEIC\n", "- SeisHub (local seismological database)\n", "\n", "This introduction shows how to use the FDSN webservice client. The FDSN webservice definition is by now the default web service implemented by many data centers world wide. 
Clients for other protocols work similar to the FDSN client.\n", "\n", "#### Waveform Data"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy import UTCDateTime\n", "from obspy.clients.fdsn import Client\n", "\n", "client = Client(\"IRIS\")\n", "t = UTCDateTime(\"2011-03-11T05:46:23\") # Tohoku\n", "st = client.get_waveforms(\"II\", \"PFO\", \"*\", \"LHZ\",\n", " t + 10 * 60, t + 30 * 60)\n", "print(st)\n", "st.plot()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["- again, waveform data is returned as a Stream object\n", "- for all custom processing workflows it does not matter if the data originates from a local file or from a web service\n", "\n", "#### Event Metadata\n", "\n", "The FDSN client can also be used to request event metadata:"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["t = UTCDateTime(\"2011-03-11T05:46:23\") # Tohoku\n", "catalog = client.get_events(starttime=t - 100, endtime=t + 24 * 3600,\n", " minmagnitude=7)\n", "print(catalog)\n", "catalog.plot();"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Requests can have a wide range of constraints (see [ObsPy Documentation](http://docs.obspy.org/packages/autogen/obspy.fdsn.client.Client.get_events.html)):\n", "\n", "- time range\n", "- geographical (lonlat-box, circular by distance)\n", "- depth range\n", "- magnitude range, type\n", "- contributing agency"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Station Metadata\n", "\n", "Finally, the FDSN client can be used to request station metadata. 
Stations can be looked up using a wide range of constraints (see [ObsPy documentation](http://docs.obspy.org/packages/autogen/obspy.fdsn.client.Client.get_stations.html)):\n", "\n", " * network/station code\n", " * time range of operation\n", " * geographical (lonlat-box, circular by distance)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["event = catalog[0]\n", "origin = event.origins[0]\n", "\n", "# M\u00fcnster\n", "lon = 7.63\n", "lat = 51.96\n", "\n", "inventory = client.get_stations(longitude=lon, latitude=lat,\n", " maxradius=2.5, level=\"station\")\n", "print(inventory)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The **`level=...`** keyword is used to specify the level of detail in the requested inventory\n", "\n", "- `\"network\"`: only return information on networks matching the criteria\n", "- `\"station\"`: return information on all matching stations\n", "- `\"channel\"`: return information on available channels in all stations networks matching the criteria\n", "- `\"response\"`: include instrument response for all matching channels (large result data size!)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["inventory = client.get_stations(network=\"OE\", station=\"DAVA\",\n", " level=\"station\")\n", "print(inventory)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["inventory = client.get_stations(network=\"OE\", station=\"DAVA\",\n", " level=\"channel\")\n", "print(inventory)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["For waveform requests that include instrument correction, the appropriate instrument response information can be attached to waveforms automatically: \n", "(Of course, for work on large datasets, the better choice is to download all station information and avoid the internal repeated webservice requests)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, 
"outputs": [], "source": ["t = UTCDateTime(\"2011-03-11T05:46:23\") # Tohoku\n", "st = client.get_waveforms(\"II\", \"PFO\", \"*\", \"LHZ\",\n", " t + 10 * 60, t + 30 * 60, attach_response=True)\n", "st.plot()\n", "\n", "st.remove_response()\n", "st.plot()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["All data requested using the FDSN client can be directly saved to file using the **`filename=\"...\"`** option. The data is then stored exactly as it is served by the data center, i.e. without first parsing by ObsPy and outputting by ObsPy."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["client.get_events(starttime=t-100, endtime=t+24*3600, minmagnitude=7,\n", " filename=\"/tmp/requested_events.xml\")\n", "client.get_stations(network=\"OE\", station=\"DAVA\", level=\"station\",\n", " filename=\"/tmp/requested_stations.xml\")\n", "client.get_waveforms(\"II\", \"PFO\", \"*\", \"LHZ\", t + 10 * 60, t + 30 * 60,\n", " filename=\"/tmp/requested_waveforms.mseed\")\n", "!ls -lrt /tmp/requested*"]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### FDSN Client Exercise\n", "\n", "Use the FDSN client to assemble a waveform dataset for on event.\n", "\n", "- search for a large earthquake (e.g. by depth or in a region of your choice, use option **`limit=5`** to keep network traffic down)"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- search for stations to look at waveforms for the event. stations should..\n", " * be available at the time of the event\n", " * have a vertical 1 Hz stream (\"LHZ\", to not overpower our network..)\n", " * be in a narrow angular distance around the event (e.g. 
90-91 degrees)\n", " * adjust your search so that only a small number of stations (e.g. 3-6) match your search criteria"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- for each of these stations download data of the event, e.g. a couple of minutes before to half an hour after the event\n", "- put all data together in one stream (put the `get_waveforms()` call in a try/except/pass block to silently skip stations that actually have no data available)\n", "- print stream info, plot the raw data"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["- correct the instrument response for all stations and plot the corrected data"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["If you have time, assemble and plot another similar dataset (e.g. like before stations at a certain distance from a big event, or use Transportable Array data for a big event, etc.)"]}], "metadata": {"jupytext": {"encoding": "# -*- coding: utf-8 -*-"}, "kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Downloading Data
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["ObsPy has clients to directly fetch data via...\n","\n","- FDSN webservices (IRIS, Geofon/GFZ, USGS, NCEDC, SeisComp3 instances, ...)\n","- ArcLink (EIDA, ...)\n","- Earthworm\n","- SeedLink (near-realtime servers)\n","- NERIES/NERA/seismicportal.eu\n","- NEIC\n","- SeisHub (local seismological database)\n","\n","This introduction shows how to use the FDSN webservice client. The FDSN webservice definition is by now the default web service implemented by many data centers world wide. 
Clients for other protocols work similarly to the FDSN client.\n","\n","#### Waveform Data"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import UTCDateTime\n","from obspy.clients.fdsn import Client\n","\n","client = Client(\"IRIS\")\n","t = UTCDateTime(\"2011-03-11T05:46:23\") # Tohoku\n","st = client.get_waveforms(\"II\", \"PFO\", \"*\", \"LHZ\",\n"," t + 10 * 60, t + 30 * 60)\n","print(st)\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["- again, waveform data is returned as a Stream object\n","- for all custom processing workflows it does not matter if the data originates from a local file or from a web service\n","\n","#### Event Metadata\n","\n","The FDSN client can also be used to request event metadata:"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["t = UTCDateTime(\"2011-03-11T05:46:23\") # Tohoku\n","catalog = client.get_events(starttime=t - 100, endtime=t + 24 * 3600,\n"," minmagnitude=7)\n","print(catalog)\n","catalog.plot();"]},{"cell_type":"markdown","metadata":{},"source":["Requests can have a wide range of constraints (see [ObsPy Documentation](http://docs.obspy.org/packages/autogen/obspy.fdsn.client.Client.get_events.html)):\n","\n","- time range\n","- geographical (lonlat-box, circular by distance)\n","- depth range\n","- magnitude range, type\n","- contributing agency"]},{"cell_type":"markdown","metadata":{},"source":["#### Station Metadata\n","\n","Finally, the FDSN client can be used to request station metadata. 
Stations can be looked up using a wide range of constraints (see [ObsPy documentation](http://docs.obspy.org/packages/autogen/obspy.fdsn.client.Client.get_stations.html)):\n","\n"," * network/station code\n"," * time range of operation\n"," * geographical (lonlat-box, circular by distance)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["event = catalog[0]\n","origin = event.origins[0]\n","\n","# Münster\n","lon = 7.63\n","lat = 51.96\n","\n","inventory = client.get_stations(longitude=lon, latitude=lat,\n"," maxradius=2.5, level=\"station\")\n","print(inventory)"]},{"cell_type":"markdown","metadata":{},"source":["The **`level=...`** keyword is used to specify the level of detail in the requested inventory:\n","\n","- `\"network\"`: only return information on networks matching the criteria\n","- `\"station\"`: return information on all matching stations\n","- `\"channel\"`: return information on available channels in all stations/networks matching the criteria\n","- `\"response\"`: include instrument response for all matching channels (large result data size!)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["inventory = client.get_stations(network=\"OE\", station=\"DAVA\",\n"," level=\"station\")\n","print(inventory)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["inventory = client.get_stations(network=\"OE\", station=\"DAVA\",\n"," level=\"channel\")\n","print(inventory)"]},{"cell_type":"markdown","metadata":{},"source":["For waveform requests that include instrument correction, the appropriate instrument response information can be attached to waveforms automatically: \n","(Of course, for work on large datasets, the better choice is to download all station information and avoid the internal repeated webservice requests)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["t = UTCDateTime(\"2011-03-11T05:46:23\") # Tohoku\n","st = 
client.get_waveforms(\"II\", \"PFO\", \"*\", \"LHZ\",\n"," t + 10 * 60, t + 30 * 60, attach_response=True)\n","st.plot()\n","\n","st.remove_response()\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["All data requested using the FDSN client can be directly saved to file using the **`filename=\"...\"`** option. The data is then stored exactly as it is served by the data center, i.e. without first parsing by ObsPy and outputting by ObsPy."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["client.get_events(starttime=t-100, endtime=t+24*3600, minmagnitude=7,\n"," filename=\"/tmp/requested_events.xml\")\n","client.get_stations(network=\"OE\", station=\"DAVA\", level=\"station\",\n"," filename=\"/tmp/requested_stations.xml\")\n","client.get_waveforms(\"II\", \"PFO\", \"*\", \"LHZ\", t + 10 * 60, t + 30 * 60,\n"," filename=\"/tmp/requested_waveforms.mseed\")\n","!ls -lrt /tmp/requested*"]},{"cell_type":"markdown","metadata":{},"source":["#### FDSN Client Exercise\n","\n","Use the FDSN client to assemble a waveform dataset for one event.\n","\n","- search for a large earthquake (e.g. by depth or in a region of your choice, use option **`limit=5`** to keep network traffic down)"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- search for stations to look at waveforms for the event. stations should..\n"," * be available at the time of the event\n"," * have a vertical 1 Hz stream (\"LHZ\", to not overpower our network..)\n"," * be in a narrow angular distance around the event (e.g. 90-91 degrees)\n"," * adjust your search so that only a small number of stations (e.g. 
3-6) match your search criteria"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- for each of these stations download data of the event, e.g. a couple of minutes before to half an hour after the event\n","- put all data together in one stream (put the `get_waveforms()` call in a try/except/pass block to silently skip stations that actually have no data available)\n","- print stream info, plot the raw data"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["- correct the instrument response for all stations and plot the corrected data"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["If you have time, assemble and plot another similar dataset (e.g. like before stations at a certain distance from a big event, or use Transportable Array data for a big event, etc.)"]}],"metadata":{"jupytext":{"encoding":"# -*- coding: utf-8 -*-"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2} diff --git a/ObsPy/07_Basic_Processing_Exercise.ipynb b/ObsPy/07_Basic_Processing_Exercise.ipynb index 45ab9e5..b4dbf61 100644 --- a/ObsPy/07_Basic_Processing_Exercise.ipynb +++ b/ObsPy/07_Basic_Processing_Exercise.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Downloading/Processing Exercise
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["For the this exercise we will download some data from the Tohoku-Oki earthquake, cut out a certain time window around the first arrival and remove the instrument response from the data."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "plt.style.use('ggplot')\n", "plt.rcParams['figure.figsize'] = 12, 8"]}, {"cell_type": "markdown", "metadata": {}, "source": ["The first step is to download all the necessary information using the ObsPy FDSN client. **Learn to read the documentation!**\n", "\n", "We need the following things:\n", "\n", "1. Event information about the Tohoku-Oki earthquake. Use the `get_events()` method of the client. A good provider of event data is the USGS.\n", "2. Waveform information for a certain station. Choose your favorite one! If you have no preference, use `II.BFO` which is available for example from IRIS. Use the `get_waveforms()` method.\n", "3. 
Download the associated station/instrument information with the `get_stations()` method."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Have a look at the just downloaded data."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["## Exercise\n", "\n", "The goal of this exercise is to cut the data from 1 minute before the first arrival to 5 minutes after it, and then remove the instrument response.\n", "\n", "\n", "#### Step 1: Determine Coordinates of Station"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Step 2: Determine Coordinates of Event"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Step 3: Calculate distance of event and station.\n", "\n", "Use `obspy.geodetics.locations2degree`."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": 
[]}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Step 4: Calculate Theoretical Arrivals\n", "\n", "```python\n", "from obspy.taup import TauPyModel\n", "m = TauPyModel(model=\"ak135\")\n", "arrivals = m.get_ray_paths(...)\n", "arrivals.plot()\n", "```"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Step 5: Calculate absolute time of the first arrivals at the station"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Step 6: Cut to 1 minute before and 5 minutes after the first arrival"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Step 7: Remove the instrument response\n", "\n", "```python\n", "st.remove_response(inventory=inv, pre_filt=...)\n", "```\n", "\n", "![taper](images/cos_taper.png)"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["## Bonus: Interactive IPython widgets"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from IPython.html.widgets import interact\n", 
"from obspy.taup import TauPyModel\n", "\n", "m = TauPyModel(\"ak135\")\n", "\n", "def plot_raypaths(distance, depth, wavetype):\n", " try:\n", " plt.close()\n", " except:\n", " pass\n", " if wavetype == \"ttall\":\n", " phases = [\"ttall\"]\n", " elif wavetype == \"diff\":\n", " phases = [\"Pdiff\", \"pPdiff\"]\n", " m.get_ray_paths(distance_in_degree=distance,\n", " source_depth_in_km=depth,\n", " phase_list=phases).plot();\n", " \n", "interact(plot_raypaths, distance=[0, 180],\n", " depth=[0, 700], wavetype=[\"ttall\", \"diff\"]);"]}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Downloading/Processing Exercise
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"markdown","metadata":{},"source":["In this exercise we will download some data from the Tohoku-Oki earthquake, cut out a certain time window around the first arrival and remove the instrument response from the data."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["The first step is to download all the necessary information using the ObsPy FDSN client. **Learn to read the documentation!**\n","\n","We need the following things:\n","\n","1. Event information about the Tohoku-Oki earthquake. Use the `get_events()` method of the client. A good provider of event data is the USGS.\n","2. Waveform data for a certain station. Choose your favorite one! If you have no preference, use `II.BFO`, which is available for example from IRIS. Use the `get_waveforms()` method.\n","3. 
Download the associated station/instrument information with the `get_stations()` method."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Have a look at the just downloaded data."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["## Exercise\n","\n","The goal of this exercise is to cut the data from 1 minute before the first arrival to 5 minutes after it, and then remove the instrument response.\n","\n","\n","#### Step 1: Determine Coordinates of Station"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Step 2: Determine Coordinates of Event"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Step 3: Calculate distance of event and station.\n","\n","Use `obspy.geodetics.locations2degree`."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Step 4: Calculate Theoretical Arrivals\n","\n","```python\n","from obspy.taup import 
TauPyModel\n","m = TauPyModel(model=\"ak135\")\n","arrivals = m.get_ray_paths(...)\n","arrivals.plot()\n","```"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Step 5: Calculate absolute time of the first arrivals at the station"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Step 6: Cut to 1 minute before and 5 minutes after the first arrival"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Step 7: Remove the instrument response\n","\n","```python\n","st.remove_response(inventory=inv, pre_filt=...)\n","```\n","\n","![taper](images/cos_taper.png)"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["## Bonus: Interactive IPython widgets"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from ipywidgets import interact\n","from obspy.taup import TauPyModel\n","\n","m = TauPyModel(\"ak135\")\n","\n","def plot_raypaths(distance, depth, wavetype):\n","    try:\n","        plt.close()\n","    except Exception:\n","        pass\n","    if wavetype == \"ttall\":\n","        phases = [\"ttall\"]\n","    elif wavetype == \"diff\":\n","        phases = [\"Pdiff\", \"pPdiff\"]\n","    m.get_ray_paths(distance_in_degree=distance,\n","                    source_depth_in_km=depth,\n","                    phase_list=phases).plot()\n","\n","interact(plot_raypaths, distance=(0, 180),\n","         depth=(0, 700), wavetype=[\"ttall\", \"diff\"]);"]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2}
diff --git a/ObsPy/08_Exercise__2008_MtCarmel_Earthquake_and_Aftershock_Series.ipynb b/ObsPy/08_Exercise__2008_MtCarmel_Earthquake_and_Aftershock_Series.ipynb
index b31ac6d..00a2d9c 100644
--- a/ObsPy/08_Exercise__2008_MtCarmel_Earthquake_and_Aftershock_Series.ipynb
+++ b/ObsPy/08_Exercise__2008_MtCarmel_Earthquake_and_Aftershock_Series.ipynb
@@ -1 +1 @@
-{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["<div style='background-image: url(\"../share/images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
\n", "
\n", "
\n", "
ObsPy Tutorial
\n", "
Exercise: Event Detection and Magnitude Estimation in an Aftershock Series
\n", "
\n", "
\n", "
"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Seismo-Live: http://seismo-live.org\n", "\n", "##### Authors:\n", "* Tobias Megies ([@megies](https://github.com/megies))\n", "* Lion Krischer ([@krischer](https://github.com/krischer))\n", "---"]}, {"cell_type": "markdown", "metadata": {}, "source": ["![](images/obspy_logo_full_524x179px.png)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["# Exercise -- The 2008 Mt. Carmel, Illinois, Earthquake and Aftershock Series\n", "\n", "Introduction from \n", "[\"The April 18, 2008 Illinois Earthquake: An ANSS Monitoring Success\" by Robert B. Herrmann, Mitch Withers, and Harley Benz, SRL 2008](http://srl.geoscienceworld.org/content/79/6/830.extract):\n", "\n", "*\"The largest-magnitude earthquake in the past 20 years struck near **Mt. Carmel in southeastern Illinois** on Friday morning, 18 April 2008 at 09:36:59 UTC (04:37 CDT). **The Mw 5.2 earthquake was felt over an area that spanned Chicago and Atlanta**, with about 40,000 reports submitted to the U.S. Geological Survey (USGS) \u201cDid You Feel It?\u201d system. There were at least six felt aftershocks greater than magnitude 3 and 20 aftershocks with magnitudes greater than 2 located by regional and national seismic networks. **Portable instrumentation was deployed** by researchers of the University of Memphis and Indiana University (the first portable station was installed at about 23:00 UTC on 18 April). The portable seismographs were deployed both to capture near-source, high-frequency ground motions for significant aftershocks and to better understand structure along the active fault. 
[...]\"*\n", "\n", "\"World\n", "\"Felt\n", "\n", "Web page hits at USGS/NEIC during 24 hours after the earthquake:\n", "\n", " - peak rate: **3,892 hits/second**\n", " - **68 million hits** in the 24 hours after the earthquake\n", "\n", "\"USGS/NEIC\n", "\n", "Some links:\n", "\n", " - http://earthquake.usgs.gov/earthquakes/dyfi/events/us/2008qza6/us/index.html\n", " - http://earthquake.usgs.gov/earthquakes/eqinthenews/2008/us2008qza6/#summary\n", " - http://srl.geoscienceworld.org/content/79/6/830.extract\n", " - http://srl.geoscienceworld.org/content/82/5/735.short"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["%matplotlib inline"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### Use the search on the [ObsPy docs](http://docs.obspy.org/) for any functionality that you do not remember/know yet..!\n", "### E.g. [searching for \"filter\"](http://docs.obspy.org/search.html?q=filter)..."]}, {"cell_type": "markdown", "metadata": {}, "source": ["## 1. Download and visualize main shock\n", "\n", "Request information on stations recording close to the event from IRIS using the [`obspy.fdsn Client`](http://docs.obspy.org/packages/obspy.fdsn.html), print the requested station information."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Download waveform data for the mainshock for one of the stations using the FDSN client (if you get an error, maybe try a different station and/or ask for help). 
Make the preview plot using obspy."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Visualize a Spectrogram (if you got time, you can play around with the different parameters for the spectrogram). Working on a copy of the donwloaded data, apply a filter, then trim the requested data to some interesting parts of the earthquake and plot the data again."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Define a function `plot_data(t)` that fetches waveform data for this station and that shows a preview plot of 20 seconds of data starting at a given time. It should take a UTCDateTime object as the single argument."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Test your function by calling it for the time of the main shock"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["## 2. Visualize aftershock and estimate magnitude\n", "\n", "Read file \"`./data/mtcarmel.mseed`\". 
It contains data of stations from an aftershock network that was set up shortly after the main shock. Print the stream information and have a look at the network/station information, channel names time span of the data etc.. Make a preview plot. "]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["The strongest aftershock you see in the given recordings is at `2008-04-21T05:38:30`. Trim the data to this aftershock and make a preview plot and a spectrogram plot of it."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Make a very simple approximation of the magnitude. Use the function provided below (after you execute the code box with the function you can call it anywhere in your code boxes).\n", "\n", "Demean the data and then determine the raw data's peak value of the event at one of the stations (e.g. using Python's `max` function or a numpy method on the data array) and call the provided function for that value. (Note that this is usually done on the horizontal components.. 
we do it on vertical for simplicity here)"]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from obspy.signal.invsim import estimate_magnitude\n", "\n", "def mag_approx(peak_value, frequency, hypo_dist=20):\n", " \"\"\"\n", " Give peak value of raw data for a very crude and simple magnitude estimation.\n", "\n", " For simplicity, this is done assuming hypocentral location, peak frequency, etc.\n", " To keep it simple for now the response information is entered manually here\n", " (it is the same for all instruments used here).\n", " \"\"\"\n", " poles = [-1.48600E-01 + 1.48600E-01j,\n", " -1.48600E-01 - 1.48600E-01j,\n", " -4.14690E+02 + 0.00000E+00j,\n", " -9.99027E+02 + 9.99027E+02j,\n", " -9.99027E+02 - 9.99027E+02j]\n", " zeros = [0.0 + 0.0j,\n", " 0.0 + 0.0j,\n", " 1.1875E+03 + 0.0j]\n", " norm_factor = 7.49898E+08\n", " sensitivity = 6.97095E+05\n", " paz = {'poles': poles, 'zeros': zeros, 'gain': norm_factor,\n", " 'sensitivity': sensitivity}\n", " ml = estimate_magnitude(paz, peak_value, 0.5 / frequency, hypo_dist)\n", " return ml"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Do the magnitude approximation in a for-loop for all stations in the Stream. Calculate a network magnitude as the average of all three stations."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Define a function `netmag(st)` that returns a network magnitude approximation. 
It should take a Stream object (which is assumed to be trimmed to an event) as only argument. Use the provided `mag_approx` function and calculate the mean of all traces in the stream internally."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Test your function on the cut out Stream object of the large aftershock from before."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Advanced\n", "You can also download the station metadata using the FDSN client and extract poles and zeros information and directly use the `estimate_magnitude` function without using the hard-coded response information. \n", "\n", "###3. Detect aftershocks using triggering routines\n", "\n", "Read the 3-station data from file `\"./data/mtcarmel.mseed\"` again. Apply a bandpass filter, adjust it to the dominant event frequency range you have seen in the aftershock spectrogram before. Run a recursive STA/LTA trigger on the filtered data (see [ObsPy docs](http://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.trigger.html)). The length of the `sta` window should be a bit longer than an arriving seismic phase, the `lta` window can be around ten to twenty times as long.\n", "\n", "Make a preview plot of the Stream object, now showing the characteristic function of the triggering. 
High trigger values indicate transient signals (of the frequency range of interest) that might be an event (or just a local noise burst on that station..).\n", "\n", "(play around with different values and check out the resulting characteristic function)"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["We could now manually compare trigger values on the different stations to find small aftershocks, termed a network coincidence trigger. However, there is a convenience function in ObsPy's signal toolbox to do just that in only a few lines of code.\n", "\n", "Read the data again and apply a bandpass to the dominant frequencies of the events. Use the `coincidence_trigger` function that returns a list of possible events (see the [ObsPy Tutorial](http://docs.obspy.org/tutorial/code_snippets/trigger_tutorial.html#network-coincidence-trigger-example) for an example of a recursive STA/LTA network coincidence trigger). Print the length of the list and adjust the trigger-on/off thresholds so that you get around 5 suspected events.\n", "\n", "Print the first trigger in the list to show information on the suspected event."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["Go over the list of triggers in a for-loop. 
For each trigger/suspected event:\n", "\n", " - print the time of the trigger\n", " - read the waveform data, use [`starttime` and `endtime` arguments for `read()`](http://docs.obspy.org/packages/autogen/obspy.core.stream.read.html) to trim the data to the suspected event right during reading (avoiding to read the whole file again and again)\n", " - calculate and print the network magnitude using the `netmag(st)` function from earlier\n", " - make a preview plot\n", "\n", "If you're curious you can compare the crude magnitude estimates with the [table of aftershocks](http://www.seismosoc.org/publications/srl/SRL_82/srl_82-5_hamburger_et_al-esupp/Table_S2.txt) provided by the scientists that analyzed the aftershock sequence. The paper with details can be found here: [\"Aftershocks of the 2008 Mt. Carmel, Illinois, Earthquake: Evidence for Conjugate Faulting near the Termination of the Wabash Valley Fault System\" by M. W. Hamburger, K. Shoemaker, S. Horton, H. DeShon, M. Withers, G. L. Pavlis and E. Sherrill, SRL 2011](http://srl.geoscienceworld.org/content/82/5/735.short)."]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["#### Advanced:\n", "You can also use event templates of some good signal-noise-ratio aftershocks to detect more weaker aftershocks and select from weak triggers based on waveform similarities like shown in the ObsPy tutorial. \n", "\n", "## 4. Cross correlation pick alignment\n", "\n", "An unfortunate undergrad student picked his/her way through the aftershock sequence. For a high-precision relative location run (e.g. 
double difference, master-event relocation) you want to have more precise, cross correlation aligned phase picks.\n", "\n", "The approach by [Deichmann and Garcia-Fernandez (1992, GJI)](http://onlinelibrary.wiley.com/doi/10.1111/j.1365-246X.1992.tb02088.x/abstract;jsessionid=F73CF9485C0432A3E14EDDDD0C73E058.d03t02) is implemented in the function [**`xcorr_pick_correction`**](http://docs.obspy.org/packages/autogen/obspy.signal.cross_correlation.xcorr_pick_correction.html) in `obspy.signal.cross_correlation`. Follow the [example in the ObsPy Tutorial](http://docs.obspy.org/tutorial/code_snippets/xcorr_pick_correction.html). Get time corrections for the event picks given in `pick_times` relative to the event pick `reference_pick`.\n", "\n", "Use the data in file `\"./data/mtcarmel_100hz.mseed\"`. Before applying the correction resample to 200 Hz (the parabola fitting works better with the finer time resolution) and optionally bandpass to a relatively narrow frequency range."]}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["reference_pick = \"2008-04-19T12:46:45.96\"\n", "pick_times = [\"2008-04-19T13:08:59.19\",\n", " \"2008-04-19T16:55:19.65\",\n", " \"2008-04-19T19:03:38.72\",\n", " \"2008-04-19T19:28:53.54\"]"]}, {"cell_type": "code", "execution_count": null, "metadata": {"lines_to_next_cell": 2, "tags": ["exercise"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": ["solution"]}, "outputs": [], "source": []}], "metadata": {"jupytext": {"encoding": "# -*- coding: utf-8 -*-"}, "kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}}, "nbformat": 4, "nbformat_minor": 2} \ No newline at end of file +{"cells":[{"cell_type":"markdown","metadata":{},"source":["
\n","
\n","
\n","
ObsPy Tutorial
\n","
Exercise: Event Detection and Magnitude Estimation in an Aftershock Series
\n","
\n","
\n","
"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Tobias Megies ([@megies](https://github.com/megies))\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"markdown","metadata":{},"source":["# Exercise -- The 2008 Mt. Carmel, Illinois, Earthquake and Aftershock Series\n","\n","Introduction from \n","[\"The April 18, 2008 Illinois Earthquake: An ANSS Monitoring Success\" by Robert B. Herrmann, Mitch Withers, and Harley Benz, SRL 2008](http://srl.geoscienceworld.org/content/79/6/830.extract):\n","\n","*\"The largest-magnitude earthquake in the past 20 years struck near **Mt. Carmel in southeastern Illinois** on Friday morning, 18 April 2008 at 09:36:59 UTC (04:37 CDT). **The Mw 5.2 earthquake was felt over an area that spanned Chicago and Atlanta**, with about 40,000 reports submitted to the U.S. Geological Survey (USGS) “Did You Feel It?” system. There were at least six felt aftershocks greater than magnitude 3 and 20 aftershocks with magnitudes greater than 2 located by regional and national seismic networks. **Portable instrumentation was deployed** by researchers of the University of Memphis and Indiana University (the first portable station was installed at about 23:00 UTC on 18 April). The portable seismographs were deployed both to capture near-source, high-frequency ground motions for significant aftershocks and to better understand structure along the active fault. 
[...]\"*\n","\n","\"World\n","\"Felt\n","\n","Web page hits at USGS/NEIC during 24 hours after the earthquake:\n","\n"," - peak rate: **3,892 hits/second**\n"," - **68 million hits** in the 24 hours after the earthquake\n","\n","\"USGS/NEIC\n","\n","Some links:\n","\n"," - http://earthquake.usgs.gov/earthquakes/dyfi/events/us/2008qza6/us/index.html\n"," - http://earthquake.usgs.gov/earthquakes/eqinthenews/2008/us2008qza6/#summary\n"," - http://srl.geoscienceworld.org/content/79/6/830.extract\n"," - http://srl.geoscienceworld.org/content/82/5/735.short"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline"]},{"cell_type":"markdown","metadata":{},"source":["### Use the search on the [ObsPy docs](http://docs.obspy.org/) for any functionality that you do not remember/know yet..!\n","### E.g. [searching for \"filter\"](http://docs.obspy.org/search.html?q=filter)..."]},{"cell_type":"markdown","metadata":{},"source":["## 1. Download and visualize main shock\n","\n","Request information on stations recording close to the event from IRIS using the [`obspy.fdsn Client`](http://docs.obspy.org/packages/obspy.fdsn.html), print the requested station information."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Download waveform data for the mainshock for one of the stations using the FDSN client (if you get an error, maybe try a different station and/or ask for help). 
Make the preview plot using ObsPy."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Visualize a spectrogram (if you have time, you can play around with the different parameters for the spectrogram). Working on a copy of the downloaded data, apply a filter, then trim the requested data to some interesting parts of the earthquake and plot the data again."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Define a function `plot_data(t)` that fetches waveform data for this station and that shows a preview plot of 20 seconds of data starting at a given time. It should take a UTCDateTime object as the single argument."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Test your function by calling it for the time of the main shock."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["## 2. Visualize aftershock and estimate magnitude\n","\n","Read file \"`./data/mtcarmel.mseed`\". It contains data of stations from an aftershock network that was set up shortly after the main shock. 
Print the stream information and have a look at the network/station information, channel names, time span of the data, etc. Make a preview plot."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["The strongest aftershock you see in the given recordings is at `2008-04-21T05:38:30`. Trim the data to this aftershock and make a preview plot and a spectrogram plot of it."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Make a very simple approximation of the magnitude. Use the function provided below (after you execute the code box with the function you can call it anywhere in your code boxes).\n","\n","Demean the data and then determine the raw data's peak value of the event at one of the stations (e.g. using Python's `max` function or a numpy method on the data array) and call the provided function for that value. (Note that this is usually done on the horizontal components; 
we do it on vertical for simplicity here)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy.signal.invsim import estimate_magnitude\n","\n","def mag_approx(peak_value, frequency, hypo_dist=20):\n"," \"\"\"\n"," Give peak value of raw data for a very crude and simple magnitude estimation.\n","\n"," For simplicity, this is done assuming hypocentral location, peak frequency, etc.\n"," To keep it simple for now the response information is entered manually here\n"," (it is the same for all instruments used here).\n"," \"\"\"\n"," poles = [-1.48600E-01 + 1.48600E-01j,\n"," -1.48600E-01 - 1.48600E-01j,\n"," -4.14690E+02 + 0.00000E+00j,\n"," -9.99027E+02 + 9.99027E+02j,\n"," -9.99027E+02 - 9.99027E+02j]\n"," zeros = [0.0 + 0.0j,\n"," 0.0 + 0.0j,\n"," 1.1875E+03 + 0.0j]\n"," norm_factor = 7.49898E+08\n"," sensitivity = 6.97095E+05\n"," paz = {'poles': poles, 'zeros': zeros, 'gain': norm_factor,\n"," 'sensitivity': sensitivity}\n"," ml = estimate_magnitude(paz, peak_value, 0.5 / frequency, hypo_dist)\n"," return ml"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Do the magnitude approximation in a for-loop for all stations in the Stream. Calculate a network magnitude as the average of all three stations."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Define a function `netmag(st)` that returns a network magnitude approximation. It should take a Stream object (which is assumed to be trimmed to an event) as only argument. 
Use the provided `mag_approx` function and calculate the mean of all traces in the stream internally."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Test your function on the cut-out Stream object of the large aftershock from before."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Advanced\n","You can also download the station metadata using the FDSN client, extract poles and zeros information and directly use the `estimate_magnitude` function without using the hard-coded response information. \n","\n","## 3. Detect aftershocks using triggering routines\n","\n","Read the 3-station data from file `\"./data/mtcarmel.mseed\"` again. Apply a bandpass filter, adjusted to the dominant event frequency range you have seen in the aftershock spectrogram before. Run a recursive STA/LTA trigger on the filtered data (see [ObsPy docs](http://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.trigger.html)). The length of the `sta` window should be a bit longer than an arriving seismic phase; the `lta` window can be around ten to twenty times as long.\n","\n","Make a preview plot of the Stream object, now showing the characteristic function of the triggering. 
High trigger values indicate transient signals (of the frequency range of interest) that might be an event (or just a local noise burst on that station...).\n","\n","(Play around with different values and check out the resulting characteristic function.)"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["We could now manually compare trigger values on the different stations to find small aftershocks; this approach is termed a network coincidence trigger. However, there is a convenience function in ObsPy's signal toolbox that does just that in only a few lines of code.\n","\n","Read the data again and apply a bandpass to the dominant frequencies of the events. Use the `coincidence_trigger` function, which returns a list of possible events (see the [ObsPy Tutorial](http://docs.obspy.org/tutorial/code_snippets/trigger_tutorial.html#network-coincidence-trigger-example) for an example of a recursive STA/LTA network coincidence trigger). Print the length of the list and adjust the trigger-on/off thresholds so that you get around 5 suspected events.\n","\n","Print the first trigger in the list to show information on the suspected event."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["Go over the list of triggers in a for-loop. 
For each trigger/suspected event:\n","\n"," - print the time of the trigger\n"," - read the waveform data, using the [`starttime` and `endtime` arguments for `read()`](http://docs.obspy.org/packages/autogen/obspy.core.stream.read.html) to trim the data to the suspected event right during reading (this avoids reading the whole file again and again)\n"," - calculate and print the network magnitude using the `netmag(st)` function from earlier\n"," - make a preview plot\n","\n","If you're curious, you can compare the crude magnitude estimates with the [table of aftershocks](http://www.seismosoc.org/publications/srl/SRL_82/srl_82-5_hamburger_et_al-esupp/Table_S2.txt) provided by the scientists who analyzed the aftershock sequence. The paper with details can be found here: [\"Aftershocks of the 2008 Mt. Carmel, Illinois, Earthquake: Evidence for Conjugate Faulting near the Termination of the Wabash Valley Fault System\" by M. W. Hamburger, K. Shoemaker, S. Horton, H. DeShon, M. Withers, G. L. Pavlis and E. Sherrill, SRL 2011](http://srl.geoscienceworld.org/content/82/5/735.short)."]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"markdown","metadata":{},"source":["#### Advanced\n","You can also use event templates of some aftershocks with good signal-to-noise ratio to detect even weaker aftershocks, selecting from weak triggers based on waveform similarity, as shown in the ObsPy tutorial.\n","\n","## 4. Cross correlation pick alignment\n","\n","An unfortunate undergrad student picked his/her way through the aftershock sequence. For a high-precision relative location run (e.g. 
double difference, master-event relocation) you want to have more precise, cross-correlation-aligned phase picks.\n","\n","The approach by [Deichmann and Garcia-Fernandez (1992, GJI)](http://onlinelibrary.wiley.com/doi/10.1111/j.1365-246X.1992.tb02088.x/abstract;jsessionid=F73CF9485C0432A3E14EDDDD0C73E058.d03t02) is implemented in the function [**`xcorr_pick_correction`**](http://docs.obspy.org/packages/autogen/obspy.signal.cross_correlation.xcorr_pick_correction.html) in `obspy.signal.cross_correlation`. Follow the [example in the ObsPy Tutorial](http://docs.obspy.org/tutorial/code_snippets/xcorr_pick_correction.html). Get time corrections for the event picks given in `pick_times` relative to the event pick `reference_pick`.\n","\n","Use the data in file `\"./data/mtcarmel_100hz.mseed\"`. Before applying the correction, resample to 200 Hz (the parabola fitting works better with the finer time resolution) and optionally bandpass to a relatively narrow frequency range."]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["reference_pick = \"2008-04-19T12:46:45.96\"\n","pick_times = [\"2008-04-19T13:08:59.19\",\n","              \"2008-04-19T16:55:19.65\",\n","              \"2008-04-19T19:03:38.72\",\n","              \"2008-04-19T19:28:53.54\"]"]},{"cell_type":"code","execution_count":null,"metadata":{"lines_to_next_cell":2,"tags":["exercise"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]},{"cell_type":"code","execution_count":null,"metadata":{"tags":["solution"]},"outputs":[],"source":[]}],"metadata":{"jupytext":{"encoding":"# -*- coding: utf-8 -*-"},"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2}