Compare commits

...

10 Commits

Author SHA1 Message Date
3be655017f updated paths to header image file 2024-06-10 12:15:28 +02:00
228434397d add notebook collection from seismo-live.org on ObsPy
activate LFS
2024-06-10 11:47:40 +02:00
dbcb4a3d8a initial import of PPSD exercise from seismo live web site 2024-06-03 12:17:07 +02:00
d829006103 update README.md 2024-04-22 11:46:34 +02:00
fc3b809875 Add data analysis code workspace configuration
This commit adds a new file `dataanalaysis.code-workspace` which includes the necessary configuration for a data analysis project. The file specifies the root folder path and contains default settings.
2024-04-22 11:35:38 +02:00
85977c1a3a add exercises on cross correlation 2022-05-30 13:32:09 +02:00
81479e92af add new file 02-FFt_and_Basic_Filtering/3-Instrument_response.ipynb
Exercise / Lab on Instrument Responses and Earth' Free Oscilation
2022-05-16 14:03:44 +02:00
6f854dfb66 add notebooks for exercise 2
* filter basics
* Fourier transformation
2022-04-24 18:42:45 +02:00
6aac70a47a add jupyter notebook "Python Crash Course"
jupyter notebook and 2 images
2022-04-08 21:47:05 +02:00
ce24db6a8e update README.md and .gitignore
updated lecture information and notes for 1st exercise
2022-04-08 21:35:38 +02:00
52 changed files with 40290 additions and 4 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,3 @@
*.png filter=lfs diff=lfs merge=lfs -text
*.mseed filter=lfs diff=lfs merge=lfs -text
*.sac filter=lfs diff=lfs merge=lfs -text

.gitignore vendored

@@ -152,3 +152,43 @@ dmypy.json
# Cython debug symbols
cython_debug/
# General
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets
# Local History for Visual Studio Code
.history/
# Built Visual Studio Code Extensions


@@ -0,0 +1,771 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Scientific Python</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">A super quick crash course</div>\n",
" </div>\n",
" </div>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Seismo-Live: http://seismo-live.org\n",
"\n",
"##### Authors:\n",
"* Lion Krischer ([@krischer](https://github.com/krischer))\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook is a very quick introduction to Python and in particular its scientific ecosystem, in case you have never seen it before. It also offers a chance to get to know the [IPython/Jupyter notebook](http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261). [See here for the official documentation](http://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Notebook%20Basics.ipynb) of the Jupyter notebook - much more information can be found online.\n",
"\n",
"\n",
"A lot of motivational writing on *Why Python?* is out there, so we will not repeat it here and just condense it to a single sentence: **Python is a good, easy-to-learn, open-source, general-purpose programming language that happens to be very well suited to many scientific tasks (thanks to its vast scientific ecosystem).**\n",
"\n",
"\n",
"#### Quick Reference on How to Use This Notebook\n",
"\n",
"\n",
"<img src=\"../images/notebook_toolbar.png\" style=\"width:70%\"></img>\n",
"\n",
"* `Shift + Enter`: Execute cell and jump to the next cell\n",
"* `Ctrl/Cmd + Enter`: Execute cell and don't jump to the next cell\n",
"\n",
"\n",
"#### Disclaimer\n",
"\n",
"These tutorials use Jupyter notebooks, but notebooks are only one way of using Python. Writing scripts as text files and executing them with the Python interpreter of course also works:\n",
"\n",
"```bash\n",
"$ python do_something.py\n",
"```\n",
"\n",
"Another alternative is interactive usage on the command line:\n",
"\n",
"```bash\n",
"$ ipython\n",
"```\n",
"\n",
"## Notebook Setup\n",
"\n",
"First things first: In many notebooks you will find a cell similar to the following one. **Always execute it!** It does a couple of things:\n",
"\n",
"* Makes plots appear directly in the notebook (otherwise a separate window pops up)\n",
"* Applies a style sheet and a bigger default figure size, so plots look quite a bit nicer (this is optional)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Plots now appear in the notebook.\n",
"%matplotlib inline \n",
"\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('ggplot') # Matplotlib style sheet - nicer plots!\n",
"plt.rcParams['figure.figsize'] = 12, 8 # Slightly bigger plots by default"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"## Useful Links\n",
"\n",
"Here is a collection of resources regarding the scientific Python ecosystem. They cover a number of different packages and topics; way more than we will manage today.\n",
"\n",
"If you have any question regarding some specific Python functionality, you can consult the official [Python documentation](http://docs.python.org/).\n",
" \n",
"Furthermore, a large number of Python tutorials, introductions, and books are available online. Here are some examples for those interested in learning more.\n",
" \n",
"* [Learn Python The Hard Way](http://learnpythonthehardway.org/book/)\n",
"* [Dive Into Python](http://www.diveintopython.net/)\n",
"* [The Official Python Tutorial](http://docs.python.org/2/tutorial/index.html)\n",
"* [Think Python Book](http://www.greenteapress.com/thinkpython/thinkpython.html)\n",
" \n",
"Some people might be used to Matlab - this helps:\n",
" \n",
"* [NumPy for Matlab Users Introduction](http://wiki.scipy.org/NumPy_for_Matlab_Users)\n",
"* [NumPy for Matlab Users Cheatsheet](http://mathesaurus.sourceforge.net/matlab-numpy.html)\n",
" \n",
" \n",
"Additionally there is an abundance of resources introducing and teaching parts of the scientific Python ecosystem.\n",
" \n",
"* [NumPy Tutorial](http://wiki.scipy.org/Tentative_NumPy_Tutorial)\n",
"* [Probabilistic Programming and Bayesian Methods for Hackers](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/): Great ebook introducing Bayesian methods from an understanding-first point of view with the examples done in Python.\n",
"* [Python Scientific Lecture Notes](http://scipy-lectures.github.io/): Introduces the basics of scientific Python with lots of examples.\n",
"* [Python for Signal Processing](http://python-for-signal-processing.blogspot.de/): Free blog which is the basis of a proper book written on the subject.\n",
"* [Another NumPy Tutorial](http://www.loria.fr/~rougier/teaching/numpy/numpy.html), [Matplotlib Tutorial](http://www.loria.fr/~rougier/teaching/matplotlib/matplotlib.html)\n",
" \n",
"You might eventually have a need to create some custom plots. The quickest way to success is usually to start from some example that is somewhat similar to what you want to achieve and just modify it. These websites are good starting points:\n",
" \n",
"* [Matplotlib Gallery](http://matplotlib.org/gallery.html)\n",
"* [ObsPy Gallery](http://docs.obspy.org/gallery.html)\n",
"* [Basemap Gallery](http://matplotlib.org/basemap/users/examples.html)\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Core Python Crash Course\n",
"\n",
"This course is fairly non-interactive and serves to get you up to speed with Python, assuming you have practical programming experience with at least one other language. Nonetheless, please change things and play around on your own - it is the only way to really learn it!\n",
"\n",
"The first part will introduce you to the core Python language. This tutorial uses Python 3 but almost all things can be transferred to Python 2. If possible choose Python 3 for your own work!\n",
"\n",
"\n",
"### 1. Numbers\n",
"\n",
"Python is dynamically typed and assigning something to a variable will give it that type."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Three basic types of numbers\n",
"a = 1 # Integers\n",
"b = 2.0 # Floating Point Numbers\n",
"c = 3.0 + 4j # Complex Numbers, note the use of j for the complex part\n",
"\n",
"\n",
"# Arithmetics work as expected.\n",
"# Upcasting from int -> float -> complex\n",
"d = a + b # (int + float = float)\n",
"print(d)\n",
"\n",
"e = c ** 2 # c to the second power, performs a complex multiplication\n",
"print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Strings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just enclose something in single or double quotes and it will become a string. In Python 3, strings are Unicode by default, so non-Latin alphabets and other symbols work out of the box."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# You can use single or double quotes to create strings.\n",
"location = \"New York\"\n",
"\n",
"# Concatenate strings with plus.\n",
"where_am_i = 'I am in ' + location\n",
"\n",
"# Print things with the print() function.\n",
"print(location, 1, 2)\n",
"print(where_am_i)\n",
"\n",
"# Strings have a lot of attached methods for common manipulations.\n",
"print(location.lower())\n",
"\n",
"# Access single items with square bracket. Negative indices are from the back.\n",
"print(location[0], location[-1])\n",
"\n",
"# Strings can also be sliced.\n",
"print(location[4:])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Exercise\n",
"\n",
"Save your name in all lower-case letters to a variable, and print a capitalized version of it. Protip: [Google for \"How to capitalize a string in python\"](http://www.google.com/search?q=how+to+capitalize+a+string+in+python). This works for almost any programming problem - someone will have had the same issue before!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Lists"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python has two main collection types: lists and dictionaries. The former is just an ordered collection of objects and is introduced here."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Lists use square brackets and are simple ordered collections of things.\n",
"everything = [a, b, c, 1, 2, 3, \"hello\"]\n",
"\n",
"# Access elements with the same slicing/indexing notation as strings.\n",
"# Note that Python indices are zero based!\n",
"print(everything[0])\n",
"print(everything[:3])\n",
"print(everything[2:-2])\n",
"\n",
"# Negative indices are counted from the back of the list.\n",
"print(everything[-3:])\n",
"\n",
"# Append things with the append method.\n",
"everything.append(\"you\")\n",
"print(everything)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4. Dictionaries\n",
"\n",
"The other main collection type in Python is the dictionary. Dictionaries are similar to associative arrays or (hash) maps in other languages. Each entry is a key-value pair."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Dictionaries have named fields and no inherent order. As is\n",
"# the case with lists, they can contain anything.\n",
"information = {\n",
" \"name\": \"Hans\",\n",
" \"surname\": \"Mustermann\",\n",
" \"age\": 78,\n",
" \"kids\": [1, 2, 3]\n",
"}\n",
"\n",
"# Access items by using the key in square brackets.\n",
"print(information[\"kids\"])\n",
"\n",
"# Add new things by just assigning to a key.\n",
"print(information)\n",
"information[\"music\"] = \"jazz\"\n",
"print(information)\n",
"\n",
"# Delete things by using the del operator\n",
"del information[\"age\"]\n",
"print(information)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### 5. Functions\n",
"\n",
"The key to conquering a big problem is to divide it into many smaller ones and tackle them one by one. This is usually achieved by using functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Functions are defined using the def keyword.\n",
"def do_stuff(a, b):\n",
" return a * b\n",
"\n",
"# And called with the arguments in round brackets.\n",
"print(do_stuff(2, 3))\n",
"\n",
"# Python functions can also have optional arguments.\n",
"def do_more_stuff(a, b, power=1):\n",
" return (a * b) ** power\n",
"\n",
"print(do_more_stuff(2, 3))\n",
"print(do_more_stuff(2, 3, power=3))\n",
"\n",
"# For more complex functions it is often a good idea to explicitly\n",
"# name the arguments. This is easier to read and less error-prone.\n",
"print(do_more_stuff(a=2, b=3, power=3))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6. Imports\n",
"\n",
"To use functions and objects that are not part of the default namespace, you have to import them. You will do this a lot, so it is necessary to learn how."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Import anything, and use it with the dot accessor.\n",
"import math\n",
"\n",
"a = math.cos(4 * math.pi)\n",
"\n",
"# You can also selectively import things.\n",
"from math import pi\n",
"\n",
"b = 3 * pi\n",
"\n",
"# And even rename them if you don't like their name.\n",
"from math import cos as cosine\n",
"c = cosine(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"How to know what is available?\n",
"\n",
"1. Read the [documentation](https://docs.python.org/3/library/math.html)\n",
"2. Interactively query the module"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"print(dir(math))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Typing the dot and then pressing the TAB key will kick off tab-completion."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"math."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the IPython framework you can also use a question mark to view the documentation of modules and functions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"math.cos?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7. Control Flow\n",
"\n",
"Loops and conditionals are needed for any non-trivial task. Please note that **whitespace matters in Python**. Everything that is indented at the same level is part of the same block. By far the most common loops in Python are for-each loops as shown in the following. While loops also exist but are rarely used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"temp = [\"a\", \"b\", \"c\"]\n",
"\n",
"# The typical Python loop is a for-each loop, e.g.\n",
"for item in temp:\n",
" # Everything with the same indentation is part of the loop.\n",
" new_item = item + \" \" + item\n",
" print(new_item)\n",
" \n",
"print(\"No more part of the loop.\") "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Useful to know is the range() function.\n",
"for i in range(5):\n",
" print(i)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The second crucial control flow structure is the if/else conditional, and it works the same as in any other language."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# If/else works as expected.\n",
"age = 77\n",
"\n",
"if age >= 0 and age < 10:\n",
" print(\"Younger than ten.\")\n",
"elif age >= 10:\n",
" print(\"Older than ten.\")\n",
"else:\n",
" print(\"wait what?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# List comprehensions are a nice way to write compact loops.\n",
"# Make sure you understand this as it is very common in Python.\n",
"\n",
"a = list(range(10))\n",
"print(a)\n",
"b = [i for i in a if not i % 2]\n",
"print(b)\n",
"\n",
"# Equivalent loop for b.\n",
"b = []\n",
"for i in a:\n",
" if not i % 2:\n",
" b.append(i)\n",
"print(b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 8. Error Messages\n",
"\n",
"You will eventually run into some error messages. Learn to read them! The last line is often the one that matters - reading upwards traces the error back in time and shows what calls led to it. If stuck: just google the error message!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def do_something(a, b): \n",
" print(a + b + something_else)\n",
" \n",
"do_something(1, 2) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The Scientific Python Ecosystem\n",
"\n",
"The [SciPy Stack](https://www.scipy.org/stackspec.html) forms the basis for essentially all applications of scientific Python. Here we will quickly introduce the three core libraries:\n",
"\n",
"* `NumPy`\n",
"* `SciPy`\n",
"* `Matplotlib`\n",
"\n",
"The SciPy stack furthermore contains `pandas` (library for data analysis on tabular and time series data) and `sympy` (package for symbolic math), both very powerful packages, but we will omit them in this tutorial."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 9. NumPy\n",
"\n",
"Large parts of the scientific Python ecosystem use NumPy, an array computation package offering N-dimensional, typed arrays and useful functions for linear algebra, Fourier transforms, random numbers, and other basic scientific tasks."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Create a large array with 1 million samples.\n",
"x = np.linspace(start=0, stop=100, num=int(1E6), dtype=np.float64)\n",
"\n",
"# Most operations work per-element.\n",
"y = x ** 2\n",
"\n",
"# Uses C and Fortran under the hood for speed.\n",
"print(y.sum())\n",
"\n",
"# FFT and inverse\n",
"x = np.random.random(100)\n",
"large_X = np.fft.fft(x)\n",
"x = np.fft.ifft(large_X)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 10. SciPy\n",
"\n",
"`SciPy`, in contrast to `NumPy` which only offers basic numerical routines, contains a lot of additional functionality needed for scientific work. Examples are solvers for basic differential equations, numeric integration and optimization, sparse matrices, interpolation routines, signal processing methods, and a lot of other things."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from scipy.interpolate import interp1d\n",
"\n",
"x = np.linspace(0, 10, num=11, endpoint=True)\n",
"y = np.cos(-x ** 2 / 9.0)\n",
"\n",
"# Cubic spline interpolation to new points.\n",
"f2 = interp1d(x, y, kind='cubic')(np.linspace(0, 10, num=101, endpoint=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 11. Matplotlib\n",
"\n",
"Plotting is done using `Matplotlib`, a package for creating high-quality static plots. It has an interface that mimics Matlab, which many people are familiar with."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"plt.plot(np.sin(np.linspace(0, 2 * np.pi, 2000)), color=\"green\",\n",
" label=\"Some Curve\")\n",
"plt.legend()\n",
"plt.ylim(-1.1, 1.1)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Exercises\n",
"\n",
"#### Functions, NumPy, and Matplotlib\n",
"\n",
"A. Write a function that takes a NumPy array `x` and `a`, `b`, and `c` and returns\n",
"\n",
"$$\n",
"f(x) = a x^2 + b x + c\n",
"$$\n",
"\n",
"B. Plot the result of that function with matplotlib."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 99 Bottles of Beer\n",
"\n",
"*(stolen from http://www.ling.gu.se/~lager/python_exercises.html)*\n",
"\n",
"\n",
"\"99 Bottles of Beer\" is a traditional song in the United States and Canada. It is popular to sing on long trips, as it has a very repetitive format which is easy to memorize, and can take a long time to sing. The song's simple lyrics are as follows:\n",
"\n",
"```\n",
"99 bottles of beer on the wall, 99 bottles of beer.\n",
"Take one down, pass it around, 98 bottles of beer on the wall.\n",
"```\n",
"\n",
"The same verse is repeated, each time with one fewer bottle. The song is completed when the singer or singers reach zero.\n",
"\n",
"Your task here is write a Python program capable of generating all the verses of the song.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Caesar Cipher\n",
"\n",
"*(stolen from http://www.ling.gu.se/~lager/python_exercises.html)*\n",
"\n",
"In cryptography, a Caesar cipher is a very simple encryption technique in which each letter in the plain text is replaced by a letter some fixed number of positions down the alphabet. For example, with a shift of 3, A would be replaced by D, B would become E, and so on. The method is named after Julius Caesar, who used it to communicate with his generals. ROT-13 (\"rotate by 13 places\") is a widely used example of a Caesar cipher where the shift is 13. In Python, the key for ROT-13 may be represented by means of the following dictionary:\n",
"\n",
"```python\n",
"key = {'a':'n', 'b':'o', 'c':'p', 'd':'q', 'e':'r', 'f':'s', 'g':'t', 'h':'u', \n",
" 'i':'v', 'j':'w', 'k':'x', 'l':'y', 'm':'z', 'n':'a', 'o':'b', 'p':'c', \n",
" 'q':'d', 'r':'e', 's':'f', 't':'g', 'u':'h', 'v':'i', 'w':'j', 'x':'k',\n",
" 'y':'l', 'z':'m', 'A':'N', 'B':'O', 'C':'P', 'D':'Q', 'E':'R', 'F':'S', \n",
" 'G':'T', 'H':'U', 'I':'V', 'J':'W', 'K':'X', 'L':'Y', 'M':'Z', 'N':'A', \n",
" 'O':'B', 'P':'C', 'Q':'D', 'R':'E', 'S':'F', 'T':'G', 'U':'H', 'V':'I', \n",
" 'W':'J', 'X':'K', 'Y':'L', 'Z':'M'}\n",
"```\n",
"\n",
"Your task in this exercise is to implement a decoder for ROT-13. Once you're done, you will be able to read the following secret message:\n",
"\n",
"```\n",
"Pnrfne pvcure? V zhpu cersre Pnrfne fnynq!\n",
"```\n",
"\n",
"**BONUS:** Write an encoder!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -0,0 +1,581 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Signal Processing</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Filtering Basics - Solution</div>\n",
" </div>\n",
" </div>\n",
"</div>\n",
"\n",
"Seismo-Live: http://seismo-live.org\n",
"\n",
"##### Authors:\n",
"* Stefanie Donner ([@stefdonner](https://github.com/stefdonner))\n",
"* Celine Hadziioannou ([@hadzii](https://github.com/hadzii))\n",
"* Ceri Nunn ([@cerinunn](https://github.com/cerinunn))\n",
"\n",
"\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cell 0 - Preparation: load packages, set some basic options \n",
"#%matplotlib inline\n",
"from obspy import *\n",
"from obspy.clients.fdsn import Client\n",
"import numpy as np\n",
"import matplotlib.pylab as plt\n",
"plt.style.use('ggplot')\n",
"plt.rcParams['figure.figsize'] = 15, 4\n",
"plt.rcParams['lines.linewidth'] = 0.5"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Basics in filtering"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## The Filter\n",
"\n",
"In seismology, filters are used to correct for the instrument response, avoid aliasing effects, separate 'wanted' from 'unwanted' frequencies, identify harmonic signals, model a specific recording instrument, and much more ...\n",
"\n",
"There is no clear classification of filters. Roughly speaking, we can distinguish linear vs. non-linear, analog (circuits, resistors, conductors) vs. digital (logical components), and continuous vs. discrete filters. In seismology, we generally avoid non-linear filters, because their output contains frequencies which are not in the input signal. Analog filters can be continuous or discrete, while digital filters are always discrete. Discrete filters can be subdivided into infinite impulse response (IIR) filters, which are recursive and causal, and finite impulse response (FIR) filters, which are non-recursive and causal or acausal. We will explain more about these types of filters below.\n",
"Some filters have a special name, such as Butterworth, Chebyshev or Bessel filters, but they can also be integrated in the classification described above."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Response Function\n",
"\n",
"A filter is characterised by its frequency response function $T(j\\omega)$, which relates the Fourier transform of the output signal $Y(j\\omega)$ to the Fourier transform of the input signal $X(j\\omega)$:\n",
"$$ Y(j\\omega) = T(j\\omega)X(j\\omega)$$\n",
"\n",
"For a simple lowpass filter it is given as:\n",
"$$ |T(j\\omega)| = \\sqrt{ \\frac{1}{1+(\\frac{\\omega}{\\omega_c})^{2n}} } $$\n",
"\n",
"with $\\omega$ indicating the frequency samples, $\\omega_c$ the corner frequency of the filter, and $n$ the order of the filter (also called the number of corners of the filter). For a lowpass filter, all frequencies lower than the corner frequency are allowed to pass the filter. This is the pass band of the filter. On the other hand, the range of frequencies above the corner frequency is called the stop band. In between lies the transition band, a small band of frequencies in which the passed amplitudes are gradually decreased to zero. The steepness of the slope of this transition band is defined by the order of the filter: the higher the order, the steeper the slope and the more effectively 'unwanted' frequencies are removed."
]
},
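{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Illustrative sketch (added for clarity; not part of the original exercise):\n",
"# evaluate the lowpass response |T| given above for two filter orders n,\n",
"# using the numpy/matplotlib imports from Cell 0.\n",
"w = np.linspace(0.001, 1.0, 500) # frequency samples\n",
"wc = 0.1 # corner frequency\n",
"T2 = np.sqrt(1.0 / (1.0 + (w / wc) ** (2 * 2))) # order n = 2\n",
"T8 = np.sqrt(1.0 / (1.0 + (w / wc) ** (2 * 8))) # order n = 8: steeper slope\n",
"plt.plot(w, T2, label='n = 2')\n",
"plt.plot(w, T8, label='n = 8')\n",
"plt.xlabel('frequency')\n",
"plt.ylabel('|T|')\n",
"plt.legend();"
]
},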
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### Time Domain vs. Frequency Domain\n",
"\n",
"In the time domain, filtering means convolving the data with the impulse response function of the filter. Doing this operation in the time domain is mathematically complex, computationally expensive, and slow. Therefore, the digital application of filters is almost always done in the frequency domain, where it simplifies to a much faster multiplication between the data and the filter response. The procedure is as follows: transfer the signal into the frequency domain via FFT, multiply it with the filter's frequency response function (i.e. the FFT of the impulse response function), and transfer the result back to the time domain. As a consequence, when filtering, we have to be aware of the characteristics and pitfalls of the Fourier transformation."
]
},
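{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Minimal sketch of the procedure described above (added for illustration;\n",
"# uses the numpy import from Cell 0): FFT the signal, multiply by the\n",
"# filter's frequency response, transform back to the time domain.\n",
"sig = np.random.random(1024) # an example input signal\n",
"spec = np.fft.rfft(sig) # 1. to the frequency domain\n",
"f = np.fft.rfftfreq(len(sig), d=1.0) # frequency axis (dt = 1 s assumed)\n",
"T = 1.0 / (1.0 + (f / 0.1) ** 8) # 2. a simple lowpass response\n",
"filtered = np.fft.irfft(spec * T, n=len(sig)) # 3. back to the time domain"
]
},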
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Filter types\n",
"\n",
"There are 4 main types of filters: a lowpass, a highpass, a bandpass, and a bandstop filter. Low- and highpass filters only have one corner frequency, allowing frequencies below and above this corner frequency to pass the filter, respectively. In contrast, bandpass and bandstop filters have two corner frequencies, defining a frequency band to pass and to stop, respectively.\n",
"Here, we want to see how exactly these filters act on the input signal. In Cell 1, the vertical component of the M$_w\\,$9.1 Tohoku earthquake, recorded at Bochum, Germany, is downloaded and pre-processed. In Cell 2, the four basic filters are applied to these data and plotted together with the filter functions and the resulting amplitude spectrum.\n",
"\n",
"1) Look at the figure and explain what the different filters do.\n",
"\n",
"2) Change the order of the filter (i.e. the number of corners). What happens and why?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"# Cell 1: prepare data from Tohoku-Oki earthquake. \n",
"client = Client(\"BGR\")\n",
"t1 = UTCDateTime(\"2011-03-11T05:00:00.000\")\n",
"st = client.get_waveforms(\"GR\", \"BUG\", \"\", \"LHZ\", t1, t1 + 6 * 60 * 60, \n",
" attach_response = True)\n",
"st.remove_response(output=\"VEL\")\n",
"st.detrend('linear')\n",
"st.detrend('demean')\n",
"st[0].plot();"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"# Cell 1b: print stats\n",
"print(st[0].stats)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 2 - filter types\n",
"npts = st[0].stats.npts # number of samples in the trace\n",
"dt = st[0].stats.delta # sample interval\n",
"fNy = 1. / (2. * dt) # Nyquist frequency \n",
"time = np.arange(0, npts) * dt # time axis for plotting\n",
"freq = np.linspace(0, fNy, npts // 2 + 1) # frequency axis for plotting\n",
"corners = 4 # order of filter\n",
"\n",
"# several filter frequencies for the different filter types\n",
"f0 = 0.04\n",
"fmin1 = 0.04\n",
"fmax1 = 0.07\n",
"fmin2 = 0.03\n",
"fmax2 = 0.07\n",
"\n",
"# filter functions\n",
"LP = 1 / ( 1 + (freq / f0) ** (2 * corners))\n",
"HP = 1 - 1 / (1 + (freq / f0) ** (2 * corners))\n",
"wc = fmax1 - fmin1\n",
"wb = 0.5 * wc + fmin1\n",
"BP = 1/(1 + ((freq - wb) / wc) ** (2 * corners))\n",
"wc = fmax2 - fmin2\n",
"wb = 0.5 * wc + fmin2\n",
"BS = 1 - ( 1 / (1 + ((freq - wb) / wc) ** (2 * corners)))"
]
},
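The analytic curves defined in Cell 2 are Butterworth-style *power* responses: at the corner frequency $f_0$ such a response always sits at half power (-3 dB) regardless of order, and a higher order only steepens the roll-off. A minimal NumPy sketch checking this (the value of `f0` and the orders are just the illustrative ones from Cell 2):

```python
import numpy as np

f0 = 0.04                              # corner frequency, as in Cell 2
freq = np.linspace(1e-4, 0.2, 2000)    # frequency axis

# Butterworth-style lowpass power response |H(f)|^2 = 1 / (1 + (f/f0)^(2n))
LP4 = 1.0 / (1.0 + (freq / f0) ** (2 * 4))   # order 4
LP8 = 1.0 / (1.0 + (freq / f0) ** (2 * 8))   # order 8

# At f = f0 the response is 1/2 for every order (the -3 dB point);
# beyond f0 the higher-order curve falls off faster.
i0 = np.argmin(np.abs(freq - f0))
half_power = LP4[i0]
```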
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cell 2b: filtered traces\n",
"stHP = st.copy()\n",
"stHP.filter('highpass', freq=f0, corners=corners, zerophase=True)\n",
"stLP = st.copy()\n",
"stLP.filter('lowpass', freq=f0, corners=corners, zerophase=True)\n",
"stBP = st.copy()\n",
"stBP.filter('bandpass', freqmin=fmin1, freqmax=fmax1, corners=corners, zerophase=True)\n",
"stBS = st.copy()\n",
"stBS.filter('bandstop', freqmin=fmin2, freqmax=fmax2, corners=corners, zerophase=True)\n",
"\n",
"# amplitude spectra\n",
"Ospec = np.fft.rfft(st[0].data)\n",
"LPspec = np.fft.rfft(stLP[0].data)\n",
"HPspec = np.fft.rfft(stHP[0].data)\n",
"BPspec = np.fft.rfft(stBP[0].data)\n",
"BSspec = np.fft.rfft(stBS[0].data)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"rise": {
"scroll": true
},
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 2c - plot\n",
"plt.rcParams['figure.figsize'] = 17, 17\n",
"tx1 = 3000\n",
"tx2 = 8000\n",
"fx2 = 0.12\n",
"\n",
"fig = plt.figure()\n",
"\n",
"ax1 = fig.add_subplot(5,3,1)\n",
"ax1.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(time, st[0].data, 'k')\n",
"plt.xlim(tx1, tx2)\n",
"plt.title('time-domain data')\n",
"plt.ylabel('original data \\n amplitude [m s$^{-1}$]')\n",
"\n",
"ax3 = fig.add_subplot(5,3,3)\n",
"plt.plot(freq, abs(Ospec), 'k')\n",
"plt.title('frequency-domain data \\n amplitude spectrum')\n",
"plt.ylabel('amplitude')\n",
"plt.xlim(0,fx2)\n",
"\n",
"ax4 = fig.add_subplot(5,3,4)\n",
"ax4.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(time, stLP[0].data, 'k')\n",
"plt.xlim(tx1, tx2)\n",
"plt.ylabel('LOWPASS \\n amplitude [m s$^{-1}$]')\n",
"\n",
"ax5 = fig.add_subplot(5,3,5)\n",
"plt.plot(freq, LP, 'k', linewidth=1.5)\n",
"plt.xlim(0,fx2)\n",
"plt.ylim(-0.1,1.1)\n",
"plt.title('filter function')\n",
"plt.ylabel('normalized amplitude')\n",
"\n",
"ax6 = fig.add_subplot(5,3,6)\n",
"plt.plot(freq, abs(LPspec), 'k')\n",
"plt.ylabel('amplitude ')\n",
"plt.xlim(0,fx2)\n",
"\n",
"ax7 = fig.add_subplot(5,3,7)\n",
"ax7.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(time, stHP[0].data, 'k')\n",
"plt.xlim(tx1, tx2)\n",
"plt.ylabel('HIGHPASS \\n amplitude [m s$^{-1}$]')\n",
"\n",
"ax8 = fig.add_subplot(5,3,8)\n",
"plt.plot(freq, HP, 'k', linewidth=1.5)\n",
"plt.xlim(0,fx2)\n",
"plt.ylim(-0.1,1.1)\n",
"plt.ylabel('normalized amplitude')\n",
"\n",
"ax9 = fig.add_subplot(5,3,9)\n",
"plt.plot(freq, abs(HPspec), 'k')\n",
"plt.ylabel('amplitude ')\n",
"plt.xlim(0,fx2)\n",
"\n",
"ax10 = fig.add_subplot(5,3,10)\n",
"ax10.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(time, stBP[0].data, 'k')\n",
"plt.xlim(tx1, tx2)\n",
"plt.ylabel('BANDPASS \\n amplitude [m s$^{-1}$]')\n",
"\n",
"ax11 = fig.add_subplot(5,3,11)\n",
"plt.plot(freq, BP, 'k', linewidth=1.5)\n",
"plt.xlim(0,fx2)\n",
"plt.ylim(-0.1,1.1)\n",
"plt.ylabel('normalized amplitude')\n",
"\n",
"ax12 = fig.add_subplot(5,3,12)\n",
"plt.plot(freq, abs(BPspec), 'k')\n",
"plt.ylabel('amplitude ')\n",
"plt.xlim(0,fx2)\n",
"\n",
"ax13 = fig.add_subplot(5,3,13)\n",
"ax13.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(time, stBS[0].data, 'k')\n",
"plt.xlim(tx1, tx2)\n",
"plt.xlabel('time [sec]')\n",
"plt.ylabel('BANDSTOP \\n amplitude [m s$^{-1}$]');\n",
"\n",
"ax14 = fig.add_subplot(5,3,14)\n",
"plt.plot(freq, BS, 'k', linewidth=1.5)\n",
"plt.xlim(0,fx2)\n",
"plt.ylim(-0.1,1.1)\n",
"plt.ylabel('normalized amplitude')\n",
"plt.xlabel('frequency [Hz]')\n",
"\n",
"ax15 = fig.add_subplot(5,3,15)\n",
"plt.plot(freq, abs(BSspec), 'k')\n",
"plt.xlabel('frequency [Hz]')\n",
"plt.ylabel('amplitude ')\n",
"plt.xlim(0,fx2)\n",
"\n",
"plt.subplots_adjust(wspace=0.3, hspace=0.4)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Causal vs. acausal\n",
"\n",
"Filters can be causal or acausal. The output of a causal filter depends only on past and present input, while the output of an acausal filter also depends on future input. Consequently, the impulse response of an acausal (zero-phase) filter is symmetric, while that of a causal filter is not. In this exercise, we want to see the effects of such filters on the signal. In Cell 3, the example seismogram of ObsPy is loaded and lowpass filtered several times with different filter orders $n$ and causality settings.\n",
"\n",
"3) Explain the effects of the different filters. You can also play with the order of the filter (variable $ncorners$).\n",
"\n",
"4) Zoom into a small window around the first onset (change the variables $start$, $end$ and $amp$). Which filter would you use for which purpose?"
]
},
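A minimal, self-contained illustration of this distinction (a hypothetical one-pole recursive smoother, not the Butterworth filters ObsPy applies in Cell 3): running the recursion once gives a causal output that starts only at the spike, while running it forward and then backward gives a zero-phase, acausal output with precursory energy before the spike.

```python
import numpy as np

def onepole(x, a=0.2):
    """Causal one-pole lowpass: y[i] = a*x[i] + (1-a)*y[i-1]."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        y[i] = a * x[i] + (1 - a) * (y[i - 1] if i else 0.0)
    return y

x = np.zeros(101)
x[50] = 1.0                        # a spike, standing in for a sharp onset

causal = onepole(x)                # single forward pass: causal
# forward pass, reverse, second pass, reverse back: zero phase / acausal
acausal = onepole(onepole(x)[::-1])[::-1]
```

The causal trace is exactly zero before sample 50, whereas the acausal trace leaks energy to earlier samples; this is why zero-phase filters can smear a first onset backwards in time.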
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 3 - filter effects\n",
"stf = read() # load example seismogram\n",
"tr = stf[0] # select the first trace in the Stream object\n",
"tr.detrend('demean') # preprocess data\n",
"tr.detrend('linear')\n",
"tr.filter(\"highpass\", freq=2) # removing long-period noise\n",
"print(tr)\n",
"t = tr.times() # time vector for x axis\n",
"\n",
"f = 15.0 # corner frequency for filters in Hz (initial: 10 Hz)\n",
"start = 4 # start time to plot in sec (initial: 4)\n",
"end = 8 # end time to plot in sec (initial: 8)\n",
"amp = 1500 # amplitude range for plotting (initial: 1500) \n",
"ncorners = 4 # number of corners/order of the filter (initial: 4)\n",
"\n",
"tr_filt = tr.copy() # causal filter / not zero phase. Order = 2\n",
"tr_filt.filter('lowpass', freq=f, zerophase=False, corners=2)\n",
"tr_filt2 = tr.copy() # causal filter / not zero phase. Order = set by ncorners\n",
"tr_filt2.filter('lowpass', freq=f, zerophase=False, corners=ncorners)\n",
"tr_filt3 = tr.copy() # acausal filter / zero phase. Order = set by ncorners\n",
"tr_filt3.filter('lowpass', freq=f, zerophase=True, corners=ncorners)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "skip"
}
},
"outputs": [],
"source": [
"# Cell 3b: plot - comment single lines to better see the remaining ones\n",
"plt.rcParams['figure.figsize'] = 15, 4\n",
"plt.plot(t, tr.data, 'k', label='original', linewidth=1.)\n",
"plt.plot(t, tr_filt.data, 'b', label='causal, n=2', linewidth=1.2)\n",
"plt.plot(t, tr_filt2.data, 'r', label='causal, n=%s' % ncorners, linewidth=1.2)\n",
"plt.plot(t, tr_filt3.data, 'g', label='acausal, n=%s' % ncorners, linewidth=1.2)\n",
"\n",
"plt.xlabel('time [s]')\n",
"plt.xlim(start, end) \n",
"plt.ylim(-amp, amp)\n",
"plt.ylabel('amplitude [arbitrary]')\n",
"plt.legend(loc='lower right')\n",
"\n",
"plt.show();"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Frequency ranges - bandpass filter\n",
"\n",
"We will now look at an event which took place in Kazakhstan on October 19, 1989 (the date requested in Cell 4). We want to see how filtering helps to extract information from a signal. The signal was recorded by a Chinese station and is bandpass filtered in several different frequency bands.\n",
"\n",
"5) What do you see in the different frequency bands?\n",
"\n",
"6) Play with the channel used for filtering in Cell 5. What do you not see? Can you guess what kind of event it is?"
]
},
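The idea of Cell 5 can be sketched with a toy signal: two tones at 0.1 Hz and 2 Hz are superimposed, and a crude FFT-mask bandpass (an illustration only, not ObsPy's Butterworth bandpass) pulls each one out of the mix.

```python
import numpy as np

fs = 20.0                                # Hz, similar to the BH* channels
t = np.arange(0, 60, 1 / fs)             # 60 s of data
sig = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)

def fft_bandpass(x, fmin, fmax, fs):
    """Zero all spectral amplitudes outside [fmin, fmax] and transform back."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    spec[(f < fmin) | (f > fmax)] = 0.0
    return np.fft.irfft(spec, n=len(x))

low = fft_bandpass(sig, 0.05, 0.5, fs)   # isolates the 0.1 Hz tone
high = fft_bandpass(sig, 1.0, 5.0, fs)   # isolates the 2.0 Hz tone
```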
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 4 - get + preprocess data\n",
"c = Client(\"IRIS\")\n",
"tmp1 = UTCDateTime(\"1989-10-19T09:50:00.0\")\n",
"tmp2 = UTCDateTime(\"1989-10-19T10:20:00.0\")\n",
"dat = c.get_waveforms(\"CD\", \"WMQ\", \"\", \"BH*\", tmp1, tmp2, attach_response = True)\n",
"dat.detrend('linear')\n",
"dat.detrend('demean')\n",
"dat.remove_response(output=\"VEL\")\n",
"dat.detrend('linear')\n",
"dat.detrend('demean')\n",
"print(dat)\n",
"dat.plot();"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 5 - filter data in different frequency ranges\n",
"\n",
"channel = 2\n",
"tm = dat[channel].times()\n",
"xmin = 0\n",
"xmax = 700\n",
"\n",
"dat1 = dat[channel].copy()\n",
"dat2 = dat[channel].copy()\n",
"dat2.filter(type=\"bandpass\", freqmin=0.01, freqmax=0.05)\n",
"dat3 = dat[channel].copy()\n",
"dat3.filter(type=\"bandpass\", freqmin=0.05, freqmax=0.1)\n",
"dat4 = dat[channel].copy()\n",
"dat4.filter(type=\"bandpass\", freqmin=0.1, freqmax=0.5)\n",
"dat5 = dat[channel].copy()\n",
"dat5.filter(type=\"bandpass\", freqmin=0.5, freqmax=1)\n",
"dat6 = dat[channel].copy()\n",
"dat6.filter(type=\"bandpass\", freqmin=1., freqmax=5.)\n",
"dat7 = dat[channel].copy()\n",
"dat7.filter(type=\"bandpass\", freqmin=5., freqmax=9.99)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 5b: plot\n",
"plt.rcParams['figure.figsize'] = 17, 21\n",
"fig = plt.figure()\n",
"ax1 = fig.add_subplot(7,1,1)\n",
"ax1.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat1.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('unfiltered')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"ax2 = fig.add_subplot(7,1,2)\n",
"ax2.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat2.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('0.01 - 0.05 Hz')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"ax3 = fig.add_subplot(7,1,3)\n",
"ax3.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat3.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('0.05 - 0.1 Hz')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"ax4 = fig.add_subplot(7,1,4)\n",
"ax4.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat4.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('0.1 - 0.5 Hz')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"ax5 = fig.add_subplot(7,1,5)\n",
"ax5.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat5.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('0.5 - 1.0 Hz')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"ax6 = fig.add_subplot(7,1,6)\n",
"ax6.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat6.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('1.0 - 5.0 Hz')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"ax7 = fig.add_subplot(7,1,7)\n",
"ax7.ticklabel_format(style='sci', axis='y', scilimits=(-1,1))\n",
"plt.plot(tm, dat7.data, 'k')\n",
"plt.xlim(xmin, xmax)\n",
"plt.title('5.0 - 10.0 Hz')\n",
"plt.xlabel('time [sec]')\n",
"plt.ylabel('amplitude \\n [m/s]')\n",
"\n",
"plt.subplots_adjust(hspace=0.3)\n",
"plt.show();"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
},
"rise": {
"scroll": true
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,554 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Signal Processing</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Fourier Transformation</div>\n",
" </div>\n",
" </div>\n",
"</div>\n",
"\n",
"Seismo-Live: http://seismo-live.org\n",
"\n",
"##### Authors:\n",
"* Stefanie Donner ([@stefdonner](https://github.com/stefdonner))\n",
"* Celine Hadziioannou ([@hadzii](https://github.com/hadzii))\n",
"* Ceri Nunn ([@cerinunn](https://github.com/cerinunn))\n",
"\n",
"Some code used in this tutorial is taken from [stackoverflow.com](http://stackoverflow.com/questions/4258106/how-to-calculate-a-fourier-series-in-numpy/27720302#27720302). We thank [Giulio Ghirardo](https://www.researchgate.net/profile/Giulio_Ghirardo) for his kind permission to use his code here.\n",
"\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cell 0 - Preparation: load packages, set some basic options \n",
"%matplotlib inline\n",
"from scipy import signal\n",
"from obspy.signal.invsim import cosine_taper \n",
"from matplotlib import rcParams\n",
"import numpy as np\n",
"import matplotlib.pylab as plt\n",
"plt.style.use('ggplot')\n",
"plt.rcParams['figure.figsize'] = 15, 3"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"<h1>Tutorial on Fourier transformation in 1D </h1>\n",
"<br>"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## The Fourier transformation\n",
"\n",
"In seismology, we use the *Fourier transformation* to transform a signal from the time domain into the frequency domain. That means we split up the signal and separate the content of each frequency. In doing so, we can analyse the energy content of our signal per frequency and extract how much amplitude each frequency contributes to the final signal. In other words: we get a recipe of the ingredients needed to blend our measured signal. "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"The *Fourier transformation* is based on the *Fourier series*. With the *Fourier series* we can approximate an (unknown) function $f(t)$ by another function $g_n(t)$ which consists of a sum over $N$ basis functions weighted by some coefficients. The basis functions need to be orthogonal. $\\sin$ and $\\cos$ functions are a natural choice because any signal can be decomposed into sinusoidal components. On the interval $[-T/2, T/2]$ the *Fourier series* is defined as:\n",
"\n",
"$$\n",
"f(t) \\approx g_n(t) = \\frac{1}{2} a_0 + \\sum_{k=1}^N \\left[ a_k \\cos \\left(\\frac{2\\pi k t}{T} \\right) + b_k \\sin\\left(\\frac{2\\pi k t}{T}\\right)\\right]\n",
"$$\n",
"\n",
"$$ \n",
"a_k = \\frac{2}{T} \\int_{-T/2}^{T/2} f(t) \\cos\\left(\\frac{2\\pi k t}{T}\\right)dt\n",
"$$\n",
"\n",
"$$\n",
"b_k = \\frac{2}{T} \\int_{-T/2}^{T/2} f(t) \\sin\\left(\\frac{2\\pi k t}{T}\\right)dt\n",
"$$"
]
},
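The coefficient integrals above can be checked numerically. A sketch using a simple rectangle-rule quadrature for $f(t) = \sin(2\pi t/T)$, for which only $b_1 = 1$ should survive:

```python
import numpy as np

T = 2.0
t, dt = np.linspace(-T / 2, T / 2, 4000, endpoint=False, retstep=True)
f = np.sin(2 * np.pi * t / T)

def a_coeff(k):
    # a_k = (2/T) * integral of f(t) * cos(2*pi*k*t/T) over one period
    return (2 / T) * np.sum(f * np.cos(2 * np.pi * k * t / T)) * dt

def b_coeff(k):
    # b_k = (2/T) * integral of f(t) * sin(2*pi*k*t/T) over one period
    return (2 / T) * np.sum(f * np.sin(2 * np.pi * k * t / T)) * dt

b1, a1, b2 = b_coeff(1), a_coeff(1), b_coeff(2)
```

For a smooth periodic integrand sampled over exactly one period, this rectangle rule is extremely accurate, so the orthogonality of the basis functions shows up directly in the numbers.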
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"At this stage, we consider continuous, periodic and infinite functions. The more basis functions are used to approximate the unknown function, the better is the approximation, i.e. the more similar the unknown function is to its approximation. \n",
"\n",
"For a non-periodic function the interval of periodicity tends to infinity. That means, the steps between neighbouring frequencies become smaller and smaller and thus the infinite sum of the *Fourier series* turns into an integral and we end up with the integral form of the *Fourier transformation*:\n",
"\n",
"$$\n",
"F(\\omega) = \\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty} f(t) e^{-i\\omega t} dt \\leftrightarrow f(t) = \\int_{-\\infty}^{\\infty} F(\\omega)e^{i\\omega t}d\\omega\n",
"$$\n",
"\n",
"Attention: sign and factor conventions can be different in the literature!"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"In seismology, we do not have continuous but discrete time signals. Therefore, we work with the discrete form of the *Fourier transformation*:\n",
"\n",
"$$\n",
"F_k = \\frac{1}{N} \\sum_{j=0}^{N-1} f_j e^{-2\\pi i k j /N} \\leftrightarrow f_k = \\sum_{j=0}^{N-1} F_j e^{2\\pi i k j /N}\n",
"$$\n",
"\n",
"Some intuitive gif animations on what the *Fourier transform* is doing, can be found [here](https://en.wikipedia.org/wiki/File:Fourier_series_and_transform.gif), [here](https://en.wikipedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif), and [here](https://en.wikipedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif).\n",
"Further and more detailed explanations on *Fourier series* and *Fourier transformations* can be found [here](https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/) and [here](http://www.fourier-series.com).\n",
"\n",
"---"
]
},
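The discrete forward transform above can be written out directly and compared against NumPy's FFT. Note that `np.fft.fft` follows the convention without the $1/N$ factor in the forward direction, so we divide by $N$ to match the formula in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)            # an arbitrary discrete signal
N = len(f)
j = np.arange(N)

# F_k = (1/N) * sum_j f_j * exp(-2*pi*i*k*j/N), evaluated term by term
F = np.array([np.sum(f * np.exp(-2j * np.pi * k * j / N)) / N
              for k in range(N)])

# numpy's forward FFT omits the 1/N factor, so rescale for comparison
F_np = np.fft.fft(f) / N
```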
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### The Fourier series and its coefficients \n",
"\n",
"In the following two code cells, we first define a function which calculates the coefficients of the Fourier series for a given function. The function in the next cell does it the other way round: it constructs a signal from given coefficients and weighting factors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
1
],
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 1: code by Giulio Ghirardo \n",
"def fourier_series_coeff(f, T, N):\n",
" \"\"\"Calculates the first 2*N+1 Fourier series coeff. of a periodic function.\n",
"\n",
" Given a periodic, function f(t) with period T, this function returns the\n",
" coefficients a0, {a1,a2,...},{b1,b2,...} such that:\n",
"\n",
" f(t) ~= a0/2+ sum_{k=1}^{N} ( a_k*cos(2*pi*k*t/T) + b_k*sin(2*pi*k*t/T) )\n",
" \n",
" Parameters\n",
" ----------\n",
"    f : array of samples of the periodic function, taken over one period T\n",
"    T : the period of the function f, so that f(0)==f(T)\n",
"    N : the function will return the first N + 1 Fourier coeff.\n",
"\n",
" Returns\n",
" -------\n",
" a0 : float\n",
" a,b : numpy float arrays describing respectively the cosine and sine coeff.\n",
" \"\"\"\n",
"    # By the Nyquist theorem, the sampling frequency of f must be at\n",
"    # least twice the maximum frequency we want to catch in the signal.\n",
"    # Normalize the FFT output by the number of samples of f.\n",
"    y = np.fft.rfft(f) / len(f)\n",
"    y *= 2\n",
"    return y[0].real, y[1:-1].real[0:N], -y[1:-1].imag[0:N]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
1
],
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 2: code by Giulio Ghirardo \n",
"def series_real_coeff(a0, a, b, t, T):\n",
" \"\"\"calculates the Fourier series with period T at times t,\n",
" from the real coeff. a0,a,b\"\"\"\n",
" tmp = np.ones_like(t) * a0 / 2.\n",
" for k, (ak, bk) in enumerate(zip(a, b)):\n",
" tmp += ak * np.cos(2 * np.pi * (k + 1) * t / T) + bk * np.sin(\n",
" 2 * np.pi * (k + 1) * t / T)\n",
" return tmp"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Now, we can create an arbitrary function, which we use to experiment with in the following example. \n",
"1) When you re-run cell 3 several times, do you always see the same function? Why? What does it tell you about the Fourier series? "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cell 3: create periodic, discrete, finite signal\n",
"\n",
"# number of samples (intial value: 3000)\n",
"samp = 3000\n",
"# sample rate (initial value: 1)\n",
"dt = 1\n",
"# period\n",
"T = 1.0 / dt\n",
"length = samp * dt\n",
"# number of coefficients (initial value: 100)\n",
"N = 100\n",
"# weighting factors for coefficients (selected randomly)\n",
"a0 = np.random.rand(1)\n",
"a = np.random.randint(1, high=11, size=N)\n",
"b = np.random.randint(1, high=11, size=N)\n",
"\n",
"t = np.linspace(0, length, samp) # time axis\n",
"sig = series_real_coeff(a0, a, b, t, T)\n",
"\n",
"# plotting\n",
"plt.plot(t, sig, 'r', label='arbitrary, periodic, discrete, finite signal')\n",
"plt.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))\n",
"plt.xlabel('time [sec]')\n",
"plt.ylabel('amplitude')\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"Now, we can play with the signal and see what happens when we try to reconstruct it with a limited number of coefficients. \n",
"2) Run the cells 4 and 5. What do you observe? \n",
"3) Increase the number of coefficients $n$ step by step and re-run cells 4 and 5. What do you observe now? Can you explain? \n",
"4) In cell 5 uncomment the lines to make a plot which is not normalized (and comment the other two) and re-run the cell. What do you see now and can you explain it?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cell 4: determine the first 'n' coefficients of the function using the code function of cell 1\n",
"T = 1 # period\n",
"n = 100 # number of coeffs to reconstruct\n",
"a0, a, b = fourier_series_coeff(sig, T, n)\n",
"a_ = a.astype(int)\n",
"b_ = b.astype(int)\n",
"print('coefficient a0 = ', int(a0))\n",
"print('array coefficients ak =', a_)\n",
"print('array coefficients bk =', b_)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 5: reconstruct the function using the code in cell 2\n",
"g = series_real_coeff(a0, a, b, t, dt)\n",
"\n",
"# plotting\n",
"#plt.plot(t, sig, 'r', label='original signal') # NOT normalized \n",
"#plt.plot(t, g, 'g', label='reconstructed signal')\n",
"plt.plot(t, sig/max(sig), 'r', label='original signal') # normalized \n",
"plt.plot(t, g/max(g), 'g', label='reconstructed signal')\n",
"\n",
"plt.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))\n",
"plt.xlabel('time [sec]')\n",
"plt.ylabel('amplitude')\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Fourier series, convergence and the Gibbs phenomenon\n",
"\n",
"As seen above, the convergence of the *Fourier series* can be tricky because we work with signals of finite length. To analyse this effect in a bit more detail, we define a square wave in cell 6 and try to reconstruct it in cell 7. \n",
"5) First, we use only 5 coefficients to reconstruct the wave. Describe what you see. \n",
"6) Increase the number of coefficients $n$ in cell 7 step by step and re-run the cell. What do you see now? Can you explain it?"
]
},
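The overshoot can be quantified before running the cells: near a jump, the partial sums of a unit square wave overshoot by roughly 9% of the jump height (about 0.18 for a wave oscillating between -1 and 1), and adding more terms narrows the overshoot without removing it. A sketch using the same $4/(\pi k)$ odd-harmonic coefficients as cell 7:

```python
import numpy as np

t = np.linspace(0, 1, 20001)

def square_partial_sum(n_terms, freq=5.0):
    """Partial Fourier sum of a unit square wave (odd harmonics only)."""
    g = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):            # k = 1, 3, 5, ...
        g += (4 / (np.pi * k)) * np.sin(2 * np.pi * freq * k * t)
    return g

overshoot_10 = square_partial_sum(10).max() - 1.0    # 10 terms
overshoot_100 = square_partial_sum(100).max() - 1.0  # 100 terms
```

Both overshoots stay near 0.18 even though the 100-term sum matches the square wave far better everywhere else: the Gibbs overshoot does not decay with more terms.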
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 6: define a square wave of 5 Hz\n",
"freq = 5.\n",
"npts = 3000\n",
"dt_ = 0.002\n",
"length = npts * dt_\n",
"t_ = np.linspace(0, length, npts, endpoint=False)\n",
"square = signal.square(2 * np.pi * freq * t_)\n",
"\n",
"plt.plot(t_, square)\n",
"plt.xlabel('time [sec]')\n",
"plt.ylabel('amplitude')\n",
"plt.xlim(0, 1.05)\n",
"plt.ylim(-1.2, 1.2)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"##### You may replace the square function by something else. What about a sawtooth function?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"signal.sawtooth?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 7: reconstruct signal using convergence criterion\n",
"n = 5 # number of coefficients (initial: 5)\n",
"T_ = 1/freq # period of signal\n",
"\n",
"# determine coefficients\n",
"a0 = 0\n",
"a = []\n",
"b = []\n",
"for i in range(1,n):\n",
" if (i%2 != 0):\n",
" a_ = 4/(np.pi*i)\n",
" else:\n",
" a_ = 0\n",
" a.append(a_)\n",
" b_ = (2*np.pi*i)/T_\n",
" b.append(b_)\n",
"\n",
"# reconstruct signal\n",
"g = np.ones_like(t_) * a0\n",
"for k, (ak, bk) in enumerate(zip(a, b)):\n",
" g += ak * np.sin(bk*t_)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 7b: plotting\n",
"plt.plot(t_, square, 'r', label='analytical signal') \n",
"plt.plot(t_, g, 'g', label='Fourier series approximation')\n",
"plt.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))\n",
"plt.xlabel('time [sec]')\n",
"plt.ylabel('amplitude')\n",
"#plt.ylim(0.9,1.1)\n",
"plt.legend()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Fourier transformation\n",
"\n",
"Let us now do the Fourier transformation of the square wave created in cell 6 and have a look at the amplitude spectrum. In practice, the transformation is computed as a fast Fourier transformation (FFT). \n",
"\n",
"7) Why do we need to taper the signal before we perform the FFT? \n",
"8) How do you interpret the plot of the amplitude spectra? \n",
"9) Which frequency contributes most to the final signal? "
]
},
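Question 7 can be illustrated with a pure-NumPy sketch (using NumPy's Hann window as a stand-in for ObsPy's `cosine_taper`): a sinusoid that does not complete an integer number of cycles in the window leaks energy across the whole spectrum, and tapering suppresses that leakage away from the true peak.

```python
import numpy as np

n = 1000
t = np.arange(n)
sig = np.sin(2 * np.pi * 10.5 * t / n)    # 10.5 cycles: worst-case leakage

spec_raw = np.abs(np.fft.rfft(sig))                   # no taper
spec_tap = np.abs(np.fft.rfft(sig * np.hanning(n)))   # Hann-tapered

def far_field_ratio(spec, peak_bin=10, skip=5):
    """Largest amplitude well away from the peak, relative to the peak."""
    far = np.concatenate([spec[:peak_bin - skip], spec[peak_bin + 1 + skip:]])
    return far.max() / spec.max()

leak_raw = far_field_ratio(spec_raw)
leak_tap = far_field_ratio(spec_tap)
```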
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Cell 8: FFT of signal\n",
"# number of sample points need to be the same as in cell 3\n",
"print('samp =',samp,' Need to be the same as in cell 3.')\n",
"# number of sample points need to be the same as in cell 3\n",
"print('T =',T,' Need to be the same as in cell 3.')\n",
"# percentage of taper applied to signal (initial: 0.1)\n",
"taper_percentage = 0.1\n",
"taper = cosine_taper(samp,taper_percentage)\n",
"\n",
"sig_ = square * taper\n",
"Fsig = np.fft.rfft(sig_, n=samp)\n",
"\n",
"# prepare plotting\n",
"xf = np.linspace(0.0, 1.0/(2.0*T), (samp//2)+1)\n",
"rcParams[\"figure.subplot.hspace\"] = (0.8)\n",
"rcParams[\"figure.figsize\"] = (15, 9)\n",
"rcParams[\"axes.labelsize\"] = (15)\n",
"rcParams[\"axes.titlesize\"] = (20)\n",
"rcParams[\"font.size\"] = (12)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": false,
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"#Cell 8b: plotting\n",
"plt.subplot(311)\n",
"plt.title('Time Domain')\n",
"plt.plot(t, square, linewidth=1)\n",
"plt.xlabel('Time [s]')\n",
"plt.ylabel('Amplitude')\n",
"\n",
"plt.subplot(312)\n",
"plt.title('Frequency Domain')\n",
"plt.plot(xf, 2.0/npts * np.abs(Fsig))\n",
"#plt.xlim(0, 0.04) \n",
"plt.xlabel('Frequency [Hz]')\n",
"plt.ylabel('Amplitude')\n",
"plt.show()"
]
}
],
"metadata": {
"anaconda-cloud": {},
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,557 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Applied Seismology</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Lab: Instrument Response</div>\n",
" </div>\n",
" </div>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Seismo-Live: http://seismo-live.org\n",
"\n",
"##### Authors:\n",
"* Carl Tape ([@carltape](https://github.com/carltape))\n",
"* Lion Krischer ([@krischer](https://github.com/krischer))\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"based on *GEOS 626: Applied Seismology* by Carl Tape\n",
"\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This cell does two things: \n",
"# (1) make plots appear in the notebook and (2) creates prettier plots.\n",
"# Make sure to execute it at least once!\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('ggplot')\n",
"plt.rcParams['figure.figsize'] = 10, 5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### L1 Instructions\n",
"\n",
"This is a Python and ObsPy version of the GEOS 626 lab mentioned above. We will perform the same calculations and create similar plots, but we will be using different services, data formats, and tools.\n",
"\n",
"It aims to teach students how to calculate, visualize, and use instrument responses."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### L2 Computing and plotting the instrument response, $I(\\omega)$ "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Acquiring Data\n",
"\n",
"The first step when working with instrument data is to acquire it. Nowadays this is mostly done using the *station* web service defined by the [FDSN](http://www.fdsn.org/) (International Federation of Digital Seismograph Networks). Once you know how to use this service, you can download data from data centers across the globe, including IRIS, ORFEUS, ETH, RESIF, GEONET, and many more. Notable exceptions are most Asian countries and countries that don't freely share their data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With ObsPy this works by first importing the FDSN `Client` class and then initializing it with the data center you want to download from. For this lab we will use data from the Geoscope project, which can be downloaded from IRIS (its \"home\" data center is the IPGP in Paris which you can also use - just replace `\"IRIS\"` with `\"IPGP\"` in the following code box - but most of you will be more familiar with IRIS).\n",
"\n",
"See the [documentation of obspy.clients.fdsn](http://docs.obspy.org/packages/obspy.clients.fdsn.html) for more details."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from obspy.clients.fdsn import Client\n",
"# ObsPy knows which website to connect to for many data centers.\n",
"# For others you can also pass the full URL.\n",
"c = Client(\"IRIS\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Station information is time dependent due to, for example, changed sensors, new calibrations, or other modifications. It is thus important to specify the time frame for which to request response information.\n",
"\n",
"ObsPy handles absolute times with the `UTCDateTime` class. See the [documentation](http://docs.obspy.org/packages/autogen/obspy.core.utcdatetime.UTCDateTime.html) and the [tutorial](http://docs.obspy.org/tutorial/code_snippets/utc_date_time.html) for more information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import obspy\n",
"# Create two times which we will later use to download time series data.\n",
"starttime = obspy.UTCDateTime(2004, 12, 26, 0, 58, 50)\n",
"# Create a new time object by adding 5 days to the previous ones. Time\n",
"# differences are in seconds.\n",
"endtime = starttime + 86400 * 5"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the `CAN` station, which is located in Canberra, Australia, and featured in [*Park et al.* (2005, Figure 1)](http://dx.doi.org/10.1126/science.1112305). Please note that in many cases the station code alone is not unique and in practice the network code is also required. See the ObsPy tutorials for more details about these concepts. The `get_stations()` method of the FDSN `Client` object returns station/instrument information. Once again, see its [documentation](http://docs.obspy.org/packages/autogen/obspy.clients.fdsn.client.Client.get_stations.html) for more information."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This will get all stations that satisfy all constraints simultaneously.\n",
"inv = c.get_stations(network=\"G\", station=\"CAN\", channel=\"LHZ\", \n",
" level=\"response\", \n",
" starttime=starttime, endtime=endtime)\n",
"print(inv)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `station` services return `StationXML` files, which are the most modern and future-proof data format for station and instrument information. **If at all possible: always try to use StationXML files!**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Plotting the Instrument Response\n",
"\n",
"We can now plot the instrument response with the `plot_response()` method. The `min_freq` argument determines the lowest plotted frequency - the maximum plotted frequency is always the Nyquist frequency."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inv.plot_response(min_freq=0.001);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These are [Bode plots](https://en.wikipedia.org/wiki/Bode_plot) and show the amplitude $A(\\omega)$ and the phase $\\phi(\\omega)$ response where\n",
"\n",
"$$\n",
"I(\\omega) = A(\\omega) e^{i \\phi(\\omega)}\n",
"$$\n",
"\n",
"Note that the formulas use $\\omega$ but the plots show the frequency $f$ with $\\omega = 2 \\pi f$. The first variant usually simplifies notation whereas the latter one facilitates physical interpretation.\n",
"\n",
"This plot does not just show the instrument response - it shows the response of the whole recording chain from observed ground motion to actually stored data. This includes the analog response of the actual seismometer, the analog-to-digital converter (ADC), and possible digital filter stages."
]
},
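{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a single complex response value, amplitude and phase follow directly from NumPy (a minimal numeric illustration; the value 3 + 4i is arbitrary):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"I = 3.0 + 4.0j       # an arbitrary complex response value\n",
"A = np.abs(I)        # amplitude response: 5.0\n",
"phi = np.angle(I)    # phase response in radians\n",
"print(A, phi)\n",
"```"
]
},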
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sampling rate of this channel is 1 sample per second ($\\Delta t$ = 1 s), so the **Nyquist frequency**, which is defined as\n",
"\n",
"$$\n",
"f_{Nyq} = 1 / (2 \\Delta t),\n",
"$$\n",
"\n",
"is $f_{Nyq}$ = 0.5 Hz. The above plot does not show frequencies higher than the Nyquist frequency (the vertical dashed line) as these frequencies do not exist in the data and thus have no physical meaning."
]
},
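{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick numeric check of the formula (plain Python, no ObsPy required):\n",
"\n",
"```python\n",
"delta = 1.0                 # sampling interval in seconds\n",
"f_nyq = 1.0 / (2 * delta)   # Nyquist frequency\n",
"print(f_nyq)  # 0.5 Hz\n",
"```"
]
},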
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Have a look at the details of the response information of this channel:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inv[0][0][0].response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we see that it consists of multiple stages that convert from one unit to another. The final instrument response is the multiplication (in the frequency domain) of all of these.\n",
"\n",
"**QUESTION:** What are the input and output units of the whole instrument? What does that mean physically?\n",
"\n",
"**QUESTION:** How does one conceptually remove the instrument response from recorded data?\n",
"\n",
"##### Incomplete Responses\n",
"\n",
"You can also plot the response of only a subset of the stages. The sensitivity plotted by ObsPy 1.0.1 is the overall sensitivity of all stages, not only the chosen ones - thus the plot looks a bit strange. This will be rectified [soon](https://github.com/obspy/obspy/issues/1368)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inv[0][0][0].plot(0.001, start_stage=1, end_stage=1);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the major differences (except for a constant factor) occur close to the Nyquist frequency - these are due to the decimation and FIR filter stages, which are not plotted here. In many cases it is sufficient to use only the poles and zeros information, but there are stations where this is not true. As it is tedious to check manually every time: **there is no reason not to use the full-chain instrument response if it is available!**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Output Units"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Modern seismometers usually record the velocity of ground motion; strong-motion sensors record acceleration.\n",
"\n",
"**QUESTION:** Why is that?\n",
"\n",
"The instrument response describes how incoming data (in whatever quantity the instrument measures) is transformed into what is finally stored in a file. Restoring the original ground motion requires inverting that process. Deconvolving the instrument response as written, for example, in a StationXML file will convert data back to its original units.\n",
"\n",
"If your analysis requires other units, it is best to perform the integration or differentiation during the instrument deconvolution. That deconvolution is usually performed in the frequency domain - and once there, integration/differentiation is essentially free and as accurate as it gets."
]
},
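{
"cell_type": "markdown",
"metadata": {},
"source": [
"The claim that differentiation is cheap and accurate in the frequency domain can be checked on a synthetic signal: multiplying the spectrum by $i\\omega$ differentiates the time series. This is a self-contained NumPy sketch with made-up numbers, not ObsPy's implementation:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"n, dt, f0 = 1000, 0.01, 5.0    # an exact number of sine periods\n",
"t = np.arange(n) * dt\n",
"x = np.sin(2 * np.pi * f0 * t)\n",
"\n",
"# differentiate by multiplying the spectrum with i*omega\n",
"X = np.fft.rfft(x)\n",
"w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)\n",
"dx = np.fft.irfft(1j * w * X, n)\n",
"\n",
"# compare with the analytic derivative\n",
"analytic = 2 * np.pi * f0 * np.cos(2 * np.pi * f0 * t)\n",
"print(np.allclose(dx, analytic, atol=1e-6))  # True\n",
"```"
]
},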
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following plots show the instrument response with respect to displacement $I_d(\\omega)$, velocity $I_v(\\omega)$, and acceleration $I_a(\\omega)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Displacement\")\n",
"inv.plot_response(0.001, output=\"DISP\")\n",
"print(\"Velocity\")\n",
"inv.plot_response(0.001, output=\"VEL\")\n",
"print(\"Acceleration\")\n",
"inv.plot_response(0.001, output=\"ACC\");"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Basic properties of the Fourier transform result in \n",
"\n",
"$$\n",
"\\begin{align}\n",
"X_v(\\omega) &= (i \\omega) X_d(\\omega) \\\\\n",
"X_a(\\omega) &= (i \\omega) X_v(\\omega)\n",
"\\end{align}\n",
"$$\n",
"\n",
"where $X_d$, $X_v$, and $X_a$ are the Fourier transforms of displacement $x_d(t)$, velocity $x_v(t)$, and acceleration $x_a(t)$.\n",
"\n",
"We also have the relationship between ground motion, instrument response, and the output from the seismometer:\n",
"\n",
"$$\n",
"\\begin{align}\n",
"x_a(t) * i_a(t) &= c(t) \\\\\n",
"X_a(\\omega) I_a (\\omega) &= C(\\omega) \n",
"\\end{align}\n",
"$$\n",
"\n",
"Here we have assumed that the instrument response is described with respect to acceleration. But we could alternatively consider the instrument response with respect to velocity or displacement; **the key point is that what comes out of the seismometer,** $C(\\omega)$**, is fixed.** But we can describe the input ground motion as displacement, velocity, or acceleration. Showing all three together and omitting the explicit $\\omega$ dependence, we have\n",
"\n",
"$$\n",
"\\begin{align}\n",
"C &= X_a I_a = X_v I_v = X_d I_d\\\\\n",
" &= (i\\omega) X_v I_a = (i \\omega) X_d I_v = X_d I_d \\\\\n",
" &= (i\\omega)^2 X_d I_a = (i \\omega) X_d I_v = X_d I_d \\\\\n",
"I_v &= I_d / (i\\omega)\\\\\n",
"I_a &= I_v / (i\\omega)\n",
"\\end{align}\n",
"$$\n",
"\n",
"It turns out that differentiation in the time domain leads to an *increase of one* in the slope of the amplitude spectrum, $|H(\\omega)|$, in log-log space - for example, when changing from $X_d(\\omega)$ to $X_v(\\omega)$. But when we look at the *instrument response*, the slope will *decrease by one* when changing from, say, $I_d(\\omega)$ to $I_v(\\omega)$.\n",
"\n",
"We see this in the above figure. Consider the flat segment in the velocity response: the displacement response shows a corresponding slope increase, whereas the acceleration response shows a slope decrease.\n",
"\n",
"**QUESTION:** What is meant by a \"broadband\" seismometer?"
]
},
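{
"cell_type": "markdown",
"metadata": {},
"source": [
"The slope statement is easy to verify numerically: multiplying a flat spectrum by $i\\omega$ (i.e. going from $X_d$ to $X_v$) raises the log-log slope from 0 to 1. A quick check with arbitrary numbers:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"w = np.logspace(-2, 2, 50)       # frequency axis\n",
"X_d = np.ones_like(w)            # flat displacement spectrum, slope 0\n",
"X_v = np.abs(1j * w) * X_d       # |i w| = w, so slope +1 in log-log\n",
"slope = np.polyfit(np.log10(w), np.log10(X_v), 1)[0]\n",
"print(round(slope, 6))  # 1.0\n",
"```"
]
},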
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### L3 Deconvolve the instrument response from a seismogram"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will download some data and remove its instrument response.\n",
"\n",
"The first step is to get some data - we will once again use the FDSN web services but this time also to download waveform data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"otime = obspy.UTCDateTime(2004, 12, 26, 0, 58, 50)\n",
"duration = 5 * 86400\n",
"starttime = otime\n",
"endtime = starttime + duration\n",
"\n",
"st = c.get_waveforms(network=\"G\", station=\"CAN\", location=\"\", channel=\"LH*\",\n",
" starttime=starttime, endtime=endtime)\n",
"\n",
"# Do it once again to also get the response information for all three channels.\n",
"inv = c.get_stations(network=\"G\", station=\"CAN\", location=\"\", channel=\"LH*\",\n",
" starttime=starttime, endtime=endtime, level=\"response\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now show what the main arrival looks like, bandpass filtered between 50 and 500 s period, with and without the instrument deconvolution."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot the first hour - slice() will leave the original \n",
"# data stream intact and just returns a new view on the\n",
"# data.\n",
"st.slice(endtime=otime + 3600).copy().taper(0.05).filter(\n",
" \"bandpass\", freqmin=1.0 / 500.0, freqmax=1.0 / 50.0).plot()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now plot the same but with the instrument removed.\n",
"# Note the copy() here as remove_response() would otherwise\n",
"# operate in place and modify the original data stream.\n",
"st.slice(endtime=otime + 3600).copy().remove_response(\n",
" inventory=inv, output=\"VEL\").taper(0.05).filter(\n",
" \"bandpass\", freqmin=1.0 / 500.0, freqmax=1.0 / 50.0).plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What are the differences? What is their cause?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will now plot the amplitude spectrum over 0.2 - 1.0 mHz to show the gravest normal mode peaks. We will only use the vertical component to simplify things.\n",
"\n",
"We will first do it without instrument correction:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"tr = st.select(component=\"Z\")[0].copy().taper(0.05)\n",
"\n",
"# rfft() is a variant of the FFT for real-valued input.\n",
"# The negative frequencies of a DFT for purely real valued\n",
"# input are just the complex conjugates of the corresponding\n",
"# positive frequencies and thus redundant.\n",
"D = np.fft.rfft(tr.data)\n",
"# Return the corresponding frequencies.\n",
"freqs = np.fft.rfftfreq(tr.stats.npts, d=tr.stats.delta)\n",
"\n",
"plt.plot(freqs, np.abs(D))\n",
"plt.xlim(0.2E-3, 1.0E-3)\n",
"plt.ylim(0, 1E7)\n",
"plt.xlabel(\"Frequency [Hz]\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instrument removal is, in practice, a tricky and unstable operation. Have a look at the [documentation of the remove_response() method](http://docs.obspy.org/packages/autogen/obspy.core.trace.Trace.remove_response.html) and try to understand the meaning of the various parameters and when you would want to use each."
]
},
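{
"cell_type": "markdown",
"metadata": {},
"source": [
"One of those parameters, `water_level`, stabilizes the spectral division by clipping very small response values. The idea can be sketched in a few lines of NumPy (a simplified illustration of the concept, not ObsPy's actual implementation):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def water_level_divide(data_spec, resp_spec, water_level_db=60.0):\n",
"    # clip |response| values below the water level, keeping their phase,\n",
"    # so the division cannot blow up where the response is tiny\n",
"    resp = resp_spec.astype(complex).copy()\n",
"    wl = np.abs(resp).max() * 10.0 ** (-water_level_db / 20.0)\n",
"    small = np.abs(resp) < wl\n",
"    resp[small] = wl * np.exp(1j * np.angle(resp[small]))\n",
"    return data_spec / resp\n",
"\n",
"resp = np.array([1.0 + 0.0j, 1e-9 + 0.0j])\n",
"data = np.array([1.0 + 0.0j, 1.0 + 0.0j])\n",
"# the tiny response value is clipped: the result is 1000, not 1e9\n",
"print(water_level_divide(data, resp))\n",
"```"
]
},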
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Remember the copy()!\n",
"tr = st.select(component=\"Z\").copy().remove_response(inventory=inv, \n",
" output=\"DISP\")[0]\n",
"\n",
"D = np.fft.rfft(tr.data)\n",
"freqs = np.fft.rfftfreq(tr.stats.npts, d=tr.stats.delta)\n",
"\n",
"plt.plot(freqs, np.abs(D))\n",
"plt.xlim(0.2E-3, 1.0E-3)\n",
"plt.ylim(0, 5.0)\n",
"plt.xlabel(\"Frequency [Hz]\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Are there any differences from the previous version? If yes - what are they? Keep in mind that the previous plot is based on velocity data, whereas this one has been corrected to displacement."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are interested in the spectrum of the data, so we don't actually need instrument-corrected time series to compute it - we can just calculate the spectrum of the raw data and divide by the response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tr = st.select(component=\"Z\")[0].copy().taper(0.05)\n",
"\n",
"D = np.fft.rfft(tr.data)\n",
"freqs = np.fft.rfftfreq(tr.stats.npts, d=tr.stats.delta)\n",
"# Advanced ObsPy-foo to directly get the complex response.\n",
"freq_response, _ = \\\n",
" inv.select(channel=\"LHZ\")[0][0][0].response.get_evalresp_response(\n",
" tr.stats.delta, tr.stats.npts, output=\"DISP\")\n",
"\n",
"plt.plot(freqs, np.abs(D)/ np.abs(freq_response))\n",
"plt.xlim(0.2E-3, 1.0E-3)\n",
"plt.ylim(0, 35.0)\n",
"plt.xlabel(\"Frequency [Hz]\")\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that this is quite different from the previous spectrum. **Why?** Keep in mind that we are operating far from the passband of the instrument. The following two plots as well as the\n",
"[documentation of the remove_response() method](http://docs.obspy.org/packages/autogen/obspy.core.trace.Trace.remove_response.html) should help you answer that question. Which alternative yields the more correct answer?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"inv.select(channel=\"LHZ\").plot_response(min_freq=0.0001);"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tr = st.select(component=\"Z\").copy().remove_response(\n",
" inventory=inv, output=\"DISP\", plot=True)[0]"
]
}
],
"metadata": {
"interpreter": {
"hash": "b40d316b6eb93f8d3cd4b30bf241a4edb2b8ca4ea8a7206ab7ccdcc909e9c051"
},
"jupytext": {
"encoding": "# -*- coding: utf-8 -*-"
},
"kernelspec": {
"display_name": "Python 3.6.13 ('obspy')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,366 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Cross correlation exercise\n",
"\n",
"This exercise focuses on the use of cross correlation of similar waveforms. The first example uses the cross correlation to enhance the pick of a seismic onset.\n",
"\n",
"In signal processing, cross-correlation is a measure of the similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner product. It is commonly used for searching a long signal for a shorter, known feature. In seismology, cross-correlation is used to compare seismograms from the same event recorded at different stations. It can give information about differences due to longer travel distances or different wave paths. \n",
"\n",
"For seismograms recorded at two stations (1, 2), the cross-correlation is defined as\n",
"\n",
"$$\\begin{equation} f_{21}(t)=\\int_{- \\infty}^\\infty f_2(\\tau)f_1(\\tau + t)d\\tau \\end{equation} $$\n",
"\n",
"where $t$ is the displacement, also known as the lag."
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"In the frequency domain the equation is\n",
"\n",
"\\begin{equation}\n",
" F_{21}(\\omega)=F_2(\\omega)F_1^\\ast (\\omega) \\quad.\n",
"\\end{equation}\n",
"\n",
"$F_1^\\ast $ denotes the complex conjugate of $F_1$. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy. Furthermore, the definition of correlation usually includes a normalizing factor such that correlation values lie between -1 and +1.\n",
"\n",
"As an example, consider two real valued functions $f_1$ and $f_2$ differing only by an unknown shift along the x-axis. One can use the cross-correlation to find how much $f_2$ must be shifted along the x-axis to make it identical to $f_1$. The formula essentially slides the $f_2$ function along the x-axis, calculating the integral of their product at each position. When the functions match, the value of $f_{21}(t)$ is maximized."
]
},
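{
"cell_type": "markdown",
"metadata": {},
"source": [
"The equivalence of the time- and frequency-domain formulations can be checked with NumPy on random data (a circular correlation; note that the sign of the lag axis depends on the transform convention used):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(0)\n",
"f1 = rng.standard_normal(64)\n",
"f2 = rng.standard_normal(64)\n",
"\n",
"# frequency domain: multiply one spectrum by the conjugate of the other\n",
"cc_freq = np.real(np.fft.ifft(np.fft.fft(f2) * np.conj(np.fft.fft(f1))))\n",
"\n",
"# time domain: circular sliding dot product\n",
"cc_time = np.array([np.dot(f2, np.roll(f1, t)) for t in range(64)])\n",
"\n",
"print(np.allclose(cc_freq, cc_time))  # True\n",
"```"
]
},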
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"We will use this to detect the nuclear weapons tests of the DPRK (North Korea) in the recordings of station BUG. The largest event, from Sep. 3, 2017, is used as the template event."
]
},
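{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before working with real recordings, the template-matching idea can be tried on synthetic data: a known waveform is buried in noise at a known position and recovered from the peak of the correlation (hypothetical numbers, NumPy only):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(42)\n",
"template = rng.standard_normal(100)\n",
"\n",
"# bury the template in weak noise, starting at sample 37\n",
"signal = 0.1 * rng.standard_normal(300)\n",
"signal[37:137] += template\n",
"\n",
"cc = np.correlate(signal, template, mode=\"valid\")\n",
"print(int(np.argmax(cc)))  # 37\n",
"```"
]
},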
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Matplotlib settings and Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# show figures inline\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('ggplot') # Matplotlib style sheet - nicer plots!\n",
"plt.rcParams['figure.figsize'] = 12, 8 # Slightly bigger plots by default\n",
"\n",
"# python imports\n",
"from obspy import read, UTCDateTime\n",
"from matplotlib.pyplot import xcorr\n",
"import numpy as np\n",
"\n",
"#from obspy.signal.cross_correlation import xcorr_pick_correction # 1st exercise\n",
"#from obspy.signal.trigger import coincidence_trigger # 2nd exercise\n",
"#from pprint import pprint"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Get waveform of template event"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"## compute and plot cross correlation of recordings of two events\n",
"\n",
"nsec = 300 # 5 minutes\n",
"\n",
"# use FDSNws to get the data from station BUG\n",
"from obspy.clients.fdsn import Client\n",
"client = Client(\"BGR\")\n",
"\n",
"t1 = UTCDateTime(\"2017-09-03T03:40:00Z\")\n",
"t2 = t1 + nsec\n",
"template = client.get_waveforms(\"GR\", \"BUG\", \"\", \"BHZ\", t1, t2, attach_response=True)\n",
"template.remove_response(output=\"VEL\");"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# try different start times, end times and filter settings to display the onset of the event at BUG\n",
"st = template.copy()\n",
"st.plot(starttime=t1+10, endtime=t1+200);\n",
"st.filter('bandpass', freqmin=0.8, freqmax=3.0).plot(starttime=t1+10, endtime=t1+200);"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Get more Events"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Test Event 1\n",
"t1 = UTCDateTime(\"2016-09-09T00:40:00Z\")\n",
"t2 = t1 + nsec\n",
"st1 = client.get_waveforms(\"GR\", \"BUG\", \"\", \"BHZ\", t1, t2, attach_response=True)\n",
"st1.remove_response(output=\"VEL\")\n",
"\n",
"st_ = st1.copy()\n",
"st_.plot(starttime=t1+90, endtime=t1+160);\n",
"st_.filter('bandpass', freqmin=0.8, freqmax=3.0).plot(starttime=t1+90, endtime=t1+160);"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Test Event 2\n",
"t1 = UTCDateTime(\"2016-01-06T01:40:00Z\")\n",
"t2 = t1 + nsec\n",
"st2 = client.get_waveforms(\"GR\", \"BUG\", \"\", \"BHZ\", t1, t2, attach_response=True)\n",
"st2.remove_response(output=\"VEL\")\n",
"\n",
"st_ = st2.copy()\n",
"st_.plot(starttime=t1+90, endtime=t1+160);\n",
"st_.filter('bandpass', freqmin=0.8, freqmax=3.0).plot(starttime=t1+90, endtime=t1+160);"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Calculate Cross-correlation of template event data and test event data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# time window for cross-correlation\n",
"nsec1 = 90\n",
"nsec2 = 160\n",
"\n",
"# extract data of template event\n",
"st_ = template.copy()\n",
"t1 = UTCDateTime(\"2017-09-03T03:40:00Z\")\n",
"st_.filter('bandpass', freqmin=0.8, freqmax=3.0).detrend().trim(starttime=t1+nsec1, endtime=t1+nsec2)\n",
"st_.trim(starttime=t1, endtime=t1+nsec, pad=True, fill_value=0.0)\n",
"tr_template = st_[0].copy()\n",
"\n",
"# extract data of 1st and 2nd test event\n",
"st_ = st1.copy()\n",
"t1 = UTCDateTime(\"2016-09-09T00:40:00Z\")+0\n",
"st_.filter('bandpass', freqmin=0.8, freqmax=3.0).trim(starttime=t1, endtime=t1+nsec, pad=True, fill_value=0.0)\n",
"tr_event1 = st_[0].copy()\n",
"\n",
"st_ = st2.copy()\n",
"t1 = UTCDateTime(\"2016-01-06T01:40:00Z\")+0\n",
"st_.filter('bandpass', freqmin=0.8, freqmax=3.0).trim(starttime=t1, endtime=t1+nsec, pad=True, fill_value=0.0)\n",
"tr_event2 = st_[0].copy()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# plot template and test events\n",
"tr_template.normalize().plot();\n",
"tr_event1.normalize().plot();\n",
"tr_event2.normalize().plot();"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# set parameters\n",
"cc_maxlag = nsec # maximum lag time [s]\n",
"samp_rate = tr_template.stats['sampling_rate']\n",
"shift_len = int(cc_maxlag * samp_rate)\n",
"\n",
"# calculate cross correlation\n",
"a_t1 = xcorr(tr_template.data, tr_event1.data, maxlags=shift_len) # matplotlib\n",
"cc_lags = a_t1[0] #lag vector\n",
"cc = a_t1[1] #correlation vector\n",
"\n",
"# extract parameters from result\n",
"cc_t = np.linspace(-cc_maxlag, cc_maxlag, shift_len*2+1)\n",
"cc_max = max(cc)\n",
"cc_shift = cc.argmax() - shift_len  # center index (shift_len) corresponds to zero lag\n",
"cc_shift_t = cc_shift / samp_rate\n",
"\n",
"# plot result\n",
"fig = plt.figure(1)\n",
"plt.clf()\n",
"plt.rcParams[\"figure.figsize\"] = [25,10]\n",
"plt.plot(cc_t, cc, 'k')\n",
"plt.title('cross-correlation of template and test event 1, max %.2f at %.2f s shift' %(cc_max, cc_lags[np.argmax(cc)]/samp_rate))\n",
"plt.xlabel('time lag [sec]')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# set parameters\n",
"cc_maxlag = nsec # maximum lag time [s]\n",
"samp_rate = tr_template.stats['sampling_rate']\n",
"shift_len = int(cc_maxlag * samp_rate)\n",
"\n",
"# calculate cross correlation\n",
"a_t2 = xcorr(tr_template.data, tr_event2.data, maxlags=shift_len) # matplotlib\n",
"cc_lags = a_t2[0] #lag vector\n",
"cc = a_t2[1] #correlation vector\n",
"\n",
"# extract parameters from result\n",
"cc_t = np.linspace(-cc_maxlag, cc_maxlag, shift_len*2+1)\n",
"cc_max = max(cc)\n",
"cc_shift = cc.argmax() - shift_len  # center index (shift_len) corresponds to zero lag\n",
"cc_shift_t = cc_shift / samp_rate\n",
"\n",
"# plot result\n",
"fig = plt.figure(1)\n",
"plt.clf()\n",
"plt.rcParams[\"figure.figsize\"] = [25,10]\n",
"plt.plot(cc_t, cc, 'k')\n",
"plt.title('cross-correlation of template and test event 2, max %.2f at %.2f s shift' %(cc_max, cc_lags[np.argmax(cc)]/samp_rate))\n",
"plt.xlabel('time lag [sec]')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Exercise\n",
"\n",
"* Create a plot showing the template and the test events together, with the onset times aligned."
]
}
],
"metadata": {
"celltoolbar": "Slideshow",
"interpreter": {
"hash": "b40d316b6eb93f8d3cd4b30bf241a4edb2b8ca4ea8a7206ab7ccdcc909e9c051"
},
"kernelspec": {
"display_name": "Python 3.6.13 ('obspy')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,153 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## Simple cross correlation to enhance pick accuracy\n",
"\n",
"This part is taken from the ObsPy tutorial at https://docs.obspy.org/master/tutorial/code_snippets/xcorr_pick_correction.html.\n",
"\n",
"This example shows how to align the waveforms of phase onsets of two earthquakes in order to correct the original pick times that can never be set perfectly consistent in routine analysis. A parabola is fit to the concave part of the cross correlation function around its maximum, following the approach by [Deichmann 1992](https://docs.obspy.org/master/citations.html#deichmann1992).\n",
"\n",
"To adjust the parameters (i.e. the used time window around the pick and the filter settings) and to validate and check the results the options plot and filename can be used to open plot windows or save the figure to a file.\n",
"\n",
"See the documentation of [xcorr_pick_correction()](https://docs.obspy.org/master/packages/autogen/obspy.signal.cross_correlation.xcorr_pick_correction.html#obspy.signal.cross_correlation.xcorr_pick_correction) for more details."
]
},
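{
"cell_type": "markdown",
"metadata": {},
"source": [
"The parabola fit itself is a three-point formula: the vertex of the parabola through the samples around the discrete maximum gives the subsample peak position. A minimal NumPy illustration of that idea (not the `xcorr_pick_correction()` code itself):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def parabolic_peak_offset(y, i):\n",
"    # fractional offset of the parabola vertex from index i,\n",
"    # using the three samples y[i-1], y[i], y[i+1]\n",
"    a, b, c = y[i - 1], y[i], y[i + 1]\n",
"    return 0.5 * (a - c) / (a - 2.0 * b + c)\n",
"\n",
"# samples of y = 1 - (x - 0.3)**2 at x = -1, 0, 1: true peak offset is 0.3\n",
"y = np.array([1.0 - (x - 0.3) ** 2 for x in (-1.0, 0.0, 1.0)])\n",
"print(parabolic_peak_offset(y, 1))  # 0.3 (up to floating-point rounding)\n",
"```"
]
},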
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# show figures inline\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('ggplot') # Matplotlib style sheet - nicer plots!\n",
"plt.rcParams['figure.figsize'] = 12, 8 # Slightly bigger plots by default\n",
"\n",
"# python imports\n",
"from obspy import read, UTCDateTime\n",
"from obspy.signal.cross_correlation import xcorr_pick_correction"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"# read example data of two small earthquakes\n",
"path = \"https://examples.obspy.org/BW.UH1..EHZ.D.2010.147.%s.slist.gz\"\n",
"st1 = read(path % (\"a\", ))\n",
"st2 = read(path % (\"b\", ))\n",
"\n",
"# display info and plot data\n",
"print(st1)\n",
"st1.plot();\n",
"print(st2)\n",
"st2.plot();"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"# select the single traces to use in correlation.\n",
"# to avoid artifacts from preprocessing there should be some data left and\n",
"# right of the short time window actually used in the correlation.\n",
"tr1 = st1.select(component=\"Z\")[0]\n",
"tr2 = st2.select(component=\"Z\")[0]\n",
"\n",
"# these are the original pick times set during routine analysis\n",
"t1 = UTCDateTime(\"2010-05-27T16:24:33.315000Z\")\n",
"t2 = UTCDateTime(\"2010-05-27T16:27:30.585000Z\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# estimate the time correction for pick 2 without any preprocessing and open\n",
"# a plot window to visually validate the results\n",
"dt, coeff = xcorr_pick_correction(t1, tr1, t2, tr2, 0.05, 0.2, 0.25, plot=True)\n",
"print(\"No preprocessing:\")\n",
"print(\" Time correction for pick 2: %.6f s\" % dt)\n",
"print(\" Correlation coefficient: %.2f\" % coeff)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# estimate the time correction with bandpass prefiltering\n",
"dt, coeff = xcorr_pick_correction(t1, tr1, t2, tr2, 0.05, 0.2, 0.25, plot=True,\n",
" filter=\"bandpass\",\n",
" filter_options={'freqmin': 1, 'freqmax': 10})\n",
"print(\"Bandpass prefiltering:\")\n",
"print(\" Time correction for pick 2: %.6f s\" % dt)\n",
"print(\" Correlation coefficient: %.2f\" % coeff)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -0,0 +1,911 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Ambient Seismic Noise Analysis</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Cross Correlation </div>\n",
" </div>\n",
" </div>\n",
"</div>\n",
"\n",
"In this tutorial you will try to reproduce one of the figures in Shapiro _et al._. To see which one, execute the second code block below. \n",
"\n",
"Reference: *High-Resolution Surface-Wave Tomography from Ambient Seismic Noise*, Nikolai M. Shapiro, et al. **Science** 307, 1615 (2005);\n",
"DOI: 10.1126/science.1108339\n",
"\n",
"##### Authors:\n",
"* Celine Hadziioannou\n",
"* Ashim Rijal\n",
"---\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"### In this notebook\n",
"We will reproduce figure B below. This figure compares: \n",
"1) the seismogram from an event near station MLAC, recorded at station PHL (top)\n",
"2) the \"Green's function\" obtained by correlating noise recorded at stations MLAC and PHL (center and bottom)\n",
"\n",
"All bandpassed for periods between 5 and 10 seconds. \n",
"\n",
"<img src=\"https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/shapiro_figure.png\">\n",
"\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"# Configuration step (Please run it before the code!)\n",
"\n",
"# MatplotLib.PyPlot\n",
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('ggplot') # Matplotlib style sheet - nicer plots!\n",
"plt.rcParams['figure.figsize'] = 12, 8\n",
"\n",
"# NumPy\n",
"import numpy as np\n",
"\n",
"# ObsPy\n",
"from obspy.core import UTCDateTime, read\n",
"from obspy.clients.fdsn import Client\n",
"try: # depends on obspy version; this is for v1.1.0\n",
" from obspy.geodetics import gps2dist_azimuth as gps2DistAzimuth\n",
"except ImportError:\n",
" from obspy.core.util import gps2DistAzimuth\n",
"\n",
"# ignore warnings from filters\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### 1. Read in noise data \n",
"Read the noise data for station MLAC into a stream. \n",
"\n",
"Then, **read in** noise data for station PHL.\n",
"Add this to the stream created above.\n",
"\n",
"These two data files contain 90 days of vertical component noise for each station.\n",
"#### If you need data for more than 90 days, it can be downloaded from the IRIS database. ###"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Shapiro et al. use noise data from MLAC and PHL stations\n",
"\n",
"num_of_days = 90 # number of days of data; change if more than 90 days of data are required\n",
"if num_of_days <= 90:\n",
" # get noise data for station MLAC\n",
" stn = read('https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/noise.CI.MLAC.LHZ.2004.294.2005.017.mseed')\n",
" # get noise data for the station PHL and add it to the previous stream\n",
" stn += read('https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/noise.CI.PHL.LHZ.2004.294.2005.017.mseed')\n",
" # if you have data stored locally, comment the stn = and stn += lines above\n",
" # then uncomment the following 3 lines and adapt the path: \n",
"    # stn = read('./noise.CI.MLAC.LHZ.2004.294.2005.017.mseed')\n",
"    # stn += read('./noise.CI.PHL.LHZ.2004.294.2005.017.mseed')\n",
"    # ste = read('./event.CI.PHL.LHZ.1998.196.1998.196.mseed')\n",
"else:\n",
" # download data from IRIS database\n",
" client = Client(\"IRIS\") # client specification\n",
" t1 = UTCDateTime(\"2004-10-20T00:00:00.230799Z\") # start UTC date/time\n",
" t2 = t1+(num_of_days*86400) # end UTC date/time\n",
" stn = client.get_waveforms(network=\"CI\", station=\"MLAC\",location=\"*\", channel=\"*\",\n",
" starttime=t1, endtime=t2) # get data for MLAC\n",
" stn += client.get_waveforms(network=\"CI\", station=\"PHL\", location=\"*\", channel=\"*\",\n",
" starttime=t1, endtime=t2) # get data for PHL and add it to the previous stream"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### 2. Preprocess noise ###\n",
"***Preprocessing 1***\n",
"* Just to be sure to keep a 'clean' original stream, first **copy** the noise stream with [st.copy()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.copy.html). The copied stream is the one you will use from now on. \n",
"\n",
"* In order to test the preprocessing without taking too long, it's also useful to first **trim** this copied noise data stream to just one or a few days. This can be done with [st.trim()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.trim.html), after defining your start- and endtime. \n",
"\n",
"\n",
"Many processing functionalities are included in ObsPy. For example, you can remove any (linear) trends with [st.detrend()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.detrend.html) and taper the edges with [st.taper()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.taper.html). \n",
"Different types of filter are also available in [st.filter()](https://docs.obspy.org/packages/autogen/obspy.core.stream.Stream.filter.html). \n",
"\n",
"* First, **detrend** the data. \n",
"* Next, apply a **bandpass filter** to select the frequencies with the most noise energy. The secondary microseismic peak lies roughly between 0.1 and 0.2 Hz, the primary microseismic peak between 0.05 and 0.1 Hz. Make sure to use a zero-phase filter! *(specify argument zerophase=True)*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "subslide"
}
},
"outputs": [],
"source": [
"# Preprocessing 1\n",
"\n",
"stp = stn.copy() # copy stream\n",
"t = stp[0].stats.starttime\n",
"stp.trim(t, t + 4 * 86400) # shorten stream for quicker processing\n",
"\n",
"stp.detrend('linear') # remove trends using detrend\n",
"stp.taper(max_percentage=0.05, type='cosine') # taper the edges\n",
"stp.filter('bandpass', freqmin=0.1, freqmax=0.2, zerophase=True) # filter data of all traces in the streams"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"***Preprocessing 2***\n",
"\n",
"Some additional useful processing functions are provided in the **Functions** cells below.\n",
"\n",
"* For each trace in the stream, apply **spectral whitening** over the frequency range you chose before (either [0.1, 0.2] Hz or [0.05, 0.1] Hz), using the **``whiten``** function. \n",
"\n",
"\n",
"* For the **time normalization**, the simplest option is the one-bit normalization provided by the **``normalize``** function. \n",
"\n",
"* *Optional: play around with different normalization options, such as clipping to a certain number of standard deviations, or using the running absolute mean normalization.*"
]
},
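{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"*Aside (added illustration, not part of the original exercise):* one-bit normalization keeps only the sign of each sample and discards all amplitude information. In NumPy this is a one-liner:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# minimal sketch of one-bit normalization (the same operation as norm_method='1bit' below)\n",
"demo = np.array([0.3, -2.0, 0.0, 5.1])\n",
"print(np.sign(demo))  # -> [ 1. -1.  0.  1.]"
]
},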
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"A brief *description of the individual* **functions** (see the cells below) is as follows:\n",
"\n",
"1) **whiten**:\n",
"\n",
" spectral whitening of trace `tr` using a cosine tapered boxcar between `freqmin` and `freqmax`\n",
" (courtesy Gaia Soldati & Licia Faenza, INGV)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0,
3,
46,
57,
93,
130,
150
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"def whiten(tr, freqmin, freqmax):\n",
" \n",
"    nsamp = tr.stats.sampling_rate  # sampling rate in Hz\n",
" \n",
" n = len(tr.data)\n",
" if n == 1:\n",
" return tr\n",
" else: \n",
" frange = float(freqmax) - float(freqmin)\n",
" nsmo = int(np.fix(min(0.01, 0.5 * (frange)) * float(n) / nsamp))\n",
" f = np.arange(n) * nsamp / (n - 1.)\n",
" JJ = ((f > float(freqmin)) & (f<float(freqmax))).nonzero()[0]\n",
" \n",
" # signal FFT\n",
" FFTs = np.fft.fft(tr.data)\n",
" FFTsW = np.zeros(n) + 1j * np.zeros(n)\n",
"\n",
" # Apodization to the left with cos^2 (to smooth the discontinuities)\n",
" smo1 = (np.cos(np.linspace(np.pi / 2, np.pi, nsmo+1))**2)\n",
" FFTsW[JJ[0]:JJ[0]+nsmo+1] = smo1 * np.exp(1j * np.angle(FFTs[JJ[0]:JJ[0]+nsmo+1]))\n",
"\n",
" # boxcar\n",
" FFTsW[JJ[0]+nsmo+1:JJ[-1]-nsmo] = np.ones(len(JJ) - 2 * (nsmo+1))\\\n",
" * np.exp(1j * np.angle(FFTs[JJ[0]+nsmo+1:JJ[-1]-nsmo]))\n",
"\n",
" # Apodization to the right with cos^2 (to smooth the discontinuities)\n",
" smo2 = (np.cos(np.linspace(0., np.pi/2., nsmo+1))**2.)\n",
" espo = np.exp(1j * np.angle(FFTs[JJ[-1]-nsmo:JJ[-1]+1]))\n",
" FFTsW[JJ[-1]-nsmo:JJ[-1]+1] = smo2 * espo\n",
"\n",
" whitedata = 2. * np.fft.ifft(FFTsW).real\n",
" \n",
" tr.data = np.require(whitedata, dtype=\"float32\")\n",
"\n",
" return tr"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"2) **correlateNoise**:\n",
" \n",
"    correlate two stations, using slices of 'corrwin' seconds at a time; the individual correlations are also stacked.\n",
"    NB hardcoded: correlates only the 1st with the 2nd station in the stream. Signals are merged - any data gaps\n",
"    are filled with zeros.\n",
" st : stream containing data from the two stations to correlate\n",
" stations : list of stations\n",
" corrwin : correlation window length\n",
" returns 'corr' (all correlations) and 'stack' (averaged correlations)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0,
3,
46,
57,
93,
130,
150
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"def correlateNoise(st, stations, corrwin):\n",
"\n",
"    print('correlating stations', (stations[0], stations[1]))\n",
"\n",
" # initialize sliding timewindow (length = corrwin) for correlation\n",
" # start 1 corrwin after the start to account for different stream lengths\n",
" timewin = st.select(station=stations[1])[0].stats.starttime + corrwin\n",
"\n",
" # loop over timewindows \n",
"    # stop 2 corrwins before the end to account for different stream lengths\n",
" while timewin < st.select(station=stations[0])[-1].stats.endtime - 2*corrwin:\n",
" sig1 = st.select(station=stations[0]).slice(timewin, timewin+corrwin)\n",
" sig1.merge(method=0, fill_value=0)\n",
" sig2 = st.select(station=stations[1]).slice(timewin, timewin+corrwin)\n",
" sig2.merge(method=0, fill_value=0)\n",
" xcorr = np.correlate(sig1[0].data, sig2[0].data, 'same')\n",
"\n",
" try: \n",
" # build array with all correlations\n",
" corr = np.vstack((corr, xcorr))\n",
"        except NameError:\n",
"            # corr doesn't exist yet (first window)\n",
" corr = xcorr\n",
" \n",
" # shift timewindow by one correlation window length\n",
" timewin += corrwin\n",
"\n",
" # stack the correlations; normalize\n",
" stack = np.sum(corr, 0)\n",
" stack = stack / float((np.abs(stack).max())) \n",
"    print(\"...done\")\n",
"\n",
" return corr, stack"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"3) **plotStack**:\n",
"\n",
" plots stack of correlations with correct time axis\n",
" st: stream containing noise (and station information)\n",
" stack: array containing stack \n",
" maxlag: maximum length of correlation to plot (in seconds)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0,
3,
46,
57,
93,
130,
150
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"def plotStack(st, stack, maxlag, figurename=None):\n",
"\n",
" # define the time vector for the correlation (length of corr = corrwin + 1)\n",
" limit = (len(stack) / 2.) * st[0].stats.delta\n",
" timevec = np.arange(-limit, limit, st[0].stats.delta)\n",
"\n",
" plt.plot(timevec, stack, 'k')\n",
" stations = list(set([_i.stats.station for _i in st]))\n",
" plt.title(\"Stacked correlation between %s and %s\" % (stations[0], stations[1]))\n",
" plt.xlim(-maxlag, maxlag)\n",
" plt.xlabel('time [s]')\n",
"\n",
" if figurename is not None:\n",
"        plt.savefig(figurename, format=\"pdf\")  # no separate Figure object here, so use plt.savefig\n",
" else:\n",
" plt.show() "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"4) **plotXcorrEvent**:\n",
"\n",
" plot the noise correlation (MLAC, PHL) alongside the 1998 event signal \n",
" st : event stream\n",
" stn : noise stream\n",
" stack : noise correlation array\n",
" maxlag : maximum length of correlation, in seconds\n",
" acausal : set to True to use acausal part (=negative times) of the correlation\n",
" figurename : if a filename is specified, figure is saved in pdf format\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0,
3,
46,
57,
93,
130,
150
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"def plotXcorrEvent(st, stn, stack, maxlag, acausal=False, figurename=None):\n",
"\n",
" eventtime = UTCDateTime(1998,7,15,4,53,21,0) # event near MLAC\n",
"\n",
" # station locations\n",
" latP, lonP = 35.41, -120.55 # station PHL\n",
" latM, lonM = 37.63, -118.84 # station MLAC\n",
" latE, lonE = 37.55, -118.809 # event 1998\n",
" \n",
" # calculate distance between stations\n",
" dist = gps2DistAzimuth(latP, lonP, latM, lonM)[0] # between PHL and MLAC\n",
" distE = gps2DistAzimuth(latP, lonP, latE, lonE)[0] # between event and PHL\n",
" #\n",
" # CROSSCORRELATION\n",
" # reverse stack to plot acausal part (= negative times of correlation)\n",
" if acausal:\n",
" stack = stack[::-1]\n",
" \n",
" # find center of stack\n",
" c = int(np.ceil(len(stack)/2.) + 1)\n",
" \n",
" #cut stack to maxlag\n",
" stack = stack[c - maxlag * int(np.ceil(stn[0].stats.sampling_rate)) : c + maxlag * int(np.ceil(stn[0].stats.sampling_rate))]\n",
" \n",
" # find new center of stack\n",
" c2 = int(np.ceil(len(stack)/2.) + 1)\n",
"\n",
" # define time vector for cross correlation\n",
" limit = (len(stack) / 2.) * stn[0].stats.delta\n",
" timevec = np.arange(-limit, limit, stn[0].stats.delta)\n",
" # define timevector: dist / t\n",
" timevecDist = dist / timevec\n",
" \n",
" # EVENT\n",
" ste = st.copy()\n",
" st_PHL_e = ste.select(station='PHL')\n",
" \n",
" # cut down event trace to 'maxlag' seconds\n",
" dt = len(stack[c2:])/stn[0].stats.sampling_rate #xcorrlength\n",
" st_PHL_e[0].trim(eventtime, eventtime + dt)\n",
" \n",
" # create time vector for event signal\n",
" # extreme values:\n",
" limit = st_PHL_e[0].stats.npts * st_PHL_e[0].stats.delta\n",
" timevecSig = np.arange(0, limit, st_PHL_e[0].stats.delta)\n",
"\n",
" # PLOTTING\n",
" fig = plt.figure(figsize=(12.0, 8.0))\n",
" ax1 = fig.add_subplot(2,1,1)\n",
" ax2 = fig.add_subplot(2,1,2)\n",
"\n",
" # plot noise correlation\n",
" ax1.plot(timevecDist[c2:], stack[c2:], 'k')\n",
" ax1.set_title('Noise correlation between MLAC and PHL')\n",
"\n",
" # plot event near MLAC measured at PHL\n",
" ax2.plot(distE/timevecSig, st_PHL_e[0].data / np.max(np.abs(st_PHL_e[0].data)), 'r')\n",
" ax2.set_title('Event near MLAC observed at PHL')\n",
"\n",
" ax2.set_xlim((0, 8000))\n",
" ax1.set_xlim((0, 8000))\n",
"\n",
" ax2.set_xlabel(\"group velocity [m/s]\")\n",
" \n",
" if figurename is not None:\n",
" fig.savefig(figurename, format=\"pdf\")\n",
" else:\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"5) **normalize**:\n",
"\n",
"    Temporal normalization of the traces, mostly after Bensen et al. (2007). NB: before this treatment,\n",
"    traces must be demeaned, detrended and filtered. Description of the arguments:\n",
"\n",
" norm_method=\"clipping\"\n",
" signal is clipped to 'clip_factor' times the std\n",
" clip_factor recommended: 1 (times std)\n",
" \n",
" norm_method=\"clipping_iter\"\n",
"        the signal is clipped iteratively: values above 'clip_factor * std'\n",
"        are divided by 'clip_weight', until the whole signal is below\n",
"        'clip_factor * std'\n",
" clip_factor recommended: 6 (times std)\n",
" \n",
" \n",
" norm_method=\"ramn\"\n",
" running absolute mean normalization: a sliding window runs along the \n",
" signal. The values within the window are used to calculate a \n",
" weighting factor, and the center of the window is scaled by this \n",
" factor. \n",
"        weight factor: w = np.mean(np.abs(win)) / (2. * lwin + 1), with lwin = norm_win in samples\n",
" finally, the signal is tapered with a tukey window (alpha = 0.2).\n",
"\n",
" norm_win: running window length, in seconds.\n",
" recommended: half the longest period\n",
"\n",
" norm_method=\"1bit\"\n",
" only the sign of the signal is conserved\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0,
3,
46,
57,
93,
130,
150
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Functions\n",
"# collection of functions used in noise correlation processing\n",
"\n",
"def normalize(tr, clip_factor=6, clip_weight=10, norm_win=None, norm_method=\"1bit\"): \n",
" \n",
" if norm_method == 'clipping':\n",
" lim = clip_factor * np.std(tr.data)\n",
" tr.data[tr.data > lim] = lim\n",
" tr.data[tr.data < -lim] = -lim\n",
"\n",
" elif norm_method == \"clipping_iter\":\n",
" lim = clip_factor * np.std(np.abs(tr.data))\n",
" \n",
"        # as long as values remain above the threshold, divide them by clip_weight\n",
"        while (np.abs(tr.data) > lim).any():\n",
" tr.data[tr.data > lim] /= clip_weight\n",
" tr.data[tr.data < -lim] /= clip_weight\n",
"\n",
" elif norm_method == 'ramn':\n",
"        lwin = int(tr.stats.sampling_rate * norm_win)  # window length in samples\n",
" st = 0 # starting point\n",
" N = lwin # ending point\n",
"\n",
" while N < tr.stats.npts:\n",
" win = tr.data[st:N]\n",
"\n",
" w = np.mean(np.abs(win)) / (2. * lwin + 1)\n",
" \n",
" # weight center of window\n",
"            tr.data[st + lwin // 2] /= w  # integer index needed in Python 3\n",
"\n",
" # shift window\n",
" st += 1\n",
" N += 1\n",
"\n",
" # taper edges\n",
" taper = get_window(tr.stats.npts)\n",
" tr.data *= taper\n",
"\n",
" elif norm_method == \"1bit\":\n",
" tr.data = np.sign(tr.data)\n",
" tr.data = np.float32(tr.data)\n",
"\n",
" return tr"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"6) **get_window**:\n",
"\n",
"    Return a Tukey window of length N\n",
" N: length of window\n",
" alpha: alpha parameter in case of tukey window.\n",
" 0 -> rectangular window\n",
" 1 -> cosine taper\n",
" returns: window (np.array)\n",
" \n",
"Doc of [scipy.signal.get_window](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.signal.get_window.html)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0,
3,
46,
57,
93,
130,
150
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"def get_window(N, alpha=0.2):\n",
"\n",
" window = np.ones(N)\n",
" x = np.linspace(-1., 1., N)\n",
" ind1 = (abs(x) > 1 - alpha) * (x < 0)\n",
" ind2 = (abs(x) > 1 - alpha) * (x > 0)\n",
" window[ind1] = 0.5 * (1 - np.cos(np.pi * (x[ind1] + 1) / alpha))\n",
" window[ind2] = 0.5 * (1 - np.cos(np.pi * (x[ind2] - 1) / alpha))\n",
" return window"
]
},
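{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"*Sanity check (added, not part of the original exercise):* the hand-rolled taper in ``get_window`` above should agree with SciPy's Tukey window for the same ``alpha`` (this assumes SciPy >= 1.1 is installed, for the ``scipy.signal.windows`` namespace):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# compare the custom taper with scipy's tukey window for identical alpha\n",
"from scipy.signal.windows import tukey\n",
"print(np.allclose(get_window(101, alpha=0.2), tukey(101, alpha=0.2)))  # -> True"
]
},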
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Actual preprocessing happens here -- this can take a while!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Preprocessing 2\n",
"st = stp.copy() # copy stream\n",
"\n",
"for tr in st:\n",
" tr = normalize(tr, norm_method=\"1bit\")\n",
" tr = whiten(tr, 0.1, 0.2)\n",
"print('done!')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Cross-correlation ####\n",
"Once you're happy with the preprocessing, you can calculate the **cross-correlation** using the **``correlateNoise``** function. The cross-correlations are computed in slices of a few hours each (specified via *corrwin*). \n",
"\n",
"**For correlateNoise function**\n",
"* input: stream, list of stations (here: ['MLAC', 'PHL']), slice length in seconds\n",
"* output: all individual correlations, stack\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Cross-correlate\n",
"xcorr, stack = correlateNoise(st, ['MLAC','PHL'], 7200)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"The resulting stack can be **plotted** with the **``plotStack``** function. Since it doesn't make much sense to look at a two-hour-long correlation signal, you can decide to plot only the central part by specifying a ``maxlag`` (in seconds). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Plotting\n",
"\n",
"plotStack(st, stack, 400)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"If you're only working with a few days of noise (after trimming), this plot probably doesn't look very nice. You could go back to the code block named 'preprocessing 1', and keep a longer noise record (10 days works quite well already). "
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"#### Compare to event trace ####\n",
"In 1998, an M = 5.1 event occurred right next to station MLAC. This event was recorded at PHL; we now read in this data.\n",
"\n",
"* **read** the event data to a separate stream"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"ste = read('https://raw.github.com/ashimrijal/NoiseCorrelation/master/data/event.CI.PHL.LHZ.1998.196.1998.196.mseed')\n",
"# if data is stored locally, uncomment the following line and comment the line above:\n",
"# ste = read('./event.CI.PHL.LHZ.1998.196.1998.196.mseed')"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "subslide"
}
},
"source": [
"#### Preprocess event ####\n",
"\n",
"The event signal should be processed in a similar way to the noise. \n",
"\n",
"* **detrend** in the same way as before\n",
"* **bandpass filter** for the same frequencies as chosen above\n",
"* apply **spectral whitening** to each trace in the event stream to ensure the spectrum is comparable to the noise. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Preprocessing\n",
"\n",
"ste.detrend('linear')\n",
"ste.filter('bandpass', freqmin=0.1, freqmax=0.2, zerophase=True)\n",
"for tr in ste:\n",
" tr = whiten(tr, 0.1, 0.2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "fragment"
}
},
"source": [
"#### Plot! ####\n",
"\n",
"A plotting function, **``plotXcorrEvent``**, is provided to plot both signals alongside each other. \n",
"\n",
"* input: event stream, noise stream, stack, maxlag"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"code_folding": [
0
],
"slideshow": {
"slide_type": "fragment"
}
},
"outputs": [],
"source": [
"# Plotting\n",
"\n",
"plotXcorrEvent(ste, stn, stack, 400)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"celltoolbar": "Slideshow",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,223 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Noise</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Lab: Probabilistic Power Spectral Densities</div>\n",
" </div>\n",
" </div>\n",
"</div>\n",
"\n",
"\n",
"Seismo-Live: http://seismo-live.org\n",
"\n",
"##### Authors:\n",
"* Tobias Megies ([@megies](https://github.com/megies))\n",
"\n",
"---"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib inline\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use(\"bmh\")\n",
"plt.rcParams['figure.figsize'] = 10, 6"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * read waveform data from file `data/GR.FUR..BHN.D.2015.361` (station `FUR`, [LMU geophysical observatory in Fürstenfeldbruck](https://www.geophysik.uni-muenchen.de/observatory/seismology))\n",
" * read corresponding station metadata from file `data/station_FUR.stationxml`\n",
" * print info on both waveforms and station metadata"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from obspy import read, read_inventory\n",
"\n",
"st = read(\"data/GR.FUR..BHN.D.2015.361\")\n",
"inv = read_inventory(\"data/station_FUR.stationxml\")\n",
"\n",
"print(st)\n",
"print(inv)\n",
"inv.plot(projection=\"ortho\");"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * compute probabilistic power spectral densities using `PPSD` class from obspy.signal, see http://docs.obspy.org/tutorial/code_snippets/probabilistic_power_spectral_density.html (but use the inventory you read from StationXML as metadata)\n",
" * plot the processed `PPSD` (`plot()` method attached to `PPSD` object)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from obspy.signal import PPSD\n",
"\n",
"tr = st[0]\n",
"ppsd = PPSD(stats=tr.stats, metadata=inv)\n",
"\n",
"ppsd.add(tr)\n",
"ppsd.plot()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since longer-term stacks would need too much waveform data and take far too long to compute, we prepared one year of continuous data, preprocessed for a single channel of station `FUR`, to play with.\n",
"\n",
" * load the long-term pre-computed PPSD from file `PPSD_FUR_HHN.npz` using `PPSD`'s `load_npz()` staticmethod (i.e. it is called directly on the class, not on an instance)\n",
" * plot the PPSD (the default is the full time range; depending on how much data and spread there is, adjust the `max_percentage` option of `plot()`) - this might take a couple of minutes!\n",
" * do a cumulative plot (which is good to judge non-exceedance percentage dB thresholds)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from obspy.signal import PPSD\n",
"\n",
"ppsd = PPSD.load_npz(\"data/PPSD_FUR_HHN.npz\", allow_pickle=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ppsd.plot(max_percentage=10)\n",
"ppsd.plot(cumulative=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * do different stacks of the data using the [`calculate_histogram()` (see docs!)](http://docs.obspy.org/packages/autogen/obspy.signal.spectral_estimation.PPSD.calculate_histogram.html) method of `PPSD` and visualize them\n",
" * compare differences in different frequency bands qualitatively (anthropogenic vs. \"natural\" noise)..\n",
" * nighttime stack, daytime stack\n",
" * advanced exercise: Use the `callback` option and use some crazy custom callback function in `calculate_histogram()`, e.g. stack together all data from birthdays in your family.. or all German holidays + Sundays in the time span.. or from dates of some bands' concerts on a tour.. etc."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ppsd.calculate_histogram(time_of_weekday=[(-1, 0, 2), (-1, 22, 24)])\n",
"ppsd.plot(max_percentage=10)\n",
"ppsd.calculate_histogram(time_of_weekday=[(-1, 8, 16)])\n",
"ppsd.plot(max_percentage=10)"
]
},
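{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Sketch for the advanced ``callback`` exercise above (an assumption-laden example, not an official solution):* according to the ``calculate_histogram()`` docs, ``callback`` receives the start times of all PSD pieces as an array of POSIX timestamps and must return a boolean mask selecting which pieces to stack. A hypothetical selector for Sundays could look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import datetime\n",
"import numpy as np\n",
"\n",
"def sundays_only(timestamps):\n",
"    # timestamps: POSIX floats; keep pieces whose UTC day is a Sunday\n",
"    utc = datetime.timezone.utc\n",
"    return np.array([datetime.datetime.fromtimestamp(t, utc).isoweekday() == 7\n",
"                     for t in timestamps])\n",
"\n",
"# ppsd.calculate_histogram(callback=sundays_only)\n",
"# ppsd.plot(max_percentage=10)"
]
},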
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * do different stacks of the data using the [`calculate_histogram()` (see docs!)](http://docs.obspy.org/packages/autogen/obspy.signal.spectral_estimation.PPSD.calculate_histogram.html) method of `PPSD` and visualize them\n",
" * compare differences in different frequency bands qualitatively (anthropogenic vs. \"natural\" noise)..\n",
" * weekdays stack, weekend stack"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ppsd.calculate_histogram(time_of_weekday=[(1, 0, 24), (2, 0, 24), (3, 0, 24), (4, 0, 24), (5, 0, 24)])\n",
"ppsd.plot(max_percentage=10)\n",
"ppsd.calculate_histogram(time_of_weekday=[(6, 0, 24), (7, 0, 24)])\n",
"ppsd.plot(max_percentage=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * do different stacks of the data using the [`calculate_histogram()` (see docs!)](http://docs.obspy.org/packages/autogen/obspy.signal.spectral_estimation.PPSD.calculate_histogram.html) method of `PPSD` and visualize them\n",
" * compare differences in different frequency bands qualitatively (anthropogenic vs. \"natural\" noise)..\n",
" * seasonal stacks (e.g. northern hemisphere autumn vs. spring/summer, ...)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ppsd.calculate_histogram(month=[10, 11, 12, 1])\n",
"ppsd.plot(max_percentage=10)\n",
"ppsd.calculate_histogram(month=[4, 5, 6, 7])\n",
"ppsd.plot(max_percentage=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * do different stacks of the data using the [`calculate_histogram()` (see docs!)](http://docs.obspy.org/packages/autogen/obspy.signal.spectral_estimation.PPSD.calculate_histogram.html) method of `PPSD` and visualize them\n",
" * compare differences in different frequency bands qualitatively (anthropogenic vs. \"natural\" noise)..\n",
" * stacks by specific month\n",
" * maybe even combine several of above restrictions.. (e.g. only nighttime on weekends)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"jupytext": {
"encoding": "# -*- coding: utf-8 -*-"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,232 @@
<?xml version='1.0' encoding='UTF-8'?>
<FDSNStationXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.fdsn.org/xml/station/1" schemaVersion="1.0" xsi:schemaLocation="http://www.fdsn.org/xml/station/1 http://www.fdsn.org/xml/station/fdsn-station-1.0.xsd">
<Source>Jane</Source>
<Sender>Jane</Sender>
<Module>JANE WEB SERVICE: fdsnws-station | Jane version: 0.0.0-gd6e1</Module>
<ModuleURI>http://jane/fdsnws/station/1/query?network=GR&amp;station=FUR&amp;starttime=2015-12-01&amp;endtime=2016-02-01&amp;level=response</ModuleURI>
<Created>2016-05-03T10:14:19+00:00</Created>
<Network startDate="2006-12-16T00:00:00.000000Z" code="GR">
<Description>GRSN</Description>
<SelectedNumberStations>1</SelectedNumberStations>
<TotalNumberStations>4</TotalNumberStations>
<Station startDate="2006-12-16T00:00:00.000000Z" code="FUR">
<Latitude plusError="0.0" minusError="0.0">48.162899</Latitude>
<Longitude plusError="0.0" minusError="0.0">11.2752</Longitude>
<Elevation plusError="0.0" minusError="0.0">565.0</Elevation>
<Site>
<Name>Fuerstenfeldbruck, Bavaria, GR-Net</Name>
</Site>
<CreationDate>2006-12-16T00:00:00.000</CreationDate>
<SelectedNumberChannels>12</SelectedNumberChannels>
<TotalNumberChannels>12</TotalNumberChannels>
<Channel locationCode="" code="HHN" startDate="2006-12-16T00:00:00.000">
<Latitude plusError="0.0" minusError="0.0">48.162899</Latitude>
<Longitude plusError="0.0" minusError="0.0">11.2752</Longitude>
<Elevation plusError="0.0" minusError="0.0">565.0</Elevation>
<Depth plusError="0.0" minusError="0.0">0.0</Depth>
<Azimuth plusError="0.0" minusError="0.0">0.0</Azimuth>
<Dip plusError="0.0" minusError="0.0">0.0</Dip>
<Type>TRIGGERED</Type>
<Type>GEOPHYSICAL</Type>
<SampleRate plusError="0.0" minusError="0.0">100.0</SampleRate>
<ClockDrift plusError="0.0" minusError="0.0">0.02</ClockDrift>
<CalibrationUnits>
<Name>A</Name>
<Description>Amperes</Description>
</CalibrationUnits>
<Sensor>
<Type>Streckeisen STS-2/N seismometer</Type>
</Sensor>
<Response>
<InstrumentSensitivity>
<Value>9.4368E8</Value>
<Frequency>0.02</Frequency>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters per Second</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters per Second</Description>
</InputUnits>
<OutputUnits>
<Name>V</Name>
<Description>Volts</Description>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (RADIANS/SECOND)</PzTransferFunctionType>
<NormalizationFactor>6.0077E7</NormalizationFactor>
<NormalizationFrequency plusError="0.0" minusError="0.0">1.0</NormalizationFrequency>
<Zero number="0">
<Real plusError="0.0" minusError="0.0">0.0</Real>
<Imaginary plusError="0.0" minusError="0.0">0.0</Imaginary>
</Zero>
<Zero number="1">
<Real plusError="0.0" minusError="0.0">0.0</Real>
<Imaginary plusError="0.0" minusError="0.0">0.0</Imaginary>
</Zero>
<Pole number="0">
<Real plusError="0.0" minusError="0.0">-0.037004</Real>
<Imaginary plusError="0.0" minusError="0.0">0.037016</Imaginary>
</Pole>
<Pole number="1">
<Real plusError="0.0" minusError="0.0">-0.037004</Real>
<Imaginary plusError="0.0" minusError="0.0">-0.037016</Imaginary>
</Pole>
<Pole number="2">
<Real plusError="0.0" minusError="0.0">-251.33</Real>
<Imaginary plusError="0.0" minusError="0.0">0.0</Imaginary>
</Pole>
<Pole number="3">
<Real plusError="0.0" minusError="0.0">-131.04</Real>
<Imaginary plusError="0.0" minusError="0.0">-467.29</Imaginary>
</Pole>
<Pole number="4">
<Real plusError="0.0" minusError="0.0">-131.04</Real>
<Imaginary plusError="0.0" minusError="0.0">467.29</Imaginary>
</Pole>
</PolesZeros>
<StageGain>
<Value>1500.0</Value>
<Frequency>0.02</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
<Description>Volts</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate plusError="0.0" minusError="0.0">200.0</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay plusError="0.0" minusError="0.0">0.0</Delay>
<Correction plusError="0.0" minusError="0.0">0.0</Correction>
</Decimation>
<StageGain>
<Value>629121.0</Value>
<Frequency>0.0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
<Channel locationCode="" code="BHN" startDate="2006-12-16T00:00:00.000">
<Latitude plusError="0.0" minusError="0.0">48.162899</Latitude>
<Longitude plusError="0.0" minusError="0.0">11.2752</Longitude>
<Elevation plusError="0.0" minusError="0.0">565.0</Elevation>
<Depth plusError="0.0" minusError="0.0">0.0</Depth>
<Azimuth plusError="0.0" minusError="0.0">0.0</Azimuth>
<Dip plusError="0.0" minusError="0.0">0.0</Dip>
<Type>TRIGGERED</Type>
<Type>GEOPHYSICAL</Type>
<SampleRate plusError="0.0" minusError="0.0">20.0</SampleRate>
<ClockDrift plusError="0.0" minusError="0.0">0.02</ClockDrift>
<CalibrationUnits>
<Name>A</Name>
<Description>Amperes</Description>
</CalibrationUnits>
<Sensor>
<Type>Streckeisen STS-2/N seismometer</Type>
</Sensor>
<Response>
<InstrumentSensitivity>
<Value>9.4368E8</Value>
<Frequency>0.02</Frequency>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters per Second</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters per Second</Description>
</InputUnits>
<OutputUnits>
<Name>V</Name>
<Description>Volts</Description>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (RADIANS/SECOND)</PzTransferFunctionType>
<NormalizationFactor>6.0077E7</NormalizationFactor>
<NormalizationFrequency plusError="0.0" minusError="0.0">1.0</NormalizationFrequency>
<Zero number="0">
<Real plusError="0.0" minusError="0.0">0.0</Real>
<Imaginary plusError="0.0" minusError="0.0">0.0</Imaginary>
</Zero>
<Zero number="1">
<Real plusError="0.0" minusError="0.0">0.0</Real>
<Imaginary plusError="0.0" minusError="0.0">0.0</Imaginary>
</Zero>
<Pole number="0">
<Real plusError="0.0" minusError="0.0">-0.037004</Real>
<Imaginary plusError="0.0" minusError="0.0">0.037016</Imaginary>
</Pole>
<Pole number="1">
<Real plusError="0.0" minusError="0.0">-0.037004</Real>
<Imaginary plusError="0.0" minusError="0.0">-0.037016</Imaginary>
</Pole>
<Pole number="2">
<Real plusError="0.0" minusError="0.0">-251.33</Real>
<Imaginary plusError="0.0" minusError="0.0">0.0</Imaginary>
</Pole>
<Pole number="3">
<Real plusError="0.0" minusError="0.0">-131.04</Real>
<Imaginary plusError="0.0" minusError="0.0">-467.29</Imaginary>
</Pole>
<Pole number="4">
<Real plusError="0.0" minusError="0.0">-131.04</Real>
<Imaginary plusError="0.0" minusError="0.0">467.29</Imaginary>
</Pole>
</PolesZeros>
<StageGain>
<Value>1500.0</Value>
<Frequency>0.02</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
<Description>Volts</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate plusError="0.0" minusError="0.0">200.0</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay plusError="0.0" minusError="0.0">0.0</Delay>
<Correction plusError="0.0" minusError="0.0">0.0</Correction>
</Decimation>
<StageGain>
<Value>629121.0</Value>
<Frequency>0.0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
</Station>
</Network>
</FDSNStationXML>
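As a plausibility check (not part of the repository files), the analogue stage of the response above can be evaluated numerically: the `NormalizationFactor` should scale the pole-zero transfer function to unit magnitude at the `NormalizationFrequency` (1 Hz; the transfer function type is LAPLACE in RADIANS/SECOND, so evaluate at s = j·2πf). A minimal pure-Python sketch using the poles and zeros listed above:

```python
import math

# Stage-1 poles/zeros (LAPLACE, RADIANS/SECOND) as listed in the StationXML above
zeros = [0j, 0j]
poles = [-0.037004 + 0.037016j, -0.037004 - 0.037016j,
         -251.33 + 0j, -131.04 - 467.29j, -131.04 + 467.29j]
a0 = 6.0077e7     # NormalizationFactor
f_norm = 1.0      # NormalizationFrequency in Hz

def paz_response(f, poles, zeros, a0):
    """Evaluate H(s) = a0 * prod(s - z) / prod(s - p) at s = j*2*pi*f."""
    s = 2j * math.pi * f
    num = 1 + 0j
    for z in zeros:
        num *= s - z
    den = 1 + 0j
    for p in poles:
        den *= s - p
    return a0 * num / den

# |H| at the normalization frequency should be close to 1
mag = abs(paz_response(f_norm, poles, zeros, a0))
print(mag)  # ~1.0 (within a few percent, given the rounded values above)
```

Multiplying the stage gains listed above (1500 V/(m/s) × 629121 counts/V ≈ 9.4368e8) likewise reproduces the overall `InstrumentSensitivity` value.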


@@ -0,0 +1,69 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n",
" <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n",
" <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n",
" <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">ObsPy Tutorial</div>\n",
" <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Introduction/Index</div>\n",
" </div>\n",
" </div>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](images/obspy_logo_full_524x179px.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These notebooks are an introduction to ObsPy and will take about 2 days to fully complete. Some basic knowledge of scientific Python is assumed."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Timetable/Contents\n",
"\n",
"1. **[Introduction to File Formats and read/write in ObsPy](01_File_Formats.ipynb)** [45 Minutes] - *[[Solution]](01_File_Formats-with_solutions.ipynb)*\n",
"2. **[Handling Time](02_UTCDateTime.ipynb)** [35 Minutes] - *[[Solution]](02_UTCDateTime-with_solutions.ipynb)*\n",
"3. **[Handling Waveform Data: Stream & Trace Classes](03_waveform_data.ipynb)** [45 Minutes] - *[[Solution]](03_waveform_data-with_solutions.ipynb)*\n",
"4. **[Handling Station Metainformation](04_Station_metainformation.ipynb)** [15 Minutes] - *[[Solution]](04_Station_metainformation-with_solutions.ipynb)*\n",
"5. **[Handling Event Metainformation](05_Event_metadata.ipynb)** [25 Minutes] - *[[Solution]](05_Event_metadata-with_solutions.ipynb)*\n",
"6. **[Retrieving Data from Data Centers](06_FDSN.ipynb)** [30 Minutes] - *[[Solution]](06_FDSN-with_solutions.ipynb)*\n",
"7. **[Basic Downloading/Processing Exercise](07_Basic_Processing_Exercise.ipynb)** [60 - x Minutes] - *[[Solution]](07_Basic_Processing_Exercise-with_solutions.ipynb)*\n",
"8. **[Exercise: Event Detection and Magnitude Estimation in an Aftershock Series](08_Exercise__2008_MtCarmel_Earthquake_and_Aftershock_Series.ipynb)** [open end] - *[[Solution]](08_Exercise__2008_MtCarmel_Earthquake_and_Aftershock_Series-with_solutions.ipynb)*"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.14"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
{"cells":[{"cell_type":"markdown","metadata":{},"source":["<div style='background-image: url(\"../images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n","    <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n","        <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n","            <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">ObsPy Tutorial</div>\n","            <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">Handling Station Metadata</div>\n","        </div>\n","    </div>\n","</div>"]},{"cell_type":"markdown","metadata":{},"source":["Seismo-Live: http://seismo-live.org\n","\n","##### Authors:\n","* Lion Krischer ([@krischer](https://github.com/krischer))\n","* Tobias Megies ([@megies](https://github.com/megies))\n","---"]},{"cell_type":"markdown","metadata":{},"source":["![](images/obspy_logo_full_524x179px.png)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["%matplotlib inline\n","import matplotlib.pyplot as plt\n","plt.style.use('ggplot')\n","plt.rcParams['figure.figsize'] = 12, 8"]},{"cell_type":"markdown","metadata":{},"source":["- for station metadata, the de-facto standard of the future (replacing SEED/RESP) is [FDSN StationXML](http://www.fdsn.org/xml/station/)\n","- FDSN StationXML files can be read using **`read_inventory()`**"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read_inventory\n","# real-world StationXML files often deviate from the official schema definition\n","# therefore file-format autodiscovery sometimes fails and we have to force the file format\n","inventory = read_inventory(\"./data/station_PFO.xml\", format=\"STATIONXML\")\n","print(type(inventory))"]},{"cell_type":"markdown","metadata":{},"source":["- the nested ObsPy Inventory class structure (Inventory/Station/Channel/Response/...) is closely modelled after FDSN StationXML\n","<img src=\"images/Inventory.svg\" width=90%>"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["!head data/station_BFO.xml"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(inventory)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["network = inventory[0]\n","print(network)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["station = network[0]\n","print(station)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["channel = station[0]\n","print(channel)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(channel.response)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["from obspy import read\n","st = read(\"./data/waveform_PFO.mseed\")\n","print(st)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["inv = read_inventory(\"./data/station_PFO.xml\", format=\"STATIONXML\")"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["print(st[0].stats)"]},{"cell_type":"markdown","metadata":{},"source":["- the instrument response can be deconvolved from the waveform data using the convenience method **`Stream.remove_response()`**\n","- evalresp is used internally to calculate the instrument response"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st.plot()\n","st.remove_response(inventory=inv)\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["- several options can be used to specify details of the deconvolution (water level, frequency domain prefiltering), output units (velocity/displacement/acceleration), demeaning, tapering and to specify if any response stages should be omitted"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["st = read(\"./data/waveform_PFO.mseed\")\n","st.remove_response(inventory=inv, water_level=60, pre_filt=(0.01, 0.02, 8, 10), output=\"DISP\")\n","st.plot()"]},{"cell_type":"markdown","metadata":{},"source":["- station metadata not present in StationXML yet but in Dataless SEED or RESP files can be used for instrument correction using the `.simulate()` method of Stream/Trace in a similar fashion"]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"}},"nbformat":4,"nbformat_minor":2}

File diff suppressed because one or more lines are too long

ObsPy/06_FDSN.ipynb Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

ObsPy/data/2014.ndk Normal file

File diff suppressed because it is too large


@@ -0,0 +1,13 @@
PDEW 2014 4 1 23 46 47.30 -19.6100 -70.7700 25.0 0.0 8.2 NEAR COAST OF NORTHERN C
event name: 201404012346A
time shift: 44.2000
half duration: 28.0000
latitude: -19.7000
longitude: -70.8100
depth: 21.6000
Mrr: 9.200000e+27
Mtt: -3.900000e+26
Mpp: -8.810000e+27
Mrt: 6.370000e+27
Mrp: -1.530000e+28
Mtp: 2.050000e+27
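The moment-tensor entries in this CMTSOLUTION-style file (in dyne·cm) can be cross-checked against the catalog magnitude: the scalar moment is M0 = (1/√2)·√(Σ Mij²) over the full symmetric tensor, and Mw = (2/3)(log10 M0 − 16.1) for M0 in dyne·cm (the convention used by the GCMT catalog). A quick sketch, with the values copied from the lines above:

```python
import math

# Moment tensor components in dyne*cm, as listed in the file above
Mrr, Mtt, Mpp = 9.2e27, -3.9e26, -8.81e27
Mrt, Mrp, Mtp = 6.37e27, -1.53e28, 2.05e27

# Scalar moment M0 = (1/sqrt(2)) * sqrt(sum of Mij^2 over the full tensor)
# (diagonal terms once, off-diagonal terms twice by symmetry)
m0 = math.sqrt(Mrr**2 + Mtt**2 + Mpp**2
               + 2 * (Mrt**2 + Mrp**2 + Mtp**2)) / math.sqrt(2)

# Moment magnitude, GCMT convention, M0 in dyne*cm
mw = (2.0 / 3.0) * (math.log10(m0) - 16.1)

print(f"M0 = {m0:.3e} dyne*cm, Mw = {mw:.2f}")
```

This recovers M0 ≈ 1.898e28 dyne·cm and Mw ≈ 8.1, matching the GCMT values for this event (201404012346A).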


@@ -0,0 +1,174 @@
<q:quakeml xsi:schemaLocation="http://quakeml.org/schema/xsd http://quakeml.org/schema/xsd/QuakeML-1.2.xsd" xmlns="http://quakeml.org/xmlns/bed/1.2" xmlns:q="http://quakeml.org/xmlns/quakeml/1.2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<eventParameters publicID="smi:service.iris.edu/fdsnws/event/1/query">
<event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=4597319">
<description>
<text>NEAR COAST OF NORTHERN C</text>
<type>region name</type>
</description>
<focalMechanism publicID="smi:ds.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A/quakeml#focalmechanism">
<momentTensor publicID="smi:ds.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A/quakeml#momenttensor">
<dataUsed>
<waveType>body waves</waveType>
<stationCount>166</stationCount>
<componentCount>407</componentCount>
<shortestPeriod>50.0</shortestPeriod>
</dataUsed>
<dataUsed>
<waveType>surface waves</waveType>
<stationCount>139</stationCount>
<componentCount>219</componentCount>
<shortestPeriod>50.0</shortestPeriod>
</dataUsed>
<dataUsed>
<waveType>mantle waves</waveType>
<stationCount>170</stationCount>
<componentCount>483</componentCount>
<shortestPeriod>200.0</shortestPeriod>
</dataUsed>
<derivedOriginID>smi:www.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A#cmtorigin</derivedOriginID>
<momentMagnitudeID>smi:ds.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A/quakeml#magnitude</momentMagnitudeID>
<scalarMoment>
<value>1898000000000000000000</value>
</scalarMoment>
<tensor>
<Mrr>
<value>920000000000000000000</value>
<uncertainty>4000000000000000000</uncertainty>
</Mrr>
<Mtt>
<value>-39000000000000000000</value>
<uncertainty>2000000000000000000</uncertainty>
</Mtt>
<Mpp>
<value>-881000000000000000000</value>
<uncertainty>2000000000000000000</uncertainty>
</Mpp>
<Mrt>
<value>637000000000000000000</value>
<uncertainty>23000000000000000000</uncertainty>
</Mrt>
<Mrp>
<value>-1530000000000000000000</value>
<uncertainty>33000000000000000000</uncertainty>
</Mrp>
<Mtp>
<value>205000000000000000000</value>
<uncertainty>1000000000000000000</uncertainty>
</Mtp>
</tensor>
<sourceTimeFunction>
<type>triangle</type>
<duration>56.0</duration>
</sourceTimeFunction>
</momentTensor>
<triggeringOriginID>smi:www.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A#reforigin</triggeringOriginID>
<nodalPlanes>
<nodalPlane1>
<strike>
<value>355</value>
</strike>
<dip>
<value>15</value>
</dip>
<rake>
<value>106</value>
</rake>
</nodalPlane1>
<nodalPlane2>
<strike>
<value>159</value>
</strike>
<dip>
<value>76</value>
</dip>
<rake>
<value>86</value>
</rake>
</nodalPlane2>
</nodalPlanes>
<principalAxes>
<tAxis>
<azimuth>
<value>63</value>
</azimuth>
<plunge>
<value>59</value>
</plunge>
<length>
<value>1903000000000000000000</value>
</length>
</tAxis>
<pAxis>
<azimuth>
<value>252</value>
</azimuth>
<plunge>
<value>30</value>
</plunge>
<length>
<value>-1892000000000000000000</value>
</length>
</pAxis>
<nAxis>
<azimuth>
<value>160</value>
</azimuth>
<plunge>
<value>4</value>
</plunge>
<length>
<value>-12000000000000000000</value>
</length>
</nAxis>
</principalAxes>
<creationInfo>
<agencyID>GCMT</agencyID>
<version>V10</version>
</creationInfo>
</focalMechanism>
<magnitude publicID="smi:ds.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A/quakeml#magnitude">
<mag>
<value>8.1</value>
</mag>
<type>Mwc</type>
</magnitude>
<origin publicID="smi:www.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A#reforigin">
<time>
<value>2014-04-01T23:46:47.300Z</value>
</time>
<longitude>
<value>-70.77</value>
</longitude>
<latitude>
<value>-19.61</value>
</latitude>
<depth>
<value>25000.0</value>
</depth>
</origin>
<origin publicID="smi:www.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A#cmtorigin">
<time>
<value>2014-04-01T23:47:31.500Z</value>
</time>
<longitude>
<value>-70.81</value>
</longitude>
<latitude>
<value>-19.7</value>
</latitude>
<depth>
<value>21600.0</value>
</depth>
<timeFixed>false</timeFixed>
<epicenterFixed>false</epicenterFixed>
</origin>
<preferredOriginID>smi:www.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A#cmtorigin</preferredOriginID>
<preferredFocalMechanismID>smi:ds.iris.edu/spudservice/momenttensor/gcmtid/C201404012346A/quakeml#focalmechanism</preferredFocalMechanismID>
<type>earthquake</type>
</event>
<creationInfo>
<creationTime>2015-05-07T09:58:12.311Z</creationTime>
<version>V10</version>
</creationInfo>
</eventParameters>
</q:quakeml>

ObsPy/data/IV_BOB.mseed LFS Normal file

Binary file not shown.

ObsPy/data/IV_BOB.xml Normal file

File diff suppressed because one or more lines are too long

ObsPy/data/all_stations.xml Normal file

File diff suppressed because it is too large


@@ -0,0 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?><q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2" xmlns:iris="http://service.iris.edu/fdsnws/event/1/" xmlns="http://quakeml.org/xmlns/bed/1.2" xmlns:xsi="http://www.w3.org/2000/10/XMLSchema-instance" xsi:schemaLocation="http://quakeml.org/schema/xsd http://quakeml.org/schema/xsd/QuakeML-1.2.xsd"><eventParameters publicID="smi:service.iris.edu/fdsnws/event/1/query"><event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=3279407">
<type>earthquake</type>
<description xmlns:iris="http://service.iris.edu/fdsnws/event/1/" iris:FEcode="228">
<type>Flinn-Engdahl region</type>
<text>NEAR EAST COAST OF HONSHU, JAPAN</text>
</description>
<preferredMagnitudeID>smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642444</preferredMagnitudeID>
<preferredOriginID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933375</preferredOriginID>
<origin xmlns:iris="http://service.iris.edu/fdsnws/event/1/" publicID="smi:service.iris.edu/fdsnws/event/1/query?originid=9933375" iris:contributor="ISC" iris:catalog="ISC" iris:contributorOriginId="02227159" iris:contributorEventId="16461282">
<time>
<value>2011-03-11T05:46:23.2000</value>
</time>
<creationInfo>
<author>ISC</author>
</creationInfo>
<latitude>
<value>38.2963</value>
</latitude>
<longitude>
<value>142.498</value>
</longitude>
<depth>
<value>19700.0</value>
</depth>
</origin>
<magnitude publicID="smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642444">
<originID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933383</originID>
<type>MW</type>
<mag>
<value>9.1</value>
</mag>
<creationInfo>
<author>GCMT</author>
</creationInfo>
</magnitude>
</event></eventParameters></q:quakeml>


@@ -0,0 +1,184 @@
<?xml version="1.0" encoding="UTF-8"?><q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2" xmlns:iris="http://service.iris.edu/fdsnws/event/1/" xmlns="http://quakeml.org/xmlns/bed/1.2" xmlns:xsi="http://www.w3.org/2000/10/XMLSchema-instance" xsi:schemaLocation="http://quakeml.org/schema/xsd http://quakeml.org/schema/xsd/QuakeML-1.2.xsd">
<eventParameters publicID="smi:service.iris.edu/fdsnws/event/1/query">
<event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=3279407">
<type>earthquake</type>
<description xmlns:iris="http://service.iris.edu/fdsnws/event/1/" iris:FEcode="228">
<type>Flinn-Engdahl region</type>
<text>NEAR EAST COAST OF HONSHU, JAPAN</text>
</description>
<preferredMagnitudeID>smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642444</preferredMagnitudeID>
<preferredOriginID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933375</preferredOriginID>
<origin xmlns:iris="http://service.iris.edu/fdsnws/event/1/" publicID="smi:service.iris.edu/fdsnws/event/1/query?originid=9933375" iris:contributor="ISC" iris:catalog="ISC" iris:contributorOriginId="02227159" iris:contributorEventId="16461282">
<time>
<value>2011-03-11T05:46:23.2000</value>
</time>
<creationInfo>
<author>ISC</author>
</creationInfo>
<latitude>
<value>38.2963</value>
</latitude>
<longitude>
<value>142.498</value>
</longitude>
<depth>
<value>19700.0</value>
</depth>
</origin>
<magnitude publicID="smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642444">
<originID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933383</originID>
<type>MW</type>
<mag>
<value>9.1</value>
</mag>
<creationInfo>
<author>GCMT</author>
</creationInfo>
</magnitude>
</event>
<event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=3279489">
<type>earthquake</type>
<description xmlns:iris="http://service.iris.edu/fdsnws/event/1/" iris:FEcode="228">
<type>Flinn-Engdahl region</type>
<text>NEAR EAST COAST OF HONSHU, JAPAN</text>
</description>
<preferredMagnitudeID>smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642643</preferredMagnitudeID>
<preferredOriginID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933511</preferredOriginID>
<origin xmlns:iris="http://service.iris.edu/fdsnws/event/1/" publicID="smi:service.iris.edu/fdsnws/event/1/query?originid=9933511" iris:contributor="ISC" iris:catalog="ISC" iris:contributorOriginId="02227176" iris:contributorEventId="2707261">
<time>
<value>2011-03-11T06:15:37.5700</value>
</time>
<creationInfo>
<author>ISC</author>
</creationInfo>
<latitude>
<value>36.2274</value>
</latitude>
<longitude>
<value>141.088</value>
</longitude>
<depth>
<value>25400.0</value>
</depth>
</origin>
<magnitude publicID="smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642643">
<originID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933514</originID>
<type>MW</type>
<mag>
<value>7.9</value>
</mag>
<creationInfo>
<author>GCMT</author>
</creationInfo>
</magnitude>
</event>
<event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=3279409">
<type>earthquake</type>
<description xmlns:iris="http://service.iris.edu/fdsnws/event/1/" iris:FEcode="229">
<type>Flinn-Engdahl region</type>
<text>OFF EAST COAST OF HONSHU, JAPAN</text>
</description>
<preferredMagnitudeID>smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642722</preferredMagnitudeID>
<preferredOriginID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933564</preferredOriginID>
<origin xmlns:iris="http://service.iris.edu/fdsnws/event/1/" publicID="smi:service.iris.edu/fdsnws/event/1/query?originid=9933564" iris:contributor="ISC" iris:catalog="ISC" iris:contributorOriginId="02227183" iris:contributorEventId="16304073">
<time>
<value>2011-03-11T06:25:50.7400</value>
</time>
<creationInfo>
<author>ISC</author>
</creationInfo>
<latitude>
<value>38.051</value>
</latitude>
<longitude>
<value>144.6297</value>
</longitude>
<depth>
<value>19800.0</value>
</depth>
</origin>
<magnitude publicID="smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16642722">
<originID>smi:service.iris.edu/fdsnws/event/1/query?originid=9933568</originID>
<type>MW</type>
<mag>
<value>7.6</value>
</mag>
<creationInfo>
<author>GCMT</author>
</creationInfo>
</magnitude>
</event>
<event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=3282643">
<type>earthquake</type>
<description xmlns:iris="http://service.iris.edu/fdsnws/event/1/" iris:FEcode="228">
<type>Flinn-Engdahl region</type>
<text>NEAR EAST COAST OF HONSHU, JAPAN</text>
</description>
<preferredMagnitudeID>smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16176888</preferredMagnitudeID>
<preferredOriginID>smi:service.iris.edu/fdsnws/event/1/query?originid=9535390</preferredOriginID>
<origin xmlns:iris="http://service.iris.edu/fdsnws/event/1/" publicID="smi:service.iris.edu/fdsnws/event/1/query?originid=9535390" iris:contributor="NEIC PDE-M" iris:catalog="NEIC PDE">
<time>
<value>2011-04-07T14:32:43.2900</value>
</time>
<creationInfo>
<author>NEIC</author>
</creationInfo>
<latitude>
<value>38.276</value>
</latitude>
<longitude>
<value>141.588</value>
</longitude>
<depth>
<value>42000.0</value>
</depth>
</origin>
<magnitude publicID="smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16176888">
<originID>smi:service.iris.edu/fdsnws/event/1/query?originid=9535390</originID>
<type>MW</type>
<mag>
<value>7.1</value>
</mag>
<creationInfo>
<author>UCMT</author>
</creationInfo>
</magnitude>
</event>
<event publicID="smi:service.iris.edu/fdsnws/event/1/query?eventid=3317566">
<type>earthquake</type>
<description xmlns:iris="http://service.iris.edu/fdsnws/event/1/" iris:FEcode="229">
<type>Flinn-Engdahl region</type>
<text>OFF EAST COAST OF HONSHU, JAPAN</text>
</description>
<preferredMagnitudeID>smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16183694</preferredMagnitudeID>
<preferredOriginID>smi:service.iris.edu/fdsnws/event/1/query?originid=9540887</preferredOriginID>
<origin xmlns:iris="http://service.iris.edu/fdsnws/event/1/" publicID="smi:service.iris.edu/fdsnws/event/1/query?originid=9540887" iris:contributor="NEIC PDE-M" iris:catalog="NEIC PDE">
<time>
<value>2011-07-10T00:57:10.8000</value>
</time>
<creationInfo>
<author>NEIC</author>
</creationInfo>
<latitude>
<value>38.034</value>
</latitude>
<longitude>
<value>143.264</value>
</longitude>
<depth>
<value>23000.0</value>
</depth>
</origin>
<magnitude publicID="smi:service.iris.edu/fdsnws/event/1/query?magnitudeid=16183694">
<originID>smi:service.iris.edu/fdsnws/event/1/query?originid=9540887</originID>
<type>MW</type>
<mag>
<value>7.0</value>
</mag>
<creationInfo>
<author>UCMT</author>
</creationInfo>
</magnitude>
</event>
</eventParameters>
</q:quakeml>

ObsPy/data/example.mseed LFS Normal file

Binary file not shown.

ObsPy/data/example.sac LFS Normal file

Binary file not shown.

ObsPy/data/mtcarmel.mseed LFS Normal file

Binary file not shown.

ObsPy/data/mtcarmel_100hz.mseed LFS Normal file

Binary file not shown.

ObsPy/data/station_BFO.xml Normal file

@@ -0,0 +1,587 @@
<?xml version="1.0" encoding="UTF-8"?>
<FDSNStationXML xmlns="http://www.fdsn.org/xml/station/1" schemaVersion="1.0">
<Source>SeisComP3</Source>
<Sender>ODC</Sender>
<Created>2015-02-20T10:41:48</Created>
<Network code="GR" startDate="1976-02-17T00:00:00" restrictedStatus="open">
<Description>German Regional Seismic Network, BGR Hannover</Description>
<Station code="BFO" startDate="1991-01-01T00:00:00" restrictedStatus="open">
<Latitude>48.3311</Latitude>
<Longitude>8.3303</Longitude>
<Elevation>589</Elevation>
<Site>
<Name>GRSN Station Black Forest Observatory</Name>
<Country>Germany</Country>
</Site>
<CreationDate>1991-01-01T00:00:00</CreationDate>
<Channel code="BHE" startDate="1991-01-01T00:00:00" endDate="2011-10-19T00:00:00" restrictedStatus="open" locationCode="">
<Latitude>48.3311</Latitude>
<Longitude>8.3303</Longitude>
<Elevation>589</Elevation>
<Depth>0</Depth>
<Azimuth>90</Azimuth>
<Dip>0</Dip>
<SampleRate>20</SampleRate>
<SampleRateRatio>
<NumberSamples>20</NumberSamples>
<NumberSeconds>1</NumberSeconds>
</SampleRateRatio>
<StorageFormat>Steim2</StorageFormat>
<ClockDrift>0.05</ClockDrift>
<Sensor resourceId="Sensor#20120723141113.907112.4138">
<Description>STS2</Description>
</Sensor>
<DataLogger resourceId="Datalogger#20120723141113.883481.3977"/>
<Response>
<InstrumentSensitivity>
<Value>598802400</Value>
<Frequency>1</Frequency>
<InputUnits>
<Name>m/s</Name>
</InputUnits>
<OutputUnits>
<Name/>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros resourceId="ResponsePAZ#20120723141113.90714.4139" name="GR.BFO..BHE_1991.001_STS2">
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name>V</Name>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (RADIANS/SECOND)</PzTransferFunctionType>
<NormalizationFactor>1</NormalizationFactor>
<NormalizationFrequency>1</NormalizationFrequency>
<Pole number="0">
<Real>-0.0367429</Real>
<Imaginary>0.036754</Imaginary>
</Pole>
<Pole number="1">
<Real>-0.0367429</Real>
<Imaginary>-0.036754</Imaginary>
</Pole>
<Zero number="2">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="3">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
</PolesZeros>
<StageGain>
<Value>598802400</Value>
<Frequency>1</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>6.930355364e-310</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0</Delay>
<Correction>0</Correction>
</Decimation>
<StageGain>
<Value>1</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
<Channel code="BHN" startDate="1991-01-01T00:00:00" endDate="2011-10-19T00:00:00" restrictedStatus="open" locationCode="">
<Latitude>48.3311</Latitude>
<Longitude>8.3303</Longitude>
<Elevation>589</Elevation>
<Depth>0</Depth>
<Azimuth>0</Azimuth>
<Dip>0</Dip>
<SampleRate>20</SampleRate>
<SampleRateRatio>
<NumberSamples>20</NumberSamples>
<NumberSeconds>1</NumberSeconds>
</SampleRateRatio>
<StorageFormat>Steim2</StorageFormat>
<ClockDrift>0.05</ClockDrift>
<Sensor resourceId="Sensor#20120723141113.906463.4134">
<Description>STS2</Description>
</Sensor>
<DataLogger resourceId="Datalogger#20120723141113.883481.3977"/>
<Response>
<InstrumentSensitivity>
<Value>598802400</Value>
<Frequency>1</Frequency>
<InputUnits>
<Name>m/s</Name>
</InputUnits>
<OutputUnits>
<Name/>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros resourceId="ResponsePAZ#20120723141113.906491.4135" name="GR.BFO..BHN_1991.001_STS2">
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name>V</Name>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (RADIANS/SECOND)</PzTransferFunctionType>
<NormalizationFactor>1</NormalizationFactor>
<NormalizationFrequency>1</NormalizationFrequency>
<Pole number="0">
<Real>-0.0367429</Real>
<Imaginary>0.036754</Imaginary>
</Pole>
<Pole number="1">
<Real>-0.0367429</Real>
<Imaginary>-0.036754</Imaginary>
</Pole>
<Zero number="2">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="3">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
</PolesZeros>
<StageGain>
<Value>598802400</Value>
<Frequency>1</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>6.930355389e-310</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0</Delay>
<Correction>0</Correction>
</Decimation>
<StageGain>
<Value>1</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
<Channel code="BHZ" startDate="1991-01-01T00:00:00" endDate="2011-10-19T00:00:00" restrictedStatus="open" locationCode="">
<Latitude>48.3311</Latitude>
<Longitude>8.3303</Longitude>
<Elevation>589</Elevation>
<Depth>0</Depth>
<Azimuth>0</Azimuth>
<Dip>-90</Dip>
<SampleRate>20</SampleRate>
<SampleRateRatio>
<NumberSamples>20</NumberSamples>
<NumberSeconds>1</NumberSeconds>
</SampleRateRatio>
<StorageFormat>Steim2</StorageFormat>
<ClockDrift>0.05</ClockDrift>
<Sensor resourceId="Sensor#20120723141113.905894.4130">
<Description>STS2</Description>
</Sensor>
<DataLogger resourceId="Datalogger#20120723141113.883481.3977"/>
<Response>
<InstrumentSensitivity>
<Value>598802400</Value>
<Frequency>1</Frequency>
<InputUnits>
<Name>m/s</Name>
</InputUnits>
<OutputUnits>
<Name/>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros resourceId="ResponsePAZ#20120723141113.905922.4131" name="GR.BFO..BHZ_1991.001_STS2">
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name>V</Name>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (RADIANS/SECOND)</PzTransferFunctionType>
<NormalizationFactor>1</NormalizationFactor>
<NormalizationFrequency>1</NormalizationFrequency>
<Pole number="0">
<Real>-0.0367429</Real>
<Imaginary>0.036754</Imaginary>
</Pole>
<Pole number="1">
<Real>-0.0367429</Real>
<Imaginary>-0.036754</Imaginary>
</Pole>
<Zero number="2">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="3">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
</PolesZeros>
<StageGain>
<Value>598802400</Value>
<Frequency>1</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>6.930355466e-310</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0</Delay>
<Correction>0</Correction>
</Decimation>
<StageGain>
<Value>1</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
</Station>
</Network>
<Network code="II" startDate="1980-01-01T00:00:00" restrictedStatus="open">
<Description>(GSN) Global Seismograph Network (IRIS/IDA)</Description>
<Station code="BFO" startDate="1996-05-29T00:00:00" restrictedStatus="open">
<Latitude>48.3319</Latitude>
<Longitude>8.3311</Longitude>
<Elevation>589</Elevation>
<Site>
<Name>Black Forest Observatory, Schiltach, Germany</Name>
<Country> Germany</Country>
</Site>
<CreationDate>1996-05-29T00:00:00</CreationDate>
<Channel code="BHE" startDate="2008-01-16T00:00:00" endDate="2012-06-21T14:29:59" restrictedStatus="open" locationCode="00">
<Latitude>48.3319</Latitude>
<Longitude>8.3311</Longitude>
<Elevation>589</Elevation>
<Depth>0</Depth>
<Azimuth>90</Azimuth>
<Dip>0</Dip>
<SampleRate>20</SampleRate>
<SampleRateRatio>
<NumberSamples>20</NumberSamples>
<NumberSeconds>1</NumberSeconds>
</SampleRateRatio>
<StorageFormat>Steim1</StorageFormat>
<ClockDrift>2.5e-06</ClockDrift>
<Sensor resourceId="Sensor#20121207101201.213013.1484">
<Type>VBB</Type>
<Description>STS-1H</Description>
<Manufacturer>Streckeisen</Manufacturer>
<Model>STS-1H</Model>
</Sensor>
<DataLogger resourceId="Datalogger#20121207101201.210619.1481">
<Description>BFO.2008.016.H00</Description>
</DataLogger>
<Response>
<InstrumentSensitivity>
<Value>3.1632e+10</Value>
<Frequency>0.05</Frequency>
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name/>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros resourceId="ResponsePAZ#20121207101201.213237.1485" name="BFO.2008.016.HE00">
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name>V</Name>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>100.06</NormalizationFactor>
<NormalizationFrequency>0.05</NormalizationFrequency>
<Pole number="0">
<Real>-6.28</Real>
<Imaginary>7.78213</Imaginary>
</Pole>
<Pole number="1">
<Real>-6.28</Real>
<Imaginary>-7.78213</Imaginary>
</Pole>
<Pole number="2">
<Real>-0.002142</Real>
<Imaginary>0.001753</Imaginary>
</Pole>
<Pole number="3">
<Real>-0.002142</Real>
<Imaginary>-0.001753</Imaginary>
</Pole>
<Zero number="4">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="5">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
</PolesZeros>
<StageGain>
<Value>2539.63</Value>
<Frequency>0.05</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>0</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0</Delay>
<Correction>0</Correction>
</Decimation>
<StageGain>
<Value>700143</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
<Channel code="BHN" startDate="2008-01-16T00:00:00" endDate="2012-06-21T14:29:59" restrictedStatus="open" locationCode="00">
<Latitude>48.3319</Latitude>
<Longitude>8.3311</Longitude>
<Elevation>589</Elevation>
<Depth>0</Depth>
<Azimuth>0</Azimuth>
<Dip>0</Dip>
<SampleRate>20</SampleRate>
<SampleRateRatio>
<NumberSamples>20</NumberSamples>
<NumberSeconds>1</NumberSeconds>
</SampleRateRatio>
<StorageFormat>Steim1</StorageFormat>
<ClockDrift>2.5e-06</ClockDrift>
<Sensor resourceId="Sensor#20121207101201.23289.1507">
<Type>VBB</Type>
<Description>STS-1H</Description>
<Manufacturer>Streckeisen</Manufacturer>
<Model>STS-1H</Model>
</Sensor>
<DataLogger resourceId="Datalogger#20121207101201.210619.1481">
<Description>BFO.2008.016.H00</Description>
</DataLogger>
<Response>
<InstrumentSensitivity>
<Value>2.39545e+10</Value>
<Frequency>0.05</Frequency>
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name/>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros resourceId="ResponsePAZ#20121207101201.233133.1508" name="BFO.2008.016.HN00">
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name>V</Name>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>100.076</NormalizationFactor>
<NormalizationFrequency>0.05</NormalizationFrequency>
<Pole number="0">
<Real>-6.29</Real>
<Imaginary>7.77405</Imaginary>
</Pole>
<Pole number="1">
<Real>-6.29</Real>
<Imaginary>-7.77405</Imaginary>
</Pole>
<Pole number="2">
<Real>-0.002157</Real>
<Imaginary>0.001657</Imaginary>
</Pole>
<Pole number="3">
<Real>-0.002157</Real>
<Imaginary>-0.001657</Imaginary>
</Pole>
<Zero number="4">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="5">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
</PolesZeros>
<StageGain>
<Value>2572.99</Value>
<Frequency>0.05</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>0</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0</Delay>
<Correction>0</Correction>
</Decimation>
<StageGain>
<Value>700143</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
<Channel code="BHZ" startDate="2008-01-16T00:00:00" endDate="2012-06-21T14:29:59" restrictedStatus="open" locationCode="00">
<Latitude>48.3319</Latitude>
<Longitude>8.3311</Longitude>
<Elevation>589</Elevation>
<Depth>0</Depth>
<Azimuth>0</Azimuth>
<Dip>-90</Dip>
<SampleRate>20</SampleRate>
<SampleRateRatio>
<NumberSamples>20</NumberSamples>
<NumberSeconds>1</NumberSeconds>
</SampleRateRatio>
<StorageFormat>Steim1</StorageFormat>
<ClockDrift>2.5e-06</ClockDrift>
<Sensor resourceId="Sensor#20121207101201.250929.1527">
<Type>VBB</Type>
<Description>STS-1V</Description>
<Manufacturer>Streckeisen</Manufacturer>
<Model>STS-1V</Model>
</Sensor>
<DataLogger resourceId="Datalogger#20121207101201.210619.1481">
<Description>BFO.2008.016.H00</Description>
</DataLogger>
<Response>
<InstrumentSensitivity>
<Value>2.27009e+10</Value>
<Frequency>0.05</Frequency>
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name/>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros resourceId="ResponsePAZ#20121207101201.251156.1528" name="BFO.2008.016.HZ00">
<InputUnits>
<Name>M/S</Name>
</InputUnits>
<OutputUnits>
<Name>V</Name>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>96.1326</NormalizationFactor>
<NormalizationFrequency>0.05</NormalizationFrequency>
<Pole number="0">
<Real>-6.27451</Real>
<Imaginary>7.53309</Imaginary>
</Pole>
<Pole number="1">
<Real>-6.27451</Real>
<Imaginary>-7.53309</Imaginary>
</Pole>
<Pole number="2">
<Real>-0.002003</Real>
<Imaginary>0.001898</Imaginary>
</Pole>
<Pole number="3">
<Real>-0.002003</Real>
<Imaginary>-0.001898</Imaginary>
</Pole>
<Zero number="4">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="5">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
</PolesZeros>
<StageGain>
<Value>2419.66</Value>
<Frequency>0.05</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<Coefficients>
<InputUnits>
<Name>V</Name>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>0</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0</Delay>
<Correction>0</Correction>
</Decimation>
<StageGain>
<Value>700143</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
</Station>
</Network>
</FDSNStationXML>
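The `PolesZeros` stages in the StationXML above can be sanity-checked numerically: for a `LAPLACE (HERTZ)` transfer function, H(s) = A0 · Π(s − z) / Π(s − p) evaluated at s = j·f should have unit magnitude at the `NormalizationFrequency`. A minimal pure-Python sketch, using the stage-1 values of the `II.BFO` `00.BHZ` channel above (variable names are ours, not part of the schema):

```python
# Stage-1 poles/zeros of II.BFO.00.BHZ, copied from the StationXML above.
# PzTransferFunctionType is "LAPLACE (HERTZ)", so we evaluate at s = j*f
# with f in Hz (not rad/s).
a0 = 96.1326                      # <NormalizationFactor>
f_norm = 0.05                     # <NormalizationFrequency> in Hz
zeros = [0j, 0j]
poles = [-6.27451 + 7.53309j, -6.27451 - 7.53309j,
         -0.002003 + 0.001898j, -0.002003 - 0.001898j]

def paz_response(f, a0, poles, zeros):
    """Evaluate H(s) = a0 * prod(s - z) / prod(s - p) at s = j*f."""
    s = 1j * f
    num = 1.0 + 0j
    for z in zeros:
        num *= (s - z)
    den = 1.0 + 0j
    for p in poles:
        den *= (s - p)
    return a0 * num / den

# The normalization factor scales |H| to 1 at the normalization frequency:
print(abs(paz_response(f_norm, a0, poles, zeros)))  # ≈ 1.0
```

In practice one would let ObsPy evaluate the full response, but the check above makes the role of `NormalizationFactor` explicit.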

467
ObsPy/data/station_PFO.xml Normal file

@@ -0,0 +1,467 @@
<?xml version="1.0" encoding="ISO-8859-1"?>
<FDSNStationXML xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.fdsn.org/xml/station/1" schemaVersion="1.0" xsi:schemaLocation="http://www.fdsn.org/xml/station/1 http://www.fdsn.org/xml/station/fdsn-station-1.0.xsd">
<Source>IRIS-DMC</Source>
<Sender>IRIS-DMC</Sender>
<Module>IRIS WEB SERVICE: fdsnws-station | version: 1.0.11</Module>
<ModuleURI>http://service.iris.edu/fdsnws/station/1/query?network=II&amp;level=response&amp;station=PFO&amp;starttime=2011-03-11T05%3A46%3A13.000000&amp;endtime=2011-03-11T05%3A46%3A33.000000&amp;channel=BHZ</ModuleURI>
<Created>2014-02-17T15:43:45</Created>
<Network code="II" startDate="1986-01-01T00:00:00" endDate="2020-12-12T23:59:59" restrictedStatus="open">
<Description>Global Seismograph Network (GSN - IRIS/IDA)</Description>
<TotalNumberStations>50</TotalNumberStations>
<SelectedNumberStations>1</SelectedNumberStations>
<Station code="PFO" startDate="2006-07-13T00:00:00" endDate="2020-12-31T23:59:59" restrictedStatus="open" alternateNetworkCodes=".EARTHSCOPE,_ANSS,.UNRESTRICTED,_CARIBE-EWS,_GSN-BROADBAND,_US-REF,_GSN,_STS-1,_ANSS-BB,_FDSN,_FDSN-ALL,_US-ALL,_REALTIME">
<Latitude>33.6107</Latitude>
<Longitude>-116.4555</Longitude>
<Elevation>1280.0</Elevation>
<Site>
<Name>Pinon Flat, California, USA</Name>
</Site>
<CreationDate>1986-10-24T00:00:00</CreationDate>
<TotalNumberChannels>250</TotalNumberChannels>
<SelectedNumberChannels>2</SelectedNumberChannels>
<Channel locationCode="00" startDate="2010-07-30T18:50:00" restrictedStatus="open" endDate="2012-07-02T03:59:59" code="BHZ">
<Comment>
<Value>S/N #119005</Value>
</Comment>
<Latitude>33.6107</Latitude>
<Longitude>-116.4555</Longitude>
<Elevation>1280.0</Elevation>
<Depth>5.3</Depth>
<Azimuth>0.0</Azimuth>
<Dip>-90.0</Dip>
<Type>CONTINUOUS</Type>
<Type>GEOPHYSICAL</Type>
<SampleRate>20.0</SampleRate>
<ClockDrift>2.5E-6</ClockDrift>
<CalibrationUnits>
<Name>V</Name>
<Description>Volts</Description>
</CalibrationUnits>
<Sensor>
<Type>Streckeisen STS-1 Seismometer with Metrozet E300</Type>
</Sensor>
<Response>
<InstrumentSensitivity>
<Value>5.24814E9</Value>
<Frequency>0.05</Frequency>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters Per Second</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters Per Second</Description>
</InputUnits>
<OutputUnits>
<Name>V</Name>
<Description>Volts</Description>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>11.5758</NormalizationFactor>
<NormalizationFrequency>.05</NormalizationFrequency>
<Zero number="0">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="1">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="2">
<Real>-12.5</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="3">
<Real>-.0242718</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="4">
<Real>-.0242718</Real>
<Imaginary>0</Imaginary>
</Zero>
<Pole number="5">
<Real>-.0019211</Real>
<Imaginary>.00194895</Imaginary>
</Pole>
<Pole number="6">
<Real>-.0019211</Real>
<Imaginary>-.00194895</Imaginary>
</Pole>
<Pole number="7">
<Real>-.0242315</Real>
<Imaginary>.00153484</Imaginary>
</Pole>
<Pole number="8">
<Real>-.0242315</Real>
<Imaginary>-.00153484</Imaginary>
</Pole>
<Pole number="9">
<Real>-7.691</Real>
<Imaginary>9.25817</Imaginary>
</Pole>
<Pole number="10">
<Real>-7.691</Real>
<Imaginary>-9.25817</Imaginary>
</Pole>
</PolesZeros>
<StageGain>
<Value>3314.4</Value>
<Frequency>.05</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<PolesZeros>
<InputUnits>
<Name>V</Name>
<Description>Volts</Description>
</InputUnits>
<OutputUnits>
<Name>V</Name>
<Description>Volts</Description>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>1</NormalizationFactor>
<NormalizationFrequency>.05</NormalizationFrequency>
</PolesZeros>
<StageGain>
<Value>1</Value>
<Frequency>.05</Frequency>
</StageGain>
</Stage>
<Stage number="3">
<Coefficients>
<InputUnits>
<Name>V</Name>
<Description>Volts</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>20</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0.0</Delay>
<Correction>0.0</Correction>
</Decimation>
<StageGain>
<Value>1583330</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
<Stage number="4">
<Coefficients>
<InputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
<Numerator>-.0000000000000000365342</Numerator>
<Numerator>.0000000367488</Numerator>
<Numerator>-.00000042706</Numerator>
<Numerator>.00000114502</Numerator>
<Numerator>-.000000187594</Numerator>
<Numerator>-.000000337274</Numerator>
<Numerator>.00000278747</Numerator>
<Numerator>-.00000374403</Numerator>
<Numerator>.00000541172</Numerator>
<Numerator>.00000747336</Numerator>
<Numerator>-.000517759</Numerator>
<Numerator>.000210677</Numerator>
<Numerator>.0000463258</Numerator>
<Numerator>-.000608222</Numerator>
<Numerator>.00144175</Numerator>
<Numerator>-.00240627</Numerator>
<Numerator>.00322534</Numerator>
<Numerator>-.00350639</Numerator>
<Numerator>.00281441</Numerator>
<Numerator>-.000771971</Numerator>
<Numerator>-.00280512</Numerator>
<Numerator>.00777805</Numerator>
<Numerator>-.0135815</Numerator>
<Numerator>.0191765</Numerator>
<Numerator>-.0229704</Numerator>
<Numerator>.0240398</Numerator>
<Numerator>-.0220986</Numerator>
<Numerator>.00860734</Numerator>
<Numerator>.0117525</Numerator>
<Numerator>-.0447787</Numerator>
<Numerator>.0964923</Numerator>
<Numerator>-.191755</Numerator>
<Numerator>.527652</Numerator>
<Numerator>.724167</Numerator>
<Numerator>-.156905</Numerator>
<Numerator>.0442574</Numerator>
<Numerator>.00314168</Numerator>
<Numerator>-.0266714</Numerator>
<Numerator>.0361532</Numerator>
<Numerator>-.0385687</Numerator>
<Numerator>.0310842</Numerator>
<Numerator>-.0235259</Numerator>
<Numerator>.0153211</Numerator>
<Numerator>-.00740398</Numerator>
<Numerator>.00109645</Numerator>
<Numerator>.00309797</Numerator>
<Numerator>-.0051932</Numerator>
<Numerator>.00556131</Numerator>
<Numerator>-.0047611</Numerator>
<Numerator>.00338213</Numerator>
<Numerator>-.00192052</Numerator>
<Numerator>.000715218</Numerator>
<Numerator>.0000767719</Numerator>
<Numerator>-.000451897</Numerator>
<Numerator>.0005027</Numerator>
<Numerator>-.000565037</Numerator>
<Numerator>-.00005568</Numerator>
<Numerator>.0000157736</Numerator>
<Numerator>-.00000141985</Numerator>
<Numerator>.000000814909</Numerator>
<Numerator>.000000680795</Numerator>
<Numerator>-.00000125273</Numerator>
<Numerator>.00000152435</Numerator>
<Numerator>-.000000283336</Numerator>
<Numerator>-.0000000106384</Numerator>
<Numerator>.00000000125712</Numerator>
<Numerator>-.0000000000542954</Numerator>
</Coefficients>
<Decimation>
<InputSampleRate>20</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>1.6305</Delay>
<Correction>1.6305</Correction>
</Decimation>
<StageGain>
<Value>1</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
<Channel locationCode="10" startDate="2010-07-23T00:00:00" restrictedStatus="open" endDate="2013-10-30T23:59:59" code="BHZ">
<Comment>
<Value>S/N #484</Value>
</Comment>
<Latitude>33.6107</Latitude>
<Longitude>-116.4555</Longitude>
<Elevation>1280.0</Elevation>
<Depth>5.3</Depth>
<Azimuth>0.0</Azimuth>
<Dip>-90.0</Dip>
<Type>CONTINUOUS</Type>
<Type>GEOPHYSICAL</Type>
<SampleRate>40.0</SampleRate>
<ClockDrift>1.25E-6</ClockDrift>
<CalibrationUnits>
<Name>V</Name>
<Description>Volts</Description>
</CalibrationUnits>
<Sensor>
<Type>Nanometrics Trillium 240 Seismometer</Type>
</Sensor>
<Response>
<InstrumentSensitivity>
<Value>2.00625E9</Value>
<Frequency>0.05</Frequency>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters Per Second</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
</InstrumentSensitivity>
<Stage number="1">
<PolesZeros>
<InputUnits>
<Name>M/S</Name>
<Description>Velocity in Meters Per Second</Description>
</InputUnits>
<OutputUnits>
<Name>V</Name>
<Description>Volts</Description>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>9667.55</NormalizationFactor>
<NormalizationFrequency>.05</NormalizationFrequency>
<Zero number="0">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="1">
<Real>0</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="2">
<Real>-14.588</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="3">
<Real>-25.4807</Real>
<Imaginary>0</Imaginary>
</Zero>
<Zero number="4">
<Real>-510.408</Real>
<Imaginary>0</Imaginary>
</Zero>
<Pole number="5">
<Real>-.00281527</Real>
<Imaginary>.00280021</Imaginary>
</Pole>
<Pole number="6">
<Real>-.00281527</Real>
<Imaginary>-.00280021</Imaginary>
</Pole>
<Pole number="7">
<Real>-16.994</Real>
<Imaginary>0</Imaginary>
</Pole>
<Pole number="8">
<Real>-30.558</Real>
<Imaginary>41.237</Imaginary>
</Pole>
<Pole number="9">
<Real>-30.558</Real>
<Imaginary>-41.237</Imaginary>
</Pole>
<Pole number="10">
<Real>-88.76</Real>
<Imaginary>181.91</Imaginary>
</Pole>
<Pole number="11">
<Real>-88.76</Real>
<Imaginary>-181.91</Imaginary>
</Pole>
</PolesZeros>
<StageGain>
<Value>1200</Value>
<Frequency>.05</Frequency>
</StageGain>
</Stage>
<Stage number="2">
<PolesZeros>
<InputUnits>
<Name>V</Name>
<Description>Volts</Description>
</InputUnits>
<OutputUnits>
<Name>V</Name>
<Description>Volts</Description>
</OutputUnits>
<PzTransferFunctionType>LAPLACE (HERTZ)</PzTransferFunctionType>
<NormalizationFactor>1</NormalizationFactor>
<NormalizationFrequency>.05</NormalizationFrequency>
</PolesZeros>
<StageGain>
<Value>1</Value>
<Frequency>.05</Frequency>
</StageGain>
</Stage>
<Stage number="3">
<Coefficients>
<InputUnits>
<Name>V</Name>
<Description>Volts</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
</Coefficients>
<Decimation>
<InputSampleRate>40</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0.0</Delay>
<Correction>0.0</Correction>
</Decimation>
<StageGain>
<Value>1671840</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
<Stage number="4">
<Coefficients>
<InputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</InputUnits>
<OutputUnits>
<Name>COUNTS</Name>
<Description>Digital Counts</Description>
</OutputUnits>
<CfTransferFunctionType>DIGITAL</CfTransferFunctionType>
<Numerator>.000000000000418952</Numerator>
<Numerator>.000330318</Numerator>
<Numerator>.00102921</Numerator>
<Numerator>-.00314123</Numerator>
<Numerator>.000205709</Numerator>
<Numerator>.00152521</Numerator>
<Numerator>-.00623193</Numerator>
<Numerator>.0104801</Numerator>
<Numerator>-.0131202</Numerator>
<Numerator>.0107821</Numerator>
<Numerator>-.00144455</Numerator>
<Numerator>-.0158729</Numerator>
<Numerator>.0395074</Numerator>
<Numerator>-.0651036</Numerator>
<Numerator>.0853716</Numerator>
<Numerator>-.0891913</Numerator>
<Numerator>.0500619</Numerator>
<Numerator>.837233</Numerator>
<Numerator>.266723</Numerator>
<Numerator>-.166693</Numerator>
<Numerator>.095284</Numerator>
<Numerator>-.0509218</Numerator>
<Numerator>.0161458</Numerator>
<Numerator>.00706362</Numerator>
<Numerator>-.0183877</Numerator>
<Numerator>.0199414</Numerator>
<Numerator>-.0154895</Numerator>
<Numerator>.00852735</Numerator>
<Numerator>-.00255789</Numerator>
<Numerator>-.00181103</Numerator>
<Numerator>.00242649</Numerator>
<Numerator>-.00375769</Numerator>
<Numerator>.000467293</Numerator>
<Numerator>.000633072</Numerator>
<Numerator>-.00000156874</Numerator>
<Numerator>-.000012548</Numerator>
<Numerator>.000000321041</Numerator>
<Numerator>-.0000000263324</Numerator>
<Numerator>-.0000000509997</Numerator>
</Coefficients>
<Decimation>
<InputSampleRate>40</InputSampleRate>
<Factor>1</Factor>
<Offset>0</Offset>
<Delay>0.43045</Delay>
<Correction>0.43045</Correction>
</Decimation>
<StageGain>
<Value>1</Value>
<Frequency>0</Frequency>
</StageGain>
</Stage>
</Response>
</Channel>
</Station>
</Network>
</FDSNStationXML>
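A quick cross-check of the `station_PFO.xml` metadata above: the channel's overall `InstrumentSensitivity` should (approximately) equal the product of the per-stage `StageGain` values at the reference frequency. A sketch for the `00.BHZ` channel, with values copied from the XML:

```python
# Per-stage gains of II.PFO.00.BHZ (stages 1-4), from the StationXML above.
stage_gains = [3314.4, 1.0, 1583330.0, 1.0]

total = 1.0
for g in stage_gains:
    total *= g

declared = 5.24814e9   # <InstrumentSensitivity><Value> from the file
print(total, declared, abs(total - declared) / declared)
```

The product agrees with the declared sensitivity to better than 0.1%, which is the expected level of rounding in such files.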

66
ObsPy/data/tohoku.xml Normal file

@@ -0,0 +1,66 @@
<?xml version="1.0" encoding="US-ASCII" standalone="yes"?>
<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2" xmlns="http://quakeml.org/xmlns/bed/1.2" xmlns:ingv="http://webservices.rm.ingv.it/fdsnws/event/1">
<eventParameters publicID="smi:webservices.rm.ingv.it/fdsnws/event/1/query">
<event publicID="smi:webservices.rm.ingv.it/fdsnws/event/1/query?eventId=2322919">
<type>earthquake</type>
<description>
<type>region name</type>
<text>Near east coast of eastern Honshu, Japan</text>
</description>
<preferredMagnitudeID>smi:webservices.rm.ingv.it/fdsnws/event/1/query?magnitudeId=2122949</preferredMagnitudeID>
<preferredOriginID>smi:webservices.rm.ingv.it/fdsnws/event/1/query?originId=1180759</preferredOriginID>
<creationInfo>
<agencyID>INGV</agencyID>
<author>CSEM</author>
<creationTime>2013-01-05T23:02:38</creationTime>
</creationInfo>
<origin publicID="smi:webservices.rm.ingv.it/fdsnws/event/1/query?originId=1180759">
<evaluationMode>manual</evaluationMode>
<type>hypocenter</type>
<time>
<value>2011-03-11T05:46:23.000000</value>
</time>
<latitude>
<value>38.3</value>
</latitude>
<longitude>
<value>142.5</value>
</longitude>
<depth>
<value>21000</value>
</depth>
<depthType>from location</depthType>
<quality>
<associatedPhaseCount>88</associatedPhaseCount>
<usedPhaseCount>65</usedPhaseCount>
<minimumDistance>0.00000</minimumDistance>
<maximumDistance>0.00000</maximumDistance>
<associatedStationCount>82</associatedStationCount>
<usedStationCount>65</usedStationCount>
</quality>
<evaluationStatus>reviewed</evaluationStatus>
<methodID>smi:webservices.rm.ingv.it/fdsnws/event/1/query?methodId=2</methodID>
<earthModelID>smi:webservices.rm.ingv.it/fdsnws/event/1/query?earthModelId=1</earthModelID>
<creationInfo>
<agencyID>INGV</agencyID>
<author>CSEM</author>
<version>1000</version>
<creationTime>2013-01-05T23:02:39</creationTime>
</creationInfo>
</origin>
<magnitude publicID="smi:webservices.rm.ingv.it/fdsnws/event/1/query?magnitudeId=2122949">
<originID>smi:webservices.rm.ingv.it/fdsnws/event/1/query?originId=1180759</originID>
<type>Mw</type>
<mag>
<value>9.0</value>
</mag>
<methodID>smi:webservices.rm.ingv.it/fdsnws/event/1/query?methodId=2</methodID>
<creationInfo>
<agencyID>INGV</agencyID>
<author>CSEM</author>
<creationTime>2013-01-05T23:02:39</creationTime>
</creationInfo>
</magnitude>
</event>
</eventParameters>
</q:quakeml>
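The `tohoku.xml` QuakeML above would normally be read with ObsPy's event machinery, but the basic fields can also be extracted with the standard library. A sketch using a trimmed stand-in document with the same structure and namespaces (the `publicID`s are shortened here for brevity):

```python
import xml.etree.ElementTree as ET

# QuakeML namespaces as declared in the file above.
NS = {"q": "http://quakeml.org/xmlns/quakeml/1.2",
      "bed": "http://quakeml.org/xmlns/bed/1.2"}

# Trimmed stand-in for ObsPy/data/tohoku.xml (same element structure).
xml_text = """<?xml version="1.0"?>
<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2"
           xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters publicID="smi:example/query">
    <event publicID="smi:example/query?eventId=2322919">
      <origin publicID="smi:example/query?originId=1180759">
        <time><value>2011-03-11T05:46:23.000000</value></time>
        <latitude><value>38.3</value></latitude>
        <longitude><value>142.5</value></longitude>
        <depth><value>21000</value></depth>
      </origin>
      <magnitude publicID="smi:example/query?magnitudeId=2122949">
        <type>Mw</type>
        <mag><value>9.0</value></mag>
      </magnitude>
    </event>
  </eventParameters>
</q:quakeml>"""

root = ET.fromstring(xml_text)
origin = root.find(".//bed:origin", NS)
lat = float(origin.find("bed:latitude/bed:value", NS).text)
lon = float(origin.find("bed:longitude/bed:value", NS).text)
depth_km = float(origin.find("bed:depth/bed:value", NS).text) / 1000.0  # QuakeML depth is in metres
mag = float(root.find(".//bed:magnitude/bed:mag/bed:value", NS).text)
print(lat, lon, depth_km, mag)
```

Note that default-namespaced elements must be addressed through a prefix (`bed:` here) when using `ElementTree` with a namespace map.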

BIN
ObsPy/data/waveform_BFO_BHE.sac LFS Normal file

Binary file not shown.

BIN
ObsPy/data/waveform_BFO_BHN.sac LFS Normal file

Binary file not shown.

BIN
ObsPy/data/waveform_BFO_BHZ.sac LFS Normal file

Binary file not shown.

BIN
ObsPy/data/waveform_PFO.mseed LFS Normal file

Binary file not shown.

Binary file not shown.

3
ObsPy/images/Event.svg Normal file

File diff suppressed because one or more lines are too long

Size: 27 KiB

File diff suppressed because one or more lines are too long

Size: 26 KiB

File diff suppressed because one or more lines are too long

Size: 36 KiB

BIN
ObsPy/images/cos_taper.png LFS Normal file

Binary file not shown.

BIN
ObsPy/images/notebook_toolbar.png LFS Normal file

Binary file not shown.

Binary file not shown.


@@ -1,7 +1,21 @@
-# Seismological Data Analysis
+# Seismological Data Analysis 2024
-Jupyter Notebooks for class "Seismological Data Analysis"
+Jupyter Notebooks for class "Seismological Data Analysis" (summer term 2024)
-## This is an empty template
+## License
+The content of this repository is licensed under Creative Commons Attribution 4.0 International (CC-BY 4.0) ***unless otherwise stated*** (see below)
+## Lectures
-***Do not use this repository.*** Instead create a new repository every year and fill it up with the contents for the students.
+### 01 - Python Introduction
+This has been forked from [Seismo Live](http://seismo-live.org). The source code is available at https://github.com/krischer/seismo_live (licensed under a ***CC BY-NC-SA 4.0 License***. &copy; 2015-2022 Lion Krischer).
+### 02 - Fourier-Transform and Basic Filtering
+This has been forked from [Seismo Live](http://seismo-live.org). The source code is available at https://github.com/krischer/seismo_live (licensed under a ***CC BY-NC-SA 4.0 License***. &copy; 2015-2022 Lion Krischer).
+### 03 - Cross Correlation
+1. Simple Cross Correlation: Use the cross-correlation to detect and determine the time shift of two test events with one template event.
+2. Enhanced Picker: This has been taken from the [ObsPy Tutorial](https://docs.obspy.org/master/tutorial/code_snippets/xcorr_pick_correction.html). ObsPy is licensed under the [LGPL v3.0](https://www.gnu.de/documents/lgpl-3.0.en.html)
+3. Ambient Seismic Noise Analysis from [Seismo Live](http://seismo-live.org). The source code is available at https://github.com/krischer/seismo_live (licensed under a ***CC BY-NC-SA 4.0 License***. &copy; 2015-2019 Lion Krischer).
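The first cross-correlation exercise reduces to locating the lag that maximizes the correlation between a template and a signal. A toy pure-Python sketch of that idea (the notebooks themselves use ObsPy/NumPy; all names and data here are illustrative):

```python
# Toy version of exercise 03.1: recover the time shift between a template
# and a shifted copy by locating the cross-correlation maximum.
template = [0.0, 0.2, 1.0, 0.2, 0.0, 0.0, 0.0, 0.0]
signal   = [0.0, 0.0, 0.0, 0.2, 1.0, 0.2, 0.0, 0.0]  # template shifted by 2 samples

def xcorr(a, b, max_lag):
    """Cross-correlation of b against a for lags -max_lag..max_lag."""
    out = []
    n = len(a)
    for lag in range(-max_lag, max_lag + 1):
        s = sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
        out.append((lag, s))
    return out

# The lag with the largest correlation value is the estimated shift.
best_lag = max(xcorr(template, signal, 4), key=lambda t: t[1])[0]
print(best_lag)  # → 2 (samples)
```

ObsPy's `obspy.signal.cross_correlation` module provides the production version of this, including subsample pick correction as used in exercise 03.2.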


@@ -0,0 +1,8 @@
{
"folders": [
{
"path": "."
}
],
"settings": {}
}

28
images/header.svg Normal file

@@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="850.39pt" height="428.03pt" version="1.1" viewBox="158.17 116.17 850.39 428.03" xmlns="http://www.w3.org/2000/svg">
<g transform="translate(-3.4095 -2.8898)" fill="none">
<title>Canvas 1</title>
<g>
<title>Layer 1</title>
<rect x="161.57" y="119.06" width="850.39" height="428.03" fill="#d2314b"/>
<text transform="translate(201.57 168.57)" fill="#ffffff" font-family="'Helvetica Neue'" style="line-height:0%"><tspan x="0" y="33" font-size="35px" font-weight="500">Seismo-Live</tspan><tspan x="0" y="84" fill-opacity=".52" font-size="20px" font-weight="300">Live Jupyter Notebooks</tspan><tspan x="0" y="108" fill-opacity=".52" font-size="20px" font-weight="300">for Seismology</tspan></text>
<rect x="731.34" y="119.06" width="280.63" height="428.03" fill="#6ec2ab"/>
<text transform="translate(771.34 309.07)" fill="#d3d3d3"/>
<path d="m507.32 119.06h106.54l-26.634 428.03h-106.54z" fill="#56be2a"/>
<path d="m590.24 119.06h106.54l-26.634 428.03h-106.54z" fill="#277d00"/>
<path d="m602.8 119.06h106.54l-26.634 428.03h-106.54z" fill="#3baa04"/>
<path d="m615.37 119.06h106.54l-26.634 428.03h-106.54z" fill="#78dc4e"/>
<path d="m630.44 119.06h106.54l-26.634 428.03h-106.54z" fill="#9feb7e"/>
<path d="m643.66 119.06h106.54l-26.634 428.03h-106.54z" fill="#de7036"/>
<path d="m726.57 119.06h106.54l-26.634 428.03h-106.54z" fill="#913500"/>
<path d="m739.14 119.06h106.54l-26.634 428.03h-106.54z" fill="#c64f12"/>
<path d="m751.7 119.06h106.54l-26.634 428.03h-106.54z" fill="#ff985f"/>
<path d="m766.78 119.06h106.54l-26.634 428.03h-106.54z" fill="#ffb78d"/>
<path d="m781.85 119.06h106.54l-26.634 428.03h-106.54z" fill="#258468"/>
<path d="m864.77 119.06h106.54l-26.634 428.03h-106.54z" fill="#0b5840"/>
<path d="m877.33 119.06h106.54l-26.634 428.03h-106.54z" fill="#137757"/>
<path d="m889.89 119.06h106.54l-26.634 428.03h-106.54z" fill="#3e9d80"/>
<path d="m904.97 119.06h106.54l-26.634 428.03h-106.54z" fill="#6ec2ab"/>
</g>
</g>
</svg>

Size: 2.0 KiB

BIN
images/notebook_toolbar.png Normal file

Binary file not shown.

Size: 202 KiB