Compare commits


No commits in common. "develop" and "445120990715f43c8aae18955ca533ff0d6d4fb8" have entirely different histories.

75 changed files with 118903 additions and 185035 deletions

.gitignore

@@ -1,5 +1,3 @@
 *.pyc
 *~
-.idea
 pylot/RELEASE-VERSION
-/tests/test_autopicker/dmt_database_test/

.mailmap

@@ -1,39 +0,0 @@
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <Darius_A@web.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <darius.arnold@rub.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <darius.arnold@ruhr-uni-bochum.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <mail@dariusarnold.de>
Dennis Wlecklik <dennisw@minos02.geophysik.ruhr-uni-bochum.de>
Jeldrik Gaal <jeldrikgaal@gmail.com>
Kaan Coekerim <kaan.coekerim@ruhr-uni-bochum.de>
Kaan Coekerim <kaan.coekerim@ruhr-uni-bochum.de> <kaan.coekerim@rub.de>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <kueperkoch@bestec-for-nature.com>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <ludger@quake2.(none)>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <ludger@sauron.bestec-for-nature>
Marc S. Boxberg <marc.boxberg@rub.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel.paffrath@rub.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos01.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos02.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos25.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@email.com>
Sally Zimmermann <sally.zimmermann@ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos01.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos02.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos22.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling-benatelli@scisys.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling@rub.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling@rub.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <DarkBeQst@users.noreply.github.com>
Thomas Moeller <thomas.moeller@rub.de>
Ann-Christin Koch <ann-christin.koch@ruhr-uni-bochum.de> <Ann-Christin.Koch@ruhr-uni-bochum.de>
Sebastian Priebe <sebastian.priebe@rub.de>

.png (binary)

Binary file not shown. Size: 1.9 MiB

PyLoT.py

File diff suppressed because it is too large (630 changed lines).

README.md

@@ -1,70 +1,66 @@
 # PyLoT
-version: 0.3
+version: 0.2
 The Python picking and Localisation Tool
-This python library contains a graphical user interfaces for picking seismic phases. This software needs [ObsPy][ObsPy]
-and the PySide2 Qt5 bindings for python to be installed first.
+This python library contains a graphical user interfaces for picking
+seismic phases. This software needs [ObsPy][ObsPy]
+and the PySide Qt4 bindings for python to be installed first.
-PILOT has originally been developed in Mathworks' MatLab. In order to distribute PILOT without facing portability
-problems, it has been decided to redevelop the software package in Python. The great work of the ObsPy group allows easy
-handling of a bunch of seismic data and PyLoT will benefit a lot compared to the former MatLab version.
+PILOT has originally been developed in Mathworks' MatLab. In order to
+distribute PILOT without facing portability problems, it has been decided
+to redevelop the software package in Python. The great work of the ObsPy
+group allows easy handling of a bunch of seismic data and PyLoT will
+benefit a lot compared to the former MatLab version.
-The development of PyLoT is part of the joint research project MAGS2, AlpArray and AdriaArray.
+The development of PyLoT is part of the joint research project MAGS2 and AlpArray.
 ## Installation
-At the moment there is no automatic installation procedure available for PyLoT. Best way to install is to clone the
-repository and add the path to your Python path.
+At the moment there is no automatic installation procedure available for PyLoT.
+Best way to install is to clone the repository and add the path to your Python path.
-It is highly recommended to use Anaconda for a simple creation of a Python installation using either the *pylot.yml* or the *requirements.txt* file found in the PyLoT root directory. First make sure that the *conda-forge* channel is available in your Anaconda installation:
-    conda config --add channels conda-forge
-Afterwards run (from the PyLoT main directory where the files *requirements.txt* and *pylot.yml* are located)
-    conda env create -f pylot.yml
-or
-    conda create -c conda-forge --name pylot_311 python=3.11 --file requirements.txt
-to create a new Anaconda environment called *pylot_311*.
-Afterwards activate the environment by typing
-    conda activate pylot_311
 #### Prerequisites:
 In order to run PyLoT you need to install:
-- Python 3
-- cartopy
-- joblib
-- obspy
-- pyaml
-- pyqtgraph
-- pyside2
-(the following are already dependencies of the above packages):
 - scipy
 - numpy
 - matplotlib
+- obspy
+- pyside
 #### Some handwork:
-Some extra information on error estimates (just needed for reading old PILOT data) and the Richter magnitude scaling
-relation
+PyLoT needs a properties folder on your system to work. It should be situated in your home directory
+(on Windows usually C:/Users/*username*):
+    mkdir ~/.pylot
+In the next step you have to copy some files to this directory:
+*for local distance seismicity*
+    cp path-to-pylot/inputs/pylot_local.in ~/.pylot/pylot.in
+*for regional distance seismicity*
+    cp path-to-pylot/inputs/pylot_regional.in ~/.pylot/pylot.in
+*for global distance seismicity*
+    cp path-to-pylot/inputs/pylot_global.in ~/.pylot/pylot.in
+and some extra information on error estimates (just needed for reading old PILOT data) and the Richter magnitude scaling relation
     cp path-to-pylot/inputs/PILOT_TimeErrors.in path-to-pylot/inputs/richter_scaling.data ~/.pylot/
 You may need to do some modifications to these files. Especially folder names should be reviewed.
-PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10/11.
+PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10.
-## Example Dataset
-An example dataset with waveform data, metadata and automatic picks in the obspy-dmt dataset format for testing the teleseismic picking can be found at https://zenodo.org/doi/10.5281/zenodo.13759803
 ## Release notes
@@ -73,7 +69,6 @@ An example dataset with waveform data, metadata and automatic picks in the obspy
 - event organisation in project files and waveform visualisation
 - consistent manual phase picking through predefined SNR dependant zoom level
 - consistent automatic phase picking routines using Higher Order Statistics, AIC and Autoregression
-- pick correlation correction for teleseismic waveforms
 - interactive tuning of auto-pick parameters
 - uniform uncertainty estimation from waveform's properties for automatic and manual picks
 - pdf representation and comparison of picks taking the uncertainty intrinsically into account
@@ -82,17 +77,20 @@ An example dataset with waveform data, metadata and automatic picks in the obspy
 #### Known issues:
-Current release is still in development progress and has several issues. We are currently lacking manpower, but hope to assess many of the issues in the near future.
+- Sometimes an error might occur when using Qt
+We hope to solve these with the next release.
 ## Staff
-Developer(s): M. Paffrath, S. Wehling-Benatelli, L. Kueperkoch, D. Arnold, K. Cökerim, K. Olbert, M. Bischoff, C. Wollin, M. Rische, S. Zimmermann
-Original author(s): M. Rische, S. Wehling-Benatelli, L. Kueperkoch, M. Bischoff (PILOT)
+Original author(s): L. Kueperkoch, S. Wehling-Benatelli, M. Bischoff (PILOT)
+Developer(s): S. Wehling-Benatelli, L. Kueperkoch, K. Olbert, M. Bischoff,
+C. Wollin, M. Rische, M. Paffrath
 Others: A. Bruestle, T. Meier, W. Friederich
 [ObsPy]: http://github.com/obspy/obspy/wiki
-March 2025
+September 2017
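Not part of the diff above: a quick way to verify that the prerequisites from the newer README resolve inside the *pylot_311* environment. A minimal sketch; the import names are assumptions where they differ from the package names (pyaml imports as *yaml*, pyside2 as *PySide2*):

```python
# check_prereqs.py - rough import check for the PyLoT prerequisites listed above.
# Import names are assumptions where they differ from the package name:
# pyaml -> yaml, pyside2 -> PySide2.
import importlib

for name in ("obspy", "cartopy", "joblib", "yaml", "pyqtgraph", "PySide2",
             "scipy", "numpy", "matplotlib"):
    try:
        mod = importlib.import_module(name)
        print("{}: OK ({})".format(name, getattr(mod, "__version__", "version unknown")))
    except ImportError as err:
        print("{}: MISSING ({})".format(name, err))
```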

autoPyLoT.py

@@ -8,7 +8,6 @@ import datetime
 import glob
 import os
 import traceback
 from obspy import read_events
 from obspy.core.event import ResourceIdentifier
@@ -28,7 +27,7 @@ from pylot.core.util.dataprocessing import restitute_data, Metadata
 from pylot.core.util.defaults import SEPARATOR
 from pylot.core.util.event import Event
 from pylot.core.util.structure import DATASTRUCTURE
-from pylot.core.util.utils import get_none, trim_station_components, check4gapsAndRemove, check4doubled, \
+from pylot.core.util.utils import get_None, trim_station_components, check4gapsAndRemove, check4doubled, \
     check4rotated
 from pylot.core.util.version import get_git_version as _getVersionString
@@ -91,9 +90,9 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
                    sp=sp_info)
     print(splash)
-    parameter = get_none(parameter)
-    inputfile = get_none(inputfile)
-    eventid = get_none(eventid)
+    parameter = get_None(parameter)
+    inputfile = get_None(inputfile)
+    eventid = get_None(eventid)
     fig_dict = None
     fig_dict_wadatijack = None
@@ -119,9 +118,13 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
         obspyDMT_wfpath = input_dict['obspyDMT_wfpath']
     if not parameter:
-        if not inputfile:
-            print('Using default input parameter')
-        parameter = PylotParameter(inputfile)
+        if inputfile:
+            parameter = PylotParameter(inputfile)
+            # iplot = parameter['iplot']
+        else:
+            infile = os.path.join(os.path.expanduser('~'), '.pylot', 'pylot.in')
+            print('Using default input file {}'.format(infile))
+            parameter = PylotParameter(infile)
     else:
         if not type(parameter) == PylotParameter:
             print('Wrong input type for parameter: {}'.format(type(parameter)))
@@ -136,11 +139,13 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
     if parameter.hasParam('datastructure'):
         # getting information on data structure
         datastructure = DATASTRUCTURE[parameter.get('datastructure')]()
-        dsfields = {'dpath': parameter.get('datapath'),}
+        dsfields = {'root': parameter.get('rootpath'),
+                    'dpath': parameter.get('datapath'),
+                    'dbase': parameter.get('database')}
-        exf = ['dpath']
+        exf = ['root', 'dpath', 'dbase']
-        if parameter['eventID'] != '*' and fnames == 'None':
+        if parameter['eventID'] is not '*' and fnames == 'None':
             dsfields['eventID'] = parameter['eventID']
             exf.append('eventID')
@@ -148,7 +153,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
         datastructure.setExpandFields(exf)
     # check if default location routine NLLoc is available and all stations are used
-    if get_none(parameter['nllocbin']) and station == 'all':
+    if get_None(parameter['nllocbin']) and station == 'all':
         locflag = 1
         # get NLLoc-root path
         nllocroot = parameter.get('nllocroot')
@@ -184,15 +189,15 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
     if not input_dict:
         # started in production mode
         datapath = datastructure.expandDataPath()
-        if fnames in [None, 'None'] and parameter['eventID'] == '*':
+        if fnames == 'None' and parameter['eventID'] is '*':
             # multiple event processing
             # read each event in database
             events = [event for event in glob.glob(os.path.join(datapath, '*')) if
                       (os.path.isdir(event) and not event.endswith('EVENTS-INFO'))]
-        elif fnames in [None, 'None'] and parameter['eventID'] != '*' and not type(parameter['eventID']) == list:
+        elif fnames == 'None' and parameter['eventID'] is not '*' and not type(parameter['eventID']) == list:
             # single event processing
             events = glob.glob(os.path.join(datapath, parameter['eventID']))
-        elif fnames in [None, 'None'] and type(parameter['eventID']) == list:
+        elif fnames == 'None' and type(parameter['eventID']) == list:
             # multiple event processing
             events = []
             for eventID in parameter['eventID']:
@@ -204,10 +209,12 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
             locflag = 2
     else:
         # started in tune or interactive mode
-        datapath = parameter['datapath']
+        datapath = os.path.join(parameter['rootpath'],
+                                parameter['datapath'])
         events = []
         for eventID in eventid:
             events.append(os.path.join(datapath,
+                                       parameter['database'],
                                        eventID))
     if not events:
@@ -234,15 +241,12 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
             data.get_evt_data().path = eventpath
             print('Reading event data from filename {}...'.format(filename))
         except Exception as e:
-            if type(e) == FileNotFoundError:
-                print('Creating new event file.')
-            else:
-                print('Could not read event from file {}: {}'.format(filename, e))
+            print('Could not read event from file {}: {}'.format(filename, e))
             data = Data()
             pylot_event = Event(eventpath)  # event should be path to event directory
             data.setEvtData(pylot_event)
-        if fnames in [None, 'None']:
-            data.setWFData(glob.glob(os.path.join(event_datapath, '*')))
+        if fnames == 'None':
+            data.setWFData(glob.glob(os.path.join(datapath, event_datapath, '*')))
             # the following is necessary because within
             # multiple event processing no event ID is provided
             # in autopylot.in
@@ -273,7 +277,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
         if not wfdat:
             print('Could not find station {}. STOP!'.format(station))
             return
-        # wfdat = remove_underscores(wfdat)
+        #wfdat = remove_underscores(wfdat)
         # trim components for each station to avoid problems with different trace starttimes for one station
         wfdat = check4gapsAndRemove(wfdat)
         wfdat = check4doubled(wfdat)
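The get_none/get_None rename above points at a small utility that normalises the string 'None' (as read from input files) to a real None. A plausible sketch of its behaviour, inferred from the call sites here; the actual implementation lives in pylot/core/util/utils.py:

```python
# Sketch only, inferred from the call sites above; see pylot/core/util/utils.py
# for the real implementation.
def get_none(value):
    """Map the string 'None' (e.g. read from an input file) to a real None."""
    return None if value in (None, 'None') else value
```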


@@ -3,10 +3,8 @@
 #$ -l low
 #$ -cwd
 #$ -pe smp 40
-##$ -l mem=3G
-#$ -l h_vmem=6G
+#$ -l mem=2G
+#$ -l h_vmem=2G
 #$ -l os=*stretch
-conda activate pylot_311
-python ./autoPyLoT.py -i /home/marcel/.pylot/pylot_adriaarray.in -c 20 -dmt processed
+python ./autoPyLoT.py -i /home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in -dmt processed -c $NSLOTS


@@ -1,77 +0,0 @@
# Pick-Correlation Correction
## Introduction
Currently, the pick-correlation correction algorithm is not accessible from the PyLoT GUI. The main file *pick_correlation_correction.py* is located in the directory *pylot\correlation*.
The program only works for an obspy dmt database structure.
The basic workflow of the algorithm is shown in the following diagram. The first step **(1)** is the normal (automatic) picking procedure in PyLoT. Everything from step **(2)** to **(5)** is part of the correlation correction algorithm.
*Note: The first step is not required in case theoretical onsets are used instead of external picks, i.e. when the parameter use_taupy_onsets is set to True. However, an existing event quakeML (.xml) file generated by PyLoT might be required for each event in case external picks are not used.*
![images/workflow_stacking.png](images/workflow_stacking.png)
A detailed description of the algorithm can be found in the corresponding publication:
*Paffrath, M., Friederich, W., and the AlpArray and AlpArray-SWATH D Working Groups: Teleseismic P waves at the AlpArray seismic network: wave fronts, absolute travel times and travel-time residuals, Solid Earth, 12, 1635–1660, https://doi.org/10.5194/se-12-1635-2021, 2021.*
## How to use
To use the program you have to call the main program providing two mandatory arguments: a path to the obspy dmt database folder *dmt_database_path* and the path to the PyLoT infile *pylot.in* for picking of the beam trace:
```bash
python pick_correlation_correction.py dmt_database_path pylot.in
```
By default, the parameter file *parameters.yaml* is used. You can use the command line option *--params* to specify a different parameter file and other optional arguments such as *-pd* for plotting detailed information or *-n 4* to use 4 cores for parallel processing:
```bash
python pick_correlation_correction.py dmt_database_path pylot.in --params parameters_adriaarray.yaml -pd -n 4
```
## Cross-Correlation Parameters
The program uses the parameters in the file *parameters.yaml* by default. You can use the command line option *--params* to specify a different parameter file. An example of the parameter file is provided in the *correlation\parameters.yaml* file.
In the top level of the parameter file the logging level *logging* can be set, as well as a list of pick phases *pick_phases* (e.g. ['P', 'S']).
For each pick phase the different parameters can be set in the first sub-level of the parameter file, e.g.:
```yaml
logging: info
pick_phases: ['P', 'S']
P:
min_corr_stacking: 0.8
min_corr_export: 0.6
[...]
S:
min_corr_stacking: 0.7
[...]
```
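A minimal sketch (not part of the original file) of reading such a parameter file with PyYAML, assuming the layout shown above:

```python
# Sketch of loading the per-phase correlation parameters shown above.
import yaml

with open('parameters.yaml') as fid:
    params = yaml.safe_load(fid)

print(params['logging'])                 # e.g. 'info'
for phase in params['pick_phases']:      # e.g. ['P', 'S']
    print(phase, params[phase]['min_corr_stacking'])
```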
The following parameters are available:
| Parameter Name | Description | Parameter Type |
|--------------------------------|----------------------------------------------------------------------------------------------------|----------------|
| min_corr_stacking | Minimum correlation coefficient for building beam trace | float |
| min_corr_export | Minimum correlation coefficient for pick export | float |
| min_stack | Minimum number of stations for building beam trace | int |
| t_before | Correlation window before reference pick | float |
| t_after | Correlation window after reference pick | float |
| cc_maxlag | Maximum shift for initial correlation | float |
| cc_maxlag2 | Maximum shift for second (final) correlation (also for calculating pick uncertainty) | float |
| initial_pick_outlier_threshold | Threshold for excluding large outliers of initial (AIC) picks | float |
| export_threshold | Automatically exclude all onsets which deviate more than this threshold from corrected taup onsets | float |
| min_picks_export | Minimum number of correlated picks for export | int |
| min_picks_autopylot | Minimum number of reference auto picks to continue with event | int |
| check_RMS | Do RMS check to search for restitution errors (very experimental) | bool |
| use_taupy_onsets | Use taupy onsets as reference picks instead of external picks | bool |
| station_list | Use the following stations as reference for stacking | list[str] |
| use_stacked_trace | Use existing stacked trace if found (spare re-computation) | bool |
| data_dir | obspyDMT data subdirectory (e.g. 'raw', 'processed') | str |
| pickfile_extension | Use quakeML files (PyLoT output) with the following extension | str |
| dt_stacking | Time difference for stacking window (in seconds) | list[float] |
| filter_options | Filter for first correlation (rough) | dict |
| filter_options_final | Filter for second correlation (fine) | dict |
| filter_type | Filter type (e.g. bandpass) | str |
| sampfreq | Sampling frequency (in Hz) | float |
## Example Dataset
An example dataset with waveform data, metadata and automatic picks in the obspy-dmt dataset format for testing can be found at https://zenodo.org/doi/10.5281/zenodo.13759803


@@ -44,17 +44,15 @@ This section describes how to use PyLoT graphically to view waveforms and create
 After opening PyLoT for the first time, the setup routine asks for the following information:
 Questions:
 1. Full Name
 2. Authority: Enter authority/institution name
 3. Format: Enter output format (*.xml, *.cnv, *.obs)
 [//]: <> (TODO: explain what these things mean, where they are used)
 ## Main Screen
-After entering the [information](#first-start), PyLoTs main window is shown. It defaults to a view of
-the [Waveform Plot](#waveform-plot), which starts empty.
+After entering the [information](#first-start), PyLoTs main window is shown. It defaults to a view of the [Waveform Plot](#waveform-plot), which starts empty.
 <img src=images/gui/pylot-main-screen.png alt="Tune autopicks button" title="Tune autopicks button">
@@ -63,21 +61,24 @@ Add trace data by [loading a project](#projects-and-events) or by [adding event
 ### Waveform Plot
 The waveform plot shows a trace list of all stations of an event.
-Click on any trace to open the stations [picking window](#picking-window), where you can review automatic and manual
-picks.
+Click on any trace to open the stations [picking window](#picking-window), where you can review automatic and manual picks.
 <img src=images/gui/pylot-waveform-plot.png alt="A Waveform Plot showing traces of one event">
-Above the traces the currently displayed event can be selected. In the bottom bar information about the trace under the
-mouse cursor is shown. This information includes the station name (station), the absolute UTC time (T) of the point
-under the mouse cursor and the relative time since the first trace start in seconds (t) as well as a trace count.
+Above the traces the currently displayed event can be selected.
+In the bottom bar information about the trace under the mouse cursor is shown. This information includes the station name (station), the absolute UTC time (T) of the point under the mouse cursor and the relative time since the first trace start in seconds (t) as well as a trace count.
 #### Mouse view controls
 Hold left mouse button and drag to pan view.
-Hold right mouse button and Direction | Result --- | --- Move the mouse up | Increase amplitude scale Move the mouse
-down | Decrease amplitude scale Move the mouse right | Increase time scale Move the mouse left | Decrease time scale
+Hold right mouse button and
+Direction | Result
+--- | ---
+Move the mouse up | Increase amplitude scale
+Move the mouse down | Decrease amplitude scale
+Move the mouse right | Increase time scale
+Move the mouse left | Decrease time scale
 Press right mouse button and click "View All" from the context menu to reset the view.
@@ -85,100 +86,81 @@ Press right mouse button and click "View All" from the context menu to reset the
 [//]: <> (Hack: We need these invisible spaces to add space to the first column, otherwise )
 | Icon &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Description
-|---|---|
+--- | ---
 | <img src="../icons/newfile.png" alt="Create new project" width="64" height="64"> | Create a new project, for more information about projects see [Projects and Events](#projects-and-events).
 | <img src="../icons/openproject.png" alt="Open project" width="64" height="64"> | Load a project file from disk.
 | <img src="../icons/saveproject.png" alt="Save Project" width="64" height="64"> | Save all current events into an associated project file on disk. If there is no project file currently associated, you will be asked to create a new one.
 | <img src="../icons/saveprojectas.png" alt="Save Project as" width="64" height="64"> | Save all current events into a new project file on disk. See [Saving projects](#saving-projects).
 | <img src="../icons/add.png" alt="Add event data" width="64" height="64"> | Add event data by selecting directories containing waveforms. For more information see [Event folder structure](#event-folder-structure).
 | <img src="../icons/openpick.png" alt="Load event information" width="64" height="64"> | Load picks/origins from disk into the currently displayed event. If a pick already exists for a station, the one from file will overwrite the existing one.
 | <img src="../icons/openpicks.png" alt="Load information for all events" width="64" height="64"> | Load picks/origins for all events of the current project. PyLoT searches for files within the directory of the event and tries to load them for that event. For this function to work, the files containing picks/origins have to be named as described in [Event folder structure](#event-folder-structure). If a pick already exists for a station, the one from file will overwrite the existing one.
 | <img src="../icons/savepicks.png" alt="Save picks" width="64" height="64"> | Save event information such as picks and origin to file. You will be asked to select a directory in which this information should be saved.
 | <img src="../icons/openloc.png" alt="Load location information" width="64" height="64"> | Load location information from disk,
 | <img src="../icons/Matlab_PILOT_icon.png" alt="Load legacy information" width="64" height="64"> | Load event information from a previous, MatLab based PILOT version.
 | <img src="../icons/key_Z.png" alt="Display Z" width="64" height="64"> | Display Z component of streams in waveform plot.
 | <img src="../icons/key_N.png" alt="Display N" width="64" height="64"> | Display N component of streams in waveform plot.
 | <img src="../icons/key_E.png" alt="Display E" width="64" height="64"> | Display E component of streams in waveform plot.
 | <img src="../icons/tune.png" alt="Tune Autopicker" width="64" height="64"> | Open the [Tune Autopicker window](#tuning).
 | <img src="../icons/autopylot_button.png" alt="" width="64" height="64"> | Opens a window that allows starting the autopicker for all events ([Production run of the AutoPicker](#production-run-of-the-autopicker)).
 | <img src="../icons/compare_button.png" alt="Comparison" width="64" height="64"> | Compare automatic and manual picks, only available if automatic and manual picks for an event exist. See [Comparison between automatic and manual picks](#comparison-between-automatic-and-manual-picks).
 | <img src="../icons/locate_button.png" alt="Locate event" width="64" height="64"> | Run a location routine (NonLinLoc) as configured in the settings on the picks. See [Location determination](#location-determination).
 ### Array Map
-The array map will display a color diagram to allow a visual check of the consistency of picks across multiple stations.
-This works by calculating the time difference of every onset to the earliest onset. Then isolines are drawn between
-stations with the same time difference and the areas between isolines are colored.
-The result should resemble a color gradient as the wavefront rolls over the network area. Stations where picks are
-earlier/later than their neighbours can be reviewed by clicking on them, which opens
-the [picking window](#picking-window).
+The array map will display a color diagram to allow a visual check of the consistency of picks across multiple stations. This works by calculating the time difference of every onset to the earliest onset. Then isolines are drawn between stations with the same time difference and the areas between isolines are colored.
+The result should resemble a color gradient as the wavefront rolls over the network area. Stations where picks are earlier/later than their neighbours can be reviewed by clicking on them, which opens the [picking window](#picking-window).
-Above the Array Map the picks that are used to create the map can be customized. The phase of picks that should be used
-can be selected, which allows checking the consistency of the P- and S-phase separately. Additionally the pick type can
-be set to manual, automatic or hybrid, meaning display only manual picks, automatic picks or only display automatic
-picks for stations where there are no manual ones.
+Above the Array Map the picks that are used to create the map can be customized.
+The phase of picks that should be used can be selected, which allows checking the consistency of the P- and S-phase separately.
+Additionally the pick type can be set to manual, automatic or hybrid, meaning display only manual picks, automatic picks or only display automatic picks for stations where there are no manual ones.
 ![Array Map](images/gui/arraymap-example.png "Array Map")
-*Array Map for an event at the Northern Mid Atlantic Ridge, between North Africa and Mexico (Lat. 22.58, Lon. -45.11).
-The wavefront moved from west to east over the network area (Alps and Balcan region), with the earliest onsets in blue
-in the west.*
+*Array Map for an event at the Northern Mid Atlantic Ridge, between North Africa and Mexico (Lat. 22.58, Lon. -45.11). The wavefront moved from west to east over the network area (Alps and Balcan region), with the earliest onsets in blue in the west.*
-To be able to display an array map PyLoT needs to load an inventory file, where the metadata of seismic stations is
-kept. For more information see [Metadata](#adding-metadata). Additionally automatic or manual picks need to exist for
-the current event.
+To be able to display an array map PyLoT needs to load an inventory file, where the metadata of seismic stations is kept. For more information see [Metadata](#adding-metadata). Additionally automatic or manual picks need to exist for the current event.
 ### Eventlist
-The eventlist displays event parameters. The displayed parameters are saved in the .xml file in the event folder. Events
-can be deleted from the project by pressing the red X in the leftmost column of the corresponding event.
+The eventlist displays event parameters. The displayed parameters are saved in the .xml file in the event folder. Events can be deleted from the project by pressing the red X in the leftmost column of the corresponding event.
 <img src="images/gui/eventlist.png" alt="Eventlist">
 | Column | Description
-|---|---|
+--- | ---
 | Event | Full path to the events folder.
 | Time | Time of event.
 | Lat | Latitude in degrees of event location.
 | Lon | Longitude in degrees of event location.
 | Depth | Depth in km of event.
 | Mag | Magnitude of event.
 | [N] MP | Number of manual picks.
 | [N] AP | Number of automatic picks.
 | Tuning Set | Select whether this event is a Tuning event. See [Automatic Picking](#automatic-picking).
 | Test Set | Select whether this event is a Test event. See [Automatic Picking](#automatic-picking).
 | Notes | Free form text field for notes regarding this event. Text will be saved in the notes.txt file in the event folder.
 ## Usage
 ### Projects and Events
-PyLoT uses projects to categorize different seismic data. A project consists of one or multiple events. Events contain
-seismic traces from one or multiple stations. An event also contains further information, e.g. origin time, source
-parameters and automatic as well as manual picks. Projects are used to group events which should be analysed together. A
-project could contain all events from a specific region within a timeframe of interest or all recorded events of a
-seismological experiment.
+PyLoT uses projects to categorize different seismic data. A project consists of one or multiple events. Events contain seismic traces from one or multiple stations. An event also contains further information, e.g. origin time, source parameters and automatic as well as manual picks.
+Projects are used to group events which should be analysed together. A project could contain all events from a specific region within a timeframe of interest or all recorded events of a seismological experiment.
 ### Event folder structure
 PyLoT expects the following folder structure for seismic data:
 * Every event should be in it's own folder with the following naming scheme for the folders:
-``e[id].[doy].[yy]``, where ``[id]`` is a four-digit numerical id increasing from 0001, ``[doy]`` the three digit day
-of year and ``[yy]`` the last two digits of the year of the event. This structure has to be created by the user of
-PyLoT manually.
+``e[id].[doy].[yy]``, where ``[id]`` is a four-digit numerical id increasing from 0001, ``[doy]`` the three digit day of year and ``[yy]`` the last two digits of the year of the event. This structure has to be created by the user of PyLoT manually.
 * These folders should contain the seismic data for their event as ``.mseed`` or other supported filetype
-* All automatic and manual picks should be in an ``.xml`` file in their event folder. PyLoT saves picks in this file.
-This file does not have to be added manually unless there are picks to be imported. The format used to save picks is
-QUAKEML.
+* All automatic and manual picks should be in an ``.xml`` file in their event folder. PyLoT saves picks in this file. This file does not have to be added manually unless there are picks to be imported. The format used to save picks is QUAKEML.
 Picks are saved in a file with the same filename as the event folder with ``PyLoT_`` prepended.
-* The file ``notes.txt`` is used for saving analysts comments. Everything saved here will be displayed in the 'Notes'
-column of the eventlist.
+* The file ``notes.txt`` is used for saving analysts comments. Everything saved here will be displayed in the 'Notes' column of the eventlist.
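The ``e[id].[doy].[yy]`` naming scheme above can be scripted when preparing a database by hand. A hypothetical helper, not part of PyLoT itself:

```python
# Hypothetical helper for the e[id].[doy].[yy] event folder naming scheme
# described above; not part of PyLoT itself.
from obspy import UTCDateTime

def event_folder_name(event_id, origin_time):
    return 'e{:04d}.{:03d}.{:02d}'.format(event_id, origin_time.julday,
                                          origin_time.year % 100)

print(event_folder_name(1, UTCDateTime(2018, 2, 21)))  # -> e0001.052.18
```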
 ### Loading event information from CSV file
-Event information can be saved in a ``.csv`` file located in the rootpath. The file is made from one header line, which
-is followed by one or multiple data lines. Values are separated by comma, while a dot is used as a decimal separator.
+Event information can be saved in a ``.csv`` file located in the rootpath. The file is made from one header line, which is followed by one or multiple data lines. Values are separated by comma, while a dot is used as a decimal separator.
 This information is then shown in the table in the [Eventlist tab](#Eventlist).
 One example header and data line is shown below.
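The example line itself is not visible in this compare view. As an illustration only, a sketch of reading such a file; the file name "events.csv" is an assumption, and the column names follow the header table below:

```python
# Illustration of reading the event CSV described above; the file name is an
# assumption, the header names follow the table below.
import csv

with open('events.csv', newline='') as fid:
    for row in csv.DictReader(fid):
        print(row['event'], row['Time'], float(row['Lat']), float(row['Long']))
```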
@@ -187,61 +169,50 @@ One example header and data line is shown below.
 The meaning of the header entries is:
 | Header | description
-|---|---|
+--- | ---
 | event | Event id, has to be the same as the folder name in which waveform data for this event is kept.
 | Data | Origin date of the event, format DD/MM/YY or DD/MM/YYYY.
 | Time | Origin time of the event. Format HH:MM:SS.
 | Lat, Long | Origin latitude and longitude in decimal degrees.
 | Region | Flinn-Engdahl region name.
 | Basis Lat, Basis Lon | Latitude and longitude of the basis of the station network in decimal degrees.
 | Distance [km] | Distance from origin coordinates to basis coordinates in km.
 | Distance [rad] | Distance from origin coordinates to basis coordinates in rad.
 ### Adding events to project
-PyLoT GUI starts with an empty project. To add events, use the add event data button. Select one or multiple folders
-containing events.
+PyLoT GUI starts with an empty project. To add events, use the add event data button. Select one or multiple folders containing events.
+[//]: <> (TODO: explain _Directories: Root path, Data path, Database path_)
 ### Saving projects
-Save the current project from the menu with File->Save project or File->Save project as. PyLoT uses ``.plp`` files to
-save project information. This file format is not interchangeable between different versions of Python interpreters.
-Saved projects contain the automatic and manual picks. Seismic trace data is not included into the ``.plp`` file, but
-read from its location used when saving the file.
+Save the current project from the menu with File->Save project or File->Save project as.
+PyLoT uses ``.plp`` files to save project information. This file format is not interchangeable between different versions of Python interpreters.
+Saved projects contain the automatic and manual picks. Seismic trace data is not included into the ``.plp`` file, but read from its location used when saving the file.
 ### Adding metadata
 [//]: <> (TODO: Add picture of metadata "manager" when it is done)
-PyLoT can handle ``.dless``, ``.xml``, ``.resp`` and ``.dseed`` file formats for Metadata. Metadata files stored on disk
-can be added to a project by clicking *Edit*->*Manage Inventories*. This opens up a window where the folders which
-contain metadata files can be selected. PyLoT will then search these files for the station names when it needs the
-information.
+PyLoT can handle ``.dless``, ``.xml``, ``.resp`` and ``.dseed`` file formats for Metadata. Metadata files stored on disk can be added to a project by clicking *Edit*->*Manage Inventories*. This opens up a window where the folders which contain metadata files can be selected. PyLoT will then search these files for the station names when it needs the information.
 # Picking
 PyLoTs automatic and manual pick determination works as following:
-* Using certain parameters, a first initial/coarse pick is determined. The first manual pick is determined by visual
-review of the whole waveform and selection of the most likely onset by the analyst. The first automatic pick is
-determined by calculation of a characteristic function (CF) for the seismic trace. When a wave arrives, the CFs
-properties change, which is determined as the signals onset.
-* Afterwards, a refined set of parameters is applied to a small part of the waveform around the initial onset. For
-manual picks this means a closer view of the trace, for automatic picks this is done by a recalculated CF with
-different parameters.
+* Using certain parameters, a first initial/coarse pick is determined. The first manual pick is determined by visual review of the whole waveform and selection of the most likely onset by the analyst. The first automatic pick is determined by calculation of a characteristic function (CF) for the seismic trace. When a wave arrives, the CFs properties change, which is determined as the signals onset.
+* Afterwards, a refined set of parameters is applied to a small part of the waveform around the initial onset. For manual picks this means a closer view of the trace, for automatic picks this is done by a recalculated CF with different parameters.
 * This second picking phase results in the precise pick, which is treated as the onset time.
 ## Manual Picking
-To create manual picks, you will need to open or create a project that contains seismic trace data (
-see [Adding events to projects](#adding-events-to-project)). Click on a trace to open
-the [picking window](#picking-window).
+To create manual picks, you will need to open or create a project that contains seismic trace data (see [Adding events to projects](#adding-events-to-project)). Click on a trace to open the [Picking window](#picking-window).
 ### Picking window
-Open the picking window of a station by leftclicking on any trace in the waveform plot. Here you can create manual picks
-for the selected station.
+Open the picking window of a station by leftclicking on any trace in the waveform plot. Here you can create manual picks for the selected station.
 <img src="images/gui/picking/pickwindow.png" alt="Picking window">
@@ -249,24 +220,24 @@ for the selected station.
 #### Picking Window Settings
 | Icon | Shortcut | Menu Alternative | Description
-|---|---|---|---|
+---|---|---|---
 | <img src="../icons/filter_p.png" alt="Filter P" width="64" height="64"> | p | Filter->Apply P Filter | Filter all channels according to the options specified in Filter parameter, P Filter section.
 | <img src="../icons/filter_s.png" alt="Filter S" width="64" height="64"> | s | Filter->Apply S Filter | Filter all channels according to the options specified in Filter parameter, S Filter section.
 | <img src="../icons/key_A.png" alt="Filter Automatically" width="64" height="64"> | Ctrl + a | Filter->Automatic Filtering | If enabled, automatically select the correct filter option (P, S) depending on the selected phase to be picked.
 | ![desc](images/gui/picking/phase_selection.png "Phase selection") | 1 (P) or 5 (S) | Picks->P or S | Select phase to pick. If Automatic Filtering is enabled, this will apply the appropriate filter depending on the phase.
 | ![Zoom into](../icons/zoom_in.png "Zoom into waveform") | - | - | Zoom into waveform.
 | ![Reset zoom](../icons/zoom_0.png "Reset zoom") | - | - | Reset zoom to default view.
 | ![Delete picks](../icons/delete.png "Delete picks") | - | - | Delete all manual picks on this station.
 | ![Rename a phase](../icons/sync.png "Rename a phase") | - | - | Click this button and then the picked phase to rename it.
 | ![Continue](images/gui/picking/continue.png "Continue with next station") | - | - | If checked, after accepting the manual picks for this station with 'OK', the picking window for the next station will be opened. This option is useful for fast manual picking of a complete event.
 | Estimated onsets | - | - | Show the theoretical onsets for this station. Needs metadata and origin information.
 | Compare to channel | - | - | Select a data channel to compare against. The selected channel will be displayed in the picking window behind every channel allowing the analyst to visually compare signal correlation between different channels.
 | Scaling | - | - | Individual means every channel is scaled to its own maximum. If a channel is selected here, all channels will be scaled relatively to this channel.
 | Menu Command | Shortcut | Description
-|---|---|---|
+---|---|---
 | P Channels and S Channels | - | Select which channels should be treated as P or S channels during picking. When picking a phase, only the corresponding channels will be shown during the precise pick. Normally, the Z channel should be selected for the P phase and the N and E channel for the S phase.
 ### Filtering
@ -274,167 +245,105 @@ Access the Filter options by pressing Ctrl+f on the Waveform plot or by the menu
<img src=images/gui/pylot-filter-options.png> <img src=images/gui/pylot-filter-options.png>
Here you are able to select filter type, order and frequencies for the P and S pick separately. These settings are used Here you are able to select filter type, order and frequencies for the P and S pick separately. These settings are used in the GUI for displaying the filtered waveform data and during manual picking. The values used by PyLoT for automatic picking are displayed next to the manual values. They can be changed in the [Tune Autopicker dialog](#tuning).
in the GUI for displaying the filtered waveform data and during manual picking. The values used by PyLoT for automatic A green value automatic value means the automatic and manual filter parameter is configured the same, red means they are configured differently.
picking are displayed next to the manual values. They can be changed in the [Tune Autopicker dialog](#tuning). By toggling the "Overwrite filteroptions" checkmark you can set whether the manual precise/second pick uses the filter settings for the automatic picker (unchecked) or whether it uses the filter options in this dialog (checked).
A green value automatic value means the automatic and manual filter parameter is configured the same, red means they are To guarantee consistent picking results between automatic and manual picking it is recommended to use the same filter settings for the determination of automatic and manual picks.
configured differently. By toggling the "Overwrite filteroptions" checkmark you can set whether the manual
precise/second pick uses the filter settings for the automatic picker (unchecked) or whether it uses the filter options
in this dialog (checked). To guarantee consistent picking results between automatic and manual picking it is recommended
to use the same filter settings for the determination of automatic and manual picks.
### Export and Import of manual picks ### Export and Import of manual picks
#### Export #### Export
After the creation of manual picks they can either be saved in the project file
(see [Saving projects](#saving-projects)). Alternatively the picks can be exported by pressing
the <img src="../icons/savepicks.png" alt="Save event information button" title="Save picks button" height=24 width=24>
button above the waveform plot or in the menu File->Save event information (shortcut Ctrl+p). Select the event directory
in which to save the file. The filename will be ``PyLoT_[event_folder_name].[filetype selected during first startup]``.
You can rename and copy this file, but PyLoT will then no longer be able to automatically recognize the correct picks
for an event and the file will have to be manually selected when loading.
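
As an illustration of this naming scheme, a small hypothetical helper (not part of PyLoT) that locates such a file for an event directory; the `xml` extension is an assumption:

```python
# Hypothetical helper: find pick files following PyLoT's naming scheme.
# Assumes event_dir does not end with a path separator.
import glob
import os

def find_pick_file(event_dir, extension='xml'):
    """Return files matching PyLoT_[event_folder_name].[extension] in event_dir."""
    pattern = os.path.join(event_dir,
                           'PyLoT_{}.{}'.format(os.path.basename(event_dir), extension))
    return glob.glob(pattern)
```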
#### Import
To import previously saved picks press
the <img src="../icons/openpick.png" alt="Load event information button" width="24" height="24"> button and select the
file to load. You will be asked to save the current state of your project if you have not done so before. You can
continue without saving by pressing "Discard". This does not delete any information from your project, it just means
that no project file is saved before the changes of importing picks are applied. PyLoT will automatically load files
named after the scheme it uses when saving picks, described in the paragraph above. If it can't find any matching files,
a file dialogue will open and you can select the file you wish to load.
If you see a warning "Mismatch in event identifiers" and are asked whether to continue loading the picks, this means
that PyLoT doesn't recognize the picks in the file as belonging to this specific event. They could have either been
saved under a different installation of PyLoT but with the same waveform data, which means they are still compatible and
you can continue loading them. Or they could be picks from a different event, in which case loading them is not
recommended.
## Automatic Picking
The general workflow for automatic picking is as follows:

- After setting up the project by loading waveforms and optionally metadata, the right parameters for the autopicker
  have to be determined
- This [tuning](#tuning) is done for single stations with immediate graphical feedback of all picking results
- Afterwards the autopicker can be run for all or a subset of events from the project

For automatic picking PyLoT discerns between tune and test events, which the user has to set as such. Tune events are
used to calibrate the autopicking algorithm, test events are then used to test the calibration. The purpose of this is
to check whether the parameters found during tuning are able to reliably pick the "unknown" test events.
If this behaviour is not desired and all events should be handled the same, don't mark any events. Since this is just a
way to group events to compare the picking results, nothing else will change.
### Tuning
Tuning describes the process of adjusting the autopicker settings to the characteristics of your data set. To do this in
PyLoT, use the <img src=../icons/tune.png height=24 alt="Tune autopicks button" title="Tune autopicks button"> button to
open the Tune Autopicker.

<img src=images/gui/tuning/tune_autopicker.png>

View of a station in the Tune Autopicker window.
1. Select the event to be displayed and processed.
2. Select the station from the event.
3. To pick the currently displayed trace, click
   the <img src=images/gui/tuning/autopick_trace_button.png alt="Pick trace button" title="Autopick trace button" height=16>
   button.
4. These tabs are used to select the current view. __Traces Plot__ contains a plot of the station's traces, where manual
   picks can be created/edited. __Overview__ contains graphical results of the automatic picking process. The __P and S
   tabs__ contain the automatic picking results of the P and S phase, while __log__ contains a useful text output of
   automatic picking.
5. These buttons are used to load/save/reset settings for automatic picking. The parameters can be saved in PyLoT input
   files, which have the file ending *.in*. They are human-readable text files, which can also be edited by hand. Saving
   the parameters allows you to load them again later, even on different machines.
6. These menus control the behaviour of the creation of manual picks from the Tune Autopicker window. Picks allows
   selecting the phase for which a manual pick should be created, Filter allows filtering waveforms and editing the
   filter parameters. P-Channels and S-Channels allow selecting the channels that should be displayed when creating a
   manual P or S pick.
7. This menu is the same as in the [Picking Window](#picking-window-settings), with the exception of the __Manual
   Onsets__ options. The __Manual Onsets__ buttons accept or reject the manual picks created in the Tune Autopicker
   window; pressing accept adds them to the manual picks for the event, while reject removes them.
8. The traces plot in the centre allows creating manual picks and viewing the waveforms.
9. The parameters which influence the autopicking result are in the Main settings and Advanced settings tabs on the left
   side. For a description of all the parameters see the [tuning documentation](tuning.md).
### Production run of the autopicker

After the settings used during tuning give the desired results, the autopicker can be used on the complete dataset. To
invoke the autopicker on the whole set of events, click
the <img src=../icons/autopylot_button.png alt="Autopick" title="Autopick" height=32> button.
### Evaluation of automatic picks

PyLoT has two internal consistency checks for automatic picks that were determined for an event:

1. Jackknife check
2. Wadati check

#### 1. Jackknife check
The jackknife test in PyLoT checks the consistency of automatically determined P-picks by checking the statistical
variance of the picks. The variance of all P-picks is calculated and compared to the variance of subsets, in which one
pick is removed.
The idea is that picks that are close together in time should not influence the estimation of the variance much, while
picks whose positions deviate from the norm influence the variance to a greater extent. If the estimated variance of a
subset with a pick removed differs too much from the estimated variance of all picks, the pick that was removed from the
subset will be marked as invalid.
The factor by which picks are allowed to skew the estimation of variance can be configured; it is called *jackfactor*,
see [here](tuning.md#Pick-quality-control).

Additionally, the deviation of picks from the median is checked. For that, the median of all P-picks that passed the
Jackknife test is calculated. Picks whose onset times deviate from the median onset time by more than the *mdttolerance*
are marked as invalid.
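
A rough sketch of these two tests (illustrative only, not PyLoT's actual implementation; `jackfactor` and `mdttolerance` play the role of the parameters of the same name):

```python
import numpy as np

def check_picks(onsets, jackfactor=5.0, mdttolerance=6.0):
    """Return a boolean mask of picks passing a simple jackknife + median test."""
    onsets = np.asarray(onsets, dtype=float)
    var_all = np.var(onsets)
    valid = np.empty(len(onsets), dtype=bool)
    for i in range(len(onsets)):
        # variance of the subset with pick i removed; removing an outlier
        # pick lowers the subset variance far below the overall variance
        valid[i] = np.var(np.delete(onsets, i)) * jackfactor >= var_all
    # median test on the picks that passed the jackknife test
    median = np.median(onsets[valid])
    valid &= np.abs(onsets - median) <= mdttolerance
    return valid
```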
<img src=images/gui/jackknife_plot.png title="Jackknife/Median test diagram">

*The result of both tests (Jackknife and Median) is shown in a diagram afterwards. The onset time is plotted against a
running number of stations. Picks that failed either the Jackknife or the median test are colored red. The median is
plotted as a green line.*
The Jackknife and median check are suitable to check for picks that are outside of the expected time window, for
example, when a wrong phase was picked. They won't recognize picks that are in close proximity to the right onset and
are just slightly too late/early.
#### 2. Wadati check

The Wadati check tests the consistency of S picks. For this the SP-time, the time difference between S and P onset, is
plotted against the P onset time. A line is fitted to the points, minimizing the error. Then the deviation of
single picks from this line is checked. If the deviation in seconds is above the *wdttolerance*
parameter ([see here](tuning.md#Pick-quality-control)), the pick is marked as invalid.
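
In essence (a sketch assuming a simple least-squares line fit, not PyLoT's exact code):

```python
import numpy as np

def wadati_check(p_onsets, s_onsets, wdttolerance=1.0):
    """Flag S picks deviating from the fitted SP-time vs P-onset line."""
    p = np.asarray(p_onsets, dtype=float)
    sp = np.asarray(s_onsets, dtype=float) - p
    slope, intercept = np.polyfit(p, sp, 1)          # first line fit
    valid = np.abs(sp - (slope * p + intercept)) <= wdttolerance
    slope2, _ = np.polyfit(p[valid], sp[valid], 1)   # refit without outliers
    vpvs = slope2 + 1.0                              # since SP = (vp/vs - 1) * (tp - t0)
    return valid, vpvs
```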
<img src=images/gui/wadati_plot.png title="Output diagram of Wadati check">

*The Wadati plot in PyLoT shows the SP onset time difference over the P onset time. A first line is fitted (black). All
picks which deviate too much from this line are marked invalid (red). Then a second line is fitted which excludes the
invalid picks. From this line's slope, the ratio of P and S wave velocity is determined.*
### Comparison between automatic and manual picks

Every pick in PyLoT consists of an earliest possible, latest possible and most likely onset time. The earliest and
latest possible onset times characterize the uncertainty of a pick. This approach is described in Diehl, Kissling and
Bormann (2012) - Tutorial for consistent phase picking at local to regional distances. These times are represented as a
Probability Density Function (PDF) for every pick. The PDF is implemented as two exponential distributions around the
most likely onset as the expected value.

To compare two single picks, their PDFs are cross correlated to create a new PDF. This corresponds to the subtraction of
the automatic pick from the manual pick.
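
A toy numpy sketch of this comparison (the exact PDF shape and all numbers are illustrative assumptions, not PyLoT internals):

```python
import numpy as np

dt = 0.01
t = np.arange(-10.0, 10.0, dt)

def pick_pdf(t, mu, sigma_left, sigma_right):
    """Two-sided exponential PDF around the most likely onset mu."""
    pdf = np.where(t < mu,
                   np.exp(-(mu - t) / sigma_left),
                   np.exp(-(t - mu) / sigma_right))
    return pdf / np.trapz(pdf, t)

manual = pick_pdf(t, 0.0, 0.2, 0.4)
auto = pick_pdf(t, 0.5, 0.3, 0.6)

# cross correlation of the two PDFs gives the PDF of (manual - auto)
diff_pdf = np.correlate(manual, auto, mode='same') * dt
diff_pdf /= np.trapz(diff_pdf, t)
expected = np.trapz(t * diff_pdf, t)  # ~ -0.5 s here: the automatic pick is later
```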
<img src=images/gui/comparison/comparison_pdf.png title="Comparison between automatic and manual pick">

*Comparison between an automatic and a manual pick for a station in PyLoT by comparing their PDFs.*
*The upper plot shows the difference between the two single picks that are shown in the lower plot.*
*The difference is implemented as a cross correlation between the two PDFs and results in a new PDF, the comparison
PDF.*
*The expected value of the comparison PDF corresponds to the time distance between the automatic and manual pick's most
likely onsets.*
*The standard deviation corresponds to the combined uncertainty.*

To compare the automatic and manual picks between multiple stations of an event, the properties of all the comparison
PDFs are shown in a histogram.
<img src=images/gui/comparison/compare_widget.png title="Comparison between picks of an event">
@ -443,13 +352,11 @@ PDFs are shown in a histogram.
*The bottom left plot shows the expected values of the comparison PDFs for P picks.*
*The top right plot shows the standard deviation of the comparison PDFs for S picks.*
*The bottom right plot shows the expected values of the comparison PDFs for S picks.*
*The standard deviation plots show that most P picks have an uncertainty between 1 and 2 seconds, while S pick
uncertainties have a much larger spread between 1 and 15 seconds.*
*This means P picks have higher quality classes on average than S picks.*
*The expected values are largely negative, meaning that the algorithm tends to pick earlier than the analyst with the
applied settings (Manual - Automatic).*
*The number of samples mentioned in the plots' legends is the number of stations that have an automatic and a manual P
pick.*
### Export and Import of automatic picks
@ -462,11 +369,7 @@ To be added.
# FAQ

Q: During manual picking the error "No channel to plot for phase ..." is displayed, and I am unable to create a pick.
A: Select a channel that should be used for the corresponding phase in the Pickwindow. For further information
read [Picking Window settings](#picking-window-settings).
Q: I see a warning "Mismatch in event identifiers" when loading picks from a file.
A: This means that PyLoT doesn't recognize the picks in the file as belonging to this specific event. They could have
been saved under a different installation of PyLoT but with the same waveform data, which means they are still
compatible and you can continue loading them. Or they could be the picks of a different event, in which case loading
them is not recommended.


@ -6,146 +6,95 @@ A description of the parameters used for determining automatic picks.
Parameters applied to the traces before the picking algorithm starts.
| Name | Description |
|---|---|
| *P Start*, *P Stop* | Define time interval relative to trace start time for CF calculation on vertical trace. Value is relative to theoretical onset time if 'Use TauPy' option is enabled in main settings of 'Tune Autopicker' dialogue. |
| *S Start*, *S Stop* | Define time interval relative to trace start time for CF calculation on horizontal traces. Value is relative to theoretical onset time if 'Use TauPy' option is enabled in main settings of 'Tune Autopicker' dialogue. |
| *Bandpass Z1* | Filter settings for Butterworth bandpass applied to vertical trace for calculation of initial P pick. |
| *Bandpass Z2* | Filter settings for Butterworth bandpass applied to vertical trace for calculation of precise P pick. |
| *Bandpass H1* | Filter settings for Butterworth bandpass applied to horizontal traces for calculation of initial S pick. |
| *Bandpass H2* | Filter settings for Butterworth bandpass applied to horizontal traces for calculation of precise S pick. |
## Initial P pick

Parameters used for determination of the initial P pick.
| Name | Description |
|---|---|
| *tLTA* | Size of gliding LTA window in seconds used for calculation of HOS-CF. |
| *pickwin P* | Size of time window in seconds in which the minimum of the AIC-CF in front of the maximum of the HOS-CF is determined. |
| *AICtsmooth* | Average of samples in this time window will be used for smoothing of the AIC-CF. |
| *checkwinP* | Time in front of the global maximum of the HOS-CF in which to search for a second local extremum. |
| *minfactorP* | Used with *checkwinP*. If a second local maximum is found, it has to be at least as big as the first maximum * *minfactorP*. |
| *tsignal* | Time window in seconds after the initial P pick used for determining signal amplitude. |
| *tnoise* | Time window in seconds in front of initial P pick used for determining noise amplitude. |
| *tsafetey* | Time in seconds between *tsignal* and *tnoise*. |
| *tslope* | Time window in seconds after initial P pick in which the slope of the onset is calculated. |
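
Several of these windows act on the AIC-CF. As a rough illustration of how such an AIC characteristic function can be computed from a data window (a generic sketch after Maeda, 1985, not PyLoT's implementation):

```python
import numpy as np

def aic_cf(data):
    """Generic AIC characteristic function; its minimum marks the likely onset."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    aic = np.zeros(n)
    for k in range(1, n - 1):
        # variances of the samples before and after the candidate onset k
        var1 = np.var(data[:k])
        var2 = np.var(data[k:])
        aic[k] = k * np.log(var1 + 1e-12) + (n - k - 1) * np.log(var2 + 1e-12)
    aic[0], aic[-1] = aic[1], aic[-2]  # pad the undefined edge values
    return aic
```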
## Initial S pick

Parameters used for determination of the initial S pick.
| Name | Description |
|---|---|
| *tdet1h* | Length of time window in seconds in which AR params of the waveform are determined. |
| *tpred1h* | Length of time window in seconds in which the waveform is predicted using the AR model. |
| *AICtsmoothS* | Average of samples in this time window is used for smoothing the AIC-CF. |
| *pickwinS* | Time window in which the minimum in the AIC-CF in front of the maximum in the ARH-CF is determined. |
| *checkwinS* | Time in front of the global maximum of the ARH-CF in which to search for a second local extremum. |
| *minfactorS* | Used with *checkwinS*. If a second local maximum is found, it has to be at least as big as the first maximum * *minfactorS*. |
| *tsignal* | Time window in seconds after the initial P pick used for determining signal amplitude. |
| *tnoise* | Time window in seconds in front of initial P pick used for determining noise amplitude. |
| *tsafetey* | Time in seconds between *tsignal* and *tnoise*. |
| *tslope* | Time window in seconds after initial P pick in which the slope of the onset is calculated. |
## Precise P pick

Parameters used for determination of the precise P pick.
| Name | Description |
|---|---|
| *Precalcwin* | Time window in seconds for recalculation of the HOS-CF. The new CF will be two times the size of *Precalcwin*, since it will be calculated from the initial pick to +/- *Precalcwin*. |
| *tsmoothP* | Average of samples in this time window will be used for smoothing the second HOS-CF. |
| *ausP* | Controls artificial uplift of samples during precise picking. A common local minimum of the smoothed and unsmoothed HOS-CF is found when the previous sample is larger or equal to the current sample times (1+*ausP*). |
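
The *ausP* criterion can be illustrated with a small sketch (generic, not PyLoT's code; `cf_raw` and `cf_smooth` stand for the unsmoothed and smoothed HOS-CF):

```python
def common_local_minimum(cf_raw, cf_smooth, aus):
    """Return the first index where both CFs show an 'uplifted' local minimum."""
    for i in range(1, len(cf_raw)):
        # previous sample must be >= current sample times (1 + aus) in both CFs
        raw_min = cf_raw[i - 1] >= cf_raw[i] * (1.0 + aus)
        smooth_min = cf_smooth[i - 1] >= cf_smooth[i] * (1.0 + aus)
        if raw_min and smooth_min:
            return i  # sample index of the precise pick candidate
    return None
```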
## Precise S pick

Parameters used for determination of the precise S pick.
| Name | Description |
|---|---|
| *tdet2h* | Time window for determination of AR coefficients. |
| *tpred2h* | Time window in which the waveform is predicted using the determined AR parameters. |
| *Srecalcwin* | Time window for recalculation of ARH-CF. New CF will be calculated from initial pick +/- *Srecalcwin*. |
| *tsmoothS* | Average of samples in this time window will be used for smoothing the second ARH-CF. |
| *ausS* | Controls artificial uplift of samples during precise picking. A common local minimum of the smoothed and unsmoothed ARH-CF is found when the previous sample is larger or equal to the current sample times (1+*ausS*). |
| *pickwinS* | Time window around initial pick in which to look for a precise pick. |
## Pick quality control

Parameters used for checking quality and integrity of automatic picks.
| Name | Description |
|---|---|
| *minAICPslope* | Initial P picks with a slope lower than this value will be discarded. |
| *minAICPSNR* | Initial P picks with a SNR below this value will be discarded. |
| *minAICSslope* | Initial S picks with a slope lower than this value will be discarded. |
| *minAICSSNR* | Initial S picks with a SNR below this value will be discarded. |
| *minsiglength*, *noisefactor*, *minpercent* | Parameters for checking signal length. In the time window of size *minsiglength* after the initial P pick, *minpercent* of samples have to be larger than the RMS value. |
| *zfac* | To recognize misattributed S picks, the RMS amplitudes of vertical and horizontal traces are compared. The RMS amplitude of the vertical traces has to be at least *zfac* higher than the RMS amplitude on the horizontal traces for the pick to be accepted as a valid P pick. |
| *jackfactor* | A P pick is removed if the jackknife pseudo value of the variance of its subgroup is larger than the variance of all picks multiplied with the *jackfactor*. |
| *mdttolerance* | Maximum allowed deviation of P onset times from the median. Value in seconds. |
| *wdttolerance* | Maximum allowed deviation of S onset times from the line during the Wadati test. Value in seconds. |
## Pick quality determination

Parameters for discrete quality classes.
| Name | Description |
|---|---|
| *timeerrorsP* | Width of the time windows in seconds between earliest and latest possible pick which represent the quality classes 0, 1, 2, 3 for P onsets. |
| *timeerrorsS* | Width of the time windows in seconds between earliest and latest possible pick which represent the quality classes 0, 1, 2, 3 for S onsets. |
| *nfacP*, *nfacS* | For determination of latest possible onset time. The time when the signal reaches an amplitude of *nfac* * mean value of the RMS amplitude in the time window *tnoise* corresponds to the latest possible onset time. |
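
As an illustration of how such class boundaries translate a pick uncertainty into a discrete class (a sketch with placeholder thresholds; the exact assignment logic is PyLoT-internal):

```python
import numpy as np

# class boundaries in seconds (placeholder values, not recommended defaults)
timeerrorsP = [0.01, 0.02, 0.04, 0.08]

def quality_class(pick_error, boundaries):
    """Map a pick uncertainty to class 0 (best) .. len(boundaries) (worst)."""
    return int(np.searchsorted(boundaries, pick_error))

quality_class(0.03, timeerrorsP)  # -> 2
```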


@ -30,7 +30,6 @@
<file>icons/openloc.png</file>
<file>icons/compare_button.png</file>
<file>icons/pick_qualities_button.png</file>
-<file>icons/eventlist_xml_button.png</file>
<file>icons/locate_button.png</file>
<file>icons/Matlab_PILOT_icon.png</file>
<file>icons/printer.png</file>


@ -4,8 +4,10 @@
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
+#rootpath# %project path
#datapath# %data path
-#eventID# %event ID for single event processing (* for all events found in datapath)
+#database# %name of data base
+#eventID# %event ID for single event processing (* for all events found in database)
#invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
@ -41,7 +43,6 @@ global #extent# %extent of a
1150.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
iasp91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
-P,Pdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
0.05 0.5 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.05 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
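
The excerpts above illustrate the line format of these parameter files: value(s), a `#key#` tag and a `%comment`. A minimal parser sketch, assuming this simple three-field format holds for all value lines:

```python
def parse_infile_line(line):
    """Split a PyLoT .in line of the form '<value> #<key># %<comment>'."""
    if '#' not in line:
        return None
    value_part, rest = line.split('#', 1)
    key, comment = rest.split('#', 1)
    return key.strip(), value_part.strip(), comment.lstrip(' %').strip()

key, value, comment = parse_infile_line(
    "iasp91 #taup_model# %define TauPy model for traveltime estimation")
# -> ('taup_model', 'iasp91', 'define TauPy model for traveltime estimation')
```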


@ -4,8 +4,10 @@
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
-/DATA/Insheim/EVENT_DATA/LOCAL/2018.02_Insheim #datapath# %data path
-e0006.038.18 #eventID# %event ID for single event processing (* for all events found in datapath)
+/DATA/Insheim #rootpath# %project path
+EVENT_DATA/LOCAL #datapath# %data path
+2018.02_Insheim #database# %name of data base
+e0006.038.18 #eventID# %event ID for single event processing (* for all events found in database)
/DATA/Insheim/STAT_INFO #invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
@ -41,7 +43,6 @@ local #extent# %extent of a
10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
False #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
iasp91 #taup_model# %define TauPy model for traveltime estimation
-P #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
2.0 20.0 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
2.0 30.0 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
2.0 10.0 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]


@ -4,8 +4,10 @@
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
+#rootpath# %project path
#datapath# %data path
-#eventID# %event ID for single event processing (* for all events found in datapath)
+#database# %name of data base
+#eventID# %event ID for single event processing (* for all events found in database)
#invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
@ -41,7 +43,6 @@ local #extent# %extent of a
10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
iasp91 #taup_model# %define TauPy model for traveltime estimation
-P #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
2.0 10.0 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
2.0 12.0 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
2.0 8.0 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]


@ -1,12 +0,0 @@
name: pylot_311
channels:
- conda-forge
- defaults
dependencies:
- cartopy=0.23.0=py311hcf9f919_1
- joblib=1.4.2=pyhd8ed1ab_0
- obspy=1.4.1=py311he736701_3
- pyaml=24.7.0=pyhd8ed1ab_0
- pyqtgraph=0.13.7=pyhd8ed1ab_0
- pyside2=5.15.8=py311h3d699ce_4
- pytest=8.3.2=pyhd8ed1ab_0


@ -9,7 +9,7 @@ PyLoT - the Python picking and Localization Tool
This python library contains a graphical user interfaces for picking
seismic phases. This software needs ObsPy (http://github.com/obspy/obspy/wiki)
-and the Qt libraries to be installed first.
+and the Qt4 libraries to be installed first.

PILOT has been developed in Mathworks' MatLab. In order to distribute
PILOT without facing portability problems, it has been decided to re-


@ -144,10 +144,6 @@ class Magnitude(object):
                azimuthal_gap=self.origin_id.get_referred_object().quality.azimuthal_gap)
        else:
            # no scaling necessary
-            # Temporary fix needs rework
-            if (len(self.magnitudes.keys()) == 0):
-                print("Error in local magnitude calculation ")
-                return None
            mag = ope.Magnitude(
                mag=np.median([M.mag for M in self.magnitudes.values()]),
                magnitude_type=self.type,
@ -225,16 +221,11 @@ class LocalMagnitude(Magnitude):
        power = [np.power(tr.data, 2) for tr in st if tr.stats.channel[-1] not
                 in 'Z3']
-        # checking horizontal count and calculating power_sum accordingly
-        if len(power) == 1:
-            print('WARNING: Only one horizontal found for station {0}.'.format(st[0].stats.station))
-            power_sum = power[0]
-        elif len(power) == 2:
-            power_sum = power[0] + power[1]
-        else:
-            raise ValueError('Wood-Anderson aomplitude defintion only valid for'
-                             ' up to two horizontals: {0} given'.format(len(power)))
+        if len(power) != 2:
+            raise ValueError('Wood-Anderson amplitude defintion only valid for '
+                             'two horizontals: {0} given'.format(len(power)))
+        power_sum = power[0] + power[1]
+        #
        sqH = np.sqrt(power_sum)
        # get time array
@ -286,13 +277,12 @@ class LocalMagnitude(Magnitude):
        for a in self.arrivals:
            if a.phase not in 'sS':
                continue
-            pick = a.pick_id.get_referred_object()
-            station = pick.waveform_id.station_code
            # make sure calculating Ml only from reliable onsets
            # NLLoc: time_weight = 0 => do not use onset!
            if a.time_weight == 0:
-                print("Uncertain pick at Station {}, do not use it!".format(station))
                continue
+            pick = a.pick_id.get_referred_object()
+            station = pick.waveform_id.station_code
            wf = select_for_phase(self.stream.select(
                station=station), a.phase)
            if not wf:
@ -404,32 +394,24 @@ class MomentMagnitude(Magnitude):
print("WARNING: No instrument corrected data available," print("WARNING: No instrument corrected data available,"
" no magnitude calculation possible! Go on.") " no magnitude calculation possible! Go on.")
continue continue
wf = self.stream.select(station=station) scopy = self.stream.copy()
wf = scopy.select(station=station)
if not wf: if not wf:
continue continue
try:
scopy = wf.copy()
except AssertionError:
print("WARNING: Something's wrong with the data,"
"station {},"
"no calculation of moment magnitude possible! Go on.".format(station))
continue
onset = pick.time onset = pick.time
distance = degrees2kilometers(a.distance) distance = degrees2kilometers(a.distance)
azimuth = a.azimuth azimuth = a.azimuth
incidence = a.takeoff_angle incidence = a.takeoff_angle
if not 0. <= incidence <= 360.: w0, fc = calcsourcespec(wf, onset, self.p_velocity, distance,
if self.verbose:
print(f'WARNING: Incidence angle outside bounds - {incidence}')
return
w0, fc = calcsourcespec(scopy, onset, self.p_velocity, distance,
azimuth, incidence, self.p_attenuation, azimuth, incidence, self.p_attenuation,
self.plot_flag, self.verbose) self.plot_flag, self.verbose)
if w0 is None or fc is None: if w0 is None or fc is None:
if self.verbose: if self.verbose:
print("WARNING: insufficient frequency information") print("WARNING: insufficient frequency information")
continue continue
WF = select_for_phase(scopy, "P") WF = select_for_phase(self.stream.select(
station=station), a.phase)
WF = select_for_phase(WF, "P")
m0, mw = calcMoMw(WF, w0, self.rock_density, self.p_velocity, m0, mw = calcMoMw(WF, w0, self.rock_density, self.p_velocity,
distance, self.verbose) distance, self.verbose)
self.moment_props = (station, dict(w0=w0, fc=fc, Mo=m0)) self.moment_props = (station, dict(w0=w0, fc=fc, Mo=m0))
@ -440,40 +422,6 @@ class MomentMagnitude(Magnitude):
            self.event.station_magnitudes.append(magnitude)
            self.magnitudes = (station, magnitude)
# WIP JG
def getSourceSpec(self):
for a in self.arrivals:
if a.phase not in 'pP':
continue
# make sure calculating Mo only from reliable onsets
# NLLoc: time_weight = 0 => do not use onset!
if a.time_weight == 0:
continue
pick = a.pick_id.get_referred_object()
station = pick.waveform_id.station_code
if len(self.stream) <= 2:
print("Station:" '{0}'.format(station))
print("WARNING: No instrument corrected data available,"
" no magnitude calculation possible! Go on.")
continue
wf = self.stream.select(station=station)
if not wf:
continue
try:
scopy = wf.copy()
except AssertionError:
print("WARNING: Something's wrong with the data,"
"station {},"
"no calculation of moment magnitude possible! Go on.".format(station))
continue
onset = pick.time
distance = degrees2kilometers(a.distance)
azimuth = a.azimuth
incidence = a.takeoff_angle
w0, fc, plt = calcsourcespec(scopy, onset, self.p_velocity, distance,
azimuth, incidence, self.p_attenuation,
3, self.verbose)
return w0, fc, plt
def calcMoMw(wfstream, w0, rho, vp, delta, verbosity=False):
    '''
@ -636,15 +584,15 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
    # fft
    fny = freq / 2
-    # l = len(xdat) / freq
+    #l = len(xdat) / freq
    # number of fft bins after Bath
-    # n = freq * l
+    #n = freq * l
    # find next power of 2 of data length
    m = pow(2, np.ceil(np.log(len(xdat)) / np.log(2)))
    N = min(int(np.power(m, 2)), 16384)
-    # N = int(np.power(m, 2))
+    #N = int(np.power(m, 2))
    y = dt * np.fft.fft(xdat, N)
-    Y = abs(y[: int(N / 2)])
+    Y = abs(y[: N / 2])
    L = (N - 1) / freq
    f = np.arange(0, fny, 1 / L)
@ -721,8 +669,6 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
            plt.xlabel('Frequency [Hz]')
            plt.ylabel('Amplitude [m/Hz]')
            plt.grid()
-            if iplot == 3:
-                return w0, Fc, plt
            plt.show()
            try:
                input()


@ -2,26 +2,29 @@
# -*- coding: utf-8 -*-

import copy
-import logging
import os

-from PySide2.QtWidgets import QMessageBox
from obspy import read_events
from obspy.core import read, Stream, UTCDateTime
from obspy.core.event import Event as ObsPyEvent
from obspy.io.sac import SacIOError
+from PySide2.QtWidgets import QMessageBox

+import pylot.core.loc.velest as velest
import pylot.core.loc.focmec as focmec
import pylot.core.loc.hypodd as hypodd
-import pylot.core.loc.velest as velest
from pylot.core.io.phases import readPILOTEvent, picks_from_picksdict, \
    picksdict_from_pilot, merge_picks, PylotParameter
from pylot.core.util.errors import FormatError, OverwriteError
from pylot.core.util.event import Event
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT
from pylot.core.util.utils import fnConstructor, full_range, check4rotated, \
-    check_for_gaps_and_merge, trim_station_components, check_for_nan
+    check4gapsAndMerge, trim_station_components

+try:
+    str_TypeLst = [str, unicode]  # if python 2.X
+except NameError:
+    str_TypeLst = [str]  # if python 3.*


class Data(object):
    """
@ -36,17 +39,8 @@ class Data(object):
    loaded event. Container object holding, e.g. phase arrivals, etc.
    """

-    def __init__(self, parent=None, evtdata=None, picking_parameter=None):
+    def __init__(self, parent=None, evtdata=None):
        self._parent = parent
-        if not picking_parameter:
-            if hasattr(parent, '_inputs'):
-                picking_parameter = parent._inputs
-            else:
-                logging.warning('No picking parameters found! Using default input parameters!!!')
-                picking_parameter = PylotParameter()
-        self.picking_parameter = picking_parameter
        if self.getParent():
            self.comp = parent.getComponent()
        else:
@ -58,10 +52,10 @@ class Data(object):
        elif isinstance(evtdata, dict):
            evt = readPILOTEvent(**evtdata)
            evtdata = evt
-        elif isinstance(evtdata, str):
+        elif type(evtdata) in str_TypeLst:
            try:
                cat = read_events(evtdata)
-                if len(cat) != 1:
+                if len(cat) is not 1:
                    raise ValueError('ambiguous event information for file: '
                                     '{file}'.format(file=evtdata))
                evtdata = cat[0]
@ -74,7 +68,7 @@ class Data(object):
            elif 'LOC' in evtdata:
                raise NotImplementedError('PILOT location information '
                                          'read support not yet '
-                                          'implemented.')
+                                          'implemeted.')
            elif 'event.pkl' in evtdata:
                evtdata = qml_from_obspyDMT(evtdata)
            else:
@ -110,7 +104,7 @@ class Data(object):
                          old_pick.phase_hint == new_pick.phase_hint,
                          old_pick.method_id == new_pick.method_id]
            if all(comparison):
-                del (old_pick)
+                del(old_pick)
            old_picks.append(new_pick)
        elif not other.isNew() and self.isNew():
            new = other + self
@ -173,8 +167,9 @@ class Data(object):
    def checkEvent(self, event, fcheck, forceOverwrite=False):
        """
-        Check information in supplied event and own event and replace with own
-        information if no other information are given or forced by forceOverwrite
+        Check information in supplied event and own event and replace own
+        information with supplied information if own information not exiisting
+        or forced by forceOverwrite
        :param event: Event that supplies information for comparison
        :type event: pylot.core.util.event.Event
        :param fcheck: check and delete existing information
@ -230,7 +225,7 @@ class Data(object):
def replacePicks(self, event, picktype): def replacePicks(self, event, picktype):
""" """
Replace picks in event with own picks Replace own picks with the one in event
:param event: Event that supplies information for comparison :param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event :type event: pylot.core.util.event.Event
:param picktype: 'auto' or 'manual' picks :param picktype: 'auto' or 'manual' picks
@ -238,22 +233,19 @@ class Data(object):
:return: :return:
:rtype: None :rtype: None
""" """
checkflag = 1 checkflag = 0
picks = event.picks picks = event.picks
# remove existing picks # remove existing picks
for j, pick in reversed(list(enumerate(picks))): for j, pick in reversed(list(enumerate(picks))):
try: try:
if picktype in str(pick.method_id.id): if picktype in str(pick.method_id.id):
picks.pop(j) picks.pop(j)
checkflag = 2 checkflag = 1
except AttributeError as e: except AttributeError as e:
msg = '{}'.format(e) msg = '{}'.format(e)
print(e) print(e)
checkflag = 0 checkflag = 0
if checkflag > 0: if checkflag:
if checkflag == 1:
print("Write new %s picks to catalog." % picktype)
if checkflag == 2:
print("Found %s pick(s), remove them and append new picks to catalog." % picktype) print("Found %s pick(s), remove them and append new picks to catalog." % picktype)
# append new picks # append new picks
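The removal step above pops matching picks while iterating in reverse, so pop() does not shift the indices still to be visited. A self-contained sketch with invented event content:

from obspy.core.event import Event, Pick

event = Event()
event.picks = [Pick(method_id='smi:local/auto'), Pick(method_id='smi:local/manual')]
picktype = 'auto'
# iterate from the back so pop() leaves the remaining indices valid
for j, pick in reversed(list(enumerate(event.picks))):
    if picktype in str(pick.method_id.id):
        event.picks.pop(j)
# -> only the manual pick remains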
@ -270,6 +262,7 @@ class Data(object):
can be a str or a list of strings of ['manual', 'auto', 'origin', 'magnitude'] can be a str or a list of strings of ['manual', 'auto', 'origin', 'magnitude']
""" """
from pylot.core.util.defaults import OUTPUTFORMATS from pylot.core.util.defaults import OUTPUTFORMATS
if not type(fcheck) == list: if not type(fcheck) == list:
fcheck = [fcheck] fcheck = [fcheck]
@ -281,11 +274,8 @@ class Data(object):
raise FormatError(errmsg) raise FormatError(errmsg)
if hasattr(self.get_evt_data(), 'notes'): if hasattr(self.get_evt_data(), 'notes'):
try:
with open(os.path.join(os.path.dirname(fnout), 'notes.txt'), 'w') as notes_file: with open(os.path.join(os.path.dirname(fnout), 'notes.txt'), 'w') as notes_file:
notes_file.write(self.get_evt_data().notes) notes_file.write(self.get_evt_data().notes)
except Exception as e:
print('Warning: Could not save notes.txt: ', str(e))
# check for already existing xml-file # check for already existing xml-file
if fnext == '.xml': if fnext == '.xml':
@ -302,9 +292,7 @@ class Data(object):
return return
self.checkEvent(event, fcheck) self.checkEvent(event, fcheck)
self.setEvtData(event) self.setEvtData(event)
self.get_evt_data().write(fnout + fnext, format=evtformat) self.get_evt_data().write(fnout + fnext, format=evtformat)
# try exporting event # try exporting event
else: else:
evtdata_org = self.get_evt_data() evtdata_org = self.get_evt_data()
@ -327,64 +315,38 @@ class Data(object):
del picks_copy[k] del picks_copy[k]
break break
lendiff = len(picks) - len(picks_copy) lendiff = len(picks) - len(picks_copy)
if lendiff != 0: if lendiff is not 0:
print("Manual as well as automatic picks available. Prefered the {} manual ones!".format(lendiff)) print("Manual as well as automatic picks available. Prefered the {} manual ones!".format(lendiff))
no_uncertainties_p = []
no_uncertainties_s = []
if upperErrors: if upperErrors:
# check for pick uncertainties exceeding adjusted upper errors # check for pick uncertainties exceeding adjusted upper errors
# Picks with larger uncertainties will not be saved in output file! # Picks with larger uncertainties will not be saved in output file!
for j in range(len(picks)): for j in range(len(picks)):
for i in range(len(picks_copy)): for i in range(len(picks_copy)):
if picks_copy[i].phase_hint[0] == 'P': if picks_copy[i].phase_hint[0] == 'P':
# Skipping pick if no upper_uncertainty is found and warning user
if picks_copy[i].time_errors['upper_uncertainty'] is None:
#print("{1} P-Pick of station {0} does not have upper_uncertainty and cant be checked".format(
# picks_copy[i].waveform_id.station_code,
# picks_copy[i].method_id))
if not picks_copy[i].waveform_id.station_code in no_uncertainties_p:
no_uncertainties_p.append(picks_copy[i].waveform_id.station_code)
continue
#print ("checking for upper_uncertainty")
if (picks_copy[i].time_errors['uncertainty'] is None) or \ if (picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[0]) or \
(picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[0]): (picks_copy[i].time_errors['uncertainty'] is None):
print("Uncertainty exceeds or equal adjusted upper time error!") print("Uncertainty exceeds or equal adjusted upper time error!")
print("Adjusted uncertainty: {}".format(upperErrors[0])) print("Adjusted uncertainty: {}".format(upperErrors[0]))
print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty'])) print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty']))
print("{1} P-Pick of station {0} will not be saved in outputfile".format( print("{1} P-Pick of station {0} will not be saved in outputfile".format(
picks_copy[i].waveform_id.station_code, picks_copy[i].waveform_id.station_code,
picks_copy[i].method_id)) picks_copy[i].method_id))
print("#")
del picks_copy[i] del picks_copy[i]
break break
if picks_copy[i].phase_hint[0] == 'S': if picks_copy[i].phase_hint[0] == 'S':
# Skipping pick if no upper_uncertainty is found and warning user
if picks_copy[i].time_errors['upper_uncertainty'] is None:
#print("{1} S-Pick of station {0} does not have upper_uncertainty and cant be checked".format(
#picks_copy[i].waveform_id.station_code,
#picks_copy[i].method_id))
if not picks_copy[i].waveform_id.station_code in no_uncertainties_s:
no_uncertainties_s.append(picks_copy[i].waveform_id.station_code)
continue
if (picks_copy[i].time_errors['uncertainty'] is None) or \ if (picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[1]) or \
(picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[1]): (picks_copy[i].time_errors['uncertainty'] is None):
print("Uncertainty exceeds or equal adjusted upper time error!") print("Uncertainty exceeds or equal adjusted upper time error!")
print("Adjusted uncertainty: {}".format(upperErrors[1])) print("Adjusted uncertainty: {}".format(upperErrors[1]))
print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty'])) print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty']))
print("{1} S-Pick of station {0} will not be saved in outputfile".format( print("{1} S-Pick of station {0} will not be saved in outputfile".format(
picks_copy[i].waveform_id.station_code, picks_copy[i].waveform_id.station_code,
picks_copy[i].method_id)) picks_copy[i].method_id))
print("#")
del picks_copy[i] del picks_copy[i]
break break
for s in no_uncertainties_p:
print("P-Pick of station {0} does not have upper_uncertainty and cant be checked".format(s))
for s in no_uncertainties_s:
print("S-Pick of station {0} does not have upper_uncertainty and cant be checked".format(s))
if fnext == '.obs': if fnext == '.obs':
try: try:
@ -394,14 +356,6 @@ class Data(object):
header = '# EQEVENT: Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' % evid header = '# EQEVENT: Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' % evid
nllocfile = open(fnout + fnext) nllocfile = open(fnout + fnext)
l = nllocfile.readlines() l = nllocfile.readlines()
# Adding A0/Generic Amplitude to .obs file
# l2 = []
# for li in l:
# for amp in evtdata_org.amplitudes:
# if amp.waveform_id.station_code == li[0:5].strip():
# li = li[0:64] + '{:0.2e}'.format(amp.generic_amplitude) + li[73:-1] + '\n'
# l2.append(li)
# l = l2
nllocfile.close() nllocfile.close()
l.insert(0, header) l.insert(0, header)
nllocfile = open(fnout + fnext, 'w') nllocfile = open(fnout + fnext, 'w')
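The .obs post-processing above amounts to "read all lines, insert a header at index 0, rewrite the file". A self-contained sketch with an invented filename and event label:

header = '# EQEVENT: Label: EQdemo Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n'
fname = 'demo.obs'  # hypothetical NLLoc phase file
with open(fname) as f:
    lines = f.readlines()
lines.insert(0, header)
with open(fname, 'w') as f:
    f.writelines(lines)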
@ -412,19 +366,25 @@ class Data(object):
not implemented: {1}'''.format(evtformat, e)) not implemented: {1}'''.format(evtformat, e))
if fnext == '.cnv': if fnext == '.cnv':
try: try:
velest.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data()) velest.export(picks_copy, fnout + fnext, eventinfo=self.get_evt_data())
except KeyError as e: except KeyError as e:
raise KeyError('''{0} export format raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e)) not implemented: {1}'''.format(evtformat, e))
if fnext == '_focmec.in': if fnext == '_focmec.in':
try: try:
focmec.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data()) infile = os.path.join(os.path.expanduser('~'), '.pylot', 'pylot.in')
print('Using default input file {}'.format(infile))
parameter = PylotParameter(infile)
focmec.export(picks_copy, fnout + fnext, parameter, eventinfo=self.get_evt_data())
except KeyError as e: except KeyError as e:
raise KeyError('''{0} export format raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e)) not implemented: {1}'''.format(evtformat, e))
if fnext == '.pha': if fnext == '.pha':
try: try:
hypodd.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data()) infile = os.path.join(os.path.expanduser('~'), '.pylot', 'pylot.in')
print('Using default input file {}'.format(infile))
parameter = PylotParameter(infile)
hypodd.export(picks_copy, fnout + fnext, parameter, eventinfo=self.get_evt_data())
except KeyError as e: except KeyError as e:
raise KeyError('''{0} export format raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e)) not implemented: {1}'''.format(evtformat, e))
@ -455,30 +415,20 @@ class Data(object):
data.filter(**kwargs) data.filter(**kwargs)
self.dirty = True self.dirty = True
def setWFData(self, fnames, fnames_alt=None, checkRotated=False, metadata=None, tstart=0, tstop=0): def setWFData(self, fnames, fnames_syn=None, checkRotated=False, metadata=None, tstart=0, tstop=0):
""" """
Clear current waveform data and set given waveform data Clear current waveform data and set given waveform data
:param fnames: waveform data names to append :param fnames: waveform data names to append
:param fnames_alt: alternative data to show (e.g. synthetic/processed)
:type fnames: list :type fnames: list
""" """
def check_fname_exists(filenames: list) -> list:
if filenames:
filenames = [fn for fn in filenames if os.path.isfile(fn)]
return filenames
self.wfdata = Stream() self.wfdata = Stream()
self.wforiginal = None self.wforiginal = None
self.wf_alt = Stream() self.wfsyn = Stream()
if tstart == tstop: if tstart == tstop:
tstart = tstop = None tstart = tstop = None
self.tstart = tstart self.tstart = tstart
self.tstop = tstop self.tstop = tstop
# remove directories
fnames = check_fname_exists(fnames)
fnames_alt = check_fname_exists(fnames_alt)
# if obspy_dmt: # if obspy_dmt:
# wfdir = 'raw' # wfdir = 'raw'
# self.processed = False # self.processed = False
@ -496,8 +446,8 @@ class Data(object):
# wffnames = fnames # wffnames = fnames
if fnames is not None: if fnames is not None:
self.appendWFData(fnames) self.appendWFData(fnames)
if fnames_alt is not None: if fnames_syn is not None:
self.appendWFData(fnames_alt, alternative=True) self.appendWFData(fnames_syn, synthetic=True)
else: else:
return False return False
@ -505,9 +455,7 @@ class Data(object):
# remove possible underscores in station names # remove possible underscores in station names
# self.wfdata = remove_underscores(self.wfdata) # self.wfdata = remove_underscores(self.wfdata)
# check for gaps and merge # check for gaps and merge
self.wfdata, _ = check_for_gaps_and_merge(self.wfdata) self.wfdata = check4gapsAndMerge(self.wfdata)
# check for nans
check_for_nan(self.wfdata)
# check for stations with rotated components # check for stations with rotated components
if checkRotated and metadata is not None: if checkRotated and metadata is not None:
self.wfdata = check4rotated(self.wfdata, metadata, verbosity=0) self.wfdata = check4rotated(self.wfdata, metadata, verbosity=0)
@ -519,7 +467,7 @@ class Data(object):
self.dirty = False self.dirty = False
return True return True
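check_for_gaps_and_merge on the develop side boils down to ObsPy gap detection plus a merge; a rough equivalent, where the merge settings are an assumption rather than PyLoT's exact call:

from obspy import read

st = read()  # ObsPy demo stream
if st.get_gaps():  # non-empty list means gaps/overlaps exist
    st.merge(method=1, fill_value=0)  # assumed settings, not necessarily PyLoT's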
def appendWFData(self, fnames, alternative=False): def appendWFData(self, fnames, synthetic=False):
""" """
Read waveform data from fnames and append it to current wf data Read waveform data from fnames and append it to current wf data
:param fnames: waveform data to append :param fnames: waveform data to append
@ -532,20 +480,19 @@ class Data(object):
if self.dirty: if self.dirty:
self.resetWFData() self.resetWFData()
orig_or_alternative_data = {True: self.wf_alt, real_or_syn_data = {True: self.wfsyn,
False: self.wfdata} False: self.wfdata}
warnmsg = '' warnmsg = ''
for fname in set(fnames): for fname in set(fnames):
try: try:
orig_or_alternative_data[alternative] += read(fname, starttime=self.tstart, endtime=self.tstop) real_or_syn_data[synthetic] += read(fname, starttime=self.tstart, endtime=self.tstop)
except TypeError: except TypeError:
try: try:
orig_or_alternative_data[alternative] += read(fname, format='GSE2', starttime=self.tstart, endtime=self.tstop) real_or_syn_data[synthetic] += read(fname, format='GSE2', starttime=self.tstart, endtime=self.tstop)
except Exception as e: except Exception as e:
try: try:
orig_or_alternative_data[alternative] += read(fname, format='SEGY', starttime=self.tstart, real_or_syn_data[synthetic] += read(fname, format='SEGY', starttime=self.tstart, endtime=self.tstop)
endtime=self.tstop)
except Exception as e: except Exception as e:
warnmsg += '{0}\n{1}\n'.format(fname, e) warnmsg += '{0}\n{1}\n'.format(fname, e)
except SacIOError as se: except SacIOError as se:
@ -560,8 +507,8 @@ class Data(object):
def getOriginalWFData(self): def getOriginalWFData(self):
return self.wforiginal return self.wforiginal
def getAltWFdata(self): def getSynWFData(self):
return self.wf_alt return self.wfsyn
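The nested try/except chain in appendWFData tries format autodetection first and falls back to explicit GSE2 and SEGY reads. The same pattern as a small standalone helper (name and defaults are ours, not PyLoT's):

from obspy import Stream, read

def read_with_fallback(fname, formats=(None, 'GSE2', 'SEGY')):
    # format=None lets ObsPy autodetect; the explicit formats are fallbacks
    for fmt in formats:
        try:
            return read(fname, format=fmt)
        except Exception:
            continue
    return Stream()  # nothing worked: empty stream, caller can warn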
def resetWFData(self): def resetWFData(self):
""" """
@ -585,7 +532,7 @@ class Data(object):
def setEvtData(self, event): def setEvtData(self, event):
self.evtdata = event self.evtdata = event
def applyEVTData(self, data, typ='pick'): def applyEVTData(self, data, typ='pick', authority_id='rub'):
""" """
Either takes an `obspy.core.event.Event` object and applies all new Either takes an `obspy.core.event.Event` object and applies all new
information on the event to the actual data if typ is 'event or information on the event to the actual data if typ is 'event or
@ -836,8 +783,8 @@ class PilotDataStructure(GenericDataStructure):
def __init__(self, **fields): def __init__(self, **fields):
if not fields: if not fields:
fields = {'database': '', fields = {'database': '2006.01',
'root': ''} 'root': '/data/Egelados/EVENT_DATA/LOCAL'}
GenericDataStructure.__init__(self, **fields) GenericDataStructure.__init__(self, **fields)
@ -6,14 +6,24 @@ import numpy as np
Default parameters used for picking Default parameters used for picking
""" """
defaults = {'datapath': {'type': str, defaults = {'rootpath': {'type': str,
'tooltip': 'path to eventfolders', 'tooltip': 'project path',
'value': '',
'namestring': 'Root path'},
'datapath': {'type': str,
'tooltip': 'data path',
'value': '', 'value': '',
'namestring': 'Data path'}, 'namestring': 'Data path'},
'database': {'type': str,
'tooltip': 'name of data base',
'value': '',
'namestring': 'Database path'},
'eventID': {'type': str, 'eventID': {'type': str,
'tooltip': 'event ID for single event processing (* for all events found in datapath)', 'tooltip': 'event ID for single event processing (* for all events found in database)',
'value': '*', 'value': '',
'namestring': 'Event ID'}, 'namestring': 'Event ID'},
'extent': {'type': str, 'extent': {'type': str,
@ -512,7 +522,9 @@ defaults = {'datapath': {'type': str,
settings_main = { settings_main = {
'dirs': [ 'dirs': [
'rootpath',
'datapath', 'datapath',
'database',
'eventID', 'eventID',
'invdir', 'invdir',
'datastructure', 'datastructure',
@ -1,84 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Script to get event parameters from PyLoT-xml file to write
them into eventlist.
LK, igem, 03/2021
Edited for use in PyLoT
JG, igem, 01/2022
"""
import os
import argparse
import numpy as np
import matplotlib.pyplot as plt
import glob
from obspy.core.event import read_events
from pyproj import Proj
"""
Creates an eventlist file summarizing all events found in a certain folder. Only called by pressing the UI button eventlis_xml_action.
:rtype:
:param path: Path to the root folder where the single event folders are to be found
"""
def geteventlistfromxml(path, outpath):
p = Proj(proj='utm', zone=32, ellps='WGS84')
# open eventlist file and write header
evlist = outpath + '/eventlist'
evlistobj = open(evlist, 'w')
evlistobj.write(
'EventID Date To Lat Lon EAST NORTH Dep Ml NoP NoS RMS errH errZ Gap \n')
# data path
dp = path + "/e*/*.xml"
# list of all available xml-files
xmlnames = glob.glob(dp)
# read all onset weights
for names in xmlnames:
print("Getting location parameters from {}".format(names))
cat = read_events(names)
try:
st = cat.events[0].origins[0].time
Lat = cat.events[0].origins[0].latitude
Lon = cat.events[0].origins[0].longitude
EAST, NORTH = p(Lon, Lat)
Dep = cat.events[0].origins[0].depth / 1000
Ml = cat.events[0].magnitudes[1].mag
NoP = []
NoS = []
except IndexError:
print('Insufficient data found for event (not localised): ' + names.split('/')[-1].split('_')[-1][
:-4] + ' Skipping event for eventlist.')
continue
for i in range(len(cat.events[0].origins[0].arrivals)):
if cat.events[0].origins[0].arrivals[i].phase == 'P':
NoP.append(cat.events[0].origins[0].arrivals[i].phase)
elif cat.events[0].origins[0].arrivals[i].phase == 'S':
NoS.append(cat.events[0].origins[0].arrivals[i].phase)
# NoP = cat.events[0].origins[0].quality.used_station_count
errH = cat.events[0].origins[0].origin_uncertainty.max_horizontal_uncertainty
errZ = cat.events[0].origins[0].depth_errors.uncertainty
Gap = cat.events[0].origins[0].quality.azimuthal_gap
# evID = names.split('/')[6]
evID = names.split('/')[-1].split('_')[-1][:-4]
Date = str(st.year) + str('%02d' % st.month) + str('%02d' % st.day)
To = str('%02d' % st.hour) + str('%02d' % st.minute) + str('%02d' % st.second) + \
'.' + str('%06d' % st.microsecond)
# write into eventlist
evlistobj.write('%s %s %s %9.6f %9.6f %13.6f %13.6f %8.6f %3.1f %d %d NaN %d %d %d\n' % (evID, \
Date, To, Lat, Lon,
EAST, NORTH, Dep, Ml,
len(NoP), len(NoS),
errH, errZ, Gap))
print('Adding Event ' + names.split('/')[-1].split('_')[-1][:-4] + ' to eventlist')
print('Eventlist created and saved in: ' + outpath)
evlistobj.close()
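The coordinate conversion in the removed script relies on pyproj's UTM projection; in isolation it looks like this (coordinates invented):

from pyproj import Proj

p = Proj(proj='utm', zone=32, ellps='WGS84')  # same projection as above
east, north = p(7.5, 51.5)  # call order is (lon, lat), returns metres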
@ -1,7 +1,5 @@
#!/usr/bin/env python #!/usr/bin/env python
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import logging
import os
from pylot.core.io import default_parameters from pylot.core.io import default_parameters
from pylot.core.util.errors import ParameterError from pylot.core.util.errors import ParameterError
@ -53,16 +51,10 @@ class PylotParameter(object):
self.__parameter = {} self.__parameter = {}
self._verbosity = verbosity self._verbosity = verbosity
self._parFileCont = {} self._parFileCont = {}
# io from parsed arguments alternatively # io from parsed arguments alternatively
for key, val in kwargs.items(): for key, val in kwargs.items():
self._parFileCont[key] = val self._parFileCont[key] = val
self.from_file() self.from_file()
# if no filename or kwargs given, use default values
if not fnin and not kwargs:
self.reset_defaults()
if fnout: if fnout:
self.export2File(fnout) self.export2File(fnout)
@ -96,10 +88,10 @@ class PylotParameter(object):
return bool(self.__parameter) return bool(self.__parameter)
def __getitem__(self, key): def __getitem__(self, key):
if key in self.__parameter: try:
return self.__parameter[key] return self.__parameter[key]
else: except:
logging.warning(f'{key} not found in PylotParameter') return None
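Hypothetical usage of the develop-side accessor: a known key returns its value, an unknown key only logs a warning and implicitly yields None:

par = PylotParameter()        # no file/kwargs: falls back to built-in defaults
event_id = par['eventID']     # '*' with the develop defaults shown earlier
missing = par['no_such_key']  # logs a warning, evaluates to None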
def __setitem__(self, key, value): def __setitem__(self, key, value):
try: try:
@ -426,28 +418,6 @@ class PylotParameter(object):
line = value + name + ttip line = value + name + ttip
fid.write(line) fid.write(line)
@staticmethod
def check_deprecated_parameters(parameters):
if parameters.hasParam('database') and parameters.hasParam('rootpath'):
parameters['datapath'] = os.path.join(parameters['rootpath'], parameters['datapath'],
parameters['database'])
logging.warning(
f'Parameters database and rootpath are deprecated. '
f'Tried to merge them to now path: {parameters["datapath"]}.'
)
remove_keys = []
for key in parameters:
if not key in default_parameters.defaults.keys():
remove_keys.append(key)
logging.warning(f'Removing deprecated parameter: {key}')
for key in remove_keys:
del parameters[key]
parameters._settings_main = default_parameters.settings_main
parameters._settings_special_pick = default_parameters.settings_special_pick
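The deprecation shim above just joins the three legacy path keys into the single datapath. A worked example with values in the style of the old PilotDataStructure defaults:

import os

rootpath, datapath, database = '/data/Egelados/EVENT_DATA/LOCAL', '', '2006.01'
merged = os.path.join(rootpath, datapath, database)
# -> '/data/Egelados/EVENT_DATA/LOCAL/2006.01' replaces rootpath/datapath/database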
class FilterOptions(object): class FilterOptions(object):
''' '''
@ -1,7 +1,7 @@
from obspy import UTCDateTime from obspy import UTCDateTime
from obspy.core import event as ope from obspy.core import event as ope
from pylot.core.util.utils import get_login, get_hash from pylot.core.util.utils import getLogin, getHash
def create_amplitude(pickID, amp, unit, category, cinfo): def create_amplitude(pickID, amp, unit, category, cinfo):
@ -61,7 +61,7 @@ def create_creation_info(agency_id=None, creation_time=None, author=None):
:return: :return:
''' '''
if author is None: if author is None:
author = get_login() author = getLogin()
if creation_time is None: if creation_time is None:
creation_time = UTCDateTime() creation_time = UTCDateTime()
return ope.CreationInfo(agency_id=agency_id, author=author, return ope.CreationInfo(agency_id=agency_id, author=author,
@ -210,7 +210,7 @@ def create_resourceID(timetohash, restype, authority_id=None, hrstr=None):
''' '''
assert isinstance(timetohash, UTCDateTime), "'timetohash' is not an ObsPy" \ assert isinstance(timetohash, UTCDateTime), "'timetohash' is not an ObsPy" \
"UTCDateTime object" "UTCDateTime object"
hid = get_hash(timetohash) hid = getHash(timetohash)
if hrstr is None: if hrstr is None:
resID = ope.ResourceIdentifier(restype + '/' + hid[0:6]) resID = ope.ResourceIdentifier(restype + '/' + hid[0:6])
else: else:
@ -1,14 +1,13 @@
#!/usr/bin/env python #!/usr/bin/env python
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import glob
import logging
import os
import warnings
import glob
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import numpy as np import numpy as np
import obspy.core.event as ope import obspy.core.event as ope
import os
import scipy.io as sio import scipy.io as sio
import warnings
from obspy.core import UTCDateTime from obspy.core import UTCDateTime
from obspy.core.event import read_events from obspy.core.event import read_events
from obspy.core.util import AttribDict from obspy.core.util import AttribDict
@ -17,8 +16,8 @@ from pylot.core.io.inputs import PylotParameter
from pylot.core.io.location import create_event, \ from pylot.core.io.location import create_event, \
create_magnitude create_magnitude
from pylot.core.pick.utils import select_for_phase, get_quality_class from pylot.core.pick.utils import select_for_phase, get_quality_class
from pylot.core.util.utils import get_owner, full_range, four_digits, transformFilterString4Export, \ from pylot.core.util.utils import getOwner, full_range, four_digits, transformFilterString4Export, \
backtransformFilterString, loopIdentifyPhase, identifyPhase backtransformFilterString
def add_amplitudes(event, amplitudes): def add_amplitudes(event, amplitudes):
@ -59,7 +58,7 @@ def readPILOTEvent(phasfn=None, locfn=None, authority_id='RUB', **kwargs):
if phasfn is not None and os.path.isfile(phasfn): if phasfn is not None and os.path.isfile(phasfn):
phases = sio.loadmat(phasfn) phases = sio.loadmat(phasfn)
phasctime = UTCDateTime(os.path.getmtime(phasfn)) phasctime = UTCDateTime(os.path.getmtime(phasfn))
phasauthor = get_owner(phasfn) phasauthor = getOwner(phasfn)
else: else:
phases = None phases = None
phasctime = None phasctime = None
@ -67,7 +66,7 @@ def readPILOTEvent(phasfn=None, locfn=None, authority_id='RUB', **kwargs):
if locfn is not None and os.path.isfile(locfn): if locfn is not None and os.path.isfile(locfn):
loc = sio.loadmat(locfn) loc = sio.loadmat(locfn)
locctime = UTCDateTime(os.path.getmtime(locfn)) locctime = UTCDateTime(os.path.getmtime(locfn))
locauthor = get_owner(locfn) locauthor = getOwner(locfn)
else: else:
loc = None loc = None
locctime = None locctime = None
@ -218,7 +217,7 @@ def picksdict_from_obs(fn):
return picks return picks
def picksdict_from_picks(evt, parameter=None): def picksdict_from_picks(evt):
""" """
Takes an Event object and return the pick dictionary commonly used within Takes an Event object and return the pick dictionary commonly used within
PyLoT PyLoT
@ -231,10 +230,9 @@ def picksdict_from_picks(evt, parameter=None):
'auto': {} 'auto': {}
} }
for pick in evt.picks: for pick in evt.picks:
errors = None
phase = {} phase = {}
station = pick.waveform_id.station_code station = pick.waveform_id.station_code
if pick.waveform_id.channel_code is None: if pick.waveform_id.channel_code == None:
channel = '' channel = ''
else: else:
channel = pick.waveform_id.channel_code channel = pick.waveform_id.channel_code
@ -246,15 +244,16 @@ def picksdict_from_picks(evt, parameter=None):
else: else:
filter_id = None filter_id = None
try: try:
pick_method = str(pick.method_id) picker = str(pick.method_id)
if pick_method.startswith('smi:local/'): if picker.startswith('smi:local/'):
pick_method = pick_method.split('smi:local/')[1] picker = picker.split('smi:local/')[1]
except IndexError: except IndexError:
pick_method = 'manual' # MP MP TODO maybe improve statement picker = 'manual' # MP MP TODO maybe improve statement
if pick_method == 'None': if picker == 'None':
pick_method = 'manual' picker = 'manual'
try: try:
onsets = picksdict[pick_method][station] #onsets = picksdict[picker][station]
onsets = picksdict[station]
except KeyError as e: except KeyError as e:
# print(e) # print(e)
onsets = {} onsets = {}
@ -275,32 +274,36 @@ def picksdict_from_picks(evt, parameter=None):
phase['epp'] = epp phase['epp'] = epp
phase['lpp'] = lpp phase['lpp'] = lpp
phase['spe'] = spe phase['spe'] = spe
weight = phase.get('weight') try:
if not weight: phase['weight'] = weight
if not parameter: except:
logging.warning('Using default input parameter') # get onset weight from uncertainty
parameter = PylotParameter() infile = os.path.join(os.path.expanduser('~'), '.pylot', 'pylot.in')
pick.phase_hint = identifyPhase(pick.phase_hint) print('Using default input file {}'.format(infile))
parameter = PylotParameter(infile)
if pick.phase_hint == 'P': if pick.phase_hint == 'P':
errors = parameter['timeerrorsP'] errors = parameter['timeerrorsP']
elif pick.phase_hint == 'S': elif pick.phase_hint == 'S':
errors = parameter['timeerrorsS'] errors = parameter['timeerrorsS']
if errors:
weight = get_quality_class(spe, errors) weight = get_quality_class(spe, errors)
phase['weight'] = weight phase['weight'] = weight
phase['channel'] = channel phase['channel'] = channel
phase['network'] = network phase['network'] = network
phase['picker'] = pick_method phase['picker'] = picker
try:
if pick.polarity == 'positive': if pick.polarity == 'positive':
phase['fm'] = 'U' phase['fm'] = 'U'
elif pick.polarity == 'negative': elif pick.polarity == 'negative':
phase['fm'] = 'D' phase['fm'] = 'D'
else: else:
phase['fm'] = 'N' phase['fm'] = 'N'
except:
print("No FM info available!")
phase['fm'] = 'N'
phase['filter_id'] = filter_id if filter_id is not None else '' phase['filter_id'] = filter_id if filter_id is not None else ''
onsets[pick.phase_hint] = phase.copy() onsets[pick.phase_hint] = phase.copy()
picksdict[pick_method][station] = onsets.copy() picksdict[station] = onsets.copy()
return picksdict return picksdict
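For orientation, the develop-side return value is a two-level dictionary keyed by picker and station; an illustrative, made-up instance:

picksdict = {
    'manual': {
        'ST01': {
            'P': {'mpp': None, 'epp': None, 'lpp': None, 'spe': 0.1,
                  'weight': 1, 'channel': 'HHZ', 'network': 'XX',
                  'picker': 'manual', 'fm': 'U', 'filter_id': ''},
        },
    },
    'auto': {},
}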
@ -373,7 +376,8 @@ def picks_from_picksdict(picks, creation_info=None):
def reassess_pilot_db(root_dir, db_dir, out_dir=None, fn_param=None, verbosity=0): def reassess_pilot_db(root_dir, db_dir, out_dir=None, fn_param=None, verbosity=0):
# TODO: change root to datapath import glob
db_root = os.path.join(root_dir, db_dir) db_root = os.path.join(root_dir, db_dir)
evt_list = glob.glob1(db_root, 'e????.???.??') evt_list = glob.glob1(db_root, 'e????.???.??')
@ -388,7 +392,9 @@ def reassess_pilot_event(root_dir, db_dir, event_id, out_dir=None, fn_param=None
from pylot.core.io.inputs import PylotParameter from pylot.core.io.inputs import PylotParameter
from pylot.core.pick.utils import earllatepicker from pylot.core.pick.utils import earllatepicker
# TODO: change root to datapath
if fn_param is None:
fn_param = defaults.AUTOMATIC_DEFAULTS
default = PylotParameter(fn_param, verbosity) default = PylotParameter(fn_param, verbosity)
@ -478,6 +484,7 @@ def reassess_pilot_event(root_dir, db_dir, event_id, out_dir=None, fn_param=None
os.makedirs(out_dir) os.makedirs(out_dir)
fnout_prefix = os.path.join(out_dir, 'PyLoT_{0}.'.format(event_id)) fnout_prefix = os.path.join(out_dir, 'PyLoT_{0}.'.format(event_id))
evt.write(fnout_prefix + 'xml', format='QUAKEML') evt.write(fnout_prefix + 'xml', format='QUAKEML')
# evt.write(fnout_prefix + 'cnv', format='VELEST')
def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None): def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
@ -506,16 +513,16 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
and FOCMEC- and HASH-input files and FOCMEC- and HASH-input files
:type eventinfo: `obspy.core.event.Event` object :type eventinfo: `obspy.core.event.Event` object
""" """
if fformat == 'NLLoc': if fformat == 'NLLoc':
print("Writing phases to %s for NLLoc" % filename) print("Writing phases to %s for NLLoc" % filename)
fid = open("%s" % filename, 'w') fid = open("%s" % filename, 'w')
# write header # write header
fid.write('# EQEVENT: %s Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' % fid.write('# EQEVENT: %s Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' %
(parameter.get('datapath'), parameter.get('eventID'))) (parameter.get('database'), parameter.get('eventID')))
arrivals = chooseArrivals(arrivals)
for key in arrivals: for key in arrivals:
# P onsets # P onsets
if 'P' in arrivals[key]: if arrivals[key].has_key('P'):
try: try:
fm = arrivals[key]['P']['fm'] fm = arrivals[key]['P']['fm']
except KeyError as e: except KeyError as e:
@ -536,7 +543,6 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
try: try:
if arrivals[key]['P']['weight'] >= 4: if arrivals[key]['P']['weight'] >= 4:
pweight = 0 # do not use pick pweight = 0 # do not use pick
print("Station {}: Uncertain pick, do not use it!".format(key))
except KeyError as e: except KeyError as e:
print(e.message + '; no weight set during processing') print(e.message + '; no weight set during processing')
fid.write('%s ? ? ? P %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key, fid.write('%s ? ? ? P %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
@ -549,7 +555,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
ss_ms, ss_ms,
pweight)) pweight))
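One NLLoc P line from the format string above, in isolation (all values invented):

key, fm, pweight = 'ST01', 'U', 1
year, month, day, hh, mm, ss_ms = 2006, 1, 2, 3, 4, 5.6789
line = '%s ? ? ? P %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (
    key, fm, year, month, day, hh, mm, ss_ms, pweight)
# -> 'ST01 ? ? ? P U 20060102 0304  5.6789 GAU 0 0 0 0 1 '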
# S onsets # S onsets
if 'S' in arrivals[key] and arrivals[key]['S']['mpp'] is not None: if arrivals[key].has_key('S') and arrivals[key]['S']['mpp'] is not None:
fm = '?' fm = '?'
onset = arrivals[key]['S']['mpp'] onset = arrivals[key]['S']['mpp']
year = onset.year year = onset.year
@ -567,9 +573,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
except KeyError as e: except KeyError as e:
print(str(e) + '; no weight set during processing') print(str(e) + '; no weight set during processing')
Ao = arrivals[key]['S']['Ao'] # peak-to-peak amplitude Ao = arrivals[key]['S']['Ao'] # peak-to-peak amplitude
if Ao == None: #fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
Ao = 0.0
# fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 %9.2f 0 0 %d \n' % (key, fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 %9.2f 0 0 %d \n' % (key,
fm, fm,
year, year,
@ -588,7 +592,6 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
# write header # write header
fid.write(' %s\n' % fid.write(' %s\n' %
parameter.get('eventID')) parameter.get('eventID'))
arrivals = chooseArrivals(arrivals) # MP MP what is chooseArrivals? It is not defined anywhere
for key in arrivals: for key in arrivals:
if arrivals[key]['P']['weight'] < 4: if arrivals[key]['P']['weight'] < 4:
stat = key stat = key
@ -664,11 +667,10 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
print("Writing phases to %s for HYPOSAT" % filename) print("Writing phases to %s for HYPOSAT" % filename)
fid = open("%s" % filename, 'w') fid = open("%s" % filename, 'w')
# write header # write header
fid.write('%s, event %s \n' % (parameter.get('datapath'), parameter.get('eventID'))) fid.write('%s, event %s \n' % (parameter.get('database'), parameter.get('eventID')))
arrivals = chooseArrivals(arrivals)
for key in arrivals: for key in arrivals:
# P onsets # P onsets
if 'P' in arrivals[key] and arrivals[key]['P']['mpp'] is not None: if arrivals[key].has_key('P') and arrivals[key]['P']['mpp'] is not None:
if arrivals[key]['P']['weight'] < 4: if arrivals[key]['P']['weight'] < 4:
Ponset = arrivals[key]['P']['mpp'] Ponset = arrivals[key]['P']['mpp']
pyear = Ponset.year pyear = Ponset.year
@ -697,7 +699,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
fid.write('%-5s P1 %4.0f %02d %02d %02d %02d %05.02f %5.3f -999. 0.00 -999. 0.00\n' fid.write('%-5s P1 %4.0f %02d %02d %02d %02d %05.02f %5.3f -999. 0.00 -999. 0.00\n'
% (key, pyear, pmonth, pday, phh, pmm, Pss, pstd)) % (key, pyear, pmonth, pday, phh, pmm, Pss, pstd))
# S onsets # S onsets
if 'S' in arrivals[key] and arrivals[key]['S']['mpp'] is not None: if arrivals[key].has_key('S') and arrivals[key]['S']['mpp'] is not None:
if arrivals[key]['S']['weight'] < 4: if arrivals[key]['S']['weight'] < 4:
Sonset = arrivals[key]['S']['mpp'] Sonset = arrivals[key]['S']['mpp']
syear = Sonset.year syear = Sonset.year
@ -756,40 +758,37 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
cns, eventsource['longitude'], cew, eventsource['depth'], eventinfo.magnitudes[0]['mag'], ifx)) cns, eventsource['longitude'], cew, eventsource['depth'], eventinfo.magnitudes[0]['mag'], ifx))
n = 0 n = 0
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT) # check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) is False: if isinstance(arrivals, dict) == False:
# convert pick object (PyLoT) into dictionary # convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id']) evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter) arrivals = picksdict_from_picks(evt)
# check for automatic and manual picks for key in arrivals:
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
# P onsets # P onsets
if 'P' in usedarrivals[key]: if arrivals[key].has_key('P'):
if usedarrivals[key]['P']['weight'] < 4: if arrivals[key]['P']['weight'] < 4:
n += 1 n += 1
stat = key stat = key
if len(stat) > 4: # VELEST handles only 4-string station IDs if len(stat) > 4: # VELEST handles only 4-string station IDs
stat = stat[1:5] stat = stat[1:5]
Ponset = usedarrivals[key]['P']['mpp'] Ponset = arrivals[key]['P']['mpp']
Pweight = usedarrivals[key]['P']['weight'] Pweight = arrivals[key]['P']['weight']
Prt = Ponset - stime # onset time relative to source time Prt = Ponset - stime # onset time relative to source time
if n % 6 != 0: if n % 6 is not 0:
fid.write('%-4sP%d%6.2f' % (stat, Pweight, Prt)) fid.write('%-4sP%d%6.2f' % (stat, Pweight, Prt))
else: else:
fid.write('%-4sP%d%6.2f\n' % (stat, Pweight, Prt)) fid.write('%-4sP%d%6.2f\n' % (stat, Pweight, Prt))
# S onsets # S onsets
if 'S' in usedarrivals[key]: if arrivals[key].has_key('S'):
if usedarrivals[key]['S']['weight'] < 4: if arrivals[key]['S']['weight'] < 4:
n += 1 n += 1
stat = key stat = key
if len(stat) > 4: # VELEST handles only 4-string station IDs if len(stat) > 4: # VELEST handles only 4-string station IDs
stat = stat[1:5] stat = stat[1:5]
Sonset = usedarrivals[key]['S']['mpp'] Sonset = arrivals[key]['S']['mpp']
Sweight = usedarrivals[key]['S']['weight'] Sweight = arrivals[key]['S']['weight']
Srt = Ponset - stime # onset time relative to source time Srt = Ponset - stime # onset time relative to source time
if n % 6 != 0: if n % 6 is not 0:
fid.write('%-4sS%d%6.2f' % (stat, Sweight, Srt)) fid.write('%-4sS%d%6.2f' % (stat, Sweight, Srt))
else: else:
fid.write('%-4sS%d%6.2f\n' % (stat, Sweight, Srt)) fid.write('%-4sS%d%6.2f\n' % (stat, Sweight, Srt))
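The VELEST export wraps the phase records six to a line via the n % 6 bookkeeping above. A standalone sketch with invented picks:

picks = [('ST%02d' % i, 1, float(i)) for i in range(1, 14)]  # 13 fake picks
out = ''
for n, (stat, weight, rel_time) in enumerate(picks, start=1):
    out += '%-4sP%d%6.2f' % (stat, weight, rel_time)
    if n % 6 == 0:
        out += '\n'  # start a new line after every sixth pick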
@ -805,12 +804,8 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
print("No source origin calculated yet, thus no hypoDD-infile creation possible!") print("No source origin calculated yet, thus no hypoDD-infile creation possible!")
return return
stime = eventsource['time'] stime = eventsource['time']
try: event = parameter.get('eventID')
event = eventinfo['pylot_id']
hddID = event.split('.')[0][1:5] hddID = event.split('.')[0][1:5]
except:
print("Error 1111111!")
hddID = "00000"
# write header # write header
fid.write('# %d %d %d %d %d %5.2f %7.4f +%6.4f %7.4f %4.2f 0.1 0.5 %4.2f %s\n' % ( fid.write('# %d %d %d %d %d %5.2f %7.4f +%6.4f %7.4f %4.2f 0.1 0.5 %4.2f %s\n' % (
stime.year, stime.month, stime.day, stime.hour, stime.minute, stime.second, stime.year, stime.month, stime.day, stime.hour, stime.minute, stime.second,
@ -821,21 +816,18 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
# convert pick object (PyLoT) into dictionary # convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id']) evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter) arrivals = picksdict_from_picks(evt)
# check for automatic and manual picks for key in arrivals:
# prefer manual picks if arrivals[key].has_key('P'):
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
if 'P' in usedarrivals[key]:
# P onsets # P onsets
if usedarrivals[key]['P']['weight'] < 4: if arrivals[key]['P']['weight'] < 4:
Ponset = usedarrivals[key]['P']['mpp'] Ponset = arrivals[key]['P']['mpp']
Prt = Ponset - stime # onset time relative to source time Prt = Ponset - stime # onset time relative to source time
fid.write('%s %6.3f 1 P\n' % (key, Prt)) fid.write('%s %6.3f 1 P\n' % (key, Prt))
if 'S' in usedarrivals[key]: if arrivals[key].has_key('S'):
# S onsets # S onsets
if usedarrivals[key]['S']['weight'] < 4: if arrivals[key]['S']['weight'] < 4:
Sonset = usedarrivals[key]['S']['mpp'] Sonset = arrivals[key]['S']['mpp']
Srt = Sonset - stime # onset time relative to source time Srt = Sonset - stime # onset time relative to source time
fid.write('%-5s %6.3f 1 S\n' % (key, Srt)) fid.write('%-5s %6.3f 1 S\n' % (key, Srt))
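hypoDD phase lines carry onset times relative to the source time; the subtraction is plain UTCDateTime arithmetic (times invented):

from obspy import UTCDateTime

stime = UTCDateTime('2006-01-02T03:04:05')      # hypothetical origin time
ponset = UTCDateTime('2006-01-02T03:04:08.25')  # hypothetical P onset
prt = ponset - stime                            # 3.25, float seconds
line = '%s %6.3f 1 P\n' % ('ST01', prt)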
@ -872,13 +864,10 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
# convert pick object (PyLoT) into dictionary # convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id']) evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter) arrivals = picksdict_from_picks(evt)
# check for automatic and manual picks for key in arrivals:
# prefer manual picks if arrivals[key].has_key('P'):
usedarrivals = chooseArrivals(arrivals) if arrivals[key]['P']['weight'] < 4 and arrivals[key]['P']['fm'] is not None:
for key in usedarrivals:
if 'P' in usedarrivals[key]:
if usedarrivals[key]['P']['weight'] < 4 and usedarrivals[key]['P']['fm'] is not None:
stat = key stat = key
for i in range(len(picks)): for i in range(len(picks)):
station = picks[i].waveform_id.station_code station = picks[i].waveform_id.station_code
@ -897,7 +886,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
fid.write('%-4s %6.2f %6.2f%s \n' % (stat, fid.write('%-4s %6.2f %6.2f%s \n' % (stat,
az, az,
inz, inz,
usedarrivals[key]['P']['fm'])) arrivals[key]['P']['fm']))
break break
fid.close() fid.close()
@ -955,11 +944,10 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
eventsource['quality']['used_phase_count'], eventsource['quality']['used_phase_count'],
erh, erz, eventinfo.magnitudes[0]['mag'], erh, erz, eventinfo.magnitudes[0]['mag'],
hashID)) hashID))
# Prefer Manual Picks over automatic ones if possible
arrivals = chooseArrivals(arrivals) # MP MP what is chooseArrivals? It is not defined anywhere
# write phase lines # write phase lines
for key in arrivals: for key in arrivals:
if 'P' in arrivals[key]: if arrivals[key].has_key('P'):
if arrivals[key]['P']['weight'] < 4 and arrivals[key]['P']['fm'] is not None: if arrivals[key]['P']['weight'] < 4 and arrivals[key]['P']['fm'] is not None:
stat = key stat = key
ccode = arrivals[key]['P']['channel'] ccode = arrivals[key]['P']['channel']
@ -1004,25 +992,6 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
fid2.close() fid2.close()
def chooseArrivals(arrivals):
"""
takes arrivals and returns the manual picks if manual and automatic ones are there
returns automatic picks if only automatic picks are there
:param arrivals: 'dictionary' with automatic and or manual arrivals
:return: arrivals but with the manual picks prefered if possible
"""
# If len of arrivals is greater than 2 it comes from autopicking so only autopicks are available
if len(arrivals) > 2:
return arrivals
if arrivals['auto'] and arrivals['manual']:
usedarrivals = arrivals['manual']
elif arrivals['auto']:
usedarrivals = arrivals['auto']
elif arrivals['manual']:
usedarrivals = arrivals['manual']
return usedarrivals
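Hypothetical usage with the two-level picks dictionary produced by picksdict_from_picks:

arrivals = {'manual': {'ST01': {'P': {'mpp': None, 'weight': 1}}},
            'auto': {}}
used = chooseArrivals(arrivals)  # -> the manual picks, the only ones present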
def merge_picks(event, picks): def merge_picks(event, picks):
""" """
takes an event object and a list of picks and searches for matching takes an event object and a list of picks and searches for matching
@ -1052,63 +1021,37 @@ def merge_picks(event, picks):
return event return event
def getQualitiesfromxml(path, errorsP, errorsS, plotflag=1, figure=None, verbosity=0): def getQualitiesfromxml(xmlnames, ErrorsP, ErrorsS, plotflag=1):
""" """
Script to get onset uncertainties from Quakeml.xml files created by PyLoT. Script to get onset uncertainties from Quakeml.xml files created by PyLoT.
Uncertainties are transformed into quality classes and visualized via histogram if desired. Uncertainties are transformed into quality classes and visualized via histogram if desired.
Ludger Küperkoch, BESTEC GmbH, 07/2017 Ludger Küperkoch, BESTEC GmbH, 07/2017
:param path: path containing xml files :param xmlnames: list of xml obspy event files containing picks
:type path: str :type xmlnames: list
:param errorsP: time errors of P waves for the four discrete quality classes :param ErrorsP: time errors of P waves for the four discrete quality classes
:type errorsP: :type ErrorsP:
:param errorsS: time errors of S waves for the four discrete quality classes :param ErrorsS: time errors of S waves for the four discrete quality classes
:type errorsS: :type ErrorsS:
:param plotflag: :param plotflag:
:type plotflag: :type plotflag:
:return: :return:
:rtype: :rtype:
""" """
def calc_perc(uncertainties, ntotal): from pylot.core.pick.utils import get_quality_class
''' simple function that calculates percentage of number of uncertainties (list length)''' from pylot.core.util.utils import loopIdentifyPhase, identifyPhase
if len(uncertainties) == 0:
return 0
else:
return 100. / ntotal * len(uncertainties)
def calc_weight_perc(psweights, weight_ids):
''' calculate percentages of different weights (pick classes!?) of total number of uncertainties of a phase'''
# count total number of list items for this phase
numWeights = np.sum([len(weight) for weight in psweights.values()])
# iterate over all available weights to return a list with percentages for plotting
plot_list = []
for weight_id in weight_ids:
plot_list.append(calc_perc(psweights[weight_id], numWeights))
return plot_list, numWeights
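A worked example for the two helpers above: three P picks in quality class 0 and one in class 1 (uncertainties in seconds, values invented):

weights_P = {0: [0.01, 0.02, 0.03], 1: [0.08], 2: [], 3: [], 4: []}
plot_list, ntotal = calc_weight_perc(weights_P, list(range(5)))
# -> plot_list == [75.0, 25.0, 0.0, 0.0, 0.0], ntotal == 4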
# get all xmlfiles in path (maybe this should be changed to one xml file for this function, selectable via GUI?)
xmlnames = glob.glob(os.path.join(path, '*.xml'))
if len(xmlnames) == 0:
print(f'No files found in path {path}.')
return False
# first define possible phases here
phases = ['P', 'S']
# define possible weights (0-4)
weight_ids = list(range(5))
# put both error lists in a dictionary with a P/S key so that the amount of code can be halved by simply using P/S as the key
errors = dict(P=errorsP, S=errorsS)
# create dictionaries for each phase (P/S) with a dictionary of empty list for each weight defined in weights
# tuple above
weights = {}
for phase in phases:
weights[phase] = {weight_id: [] for weight_id in weight_ids}
# read all onset weights
Pw0 = []
Pw1 = []
Pw2 = []
Pw3 = []
Pw4 = []
Sw0 = []
Sw1 = []
Sw2 = []
Sw3 = []
Sw4 = []
for names in xmlnames: for names in xmlnames:
print("Getting onset weights from {}".format(names)) print("Getting onset weights from {}".format(names))
cat = read_events(names) cat = read_events(names)
@ -1116,60 +1059,117 @@ def getQualitiesfromxml(path, errorsP, errorsS, plotflag=1, figure=None, verbosi
arrivals = cat.events[0].picks arrivals = cat.events[0].picks
arrivals_copy = cat_copy.events[0].picks arrivals_copy = cat_copy.events[0].picks
# Prefer manual picks if qualities are sufficient! # Prefer manual picks if qualities are sufficient!
for pick in arrivals: for Pick in arrivals:
if pick.method_id.id.split('/')[1] == 'manual': if Pick.method_id.id.split('/')[1] == 'manual':
mstation = pick.waveform_id.station_code mstation = Pick.waveform_id.station_code
mstation_ext = mstation + '_' mstation_ext = mstation + '_'
for mpick in arrivals_copy: for mpick in arrivals_copy:
phase = identifyPhase(loopIdentifyPhase(pick.phase_hint)) # MP MP catch if this fails? phase = identifyPhase(loopIdentifyPhase(Pick.phase_hint))
if phase == 'P':
if ((mpick.waveform_id.station_code == mstation) or if ((mpick.waveform_id.station_code == mstation) or
(mpick.waveform_id.station_code == mstation_ext)) and \ (mpick.waveform_id.station_code == mstation_ext)) and \
(mpick.method_id.id.split('/')[1] == 'auto') and \ (mpick.method_id.split('/')[1] == 'auto') and \
(mpick.time_errors['uncertainty'] <= errors[phase][3]): (mpick.time_errors['uncertainty'] <= ErrorsP[3]):
del mpick
break
elif phase == 'S':
if ((mpick.waveform_id.station_code == mstation) or
(mpick.waveform_id.station_code == mstation_ext)) and \
(mpick.method_id.split('/')[1] == 'auto') and \
(mpick.time_errors['uncertainty'] <= ErrorsS[3]):
del mpick del mpick
break break
lendiff = len(arrivals) - len(arrivals_copy) lendiff = len(arrivals) - len(arrivals_copy)
if lendiff != 0: if lendiff is not 0:
print("Found manual as well as automatic picks, prefered the {} manual ones!".format(lendiff)) print("Found manual as well as automatic picks, prefered the {} manual ones!".format(lendiff))
for pick in arrivals_copy: for Pick in arrivals_copy:
phase = identifyPhase(loopIdentifyPhase(pick.phase_hint)) phase = identifyPhase(loopIdentifyPhase(Pick.phase_hint))
uncertainty = pick.time_errors.uncertainty if phase == 'P':
if not uncertainty: Pqual = get_quality_class(Pick.time_errors.uncertainty, ErrorsP)
if verbosity > 0: if Pqual == 0:
print('No uncertainty, pick {} invalid!'.format(pick.method_id.id)) Pw0.append(Pick.time_errors.uncertainty)
continue elif Pqual == 1:
# check P/S phase Pw1.append(Pick.time_errors.uncertainty)
if phase not in phases: elif Pqual == 2:
Pw2.append(Pick.time_errors.uncertainty)
elif Pqual == 3:
Pw3.append(Pick.time_errors.uncertainty)
elif Pqual == 4:
Pw4.append(Pick.time_errors.uncertainty)
elif phase == 'S':
Squal = get_quality_class(Pick.time_errors.uncertainty, ErrorsS)
if Squal == 0:
Sw0.append(Pick.time_errors.uncertainty)
elif Squal == 1:
Sw1.append(Pick.time_errors.uncertainty)
elif Squal == 2:
Sw2.append(Pick.time_errors.uncertainty)
elif Squal == 3:
Sw3.append(Pick.time_errors.uncertainty)
elif Squal == 4:
Sw4.append(Pick.time_errors.uncertainty)
else:
print("Phase hint not defined for picking!") print("Phase hint not defined for picking!")
continue pass
qual = get_quality_class(uncertainty, errors[phase])
weights[phase][qual].append(uncertainty)
if plotflag == 0: if plotflag == 0:
p_unc = [weights['P'][weight_id] for weight_id in weight_ids] Punc = [Pw0, Pw1, Pw2, Pw3, Pw4]
s_unc = [weights['S'][weight_id] for weight_id in weight_ids] Sunc = [Sw0, Sw1, Sw2, Sw3, Sw4]
return p_unc, s_unc return Punc, Sunc
else: else:
if not figure:
fig = plt.figure()
ax = fig.add_subplot(111)
# get percentage of weights # get percentage of weights
listP, numPweights = calc_weight_perc(weights['P'], weight_ids) numPweights = np.sum([len(Pw0), len(Pw1), len(Pw2), len(Pw3), len(Pw4)])
listS, numSweights = calc_weight_perc(weights['S'], weight_ids) numSweights = np.sum([len(Sw0), len(Sw1), len(Sw2), len(Sw3), len(Sw4)])
if len(Pw0) > 0:
P0perc = 100 / numPweights * len(Pw0)
else:
P0perc = 0
if len(Pw1) > 0:
P1perc = 100 / numPweights * len(Pw1)
else:
P1perc = 0
if len(Pw2) > 0:
P2perc = 100 / numPweights * len(Pw2)
else:
P2perc = 0
if len(Pw3) > 0:
P3perc = 100 / numPweights * len(Pw3)
else:
P3perc = 0
if len(Pw4) > 0:
P4perc = 100 / numPweights * len(Pw4)
else:
P4perc = 0
if len(Sw0) > 0:
S0perc = 100 / numSweights * len(Sw0)
else:
S0perc = 0
if len(Sw1) > 0:
S1perc = 100 / numSweights * len(Sw1)
else:
S1perc = 0
if len(Sw2) > 0:
S2perc = 100 / numSweights * len(Sw2)
else:
S2perc = 0
if len(Sw3) > 0:
S3perc = 100 / numSweights * len(Sw3)
else:
S3perc = 0
if len(Sw4) > 0:
S4perc = 100 / numSweights * len(Sw4)
else:
S4perc = 0
y_pos = np.arange(len(weight_ids)) weights = ('0', '1', '2', '3', '4')
y_pos = np.arange(len(weights))
width = 0.34 width = 0.34
ax.bar(y_pos - width, listP, width, color='black') plt.bar(y_pos - width, [P0perc, P1perc, P2perc, P3perc, P4perc], width, color='black')
ax.bar(y_pos, listS, width, color='red') plt.bar(y_pos, [S0perc, S1perc, S2perc, S3perc, S4perc], width, color='red')
ax.set_ylabel('%') plt.ylabel('%')
ax.set_xticks(y_pos, weight_ids) plt.xticks(y_pos, weights)
ax.set_xlim([-0.5, 4.5]) plt.xlim([-0.5, 4.5])
ax.set_xlabel('Qualities') plt.xlabel('Qualities')
ax.set_title('{0} P-Qualities, {1} S-Qualities'.format(numPweights, numSweights)) plt.title('{0} P-Qualities, {1} S-Qualities'.format(numPweights, numSweights))
plt.show()
if not figure:
fig.show()
return listP, listS
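A hypothetical call of the refactored function; the quality-class boundaries are made-up values in seconds, not PyLoT defaults:

errorsP = [0.01, 0.02, 0.04, 0.08]
errorsS = [0.04, 0.08, 0.16, 0.32]
listP, listS = getQualitiesfromxml('/path/to/event/folder', errorsP, errorsS,
                                   plotflag=1)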
@ -4,12 +4,11 @@
import glob import glob
import os import os
import subprocess import subprocess
from obspy import read_events from obspy import read_events
from pylot.core.io.phases import writephases from pylot.core.io.phases import writephases
from pylot.core.util.gui import which
from pylot.core.util.utils import getPatternLine, runProgram from pylot.core.util.utils import getPatternLine, runProgram
from pylot.core.util.gui import which
from pylot.core.util.version import get_git_version as _getVersionString from pylot.core.util.version import get_git_version as _getVersionString
__version__ = _getVersionString() __version__ = _getVersionString()
@ -82,8 +81,9 @@ def locate(fnin, parameter=None):
:param fnin: external program name :param fnin: external program name
:return: None :return: None
""" """
exe_path = os.path.join(parameter['nllocbin'], 'NLLoc')
if not os.path.isfile(exe_path): exe_path = which('NLLoc', parameter)
if exe_path is None:
raise NLLocError('NonLinLoc executable not found; check your ' raise NLLocError('NonLinLoc executable not found; check your '
'environment variables') 'environment variables')
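The develop side checks for a concrete file below nllocbin instead of searching the PATH; reduced to its core (the bin directory is invented):

import os

nllocbin = '/opt/nlloc/bin'  # hypothetical value of parameter['nllocbin']
exe_path = os.path.join(nllocbin, 'NLLoc')
if not os.path.isfile(exe_path):
    raise RuntimeError('NonLinLoc executable not found; check your '
                       'environment variables')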
@ -9,21 +9,21 @@ function conglomerate utils.
:author: MAGS2 EP3 working group / Ludger Kueperkoch :author: MAGS2 EP3 working group / Ludger Kueperkoch
""" """
import copy import copy
import traceback
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import numpy as np import numpy as np
from obspy import Trace import traceback
from obspy.taup import TauPyModel from obspy.taup import TauPyModel
from pylot.core.pick.charfuns import CharacteristicFunction from pylot.core.pick.charfuns import CharacteristicFunction
from pylot.core.pick.charfuns import HOScf, AICcf, ARZcf, ARHcf, AR3Ccf from pylot.core.pick.charfuns import HOScf, AICcf, ARZcf, ARHcf, AR3Ccf
from pylot.core.pick.picker import AICPicker, PragPicker from pylot.core.pick.picker import AICPicker, PragPicker
from pylot.core.pick.utils import checksignallength, checkZ4S, earllatepicker, \ from pylot.core.pick.utils import checksignallength, checkZ4S, earllatepicker, \
getSNR, fmpicker, checkPonsets, wadaticheck, get_quality_class, PickingFailedException, MissingTraceException getSNR, fmpicker, checkPonsets, wadaticheck, get_pickparams, get_quality_class
from pylot.core.util.utils import getPatternLine, gen_Pool, \ from pylot.core.util.utils import getPatternLine, gen_Pool,\
get_bool, identifyPhaseID, get_none, correct_iplot get_Bool, identifyPhaseID, get_None, correct_iplot
from obspy.taup import TauPyModel
from obspy import Trace
def autopickevent(data, param, iplot=0, fig_dict=None, fig_dict_wadatijack=None, ncores=0, metadata=None, origin=None): def autopickevent(data, param, iplot=0, fig_dict=None, fig_dict_wadatijack=None, ncores=0, metadata=None, origin=None):
""" """
@ -232,6 +232,20 @@ class PickingContainer:
self.Sflag = 0 self.Sflag = 0
class MissingTraceException(ValueError):
"""
Used to indicate missing traces in a obspy.core.stream.Stream object
"""
pass
class PickingFailedException(Exception):
"""
Raised when picking fails due to missing values etc.
"""
pass
class AutopickStation(object): class AutopickStation(object):
def __init__(self, wfstream, pickparam, verbose, iplot=0, fig_dict=None, metadata=None, origin=None): def __init__(self, wfstream, pickparam, verbose, iplot=0, fig_dict=None, metadata=None, origin=None):
@@ -258,14 +272,10 @@ class AutopickStation(object):
         self.pickparams = copy.deepcopy(pickparam)
         self.verbose = verbose
         self.iplot = correct_iplot(iplot)
-        self.fig_dict = get_none(fig_dict)
+        self.fig_dict = get_None(fig_dict)
         self.metadata = metadata
         self.origin = origin
-        # initialize TauPy pick estimates
-        self.estFirstP = None
-        self.estFirstS = None
         # initialize picking results
         self.p_results = PickingResults()
         self.s_results = PickingResults()
@@ -327,8 +337,7 @@ class AutopickStation(object):
         for key in self.channelorder:
             waveform_data[key] = self.wfstream.select(component=key)  # try ZNE first
             if len(waveform_data[key]) == 0:
-                waveform_data[key] = self.wfstream.select(
-                    component=str(self.channelorder[key]))  # use 123 as second option
+                waveform_data[key] = self.wfstream.select(component=str(self.channelorder[key]))  # use 123 as second option
         return waveform_data['Z'], waveform_data['N'], waveform_data['E']

     def get_traces_from_streams(self):
@@ -419,7 +428,7 @@ class AutopickStation(object):
             if station_coords is None:
                 exit_taupy()
                 raise AttributeError('Warning: Could not find station in metadata')
-            # TODO: raise when metadata.get_coordinates returns None
+            # TODO raise when metadata.get_coordinates returns None
             source_origin = origin[0]
             model = TauPyModel(taup_model)
             taup_phases = self.pickparams['taup_phases']
@@ -447,29 +456,31 @@ class AutopickStation(object):
             for arr in arrivals:
                 phases[identifyPhaseID(arr.phase.name)].append(arr)
             # get first P and S onsets from arrivals list
-            arrival_time_p = 0
-            arrival_time_s = 0
+            estFirstP = 0
+            estFirstS = 0
             if len(phases['P']) > 0:
-                arrP, arrival_time_p = min([(arr, arr.time) for arr in phases['P']], key=lambda t: t[1])
+                arrP, estFirstP = min([(arr, arr.time) for arr in phases['P']], key=lambda t: t[1])
             if len(phases['S']) > 0:
-                arrS, arrival_time_s = min([(arr, arr.time) for arr in phases['S']], key=lambda t: t[1])
+                arrS, estFirstS = min([(arr, arr.time) for arr in phases['S']], key=lambda t: t[1])
             print('autopick: estimated first arrivals for P: {} s, S:{} s after event'
-                  ' origin time using TauPy'.format(arrival_time_p, arrival_time_s))
-            return arrival_time_p, arrival_time_s
+                  ' origin time using TauPy'.format(estFirstP, estFirstS))
+            return estFirstP, estFirstS

         def exit_taupy():
             """If taupy failed to calculate theoretical starttimes, picking continues.
             For this a clean exit is required, since the P starttime is no longer relative to the theoretic onset but
             to the vertical trace starttime, eg. it can't be < 0."""
-            # TODO here the pickparams is modified, instead of a copy
             if self.pickparams["pstart"] < 0:
+                # TODO here the pickparams is modified, instead of a copy
                 self.pickparams["pstart"] = 0
-            if self.pickparams["sstart"] < 0:
-                self.pickparams["sstart"] = 0

-        if get_bool(self.pickparams["use_taup"]) is False:
+        if self.pickparams["use_taup"] is False or not self.origin or not self.metadata:
             # correct user mistake where a relative cuttime is selected (pstart < 0) but use of taupy is disabled/ has
             # not the required parameters
-            if not self.origin:
-                print('Requested use_taup but no origin given. Exit taupy.')
-            if not self.metadata:
-                print('Requested use_taup but no metadata given. Exit taupy.')
             exit_taupy()
             return
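
The estimate above is plain obspy.taup: build a TauPyModel from the configured velocity model, group the arrivals into P- and S-type phases, and keep the earliest travel time of each group. A standalone sketch with placeholder depth and distance (the values and the startswith grouping, which stands in for PyLoT's identifyPhaseID, are illustrative):

from obspy.taup import TauPyModel

model = TauPyModel('iasp91')  # stands in for pickparams['taup_model']
arrivals = model.get_travel_times(source_depth_in_km=10.0,
                                  distance_in_degree=2.5,
                                  phase_list=['P', 'p', 'S', 's'])
est_first_p = min((a.time for a in arrivals if a.phase.name.upper().startswith('P')), default=None)
est_first_s = min((a.time for a in arrivals if a.phase.name.upper().startswith('S')), default=None)
print(est_first_p, est_first_s)  # seconds after origin time, like estFirstP/estFirstS
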
@@ -481,34 +492,16 @@ class AutopickStation(object):
             raise AttributeError('No source origins given!')

         arrivals = create_arrivals(self.metadata, self.origin, self.pickparams["taup_model"])
-        arrival_P, arrival_S = first_PS_onsets(arrivals)
+        estFirstP, estFirstS = first_PS_onsets(arrivals)

-        self.estFirstP = (self.origin[0].time + arrival_P) - self.ztrace.stats.starttime
         # modifiy pstart and pstop relative to estimated first P arrival (relative to station time axis)
-        self.pickparams["pstart"] += self.estFirstP
-        self.pickparams["pstop"] += self.estFirstP
+        self.pickparams["pstart"] += (self.origin[0].time + estFirstP) - self.ztrace.stats.starttime
+        self.pickparams["pstop"] += (self.origin[0].time + estFirstP) - self.ztrace.stats.starttime
         print('autopick: CF calculation times respectively:'
               ' pstart: {} s, pstop: {} s'.format(self.pickparams["pstart"], self.pickparams["pstop"]))
         # make sure pstart and pstop are inside the starttime/endtime of vertical trace
         self.pickparams["pstart"] = max(self.pickparams["pstart"], 0)
         self.pickparams["pstop"] = min(self.pickparams["pstop"], len(self.ztrace) * self.ztrace.stats.delta)
-
-        if self.horizontal_traces_exist():
-            # for the two horizontal components take earliest and latest time to make sure that the s onset is not clipped
-            # if start and endtime of horizontal traces differ, the s windowsize will automatically increase
-            trace_s_start = min([self.etrace.stats.starttime, self.ntrace.stats.starttime])
-            self.estFirstS = (self.origin[0].time + arrival_S) - trace_s_start
-            # modifiy sstart and sstop relative to estimated first S arrival (relative to station time axis)
-            self.pickparams["sstart"] += self.estFirstS
-            self.pickparams["sstop"] += self.estFirstS
-            print('autopick: CF calculation times respectively:'
-                  ' sstart: {} s, sstop: {} s'.format(self.pickparams["sstart"], self.pickparams["sstop"]))
-            # make sure pstart and pstop are inside the starttime/endtime of horizontal traces
-            self.pickparams["sstart"] = max(self.pickparams["sstart"], 0)
-            self.pickparams["sstop"] = min(self.pickparams["sstop"], len(self.ntrace) * self.ntrace.stats.delta,
-                                           len(self.etrace) * self.etrace.stats.delta)

     def autopickstation(self):
         """
         Main function of autopickstation, which calculates P and S picks and returns them in a dictionary.
@@ -518,17 +511,6 @@ class AutopickStation(object):
         station's value is the station name on which the picks were calculated.
         :rtype: dict
         """
-        if get_bool(self.pickparams['use_taup']) is True and self.origin is not None:
-            try:
-                # modify pstart, pstop, sstart, sstop to be around theoretical onset if taupy should be used,
-                # else do nothing
-                self.modify_starttimes_taupy()
-            except AttributeError as ae:
-                print(ae)
-            except MissingTraceException as mte:
-                print(mte)
-
         try:
             self.pick_p_phase()
         except MissingTraceException as mte:
@@ -536,9 +518,7 @@ class AutopickStation(object):
         except PickingFailedException as pfe:
             print(pfe)

-        if self.horizontal_traces_exist():
-            if (self.p_results.weight is not None and self.p_results.weight < 4) or \
-                    get_bool(self.pickparams.get('use_taup')):
+        if self.horizontal_traces_exist() and self.p_results.weight is not None and self.p_results.weight < 4:
             try:
                 self.pick_s_phase()
             except MissingTraceException as mte:
@@ -548,7 +528,7 @@ class AutopickStation(object):
         self.plot_pick_results()

         self.finish_picking()
-        return [{'P': self.p_results, 'S': self.s_results}, self.ztrace.stats.station]
+        return [{'P': self.p_results, 'S':self.s_results}, self.ztrace.stats.station]

     def finish_picking(self):
@@ -612,20 +592,12 @@ class AutopickStation(object):
             plt_flag = 0
         fig._tight = True
         ax1 = fig.add_subplot(311)
-        tdata = np.linspace(start=0, stop=self.ztrace.stats.endtime - self.ztrace.stats.starttime,
-                            num=self.ztrace.stats.npts)
+        tdata = np.linspace(start=0, stop=self.ztrace.stats.endtime-self.ztrace.stats.starttime, num=self.ztrace.stats.npts)
         # plot tapered trace filtered with bpz2 filter settings
-        ax1.plot(tdata, self.tr_filt_z_bpz2.data / max(self.tr_filt_z_bpz2.data), color=linecolor, linewidth=0.7,
-                 label='Data')
-        # plot pickwindows for P
-        pstart, pstop = self.pickparams['pstart'], self.pickparams['pstop']
-        if pstart is not None and pstop is not None:
-            ax1.axvspan(pstart, pstop, color='r', alpha=0.1, zorder=0, label='P window')
-        if self.estFirstP is not None:
-            ax1.axvline(self.estFirstP, ls='dashed', color='r', alpha=0.4, label='TauPy estimate')
+        ax1.plot(tdata, self.tr_filt_z_bpz2.data/max(self.tr_filt_z_bpz2.data), color=linecolor, linewidth=0.7, label='Data')
         if self.p_results.weight < 4:
             # plot CF of initial onset (HOScf or ARZcf)
-            ax1.plot(self.cf1.getTimeArray(), self.cf1.getCF() / max(self.cf1.getCF()), 'b', label='CF1')
+            ax1.plot(self.cf1.getTimeArray(), self.cf1.getCF()/max(self.cf1.getCF()), 'b', label='CF1')
         if self.p_data.p_aic_plot_flag == 1:
             aicpick = self.p_data.aicpick
             refPpick = self.p_data.refPpick
@@ -660,44 +632,38 @@ class AutopickStation(object):
         ax1.set_ylim([-1.5, 1.5])
         ax1.set_ylabel('Normalized Counts')

-        if self.horizontal_traces_exist():# and self.s_data.Sflag == 1:
+        if self.horizontal_traces_exist() and self.s_data.Sflag == 1:
             # plot E trace
             ax2 = fig.add_subplot(3, 1, 2, sharex=ax1)
-            th1data = np.linspace(0, self.etrace.stats.endtime - self.etrace.stats.starttime,
-                                  self.etrace.stats.npts)
+            th1data = np.linspace(0, self.etrace.stats.endtime-self.etrace.stats.starttime, self.etrace.stats.npts)
             # plot filtered and tapered waveform
-            ax2.plot(th1data, self.etrace.data / max(self.etrace.data), color=linecolor, linewidth=0.7,
-                     label='Data')
+            ax2.plot(th1data, self.etrace.data / max(self.etrace.data), color=linecolor, linewidth=0.7, label='Data')
             if self.p_results.weight < 4:
                 # plot initial CF (ARHcf or AR3Ccf)
-                ax2.plot(self.arhcf1.getTimeArray(), self.arhcf1.getCF() / max(self.arhcf1.getCF()), 'b',
-                         label='CF1')
+                ax2.plot(self.arhcf1.getTimeArray(), self.arhcf1.getCF() / max(self.arhcf1.getCF()), 'b', label='CF1')
                 if self.s_data.aicSflag == 1 and self.s_results.weight <= 4:
                     aicarhpick = self.aicarhpick
                     refSpick = self.refSpick
                     # plot second cf, used for determing precise onset (ARHcf or AR3Ccf)
-                    ax2.plot(self.arhcf2.getTimeArray(), self.arhcf2.getCF() / max(self.arhcf2.getCF()), 'm',
-                             label='CF2')
+                    ax2.plot(self.arhcf2.getTimeArray(), self.arhcf2.getCF() / max(self.arhcf2.getCF()), 'm', label='CF2')
                     # plot preliminary onset time, calculated from CF1
                     ax2.plot([aicarhpick.getpick(), aicarhpick.getpick()], [-1, 1], 'g', label='Initial S Onset')
                     ax2.plot([aicarhpick.getpick() - 0.5, aicarhpick.getpick() + 0.5], [1, 1], 'g')
                     ax2.plot([aicarhpick.getpick() - 0.5, aicarhpick.getpick() + 0.5], [-1, -1], 'g')
                     # plot precise onset time, calculated from CF2
-                    ax2.plot([refSpick.getpick(), refSpick.getpick()], [-1.3, 1.3], 'g', linewidth=2,
-                             label='Final S Pick')
+                    ax2.plot([refSpick.getpick(), refSpick.getpick()], [-1.3, 1.3], 'g', linewidth=2, label='Final S Pick')
                     ax2.plot([refSpick.getpick() - 0.5, refSpick.getpick() + 0.5], [1.3, 1.3], 'g', linewidth=2)
                     ax2.plot([refSpick.getpick() - 0.5, refSpick.getpick() + 0.5], [-1.3, -1.3], 'g', linewidth=2)
                     ax2.plot([self.s_results.lpp, self.s_results.lpp], [-1.1, 1.1], 'g--', label='lpp')
                     ax2.plot([self.s_results.epp, self.s_results.epp], [-1.1, 1.1], 'g--', label='epp')
                     title = '{channel}, S weight={sweight}, SNR={snr:7.2}, SNR[dB]={snrdb:7.2}'
-                    ax2.set_title(title.format(channel=str(self.etrace.stats.channel),
-                                               sweight=str(self.s_results.weight),
-                                               snr=str(self.s_results.snr),
-                                               snrdb=str(self.s_results.snrdb)))
+                    ax2.set_title(title.format(channel=self.etrace.stats.channel,
+                                               sweight=self.s_results.weight,
+                                               snr=self.s_results.snr,
+                                               snrdb=self.s_results.snrdb))
                 else:
                     title = '{channel}, S weight={sweight}, SNR=None, SNR[dB]=None'
-                    ax2.set_title(title.format(channel=str(self.etrace.stats.channel),
-                                               sweight=str(self.s_results.weight)))
+                    ax2.set_title(title.format(channel=self.etrace.stats.channel, sweight=self.s_results.weight))
             ax2.legend(loc=1)
             ax2.set_yticks([])
             ax2.set_ylim([-1.5, 1.5])
@@ -705,19 +671,15 @@ class AutopickStation(object):
             # plot N trace
             ax3 = fig.add_subplot(3, 1, 3, sharex=ax1)
-            th2data = np.linspace(0, self.ntrace.stats.endtime - self.ntrace.stats.starttime,
-                                  self.ntrace.stats.npts)
+            th2data= np.linspace(0, self.ntrace.stats.endtime-self.ntrace.stats.starttime, self.ntrace.stats.npts)
             # plot trace
-            ax3.plot(th2data, self.ntrace.data / max(self.ntrace.data), color=linecolor, linewidth=0.7,
-                     label='Data')
+            ax3.plot(th2data, self.ntrace.data / max(self.ntrace.data), color=linecolor, linewidth=0.7, label='Data')
             if self.p_results.weight < 4:
-                p22, = ax3.plot(self.arhcf1.getTimeArray(), self.arhcf1.getCF() / max(self.arhcf1.getCF()), 'b',
-                                label='CF1')
+                p22, = ax3.plot(self.arhcf1.getTimeArray(), self.arhcf1.getCF() / max(self.arhcf1.getCF()), 'b', label='CF1')
                 if self.s_data.aicSflag == 1:
                     aicarhpick = self.aicarhpick
                     refSpick = self.refSpick
-                    ax3.plot(self.arhcf2.getTimeArray(), self.arhcf2.getCF() / max(self.arhcf2.getCF()), 'm',
-                             label='CF2')
+                    ax3.plot(self.arhcf2.getTimeArray(), self.arhcf2.getCF() / max(self.arhcf2.getCF()), 'm', label='CF2')
                     ax3.plot([aicarhpick.getpick(), aicarhpick.getpick()], [-1, 1], 'g', label='Initial S Onset')
                     ax3.plot([aicarhpick.getpick() - 0.5, aicarhpick.getpick() + 0.5], [1, 1], 'g')
                     ax3.plot([aicarhpick.getpick() - 0.5, aicarhpick.getpick() + 0.5], [-1, -1], 'g')
@@ -727,15 +689,6 @@ class AutopickStation(object):
                     ax3.plot([refSpick.getpick() - 0.5, refSpick.getpick() + 0.5], [-1.3, -1.3], 'g', linewidth=2)
                     ax3.plot([self.s_results.lpp, self.s_results.lpp], [-1.1, 1.1], 'g--', label='lpp')
                     ax3.plot([self.s_results.epp, self.s_results.epp], [-1.1, 1.1], 'g--', label='epp')
-            # plot pickwindows for S
-            sstart, sstop = self.pickparams['sstart'], self.pickparams['sstop']
-            if sstart is not None and sstop is not None:
-                for axis in [ax2, ax3]:
-                    axis.axvspan(sstart, sstop, color='b', alpha=0.1, zorder=0, label='S window')
-                    if self.estFirstS is not None:
-                        axis.axvline(self.estFirstS, ls='dashed', color='b', alpha=0.4, label='TauPy estimate')
             ax3.legend(loc=1)
             ax3.set_yticks([])
             ax3.set_ylim([-1.5, 1.5])
@@ -767,8 +720,7 @@ class AutopickStation(object):
         if aicpick.getpick() is None:
             msg = "Bad initial (AIC) P-pick, skipping this onset!\nAIC-SNR={0}, AIC-Slope={1}counts/s\n " \
                   "(min. AIC-SNR={2}, min. AIC-Slope={3}counts/s)"
-            msg = msg.format(aicpick.getSNR(), aicpick.getSlope(), self.pickparams["minAICPSNR"],
-                             self.pickparams["minAICPslope"])
+            msg = msg.format(aicpick.getSNR(), aicpick.getSlope(), self.pickparams["minAICPSNR"], self.pickparams["minAICPslope"])
             self.vprint(msg)
             return 0
         # Quality check initial pick with minimum signal length
@@ -778,16 +730,14 @@ class AutopickStation(object):
         if len(self.nstream) == 0 or len(self.estream) == 0:
             msg = 'One or more horizontal component(s) missing!\n' \
                   'Signal length only checked on vertical component!\n' \
-                  'Decreasing minsiglengh from {0} to {1}' \
-                .format(minsiglength, minsiglength / 2)
+                  'Decreasing minsiglengh from {0} to {1}'\
+                .format(minsiglength, minsiglength / 2)
             self.vprint(msg)
             minsiglength = minsiglength / 2
         else:
             # filter, taper other traces as well since signal length is compared on all traces
-            trH1_filt, _ = self.prepare_wfstream(self.estream, freqmin=self.pickparams["bph1"][0],
-                                                 freqmax=self.pickparams["bph1"][1])
-            trH2_filt, _ = self.prepare_wfstream(self.nstream, freqmin=self.pickparams["bph1"][0],
-                                                 freqmax=self.pickparams["bph1"][1])
+            trH1_filt, _ = self.prepare_wfstream(self.estream, freqmin=self.pickparams["bph1"][0], freqmax=self.pickparams["bph1"][1])
+            trH2_filt, _ = self.prepare_wfstream(self.nstream, freqmin=self.pickparams["bph1"][0], freqmax=self.pickparams["bph1"][1])
             zne += trH1_filt
             zne += trH2_filt
             minsiglength = minsiglength
@@ -834,6 +784,15 @@ class AutopickStation(object):
         # save filtered trace in instance for later plotting
         self.tr_filt_z_bpz2 = tr_filt

+        if get_Bool(self.pickparams['use_taup']) is True and self.origin is not None:
+            try:
+                # modify pstart, pstop to be around theoretical onset if taupy should be used, else does nothing
+                self.modify_starttimes_taupy()
+            except AttributeError as ae:
+                print(ae)
+            except MissingTraceException as mte:
+                print(mte)
+
         Lc = self.pickparams['pstop'] - self.pickparams['pstart']
         Lwf = self.ztrace.stats.endtime - self.ztrace.stats.starttime
@@ -858,34 +817,24 @@ class AutopickStation(object):
             self.cf1 = None
         assert isinstance(self.cf1, CharacteristicFunction), 'cf1 is not set correctly: maybe the algorithm name ({})' \
                                                              ' is corrupted'.format(self.pickparams["algoP"])
-        # get the original waveform stream from first CF class cut to identical length as CF for plotting
-        cut_ogstream = self.cf1.getDataArray(self.cf1.getCut())
-
-        # MP: Rename to cf_stream for further use of z_copy and to prevent chaos when z_copy suddenly becomes a cf
-        # stream and later again a waveform stream
-        cf_stream = z_copy.copy()
-        cf_stream[0].data = self.cf1.getCF()
-
         # calculate AIC cf from first cf (either HOS or ARZ)
-        aiccf = AICcf(cf_stream, cuttimes)
+        z_copy[0].data = self.cf1.getCF()
+        aiccf = AICcf(z_copy, cuttimes)
         # get preliminary onset time from AIC-CF
         self.set_current_figure('aicFig')
         aicpick = AICPicker(aiccf, self.pickparams["tsnrz"], self.pickparams["pickwinP"], self.iplot,
-                            Tsmooth=self.pickparams["aictsmooth"], fig=self.current_figure,
-                            linecolor=self.current_linecolor, ogstream=cut_ogstream)
+                            Tsmooth=self.pickparams["aictsmooth"], fig=self.current_figure, linecolor=self.current_linecolor)
         # save aicpick for plotting later
         self.p_data.aicpick = aicpick
         # add pstart and pstop to aic plot
         if self.current_figure:
             # TODO remove plotting from picking, make own plot function
             for ax in self.current_figure.axes:
-                ax.vlines(self.pickparams["pstart"], ax.get_ylim()[0], ax.get_ylim()[1], color='c', linestyles='dashed',
-                          label='P start')
-                ax.vlines(self.pickparams["pstop"], ax.get_ylim()[0], ax.get_ylim()[1], color='c', linestyles='dashed',
-                          label='P stop')
+                ax.vlines(self.pickparams["pstart"], ax.get_ylim()[0], ax.get_ylim()[1], color='c', linestyles='dashed', label='P start')
+                ax.vlines(self.pickparams["pstop"], ax.get_ylim()[0], ax.get_ylim()[1], color='c', linestyles='dashed', label='P stop')
                 ax.legend(loc=1)

-        Pflag = self._pick_p_quality_control(aicpick, cf_stream, tr_filt)
+        Pflag = self._pick_p_quality_control(aicpick, z_copy, tr_filt)
         # go on with processing if AIC onset passes quality control
         slope = aicpick.getSlope()
         if not slope: slope = 0
@@ -896,8 +845,7 @@ class AutopickStation(object):
             error_msg = 'AIC P onset slope to small: got {}, min {}'.format(slope, self.pickparams["minAICPslope"])
             raise PickingFailedException(error_msg)
         if aicpick.getSNR() < self.pickparams["minAICPSNR"]:
-            error_msg = 'AIC P onset SNR to small: got {}, min {}'.format(aicpick.getSNR(),
-                                                                          self.pickparams["minAICPSNR"])
+            error_msg = 'AIC P onset SNR to small: got {}, min {}'.format(aicpick.getSNR(), self.pickparams["minAICPSNR"])
             raise PickingFailedException(error_msg)

         self.p_data.p_aic_plot_flag = 1
@@ -905,8 +853,7 @@ class AutopickStation(object):
               'autopickstation: re-filtering vertical trace...'.format(aicpick.getSlope(), aicpick.getSNR())
         self.vprint(msg)
         # refilter waveform with larger bandpass
-        tr_filt, z_copy = self.prepare_wfstream(self.zstream, freqmin=self.pickparams["bpz2"][0],
-                                                freqmax=self.pickparams["bpz2"][1])
+        tr_filt, z_copy = self.prepare_wfstream(self.zstream, freqmin=self.pickparams["bpz2"][0], freqmax=self.pickparams["bpz2"][1])
         # save filtered trace in instance for later plotting
         self.tr_filt_z_bpz2 = tr_filt
         # determine new times around initial onset
@@ -921,26 +868,22 @@ class AutopickStation(object):
                                                            'corrupted'.format(self.pickparams["algoP"])
         self.set_current_figure('refPpick')
         # get refined onset time from CF2
-        refPpick = PragPicker(self.cf2, self.pickparams["tsnrz"], self.pickparams["pickwinP"], self.iplot,
-                              self.pickparams["ausP"],
-                              self.pickparams["tsmoothP"], aicpick.getpick(), self.current_figure,
-                              self.current_linecolor, ogstream=cut_ogstream)
+        refPpick = PragPicker(self.cf2, self.pickparams["tsnrz"], self.pickparams["pickwinP"], self.iplot, self.pickparams["ausP"],
+                              self.pickparams["tsmoothP"], aicpick.getpick(), self.current_figure, self.current_linecolor)
         # save PragPicker result for plotting
         self.p_data.refPpick = refPpick
         self.p_results.mpp = refPpick.getpick()
         if self.p_results.mpp is None:
             msg = 'Bad initial (AIC) P-pick, skipping this onset!\n AIC-SNR={}, AIC-Slope={}counts/s\n' \
                   '(min. AIC-SNR={}, min. AIC-Slope={}counts/s)'
-            msg.format(aicpick.getSNR(), aicpick.getSlope(), self.pickparams["minAICPSNR"],
-                       self.pickparams["minAICPslope"])
+            msg.format(aicpick.getSNR(), aicpick.getSlope(), self.pickparams["minAICPSNR"], self.pickparams["minAICPslope"])
             self.vprint(msg)
             self.s_data.Sflag = 0
             raise PickingFailedException(msg)

         # quality assessment, get earliest/latest pick and symmetrized uncertainty
-        # todo quality assessment in own function
+        #todo quality assessment in own function
         self.set_current_figure('el_Ppick')
-        elpicker_results = earllatepicker(z_copy, self.pickparams["nfacP"], self.pickparams["tsnrz"],
-                                          self.p_results.mpp,
+        elpicker_results = earllatepicker(z_copy, self.pickparams["nfacP"], self.pickparams["tsnrz"], self.p_results.mpp,
                                           self.iplot, fig=self.current_figure, linecolor=self.current_linecolor)
         self.p_results.epp, self.p_results.lpp, self.p_results.spe = elpicker_results
         snr_results = getSNR(z_copy, self.pickparams["tsnrz"], self.p_results.mpp)
@@ -948,11 +891,10 @@ class AutopickStation(object):
         # weight P-onset using symmetric error
         self.p_results.weight = get_quality_class(self.p_results.spe, self.pickparams["timeerrorsP"])

-        if self.p_results.weight <= self.pickparams["minfmweight"] and self.p_results.snr >= self.pickparams[
-                "minFMSNR"]:
+        if self.p_results.weight <= self.pickparams["minfmweight"] and self.p_results.snr >= self.pickparams["minFMSNR"]:
             # if SNR is high enough, try to determine first motion of onset
             self.set_current_figure('fm_picker')
-            self.p_results.fm = fmpicker(self.zstream.copy(), z_copy, self.pickparams["fmpickwin"], self.p_results.mpp,
+            self.p_results.fm = fmpicker(self.zstream, z_copy, self.pickparams["fmpickwin"], self.p_results.mpp,
                                          self.iplot, self.current_figure, self.current_linecolor)
         msg = "autopickstation: P-weight: {}, SNR: {}, SNR[dB]: {}, Polarity: {}"
         msg = msg.format(self.p_results.weight, self.p_results.snr, self.p_results.snrdb, self.p_results.fm)
@@ -1022,7 +964,7 @@ class AutopickStation(object):
         trH1_filt, _ = self.prepare_wfstream(self.zstream, filter_freq_min, filter_freq_max)
         trH2_filt, _ = self.prepare_wfstream(self.estream, filter_freq_min, filter_freq_max)
         trH3_filt, _ = self.prepare_wfstream(self.nstream, filter_freq_min, filter_freq_max)
-        h_copy = self.hdat.copy()
+        h_copy =self. hdat.copy()
         h_copy[0].data = trH1_filt.data
         h_copy[1].data = trH2_filt.data
         h_copy[2].data = trH3_filt.data
@@ -1164,9 +1106,7 @@ class AutopickStation(object):
               ''.format(self.s_results.weight, self.s_results.snr, self.s_results.snrdb))

     def pick_s_phase(self):
-        if get_bool(self.pickparams.get('use_taup')) is True:
-            cuttimesh = (self.pickparams.get('sstart'), self.pickparams.get('sstop'))
-        else:
-            # determine time window for calculating CF after P onset
-            cuttimesh = self._calculate_cuttimes(type='S', iteration=1)
+        # determine time window for calculating CF after P onset
+        cuttimesh = self._calculate_cuttimes(type='S', iteration=1)
@@ -1176,14 +1116,10 @@ class AutopickStation(object):
         # calculate AIC cf
         haiccf = self._calculate_aic_cf_s_pick(cuttimesh)

-        # get the original waveform stream cut to identical length as CF for plotting
-        ogstream = haiccf.getDataArray(haiccf.getCut())
-
         # get preliminary onset time from AIC cf
         self.set_current_figure('aicARHfig')
         aicarhpick = AICPicker(haiccf, self.pickparams["tsnrh"], self.pickparams["pickwinS"], self.iplot,
-                               Tsmooth=self.pickparams["aictsmoothS"], fig=self.current_figure,
-                               linecolor=self.current_linecolor, ogstream=ogstream)
+                               Tsmooth=self.pickparams["aictsmoothS"], fig=self.current_figure, linecolor=self.current_linecolor)
         # save pick for later plotting
         self.aicarhpick = aicarhpick
@@ -1194,10 +1130,8 @@ class AutopickStation(object):
         # get refined onset time from CF2
         self.set_current_figure('refSpick')
-        refSpick = PragPicker(arhcf2, self.pickparams["tsnrh"], self.pickparams["pickwinS"], self.iplot,
-                              self.pickparams["ausS"],
-                              self.pickparams["tsmoothS"], aicarhpick.getpick(), self.current_figure,
-                              self.current_linecolor)
+        refSpick = PragPicker(arhcf2, self.pickparams["tsnrh"], self.pickparams["pickwinS"], self.iplot, self.pickparams["ausS"],
+                              self.pickparams["tsmoothS"], aicarhpick.getpick(), self.current_figure, self.current_linecolor)
         # save refSpick for later plotitng
         self.refSpick = refSpick
         self.s_results.mpp = refSpick.getpick()
@@ -1221,6 +1155,7 @@ class AutopickStation(object):
         self.current_linecolor = plot_style['linecolor']['rgba_mpl']


 def autopickstation(wfstream, pickparam, verbose=False, iplot=0, fig_dict=None, metadata=None, origin=None):
     """
     Main function to calculate picks for the station.

View File

@@ -16,16 +16,11 @@ autoregressive prediction: application ot local and regional distances, Geophys.
 :author: MAGS2 EP3 working group
 """

 import numpy as np
-try:
-    from scipy.signal import tukey
-except ImportError:
-    from scipy.signal.windows import tukey
+from scipy import signal
 from obspy.core import Stream
-
-from pylot.core.pick.utils import PickingFailedException


 class CharacteristicFunction(object):
     """
@@ -60,7 +55,7 @@ class CharacteristicFunction(object):
         self.setOrder(order)
         self.setFnoise(fnoise)
         self.setARdetStep(t2)
-        self.calcCF()
+        self.calcCF(self.getDataArray())
         self.arpara = np.array([])
         self.xpred = np.array([])
@@ -155,7 +150,7 @@ class CharacteristicFunction(object):
         if self.cut[0] == 0 and self.cut[1] == 0:
             start = 0
             stop = len(self.orig_data[0])
-        elif self.cut[0] == 0 and self.cut[1] != 0:
+        elif self.cut[0] == 0 and self.cut[1] is not 0:
             start = 0
             stop = self.cut[1] / self.dt
         else:
@@ -172,7 +167,7 @@ class CharacteristicFunction(object):
         if self.cut[0] == 0 and self.cut[1] == 0:
             start = 0
             stop = min([len(self.orig_data[0]), len(self.orig_data[1])])
-        elif self.cut[0] == 0 and self.cut[1] != 0:
+        elif self.cut[0] == 0 and self.cut[1] is not 0:
             start = 0
             stop = min([self.cut[1] / self.dt, len(self.orig_data[0]),
                         len(self.orig_data[1])])
@@ -192,7 +187,7 @@ class CharacteristicFunction(object):
             start = 0
             stop = min([self.cut[1] / self.dt, len(self.orig_data[0]),
                         len(self.orig_data[1]), len(self.orig_data[2])])
-        elif self.cut[0] == 0 and self.cut[1] != 0:
+        elif self.cut[0] == 0 and self.cut[1] is not 0:
             start = 0
             stop = self.cut[1] / self.dt
         else:
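
The `!=` on the develop side is the substantive fix in these three hunks: `is not 0` compares object identity rather than value, so a float cut time of 0.0 takes the wrong branch (and newer CPython emits a SyntaxWarning for `is` with a literal). A two-line demonstration:

cut_end = 0.0
print(cut_end is not 0)  # True: identity check passes although the value is zero
print(cut_end != 0)      # False: value comparison, the intended test
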
@@ -212,15 +207,17 @@ class CharacteristicFunction(object):
             data = self.orig_data.copy()
         return data

-    def calcCF(self):
-        pass
+    def calcCF(self, data=None):
+        self.cf = data


 class AICcf(CharacteristicFunction):

-    def calcCF(self):
+    def calcCF(self, data):
         """
         Function to calculate the Akaike Information Criterion (AIC) after Maeda (1985).
+        :param data: data, time series (whether seismogram or CF)
+        :type data: tuple
         :return: AIC function
         :rtype:
         """
@@ -229,7 +226,7 @@ class AICcf(CharacteristicFunction):
         ind = np.where(~np.isnan(xnp))[0]
         if ind.size:
             xnp[:ind[0]] = xnp[ind[0]]
-        xnp = tukey(len(xnp), alpha=0.05) * xnp
+        xnp = signal.tukey(len(xnp), alpha=0.05) * xnp
         xnp = xnp - np.mean(xnp)
         datlen = len(xnp)
         k = np.arange(1, datlen)
@@ -258,11 +255,13 @@ class HOScf(CharacteristicFunction):
         """
         super(HOScf, self).__init__(data, cut, pickparams["tlta"], pickparams["hosorder"])

-    def calcCF(self):
+    def calcCF(self, data):
         """
         Function to calculate skewness (statistics of order 3) or kurtosis
         (statistics of order 4), using one long moving window, as published
-        in Kueperkoch et al. (2010), or order 2, i.e. STA/LTA.
+        in Kueperkoch et al. (2010).
+        :param data: data, time series (whether seismogram or CF)
+        :type data: tuple
         :return: HOS cf
         :rtype:
         """
@@ -314,13 +313,14 @@ class ARZcf(CharacteristicFunction):

 class ARZcf(CharacteristicFunction):
     def __init__(self, data, cut, t1, t2, pickparams):
-        super(ARZcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Parorder"],
-                                    fnoise=pickparams["addnoise"])
+        super(ARZcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Parorder"], fnoise=pickparams["addnoise"])

-    def calcCF(self):
+    def calcCF(self, data):
         """
         function used to calculate the AR prediction error from a single vertical trace. Can be used to pick
         P onsets.
+        :param data:
+        :type data: ~obspy.core.stream.Stream
         :return: ARZ cf
         :rtype:
         """
@@ -448,15 +448,16 @@ class ARHcf(CharacteristicFunction):

 class ARHcf(CharacteristicFunction):
     def __init__(self, data, cut, t1, t2, pickparams):
-        super(ARHcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"],
-                                    fnoise=pickparams["addnoise"])
+        super(ARHcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"], fnoise=pickparams["addnoise"])

-    def calcCF(self):
+    def calcCF(self, data):
         """
         Function to calculate a characteristic function using autoregressive modelling of the waveform of
         both horizontal traces.
         The waveform is predicted in a moving time window using the calculated AR parameters. The difference
         between the predicted and the actual waveform servers as a characteristic function.
+        :param data: wavefor stream
+        :type data: ~obspy.core.stream.Stream
         :return: ARH cf
         :rtype:
         """
@@ -464,9 +465,6 @@ class ARHcf(CharacteristicFunction):
         print('Calculating AR-prediction error from both horizontal traces ...')

         xnp = self.getDataArray(self.getCut())
-        if len(xnp[0]) == 0:
-            raise PickingFailedException('calcCF: Found empty data trace for cut times. Return')
-
         n0 = np.isnan(xnp[0].data)
         if len(n0) > 1:
             xnp[0].data[n0] = 0
@@ -602,15 +600,16 @@ class AR3Ccf(CharacteristicFunction):

 class AR3Ccf(CharacteristicFunction):
     def __init__(self, data, cut, t1, t2, pickparams):
-        super(AR3Ccf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"],
-                                     fnoise=pickparams["addnoise"])
+        super(AR3Ccf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"], fnoise=pickparams["addnoise"])

-    def calcCF(self):
+    def calcCF(self, data):
         """
         Function to calculate a characteristic function using autoregressive modelling of the waveform of
         all three traces.
         The waveform is predicted in a moving time window using the calculated AR parameters. The difference
         between the predicted and the actual waveform servers as a characteristic function
+        :param data: stream holding all three traces
+        :type data: ~obspy.core.stream.Stream
         :return: AR3C cf
         :rtype:
         """

View File

@@ -2,11 +2,10 @@
 # -*- coding: utf-8 -*-

 import copy
-import operator
-import os

 import matplotlib.pyplot as plt
 import numpy as np
+import operator
+import os

 from obspy.core import AttribDict
 from pylot.core.util.pdf import ProbabilityDensityFunction
@@ -402,8 +401,6 @@ class PDFstatistics(object):
     Takes a path as argument.
     """

-    # TODO: change root to datapath
-
     def __init__(self, directory):
         """Initiates some values needed when dealing with pdfs later"""
         self._rootdir = directory

View File

@@ -19,10 +19,9 @@ calculated after Diehl & Kissling (2009).
 :author: MAGS2 EP3 working group / Ludger Kueperkoch
 """

-import warnings
-
 import matplotlib.pyplot as plt
 import numpy as np
+import warnings
 from scipy.signal import argrelmax, argrelmin

 from pylot.core.pick.charfuns import CharacteristicFunction
@@ -37,8 +36,7 @@ class AutoPicker(object):
     warnings.simplefilter('ignore')

-    def __init__(self, cf, TSNR, PickWindow, iplot=0, aus=None, Tsmooth=None, Pick1=None,
-                 fig=None, linecolor='k', ogstream=None):
+    def __init__(self, cf, TSNR, PickWindow, iplot=0, aus=None, Tsmooth=None, Pick1=None, fig=None, linecolor='k'):
         """
         Create AutoPicker object
         :param cf: characteristic function, on which the picking algorithm is applied
@@ -60,15 +58,12 @@ class AutoPicker(object):
         :type fig: `~matplotlib.figure.Figure`
         :param linecolor: matplotlib line color string
         :type linecolor: str
-        :param ogstream: original stream (waveform), e.g. for plotting purposes
-        :type ogstream: `~obspy.core.stream.Stream`
         """
         assert isinstance(cf, CharacteristicFunction), "%s is not a CharacteristicFunction object" % str(cf)
         self._linecolor = linecolor
         self._pickcolor_p = 'b'
         self.cf = cf.getCF()
-        self.ogstream = ogstream
         self.Tcf = cf.getTimeArray()
         self.Data = cf.getXCF()
         self.dt = cf.getIncrement()
@@ -177,14 +172,12 @@ class AICPicker(AutoPicker):
         nn = np.isnan(self.cf)
         if len(nn) > 1:
             self.cf[nn] = 0
-        # taper AIC-CF to get rid of side maxima
+        # taper AIC-CF to get rid off side maxima
         tap = np.hanning(len(self.cf))
         aic = tap * self.cf + max(abs(self.cf))
         # smooth AIC-CF
         ismooth = int(round(self.Tsmooth / self.dt))
-        # MP MP better start with original data than zeros if array shall be smoothed, created artificial value before
-        # when starting with i in range(1...) loop below and subtracting offset afterwards
-        aicsmooth = np.copy(aic)
+        aicsmooth = np.zeros(len(aic))
         if len(aic) < ismooth:
             print('AICPicker: Tsmooth larger than CF!')
             return
@@ -194,7 +187,7 @@ class AICPicker(AutoPicker):
                     ii1 = i - ismooth
                     aicsmooth[i] = aicsmooth[i - 1] + (aic[i] - aic[ii1]) / ismooth
                 else:
-                    aicsmooth[i] = np.mean(aic[0: i])  # MP MP created np.nan for i=1
+                    aicsmooth[i] = np.mean(aic[1: i])
         # remove offset in AIC function
         offset = abs(min(aic) - min(aicsmooth))
         aicsmooth = aicsmooth - offset
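
Seeding the smoothing array with np.copy(aic) instead of zeros, and averaging aic[0:i] instead of aic[1:i], is what the develop-side comment is about: with the old code the very first loop pass takes the mean of an empty slice. The edge case in isolation:

import numpy as np

aic = np.array([3.0, 3.1, 2.9, 10.0])
i = 1
print(np.mean(aic[1:i]))  # nan plus RuntimeWarning: empty slice (old behaviour)
print(np.mean(aic[0:i]))  # 3.0: the first sample (develop behaviour)
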
@@ -203,7 +196,7 @@ class AICPicker(AutoPicker):

         # minimum in AIC function
         icfmax = np.argmax(cf)

-        # TODO: If this shall be kept, maybe add thresh_factor to pylot parameters
+        # MP MP testing threshold
         thresh_hit = False
         thresh_factor = 0.7
         thresh = thresh_factor * cf[icfmax]
@@ -215,6 +208,7 @@ class AICPicker(AutoPicker):
                 if sample <= cf[index - 1]:
                     icfmax = index - 1
                     break
+        # MP MP ---

         # find minimum in AIC-CF front of maximum of HOS/AR-CF
         lpickwindow = int(round(self.PickWindow / self.dt))
@@ -320,7 +314,16 @@ class AICPicker(AutoPicker):
                     plt.close(fig)
                 return
             iislope = islope[0][0:imax + 1]
-            dataslope = self.Data[0].data[iislope]
+            # MP MP change slope calculation
+            # get all maxima of aicsmooth
+            iaicmaxima = argrelmax(aicsmooth)[0]
+            # get first index of maximum after pickindex (indices saved in iaicmaxima)
+            aicmax = iaicmaxima[np.where(iaicmaxima > pickindex)[0]]
+            if len(aicmax) > 0:
+                iaicmax = aicmax[0]
+            else:
+                iaicmax = -1
+            dataslope = aicsmooth[pickindex: iaicmax]
             # calculate slope as polynomal fit of order 1
             xslope = np.arange(0, len(dataslope), 1)
             try:
@@ -331,8 +334,8 @@ class AICPicker(AutoPicker):
                 else:
                     self.slope = 1 / (len(dataslope) * self.Data[0].stats.delta) * (datafit[-1] - datafit[0])
                     # normalize slope to maximum of cf to make it unit independent
-                    self.slope /= self.Data[0].data[icfmax]
-            except Exception as e:
+                    self.slope /= aicsmooth[iaicmax]
+            except ValueError as e:
                 print("AICPicker: Problems with data fitting! {}".format(e))

         else:
self.Tcf = self.Tcf[0:len(self.Tcf) - 1] self.Tcf = self.Tcf[0:len(self.Tcf) - 1]
ax1.plot(self.Tcf, cf / max(cf), color=self._linecolor, linewidth=0.7, label='(HOS-/AR-) Data') ax1.plot(self.Tcf, cf / max(cf), color=self._linecolor, linewidth=0.7, label='(HOS-/AR-) Data')
ax1.plot(self.Tcf, aicsmooth / max(aicsmooth), 'r', label='Smoothed AIC-CF') ax1.plot(self.Tcf, aicsmooth / max(aicsmooth), 'r', label='Smoothed AIC-CF')
# plot the original waveform also for evaluation of the CF and pick
if self.ogstream:
data = self.ogstream[0].data
if len(data) == len(self.Tcf):
ax1.plot(self.Tcf, 0.5 * data / max(data), 'k', label='Seismogram', alpha=0.3, zorder=0,
lw=0.5)
if self.Pick is not None: if self.Pick is not None:
ax1.plot([self.Pick, self.Pick], [-0.1, 0.5], 'b', linewidth=2, label='AIC-Pick') ax1.plot([self.Pick, self.Pick], [-0.1, 0.5], 'b', linewidth=2, label='AIC-Pick')
ax1.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime) ax1.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime)
@ -377,7 +374,7 @@ class AICPicker(AutoPicker):
label='Signal Window') label='Signal Window')
ax2.axvspan(self.Tcf[iislope[0]], self.Tcf[iislope[-1]], color='g', alpha=0.2, lw=0, ax2.axvspan(self.Tcf[iislope[0]], self.Tcf[iislope[-1]], color='g', alpha=0.2, lw=0,
label='Slope Window') label='Slope Window')
ax2.plot(self.Tcf[iislope], datafit, 'g', linewidth=2, ax2.plot(self.Tcf[pickindex: iaicmax], datafit, 'g', linewidth=2,
label='Slope') # MP MP changed temporarily! label='Slope') # MP MP changed temporarily!
if self.slope is not None: if self.slope is not None:
@ -479,7 +476,7 @@ class PragPicker(AutoPicker):
cfpick_r = 0 cfpick_r = 0
cfpick_l = 0 cfpick_l = 0
lpickwindow = int(round(self.PickWindow / self.dt)) lpickwindow = int(round(self.PickWindow / self.dt))
# for i in range(max(np.insert(ipick, 0, 2)), min([ipick1 + lpickwindow + 1, len(self.cf) - 1])): #for i in range(max(np.insert(ipick, 0, 2)), min([ipick1 + lpickwindow + 1, len(self.cf) - 1])):
# # local minimum # # local minimum
# if self.cf[i + 1] > self.cf[i] <= self.cf[i - 1]: # if self.cf[i + 1] > self.cf[i] <= self.cf[i - 1]:
# if cfsmooth[i - 1] * (1 + aus1) >= cfsmooth[i]: # if cfsmooth[i - 1] * (1 + aus1) >= cfsmooth[i]:
@ -511,9 +508,9 @@ class PragPicker(AutoPicker):
if flagpick_l > 0 and flagpick_r > 0 and cfpick_l <= 3 * cfpick_r: if flagpick_l > 0 and flagpick_r > 0 and cfpick_l <= 3 * cfpick_r:
self.Pick = pick_l self.Pick = pick_l
pickflag = 1 pickflag = 1
# elif flagpick_l > 0 and flagpick_r > 0 and cfpick_l >= cfpick_r: elif flagpick_l > 0 and flagpick_r > 0 and cfpick_l >= cfpick_r:
# self.Pick = pick_r self.Pick = pick_r
# pickflag = 1 pickflag = 1
elif flagpick_l == 0 and flagpick_r > 0 and cfpick_l >= cfpick_r: elif flagpick_l == 0 and flagpick_r > 0 and cfpick_l >= cfpick_r:
self.Pick = pick_l self.Pick = pick_l
pickflag = 1 pickflag = 1

View File

@@ -7,15 +7,13 @@
 :author: Ludger Kueperkoch, BESTEC GmbH
 """

 import warnings

 import matplotlib.pyplot as plt
 import numpy as np
-from obspy.core import Stream, UTCDateTime
 from scipy.signal import argrelmax
+from obspy.core import Stream, UTCDateTime

-from pylot.core.util.utils import get_bool, get_none, SetChannelComponents, common_range
+from pylot.core.util.utils import get_Bool, get_None, SetChannelComponents


 def earllatepicker(X, nfac, TSNR, Pick1, iplot=0, verbosity=1, fig=None, linecolor='k'):
@@ -62,8 +60,8 @@ def earllatepicker(X, nfac, TSNR, Pick1, iplot=0, verbosity=1, fig=None, linecol
     plt_flag = 0
     try:
         iplot = int(iplot)
-    except ValueError:
-        if get_bool(iplot):
+    except:
+        if get_Bool(iplot):
             iplot = 2
         else:
             iplot = 0
@@ -136,7 +134,7 @@ def earllatepicker(X, nfac, TSNR, Pick1, iplot=0, verbosity=1, fig=None, linecol
     PickError = symmetrize_error(diffti_te, diffti_tl)

     if iplot > 1:
-        if get_none(fig) is None:
+        if get_None(fig) is None:
             fig = plt.figure()  # iplot)
             plt_flag = 1
         fig._tight = True
@@ -344,7 +342,7 @@ def fmpicker(Xraw, Xfilt, pickwin, Pick, iplot=0, fig=None, linecolor='k'):
         print("fmpicker: Found polarity %s" % FM)

     if iplot > 1:
-        if get_none(fig) is None:
+        if get_None(fig) is None:
             fig = plt.figure()  # iplot)
             plt_flag = 1
         fig._tight = True
@@ -537,10 +535,9 @@ def getslopewin(Tcf, Pick, tslope):
     :rtype: `numpy.ndarray`
     """
     # TODO: fill out docstring
-    slope = np.where((Tcf <= min(Pick + tslope, Tcf[-1])) & (Tcf >= Pick))
+    slope = np.where( (Tcf <= min(Pick + tslope, Tcf[-1])) & (Tcf >= Pick) )
     return slope[0]
-

 def getResolutionWindow(snr, extent):
     """
     Produce the half of the time resolution window width from given SNR value
@@ -598,8 +595,6 @@ def select_for_phase(st, phase):
         alter_comp = compclass.getCompPosition(comp)
         alter_comp = str(alter_comp[0])
         sel_st += st.select(component=comp)
-        if len(sel_st) < 1:
-            sel_st += st.select(component="Q")
         sel_st += st.select(component=alter_comp)
     elif phase.upper() == 'S':
         comps = 'NE'
@@ -816,7 +811,7 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
     try:
         iplot = int(iplot)
     except:
-        if get_bool(iplot):
+        if get_Bool(iplot):
             iplot = 2
         else:
             iplot = 0
@@ -828,22 +823,14 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
     if len(X) > 1:
         # all three components available
         # make sure, all components have equal lengths
-        earliest_starttime = min(tr.stats.starttime for tr in X)
-        cuttimes = common_range(X)
-        X = X.slice(cuttimes[0], cuttimes[1])
-        x1, x2, x3 = X[:3]
-        if not (len(x1) == len(x2) == len(x3)):
-            raise PickingFailedException('checksignallength: unequal lengths of components!')
+        ilen = min([len(X[0].data), len(X[1].data), len(X[2].data)])
+        x1 = X[0][0:ilen]
+        x2 = X[1][0:ilen]
+        x3 = X[2][0:ilen]
         # get RMS trace
         rms = np.sqrt((np.power(x1, 2) + np.power(x2, 2) + np.power(x3, 2)) / 3)
-        ilen = len(rms)
-        dt = earliest_starttime - X[0].stats.starttime
-        pick -= dt
     else:
         x1 = X[0].data
-        x2 = x3 = None
         ilen = len(x1)
         rms = abs(x1)
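
Develop trims the three components to their common time range before forming the RMS trace and shifts the pick accordingly, instead of truncating by sample count, so traces with different start times stay aligned. A hedged sketch of the same idea in plain ObsPy (common_range is a PyLoT helper; here it is reconstructed from trace start and end times):

from obspy import Stream

def trim_to_common_range(st: Stream) -> Stream:
    """Slice all traces to the time window covered by every trace."""
    latest_start = max(tr.stats.starttime for tr in st)
    earliest_end = min(tr.stats.endtime for tr in st)
    return st.slice(latest_start, earliest_end)

# usage sketch: equal-length, time-aligned components for the RMS trace
# st = read('...')  # three-component stream, possibly ragged start times
# zne = trim_to_common_range(st)
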
@@ -876,16 +863,12 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
         returnflag = 0

     if iplot > 1:
-        if get_none(fig) is None:
+        if get_None(fig) is None:
             fig = plt.figure()  # iplot)
             plt_flag = 1
         fig._tight = True
         ax = fig.add_subplot(111)
         ax.plot(t, rms, color=linecolor, linewidth=0.7, label='RMS Data')
-        ax.plot(t, x1, 'k', alpha=0.3, lw=0.3, zorder=0)
-        if x2 is not None and x3 is not None:
-            ax.plot(t, x2, 'r', alpha=0.3, lw=0.3, zorder=0)
-            ax.plot(t, x3, 'g', alpha=0.3, lw=0.3, zorder=0)
         ax.axvspan(t[inoise[0]], t[inoise[-1]], color='y', alpha=0.2, lw=0, label='Noise Window')
         ax.axvspan(t[isignal[0]], t[isignal[-1]], color='b', alpha=0.2, lw=0, label='Signal Window')
         ax.plot([t[isignal[0]], t[isignal[len(isignal) - 1]]],
@@ -895,7 +878,6 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
         ax.set_xlabel('Time [s] since %s' % X[0].stats.starttime)
         ax.set_ylabel('Counts')
         ax.set_title('Check for Signal Length, Station %s' % X[0].stats.station)
-        ax.set_xlim(pickparams["pstart"], pickparams["pstop"])
         ax.set_yticks([])
         if plt_flag == 1:
             fig.show()
@@ -903,8 +885,6 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
                 input()
             except SyntaxError:
                 pass
-            except EOFError:
-                pass
             plt.close(fig)

     return returnflag
@@ -1145,7 +1125,7 @@ def checkZ4S(X, pick, pickparams, iplot, fig=None, linecolor='k'):
     try:
         iplot = int(iplot)
     except:
-        if get_bool(iplot):
+        if get_Bool(iplot):
             iplot = 2
         else:
             iplot = 0
@@ -1226,14 +1206,14 @@ def checkZ4S(X, pick, pickparams, iplot, fig=None, linecolor='k'):
             t = np.linspace(diff_dict[key], trace.stats.endtime - trace.stats.starttime + diff_dict[key],
                             trace.stats.npts)
             if i == 0:
-                if get_none(fig) is None:
+                if get_None(fig) is None:
                     fig = plt.figure()  # self.iplot) ### WHY? MP MP
                     plt_flag = 1
                 ax1 = fig.add_subplot(3, 1, i + 1)
                 ax = ax1
                 ax.set_title('CheckZ4S, Station %s' % zdat[0].stats.station)
             else:
-                if get_none(fig) is None:
+                if get_None(fig) is None:
                     fig = plt.figure()  # self.iplot) ### WHY? MP MP
                     plt_flag = 1
                 ax = fig.add_subplot(3, 1, i + 1, sharex=ax1)
@@ -1335,7 +1315,6 @@ def get_quality_class(uncertainty, weight_classes):
     :return: quality of pick (0-4)
     :rtype: int
     """
-    if not uncertainty: return len(weight_classes)
     try:
         # create generator expression containing all indices of values in weight classes that are >= than uncertainty.
         # call next on it once to receive first value
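
get_quality_class maps a symmetric pick uncertainty onto discrete weights by finding the first error threshold that is not exceeded; develop additionally short-circuits missing uncertainties to the worst class. A worked example of the mapping described by the comments above, with illustrative thresholds:

def get_quality_class(uncertainty, weight_classes):
    # simplified re-statement of the mapping described above
    if not uncertainty:
        return len(weight_classes)          # develop: no value -> worst class
    try:
        return next(i for i, cls in enumerate(weight_classes) if cls >= uncertainty)
    except StopIteration:
        return len(weight_classes)          # larger than every threshold

timeerrorsP = [0.01, 0.02, 0.04, 0.08]        # illustrative thresholds in s
print(get_quality_class(0.015, timeerrorsP))  # 1
print(get_quality_class(0.5, timeerrorsP))    # 4
print(get_quality_class(None, timeerrorsP))   # 4
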
@ -1346,6 +1325,20 @@ def get_quality_class(uncertainty, weight_classes):
quality = len(weight_classes) quality = len(weight_classes)
return quality return quality
def set_NaNs_to(data, nan_value):
"""
Replace all NaNs in data with nan_value
:param data: array holding data
:type data: `~numpy.ndarray`
:param nan_value: value which all NaNs are set to
:type nan_value: float, int
:return: data array with all NaNs replaced with nan_value
:rtype: `~numpy.ndarray`
"""
nn = np.isnan(data)
if np.any(nn):
data[nn] = nan_value
return data
def taper_cf(cf): def taper_cf(cf):
""" """
@@ -1358,7 +1351,6 @@ def taper_cf(cf):
    tap = np.hanning(len(cf))
    return tap * cf


def cf_positive(cf):
    """
    Shifts cf so that all values are positive
@@ -1369,7 +1361,6 @@ def cf_positive(cf):
    """
    return cf + max(abs(cf))


def smooth_cf(cf, t_smooth, delta):
    """
    Smooth cf by taking samples over t_smooth length
@@ -1398,7 +1389,6 @@ def smooth_cf(cf, t_smooth, delta):
    cf_smooth -= offset  # remove offset from smoothed function
    return cf_smooth
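Only the tail of smooth_cf is visible in this hunk; the following is a hedged sketch of the kind of boxcar smoothing its docstring describes (an assumption for illustration, not the diffed implementation):

import numpy as np

def smooth_boxcar(cf, t_smooth, delta):
    # number of samples covering t_smooth seconds at sampling interval delta
    n = max(1, int(round(t_smooth / delta)))
    cf_smooth = np.convolve(cf, np.ones(n) / n, mode='same')
    # remove the offset smoothing introduces, cf. "cf_smooth -= offset" above
    cf_smooth -= np.mean(cf_smooth - cf)
    return cf_smooth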
def check_counts_ms(data):
    """
    check if data is in counts or m/s
@@ -1481,10 +1471,8 @@ def get_pickparams(pickparam):
    :rtype: (dict, dict, dict, dict)
    """
    # Define names of all parameters in different groups
-   p_parameter_names = 'algoP pstart pstop use_taup taup_model tlta tsnrz hosorder bpz1 bpz2 pickwinP aictsmooth tsmoothP ausP nfacP tpred1z tdet1z Parorder addnoise Precalcwin minAICPslope minAICPSNR timeerrorsP checkwindowP minfactorP'.split(
-       ' ')
-   s_parameter_names = 'algoS sstart sstop bph1 bph2 tsnrh pickwinS tpred1h tdet1h tpred2h tdet2h Sarorder aictsmoothS tsmoothS ausS minAICSslope minAICSSNR Srecalcwin nfacS timeerrorsS zfac checkwindowS minfactorS'.split(
-       ' ')
+   p_parameter_names = 'algoP pstart pstop use_taup taup_model tlta tsnrz hosorder bpz1 bpz2 pickwinP aictsmooth tsmoothP ausP nfacP tpred1z tdet1z Parorder addnoise Precalcwin minAICPslope minAICPSNR timeerrorsP checkwindowP minfactorP'.split(' ')
+   s_parameter_names = 'algoS sstart sstop bph1 bph2 tsnrh pickwinS tpred1h tdet1h tpred2h tdet2h Sarorder aictsmoothS tsmoothS ausS minAICSslope minAICSSNR Srecalcwin nfacS timeerrorsS zfac checkwindowS minfactorS'.split(' ')
    first_motion_names = 'minFMSNR fmpickwin minfmweight'.split(' ')
    signal_length_names = 'minsiglength minpercent noisefactor'.split(' ')
    # Get list of values from pickparam by name
@@ -1498,16 +1486,15 @@ def get_pickparams(pickparam):
    first_motion_params = dict(zip(first_motion_names, fm_parameter_values))
    signal_length_params = dict(zip(signal_length_names, sl_parameter_values))

-   p_params['use_taup'] = get_bool(p_params['use_taup'])
+   p_params['use_taup'] = get_Bool(p_params['use_taup'])

    return p_params, s_params, first_motion_params, signal_length_params
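The grouping pattern used above, parameter names zipped with their values into per-group dicts, in isolation (the names come from the diff, the values are made up for the demo):

signal_length_names = 'minsiglength minpercent noisefactor'.split(' ')
sl_parameter_values = [1.0, 10.0, 1.2]
signal_length_params = dict(zip(signal_length_names, sl_parameter_values))
print(signal_length_params)
# {'minsiglength': 1.0, 'minpercent': 10.0, 'noisefactor': 1.2}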
def getQualityFromUncertainty(uncertainty, Errors):
    # set initial quality to 4 (worst) and change only if one condition is hit
    quality = 4

-   if get_none(uncertainty) is None:
+   if get_None(uncertainty) is None:
        return quality

    if uncertainty <= Errors[0]:
@@ -1526,22 +1513,7 @@ def getQualityFromUncertainty(uncertainty, Errors):
    return quality


if __name__ == '__main__':
    import doctest

    doctest.testmod()
-class PickingFailedException(Exception):
-    """
-    Raised when picking fails due to missing values etc.
-    """
-    pass
-
-
-class MissingTraceException(ValueError):
-    """
-    Used to indicate missing traces in a obspy.core.stream.Stream object
-    """
-    pass

File diff suppressed because it is too large
@@ -2,13 +2,12 @@
# -*- coding: utf-8 -*-

try:
-   # noinspection PyUnresolvedReferences
    from urllib2 import urlopen
except:
    from urllib.request import urlopen


-def checkurl(url='https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot/'):
+def checkurl(url='https://ariadne.geophysik.ruhr-uni-bochum.de/trac/PyLoT/'):
    """
    check if URL is available
    :param url: url
@@ -2,11 +2,9 @@
# -*- coding: utf-8 -*-

import glob
-import logging
+import numpy as np
import os
import sys
-
-import numpy as np

from obspy import UTCDateTime, read_inventory, read
from obspy.io.xseed import Parser
@@ -27,10 +25,6 @@ class Metadata(object):
        # saves which metadata files are from obspy dmt
        self.obspy_dmt_invs = []
        if inventory:
-           # make sure that no accidental backslashes mess up the path
-           if isinstance(inventory, str):
-               inventory = inventory.replace('\\', '/')
-           inventory = os.path.abspath(inventory)
            if os.path.isdir(inventory):
                self.add_inventory(inventory)
            if os.path.isfile(inventory):
@@ -52,15 +46,13 @@ class Metadata(object):
    def __repr__(self):
        return self.__str__()

-   def add_inventory(self, path_to_inventory, obspy_dmt_inv=False):
+   def add_inventory(self, path_to_inventory, obspy_dmt_inv = False):
        """
        Add path to list of inventories.
        :param path_to_inventory: Path to a folder
        :type path_to_inventory: str
        :return: None
        """
-       path_to_inventory = path_to_inventory.replace('\\', '/')
-       path_to_inventory = os.path.abspath(path_to_inventory)
        assert (os.path.isdir(path_to_inventory)), '{} is no directory'.format(path_to_inventory)
        if path_to_inventory not in self.inventories:
            self.inventories.append(path_to_inventory)
@@ -91,10 +83,10 @@ class Metadata(object):
            print('Path {} not in inventories list.'.format(path_to_inventory))
            return
        self.inventories.remove(path_to_inventory)
-       for filename in list(self.inventory_files.keys()):
+       for filename in self.inventory_files.keys():
            if filename.startswith(path_to_inventory):
                del (self.inventory_files[filename])
-       for seed_id in list(self.seed_ids.keys()):
+       for seed_id in self.seed_ids.keys():
            if self.seed_ids[seed_id].startswith(path_to_inventory):
                del (self.seed_ids[seed_id])
        # have to clean self.stations_dict as well
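The list(...) wrapper around dict.keys() matters in Python 3, where deleting entries while iterating a live dictionary view raises a RuntimeError; a minimal reproduction of the failure the wrapper avoids:

d = {'inv/a.xml': 1, 'inv/b.xml': 2, 'other/c.xml': 3}
for key in list(d.keys()):        # snapshot of the keys
    if key.startswith('inv/'):    # without list(): RuntimeError,
        del d[key]                # "dictionary changed size during iteration"
print(d)                          # {'other/c.xml': 3}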
@@ -196,11 +188,7 @@ class Metadata(object):
        metadata = self.get_metadata(seed_id, time)
        if not metadata:
            return
-       try:
-           return metadata['data'].get_coordinates(seed_id, time)
-       # no specific exception defined in obspy inventory
-       except Exception as e:
-           logging.warning(f'Could not get metadata for {seed_id}')
+       return metadata['data'].get_coordinates(seed_id, time)

    def get_all_coordinates(self):
        def stat_info_from_parser(parser):
@@ -220,10 +208,9 @@ class Metadata(object):
                network_name = network.code
                if not station_name in self.stations_dict.keys():
                    st_id = '{}.{}'.format(network_name, station_name)
-                   self.stations_dict[st_id] = {'latitude': station.latitude,
-                                                'longitude': station.longitude,
-                                                'elevation': station.elevation}
+                   self.stations_dict[st_id] = {'latitude': station[0].latitude,
+                                                'longitude': station[0].longitude,
+                                                'elevation': station[0].elevation}

        read_stat = {'xml': stat_info_from_inventory,
                     'dless': stat_info_from_parser}
@@ -268,6 +255,9 @@ class Metadata(object):
        if not fnames:
            # search for station name in filename
            fnames = glob.glob(os.path.join(path_to_inventory, '*' + station + '*'))
+       if not fnames:
+           # search for network name in filename
+           fnames = glob.glob(os.path.join(path_to_inventory, '*' + network + '*'))
        if not fnames:
            if self.verbosity:
                print('Could not find filenames matching station name, network name or seed id')
@@ -287,7 +277,6 @@ class Metadata(object):
                self.seed_ids[station_seed_id] = fname
                return True
            except Exception as e:
-               logging.warning(e)
                continue
        print('Could not find metadata for station_seed_id {} in path {}'.format(station_seed_id, path_to_inventory))
@@ -362,25 +351,25 @@ def check_time(datetime):
    :type datetime: list
    :return: returns True if Values are in supposed range, returns False otherwise

-   >>> check_time([1999, 1, 1, 23, 59, 59, 999000])
+   >>> check_time([1999, 01, 01, 23, 59, 59, 999000])
    True
-   >>> check_time([1999, 1, 1, 23, 59, 60, 999000])
+   >>> check_time([1999, 01, 01, 23, 59, 60, 999000])
    False
-   >>> check_time([1999, 1, 1, 23, 59, 59, 1000000])
+   >>> check_time([1999, 01, 01, 23, 59, 59, 1000000])
    False
-   >>> check_time([1999, 1, 1, 23, 60, 59, 999000])
+   >>> check_time([1999, 01, 01, 23, 60, 59, 999000])
    False
-   >>> check_time([1999, 1, 1, 23, 60, 59, 999000])
+   >>> check_time([1999, 01, 01, 23, 60, 59, 999000])
    False
-   >>> check_time([1999, 1, 1, 24, 59, 59, 999000])
+   >>> check_time([1999, 01, 01, 24, 59, 59, 999000])
    False
-   >>> check_time([1999, 1, 31, 23, 59, 59, 999000])
+   >>> check_time([1999, 01, 31, 23, 59, 59, 999000])
    True
-   >>> check_time([1999, 2, 30, 23, 59, 59, 999000])
+   >>> check_time([1999, 02, 30, 23, 59, 59, 999000])
    False
-   >>> check_time([1999, 2, 29, 23, 59, 59, 999000])
+   >>> check_time([1999, 02, 29, 23, 59, 59, 999000])
    False
-   >>> check_time([2000, 2, 29, 23, 59, 59, 999000])
+   >>> check_time([2000, 02, 29, 23, 59, 59, 999000])
    True
    >>> check_time([2000, 13, 29, 23, 59, 59, 999000])
    False
@@ -392,7 +381,6 @@ def check_time(datetime):
    return False
-# TODO: change root to datapath
def get_file_list(root_dir):
    """
    Function uses a directorie to get all the *.gse files from it.
@@ -456,7 +444,7 @@ def evt_head_check(root_dir, out_dir=None):
    """
    if not out_dir:
        print('WARNING files are going to be overwritten!')
-       inp = str(input('Continue? [y/N]'))
+       inp = str(raw_input('Continue? [y/N]'))
        if not inp == 'y':
            sys.exit()
    filelist = get_file_list(root_dir)
@@ -652,8 +640,6 @@ def restitute_data(data, metadata, unit='VEL', force=False, ncores=0):
    """
    # data = remove_underscores(data)
-   if not data:
-       return

    # loop over traces
    input_tuples = []
@@ -661,11 +647,6 @@ def restitute_data(data, metadata, unit='VEL', force=False, ncores=0):
        input_tuples.append((tr, metadata, unit, force))
        data.remove(tr)

-   if ncores == 0:
-       result = []
-       for input_tuple in input_tuples:
-           result.append(restitute_trace(input_tuple))
-   else:
        pool = gen_Pool(ncores)
        result = pool.imap_unordered(restitute_trace, input_tuples)
        pool.close()
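The ncores == 0 branch processes the traces serially instead of spawning a pool; the pattern in miniature (process is a stand-in for restitute_trace):

import multiprocessing

def process(item):                      # illustrative stand-in worker
    return item * 2

def run(items, ncores=0):
    if ncores == 0:                     # serial: simpler debugging, no IPC
        return [process(item) for item in items]
    pool = multiprocessing.Pool(ncores)
    result = list(pool.imap_unordered(process, items))
    pool.close()
    pool.join()
    return result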
@@ -26,7 +26,9 @@ elif system_name == "Windows":
# suffix for phase name if not phase identified by last letter (P, p, etc.)
ALTSUFFIX = ['diff', 'n', 'g', '1', '2', '3']

-FILTERDEFAULTS = readDefaultFilterInformation()
+FILTERDEFAULTS = readDefaultFilterInformation(os.path.join(os.path.expanduser('~'),
+                                                           '.pylot',
+                                                           'pylot.in'))

TIMEERROR_DEFAULTS = os.path.join(os.path.expanduser('~'),
                                  '.pylot',
@@ -2,7 +2,6 @@
# -*- coding: utf-8 -*-

import os
-
from obspy import UTCDateTime
from obspy.core.event import Event as ObsPyEvent
from obspy.core.event import Origin, ResourceIdentifier
@@ -26,7 +25,9 @@ class Event(ObsPyEvent):
        # initialize super class
        super(Event, self).__init__(resource_id=ResourceIdentifier('smi:local/' + self.pylot_id))
        self.path = path
-       self.datapath = os.path.split(path)[0]  # path.split('/')[-3]
+       self.database = path.split('/')[-2]
+       self.datapath = path.split('/')[-3]
+       self.rootpath = '/' + os.path.join(*path.split('/')[:-3])
        self.pylot_autopicks = {}
        self.pylot_picks = {}
        self.notes = ''
@@ -72,7 +73,7 @@ class Event(ObsPyEvent):
            text = lines[0]
            self.addNotes(text)
            try:
-               datetime = UTCDateTime(self.path.split('/')[-1])
+               datetime = UTCDateTime(path.split('/')[-1])
                origin = Origin(resource_id=self.resource_id, time=datetime, latitude=0, longitude=0, depth=0)
                self.origins.append(origin)
            except:
@@ -295,13 +296,13 @@ class Event(ObsPyEvent):
        :rtype: None
        """
        try:
-           import pickle
+           import cPickle
        except ImportError:
-           import _pickle as pickle
+           import _pickle as cPickle
        try:
            outfile = open(filename, 'wb')
-           pickle.dump(self, outfile, -1)
+           cPickle.dump(self, outfile, -1)
            self.dirty = False
        except Exception as e:
            print('Could not pickle PyLoT event. Reason: {}'.format(e))
@@ -316,11 +317,11 @@ class Event(ObsPyEvent):
        :rtype: Event
        """
        try:
-           import pickle
+           import cPickle
        except ImportError:
-           import _pickle as pickle
+           import _pickle as cPickle
        infile = open(filename, 'rb')
-       event = pickle.load(infile)
+       event = cPickle.load(infile)
        event.dirty = False
        print('Loaded %s' % filename)
        return event
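Both versions guard the pickle import for Python 2/3 compatibility (cPickle on one side, the standard module plus the _pickle fallback on the other); the save/load round trip in isolation (the dict is a stand-in for the Event instance):

try:
    import pickle
except ImportError:
    import _pickle as pickle          # C-accelerated fallback

obj = {'pylot_id': 'e0001'}           # stand-in for the Event instance
with open('event.pkl', 'wb') as outfile:
    pickle.dump(obj, outfile, -1)     # -1 selects the highest protocol
with open('event.pkl', 'rb') as infile:
    print(pickle.load(infile))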
@@ -3,37 +3,24 @@
# small script that creates array maps for each event within a previously generated PyLoT project
import os

-num_thread = "16"
-os.environ["OMP_NUM_THREADS"] = num_thread
-os.environ["OPENBLAS_NUM_THREADS"] = num_thread
-os.environ["MKL_NUM_THREADS"] = num_thread
-os.environ["VECLIB_MAXIMUM_THREADS"] = num_thread
-os.environ["NUMEXPR_NUM_THREADS"] = num_thread
-os.environ["NUMEXPR_MAX_THREADS"] = num_thread
-
import multiprocessing
-import sys
-import glob
-
-import matplotlib
-
-matplotlib.use('Qt5Agg')
-
-sys.path.append(os.path.join('/'.join(sys.argv[0].split('/')[:-1]), '../../..'))
from PyLoT import Project
from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.array_map import Array_map
import matplotlib.pyplot as plt
-import argparse


def main(project_file_path, manual=False, auto=True, file_format='png', f_ext='', ncores=None):
    project = Project.load(project_file_path)
    nEvents = len(project.eventlist)
    input_list = []
-   print('\n')
    for index, event in enumerate(project.eventlist):
+       # MP MP TESTING +++
+       #if not eventdir.endswith('20170908_044946.a'):
+       #    continue
+       # MP MP ----
        kwargs = dict(project=project, event=event, nEvents=nEvents, index=index, manual=manual, auto=auto,
                      file_format=file_format, f_ext=f_ext)
        input_list.append(kwargs)
@@ -42,21 +29,15 @@ def main(project_file_path, manual=False, auto=True, file_format='png', f_ext=''
        for item in input_list:
            array_map_worker(item)
    else:
-       pool = multiprocessing.Pool(ncores, maxtasksperchild=1000)
-       pool.map(array_map_worker, input_list)
+       pool = multiprocessing.Pool(ncores)
+       result = pool.map(array_map_worker, input_list)
        pool.close()
        pool.join()


def array_map_worker(input_dict):
    event = input_dict['event']
    eventdir = event.path
    print('Working on event: {} ({}/{})'.format(eventdir, input_dict['index'] + 1, input_dict['nEvents']))
-   xml_picks = glob.glob(os.path.join(eventdir, f'*{input_dict["f_ext"]}.xml'))
-   if not len(xml_picks):
-       print('Event {} does not have any picks associated with event file extension {}'.format(eventdir,
-                                                                                               input_dict['f_ext']))
-       return
    # check for picks
    manualpicks = event.getPicks()
    autopicks = event.getAutopicks()
@@ -70,12 +51,11 @@ def array_map_worker(input_dict):
            continue
        if not metadata:
            metadata = Metadata(inventory=metadata_path, verbosity=0)

    # create figure to plot on
-   fig, ax = plt.subplots(figsize=(15, 9))
+   fig = plt.figure(figsize=(16,9))
    # create array map object
-   map = Array_map(None, metadata, parameter=input_dict['project'].parameter, axes=ax,
-                   width=2.13e6, height=1.2e6, pointsize=25., linewidth=1.0)
+   map = Array_map(None, metadata, parameter=input_dict['project'].parameter, figure=fig,
+                   width=2.13e6, height=1.2e6, pointsize=15., linewidth=1.0)
    # set combobox to auto/manual to plot correct pick type
    map.comboBox_am.setCurrentIndex(map.comboBox_am.findText(pick_type))
    # add picks to map and save file
@@ -85,13 +65,11 @@ def array_map_worker(input_dict):
        fig.savefig(fpath_out, dpi=300.)
        print('Wrote file: {}'.format(fpath_out))


if __name__ == '__main__':
-   cl = argparse.ArgumentParser()
-   cl.add_argument('--dataroot', help='Directory containing the PyLoT .plp file', type=str)
-   cl.add_argument('--infiles', help='.plp files to use', nargs='+')
-   cl.add_argument('--ncores', hepl='Specify number of parallel processes', type=int, default=1)
-   args = cl.parse_args()
+   dataroot = '/home/marcel'
+   infiles = ['alparray_all_events_0.03-0.1_mantle_correlated_v3.plp']

-   for infile in args.infiles:
-       main(os.path.join(args.dataroot, infile), f_ext='_correlated_0.03-0.1', ncores=args.ncores)
+   for infile in infiles:
+       main(os.path.join(dataroot, infile), f_ext='_correlated_0.1Hz', ncores=10)
+       #main('E:\Shared\AlpArray\\test_aa.plp', f_ext='_correlated_0.5Hz', ncores=1)
+       #main('/home/marcel/alparray_m6.5-6.9_mantle_correlated_v3.plp', f_ext='_correlated_0.5Hz')
@@ -1,7 +1,6 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
-from functools import lru_cache

try:
    import pyqtgraph as pg
@@ -12,6 +11,7 @@ try:
except Exception as e:
    print('Warning: Could not import module QtCore.')

from pylot.core.util.utils import pick_color
@@ -26,14 +26,14 @@ def pick_linestyle_pg(picktype, key):
    :return: Qt line style parameters
    :rtype:
    """
-   linestyles_manu = {'mpp': (QtCore.Qt.SolidLine, 2),
-                      'epp': (QtCore.Qt.DashLine, 1),
-                      'lpp': (QtCore.Qt.DashLine, 1),
-                      'spe': (QtCore.Qt.DashLine, 1)}
-   linestyles_auto = {'mpp': (QtCore.Qt.DotLine, 2),
-                      'epp': (QtCore.Qt.DashDotLine, 1),
-                      'lpp': (QtCore.Qt.DashDotLine, 1),
-                      'spe': (QtCore.Qt.DashDotLine, 1)}
+   linestyles_manu = {'mpp': (QtCore.Qt.SolidLine, 2.),
+                      'epp': (QtCore.Qt.DashLine, 1.),
+                      'lpp': (QtCore.Qt.DashLine, 1.),
+                      'spe': (QtCore.Qt.DashLine, 1.)}
+   linestyles_auto = {'mpp': (QtCore.Qt.DotLine, 2.),
+                      'epp': (QtCore.Qt.DashDotLine, 1.),
+                      'lpp': (QtCore.Qt.DashDotLine, 1.),
+                      'spe': (QtCore.Qt.DashDotLine, 1.)}
    linestyles = {'manual': linestyles_manu,
                  'auto': linestyles_auto}
    return linestyles[picktype][key]
@@ -49,7 +49,7 @@ def which(program, parameter):
    :rtype: str
    """
    try:
-       from PySide2.QtCore import QSettings
+       from PySide.QtCore import QSettings
        settings = QSettings()
        for key in settings.allKeys():
            if 'binPath' in key:
@@ -57,7 +57,7 @@ def which(program, parameter):
                nllocpath = ":" + parameter.get('nllocbin')
                os.environ['PATH'] += nllocpath
    except Exception as e:
-       print(e)
+       print(e.message)

    def is_exe(fpath):
        return os.path.exists(fpath) and os.access(fpath, os.X_OK)
@@ -81,7 +81,6 @@ def which(program, parameter):
    return None


-@lru_cache(maxsize=128)
def make_pen(picktype, phase, key, quality):
    """
    Make PyQtGraph.QPen
@@ -2,7 +2,6 @@
# -*- coding: utf-8 -*-
import os
-
from obspy import UTCDateTime
@@ -35,25 +34,17 @@ def qml_from_obspyDMT(path):
    if not os.path.exists(path):
        return IOError('Could not find Event at {}'.format(path))

-   with open(path, 'rb') as infile:
-       event_dmt = pickle.load(infile)  # , fix_imports=True)
+   infile = open(path, 'rb')
+   event_dmt = pickle.load(infile)#, fix_imports=True)

    event_dmt['origin_id'].id = str(event_dmt['origin_id'].id)
    ev = Event(resource_id=event_dmt['event_id'])
-   # small bugfix "unhashable type: 'newstr' "
+   #small bugfix "unhashable type: 'newstr' "
    event_dmt['origin_id'].id = str(event_dmt['origin_id'].id)

-   origin = Origin(resource_id=event_dmt['origin_id'],
-                   time=event_dmt['datetime'],
-                   longitude=event_dmt['longitude'],
-                   latitude=event_dmt['latitude'],
-                   depth=event_dmt['depth'])
-   mag = Magnitude(mag=event_dmt['magnitude'],
-                   magnitude_type=event_dmt['magnitude_type'],
+   origin = Origin(resource_id=event_dmt['origin_id'], time=event_dmt['datetime'], longitude=event_dmt['longitude'],
+                   latitude=event_dmt['latitude'], depth=event_dmt['depth'])
+   mag = Magnitude(mag=event_dmt['magnitude'], magnitude_type=event_dmt['magnitude_type'],
                    origin_id=event_dmt['origin_id'])
    ev.magnitudes.append(mag)
    ev.origins.append(origin)
    return ev
@@ -1,9 +1,8 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-import warnings
-
import numpy as np
+import warnings

from obspy import UTCDateTime

from pylot.core.util.utils import fit_curve, clims
@@ -1,9 +1,6 @@
# -*- coding: utf-8 -*-
+import sys, os, traceback
import multiprocessing
-import os
-import sys
-import traceback

from PySide2.QtCore import QThread, Signal, Qt, Slot, QRunnable, QObject
from PySide2.QtWidgets import QDialog, QProgressBar, QLabel, QHBoxLayout, QPushButton
@@ -22,11 +19,9 @@ class Thread(QThread):
        self.abortButton = abortButton
        self.finished.connect(self.hideProgressbar)
        self.showProgressbar()
-       self.old_stdout = None

    def run(self):
        if self.redirect_stdout:
-           self.old_stdout = sys.stdout
            sys.stdout = self
        try:
            if self.arg is not None:
@@ -41,8 +36,7 @@ class Thread(QThread):
            exctype, value = sys.exc_info()[:2]
            self._executedErrorInfo = '{} {} {}'. \
                format(exctype, value, traceback.format_exc())
-       if self.redirect_stdout:
-           sys.stdout = self.old_stdout
+       sys.stdout = sys.__stdout__

    def showProgressbar(self):
        if self.progressText:
@@ -99,25 +93,23 @@ class Worker(QRunnable):
        self.progressText = progressText
        self.pb_widget = pb_widget
        self.redirect_stdout = redirect_stdout
-       self.old_stdout = None

    @Slot()
    def run(self):
        if self.redirect_stdout:
-           self.old_stdout = sys.stdout
            sys.stdout = self

        try:
            result = self.fun(self.args)
        except:
-           exctype, value = sys.exc_info()[:2]
+           exctype, value = sys.exc_info ()[:2]
            print(exctype, value, traceback.format_exc())
-           self.signals.error.emit((exctype, value, traceback.format_exc()))
+           self.signals.error.emit ((exctype, value, traceback.format_exc ()))
        else:
            self.signals.result.emit(result)
        finally:
            self.signals.finished.emit('Done')
-           sys.stdout = self.old_stdout
+           sys.stdout = sys.__stdout__

    def write(self, text):
        self.signals.message.emit(text)
@@ -149,18 +141,16 @@ class MultiThread(QThread):
        self.progressText = progressText
        self.pb_widget = pb_widget
        self.redirect_stdout = redirect_stdout
-       self.old_stdout = None
        self.finished.connect(self.hideProgressbar)
        self.showProgressbar()

    def run(self):
        if self.redirect_stdout:
-           self.old_stdout = sys.stdout
            sys.stdout = self
        try:
            if not self.ncores:
                self.ncores = multiprocessing.cpu_count()
-           pool = multiprocessing.Pool(self.ncores, maxtasksperchild=1000)
+           pool = multiprocessing.Pool(self.ncores)
            self.data = pool.map_async(self.func, self.args, callback=self.emitDone)
            # self.data = pool.apply_async(self.func, self.shotlist, callback=self.emitDone) #emit each time returned
            pool.close()
@@ -171,7 +161,7 @@ class MultiThread(QThread):
            exc_type, exc_obj, exc_tb = sys.exc_info()
            fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
            print('Exception: {}, file: {}, line: {}'.format(exc_type, fname, exc_tb.tb_lineno))
-       sys.stdout = self.old_stdout
+       sys.stdout = sys.__stdout__
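Restoring the previously saved stream (self.old_stdout) instead of sys.__stdout__ matters when stdout was already redirected before the thread started, e.g. nested redirections; a minimal illustration of the difference:

import sys

saved = sys.stdout                 # whatever was active before, which may
sys.stdout = open('log.txt', 'w')  # itself already be a redirection
try:
    print('captured in log.txt')
finally:
    sys.stdout.close()
    sys.stdout = saved             # restores the caller's stream;
                                   # sys.__stdout__ would discard outer redirections
print('back on the original stream')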
    def showProgressbar(self):
        if self.progressText:
@@ -1,17 +1,13 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-import glob
import hashlib
-import logging
+import numpy as np
import os
import platform
import re
import subprocess
import warnings
-from typing import Literal, Tuple, Type
-from functools import lru_cache

-import numpy as np
from obspy import UTCDateTime, read
from obspy.core import AttribDict
from obspy.signal.rotate import rotate2zne
@@ -21,10 +17,6 @@ from pylot.core.io.inputs import PylotParameter, FilterOptions
from pylot.core.util.obspyDMT_interface import check_obspydmt_eventfolder
from pylot.styles import style_settings

-Rgba: Type[tuple] = Tuple[int, int, int, int]
-Mplrgba: Type[tuple] = Tuple[float, float, float, float]
-Mplrgbastr: Type[tuple] = Tuple[str, str, str, str]


def _pickle_method(m):
    if m.im_self is None:
@@ -44,13 +36,15 @@ def getAutoFilteroptions(phase, parameter):
    return filteroptions


-def readDefaultFilterInformation():
+def readDefaultFilterInformation(fname):
    """
    Read default filter information from pylot.in file
+   :param fname: path to pylot.in file
+   :type fname: str
    :return: dictionary containing the defailt filter information
    :rtype: dict
    """
-   pparam = PylotParameter()
+   pparam = PylotParameter(fname)
    return readFilterInformation(pparam)
@@ -87,6 +81,25 @@ def fit_curve(x, y):
    return splev, splrep(x, y)


+def getindexbounds(f, eta):
+    """
+    Get indices of values closest below and above maximum value in an array
+    :param f: array
+    :type f: `~numpy.ndarray`
+    :param eta: look for value in array that is closes to max_value * eta
+    :type eta: float
+    :return: tuple containing index of max value, index of value closest below max value,
+     index of value closest above max value
+    :rtype: (int, int, int)
+    """
+    mi = f.argmax()  # get indices of max values
+    m = max(f)  # get maximum value
+    b = m * eta  #
+    l = find_nearest(f[:mi], b)  # find closest value below max value
+    u = find_nearest(f[mi:], b) + mi  # find closest value above max value
+    return mi, l, u


def gen_Pool(ncores=0):
    """
    Generate mulitprocessing pool object utilizing ncores amount of cores
@@ -106,7 +119,7 @@ def gen_Pool(ncores=0):
    print('gen_Pool: Generated multiprocessing Pool with {} cores\n'.format(ncores))

-   pool = multiprocessing.Pool(ncores, maxtasksperchild=100)
+   pool = multiprocessing.Pool(ncores)
    return pool
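The maxtasksperchild argument recycles each worker process after a fixed number of tasks, bounding per-worker memory growth on long runs; typical consumption of such a pool:

import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    pool = multiprocessing.Pool(2, maxtasksperchild=100)
    results = sorted(pool.imap_unordered(square, range(5)))
    pool.close()
    pool.join()
    print(results)  # [0, 1, 4, 9, 16]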
@@ -152,11 +165,11 @@ def clims(lim1, lim2):
    """
    takes two pairs of limits and returns one pair of common limts
    :param lim1: limit 1
-   :type lim1: List[int]
+   :type lim1: int
    :param lim2: limit 2
-   :type lim2: List[int]
+   :type lim2: int
    :return: new upper and lower limit common to both given limits
-   :rtype: List[int]
+   :rtype: [int, int]

    >>> clims([0, 4], [1, 3])
    [0, 4]
@@ -214,7 +227,7 @@ def findComboBoxIndex(combo_box, val):
    :type val: basestring
    :return: index value of item with name val or 0
    """
-   return combo_box.findText(val) if combo_box.findText(val) != -1 else 0
+   return combo_box.findText(val) if combo_box.findText(val) is not -1 else 0


def find_in_list(list, str):
@@ -288,7 +301,7 @@ def fnConstructor(s):
    if type(s) is str:
        s = s.split(':')[-1]
    else:
-       s = get_hash(UTCDateTime())
+       s = getHash(UTCDateTime())

    badchars = re.compile(r'[^A-Za-z0-9_. ]+|^\.|\.$|^ | $|^$')
    badsuffix = re.compile(r'(aux|com[1-9]|con|lpt[1-9]|prn)(\.|$)')
@@ -300,75 +313,32 @@ def fnConstructor(s):
    return fn


-def get_none(value):
+def get_None(value):
    """
    Convert "None" to None
    :param value:
-   :type value: str, NoneType
+   :type value: str, bool
    :return:
-   :rtype: type(value) or NoneType
+   :rtype: bool
-
-   >>> st = read()
-   >>> print(get_none(st))
-   3 Trace(s) in Stream:
-   BW.RJOB..EHZ | 2009-08-24T00:20:03.000000Z - 2009-08-24T00:20:32.990000Z | 100.0 Hz, 3000 samples
-   BW.RJOB..EHN | 2009-08-24T00:20:03.000000Z - 2009-08-24T00:20:32.990000Z | 100.0 Hz, 3000 samples
-   BW.RJOB..EHE | 2009-08-24T00:20:03.000000Z - 2009-08-24T00:20:32.990000Z | 100.0 Hz, 3000 samples
-   >>> get_none('Stream')
-   'Stream'
-   >>> get_none(0)
-   0
-   >>> get_none(0.)
-   0.0
-   >>> print(get_none('None'))
-   None
-   >>> print(get_none(None))
-   None
    """
-   if value is None or (type(value) is str and value == 'None'):
+   if value == 'None':
        return None
    else:
        return value
-def get_bool(value):
+def get_Bool(value):
    """
-   Convert string representations of bools to their true boolean value. Return value if it cannot be identified as bool.
+   Convert string representations of bools to their true boolean value
    :param value:
-   :type value: str, bool, int, float
+   :type value: str, bool
    :return: true boolean value
    :rtype: bool
-
-   >>> get_bool(True)
-   True
-   >>> get_bool(False)
-   False
-   >>> get_bool(0)
-   False
-   >>> get_bool(0.)
-   False
-   >>> get_bool(0.1)
-   True
-   >>> get_bool(2)
-   True
-   >>> get_bool(-1)
-   False
-   >>> get_bool(-0.3)
-   False
-   >>> get_bool(None)
-   None
    """
-   if type(value) is bool:
-       return value
-   elif value in ['True', 'true']:
+   if value in ['True', 'true']:
        return True
    elif value in ['False', 'false']:
        return False
-   elif isinstance(value, float) or isinstance(value, int):
-       if value > 0. or value > 0:
-           return True
-       else:
-           return False
    else:
        return value
@@ -382,8 +352,8 @@ def four_digits(year):
    :return: four digit year correspondent
    :rtype: int

-   >>> four_digits(75)
-   1975
+   >>> four_digits(20)
+   1920
    >>> four_digits(16)
    2016
    >>> four_digits(00)
@@ -465,53 +435,36 @@ def backtransformFilterString(st):
    return st


-def get_hash(time):
+def getHash(time):
    """
    takes a time object and returns the corresponding SHA1 hash of the formatted date string
    :param time: time object for which a hash should be calculated
    :type time: `~obspy.core.utcdatetime.UTCDateTime`
    :return: SHA1 hash
    :rtype: str
-
-   >>> time = UTCDateTime(0)
-   >>> get_hash(time)
-   '7627cce3b1b58dd21b005dac008b34d18317dd15'
-   >>> get_hash(0)
-   Traceback (most recent call last):
-   ...
-   AssertionError: 'time' is not an ObsPy UTCDateTime object
    """
-   assert isinstance(time, UTCDateTime), '\'time\' is not an ObsPy UTCDateTime object'
    hg = hashlib.sha1()
-   hg.update(time.strftime('%Y-%m-%d %H:%M:%S.%f').encode('utf-8'))
+   hg.update(time.strftime('%Y-%m-%d %H:%M:%S.%f'))
    return hg.hexdigest()
-def get_login():
+def getLogin():
    """
-   returns the actual user's name
+   returns the actual user's login ID
-   :return: login name
+   :return: login ID
    :rtype: str
    """
    import getpass
    return getpass.getuser()
-def get_owner(fn):
+def getOwner(fn):
    """
    takes a filename and return the login ID of the actual owner of the file
    :param fn: filename of the file tested
    :type fn: str
    :return: login ID of the file's owner
    :rtype: str
-
-   >>> import tempfile
-   >>> with tempfile.NamedTemporaryFile() as tmpfile:
-   ...     tmpfile.write(b'') and True
-   ...     tmpfile.flush()
-   ...     get_owner(tmpfile.name) == os.path.expanduser('~').split('/')[-1]
-   0
-   True
    """
    system_name = platform.system()
    if system_name in ["Linux", "Darwin"]:
@@ -557,11 +510,6 @@ def is_executable(fn):
    :param fn: path to the file to be tested
    :return: True or False
    :rtype: bool
-
-   >>> is_executable('/bin/ls')
-   True
-   >>> is_executable('/var/log/system.log')
-   False
    """
    return os.path.isfile(fn) and os.access(fn, os.X_OK)
@@ -588,36 +536,24 @@ def isSorted(iterable):
    >>> isSorted([2,3,1,4])
    False
    """
-   assert is_iterable(iterable), "object is not iterable; object: {}".format(iterable)
+   assert isIterable(iterable), 'object is not iterable; object: {' \
+                                '}'.format(iterable)
    if type(iterable) is str:
        iterable = [s for s in iterable]
    return sorted(iterable) == iterable


-def is_iterable(obj):
+def isIterable(obj):
    """
    takes a python object and returns True is the object is iterable and
    False otherwise
    :param obj: a python object
-   :type obj: obj
+   :type obj: object
    :return: True of False
    :rtype: bool
-
-   >>> is_iterable(1)
-   False
-   >>> is_iterable(True)
-   False
-   >>> is_iterable(0.)
-   False
-   >>> is_iterable((0,1,3,4))
-   True
-   >>> is_iterable([1])
-   True
-   >>> is_iterable('a')
-   True
    """
    try:
-       iter(obj)
+       iterator = iter(obj)
    except TypeError as te:
        return False
    return True
@@ -626,19 +562,13 @@ def is_iterable(obj):
def key_for_set_value(d):
    """
    takes a dictionary and returns the first key for which's value the
-   boolean representation is True
+   boolean is True
    :param d: dictionary containing values
    :type d: dict
    :return: key to the first non-False value found; None if no value's
    boolean equals True
-   :rtype: bool or NoneType
+   :rtype:
-
-   >>> key_for_set_value({'one': 0, 'two': 1})
-   'two'
-   >>> print(key_for_set_value({1: 0, 2: False}))
-   None
    """
-   assert type(d) is dict, "Function only defined for inputs of type 'dict'."
    r = None
    for k, v in d.items():
        if v:
@@ -646,53 +576,32 @@ def key_for_set_value(d):
    return r
-def prep_time_axis(offset, trace, verbosity=0):
+def prepTimeAxis(stime, trace, verbosity=0):
    """
-   takes an offset and a trace object and returns a valid time axis for
+   takes a starttime and a trace object and returns a valid time axis for
    plotting
-   :param offset: offset of the actual seismogram on plotting axis
-   :type offset: float or int
+   :param stime: start time of the actual seismogram as UTCDateTime
+   :type stime: `~obspy.core.utcdatetime.UTCDateTime`
    :param trace: seismic trace object
    :type trace: `~obspy.core.trace.Trace`
    :param verbosity: if != 0, debug output will be written to console
    :type verbosity: int
    :return: valid numpy array with time stamps for plotting
    :rtype: `~numpy.ndarray`
-
-   >>> tr = read()[0]
-   >>> prep_time_axis(0., tr)
-   array([0.00000000e+00, 1.00033344e-02, 2.00066689e-02, ...,
-          2.99799933e+01, 2.99899967e+01, 3.00000000e+01])
-   >>> prep_time_axis(22.5, tr)
-   array([22.5       , 22.51000333, 22.52000667, ..., 52.47999333,
-          52.48999667, 52.5       ])
-   >>> prep_time_axis(tr.stats.starttime, tr)
-   Traceback (most recent call last):
-   ...
-   AssertionError: 'offset' is not of type 'float' or 'int'; type: <class 'obspy.core.utcdatetime.UTCDateTime'>
-   >>> tr.stats.npts -= 1
-   >>> prep_time_axis(0, tr)
-   array([0.00000000e+00, 1.00033356e-02, 2.00066711e-02, ...,
-          2.99699933e+01, 2.99799967e+01, 2.99900000e+01])
-   >>> tr.stats.npts += 2
-   >>> prep_time_axis(0, tr)
-   array([0.00000000e+00, 1.00033333e-02, 2.00066667e-02, ...,
-          2.99899933e+01, 2.99999967e+01, 3.00100000e+01])
    """
-   assert isinstance(offset, (float, int)), "'offset' is not of type 'float' or 'int'; type: {}".format(type(offset))
    nsamp = trace.stats.npts
    srate = trace.stats.sampling_rate
    tincr = trace.stats.delta
-   etime = offset + nsamp / srate
-   time_ax = np.linspace(offset, etime, nsamp)
+   etime = stime + nsamp / srate
+   time_ax = np.linspace(stime, etime, nsamp)
    if len(time_ax) < nsamp:
        if verbosity:
            print('elongate time axes by one datum')
-       time_ax = np.arange(offset, etime + tincr, tincr)
+       time_ax = np.arange(stime, etime + tincr, tincr)
    elif len(time_ax) > nsamp:
        if verbosity:
            print('shorten time axes by one datum')
-       time_ax = np.arange(offset, etime - tincr, tincr)
+       time_ax = np.arange(stime, etime - tincr, tincr)
    if len(time_ax) != nsamp:
        print('Station {0}, {1} samples of data \n '
              '{2} length of time vector \n'
@@ -708,13 +617,13 @@ def find_horizontals(data):
    :param data: waveform data
    :type data: `obspy.core.stream.Stream`
    :return: components list
-   :rtype: List(str)
+   :rtype: list

    ..example::

    >>> st = read()
    >>> find_horizontals(st)
-   ['N', 'E']
+   [u'N', u'E']
    """
    rval = []
    for tr in data:
@@ -725,7 +634,7 @@ def find_horizontals(data):
    return rval


-def pick_color(picktype: Literal['manual', 'automatic'], phase: Literal['P', 'S'], quality: int = 0) -> Rgba:
+def pick_color(picktype, phase, quality=0):
    """
    Create pick color by modifying the base color by the quality.
@@ -738,7 +647,7 @@ def pick_color(picktype: Literal['manual', 'automatic'], phase: Literal['P', 'S'
    :param quality: quality of pick. Decides the new intensity of the modifier color
    :type quality: int
    :return: tuple containing modified rgba color values
-   :rtype: Rgba
+   :rtype: (int, int, int, int)
    """
    min_quality = 3
    bpc = base_phase_colors(picktype, phase)  # returns dict like {'modifier': 'g', 'rgba': (0, 0, 255, 255)}
@@ -794,17 +703,17 @@ def pick_linestyle_plt(picktype, key):
    return linestyles[picktype][key]


-def modify_rgba(rgba: Rgba, modifier: Literal['r', 'g', 'b'], intensity: float) -> Rgba:
+def modify_rgba(rgba, modifier, intensity):
    """
    Modify rgba color by adding the given intensity to the modifier color
    :param rgba: tuple containing rgba values
-   :type rgba: Rgba
+   :type rgba: (int, int, int, int)
-   :param modifier: which color should be modified; options: 'r', 'g', 'b'
+   :param modifier: which color should be modified, eg. 'r', 'g', 'b'
-   :type modifier: Literal['r', 'g', 'b']
+   :type modifier: str
    :param intensity: intensity to be added to selected color
    :type intensity: float
    :return: tuple containing rgba values
-   :rtype: Rgba
+   :rtype: (int, int, int, int)
    """
    rgba = list(rgba)
    index = {'r': 0,
@@ -838,20 +747,18 @@ def transform_colors_mpl_str(colors, no_alpha=False):
    Transforms rgba color values to a matplotlib string of color values with a range of [0, 1]
    :param colors: tuple of rgba color values ranging from [0, 255]
    :type colors: (float, float, float, float)
-   :param no_alpha: Whether to return an alpha value in the matplotlib color string
+   :param no_alpha: Wether to return a alpha value in the matplotlib color string
    :type no_alpha: bool
    :return: String containing r, g, b values and alpha value if no_alpha is False (default)
    :rtype: str
-
-   >>> transform_colors_mpl_str((255., 255., 255., 255.), True)
-   '(1.0, 1.0, 1.0)'
-   >>> transform_colors_mpl_str((255., 255., 255., 255.))
-   '(1.0, 1.0, 1.0, 1.0)'
    """
+   colors = list(colors)
+   colors_mpl = tuple([color / 255. for color in colors])
    if no_alpha:
-       return '({}, {}, {})'.format(*transform_colors_mpl(colors))
+       colors_mpl = '({}, {}, {})'.format(*colors_mpl)
    else:
-       return '({}, {}, {}, {})'.format(*transform_colors_mpl(colors))
+       colors_mpl = '({}, {}, {}, {})'.format(*colors_mpl)
+   return colors_mpl
def transform_colors_mpl(colors):
@@ -861,16 +768,27 @@ def transform_colors_mpl(colors):
    :type colors: (float, float, float, float)
    :return: tuple of rgba color values ranging from [0, 1]
    :rtype: (float, float, float, float)
-
-   >>> transform_colors_mpl((127.5, 0., 63.75, 255.))
-   (0.5, 0.0, 0.25, 1.0)
-   >>> transform_colors_mpl(())
    """
    colors = list(colors)
    colors_mpl = tuple([color / 255. for color in colors])
    return colors_mpl
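The 255-to-[0, 1] conversion and its string wrapper side by side; in the refactored variant transform_colors_mpl_str simply delegates to transform_colors_mpl:

def transform_colors_mpl(colors):
    return tuple(color / 255. for color in colors)

def transform_colors_mpl_str(colors, no_alpha=False):
    if no_alpha:   # str.format ignores the unused alpha argument
        return '({}, {}, {})'.format(*transform_colors_mpl(colors))
    return '({}, {}, {}, {})'.format(*transform_colors_mpl(colors))

print(transform_colors_mpl_str((255., 0., 63.75, 255.)))
# (1.0, 0.0, 0.25, 1.0)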
+def remove_underscores(data):
+    """
+    takes a `obspy.core.stream.Stream` object and removes all underscores
+    from station names
+    :param data: stream of seismic data
+    :type data: `~obspy.core.stream.Stream`
+    :return: data stream
+    :rtype: `~obspy.core.stream.Stream`
+    """
+    # for tr in data:
+    #     # remove underscores
+    #     tr.stats.station = tr.stats.station.strip('_')
+    return data
def trim_station_components(data, trim_start=True, trim_end=True):
    """
    cut a stream so only the part common to all three traces is kept to avoid dealing with offsets
@@ -899,6 +817,19 @@ def trim_station_components(data, trim_start=True, trim_end=True):
    return data


+def merge_stream(stream):
+    gaps = stream.get_gaps()
+    if gaps:
+        # list of merged stations (seed_ids)
+        merged = ['{}.{}.{}.{}'.format(*gap[:4]) for gap in gaps]
+        stream.merge(method=1)
+        print('Merged the following stations because of gaps:')
+        for merged_station in merged:
+            print(merged_station)
+    return stream, gaps
def check4gapsAndRemove(data):
    """
    check for gaps in Stream and remove them
@@ -919,12 +850,12 @@ def check4gapsAndRemove(data):
    return data


-def check_for_gaps_and_merge(data):
+def check4gapsAndMerge(data):
    """
    check for gaps in Stream and merge if gaps are found
    :param data: stream of seismic data
    :type data: `~obspy.core.stream.Stream`
-   :return: data stream, gaps returned from obspy get_gaps
+   :return: data stream
    :rtype: `~obspy.core.stream.Stream`
    """
    gaps = data.get_gaps()
@@ -935,7 +866,7 @@ def check_for_gaps_and_merge(data):
        for merged_station in merged:
            print(merged_station)
-   return data, gaps
+   return data
def check4doubled(data):
@@ -965,53 +896,13 @@ def check4doubled(data):
    return data


-def check_for_nan(data, nan_value=0.):
-    """
-    Replace all NaNs in data with nan_value (in place)
-    :param data: stream of seismic data
-    :type data: `~obspy.core.stream.Stream`
-    :param nan_value: value which all NaNs are set to
-    :type nan_value: float, int
-    :return: None
-    """
-    if not data:
-        return
-    for trace in data:
-        np.nan_to_num(trace.data, copy=False, nan=nan_value)
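check_for_nan leans on NumPy's in-place nan_to_num; used on an ObsPy stream (a minimal sketch, assuming ObsPy is installed):

import numpy as np
from obspy import Trace, Stream

st = Stream([Trace(data=np.array([1., np.nan, 3.]))])
for trace in st:
    # in-place NaN replacement, as in check_for_nan
    np.nan_to_num(trace.data, copy=False, nan=0.)
print(st[0].data)  # [1. 0. 3.]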
-def get_pylot_eventfile_with_extension(event, fext):
-    if hasattr(event, 'path'):
-        eventpath = event.path
-    else:
-        logging.warning('No attribute path found for event.')
-        return
-    eventname = event.pylot_id  # path.split('/')[-1] # or event.pylot_id
-    filename = os.path.join(eventpath, 'PyLoT_' + eventname + fext)
-    if os.path.isfile(filename):
-        return filename
-
-
-def get_possible_pylot_eventfile_extensions(event, fext):
-    if hasattr(event, 'path'):
-        eventpath = event.path
-    else:
-        logging.warning('No attribute path found for event.')
-        return []
-    eventname = event.pylot_id
-    filename = os.path.join(eventpath, 'PyLoT_' + eventname + fext)
-    filenames = glob.glob(filename)
-    extensions = [os.path.split(path)[-1].split('PyLoT_' + eventname)[-1] for path in filenames]
-    return extensions
def get_stations(data):
    """
-   Get list of all station names in data-stream
+   Get list of all station names in data stream
    :param data: stream containing seismic traces
    :type data: `~obspy.core.stream.Stream`
    :return: list of all station names in data, no duplicates
-   :rtype: List(str)
+   :rtype: list of str
    """
    stations = []
    for tr in data:
@ -1038,88 +929,63 @@ def check4rotated(data, metadata=None, verbosity=1):
:rtype: `~obspy.core.stream.Stream` :rtype: `~obspy.core.stream.Stream`
""" """
def rotation_required(trace_ids): def rotate_components(wfstream, metadata=None):
"""
Derive if any rotation is required from the orientation code of the input.
:param trace_ids: string identifier of waveform data trace
:type trace_ids: List(str)
:return: boolean representing if rotation is necessary for any of the traces
:rtype: bool
"""
orientations = [trace_id[-1] for trace_id in trace_ids]
return any([orientation.isnumeric() for orientation in orientations])
def rotate_components(wfs_in, metadata=None):
""" """
Rotate components if orientation code is numeric (= non traditional orientation). Rotate components if orientation code is numeric (= non traditional orientation).
Azimut and dip are fetched from metadata. To be rotated, traces of a station have to be cut to the same length. Azimut and dip are fetched from metadata. To be rotated, traces of a station have to be cut to the same length.
Returns unrotated traces of no metadata is provided Returns unrotated traces of no metadata is provided
:param wfs_in: stream containing seismic traces of a station :param wfstream: stream containing seismic traces of a station
:type wfs_in: `~obspy.core.stream.Stream` :type wfstream: `~obspy.core.stream.Stream`
    :param metadata: tuple containing metadata type string and metadata parser object
    :type metadata: (str, `~obspy.io.xseed.parser.Parser`)
    :return: stream object with traditionally oriented traces (ZNE)
    :rtype: `~obspy.core.stream.Stream`
    """
    if len(wfs_in) < 3:
        print(f"Stream {wfs_in=}, has not enough components to rotate.")
        return wfs_in

    # check if any traces in this station need to be rotated
    trace_ids = [trace.id for trace in wfs_in]
    if not rotation_required(trace_ids):
        logging.debug(f"Stream does not need any rotation: Traces are {trace_ids=}")
        return wfs_in

    # check metadata quality
    t_start = full_range(wfs_in)[0]
    try:
        azimuths = []
        dips = []
        for tr_id in trace_ids:
            azimuths.append(metadata.get_coordinates(tr_id, t_start)['azimuth'])
            dips.append(metadata.get_coordinates(tr_id, t_start)['dip'])
    except (KeyError, TypeError) as err:
        logging.warning(f"Rotating not possible, not all azimuth and dip information "
                        f"available in metadata. Stream remains unchanged.")
        logging.debug(f"Rotating not possible, {err=}, {type(err)=}")
        return wfs_in
    except Exception as err:
        print(f"Unexpected {err=}, {type(err)=}")
        raise

    # to rotate, all traces must have the same length, so trim them
    wfs_out = trim_station_components(wfs_in, trim_start=True, trim_end=True)
    try:
        z, n, e = rotate2zne(wfs_out[0], azimuths[0], dips[0],
                             wfs_out[1], azimuths[1], dips[1],
                             wfs_out[2], azimuths[2], dips[2])
        print('check4rotated: rotated trace {} to ZNE'.format(trace_ids))
        # replace old data with rotated data, change the channel code to ZNE
        # get z-trace index; z has the minimum dip of -90 (dip is measured from 0 to -90, with -90 being vertical)
        z_index = dips.index(min(dips))
        wfs_out[z_index].data = z
        wfs_out[z_index].stats.channel = wfs_out[z_index].stats.channel[0:-1] + 'Z'
        del trace_ids[z_index]
        for trace_id in trace_ids:
            coordinates = metadata.get_coordinates(trace_id, t_start)
            dip, az = coordinates['dip'], coordinates['azimuth']
            trace = wfs_out.select(id=trace_id)[0]
            if az > 315 or az <= 45 or 135 < az <= 225:
                trace.data = n
                trace.stats.channel = trace.stats.channel[0:-1] + 'N'
            elif 45 < az <= 135 or 225 < az <= 315:
                trace.data = e
                trace.stats.channel = trace.stats.channel[0:-1] + 'E'
    except ValueError as err:
        print(f"{err=} Rotation failed. Stream remains unchanged.")
        return wfs_in
    return wfs_out
    if metadata is None:
        if verbosity:
@ -1133,6 +999,38 @@ def check4rotated(data, metadata=None, verbosity=1):
    return data
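The azimuth ranges above decide which rotated horizontal trace gets the N label and which gets the E label. A minimal, self-contained sketch of that quadrant logic (the function name is hypothetical, for illustration only):

def horizontal_channel_suffix(azimuth):
    """Map a sensor azimuth in degrees [0, 360) to an 'N' or 'E' channel suffix."""
    if azimuth > 315 or azimuth <= 45 or 135 < azimuth <= 225:
        return 'N'  # component points roughly north or south
    return 'E'      # component points roughly east or west

assert horizontal_channel_suffix(10.0) == 'N'
assert horizontal_channel_suffix(100.0) == 'E'
assert horizontal_channel_suffix(200.0) == 'N'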
def scaleWFData(data, factor=None, components='all'):
    """
    produce scaled waveforms from given waveform data and a scaling factor,
    waveforms may be selected by their component names
    :param data: waveform data to be scaled
    :type data: `~obspy.core.stream.Stream` object
    :param factor: scaling factor
    :type factor: float
    :param components: component labels for the traces in data to be scaled by
        the scaling factor (optional, default: 'all')
    :type components: tuple
    :return: scaled waveform data
    :rtype: `~obspy.core.stream.Stream` object
    """
    if components != 'all':
        for comp in components:
            if factor is None:
                max_val = np.max(np.abs(data.select(component=comp)[0].data))
                data.select(component=comp)[0].data /= 2 * max_val
            else:
                data.select(component=comp)[0].data /= 2 * factor
    else:
        for tr in data:
            if factor is None:
                max_val = float(np.max(np.abs(tr.data)))
                tr.data /= 2 * max_val
            else:
                tr.data /= 2 * factor
    return data
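A small usage sketch for scaleWFData on a synthetic two-trace stream (values chosen for illustration; the traces must hold float data for the in-place division to work):

import numpy as np
from obspy import Stream, Trace

st = Stream([Trace(np.array([0.0, 4.0, -2.0]), header={'channel': 'HHZ'}),
             Trace(np.array([8.0, -1.0, 0.0]), header={'channel': 'HHN'})])
scaleWFData(st)  # no factor: every trace is normalized to a peak of +/- 0.5
print(st[0].data.max(), st[1].data.max())  # -> 0.5 0.5
scaleWFData(st, factor=2.0, components=('Z',))  # divide only the Z trace by 2 * 2.0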
def runProgram(cmd, parameter=None):
    """
    run an external program specified by cmd with parameters input returning the
@ -1204,7 +1102,6 @@ def identifyPhase(phase):
    return False
@lru_cache
def identifyPhaseID(phase):
    """
    Returns phase id (capital P or S)
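The new @lru_cache decorator only memoizes identifyPhaseID, so repeated lookups of the same phase name are answered from the cache. A sketch of the effect on a stand-in function (the mapping below is an assumption for illustration, not PyLoT's full logic):

from functools import lru_cache

@lru_cache  # bare decorator form requires Python >= 3.8
def phase_id(phase):
    return 'P' if phase.upper().startswith('P') else 'S'

phase_id('Pdiff')
phase_id('Pdiff')             # second call is a cache hit
print(phase_id.cache_info())  # -> hits=1, misses=1, ...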
@ -1268,7 +1165,7 @@ def correct_iplot(iplot):
    try:
        iplot = int(iplot)
    except ValueError:
        if get_bool(iplot):
            iplot = 2
        else:
            iplot = 0
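Expected behaviour of correct_iplot, assuming get_bool maps the usual truthy strings to True: integer-like values pass through as ints, truthy flags become 2 (interactive plotting), everything else 0. A minimal sketch:

for value, expected in (('1', 1), (3, 3), ('True', 2), ('False', 0)):
    assert correct_iplot(value) == expected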


@ -35,9 +35,9 @@ from __future__ import print_function
__all__ = "get_git_version" __all__ = "get_git_version"
import inspect
# NO IMPORTS FROM PYLOT IN THIS FILE! (file gets used at installation time) # NO IMPORTS FROM PYLOT IN THIS FILE! (file gets used at installation time)
import os import os
import inspect
from subprocess import Popen, PIPE from subprocess import Popen, PIPE
# NO IMPORTS FROM PYLOT IN THIS FILE! (file gets used at installation time) # NO IMPORTS FROM PYLOT IN THIS FILE! (file gets used at installation time)

File diff suppressed because it is too large


@ -1,2 +0,0 @@
# -*- coding: utf-8 -*-
#


@ -1,101 +0,0 @@
############################# correlation parameters #####################################
# min_corr_stacking: minimum correlation coefficient for building beam trace
# min_corr_export: minimum correlation coefficient for pick export
# min_stack: minimum number of stations for building beam trace
# t_before: correlation window before pick
# t_after: correlation window after pick
# cc_maxlag: maximum shift for initial correlation
# cc_maxlag2: maximum shift for second (final) correlation (also for calculating pick uncertainty)
# initial_pick_outlier_threshold: threshold for excluding large outliers of initial (AIC) picks
# export_threshold: automatically exclude all onsets which deviate more than this threshold from corrected taup onsets
# min_picks_export: minimum number of correlated picks for export
# min_picks_autopylot: minimum number of reference auto picks to continue with event
# check_RMS: do RMS check to search for restitution errors (very experimental)
# use_taupy_onsets: use taupy onsets as reference picks instead of external picks
# station_list: use the following stations as reference for stacking
# use_stacked_trace: use existing stacked trace if found (spares re-computation)
# data_dir: obspyDMT data subdirectory (e.g. 'raw', 'processed')
# pickfile_extension: use quakeML files (PyLoT output) with the following extension, e.g. '_autopylot' for pickfiles
#                     such as 'PyLoT_20170501_141822_autopylot.xml'
# dt_stacking: time shift for stacking (e.g. [0, 250] for 0 and 250 seconds shift)
# filter_options: filter for first correlation (rough)
# filter_options_final: filter for second correlation (fine)
# filter_type: e.g. 'bandpass'
# sampfreq: sampling frequency of the data

logging: info
pick_phases: ['P', 'S']

# P-phase
P:
  min_corr_stacking: 0.8
  min_corr_export: 0.6
  min_stack: 20
  t_before: 30.
  t_after: 50.
  cc_maxlag: 50.
  cc_maxlag2: 5.
  initial_pick_outlier_threshold: 30.
  export_threshold: 2.5
  min_picks_export: 100
  min_picks_autopylot: 50
  check_RMS: True
  use_taupy_onsets: False
  station_list: ['HU.MORH', 'HU.TIH', 'OX.FUSE', 'OX.BAD']
  use_stacked_trace: False
  data_dir: 'processed'
  pickfile_extension: '_autopylot'
  dt_stacking: [250, 250]
  # filter for first correlation (rough)
  filter_options:
    freqmax: 0.5
    freqmin: 0.03
  # filter for second correlation (fine)
  filter_options_final:
    freqmax: 0.5
    freqmin: 0.03
  filter_type: bandpass
  sampfreq: 20.0
  # ignore if autopylot fails to pick master-trace (not recommended if absolute onset times matter)
  ignore_autopylot_fail_on_master: True

# S-phase
S:
  min_corr_stacking: 0.7
  min_corr_export: 0.6
  min_stack: 20
  t_before: 60.
  t_after: 60.
  cc_maxlag: 100.
  cc_maxlag2: 25.
  initial_pick_outlier_threshold: 30.
  export_threshold: 5.0
  min_picks_export: 200
  min_picks_autopylot: 50
  check_RMS: True
  use_taupy_onsets: False
  station_list: ['HU.MORH', 'HU.TIH', 'OX.FUSE', 'OX.BAD']
  use_stacked_trace: False
  data_dir: 'processed'
  pickfile_extension: '_autopylot'
  dt_stacking: [250, 250]
  # filter for first correlation (rough)
  filter_options:
    freqmax: 0.1
    freqmin: 0.01
  # filter for second correlation (fine)
  filter_options_final:
    freqmax: 0.2
    freqmin: 0.01
  filter_type: bandpass
  sampfreq: 20.0
  # ignore if autopylot fails to pick master-trace (not recommended if absolute onset times matter)
  ignore_autopylot_fail_on_master: True
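A minimal sketch of reading this parameter file from Python, assuming it is saved as parameters_adriaarray.yaml (the name the submit script below passes via --params); only PyYAML's standard safe_load is used:

import yaml

with open('parameters_adriaarray.yaml') as fid:
    params = yaml.safe_load(fid)

print(params['logging'], params['pick_phases'])         # -> info ['P', 'S']
print(params['P']['min_corr_stacking'])                 # -> 0.8
print(params['S']['filter_options_final']['freqmax'])   # -> 0.2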

File diff suppressed because it is too large


@ -1,41 +0,0 @@
#!/bin/bash
#ulimit -s 8192
#ulimit -v $(ulimit -v | awk '{printf("%d",$1*0.95)}')
#ulimit -v
#655360
source /opt/anaconda3/etc/profile.d/conda.sh
conda activate pylot_311
NSLOTS=20
#qsub -l low -cwd -l "os=*stretch" -pe smp 40 submit_pick_corr_correction.sh
#$ -l low
#$ -l h_vmem=6G
#$ -cwd
#$ -pe smp 20
#$ -N corr_pick
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/pylot_tools/"
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/"
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/pylot/"
#export MKL_NUM_THREADS=${NSLOTS:=1}
#export NUMEXPR_NUM_THREADS=${NSLOTS:=1}
#export OMP_NUM_THREADS=${NSLOTS:=1}
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 0 -istop 100
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 100 -istop 200
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M6.0-6.5' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 0 -istop 100
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 100 -istop 200
#python pick_correlation_correction.py 'H:\sciebo\dmt_database' 'H:\Sciebo\dmt_database\pylot_alparray_mantle_corr_S_0.01-0.2.in' -pd -n 4 -t
#pylot_infile='/home/marcel/.pylot/pylot_alparray_syn_fwi_mk6_it3.in'
pylot_infile='/home/marcel/.pylot/pylot_adriaarray_corr_P_and_S.in'
# THIS SCRIPT SHOULD BE CALLED BY "submit_to_grid_engine.py" using the following line:
# use -pd for detailed plots in eventdir/correlation_XX_XX/figures
python pick_correlation_correction.py $1 $pylot_infile -n ${NSLOTS:=1} -istart $2 --params 'parameters_adriaarray.yaml' # -pd
#--event_blacklist eventlist.txt


@ -1,28 +0,0 @@
#!/usr/bin/env python
import subprocess
fnames = [
    ('/data/AdriaArray_Data/dmt_database_mantle_M5.0-5.4', 0),
    ('/data/AdriaArray_Data/dmt_database_mantle_M5.4-5.7', 0),
    ('/data/AdriaArray_Data/dmt_database_mantle_M5.7-6.0', 0),
    ('/data/AdriaArray_Data/dmt_database_mantle_M6.0-6.3', 0),
    ('/data/AdriaArray_Data/dmt_database_mantle_M6.3-10.0', 0),
    # ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M5.0-5.4', 0),
    # ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M5.4-5.7', 0),
    # ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M5.7-6.0', 0),
    # ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M6.0-10.0', 0),
]

# fnames = [('/data/AlpArray_Data/dmt_database_mantle_0.01-0.2_SKS-phase', 0),
#           ('/data/AlpArray_Data/dmt_database_mantle_0.01-0.2_S-phase', 0),]

####
script_location = '/home/marcel/VersionCtrl/git/pylot/pylot/correlation/submit_pick_corr_correction.sh'
####

for fnin, istart in fnames:
    input_cmds = f'qsub -q low.q@minos15,low.q@minos14,low.q@minos13,low.q@minos12,low.q@minos11 {script_location} {fnin} {istart}'
    print(input_cmds)
    print(subprocess.check_output(input_cmds.split()))


@ -1,61 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import glob
import json

from obspy import read_events

from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT


def get_event_obspy_dmt(eventdir):
    event_pkl_file = os.path.join(eventdir, 'info', 'event.pkl')
    if not os.path.exists(event_pkl_file):
        raise IOError('Could not find event path for event: {}'.format(eventdir))
    event = qml_from_obspyDMT(event_pkl_file)
    return event


def get_event_pylot(eventdir, extension=''):
    event_id = get_event_id(eventdir)
    filename = os.path.join(eventdir, 'PyLoT_{}{}.xml'.format(event_id, extension))
    if not os.path.isfile(filename):
        return
    cat = read_events(filename)
    return cat[0]


def get_event_id(eventdir):
    event_id = os.path.split(eventdir)[-1]
    return event_id


def get_picks(eventdir, extension=''):
    event_id = get_event_id(eventdir)
    filename = 'PyLoT_{}{}.xml'.format(event_id, extension)
    fpath = os.path.join(eventdir, filename)
    fpaths = glob.glob(fpath)
    if len(fpaths) == 1:
        cat = read_events(fpaths[0])
        picks = cat[0].picks
        return picks
    elif len(fpaths) == 0:
        print('get_picks: File not found: {}'.format(fpath))
        return
    print(f'WARNING: Ambiguous pick file specification. Found the following pick files {fpaths}\nFilemask: {fpath}')
    return


def write_json(obj, fname):
    with open(fname, 'w') as outfile:
        json.dump(obj, outfile, sort_keys=True, indent=4)


def get_metadata(eventdir):
    metadata_path = os.path.join(eventdir, 'resp')
    metadata = Metadata(inventory=metadata_path, verbosity=0)
    return metadata
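Hypothetical use of these helpers on one obspyDMT event directory (the path and the '_autopylot' extension are placeholders following the conventions described above):

eventdir = '/data/AdriaArray_Data/dmt_database_mantle_M5.0-5.4/20171010_063224.a'
event = get_event_obspy_dmt(eventdir)
picks = get_picks(eventdir, extension='_autopylot')
if picks:
    print(f'{get_event_id(eventdir)}: {len(picks)} picks')
metadata = get_metadata(eventdir)  # reads station metadata from eventdir/resp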


@ -1,7 +0,0 @@
Cartopy==0.23.0
joblib==1.4.2
obspy==1.4.1
pyaml==24.7.0
pyqtgraph==0.13.7
PySide2==5.15.8
pytest==8.3.2

17
setup.py Normal file

@ -0,0 +1,17 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from distutils.core import setup

setup(
    name='PyLoT',
    version='0.2',
    packages=['pylot', 'pylot.core', 'pylot.core.loc', 'pylot.core.pick',
              'pylot.core.io', 'pylot.core.util', 'pylot.core.active',
              'pylot.core.analysis', 'pylot.testing'],
    requires=['obspy', 'PySide', 'matplotlib', 'numpy', 'scipy', 'pyqtgraph'],
    url='dummy',
    license='LGPLv3',
    author='Sebastian Wehling-Benatelli',
    author_email='sebastian.wehling@rub.de',
    description='Comprehensive Python picking and Location Toolbox for seismological data.'
)

File diff suppressed because it is too large


@ -1,8 +1,6 @@
import unittest

from pylot.core.pick.autopick import PickingResults


class TestPickingResults(unittest.TestCase):
    def setUp(self):
@ -72,9 +70,9 @@ class TestPickingResults(unittest.TestCase):
            curr_len = len(self.pr)
        except Exception:
            self.fail("test_dunder_attributes overwrote an instance internal dunder method")
        self.assertEqual(prev_len + 1, curr_len)  # +1 for the added __len__ key/value-pair

        self.pr.__len__ = 42
        self.assertEqual(42, self.pr['__len__'])
        self.assertEqual(prev_len + 1, curr_len, msg="__len__ was overwritten")


@ -1,6 +1,5 @@
import os
import unittest

from obspy import UTCDateTime
from obspy.io.xseed import Parser
from obspy.io.xseed.utils import SEEDParserException
@ -28,7 +27,7 @@ class TestMetadata(unittest.TestCase):
        result = {}
        for channel in ('Z', 'N', 'E'):
            with HidePrints():
                coords = self.m.get_coordinates(self.station_id + channel, time=self.time)
            result[channel] = coords
            self.assertDictEqual(result[channel], expected[channel])
@ -43,7 +42,7 @@ class TestMetadata(unittest.TestCase):
        result = {}
        for channel in ('Z', 'N', 'E'):
            with HidePrints():
                coords = self.m.get_coordinates(self.station_id + channel)
            result[channel] = coords
            self.assertDictEqual(result[channel], expected[channel])
@ -146,7 +145,7 @@ class TestMetadata_read_single_file(unittest.TestCase):
    def test_read_single_file(self):
        """Test if reading a single file works"""
        fname = os.path.join(self.metadata_folders[0], 'DATALESS.' + self.station_id)
        with HidePrints():
            res = self.m.read_single_file(fname)
        # method should return true if file is successfully read
@ -173,7 +172,7 @@ class TestMetadata_read_single_file(unittest.TestCase):
    def test_read_single_file_multiple_times(self):
        """Test if reading a file twice doesn't add it twice to the metadata object"""
        fname = os.path.join(self.metadata_folders[0], 'DATALESS.' + self.station_id)
        with HidePrints():
            res1 = self.m.read_single_file(fname)
            res2 = self.m.read_single_file(fname)
@ -198,8 +197,7 @@ class TestMetadataMultipleTime(unittest.TestCase):
    def setUp(self):
        self.seed_id = 'LE.ROTT..HN'
        path = os.path.dirname(__file__)  # gets path to currently running script
        metadata = os.path.join('test_data', 'dless_multiple_times',
                                'MAGS2_LE_ROTT.dless')  # specific subfolder of test data
        metadata_path = os.path.join(path, metadata)
        self.m = Metadata(metadata_path)
        self.p = Parser(metadata_path)
@ -301,8 +299,7 @@ Channels:
    def setUp(self):
        self.seed_id = 'KB.TMO07.00.HHZ'
        path = os.path.dirname(__file__)  # gets path to currently running script
        metadata = os.path.join('test_data', 'dless_multiple_instruments',
                                'MAGS2_KB_TMO07.dless')  # specific subfolder of test data
        metadata_path = os.path.join(path, metadata)
        self.m = Metadata(metadata_path)
        self.p = Parser(metadata_path)
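Hypothetical query against the Metadata object these tests exercise; get_coordinates returns the orientation dictionary (with 'azimuth' and 'dip' keys) that check4rotated relies on. The dataless path is the one used in this setUp:

m = Metadata('test_data/dless_multiple_instruments/MAGS2_KB_TMO07.dless')
coords = m.get_coordinates('KB.TMO07.00.HHZ', time=UTCDateTime('2016-01-24'))
print(coords['azimuth'], coords['dip'])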


@ -0,0 +1,35 @@
import unittest

from pylot.core.pick.autopick import PickingParameters


class TestPickingParameters(unittest.TestCase):
    def setUp(self):
        self.simple_dict = {'a': 3, 'b': 14}
        self.nested_dict = {'a': self.simple_dict, 'b': self.simple_dict}

    def assertParameterEquality(self, dic, instance):
        """Test whether all parameters given in dic are found in instance"""
        for key, value in dic.items():
            self.assertEqual(value, getattr(instance, key))

    def test_add_params_from_dict_simple(self):
        pickparam = PickingParameters()
        pickparam.add_params_from_dict(self.simple_dict)
        self.assertParameterEquality(self.simple_dict, pickparam)

    def test_add_params_from_dict_nested(self):
        pickparam = PickingParameters()
        pickparam.add_params_from_dict(self.nested_dict)
        self.assertParameterEquality(self.nested_dict, pickparam)

    def test_init(self):
        pickparam = PickingParameters(self.simple_dict)
        self.assertParameterEquality(self.simple_dict, pickparam)

    def test_dot_access(self):
        pickparam = PickingParameters(self.simple_dict)
        self.assertEqual(pickparam.a, self.simple_dict['a'])


if __name__ == '__main__':
    unittest.main()
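The behaviour pinned down by these tests amounts to dict entries becoming instance attributes. A minimal stand-in (not PyLoT's actual implementation) that would satisfy them:

class PickingParametersSketch:
    def __init__(self, params=None):
        if params is not None:
            self.add_params_from_dict(params)

    def add_params_from_dict(self, params):
        for key, value in params.items():
            setattr(self, key, value)  # each dict entry becomes an attribute

pp = PickingParametersSketch({'a': 3, 'b': 14})
assert pp.a == 3 and pp.b == 14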

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,99 +0,0 @@
%This is a parameter input file for PyLoT/autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
dmt_database_test #datapath# %data path
20171010_063224.a #eventID# %event ID for single event processing (* for all events found in database)
#invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
None #nllocbin# %path to NLLoc executable
None #nllocroot# %root of NLLoc-processing directory
None #phasefile# %name of autoPyLoT-output phase file for NLLoc
None #ctrfile# %name of autoPyLoT-output control file for NLLoc
ttime #ttpatter# %pattern of NLLoc ttimes from grid
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#parameters for seismic moment estimation#
3530.0 #vp# %average P-wave velocity
2500.0 #rho# %average rock density [kg/m^3]
300.0 0.8 #Qp# %quality factor for P waves (Qp*f^a); list(Qp, a)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#settings local magnitude#
1.0 1.0 1.0 #WAscaling# %Scaling relation (log(Ao)+Alog(r)+Br+C) of Wood-Anderson amplitude Ao [nm] If zeros are set, original Richter magnitude is calculated!
1.0 1.0 #magscaling# %Scaling relation for derived local magnitude [a*Ml+b]. If zeros are set, no scaling of network magnitude is applied!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#filter settings#
0.03 0.03 #minfreq# %Lower filter frequency [P, S]
0.5 0.5 #maxfreq# %Upper filter frequency [P, S]
4 4 #filter_order# %filter order [P, S]
bandpass bandpass #filter_type# %filter type (bandpass, bandstop, lowpass, highpass) [P, S]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
global #extent# %extent of array ("local", "regional" or "global")
-100.0 #pstart# %start time [s] for calculating CF for P-picking (if TauPy: seconds relative to estimated onset)
50.0 #pstop# %end time [s] for calculating CF for P-picking (if TauPy: seconds relative to estimated onset)
-50.0 #sstart# %start time [s] relative to P-onset for calculating CF for S-picking
50.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
ak135 #taup_model# %Define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
P,Pdiff,S,SKS #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
0.03 0.5 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
0.01 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.03 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
0.01 0.5 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Edit the following only if you know what you are doing!!%
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
300.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
16.0 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
10.0 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
12.0 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
6.0 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
60.0 5.0 20.0 12.0 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
50.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
30.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
2.0 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
2.0 #tsmoothP# %for HOS/AR, take average of samples in this time window for smoothing CF [s]
0.006 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
2.0 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
12.0 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
6.0 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
8.0 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
4.0 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
100.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
195.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
60.0 10.0 30.0 12.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
22.0 #aictsmoothS# %for AIC-picker, take average of samples in this time window for smoothing of AIC-function [s]
20.0 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.001 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
2.0 #nfacS# %for AR-picker, noise factor for noise level determination (S)
#first-motion picker#
1 #minfmweight# %minimum required P weight for first-motion determination
3.0 #minFMSNR# %minimum required SNR for first-motion determination
10.0 #fmpickwin# %pick window [s] around P onset for calculating zero crossings
#quality assessment#
0.1 0.2 0.4 0.8 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
4.0 8.0 16.0 32.0 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
0.005 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.1 #minAICPSNR# %below this SNR the initial P pick is rejected
0.002 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.3 #minAICSSNR# %below this SNR the initial S pick is rejected
20.0 #minsiglength# %length of signal part for which amplitudes must exceed noiselevel [s]
1.0 #noisefactor# %noiselevel*noisefactor=threshold
10.0 #minpercent# %required percentage of amplitudes exceeding threshold
0.1 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
100.0 #mdttolerance# %maximum allowed deviation of P picks from median [s]
50.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram
25.0 #jackfactor# %pick is removed if the variance of the subgroup with the pick removed is larger than the mean variance of all subgroups times safety factor
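PyLoT reads parameter files in this format through PylotParameter, as the tests below do. A short sketch (the file name is the one used by this test setup):

from pylot.core.io.inputs import PylotParameter

params = PylotParameter(fnin='pylot_alparray_mantle_corr_stack_0.03-0.5.in')
params.setParamKV('pstart', -100.0)  # override a single key/value pair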


@ -1,67 +0,0 @@
import os

import pytest
from obspy import read_events

from autoPyLoT import autoPyLoT


class TestAutopickerGlobal:
    def init(self):
        self.params_infile = 'pylot_alparray_mantle_corr_stack_0.03-0.5.in'
        self.test_event_dir = 'dmt_database_test'
        self.fname_outfile_xml = os.path.join(
            self.test_event_dir, '20171010_063224.a', 'PyLoT_20171010_063224.a_autopylot.xml'
        )

        # check if the input files exist
        if not os.path.isfile(self.params_infile):
            print(f'Test input file {os.path.abspath(self.params_infile)} not found.')
            return False

        if not os.path.exists(self.test_event_dir):
            print(
                f'Test event directory not found at location "{os.path.abspath(self.test_event_dir)}". '
                f'Make sure to load it from the website first.'
            )
            return False

        return True

    def test_autopicker(self):
        assert self.init(), 'Initialization failed due to missing input files.'

        # check for output file in test directory and remove it if necessary
        if os.path.isfile(self.fname_outfile_xml):
            os.remove(self.fname_outfile_xml)

        autoPyLoT(inputfile=self.params_infile, eventid='20171010_063224.a', obspyDMT_wfpath='processed')

        # test for different known output files if they are identical or not
        compare_pickfiles(self.fname_outfile_xml, 'PyLoT_20171010_063224.a_autopylot.xml', True)
        compare_pickfiles(self.fname_outfile_xml, 'PyLoT_20171010_063224.a_saved_from_GUI.xml', True)
        compare_pickfiles(self.fname_outfile_xml, 'PyLoT_20171010_063224.a_corrected_taup_times_0.03-0.5_P.xml', False)


def compare_pickfiles(pickfile1: str, pickfile2: str, samefile: bool = True) -> None:
    """
    Compare the pick times and errors from two pick files.

    Parameters:
        pickfile1 (str): The path to the first pick file.
        pickfile2 (str): The path to the second pick file.
        samefile (bool): A flag indicating whether the two files are expected to be the same. Defaults to True.

    Returns:
        None
    """
    cat1 = read_events(pickfile1)
    cat2 = read_events(pickfile2)
    picks1 = sorted(cat1[0].picks, key=lambda pick: str(pick.waveform_id))
    picks2 = sorted(cat2[0].picks, key=lambda pick: str(pick.waveform_id))
    pick_times1 = [pick.time for pick in picks1]
    pick_times2 = [pick.time for pick in picks2]
    pick_terrs1 = [pick.time_errors for pick in picks1]
    pick_terrs2 = [pick.time_errors for pick in picks2]

    # check if times and errors are identical or not depending on the samefile flag
    assert (pick_times1 == pick_times2) is samefile, 'Pick times error'
    assert (pick_terrs1 == pick_terrs2) is samefile, 'Pick time errors error'
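Hypothetical standalone use of compare_pickfiles to check that a fresh autopicker run reproduces a reference solution (both paths are placeholders):

compare_pickfiles('PyLoT_20171010_063224.a_autopylot.xml',
                  'reference/PyLoT_20171010_063224.a_autopylot.xml',
                  samefile=True)  # raises AssertionError if the picks differ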


@ -4,8 +4,10 @@
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/home/darius/alparray/waveforms_used #datapath# %data path
e0093.173.16 #eventID# %event ID for single event processing (* for all events found in datapath)
/home/darius/alparray/metadata #invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
@ -41,7 +43,6 @@ global #extent# %extent of a
875.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
False #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
IASP91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
P,Pdiff,S,Sdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
0.01 0.1 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.01 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]


@ -4,8 +4,10 @@
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/home/darius/alparray/waveforms_used #datapath# %data path
e0093.173.16 #eventID# %event ID for single event processing (* for all events found in datapath)
/home/darius/alparray/metadata #invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
@ -41,7 +43,6 @@ global #extent# %extent of a
875.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
IASP91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
P,Pdiff,S,Sdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
0.01 0.1 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.01 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]


@ -1,14 +1,12 @@
import os
import sys
import unittest

import pytest
import obspy
from obspy import UTCDateTime

from pylot.core.io.data import Data
from pylot.core.io.inputs import PylotParameter
from pylot.core.pick.autopick import autopickstation
from pylot.core.util.utils import trim_station_components
@ -95,155 +93,82 @@ class TestAutopickStation(unittest.TestCase):
        self.inputfile_taupy_disabled = os.path.join(os.path.dirname(__file__), 'autoPyLoT_global_taupy_false.in')
        self.pickparam_taupy_enabled = PylotParameter(fnin=self.inputfile_taupy_enabled)
        self.pickparam_taupy_disabled = PylotParameter(fnin=self.inputfile_taupy_disabled)
        self.xml_file = os.path.join(os.path.dirname(__file__), self.event_id, 'PyLoT_' + self.event_id + '.xml')
        self.data = Data(evtdata=self.xml_file)
        # create origin for taupy testing
        self.origin = [obspy.core.event.origin.Origin(magnitude=7.1, latitude=59.66, longitude=-153.45, depth=128.0,
                                                      time=UTCDateTime("2016-01-24T10:30:30.0"))]
        # mock metadata, since reading it from file takes a long time
        self.metadata = MockMetadata()

        # show complete diff when differences in results dictionaries are found
        self.maxDiff = None
    def test_autopickstation_taupy_disabled_gra1(self):
        expected = {
            'P': {'picker': 'auto', 'snrdb': 15.405649120980094, 'weight': 0, 'Mo': None, 'marked': [], 'Mw': None,
                  'fc': None, 'snr': 34.718816470730317, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 31, 690000),
                  'w0': None, 'spe': 0.93333333333333235, 'network': u'GR',
                  'epp': UTCDateTime(2016, 1, 24, 10, 41, 28, 890000),
                  'lpp': UTCDateTime(2016, 1, 24, 10, 41, 32, 690000), 'fm': 'D', 'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': 10.669661906545489, 'network': u'GR', 'weight': 0, 'Ao': None,
                  'lpp': UTCDateTime(2016, 1, 24, 10, 50, 30, 690000), 'snr': 11.667187857573905,
                  'epp': UTCDateTime(2016, 1, 24, 10, 50, 21, 690000),
                  'mpp': UTCDateTime(2016, 1, 24, 10, 50, 29, 690000), 'fm': None, 'spe': 2.6666666666666665,
                  'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.gra1, pickparam=self.pickparam_taupy_disabled,
                                              metadata=(None, None))
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('GRA1', station)
    def test_autopickstation_taupy_enabled_gra1(self):
        expected = {
            'P': {'picker': 'auto', 'snrdb': 15.599905299126778, 'weight': 0, 'Mo': None, 'marked': [], 'Mw': None,
                  'fc': None, 'snr': 36.307013769185403, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 27, 690000),
                  'w0': None, 'spe': 0.93333333333333235, 'network': u'GR',
                  'epp': UTCDateTime(2016, 1, 24, 10, 41, 24, 890000),
                  'lpp': UTCDateTime(2016, 1, 24, 10, 41, 28, 690000), 'fm': 'U', 'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': 10.669661906545489, 'network': u'GR', 'weight': 0, 'Ao': None,
                  'lpp': UTCDateTime(2016, 1, 24, 10, 50, 30, 690000), 'snr': 11.667187857573905,
                  'epp': UTCDateTime(2016, 1, 24, 10, 50, 21, 690000),
                  'mpp': UTCDateTime(2016, 1, 24, 10, 50, 29, 690000), 'fm': None, 'spe': 2.6666666666666665,
                  'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.gra1, pickparam=self.pickparam_taupy_enabled,
                                              metadata=self.metadata, origin=self.origin)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('GRA1', station)
    def test_autopickstation_taupy_disabled_gra2(self):
        expected = {
            'P': {'picker': 'auto', 'snrdb': None, 'weight': 9, 'Mo': None, 'marked': 'shortsignallength', 'Mw': None,
                  'fc': None, 'snr': None, 'mpp': UTCDateTime(2016, 1, 24, 10, 36, 59, 150000), 'w0': None, 'spe': None,
                  'network': u'GR', 'epp': UTCDateTime(2016, 1, 24, 10, 36, 43, 150000),
                  'lpp': UTCDateTime(2016, 1, 24, 10, 37, 15, 150000), 'fm': 'N', 'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': None, 'network': u'GR', 'weight': 4, 'Ao': None,
                  'lpp': UTCDateTime(2016, 1, 24, 10, 37, 15, 150000), 'snr': None,
                  'epp': UTCDateTime(2016, 1, 24, 10, 36, 43, 150000),
                  'mpp': UTCDateTime(2016, 1, 24, 10, 36, 59, 150000), 'fm': None, 'spe': None, 'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.gra2, pickparam=self.pickparam_taupy_disabled,
                                              metadata=(None, None))
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('GRA2', station)
    def test_autopickstation_taupy_enabled_gra2(self):
        expected = {
            'P': {'picker': 'auto', 'snrdb': 13.957959025719253, 'weight': 0, 'Mo': None, 'marked': [], 'Mw': None,
                  'fc': None, 'snr': 24.876879503607871, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 29, 150000),
                  'w0': None, 'spe': 1.0, 'network': u'GR', 'epp': UTCDateTime(2016, 1, 24, 10, 41, 26, 150000),
                  'lpp': UTCDateTime(2016, 1, 24, 10, 41, 30, 150000), 'fm': None, 'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': 10.573236990555648, 'network': u'GR', 'weight': 1, 'Ao': None,
                  'lpp': UTCDateTime(2016, 1, 24, 10, 50, 34, 150000), 'snr': 11.410999834108294,
                  'epp': UTCDateTime(2016, 1, 24, 10, 50, 21, 150000),
                  'mpp': UTCDateTime(2016, 1, 24, 10, 50, 33, 150000), 'fm': None, 'spe': 4.666666666666667,
                  'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.gra2, pickparam=self.pickparam_taupy_enabled,
                                              metadata=self.metadata, origin=self.origin)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('GRA2', station)
    def test_autopickstation_taupy_disabled_ech(self):
        expected = {'P': {'picker': 'auto', 'snrdb': None, 'weight': 9, 'Mo': None, 'marked': 'SinsteadP', 'Mw': None,
                          'fc': None, 'snr': None, 'mpp': UTCDateTime(2016, 1, 24, 10, 26, 57), 'w0': None, 'spe': None,
                          'network': u'G', 'epp': UTCDateTime(2016, 1, 24, 10, 26, 41),
                          'lpp': UTCDateTime(2016, 1, 24, 10, 27, 13), 'fm': 'N', 'channel': u'LHZ'},
                    'S': {'picker': 'auto', 'snrdb': None, 'network': u'G', 'weight': 4, 'Ao': None,
                          'lpp': UTCDateTime(2016, 1, 24, 10, 27, 13), 'snr': None,
                          'epp': UTCDateTime(2016, 1, 24, 10, 26, 41), 'mpp': UTCDateTime(2016, 1, 24, 10, 26, 57),
                          'fm': None, 'spe': None, 'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.ech, pickparam=self.pickparam_taupy_disabled)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('ECH', station)
    def test_autopickstation_taupy_enabled_ech(self):
        # this station has a long quiet period before the first onset, so taupy will help during picking
        expected = {
            'P': {'picker': 'auto', 'snrdb': 9.9753586609166316, 'weight': 0, 'Mo': None, 'marked': [], 'Mw': None,
                  'fc': None, 'snr': 9.9434218804137107, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 34), 'w0': None,
                  'spe': 1.6666666666666667, 'network': u'G', 'epp': UTCDateTime(2016, 1, 24, 10, 41, 29),
                  'lpp': UTCDateTime(2016, 1, 24, 10, 41, 35), 'fm': None, 'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': 12.698999454169567, 'network': u'G', 'weight': 0, 'Ao': None,
                  'lpp': UTCDateTime(2016, 1, 24, 10, 50, 44), 'snr': 18.616581906366577,
                  'epp': UTCDateTime(2016, 1, 24, 10, 50, 33), 'mpp': UTCDateTime(2016, 1, 24, 10, 50, 43), 'fm': None,
                  'spe': 3.3333333333333335, 'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.ech, pickparam=self.pickparam_taupy_enabled,
                                              metadata=self.metadata, origin=self.origin)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('ECH', station)
    def test_autopickstation_taupy_disabled_fiesa(self):
        # this station has a long quiet period before the first onset, so taupy will help during picking
        expected = {'P': {'picker': 'auto', 'snrdb': None, 'weight': 9, 'Mo': None, 'marked': 'SinsteadP', 'Mw': None,
                          'fc': None, 'snr': None, 'mpp': UTCDateTime(2016, 1, 24, 10, 35, 58), 'w0': None, 'spe': None,
                          'network': u'CH', 'epp': UTCDateTime(2016, 1, 24, 10, 35, 42),
                          'lpp': UTCDateTime(2016, 1, 24, 10, 36, 14), 'fm': 'N', 'channel': u'LHZ'},
                    'S': {'picker': 'auto', 'snrdb': None, 'network': u'CH', 'weight': 4, 'Ao': None,
                          'lpp': UTCDateTime(2016, 1, 24, 10, 36, 14), 'snr': None,
                          'epp': UTCDateTime(2016, 1, 24, 10, 35, 42), 'mpp': UTCDateTime(2016, 1, 24, 10, 35, 58),
                          'fm': None, 'spe': None, 'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.fiesa, pickparam=self.pickparam_taupy_disabled)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('FIESA', station)
    def test_autopickstation_taupy_enabled_fiesa(self):
        # this station has a long quiet period before the first onset, so taupy will help during picking
        expected = {
            'P': {'picker': 'auto', 'snrdb': 13.921049277904373, 'weight': 0, 'Mo': None, 'marked': [], 'Mw': None,
                  'fc': None, 'snr': 24.666352170589487, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 47), 'w0': None,
                  'spe': 1.2222222222222285, 'network': u'CH', 'epp': UTCDateTime(2016, 1, 24, 10, 41, 43, 333333),
                  'lpp': UTCDateTime(2016, 1, 24, 10, 41, 48), 'fm': None, 'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': 10.893086316477728, 'network': u'CH', 'weight': 0, 'Ao': None,
                  'lpp': UTCDateTime(2016, 1, 24, 10, 51, 5), 'snr': 12.283118216397849,
                  'epp': UTCDateTime(2016, 1, 24, 10, 50, 59, 333333), 'mpp': UTCDateTime(2016, 1, 24, 10, 51, 2),
                  'fm': None, 'spe': 2.8888888888888764, 'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.fiesa, pickparam=self.pickparam_taupy_enabled,
                                              metadata=self.metadata, origin=self.origin)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
        self.assertEqual('FIESA', station)
    def test_autopickstation_gra1_z_comp_missing(self):
@ -251,8 +176,7 @@ class TestAutopickStation(unittest.TestCase):
        wfstream = self.gra1.copy()
        wfstream = wfstream.select(channel='*E') + wfstream.select(channel='*N')
        with HidePrints():
            result, station = autopickstation(wfstream=wfstream, pickparam=self.pickparam_taupy_disabled,
                                              metadata=(None, None))
        self.assertIsNone(result)
        self.assertEqual('GRA1', station)
@ -260,91 +184,28 @@ class TestAutopickStation(unittest.TestCase):
"""Picking on a stream without horizontal traces should still pick the P phase on the vertical component""" """Picking on a stream without horizontal traces should still pick the P phase on the vertical component"""
wfstream = self.gra1.copy() wfstream = self.gra1.copy()
wfstream = wfstream.select(channel='*Z') wfstream = wfstream.select(channel='*Z')
expected = { expected = {'P': {'picker': 'auto', 'snrdb': 15.405649120980094, 'network': u'GR', 'weight': 0, 'Ao': None, 'Mo': None, 'marked': [], 'lpp': UTCDateTime(2016, 1, 24, 10, 41, 32, 690000), 'Mw': None, 'fc': None, 'snr': 34.718816470730317, 'epp': UTCDateTime(2016, 1, 24, 10, 41, 28, 890000), 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 31, 690000), 'w0': None, 'spe': 0.9333333333333323, 'fm': 'D', 'channel': u'LHZ'}, 'S': {'picker': 'auto', 'snrdb': None, 'network': None, 'weight': 4, 'Mo': None, 'Ao': None, 'lpp': None, 'Mw': None, 'fc': None, 'snr': None, 'marked': [], 'mpp': None, 'w0': None, 'spe': None, 'epp': None, 'fm': 'N', 'channel': None}}
'P': {'picker': 'auto', 'snrdb': 15.405649120980094, 'network': u'GR', 'weight': 0, 'Ao': None, 'Mo': None,
'marked': [], 'lpp': UTCDateTime(2016, 1, 24, 10, 41, 32, 690000), 'Mw': None, 'fc': None,
'snr': 34.718816470730317, 'epp': UTCDateTime(2016, 1, 24, 10, 41, 28, 890000),
'mpp': UTCDateTime(2016, 1, 24, 10, 41, 31, 690000), 'w0': None, 'spe': 0.9333333333333323, 'fm': 'D',
'channel': u'LHZ'},
'S': {'picker': 'auto', 'snrdb': None, 'network': None, 'weight': 4, 'Mo': None, 'Ao': None, 'lpp': None,
'Mw': None, 'fc': None, 'snr': None, 'marked': [], 'mpp': None, 'w0': None, 'spe': None, 'epp': None,
'fm': 'N', 'channel': None}}
with HidePrints(): with HidePrints():
result, station = autopickstation(wfstream=wfstream, pickparam=self.pickparam_taupy_disabled, result, station = autopickstation(wfstream=wfstream, pickparam=self.pickparam_taupy_disabled, metadata=(None, None))
metadata=(None, None)) self.assertEqual(expected, result)
compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
self.assertEqual('GRA1', station) self.assertEqual('GRA1', station)
    def test_autopickstation_a106_taupy_enabled(self):
        """This station has invalid values recorded on both N and E component, but a pick can still be found on Z"""
        expected = {
            'P': {'picker': 'auto', 'snrdb': 12.862128789922826, 'network': u'Z3', 'weight': 0, 'Ao': None, 'Mo': None,
                  'marked': [], 'lpp': UTCDateTime(2016, 1, 24, 10, 41, 34), 'Mw': None, 'fc': None,
                  'snr': 19.329155459132608, 'epp': UTCDateTime(2016, 1, 24, 10, 41, 30),
                  'mpp': UTCDateTime(2016, 1, 24, 10, 41, 33), 'w0': None, 'spe': 1.6666666666666667, 'fm': None,
                  'channel': u'LHZ'},
            'S': {'picker': 'auto', 'snrdb': None, 'network': u'Z3', 'weight': 4, 'Ao': None, 'Mo': None, 'marked': [],
                  'lpp': UTCDateTime(2016, 1, 24, 10, 28, 56), 'Mw': None, 'fc': None, 'snr': None,
                  'epp': UTCDateTime(2016, 1, 24, 10, 28, 24), 'mpp': UTCDateTime(2016, 1, 24, 10, 28, 40), 'w0': None,
                  'spe': None, 'fm': None, 'channel': u'LHE'}}
        with HidePrints():
            result, station = autopickstation(wfstream=self.a106, pickparam=self.pickparam_taupy_enabled,
                                              metadata=self.metadata, origin=self.origin)
        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
def test_autopickstation_station_missing_in_metadata(self):
    """This station is not in the metadata, but Taupy is enabled. Taupy should exit cleanly and convert the
    starttime from one relative to the theoretical onset into one relative to the trace's starttime, i.e. it
    must never become negative (see the sketch below this test).
    """
    self.pickparam_taupy_enabled.setParamKV('pstart', -100)  # modify starttime to be relative to theoretical onset
    expected = {
        'P': {'picker': 'auto', 'snrdb': 14.464757855513506, 'network': u'Z3', 'weight': 0, 'Mo': None, 'Ao': None,
              'lpp': UTCDateTime(2016, 1, 24, 10, 41, 39, 605000), 'Mw': None, 'fc': None,
              'snr': 27.956048519707181, 'marked': [], 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 38, 605000),
              'w0': None, 'spe': 1.6666666666666667, 'epp': UTCDateTime(2016, 1, 24, 10, 41, 35, 605000),
              'fm': None, 'channel': u'LHZ'},
        'S': {'picker': 'auto', 'snrdb': 10.112844176301248, 'network': u'Z3', 'weight': 1, 'Mo': None, 'Ao': None,
              'lpp': UTCDateTime(2016, 1, 24, 10, 50, 51, 605000), 'Mw': None, 'fc': None,
              'snr': 10.263238413785425, 'marked': [], 'mpp': UTCDateTime(2016, 1, 24, 10, 50, 48, 605000),
              'w0': None, 'spe': 4.666666666666667, 'epp': UTCDateTime(2016, 1, 24, 10, 50, 40, 605000), 'fm': None,
              'channel': u'LHE'}}
    with HidePrints():
        result, station = autopickstation(wfstream=self.a005a, pickparam=self.pickparam_taupy_enabled,
                                          metadata=self.metadata, origin=self.origin)
    compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
    compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
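The starttime handling this test exercises can be sketched as follows. This is a minimal sketch only: the
helper name and the exact arithmetic are assumptions for illustration, not the code inside autopickstation.

def trace_relative_pstart(pstart, theoretical_onset=None, trace_starttime=None):
    # Hypothetical helper. pstart is configured relative to the theoretical
    # onset (here -100 s); if an onset is available, map pstart onto a cut time
    # relative to the trace starttime, and in all cases clamp it so that it
    # can never become negative.
    if theoretical_onset is not None and trace_starttime is not None:
        return max(0.0, (theoretical_onset - trace_starttime) + pstart)
    return max(0.0, pstart)

trace_relative_pstart(-100)  # -> 0.0 when no theoretical onset can be computed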
def run_dict_comparison(result, expected):
    for key, expected_value in expected.items():
        if isinstance(expected_value, dict):
            run_dict_comparison(result[key], expected[key])
        else:
            res = result[key]
            if isinstance(res, UTCDateTime) and isinstance(expected_value, UTCDateTime):
                res = res.timestamp
                expected_value = expected_value.timestamp
            assert expected_value == pytest.approx(res), f'{key}: {expected_value} != {res}'


def compare_dicts(result, expected, hint=''):
    try:
        run_dict_comparison(result, expected)
    except AssertionError:
        raise AssertionError(f'{hint}Dictionaries not equal.'
                             f'\n\n<<Expected>>\n{pretty_print_dict(expected)}'
                             f'\n\n<<Result>>\n{pretty_print_dict(result)}')


def pretty_print_dict(dct):
    retstr = ''
    for key, value in sorted(dct.items(), key=lambda x: x[0]):
        retstr += f"{key} : {value}\n"
    return retstr
if __name__ == '__main__':
    unittest.main()
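A minimal usage sketch of the comparison helpers above, with invented pick values purely for illustration
(it assumes the pytest and UTCDateTime imports and the helper definitions of this module):

# compare_dicts recurses into nested dicts, converts UTCDateTime pairs to
# timestamps and compares values with pytest.approx, so tiny float rounding
# differences do not fail a test.
expected = {'P': {'snr': 34.7188164707, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 31, 690000)}}
result = {'P': {'snr': 34.718816470730317, 'mpp': UTCDateTime(2016, 1, 24, 10, 41, 31, 690000)}}
compare_dicts(result=result, expected=expected, hint='P-')  # passes despite the truncated float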

View File

@ -1,33 +0,0 @@
import unittest

from pylot.core.io.phases import getQualitiesfromxml


class TestQualityFromXML(unittest.TestCase):
    def setUp(self):
        self.path = '.'
        self.ErrorsP = [0.02, 0.04, 0.08, 0.16]
        self.ErrorsS = [0.04, 0.08, 0.16, 0.32]
        self.test0_result = [[0.0136956521739, 0.0126, 0.0101612903226, 0.00734848484849, 0.0135069444444,
                              0.00649659863946, 0.0129513888889, 0.0122747747748, 0.0119252873563, 0.0103947368421,
                              0.0092380952381, 0.00916666666667, 0.0104444444444, 0.0125333333333, 0.00904761904762,
                              0.00885714285714, 0.00911616161616, 0.0164166666667, 0.0128787878788, 0.0122756410256,
                              0.013653253667966917], [0.0239333333333, 0.0223791578953, 0.0217974304255],
                             [0.0504861111111, 0.0610833333333], [], [0.171029411765]], \
                            [[0.0195, 0.0203623188406, 0.0212121212121, 0.0345833333333, 0.0196180555556,
                              0.0202536231884, 0.0200347222222, 0.0189, 0.0210763888889, 0.018275862069,
                              0.0213888888889, 0.0319791666667, 0.0205303030303, 0.0156388888889, 0.0192,
                              0.0231349206349, 0.023625, 0.02875, 0.0195512820513, 0.0239393939394, 0.0234166666667,
                              0.0174702380952, 0.0204151307995], [0.040314343081226646], [0.148555555556], [], []]
        self.test1_result = [77.77777777777777, 11.11111111111111, 7.407407407407407, 0, 3.7037037037037037], \
                            [92.0, 4.0, 4.0, 0, 0]

    def test_result_plotflag0(self):
        self.assertEqual(getQualitiesfromxml(self.path, self.ErrorsP, self.ErrorsS, 0), self.test0_result)

    def test_result_plotflag1(self):
        self.assertEqual(getQualitiesfromxml(self.path, self.ErrorsP, self.ErrorsS, 1), self.test1_result)


if __name__ == '__main__':
    unittest.main()
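Read from this (deleted) test alone, getQualitiesfromxml returns per-quality-class lists of pick
uncertainties when called with plotflag=0, and per-class percentage distributions for the P and S picks when
called with plotflag=1. A hedged sketch of how such a distribution could be computed; the helper name and
counting logic are assumptions for illustration, not PyLoT's actual implementation:

def quality_percentages(qualities, n_classes=5):
    # qualities: one integer quality class per pick (0 = best, 4 = rejected)
    counts = [qualities.count(c) for c in range(n_classes)]
    return [100.0 * count / len(qualities) for count in counts]

# 25 picks split 23/1/1/0/0 over the classes yield [92.0, 4.0, 4.0, 0.0, 0.0],
# matching the shape of the second element of test1_result above.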

View File

@ -1,5 +1,4 @@
import unittest

from pylot.core.pick.utils import get_quality_class
@ -53,6 +52,5 @@ class TestQualityClassFromUncertainty(unittest.TestCase):
        # Error exactly in class 3
        self.assertEqual(3, get_quality_class(5.6, self.error_classes))


if __name__ == '__main__':
    unittest.main()
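From the assertion above, get_quality_class maps a pick uncertainty onto one of five quality classes given
ascending per-class error bounds, in the same spirit as the ErrorsP/ErrorsS lists used elsewhere in these
tests. A minimal sketch under that assumption (not the actual PyLoT implementation):

def get_quality_class_sketch(uncertainty, error_classes):
    # error_classes: ascending upper bounds, e.g. [0.02, 0.04, 0.08, 0.16];
    # an uncertainty exactly on a bound falls into that class ("error exactly
    # in class 3" above), anything above the last bound gets the worst class.
    for quality, bound in enumerate(error_classes):
        if uncertainty <= bound:
            return quality
    return len(error_classes)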

View File

@ -1,76 +0,0 @@
import pytest
from obspy import read, Trace, UTCDateTime

from pylot.correlation.pick_correlation_correction import XCorrPickCorrection


class TestXCorrPickCorrection():
    def setup(self):
        self.make_test_traces()
        self.make_test_picks()
        self.t_before = 2.
        self.t_after = 2.
        self.cc_maxlag = 0.5

    def make_test_traces(self):
        # take first trace of test Stream from obspy
        tr1 = read()[0]
        # filter trace
        tr1.filter('bandpass', freqmin=1, freqmax=20)
        # make a copy and shift the copy by 0.1 s
        tr2 = tr1.copy()
        tr2.stats.starttime += 0.1

        self.trace1 = tr1
        self.trace2 = tr2

    def make_test_picks(self):
        # create an artificial reference pick on reference trace (trace1) and another one on the 0.1 s shifted trace
        self.tpick1 = UTCDateTime('2009-08-24T00:20:07.7')
        # shift the second pick by 0.2 s, the correction should be around 0.1 s now
        self.tpick2 = self.tpick1 + 0.2

    def test_slice_trace_okay(self):
        self.setup()
        xcpc = XCorrPickCorrection(UTCDateTime(), Trace(), UTCDateTime(), Trace(),
                                   t_before=self.t_before, t_after=self.t_after, cc_maxlag=self.cc_maxlag)

        test_trace = self.trace1
        pick_time = self.tpick2

        sliced_trace = xcpc.slice_trace(test_trace, pick_time)
        assert ((sliced_trace.stats.starttime == pick_time - self.t_before - self.cc_maxlag / 2)
                and (sliced_trace.stats.endtime == pick_time + self.t_after + self.cc_maxlag / 2))

    def test_slice_trace_fails(self):
        self.setup()

        test_trace = self.trace1
        pick_time = self.tpick1

        with pytest.raises(ValueError):
            xcpc = XCorrPickCorrection(UTCDateTime(), Trace(), UTCDateTime(), Trace(),
                                       t_before=self.t_before + 20, t_after=self.t_after, cc_maxlag=self.cc_maxlag)
            xcpc.slice_trace(test_trace, pick_time)

        with pytest.raises(ValueError):
            xcpc = XCorrPickCorrection(UTCDateTime(), Trace(), UTCDateTime(), Trace(),
                                       t_before=self.t_before, t_after=self.t_after + 50, cc_maxlag=self.cc_maxlag)
            xcpc.slice_trace(test_trace, pick_time)

    def test_cross_correlation(self):
        self.setup()

        # create XCorrPickCorrection object
        xcpc = XCorrPickCorrection(self.tpick1, self.trace1, self.tpick2, self.trace2, t_before=self.t_before,
                                   t_after=self.t_after, cc_maxlag=self.cc_maxlag)

        # execute correlation
        correction, cc_max, uncert, fwfm = xcpc.cross_correlation(False, '', '')

        # define awaited test result
        test_result = (-0.09983091718314982, 0.9578431835689154, 0.0015285160561610929, 0.03625786256084631)

        # check results
        assert pytest.approx(test_result, rel=1e-6) == (correction, cc_max, uncert, fwfm)
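To make the intent of test_cross_correlation explicit: the second trace is shifted by 0.1 s but its pick is
placed 0.2 s after the reference pick, so the returned correction of roughly -0.0998 s moves the mispicked
onset back onto the waveform. A minimal usage sketch built only from calls visible in this test (applying
the correction by simple addition is an assumption about caller usage, not documented API):

from obspy import read, UTCDateTime

from pylot.correlation.pick_correlation_correction import XCorrPickCorrection

# Reproduce the test setup: a filtered trace, a copy shifted by 0.1 s, and a
# second pick that overshoots the true shift by another 0.1 s.
tr1 = read()[0]
tr1.filter('bandpass', freqmin=1, freqmax=20)
tr2 = tr1.copy()
tr2.stats.starttime += 0.1
tpick1 = UTCDateTime('2009-08-24T00:20:07.7')
tpick2 = tpick1 + 0.2

xcpc = XCorrPickCorrection(tpick1, tr1, tpick2, tr2, t_before=2., t_after=2., cc_maxlag=0.5)
correction, cc_max, uncert, fwfm = xcpc.cross_correlation(False, '', '')
corrected_pick = tpick2 + correction  # ~ tpick1 + 0.1 s, i.e. the true trace shift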

View File

@ -33,7 +33,6 @@ class HidePrints:
        def silencer(*args, **kwargs):
            with HidePrints():
                func(*args, **kwargs)

        return silencer

    def __init__(self, hide_prints=True):