Compare commits

..

97 Commits

Author SHA1 Message Date
4bd2e78259 [bugfix] explicitly pass parameters to "picksdict_from_picks" to calculate pick weights. Otherwise no weights could be calculated. Closes #40 2024-11-20 17:16:15 +01:00
468a7721c8 [bugfix] changed default behavior of PylotParameter class to use default Parameter if called without input parameters. Related to #40 2024-11-20 17:01:53 +01:00
555fb8a719 [minor] small code fixes 2024-11-20 16:57:27 +01:00
5a2a1fe990 [bugfix] flawed logic after parameter renaming corrected 2024-11-20 11:14:27 +01:00
64b719fd54 [minor] increased robustness of correlation algorithm for unknown exceptions... 2024-11-20 11:13:15 +01:00
71d4269a4f [bugfix] reverting code from commit 3069e7d5. Checking for coordinates in dataless Parser IS necessary to make sure correct Metadata were found. Fixes #37.
[minor] Commented out search for network name in metadata filename considered being unsafe
2024-10-09 17:07:22 +02:00
81e34875b9 [update] small changes increasing code robustness 2024-10-09 16:59:12 +02:00
d7ee820de3 [minor] adding missing image to doc 2024-09-30 16:40:41 +02:00
621cbbfbda [minor] modify README 2024-09-18 16:59:34 +02:00
050b9fb0c4 Merge remote-tracking branch 'origin/develop' into develop 2024-09-18 16:57:42 +02:00
eb3cd713c6 [update] add description for pick correlation algorithm 2024-09-18 16:56:54 +02:00
18c37dfdd0 [bugfix] take care of more unescaped backslashes in Metadata 2024-09-16 16:27:36 +02:00
9333ebf7f3 [update] deactivate Spectrogram tab features in main branch 2024-09-12 16:58:27 +02:00
8c46b1ed18 [update] README.md 2024-09-12 16:54:39 +02:00
c743813446 Merge branch 'refs/heads/develop'
# Conflicts:
#	PyLoT.py
#	README.md
#	pylot/core/util/widgets.py
2024-09-12 16:32:15 +02:00
41c9183be3 Merge branch 'refs/heads/correlation_picker' into develop 2024-09-12 16:24:50 +02:00
ae6c4966a9 [bugfix] compare options always activated using obspy_dmt independent of data availability 2024-09-12 12:23:18 +02:00
e8a516d16b [update] trying to increase plot performance for large datasets, can need overhaul of drawPicks method in the future (too much recursion) 2024-09-12 12:19:44 +02:00
f78315dec4 [update] new test files for test_autopicker after changes in autopicker 2024-09-11 11:02:32 +02:00
28f75cedcb Merge branch 'refs/heads/develop' into correlation_picker 2024-09-11 10:31:50 +02:00
e02b62696d [bugfix] no actual UTCDateTime object was used to check metadata availability for check4rotated 2024-09-10 16:59:02 +02:00
e4217f0e30 [critical] fixing a major bug in checksignallength, testing needed 2024-09-10 16:58:12 +02:00
8f154e70d7 [minor] plot coloring 2024-09-10 16:57:18 +02:00
6542b6cc4f [minor] slightly improved test output 2024-09-10 16:57:00 +02:00
5ab6c494c5 [update] increased code readability and improved figures created in autopick.py and picker.py 2024-09-10 16:16:46 +02:00
3da47c6f6b [revert] changed slope calculation in AICPicker back to older state (probably causing problems changing results in test_autopickstation.py) 2024-09-09 16:56:38 +02:00
cc7716a2b7 [minor] improved unittest result 2024-09-09 16:05:02 +02:00
03947d2363 [update] removed bad STA/LTA implementation from CF class 2024-09-09 14:42:54 +02:00
e1b0d48527 [refactor] removed unused parameter "data" from calcCF methods 2024-09-09 14:20:41 +02:00
431dbe8924 [testing] improved dictionary comparison. Failed tests have completely different picks (not only snrdb) 2024-08-30 15:07:31 +02:00
63810730e5 [bugfix] added missing parameter "taup_phases" introduced a long time ago into default parameters and parameters for unit tests 2024-08-30 14:51:30 +02:00
f2159c47f9 [testing] brought test_autopickstation up-to-date using, removing deprecated methods and using pytest.approx
Certain tests fail on snrdb calculation which has to be examined (WIP)
2024-08-30 12:41:16 +02:00
d0fbb91ffe [update] added test for AutoPyLoT, added test files for correlation picker as well 2024-08-29 16:46:30 +02:00
424d42aa1c Merge branch 'refs/heads/develop' into correlation_picker
# Conflicts:
#	pylot/core/pick/charfuns.py
#	tests/test_autopicker/pylot_alparray_mantle_corr_stack_0.03-0.5.in
#	tests/test_autopicker/test_autopylot.py
2024-08-29 16:37:15 +02:00
2cea10088d [update] added test for AutoPyLoT, added test files for correlation picker as well 2024-08-29 16:35:04 +02:00
5971508cab [bugfix] Metadata object did not find inventory for relative directory paths/unescaped backslashes 2024-08-29 16:34:37 +02:00
c765e7c66b [bugfix] fixed import for tukey in newer scipy versions which moved to signal.windows module 2024-08-29 16:33:39 +02:00
466f19eb2e [bugfix] fixed import for tukey in newer scipy versions 2024-08-28 18:01:03 +02:00
e6a4ba7ee2 [update] remove mean from picks (WIP for residual plotting) pt2 2024-08-28 10:37:31 +02:00
5d90904838 Merge branch 'develop' into correlation_picker 2024-08-27 17:46:21 +02:00
7a13288c85 [major] getting rid of unused/unnecessary "rootpath" and "database" structure. Testing required. 2024-08-27 17:45:15 +02:00
3f97097bf6 [update] add mailmap for better readability of git commit history 2024-08-27 16:19:19 +02:00
29107ee40c [update] WIP: adding tests for autopylot (global) 2024-08-26 17:18:41 +02:00
fa310461d0 [update] added possibility to remove the mean from picks (WIP for residual plotting) 2024-08-15 17:15:10 +02:00
42a7d12292 [bugfix] reduce maximum number of stations listed in array map status 2024-08-15 16:30:05 +02:00
2e49813292 [update] some general bugfixes and improvements in array map 2024-08-14 17:04:17 +02:00
5d6f4619cc [update] add selection for merge strategy for loading of single event files 2024-08-12 16:03:29 +02:00
db11e125c0 [todos] add todos 2024-08-09 16:53:21 +02:00
b59232d77b [bugfix] function name accidentally overwritten on parameter renaming 2024-08-09 16:52:57 +02:00
176e93d833 [refactor] finished annotations (type hints) 2024-08-09 16:52:32 +02:00
759e7bb848 [bugfix] partially reverted signature of an inner function with shadowed variable name
[refactor] minor
2024-08-09 16:24:40 +02:00
61c3f40063 Merge branch 'develop' into correlation_picker 2024-08-09 15:50:46 +02:00
213819c702 [update] simplify dependencies (remove sub-dependencies), update installation instructions in README.md 2024-08-09 15:50:02 +02:00
67f34cc871 Merge branch 'develop' into correlation_picker
# Conflicts:
#	pylot.yml
#	requirements.txt
2024-08-09 15:05:30 +02:00
f4f48a930f [refactor] moved unittest to existing test folder 2024-08-09 15:03:55 +02:00
b41e2b2de6 [update] new requirements.txt and pylot.yml for python 3.11 2024-08-09 15:02:31 +02:00
a068bb8457 [update] refactoring, added type hints 2024-08-08 16:49:15 +02:00
452f2a2e18 [bugfix] test raised different Exception than planned 2024-08-08 14:41:16 +02:00
c3a2ef5022 [minor] changed test to be approximately equal to test result on different machine 2024-08-08 11:28:10 +02:00
8e7bd87711 [new] added some unit tests for correlation picker (WIP) 2024-08-07 17:11:27 +02:00
d5817adc46 [merge] changes to correlation picker from different machines that were not committed 2024-08-07 10:17:35 +02:00
14f01ec46d Merge branch 'correlation_picker' of git.geophysik.ruhr-uni-bochum.de:marcel/pylot into correlation_picker 2024-08-07 10:08:57 +02:00
1b074d14ff [update] WIP: Adding type hints, docstrings etc. 2024-08-06 16:03:50 +02:00
ce71c549ca [bugfix] removed parameter that was re-introduced accidentally from manual merge 2024-08-06 16:03:16 +02:00
c4220b389e Merge branch 'correlation_picker' of git.geophysik.ruhr-uni-bochum.de:marcel/pylot into correlation_picker 2024-07-25 15:36:06 +02:00
0f29d0e20d [minor] small modifications (naming conventions) 2024-07-25 14:50:40 +02:00
e1e0913e3a Merge remote-tracking branch 'origin/develop' into develop 2024-07-25 10:25:59 +02:00
cdcd226c87 [initial] adding files from correlation picker 2024-07-24 14:07:13 +02:00
5f53cc5365 [bugfix] renamed method inside array_map.py 2024-07-23 16:31:52 +02:00
6cce05b035 Merge branch 'feature/dae' into develop
# Conflicts:
#	pylot/core/io/data.py
#	pylot/core/util/widgets.py
2024-06-12 16:19:21 +02:00
7326f061e5 [minor] inform if station coordinates were not found in metadata 2024-06-12 16:11:53 +02:00
1a18401fe3 [bugfix] added missing Parameter object in call for picksdict_from_picks 2024-06-12 13:44:02 +02:00
ec930dbc12 [minor] removed unneeded imports 2024-06-07 15:56:05 +02:00
b991f771af [bugfix] removing redundancy and wrong bullsh.. try-except code 2024-06-07 15:04:16 +02:00
2c3b1876ab [minor] switch default cmap for array_map to 'viridis' 2024-06-07 14:34:27 +02:00
0acd23d4d0 [update] further improved Pickfile selection dialog, now providing methods "overwrite" or "merge" 2024-06-07 14:32:57 +02:00
f349c8bc7e [update] improve pickfile selection, give the ability to select only specific files 2024-06-07 13:09:34 +02:00
6688ef845d [bugfix] re-implement ability of get_bool to return unidentifiable input 2024-06-07 13:08:51 +02:00
5b18e9ab71 [merge] merge branch 'improve-util-utils' of pull request #35 into develop 2024-06-07 10:29:39 +02:00
31ca0d7a85 Merge remote-tracking branch 'origin/develop' into develop
# Conflicts:
#	pylot/core/util/widgets.py
2024-04-09 16:12:03 +02:00
c7f9ad4c6f [update] changed sorting of traces overview if all station names are numeric (e.g. active experiments) 2024-04-09 15:53:19 +02:00
65dbaad446 [update] adding possibility to display other waveform data (e.g. denoised/synthetic) together with genuine data for comparison 2024-03-22 17:12:04 +01:00
5b97d51517 [minor] mpl.figure.canvas.draw -> draw_idle 2024-03-22 17:12:04 +01:00
b3fdbc811e Merge branch 'develop' into improve-util-utils 2023-06-23 09:37:54 +02:00
9fce4998d3 bugfix: remove unused functions; correct for wrong formatting (PEP) 2023-04-23 22:05:11 +02:00
c468bfbe84 feat: add type hints and tests for plot utils 2023-04-23 21:37:20 +02:00
4861d33e9a bugfix: add tests to key_for_set_value 2023-04-16 09:58:51 +02:00
f5f4635c3d bugfix: rename is_iterable and add doc tests 2023-04-16 09:50:42 +02:00
b12d92eebb bugfix: refactor get_owner and get_hash; add tests 2023-04-12 21:22:58 +02:00
e9da81376e bugfix: add new tests and refactor get_none 2023-04-12 20:32:44 +02:00
e68fc849f0 bugfix: correct erroneous and add new doctests 2023-04-10 19:14:23 +02:00
efb117177c bugfix: update check4rotate 2023-04-10 18:35:58 +02:00
710ea57503 Merge branch 'github-master' 2017-09-25 15:50:38 +02:00
8aaad643ec release version 0.2
release notes:
==============
Features:
- centralize all functionalities of PyLoT and control them from within the main GUI
- handling multiple events inside GUI with project files (save and load work progress)
- GUI based adjustments of pick parameters and I/O
- interactive tuning of parameters from within the GUI
- call automatic picking algorithm from within the GUI
- comparison of automatic with manual picks for multiple events using clear differentiation of manual picks into 'tune' and 'test-set' (beta)
- manual picking of different (user defined) phase types
- phase onset estimation with ObsPy TauPy

- interactive zoom/scale functionalities in all plots (mousewheel, pan, pan-zoom)
- array map to visualize stations and control onsets (beta feature, switch to manual picks not implemented)

Platform support:
- python 3 support
- Windows support

Performance:
- multiprocessing for automatic picking and restitution of multiple stations
- use pyqtgraph library for better performance on main waveform plot

Visualization:
- pick uncertainty (quality classes) visualization with gradients
- pick color unification for all plots
- new icons and stylesheets

Known Issues:
2017-09-25 14:24:52 +02:00
bc808b66c2 [update] README.md 2017-09-25 10:17:58 +02:00
472e5b3b9e Merge branch 'develop' 2017-09-21 16:18:53 +02:00
Marc S. Boxberg
503ea419c4 release version: 0.1a
release notes:
==============
Features
- consistent manual phase picking through predefined SNR dependant zoom level
- uniform uncertainty estimation from waveform's properties for automatic and manual picks
- pdf representation and comparison of picks taking the uncertainty intrinsically into account
- Richter and moment magnitude estimation
- location determination with external installation of [NonLinLoc](http://alomax.free.fr/nlloc/index.html)
Known issues
- Magnitude estimation from manual PyLoT takes some time (instrument correction)
2016-10-04 09:38:05 +02:00
60 changed files with 71032 additions and 1558 deletions

.gitignore

@@ -2,3 +2,4 @@
 *~
 .idea
 pylot/RELEASE-VERSION
+/tests/test_autopicker/dmt_database_test/

.mailmap (new file)

@@ -0,0 +1,39 @@
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <Darius_A@web.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <darius.arnold@rub.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <darius.arnold@ruhr-uni-bochum.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <mail@dariusarnold.de>
Dennis Wlecklik <dennisw@minos02.geophysik.ruhr-uni-bochum.de>
Jeldrik Gaal <jeldrikgaal@gmail.com>
Kaan Coekerim <kaan.coekerim@ruhr-uni-bochum.de>
Kaan Coekerim <kaan.coekerim@ruhr-uni-bochum.de> <kaan.coekerim@rub.de>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <kueperkoch@bestec-for-nature.com>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <ludger@quake2.(none)>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <ludger@sauron.bestec-for-nature>
Marc S. Boxberg <marc.boxberg@rub.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel.paffrath@rub.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos01.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos02.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos25.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@email.com>
Sally Zimmermann <sally.zimmermann@ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos01.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos02.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos22.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling-benatelli@scisys.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling@rub.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling@rub.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <DarkBeQst@users.noreply.github.com>
Thomas Moeller <thomas.moeller@rub.de>
Ann-Christin Koch <ann-christin.koch@ruhr-uni-bochum.de> <Ann-Christin.Koch@ruhr-uni-bochum.de>
Sebastian Priebe <sebastian.priebe@rub.de>

PyLoT.py (828 changes)

File diff suppressed because it is too large.

README.md

@@ -11,7 +11,7 @@ PILOT has originally been developed in Mathworks' MatLab. In order to distribute
 problems, it has been decided to redevelop the software package in Python. The great work of the ObsPy group allows easy
 handling of a bunch of seismic data and PyLoT will benefit a lot compared to the former MatLab version.
-The development of PyLoT is part of the joint research project MAGS2 and AlpArray.
+The development of PyLoT is part of the joint research project MAGS2, AlpArray and AdriaArray.
 
 ## Installation

@@ -27,58 +27,44 @@ Afterwards run (from the PyLoT main directory where the files *requirements.txt*
     conda env create -f pylot.yml
 
 or
 
-    conda create --name pylot_38 --file requirements.txt
+    conda create -c conda-forge --name pylot_311 python=3.11 --file requirements.txt
 
-to create a new Anaconda environment called "pylot_38".
+to create a new Anaconda environment called *pylot_311*.
 
 Afterwards activate the environment by typing
 
-    conda activate pylot_38
+    conda activate pylot_311
 
 #### Prerequisites:
 
 In order to run PyLoT you need to install:
 
 - Python 3
-- obspy
-- pyside2
-- pyqtgraph
 - cartopy
+- joblib
+- obspy
+- pyaml
+- pyqtgraph
+- pyside2
 
 (the following are already dependencies of the above packages):
 - scipy
 - numpy
-- matplotlib <= 3.3.x
+- matplotlib
 
 #### Some handwork:
 
-PyLoT needs a properties folder on your system to work. It should be situated in your home directory
-(on Windows usually C:/Users/*username*):
-
-    mkdir ~/.pylot
-
-In the next step you have to copy some files to this directory:
-
-*for local distance seismicity*
-
-    cp path-to-pylot/inputs/pylot_local.in ~/.pylot/pylot.in
-
-*for regional distance seismicity*
-
-    cp path-to-pylot/inputs/pylot_regional.in ~/.pylot/pylot.in
-
-*for global distance seismicity*
-
-    cp path-to-pylot/inputs/pylot_global.in ~/.pylot/pylot.in
-
-and some extra information on error estimates (just needed for reading old PILOT data) and the Richter magnitude scaling
+Some extra information on error estimates (just needed for reading old PILOT data) and the Richter magnitude scaling
 relation
 
     cp path-to-pylot/inputs/PILOT_TimeErrors.in path-to-pylot/inputs/richter_scaling.data ~/.pylot/
 
 You may need to do some modifications to these files. Especially folder names should be reviewed.
 
-PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10.
+PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10/11.
+
+## Example Dataset
+
+An example dataset with waveform data, metadata and automatic picks in the obspy-dmt dataset format for testing the teleseismic picking can be found at https://zenodo.org/doi/10.5281/zenodo.13759803
 
 ## Release notes

@@ -87,6 +73,7 @@ PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10.
 - event organisation in project files and waveform visualisation
 - consistent manual phase picking through predefined SNR dependant zoom level
 - consistent automatic phase picking routines using Higher Order Statistics, AIC and Autoregression
+- pick correlation correction for teleseismic waveforms
 - interactive tuning of auto-pick parameters
 - uniform uncertainty estimation from waveform's properties for automatic and manual picks
 - pdf representation and comparison of picks taking the uncertainty intrinsically into account

@@ -95,7 +82,7 @@ PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10.
 #### Known issues:
 
-We hope to solve these with the next release.
+Current release is still in development progress and has several issues. We are currently lacking manpower, but hope to assess many of the issues in the near future.
 
 ## Staff

@@ -108,4 +95,4 @@ Others: A. Bruestle, T. Meier, W. Friederich
 
 [ObsPy]: http://github.com/obspy/obspy/wiki
 
-April 2022
+September 2024

autoPyLoT.py

@@ -136,11 +136,9 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
     if parameter.hasParam('datastructure'):
         # getting information on data structure
         datastructure = DATASTRUCTURE[parameter.get('datastructure')]()
-        dsfields = {'root': parameter.get('rootpath'),
-                    'dpath': parameter.get('datapath'),
-                    'dbase': parameter.get('database')}
+        dsfields = {'dpath': parameter.get('datapath'),}
 
-        exf = ['root', 'dpath', 'dbase']
+        exf = ['dpath']
 
         if parameter['eventID'] != '*' and fnames == 'None':
             dsfields['eventID'] = parameter['eventID']

@@ -186,15 +184,15 @@
     if not input_dict:
         # started in production mode
         datapath = datastructure.expandDataPath()
-        if fnames == 'None' and parameter['eventID'] == '*':
+        if fnames in [None, 'None'] and parameter['eventID'] == '*':
             # multiple event processing
             # read each event in database
             events = [event for event in glob.glob(os.path.join(datapath, '*')) if
                       (os.path.isdir(event) and not event.endswith('EVENTS-INFO'))]
-        elif fnames == 'None' and parameter['eventID'] != '*' and not type(parameter['eventID']) == list:
+        elif fnames in [None, 'None'] and parameter['eventID'] != '*' and not type(parameter['eventID']) == list:
             # single event processing
             events = glob.glob(os.path.join(datapath, parameter['eventID']))
-        elif fnames == 'None' and type(parameter['eventID']) == list:
+        elif fnames in [None, 'None'] and type(parameter['eventID']) == list:
             # multiple event processing
             events = []
             for eventID in parameter['eventID']:

@@ -206,12 +204,10 @@
             locflag = 2
     else:
         # started in tune or interactive mode
-        datapath = os.path.join(parameter['rootpath'],
-                                parameter['datapath'])
+        datapath = parameter['datapath']
         events = []
         for eventID in eventid:
             events.append(os.path.join(datapath,
-                                       parameter['database'],
                                        eventID))
 
     if not events:

@@ -238,12 +234,15 @@
                 data.get_evt_data().path = eventpath
                 print('Reading event data from filename {}...'.format(filename))
             except Exception as e:
+                if type(e) == FileNotFoundError:
+                    print('Creating new event file.')
+                else:
                     print('Could not read event from file {}: {}'.format(filename, e))
                 data = Data()
                 pylot_event = Event(eventpath)  # event should be path to event directory
                 data.setEvtData(pylot_event)
-            if fnames == 'None':
-                data.set_wf_data(glob.glob(os.path.join(datapath, event_datapath, '*')))
+            if fnames in [None, 'None']:
+                data.setWFData(glob.glob(os.path.join(event_datapath, '*')))
                 # the following is necessary because within
                 # multiple event processing no event ID is provided
                 # in autopylot.in

@@ -258,7 +257,7 @@
                         now.minute)
                 parameter.setParam(eventID=eventID)
             else:
-                data.set_wf_data(fnames)
+                data.setWFData(fnames)
                 eventpath = events[0]
                 # now = datetime.datetime.now()

@@ -268,7 +267,7 @@
                 #     now.hour,
                 #     now.minute)
                 parameter.setParam(eventID=eventid)
-        wfdat = data.get_wf_data()  # all available streams
+        wfdat = data.getWFData()  # all available streams
         if not station == 'all':
             wfdat = wfdat.select(station=station)
         if not wfdat:
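The hunks above drop the old *rootpath*/*database* nesting: events become plain subdirectories of *datapath*. A small, self-contained sketch of the discovery logic the new lines implement (adapted directly from the diff; the folder name in the usage line is hypothetical):

```python
# Sketch of the event discovery in the new code above: events are the
# subdirectories of 'datapath'; obspyDMT's EVENTS-INFO folder is skipped.
import glob
import os


def list_events(datapath: str) -> list:
    """Return all event directories found directly under datapath."""
    return [event for event in glob.glob(os.path.join(datapath, '*'))
            if os.path.isdir(event) and not event.endswith('EVENTS-INFO')]


if __name__ == '__main__':
    # hypothetical obspy dmt database folder
    print(list_events('dmt_database'))
```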

docs/correlation.md (new file)

@@ -0,0 +1,77 @@
# Pick-Correlation Correction
## Introduction
Currently, the pick-correlation correction algorithm is not accessible from the PyLoT GUI. The main file *pick_correlation_correction.py* is located in the directory *pylot\correlation*.
The program only works for an obspy dmt database structure.
The basic workflow of the algorithm is shown in the following diagram. The first step **(1)** is the normal (automatic) picking procedure in PyLoT. Everything from step **(2)** to **(5)** is part of the correlation correction algorithm.
*Note: The first step is not required if theoretical onsets are used instead of external picks, i.e. when the parameter use_taupy_onsets is set to True. However, an existing event quakeML (.xml) file generated by PyLoT might still be required for each event if no external picks are used.*
![images/workflow_stacking.png](images/workflow_stacking.png)
A detailed description of the algorithm can be found in the corresponding publication:
*Paffrath, M., Friederich, W., and the AlpArray and AlpArray-SWATH D Working Groups: Teleseismic P waves at the AlpArray seismic network: wave fronts, absolute travel times and travel-time residuals, Solid Earth, 12, 1635–1660, https://doi.org/10.5194/se-12-1635-2021, 2021.*
## How to use
To use the program, call the main script with two mandatory arguments: the path to the obspy dmt database folder *dmt_database_path* and the path to the PyLoT infile *pylot.in* used for picking the beam trace:
```
python pick_correlation_correction.py dmt_database_path pylot.in
```
By default, the parameter file *parameters.yaml* is used. You can use the command line option *--params* to specify a different parameter file and other optional arguments such as *-pd* for plotting detailed information or *-n 4* to use 4 cores for parallel processing:
```
python pick_correlation_correction.py dmt_database_path pylot.in --params parameters_adriaarray.yaml -pd -n 4
```
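For orientation, a minimal argparse sketch of this command-line interface; apart from the two positional arguments and the *--params*, *-pd* and *-n* options named above, all destination names, defaults and help texts are illustrative assumptions, not the script's actual definitions:

```python
# Sketch only: models the CLI described in this document, not the real
# argument handling of pick_correlation_correction.py.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(
        description='Correct automatic picks by cross-correlation (sketch).')
    parser.add_argument('dmt_database_path',
                        help='path to the obspy dmt database folder')
    parser.add_argument('pylot_infile',
                        help='PyLoT infile (e.g. pylot.in) for picking the beam trace')
    parser.add_argument('--params', default='parameters.yaml',
                        help='parameter file (default: parameters.yaml)')
    parser.add_argument('-pd', dest='plot_detailed', action='store_true',
                        help='plot detailed information')
    parser.add_argument('-n', dest='ncores', type=int, default=1,
                        help='number of cores for parallel processing')
    return parser.parse_args()


if __name__ == '__main__':
    args = parse_args()
    print(args.dmt_database_path, args.pylot_infile, args.params, args.ncores)
```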
## Cross-Correlation Parameters
The program uses the parameters in the file *parameters.yaml* by default. You can use the command line option *--params* to specify a different parameter file. An example of the parameter file is provided in the *correlation\parameters.yaml* file.
In the top level of the parameter file the logging level *logging* can be set, as well as a list of pick phases *pick_phases* (e.g. ['P', 'S']).
For each pick phase the different parameters can be set in the first sub-level of the parameter file, e.g.:
```yaml
logging: info
pick_phases: ['P', 'S']

P:
  min_corr_stacking: 0.8
  min_corr_export: 0.6
  [...]
S:
  min_corr_stacking: 0.7
  [...]
```
The following parameters are available:
| Parameter Name | Description | Parameter Type |
|--------------------------------|----------------------------------------------------------------------------------------------------|----------------|
| min_corr_stacking | Minimum correlation coefficient for building beam trace | float |
| min_corr_export | Minimum correlation coefficient for pick export | float |
| min_stack | Minimum number of stations for building beam trace | int |
| t_before | Correlation window before reference pick | float |
| t_after | Correlation window after reference pick | float |
| cc_maxlag | Maximum shift for initial correlation | float |
| cc_maxlag2 | Maximum shift for second (final) correlation (also for calculating pick uncertainty) | float |
| initial_pick_outlier_threshold | Threshold for excluding large outliers of initial (AIC) picks | float |
| export_threshold | Automatically exclude all onsets which deviate more than this threshold from corrected taup onsets | float |
| min_picks_export | Minimum number of correlated picks for export | int |
| min_picks_autopylot | Minimum number of reference auto picks to continue with event | int |
| check_RMS | Do RMS check to search for restitution errors (very experimental) | bool |
| use_taupy_onsets | Use taupy onsets as reference picks instead of external picks | bool |
| station_list | Use the following stations as reference for stacking | list[str] |
| use_stacked_trace | Use existing stacked trace if found (spare re-computation) | bool |
| data_dir | obspyDMT data subdirectory (e.g. 'raw', 'processed') | str |
| pickfile_extension | Use quakeML files (PyLoT output) with the following extension | str |
| dt_stacking | Time difference for stacking window (in seconds) | list[float] |
| filter_options | Filter for first correlation (rough) | dict |
| filter_options_final | Filter for second correlation (fine) | dict |
| filter_type | Filter type (e.g. bandpass) | str |
| sampfreq | Sampling frequency (in Hz) | float |
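A short sketch of how a parameter file laid out as above can be consumed; it assumes PyYAML (pulled in by the *pyaml* dependency listed elsewhere on this page) and only the keys documented here:

```python
# Sketch: read the per-phase cross-correlation parameters from a
# parameters.yaml structured as in the example above.
import yaml  # PyYAML

with open('parameters.yaml') as fid:
    params = yaml.safe_load(fid)

print('log level:', params['logging'])
for phase in params['pick_phases']:  # e.g. ['P', 'S']
    phase_params = params[phase]
    print(phase, 'min_corr_stacking =', phase_params['min_corr_stacking'])
```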
## Example Dataset
An example dataset with waveform data, metadata and automatic picks in the obspy-dmt dataset format for testing can be found at https://zenodo.org/doi/10.5281/zenodo.13759803

@@ -203,8 +203,6 @@ The meaning of the header entries is:
 PyLoT GUI starts with an empty project. To add events, use the add event data button. Select one or multiple folders
 containing events.
 
-[//]: <> (TODO: explain _Directories: Root path, Data path, Database path_)
 
 ### Saving projects
 
 Save the current project from the menu with File->Save project or File->Save project as. PyLoT uses ``.plp`` files to

Binary image file (new, 64 KiB) not shown.

@@ -7,121 +7,122 @@ A description of the parameters used for determining automatic picks.

(Both hunks of this diff essentially re-wrap the markdown tables to new column widths; the parameter names and descriptions are unchanged and are shown once below.)

Parameters applied to the traces before picking algorithm starts.

| Name | Description |
|------|-------------|
| *P Start*, *P Stop* | Define time interval relative to trace start time for CF calculation on vertical trace. Value is relative to theoretical onset time if 'Use TauPy' option is enabled in main settings of 'Tune Autopicker' dialogue. |
| *S Start*, *S Stop* | Define time interval relative to trace start time for CF calculation on horizontal traces. Value is relative to theoretical onset time if 'Use TauPy' option is enabled in main settings of 'Tune Autopicker' dialogue. |
| *Bandpass Z1* | Filter settings for Butterworth bandpass applied to vertical trace for calculation of initial P pick. |
| *Bandpass Z2* | Filter settings for Butterworth bandpass applied to vertical trace for calculation of precise P pick. |
| *Bandpass H1* | Filter settings for Butterworth bandpass applied to horizontal traces for calculation of initial S pick. |
| *Bandpass H2* | Filter settings for Butterworth bandpass applied to horizontal traces for calculation of precise S pick. |

## Initial P pick

Parameters used for determination of initial P pick.

| Name | Description |
|------|-------------|
| *tLTA* | Size of gliding LTA window in seconds used for calculation of HOS-CF. |
| *pickwinP* | Size of time window in seconds in which the minimum of the AIC-CF in front of the maximum of the HOS-CF is determined. |
| *AICtsmooth* | Average of samples in this time window will be used for smoothing of the AIC-CF. |
| *checkwinP* | Time in front of the global maximum of the HOS-CF in which to search for a second local extremum. |
| *minfactorP* | Used with *checkwinP*. If a second local maximum is found, it has to be at least as big as the first maximum * *minfactorP*. |
| *tsignal* | Time window in seconds after the initial P pick used for determining signal amplitude. |
| *tnoise* | Time window in seconds in front of initial P pick used for determining noise amplitude. |
| *tsafetey* | Time in seconds between *tsignal* and *tnoise*. |
| *tslope* | Time window in seconds after initial P pick in which the slope of the onset is calculated. |

## Initial S pick

Parameters used for determination of initial S pick.

| Name | Description |
|------|-------------|
| *tdet1h* | Length of time window in seconds in which AR params of the waveform are determined. |
| *tpred1h* | Length of time window in seconds in which the waveform is predicted using the AR model. |
| *AICtsmoothS* | Average of samples in this time window is used for smoothing the AIC-CF. |
| *pickwinS* | Time window in which the minimum in the AIC-CF in front of the maximum in the ARH-CF is determined. |
| *checkwinS* | Time in front of the global maximum of the ARH-CF in which to search for a second local extremum. |
| *minfactorS* | Used with *checkwinS*. If a second local maximum is found, it has to be at least as big as the first maximum * *minfactorS*. |
| *tsignal* | Time window in seconds after the initial P pick used for determining signal amplitude. |
| *tnoise* | Time window in seconds in front of initial P pick used for determining noise amplitude. |
| *tsafetey* | Time in seconds between *tsignal* and *tnoise*. |
| *tslope* | Time window in seconds after initial P pick in which the slope of the onset is calculated. |

## Precise P pick

Parameters used for determination of precise P pick.

| Name | Description |
|------|-------------|
| *Precalcwin* | Time window in seconds for recalculation of the HOS-CF. The new CF will be two times the size of *Precalcwin*, since it will be calculated from the initial pick to +/- *Precalcwin*. |
| *tsmoothP* | Average of samples in this time window will be used for smoothing the second HOS-CF. |
| *ausP* | Controls artificial uplift of samples during precise picking. A common local minimum of the smoothed and unsmoothed HOS-CF is found when the previous sample is larger or equal to the current sample times (1+*ausP*). |

## Precise S pick

Parameters used for determination of precise S pick.

| Name | Description |
|------|-------------|
| *tdet2h* | Time window for determination of AR coefficients. |
| *tpred2h* | Time window in which the waveform is predicted using the determined AR parameters. |
| *Srecalcwin* | Time window for recalculation of ARH-CF. New CF will be calculated from initial pick +/- *Srecalcwin*. |
| *tsmoothS* | Average of samples in this time window will be used for smoothing the second ARH-CF. |
| *ausS* | Controls artificial uplift of samples during precise picking. A common local minimum of the smoothed and unsmoothed ARH-CF is found when the previous sample is larger or equal to the current sample times (1+*ausS*). |
| *pickwinS* | Time window around initial pick in which to look for a precise pick. |

## Pick quality control

Parameters used for checking quality and integrity of automatic picks.

| Name | Description |
|------|-------------|
| *minAICPslope* | Initial P picks with a slope lower than this value will be discarded. |
| *minAICPSNR* | Initial P picks with a SNR below this value will be discarded. |
| *minAICSslope* | Initial S picks with a slope lower than this value will be discarded. |
| *minAICSSNR* | Initial S picks with a SNR below this value will be discarded. |
| *minsiglength*, *noisefactor*, *minpercent* | Parameters for checking signal length. In the time window of size *minsiglength* after the initial P pick *minpercent* of samples have to be larger than the RMS value. |

(remainder of this hunk truncated in the diff view)

@@ -140,11 +141,11 @@ wdttolerance* | Maximum allowed deviation of S onset

Parameters for discrete quality classes.

| Name | Description |
|------|-------------|
| *timeerrorsP* | Width of the time windows in seconds between earliest and latest possible pick which represent the quality classes 0, 1, 2, 3 for P onsets. |
| *timeerrorsS* | Width of the time windows in seconds between earliest and latest possible pick which represent the quality classes 0, 1, 2, 3 for S onsets. |
| *nfacP*, *nfacS* | For determination of latest possible onset time. The time when the signal reaches an amplitude of *nfac* * mean value of the RMS amplitude in the time window *tnoise* corresponds to the latest possible onset time. |
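These tuning parameters live in a PyLoT input file (see the *inputs/\*.in* diffs below) and are loaded through the PylotParameter class. A rough sketch, assuming the import path shown in the data.py diff at the bottom of this page and the parameter.get(...) accessor used in autoPyLoT.py; the input-file path is hypothetical:

```python
# Rough sketch: load an autopicker parameter file and read single values.
# Import path and get() accessor appear elsewhere on this page; per commit
# 468a7721c8, PylotParameter() without arguments falls back to defaults.
import os

from pylot.core.io.phases import PylotParameter

parameter = PylotParameter(os.path.expanduser('~/.pylot/pylot.in'))  # hypothetical path
print(parameter.get('taup_model'))  # e.g. 'iasp91'
print(parameter.get('bpz1'))        # corner freqs of first bandpass, Z component
```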

inputs/pylot_global.in

@@ -4,10 +4,8 @@
 %Parameters are optimized for %extent data sets!
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 #main settings#
-#rootpath# %project path
 #datapath# %data path
-#database# %name of data base
-#eventID# %event ID for single event processing (* for all events found in database)
+#eventID# %event ID for single event processing (* for all events found in datapath)
 #invdir# %full path to inventory or dataless-seed file
 PILOT #datastructure# %choose data structure
 True #apverbose# %choose 'True' or 'False' for terminal output

@@ -43,6 +41,7 @@ global #extent# %extent of a
 1150.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
 True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
 iasp91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
+P,Pdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
 0.05 0.5 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
 0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
 0.05 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]

inputs/pylot_local.in

@@ -4,10 +4,8 @@
 %Parameters are optimized for %extent data sets!
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 #main settings#
-/DATA/Insheim #rootpath# %project path
-EVENT_DATA/LOCAL #datapath# %data path
-2018.02_Insheim #database# %name of data base
-e0006.038.18 #eventID# %event ID for single event processing (* for all events found in database)
+/DATA/Insheim/EVENT_DATA/LOCAL/2018.02_Insheim #datapath# %data path
+e0006.038.18 #eventID# %event ID for single event processing (* for all events found in datapath)
 /DATA/Insheim/STAT_INFO #invdir# %full path to inventory or dataless-seed file
 PILOT #datastructure# %choose data structure
 True #apverbose# %choose 'True' or 'False' for terminal output

@@ -43,6 +41,7 @@ local #extent# %extent of a
 10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
 False #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
 iasp91 #taup_model# %define TauPy model for traveltime estimation
+P #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
 2.0 20.0 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
 2.0 30.0 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
 2.0 10.0 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]

inputs/pylot_regional.in

@@ -4,10 +4,8 @@
 %Parameters are optimized for %extent data sets!
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 #main settings#
-#rootpath# %project path
 #datapath# %data path
-#database# %name of data base
-#eventID# %event ID for single event processing (* for all events found in database)
+#eventID# %event ID for single event processing (* for all events found in datapath)
 #invdir# %full path to inventory or dataless-seed file
 PILOT #datastructure# %choose data structure
 True #apverbose# %choose 'True' or 'False' for terminal output

@@ -43,6 +41,7 @@ local #extent# %extent of a
 10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
 True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
 iasp91 #taup_model# %define TauPy model for traveltime estimation
+P #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
 2.0 10.0 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
 2.0 12.0 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
 2.0 8.0 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]

pylot.yml

@@ -1,14 +1,12 @@
-name: pylot_38
+name: pylot_311
 channels:
   - conda-forge
   - defaults
 dependencies:
-  - cartopy=0.20.2
-  - matplotlib-base=3.3.4
-  - numpy=1.22.3
-  - obspy=1.3.0
-  - pyqtgraph=0.12.4
-  - pyside2>=5.13.2
-  - python=3.8.12
-  - qt>=5.12.9
-  - scipy=1.8.0
+  - cartopy=0.23.0=py311hcf9f919_1
+  - joblib=1.4.2=pyhd8ed1ab_0
+  - obspy=1.4.1=py311he736701_3
+  - pyaml=24.7.0=pyhd8ed1ab_0
+  - pyqtgraph=0.13.7=pyhd8ed1ab_0
+  - pyside2=5.15.8=py311h3d699ce_4
+  - pytest=8.3.2=pyhd8ed1ab_0

pylot/__init__.py

@@ -9,7 +9,7 @@ PyLoT - the Python picking and Localization Tool
 This python library contains a graphical user interfaces for picking
 seismic phases. This software needs ObsPy (http://github.com/obspy/obspy/wiki)
-and the Qt4 libraries to be installed first.
+and the Qt libraries to be installed first.
 
 PILOT has been developed in Mathworks' MatLab. In order to distribute
 PILOT without facing portability problems, it has been decided to re-

View File

@ -1,69 +1,665 @@
#!/usr/bin/env python #!/usr/bin/env python
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
import copy
import logging
import os import os
from dataclasses import dataclass, field
from typing import List, Union
from obspy import UTCDateTime from PySide2.QtWidgets import QMessageBox
from obspy import read_events
from obspy.core import read, Stream, UTCDateTime
from obspy.core.event import Event as ObsPyEvent
from obspy.io.sac import SacIOError
from pylot.core.io.event import EventData import pylot.core.loc.focmec as focmec
from pylot.core.io.waveformdata import WaveformData import pylot.core.loc.hypodd as hypodd
from pylot.core.util.dataprocessing import Metadata import pylot.core.loc.velest as velest
from pylot.core.io.phases import readPILOTEvent, picks_from_picksdict, \
picksdict_from_pilot, merge_picks, PylotParameter
from pylot.core.util.errors import FormatError, OverwriteError
from pylot.core.util.event import Event
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT
from pylot.core.util.utils import fnConstructor, full_range, check4rotated, \
check_for_gaps_and_merge, trim_station_components, check_for_nan
@dataclass
class Data:
event_data: EventData = field(default_factory=EventData)
waveform_data: WaveformData = field(default_factory=WaveformData)
metadata: Metadata = field(default_factory=Metadata)
_parent: Union[None, 'QtWidgets.QWidget'] = None
def __init__(self, parent=None, evtdata=None): class Data(object):
"""
Data container with attributes wfdata holding ~obspy.core.stream.
:type parent: PySide2.QtWidgets.QWidget object, optional
:param parent: A PySide2.QtWidgets.QWidget object utilized when
called by a GUI to display a PySide2.QtWidgets.QMessageBox instead of printing
to standard out.
:type evtdata: ~obspy.core.event.Event object, optional
:param evtdata ~obspy.core.event.Event object containing all derived or
loaded event. Container object holding, e.g. phase arrivals, etc.
"""
def __init__(self, parent=None, evtdata=None, picking_parameter=None):
self._parent = parent self._parent = parent
self.event_data = EventData(evtdata)
self.waveform_data = WaveformData() if not picking_parameter:
if hasattr(parent, '_inputs'):
picking_parameter = parent._inputs
else:
logging.warning('No picking parameters found! Using default input parameters!!!')
picking_parameter = PylotParameter()
self.picking_parameter = picking_parameter
if self.getParent():
self.comp = parent.getComponent()
else:
self.comp = 'Z'
self.wfdata = Stream()
self._new = False
if isinstance(evtdata, ObsPyEvent) or isinstance(evtdata, Event):
pass
elif isinstance(evtdata, dict):
evt = readPILOTEvent(**evtdata)
evtdata = evt
elif isinstance(evtdata, str):
try:
cat = read_events(evtdata)
if len(cat) != 1:
raise ValueError('ambiguous event information for file: '
'{file}'.format(file=evtdata))
evtdata = cat[0]
except TypeError as e:
if 'Unknown format for file' in e.message:
if 'PHASES' in evtdata:
picks = picksdict_from_pilot(evtdata)
evtdata = ObsPyEvent()
evtdata.picks = picks_from_picksdict(picks)
elif 'LOC' in evtdata:
raise NotImplementedError('PILOT location information '
'read support not yet '
'implemented.')
elif 'event.pkl' in evtdata:
evtdata = qml_from_obspyDMT(evtdata)
else:
raise e
else:
raise e
else: # create an empty Event object
self.setNew()
evtdata = ObsPyEvent()
evtdata.picks = []
self.evtdata = evtdata
self.wforiginal = None
self.cuttimes = None
self.dirty = False
self.processed = None
def __str__(self):
return str(self.wfdata)
def __add__(self, other):
assert isinstance(other, Data), "operands must be of same type 'Data'"
rs_id = self.get_evt_data().get('resource_id')
rs_id_other = other.get_evt_data().get('resource_id')
if other.isNew() and not self.isNew():
picks_to_add = other.get_evt_data().picks
old_picks = self.get_evt_data().picks
wf_ids_old = [pick.waveform_id for pick in old_picks]
for new_pick in picks_to_add:
wf_id = new_pick.waveform_id
if wf_id in wf_ids_old:
for old_pick in old_picks:
comparison = [old_pick.waveform_id == new_pick.waveform_id,
old_pick.phase_hint == new_pick.phase_hint,
old_pick.method_id == new_pick.method_id]
if all(comparison):
del (old_pick)
old_picks.append(new_pick)
elif not other.isNew() and self.isNew():
new = other + self
self.evtdata = new.get_evt_data()
elif self.isNew() and other.isNew():
pass
elif rs_id == rs_id_other:
other.setNew()
return self + other
else:
raise ValueError("both Data objects have differing "
"unique Event identifiers")
return self
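A minimal usage sketch of the merge semantics above; 'manual_data', 'auto_data' and 'new_autopicks' are hypothetical names, not part of the module:

    # merge automatic picks into an existing, manually picked Data object
    manual_data = Data(evtdata='PyLoT_e0001.xml')   # hypothetical file; isNew() -> False
    auto_data = Data()                              # empty container; isNew() -> True
    auto_data.get_evt_data().picks = new_autopicks  # hypothetical list of obspy Pick objects
    merged = manual_data + auto_data                # matching old picks are dropped, new ones appended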
def getPicksStr(self):
"""
Return picks in event data
:return: picks separated by newlines
:rtype: str
"""
picks_str = ''
for pick in self.get_evt_data().picks:
picks_str += str(pick) + '\n'
return picks_str
def getParent(self):
"""
Get PySide.QtGui.QWidget parent object
"""
return self._parent
def isNew(self):
return self._new
def setNew(self):
self._new = True
def getCutTimes(self):
"""
Returns earliest start and latest end of all waveform data
:return: minimum start time and maximum end time as a tuple
:rtype: (UTCDateTime, UTCDateTime)
"""
if self.cuttimes is None:
self.updateCutTimes()
return self.cuttimes
def updateCutTimes(self):
"""
Update cuttimes to contain earliest start and latest end time
of all waveform data
:rtype: None
"""
self.cuttimes = full_range(self.getWFData())
def getEventFileName(self):
ID = self.getID()
# handle forbidden filenames especially on windows systems
return fnConstructor(str(ID))
def checkEvent(self, event, fcheck, forceOverwrite=False):
"""
Check information in supplied event and own event and replace it with own
information if no other information is given or if forced by forceOverwrite
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param fcheck: check and delete existing information
can be a str or a list of strings of ['manual', 'auto', 'origin', 'magnitude']
:type fcheck: str, [str]
:param forceOverwrite: Set to true to force overwrite own information. If false,
supplied information from event is only used if there is no own information in that
category (given in fcheck: manual, auto, origin, magnitude)
:type forceOverwrite: bool
:return:
:rtype: None
"""
if 'origin' in fcheck:
self.replaceOrigin(event, forceOverwrite)
if 'magnitude' in fcheck:
self.replaceMagnitude(event, forceOverwrite)
if 'auto' in fcheck:
self.replacePicks(event, 'auto')
if 'manual' in fcheck:
self.replacePicks(event, 'manual')
def replaceOrigin(self, event, forceOverwrite=False):
"""
Replace own origin with the one supplied in event if own origin is missing
or if forced by forceOverwrite = True
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param forceOverwrite: always replace own information with supplied one if true
:type forceOverwrite: bool
:return:
:rtype: None
"""
if self.get_evt_data().origins or forceOverwrite:
if event.origins:
print("Found origin, replace it by new origin.")
event.origins = self.get_evt_data().origins
def replaceMagnitude(self, event, forceOverwrite=False):
"""
Replace own magnitude with the one supplied in event if own magnitude is missing
or if forced by forceOverwrite = True
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param forceOverwrite: always replace own information with supplied one if true
:type forceOverwrite: bool
:return:
:rtype: None
"""
if self.get_evt_data().magnitudes or forceOverwrite:
if event.magnitudes:
print("Found magnitude, replace it by new magnitude")
event.magnitudes = self.get_evt_data().magnitudes
def replacePicks(self, event, picktype):
"""
Replace picks in event with own picks
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param picktype: 'auto' or 'manual' picks
:type picktype: str
:return:
:rtype: None
"""
checkflag = 1
picks = event.picks
# remove existing picks
for j, pick in reversed(list(enumerate(picks))):
try:
if picktype in str(pick.method_id.id):
picks.pop(j)
checkflag = 2
except AttributeError as e:
msg = '{}'.format(e)
print(e)
checkflag = 0
if checkflag > 0:
if checkflag == 1:
print("Write new %s picks to catalog." % picktype)
if checkflag == 2:
print("Found %s pick(s), remove them and append new picks to catalog." % picktype)
# append new picks
for pick in self.get_evt_data().picks:
if picktype in str(pick.method_id.id):
picks.append(pick)
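A hedged sketch of how checkEvent ties the replace* helpers above together; 'data' and 'located_event' are hypothetical objects:

    # keep own origin/magnitude unless absent, always refresh both pick types
    data.checkEvent(located_event, fcheck=['origin', 'magnitude', 'auto', 'manual'])
    # force the own origin into the supplied event regardless of its content
    data.checkEvent(located_event, fcheck='origin', forceOverwrite=True)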
def exportEvent(self, fnout, fnext='.xml', fcheck='auto', upperErrors=None):
"""
Export event to file
:param fnout: basename of file
:param fnext: file extension: xml, cnv, obs, focmec, and/or pha
:param fcheck: check and delete existing information
can be a str or a list of strings of ['manual', 'auto', 'origin', 'magnitude']
"""
from pylot.core.util.defaults import OUTPUTFORMATS
if not type(fcheck) == list:
fcheck = [fcheck]
try:
evtformat = OUTPUTFORMATS[fnext]
except KeyError as e:
errmsg = '{0}; selected file extension {1} not ' \
'supported'.format(e, fnext)
raise FormatError(errmsg)
if hasattr(self.get_evt_data(), 'notes'):
try:
with open(os.path.join(os.path.dirname(fnout), 'notes.txt'), 'w') as notes_file:
notes_file.write(self.get_evt_data().notes)
except Exception as e:
print('Warning: Could not save notes.txt: ', str(e))
# check for already existing xml-file
if fnext == '.xml':
if os.path.isfile(fnout + fnext):
print("xml-file already exists! Check content ...")
cat = read_events(fnout + fnext)
if len(cat) > 1:
raise IOError('Ambiguous event information in file {}'.format(fnout + fnext))
if len(cat) < 1:
raise IOError('No event information in file {}'.format(fnout + fnext))
event = cat[0]
if not event.resource_id == self.get_evt_data().resource_id:
QMessageBox.warning(self, 'Warning', 'Different resource IDs!')
return
self.checkEvent(event, fcheck)
self.setEvtData(event)
self.get_evt_data().write(fnout + fnext, format=evtformat)
# try exporting event
else:
evtdata_org = self.get_evt_data()
picks = evtdata_org.picks
eventpath = evtdata_org.path
picks_copy = copy.deepcopy(picks)
evtdata_copy = Event(eventpath)
evtdata_copy.picks = picks_copy
# check for stations picked automatically as well as manually
# Prefer manual picks!
for i in range(len(picks)):
if picks[i].method_id == 'manual':
mstation = picks[i].waveform_id.station_code
mstation_ext = mstation + '_'
for k in range(len(picks_copy)):
if ((picks_copy[k].waveform_id.station_code == mstation) or
(picks_copy[k].waveform_id.station_code == mstation_ext)) and \
(picks_copy[k].method_id == 'auto'):
del picks_copy[k]
break
lendiff = len(picks) - len(picks_copy)
if lendiff != 0:
print("Manual as well as automatic picks available. Prefered the {} manual ones!".format(lendiff))
no_uncertainties_p = []
no_uncertainties_s = []
if upperErrors:
# check for pick uncertainties exceeding adjusted upper errors
# Picks with larger uncertainties will not be saved in output file!
for j in range(len(picks)):
for i in range(len(picks_copy)):
if picks_copy[i].phase_hint[0] == 'P':
# Skipping pick if no upper_uncertainty is found and warning user
if picks_copy[i].time_errors['upper_uncertainty'] is None:
#print("{1} P-Pick of station {0} does not have upper_uncertainty and cant be checked".format(
# picks_copy[i].waveform_id.station_code,
# picks_copy[i].method_id))
if not picks_copy[i].waveform_id.station_code in no_uncertainties_p:
no_uncertainties_p.append(picks_copy[i].waveform_id.station_code)
continue
#print ("checking for upper_uncertainty")
if (picks_copy[i].time_errors['uncertainty'] is None) or \
(picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[0]):
print("Uncertainty exceeds or equal adjusted upper time error!")
print("Adjusted uncertainty: {}".format(upperErrors[0]))
print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty']))
print("{1} P-Pick of station {0} will not be saved in outputfile".format(
picks_copy[i].waveform_id.station_code,
picks_copy[i].method_id))
del picks_copy[i]
break
if picks_copy[i].phase_hint[0] == 'S':
# Skipping pick if no upper_uncertainty is found and warning user
if picks_copy[i].time_errors['upper_uncertainty'] is None:
#print("{1} S-Pick of station {0} does not have upper_uncertainty and cant be checked".format(
#picks_copy[i].waveform_id.station_code,
#picks_copy[i].method_id))
if not picks_copy[i].waveform_id.station_code in no_uncertainties_s:
no_uncertainties_s.append(picks_copy[i].waveform_id.station_code)
continue
if (picks_copy[i].time_errors['uncertainty'] is None) or \
(picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[1]):
print("Uncertainty exceeds or equal adjusted upper time error!")
print("Adjusted uncertainty: {}".format(upperErrors[1]))
print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty']))
print("{1} S-Pick of station {0} will not be saved in outputfile".format(
picks_copy[i].waveform_id.station_code,
picks_copy[i].method_id))
del picks_copy[i]
break
for s in no_uncertainties_p:
print("P-Pick of station {0} does not have upper_uncertainty and cant be checked".format(s))
for s in no_uncertainties_s:
print("S-Pick of station {0} does not have upper_uncertainty and cant be checked".format(s))
if fnext == '.obs':
try:
evtdata_copy.write(fnout + fnext, format=evtformat)
# write header afterwards
evid = str(evtdata_org.resource_id).split('/')[1]
header = '# EQEVENT: Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' % evid
nllocfile = open(fnout + fnext)
l = nllocfile.readlines()
# Adding A0/Generic Amplitude to .obs file
# l2 = []
# for li in l:
# for amp in evtdata_org.amplitudes:
# if amp.waveform_id.station_code == li[0:5].strip():
# li = li[0:64] + '{:0.2e}'.format(amp.generic_amplitude) + li[73:-1] + '\n'
# l2.append(li)
# l = l2
nllocfile.close()
l.insert(0, header)
nllocfile = open(fnout + fnext, 'w')
nllocfile.write("".join(l))
nllocfile.close()
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
if fnext == '.cnv':
try:
velest.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data())
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
if fnext == '_focmec.in':
try:
focmec.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data())
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
if fnext == '.pha':
try:
hypodd.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data())
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
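A hedged usage sketch of exportEvent; the paths and thresholds are illustrative only:

    # write QuakeML, merging with an existing xml file if one is found
    data.exportEvent('PyLoT_e0001', fnext='.xml', fcheck=['auto'])
    # write a NonLinLoc obs file, dropping picks whose upper uncertainty
    # exceeds 0.1 s (P) or 0.2 s (S)
    data.exportEvent('PyLoT_e0001', fnext='.obs', upperErrors=[0.1, 0.2])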
def getComp(self):
"""
Get component (ZNE)
"""
return self.comp
def getID(self):
"""
Get unique resource id
"""
try:
return self.evtdata.get('resource_id').id
except:
return None
def filterWFData(self, kwargs):
"""
Filter waveform data
:param kwargs: arguments to pass through to filter function
"""
data = self.getWFData()
data.detrend('linear')
data.taper(0.02, type='cosine')
data.filter(**kwargs)
self.dirty = True
def setWFData(self, fnames, fnames_alt=None, checkRotated=False, metadata=None, tstart=0, tstop=0):
"""
Clear current waveform data and set given waveform data
:param fnames: waveform data names to append
:type fnames: list
:param fnames_alt: alternative data to show (e.g. synthetic/processed)
"""
def check_fname_exists(filenames: list) -> list:
if filenames:
filenames = [fn for fn in filenames if os.path.isfile(fn)]
return filenames
self.wfdata = Stream()
self.wforiginal = None
self.wf_alt = Stream()
if tstart == tstop:
tstart = tstop = None
self.tstart = tstart
self.tstop = tstop
# remove directories
fnames = check_fname_exists(fnames)
fnames_alt = check_fname_exists(fnames_alt)
# if obspy_dmt:
# wfdir = 'raw'
# self.processed = False
# for fname in fnames:
# if fname.endswith('processed'):
# wfdir = 'processed'
# self.processed = True
# break
# for fpath in fnames:
# if fpath.endswith(wfdir):
# wffnames = [os.path.join(fpath, fname) for fname in os.listdir(fpath)]
# if 'syngine' in fpath.split('/')[-1]:
# wffnames_syn = [os.path.join(fpath, fname) for fname in os.listdir(fpath)]
# else:
# wffnames = fnames
if fnames is not None:
self.appendWFData(fnames)
if fnames_alt is not None:
self.appendWFData(fnames_alt, alternative=True)
else:
return False
# various pre-processing steps:
# remove possible underscores in station names
# self.wfdata = remove_underscores(self.wfdata)
# check for gaps and merge
self.wfdata, _ = check_for_gaps_and_merge(self.wfdata)
# check for nans
check_for_nan(self.wfdata)
# check for stations with rotated components
if checkRotated and metadata is not None:
self.wfdata = check4rotated(self.wfdata, metadata, verbosity=0)
# trim station components to same start value
trim_station_components(self.wfdata, trim_start=True, trim_end=False)
# make a copy of original data
self.wforiginal = self.getWFData().copy()
self.dirty = False
return True
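A minimal sketch of loading waveforms through setWFData; 'event_dir', 'metadata' and 'data' are hypothetical:

    import glob
    import os

    fnames = glob.glob(os.path.join(event_dir, '*'))  # non-files are filtered out internally
    if data.setWFData(fnames, checkRotated=True, metadata=metadata):
        stream = data.getWFData()  # gap-merged, NaN-checked, trimmed obspy Stream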
def appendWFData(self, fnames, alternative=False):
"""
Read waveform data from fnames and append it to current wf data
:param fnames: waveform data to append
:type fnames: list
"""
assert isinstance(fnames, list), "input parameter 'fnames' is " \
"supposed to be of type 'list' " \
"but is actually" \
" {0}".format(type(fnames))
if self.dirty:
self.resetWFData()
orig_or_alternative_data = {True: self.wf_alt,
False: self.wfdata}
warnmsg = ''
for fname in set(fnames):
try:
orig_or_alternative_data[alternative] += read(fname, starttime=self.tstart, endtime=self.tstop)
except TypeError:
try:
orig_or_alternative_data[alternative] += read(fname, format='GSE2', starttime=self.tstart, endtime=self.tstop)
except Exception as e:
try:
orig_or_alternative_data[alternative] += read(fname, format='SEGY', starttime=self.tstart,
endtime=self.tstop)
except Exception as e:
warnmsg += '{0}\n{1}\n'.format(fname, e)
except SacIOError as se:
warnmsg += '{0}\n{1}\n'.format(fname, se)
if warnmsg:
warnmsg = 'WARNING in appendWFData: unable to read waveform data\n' + warnmsg
print(warnmsg)
def getWFData(self):
return self.wfdata
def getOriginalWFData(self):
return self.wforiginal
def getAltWFdata(self):
return self.wf_alt
def resetWFData(self):
"""
Set waveform data to original waveform data
"""
if self.getOriginalWFData():
self.wfdata = self.getOriginalWFData().copy()
else:
self.wfdata = Stream()
self.dirty = False
def resetPicks(self):
"""
Clear all picks from event
"""
self.get_evt_data().picks = []
def get_evt_data(self):
return self.evtdata
def setEvtData(self, event):
self.evtdata = event
def applyEVTData(self, data, typ='pick'):
"""
Either takes an `obspy.core.event.Event` object and applies all new
information on the event to the actual data if typ is 'event', or
creates ObsPy pick objects and appends them to the picks list from the
PyLoT dictionary containing all picks if typ is 'pick'.
:param data: data to apply, either picks or complete event
:type data:
:param typ: which event data to apply, 'pick' or 'event'
:type typ: str
:param authority_id: (currently unused)
:type: str
:raise OverwriteError:
"""
def applyPicks(picks):
"""
Creates ObsPy pick objects and appends them to the picks list from the
PyLoT dictionary containing all picks.
:param picks:
:raise OverwriteError: raises an OverwriteError if the picks list is
not empty. The GUI will then ask for a decision.
"""
# firstonset = find_firstonset(picks)
# check for automatic picks
print("Writing phases to ObsPy-quakeml file")
for key in picks:
if not picks[key].get('P'):
continue
if picks[key]['P']['picker'] == 'auto':
print("Existing auto-picks will be overwritten in pick-dictionary!")
picks = picks_from_picksdict(picks)
break
else:
if self.get_evt_data().picks:
raise OverwriteError('Existing picks would be overwritten!')
else:
picks = picks_from_picksdict(picks)
break
self.get_evt_data().picks = picks
# if 'smi:local' in self.getID() and firstonset:
# fonset_str = firstonset.strftime('%Y_%m_%d_%H_%M_%S')
# ID = ResourceIdentifier('event/' + fonset_str)
# ID.convertIDToQuakeMLURI(authority_id=authority_id)
# self.get_evt_data().resource_id = ID
def applyEvent(event):
"""
takes an `obspy.core.event.Event` object and applies all new
information on the event to the actual data
:param event:
"""
if event is None:
print("applyEvent: Received None")
return
if self.isNew():
self.setEvtData(event)
else:
# prevent overwriting original pick information
event_old = self.get_evt_data()
if not event_old.resource_id == event.resource_id:
print("WARNING: Missmatch in event resource id's: {} and {}".format(
event_old.resource_id,
event.resource_id))
else:
picks = copy.deepcopy(event_old.picks)
event = merge_picks(event, picks)
# apply event information from location
event_old.update(event)
applydata = {'pick': applyPicks,
'event': applyEvent}
applydata[typ](data)
self._new = False
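A hedged sketch of the two dispatch paths; 'picks_dict' and 'located_event' are hypothetical:

    try:
        data.applyEVTData(picks_dict, typ='pick')    # PyLoT pick dictionary
    except OverwriteError:
        pass  # a GUI would ask the user whether to overwrite existing picks
    data.applyEVTData(located_event, typ='event')    # merge relocated event info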
class GenericDataStructure(object):
@@ -248,6 +844,22 @@ class PilotDataStructure(GenericDataStructure):
self.setExpandFields(['root', 'database'])
class ObspyDMTdataStructure(GenericDataStructure):
"""
Object containing the data access information for an obspyDMT data
structure.
"""
def __init__(self, **fields):
if not fields:
fields = {'database': '',
'root': ''}
GenericDataStructure.__init__(self, **fields)
self.setExpandFields(['root', 'database'])
class SeiscompDataStructure(GenericDataStructure):
"""
Dictionary containing the data access information for an SDS data archive:


@@ -6,24 +6,14 @@ import numpy as np
Default parameters used for picking
"""
defaults = {'datapath': {'type': str,
'tooltip': 'path to eventfolders',
'value': '',
'namestring': 'Data path'},
'eventID': {'type': str,
'tooltip': 'event ID for single event processing (* for all events found in datapath)',
'value': '*',
'namestring': 'Event ID'},
'extent': {'type': str,
@@ -511,7 +501,7 @@ defaults = {'rootpath': {'type': str,
'taup_model': {'type': str,
'tooltip': 'Define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6',
'value': 'iasp91',
'namestring': 'TauPy model'},
'taup_phases': {'type': str,
@@ -522,9 +512,7 @@ defaults = {'rootpath': {'type': str,
settings_main = {
'dirs': [
'datapath',
'eventID',
'invdir',
'datastructure',


@@ -1,106 +0,0 @@
import copy
from dataclasses import dataclass
from typing import Union
from obspy import read_events
from obspy.core.event import Event as ObsPyEvent
@dataclass
class EventData:
evtdata: Union[ObsPyEvent, None] = None
_new: bool = False
def __init__(self, evtdata=None):
self.set_event_data(evtdata)
def set_event_data(self, evtdata):
if isinstance(evtdata, ObsPyEvent):
self.evtdata = evtdata
elif isinstance(evtdata, dict):
self.evtdata = self.read_pilot_event(**evtdata)
elif isinstance(evtdata, str):
self.evtdata = self.read_event_file(evtdata)
else:
self.set_new()
self.evtdata = ObsPyEvent(picks=[])
def read_event_file(self, evtdata: str) -> ObsPyEvent:
try:
cat = read_events(evtdata)
if len(cat) != 1:
raise ValueError(f'ambiguous event information for file: {evtdata}')
return cat[0]
except TypeError as e:
self.handle_event_file_error(e, evtdata)
def handle_event_file_error(self, e: TypeError, evtdata: str):
if 'Unknown format for file' in str(e):
if 'PHASES' in evtdata:
picks = self.picksdict_from_pilot(evtdata)
evtdata = ObsPyEvent(picks=self.picks_from_picksdict(picks))
elif 'LOC' in evtdata:
raise NotImplementedError('PILOT location information read support not yet implemented.')
elif 'event.pkl' in evtdata:
evtdata = self.qml_from_obspy_dmt(evtdata)
else:
raise e
else:
raise e
def set_new(self):
self._new = True
def is_new(self) -> bool:
return self._new
def get_picks_str(self) -> str:
return '\n'.join(str(pick) for pick in self.evtdata.picks)
def replace_origin(self, event: ObsPyEvent, force_overwrite: bool = False):
if self.evtdata.origins or force_overwrite:
event.origins = self.evtdata.origins
def replace_magnitude(self, event: ObsPyEvent, force_overwrite: bool = False):
if self.evtdata.magnitudes or force_overwrite:
event.magnitudes = self.evtdata.magnitudes
def replace_picks(self, event: ObsPyEvent, picktype: str):
checkflag = 1
picks = event.picks
for j, pick in reversed(list(enumerate(picks))):
if picktype in str(pick.method_id.id):
picks.pop(j)
checkflag = 2
if checkflag > 0:
for pick in self.evtdata.picks:
if picktype in str(pick.method_id.id):
picks.append(pick)
def get_id(self) -> Union[str, None]:
try:
return self.evtdata.resource_id.id
except:
return None
def apply_event_data(self, data, typ='pick'):
if typ == 'pick':
self.apply_picks(data)
elif typ == 'event':
self.apply_event(data)
def apply_picks(self, picks):
self.evtdata.picks = picks
def apply_event(self, event: ObsPyEvent):
if self.is_new():
self.evtdata = event
else:
old_event = self.evtdata
if old_event.resource_id == event.resource_id:
picks = copy.deepcopy(old_event.picks)
event = self.merge_picks(event, picks)
old_event.update(event)
else:
print(f"WARNING: Mismatch in event resource id's: {old_event.resource_id} and {event.resource_id}")


@@ -1,5 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import os
from pylot.core.io import default_parameters
from pylot.core.util.errors import ParameterError
@@ -51,10 +53,16 @@ class PylotParameter(object):
self.__parameter = {}
self._verbosity = verbosity
self._parFileCont = {}
# io from parsed arguments alternatively
for key, val in kwargs.items():
self._parFileCont[key] = val
self.from_file()
# if no filename or kwargs given, use default values
if not fnin and not kwargs:
self.reset_defaults()
if fnout:
self.export2File(fnout)
@@ -88,10 +96,10 @@ class PylotParameter(object):
return bool(self.__parameter)
def __getitem__(self, key):
if key in self.__parameter:
return self.__parameter[key]
else:
logging.warning(f'{key} not found in PylotParameter')
def __setitem__(self, key, value):
try:
@@ -418,6 +426,28 @@ class PylotParameter(object):
line = value + name + ttip
fid.write(line)
@staticmethod
def check_deprecated_parameters(parameters):
if parameters.hasParam('database') and parameters.hasParam('rootpath'):
parameters['datapath'] = os.path.join(parameters['rootpath'], parameters['datapath'],
parameters['database'])
logging.warning(
f'Parameters database and rootpath are deprecated. '
f'Tried to merge them to now path: {parameters["datapath"]}.'
)
remove_keys = []
for key in parameters:
if not key in default_parameters.defaults.keys():
remove_keys.append(key)
logging.warning(f'Removing deprecated parameter: {key}')
for key in remove_keys:
del parameters[key]
parameters._settings_main = default_parameters.settings_main
parameters._settings_special_pick = default_parameters.settings_special_pick
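A sketch of the migration performed above, assuming an old-style parameter file; 'old_project.in' is a hypothetical filename:

    parameters = PylotParameter('old_project.in')
    PylotParameter.check_deprecated_parameters(parameters)
    # e.g. rootpath='/data', datapath='EVENTS', database='e2024' is folded
    # into the single parameter datapath='/data/EVENTS/e2024'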
class FilterOptions(object):
'''


@@ -21,6 +21,25 @@ from pylot.core.util.utils import get_owner, full_range, four_digits, transformF
backtransformFilterString, loopIdentifyPhase, identifyPhase
def add_amplitudes(event, amplitudes):
amplitude_list = []
for pick in event.picks:
try:
a0 = amplitudes[pick.waveform_id.station_code]
amplitude = ope.Amplitude(generic_amplitude=a0 * 1e-3)
amplitude.unit = 'm'
amplitude.category = 'point'
amplitude.waveform_id = pick.waveform_id
amplitude.magnitude_hint = 'ML'
amplitude.pick_id = pick.resource_id
amplitude.type = 'AML'
amplitude_list.append(amplitude)
except KeyError:
continue
event.amplitudes = amplitude_list
return event
def readPILOTEvent(phasfn=None, locfn=None, authority_id='RUB', **kwargs):
"""
readPILOTEvent - function
@@ -174,6 +193,31 @@ def convert_pilot_times(time_array):
return UTCDateTime(*times)
def picksdict_from_obs(fn):
"""
create pick dictionary from obs file
:param fn: filename
:type fn:
:return:
:rtype:
"""
picks = dict()
station_name = str()
for line in open(fn, 'r'):
if line.startswith('#'):
continue
else:
phase_line = line.split()
if not station_name == phase_line[0]:
phase = dict()
station_name = phase_line[0]
phase_name = phase_line[4].upper()
pick = UTCDateTime(phase_line[6] + phase_line[7] + phase_line[8])
phase[phase_name] = dict(mpp=pick, fm=phase_line[5])
picks[station_name] = phase
return picks
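The resulting dictionary layout, sketched with illustrative values; 'PyLoT_e0001.obs' is a hypothetical file:

    picks = picksdict_from_obs('PyLoT_e0001.obs')
    # picks == {'ST01': {'P': {'mpp': UTCDateTime(...), 'fm': 'U'},
    #                    'S': {'mpp': UTCDateTime(...), 'fm': '?'}}}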
def picksdict_from_picks(evt, parameter=None):
"""
Takes an Event object and returns the pick dictionary commonly used within
@@ -234,7 +278,6 @@ def picksdict_from_picks(evt, parameter=None):
weight = phase.get('weight')
if not weight:
if not parameter:
logging.warning('Using default input parameter')
parameter = PylotParameter()
pick.phase_hint = identifyPhase(pick.phase_hint)
@@ -329,228 +372,636 @@ def picks_from_picksdict(picks, creation_info=None):
return picks_list
def reassess_pilot_db(root_dir, db_dir, out_dir=None, fn_param=None, verbosity=0):
# TODO: change root to datapath
db_root = os.path.join(root_dir, db_dir)
evt_list = glob.glob1(db_root, 'e????.???.??')
for evt in evt_list:
if verbosity > 0:
print('Reassessing event {0}'.format(evt))
reassess_pilot_event(root_dir, db_dir, evt, out_dir, fn_param, verbosity)
def reassess_pilot_event(root_dir, db_dir, event_id, out_dir=None, fn_param=None, verbosity=0):
from obspy import read
from pylot.core.io.inputs import PylotParameter
from pylot.core.pick.utils import earllatepicker
# TODO: change root to datapath
default = PylotParameter(fn_param, verbosity)
search_base = os.path.join(root_dir, db_dir, event_id)
phases_file = glob.glob(os.path.join(search_base, 'PHASES.mat'))
if not phases_file:
return
if verbosity > 1:
print('Opening PILOT phases file: {fn}'.format(fn=phases_file[0]))
picks_dict = picksdict_from_pilot(phases_file[0])
if verbosity > 0:
print('Dictionary read from PHASES.mat:\n{0}'.format(picks_dict))
datacheck = list()
info = None
for station in picks_dict.keys():
fn_pattern = os.path.join(search_base, '{0}*'.format(station))
try:
st = read(fn_pattern)
except TypeError as e:
if 'Unknown format for file' in e.message:
try:
st = read(fn_pattern, format='GSE2')
except ValueError as e:
if e.message == 'second must be in 0..59':
info = 'A known Error was raised. Please find the list of corrupted files and double-check these files.'
datacheck.append(fn_pattern + ' (time info)\n')
continue
else:
raise ValueError(e.message)
except Exception as e:
if 'No file matching file pattern:' in e.message:
if verbosity > 0:
warnings.warn('no waveform data found for station {station}'.format(station=station),
RuntimeWarning)
datacheck.append(fn_pattern + ' (no data)\n')
continue
else:
raise e
else:
raise e
for phase in picks_dict[station].keys():
try:
mpp = picks_dict[station][phase]['mpp']
except KeyError as e:
print(e.message, station)
continue
sel_st = select_for_phase(st, phase)
if not sel_st:
msg = 'no waveform data found for station {station}'.format(station=station)
warnings.warn(msg, RuntimeWarning)
continue
stime, etime = full_range(sel_st)
rel_pick = mpp - stime
epp, lpp, spe = earllatepicker(sel_st,
default.get('nfac{0}'.format(phase)),
default.get('tsnrz' if phase == 'P' else 'tsnrh'),
Pick1=rel_pick,
iplot=0,
verbosity=0)
if epp is None or lpp is None:
continue
epp = stime + epp
lpp = stime + lpp
min_diff = 3 * st[0].stats.delta
if lpp - mpp < min_diff:
lpp = mpp + min_diff
if mpp - epp < min_diff:
epp = mpp - min_diff
picks_dict[station][phase] = dict(epp=epp, mpp=mpp, lpp=lpp, spe=spe)
if datacheck:
if info:
if verbosity > 0:
print(info + ': {0}'.format(search_base))
fncheck = open(os.path.join(search_base, 'datacheck_list'), 'w')
fncheck.writelines(datacheck)
fncheck.close()
del datacheck
# create Event object for export
evt = ope.Event(resource_id=event_id)
evt.picks = picks_from_picksdict(picks_dict)
# write phase information to file
if not out_dir:
fnout_prefix = os.path.join(root_dir, db_dir, event_id, 'PyLoT_{0}.'.format(event_id))
else:
out_dir = os.path.join(out_dir, db_dir)
if not os.path.isdir(out_dir):
os.makedirs(out_dir)
fnout_prefix = os.path.join(out_dir, 'PyLoT_{0}.'.format(event_id))
evt.write(fnout_prefix + 'xml', format='QUAKEML')
def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
"""
Function of methods to write phases to the following standard file
formats used for locating earthquakes:

HYPO71, NLLoc, VELEST, HYPOSAT, FOCMEC, and hypoDD

:param arrivals: dictionary containing all phase information including
station ID, phase, first motion, weight (uncertainty), ...
:type arrivals: dict

:param fformat: chosen file format (location routine),
choose between NLLoc, HYPO71, HYPOSAT, VELEST,
HYPOINVERSE, FOCMEC, and hypoDD
:type fformat: str

:param filename: full path and name of phase file
:type filename: string

:param parameter: all input information
:type parameter: object

:param eventinfo: optional, needed for VELEST-cnv file
and FOCMEC- and HASH-input files
:type eventinfo: `obspy.core.event.Event` object
"""
if fformat == 'NLLoc':
print("Writing phases to %s for NLLoc" % filename)
fid = open("%s" % filename, 'w')
# write header
fid.write('# EQEVENT: %s Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' %
(parameter.get('datapath'), parameter.get('eventID')))
arrivals = chooseArrivals(arrivals)
for key in arrivals:
# P onsets
if 'P' in arrivals[key]:
try:
fm = arrivals[key]['P']['fm']
except KeyError as e:
print(e)
fm = None
if fm is None:
fm = '?'
onset = arrivals[key]['P']['mpp']
year = onset.year
month = onset.month
day = onset.day
hh = onset.hour
mm = onset.minute
ss = onset.second
ms = onset.microsecond
ss_ms = ss + ms / 1000000.0
pweight = 1 # use pick
try:
if arrivals[key]['P']['weight'] >= 4:
pweight = 0 # do not use pick
print("Station {}: Uncertain pick, do not use it!".format(key))
except KeyError as e:
print(e.message + '; no weight set during processing')
fid.write('%s ? ? ? P %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
fm,
year,
month,
day,
hh,
mm,
ss_ms,
pweight))
# S onsets
if 'S' in arrivals[key] and arrivals[key]['S']['mpp'] is not None:
fm = '?'
onset = arrivals[key]['S']['mpp']
year = onset.year
month = onset.month
day = onset.day
hh = onset.hour
mm = onset.minute
ss = onset.second
ms = onset.microsecond
ss_ms = ss + ms / 1000000.0
sweight = 1 # use pick
try:
if arrivals[key]['S']['weight'] >= 4:
sweight = 0 # do not use pick
except KeyError as e:
print(str(e) + '; no weight set during processing')
Ao = arrivals[key]['S']['Ao'] # peak-to-peak amplitude
if Ao == None:
Ao = 0.0
# fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 %9.2f 0 0 %d \n' % (key,
fm,
year,
month,
day,
hh,
mm,
ss_ms,
Ao,
sweight))
fid.close()
elif fformat == 'HYPO71':
print("Writing phases to %s for HYPO71" % filename)
fid = open("%s" % filename, 'w')
# write header
fid.write(' %s\n' %
parameter.get('eventID'))
arrivals = chooseArrivals(arrivals) # MP MP what is chooseArrivals? It is not defined anywhere
for key in arrivals:
if arrivals[key]['P']['weight'] < 4:
stat = key
if len(stat) > 4: # HYPO71 handles only 4-string station IDs
stat = stat[1:5]
Ponset = arrivals[key]['P']['mpp']
Sonset = arrivals[key]['S']['mpp']
pweight = arrivals[key]['P']['weight']
sweight = arrivals[key]['S']['weight']
fm = arrivals[key]['P']['fm']
if fm is None:
fm = '-'
Ao = arrivals[key]['S']['Ao']
if Ao is None:
Ao = ''
else:
Ao = str('%7.2f' % Ao)
year = Ponset.year
if year >= 2000:
year = year - 2000
else:
year = year - 1900
month = Ponset.month
day = Ponset.day
hh = Ponset.hour
mm = Ponset.minute
ss = Ponset.second
ms = Ponset.microsecond
ss_ms = ss + ms / 1000000.0
if pweight < 2:
pstr = 'I'
elif pweight >= 2:
pstr = 'E'
if arrivals[key]['S']['weight'] < 4:
Sss = Sonset.second
Sms = Sonset.microsecond
Sss_ms = Sss + Sms / 1000000.0
Sss_ms = str('%5.02f' % Sss_ms)
if sweight < 2:
sstr = 'I'
elif sweight >= 2:
sstr = 'E'
fid.write('%-4s%sP%s%d %02d%02d%02d%02d%02d%5.2f %s%sS %d %s\n' % (stat,
pstr,
fm,
pweight,
year,
month,
day,
hh,
mm,
ss_ms,
Sss_ms,
sstr,
sweight,
Ao))
else:
fid.write('%-4s%sP%s%d %02d%02d%02d%02d%02d%5.2f %s\n' % (stat,
pstr,
fm,
pweight,
year,
month,
day,
hh,
mm,
ss_ms,
Ao))
fid.close()
elif fformat == 'HYPOSAT':
print("Writing phases to %s for HYPOSAT" % filename)
fid = open("%s" % filename, 'w')
# write header
fid.write('%s, event %s \n' % (parameter.get('datapath'), parameter.get('eventID')))
arrivals = chooseArrivals(arrivals)
for key in arrivals:
# P onsets
if 'P' in arrivals[key] and arrivals[key]['P']['mpp'] is not None:
if arrivals[key]['P']['weight'] < 4:
Ponset = arrivals[key]['P']['mpp']
pyear = Ponset.year
pmonth = Ponset.month
pday = Ponset.day
phh = Ponset.hour
pmm = Ponset.minute
pss = Ponset.second
pms = Ponset.microsecond
Pss = pss + pms / 1000000.0
# use symmetrized picking error as std
# (read the HYPOSAT manual)
pstd = arrivals[key]['P']['spe']
if pstd is None:
errorsP = parameter.get('timeerrorsP')
if arrivals[key]['P']['weight'] == 0:
pstd = errorsP[0]
elif arrivals[key]['P']['weight'] == 1:
pstd = errorsP[1]
elif arrivals[key]['P']['weight'] == 2:
pstd = errorsP[2]
elif arrivals[key]['P']['weight'] == 3:
pstd = errorsP[3]
else:
pstd = errorsP[4]
fid.write('%-5s P1 %4.0f %02d %02d %02d %02d %05.02f %5.3f -999. 0.00 -999. 0.00\n'
% (key, pyear, pmonth, pday, phh, pmm, Pss, pstd))
# S onsets
if 'S' in arrivals[key] and arrivals[key]['S']['mpp'] is not None:
if arrivals[key]['S']['weight'] < 4:
Sonset = arrivals[key]['S']['mpp']
syear = Sonset.year
smonth = Sonset.month
sday = Sonset.day
shh = Sonset.hour
smm = Sonset.minute
sss = Sonset.second
sms = Sonset.microsecond
Sss = sss + sms / 1000000.0
sstd = arrivals[key]['S']['spe']
if sstd is None:
errorsS = parameter.get('timeerrorsS')
if arrivals[key]['S']['weight'] == 0:
sstd = errorsS[0]
elif arrivals[key]['S']['weight'] == 1:
sstd = errorsS[1]
elif arrivals[key]['S']['weight'] == 2:
sstd = errorsS[2]
elif arrivals[key]['S']['weight'] == 3:
sstd = errorsS[3]
else:
sstd = errorsS[4]
fid.write('%-5s S1 %4.0f %02d %02d %02d %02d %05.02f %5.3f -999. 0.00 -999. 0.00\n'
% (key, syear, smonth, sday, shh, smm, Sss, sstd))
fid.close()
elif fformat == 'VELEST':
print("Writing phases to %s for VELEST" % filename)
fid = open("%s" % filename, 'w')
# get information needed in cnv-file
# check, whether latitude is N or S and longitude is E or W
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no cnv-file creation possible!")
return
if eventsource['latitude'] < 0:
cns = 'S'
else:
cns = 'N'
if eventsource['longitude'] < 0:
cew = 'W'
else:
cew = 'E'
# get last two integers of origin year
stime = eventsource['time']
if stime.year - 2000 >= 0:
syear = stime.year - 2000
else:
syear = stime.year - 1900
ifx = 0 # default value, see VELEST manual, pp. 22-23
# write header
fid.write('%s%02d%02d %02d%02d %05.2f %7.4f%c %8.4f%c %7.2f %6.2f %02.0f 0.0 0.03 1.0 1.0\n' % (
syear, stime.month, stime.day, stime.hour, stime.minute, stime.second, eventsource['latitude'],
cns, eventsource['longitude'], cew, eventsource['depth'], eventinfo.magnitudes[0]['mag'], ifx))
n = 0
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) is False:
# convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter)
# check for automatic and manual picks
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
# P onsets
if 'P' in usedarrivals[key]:
if usedarrivals[key]['P']['weight'] < 4:
n += 1
stat = key
if len(stat) > 4: # VELEST handles only 4-string station IDs
stat = stat[1:5]
Ponset = usedarrivals[key]['P']['mpp']
Pweight = usedarrivals[key]['P']['weight']
Prt = Ponset - stime # onset time relative to source time
if n % 6 != 0:
fid.write('%-4sP%d%6.2f' % (stat, Pweight, Prt))
else:
fid.write('%-4sP%d%6.2f\n' % (stat, Pweight, Prt))
# S onsets
if 'S' in usedarrivals[key]:
if usedarrivals[key]['S']['weight'] < 4:
n += 1
stat = key
if len(stat) > 4: # VELEST handles only 4-string station IDs
stat = stat[1:5]
Sonset = usedarrivals[key]['S']['mpp']
Sweight = usedarrivals[key]['S']['weight']
Srt = Sonset - stime # onset time relative to source time
if n % 6 != 0:
fid.write('%-4sS%d%6.2f' % (stat, Sweight, Srt))
else:
fid.write('%-4sS%d%6.2f\n' % (stat, Sweight, Srt))
fid.close()
elif fformat == 'HYPODD':
print("Writing phases to %s for hypoDD" % filename)
fid = open("%s" % filename, 'w')
# get event information needed for hypoDD-phase file
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no hypoDD-infile creation possible!")
return
stime = eventsource['time']
try:
event = eventinfo['pylot_id']
hddID = event.split('.')[0][1:5]
except:
print("Error 1111111!")
hddID = "00000"
# write header
fid.write('# %d %d %d %d %d %5.2f %7.4f +%6.4f %7.4f %4.2f 0.1 0.5 %4.2f %s\n' % (
stime.year, stime.month, stime.day, stime.hour, stime.minute, stime.second,
eventsource['latitude'], eventsource['longitude'], eventsource['depth'] / 1000,
eventinfo.magnitudes[0]['mag'], eventsource['quality']['standard_error'], hddID))
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) == False:
# convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter)
# check for automatic and manual picks
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
if 'P' in usedarrivals[key]:
# P onsets
if usedarrivals[key]['P']['weight'] < 4:
Ponset = usedarrivals[key]['P']['mpp']
Prt = Ponset - stime # onset time relative to source time
fid.write('%s %6.3f 1 P\n' % (key, Prt))
if 'S' in usedarrivals[key]:
# S onsets
if usedarrivals[key]['S']['weight'] < 4:
Sonset = usedarrivals[key]['S']['mpp']
Srt = Sonset - stime # onset time relative to source time
fid.write('%-5s %6.3f 1 S\n' % (key, Srt))
fid.close()
elif fformat == 'FOCMEC':
print("Writing phases to %s for FOCMEC" % filename)
fid = open("%s" % filename, 'w')
# get event information needed for FOCMEC-input file
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no FOCMEC-infile creation possible!")
return
stime = eventsource['time']
# avoid printing '*' in focmec-input file
if parameter.get('eventid') == '*' or parameter.get('eventid') is None:
evID = 'e0000'
else:
evID = parameter.get('eventid')
# write header line including event information
fid.write('%s %d%02d%02d%02d%02d%02.0f %7.4f %6.4f %3.1f %3.1f\n' % (evID,
stime.year, stime.month, stime.day,
stime.hour, stime.minute, stime.second,
eventsource['latitude'],
eventsource['longitude'],
eventsource['depth'] / 1000,
eventinfo.magnitudes[0]['mag']))
picks = eventinfo.picks
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) == False:
# convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter)
# check for automatic and manual picks
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
if 'P' in usedarrivals[key]:
if usedarrivals[key]['P']['weight'] < 4 and usedarrivals[key]['P']['fm'] is not None:
stat = key
for i in range(len(picks)):
station = picks[i].waveform_id.station_code
if station == stat:
# get resource ID
resid_picks = picks[i].get('resource_id')
# find same ID in eventinfo
# there it is the pick_id!!
for j in range(len(eventinfo.origins[0].arrivals)):
resid_eventinfo = eventinfo.origins[0].arrivals[j].get('pick_id')
if resid_eventinfo == resid_picks and eventinfo.origins[0].arrivals[j].phase == 'P':
if len(stat) > 4: # FOCMEC handles only 4-string station IDs
stat = stat[1:5]
az = eventinfo.origins[0].arrivals[j].get('azimuth')
inz = eventinfo.origins[0].arrivals[j].get('takeoff_angle')
fid.write('%-4s %6.2f %6.2f%s \n' % (stat,
az,
inz,
usedarrivals[key]['P']['fm']))
break
fid.close()
elif fformat == 'HASH':
# two different input files for
# HASH-driver 1 and 2 (see HASH manual!)
filename1 = filename + 'drv1' + '.phase'
filename2 = filename + 'drv2' + '.phase'
print("Writing phases to %s for HASH for HASH-driver 1" % filename1)
fid1 = open("%s" % filename1, 'w')
print("Writing phases to %s for HASH for HASH-driver 2" % filename2)
fid2 = open("%s" % filename2, 'w')
# get event information needed for HASH-input file
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no cnv-file creation possible!")
return
eventsource = eventinfo.origins[0]
event = parameter.get('eventID')
hashID = event.split('.')[0][1:5]
latdeg = eventsource['latitude']
latmin = eventsource['latitude'] * 60 / 10000
londeg = eventsource['longitude']
lonmin = eventsource['longitude'] * 60 / 10000
erh = 1 / 2 * (eventsource.origin_uncertainty['min_horizontal_uncertainty'] +
eventsource.origin_uncertainty['max_horizontal_uncertainty']) / 1000
erz = eventsource.depth_errors['uncertainty']
stime = eventsource['time']
if stime.year - 2000 >= 0:
syear = stime.year - 2000
else:
syear = stime.year - 1900
picks = eventinfo.picks
# write header line including event information
# for HASH-driver 1
fid1.write('%s%02d%02d%02d%02d%5.2f%2dN%5.2f%3dE%5.2f%6.3f%4.2f%5.2f%5.2f%s\n' % (syear,
stime.month, stime.day, stime.hour, stime.minute, stime.second,
latdeg, latmin, londeg, lonmin, eventsource['depth'],
eventinfo.magnitudes[0]['mag'], erh, erz, hashID))
# write header line including event information
# for HASH-driver 2
fid2.write('%d%02d%02d%02d%02d%5.2f%dN%5.2f%3dE%6.2f%5.2f %d %5.2f %5.2f %4.2f %s \n' % (
syear, stime.month, stime.day, stime.hour, stime.minute, stime.second,
latdeg, latmin, londeg, lonmin, eventsource['depth'],
eventsource['quality']['used_phase_count'], erh, erz,
eventinfo.magnitudes[0]['mag'], hashID))
# Prefer Manual Picks over automatic ones if possible
arrivals = chooseArrivals(arrivals)  # MP MP what is chooseArrivals? It is not defined anywhere
# write phase lines
for key in arrivals:
if 'P' in arrivals[key]:
if arrivals[key]['P']['weight'] < 4 and arrivals[key]['P']['fm'] is not None:
stat = key
ccode = arrivals[key]['P']['channel']
ncode = arrivals[key]['P']['network']
if arrivals[key]['P']['weight'] < 2:
Pqual = 'I'
else:
Pqual = 'E'
for i in range(len(picks)):
station = picks[i].waveform_id.station_code
if station == stat:
# get resource ID
resid_picks = picks[i].get('resource_id')
# find same ID in eventinfo
# there it is the pick_id!!
for j in range(len(eventinfo.origins[0].arrivals)):
resid_eventinfo = eventinfo.origins[0].arrivals[j].get('pick_id')
if resid_eventinfo == resid_picks and eventinfo.origins[0].arrivals[j].phase == 'P':
if len(stat) > 4:  # HASH handles only 4-string station IDs
stat = stat[1:5]
az = eventinfo.origins[0].arrivals[j].get('azimuth')
inz = eventinfo.origins[0].arrivals[j].get('takeoff_angle')
dist = eventinfo.origins[0].arrivals[j].get('distance')
# write phase line for HASH-driver 1
fid1.write('%-4s%sP%s%d 0 %3.1f %03d %03d 2 1 %s\n' % (
stat, Pqual, arrivals[key]['P']['fm'], arrivals[key]['P']['weight'],
dist, inz, az, ccode))
# write phase line for HASH-driver 2
fid2.write('%-4s %s %s %s %s \n' % (
stat, ncode, ccode, Pqual, arrivals[key]['P']['fm']))
break
fid1.write(' %s' % hashID)
fid1.close()
fid2.close()
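A hedged usage sketch of writephases; 'arrivals', 'parameter' and 'event' are hypothetical, fully populated objects:

    writephases(arrivals, 'NLLoc', 'loc/e0001.obs', parameter=parameter)
    # VELEST, FOCMEC, hypoDD and HASH additionally need event info with an origin
    writephases(arrivals, 'HYPODD', 'loc/e0001.pha', parameter=parameter,
                eventinfo=event)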
def chooseArrivals(arrivals):


@@ -1,177 +0,0 @@
import os
from pylot.core.util.event import Event
class Project(object):
'''
Pickable class containing information of a PyLoT project, like event lists and file locations.
'''
# TODO: remove rootpath
def __init__(self):
self.eventlist = []
self.location = None
self.rootpath = None
self.datapath = None
self.dirty = False
self.parameter = None
self._table = None
def add_eventlist(self, eventlist):
'''
Add events from an eventlist containing paths to event directories.
Will skip existing paths.
'''
if len(eventlist) == 0:
return
for item in eventlist:
event = Event(item)
event.rootpath = self.parameter['rootpath']
event.database = self.parameter['database']
event.datapath = self.parameter['datapath']
if not event.path in self.getPaths():
self.eventlist.append(event)
self.setDirty()
else:
print('Skipping event with path {}. Already part of project.'.format(event.path))
self.eventlist.sort(key=lambda x: x.pylot_id)
self.search_eventfile_info()
def remove_event(self, event):
self.eventlist.remove(event)
def remove_event_by_id(self, eventID):
for event in self.eventlist:
if eventID in str(event.resource_id):
self.remove_event(event)
break
def read_eventfile_info(self, filename, separator=','):
'''
Try to read event information from file (:param:filename) comparing specific event datetimes.
File structure (each row): event, date, time, magnitude, latitude, longitude, depth
separated by :param:separator each.
'''
with open(filename, 'r') as infile:
for line in infile.readlines():
eventID, date, time, mag, lat, lon, depth = line.split(separator)[:7]
# skip first line
try:
day, month, year = date.split('/')
except:
continue
year = int(year)
# hardcoded, if year only consists of 2 digits (e.g. 16 instead of 2016)
if year < 100:
year += 2000
datetime = '{}-{}-{}T{}'.format(year, month, day, time)
try:
datetime = UTCDateTime(datetime)
except Exception as e:
print(e, datetime, filename)
continue
for event in self.eventlist:
if eventID in str(event.resource_id) or eventID in event.origins:
if event.origins:
origin = event.origins[0] # should have only one origin
if origin.time == datetime:
origin.latitude = float(lat)
origin.longitude = float(lon)
origin.depth = float(depth)
else:
continue
elif not event.origins:
origin = Origin(resource_id=event.resource_id,
time=datetime, latitude=float(lat),
longitude=float(lon), depth=float(depth))
event.origins.append(origin)
event.magnitudes.append(Magnitude(resource_id=event.resource_id,
mag=float(mag),
mag_type='M'))
break
def search_eventfile_info(self):
'''
Search all datapaths in rootpath for filenames with given file extension fext
and try to read event info from it
'''
datapaths = []
fext = '.csv'
for event in self.eventlist:
if not event.datapath in datapaths:
datapaths.append(event.datapath)
for datapath in datapaths:
# datapath = os.path.join(self.rootpath, datapath)
if os.path.isdir(datapath):
for filename in os.listdir(datapath):
filename = os.path.join(datapath, filename)
if os.path.isfile(filename) and filename.endswith(fext):
try:
self.read_eventfile_info(filename)
except Exception as e:
print('Failed on reading eventfile info from file {}: {}'.format(filename, e))
else:
print("Directory %s does not exist!" % datapath)
def getPaths(self):
'''
Returns paths (eventlist) of all events saved in the project.
'''
paths = []
for event in self.eventlist:
paths.append(event.path)
return paths
def setDirty(self, value=True):
self.dirty = value
def getEventFromPath(self, path):
'''
Search for an event in the project by event path.
'''
for event in self.eventlist:
if event.path == path:
return event
def save(self, filename=None):
'''
Save PyLoT Project to a file.
Can be loaded by using project.load(filename).
'''
try:
import pickle
except ImportError:
import _pickle as pickle
if filename:
self.location = filename
else:
filename = self.location
table = self._table # MP: see below
try:
outfile = open(filename, 'wb')
self._table = [] # MP: Workaround as long as table cannot be saved as part of project
pickle.dump(self, outfile, protocol=pickle.HIGHEST_PROTOCOL)
self.setDirty(False)
self._table = table # MP: see above
return True
except Exception as e:
print('Could not pickle PyLoT project. Reason: {}'.format(e))
self.setDirty()
self._table = table # MP: see above
return False
@staticmethod
def load(filename):
'''
Load project from filename.
'''
import pickle
infile = open(filename, 'rb')
project = pickle.load(infile)
infile.close()
project.location = filename
print('Loaded %s' % filename)
return project
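A minimal usage sketch of the Project class above; the event path and file name are hypothetical, and parameter is assumed to be a PylotParameter-like mapping providing 'rootpath', 'database' and 'datapath':

project = Project()
project.parameter = parameter                         # assumption: PylotParameter-like mapping
project.add_eventlist(['/data/events/e0001.015.16'])  # hypothetical event directory
if project.save('my_project.plp'):
    reloaded = Project.load('my_project.plp')
    print(reloaded.getPaths())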


@@ -1,13 +0,0 @@
import os
from typing import List
def validate_filenames(filenames: List[str]) -> List[str]:
"""
validate a list of filenames, keeping only those that exist on disk
:param filenames: list of possible filenames
:type filenames: List[str]
:return: list of valid filenames
:rtype: List[str]
"""
return [fn for fn in filenames if os.path.isfile(fn)]
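A one-line usage sketch (file names hypothetical):

existing = validate_filenames(['ST01.mseed', 'missing.mseed'])  # keeps only paths that are existing files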


@@ -1,123 +0,0 @@
import logging
from dataclasses import dataclass, field
from typing import Union, List
from obspy import Stream, read
from obspy.io.sac import SacIOError
from pylot.core.io.utils import validate_filenames
from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.utils import get_stations, check_for_nan, check4rotated
@dataclass
class WaveformData:
wfdata: Stream = field(default_factory=Stream)
wforiginal: Union[Stream, None] = None
wf_alt: Stream = field(default_factory=Stream)
dirty: bool = False
def load_waveforms(self, fnames: List[str], fnames_alt: List[str] = None, check_rotated=False, metadata=None, tstart=0, tstop=0):
fn_list = validate_filenames(fnames)
if not fn_list:
logging.warning('No valid filenames given for loading waveforms')
else:
self.clear()
self.add_waveforms(fn_list)
if fnames_alt is None:
pass
else:
alt_fn_list = validate_filenames(fnames_alt)
if not alt_fn_list:
logging.warning('No valid alternative filenames given for loading waveforms')
else:
self.add_waveforms(alt_fn_list, alternative=True)
if not fn_list and not alt_fn_list:
logging.error('No filenames or alternative filenames given for loading waveforms')
return False
self.merge()
self.replace_nan()
if not check_rotated or not metadata:
pass
else:
self.rotate_zne()
self.trim_station_traces()
self.wforiginal = self.wfdata.copy()
self.dirty = False
return True
def add_waveforms(self, fnames: List[str], alternative: bool = False):
data_stream = self.wf_alt if alternative else self.wfdata
warnmsg = ''
for fname in set(fnames):
try:
data_stream += read(fname)
except TypeError:
try:
data_stream += read(fname, format='GSE2')
except Exception as e:
try:
data_stream += read(fname, format='SEGY')
except Exception as e:
warnmsg += f'{fname}\n{e}\n'
except SacIOError as se:
warnmsg += f'{fname}\n{se}\n'
if warnmsg:
print(f'WARNING in add_waveforms: unable to read waveform data\n{warnmsg}')
def clear(self):
self.wfdata = Stream()
self.wforiginal = None
self.wf_alt = Stream()
def reset(self):
"""
Resets the waveform data to its original state.
"""
if self.wforiginal:
self.wfdata = self.wforiginal.copy()
else:
self.wfdata = Stream()
self.dirty = False
def merge(self):
"""
check for gaps in Stream and merge if gaps are found
"""
gaps = self.wfdata.get_gaps()
if gaps:
merged = ['{}.{}.{}.{}'.format(*gap[:4]) for gap in gaps]
self.wfdata.merge(method=1)
logging.info('Merged the following stations because of gaps:')
for station in merged:
logging.info(station)
def replace_nan(self):
"""
Replace all NaNs in data with 0. (in place)
"""
self.wfdata = check_for_nan(self.wfdata)
def rotate_zne(self, metadata: Metadata = None):
"""
Check all traces in stream for rotation. If a trace is not in ZNE rotation (last symbol of channel code is numeric) and the trace
is in the metadata with azimuth and dip, rotate it to classical ZNE orientation.
Rotating the traces requires them to be of the same length, so, all traces will be trimmed to a common length as a
side effect.
"""
self.wfdata = check4rotated(self.wfdata, metadata)
def trim_station_traces(self):
"""
trim data stream to common time window
"""
for station in get_stations(self.wfdata):
station_traces = self.wfdata.select(station=station)
station_traces.trim(starttime=max([trace.stats.starttime for trace in station_traces]),
endtime=min([trace.stats.endtime for trace in station_traces]))
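A minimal usage sketch of the WaveformData container above (file paths hypothetical):

wf = WaveformData()
if wf.load_waveforms(['e0001/trace_Z.mseed', 'e0001/trace_N.mseed']):  # hypothetical files
    print(wf.wfdata)   # merged, NaN-free stream, trimmed to common station time windows
    wf.reset()         # restore the unprocessed copy kept in wforiginal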


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.version import get_git_version as _getVersionString

__version__ = _getVersionString()

@@ -25,4 +25,4 @@ def export(picks, fnout, parameter, eventinfo):
    :type eventinfo: list object
    '''
    # write phases to FOCMEC-phase file
-    write_phases(picks, 'FOCMEC', fnout, parameter, eventinfo)
+    writephases(picks, 'FOCMEC', fnout, parameter, eventinfo)


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.version import get_git_version as _getVersionString

__version__ = _getVersionString()

@@ -25,4 +25,4 @@ def export(picks, fnout, parameter, eventinfo):
    :type eventinfo: list object
    '''
    # write phases to HASH-phase file
-    write_phases(picks, 'HASH', fnout, parameter, eventinfo)
+    writephases(picks, 'HASH', fnout, parameter, eventinfo)


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.version import get_git_version as _getVersionString

__version__ = _getVersionString()

@@ -22,4 +22,4 @@ def export(picks, fnout, parameter):
    :type parameter: object
    '''
    # write phases to HYPO71-phase file
-    write_phases(picks, 'HYPO71', fnout, parameter)
+    writephases(picks, 'HYPO71', fnout, parameter)


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.version import get_git_version as _getVersionString

__version__ = _getVersionString()

@@ -25,4 +25,4 @@ def export(picks, fnout, parameter, eventinfo):
    :type eventinfo: list object
    '''
    # write phases to hypoDD-phase file
-    write_phases(picks, 'HYPODD', fnout, parameter, eventinfo)
+    writephases(picks, 'HYPODD', fnout, parameter, eventinfo)


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.version import get_git_version as _getVersionString

__version__ = _getVersionString()

@@ -22,4 +22,4 @@ def export(picks, fnout, parameter):
    :type parameter: object
    '''
    # write phases to HYPOSAT-phase file
-    write_phases(picks, 'HYPOSAT', fnout, parameter)
+    writephases(picks, 'HYPOSAT', fnout, parameter)


@@ -7,7 +7,7 @@ import subprocess

from obspy import read_events

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.gui import which
from pylot.core.util.utils import getPatternLine, runProgram
from pylot.core.util.version import get_git_version as _getVersionString

@@ -34,7 +34,7 @@ def export(picks, fnout, parameter):
    :type parameter: object
    '''
    # write phases to NLLoc-phase file
-    write_phases(picks, 'NLLoc', fnout, parameter)
+    writephases(picks, 'NLLoc', fnout, parameter)

def modify_inputs(ctrfn, root, nllocoutn, phasefn, tttn):


@@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

-from pylot.core.io.phases import write_phases
+from pylot.core.io.phases import writephases
from pylot.core.util.version import get_git_version as _getVersionString

__version__ = _getVersionString()

@@ -25,4 +25,4 @@ def export(picks, fnout, eventinfo, parameter=None):
    :type parameter: object
    '''
    # write phases to VELEST-phase file
-    write_phases(picks, 'VELEST', fnout, parameter, eventinfo)
+    writephases(picks, 'VELEST', fnout, parameter, eventinfo)
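All of the exporter modules above are thin wrappers around the same writephases call; a hypothetical usage sketch (the pylot.core.loc module path and the output file name are assumptions):

from pylot.core.loc import velest                                   # module path assumed
velest.export(picks, 'event.cnv', eventinfo, parameter=parameter)   # file name hypothetical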


@@ -262,6 +262,10 @@ class AutopickStation(object):
        self.metadata = metadata
        self.origin = origin

+        # initialize TauPy pick estimates
+        self.estFirstP = None
+        self.estFirstS = None
+
        # initialize picking results
        self.p_results = PickingResults()
        self.s_results = PickingResults()
@@ -443,15 +447,15 @@
            for arr in arrivals:
                phases[identifyPhaseID(arr.phase.name)].append(arr)
            # get first P and S onsets from arrivals list
-            estFirstP = 0
-            estFirstS = 0
+            arrival_time_p = 0
+            arrival_time_s = 0
            if len(phases['P']) > 0:
-                arrP, estFirstP = min([(arr, arr.time) for arr in phases['P']], key=lambda t: t[1])
+                arrP, arrival_time_p = min([(arr, arr.time) for arr in phases['P']], key=lambda t: t[1])
            if len(phases['S']) > 0:
-                arrS, estFirstS = min([(arr, arr.time) for arr in phases['S']], key=lambda t: t[1])
+                arrS, arrival_time_s = min([(arr, arr.time) for arr in phases['S']], key=lambda t: t[1])
            print('autopick: estimated first arrivals for P: {} s, S:{} s after event'
-                  ' origin time using TauPy'.format(estFirstP, estFirstS))
-            return estFirstP, estFirstS
+                  ' origin time using TauPy'.format(arrival_time_p, arrival_time_s))
+            return arrival_time_p, arrival_time_s
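The min-with-key idiom above returns the arrival object and its time in one pass; a standalone sketch with stand-in arrivals:

from collections import namedtuple

Arrival = namedtuple('Arrival', 'phase time')   # stand-in for obspy.taup arrival objects
phases = {'P': [Arrival('P', 12.4), Arrival('Pn', 11.9)]}
arrP, arrival_time_p = min([(arr, arr.time) for arr in phases['P']], key=lambda t: t[1])
# arrP is the Pn arrival, arrival_time_p == 11.9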
        def exit_taupy():
            """If taupy failed to calculate theoretical starttimes, picking continues.

@@ -477,10 +481,13 @@
            raise AttributeError('No source origins given!')

        arrivals = create_arrivals(self.metadata, self.origin, self.pickparams["taup_model"])
-        estFirstP, estFirstS = first_PS_onsets(arrivals)
+        arrival_P, arrival_S = first_PS_onsets(arrivals)
+
+        self.estFirstP = (self.origin[0].time + arrival_P) - self.ztrace.stats.starttime
        # modify pstart and pstop relative to estimated first P arrival (relative to station time axis)
-        self.pickparams["pstart"] += (self.origin[0].time + estFirstP) - self.ztrace.stats.starttime
-        self.pickparams["pstop"] += (self.origin[0].time + estFirstP) - self.ztrace.stats.starttime
+        self.pickparams["pstart"] += self.estFirstP
+        self.pickparams["pstop"] += self.estFirstP
        print('autopick: CF calculation times respectively:'
              ' pstart: {} s, pstop: {} s'.format(self.pickparams["pstart"], self.pickparams["pstop"]))
        # make sure pstart and pstop are inside the starttime/endtime of vertical trace

@@ -491,9 +498,10 @@
        # for the two horizontal components take earliest and latest time to make sure that the s onset is not clipped
        # if start and endtime of horizontal traces differ, the s windowsize will automatically increase
        trace_s_start = min([self.etrace.stats.starttime, self.ntrace.stats.starttime])
+        self.estFirstS = (self.origin[0].time + arrival_S) - trace_s_start
        # modify sstart and sstop relative to estimated first S arrival (relative to station time axis)
-        self.pickparams["sstart"] += (self.origin[0].time + estFirstS) - trace_s_start
-        self.pickparams["sstop"] += (self.origin[0].time + estFirstS) - trace_s_start
+        self.pickparams["sstart"] += self.estFirstS
+        self.pickparams["sstop"] += self.estFirstS
        print('autopick: CF calculation times respectively:'
              ' sstart: {} s, sstop: {} s'.format(self.pickparams["sstart"], self.pickparams["sstop"]))
        # make sure pstart and pstop are inside the starttime/endtime of horizontal traces
@@ -609,6 +617,12 @@
            # plot tapered trace filtered with bpz2 filter settings
            ax1.plot(tdata, self.tr_filt_z_bpz2.data / max(self.tr_filt_z_bpz2.data), color=linecolor, linewidth=0.7,
                     label='Data')
+            # plot pickwindows for P
+            pstart, pstop = self.pickparams['pstart'], self.pickparams['pstop']
+            if pstart is not None and pstop is not None:
+                ax1.axvspan(pstart, pstop, color='r', alpha=0.1, zorder=0, label='P window')
+            if self.estFirstP is not None:
+                ax1.axvline(self.estFirstP, ls='dashed', color='r', alpha=0.4, label='TauPy estimate')
            if self.p_results.weight < 4:
                # plot CF of initial onset (HOScf or ARZcf)
                ax1.plot(self.cf1.getTimeArray(), self.cf1.getCF() / max(self.cf1.getCF()), 'b', label='CF1')

@@ -713,6 +727,15 @@
                ax3.plot([refSpick.getpick() - 0.5, refSpick.getpick() + 0.5], [-1.3, -1.3], 'g', linewidth=2)
                ax3.plot([self.s_results.lpp, self.s_results.lpp], [-1.1, 1.1], 'g--', label='lpp')
                ax3.plot([self.s_results.epp, self.s_results.epp], [-1.1, 1.1], 'g--', label='epp')
+            # plot pickwindows for S
+            sstart, sstop = self.pickparams['sstart'], self.pickparams['sstop']
+            if sstart is not None and sstop is not None:
+                for axis in [ax2, ax3]:
+                    axis.axvspan(sstart, sstop, color='b', alpha=0.1, zorder=0, label='S window')
+                    if self.estFirstS is not None:
+                        axis.axvline(self.estFirstS, ls='dashed', color='b', alpha=0.4, label='TauPy estimate')
            ax3.legend(loc=1)
            ax3.set_yticks([])
            ax3.set_ylim([-1.5, 1.5])
@@ -835,14 +858,21 @@
            self.cf1 = None
        assert isinstance(self.cf1, CharacteristicFunction), 'cf1 is not set correctly: maybe the algorithm name ({})' \
                                                             ' is corrupted'.format(self.pickparams["algoP"])

+        # get the original waveform stream from first CF class cut to identical length as CF for plotting
+        cut_ogstream = self.cf1.getDataArray(self.cf1.getCut())
+
+        # MP: Rename to cf_stream for further use of z_copy and to prevent chaos when z_copy suddenly becomes a cf
+        # stream and later again a waveform stream
+        cf_stream = z_copy.copy()
+        cf_stream[0].data = self.cf1.getCF()
+
        # calculate AIC cf from first cf (either HOS or ARZ)
-        z_copy[0].data = self.cf1.getCF()
-        aiccf = AICcf(z_copy, cuttimes)
+        aiccf = AICcf(cf_stream, cuttimes)
+
        # get preliminary onset time from AIC-CF
        self.set_current_figure('aicFig')
        aicpick = AICPicker(aiccf, self.pickparams["tsnrz"], self.pickparams["pickwinP"], self.iplot,
                            Tsmooth=self.pickparams["aictsmooth"], fig=self.current_figure,
-                            linecolor=self.current_linecolor)
+                            linecolor=self.current_linecolor, ogstream=cut_ogstream)
        # save aicpick for plotting later
        self.p_data.aicpick = aicpick
        # add pstart and pstop to aic plot

@@ -855,7 +885,7 @@
                       label='P stop')
            ax.legend(loc=1)

-        Pflag = self._pick_p_quality_control(aicpick, z_copy, tr_filt)
+        Pflag = self._pick_p_quality_control(aicpick, cf_stream, tr_filt)
        # go on with processing if AIC onset passes quality control
        slope = aicpick.getSlope()
        if not slope: slope = 0

@@ -894,7 +924,7 @@
            refPpick = PragPicker(self.cf2, self.pickparams["tsnrz"], self.pickparams["pickwinP"], self.iplot,
                                  self.pickparams["ausP"],
                                  self.pickparams["tsmoothP"], aicpick.getpick(), self.current_figure,
-                                  self.current_linecolor)
+                                  self.current_linecolor, ogstream=cut_ogstream)
            # save PragPicker result for plotting
            self.p_data.refPpick = refPpick
            self.p_results.mpp = refPpick.getpick()

@@ -1146,11 +1176,14 @@
        # calculate AIC cf
        haiccf = self._calculate_aic_cf_s_pick(cuttimesh)

+        # get the original waveform stream cut to identical length as CF for plotting
+        ogstream = haiccf.getDataArray(haiccf.getCut())
+
        # get preliminary onset time from AIC cf
        self.set_current_figure('aicARHfig')
        aicarhpick = AICPicker(haiccf, self.pickparams["tsnrh"], self.pickparams["pickwinS"], self.iplot,
                               Tsmooth=self.pickparams["aictsmoothS"], fig=self.current_figure,
-                               linecolor=self.current_linecolor)
+                               linecolor=self.current_linecolor, ogstream=ogstream)
        # save pick for later plotting
        self.aicarhpick = aicarhpick


@@ -17,7 +17,11 @@ autoregressive prediction: application to local and regional distances, Geophys.
:author: MAGS2 EP3 working group
"""
import numpy as np
-from scipy import signal
+try:
+    from scipy.signal import tukey
+except ImportError:
+    from scipy.signal.windows import tukey
from obspy.core import Stream

from pylot.core.pick.utils import PickingFailedException
@@ -56,7 +60,7 @@
        self.setOrder(order)
        self.setFnoise(fnoise)
        self.setARdetStep(t2)
-        self.calcCF(self.getDataArray())
+        self.calcCF()
        self.arpara = np.array([])
        self.xpred = np.array([])
@@ -208,17 +212,15 @@
            data = self.orig_data.copy()
            return data

-    def calcCF(self, data=None):
-        self.cf = data
+    def calcCF(self):
+        pass

class AICcf(CharacteristicFunction):

-    def calcCF(self, data):
+    def calcCF(self):
        """
        Function to calculate the Akaike Information Criterion (AIC) after Maeda (1985).
-        :param data: data, time series (whether seismogram or CF)
-        :type data: tuple
        :return: AIC function
        :rtype:
        """
@@ -227,7 +229,7 @@
        ind = np.where(~np.isnan(xnp))[0]
        if ind.size:
            xnp[:ind[0]] = xnp[ind[0]]
-        xnp = signal.tukey(len(xnp), alpha=0.05) * xnp
+        xnp = tukey(len(xnp), alpha=0.05) * xnp
        xnp = xnp - np.mean(xnp)
        datlen = len(xnp)
        k = np.arange(1, datlen)
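For reference, a minimal sketch of the Maeda-style AIC named in the docstring, computed directly from a series x; this illustrates the formula only and is not the project's implementation:

import numpy as np

def aic_maeda(x):
    # AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])); the minimum marks the onset
    N = len(x)
    aic = np.full(N, np.nan)
    for k in range(2, N - 1):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return aic

onset_index = np.nanargmin(aic_maeda(np.random.randn(500)))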
@@ -256,13 +258,11 @@
        """
        super(HOScf, self).__init__(data, cut, pickparams["tlta"], pickparams["hosorder"])

-    def calcCF(self, data):
+    def calcCF(self):
        """
        Function to calculate skewness (statistics of order 3) or kurtosis
        (statistics of order 4), using one long moving window, as published
        in Kueperkoch et al. (2010), or order 2, i.e. STA/LTA.
-        :param data: data, time series (whether seismogram or CF)
-        :type data: tuple
        :return: HOS cf
        :rtype:
        """
@@ -277,47 +277,28 @@
        elif self.getOrder() == 4:  # this is kurtosis
            y = np.power(xnp, 4)
            y1 = np.power(xnp, 2)
-        elif self.getOrder() == 2:  # this is variance, used for STA/LTA processing
-            y = np.power(xnp, 2)
-            y1 = np.power(xnp, 2)

        # Initialisation
        # t2: long term moving window
        ilta = int(round(self.getTime2() / self.getIncrement()))
-        ista = int(round((self.getTime2() / 10) / self.getIncrement()))  # TODO: still hard coded!!
        lta = y[0]
        lta1 = y1[0]
-        sta = y[0]

        # moving windows
        LTA = np.zeros(len(xnp))
-        STA = np.zeros(len(xnp))
        for j in range(0, len(xnp)):
            if j < 4:
                LTA[j] = 0
-                STA[j] = 0
-            elif j <= ista and self.getOrder() == 2:
-                lta = (y[j] + lta * (j - 1)) / j
-                if self.getOrder() == 2:
-                    sta = (y[j] + sta * (j - 1)) / j
-            # elif j < 4:
            elif j <= ilta:
                lta = (y[j] + lta * (j - 1)) / j
                lta1 = (y1[j] + lta1 * (j - 1)) / j
-                if self.getOrder() == 2:
-                    sta = (y[j] - y[j - ista]) / ista + sta
            else:
                lta = (y[j] - y[j - ilta]) / ilta + lta
                lta1 = (y1[j] - y1[j - ilta]) / ilta + lta1
-                if self.getOrder() == 2:
-                    sta = (y[j] - y[j - ista]) / ista + sta
            # define LTA
            if self.getOrder() == 3:
                LTA[j] = lta / np.power(lta1, 1.5)
            elif self.getOrder() == 4:
                LTA[j] = lta / np.power(lta1, 2)
-            else:
-                LTA[j] = lta
-                STA[j] = sta

        # remove NaN's with first not-NaN-value,
        # so autopicker doesn't pick discontinuity at start of the trace
@@ -326,10 +307,7 @@
            first = ind[0]
            LTA[:first] = LTA[first]

-        if self.getOrder() > 2:
-            self.cf = LTA
-        else:  # order 2 means STA/LTA!
-            self.cf = STA / LTA
+        self.cf = LTA
        self.xcf = x
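A compact, non-recursive illustration of the kurtosis CF that remains after this change (the class itself uses the incremental window updates shown above):

import numpy as np

def kurtosis_cf(x, nlta):
    y, y1 = x ** 4, x ** 2
    cf = np.zeros(len(x))
    for j in range(nlta, len(x)):
        m4 = y[j - nlta:j].mean()    # moving 4th moment (lta)
        m2 = y1[j - nlta:j].mean()   # moving 2nd moment (lta1)
        cf[j] = m4 / m2 ** 2         # order 4: LTA[j] = lta / lta1 ** 2
    return cf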
@@ -339,12 +317,10 @@ class ARZcf(CharacteristicFunction):
        super(ARZcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Parorder"],
                                    fnoise=pickparams["addnoise"])

-    def calcCF(self, data):
+    def calcCF(self):
        """
        function used to calculate the AR prediction error from a single vertical trace. Can be used to pick
        P onsets.
-        :param data:
-        :type data: ~obspy.core.stream.Stream
        :return: ARZ cf
        :rtype:
        """
@@ -475,14 +451,12 @@ class ARHcf(CharacteristicFunction):
        super(ARHcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"],
                                    fnoise=pickparams["addnoise"])

-    def calcCF(self, data):
+    def calcCF(self):
        """
        Function to calculate a characteristic function using autoregressive modelling of the waveform of
        both horizontal traces.
        The waveform is predicted in a moving time window using the calculated AR parameters. The difference
        between the predicted and the actual waveform serves as a characteristic function.
-        :param data: waveform stream
-        :type data: ~obspy.core.stream.Stream
        :return: ARH cf
        :rtype:
        """
@@ -631,14 +605,12 @@ class AR3Ccf(CharacteristicFunction):
        super(AR3Ccf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"],
                                    fnoise=pickparams["addnoise"])

-    def calcCF(self, data):
+    def calcCF(self):
        """
        Function to calculate a characteristic function using autoregressive modelling of the waveform of
        all three traces.
        The waveform is predicted in a moving time window using the calculated AR parameters. The difference
        between the predicted and the actual waveform serves as a characteristic function
-        :param data: stream holding all three traces
-        :type data: ~obspy.core.stream.Stream
        :return: AR3C cf
        :rtype:
        """


@@ -37,7 +37,8 @@ class AutoPicker(object):
        warnings.simplefilter('ignore')

-    def __init__(self, cf, TSNR, PickWindow, iplot=0, aus=None, Tsmooth=None, Pick1=None, fig=None, linecolor='k'):
+    def __init__(self, cf, TSNR, PickWindow, iplot=0, aus=None, Tsmooth=None, Pick1=None,
+                 fig=None, linecolor='k', ogstream=None):
        """
        Create AutoPicker object
        :param cf: characteristic function, on which the picking algorithm is applied

@@ -59,12 +60,15 @@
        :type fig: `~matplotlib.figure.Figure`
        :param linecolor: matplotlib line color string
        :type linecolor: str
+        :param ogstream: original stream (waveform), e.g. for plotting purposes
+        :type ogstream: `~obspy.core.stream.Stream`
        """
        assert isinstance(cf, CharacteristicFunction), "%s is not a CharacteristicFunction object" % str(cf)
        self._linecolor = linecolor
        self._pickcolor_p = 'b'
        self.cf = cf.getCF()
+        self.ogstream = ogstream
        self.Tcf = cf.getTimeArray()
        self.Data = cf.getXCF()
        self.dt = cf.getIncrement()
@@ -173,7 +177,7 @@ class AICPicker(AutoPicker):
        nn = np.isnan(self.cf)
        if len(nn) > 1:
            self.cf[nn] = 0
-        # taper AIC-CF to get rid off side maxima
+        # taper AIC-CF to get rid of side maxima
        tap = np.hanning(len(self.cf))
        aic = tap * self.cf + max(abs(self.cf))
        # smooth AIC-CF
@@ -316,16 +320,7 @@
                    plt.close(fig)
                return
            iislope = islope[0][0:imax + 1]
-            # MP MP change slope calculation
-            # get all maxima of aicsmooth
-            iaicmaxima = argrelmax(aicsmooth)[0]
-            # get first index of maximum after pickindex (indices saved in iaicmaxima)
-            aicmax = iaicmaxima[np.where(iaicmaxima > pickindex)[0]]
-            if len(aicmax) > 0:
-                iaicmax = aicmax[0]
-            else:
-                iaicmax = -1
-            dataslope = aicsmooth[pickindex: iaicmax]
+            dataslope = self.Data[0].data[iislope]
            # calculate slope as polynomial fit of order 1
            xslope = np.arange(0, len(dataslope), 1)
            try:
@@ -336,7 +331,7 @@
                else:
                    self.slope = 1 / (len(dataslope) * self.Data[0].stats.delta) * (datafit[-1] - datafit[0])
                    # normalize slope to maximum of cf to make it unit independent
-                    self.slope /= aicsmooth[iaicmax]
+                    self.slope /= self.Data[0].data[icfmax]
            except Exception as e:
                print("AICPicker: Problems with data fitting! {}".format(e))
@@ -356,6 +351,12 @@
                self.Tcf = self.Tcf[0:len(self.Tcf) - 1]
            ax1.plot(self.Tcf, cf / max(cf), color=self._linecolor, linewidth=0.7, label='(HOS-/AR-) Data')
            ax1.plot(self.Tcf, aicsmooth / max(aicsmooth), 'r', label='Smoothed AIC-CF')
+            # plot the original waveform also for evaluation of the CF and pick
+            if self.ogstream:
+                data = self.ogstream[0].data
+                if len(data) == len(self.Tcf):
+                    ax1.plot(self.Tcf, 0.5 * data / max(data), 'k', label='Seismogram', alpha=0.3, zorder=0,
+                             lw=0.5)
            if self.Pick is not None:
                ax1.plot([self.Pick, self.Pick], [-0.1, 0.5], 'b', linewidth=2, label='AIC-Pick')
            ax1.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime)
@@ -376,7 +377,7 @@
                               label='Signal Window')
                    ax2.axvspan(self.Tcf[iislope[0]], self.Tcf[iislope[-1]], color='g', alpha=0.2, lw=0,
                                label='Slope Window')
-                    ax2.plot(self.Tcf[pickindex: iaicmax], datafit, 'g', linewidth=2,
-                             label='Slope')  # MP MP changed temporarily!
+                    ax2.plot(self.Tcf[iislope], datafit, 'g', linewidth=2,
+                             label='Slope')
                if self.slope is not None:


@@ -15,7 +15,7 @@ import numpy as np
from obspy.core import Stream, UTCDateTime
from scipy.signal import argrelmax

-from pylot.core.util.utils import get_bool, get_none, SetChannelComponents
+from pylot.core.util.utils import get_bool, get_none, SetChannelComponents, common_range

def earllatepicker(X, nfac, TSNR, Pick1, iplot=0, verbosity=1, fig=None, linecolor='k'):
@@ -828,14 +828,22 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
    if len(X) > 1:
        # all three components available
        # make sure, all components have equal lengths
-        ilen = min([len(X[0].data), len(X[1].data), len(X[2].data)])
-        x1 = X[0][0:ilen]
-        x2 = X[1][0:ilen]
-        x3 = X[2][0:ilen]
+        earliest_starttime = min(tr.stats.starttime for tr in X)
+        cuttimes = common_range(X)
+        X = X.slice(cuttimes[0], cuttimes[1])
+        x1, x2, x3 = X[:3]
+        if not (len(x1) == len(x2) == len(x3)):
+            raise PickingFailedException('checksignallength: unequal lengths of components!')
        # get RMS trace
        rms = np.sqrt((np.power(x1, 2) + np.power(x2, 2) + np.power(x3, 2)) / 3)
+        ilen = len(rms)
+        dt = earliest_starttime - X[0].stats.starttime
+        pick -= dt
    else:
        x1 = X[0].data
+        x2 = x3 = None
        ilen = len(x1)
        rms = abs(x1)
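A sketch of the common-time-window idea behind common_range and the slice above, using obspy's bundled demo stream (common_range is assumed to return such a start/end pair):

from obspy import read

st = read()                                # obspy demo stream, three components
t0 = max(tr.stats.starttime for tr in st)  # latest common start
t1 = min(tr.stats.endtime for tr in st)    # earliest common end
st = st.slice(t0, t1)                      # all traces now cover the same window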
@@ -874,6 +882,10 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
            fig._tight = True
        ax = fig.add_subplot(111)
        ax.plot(t, rms, color=linecolor, linewidth=0.7, label='RMS Data')
+        ax.plot(t, x1, 'k', alpha=0.3, lw=0.3, zorder=0)
+        if x2 is not None and x3 is not None:
+            ax.plot(t, x2, 'r', alpha=0.3, lw=0.3, zorder=0)
+            ax.plot(t, x3, 'g', alpha=0.3, lw=0.3, zorder=0)
        ax.axvspan(t[inoise[0]], t[inoise[-1]], color='y', alpha=0.2, lw=0, label='Noise Window')
        ax.axvspan(t[isignal[0]], t[isignal[-1]], color='b', alpha=0.2, lw=0, label='Signal Window')
        ax.plot([t[isignal[0]], t[isignal[len(isignal) - 1]]],
@@ -883,6 +895,7 @@ def checksignallength(X, pick, minsiglength, pickparams, iplot=0, fig=None, line
        ax.set_xlabel('Time [s] since %s' % X[0].stats.starttime)
        ax.set_ylabel('Counts')
        ax.set_title('Check for Signal Length, Station %s' % X[0].stats.station)
+        ax.set_xlim(pickparams["pstart"], pickparams["pstop"])
        ax.set_yticks([])
        if plt_flag == 1:
            fig.show()


@@ -5,14 +5,17 @@ import traceback

import cartopy.crs as ccrs
import cartopy.feature as cf
+from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
import matplotlib
import matplotlib.patheffects as PathEffects
import matplotlib.pyplot as plt
import numpy as np
import obspy
-from PySide2 import QtWidgets
+from PySide2 import QtWidgets, QtGui
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
+from obspy import UTCDateTime

from pylot.core.util.utils import identifyPhaseID
from scipy.interpolate import griddata
@@ -24,10 +27,10 @@ matplotlib.use('Qt5Agg')

class MplCanvas(FigureCanvas):

-    def __init__(self, parent=None, extern_axes=None, width=5, height=4, dpi=100):
+    def __init__(self, extern_axes=None, projection=None, width=15, height=5, dpi=100):
        if extern_axes is None:
            self.fig = plt.figure(figsize=(width, height), dpi=dpi)
-            self.axes = self.fig.add_subplot(111)
+            self.axes = self.fig.add_subplot(111, projection=projection)
        else:
            self.fig = extern_axes.figure
            self.axes = extern_axes
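Passing a cartopy projection through add_subplot creates a GeoAxes instead of a plain Axes, which is presumably why MplCanvas gained the projection argument; a minimal sketch:

import cartopy.crs as ccrs
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())  # GeoAxes with map methods
ax.coastlines()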
@@ -59,24 +62,30 @@ class Array_map(QtWidgets.QWidget):
        self.parameter = parameter if parameter else parent._inputs

        self.picks_rel = {}
+        self.picks_rel_mean_corrected = {}
        self.marked_stations = []
        self.highlighted_stations = []

        # call functions to draw everything
+        self.projection = ccrs.PlateCarree()
        self.init_graphics()
+        self.ax = self.canvas.axes
+        self.ax.set_adjustable('datalim')
+
        self.init_stations()
        self.init_crtpyMap()
        self.init_map()

        # set original map limits to fall back on when home button is pressed
-        self.org_xlim = self.canvas.axes.get_xlim()
-        self.org_ylim = self.canvas.axes.get_ylim()
+        self.org_xlim = self.ax.get_xlim()
+        self.org_ylim = self.ax.get_ylim()

        # initial map without event
-        self.canvas.axes.set_xlim(self.org_xlim[0], self.org_xlim[1])
-        self.canvas.axes.set_ylim(self.org_ylim[0], self.org_ylim[1])
+        self.ax.set_xlim(self.org_xlim[0], self.org_xlim[1])
+        self.ax.set_ylim(self.org_ylim[0], self.org_ylim[1])

        self._style = None if not hasattr(parent, '_style') else parent._style

    def init_map(self):
        self.init_colormap()
        self.connectSignals()
@@ -89,23 +98,24 @@
        # initialize figure elements
        if self.extern_plot_axes is None:
-            self.canvas = MplCanvas(self)
-            self.plotWidget = FigureCanvas(self.canvas.fig)
+            self.canvas = MplCanvas(projection=self.projection)
        else:
-            self.canvas = MplCanvas(self, extern_axes=self.extern_plot_axes)
-            self.plotWidget = FigureCanvas(self.canvas.fig)
+            self.canvas = MplCanvas(extern_axes=self.extern_plot_axes)
+
+        self.plotWidget = self.canvas

        # initialize GUI elements
        self.status_label = QtWidgets.QLabel()
        self.map_reset_button = QtWidgets.QPushButton('Reset Map View')
        self.save_map_button = QtWidgets.QPushButton('Save Map')
        self.go2eq_button = QtWidgets.QPushButton('Go to Event Location')
+        self.subtract_mean_cb = QtWidgets.QCheckBox('Subtract mean')

        self.main_box = QtWidgets.QVBoxLayout()
        self.setLayout(self.main_box)

        self.top_row = QtWidgets.QHBoxLayout()
-        self.main_box.addLayout(self.top_row, 1)
+        self.main_box.addLayout(self.top_row, 0)

        self.comboBox_phase = QtWidgets.QComboBox()
        self.comboBox_phase.insertItem(0, 'P')
@@ -124,8 +134,8 @@
        self.cmaps_box = QtWidgets.QComboBox()
        self.cmaps_box.setMaxVisibleItems(20)
        [self.cmaps_box.addItem(map_name) for map_name in sorted(plt.colormaps())]
-        # try to set to viridis as default
-        self.cmaps_box.setCurrentIndex(self.cmaps_box.findText('viridis'))
+        # try to set to plasma as default
+        self.cmaps_box.setCurrentIndex(self.cmaps_box.findText('plasma'))

        self.top_row.addWidget(QtWidgets.QLabel('Select a phase: '))
        self.top_row.addWidget(self.comboBox_phase)

@@ -138,14 +148,15 @@
        self.top_row.addWidget(self.auto_refresh_box)
        self.top_row.addWidget(self.refresh_button)

-        self.main_box.addWidget(self.plotWidget, 1)
+        self.main_box.addWidget(self.plotWidget, 10)

        self.bot_row = QtWidgets.QHBoxLayout()
-        self.main_box.addLayout(self.bot_row, 0.3)
+        self.main_box.addLayout(self.bot_row, 0)
        self.bot_row.addWidget(QtWidgets.QLabel(''), 5)
        self.bot_row.addWidget(self.map_reset_button, 2)
        self.bot_row.addWidget(self.go2eq_button, 2)
        self.bot_row.addWidget(self.save_map_button, 2)
+        self.bot_row.addWidget(self.subtract_mean_cb, 0)
        self.bot_row.addWidget(self.status_label, 5)

    def init_colormap(self):
@@ -153,14 +164,12 @@
        self.init_lat_lon_grid()

    def init_crtpyMap(self):
-        self.canvas.axes.cla()
-        self.canvas.axes = plt.axes(projection=ccrs.PlateCarree())
-        self.canvas.axes.add_feature(cf.LAND)
-        self.canvas.axes.add_feature(cf.OCEAN)
-        self.canvas.axes.add_feature(cf.COASTLINE, linewidth=1, edgecolor='gray')
-        self.canvas.axes.add_feature(cf.BORDERS, alpha=0.7)
-        self.canvas.axes.add_feature(cf.LAKES, alpha=0.7)
-        self.canvas.axes.add_feature(cf.RIVERS, linewidth=1)
+        self.ax.add_feature(cf.LAND)
+        self.ax.add_feature(cf.OCEAN)
+        self.ax.add_feature(cf.COASTLINE, linewidth=1, edgecolor='gray')
+        self.ax.add_feature(cf.BORDERS, alpha=0.7)
+        self.ax.add_feature(cf.LAKES, alpha=0.7)
+        self.ax.add_feature(cf.RIVERS, linewidth=1)

        # parallels and meridians
        self.add_merid_paral()

@@ -168,12 +177,8 @@
        self.canvas.fig.tight_layout()

    def add_merid_paral(self):
-        self.gridlines = self.canvas.axes.gridlines(draw_labels=False, alpha=0.6, color='gray',
-                                                    linewidth=self.linewidth / 2, zorder=7)
-        # TODO: current cartopy version does not support label removal. Devs are working on it.
-        # Should be fixed in coming cartopy versions
-        # self.gridlines.xformatter = LONGITUDE_FORMATTER
-        # self.gridlines.yformatter = LATITUDE_FORMATTER
+        self.gridlines = self.ax.gridlines(draw_labels=False, alpha=0.6, color='gray',
+                                           linewidth=self.linewidth / 2, zorder=7, crs=ccrs.PlateCarree())

    def remove_merid_paral(self):
        if len(self.gridlines.xline_artists):
@@ -181,24 +186,24 @@
            self.gridlines.yline_artists[0].remove()

    def org_map_view(self):
-        self.canvas.axes.set_xlim(self.org_xlim[0], self.org_xlim[1])
-        self.canvas.axes.set_ylim(self.org_ylim[0], self.org_ylim[1])
+        self.ax.set_xlim(self.org_xlim[0], self.org_xlim[1])
+        self.ax.set_ylim(self.org_ylim[0], self.org_ylim[1])
        # parallels and meridians
-        self.remove_merid_paral()
-        self.add_merid_paral()
-        self.canvas.axes.figure.canvas.draw_idle()
+        #self.remove_merid_paral()
+        #self.add_merid_paral()
+        self.canvas.draw_idle()

    def go2eq(self):
        if self.eventLoc:
            lats, lons = self.eventLoc
-            self.canvas.axes.set_xlim(lons - 10, lons + 10)
-            self.canvas.axes.set_ylim(lats - 5, lats + 5)
+            self.ax.set_xlim(lons - 10, lons + 10)
+            self.ax.set_ylim(lats - 5, lats + 5)
            # parallels and meridians
-            self.remove_merid_paral()
-            self.add_merid_paral()
-            self.canvas.axes.figure.canvas.draw_idle()
+            #self.remove_merid_paral()
+            #self.add_merid_paral()
+            self.canvas.draw_idle()
        else:
            self.status_label.setText('No event information available')
@@ -212,6 +217,7 @@
        self.map_reset_button.clicked.connect(self.org_map_view)
        self.go2eq_button.clicked.connect(self.go2eq)
        self.save_map_button.clicked.connect(self.saveFigure)
+        self.subtract_mean_cb.stateChanged.connect(self.toggle_subtract_mean)

        self.plotWidget.mpl_connect('motion_notify_event', self.mouse_moved)
        self.plotWidget.mpl_connect('scroll_event', self.mouse_scroll)
@@ -220,21 +226,32 @@
    # set mouse events -----------------------------------------------------
    def mouse_moved(self, event):
-        if not event.inaxes == self.canvas.axes:
+        if not event.inaxes == self.ax:
            return
+        else:
+            cont, inds = self.sc.contains(event)
            lat = event.ydata
            lon = event.xdata
-            self.status_label.setText('Latitude: {:3.5f}, Longitude: {:3.5f}'.format(lat, lon))
+            text = f'Longitude: {lon:3.3f}, Latitude: {lat:3.3f}'
+            if cont:
+                indices = inds['ind']
+                text += ' | Station: ' if len(indices) == 1 else ' | Stations: '
+                text += ' - '.join([self._station_onpick_ids[index] for index in indices[:5]])
+                if len(indices) > 5:
+                    text += '...'
+            self.status_label.setText(text)

    def mouse_scroll(self, event):
-        if not event.inaxes == self.canvas.axes:
+        if not event.inaxes == self.ax:
            return

        zoom = {'up': 1. / 2., 'down': 2.}

        if event.button in zoom:
-            xlim = self.canvas.axes.get_xlim()
-            ylim = self.canvas.axes.get_ylim()
+            xlim = self.ax.get_xlim()
+            ylim = self.ax.get_ylim()
            x, y = event.xdata, event.ydata
@@ -246,24 +263,24 @@
            yb = y - 0.5 * ydiff
            yt = y + 0.5 * ydiff

-            self.canvas.axes.set_xlim(xl, xr)
-            self.canvas.axes.set_ylim(yb, yt)
+            self.ax.set_xlim(xl, xr)
+            self.ax.set_ylim(yb, yt)
            # parallels and meridians
-            self.remove_merid_paral()
-            self.add_merid_paral()
-            self.canvas.axes.figure.canvas.draw_idle()
+            #self.remove_merid_paral()
+            #self.add_merid_paral()
+            self.ax.figure.canvas.draw_idle()

    def mouseLeftPress(self, event):
-        if not event.inaxes == self.canvas.axes:
+        if not event.inaxes == self.ax:
            return
        self.map_x = event.xdata
        self.map_y = event.ydata
-        self.map_xlim = self.canvas.axes.get_xlim()
-        self.map_ylim = self.canvas.axes.get_ylim()
+        self.map_xlim = self.ax.get_xlim()
+        self.map_ylim = self.ax.get_ylim()

    def mouseLeftRelease(self, event):
-        if not event.inaxes == self.canvas.axes:
+        if not event.inaxes == self.ax:
            return
        new_x = event.xdata
        new_y = event.ydata

@@ -271,13 +288,13 @@
        dx = new_x - self.map_x
        dy = new_y - self.map_y

-        self.canvas.axes.set_xlim((self.map_xlim[0] - dx, self.map_xlim[1] - dx))
-        self.canvas.axes.set_ylim(self.map_ylim[0] - dy, self.map_ylim[1] - dy)
+        self.ax.set_xlim((self.map_xlim[0] - dx, self.map_xlim[1] - dx))
+        self.ax.set_ylim(self.map_ylim[0] - dy, self.map_ylim[1] - dy)
        # parallels and meridians
-        self.remove_merid_paral()
-        self.add_merid_paral()
-        self.canvas.axes.figure.canvas.draw_idle()
+        #self.remove_merid_paral()
+        #self.add_merid_paral()
+        self.ax.figure.canvas.draw_idle()

    def onpick(self, event):
        btn_msg = {1: ' in selection. Aborted', 2: ' to delete a pick on. Aborted', 3: ' to display info.'}
@@ -357,12 +374,6 @@
    def get_max_from_stations(self, key):
        return self._from_dict(max, key)

-    def get_min_from_picks(self):
-        return min(self.picks_rel.values())
-
-    def get_max_from_picks(self):
-        return max(self.picks_rel.values())
-
    def current_picks_dict(self):
        picktype = self.comboBox_am.currentText().split(' ')[0]
        auto_manu = {'auto': self.autopicks_dict,
@@ -407,22 +418,34 @@
                    print('Cannot display pick for station {}. Reason: {}'.format(station_name, e))
            return picks, uncertainties

-        def get_picks_rel(picks):
+        def get_picks_rel(picks, func=min):
            picks_rel = {}
            picks_utc = []
            for pick in picks.values():
-                if type(pick) is obspy.core.utcdatetime.UTCDateTime:
-                    picks_utc.append(pick)
+                if type(pick) is UTCDateTime:
+                    picks_utc.append(pick.timestamp)
            if picks_utc:
-                self._earliest_picktime = min(picks_utc)
+                self._reference_picktime = UTCDateTime(func(picks_utc))
            for st_id, pick in picks.items():
-                if type(pick) is obspy.core.utcdatetime.UTCDateTime:
-                    pick -= self._earliest_picktime
+                if type(pick) is UTCDateTime:
+                    pick -= self._reference_picktime
                picks_rel[st_id] = pick
            return picks_rel

+        def get_picks_rel_mean_corr(picks):
+            return get_picks_rel(picks, func=np.nanmean)
+
        self.picks, self.uncertainties = get_picks(self.stations_dict)
        self.picks_rel = get_picks_rel(self.picks)
+        self.picks_rel_mean_corrected = get_picks_rel_mean_corr(self.picks)
+
+    def toggle_subtract_mean(self):
+        if self.subtract_mean_cb.isChecked():
+            cmap = 'seismic'
+        else:
+            cmap = 'viridis'
+        self.cmaps_box.setCurrentIndex(self.cmaps_box.findText(cmap))
+        self._refresh_drawings()

    def init_lat_lon_dimensions(self):
        # init minimum and maximum lon and lat dimensions
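A standalone sketch of the relative-pick computation above for the mean-corrected case (station IDs and times hypothetical):

import numpy as np
from obspy import UTCDateTime

picks = {'GR.ST01': UTCDateTime('2024-01-01T12:00:05'),
         'GR.ST02': UTCDateTime('2024-01-01T12:00:08')}
stamps = [p.timestamp for p in picks.values()]
ref = UTCDateTime(np.nanmean(stamps))                 # func=np.nanmean variant
picks_rel = {k: p - ref for k, p in picks.items()}    # {'GR.ST01': -1.5, 'GR.ST02': 1.5}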
@@ -453,11 +476,12 @@
        return stations, latitudes, longitudes

    def get_picks_lat_lon(self):
+        picks_rel = self.picks_rel_mean_corrected if self.subtract_mean_cb.isChecked() else self.picks_rel
        picks = []
        uncertainties = []
        latitudes = []
        longitudes = []
-        for st_id, pick in self.picks_rel.items():
+        for st_id, pick in picks_rel.items():
            picks.append(pick)
            uncertainties.append(self.uncertainties.get(st_id))
            latitudes.append(self.stations_dict[st_id]['latitude'])
@@ -469,13 +493,20 @@ class Array_map(QtWidgets.QWidget):
stat_dict = self.stations_dict['{}.{}'.format(network, station)]
lat = stat_dict['latitude']
lon = stat_dict['longitude']
-self.highlighted_stations.append(self.canvas.axes.scatter(lon, lat, s=self.pointsize, edgecolors=color,
+self.highlighted_stations.append(self.ax.scatter(lon, lat, s=self.pointsize, edgecolors=color,
facecolors='none', zorder=12,
transform=ccrs.PlateCarree(), label='deleted'))
def openPickDlg(self, ind):
-wfdata = self._parent.get_data().get_wf_data()
-wfdata_comp = self._parent.get_data().get_wf_dataComp()
+try:
+wfdata = self._parent.get_data().getWFData()
+except AttributeError:
+QtWidgets.QMessageBox.warning(
+self, "PyLoT Warning",
+"No waveform data found. Check if they were already loaded in Waveform plot tab."
+)
+return
+wfdata_comp = self._parent.get_data().getAltWFdata()
for index in ind:
network, station = self._station_onpick_ids[index].split('.')[:2]
pyl_mw = self._parent
@@ -490,7 +521,6 @@ class Array_map(QtWidgets.QWidget):
picks=self._parent.get_current_event().getPick(station),
autopicks=self._parent.get_current_event().getAutopick(station),
filteroptions=self._parent.filteroptions, metadata=self.metadata,
-model=self.parameter.get('taup_model'),
event=pyl_mw.get_current_event())
except Exception as e:
message = 'Could not generate Plot for station {st}.\n {er}'.format(st=station, er=e)
@@ -518,20 +548,27 @@ class Array_map(QtWidgets.QWidget):
print(message, e)
print(traceback.format_exc())
-def draw_contour_filled(self, nlevel=50):
+def draw_contour_filled(self, nlevel=51):
-levels = np.linspace(self.get_min_from_picks(), self.get_max_from_picks(), nlevel)
+if self.subtract_mean_cb.isChecked():
+abs_max = self.get_residuals_absmax()
+levels = np.linspace(-abs_max, abs_max, nlevel)
+else:
+levels = np.linspace(min(self.picks_rel.values()), max(self.picks_rel.values()), nlevel)
-self.contourf = self.canvas.axes.contourf(self.longrid, self.latgrid, self.picksgrid_active, levels,
+self.contourf = self.ax.contourf(self.longrid, self.latgrid, self.picksgrid_active, levels,
linewidths=self.linewidth * 5, transform=ccrs.PlateCarree(),
alpha=0.4, zorder=8, cmap=self.get_colormap())
+def get_residuals_absmax(self):
+return np.max(np.absolute(list(self.picks_rel_mean_corrected.values())))
def get_colormap(self):
return plt.get_cmap(self.cmaps_box.currentText())
def scatter_all_stations(self):
stations, lats, lons = self.get_st_lat_lon_for_plot()
-self.sc = self.canvas.axes.scatter(lons, lats, s=self.pointsize * 3, facecolor='none', marker='.',
+self.sc = self.ax.scatter(lons, lats, s=self.pointsize * 3, facecolor='none', marker='.',
zorder=10, picker=True, edgecolor='0.5', label='Not Picked',
transform=ccrs.PlateCarree())
@@ -539,7 +576,7 @@ class Array_map(QtWidgets.QWidget):
self._station_onpick_ids = stations
if self.eventLoc:
lats, lons = self.eventLoc
-self.sc_event = self.canvas.axes.scatter(lons, lats, s=5 * self.pointsize, facecolor='red', zorder=11,
+self.sc_event = self.ax.scatter(lons, lats, s=5 * self.pointsize, facecolor='red', zorder=11,
label='Event (might be outside map region)', marker='*',
edgecolors='black',
transform=ccrs.PlateCarree())
@@ -555,7 +592,12 @@ class Array_map(QtWidgets.QWidget):
for uncertainty in uncertainties])
cmap = self.get_colormap()
-self.sc_picked = self.canvas.axes.scatter(lons, lats, s=sizes, edgecolors='white', cmap=cmap,
+vmin = vmax = None
+if self.subtract_mean_cb.isChecked():
+vmin, vmax = -self.get_residuals_absmax(), self.get_residuals_absmax()
+self.sc_picked = self.ax.scatter(lons, lats, s=sizes, edgecolors='white', cmap=cmap, vmin=vmin, vmax=vmax,
c=picks, zorder=11, label='Picked', transform=ccrs.PlateCarree())
def annotate_ax(self):
@@ -574,20 +616,20 @@ class Array_map(QtWidgets.QWidget):
if st in self.marked_stations:
color = 'red'
self.annotations.append(
-self.canvas.axes.annotate(' %s' % st, xy=(x + 0.003, y + 0.003), fontsize=self.pointsize / 4.,
+self.ax.annotate(f'{st}', xy=(x + 0.003, y + 0.003), fontsize=self.pointsize / 4.,
fontweight='semibold', color=color, alpha=0.8,
transform=ccrs.PlateCarree(), zorder=14,
path_effects=[PathEffects.withStroke(
linewidth=self.pointsize / 15., foreground='k')]))
-self.legend = self.canvas.axes.legend(loc=1, framealpha=1)
+self.legend = self.ax.legend(loc=1, framealpha=1)
self.legend.set_zorder(100)
self.legend.get_frame().set_facecolor((1, 1, 1, 0.95))
def add_cbar(self, label):
-self.cbax_bg = inset_axes(self.canvas.axes, width="6%", height="75%", loc=5)
-cbax = inset_axes(self.canvas.axes, width='2%', height='70%', loc=5)
-cbar = self.canvas.axes.figure.colorbar(self.sc_picked, cax=cbax)
+self.cbax_bg = inset_axes(self.ax, width="6%", height="75%", loc=5)
+cbax = inset_axes(self.ax, width='2%', height='70%', loc=5)
+cbar = self.ax.figure.colorbar(self.sc_picked, cax=cbax)
cbar.set_label(label)
cbax.yaxis.tick_left()
cbax.yaxis.set_label_position('left')
@@ -632,7 +674,9 @@ class Array_map(QtWidgets.QWidget):
if picks_available:
self.scatter_picked_stations()
if hasattr(self, 'sc_picked'):
-self.cbar = self.add_cbar(label='Time relative to first onset ({}) [s]'.format(self._earliest_picktime))
+self.cbar = self.add_cbar(
+label='Time relative to reference onset ({}) [s]'.format(self._reference_picktime)
+)
self.comboBox_phase.setEnabled(True)
else:
self.comboBox_phase.setEnabled(False)


@@ -27,6 +27,10 @@ class Metadata(object):
# saves which metadata files are from obspy dmt
self.obspy_dmt_invs = []
if inventory:
+# make sure that no accidental backslashes mess up the path
+if isinstance(inventory, str):
+inventory = inventory.replace('\\', '/')
+inventory = os.path.abspath(inventory)
if os.path.isdir(inventory):
self.add_inventory(inventory)
if os.path.isfile(inventory):
@@ -55,6 +59,8 @@ class Metadata(object):
:type path_to_inventory: str
:return: None
"""
+path_to_inventory = path_to_inventory.replace('\\', '/')
+path_to_inventory = os.path.abspath(path_to_inventory)
assert (os.path.isdir(path_to_inventory)), '{} is no directory'.format(path_to_inventory)
if path_to_inventory not in self.inventories:
self.inventories.append(path_to_inventory)
@@ -262,9 +268,6 @@ class Metadata(object):
if not fnames:
# search for station name in filename
fnames = glob.glob(os.path.join(path_to_inventory, '*' + station + '*'))
-if not fnames:
-# search for network name in filename
-fnames = glob.glob(os.path.join(path_to_inventory, '*' + network + '*'))
if not fnames:
if self.verbosity:
print('Could not find filenames matching station name, network name or seed id')
@@ -276,7 +279,7 @@ class Metadata(object):
continue
invtype, robj = self._read_metadata_file(os.path.join(path_to_inventory, fname))
try:
-# robj.get_coordinates(station_seed_id)  # TODO: Commented out, failed with Parser, is this needed?
+robj.get_coordinates(station_seed_id)
self.inventory_files[fname] = {'invtype': invtype,
'data': robj}
if station_seed_id in self.seed_ids.keys():
@@ -284,6 +287,7 @@ class Metadata(object):
self.seed_ids[station_seed_id] = fname
return True
except Exception as e:
+logging.warning(e)
continue
print('Could not find metadata for station_seed_id {} in path {}'.format(station_seed_id, path_to_inventory))
@@ -338,6 +342,19 @@ class Metadata(object):
return inv, exc
def time_from_header(header):
"""
Function takes in the second line of a .gse file and extracts the date and time from it.
:param header: second line from .gse file
:type header: string
:return: a list of integers of form [year, month, day, hour, minute, second, microsecond]
"""
timeline = header.split(' ')
time = timeline[1].split('/') + timeline[2].split(':')
time = time[:-1] + time[-1].split('.')
return [int(t) for t in time]
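Usage sketch, assuming a GSE WID2-style second line (the exact header layout is an assumption here):

# only the date field (index 1) and time field (index 2) are parsed
header = 'WID2 2015/01/26 17:47:25.123 BHZ'
print(time_from_header(header))  # [2015, 1, 26, 17, 47, 25, 123]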
def check_time(datetime):
"""
Function takes in date and time as a list and validates its values by trying to create a UTCDateTime object from it
@@ -375,6 +392,167 @@ def check_time(datetime):
return False
# TODO: change root to datapath
def get_file_list(root_dir):
"""
Function collects all *.gse files from a given directory.
:param root_dir: a directory containing the .gse files
:type root_dir: string
:return: returns a list of filenames (without path to them)
"""
file_list = glob.glob1(root_dir, '*.gse')
return file_list
def checks_station_second(datetime, file):
"""
Function uses the given list to check if the parameter 'second' is set to 60 by mistake
and sets the time correctly if so. Can only correct time if no date change would be necessary.
:param datetime: [year, month, day, hour, minute, second, microsecond]
:param file: filename used in the error message if the time cannot be corrected
:return: returns the input with the correct value for second
"""
if datetime[5] == 60:
if datetime[4] == 59:
if datetime[3] == 23:
err_msg = 'Date should be next day. ' \
'File not changed: {0}'.format(file)
raise ValueError(err_msg)
else:
datetime[3] += 1
datetime[4] = 0
datetime[5] = 0
else:
datetime[4] += 1
datetime[5] = 0
return datetime
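Usage sketch: a second of 60 rolls over into the next minute, while a rollover that would change the date raises a ValueError:

print(checks_station_second([2015, 1, 26, 17, 47, 60, 0], 'example.gse'))
# -> [2015, 1, 26, 17, 48, 0, 0]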
def make_time_line(line, datetime):
"""
Function takes in the original line from a .gse file and a list of date and
time values to make a new line with corrected date and time.
:param line: second line from .gse file.
:type line: string
:param datetime: list of integers [year, month, day, hour, minute, second, microsecond]
:type datetime: list
:return: returns a string to write it into a file.
"""
ins_form = '{0:02d}:{1:02d}:{2:02d}.{3:03d}'
insertion = ins_form.format(int(datetime[3]),
int(datetime[4]),
int(datetime[5]),
int(datetime[6] * 1e-3))
newline = line[:16] + insertion + line[28:]
return newline
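Chaining the three helpers on the same hypothetical WID2-style line shows the whole correction; note that make_time_line only rewrites characters 16 to 27, the time field:

line = 'WID2 2015/01/26 17:47:60.000 BHZ'
corrected = checks_station_second(time_from_header(line), 'example.gse')
print(make_time_line(line, corrected))  # 'WID2 2015/01/26 17:48:00.000 BHZ'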
def evt_head_check(root_dir, out_dir=None):
"""
A function to make sure that an arbitrary number of .gse files have correct values in their header.
:param root_dir: a directory leading to the .gse files.
:type root_dir: string
:param out_dir: a directory to store the corrected files somewhere else.
:return: returns nothing
"""
if not out_dir:
print('WARNING files are going to be overwritten!')
inp = str(input('Continue? [y/N]'))
if not inp == 'y':
sys.exit()
filelist = get_file_list(root_dir)
nfiles = 0
for file in filelist:
infile = open(os.path.join(root_dir, file), 'r')
lines = infile.readlines()
infile.close()
datetime = time_from_header(lines[1])
if check_time(datetime):
continue
else:
nfiles += 1
datetime = checks_station_second(datetime, file)
print('writing ' + file)
# write File
lines[1] = make_time_line(lines[1], datetime)
if not out_dir:
out = open(os.path.join(root_dir, file), 'w')
out.writelines(lines)
out.close()
else:
out = open(os.path.join(out_dir, file), 'w')
out.writelines(lines)
out.close()
print(nfiles)
def read_metadata(path_to_inventory):
"""
take path_to_inventory and return either the corresponding list of files
found or the Parser object for a network dataless seed volume to prevent
read overhead for large dataless seed volumes
:param path_to_inventory:
:return: tuple containing either a list of files or an `obspy.io.xseed.Parser`
object, and the inventory type found
:rtype: tuple
"""
dlfile = list()
invfile = list()
respfile = list()
# possible file extensions specified here:
inv = dict(dless=dlfile, xml=invfile, resp=respfile, dseed=dlfile[:])
if os.path.isfile(path_to_inventory):
ext = os.path.splitext(path_to_inventory)[1].split('.')[1]
inv[ext] += [path_to_inventory]
else:
for ext in inv.keys():
inv[ext] += glob.glob1(path_to_inventory, '*.{0}'.format(ext))
invtype = key_for_set_value(inv)
if invtype is None:
print("Neither dataless-SEED file, inventory-xml file nor "
"RESP-file found!")
print("!!WRONG CALCULATION OF SOURCE PARAMETERS!!")
robj = None,
elif invtype == 'dless': # prevent multiple read of large dlsv
print("Reading metadata information from dataless-SEED file ...")
if len(inv[invtype]) == 1:
fullpath_inv = os.path.join(path_to_inventory, inv[invtype][0])
robj = Parser(fullpath_inv)
else:
robj = inv[invtype]
else:
print("Reading metadata information from inventory-xml file ...")
robj = read_inventory(inv[invtype])
return invtype, robj
# idea to optimize read_metadata
# def read_metadata_new(path_to_inventory):
# metadata_objects = []
# # read multiple files from directory
# if os.path.isdir(path_to_inventory):
# fnames = os.listdir(path_to_inventory)
# # read single file
# elif os.path.isfile(path_to_inventory):
# fnames = [path_to_inventory]
# else:
# print("Neither dataless-SEED file, inventory-xml file nor "
# "RESP-file found!")
# print("!!WRONG CALCULATION OF SOURCE PARAMETERS!!")
# fnames = []
#
# for fname in fnames:
# path_to_inventory_filename = os.path.join(path_to_inventory, fname)
# try:
# ftype, robj = read_metadata_file(path_to_inventory_filename)
# metadata_objects.append((ftype, robj))
# except Exception as e:
# print('Could not read metadata file {} '
# 'because of the following Exception: {}'.format(path_to_inventory_filename, e))
# return metadata_objects
def restitute_trace(input_tuple):
def no_metadata(tr, seed_id):
print('no metadata file found '
@@ -474,6 +652,8 @@ def restitute_data(data, metadata, unit='VEL', force=False, ncores=0):
"""
# data = remove_underscores(data)
+if not data:
+return
# loop over traces
input_tuples = []
@@ -481,6 +661,11 @@ def restitute_data(data, metadata, unit='VEL', force=False, ncores=0):
input_tuples.append((tr, metadata, unit, force))
data.remove(tr)
+if ncores == 0:
+result = []
+for input_tuple in input_tuples:
+result.append(restitute_trace(input_tuple))
+else:
pool = gen_Pool(ncores)
result = pool.imap_unordered(restitute_trace, input_tuples)
pool.close()
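The new ncores == 0 branch gives a serial fallback that avoids pool overhead and eases debugging; the same pattern in isolation (hypothetical helper, not PyLoT API):

from multiprocessing import Pool

def run_maybe_parallel(func, items, ncores=0):
    # ncores == 0: run in the current process; otherwise fan out
    # over a worker pool (func must be picklable for that case).
    if ncores == 0:
        return [func(item) for item in items]
    with Pool(ncores) as pool:
        return list(pool.imap_unordered(func, items))

print(run_maybe_parallel(abs, [-3, -2, 1]))  # [3, 2, 1]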


@@ -22,14 +22,11 @@ class Event(ObsPyEvent):
:param path: path to event directory
:type path: str
"""
-# TODO: remove rootpath and database
self.pylot_id = path.split('/')[-1]
# initialize super class
super(Event, self).__init__(resource_id=ResourceIdentifier('smi:local/' + self.pylot_id))
self.path = path
-self.database = path.split('/')[-2]
self.datapath = os.path.split(path)[0]  # path.split('/')[-3]
-self.rootpath = '/' + os.path.join(*path.split('/')[:-3])
self.pylot_autopicks = {}
self.pylot_picks = {}
self.notes = ''


@@ -20,7 +20,7 @@ import matplotlib
matplotlib.use('Qt5Agg')
sys.path.append(os.path.join('/'.join(sys.argv[0].split('/')[:-1]), '../../..'))
-from pylot.core.io.project import Project
+from PyLoT import Project
from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.array_map import Array_map


@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
+from functools import lru_cache
try:
import pyqtgraph as pg
@@ -25,14 +26,14 @@ def pick_linestyle_pg(picktype, key):
:return: Qt line style parameters
:rtype:
"""
-linestyles_manu = {'mpp': (QtCore.Qt.SolidLine, 2.),
-'epp': (QtCore.Qt.DashLine, 1.),
-'lpp': (QtCore.Qt.DashLine, 1.),
-'spe': (QtCore.Qt.DashLine, 1.)}
+linestyles_manu = {'mpp': (QtCore.Qt.SolidLine, 2),
+'epp': (QtCore.Qt.DashLine, 1),
+'lpp': (QtCore.Qt.DashLine, 1),
+'spe': (QtCore.Qt.DashLine, 1)}
-linestyles_auto = {'mpp': (QtCore.Qt.DotLine, 2.),
-'epp': (QtCore.Qt.DashDotLine, 1.),
-'lpp': (QtCore.Qt.DashDotLine, 1.),
-'spe': (QtCore.Qt.DashDotLine, 1.)}
+linestyles_auto = {'mpp': (QtCore.Qt.DotLine, 2),
+'epp': (QtCore.Qt.DashDotLine, 1),
+'lpp': (QtCore.Qt.DashDotLine, 1),
+'spe': (QtCore.Qt.DashDotLine, 1)}
linestyles = {'manual': linestyles_manu,
'auto': linestyles_auto}
return linestyles[picktype][key]
@@ -80,6 +81,7 @@ def which(program, parameter):
return None
+@lru_cache(maxsize=128)
def make_pen(picktype, phase, key, quality):
"""
Make PyQtGraph.QPen

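make_pen is called with a small set of recurring (picktype, phase, key, quality) tuples, so memoization skips rebuilding identical QPen objects on every redraw. The general pattern, with a stand-in for the pen factory:

from functools import lru_cache

@lru_cache(maxsize=128)
def make_style(picktype, phase, key, quality):
    # any pure function of hashable arguments can be cached this way
    return (picktype, phase, key, quality)

make_style('manual', 'P', 'mpp', 0)
make_style('manual', 'P', 'mpp', 0)
print(make_style.cache_info())  # hits=1, misses=1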

@@ -6,7 +6,7 @@ Created on Wed Jan 26 17:47:25 2015
@author: sebastianw
"""
-from pylot.core.io.data import SeiscompDataStructure, PilotDataStructure
+from pylot.core.io.data import SeiscompDataStructure, PilotDataStructure, ObspyDMTdataStructure
DATASTRUCTURE = {'PILOT': PilotDataStructure, 'SeisComP': SeiscompDataStructure,
-'obspyDMT': PilotDataStructure, None: PilotDataStructure}
+'obspyDMT': ObspyDMTdataStructure, None: PilotDataStructure}


@@ -8,6 +8,7 @@ import platform
import re
import subprocess
import warnings
+from typing import Literal, Tuple, Type
from functools import lru_cache
import numpy as np
@@ -50,7 +51,6 @@ def readDefaultFilterInformation():
:rtype: dict
"""
pparam = PylotParameter()
-pparam.reset_defaults()
return readFilterInformation(pparam)
@@ -106,7 +106,7 @@ def gen_Pool(ncores=0):
print('gen_Pool: Generated multiprocessing Pool with {} cores\n'.format(ncores))
-pool = multiprocessing.Pool(ncores)
+pool = multiprocessing.Pool(ncores, maxtasksperchild=100)
return pool
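maxtasksperchild=100 makes the pool recycle each worker after 100 tasks, which bounds per-worker memory growth during long autopicking runs (the motivation is an assumption here). Minimal demonstration:

import multiprocessing

if __name__ == '__main__':
    # workers are replaced after 100 tasks each
    pool = multiprocessing.Pool(2, maxtasksperchild=100)
    print(pool.map(abs, [-1, -2, 3]))  # [1, 2, 3]
    pool.close()
    pool.join()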
@@ -357,10 +357,8 @@ def get_bool(value):
False
>>> get_bool(None)
None
->>> get_bool('Stream')
-'Stream'
"""
-if type(value) == bool:
+if type(value) is bool:
return value
elif value in ['True', 'true']:
return True
@@ -901,19 +899,6 @@ def trim_station_components(data, trim_start=True, trim_end=True):
return data
-def merge_stream(stream):
-gaps = stream.get_gaps()
-if gaps:
-# list of merged stations (seed_ids)
-merged = ['{}.{}.{}.{}'.format(*gap[:4]) for gap in gaps]
-stream.merge(method=1)
-print('Merged the following stations because of gaps:')
-for merged_station in merged:
-print(merged_station)
-return stream, gaps
def check4gapsAndRemove(data):
"""
check for gaps in Stream and remove them
@@ -934,12 +919,12 @@ def check4gapsAndRemove(data):
return data
-def check4gapsAndMerge(data):
+def check_for_gaps_and_merge(data):
"""
check for gaps in Stream and merge if gaps are found
:param data: stream of seismic data
:type data: `~obspy.core.stream.Stream`
-:return: data stream
+:return: data stream, gaps returned from obspy get_gaps
:rtype: `~obspy.core.stream.Stream`
"""
gaps = data.get_gaps()
@@ -950,7 +935,7 @@ def check4gapsAndMerge(data):
for merged_station in merged:
print(merged_station)
-return data
+return data, gaps
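Callers of the renamed function now unpack a tuple; sketch against ObsPy's bundled example stream, which has no gaps, so the list comes back empty:

from obspy import read

stream = read()  # ObsPy demo stream
stream, gaps = check_for_gaps_and_merge(stream)
print(len(gaps))  # 0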
def check4doubled(data):
@@ -1090,7 +1075,7 @@ def check4rotated(data, metadata=None, verbosity=1):
return wfs_in
# check metadata quality
-t_start = full_range(wfs_in)
+t_start = full_range(wfs_in)[0]
try:
azimuths = []
dips = []
@@ -1098,8 +1083,9 @@ def check4rotated(data, metadata=None, verbosity=1):
azimuths.append(metadata.get_coordinates(tr_id, t_start)['azimuth'])
dips.append(metadata.get_coordinates(tr_id, t_start)['dip'])
except (KeyError, TypeError) as err:
-logging.error(f"{type(err)=} occurred: {err=} Rotating not possible, not all azimuth and dip information "
+logging.warning(f"Rotating not possible, not all azimuth and dip information "
f"available in metadata. Stream remains unchanged.")
+logging.debug(f"Rotating not possible, {err=}, {type(err)=}")
return wfs_in
except Exception as err:
print(f"Unexpected {err=}, {type(err)=}")
@@ -1218,6 +1204,7 @@ def identifyPhase(phase):
return False
+@lru_cache
def identifyPhaseID(phase):
"""
Returns phase id (capital P or S)


@@ -8,6 +8,7 @@ import copy
import datetime
import getpass
import glob
+import logging
import multiprocessing
import os
import subprocess
@@ -140,6 +141,18 @@ class LogWidget(QtWidgets.QWidget):
self.stderr.append(60 * '#' + '\n\n')
def getDataType(parent):
type = QInputDialog().getItem(parent, "Select phases type", "Type:",
["manual", "automatic"])
if type[0].startswith('auto'):
type = 'auto'
else:
type = type[0]
return type
def plot_pdf(_axes, x, y, annotation, bbox_props, xlabel=None, ylabel=None,
title=None):
# try method or data
@@ -997,6 +1010,15 @@ class WaveformWidgetPG(QtWidgets.QWidget):
time_ax = np.linspace(time_ax[0], time_ax[-1], num=len(data))
return data, time_ax
# def getAxes(self):
# return self.axes
# def getXLims(self):
# return self.getAxes().get_xlim()
# def getYLims(self):
# return self.getAxes().get_ylim()
def setXLims(self, lims):
vb = self.plotWidget.getPlotItem().getViewBox()
vb.setXRange(float(lims[0]), float(lims[1]), padding=0)
@@ -1150,6 +1172,8 @@ class PylotCanvas(FigureCanvas):
break
if not ax_check: return
+# self.updateCurrentLimits() #maybe put this down to else:
# calculate delta (relative values in axis)
old_x, old_y = self.press_rel
xdiff = gui_event.x - old_x
@@ -1357,96 +1381,83 @@ class PylotCanvas(FigureCanvas):
component='*', nth_sample=1, iniPick=None, verbosity=0,
plot_additional=False, additional_channel=None, scaleToChannel=None,
snr=None):
-ax = self.prepare_plot()
-self.clearPlotDict()
-wfstart, wfend = self.get_wf_range(wfdata)
-compclass = self.get_comp_class()
-plot_streams = self.get_plot_streams(wfdata, wfdata_compare, component, compclass)
-st_main = plot_streams['wfdata']['data']
-if mapping:
-plot_positions = self.calcPlotPositions(st_main, compclass)
-nslc = self.get_sorted_nslc(st_main)
-nmax = self.plot_traces(ax, plot_streams, nslc, wfstart, mapping, plot_positions,
-scaleToChannel, noiselevel, scaleddata, nth_sample, verbosity)
-if plot_additional and additional_channel:
-self.plot_additional_trace(ax, wfdata, additional_channel, scaleToChannel,
-scaleddata, nth_sample, wfstart)
-self.finalize_plot(ax, wfstart, wfend, nmax, zoomx, zoomy, iniPick, title, snr)
-def prepare_plot(self):
-ax = self.axes[0]
-ax.cla()
-return ax
-def get_wf_range(self, wfdata):
-return full_range(wfdata)
-def get_comp_class(self):
-settings = QSettings()
-return SetChannelComponents.from_qsettings(settings)
-def get_plot_streams(self, wfdata, wfdata_compare, component, compclass):
-def get_wf_dict(data=Stream(), linecolor='k', offset=0., **plot_kwargs):
+def get_wf_dict(data: Stream = Stream(), linecolor = 'k', offset: float = 0., **plot_kwargs):
return dict(data=data, linecolor=linecolor, offset=offset, plot_kwargs=plot_kwargs)
-linecolor = (0., 0., 0., 1.) if not self.style else self.style['linecolor']['rgba_mpl']
-plot_streams = {
-'wfdata': get_wf_dict(linecolor=linecolor, linewidth=0.7),
-'wfdata_comp': get_wf_dict(offset=0.1, linecolor='b', alpha=0.7, linewidth=0.5)
-}
-if component != '*':
+ax = self.axes[0]
+ax.cla()
+self.clearPlotDict()
+wfstart, wfend = full_range(wfdata)
+nmax = 0
+settings = QSettings()
+compclass = SetChannelComponents.from_qsettings(settings)
+linecolor = (0., 0., 0., 1.) if not self.style else self.style['linecolor']['rgba_mpl']
+plot_streams = dict(wfdata=get_wf_dict(linecolor=linecolor, linewidth=0.7),
+wfdata_comp=get_wf_dict(offset=0.1, linecolor='b', alpha=0.7, linewidth=0.5))
+if not component == '*':
alter_comp = compclass.getCompPosition(component)
-plot_streams['wfdata']['data'] = wfdata.select(component=component) + wfdata.select(component=alter_comp)
+# alter_comp = str(alter_comp[0])
+plot_streams['wfdata']['data'] = wfdata.select(component=component)
+plot_streams['wfdata']['data'] += wfdata.select(component=alter_comp)
if wfdata_compare:
-plot_streams['wfdata_comp']['data'] = wfdata_compare.select(
-component=component) + wfdata_compare.select(component=alter_comp)
+plot_streams['wfdata_comp']['data'] = wfdata_compare.select(component=component)
+plot_streams['wfdata_comp']['data'] += wfdata_compare.select(component=alter_comp)
else:
plot_streams['wfdata']['data'] = wfdata
if wfdata_compare:
plot_streams['wfdata_comp']['data'] = wfdata_compare
-return plot_streams
-def get_sorted_nslc(self, st_main):
-nslc = [trace.get_id() for trace in st_main if trace.stats.channel[-1] in ['Z', 'N', 'E', '1', '2', '3']]
-nslc.sort(reverse=True)
-return nslc
-def plot_traces(self, ax, plot_streams, nslc, wfstart, mapping, plot_positions, scaleToChannel, noiselevel,
-scaleddata, nth_sample, verbosity):
-nmax = 0
+st_main = plot_streams['wfdata']['data']
+if mapping:
+plot_positions = self.calcPlotPositions(st_main, compclass)
+# list containing tuples of network, station, channel and plot position (for sorting)
+nslc = []
+for plot_pos, trace in enumerate(st_main):
+if not trace.stats.channel[-1] in ['Z', 'N', 'E', '1', '2', '3']:
+print('Warning: Unrecognized channel {}'.format(trace.stats.channel))
+continue
+nslc.append(trace.get_id())
+nslc.sort()
+nslc.reverse()
for n, seed_id in enumerate(nslc):
network, station, location, channel = seed_id.split('.')
for wf_name, wf_dict in plot_streams.items():
st_select = wf_dict.get('data')
if not st_select:
continue
-trace = st_select.select(id=seed_id)[0].copy()
+st = st_select.select(id=seed_id)
+trace = st[0].copy()
if mapping:
n = plot_positions[trace.stats.channel]
if n > nmax:
nmax = n
if verbosity:
-print(f'plotting {channel} channel of station {station}')
-time_ax = prep_time_axis(trace.stats.starttime - wfstart, trace)
-self.plot_trace(ax, trace, time_ax, wf_dict, n, scaleToChannel, noiselevel, scaleddata, nth_sample)
-self.setPlotDict(n, seed_id)
-return nmax
-def plot_trace(self, ax, trace, time_ax, wf_dict, n, scaleToChannel, noiselevel, scaleddata, nth_sample):
+msg = 'plotting %s channel of station %s' % (channel, station)
+print(msg)
+stime = trace.stats.starttime - wfstart
+time_ax = prep_time_axis(stime, trace)
if time_ax is not None:
if scaleToChannel:
-self.scale_trace(trace, scaleToChannel)
+st_scale = wfdata.select(channel=scaleToChannel)
+if st_scale:
+tr = st_scale[0]
+trace.detrend('constant')
+trace.normalize(np.max(np.abs(tr.data)) * 2)
scaleddata = True
if not scaleddata:
trace.detrend('constant')
trace.normalize(np.max(np.abs(trace.data)) * 2)
offset = wf_dict.get('offset')
times = [time for index, time in enumerate(time_ax) if not index % nth_sample]
@@ -1454,43 +1465,41 @@ class PylotCanvas(FigureCanvas):
ax.axhline(n, color="0.5", lw=0.5)
ax.plot(times, data, color=wf_dict.get('linecolor'), **wf_dict.get('plot_kwargs'))
if noiselevel is not None:
-self.plot_noise_level(ax, time_ax, noiselevel, channel, n, wf_dict.get('linecolor'))
-def scale_trace(self, trace, scaleToChannel):
+for level in [-noiselevel[channel], noiselevel[channel]]:
+ax.plot([time_ax[0], time_ax[-1]],
+[n + level, n + level],
+color=wf_dict.get('linecolor'),
+linestyle='dashed')
+self.setPlotDict(n, seed_id)
+if plot_additional and additional_channel:
+compare_stream = wfdata.select(channel=additional_channel)
+if compare_stream:
+trace = compare_stream[0]
+if scaleToChannel:
st_scale = wfdata.select(channel=scaleToChannel)
if st_scale:
tr = st_scale[0]
trace.detrend('constant')
trace.normalize(np.max(np.abs(tr.data)) * 2)
-def plot_noise_level(self, ax, time_ax, noiselevel, channel, n, linecolor):
-for level in [-noiselevel[channel], noiselevel[channel]]:
-ax.plot([time_ax[0], time_ax[-1]], [n + level, n + level], color=linecolor, linestyle='dashed')
-def plot_additional_trace(self, ax, wfdata, additional_channel, scaleToChannel, scaleddata, nth_sample, wfstart):
-compare_stream = wfdata.select(channel=additional_channel)
-if compare_stream:
-trace = compare_stream[0]
-if scaleToChannel:
-self.scale_trace(trace, scaleToChannel)
scaleddata = True
if not scaleddata:
trace.detrend('constant')
trace.normalize(np.max(np.abs(trace.data)) * 2)
-time_ax = prep_time_axis(trace.stats.starttime - wfstart, trace)
-self.plot_additional_data(ax, trace, time_ax, nth_sample)
-def plot_additional_data(self, ax, trace, time_ax, nth_sample):
+time_ax = prep_time_axis(stime, trace)
times = [time for index, time in enumerate(time_ax) if not index % nth_sample]
-p_data = trace.data
+p_data = compare_stream[0].data
+# #normalize
+# p_max = max(abs(p_data))
+# p_data /= p_max
for index in range(3):
ax.plot(times, p_data, color='red', alpha=0.5, linewidth=0.7)
p_data += 1
-def finalize_plot(self, ax, wfstart, wfend, nmax, zoomx, zoomy, iniPick, title, snr):
if iniPick:
-ax.vlines(iniPick, ax.get_ylim()[0], ax.get_ylim()[1], colors='m', linestyles='dashed', linewidth=2)
+ax.vlines(iniPick, ax.get_ylim()[0], ax.get_ylim()[1],
+colors='m', linestyles='dashed',
+linewidth=2)
-xlabel = f'seconds since {wfstart}'
+xlabel = 'seconds since {0}'.format(wfstart)
ylabel = ''
self.updateWidget(xlabel, ylabel, title)
self.setXLims(ax, [0, wfend - wfstart])
@@ -1500,13 +1509,14 @@ class PylotCanvas(FigureCanvas):
if zoomy is not None:
self.setYLims(ax, zoomy)
if snr is not None:
-self.plot_snr_warning(ax, snr)
-self.draw()
-def plot_snr_warning(self, ax, snr):
if snr < 2:
-warning = 'LOW SNR' if snr >= 1.5 else 'VERY LOW SNR'
-ax.text(0.1, 0.9, f'WARNING - {warning}', ha='center', va='center', transform=ax.transAxes, color='red')
+warning = 'LOW SNR'
+if snr < 1.5:
+warning = 'VERY LOW SNR'
+ax.text(0.1, 0.9, 'WARNING - {}'.format(warning), ha='center', va='center', transform=ax.transAxes,
+color='red')
+self.draw()
@staticmethod
def getXLims(ax):
@@ -1861,13 +1871,14 @@ class PickDlg(QDialog):
def __init__(self, parent=None, data=None, data_compare=None, station=None, network=None, location=None, picks=None,
autopicks=None, rotate=False, parameter=None, embedded=False, metadata=None, show_comp_data=False,
-event=None, filteroptions=None, model=None, wftype=None):
+event=None, filteroptions=None, wftype=None):
super(PickDlg, self).__init__(parent, Qt.Window)
self.orig_parent = parent
self.setAttribute(Qt.WA_DeleteOnClose)
# initialize attributes
self.parameter = parameter
+model = self.parameter.get('taup_model')
self._embedded = embedded
self.showCompData = show_comp_data
self.station = station
@@ -1912,7 +1923,7 @@ class PickDlg(QDialog):
# set attribute holding data
if data is None or not data:
try:
-data = parent.get_data().get_wf_data().copy()
+data = parent.get_data().getWFData().copy()
self.data = data.select(station=station)
except AttributeError as e:
errmsg = 'You either have to put in a data or an appropriate ' \
@@ -1946,7 +1957,7 @@ class PickDlg(QDialog):
self.cur_xlim = None
self.cur_ylim = None
-self.stime, self.etime = full_range(self.get_wf_data())
+self.stime, self.etime = full_range(self.getWFData())
# initialize plotting widget
self.multicompfig = PylotCanvas(parent=self, multicursor=True)
@@ -1960,7 +1971,7 @@ class PickDlg(QDialog):
self.referenceChannel.addItem('-', None)
self.scaleChannel.addItem('individual', None)
-for trace in self.get_wf_data():
+for trace in self.getWFData():
channel = trace.stats.channel
self.referenceChannel.addItem(channel, trace)
if not channel[-1] in ['Z', 'N', 'E', '1', '2', '3']:
@@ -1979,7 +1990,7 @@ class PickDlg(QDialog):
if self.wftype is not None:
title += ' | ({})'.format(self.wftype)
-self.multicompfig.plotWFData(wfdata=self.get_wf_data(), wfdata_compare=self.get_wf_dataComp(),
+self.multicompfig.plotWFData(wfdata=self.getWFData(), wfdata_compare=self.getWFDataComp(),
title=title)
self.multicompfig.setZoomBorders2content()
@@ -2260,8 +2271,8 @@ class PickDlg(QDialog):
arrivals = func[plot](source_origin.depth,
source_origin.latitude,
source_origin.longitude,
-station_coords['latitude'],
-station_coords['longitude'],
+station_coords.get('latitude'),
+station_coords.get('longitude'),
phases)
self.arrivals = arrivals
@@ -2514,7 +2525,7 @@ class PickDlg(QDialog):
if self.autoFilterAction.isChecked():
for filteraction in [self.filterActionP, self.filterActionS]:
filteraction.setChecked(False)
-self.filter_wf_data()
+self.filterWFData()
self.draw()
else:
self.draw()
@@ -2598,10 +2609,10 @@ class PickDlg(QDialog):
def getGlobalLimits(self, ax, axis):
return self.multicompfig.getGlobalLimits(ax, axis)
-def get_wf_data(self):
+def getWFData(self):
return self.data
-def get_wf_dataComp(self):
+def getWFDataComp(self):
if self.showCompData:
return self.data_compare
else:
@@ -2616,17 +2627,17 @@ class PickDlg(QDialog):
return tr
if component == 'E' or component == 'N':
-for trace in self.get_wf_data():
+for trace in self.getWFData():
trace = selectTrace(trace, 'NE')
if trace:
wfdata.append(trace)
elif component == '1' or component == '2':
-for trace in self.get_wf_data():
+for trace in self.getWFData():
trace = selectTrace(trace, '12')
if trace:
wfdata.append(trace)
else:
-wfdata = self.get_wf_data().select(component=component)
+wfdata = self.getWFData().select(component=component)
return wfdata
def getPicks(self, picktype='manual'):
@@ -2729,8 +2740,8 @@ class PickDlg(QDialog):
stime = self.getStartTime()
# copy wfdata for plotting
-wfdata = self.get_wf_data().copy()
-wfdata_comp = self.get_wf_dataComp().copy()
+wfdata = self.getWFData().copy()
+wfdata_comp = self.getWFDataComp().copy()
wfdata = self.getPickPhases(wfdata, phase)
wfdata_comp = self.getPickPhases(wfdata_comp, phase)
for wfd in [wfdata, wfdata_comp]:
@@ -2839,7 +2850,7 @@ class PickDlg(QDialog):
filteroptions = None
# copy and filter data for earliest and latest possible picks
-wfdata = self.get_wf_data().copy().select(channel=channel)
+wfdata = self.getWFData().copy().select(channel=channel)
if filteroptions:
try:
wfdata.detrend('linear')
@@ -2886,7 +2897,7 @@ class PickDlg(QDialog):
minFMSNR = parameter.get('minFMSNR')
quality = get_quality_class(spe, parameter.get('timeerrorsP'))
if quality <= minFMweight and snr >= minFMSNR:
-FM = fmpicker(self.get_wf_data().select(channel=channel).copy(), wfdata.copy(), parameter.get('fmpickwin'),
+FM = fmpicker(self.getWFData().select(channel=channel).copy(), wfdata.copy(), parameter.get('fmpickwin'),
pick - stime_diff)
# save pick times for actual phase
@@ -3178,7 +3189,7 @@ class PickDlg(QDialog):
def togglePickBlocker(self):
return not self.pick_block
-def filter_wf_data(self, phase=None):
+def filterWFData(self, phase=None):
if not phase:
phase = self.currentPhase
if self.getPhaseID(phase) == 'P':
@@ -3196,8 +3207,8 @@ class PickDlg(QDialog):
self.cur_xlim = self.multicompfig.axes[0].get_xlim()
self.cur_ylim = self.multicompfig.axes[0].get_ylim()
# self.multicompfig.updateCurrentLimits()
-wfdata = self.get_wf_data().copy()
-wfdata_comp = self.get_wf_dataComp().copy()
+wfdata = self.getWFData().copy()
+wfdata_comp = self.getWFDataComp().copy()
title = self.getStation()
if filter:
filtoptions = None
@@ -3234,14 +3245,14 @@ class PickDlg(QDialog):
def filterP(self):
self.filterActionS.setChecked(False)
if self.filterActionP.isChecked():
-self.filter_wf_data('P')
+self.filterWFData('P')
else:
self.refreshPlot()
def filterS(self):
self.filterActionP.setChecked(False)
if self.filterActionS.isChecked():
-self.filter_wf_data('S')
+self.filterWFData('S')
else:
self.refreshPlot()
@@ -3300,7 +3311,7 @@ class PickDlg(QDialog):
if self.autoFilterAction.isChecked():
self.filterActionP.setChecked(False)
self.filterActionS.setChecked(False)
-# data = self.get_wf_data().copy()
+# data = self.getWFData().copy()
# title = self.getStation()
filter = False
phase = None
@@ -3800,8 +3811,8 @@ class TuneAutopicker(QWidget):
fnames = self.station_ids[self.get_current_station_id()]
if not fnames == self.fnames:
self.fnames = fnames
-self.data.set_wf_data(fnames)
-wfdat = self.data.get_wf_data()  # all available streams
+self.data.setWFData(fnames)
+wfdat = self.data.getWFData()  # all available streams
# remove possible underscores in station names
# wfdat = remove_underscores(wfdat)
# rotate misaligned stations to ZNE
@@ -3826,7 +3837,7 @@ class TuneAutopicker(QWidget):
self.stb_names = ['aicARHfig', 'refSpick', 'el_S1pick', 'el_S2pick']
def add_parameters(self):
-self.paraBox = PylotParaBox(self.parameter, parent=self, windowflag=Qt.Widget)
+self.paraBox = PylotParameterWidget(self.parameter, parent=self, windowflag=Qt.Widget)
self.paraBox.set_tune_mode(True)
self.update_eventID()
self.parameter_layout.addWidget(self.paraBox)
@@ -3906,8 +3917,8 @@ class TuneAutopicker(QWidget):
network = None
location = None
-wfdata = self.data.get_wf_data()
-wfdata_comp = self.data.get_wf_dataComp()
+wfdata = self.data.getWFData()
+wfdata_comp = self.data.getWFDataComp()
metadata = self.parent().metadata
event = self.get_current_event()
filteroptions = self.parent().filteroptions
@@ -3962,7 +3973,7 @@ class TuneAutopicker(QWidget):
for plotitem in self._manual_pick_plots:
self.clear_plotitem(plotitem)
self._manual_pick_plots = []
-st = self.data.get_wf_data()
+st = self.data.getWFData()
tr = st.select(station=self.get_current_station())[0]
starttime = tr.stats.starttime
# create two lists with figure names and subindices (for subplots) to get the correct axes
@@ -4193,7 +4204,7 @@ class TuneAutopicker(QWidget):
self.qmb.show()
-class PylotParaBox(QtWidgets.QWidget):
+class PylotParameterWidget(QtWidgets.QWidget):
accepted = QtCore.Signal(str)
rejected = QtCore.Signal(str)
@@ -4307,6 +4318,11 @@ class PylotParaBox(QtWidgets.QWidget):
grid = QtWidgets.QGridLayout()
for index1, name in enumerate(parameter_names):
+if name in ['rootpath', 'database']:
+logging.warning(
+f'Deprecated parameter loaded: {name}. Check if datapath is still correct in parameter widget.'
+)
+continue
default_item = self.parameter.get_defaults()[name]
tooltip = default_item['tooltip']
tooltip += ' | type: {}'.format(default_item['type'])
@@ -4876,7 +4892,7 @@ class PropTab(QWidget):
def getValues(self):
return None
-def resetValues(self, infile=None):
+def resetValues(self, infile):
return None
@@ -4973,12 +4989,7 @@ class InputsTab(PropTab):
else:
index = 2
datapath = para.get('datapath') if not para.get('datapath') is None else ''
-rootpath = para.get('rootpath') if not para.get('rootpath') is None else ''
-database = para.get('database') if not para.get('database') is None else ''
-if isinstance(database, int):
-database = str(database)
-path = os.path.join(os.path.expanduser('~'), rootpath, datapath, database)
-values = {"data/dataRoot": self.dataDirEdit.setText("%s" % path),
+values = {"data/dataRoot": self.dataDirEdit.setText("%s" % datapath),
"user/FullName": self.fullNameEdit.text(),
"data/Structure": self.structureSelect.setCurrentIndex(index),
"tstart": self.tstartBox.setValue(0),


@@ -0,0 +1,2 @@
# -*- coding: utf-8 -*-
#


@@ -0,0 +1,95 @@
############################# correlation parameters #####################################
# min_corr_stacking: minimum correlation coefficient for building beam trace
# min_corr_export: minimum correlation coefficient for pick export
# min_stack: minimum number of stations for building beam trace
# t_before: correlation window before pick
# t_after: correlation window after pick
# cc_maxlag: maximum shift for initial correlation
# cc_maxlag2: maximum shift for second (final) correlation (also for calculating pick uncertainty)
# initial_pick_outlier_threshold: (hopefully) threshold for excluding large outliers of initial (AIC) picks
# export_threshold: automatically exclude all onsets which deviate more than this threshold from corrected taup onsets
# min_picks_export: minimum number of correlated picks for export
# min_picks_autopylot: minimum number of reference auto picks to continue with event
# check_RMS: do RMS check to search for restitution errors (very experimental)
# use_taupy_onsets: use taupy onsets as reference picks instead of external picks
# station_list: use the following stations as reference for stacking
# use_stacked_trace: use existing stacked trace if found (spares re-computation)
# data_dir: obspyDMT data subdirectory (e.g. 'raw', 'processed')
# pickfile_extension: use quakeML files (PyLoT output) with the following extension, e.g. '_autopylot' for pickfiles
# such as 'PyLoT_20170501_141822_autopylot.xml'
# dt_stacking: time shift for stacking (e.g. [0, 250] for 0 and 250 seconds shift)
# filter_options: filter for first correlation (rough)
# filter_options_final: filter for second correlation (fine)
# filter_type: e.g. 'bandpass'
# sampfreq: sampling frequency of the data
logging: info
pick_phases: ['P', 'S']
# P-phase
P:
min_corr_stacking: 0.8
min_corr_export: 0.6
min_stack: 20
t_before: 30.
t_after: 50.
cc_maxlag: 50.
cc_maxlag2: 5.
initial_pick_outlier_threshold: 30.
export_threshold: 2.5
min_picks_export: 100
min_picks_autopylot: 50
check_RMS: True
use_taupy_onsets: False
station_list: ['HU.MORH', 'HU.TIH', 'OX.FUSE', 'OX.BAD']
use_stacked_trace: False
data_dir: 'processed'
pickfile_extension: '_autopylot'
dt_stacking: [250, 250]
# filter for first correlation (rough)
filter_options:
freqmax: 0.5
freqmin: 0.03
# filter for second correlation (fine)
filter_options_final:
freqmax: 0.5
freqmin: 0.03
filter_type: bandpass
sampfreq: 20.0
# S-phase
S:
min_corr_stacking: 0.7
min_corr_export: 0.6
min_stack: 20
t_before: 60.
t_after: 60.
cc_maxlag: 100.
cc_maxlag2: 25.
initial_pick_outlier_threshold: 30.
export_threshold: 5.0
min_picks_export: 200
min_picks_autopylot: 50
check_RMS: True
use_taupy_onsets: False
station_list: ['HU.MORH','HU.TIH', 'OX.FUSE', 'OX.BAD']
use_stacked_trace: False
data_dir: 'processed'
pickfile_extension: '_autopylot'
dt_stacking: [250, 250]
# filter for first correlation (rough)
filter_options:
freqmax: 0.1
freqmin: 0.01
# filter for second correlation (fine)
filter_options_final:
freqmax: 0.2
freqmin: 0.01
filter_type: bandpass
sampfreq: 20.0
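The file is plain YAML; assuming it is saved under the name the submit script below passes via --params, a driver could read it like this (the loader shown is a sketch, not necessarily how pick_correlation_correction.py handles it):

import yaml  # PyYAML, pulled in via the pyaml requirement

with open('parameters_fwi_mk6_it3.yaml') as fid:
    params = yaml.safe_load(fid)
print(params['P']['min_corr_stacking'])     # 0.8
print(params['S']['filter_options_final'])  # {'freqmax': 0.2, 'freqmin': 0.01}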

File diff suppressed because it is too large


@@ -0,0 +1,40 @@
#!/bin/bash
#ulimit -s 8192
#ulimit -v $(ulimit -v | awk '{printf("%d",$1*0.95)}')
#ulimit -v
#655360
source /opt/anaconda3/etc/profile.d/conda.sh
conda activate pylot_311
NSLOTS=20
#qsub -l low -cwd -l "os=*stretch" -pe smp 40 submit_pick_corr_correction.sh
#$ -l low
#$ -l h_vmem=6G
#$ -cwd
#$ -pe smp 20
#$ -N corr_pick
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/pylot_tools/"
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/"
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/pylot/"
#export MKL_NUM_THREADS=${NSLOTS:=1}
#export NUMEXPR_NUM_THREADS=${NSLOTS:=1}
#export OMP_NUM_THREADS=${NSLOTS:=1}
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 0 -istop 100
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 100 -istop 200
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M6.0-6.5' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 0 -istop 100
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 100 -istop 200
#python pick_correlation_correction.py 'H:\sciebo\dmt_database' 'H:\Sciebo\dmt_database\pylot_alparray_mantle_corr_S_0.01-0.2.in' -pd -n 4 -t
pylot_infile='/home/marcel/.pylot/pylot_alparray_syn_fwi_mk6_it3.in'
#pylot_infile='/home/marcel/.pylot/pylot_adriaarray_corr_P_and_S.in'
# THIS SCRIPT SHOULD BE CALLED BY "submit_to_grid_engine.py" using the following line:
python pick_correlation_correction.py $1 $pylot_infile -pd -n ${NSLOTS:=1} -istart $2 --params 'parameters_fwi_mk6_it3.yaml'
#--event_blacklist eventlist.txt


@@ -0,0 +1,23 @@
#!/usr/bin/env python
import subprocess
fnames = [
('/data/AlpArray_Data/dmt_database_synth_model_mk6_it3_no_rotation', 0),
]
#fnames = [('/data/AlpArray_Data/dmt_database_mantle_0.01-0.2_SKS-phase', 0),
# ('/data/AlpArray_Data/dmt_database_mantle_0.01-0.2_S-phase', 0),]
####
script_location = '/home/marcel/VersionCtrl/git/code_base/correlation_picker/submit_pick_corr_correction.sh'
####
for fnin, istart in fnames:
input_cmds = f'qsub -q low.q@minos15,low.q@minos14,low.q@minos13,low.q@minos12,low.q@minos11 {script_location} {fnin} {istart}'
print(input_cmds)
print(subprocess.check_output(input_cmds.split()))


@@ -0,0 +1,61 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import glob
import json
from obspy import read_events
from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT
def get_event_obspy_dmt(eventdir):
event_pkl_file = os.path.join(eventdir, 'info', 'event.pkl')
if not os.path.exists(event_pkl_file):
raise IOError('Could not find event path for event: {}'.format(eventdir))
event = qml_from_obspyDMT(event_pkl_file)
return event
def get_event_pylot(eventdir, extension=''):
event_id = get_event_id(eventdir)
filename = os.path.join(eventdir, 'PyLoT_{}{}.xml'.format(event_id, extension))
if not os.path.isfile(filename):
return
cat = read_events(filename)
return cat[0]
def get_event_id(eventdir):
event_id = os.path.split(eventdir)[-1]
return event_id
def get_picks(eventdir, extension=''):
event_id = get_event_id(eventdir)
filename = 'PyLoT_{}{}.xml'
filename = filename.format(event_id, extension)
fpath = os.path.join(eventdir, filename)
fpaths = glob.glob(fpath)
if len(fpaths) == 1:
cat = read_events(fpaths[0])
picks = cat[0].picks
return picks
elif len(fpaths) == 0:
print('get_picks: File not found: {}'.format(fpath))
return
print(f'WARNING: Ambiguous pick file specification. Found the following pick files {fpaths}\nFilemask: {fpath}')
return
def write_json(object, fname):
with open(fname, 'w') as outfile:
json.dump(object, outfile, sort_keys=True, indent=4)
def get_metadata(eventdir):
metadata_path = os.path.join(eventdir, 'resp')
metadata = Metadata(inventory=metadata_path, verbosity=0)
return metadata


@@ -1,9 +1,7 @@
-# This file may be used to create an environment using:
-# $ conda create --name <env> --file <this file>
-# platform: win-64
-cartopy>=0.20.2
-numpy<2
-obspy>=1.3.0
-pyqtgraph>=0.12.4
-pyside2>=5.13.2
-scipy>=1.8.0
+Cartopy==0.23.0
+joblib==1.4.2
+obspy==1.4.1
+pyaml==24.7.0
+pyqtgraph==0.13.7
+PySide2==5.15.8
+pytest==8.3.2

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,99 @@
%This is a parameter input file for PyLoT/autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
dmt_database_test #datapath# %data path
20171010_063224.a #eventID# %event ID for single event processing (* for all events found in database)
#invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
None #nllocbin# %path to NLLoc executable
None #nllocroot# %root of NLLoc-processing directory
None #phasefile# %name of autoPyLoT-output phase file for NLLoc
None #ctrfile# %name of autoPyLoT-output control file for NLLoc
ttime #ttpatter# %pattern of NLLoc ttimes from grid
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#parameters for seismic moment estimation#
3530.0 #vp# %average P-wave velocity
2500.0 #rho# %average rock density [kg/m^3]
300.0 0.8 #Qp# %quality factor for P waves (Qp*f^a); list(Qp, a)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#settings local magnitude#
1.0 1.0 1.0 #WAscaling# %Scaling relation (log(Ao)+Alog(r)+Br+C) of Wood-Anderson amplitude Ao [nm]. If zeros are set, original Richter magnitude is calculated!
1.0 1.0 #magscaling# %Scaling relation for derived local magnitude [a*Ml+b]. If zeros are set, no scaling of network magnitude is applied!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#filter settings#
0.03 0.03 #minfreq# %Lower filter frequency [P, S]
0.5 0.5 #maxfreq# %Upper filter frequency [P, S]
4 4 #filter_order# %filter order [P, S]
bandpass bandpass #filter_type# %filter type (bandpass, bandstop, lowpass, highpass) [P, S]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
global #extent# %extent of array ("local", "regional" or "global")
-100.0 #pstart# %start time [s] for calculating CF for P-picking (if TauPy: seconds relative to estimated onset)
50.0 #pstop# %end time [s] for calculating CF for P-picking (if TauPy: seconds relative to estimated onset)
-50.0 #sstart# %start time [s] relative to P-onset for calculating CF for S-picking
50.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
ak135 #taup_model# %Define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
P,Pdiff,S,SKS #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
0.03 0.5 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
0.01 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.03 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
0.01 0.5 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Edit the following only if you know what you are doing!!%
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
300.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
16.0 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
10.0 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
12.0 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
6.0 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
60.0 5.0 20.0 12.0 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
50.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
30.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
2.0 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
2.0 #tsmoothP# %for HOS/AR, take average of samples in this time window for smoothing CF [s]
0.006 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
2.0 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
12.0 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
6.0 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
8.0 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
4.0 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
100.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
195.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
60.0 10.0 30.0 12.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
22.0 #aictsmoothS# %for AIC-picker, take average of samples in this time window for smoothing of AIC-function [s]
20.0 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.001 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
2.0 #nfacS# %for AR-picker, noise factor for noise level determination (S)
#first-motion picker#
1 #minfmweight# %minimum required P weight for first-motion determination
3.0 #minFMSNR# %minimum required SNR for first-motion determination
10.0 #fmpickwin# %pick window [s] around P onset for calculating zero crossings
#quality assessment#
0.1 0.2 0.4 0.8 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
4.0 8.0 16.0 32.0 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
0.005 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.1 #minAICPSNR# %below this SNR the initial P pick is rejected
0.002 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.3 #minAICSSNR# %below this SNR the initial S pick is rejected
20.0 #minsiglength# %length of signal part for which amplitudes must exceed noiselevel [s]
1.0 #noisefactor# %noiselevel*noisefactor=threshold
10.0 #minpercent# %required percentage of amplitudes exceeding threshold
0.1 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
100.0 #mdttolerance# %maximum allowed deviation of P picks from median [s]
50.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram
25.0 #jackfactor# %pick is removed if the variance of the subgroup with the pick removed is larger than the mean variance of all subgroups times safety factor
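
To sanity-check such an input file programmatically it can be loaded with PyLoT's parameter class — a minimal sketch, assuming the import path and the dictionary-style access of the PylotParameter class:

from pylot.core.io.inputs import PylotParameter  # import path assumed

params = PylotParameter('pylot_alparray_mantle_corr_stack_0.03-0.5.in')
print(params['minfreq'], params['maxfreq'])        # filter corner frequencies [P, S]
print(params['taup_model'], params['taup_phases'])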

View File

@ -0,0 +1,67 @@
import os

import pytest
from obspy import read_events

from autoPyLoT import autoPyLoT


class TestAutopickerGlobal:
    def init(self):
        self.params_infile = 'pylot_alparray_mantle_corr_stack_0.03-0.5.in'
        self.test_event_dir = 'dmt_database_test'
        self.fname_outfile_xml = os.path.join(
            self.test_event_dir, '20171010_063224.a', 'PyLoT_20171010_063224.a_autopylot.xml'
        )

        # check if the input files exist
        if not os.path.isfile(self.params_infile):
            print(f'Test input file {os.path.abspath(self.params_infile)} not found.')
            return False

        if not os.path.exists(self.test_event_dir):
            print(
                f'Test event directory not found at location "{os.path.abspath(self.test_event_dir)}". '
                f'Make sure to load it from the website first.'
            )
            return False

        return True

    def test_autopicker(self):
        assert self.init(), 'Initialization failed due to missing input files.'
        # check for output file in test directory and remove it if necessary
        if os.path.isfile(self.fname_outfile_xml):
            os.remove(self.fname_outfile_xml)

        autoPyLoT(inputfile=self.params_infile, eventid='20171010_063224.a', obspyDMT_wfpath='processed')

        # test for different known output files if they are identical or not
        compare_pickfiles(self.fname_outfile_xml, 'PyLoT_20171010_063224.a_autopylot.xml', True)
        compare_pickfiles(self.fname_outfile_xml, 'PyLoT_20171010_063224.a_saved_from_GUI.xml', True)
        compare_pickfiles(self.fname_outfile_xml, 'PyLoT_20171010_063224.a_corrected_taup_times_0.03-0.5_P.xml', False)


def compare_pickfiles(pickfile1: str, pickfile2: str, samefile: bool = True) -> None:
    """
    Compare the pick times and errors from two pick files.

    Parameters:
    pickfile1 (str): The path to the first pick file.
    pickfile2 (str): The path to the second pick file.
    samefile (bool): A flag indicating whether the two files are expected to be the same. Defaults to True.

    Returns:
    None
    """
    cat1 = read_events(pickfile1)
    cat2 = read_events(pickfile2)
    picks1 = sorted(cat1[0].picks, key=lambda pick: str(pick.waveform_id))
    picks2 = sorted(cat2[0].picks, key=lambda pick: str(pick.waveform_id))
    pick_times1 = [pick.time for pick in picks1]
    pick_times2 = [pick.time for pick in picks2]
    pick_terrs1 = [pick.time_errors for pick in picks1]
    pick_terrs2 = [pick.time_errors for pick in picks2]

    # check if times and errors are identical or not depending on the samefile flag
    assert (pick_times1 == pick_times2) is samefile, 'Pick times comparison failed'
    assert (pick_terrs1 == pick_terrs2) is samefile, 'Pick time errors comparison failed'
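
The test can also be launched programmatically — a sketch assuming the module file is named test_autopicker.py:

import pytest

# run only this module, verbosely; the file name is an assumption
pytest.main(['-v', 'test_autopicker.py'])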

View File

@ -4,10 +4,8 @@
 %Parameters are optimized for %extent data sets!
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 #main settings#
-/home/darius #rootpath# %project path
-alparray #datapath# %data path
-waveforms_used #database# %name of data base
-e0093.173.16 #eventID# %event ID for single event processing (* for all events found in database)
+/home/darius/alparray/waveforms_used #datapath# %data path
+e0093.173.16 #eventID# %event ID for single event processing (* for all events found in datapath)
 /home/darius/alparray/metadata #invdir# %full path to inventory or dataless-seed file
 PILOT #datastructure# %choose data structure
 True #apverbose# %choose 'True' or 'False' for terminal output
@ -43,6 +41,7 @@ global #extent# %extent of array ("local", "regional" or "global")
 875.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
 False #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
 IASP91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
+P,Pdiff,S,Sdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
 0.01 0.1 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
 0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
 0.01 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
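
The former rootpath/datapath/database triple collapses into a single datapath here; a quick check with the values from this diff shows the two schemes address the same directory (a sketch, assuming the obvious join semantics on a POSIX system):

import os

old = os.path.join('/home/darius', 'alparray', 'waveforms_used')  # rootpath/datapath/database
new = '/home/darius/alparray/waveforms_used'                      # single datapath parameter
assert old == new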

View File

@ -4,10 +4,8 @@
 %Parameters are optimized for %extent data sets!
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 #main settings#
-/home/darius #rootpath# %project path
-alparray #datapath# %data path
-waveforms_used #database# %name of data base
-e0093.173.16 #eventID# %event ID for single event processing (* for all events found in database)
+/home/darius/alparray/waveforms_used #datapath# %data path
+e0093.173.16 #eventID# %event ID for single event processing (* for all events found in datapath)
 /home/darius/alparray/metadata #invdir# %full path to inventory or dataless-seed file
 PILOT #datastructure# %choose data structure
 True #apverbose# %choose 'True' or 'False' for terminal output
@ -43,6 +41,7 @@ global #extent# %extent of array ("local", "regional" or "global")
 875.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
 True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
 IASP91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
+P,Pdiff,S,Sdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
 0.01 0.1 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
 0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
 0.01 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]

View File

@ -1,6 +1,7 @@
 import os
 import sys
 import unittest
+import pytest
 import obspy
 from obspy import UTCDateTime
@ -105,7 +106,6 @@ class TestAutopickStation(unittest.TestCase):
         # show complete diff when difference in results dictionaries are found
         self.maxDiff = None
-    # @skip("Works")
     def test_autopickstation_taupy_disabled_gra1(self):
         expected = {
             'P': {'picker': 'auto', 'snrdb': 15.405649120980094, 'weight': 0, 'Mo': None, 'marked': [], 'Mw': None,
@ -121,8 +121,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.gra1, pickparam=self.pickparam_taupy_disabled,
                                               metadata=(None, None))
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('GRA1', station)

     def test_autopickstation_taupy_enabled_gra1(self):
@ -140,8 +140,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.gra1, pickparam=self.pickparam_taupy_enabled,
                                               metadata=self.metadata, origin=self.origin)
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('GRA1', station)

     def test_autopickstation_taupy_disabled_gra2(self):
@ -157,8 +157,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.gra2, pickparam=self.pickparam_taupy_disabled,
                                               metadata=(None, None))
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('GRA2', station)

     def test_autopickstation_taupy_enabled_gra2(self):
@ -175,8 +175,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.gra2, pickparam=self.pickparam_taupy_enabled,
                                               metadata=self.metadata, origin=self.origin)
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('GRA2', station)

     def test_autopickstation_taupy_disabled_ech(self):
@ -190,8 +190,8 @@ class TestAutopickStation(unittest.TestCase):
                   'fm': None, 'spe': None, 'channel': u'LHE'}}
         with HidePrints():
             result, station = autopickstation(wfstream=self.ech, pickparam=self.pickparam_taupy_disabled)
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('ECH', station)

     def test_autopickstation_taupy_enabled_ech(self):
@ -208,8 +208,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.ech, pickparam=self.pickparam_taupy_enabled,
                                               metadata=self.metadata, origin=self.origin)
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('ECH', station)

     def test_autopickstation_taupy_disabled_fiesa(self):
@ -224,8 +224,8 @@ class TestAutopickStation(unittest.TestCase):
                   'fm': None, 'spe': None, 'channel': u'LHE'}}
         with HidePrints():
             result, station = autopickstation(wfstream=self.fiesa, pickparam=self.pickparam_taupy_disabled)
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('FIESA', station)

     def test_autopickstation_taupy_enabled_fiesa(self):
@ -242,8 +242,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.fiesa, pickparam=self.pickparam_taupy_enabled,
                                               metadata=self.metadata, origin=self.origin)
-        self.assertDictContainsSubset(expected=expected['P'], actual=result['P'])
-        self.assertDictContainsSubset(expected=expected['S'], actual=result['S'])
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('FIESA', station)

     def test_autopickstation_gra1_z_comp_missing(self):
@ -272,7 +272,8 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=wfstream, pickparam=self.pickparam_taupy_disabled,
                                               metadata=(None, None))
-        self.assertEqual(expected, result)
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
         self.assertEqual('GRA1', station)

     def test_autopickstation_a106_taupy_enabled(self):
@ -290,7 +291,9 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.a106, pickparam=self.pickparam_taupy_enabled,
                                               metadata=self.metadata, origin=self.origin)
-        self.assertEqual(expected, result)
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')

     def test_autopickstation_station_missing_in_metadata(self):
         """This station is not in the metadata, but Taupy is enabled. Taupy should exit cleanly and modify the starttime
@ -311,8 +314,37 @@ class TestAutopickStation(unittest.TestCase):
         with HidePrints():
             result, station = autopickstation(wfstream=self.a005a, pickparam=self.pickparam_taupy_enabled,
                                               metadata=self.metadata, origin=self.origin)
-        self.assertEqual(expected, result)
+        compare_dicts(expected=expected['P'], result=result['P'], hint='P-')
+        compare_dicts(expected=expected['S'], result=result['S'], hint='S-')
+
+
+def run_dict_comparison(result, expected):
+    for key, expected_value in expected.items():
+        if isinstance(expected_value, dict):
+            run_dict_comparison(result[key], expected[key])
+        else:
+            res = result[key]
+            if isinstance(res, UTCDateTime) and isinstance(expected_value, UTCDateTime):
+                res = res.timestamp
+                expected_value = expected_value.timestamp
+            assert expected_value == pytest.approx(res), f'{key}: {expected_value} != {res}'
+
+
+def compare_dicts(result, expected, hint=''):
+    try:
+        run_dict_comparison(result, expected)
+    except AssertionError:
+        raise AssertionError(f'{hint}Dictionaries not equal.'
+                             f'\n\n<<Expected>>\n{pretty_print_dict(expected)}'
+                             f'\n\n<<Result>>\n{pretty_print_dict(result)}')
+
+
+def pretty_print_dict(dct):
+    retstr = ''
+    for key, value in sorted(dct.items(), key=lambda x: x[0]):
+        retstr += f"{key} : {value}\n"
+    return retstr
+

 if __name__ == '__main__':
     unittest.main()
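
The switch away from assertDictContainsSubset (long deprecated in unittest) to compare_dicts also makes the comparison recursive over nested dicts and float-tolerant via pytest.approx. A toy illustration with invented values, using the helpers defined in the module above:

expected = {'P': {'snrdb': 15.405649, 'weight': 0}}
result = {'P': {'snrdb': 15.4056490000001, 'weight': 0, 'extra': 'ignored'}}

# passes: only keys present in `expected` are checked, floats compared via pytest.approx
compare_dicts(expected=expected['P'], result=result['P'], hint='P-')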

View File

@ -0,0 +1,76 @@
import pytest
from obspy import read, Trace, UTCDateTime

from pylot.correlation.pick_correlation_correction import XCorrPickCorrection


class TestXCorrPickCorrection:
    def setup(self):
        self.make_test_traces()
        self.make_test_picks()
        self.t_before = 2.
        self.t_after = 2.
        self.cc_maxlag = 0.5

    def make_test_traces(self):
        # take first trace of test Stream from obspy
        tr1 = read()[0]
        # filter trace
        tr1.filter('bandpass', freqmin=1, freqmax=20)
        # make a copy and shift the copy by 0.1 s
        tr2 = tr1.copy()
        tr2.stats.starttime += 0.1

        self.trace1 = tr1
        self.trace2 = tr2

    def make_test_picks(self):
        # create an artificial reference pick on reference trace (trace1) and another one on the 0.1 s shifted trace
        self.tpick1 = UTCDateTime('2009-08-24T00:20:07.7')
        # shift the second pick by 0.2 s, the correction should be around 0.1 s now
        self.tpick2 = self.tpick1 + 0.2

    def test_slice_trace_okay(self):
        self.setup()
        xcpc = XCorrPickCorrection(UTCDateTime(), Trace(), UTCDateTime(), Trace(),
                                   t_before=self.t_before, t_after=self.t_after, cc_maxlag=self.cc_maxlag)

        test_trace = self.trace1
        pick_time = self.tpick2

        sliced_trace = xcpc.slice_trace(test_trace, pick_time)
        assert ((sliced_trace.stats.starttime == pick_time - self.t_before - self.cc_maxlag / 2)
                and (sliced_trace.stats.endtime == pick_time + self.t_after + self.cc_maxlag / 2))

    def test_slice_trace_fails(self):
        self.setup()

        test_trace = self.trace1
        pick_time = self.tpick1

        with pytest.raises(ValueError):
            xcpc = XCorrPickCorrection(UTCDateTime(), Trace(), UTCDateTime(), Trace(),
                                       t_before=self.t_before + 20, t_after=self.t_after, cc_maxlag=self.cc_maxlag)
            xcpc.slice_trace(test_trace, pick_time)

        with pytest.raises(ValueError):
            xcpc = XCorrPickCorrection(UTCDateTime(), Trace(), UTCDateTime(), Trace(),
                                       t_before=self.t_before, t_after=self.t_after + 50, cc_maxlag=self.cc_maxlag)
            xcpc.slice_trace(test_trace, pick_time)

    def test_cross_correlation(self):
        self.setup()

        # create XCorrPickCorrection object
        xcpc = XCorrPickCorrection(self.tpick1, self.trace1, self.tpick2, self.trace2, t_before=self.t_before,
                                   t_after=self.t_after, cc_maxlag=self.cc_maxlag)

        # execute correlation
        correction, cc_max, uncert, fwfm = xcpc.cross_correlation(False, '', '')

        # define expected test result
        test_result = (-0.09983091718314982, 0.9578431835689154, 0.0015285160561610929, 0.03625786256084631)

        # check results
        assert pytest.approx(test_result, rel=1e-6) == (correction, cc_max, uncert, fwfm)
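
The sign of the result can be verified by hand: trace2 lags trace1 by 0.1 s, but tpick2 was placed 0.2 s after tpick1, so the pick sits about 0.1 s too late relative to its waveform and the returned correction is ≈ -0.1 s. A sketch of applying it, reusing the values from the test above:

from obspy import UTCDateTime

tpick1 = UTCDateTime('2009-08-24T00:20:07.7')
tpick2 = tpick1 + 0.2
correction = -0.09983091718314982       # value asserted in test_cross_correlation

corrected_tpick2 = tpick2 + correction  # ≈ tpick1 + 0.1 s, matching the 0.1 s trace shift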