Compare commits

...

951 Commits

Author SHA1 Message Date
8a1da72d1c [update] increased robustness of correlation picker. If autoPyLoT fails on the stacked trace, it tries to pick other stacked traces. Ignoring failed autoPyLoT picks can manually be set if they are not important (e.g. when only pick differences are important) 2025-04-03 11:23:17 +02:00
c989b2abc9 [update] re-implemented code that was lost when corr_pick was integrated in pylot. Use all reference-pick-corrected theoretical picks as correlation reference time instead of using only reference picks 2025-03-19 15:43:24 +01:00
2dc27013b2 Update README.md 2025-03-06 12:18:41 +01:00
4bd2e78259 [bugfix] explicitly pass parameters to "picksdict_from_picks" to calculate pick weights. Otherwise no weights could be calculated. Closes #40 2024-11-20 17:16:15 +01:00
468a7721c8 [bugfix] changed default behavior of PylotParameter class to use default Parameter if called without input parameters. Related to #40 2024-11-20 17:01:53 +01:00
555fb8a719 [minor] small code fixes 2024-11-20 16:57:27 +01:00
5a2a1fe990 [bugfix] flawed logic after parameter renaming corrected 2024-11-20 11:14:27 +01:00
64b719fd54 [minor] increased robustness of correlation algorithm for unknown exceptions... 2024-11-20 11:13:15 +01:00
71d4269a4f [bugfix] reverting code from commit 3069e7d5. Checking for coordinates in dataless Parser IS necessary to make sure correct Metadata was found. Fixes #37.
[minor] Commented out search for network name in metadata filename, considered unsafe
2024-10-09 17:07:22 +02:00
81e34875b9 [update] small changes increasing code robustness 2024-10-09 16:59:12 +02:00
d7ee820de3 [minor] adding missing image to doc 2024-09-30 16:40:41 +02:00
621cbbfbda [minor] modify README 2024-09-18 16:59:34 +02:00
050b9fb0c4 Merge remote-tracking branch 'origin/develop' into develop 2024-09-18 16:57:42 +02:00
eb3cd713c6 [update] add description for pick correlation algorithm 2024-09-18 16:56:54 +02:00
18c37dfdd0 [bugfix] take care of more unescaped backslashes in Metadata 2024-09-16 16:27:36 +02:00
9333ebf7f3 [update] deactivate Spectrogram tab features in main branch 2024-09-12 16:58:27 +02:00
8c46b1ed18 [update] README.md 2024-09-12 16:54:39 +02:00
c743813446 Merge branch 'refs/heads/develop'
# Conflicts:
#	PyLoT.py
#	README.md
#	pylot/core/util/widgets.py
2024-09-12 16:32:15 +02:00
41c9183be3 Merge branch 'refs/heads/correlation_picker' into develop 2024-09-12 16:24:50 +02:00
ae6c4966a9 [bugfix] compare options always activated using obspy_dmt independent of data availability 2024-09-12 12:23:18 +02:00
e8a516d16b [update] trying to increase plot performance for large datasets, may need an overhaul of drawPicks method in the future (too much recursion) 2024-09-12 12:19:44 +02:00
f78315dec4 [update] new test files for test_autopicker after changes in autopicker 2024-09-11 11:02:32 +02:00
28f75cedcb Merge branch 'refs/heads/develop' into correlation_picker 2024-09-11 10:31:50 +02:00
e02b62696d [bugfix] no actual UTCDateTime object was used to check metadata availability for check4rotated 2024-09-10 16:59:02 +02:00
e4217f0e30 [critical] fixing a major bug in checksignallength, testing needed 2024-09-10 16:58:12 +02:00
8f154e70d7 [minor] plot coloring 2024-09-10 16:57:18 +02:00
6542b6cc4f [minor] slightly improved test output 2024-09-10 16:57:00 +02:00
5ab6c494c5 [update] increased code readability and improved figures created in autopick.py and picker.py 2024-09-10 16:16:46 +02:00
3da47c6f6b [revert] changed slope calculation in AICPicker back to older state (probably causing problems changing results in test_autopickstation.py) 2024-09-09 16:56:38 +02:00
cc7716a2b7 [minor] improved unittest result 2024-09-09 16:05:02 +02:00
03947d2363 [update] removed bad STA/LTA implementation from CF class 2024-09-09 14:42:54 +02:00
e1b0d48527 [refactor] removed unused parameter "data" from calcCF methods 2024-09-09 14:20:41 +02:00
431dbe8924 [testing] improved dictionary comparison. Failed tests have completely different picks (not only snrdb) 2024-08-30 15:07:31 +02:00
63810730e5 [bugfix] added missing parameter "taup_phases" introduced a long time ago into default parameters and parameters for unit tests 2024-08-30 14:51:30 +02:00
f2159c47f9 [testing] brought test_autopickstation up to date, removing deprecated methods and using pytest.approx
Certain tests fail on snrdb calculation which has to be examined (WIP)
2024-08-30 12:41:16 +02:00
d0fbb91ffe [update] added test for AutoPyLoT, added test files for correlation picker as well 2024-08-29 16:46:30 +02:00
424d42aa1c Merge branch 'refs/heads/develop' into correlation_picker
# Conflicts:
#	pylot/core/pick/charfuns.py
#	tests/test_autopicker/pylot_alparray_mantle_corr_stack_0.03-0.5.in
#	tests/test_autopicker/test_autopylot.py
2024-08-29 16:37:15 +02:00
2cea10088d [update] added test for AutoPyLoT, added test files for correlation picker as well 2024-08-29 16:35:04 +02:00
5971508cab [bugfix] Metadata object did not find inventory for relative directory paths/unescaped backslashes 2024-08-29 16:34:37 +02:00
c765e7c66b [bugfix] fixed import for tukey in newer scipy versions which moved to signal.windows module 2024-08-29 16:33:39 +02:00
466f19eb2e [bugfix] fixed import for tukey in newer scipy versions 2024-08-28 18:01:03 +02:00
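The two tukey import fixes above address scipy's relocation of window functions into the `scipy.signal.windows` module (the old `scipy.signal.tukey` alias was eventually removed in newer releases). A version-tolerant import along these lines is a common way to support both old and new scipy — a sketch, not the actual PyLoT patch:

```python
# Prefer the new location (scipy >= 1.1); fall back to the legacy alias
# on very old scipy installations where scipy.signal.windows is missing.
try:
    from scipy.signal.windows import tukey
except ImportError:
    from scipy.signal import tukey

# The imported function behaves identically either way: a tapered
# cosine window of the requested length.
window = tukey(64, alpha=0.1)
```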
e6a4ba7ee2 [update] remove mean from picks (WIP for residual plotting) pt2 2024-08-28 10:37:31 +02:00
5d90904838 Merge branch 'develop' into correlation_picker 2024-08-27 17:46:21 +02:00
7a13288c85 [major] getting rid of unused/unnecessary "rootpath" and "database" structure. Testing required. 2024-08-27 17:45:15 +02:00
3f97097bf6 [update] add mailmap for better readability of git commit history 2024-08-27 16:19:19 +02:00
29107ee40c [update] WIP: adding tests for autopylot (global) 2024-08-26 17:18:41 +02:00
fa310461d0 [update] added possibility to remove the mean from picks (WIP for residual plotting) 2024-08-15 17:15:10 +02:00
42a7d12292 [bugfix] reduce maximum number of stations listed in array map status 2024-08-15 16:30:05 +02:00
2e49813292 [update] some general bugfixes and improvements in array map 2024-08-14 17:04:17 +02:00
5d6f4619cc [update] add selection for merge strategy for loading of single event files 2024-08-12 16:03:29 +02:00
db11e125c0 [todos] add todos 2024-08-09 16:53:21 +02:00
b59232d77b [bugfix] function name accidentally overwritten on parameter renaming 2024-08-09 16:52:57 +02:00
176e93d833 [refactor] finished annotations (type hints) 2024-08-09 16:52:32 +02:00
759e7bb848 [bugfix] partially reverted signature of an inner function with shadowed variable name
[refactor] minor
2024-08-09 16:24:40 +02:00
61c3f40063 Merge branch 'develop' into correlation_picker 2024-08-09 15:50:46 +02:00
213819c702 [update] simplify dependencies (remove sub-dependencies), update installation instructions in README.md 2024-08-09 15:50:02 +02:00
67f34cc871 Merge branch 'develop' into correlation_picker
# Conflicts:
#	pylot.yml
#	requirements.txt
2024-08-09 15:05:30 +02:00
f4f48a930f [refactor] moved unittest to existing test folder 2024-08-09 15:03:55 +02:00
b41e2b2de6 [update] new requirements.txt and pylot.yml for python 3.11 2024-08-09 15:02:31 +02:00
a068bb8457 [update] refactoring, added type hints 2024-08-08 16:49:15 +02:00
452f2a2e18 [bugfix] test raised different Exception than planned 2024-08-08 14:41:16 +02:00
c3a2ef5022 [minor] changed test to be approximately equal to test result on different machine 2024-08-08 11:28:10 +02:00
8e7bd87711 [new] added some unit tests for correlation picker (WIP) 2024-08-07 17:11:27 +02:00
d5817adc46 [merge] changes to correlation picker from different machines that were not committed 2024-08-07 10:17:35 +02:00
14f01ec46d Merge branch 'correlation_picker' of git.geophysik.ruhr-uni-bochum.de:marcel/pylot into correlation_picker 2024-08-07 10:08:57 +02:00
1b074d14ff [update] WIP: Adding type hints, docstrings etc. 2024-08-06 16:03:50 +02:00
ce71c549ca [bugfix] removed parameter that was re-introduced accidentally from manual merge 2024-08-06 16:03:16 +02:00
c4220b389e Merge branch 'correlation_picker' of git.geophysik.ruhr-uni-bochum.de:marcel/pylot into correlation_picker 2024-07-25 15:36:06 +02:00
0f29d0e20d [minor] small modifications (naming conventions) 2024-07-25 14:50:40 +02:00
e1e0913e3a Merge remote-tracking branch 'origin/develop' into develop 2024-07-25 10:25:59 +02:00
cdcd226c87 [initial] adding files from correlation picker 2024-07-24 14:07:13 +02:00
5f53cc5365 [bugfix] renamed method inside array_map.py 2024-07-23 16:31:52 +02:00
6cce05b035 Merge branch 'feature/dae' into develop
# Conflicts:
#	pylot/core/io/data.py
#	pylot/core/util/widgets.py
2024-06-12 16:19:21 +02:00
7326f061e5 [minor] inform if station coordinates were not found in metadata 2024-06-12 16:11:53 +02:00
1a18401fe3 [bugfix] added missing Parameter object in call for picksdict_from_picks 2024-06-12 13:44:02 +02:00
ec930dbc12 [minor] removed unneeded imports 2024-06-07 15:56:05 +02:00
b991f771af [bugfix] removing redundancy and flawed try-except code 2024-06-07 15:04:16 +02:00
2c3b1876ab [minor] switch default cmap for array_map to 'viridis' 2024-06-07 14:34:27 +02:00
0acd23d4d0 [update] further improved Pickfile selection dialog, now providing methods "overwrite" or "merge" 2024-06-07 14:32:57 +02:00
f349c8bc7e [update] improve pickfile selection, give the ability to select only specific files 2024-06-07 13:09:34 +02:00
6688ef845d [bugfix] re-implement ability of get_bool to return unidentifiable input 2024-06-07 13:08:51 +02:00
5b18e9ab71 [merge] merge branch 'improve-util-utils' of pull request #35 into develop 2024-06-07 10:29:39 +02:00
c79e886d77 [minor] update default shell script for SGE 2024-06-06 15:55:47 +02:00
76f2d5d972 [update] improve SearchForFileExtensionDialog now proposing available file endings 2024-06-06 15:54:36 +02:00
2d08fd029d [hotfix] datetime formatting caused error when time not set 2024-06-06 13:55:11 +02:00
8f22d438d3 [update] changed eventbox overview to show P and S onsets separately 2024-06-05 16:18:25 +02:00
93b7de3baa [update] raising PickingFailedException when CF cannot be calculated due to missing signal (too short waveform)
2024-06-05 14:31:09 +02:00
05642e775b [minor] some tweaks (convenience)
[update] raising PickingFailedException when CF cannot be calculated due to missing signal (too short waveform)
2024-06-05 14:31:07 +02:00
47205ca493 [update] improved calculation of smoothed AIC. Old code always created an artificial value and a np.nan at the array start 2024-06-05 14:31:07 +02:00
5c7f0b56eb [update] improved SearchFileByExtensionDialog widget 2024-06-05 14:19:17 +02:00
c574031931 [bugfix] the implementation approach of STA/LTA inside characteristic function calculation (skewness/kurtosis) corrupted the old, working code due to a mistake in the logic 2024-06-05 14:17:57 +02:00
e1a0fde619 [update] improved array map to identify and display other phase names than P / S 2024-05-29 11:43:31 +02:00
48d196df11 [update] improved SearchFileByExtensionDialog widget (table, auto refresh) 2024-05-29 11:42:10 +02:00
6cc9cb4a96 [minor] reduced maxtasksperchild for multiprocessing 2024-05-29 11:40:34 +02:00
5eab686445 [bugfix] accidentally removed return value 2024-05-24 16:10:32 +02:00
b12e7937ac [update] added a widget for loading pick files that lists available files depending on chosen filemask 2024-05-22 10:58:07 +02:00
78f2dbcab2 [update] added check for nan values in waveforms which crashed obspy filter routines
[minor] some tweaks and optimisations
2024-04-30 15:48:54 +02:00
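The NaN check mentioned in commit 78f2dbcab2 guards obspy's filter routines, which crash on waveforms containing invalid samples. A minimal sketch of such a pre-filter check, using numpy only; the helper name `contains_nan` is hypothetical and not PyLoT's actual implementation:

```python
import numpy as np

def contains_nan(data):
    """Return True if the waveform array holds any NaN samples.

    Hypothetical sketch of the safeguard described in the commit above:
    run this on trace data before handing it to filter routines and
    skip (or clean) traces for which it returns True.
    """
    return bool(np.isnan(np.asarray(data, dtype=float)).any())
```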
31ca0d7a85 Merge remote-tracking branch 'origin/develop' into develop
# Conflicts:
#	pylot/core/util/widgets.py
2024-04-09 16:12:03 +02:00
8b95c7a0fe [update] changed sorting of traces overview if all station names are numeric (e.g. active experiments) 2024-04-09 16:02:31 +02:00
c7f9ad4c6f [update] changed sorting of traces overview if all station names are numeric (e.g. active experiments) 2024-04-09 15:53:19 +02:00
65dbaad446 [update] adding possibility to display other waveform data (e.g. denoised/synthetic) together with genuine data for comparison 2024-03-22 17:12:04 +01:00
5b97d51517 [minor] mpl.figure.canvas.draw -> draw_idle 2024-03-22 17:12:04 +01:00
f03ace75e7 [bugfix] QWidget.show() killed figure axis dimensions creating unexpected error of fig.aspect=0 when creating colorbar inset_axes in Python 3.11 2024-03-22 17:10:04 +01:00
9c78471d20 [bugfix] header resize method renamed in QT5 2024-03-22 15:34:05 +01:00
09d2fb1022 [bugfix] pt2 of fmpicker fix, make sure to also copy stream in autoPyLoT
closes #24
2023-08-24 12:55:30 +02:00
3cae6d3a78 [bugfix] use copies of wfdata when calling fmpicker to prevent modification of actual data used inside GUI 2023-08-24 11:28:30 +02:00
2e85d083a3 [bugfix] do not call calcsourcespec if incidence angle is outside bounds (for whatever reason) 2023-08-24 11:27:30 +02:00
ba4e6cfe50 [bugfix] bin directory + /bin creates "/bin/bin". Also, OS compatibility and compatibility with existing code were not taken care of (line 86ff was useless after a recent change in line 85) 2023-08-23 14:48:21 +02:00
1f16d01648 [minor] give more precise user warning if no pick channel was selected 2023-08-23 09:38:16 +02:00
3069e7d526 [minor] commented - possibly unnecessary - line of code that created an error when using old metadata Parser 2023-08-22 15:53:49 +02:00
a9aeb7aaa3 [bugfix] set simple phase hint (P or S) 2023-08-22 15:53:49 +02:00
b9adb182ad [bugfix] could not handle asterisk-marked events when opening tune-autopicker 2023-08-22 15:53:49 +02:00
a823eb2440 [workaround] using an explicit Exception definition without special handling does not make sense. The function broke on other errors in polyfit. Still might need fixes in the two lines above the "except" block(s). 2023-08-22 15:53:49 +02:00
486e3dc9c3 Merge pull request 'Disabled button in case flag is false' (#31) from disable-show-log-widget into develop
Reviewed-on: #31
2023-08-22 12:05:33 +02:00
8d356050d7 [update] corrected original authors of PILOT 2023-08-22 12:01:51 +02:00
43cab3767f [Bugfix] fixed wrong check for taupymodel 2023-06-27 08:04:00 +02:00
b3fdbc811e Merge branch 'develop' into improve-util-utils 2023-06-23 09:37:54 +02:00
a1f6c5ffca Bugfixes, spectrogram tab WIP 2023-06-14 13:11:54 +02:00
e4e7afa996 Minor changes to adjust to python 3. Temporary Fix for file exporting not working properly. WIP spectrogram view. 2023-04-27 10:24:55 +02:00
9fce4998d3 bugfix: remove unused functions; correct for wrong formatting (PEP) 2023-04-23 22:05:11 +02:00
c468bfbe84 feat: add type hints and tests for plot utils 2023-04-23 21:37:20 +02:00
4861d33e9a bugfix: add tests to key_for_set_value 2023-04-16 09:58:51 +02:00
f5f4635c3d bugfix: rename is_iterable and add doc tests 2023-04-16 09:50:42 +02:00
b12d92eebb bugfix: refactor get_owner and get_hash; add tests 2023-04-12 21:22:58 +02:00
e9da81376e bugfix: add new tests and refactor get_none 2023-04-12 20:32:44 +02:00
e68fc849f0 bugfix: correct erroneous and add new doctests 2023-04-10 19:14:23 +02:00
efb117177c bugfix: update check4rotate 2023-04-10 18:35:58 +02:00
0634d24814 fix: disabled button in case flag is false
The button was not disabled in case the flag variable was false. The get_Bool function was renamed and improved to also work in case the input variable is of type int or float.

Additionally, the environment file was corrected to also work for macOS installations with ARM architecture.
2023-04-06 16:40:20 +02:00
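The pull-request description above, together with the get_bool bugfix further up (commit 6688ef845d), describes a boolean-parsing helper that accepts int or float input and passes unidentifiable input through unchanged. A hypothetical sketch of that behavior — the name, the accepted string values, and the exact fallthrough are assumptions, not PyLoT's actual code:

```python
def get_bool(value):
    """Coerce a loosely typed flag to bool (hypothetical sketch)."""
    if isinstance(value, bool):
        return value
    # ints and floats are truth-tested, as the description above requires
    if isinstance(value, (int, float)):
        return bool(value)
    if isinstance(value, str):
        if value.lower() in ('true', '1', 'yes'):
            return True
        if value.lower() in ('false', '0', 'no'):
            return False
    # unidentifiable input is returned unchanged, mirroring the
    # re-implemented behavior mentioned in commit 6688ef845d
    return value
```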
43c2b97b3d Small changes 2023-01-24 11:45:12 +01:00
ann-christin
8d94440e77 [bugfix] logwidget always initiated 2022-11-14 14:14:59 +01:00
ann-christin
66b7dea706 [update] pylot.in no longer mandatory 2022-11-14 14:14:12 +01:00
ann-christin
ebf6d4806a [minor] reformatting 2022-11-14 11:52:25 +01:00
ann-christin
207d0b3a6f [update] directly pass args from arg parser 2022-11-14 11:18:15 +01:00
ann-christin
3b3bbc29d1 Merge remote-tracking branch 'origin/develop' into develop 2022-11-14 10:30:38 +01:00
0c3fca9299 Re-added local changes that had been lost due to technical problems (no access to old machine) 2022-10-04 11:44:31 +02:00
2d33a60421 [Bugfix] Multiple small bugfixes for issues keeping NLL from working in Python 3+ 2022-09-15 14:31:13 +02:00
ann-christin
a8c6f4c972 [reformat] spell checking 2022-08-25 15:31:08 +02:00
ann-christin
0d91f9e3fe update github link 2022-08-25 14:03:05 +02:00
ann-christin
494d281d61 update github link 2022-08-25 14:00:37 +02:00
5ef427ec12 Merge branch 'develop' of git.geophysik.ruhr-uni-bochum.de:marcel/pylot into develop 2022-06-29 14:07:35 +02:00
29cf978782 minor bug fixes 2022-06-29 14:07:06 +02:00
091449819c [update] tau-p usage for s-picking 2022-05-31 18:16:02 +02:00
cd9c139349 [bugfix] mainly added missing taup model (lost in branch merging for python3), some smaller fixes for S picking 2022-05-31 12:45:22 +02:00
084fb10cea Merge remote-tracking branch 'origin/develop' into develop 2022-05-31 09:59:17 +02:00
dde9520879 [minor] deactivate logwidget by default as it seems to irregularly create segfaults 2022-05-31 09:40:15 +02:00
7847f40a35 [minor] added developers to README 2022-04-01 12:07:51 +02:00
2c188432a1 Merge branch 'develop' of https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot into develop 2022-03-24 13:28:52 +01:00
86dc0f5436 [minor] small style changes 2022-03-22 11:26:29 +01:00
83ba63a3fd Merge pull request 'feature/port-to-py3' (#11) from feature/port-to-py3 into develop
Reviewed-on: #11
2022-03-21 15:30:05 +01:00
3392100206 Removed psd 2022-03-17 13:42:27 +01:00
401265eb3a [minor] added some comments 2022-03-16 16:03:54 +01:00
dd685d5d5e [refactor] rewrote/simplified getQualitiesfromxml code, used function already implemented in phases.py 2022-03-16 16:00:14 +01:00
3cd17ff364 [update] added xml file for unittest 2022-03-16 14:31:03 +01:00
d879aa1a8b [bugfix] small fix to get getQualitiesfromxml running, added unittest for function 2022-03-16 14:29:47 +01:00
445f1da5ac [bugfix] reset stdout to previously set one and not to default sys.__stdout__ 2022-03-16 09:25:28 +01:00
962cf4edac [minor] README.md 2022-03-15 11:14:08 +01:00
eb0dd87a9e [update] optimize requirements/yml file 2022-03-15 10:44:37 +01:00
e9cc579cf6 [minor] README.md 2022-03-15 10:41:48 +01:00
f1fd52b750 [bugfix] when closing mainwindow, also close logwidget 2022-03-15 10:41:29 +01:00
5449210797 [update] optimize requirements/yml file 2022-03-15 10:12:39 +01:00
0af948030b [bugfix] chooseArrival -> chooseArrivals, fixes #25 2022-03-15 09:42:25 +01:00
21b1be0e56 [update] updated README 2022-03-14 16:05:58 +01:00
58eee13b07 [update] removed setup.py (deprecated) 2022-03-14 15:53:42 +01:00
c57eb4f556 [update] added requirements, updated pylot.yml and setup.py 2022-03-14 15:33:33 +01:00
47bb8c4326 Merge branch 'develop' into feature/port-to-py3 2022-03-14 13:19:20 +01:00
29ffcf2e37 [bugfix] pick_r unreferenced, closes #26 2022-03-14 11:18:51 +01:00
e35d5d6df9 [refactor] automatic code reformatting (Pycharm) 2022-03-09 14:41:34 +01:00
79f3d40714 [refactor] code cleanup (WIP) 2022-03-09 14:28:30 +01:00
9cef22b74b [refactor] code cleanup (WIP), open issues #25 and #26 2022-03-09 14:08:04 +01:00
c254ee09b3 [bugfix] updated some window flags for PySide2 2022-03-09 10:43:16 +01:00
be32f4f61d [improvement] log can be opened from menu now (or focuses) 2022-03-09 10:42:52 +01:00
9036a9054e [new] log widget for PyLoT 2022-03-08 16:27:36 +01:00
dd02527c1d [bugfix] connected save figure button to a function that saves a figure 2022-03-08 15:07:18 +01:00
a383101d2c [minor] adjustments array map layout 2022-03-08 14:42:55 +01:00
Kaan Cökerim
7c926327dd Merge branch 'feature/port-to-py3' of git.geophysik.ruhr-uni-bochum.de:marcel/pylot into feature/port-to-py3 2022-03-08 12:45:49 +01:00
Kaan Cökerim
e9d3caafd3 [FOLDER RESTRUCTURING] Changed project structure from ROOT-DATA-DATABASE separation into one path (DATA). Some ROOT and DATABASE variables still remain as non-functional artefacts and need to be removed from the code 2022-03-08 12:30:43 +01:00
0b50a91f57 [bugfix] scrolling inhibited within QSpin/QDoubleSpinBox 2022-03-07 15:32:29 +01:00
8725f5083c Merge branch 'develop' into feature/port-to-py3 2022-03-07 14:46:35 +01:00
13e0d8b106 Added second check to toggle locate button
When the user was using the continue-to-next-station button, the locate button did not always get turned on while picking
2022-02-23 09:14:40 +01:00
a13defbe42 Added getQualitiesFromXML and EventlistFromXML scripts and respective buttons
Changed icons for both buttons - only temporary versions at the moment
2022-01-24 10:11:44 +01:00
0ee239f4a6 Added chooseArrivals Function to phases.py
Function takes dictionary with auto and manual picks and prefers the manual ones for saving data
2022-01-18 22:39:57 +01:00
131f6782af Temporary fix for Auto/Manual Pick Dictionary detection + fix for .pha files not getting saved 2021-11-21 13:04:25 +01:00
15cad42868 [Bugfix] Export of picks did not work because dictionary key pickmethod
was not taken into account.
2021-09-20 12:13:20 +02:00
Kaan Cökerim
0dc5aff3fc [FOLDER RESTRUCTURING #22] some changes in path variables 2021-09-08 13:06:51 +02:00
7f9d240d02 Adding Bugfix from port-to-py3 port for incorrect picksdict to develop branch 2021-09-05 09:26:59 +02:00
Kaan Cökerim
7c5e16ecc6 [FOLDER RESTRUCTURING #22] designated datapath as new single path variable and changed variable assignments accordingly for Project() and Event() classes and partially in MainWindow(). 2021-08-30 14:04:25 +02:00
Kaan Cökerim
321a871d62 Implemented merge suggestions #11 2021-08-12 15:43:23 +02:00
Kaan Cökerim
a738dc57ec Minor bug fixes. Fixes #16 2021-08-12 14:24:05 +02:00
ca56139922 [minor] corrected some coding mistakes, added some requirements 2021-08-12 09:42:18 +02:00
81240a8ffe [bugfix] also regarding #20 2021-08-11 16:18:41 +02:00
d236d3d80f Merge branch 'develop' into feature/port-to-py3 2021-08-11 15:16:19 +02:00
2dd48b4c62 [bugfix] picksdict was corrupted so that no manual picks could be loaded from .xml, fixes #20 2021-08-10 17:40:49 +02:00
Kaan Cökerim
e850e5d4fe Removed hard coded user specific data paths 2021-08-04 17:05:43 +02:00
Kaan Cökerim
4fd27994f6 Extern saving routine for map png 2021-08-04 16:58:14 +02:00
Kaan Cökerim
ca140d3fe1 Table now checks if magnitude object has any entries before trying to write to event table 2021-07-28 10:14:09 +02:00
Kaan Cökerim
60db23e83c Fixed minor plot bugs 2021-07-26 13:18:24 +02:00
b976ec6ab0 [minor] remove redundant code 2021-07-20 17:24:27 +02:00
702b7f3fb1 [minor] removed total number of picks which seems to be buggy (and also not explained) 2021-07-20 14:09:57 +02:00
320cb42390 [bugfixes] some more bugfixes 2021-07-16 10:15:40 +02:00
cc1139d2ca [bugfix] some (py3) related fixes in rotate station and minMax plotting 2021-07-15 12:04:34 +02:00
aa73bb0ae8 [bugfix] WIP, creating new issue: Could not save project after initializing Event table; PySide2.QTableWidgetItem cannot be pickled! 2021-07-09 15:31:12 +02:00
de11c3b611 [bugfix] added dummy argument for QAction trigger signal
fixes #14
2021-07-09 12:16:32 +02:00
92c328e3c6 [bugfix] closes #15 increased pointsize and moved station labels 2021-07-09 12:10:10 +02:00
819ea8bf00 [bugfix] Parameter window opening when loading events 2021-07-09 12:09:25 +02:00
Kaan Cökerim
bd9c1d0f73 Some Minor Fixes 2021-07-09 10:06:28 +02:00
30eb8fbbbd [Bugfix] Taup model was hard coded as iasp91. 2021-07-07 11:10:23 +02:00
8b03aab381 [Bugfix] Take into account if only local or moment magnitude has been
derived to write event list properly.
2021-07-05 11:29:44 +02:00
Kaan Cökerim
5efe75f4ea Minor bug fixes 2021-06-21 11:28:54 +02:00
Kaan Cökerim
7ccad4cb86 minor bug fixes 2021-06-18 19:21:43 +02:00
3af02bbbf7 Merge branch 'develop' of https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot into develop 2021-06-18 13:01:43 +02:00
a6af015f05 For some data scopy is not working (??), captured that. 2021-06-18 12:59:38 +02:00
Kaan Cökerim
b4e72eafe7 Fixed another QtGui issue 2021-06-17 16:35:47 +02:00
Kaan Cökerim
fbeab85bb3 Fixed pgplot QtGui imports 2021-06-17 16:27:30 +02:00
Kaan Cökerim
0c43040faf Map finalized. Added environment YML 2021-06-17 16:04:18 +02:00
6e40811b01 [minor] small bugfix 2021-06-17 15:34:55 +02:00
Kaan Cökerim
04067cac8e Map zoom and pan work. Interaction with station scatters broken. Map grid does not adjust to changed map scales 2021-06-16 16:02:35 +02:00
67e2ba3ad7 Additional screen output, temporarily comment amplitude info for
obs-file.
2021-06-15 11:18:24 +02:00
Kaan Cökerim
2ccdfb8470 Map is displayed correctly. Zoom/Pan does not work -> Structure needs rework 2021-06-02 21:04:51 +02:00
Kaan Cökerim
f502ac0a00 Array map is now setting up, though it does not display the map correctly. Issue with plotting axes or show command? 2021-05-26 14:08:02 +02:00
Kaan Cökerim
978bf42a3e Parameter window works again. TODO: find line that sets PyLot.get_Parameters(show=FALSE). At the moment it is hard coded to be always shown 2021-05-09 14:46:13 +02:00
5289be55dd Merge branch 'develop' of https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot into develop 2021-05-06 13:49:12 +02:00
27c9bbb96a Manually adding generic Amplitude to .obs files
The obspy write function does not include the generic Amplitude/A0 in the .obs file (NonLinLoc) even when the Event holds this data. The Obspy team has been informed by opening an issue on GitHub
2021-05-06 13:48:44 +02:00
Kaan Cökerim
4451209907 GUI starts and most menus work. Some variables apparently got lost within various branches 2021-04-23 12:00:04 +02:00
Kaan Cökerim
ded46d45cc GUI starts successfully. Changed imports to work with PySide2 2021-04-04 12:21:27 +02:00
c9bd0a147c Preparation of implementing pick-quality button. 2021-04-01 11:01:06 +02:00
07e7ef3efa [revision] cleaned up unclean code: tabs -> spaces 2021-03-30 16:39:44 +02:00
f11ffc67ff Merge branch 'develop' of https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot into develop 2021-03-29 16:10:08 +02:00
217fa330c1 Fixed eventlist not working after locating twice
Changed exception when less than 2 horizontals are found to warning
2021-03-29 16:05:50 +02:00
bfec58cc24 [bugfix] entering if when len(mag) == 1 2021-02-01 16:03:23 +01:00
172786dc6d [Bugfix] Remove elements of shrinking list! 2021-01-14 14:04:50 +01:00
a9784d33e5 If events have been removed, project-event list is updated
automatically.
2021-01-14 10:13:51 +01:00
21453159b7 Started implementing saving peak-to-peak amplitude in .obs file 2020-12-10 12:50:33 +01:00
f40d22af33 FOCMEC needs suffix *.in for input file. 2020-10-15 10:02:02 +02:00
01fea084d5 [Bugfix] Avoid printing * in focmec-input file. 2020-10-15 09:43:25 +02:00
16f6e2d1fd [Bugfix] localmag and momentmag referenced before assignment. 2020-10-07 10:37:41 +02:00
49c747b638 Merge branch 'develop' of https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot into develop 2020-10-06 11:26:37 +02:00
f8f4e6789c [new] add extent option "active" to pick high frequency active seismic data 2020-10-06 10:44:27 +02:00
08c2d7556f Implemented first-motion info into picking window. 2020-09-30 13:27:02 +02:00
7d77cb0b2f [Bugfix] Wrong syntax for multiple if statements. 2020-09-30 10:11:10 +02:00
6f70b2c0e2 Implemented first-motion picker within manual picking. 2020-09-29 16:48:38 +02:00
8f1ab87045 Started implementing automated first-motion picking within manual
picking.
2020-09-28 16:55:25 +02:00
5351043493 Local and moment magnitude appear in the event list. 2020-09-28 15:11:23 +02:00
dccbaa357a Check if data directory exists to avoid program abortion. 2020-09-22 10:30:48 +02:00
37c8858096 Limit number of fft bins. 2020-09-21 17:01:06 +02:00
f93499da7d For unknown reasons sometimes station and network information are not
available, avoid program failure.
2020-09-16 16:16:53 +02:00
e4cfebe989 Capture possible error occurring due to incomplete dictionaries. 2020-07-30 15:21:26 +02:00
da360990f6 [Bugfix]: Capture error occurring when writing output files without event
location.
2020-07-30 13:55:56 +02:00
e083e75d73 Make sure to load autoPyLoT-event information from xml file. 2020-07-30 13:27:38 +02:00
79192cda33 Distinguish between auto picks and manual picks in code. 2020-07-30 13:04:45 +02:00
066db3df53 Removed unnecessary print statements. 2020-07-30 10:40:34 +02:00
6d81e4a132 Speed up spectral fit; take into account that in rare cases not all
pick information in the pick dictionary is available.
2020-07-30 10:37:39 +02:00
f49b9054b0 [Bugfix]: Take into account onset time might be None. 2020-07-29 15:10:22 +02:00
82e2c325ae Return 'N' (noisy) for first motion instead of None. 2020-07-29 14:51:23 +02:00
85ae05b580 Updated author information. 2020-07-29 13:55:54 +02:00
e64135c7a8 [Bugfix]: Check for key S and be sure to get element weight for pick
dictionary.
2020-07-29 12:14:42 +02:00
b7795ca562 Removed right-side picking to stabilize algorithm. 2020-07-29 10:10:18 +02:00
8066bd2f01 Avoid IOError, relaxed to a warning message to keep PyLoT running. 2020-07-28 17:16:48 +02:00
df6a6c1c71 Added output for hypoDD. 2020-07-28 13:08:25 +02:00
1f6a44eeef [Bugfix]: elif statement was not working. 2020-07-27 16:55:19 +02:00
2bab42a0dc [Bugfix] invlist is already inventory object! 2020-07-27 16:26:11 +02:00
c7d8569105 Enabled writing for FOCMEC output. 2020-07-23 16:51:05 +02:00
003ba76650 [Bugfix]: PyLoT is now able to write VELEST-input cnv-file. 2020-07-21 16:22:17 +02:00
7617958a1c Capture problems during data fitting. 2020-07-21 11:55:27 +02:00
32c95757c6 [minor] small changes to autopylot.sh script 2020-07-16 12:36:23 +02:00
dad383b197 [revert] reverted accidental changes to file PyLoT.py 2020-07-16 12:35:18 +02:00
707c596672 [Bugfix]: No scaling if 0.0! 2020-07-08 15:32:16 +02:00
075beb2db3 [Bugfix]: network magnitude has never been scaled! 2020-07-08 15:20:03 +02:00
7023bc4ce4 Bug workaround: sometimes rotate2zne fails resulting in a permanent
error
2020-07-08 14:57:43 +02:00
9486d7aced [changed] major change in QSettings behavior when saving channel order (alternative naming (ZNE -> 123)), RESET OF QSETTINGS MIGHT BE NECESSARY (PyLoT.py --reset_qsettings) 2020-06-16 15:06:51 +02:00
1a20d7fbcb [bugfix] workaround for MacOS, where SetChannelComponents class can not be saved using QSettings 2020-06-16 12:03:34 +02:00
fb6ba83cc5 [new] added --reset_qsettings option to PyLoT in case of corrupted QSettings 2020-06-16 11:09:50 +02:00
02083e2bf8 [change] magnitude calculation problem using evalresp in GUI, changed to usage of ObsPy intrinsic function 2020-06-08 17:42:05 +02:00
7187cb337b [various] fixes on array_map 2020-06-08 14:23:05 +02:00
fa093ccac6 Take into account that creation info sometimes is not available. 2020-06-03 13:19:31 +02:00
6ad187b7fa [update] added missing function for refactor commit 2020-01-14 14:03:13 +01:00
78eca1b222 [update] some smaller changes in GUI and array_map 2020-01-14 13:57:12 +01:00
492f5b96f2 [refactor] moved functions requiring QT import from util.utils to util.gui 2020-01-14 13:55:30 +01:00
be397edf1c [bugfix] same problem as prior commit 2019-12-17 17:04:08 +01:00
fa4423f65d [bugfix] transform to float for different python library versions 2019-12-17 16:54:07 +01:00
29440deb6f [new] set cuttimes for waveforms in properties tab to speed up reading and visualization
[improvements] only re-read/re-plot if needed after changing properties
2019-12-17 16:24:32 +01:00
aa38e2b851 Merge branch 'develop' of ariadne.geophysik.rub.de:/data/git/pylot into develop 2019-12-17 13:51:44 +01:00
3a1c907043 [add] delete all autopicks button in GUI
[bugfix] small bugfix in case of problems with Stream String representation
2019-12-17 13:50:54 +01:00
3354e3980e [update] improved array map, point size showing pick error class now 2019-12-11 15:59:24 +01:00
a9135ec483 [bugfix] calculation of hybrid picks (auto+manual) for array_map called too often taking a lot of time 2019-11-27 10:56:40 +01:00
be1e49fa57 [updates] generate_array_maps multiprocessing, added extension '_autopylot' to picks generated by autopylot to prevent overwriting on auto-save 2019-11-26 11:11:25 +01:00
Ludger Küperkoch
3be78ec576 Some debugging. 2019-11-19 15:12:10 +01:00
ludger
5a2aeeb70d Re-implemented second option for calculating fit to source spectrum, take median of both options; some bug fixes 2019-11-15 15:09:25 +01:00
ludger
570e47f2df No changes!! 2019-11-15 11:36:26 +01:00
ludger
4a0759ee6e Replaced IOError with warning message, GUI stays alive, but still buggy with TauPy! 2019-11-15 09:57:31 +02:00
ludger
8419fc9681 Taper CF before calculating AIC. 2019-11-14 16:47:16 +01:00
Ludger Küperkoch
98e429d8fd Update moment magnitudes before calculating network magnitude. 2019-10-28 16:51:04 +01:00
Ludger Küperkoch
ceb33bf59e Merge branch 'develop' of ariadne.geophysik.rub.de:/data/git/pylot into develop 2019-10-28 16:43:11 +01:00
Ludger Küperkoch
f28ee2bb59 New default parameters (min-, max-values) for magnitude scaling. 2019-10-28 16:41:48 +01:00
8b741c9102 [bugfix] old parameter default values, especially min/max values remained from older pylot projects 2019-10-28 16:33:19 +01:00
Ludger Küperkoch
0ef2afa775 Update of event object for moment magnitude was missing. 2019-10-25 16:20:48 +02:00
Ludger Küperkoch
dd05d46be7 Added calculation of network moment magnitude, additional screen output of moment magnitude. 2019-10-25 16:16:07 +02:00
e6a326d437 [bugfixes] on GUI driven earthquake location/magnitude calculation 2019-10-24 10:12:00 +02:00
18aa67e5ba [update] some bugfixes relating to array_maps, removed some unnecessary files 2019-10-23 11:34:50 +02:00
Ludger Küperkoch
e94129ec36 If several locations done, use the most recent one. 2019-09-13 12:29:42 +02:00
Ludger Küperkoch
813a2c989e Some cosmetics. 2019-09-11 12:22:11 +02:00
Ludger Küperkoch
f21973c7b6 Restitute and calculate magnitudes only on picked traces. 2019-09-11 12:13:53 +02:00
Ludger Küperkoch
2ecd7b1ec5 Removed deactivated code. 2019-09-11 11:30:39 +02:00
Ludger Küperkoch
929ddb0ab2 Activated magnitude calculation, removed typo (WAscaling), still buggy in applyEvent! 2019-09-09 16:23:34 +02:00
Ludger Küperkoch
b4eeef55a6 Bug: AxesSubplot object has no attribute set_facecolor. Deactivated that line. 2019-09-09 14:40:51 +02:00
cc8d301cff [new] small script to automatically generate array maps (maybe add this as a GUI option?) 2019-09-05 17:20:11 +02:00
b39489b253 [update] increased flexibility of ArrayMap class 2019-09-05 17:19:40 +02:00
7382ca5d47 [bugfix] start/endtimes modified for taupy were not used at all 2019-09-02 10:44:25 +02:00
34ca64ee17 [bugfix] old picks were not properly erased before loading new picks 2019-08-23 11:30:16 +02:00
2963be5bba [bugfix] trial-and-error reading did not work for StationXML files 2019-08-23 11:29:15 +02:00
c1cb0e4a37 [bugfix] gaps are now merged when data are loaded and not when plotting (could cause problems with data object being empty) 2019-08-14 14:52:56 +02:00
15882ed27e Merge branch 'feature/metadata_clean_inventory' into develop 2019-08-12 16:39:12 +02:00
b21b113edd [bugfix] dataprocessing, xml
[update] save images as svg/jpg
2019-08-12 11:58:04 +02:00
9764e12b19 [new] save event table as .csv file 2019-07-01 17:24:06 +02:00
8ae00529f0 [minor] modified status output (count events) 2019-06-26 12:00:55 +02:00
3c3fd00654 [minor] changed number of levels on array map 2019-06-26 11:58:03 +02:00
2d775fa1b9 [rename] real_Bool/real_None -> get_Bool/get_None 2019-06-24 15:05:04 +02:00
2687d6cdef [bugfix] get bool on 'use_taup' parameter in case it was a string 2019-06-24 15:05:03 +02:00
eaabdce47a [bugfix] on magnitudes 2019-06-24 15:05:02 +02:00
fe73d2e054 [minor] make safety copy of existing deleted picks file before overwriting 2019-06-24 15:05:02 +02:00
Ludger Küperkoch
6caa3ee9ce Additional screen output. 2019-06-19 17:32:57 +02:00
Ludger Küperkoch
bae20e7e7c [bugfix] Plot was not enabled, indentation correction (through uncommenting) 2019-06-19 16:52:31 +02:00
Ludger Küperkoch
bf20ad92da [bugfix] Condition for using entire waveform (Ldiff<0) was too weak. 2019-06-19 14:41:45 +02:00
Ludger Küperkoch
bca4ce1c11 [bugfix] If getsignalwindow provided an empty array, checksignallength failed and led to incomplete pick dictionaries. 2019-06-19 14:07:15 +02:00
e6f4545058 [remove] restflag (unused) 2019-06-18 15:38:05 +02:00
f51fdd01d5 [minor] typo 2019-06-18 14:34:16 +02:00
6f4378e4c2 [minor] updated .gitignore 2019-06-18 14:04:10 +02:00
2963587ea1 Revert "Take into account that for individual stations cut times are set to zero, which then led to wrong initial pick times."
This reverts commit 291db16e2bdfa32e8c4e7362668a81b3f1f698e0.
2019-06-18 14:01:05 +02:00
06235cc092 [bugfix] added check for length of data array, code was lost on refactoring 2019-06-18 13:58:35 +02:00
61b514fae1 [bugfix] on deleted picks 2019-06-18 11:35:43 +02:00
dcca731890 [bugfix] workaround for SPt in pick dictionary 2019-06-18 11:35:43 +02:00
508250f03f [minor] corrected time axes to correct endtime (was increased by one sample using npts*dt instead of npts-1*dt) 2019-06-18 11:35:42 +02:00
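The off-by-one fixed above is easy to reproduce: for npts samples at spacing dt, the last sample lies at (npts - 1) * dt, not npts * dt. A minimal sketch (variable names hypothetical, not PyLoT's actual code):

```python
import numpy as np

npts, dt = 100, 0.01  # hypothetical trace: 100 samples at 100 Hz

# wrong endtime: npts * dt lies one sample beyond the last real sample
# correct: npts samples from 0 up to and including (npts - 1) * dt
t = np.linspace(0, (npts - 1) * dt, npts)

assert len(t) == npts
assert abs(t[-1] - (npts - 1) * dt) < 1e-12
```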
Ludger Küperkoch
291db16e2b Take into account that for individual stations cut times are set to zero, which then led to wrong initial pick times. 2019-06-17 20:48:01 +02:00
Ludger Küperkoch
d9777eaa05 Take into account seismograms too short for given cut times! 2019-06-17 17:26:23 +02:00
Ludger Küperkoch
0e65478645 Removed unnecessary pdb. 2019-06-14 18:49:53 +02:00
Ludger Küperkoch
2d7b7708a1 Variable pick referenced before assignment, cosmetics. 2019-06-14 12:50:16 +02:00
Ludger Küperkoch
025822c485 On the way to stable magnitude determination, still buggy! 2019-06-14 12:35:56 +02:00
Ludger Küperkoch
ed9a7c27aa Additional information on the screen. 2019-06-04 14:22:08 +02:00
Ludger Küperkoch
bb313f0f49 Re-implemented magnitude calculation within manual picking/location, yet on-going work 2019-06-04 12:29:53 +02:00
102b7eafe3 [new] log deleted picks in json file 2019-05-29 15:59:05 +02:00
Ludger Küperkoch
02a0abffd9 Merge branch 'develop' of ariadne.geophysik.rub.de:/data/git/pylot into develop 2019-05-29 13:14:02 +02:00
Ludger Küperkoch
caf76fe94b Replaced ValueError with return, maybe this is only a bad workaround! 2019-05-29 13:12:39 +02:00
3ccf826f25 [new] FileExtensionDialog for load-picks (e.g. _correlated.xml) 2019-05-29 10:37:16 +02:00
fdcd35186f [add] $NSLOTS variable to autopylot shell script 2019-05-29 10:36:52 +02:00
87bd46b535 [bugfix] Exception had no attribute message 2019-05-29 10:36:37 +02:00
8164710ea4 [bugfix] plotting map without picks led to an exception 2019-05-28 12:20:29 +02:00
74c68d6a14 [hotfix] deepcopy of Pickparameter object 2019-05-28 11:23:47 +02:00
77eec8e94c [bugfix] some small fixes regarding plotting and reading notes.txt 2019-05-28 11:23:47 +02:00
20dc5a132a [minor] removed old unused code 2019-05-28 11:23:46 +02:00
Sally
8270bd4974 [change] delete old obspy dmt database files, current files unselectable
The metadata files that come from the obspy dmt database are saved in a list. When a new event is opened, the obspy dmt metadata files from the old event are deleted from the inventory (and the new ones are added). 
The metadata files from the obspy dmt database are now unselectable in the MetadataWidget. 
The clear inventory function is called when saving the project (otherwise the obspy dmt metadata gets saved and becomes editable). The update_obspy_dmt function is called after the save to get the obspy_dmt_metadata back to continue working after the save.
2019-05-17 11:20:12 +02:00
Sally
777981e8c6 Removal of dead code
Old code that is no longer used (was rewritten).
2019-04-30 09:09:14 +02:00
Sally
399bb722eb [bugfix] Sorting items before removing them from list
The selectedIndexes are now sorted before they are removed from the listView. Otherwise you get problems with indices that aren't in descending order.
2019-04-30 09:09:14 +02:00
b228467097 [bugfix] mixed up > and < 2019-04-12 15:30:00 +02:00
3806026438 [bugfix] import lost on merge 2019-04-12 15:29:45 +02:00
d0e6c2819c [bugfix] partial of old code remained after merge 2019-04-12 10:37:15 +02:00
1ee3e43451 [change] visual changes in array map (contourf->contour) 2019-04-12 10:35:16 +02:00
aca0cfcabc [minor] refactor comparison 2019-04-12 10:35:16 +02:00
98e9e786af [new] gain control for main plot window (hotkeys: +, - [amplified with Ctrl] and R [reset]) 2019-04-12 10:35:16 +02:00
6b9231abd3 [merge] feature/refactor into develop 2019-04-12 10:31:29 +02:00
b11b5c630b [change] pre-merge changes in develop that were made in parallel on refactor 2019-04-12 10:28:50 +02:00
Darius Arnold
dd2efe7514 Remove unused code of old PickingParameters 2019-04-09 22:13:49 +02:00
Darius Arnold
49f7ba3eab Merge branch 'develop' of ariadne.geophysik.ruhr-uni-bochum.de:/data/git/pylot into develop 2019-04-09 20:41:08 +02:00
Darius Arnold
d4fa0a7697 Replace all usages of signal_length_params with corresponding pickparams 2019-04-09 20:34:06 +02:00
Darius Arnold
7b2973982d Replace all usages of first_motion_params with corresponding pickparams 2019-04-09 18:45:07 +02:00
Darius Arnold
dce00caf25 Replace all usages of s_params with corresponding pickparams 2019-04-09 18:41:41 +02:00
Darius Arnold
076c498fab Replace all usages of p_params with corresponding pickparams 2019-04-09 18:39:24 +02:00
Darius Arnold
281e92e2a9 Add init to AR3Ccf that extracts parameters from pickparam 2019-04-09 18:19:26 +02:00
Darius Arnold
cd63896060 Add init to ARHcf that extracts parameters from pickparam 2019-04-09 18:18:17 +02:00
Darius Arnold
790cf803d7 Add init to ARZcf that extracts parameters from pickparams
t1, t2 can't be extracted in the init function since they are not the same during the first and the second instantiation.
2019-04-09 17:43:30 +02:00
Darius Arnold
ab6e74c482 Add init to HOScf that extracts parameters from pickparams 2019-04-09 17:30:05 +02:00
Darius Arnold
d45a5ccf0d Change checkZ4S so it extracts the required parameters 2019-04-09 17:05:51 +02:00
Darius Arnold
c45f52510b Change checksignallength so it extracts the required parameters
from the PylotParameter instance itself. The function is only used in one place, so this change won't break anything else.
2019-04-09 17:00:43 +02:00
Darius Arnold
8a4be00ac6 Make correct_iplot use real_bool 2019-04-09 16:55:11 +02:00
Darius Arnold
242d8e3b1d Remove unwanted indentation in autopickstation test input file 2019-04-09 16:49:01 +02:00
Darius Arnold
531d33946f Remove needless bool checking
in Operator returns a bool that can directly be returned.
2019-04-09 16:46:20 +02:00
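The simplification described above in a nutshell: the `in` operator already yields a bool, so wrapping it in an if/else is redundant (function and variable names here are hypothetical, not PyLoT's actual code):

```python
def has_pick_verbose(station, picks):
    # redundant: if/else around an expression that is already a bool
    if station in picks:
        return True
    else:
        return False

def has_pick(station, picks):
    # idiomatic: return the boolean expression directly
    return station in picks

assert has_pick("GR.GRA1", {"GR.GRA1": 1.23}) is True
assert has_pick("XX.NOPE", {"GR.GRA1": 1.23}) is False
```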
f21e5913bb Merge remote-tracking branch 'origin/feature/documentation' into develop 2019-04-09 13:36:17 +02:00
7be10a22b4 [bugfix] arrivals removed when filtering in 3comp window 2019-04-05 10:23:10 +02:00
912347beda [change] colormap can be adjusted in array_map 2019-04-05 10:23:10 +02:00
6d4610983d [minor] exclude folder "EVENTS-INFO" from possible event folders in dmt_structure 2019-04-03 15:08:36 +02:00
dfc9de69de [bugfix] small workaround, will still cause error in case Z-component is tilted by more than 5deg
before: wfdat[0] was using HHE component of previously rotated stream, which could not be found in metadata (dataless fname still HH2)
2019-04-03 15:00:08 +02:00
0995350697 [bugfix] Thomas' commit appended an .in to every input file (e.g. pylot.in.in) 2019-04-01 15:54:58 +02:00
a90ef26344 [add] parameter taup_phases 2019-04-01 15:29:29 +02:00
9fd982b1ef Modified the Save settings widget for saving the input files such that the file extension .in is always added 2019-03-28 14:04:39 +01:00
Darius Arnold
af43d247c6 Bugfix #270: Create path for Symmetric picking error so it can be drawn
The function shape() will generate the missing path without side effects.
2019-03-22 10:42:16 +01:00
Sally
be71fc0298 Removal of dead code
Old code that is no longer used (was rewritten).
2019-03-21 12:40:35 +01:00
Sally
bcc5055ef3 [bugfix] Sorting items before removing them from list
The selectedIndexes are now sorted before they are removed from the listView. Otherwise you get problems with indices that aren't in descending order.
2019-03-21 12:37:09 +01:00
81c7ac2f6e [bugfix] split for dirty event (*) 2019-03-20 10:16:46 +01:00
0d07adeeae [bugfix] unhashable type "newstr" 2019-03-20 10:16:46 +01:00
106c6c3939 [new] array maps can now be refreshed on button click as well to spare
refresh time for large arrays
2019-03-12 15:19:24 +01:00
e8df5198ea [bugfix] missing split for * in dirty events 2019-03-12 15:19:24 +01:00
a8db13095e [update] merging picks in Data updated 2019-03-12 15:19:24 +01:00
ea07823a61 [change] imported pick/location data now completely overwrites current
data instead of "weirdly" merging everything
2019-03-12 15:19:23 +01:00
Sally
02900d8c9d Selected items/file paths appear in the line edit
An item selected in the list appears in the line edit. If several items are selected, the most recently selected one is shown. If no item is selected, the line edit is cleared.
2019-03-12 11:44:47 +01:00
Sally
545203f798 Removal of dead code
The structure was changed so that two separate lists are now used and no dictionary is accessed anymore.
2019-03-12 11:44:47 +01:00
Sally
685d68fd47 Added a LineEdit
Added a LineEdit at the top of the widget into which a file path can be typed. Alternatively, a file explorer can be opened via the Browse button to select the file. The selected file path appears in the LineEdit. The path in the LineEdit can be added to the list via the Add button. After pressing the Add button (which still checks whether the file path exists and is possibly already in the list), the LineEdit is cleared.
2019-03-12 11:44:47 +01:00
Sally
15d294626f Carry the filter over from the overview to individual stations
Before: when a filter (P or S) was applied in the overview and a station was then opened, the P filter was always active there.
Now: when the overview is filtered and a station is opened, the same filter as in the overview is applied.
Disclaimer: there is probably a more elegant solution.
2019-03-12 11:44:47 +01:00
Sally
2e2924efa1 Update for changed button name 2019-03-12 11:44:47 +01:00
Sally
e9ee37f5f7 Updated the assignment of functions to buttons 2019-03-12 11:44:00 +01:00
Sally
bae318520b Removed a function that was never called 2019-03-12 11:44:00 +01:00
Sally
eedcd2aaa5 Adapted the from_metadata function to the new program structure 2019-03-12 11:44:00 +01:00
Sally
ff4755bc79 Changes to the GUI layout, initialization of all elements 2019-03-12 11:44:00 +01:00
Sally
7d2e1e7df6 Add and delete metadata (paths to them), cancel option
Paths to be added or deleted are first stored in lists (or removed from them). As soon as the user presses the Accept button, the latest changes are kept. Pressing the Cancel button discards all changes.
2019-03-12 11:44:00 +01:00
Sally
56919b7d8a Resize 2019-03-12 11:44:00 +01:00
d6381e80c1 [bugfix] cell changing did not set event/project dirty 2019-03-07 17:02:11 +01:00
6032018067 [bugfix] network/station interchanged leading to huge load times for
metadata
2019-03-07 17:02:11 +01:00
a9d92e70ce [bugfix] stations not updated when changing event 2019-03-07 17:01:45 +01:00
49bcdd2680 [bugfix] time axis for plots created with linspace instead of arange
[minor] enhanced error output
2019-03-07 16:31:41 +01:00
9583f9d351 [bugfix] make sure t and rms have same length by using np.linspace instead of np.arange 2019-03-07 16:31:41 +01:00
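On why np.linspace is the safer choice for the fix above: np.arange with a float step derives the element count from floating-point arithmetic and can end up one sample long or short, whereas np.linspace pins the count explicitly, so t and rms always match. A sketch with hypothetical values:

```python
import numpy as np

npts, dt = 500, 0.02  # hypothetical sample count and spacing

# np.arange(0, npts * dt, dt) computes the length from floats and can be
# off by one element for unlucky npts/dt combinations.
# np.linspace fixes the number of samples, guaranteeing matching lengths.
t = np.linspace(0, (npts - 1) * dt, npts)
rms = np.zeros(npts)  # hypothetical companion array plotted against t

assert len(t) == len(rms) == npts
```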
7bf3bb6835 [minor] various small fixes 2019-03-07 16:31:41 +01:00
9ae8e36061 [bugfix] taupy picks not initiated in array_map 2019-02-28 17:07:41 +01:00
4dfcdb41ec [bugfix] missing split for event dirty flag (*) in event_table 2019-02-28 16:39:53 +01:00
16b469ff67 [bugfix] information about deleted picks was not transferred to the
current event, thus not saved
2019-02-28 16:20:21 +01:00
Darius Arnold
f0e2fdb470 [bugfix]/[improvement] handle station id lacking location, related to #266
Unpack depending on length of station id after split, if something unexpected happens, initialise with default values.
This is related to https://ariadne.geophysik.ruhr-uni-bochum.de/trac/PyLoT/ticket/266. This should fix the missing station information in the title of the TuneAutopicker waveform plot.
2018-12-07 13:12:13 +01:00
Darius Arnold
f892225229 [bugfix] Strip star from event path when getting text from eventBox
See https://ariadne.geophysik.ruhr-uni-bochum.de/trac/PyLoT/ticket/264
2018-12-07 12:47:06 +01:00
Darius Arnold
2a22409145 Change "not a in b" to more readable "a not in b", add comments in Metadata class 2018-12-07 11:24:11 +01:00
Darius Arnold
45949afca6 [bugfix] #267 Remove stations from ArrayMap whose metadata files were removed
Fixes https://ariadne.geophysik.ruhr-uni-bochum.de/trac/PyLoT/ticket/267
2018-12-07 11:19:54 +01:00
Darius Arnold
f1690edf71 [Bugfix]: Fix stationlist in TuneAutopicker dialog, partial fix of #266
Remove the channel from the station id
2018-12-07 09:44:12 +01:00
Darius Arnold
6a0c1dda9d [Bugfix]: Fix plot title in TuneAutopicker, partial fix of #266
For some reason the network.station.location.channel string wasn't split.
2018-12-07 09:34:47 +01:00
Darius Arnold
3abeabc75c [bugfix] File wasn't closed explicitly, add implicit close by with statement 2018-11-30 11:20:47 +01:00
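The pattern adopted above: a with statement closes the file deterministically, even if an exception is raised inside the block. A self-contained sketch using a temporary file instead of PyLoT's actual file:

```python
import tempfile

with tempfile.NamedTemporaryFile(mode="w+") as f:
    f.write("picks")
    f.seek(0)
    data = f.read()

# the context manager has closed the file on exit
assert f.closed
assert data == "picks"
```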
Darius Arnold
48d5406639 [Add] section describing the event info CSV file 2018-11-30 11:03:30 +01:00
Darius Arnold
0964139799 Merge branch 'develop' into feature/documentation 2018-11-30 10:27:42 +01:00
Darius Arnold
6afb61b666 [change] order to Day/Month/Year when reading csv file for event info 2018-11-30 10:26:57 +01:00
Darius Arnold
815a18f2f3 [bugfix] error in check during loading of event info csv file
Original condition didn't check for eventID in event.origins.
2018-11-30 10:21:54 +01:00
Darius Arnold
718172dcd1 [minor] change axis label for comparison dialog 2018-11-30 09:50:14 +01:00
Darius Arnold
5e6d2e7211 [Bugfix] Merge traces treating overlaps as data instead of gaps
Could not open the Pickwindow on some stations. 
Gaps were merged so that overlapping values were marked as gaps, which led to an error when those traces were processed. Method 1 takes data from traces instead, see https://docs.obspy.org/packages/autogen/obspy.core.trace.Trace.__add__.html
2018-11-30 09:42:22 +01:00
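A minimal sketch of the idea behind the fix above (plain lists, not ObsPy itself, and a simplification of ObsPy's actual merge semantics): two sample sequences that overlap can either have the overlap masked out as a gap, or resolved by keeping real data.

```python
def merge_as_gap(a, b, overlap):
    # method-0-like behaviour: overlapping samples are masked (treated as a gap)
    return a[:-overlap] + [None] * overlap + b[overlap:]

def merge_as_data(a, b, overlap):
    # method-1-like behaviour: keep data, resolving the overlap from the second trace
    return a[:-overlap] + b

a, b = [1, 2, 3], [3, 4, 5]  # last sample of a overlaps first sample of b
assert merge_as_gap(a, b, 1) == [1, 2, None, 4, 5]
assert merge_as_data(a, b, 1) == [1, 2, 3, 4, 5]
```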
Darius Arnold
b01c15d755 Fix errors, add picking window description 2018-10-23 11:47:02 +02:00
Darius Arnold
86035d3cd6 Add description of comparison options 2018-10-19 12:01:25 +02:00
Darius Arnold
4ab453899a Add linebreak that was missing in exported document 2018-10-15 14:38:07 +02:00
Darius Arnold
6c3998285b Add invisible character to increase size of column in table
Otherwise icons were squished when exporting
2018-10-15 14:37:39 +02:00
Darius Arnold
974be21a06 Correct some errors, add some information 2018-10-12 11:22:19 +02:00
Ludger Küperkoch
dfe3f9b32a [Bugfix]: There might be neither network nor location information available, strange! 2018-10-10 11:40:11 +02:00
Ludger Küperkoch
cf7fafbe62 Additional possible suffix for metadata file. 2018-10-10 11:17:58 +02:00
Darius Arnold
637b821582 Add description of Waveform Plot buttons 2018-10-01 12:55:54 +02:00
Darius Arnold
d0503d4261 Add description of the tune autopicker window 2018-09-24 12:54:32 +02:00
Darius Arnold
18308a7168 Add description of jackknife and wadati check 2018-09-24 11:02:23 +02:00
Darius Arnold
f56087b68e More work on GUI.md 2018-09-17 12:00:55 +02:00
Darius Arnold
7dbc2cb334 Worked on GUI documentation, added images 2018-09-14 23:31:36 +02:00
Darius Arnold
2d233a7099 Merge branch 'develop' into feature/documentation 2018-09-14 21:46:29 +02:00
739a6e89f4 [pycharm] optimized imports 2018-08-16 17:34:05 +02:00
c23bbfae8a [bugfix] enhanced emergency reset loop for corrupted QSettings 2018-08-16 16:22:35 +02:00
6546f8f196 [update] warn user in case of low SNR for manual pick 2018-08-16 16:14:48 +02:00
ce1564c2f8 [minor] improved convenience, small bugfix 2018-08-15 15:51:41 +02:00
8a187905cb [update] finalized recent projects
[minor] some small fixes, improvements
2018-08-15 15:39:43 +02:00
402248f340 [new] minor, add recently used projects (WIP) 2018-08-14 14:21:32 +02:00
Darius Arnold
6936cfcfa6 [bugfix] Taupy didn't check return of get_coordinates
For a station not in the metadata, get_coordinates would return None which wasn't checked for.
This includes a test for a station which is not in metadata.
2018-08-13 22:42:19 +02:00
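The check described above, sketched with a dict standing in for the Metadata class (names hypothetical): a lookup that may return None must be tested before unpacking.

```python
def get_coordinates(metadata, station_id):
    # returns None when the station has no metadata, mirroring the failure mode
    return metadata.get(station_id)

metadata = {"GR.GRA1": (49.69, 11.22)}
coords = get_coordinates(metadata, "XX.NOPE")
skipped = False
if coords is None:
    # skip TauPy arrival computation for this station instead of crashing
    skipped = True
assert coords is None and skipped
```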
Darius Arnold
96adbddeba [remove] unused parameter in create_arrivals
The station id is no longer passed into the function and used to retrieve the station coordinates, but the id is taken from the vertical stream instead.
2018-08-13 22:37:48 +02:00
Darius Arnold
dcd0bc40d7 [remove] old autopickstation code 2018-08-13 22:35:38 +02:00
a82a1cddc8 [minor] changed message for pick deletion 2018-08-13 17:44:24 +02:00
f22f7845cb [new] remove picks on map with middle-click
[bugfix] remove old annotations
2018-08-13 12:49:48 +02:00
26a4cc568a [new] perform jackknife on gradient of stations in array map and highlight them in case of high std 2018-08-13 11:24:48 +02:00
a0f9561bcf [new] hybrid selection for array_map (plot automatic and manual picks, prefer manuals) 2018-08-09 11:16:20 +02:00
Darius Arnold
b7d3568498 [bugfix] Taupy used even if disabled when p start time > 0
If Taupy was disabled but pstart was larger than zero, the combined `and` condition kept the function that modifies start times from exiting early. This resulted in taupy being used even though it was disabled whenever the P start time was above 0.
2018-08-09 10:03:25 +02:00
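A hypothetical reconstruction of the flawed guard from the commit above (not PyLoT's actual code): the early return only fired when both conditions held, so the TauPy path still ran with TauPy disabled once pstart was positive.

```python
def starttimes_buggy(use_taup, pstart):
    if not use_taup and pstart <= 0:
        return "unchanged"
    return "taupy-modified"  # reached with use_taup=False once pstart > 0

def starttimes_fixed(use_taup, pstart):
    if not use_taup:
        return "unchanged"  # disabled means disabled, regardless of pstart
    return "taupy-modified"

assert starttimes_buggy(False, 5) == "taupy-modified"  # the bug
assert starttimes_fixed(False, 5) == "unchanged"
```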
Darius Arnold
9929e3d441 [change] testPickingResults.py in accordance with bugfix 2018-08-08 19:25:41 +02:00
Darius Arnold
45370e2c67 [bugfix] PickingResults raised incorrect error on missing attribute
Accessing a non existing attribute raised a KeyError instead of an AttributeError, breaking methods that relied on the correct error type. Maybe this fixes the __setstate__ bug during event picking from GUI.
2018-08-08 19:25:11 +02:00
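Why the error type matters in the fix above, shown with a minimal dict-backed sketch (not PyLoT's actual PickingResults): attribute lookup must raise AttributeError, not KeyError, because hasattr(), getattr() with a default, and pickling machinery such as __setstate__ lookups all catch AttributeError specifically.

```python
class PickingResultsSketch(dict):
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            # re-raise as AttributeError so attribute protocols behave correctly
            raise AttributeError(name)

r = PickingResultsSketch()
assert not hasattr(r, "mpp")  # hasattr relies on AttributeError being raised
r["mpp"] = 1.23
assert r.mpp == 1.23
```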
Darius Arnold
d4e2f4a070 [bugfix] maxDiff not correctly set in test_autopickstation 2018-08-08 09:11:34 +02:00
Darius Arnold
146da1d794 [change] Quality class determination documentation improvement
Also change order of expected/actual values in test assert methods to improve output of test function.
The first parameter is supposed to be the expected value, the second one the actual result.
2018-08-07 13:58:59 +02:00
677d3a200c [bugfix] return statement disappeared on merge in a45e81721366463d73c142b255f6ba7cb582bb08 2018-08-07 10:33:57 +02:00
e46c0fb862 [minor] typos 2018-08-07 10:06:39 +02:00
a2af6b44f3 [bugfix] regard location ID in PyLoT! (WIP) 2018-08-06 16:03:54 +02:00
34891d3dc1 [minor] added verbosity flag for Metadata to reduce output in GUI 2018-08-06 09:19:30 +02:00
a85b79e432 [change] try to merge streams instead of deleting them when they have gaps 2018-08-06 09:02:05 +02:00
5f2848d584 [bugfix] do not read the same files multiple times for each channel 2018-08-03 14:03:27 +02:00
Darius Arnold
d3fee396b2 [modify] test_autopickstation to work with new return value 2018-08-03 13:50:41 +02:00
Darius Arnold
0c4085ed76 [bugfix] station name was in dictionary instead of a list item in autopickstation's return value
Old implementation of autopickstation returned a list containing a dictionary with results as the first value and a string with the station name as the second value. The new version used to return a single dict with a key/value station name pair, which the calling code could not unpack. The new implementation now also returns a list with picking results and station name.
2018-08-03 13:50:14 +02:00
Darius Arnold
2b9cf655ae [change] autopickstation works with new Metadata class 2018-08-03 13:47:46 +02:00
Darius Arnold
86f6584d91 Modify test_autopickstation's MockMetadata to work with the new Metadata class 2018-08-03 13:23:52 +02:00
Darius Arnold
a45e817213 Merge remote-tracking branch 'origin/develop' into feature/refactor 2018-08-03 11:55:13 +02:00
5c74160710 Merge branch 'feature/metadata_class' into develop 2018-08-03 11:18:34 +02:00
e75beff883 [minor] cosmetics on dataPlot bottom layout 2018-08-03 11:16:56 +02:00
d9a4c40d0b [minor] hide pick qualities histogram action (not implemented yet) 2018-08-03 11:16:08 +02:00
Darius Arnold
b9cf219b39 Adding/modifying documentation in autopick.py 2018-08-03 11:05:42 +02:00
Darius Arnold
7b2ae2fa8f Disable NonLinLoc in testing parameter file 2018-08-03 11:01:58 +02:00
Darius Arnold
108ba80b83 [minor] disable length limit when printing diff of picking results
By default the difference printed when two picking result dictionaries are compared is limited in length, this disables the limit.
2018-08-03 11:00:53 +02:00
Darius Arnold
8dcea2a8c3 [change] PickingResults now only stores actual results, PickingContainer stores intermediary values
PickingResults now stores only the actual results of picking, which are returned from the autopickstation function. 
To store intermediary results during picking, the new class PickingContainer is used.
2018-08-03 10:59:10 +02:00
e211a901ff [bugfix] could not init array map when loading project 2018-08-03 09:33:37 +02:00
393413d6fc [bugfix] closes #215 2018-08-02 15:42:34 +02:00
d5e9cd07d4 [revert] went back to old quality assessment without SNR 2018-08-02 14:18:20 +02:00
529939b4b8 [minor] removed unused object 2018-08-02 13:32:45 +02:00
7dae8e1107 [bugfix] not regarding possible * in eventpath in eventbox 2018-08-02 13:00:36 +02:00
017683806b [new] if QSettings fails, ask to reset!
[bugfix] checkBoxPG outdated in QSettings
[bugfix] moved SetChannelComponents to utils (produced circular imports)
2018-08-01 13:49:01 +02:00
c898f93293 [new] idea for new quality check using SNR 2018-08-01 13:25:27 +02:00
93bdaa8998 [bugfix] forgot to split * in getFromPath 2018-07-31 10:22:45 +02:00
e68b634f25 [new] event modification status saved with "dirty" attribute, only save event-XML when modified 2018-07-31 09:41:48 +02:00
cc9ae9c146 [bugfix] empty picks dictionary leading to KeyError 2018-07-31 09:24:43 +02:00
Darius Arnold
076115f7f5 [add] Station A106 to tests
This station has invalid values recorded on both N and E component, but a pick can still be found on Z
2018-07-30 14:32:32 +02:00
Darius Arnold
d14e789e97 Modify test to test that picking is possible for station with only vertical component 2018-07-30 14:31:56 +02:00
Darius Arnold
7bbcb489bf [refactor] add current_figure attribute
This instance attribute holds the current figure, which removes the need for an external function to extract the correct figure from the fig_dict and the need to pass the figure into instance methods (it can now be accessed directly via the attribute).
2018-07-30 14:28:21 +02:00
Darius Arnold
97458b5b42 [minor] small code changes 2018-07-30 14:08:49 +02:00
Darius Arnold
b4316ae717 [refactor] improving S pick code by extracting functions 2018-07-30 13:36:09 +02:00
Darius Arnold
c89e47ac43 [change] Extract function that calculates cuttimes 2018-07-30 12:43:56 +02:00
Darius Arnold
da2b1ed133 [change] Enable picking on traces with only one vertical component
Only the P pick will be calculated
2018-07-30 12:03:33 +02:00
db9a1371b1 [bugfix] closes #223 2018-07-25 15:16:14 +02:00
bf5c371459 [bugfix] various bugfixes originating from changes (more picks) in dictionary (refs #233 refs #234) 2018-07-25 14:05:15 +02:00
bfc745dd30 Merge branch 'develop'
Conflicts:
	pylot/core/pick/autopick.py
2018-07-25 10:53:08 +02:00
146ef7098c [bugfix] closes #233 closes #234
can cope with stations without horizontal components now, removed dangerous try/except construct
2018-07-25 10:48:43 +02:00
974ee12076 [update] closes #242 - dpi change added, figures using PylotCanvas can be saved with CTRL+S 2018-07-24 15:20:00 +02:00
37e2e39f3a Merge branch 'develop' into feature/metadata_class 2018-07-24 14:36:22 +02:00
f310afb4c6 [minor] renaming QtAction 2018-07-24 14:33:38 +02:00
6d6179bb55 [bugfix] closes #251 also fixed for histogram and locateAction 2018-07-24 14:32:15 +02:00
62287df1b4 [minor] removed "None" from filteroptions 2018-07-24 14:16:24 +02:00
b3524c562d [bugfix] closes #260 checking for filtertype first 2018-07-24 14:15:32 +02:00
9e75f8d4f2 [bugfix] closes #255 filtering should work as intended now 2018-07-24 13:54:40 +02:00
ef6e011d22 [update] slightly improved feasibility using obspy_dmt with metadata and array map 2018-07-23 16:03:15 +02:00
eb07c19c2e [update] metadata/inventory management
[bugfix] errors when there were no manual picks using array_map
2018-07-23 14:54:53 +02:00
Darius Arnold
09721c7bb9 [add] station CH.FIESA to test_autopickstation 2018-07-20 15:06:15 +02:00
Darius Arnold
653e7c9b9a [remove] unused function for creating an error message for autopickstation tests 2018-07-20 15:05:42 +02:00
Darius Arnold
1b815a27b8 [add] HidePrint can be configured to skip hiding for quick debugging of tests 2018-07-20 15:04:48 +02:00
Darius Arnold
7fe5d0f7a3 [remove] Redundant checks for missing waveform data
Missing waveform data is now handled before picking starts, so it is not necessary to check during picking.
2018-07-20 14:50:27 +02:00
Darius Arnold
61b14dc770 [change] Create clean PickingResults instance with only required values
During picking, intermediary values were saved in p_results or s_results PickingResults instance.
Additionally the PickingResults class contained members that were only used during picking but not when returning the results, thus wasting space after picking, because they stayed in the instance. 
Now autopickstation generates a clean PickingResults containing only the members that are expected in the results.
2018-07-20 14:49:10 +02:00
Darius Arnold
153beff663 [add] tests for picking with missing trace in stream 2018-07-20 13:02:57 +02:00
Darius Arnold
318ca25ef8 [add] handling missing traces
Either a missing vertical trace or both horizontal traces missing from the wfstream will cause picking to fail and return None. This is the same behaviour as before refactoring.
2018-07-20 13:00:19 +02:00
f802f1db58 [new] working on metadata/inventory selection from GUI 2018-07-19 11:50:29 +02:00
b348117d5a [bugfix] tuneAutopicker not using new pb_widget 2018-07-18 15:04:07 +02:00
90e26cbd8f [bugfix] various, pick delete/plot behaviour 2018-07-18 14:12:59 +02:00
12082cb2a9 [minor] removed old code 2018-07-18 11:42:46 +02:00
ed654024b2 [update] reinstated check4rotated in autoPyLoT 2018-07-18 11:39:01 +02:00
17c27f9294 [bugfix] add time to metadata.get_coordinates in check4rotated 2018-07-18 11:38:33 +02:00
ed7ee5d944 [bugfix] progressText not updating in progressBar 2018-07-18 11:13:29 +02:00
37da7327d0 [bugfix] using Metadata class for pickDlg, not yet working for array_map (WIP!) 2018-07-17 16:04:20 +02:00
c172cfa892 Merge branch 'develop' into feature/metadata_class 2018-07-17 14:51:24 +02:00
1fc62a5e9d [bugfix] add trace starttime to metadata.get_coordinates call 2018-07-17 14:49:37 +02:00
cf34c9465c [minor] cosmetics 2018-07-17 14:49:37 +02:00
dda997e457 [update] autopick -> only export necessary XML (WIP) 2018-07-17 14:49:37 +02:00
fbc01290d5 [update] make PYQTGRAPH mandatory for PyLoT (stop maintaining unused stuff)! 2018-07-17 14:49:37 +02:00
64885db214 [update] no more replotting of whole data when pick is changed 2018-07-17 14:49:36 +02:00
5b48b744fd [update] plotting (added fill_between again now performance got increased) 2018-07-17 14:49:36 +02:00
469d91bfdc [cleanup] removed has_spe function, using dict.get method instead! 2018-07-17 14:49:36 +02:00
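The cleanup above in one line: dict.get replaces a dedicated presence-check helper, returning None (or a supplied default) instead of raising KeyError. Key and values here are hypothetical:

```python
pick = {"mpp": 1.23}  # hypothetical pick dictionary without an 'spe' entry

# no has_spe() helper needed: get() handles the missing key
assert pick.get("spe") is None
assert pick.get("spe", 0.0) == 0.0  # or supply a default directly
```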
663d244f70 [cleanup] removed some parentheses 2018-07-17 14:49:35 +02:00
7a0d3486a6 [cleanup] code cleanup by PyCharm 2018-07-17 14:49:34 +02:00
754d42c8e3 [cleanup] add requirements to setup.py 2018-07-17 14:45:51 +02:00
415af11f63 [cleanup] add @staticmethod, some other stuff 2018-07-17 14:45:51 +02:00
d151cdbc6d [bugfix] added missing parameter "WAScaling" (not knowing what it does but it was missing!) 2018-07-17 14:45:50 +02:00
b22d04db47 [cleanup] replace equality by "is" 2018-07-17 14:45:50 +02:00
e210bd8793 [add] number of active picks (and total) 2018-07-17 14:45:48 +02:00
0e70520a78 [bugfix] S-pick quality check doing random stuff caused by copy-paste errors 2018-07-17 14:45:48 +02:00
ad686d42db [minor] update_status when loading multiple events 2018-07-17 14:45:48 +02:00
a9636caf73 [bugfix] picks for S and P were deleted when wadati check failed 2018-07-17 14:44:31 +02:00
Darius Arnold
27f5a0d50c [remove] unused function, check was moved to init method
In the init method of autopickstation the N trace will be copied to the E trace, or vice versa, if one of them is missing, so checking for this during S picking has become redundant.
2018-07-17 14:01:23 +02:00
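The copy-over behaviour this commit refers to can be sketched as follows (a minimal illustration; the function name and the dict-of-traces layout are assumptions, the real autopickstation operates on ObsPy streams):

```python
def fill_missing_horizontal(traces):
    """If exactly one of the N/E horizontal components is missing,
    substitute the remaining one for it, as the commit describes."""
    n, e = traces.get('N'), traces.get('E')
    if n is None and e is not None:
        traces['N'] = e
    elif e is None and n is not None:
        traces['E'] = n
    return traces
```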
Darius Arnold
85b34177c4 [add] default values for parameters of AutoPickStation init method 2018-07-17 13:59:46 +02:00
Darius Arnold
775d3de46c Change whitespace, add comments/documentation 2018-07-17 13:59:13 +02:00
Darius Arnold
6d26338f2e Decrease indentation in pick_s_phase
Instead of checking if (condition) { DoSomething(); }, check if (!condition) { ErrorOut(); } DoSomething();.
This decreases the indentation of the potentially large code block DoSomething().
2018-07-17 13:56:25 +02:00
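A minimal Python sketch of that guard-clause pattern (function and channel names are invented for illustration, not taken from pick_s_phase itself):

```python
def has_horizontals(channels):
    """True if at least one N or E component is present."""
    return any(c.endswith(('N', 'E')) for c in channels)

def pick_s_phase(channels):
    # Guard clause: error out first instead of wrapping everything in
    # an if-block, so the main logic keeps top-level indentation.
    if not has_horizontals(channels):
        raise ValueError("no horizontal traces available for S picking")
    # ...the potentially large DoSomething() block follows un-indented...
    return [c for c in channels if c.endswith(('N', 'E'))]
```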
Darius Arnold
02873938cb [change] minSlope value decreased due to difference in slope calculation 2018-07-17 11:37:29 +02:00
Darius Arnold
d84be68694 [add] tests for file where metadata has changed due to changed instruments 2018-07-17 11:18:07 +02:00
Darius Arnold
787c46eeaf [add] tests for file where metadata is available at multiple time intervals 2018-07-17 11:17:23 +02:00
Darius Arnold
da29b286f1 [add] tests for read_single_file 2018-07-17 10:56:53 +02:00
Darius Arnold
7734650d2e Move the metadata files used for testing into their own folder 2018-07-17 10:34:42 +02:00
Darius Arnold
f5c1fea159 [Change] use HidePrints to clean output of test results 2018-07-13 23:25:11 +02:00
Darius Arnold
6ae5a213d1 [update] Metadata docstrings improved 2018-07-13 23:10:47 +02:00
Darius Arnold
10aa0360bc [Add] Utility class to hide prints during testing
Provides the HidePrints context manager and the @HidePrints.hide decorator
2018-07-13 21:33:45 +02:00
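A minimal sketch of such a helper (the class in PyLoT may be implemented differently; this version simply redirects stdout to the null device):

```python
import os
import sys
from functools import wraps

class HidePrints:
    """Context manager that silences stdout; also usable through the
    HidePrints.hide decorator, as the commit message describes."""

    def __enter__(self):
        self._stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')
        return self

    def __exit__(self, *exc_info):
        sys.stdout.close()
        sys.stdout = self._stdout
        return False  # never swallow exceptions

    @staticmethod
    def hide(func):
        """Decorator variant: run the whole call with prints hidden."""
        @wraps(func)
        def silenced(*args, **kwargs):
            with HidePrints():
                return func(*args, **kwargs)
        return silenced
```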
Darius Arnold
c308415aa5 Move test_autopickstation to subdirectory in pylot/tests 2018-07-13 09:45:57 +02:00
Darius Arnold
2c92f6f2fd Merge branch 'develop' into feature/refactor 2018-07-13 09:28:50 +02:00
3b52b7a28f [bugfix] fixed some bugs resulting from restructuring of serial/parallel picking 2018-07-12 15:56:34 +02:00
9f1672d793 [hotfix] something went wrong trying to merge/rebase develop 2018-07-12 14:17:06 +02:00
319343499b [bugfix] parallel picking called accidentally 2018-07-12 14:14:22 +02:00
d360d9db92 [bugfix] raise Exception if no Z-component is found for Magnitude calculation 2018-07-12 14:13:34 +02:00
9a5624c951 [change] return values of autopickstation 2018-07-12 14:13:19 +02:00
1d215c181c [update] time optional for get_metadata/get_coordinates; if no time is specified, use the current UTCDateTime 2018-07-12 11:43:15 +02:00
84ed0b900f [update] unit test for Metadata class 2018-07-12 11:28:43 +02:00
1d571a0046 [add] Metadata class now requires time for get_metadata and get_coordinates as well 2018-07-12 11:27:20 +02:00
3bd5243f2a [minor] typo 2018-07-12 10:04:35 +02:00
6b28b9c9c6 Merge remote-tracking branch 'origin/feature/metadata_class' into feature/metadata_class 2018-07-12 09:47:40 +02:00
704a45f845 [minor] changes/testing threshfactor 2018-07-12 09:46:53 +02:00
Darius Arnold
869ede96a9 Rewrite check4rotated function to work with new Metadata class 2018-07-12 09:46:52 +02:00
6c162fc94a [new] first use of Metadata class in autoPyLoT, largely increasing read performance using obspyDMT (single DATALESS-Seed files) 2018-07-12 09:42:35 +02:00
27ea20fa47 [bugfix] NLLoc: take nlloc bin explicitly from PylotParameter and not (randomly) from default pylot.in file!! 2018-07-12 09:40:57 +02:00
c376364206 Merge branch 'develop' into feature/metadata_class 2018-07-11 14:02:53 +02:00
c3e8b2c03d /metadata_class: Auto stash before merge of "feature/metadata_class" and "develop" 2018-07-11 14:01:47 +02:00
afade77922 Merge branch 'develop' into feature/metadata_class 2018-07-11 09:48:28 +02:00
0cec2c81d3 Merge remote-tracking branch 'origin/develop' into develop 2018-07-11 09:42:43 +02:00
a0fcf03c1e [update] add try/except to autopickstation call 2018-07-11 09:42:28 +02:00
91959367d4 [minor] set DATASTRUCTURE correctly 2018-07-11 09:41:58 +02:00
Darius Arnold
82ac08157c [bugfix] azimuth <45° was misattributed as E instead of N trace in check4rotated 2018-07-10 21:41:07 +02:00
Darius Arnold
0dc4d64826 Rewrite check4rotated function to work with new Metadata class 2018-07-10 20:22:25 +02:00
Darius Arnold
51f2ded063 [add] basic unittests for Metadata class 2018-07-10 17:07:46 +02:00
b870b5378a [bugfix] setting Datastructure to obspydmt or None caused crash and corrupted QSettings 2018-07-10 14:20:31 +02:00
5f6f986e3e [update] removed read of pylot.in on start!!! Now using default params
from now on all input files should be saved in a directory other than ~/.pylot
2018-07-10 14:16:36 +02:00
8d3bd88be9 [minor] gitignore update 2018-07-10 13:54:49 +02:00
e7284cba41 [bugfix] dictionary containing params not initiated in __init__ 2018-07-10 13:36:52 +02:00
6ab2609e8c [bugfix] default filename given for nlloc in default_parameters 2018-07-10 13:35:24 +02:00
47b045ad35 Merge branch 'develop' into feature/metadata_class 2018-07-10 11:49:48 +02:00
1de1dc71f8 [bugfix] reinstated ludgers changes, added bugfix for GUI 2018-07-10 11:47:45 +02:00
f6688b3a28 Merge branch 'develop' into feature/metadata_class 2018-07-10 11:38:10 +02:00
77dd0d4733 Merge branch 'develop' of ariadne:/data/git/pylot into develop 2018-07-10 11:36:31 +02:00
3bc3d07793 [new] first use of Metadata class in autoPyLoT, largely increasing read performance using obspyDMT (single DATALESS-Seed files) 2018-07-10 11:35:13 +02:00
c46513a626 [bugfix] full_range called for each station when drawing picks, largely decreasing plot performance 2018-07-10 11:13:41 +02:00
b4516fb2cb [update] save automatic picks calculated inside GUI in XML files as well 2018-07-10 11:01:34 +02:00
9c119819a6 [bugfix] self.inventory_files initiated too late 2018-07-09 14:14:59 +02:00
0474e5fbe9 [minor] add progressbar to autopick widget 2018-07-09 13:32:44 +02:00
ccec76b8a1 [revert] iplot = iplot for wadati/jackknife not showing plots anymore in GUI
Note: aren't the Wadati and Jackknife figures meant to be shown always, independent of the iplot flag?
2018-07-09 11:06:19 +02:00
Darius Arnold
1ff50c000e Decrease indentation by returning from error condition 2018-07-06 16:17:07 +02:00
689192b25d [bugfixes] in Metadata class (not fully tested yet) 2018-07-06 12:22:24 +02:00
Darius Arnold
71d8626fa3 [bugfix] parameters tpred2z and tdet2z were unused 2018-07-06 11:10:02 +02:00
d223011f90 [bugfix] Metadata class raised Exception using obspy get_coordinates 2018-07-06 11:03:21 +02:00
Darius Arnold
67cf1f9010 Change tests to work with updated AIC 2018-07-06 10:51:50 +02:00
Darius Arnold
b1a1e8924a Merging picker.py did not work correctly 2018-07-06 10:51:17 +02:00
1f2dd689ba [change] slope normalized to maximum y value, TESTING NEEDED 2018-07-06 10:36:38 +02:00
Darius Arnold
5258a7e9b4 Merge remote-tracking branch 'origin/develop' into feature/refactor 2018-07-06 09:49:52 +02:00
871fb685a4 [removed] amplification of cf by empirical values to account for restituted data 2018-07-05 15:30:40 +02:00
2c588c1c80 [change] initial pick now searches for latest local minimum instead of first increase in cf 2018-07-05 15:29:09 +02:00
df0f059ff3 [bugfix] parameters tpred2z and tdet2z were unused 2018-07-05 14:37:32 +02:00
b0ad99eced [minor] improved text output if wf data cannot be read 2018-07-04 17:25:11 +02:00
e05909b188 [minor] textobjects split into single lines before adding to log for convenience 2018-07-04 16:48:14 +02:00
27e425844a Merge branch 'develop' of ariadne:/data/git/pylot into develop 2018-07-04 10:56:44 +02:00
2379dee142 [hotfix] metadata could not be read (improve this) 2018-07-04 10:56:18 +02:00
Darius Arnold
4581025b4d Bugfix: typo l/1 in figure name 2018-07-03 13:56:29 +02:00
Darius Arnold
59e67b51d5 Bugfix where not giving origin with taupy would still use negative starttime 2018-07-03 13:55:51 +02:00
Darius Arnold
80835bc56e Transfer plotting of results in GUI after picking 2018-07-03 13:55:04 +02:00
Ludger Küperkoch
b6c682315d Allow negative safety gap for slope determination, use only mean of noise window for SNR determination 2018-07-03 11:34:32 +02:00
Ludger Küperkoch
46d4ad7ea5 Some new parameter settings. 2018-07-03 10:35:09 +02:00
Ludger Küperkoch
854f5b8c7e [Bugfix] No network magnitude available for screen output while tuning 2018-07-03 10:34:43 +02:00
4896c1e1d8 [cleanup] removed redundancy 2018-07-02 13:34:30 +02:00
e3e420a94b Merge branch 'develop' of ariadne:/data/git/pylot into develop 2018-07-02 10:11:31 +02:00
Darius Arnold
86419220e2 Minor changes in autopick.py 2018-06-29 15:57:58 +02:00
Darius Arnold
68b2917e7f Refactor quality control of initial pick into own function 2018-06-29 15:56:32 +02:00
Ludger Küperkoch
d2ba8888d1 Reduced maximum number of iterations to two; some effort to avoid an error in screen output if no magnitude scaling is applied. 2018-06-29 15:07:26 +02:00
Darius Arnold
d4e279aeba Refactor taupy usage 2018-06-29 14:21:03 +02:00
2c07dd63b8 [bugfix] save Event before starting tuneAutopicker (else source information might not be accessible to autoPyLoT) 2018-06-29 14:19:09 +02:00
22e5de194e [bugfix] array map only checking for manual picks 2018-06-29 13:55:42 +02:00
9b63c1bb24 [bugfix] taupy phases lost on plot refresh 2018-06-29 10:13:58 +02:00
f4201c4e2f [bugfix] full path for ctrfile 2018-06-28 15:59:13 +02:00
4cb3f72ba8 [bugfixes] location/magnitude w ludger 2018-06-28 15:22:40 +02:00
5870843a03 [bugfix] event_datapath not set if not obspyDMT 2018-06-28 10:31:48 +02:00
4ed8e54822 Merge branch 'develop' of ariadne:/data/git/pylot into develop 2018-06-27 14:21:35 +02:00
8ae0449c01 [new] metadata class (WIP) 2018-06-27 14:21:20 +02:00
3a66ec1c95 [bugfix] print a warning in case saveData fails 2018-06-27 14:20:11 +02:00
19f943627a [bugfixes] autopylot 2018-06-26 17:07:38 +02:00
Ludger Küperkoch
4d1c3985f4 [Bugfix] Input argument iplot was fixed to 1. 2018-06-26 12:25:03 +02:00
Ludger Küperkoch
f43478d0c0 Consistent calculation of SNR, using maximum (not mean) of signal window, more stable! 2018-06-26 11:58:08 +02:00
Ludger Küperkoch
88ddbc1efc [Bugfix] Time array improper calculated (rounding errors?) 2018-06-26 10:51:39 +02:00
Ludger Küperkoch
8487d696c6 Cosmetics 2018-06-25 15:59:52 +02:00
Ludger Küperkoch
3845e7291e Relaxed condition for slope determination, plot interims results even if picking failed. 2018-06-25 15:56:15 +02:00
Ludger Küperkoch
d4b76bdecb Cosmetics 2018-06-25 15:31:58 +02:00
Ludger Küperkoch
7d175a4bc9 Screen output got accidentally lost. 2018-06-25 15:30:10 +02:00
Ludger Küperkoch
b18298c4db Additional screen output of derived onset time, [Bugfix] Use of argrelmax not reliable. 2018-06-25 15:24:36 +02:00
77b4458ab5 [idea] change read_metadata to also read files without specific file ending 2018-06-25 14:22:14 +02:00
04e75abcf5 [bugfix] read_metadata obspyDMT 2018-06-25 14:21:34 +02:00
Ludger Küperkoch
b607da7262 [Bugfix] it was not guaranteed that enough waveform remained for processing after cutting with improper cut times 2018-06-25 10:30:36 +02:00
34508fc4a3 [bugfix] addressing PyLoTCanvas already deleted
setParent(None) -> del(widget), not fully working on autopickwidget
2018-06-21 15:04:58 +02:00
1b5f58bbbd [minor] prefer processed data on init 2018-06-21 14:24:15 +02:00
a5667c1e06 [bugfix] nextStation not working with deleted PickDlg 2018-06-21 14:23:52 +02:00
2a8efd0904 [bugfix] autoPyLoT <-> obspyDMT 2018-06-21 13:24:24 +02:00
7ad36c2305 [bugfix] take width of mainwindow for min/max plot estimation
before, the width was only 80 px for the initial plot
2018-06-20 13:55:59 +02:00
910ed66676 [new] only load relevant waveforms into TAP widget 2018-06-20 13:49:27 +02:00
08124174b1 [update] autopicker - obspyDMT (WIP) 2018-06-20 11:47:10 +02:00
Darius Arnold
fe1e76f53a Add correct way to put picking results even when picking fails, now passing all tests 2018-06-19 19:55:09 +02:00
Darius Arnold
f4750472c7 Change PickingResults to inherit from dict and add dot access methods 2018-06-19 19:44:26 +02:00
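The dict-plus-dot-access idea could look like this (a sketch only; the actual PickingResults class additionally carries picker-specific default values):

```python
class PickingResults(dict):
    """dict subclass whose keys are also reachable as attributes."""

    def __getattr__(self, key):
        # Called only when normal attribute lookup fails.
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        # Store every attribute assignment as a dict entry.
        self[key] = value
```

With this, `results.weight = 4` and `results['weight']` refer to the same entry, so code can mix both access styles.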
Darius Arnold
675052975a Add finish_picking which transfers all results into the correct format in PickingResults
This function is called whether picking succeeded or failed. If it failed, it will put the default initialized values into the result. If picking succeeded, the default values will have been overwritten with the results during picking.
2018-06-19 19:38:13 +02:00
Darius Arnold
e3f7840d9e Add S phase picking 2018-06-19 19:38:12 +02:00
Darius Arnold
d4033eca08 Add converting fig_dict and iplot to the correct types 2018-06-19 19:33:22 +02:00
Darius Arnold
5b5da2e40d Move Trace Extraction to __init__ and copy over existing hor. trace if one is missing 2018-06-19 18:42:32 +02:00
Darius Arnold
7b6f64b46a Bugfixes/typos fixed during picking 2018-06-19 11:39:00 +02:00
Darius Arnold
335f3c4150 Replace access to trace over Stream index by direct access to trace attribute 2018-06-19 11:36:30 +02:00
47d6aeabff [minor] used event highlighting of eventlist also for eventbox 2018-06-19 11:28:41 +02:00
Darius Arnold
0a1d15c429 Add PickingFailedException 2018-06-19 11:26:42 +02:00
Darius Arnold
accb3c5d51 Add unittests for PickingResults 2018-06-19 11:20:57 +02:00
Darius Arnold
075b6e26c7 Modify test assertion to use DictContainsSubset and look at P and S separately
With AssertDictContainsSubset the resulting dict from the new code can have more key/value pairs than the one from the old code, but all key/value pairs that are in the results from the old code must exist and be the same.
2018-06-19 11:19:49 +02:00
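assertDictContainsSubset has been deprecated in unittest since Python 3.2; the subset check the commit describes can be written by hand, e.g.:

```python
def assert_dict_contains_subset(subset, full):
    """Every key/value pair in `subset` must appear unchanged in `full`;
    `full` may carry additional pairs (here: extra result keys from the
    refactored picker are allowed)."""
    mismatched = {key: value for key, value in subset.items()
                  if key not in full or full[key] != value}
    assert not mismatched, f"missing or changed pairs: {mismatched}"

old_result = {'mpp': 12.5, 'weight': 1}
new_result = {'mpp': 12.5, 'weight': 1, 'snr': 3.7}  # extra key is fine
assert_dict_contains_subset(old_result, new_result)
```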
7de8c2ee8b [minor] info on number of traces for main plot 2018-06-19 11:13:06 +02:00
ca886d4355 [update] on obspyDMT compatibility (WIP) 2018-06-19 10:35:34 +02:00
ec32981787 [minor] improve current event highlighting 2018-06-19 10:26:17 +02:00
7c0de44974 [update] array_map legend inaxes, add lat/lon status, some fixes and improvements 2018-06-15 14:49:26 +02:00
0168d8923d [minor] tighten PyLoT tabs layouts 2018-06-14 16:26:26 +02:00
46a6cdcc00 [bugfix] remove autopicks weight>3 2018-06-14 15:50:44 +02:00
fe0e4be43d [update] array_map uses PylotCanvas now, added grid and labels 2018-06-14 14:08:25 +02:00
06cacdd4cb [update] major improvements of array_map, code restyled, increased flexibility 2018-06-13 17:01:05 +02:00
f50e38241e [minor] tweaks and finalization of obspyDMT options 2018-06-08 15:01:05 +02:00
083e5c8fe9 [new] synthetics checkbox 2018-06-08 14:29:45 +02:00
8155389b3d [new] add qcombobox for raw/processed selection 2018-06-08 14:18:28 +02:00
5e161d308a [bugfix] trying to access station not in autopicks dictionary 2018-06-05 15:11:43 +02:00
9b5fe3baba [bugfix] saving xml when tuning autopicker unnecessary (and caused id mismatch) 2018-06-05 15:07:06 +02:00
d695c0016a [bugfix] delete_later not caused tuneAutoPicker to malfunction 2018-06-05 15:06:33 +02:00
cbba41f16a [bugfix] autoPickWidget not enabled again after Error 2018-06-05 14:24:00 +02:00
d57c193a0b Merge branch 'feature/obspy_dmt_interface' into develop 2018-06-05 13:48:53 +02:00
20eb54e9c5 [minor] gitignore modification 2018-06-05 13:47:00 +02:00
Darius Arnold
629bae63e2 [bugfix] wrong variable used when calculating length of waveform 2018-05-29 23:51:09 +02:00
Darius Arnold
b0749e0ddc remove all usage of checkwindowP/S and minfactorP/S
Those changes have their own branch now, the parameters were not used during picking on this branch.
2018-05-29 21:28:32 +02:00
Darius Arnold
8d931fa97a [refactor] remove unnecessary if/else for earllatepicker 2018-05-29 21:27:11 +02:00
Darius Arnold
ec9bce97ee [refactor] use existing functions to avoid repetition in autopickstation
DRY
2018-05-29 19:20:59 +02:00
Darius Arnold
b93407012e [refactor] remove positional None argument in AICPicker instantiation
Instead use keyword arguments
2018-05-29 19:18:36 +02:00
Darius Arnold
1ffe4dcbb2 [minor] correct typos, add to docs, remove unnecessary parentheses 2018-05-29 19:14:41 +02:00
Darius Arnold
a68cb9b0b7 [add] transfer P picking routine to new function 2018-05-29 19:08:06 +02:00
Darius Arnold
58191e2d8f [add] test for dot access in PickingParameter object 2018-05-29 19:04:51 +02:00
Darius Arnold
628f335b34 [refactor] get_quality_class uses generator expression
Can handle any size of list for weight_classes now
2018-05-29 19:02:58 +02:00
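Such a generator-expression lookup might look like this (the bound values are illustrative, not PyLoT's defaults; the inclusive upper bound matches the behaviour a later bugfix in this history describes):

```python
def get_quality_class(uncertainty, weight_classes):
    """Return the index of the first quality class whose upper bound
    is not exceeded; works for a weight_classes list of any length."""
    return next((index for index, bound in enumerate(weight_classes)
                 if uncertainty <= bound),
                len(weight_classes))  # beyond all bounds -> worst class

bounds = [0.02, 0.04, 0.08, 0.16]  # illustrative upper bounds in seconds
```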
Darius Arnold
c38e3eb07a [refactor] rename getQualityFromUncertainty function to be more pythonic 2018-05-29 18:53:38 +02:00
Darius Arnold
7910df3cc9 [add] tests for getQualityFromUncertainty function
Testing uncertainty in classes, between classes and outside of upper/lower bound of quality classes
2018-05-29 18:43:59 +02:00
Darius Arnold
2d850b70db Merge branch 'develop' into feature/refactor 2018-05-29 18:28:24 +02:00
Darius Arnold
756b5d67da [bugfix] getQualityFromUncertainty giving worst quality for uncertainty values between two classes
Now the quality classes are exclusive lower bound and inclusive upper bound.
2018-05-29 18:27:15 +02:00
Darius Arnold
82c0b7837c Add tests for creation of PickingParameters objects from *args and **kwargs 2018-05-25 18:00:06 +02:00
Darius Arnold
0230f0bf2e Add object based approach to autopickstation: preparing traces, modifying cuttimes with taupy 2018-05-25 17:59:30 +02:00
Darius Arnold
9b787ac4e8 Add functions for splitting and preparing wfstream 2018-05-25 17:55:00 +02:00
Darius Arnold
02b7fae3df Minor formatting and docstring changes 2018-05-25 17:50:45 +02:00
Darius Arnold
15722c436e Merge remote-tracking branch 'origin/develop' into feature/refactor 2018-05-25 17:14:02 +02:00
Darius Arnold
b220059e5b Move use_taupy and taupy_model from s_parameters to p_parameters 2018-05-25 12:28:15 +02:00
Darius Arnold
455733ea19 [Add] Unittest for autopickstation function picking with and without taupy 2018-05-22 19:21:29 +02:00
f49d323c13 [add] no data label 2018-04-27 10:45:20 +02:00
ae4c345fa7 [update] same functionality but within main thread 2018-04-26 16:40:46 +02:00
65f0d23f07 [revert] changes not working due to Thread 2018-04-26 16:37:57 +02:00
2554f6ca7e [new] don't show plot if no data 2018-04-26 16:32:05 +02:00
8073a872c1 [add] event colored grey if folder isEmpty 2018-04-26 16:26:03 +02:00
ea9939a0c0 Merge branch 'develop' into feature/obspy_dmt_interface 2018-04-26 16:19:30 +02:00
18926610cf [add] colorized table 2018-04-26 16:19:12 +02:00
3d1d97dd26 Merge branch 'develop' into feature/obspy_dmt_interface 2018-04-26 15:01:29 +02:00
b02a0e8f9a [bugfix] os.getlogin not working on all OS/python versions 2018-04-26 15:00:40 +02:00
4c42d34adc [bugfix] check for time_ax not working this way 2018-04-26 14:52:17 +02:00
c90a2f6ae1 [bugfix] forgot to replace PyQt4 with PySide 2018-04-26 14:50:18 +02:00
be82706413 [bugfix] event_path overwritten 2018-04-25 14:06:07 +02:00
644470f156 [bugfix] was only checking current data processing state 2018-04-25 14:01:05 +02:00
d6150c5d1a [new] disable event selection if folder is empty 2018-04-25 13:48:06 +02:00
073a71e150 [add] origin/mag to eventBox 2018-04-24 15:37:15 +02:00
d3de33a12b [add] lineEdit showing if using processed/raw data 2018-04-23 17:00:52 +02:00
e6648a3cec [add] read event info from obsDMT pickle file 2018-04-23 16:49:01 +02:00
36a4f0df8a [bugfix] error when there is nothing to plot 2018-04-23 13:33:05 +02:00
Darius Arnold
6e8ed7dad9 Add documentation for tuning parameters and WIP of GUI docs 2018-04-20 23:35:21 +02:00
0320ad67b9 [new] station highlighting by middle mouseclick 2018-04-20 14:53:15 +02:00
9b0ef55b70 [bugfix] wasn't using processed data 2018-04-20 13:51:21 +02:00
4d3b300e17 [removed] check4gaps/doubled, plotting merged instead 2018-04-19 17:03:11 +02:00
56295f0c81 Merge branch 'feature/obspy_dmt_interface' into develop 2018-04-19 13:44:42 +02:00
a876384338 [add] info for user on min/max plot 2018-04-19 13:38:17 +02:00
964aa9ce6c [change] set downsampling to false again (creating errors when plotting picks)
2018-04-18 16:46:21 +02:00
af54cb0d4b [add] auto set plotmethod 'fast' for large dataset 2018-04-18 16:44:33 +02:00
349715d13c [removed] nth_sample dialog 2018-04-18 16:21:57 +02:00
726210daeb [add] plot-method = 'fast' for min/max plot 2018-04-18 16:21:27 +02:00
e72111a6fb [add] obspy-like min/max function, removed testing area 2018-04-18 16:17:54 +02:00
5fcb07e647 [change] prepTimeAxis using linspace (is that ok?) 2018-04-18 16:17:16 +02:00
f0b6897053 [bugfix] identity check of np.array 2018-04-18 16:01:39 +02:00
Darius Arnold
0366181532 [refactor] picking parameters into dictionaries 2018-04-17 11:53:47 +02:00
ba37d587a6 [update] testing min/max plot (WIP) 2018-04-13 10:56:52 +02:00
Darius Arnold
90e6c758d8 [bugfix] convert string response from qsettings to integer for calculation 2018-04-06 14:35:24 +02:00
Darius Arnold
1edc745903 Merge branch 'develop' into feature/refactor 2018-04-06 13:53:04 +02:00
10c26a8261 Merge branch 'develop' into feature/obspy_dmt_interface 2018-04-05 16:03:26 +02:00
b730b16ac6 [add] check for plot quantity 2018-04-05 16:02:53 +02:00
c655371bae [minor] add RELEASE-VERSION to gitignore 2018-04-05 15:34:49 +02:00
67ee544522 Merge branch 'develop' into feature/obspy_dmt_interface 2018-04-05 15:33:48 +02:00
4cf785a135 [add] synthetic data plot (not yet flexible) 2018-04-05 15:33:40 +02:00
f3627033b1 [bugfix] auto downsampling not working 2018-04-05 15:06:23 +02:00
bd086e6cd4 Merge branch 'develop' into feature/obspy_dmt_interface 2018-04-04 15:20:30 +02:00
a201b99c2e [bugfix] closes #256, not asking for filteroptions if no filterphase 2018-04-04 15:17:55 +02:00
889628ceee [remove] RELEASE VERSION!!! 2018-04-04 15:17:55 +02:00
366db9aef0 [initial] first changes to supply an interface to an obspyDMT database 2018-04-04 14:57:54 +02:00
79412c392d [add] gitignore 2018-04-03 16:04:29 +02:00
8f5292b957 [minor] added Error if phase can not be identified 2018-03-28 10:24:24 +02:00
4002ccfd1a [bugfix] pickdlg not deleted, consuming tons of memory 2018-03-28 10:24:24 +02:00
Darius Arnold
aba1a16f98 Merge branch 'feature/parameter_limits' into develop 2018-03-19 11:44:22 +01:00
Darius Arnold
8f78b6d8c8 [Add] Parameters clamped according to min/max values in default_parameters 2018-03-07 14:24:41 +01:00
Darius Arnold
a44ada9938 [minor] correct misspelling in min value key in default_parameters 2018-03-07 14:23:36 +01:00
Darius Arnold
ef8c19b747 Merge branch 'develop' into feature/parameter_limits 2018-03-07 11:16:37 +01:00
f9203d92e8 [bugfix] tuneAutopicker not working without 'P' and 'S' in picks 2018-03-05 17:49:40 +01:00
39f9238a06 [new] function to modify project/event rootpath (called on loading project) 2018-03-05 17:45:07 +01:00
8dabfb41d8 [add] check for event data 2018-03-05 16:51:06 +01:00
4918448d92 [bugfix] some fixes on hovering/renaming picks 2018-02-23 10:37:23 +01:00
9e74ff04c2 [add] linear detrend for manual picking... 2018-02-22 16:31:01 +01:00
39b925f475 [bugfix] init of Filteroptions class not working with kwargs 2018-02-22 16:26:46 +01:00
b076ee90fa [add] cb for overwriteFilter, else filter automatically again 2018-02-22 16:03:11 +01:00
bff888c98a [bugfix] emitted signal disturbing reset feature 2018-02-22 11:33:14 +01:00
2e5ed974f0 [bugfix] filteroptions were not loaded from project 2018-02-22 11:28:46 +01:00
90bc7642fd [update] default corners=4! 2018-02-22 11:19:24 +01:00
08b3c70497 [add] labels auto/manu 2018-02-22 11:16:56 +01:00
69955ff439 [add] reset (to auto settings) button 2018-02-22 10:51:26 +01:00
fa46385a52 [add] color representation of auto filter values 2018-02-22 10:05:57 +01:00
144ecda61e Merge branch 'feature/additional_channel_display' into develop 2018-02-21 13:39:49 +01:00
ff65fd2290 [minor] some updates 2018-02-21 13:36:43 +01:00
Ludger Küperkoch
f56d32b661 New button for showing pick qualities. Still work in progress. 2018-02-01 11:42:34 +01:00
Ludger Küperkoch
da06115886 New button for getting histogram of pick qualities, in progress. 2018-01-30 16:16:41 +01:00
Ludger Küperkoch
dc6c6445a9 Default for input parameter iplot. 2018-01-30 11:11:57 +01:00
Ludger Küperkoch
1b85625643 [Bugfix] Two times "def calPick(self)" (accidentally?). 2018-01-30 10:06:22 +01:00
Ludger Küperkoch
af393d5118 Take into account that dip/azimuth might be 0 when not available. 2018-01-25 11:34:37 +01:00
819e4d7076 [bugfix] skip unknown channels for everything but comparison option 2018-01-17 16:38:11 +01:00
60a2863629 [bugfix] copy/paste failure 2018-01-17 16:28:50 +01:00
c6cd6b2d69 [bugfix] use filter selected for ini pick not auto, save this filteropt. 2018-01-17 12:19:32 +01:00
0e057af39b [cleanup] simplification of setIniPickP/S functions, merge into one 2018-01-17 12:04:07 +01:00
473529961e [update] EVERY trace now scaled relative to its own noiselevel 2018-01-17 11:47:52 +01:00
d2437b5014 Merge branch 'develop' into feature/additional_channel_display 2018-01-17 10:31:40 +01:00
97de485c99 [bugfix] case pg=False: internal change of pylot canvas updated 2018-01-17 10:30:38 +01:00
98f7ff8406 [update] resetZoom after picking 2018-01-16 16:17:18 +01:00
f4632ac1f5 [update] enable/disable comboboxes in pick mode 2018-01-16 16:10:47 +01:00
c520c7c212 [new] HUGE rearrangement of initial pick behavior (untested)
- zooms on selected channels after initial pick
- ylims no longer rely on noiselevel (better overview; not possible with more than one phase)
- xlims now based on mean SNR of all selected channels for the initial pick instead of the SNR of one 'random' channel
2018-01-16 16:06:24 +01:00
ec394d447d [new] choose channels to pick on for each phase 2018-01-16 13:33:12 +01:00
a22e64889d [add] scaling by channel (untested) 2018-01-15 16:35:19 +01:00
4835c0ca8a [change] possible to select one of all available channels for comparison 2018-01-15 15:29:46 +01:00
975bd64266 Merge branch 'develop' into feature/additional_channel_display 2018-01-15 14:43:26 +01:00
de8886b73b [update] leave rename phase mode when picking initiated 2018-01-15 14:37:07 +01:00
f4839bb609 [bugfix] too many arguments in function (outdated) 2018-01-15 14:31:42 +01:00
5b5902a329 [update] keep filter corresponding to phase when refreshing plot 2018-01-15 14:28:08 +01:00
1c51ed12fc [bugfix] missing parenthesis 2018-01-15 14:21:10 +01:00
49de0fb22e [add] Hotkey for renaming 2018-01-15 14:20:48 +01:00
b191bd8cf4 [update] warn/do not save picks with unknown phase ID 2018-01-15 14:17:14 +01:00
4589f92a1a [new] renaming of phases (untested) 2018-01-15 13:52:31 +01:00
44f63f8163 [change] ax.vlines -> ax.axvline (infinite) 2018-01-15 11:38:07 +01:00
a1ee8d408c [add] user_help to status bar
[bugfix] missing change to mouseevent button
2018-01-15 11:32:02 +01:00
d6400562d6 [change] DELETE PICK WITH MIDDLE MOUSE BUTTON (else deleted using zoom) 2018-01-15 11:28:25 +01:00
0013d099f3 [new] tooltip for phases on mouseover (untested) 2018-01-15 11:21:43 +01:00
534222e241 [bugfix] Exception when pick.filter_id was None 2018-01-15 11:16:26 +01:00
796eea87a1 [new] show time and pick info on left click in 3 comp window! 2018-01-12 15:14:57 +01:00
bb395d2514 [new] preparation for pick info on click 2018-01-12 14:42:16 +01:00
5aef50f922 [new] save filteroptions as strings in XML (untested) 2018-01-12 14:07:36 +01:00
ffa30e92e9 [bugfix] tried to filter without active event 2018-01-12 10:15:30 +01:00
23ed5ceb5c [update] filter hotkeys changed to P, S, Ctrl+F 2018-01-12 10:10:51 +01:00
185cc14e38 [add] P and S Filter options for main window
[bugfix] wforiginal not pre-processed
2018-01-12 10:07:14 +01:00
80d2ff508b Merge branch 'develop' into feature/additional_channel_display 2018-01-11 10:56:14 +01:00
c7dc1481a1 [bugfix] wrong parent set for TuneAutoPicker 2018-01-10 16:59:34 +01:00
0ebc6e11f8 [add] prev/next event button (~~had to remove min button width) 2018-01-10 16:54:26 +01:00
78d5ee58b5 [update] apply new filter settings after selection 2018-01-10 16:08:28 +01:00
8d6de02afe [bugfix] filter settings will be retained when leaving picking mode etc 2018-01-10 16:00:34 +01:00
7ab13ebaad Merge branch 'develop' into feature/additional_channel_display 2018-01-10 14:50:46 +01:00
a76dd06497 [update] add detrend/taper to 3comp filter (better only one function!) 2018-01-10 14:50:37 +01:00
7c61c7c471 Merge branch 'develop' into feature/additional_channel_display 2018-01-09 15:37:01 +01:00
39e048cf2f [bugfix] rectangle zoom connection 2018-01-09 13:46:21 +01:00
440c2c7bf3 [bugfix] reconnect interactive zoom after rectangle zoom 2018-01-09 13:37:53 +01:00
7be9fc28fa [minor] beautification 2018-01-09 12:21:49 +01:00
a81c98c0b8 [update] pickdlg maximized by default 2018-01-09 12:18:47 +01:00
fe81bd9719 [bugfix] enable buttons when switching between P and S by hotkey 2018-01-09 12:15:55 +01:00
fde0de26f2 [update] added hotkeys for picking 2018-01-09 12:14:34 +01:00
acdcf74c63 [bugfix] corrected hotkey 2018-01-09 12:04:43 +01:00
1324bf3d7c [bugfix] retain filtering when switching events 2018-01-09 12:04:03 +01:00
00cdbb5052 [update] changed frame color to pick color on init pick 2018-01-09 11:54:46 +01:00
4150e7f7d4 [update/bugfix] big update on filter options:
- added toggle and QSetting for automatic filtering when picking
- filtering in 3comp window independent from main plot, current filter options are copied after 3comp. init though
- several bugfixes when activating/deactivating filter
2018-01-09 11:44:22 +01:00
8badba4a89 Merge branch 'develop' into feature/additional_channel_display 2018-01-09 09:55:09 +01:00
549700bf7a [update] changed plot/filter behavior for 3comp window 2018-01-09 09:54:39 +01:00
c39303042f Merge branch 'develop' into feature/additional_channel_display 2018-01-08 14:46:58 +01:00
fb4753ca83 [add] improved filteroptions and fixed some bugs 2018-01-08 14:45:29 +01:00
08703bf691 [add] some hacks to display air pressure in plot 2018-01-08 13:47:40 +01:00
8c9fded873 [update] working on filter bugs WIP 2017-12-21 13:13:44 +01:00
1296e914c3 Merge branch 'develop' of ariadne.geophysik.ruhr-uni-bochum.de:/data/git/pylot into develop 2017-12-06 14:53:18 +01:00
e0e106d898 [bugfix] no more zoom reset entering picking mode 2017-12-06 14:52:31 +01:00
151900753a [update] improve help for event folder structure 2017-11-23 11:37:04 +01:00
aada482b19 [add] some fancy color stuff for picking 2017-11-22 14:25:42 +01:00
Darius Arnold
b6fddaf208 [add] docstrings for event.py 2017-11-15 11:37:13 +01:00
Darius Arnold
fc3a825e73 Merge remote-tracking branch 'origin/feature/refactor' into feature/refactor 2017-11-15 11:36:14 +01:00
Darius Arnold
983e4c25a3 [add] docstrings in pylot/core/loc 2017-11-13 11:45:25 +01:00
Darius Arnold
6aacd2dd55 [add] docstrings in pylot/core/io 2017-11-13 11:44:57 +01:00
Darius Arnold
b2f211516a [change] docstrings in picker.py 2017-11-13 09:49:44 +01:00
Darius Arnold
16a8170e49 [add] an exception if unable to find minimum in AIC cf 2017-11-13 09:32:39 +01:00
Darius Arnold
8280a5043c [bugfix] snr calculation takes offset from cutting into account 2017-11-10 10:23:50 +01:00
Ludger Küperkoch
55101c9e9e Reset some former changes, net_magnitude still buggy, as it returns Nonetype if no reliable picks are available! 2017-11-09 14:58:06 +01:00
Ludger Küperkoch
615ca54640 Make sure calculating Ml only from reliable onset times. 2017-11-09 10:30:01 +01:00
Ludger Küperkoch
f01860b6f1 Calculate Mo only from reliable onset times. 2017-11-09 10:15:58 +01:00
Ludger Küperkoch
3f50dae2de [Bugfix]Wrong application of title, wrong onset plotting, removed obsolete intrinsic ObsPy-plot. 2017-11-07 15:15:23 +01:00
Darius Arnold
887eeba6ec [minor] small docstring changes 2017-10-25 21:55:19 +02:00
Darius Arnold
8f721da643 refactoring picker.py (AICPicker and PragPicker) 2017-10-25 21:54:25 +02:00
deaa67bb30 [bugfix] wrong parent used for style 2017-10-20 11:04:03 +02:00
f01868bc36 [bugfix] missing parameter ax for getXLims (mpl) 2017-10-20 11:00:52 +02:00
36e68ee1ed [bugfix] getAxes missing in PylotCanvas for mpl 2017-10-20 10:57:00 +02:00
Darius Arnold
149fe3f74d [change] docstrings in util/utils.py 2017-10-16 21:55:24 +02:00
Darius Arnold
7086d43076 [change] cleanup in pick/utils.py 2017-10-16 21:10:21 +02:00
Darius Arnold
a5b3c5da1b [change] docstrings in pick/utils.py 2017-10-16 21:04:37 +02:00
Darius Arnold
fab1f53a41 [change] docstrings in compare.py 2017-10-16 21:02:04 +02:00
4d37abc7d8 [update] flexibility for full_range
teach PyLoT to cope with future (synthetic) events as well
2017-10-11 11:52:51 +02:00
Ludger Küperkoch
7d3196bf21 [Bugfix] argrelmax not working if array contains only one element. 2017-10-09 14:40:26 +02:00
Darius Arnold
f1cd250cb1 [minor] one min/max key was forgotten in previous commit 2017-10-05 18:25:00 +02:00
Darius Arnold
da6f35d76b [change] some tooltips for the picking parameters 2017-10-05 18:23:47 +02:00
Darius Arnold
b10432f3bf [add] min/max values for most keys in default_parameters 2017-10-05 18:22:13 +02:00
Darius Arnold
9491a28710 [cleanup] of autoPyLoT.py 2017-10-05 17:26:18 +02:00
Darius Arnold
3dde42e639 [change] improved docstring for autopylot.py 2017-10-05 17:19:00 +02:00
Darius Arnold
87b4ce1345 [cleanup] in charfuns.py using code inspection 2017-10-05 16:52:36 +02:00
Darius Arnold
9a8d7da0e6 [change] docstrings for functions in charfuns.py 2017-10-05 16:50:53 +02:00
Darius Arnold
5931331b1d [cleanup] code cleanup in autopick.py using code inspection 2017-10-05 16:07:29 +02:00
Darius Arnold
6f65789844 [change] docstrings for functions in autopick.py 2017-10-05 16:06:11 +02:00
Darius Arnold
c61277f429 [bugfix] picking dialog initiated in arrayMap had no _style attribute 2017-10-04 16:52:18 +02:00
710ea57503 Merge branch 'github-master' 2017-09-25 15:50:38 +02:00
8aaad643ec release version 0.2
release notes:
==============
Features:
- centralize all functionalities of PyLoT and control them from within the main GUI
- handling multiple events inside GUI with project files (save and load work progress)
- GUI based adjustments of pick parameters and I/O
- interactive tuning of parameters from within the GUI
- call automatic picking algorithm from within the GUI
- comparison of automatic with manual picks for multiple events using clear differentiation of manual picks into 'tune' and 'test-set' (beta)
- manual picking of different (user defined) phase types
- phase onset estimation with ObsPy TauPy

- interactive zoom/scale functionalities in all plots (mousewheel, pan, pan-zoom)
- array map to visualize stations and control onsets (beta feature, switch to manual picks not implemented)

Platform support:
- python 3 support
- Windows support

Performance:
- multiprocessing for automatic picking and restitution of multiple stations
- use pyqtgraph library for better performance on main waveform plot

Visualization:
- pick uncertainty (quality classes) visualization with gradients
- pick color unification for all plots
- new icons and stylesheets

Known Issues:
2017-09-25 14:24:52 +02:00
bc808b66c2 [update] README.md 2017-09-25 10:17:58 +02:00
472e5b3b9e Merge branch 'develop' 2017-09-21 16:18:53 +02:00
846da4cab7 [minor] version number changed to 0.2 2017-09-21 15:59:34 +02:00
c8adc24c67 [minor] README update 2017-09-21 15:58:52 +02:00
255eca3c05 [update] rename QtPylot -> PyLoT (suggestion) 2017-09-21 15:36:17 +02:00
af5953b053 [update] README.md 2017-09-21 15:34:29 +02:00
c19f057abf [update] default pylot.in file updates 2017-09-21 15:20:05 +02:00
1f2b3147fd [cleanup] remove old files pt.2 2017-09-21 15:03:14 +02:00
43930a07cb [minor] remove navigation toolbar for tab 2017-09-21 14:42:19 +02:00
2a987cbdfa [cleanup] removed unused old code and output 2017-09-21 14:38:56 +02:00
238998e626 [cleanup] removed old unused class 2017-09-21 14:27:42 +02:00
b465ba2111 [minor] color for qstatusbar missing 2017-09-21 14:26:35 +02:00
370e8a7074 [update] make HELP work again 2017-09-21 13:31:45 +02:00
259538f0b0 [minor] add some description to exp. histograms 2017-09-21 10:54:34 +02:00
3096ee4573 [bugfix] usage of unordered dict for comparison
implicit extraction of pdf names (auto, manu) for comparison from a list generated within dict iteration led to possible flipping (auto - manu or manu - auto) of the compared pdfs (and with that the histograms)
2017-09-21 10:34:21 +02:00
92b12537eb [update] small improvements on comparison figures 2017-09-21 10:31:29 +02:00
e9ce9635e0 [update] tuneAutoPicker color/pick visualisation 2017-09-20 11:03:30 +02:00
c70222972c [update] small change in comparison figures 2017-09-19 16:08:04 +02:00
8397856213 [refs #247] array map initiation when loading project 2017-09-18 16:12:02 +02:00
961be8ccbc [bugfix] ibad referenced before assignment 2017-09-18 15:10:52 +02:00
7522201e06 [minor] beautiful progressbar 2017-09-18 15:06:50 +02:00
69ad0bf23b [bugfix] self.pdlg_widget not initiated 2017-09-18 15:06:33 +02:00
5bb55a8181 [update] move progressbar to bottom of mainwindow 2017-09-18 14:57:48 +02:00
eab416df3f [bugfix] reset stdout in Worker 2017-09-18 14:46:29 +02:00
0949a6deac [update] apw saved, proper deletion of figures 2017-09-18 14:44:18 +02:00
6acb0ad580 [minor] wadati/jk improvements 2017-09-18 12:20:48 +02:00
316d19ba9e [bugfix] one replacement of i -> index missing 2017-09-18 11:39:27 +02:00
8a2bb65581 Merge remote-tracking branch 'origin/develop' into develop 2017-09-18 11:24:57 +02:00
330e15441e [bugfix] fig._tight=True tried to be set on None type 2017-09-18 11:23:52 +02:00
365657064f [cleanup] pycharm code inspection 2017-09-18 10:41:27 +02:00
a5d863bf95 [cleanup] method without usage 2017-09-18 10:15:07 +02:00
Darius Arnold
6d0083040c Merge branch 'feature/fix-tuneautopicker_stationlist' into develop 2017-09-15 20:49:38 +02:00
Darius Arnold
586abea874 [bugfix] same treatment of waveform data in tune autopicker and load waveform
fixes #217. While generating the station list for the tune autopicker dialog, waveform data was loaded and some traces were removed (gaps, doubled, rotated). However, not all the functions were called on wfdat, so some stations would be unavailable in the general waveform overview for the event, but could be selected in the tune autopicker.
2017-09-15 20:48:22 +02:00
Darius Arnold
2ee3c9a304 [change] show more stations in stationbox of tune autopicker dialog 2017-09-15 18:59:17 +02:00
Darius Arnold
2e6c33de45 [bugfix] allow picking of S phase in lower area of waveform plot
Fixes bug report #211
2017-09-15 18:56:04 +02:00
Darius Arnold
864f035456 Merge branch 'improve_jackwada_plots' into develop 2017-09-15 18:32:26 +02:00
Darius Arnold
27a6af6636 improved jackknife/median plot
added different marker colors for accepted picks, jackknife-rejected picks and median-rejected picks; added lines indicating the median tolerance
2017-09-15 18:31:20 +02:00
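A median rejection test of the kind this plot distinguishes can be sketched as follows (the function name and exact semantics are my assumptions, not PyLoT's code): accept a pick only if it deviates from the median onset by no more than a tolerance in seconds.

```python
import numpy as np

def median_test(onsets, tolerance):
    """Hypothetical sketch of a median rejection test: True marks
    accepted picks, False marks median-rejected ones."""
    onsets = np.asarray(onsets, dtype=float)
    deviation = np.abs(onsets - np.median(onsets))
    return deviation <= tolerance
```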
Darius Arnold
62625d6941 improved wadati plot
added names of stations, added lines indicating wadati tolerance
2017-09-15 18:28:33 +02:00
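The relation behind the Wadati plot mentioned above: for a common origin time, the S-P delay grows linearly with P travel time, and the fitted slope gives vp/vs - 1. A minimal sketch (function name assumed, not from PyLoT):

```python
import numpy as np

def wadati_vpvs(tp, ts):
    """Estimate vp/vs from P onsets (tp) and S onsets (ts) via a
    linear fit of the S-P delay against P travel time."""
    tp = np.asarray(tp, dtype=float)
    ts = np.asarray(ts, dtype=float)
    slope, _intercept = np.polyfit(tp, ts - tp, 1)
    return slope + 1.0
```

Stations falling outside a tolerance band around this fitted line are the ones the plot marks as wadati-rejected.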
Darius Arnold
7c528a4bfd [change] jackknife and wadati plot show one instead of two points in legend 2017-09-15 16:48:08 +02:00
Darius Arnold
8dc5be8e49 [bugfix] removed unnecessary command added during tight layout change 2017-09-15 16:47:12 +02:00
Darius Arnold
a7fd239574 [bugfix] improvements to removal of NaNs during Cf calculation 2017-09-15 16:28:27 +02:00
6a40eef3fc Merge branch 'develop' of ariadne.geophysik.ruhr-uni-bochum.de:/data/git/pylot into develop 2017-09-15 14:47:44 +02:00
183f9e77c0 [fixes #249] stylesheet path relative to style_settings 2017-09-15 14:47:05 +02:00
Darius Arnold
c8affae0c2 Merge branch 'tightlayout_tuneauto' into develop 2017-09-15 14:12:33 +02:00
Darius Arnold
ec8d471ce1 [change] enabled tight layout for tune autopicker plots 2017-09-15 14:11:28 +02:00
Darius Arnold
a96f8dcb7d [add] wadati, jackknife and median test print excluded stations 2017-09-15 13:24:13 +02:00
Darius Arnold
c65d6b8376 [bugfix] catch empty pickdics during comparison
Only the P phase might be picked on purpose, or the autopicker might not find any valid S onsets
2017-09-15 13:17:40 +02:00
Darius Arnold
ff891cacdc add tight layout to overview plots of tune autopicker dialog 2017-09-15 13:15:30 +02:00
Darius Arnold
00df77e723 [add] remove mean from traces to avoid filtering artifact
Traces with constant offset (mean != 0) produce a large, long-period artifact when filtered, even with tapering.
2017-09-15 12:09:56 +02:00
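The fix described in this commit boils down to demeaning: a nonzero mean acts like a step at the trace edges and rings through a bandpass filter. A minimal sketch of the idea (ObsPy users would typically call `Trace.detrend('demean')` instead):

```python
import numpy as np

def demean(samples):
    """Remove the constant offset from a trace before filtering."""
    samples = np.asarray(samples, dtype=float)
    return samples - samples.mean()
```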
Darius Arnold
219d2d0e5a Merge remote-tracking branch 'origin/develop' into develop 2017-09-15 11:38:06 +02:00
7496025137 [rename] Ref Button -> Tune Button 2017-09-13 14:44:56 +02:00
4269256469 [update] make PyLoT icon great again
(bugfix and color change for visibility on dark system background)
2017-09-12 16:37:26 +02:00
f2f908f8b5 [bugfix] missing parameter in getGlobalLimits 2017-09-12 15:31:02 +02:00
8381782194 [bugfix] missing third value (color) in tuple for autopicker 2017-09-12 14:10:15 +02:00
518019322d [minor] add event name to Histogram Window 2017-09-12 13:30:51 +02:00
4bf03ef0d9 [minor] update radioButton StyleSheet 2017-09-12 13:30:38 +02:00
cc0281f9dc [add] color for tune/test events on MultiEventWidget 2017-09-12 12:09:59 +02:00
1d253b5ee5 [minor] text color changed in stylesheet 2017-09-12 11:58:24 +02:00
0eeb07583b [change] major change: wadati/JK checked picks REMOVED from autopicks
Before, all picks (including weights 8 and 9) were saved as XML automatically by autoPyLoT; maybe change this to a more convenient way in the future (e.g. add the pick weight as ObsPy.Pick.comment?)
2017-09-12 11:50:26 +02:00
0cf46a0cc3 [minor] improve user output on used CPU cores 2017-09-12 11:02:08 +02:00
7027f9ca58 [update] stylename active used to init combobox 2017-09-11 16:06:25 +02:00
c23fc0f303 [minor] stylesheet changes 2017-09-11 16:01:03 +02:00
ed915f29df Merge branch 'style_options' into develop 2017-09-11 15:53:49 +02:00
0418acf472 [minor] embellish autoPyLoT button 2017-09-11 15:52:01 +02:00
e333ae0ece [revert] accidentally added line for local testing 2017-09-11 15:16:42 +02:00
197164f849 [issue] trying to solve FigureCanvas already deleted exception WIP 2017-09-11 15:15:33 +02:00
a4c697d250 [minor] solving some parental issues 2017-09-11 14:39:46 +02:00
bed5c8ffcf [minor] stylesheet changes 2017-09-11 14:23:42 +02:00
330ab9b823 Merge branch 'develop' into style_options 2017-09-11 13:29:22 +02:00
b1a12bf666 [bugfix] workaround: if datastructure not set 2017-09-11 13:28:39 +02:00
f0e4ba5ab2 [add] style settings option 2017-09-11 13:27:32 +02:00
d93a571a51 Merge branch 'develop' into style_options 2017-09-11 12:23:40 +02:00
1862b6d6e1 [bugfix] QSettings not loaded on setupUI because
application/organization name were not set
2017-09-11 12:18:30 +02:00
6e43e67172 [minor] stylesheet modifications 2017-09-11 10:24:22 +02:00
e5b0210c27 [update] bright stylesheet 2017-09-08 17:25:29 +02:00
78ba5484c6 [bugfix] old usage of _parent 2017-09-08 17:19:34 +02:00
f1fdd3b17b [update] bright stylesheet 2017-09-08 17:19:18 +02:00
104a8dda64 [update] add linecolor settings to autoPyLoT figs 2017-09-08 17:01:36 +02:00
7ef784f2d2 [minor] color changes 2017-09-08 15:50:08 +02:00
d15828a058 [add] pick trace button color 2017-09-08 12:25:30 +02:00
51fe48553d [update] bright stylesheet 2017-09-08 12:21:56 +02:00
a97258513e [bugfix] missing variable name change 2017-09-08 10:28:54 +02:00
e310e1b6a6 Merge branch 'develop' into style_options 2017-09-08 09:39:47 +02:00
38a071ac5b [minor] suppress rotation warning in GUI 2017-09-08 09:36:34 +02:00
8717df836a [add] bright stylesheet 2017-09-08 09:27:47 +02:00
cacfb37468 [minor] some updates, added bright style sheet 2017-09-07 16:51:56 +02:00
Darius Arnold
49adc5418f [bugfix] calculate slope up to first local maximum only
before it was calculated to global maximum in slope window
2017-09-07 16:47:42 +02:00
9c63621ba4 [update] style_settings 2017-09-07 16:07:43 +02:00
bab34a23c4 [add] settings for styles 2017-09-07 10:59:09 +02:00
7e39593b05 [change] dark stylesheet update 2017-09-07 10:52:22 +02:00
7ecf976401 [add] dark stylesheet 2017-09-06 18:34:59 +02:00
7f0d3c2ab4 [new] missing parents added, some preparations 2017-09-06 18:02:09 +02:00
Marc S. Boxberg
503ea419c4 release version: 0.1a
release notes:
==============
Features
- consistent manual phase picking through predefined SNR-dependent zoom level
- uniform uncertainty estimation from waveform's properties for automatic and manual picks
- pdf representation and comparison of picks taking the uncertainty intrinsically into account
- Richter and moment magnitude estimation
- location determination with external installation of [NonLinLoc](http://alomax.free.fr/nlloc/index.html)
Known issues
- Magnitude estimation from manual PyLoT takes some time (instrument correction)
2016-10-04 09:38:05 +02:00
157 changed files with 205833 additions and 518862 deletions

.gitignore

@@ -0,0 +1,5 @@
*.pyc
*~
.idea
pylot/RELEASE-VERSION
/tests/test_autopicker/dmt_database_test/

.mailmap

@@ -0,0 +1,39 @@
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <Darius_A@web.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <darius.arnold@rub.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <darius.arnold@ruhr-uni-bochum.de>
Darius Arnold <Darius.Arnold@ruhr-uni-bochum.de> <mail@dariusarnold.de>
Dennis Wlecklik <dennisw@minos02.geophysik.ruhr-uni-bochum.de>
Jeldrik Gaal <jeldrikgaal@gmail.com>
Kaan Coekerim <kaan.coekerim@ruhr-uni-bochum.de>
Kaan Coekerim <kaan.coekerim@ruhr-uni-bochum.de> <kaan.coekerim@rub.de>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <kueperkoch@bestec-for-nature.com>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <ludger@quake2.(none)>
Ludger Kueperkoch <kueperkoch@igem-energie.de> <ludger@sauron.bestec-for-nature>
Marc S. Boxberg <marc.boxberg@rub.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel.paffrath@rub.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos01.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos02.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@minos25.geophysik.ruhr-uni-bochum.de>
Marcel Paffrath <marcel.paffrath@ruhr-uni-bochum.de> <marcel@email.com>
Sally Zimmermann <sally.zimmermann@ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos01.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos02.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastianw@minos22.geophysik.ruhr-uni-bochum.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling-benatelli@scisys.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <sebastian.wehling@rub.de>
Sebastian Wehling-Benatelli <sebastian.wehling-benatelli@cgi.com> <DarkBeQst@users.noreply.github.com>
Thomas Moeller <thomas.moeller@rub.de>
Ann-Christin Koch <ann-christin.koch@ruhr-uni-bochum.de> <Ann-Christin.Koch@ruhr-uni-bochum.de>
Sebastian Priebe <sebastian.priebe@rub.de>

File diff suppressed because it is too large


@@ -1,69 +1,80 @@
# PyLoT
version: 0.1a
version: 0.3
The Python picking and Localisation Tool
This Python library contains a graphical user interface for picking
seismic phases. This software needs [ObsPy][ObsPy]
and the PySide Qt4 bindings for Python to be installed first.
This Python library contains a graphical user interface for picking seismic phases. This software needs [ObsPy][ObsPy]
and the PySide2 Qt5 bindings for Python to be installed first.
PILOT was originally developed in MathWorks' MATLAB. In order to
distribute PILOT without facing portability problems, it was decided
to redevelop the software package in Python. The great work of the ObsPy
group allows easy handling of large amounts of seismic data, and PyLoT
benefits considerably compared to the former MATLAB version.
PILOT was originally developed in MathWorks' MATLAB. In order to distribute PILOT without facing portability
problems, it was decided to redevelop the software package in Python. The great work of the ObsPy group allows easy
handling of large amounts of seismic data, and PyLoT benefits considerably compared to the former MATLAB version.
The development of PyLoT is part of the joint research project MAGS2.
The development of PyLoT is part of the joint research project MAGS2, AlpArray and AdriaArray.
## Installation
At the moment there is no automatic installation procedure available for PyLoT.
The best way to install is to clone the repository and add the path to your Python path.
At the moment there is no automatic installation procedure available for PyLoT. The best way to install is to clone the
repository and add the path to your Python path.
It is highly recommended to use Anaconda for a simple creation of a Python installation using either the *pylot.yml* or the *requirements.txt* file found in the PyLoT root directory. First make sure that the *conda-forge* channel is available in your Anaconda installation:
conda config --add channels conda-forge
Afterwards run (from the PyLoT main directory where the files *requirements.txt* and *pylot.yml* are located)
conda env create -f pylot.yml
or
conda create -c conda-forge --name pylot_311 python=3.11 --file requirements.txt
to create a new Anaconda environment called *pylot_311*.
Afterwards activate the environment by typing
conda activate pylot_311
#### Prerequisites:
In order to run PyLoT you need to install:
- python
- Python 3
- cartopy
- joblib
- obspy
- pyaml
- pyqtgraph
- pyside2
(the following are already dependencies of the above packages):
- scipy
- numpy
- matplotlib
- obspy
- pyside
#### Some handwork:
PyLoT needs a properties folder on your system to work. It should be situated in your home directory:
mkdir ~/.pylot
In the next step you have to copy some files to this directory:
cp path-to-pylot/inputs/pylot.in ~/.pylot/
for local distance seismicity
cp path-to-pylot/inputs/autoPyLoT_local.in ~/.pylot/autoPyLoT.in
for regional distance seismicity
cp path-to-pylot/inputs/autoPyLoT_regional.in ~/.pylot/autoPyLoT.in
and some extra information on error estimates (just needed for reading old PILOT data) and the Richter magnitude scaling relation
Some extra information on error estimates (just needed for reading old PILOT data) and the Richter magnitude scaling
relation
cp path-to-pylot/inputs/PILOT_TimeErrors.in path-to-pylot/inputs/richter_scaling.data ~/.pylot/
You may need to modify these files; in particular, the folder names should be reviewed.
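The manual setup steps above can be scripted; the following is a minimal sketch (the helper name and the `distance` switch are my own; the file names come from the text):

```python
import shutil
from pathlib import Path

def setup_pylot_home(pylot_src, home=None, distance="local"):
    """Create ~/.pylot and copy the input files described above.
    distance selects the autoPyLoT template: "local" or "regional"."""
    home = Path(home) if home else Path.home()
    cfg = home / ".pylot"
    cfg.mkdir(parents=True, exist_ok=True)
    inputs = Path(pylot_src) / "inputs"
    shutil.copy(inputs / "pylot.in", cfg / "pylot.in")
    # choose the template matching local or regional distance seismicity
    shutil.copy(inputs / "autoPyLoT_{}.in".format(distance), cfg / "autoPyLoT.in")
    # extra files: PILOT error estimates and Richter magnitude scaling
    for extra in ("PILOT_TimeErrors.in", "richter_scaling.data"):
        shutil.copy(inputs / extra, cfg / extra)
    return cfg
```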
PyLoT has been tested on Mac OSX (10.11) and Debian Linux 8.
PyLoT has been tested on Mac OSX (10.11), Debian Linux 8 and on Windows 10/11.
## Example Dataset
An example dataset with waveform data, metadata and automatic picks in the obspy-dmt dataset format for testing the teleseismic picking can be found at https://zenodo.org/doi/10.5281/zenodo.13759803
## Release notes
#### Features:
- event organisation in project files and waveform visualisation
- consistent manual phase picking through predefined SNR-dependent zoom level
- consistent automatic phase picking routines using Higher Order Statistics, AIC and Autoregression
- pick correlation correction for teleseismic waveforms
- interactive tuning of auto-pick parameters
- uniform uncertainty estimation from waveform's properties for automatic and manual picks
- pdf representation and comparison of picks taking the uncertainty intrinsically into account
- Richter and moment magnitude estimation
@@ -71,20 +82,17 @@ PyLoT has been tested on Mac OSX (10.11) and Debian Linux 8.
#### Known issues:
- Magnitude estimation from manual PyLoT takes some time (instrument correction)
We hope to solve these with the next release.
The current release is still under development and has several issues. We currently lack manpower, but hope to address many of the issues in the near future.
## Staff
Original author(s): L. Kueperkoch, S. Wehling-Benatelli, M. Bischoff (PILOT)
Developer(s): M. Paffrath, S. Wehling-Benatelli, L. Kueperkoch, D. Arnold, K. Cökerim, K. Olbert, M. Bischoff, C. Wollin, M. Rische, S. Zimmermann
Developer(s): S. Wehling-Benatelli, L. Kueperkoch, K. Olbert, M. Bischoff,
C. Wollin, M. Rische, M. Paffrath
Original author(s): M. Rische, S. Wehling-Benatelli, L. Kueperkoch, M. Bischoff (PILOT)
Others: A. Bruestle, T. Meier, W. Friederich
[ObsPy]: http://github.com/obspy/obspy/wiki
October 2016
March 2025


@@ -7,6 +7,10 @@ import argparse
import datetime
import glob
import os
import traceback
from obspy import read_events
from obspy.core.event import ResourceIdentifier
import pylot.core.loc.focmec as focmec
import pylot.core.loc.hash as hash
@@ -15,18 +19,16 @@ import pylot.core.loc.hypodd as hypodd
import pylot.core.loc.hyposat as hyposat
import pylot.core.loc.nll as nll
import pylot.core.loc.velest as velest
from obspy import read_events
from obspy.core.event import ResourceIdentifier
# from PySide.QtGui import QWidget, QInputDialog
from pylot.core.analysis.magnitude import MomentMagnitude, LocalMagnitude
from pylot.core.io.data import Data
from pylot.core.io.inputs import PylotParameter
from pylot.core.pick.autopick import autopickevent, iteratepicker
from pylot.core.util.dataprocessing import restitute_data, read_metadata
from pylot.core.util.dataprocessing import restitute_data, Metadata
from pylot.core.util.defaults import SEPARATOR
from pylot.core.util.event import Event
from pylot.core.util.structure import DATASTRUCTURE
from pylot.core.util.utils import real_None, remove_underscores, trim_station_components, check4gaps, check4doubled, \
from pylot.core.util.utils import get_none, trim_station_components, check4gapsAndRemove, check4doubled, \
check4rotated
from pylot.core.util.version import get_git_version as _getVersionString
@@ -34,19 +36,34 @@ __version__ = _getVersionString()
def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, eventid=None, savepath=None,
savexml=True, station='all', iplot=0, ncores=0):
savexml=True, station='all', iplot=0, ncores=0, obspyDMT_wfpath=False):
"""
Determine phase onsets automatically utilizing the automatic picking
algorithms by Kueperkoch et al. 2010/2012.
:param inputfile: path to the input file containing all parameter
information for automatic picking (for formatting details, see.
`~pylot.core.io.inputs.PylotParameter`
:param obspyDMT_wfpath: if obspyDMT is used, name of data directory ("raw" or "processed")
:param input_dict:
:type input_dict:
:param parameter: PylotParameter object containing parameters used for automatic picking
:type parameter: pylot.core.io.inputs.PylotParameter
:param inputfile: path to the input file containing all parameter information for automatic picking
(for formatting details, see. `~pylot.core.io.inputs.PylotParameter`
:type inputfile: str
:return:
.. rubric:: Example
:param fnames: list of data file names or None when called from GUI
:type fnames: str
:param eventid: event path incl. event ID (path to waveform files)
:type eventid: str
:param savepath: save path for autoPyLoT output, if None/"None" output will be saved in event folder
:type savepath: str
:param savexml: export results in XML file if True
:type savexml: bool
:param station: choose specific station name or 'all' to pick all stations
:type station: str
:param iplot: logical variable for plotting: 0=none, 1=partial, 2=all
:type iplot: int
:param ncores: number of cores used for parallel processing. Default (0) uses all available cores
:type ncores: int
:return: dictionary containing picks
:rtype: dict
"""
if ncores == 1:
@@ -64,7 +81,8 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
Version {version} 2017\n
\n
Authors:\n
L. Kueperkoch (BESTEC GmbH, Landau i. d. Pfalz)\n
L. Kueperkoch (BESTEC GmbH, Landau i. d. Pfalz, \n
now at igem GmbH, Mainz)
M. Paffrath (Ruhr-Universitaet Bochum)\n
S. Wehling-Benatelli (Ruhr-Universitaet Bochum)\n
@@ -73,15 +91,13 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
sp=sp_info)
print(splash)
parameter = real_None(parameter)
inputfile = real_None(inputfile)
eventid = real_None(eventid)
parameter = get_none(parameter)
inputfile = get_none(inputfile)
eventid = get_none(eventid)
fig_dict = None
fig_dict_wadatijack = None
locflag = 1
if input_dict and isinstance(input_dict, dict):
if 'parameter' in input_dict:
parameter = input_dict['parameter']
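The `real_None` → `get_none` rename running through these hunks presumably normalizes the string `'None'` (as written by parameter files or the GUI) to a real `None`. A minimal sketch of that assumed behavior (the actual implementation lives in `pylot.core.util.utils`):

```python
def get_none(value):
    """Assumed behavior of get_none (renamed from real_None): map the
    string 'None' to an actual None, pass everything else through."""
    return None if value in (None, "None") else value
```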
@@ -97,19 +113,15 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
eventid = input_dict['eventid']
if 'iplot' in input_dict:
iplot = input_dict['iplot']
if 'locflag' in input_dict:
locflag = input_dict['locflag']
if 'savexml' in input_dict:
savexml = input_dict['savexml']
if 'obspyDMT_wfpath' in input_dict:
obspyDMT_wfpath = input_dict['obspyDMT_wfpath']
if not parameter:
if inputfile:
if not inputfile:
print('Using default input parameter')
parameter = PylotParameter(inputfile)
#iplot = parameter['iplot']
else:
infile = os.path.join(os.path.expanduser('~'), '.pylot', 'pylot.in')
print('Using default input file {}'.format(infile))
parameter = PylotParameter(infile)
else:
if not type(parameter) == PylotParameter:
print('Wrong input type for parameter: {}'.format(type(parameter)))
@@ -124,21 +136,20 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
if parameter.hasParam('datastructure'):
# getting information on data structure
datastructure = DATASTRUCTURE[parameter.get('datastructure')]()
dsfields = {'root': parameter.get('rootpath'),
'dpath': parameter.get('datapath'),
'dbase': parameter.get('database')}
dsfields = {'dpath': parameter.get('datapath'),}
exf = ['root', 'dpath', 'dbase']
exf = ['dpath']
if parameter['eventID'] is not '*' and fnames == 'None':
if parameter['eventID'] != '*' and fnames == 'None':
dsfields['eventID'] = parameter['eventID']
exf.append('eventID')
datastructure.modifyFields(**dsfields)
datastructure.setExpandFields(exf)
# check if default location routine NLLoc is available
if real_None(parameter['nllocbin']) and locflag:
# check if default location routine NLLoc is available and all stations are used
if get_none(parameter['nllocbin']) and station == 'all':
locflag = 1
# get NLLoc-root path
nllocroot = parameter.get('nllocroot')
# get path to NLLoc executable
@@ -154,7 +165,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
ttpat = parameter.get('ttpatter')
# pattern of NLLoc-output file
nllocoutpatter = parameter.get('outpatter')
maxnumit = 3 # maximum number of iterations for re-picking
maxnumit = 2 # maximum number of iterations for re-picking
else:
locflag = 0
print(" !!! ")
@@ -162,35 +173,41 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
print("!!No source parameter estimation possible!!")
print(" !!! ")
wfpath_extension = ''
if obspyDMT_wfpath not in [None, False, 'False', '']:
wfpath_extension = obspyDMT_wfpath
print('Using obspyDMT structure. There will be no restitution, as pre-processed data are expected.')
if wfpath_extension != 'processed':
print('WARNING: Expecting wfpath_extension to be "processed" for'
' pre-processed data but received "{}" instead!!!'.format(wfpath_extension))
if not input_dict:
# started in production mode
datapath = datastructure.expandDataPath()
if fnames == 'None' and parameter['eventID'] is '*':
if fnames in [None, 'None'] and parameter['eventID'] == '*':
# multiple event processing
# read each event in database
events = [events for events in glob.glob(os.path.join(datapath, '*')) if os.path.isdir(events)]
elif fnames == 'None' and parameter['eventID'] is not '*' and not type(parameter['eventID']) == list:
events = [event for event in glob.glob(os.path.join(datapath, '*')) if
(os.path.isdir(event) and not event.endswith('EVENTS-INFO'))]
elif fnames in [None, 'None'] and parameter['eventID'] != '*' and not type(parameter['eventID']) == list:
# single event processing
events = glob.glob(os.path.join(datapath, parameter['eventID']))
elif fnames == 'None' and type(parameter['eventID']) == list:
elif fnames in [None, 'None'] and type(parameter['eventID']) == list:
# multiple event processing
events = []
for eventID in parameter['eventID']:
events.append(os.path.join(datapath, eventID))
else:
# autoPyLoT was initialized from GUI
events = []
events.append(eventid)
events = [eventid]
evID = os.path.split(eventid)[-1]
locflag = 2
else:
# started in tune or interactive mode
datapath = os.path.join(parameter['rootpath'],
parameter['datapath'])
datapath = parameter['datapath']
events = []
for eventID in eventid:
events.append(os.path.join(datapath,
parameter['database'],
eventID))
if not events:
@@ -204,8 +221,12 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
allpicks = {}
glocflag = locflag
for eventpath in events:
nEvents = len(events)
for index, eventpath in enumerate(events):
print('Working on: {} ({}/{})'.format(eventpath, index + 1, nEvents))
evID = os.path.split(eventpath)[-1]
event_datapath = os.path.join(eventpath, wfpath_extension)
fext = '.xml'
filename = os.path.join(eventpath, 'PyLoT_' + evID + fext)
try:
@@ -213,18 +234,21 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
data.get_evt_data().path = eventpath
print('Reading event data from filename {}...'.format(filename))
except Exception as e:
if type(e) == FileNotFoundError:
print('Creating new event file.')
else:
print('Could not read event from file {}: {}'.format(filename, e))
data = Data()
pylot_event = Event(eventpath) # event should be path to event directory
data.setEvtData(pylot_event)
if fnames == 'None':
data.setWFData(glob.glob(os.path.join(datapath, eventpath, '*')))
if fnames in [None, 'None']:
data.setWFData(glob.glob(os.path.join(event_datapath, '*')))
# the following is necessary because within
# multiple event processing no event ID is provided
# in autopylot.in
try:
parameter.get('eventID')
except:
except Exception:
now = datetime.datetime.now()
eventID = '%d%02d%02d%02d%02d' % (now.year,
now.month,
@@ -249,21 +273,29 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
if not wfdat:
print('Could not find station {}. STOP!'.format(station))
return
wfdat = remove_underscores(wfdat)
# wfdat = remove_underscores(wfdat)
# trim components for each station to avoid problems with different trace starttimes for one station
wfdat = check4gaps(wfdat)
wfdat = check4gapsAndRemove(wfdat)
wfdat = check4doubled(wfdat)
wfdat = trim_station_components(wfdat, trim_start=True, trim_end=False)
metadata = read_metadata(parameter.get('invdir'))
# rotate stations to ZNE
wfdat = check4rotated(wfdat, metadata)
if not wfpath_extension:
metadata = Metadata(parameter.get('invdir'))
else:
metadata = Metadata(os.path.join(eventpath, 'resp'))
corr_dat = None
if metadata:
# rotate stations to ZNE
try:
wfdat = check4rotated(wfdat, metadata)
except Exception as e:
print('Could not rotate station {} to ZNE:\n{}'.format(wfdat[0].stats.station,
traceback.format_exc()))
if locflag:
print("Restitute data ...")
corr_dat = restitute_data(wfdat.copy(), *metadata, ncores=ncores)
corr_dat = restitute_data(wfdat.copy(), metadata, ncores=ncores)
if not corr_dat and locflag:
locflag = 2
print('Working on event %s. Stations: %s' % (eventpath, station))
print('Stations: %s' % (station))
print(wfdat)
##########################################################
# !automated picking starts here!
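The `check4rotated` step guarded above rotates misaligned horizontal components back to north/east using station metadata. For a single station with a known sensor azimuth, the underlying algebra is roughly the following (a simplified sketch, assuming component 2 is orthogonal to and 90° clockwise from component 1; PyLoT itself delegates this to the metadata, and ObsPy's `Stream.rotate("->ZNE", inventory=...)` does the same with full channel orientations):

```python
import numpy as np

def rotate_to_ne(comp1, comp2, azimuth1_deg):
    """Rotate two orthogonal horizontal traces to north/east.
    azimuth1_deg is component 1's azimuth clockwise from north;
    component 2 is assumed to point 90 degrees clockwise of it."""
    az = np.deg2rad(azimuth1_deg)
    c1 = np.asarray(comp1, dtype=float)
    c2 = np.asarray(comp2, dtype=float)
    north = c1 * np.cos(az) - c2 * np.sin(az)
    east = c1 * np.sin(az) + c2 * np.cos(az)
    return north, east
```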
@@ -286,7 +318,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
ttpat)
# locate the event
nll.locate(ctrfile, inputfile)
nll.locate(ctrfile, parameter)
# !iterative picking if traces remained unpicked or occupied with bad picks!
# get theoretical onset times for picks with weights >= 4
@@ -311,13 +343,14 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
# calculate seismic moment Mo and moment magnitude Mw
moment_mag = MomentMagnitude(corr_dat, evt, parameter.get('vp'),
parameter.get('Qp'),
parameter.get('rho'), True, \
parameter.get('rho'), True,
iplot)
# update pick with moment property values (w0, fc, Mo)
for stats, props in moment_mag.moment_props.items():
picks[stats]['P'].update(props)
evt = moment_mag.updated_event()
net_mw = moment_mag.net_magnitude()
if net_mw is not None:
print("Network moment magnitude: %4.1f" % net_mw.mag)
# calculate local (Richter) magnitude
WAscaling = parameter.get('WAscaling')
@@ -325,6 +358,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
local_mag = LocalMagnitude(corr_dat, evt,
parameter.get('sstop'),
WAscaling, True, iplot)
# update pick with local magnitude property values
for stats, amplitude in local_mag.amplitudes.items():
picks[stats]['S']['Ao'] = amplitude.generic_amplitude
print("Local station magnitudes scaled with:")
@@ -333,7 +367,15 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
WAscaling[2]))
evt = local_mag.updated_event(magscaling)
net_ml = local_mag.net_magnitude(magscaling)
if net_ml:
print("Network local magnitude: %4.1f" % net_ml.mag)
if magscaling is None:
scaling = False
elif magscaling[0] != 0 and magscaling[1] != 0:
scaling = False
else:
scaling = True
if scaling:
print("Network local magnitude scaled with:")
print("%f * Ml + %f" % (magscaling[0], magscaling[1]))
else:
@@ -365,7 +407,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
# remove actual NLLoc-location file to keep only the last
os.remove(nllocfile)
# locate the event
nll.locate(ctrfile, inputfile)
nll.locate(ctrfile, parameter)
print("autoPyLoT: Iteration No. %d finished." % nlloccounter)
# get updated NLLoc-location file
nllocfile = max(glob.glob(locsearch), key=os.path.getctime)
@@ -374,7 +416,7 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
for key in picks:
if picks[key]['P']['weight'] >= 4 or picks[key]['S']['weight'] >= 4:
badpicks.append([key, picks[key]['P']['mpp']])
print("autoPyLoT: After iteration No. %d: %d bad onsets found ..." % (nlloccounter, \
print("autoPyLoT: After iteration No. %d: %d bad onsets found ..." % (nlloccounter,
len(badpicks)))
if len(badpicks) == 0:
print("autoPyLoT: No more bad onsets found, stop iterative picking!")
@@ -384,14 +426,15 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
# calculate seismic moment Mo and moment magnitude Mw
moment_mag = MomentMagnitude(corr_dat, evt, parameter.get('vp'),
parameter.get('Qp'),
parameter.get('rho'), True, \
parameter.get('rho'), True,
iplot)
# update pick with moment property values (w0, fc, Mo)
for stats, props in moment_mag.moment_props.items():
if picks.has_key(stats):
if stats in picks:
picks[stats]['P'].update(props)
evt = moment_mag.updated_event()
net_mw = moment_mag.net_magnitude()
if net_mw is not None:
print("Network moment magnitude: %4.1f" % net_mw.mag)
# calculate local (Richter) magnitude
WAscaling = parameter.get('WAscaling')
@@ -399,8 +442,9 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
local_mag = LocalMagnitude(corr_dat, evt,
parameter.get('sstop'),
WAscaling, True, iplot)
# update pick with local magnitude property values
for stats, amplitude in local_mag.amplitudes.items():
if picks.has_key(stats):
if stats in picks:
picks[stats]['S']['Ao'] = amplitude.generic_amplitude
print("Local station magnitudes scaled with:")
print("log(Ao) + %f * log(r) + %f * r + %f" % (WAscaling[0],
@@ -408,7 +452,15 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
WAscaling[2]))
evt = local_mag.updated_event(magscaling)
net_ml = local_mag.net_magnitude(magscaling)
if net_ml:
print("Network local magnitude: %4.1f" % net_ml.mag)
if magscaling is None:
scaling = False
elif magscaling[0] != 0 and magscaling[1] != 0:
scaling = False
else:
scaling = True
if scaling:
print("Network local magnitude scaled with:")
print("%f * Ml + %f" % (magscaling[0], magscaling[1]))
else:
@@ -424,11 +476,11 @@ def autoPyLoT(input_dict=None, parameter=None, inputfile=None, fnames=None, even
data.applyEVTData(evt, 'event')
data.applyEVTData(picks)
if savexml:
if savepath == 'None' or savepath == None:
if savepath == 'None' or savepath is None:
saveEvtPath = eventpath
else:
saveEvtPath = savepath
fnqml = '%s/PyLoT_%s' % (saveEvtPath, evID)
fnqml = '%s/PyLoT_%s_autopylot' % (saveEvtPath, evID)
data.exportEvent(fnqml, fnext='.xml', fcheck=['auto', 'magnitude', 'origin'])
if locflag == 1:
# HYPO71
@@ -483,7 +535,7 @@ if __name__ == "__main__":
help='''full path to the file containing the input
parameters for autoPyLoT''')
parser.add_argument('-p', '-P', '--iplot', type=int,
action='store',
action='store', default=0,
help='''optional, logical variable for plotting: 0=none, 1=partial, 2=all''')
parser.add_argument('-f', '-F', '--fnames', type=str,
action='store',
@@ -497,9 +549,12 @@ if __name__ == "__main__":
parser.add_argument('-c', '-C', '--ncores', type=int,
action='store', default=0,
help='''optional, number of CPU cores used for parallel processing (default: all available(=0))''')
parser.add_argument('-dmt', '-DMT', '--obspy_dmt_wfpath', type=str,
action='store', default=False,
help='''optional, wftype (raw, processed) used for obspyDMT database structure''')
cla = parser.parse_args()
picks = autoPyLoT(inputfile=str(cla.inputfile), fnames=str(cla.fnames),
eventid=str(cla.eventid), savepath=str(cla.spath),
ncores=cla.ncores, iplot=str(cla.iplot))
ncores=cla.ncores, iplot=int(cla.iplot), obspyDMT_wfpath=str(cla.obspy_dmt_wfpath))

autopylot.sh Normal file

@@ -0,0 +1,12 @@
#!/bin/bash
#$ -l low
#$ -cwd
#$ -pe smp 40
##$ -l mem=3G
#$ -l h_vmem=6G
#$ -l os=*stretch
conda activate pylot_311
python ./autoPyLoT.py -i /home/marcel/.pylot/pylot_adriaarray.in -c 20 -dmt processed

docs/correlation.md Normal file

@@ -0,0 +1,77 @@
# Pick-Correlation Correction
## Introduction
Currently, the pick-correlation correction algorithm is not accessible from the PyLoT GUI. The main file *pick_correlation_correction.py* is located in the directory *pylot/correlation*.
The program only works with an obspyDMT database structure.
The basic workflow of the algorithm is shown in the following diagram. The first step **(1)** is the normal (automatic) picking procedure in PyLoT. Everything from step **(2)** to **(5)** is part of the correlation correction algorithm.
*Note: The first step is not required if theoretical onsets are used instead of external picks, i.e. when the parameter use_taupy_onsets is set to True. However, an existing event QuakeML (.xml) file generated by PyLoT might still be required for each event if external picks are not used.*
![images/workflow_stacking.png](images/workflow_stacking.png)
A detailed description of the algorithm can be found in the corresponding publication:
*Paffrath, M., Friederich, W., and the AlpArray and AlpArray-SWATH D Working Groups: Teleseismic P waves at the AlpArray seismic network: wave fronts, absolute travel times and travel-time residuals, Solid Earth, 12, 1635–1660, https://doi.org/10.5194/se-12-1635-2021, 2021.*
## How to use
To use the program you have to call the main program providing two mandatory arguments: a path to the obspy dmt database folder *dmt_database_path* and the path to the PyLoT infile *pylot.in* for picking of the beam trace:
```shell
python pick_correlation_correction.py dmt_database_path pylot.in
```
By default, the parameter file *parameters.yaml* is used. You can specify a different parameter file with the command line option *--params* and pass further optional arguments such as *-pd* for plotting detailed information or *-n 4* to use 4 cores for parallel processing:
```shell
python pick_correlation_correction.py dmt_database_path pylot.in --params parameters_adriaarray.yaml -pd -n 4
```
## Cross-Correlation Parameters
The program uses the parameters in the file *parameters.yaml* by default. You can use the command line option *--params* to specify a different parameter file. An example parameter file is provided as *correlation/parameters.yaml*.
At the top level of the parameter file, the logging level *logging* and a list of pick phases *pick_phases* (e.g. ``['P', 'S']``) can be set.
For each pick phase, the individual parameters can then be set at the first sub-level, e.g.:
```yaml
logging: info
pick_phases: ['P', 'S']
P:
min_corr_stacking: 0.8
min_corr_export: 0.6
[...]
S:
min_corr_stacking: 0.7
[...]
```
The following parameters are available:
| Parameter Name | Description | Parameter Type |
|--------------------------------|----------------------------------------------------------------------------------------------------|----------------|
| min_corr_stacking | Minimum correlation coefficient for building beam trace | float |
| min_corr_export | Minimum correlation coefficient for pick export | float |
| min_stack | Minimum number of stations for building beam trace | int |
| t_before | Correlation window before reference pick | float |
| t_after | Correlation window after reference pick | float |
| cc_maxlag | Maximum shift for initial correlation | float |
| cc_maxlag2 | Maximum shift for second (final) correlation (also for calculating pick uncertainty) | float |
| initial_pick_outlier_threshold | Threshold for excluding large outliers of initial (AIC) picks | float |
| export_threshold | Automatically exclude all onsets which deviate more than this threshold from corrected taup onsets | float |
| min_picks_export | Minimum number of correlated picks for export | int |
| min_picks_autopylot | Minimum number of reference auto picks to continue with event | int |
| check_RMS | Do RMS check to search for restitution errors (very experimental) | bool |
| use_taupy_onsets | Use taupy onsets as reference picks instead of external picks | bool |
| station_list | Use the following stations as reference for stacking | list[str] |
| use_stacked_trace | Use existing stacked trace if found (spare re-computation) | bool |
| data_dir | obspyDMT data subdirectory (e.g. 'raw', 'processed') | str |
| pickfile_extension | Use quakeML files (PyLoT output) with the following extension | str |
| dt_stacking | Time difference for stacking window (in seconds) | list[float] |
| filter_options | Filter for first correlation (rough) | dict |
| filter_options_final | Filter for second correlation (fine) | dict |
| filter_type | Filter type (e.g. bandpass) | str |
| sampfreq | Sampling frequency (in Hz) | float |
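As a rough illustration of how parameters like *cc_maxlag* and *min_corr_stacking* interact, the following sketch shifts a pick by the lag that maximizes a normalized cross-correlation and rejects it when the best coefficient falls below the stacking threshold. This is illustrative pure Python (with the maximum lag given in samples for simplicity), not PyLoT's actual implementation.

```python
# Illustrative only: normalized cross-correlation with a maximum lag and
# an acceptance threshold, mimicking the roles of cc_maxlag and
# min_corr_stacking. PyLoT's real implementation differs.
from math import sqrt


def xcorr_best_lag(ref, trace, max_lag):
    """Return (best_lag, best_coeff) maximizing the normalized
    cross-correlation of two equal-length sample sequences."""
    n = len(ref)
    best_lag, best_coeff = 0, float('-inf')
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(ref[i], trace[i + lag]) for i in range(n)
                 if 0 <= i + lag < n]
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in pairs)
        den = sqrt(sum((x - mx) ** 2 for x in xs) *
                   sum((y - my) ** 2 for y in ys))
        coeff = num / den if den else 0.0
        if coeff > best_coeff:
            best_lag, best_coeff = lag, coeff
    return best_lag, best_coeff


def correct_pick(pick_time, ref, trace, dt, max_lag_samples, min_corr):
    """Shift a pick by the best correlation lag, or return None if the
    correlation coefficient is below the stacking threshold."""
    lag, coeff = xcorr_best_lag(ref, trace, max_lag_samples)
    if coeff < min_corr:
        return None  # e.g. station excluded from beam-trace stacking
    return pick_time + lag * dt
```

For a trace that is a two-sample-shifted copy of the reference, `xcorr_best_lag` recovers a lag of 2 with a coefficient near 1.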
## Example Dataset
An example dataset with waveform data, metadata and automatic picks in the obspy-dmt dataset format for testing can be found at https://zenodo.org/doi/10.5281/zenodo.13759803

docs/gui.md Normal file

@@ -0,0 +1,472 @@
# PyLoT Documentation
- [PyLoT Documentation](#pylot-documentation)
- [PyLoT GUI](#pylot-gui)
- [First start](#first-start)
- [Main Screen](#main-screen)
- [Waveform Plot](#waveform-plot)
- [Mouse view controls](#mouse-view-controls)
- [Buttons](#buttons)
- [Array Map](#array-map)
- [Eventlist](#eventlist)
- [Usage](#usage)
- [Projects and Events](#projects-and-events)
- [Event folder structure](#event-folder-structure)
- [Loading event information from CSV file](#loading-event-information-from-csv-file)
- [Adding events to project](#adding-events-to-project)
- [Saving projects](#saving-projects)
- [Adding metadata](#adding-metadata)
- [Picking](#picking)
- [Manual Picking](#manual-picking)
- [Picking window](#picking-window)
- [Picking Window Settings](#picking-window-settings)
- [Filtering](#filtering)
- [Export and Import of manual picks](#export-and-import-of-manual-picks)
- [Export](#export)
- [Import](#import)
- [Automatic Picking](#automatic-picking)
- [Tuning](#tuning)
- [Production run of the autopicker](#production-run-of-the-autopicker)
- [Evaluation of automatic picks](#evaluation-of-automatic-picks)
- [1. Jackknife check](#1-jackknife-check)
- [2. Wadati check](#2-wadati-check)
- [Comparison between automatic and manual picks](#comparison-between-automatic-and-manual-picks)
- [Export and Import of automatic picks](#export-and-import-of-automatic-picks)
- [Location determination](#location-determination)
- [FAQ](#faq)
# PyLoT GUI
This section describes how to use PyLoT graphically to view waveforms and create manual or automatic picks.
## First start
After opening PyLoT for the first time, the setup routine asks for the following information:
Questions:
1. Full Name
2. Authority: Enter authority/institution name
3. Format: Enter output format (*.xml, *.cnv, *.obs)
[//]: <> (TODO: explain what these things mean, where they are used)
## Main Screen
After entering the [information](#first-start), PyLoT's main window is shown. It defaults to a view of
the [Waveform Plot](#waveform-plot), which starts empty.
<img src="images/gui/pylot-main-screen.png" alt="PyLoT main window" title="PyLoT main window">
Add trace data by [loading a project](#projects-and-events) or by [adding event data](#adding-events-to-project).
### Waveform Plot
The waveform plot shows a trace list of all stations of an event.
Click on any trace to open the station's [picking window](#picking-window), where you can review automatic and manual
picks.
<img src=images/gui/pylot-waveform-plot.png alt="A Waveform Plot showing traces of one event">
Above the traces the currently displayed event can be selected. In the bottom bar information about the trace under the
mouse cursor is shown. This information includes the station name (station), the absolute UTC time (T) of the point
under the mouse cursor and the relative time since the first trace start in seconds (t) as well as a trace count.
#### Mouse view controls
Hold left mouse button and drag to pan view.
Hold the right mouse button and move the mouse to scale the view:

| Direction | Result |
|---|---|
| Move the mouse up | Increase amplitude scale |
| Move the mouse down | Decrease amplitude scale |
| Move the mouse right | Increase time scale |
| Move the mouse left | Decrease time scale |

Press the right mouse button and click "View All" from the context menu to reset the view.
#### Buttons
[//]: <> (Hack: We need these invisible spaces to add space to the first column, otherwise )
| Icon &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Description |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <img src="../icons/newfile.png" alt="Create new project" width="64" height="64"> | Create a new project, for more information about projects see [Projects and Events](#projects-and-events). |
| <img src="../icons/openproject.png" alt="Open project" width="64" height="64"> | Load a project file from disk. |
| <img src="../icons/saveproject.png" alt="Save Project" width="64" height="64"> | Save all current events into an associated project file on disk. If there is no project file currently associated, you will be asked to create a new one. |
| <img src="../icons/saveprojectas.png" alt="Save Project as" width="64" height="64"> | Save all current events into a new project file on disk. See [Saving projects](#saving-projects). |
| <img src="../icons/add.png" alt="Add event data" width="64" height="64"> | Add event data by selecting directories containing waveforms. For more information see [Event folder structure](#event-folder-structure). |
| <img src="../icons/openpick.png" alt="Load event information" width="64" height="64"> | Load picks/origins from disk into the currently displayed event. If a pick already exists for a station, the one from file will overwrite the existing one. |
| <img src="../icons/openpicks.png" alt="Load information for all events" width="64" height="64"> | Load picks/origins for all events of the current project. PyLoT searches for files within the directory of the event and tries to load them for that event. For this function to work, the files containing picks/origins have to be named as described in [Event folder structure](#event-folder-structure). If a pick already exists for a station, the one from file will overwrite the existing one. |
| <img src="../icons/savepicks.png" alt="Save picks" width="64" height="64"> | Save event information such as picks and origin to file. You will be asked to select a directory in which this information should be saved. |
| <img src="../icons/openloc.png" alt="Load location information" width="64" height="64"> | Load location information from disk. |
| <img src="../icons/Matlab_PILOT_icon.png" alt="Load legacy information" width="64" height="64"> | Load event information from a previous, MatLab based PILOT version. |
| <img src="../icons/key_Z.png" alt="Display Z" width="64" height="64"> | Display Z component of streams in waveform plot. |
| <img src="../icons/key_N.png" alt="Display N" width="64" height="64"> | Display N component of streams in waveform plot. |
| <img src="../icons/key_E.png" alt="Display E" width="64" height="64"> | Display E component of streams in waveform plot. |
| <img src="../icons/tune.png" alt="Tune Autopicker" width="64" height="64"> | Open the [Tune Autopicker window](#tuning). |
| <img src="../icons/autopylot_button.png" alt="" width="64" height="64"> | Opens a window that allows starting the autopicker for all events ([Production run of the AutoPicker](#production-run-of-the-autopicker)). |
| <img src="../icons/compare_button.png" alt="Comparison" width="64" height="64"> | Compare automatic and manual picks, only available if automatic and manual picks for an event exist. See [Comparison between automatic and manual picks](#comparison-between-automatic-and-manual-picks). |
| <img src="../icons/locate_button.png" alt="Locate event" width="64" height="64"> | Run a location routine (NonLinLoc) as configured in the settings on the picks. See [Location determination](#location-determination). |
### Array Map
The array map will display a color diagram to allow a visual check of the consistency of picks across multiple stations.
This works by calculating the time difference of every onset to the earliest onset. Then isolines are drawn between
stations with the same time difference and the areas between isolines are colored.
The result should resemble a color gradient as the wavefront rolls over the network area. Stations where picks are
earlier/later than their neighbours can be reviewed by clicking on them, which opens
the [picking window](#picking-window).
Above the Array Map the picks that are used to create the map can be customized. The phase of picks that should be used
can be selected, which allows checking the consistency of the P- and S-phases separately. Additionally, the pick type can
be set to manual, automatic or hybrid: display only manual picks, only automatic picks, or automatic picks only for
stations without manual ones.
![Array Map](images/gui/arraymap-example.png "Array Map")
*Array Map for an event at the Northern Mid Atlantic Ridge, between North Africa and Mexico (Lat. 22.58, Lon. -45.11).
The wavefront moved from west to east over the network area (Alps and Balkan region), with the earliest onsets in blue
in the west.*
To be able to display an array map, PyLoT needs to load an inventory file containing the metadata of the seismic
stations. For more information see [Metadata](#adding-metadata). Additionally, automatic or manual picks need to exist
for the current event.
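The delay calculation the array map is based on, i.e. each pick's time difference to the earliest onset in the network, can be sketched in a few lines (illustrative only, not PyLoT code):

```python
# Illustrative sketch: compute the relative onset delays that an array
# map visualizes. Input is a station -> onset time (s) mapping; stations
# without a pick (None) are skipped.
def relative_onsets(picks):
    """Return station -> delay (seconds) relative to the earliest pick."""
    valid = {sta: t for sta, t in picks.items() if t is not None}
    if not valid:
        return {}
    t0 = min(valid.values())  # earliest onset defines the zero line
    return {sta: t - t0 for sta, t in valid.items()}
```

Stations whose delay deviates strongly from the smooth gradient of their neighbours are candidates for review in the picking window.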
### Eventlist
The eventlist displays event parameters. The displayed parameters are saved in the .xml file in the event folder. Events
can be deleted from the project by pressing the red X in the leftmost column of the corresponding event.
<img src="images/gui/eventlist.png" alt="Eventlist">
| Column | Description |
|------------|--------------------------------------------------------------------------------------------------------------------|
| Event | Full path to the events folder. |
| Time | Time of event. |
| Lat | Latitude in degrees of event location. |
| Lon | Longitude in degrees of event location. |
| Depth | Depth in km of event. |
| Mag | Magnitude of event. |
| [N] MP | Number of manual picks. |
| [N] AP | Number of automatic picks. |
| Tuning Set | Select whether this event is a Tuning event. See [Automatic Picking](#automatic-picking). |
| Test Set | Select whether this event is a Test event. See [Automatic Picking](#automatic-picking). |
| Notes | Free form text field for notes regarding this event. Text will be saved in the notes.txt file in the event folder. |
## Usage
### Projects and Events
PyLoT uses projects to categorize different seismic data. A project consists of one or multiple events. Events contain
seismic traces from one or multiple stations. An event also contains further information, e.g. origin time, source
parameters and automatic as well as manual picks. Projects are used to group events which should be analysed together. A
project could contain all events from a specific region within a timeframe of interest or all recorded events of a
seismological experiment.
### Event folder structure
PyLoT expects the following folder structure for seismic data:
* Every event should be in its own folder, named ``e[id].[doy].[yy]``, where ``[id]`` is a four-digit numerical id
  increasing from 0001, ``[doy]`` the three-digit day of year and ``[yy]`` the last two digits of the year of the
  event. This structure has to be created manually by the user of PyLoT.
* These folders should contain the seismic data for their event as ``.mseed`` or other supported filetype
* All automatic and manual picks should be in an ``.xml`` file in their event folder. PyLoT saves picks in this file.
This file does not have to be added manually unless there are picks to be imported. The format used to save picks is
QUAKEML.
Picks are saved in a file with the same filename as the event folder with ``PyLoT_`` prepended.
* The file ``notes.txt`` is used for saving analysts comments. Everything saved here will be displayed in the 'Notes'
column of the eventlist.
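The folder naming scheme described above can be checked programmatically; the following is a small sketch (not part of PyLoT) using a regular expression:

```python
# Illustrative sketch: validate and parse the event folder naming scheme
# e[id].[doy].[yy], e.g. 'e0001.024.16'.
import re

EVENT_DIR = re.compile(r'^e(?P<id>\d{4})\.(?P<doy>\d{3})\.(?P<yy>\d{2})$')


def parse_event_dir(name):
    """Return (id, day_of_year, year) for a matching folder name,
    or None if the name does not follow the scheme."""
    m = EVENT_DIR.match(name)
    if not m:
        return None
    return int(m.group('id')), int(m.group('doy')), int(m.group('yy'))
```

For example, `parse_event_dir('e0001.024.16')` yields `(1, 24, 16)`, while a name like `event_1` is rejected.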
### Loading event information from CSV file
Event information can be saved in a ``.csv`` file located in the rootpath. The file consists of one header line,
followed by one or multiple data lines. Values are separated by commas, and a dot is used as the decimal separator.
This information is then shown in the table in the [Eventlist tab](#eventlist).
One example header line and data line are shown below.

```
event,Date,Time,Magnitude,Lat,Long,Depth,Region,Basis Lat,Basis Long,Distance [km],Distance [rad],Distance [deg]
e0001.024.16,24/01/16,10:30:30,7.1,59.66,-153.45,128,Southern Alaska,46.62,10.26,8104.65,1.27,72.89
```
The meaning of the header entries is:
| Header | description |
|----------------------|------------------------------------------------------------------------------------------------|
| event | Event id, has to be the same as the folder name in which waveform data for this event is kept. |
| Date | Origin date of the event, format DD/MM/YY or DD/MM/YYYY. |
| Time | Origin time of the event. Format HH:MM:SS. |
| Lat, Long | Origin latitude and longitude in decimal degrees. |
| Region | Flinn-Engdahl region name. |
| Basis Lat, Basis Long | Latitude and longitude of the basis of the station network in decimal degrees. |
| Distance [km] | Distance from origin coordinates to basis coordinates in km. |
| Distance [rad] | Distance from origin coordinates to basis coordinates in rad. |
| Distance [deg] | Distance from origin coordinates to basis coordinates in degrees. |
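A minimal sketch (not part of PyLoT) of parsing such a file with the standard library, shown here with only a subset of the columns for brevity:

```python
# Illustrative sketch: read an event CSV with csv.DictReader and convert
# the numeric columns. Only a subset of the documented columns is shown.
import csv
import io

CSV_TEXT = """event,Date,Time,Magnitude,Lat,Long,Depth,Region
e0001.024.16,24/01/16,10:30:30,7.1,59.66,-153.45,128,Southern Alaska
"""


def read_event_csv(fileobj):
    """Parse data lines into dicts keyed by the header entries."""
    events = []
    for row in csv.DictReader(fileobj):
        for key in ('Magnitude', 'Lat', 'Long', 'Depth'):
            row[key] = float(row[key])  # decimal separator is a dot
        events.append(row)
    return events


events = read_event_csv(io.StringIO(CSV_TEXT))
```

In a real project the file object would come from `open(...)` on the CSV file in the rootpath instead of the inline string.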
### Adding events to project
PyLoT GUI starts with an empty project. To add events, use the add event data button. Select one or multiple folders
containing events.
### Saving projects
Save the current project from the menu with File->Save project or File->Save project as. PyLoT uses ``.plp`` files to
save project information. This file format is not interchangeable between different versions of Python interpreters.
Saved projects contain the automatic and manual picks. Seismic trace data is not included in the ``.plp`` file, but
read from the location recorded when the project was saved.
### Adding metadata
[//]: <> (TODO: Add picture of metadata "manager" when it is done)
PyLoT can handle ``.dless``, ``.xml``, ``.resp`` and ``.dseed`` file formats for Metadata. Metadata files stored on disk
can be added to a project by clicking *Edit*->*Manage Inventories*. This opens up a window where the folders which
contain metadata files can be selected. PyLoT will then search these files for the station names when it needs the
information.
# Picking
PyLoT's automatic and manual pick determination works as follows:
* Using certain parameters, a first initial/coarse pick is determined. The first manual pick is determined by visual
review of the whole waveform and selection of the most likely onset by the analyst. The first automatic pick is
  determined by calculation of a characteristic function (CF) for the seismic trace. When a wave arrives, the CF's
  properties change, which is detected as the signal's onset.
* Afterwards, a refined set of parameters is applied to a small part of the waveform around the initial onset. For
manual picks this means a closer view of the trace, for automatic picks this is done by a recalculated CF with
different parameters.
* This second picking phase results in the precise pick, which is treated as the onset time.
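As a toy example of a characteristic function, the sketch below computes a simple STA/LTA ratio and triggers on a threshold. PyLoT's actual CFs are more elaborate, so this is purely illustrative:

```python
# Illustrative sketch: a short-term / long-term average (STA/LTA) ratio
# as a simple characteristic function, plus a threshold trigger.
def sta_lta(samples, nsta, nlta):
    """STA/LTA of squared amplitudes. Returns a list the same length as
    samples (zeros where the long-term window is not yet filled)."""
    energy = [x * x for x in samples]
    cf = [0.0] * len(samples)
    for i in range(nlta, len(samples)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        cf[i] = sta / lta if lta > 0 else 0.0
    return cf


def trigger_onset(cf, threshold):
    """Index of the first CF sample exceeding the threshold, or None."""
    for i, value in enumerate(cf):
        if value > threshold:
            return i
    return None
```

When a strong signal follows low-amplitude noise, the ratio jumps at the onset; the refinement step would then recompute a CF with tighter parameters around this coarse index.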
## Manual Picking
To create manual picks, you will need to open or create a project that contains seismic trace data (
see [Adding events to projects](#adding-events-to-project)). Click on a trace to open
the [Picking window](#picking-window).
### Picking window
Open the picking window of a station by left-clicking on any trace in the waveform plot. Here you can create manual picks
for the selected station.
<img src="images/gui/picking/pickwindow.png" alt="Picking window">
*Picking window of a station.*
#### Picking Window Settings
| Icon | Shortcut | Menu Alternative | Description |
|----------------------------------------------------------------------------------|----------------|-----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <img src="../icons/filter_p.png" alt="Filter P" width="64" height="64"> | p | Filter->Apply P Filter | Filter all channels according to the options specified in Filter parameter, P Filter section. |
| <img src="../icons/filter_s.png" alt="Filter S" width="64" height="64"> | s | Filter->Apply S Filter | Filter all channels according to the options specified in Filter parameter, S Filter section. |
| <img src="../icons/key_A.png" alt="Filter Automatically" width="64" height="64"> | Ctrl + a | Filter->Automatic Filtering | If enabled, automatically select the correct filter option (P, S) depending on the selected phase to be picked. |
| ![desc](images/gui/picking/phase_selection.png "Phase selection") | 1 (P) or 5 (S) | Picks->P or S | Select phase to pick. If Automatic Filtering is enabled, this will apply the appropriate filter depending on the phase. |
| ![Zoom into](../icons/zoom_in.png "Zoom into waveform") | - | - | Zoom into waveform. |
| ![Reset zoom](../icons/zoom_0.png "Reset zoom") | - | - | Reset zoom to default view. |
| ![Delete picks](../icons/delete.png "Delete picks") | - | - | Delete all manual picks on this station. |
| ![Rename a phase](../icons/sync.png "Rename a phase") | - | - | Click this button and then the picked phase to rename it. |
| ![Continue](images/gui/picking/continue.png "Continue with next station") | - | - | If checked, after accepting the manual picks for this station with 'OK', the picking window for the next station will be opened. This option is useful for fast manual picking of a complete event. |
| Estimated onsets | - | - | Show the theoretical onsets for this station. Needs metadata and origin information. |
| Compare to channel | - | - | Select a data channel to compare against. The selected channel will be displayed in the picking window behind every channel allowing the analyst to visually compare signal correlation between different channels. |
| Scaling | - | - | Individual means every channel is scaled to its own maximum. If a channel is selected here, all channels will be scaled relatively to this channel. |
| Menu Command | Shortcut | Description |
|---------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| P Channels and S Channels | - | Select which channels should be treated as P or S channels during picking. When picking a phase, only the corresponding channels will be shown during the precise pick. Normally, the Z channel should be selected for the P phase and the N and E channel for the S phase. |
### Filtering
Access the Filter options by pressing Ctrl+f on the Waveform plot or by the menu under *Edit*->*Filter Parameter*.
<img src="images/gui/pylot-filter-options.png" alt="Filter options dialog">
Here you are able to select filter type, order and frequencies for the P and S pick separately. These settings are used
in the GUI for displaying the filtered waveform data and during manual picking. The values used by PyLoT for automatic
picking are displayed next to the manual values. They can be changed in the [Tune Autopicker dialog](#tuning).
A green automatic value means the automatic and manual filter parameters are configured identically, red means they
differ. By toggling the "Overwrite filteroptions" checkmark you can set whether the manual
precise/second pick uses the filter settings for the automatic picker (unchecked) or whether it uses the filter options
in this dialog (checked). To guarantee consistent picking results between automatic and manual picking it is recommended
to use the same filter settings for the determination of automatic and manual picks.
### Export and Import of manual picks
#### Export
After the creation of manual picks, they can be saved in the project file (
see [Saving projects](#saving-projects)). Alternatively, the picks can be exported by pressing
the <img src="../icons/savepicks.png" alt="Save event information button" title="Save picks button" height=24 width=24>
button above the waveform plot or via the menu File->Save event information (shortcut Ctrl+p). Select the event directory
in which to save the file. The filename will be ``PyLoT_[event_folder_name].[filetype selected during first startup]``.
You can rename and copy this file, but PyLoT will then no longer be able to automatically recognize the correct picks
for an event and the file will have to be manually selected when loading.
#### Import
To import previously saved picks press
the <img src="../icons/openpick.png" alt="Load event information button" width="24" height="24"> button and select the
file to load. You will be asked to save the current state of your project if you have not done so before. You can
continue without saving by pressing "Discard". This does not delete any information from your project, it just means
that no project file is saved before the changes of importing picks are applied. PyLoT will automatically load files
named after the scheme it uses when saving picks, described in the paragraph above. If it can't find any matching files,
a file dialogue will open and you can select the file you wish to load.
If you see a warning "Mismatch in event identifiers" and are asked whether to continue loading the picks, this means
that PyLoT doesn't recognize the picks in the file as belonging to this specific event. They could have either been
saved under a different installation of PyLoT but with the same waveform data, which means they are still compatible and
you can continue loading them. Or they could be picks from a different event, in which case loading them is not
recommended.
## Automatic Picking
The general workflow for automatic picking is as follows:
- After setting up the project by loading waveforms and optionally metadata, the right parameters for the autopicker
have to be determined
- This [tuning](#tuning) is done for single stations with immediate graphical feedback of all picking results
- Afterwards the autopicker can be run for all or a subset of events from the project
For automatic picking PyLoT discerns between tune and test events, which the user has to set as such. Tune events are
used to calibrate the autopicking algorithm; test events are then used to test the calibration. The purpose of this is
to check whether the parameters found during tuning can reliably pick the "unknown" test events.
If this behaviour is not desired and all events should be handled the same, don't mark any events. Since this is just a
way to group events to compare the picking results, nothing else will change.
### Tuning
Tuning describes the process of adjusting the autopicker settings to the characteristics of your data set. To do this in
PyLoT, use the <img src=../icons/tune.png height=24 alt="Tune autopicks button" title="Tune autopicks button"> button to
open the Tune Autopicker.
<img src=images/gui/tuning/tune_autopicker.png>
View of a station in the Tune Autopicker window.
1. Select the event to be displayed and processed.
2. Select the station from the event.
3. To pick the currently displayed trace, click
the <img src=images/gui/tuning/autopick_trace_button.png alt="Pick trace button" title="Autopick trace button" height=16>
button.
4. These tabs are used to select the current view. __Traces Plot__ contains a plot of the station's traces, where manual
picks can be created/edited. __Overview__ contains graphical results of the automatic picking process. The __P and S
tabs__ contain the automatic picking results of the P and S phase, while __log__ contains a useful text output of
automatic picking.
5. These buttons are used to load/save/reset settings for automatic picking. The parameters can be saved in PyLoT input
files, which have the file ending *.in*. They are human readable text files, which can also be edited by hand. Saving
the parameters allows you to load them again later, even on different machines.
6. These menus control the behaviour of the creation of manual picks from the Tune Autopicker window. __Picks__ allows
selecting the phase for which a manual pick should be created, __Filter__ allows filtering the waveforms and editing
the filter parameters. __P-Channels__ and __S-Channels__ allow selecting the channels that should be displayed when
creating a manual P or S pick.
7. This menu is the same as in the [Picking Window](#picking-window-settings), with the exception of the __Manual
Onsets__ options. The __Manual Onsets__ buttons accept or reject the manual picks created in the Tune Autopicker
window, pressing accept adds them to the manual picks for the event, while reject removes them.
8. The traces plot in the centre allows creating manual picks and viewing the waveforms.
9. The parameters which influence the autopicking result are in the Main settings and Advanced settings tabs on the left
side. For a description of all the parameters see the [tuning documentation](tuning.md).
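Since the *.in* files are plain text with one "value  #name#  %comment" entry per line, they can be inspected with a few lines of Python. The reader below is a hypothetical minimal sketch for illustration; in PyLoT itself the PylotParameter class handles these files:

```python
def parse_pylot_input(lines):
    """Parse lines of the form '<value> #<name># %<comment>' into a dict.
    Values are kept as strings; numeric conversion is left to the caller."""
    params = {}
    for line in lines:
        line = line.split("%", 1)[0]          # strip trailing comment
        if "#" not in line:
            continue
        value, name = line.split("#", 2)[:2]  # text before and between '#'
        value, name = value.strip(), name.strip()
        if name and value:                    # skip '#section header#' lines
            params[name] = value
    return params

sample = [
    "%This is a parameter input file for autoPyLoT.",
    "#common settings picker#",
    "2 20   #bpz1#        %lower/upper corner freq. of first band pass filter Z-comp. [Hz]",
    "0.2    #aictsmooth#  %take average of samples for smoothing of AIC-function [s]",
]
params = parse_pylot_input(sample)
```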
### Production run of the autopicker
After the settings used during tuning give the desired results, the autopicker can be used on the complete dataset. To
invoke the autopicker on the whole set of events, click
the <img src=../icons/autopylot_button.png alt="Autopick" title="Autopick" height=32> button.
### Evaluation of automatic picks
PyLoT has two internal consistency checks for automatic picks that were determined for an event:
1. Jackknife check
2. Wadati check
#### 1. Jackknife check
The jackknife test in PyLoT checks the consistency of automatically determined P-picks by examining the statistical
variance of the picks. The variance of all P-picks is calculated and compared to the variance of subsets in which one
pick is removed.
The idea is that picks which are close together in time should not influence the estimation of the variance much, while
picks whose positions deviate from the norm influence the variance to a greater extent. If the estimated variance of a
subset with a pick removed differs too much from the estimated variance of all picks, the pick that was removed from the
subset will be marked as invalid.
The factor by which picks are allowed to skew the estimation of the variance can be configured; it is called
*jackfactor*, see [here](tuning.md#Pick-quality-control).
Additionally, the deviation of picks from the median is checked. For that, the median of all P-picks that passed the
Jackknife test is calculated. Picks whose onset times deviate from the median onset time by more than the *mdttolerance*
are marked as invalid.
<img src=images/gui/jackknife_plot.png title="Jackknife/Median test diagram">
*The result of both tests (Jackknife and Median) is shown in a diagram afterwards. The onset time is plotted against a
running number of stations. Picks that failed either the Jackknife or the median test are colored red. The median is
plotted as a green line.*
The Jackknife and median checks are suitable for detecting picks that lie outside of the expected time window, for
example when a wrong phase was picked. They won't recognize picks that are in close proximity to the correct onset but
just slightly too late/early.
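The two checks can be sketched in a few lines of Python. This is a simplified pure-Python illustration of the logic described above, not PyLoT's actual implementation; the *jackfactor* and *mdttolerance* values are arbitrary examples:

```python
import statistics

def jackknife_check(picks, jackfactor=2.0, mdttolerance=6.0):
    """Return the P onset times that pass both the jackknife test and
    the subsequent median test (simplified sketch)."""
    n = len(picks)
    var_all = statistics.pvariance(picks)
    survivors = []
    for i, pick in enumerate(picks):
        subset = picks[:i] + picks[i + 1:]
        # jackknife pseudo value of the variance with this pick removed
        pseudo = n * var_all - (n - 1) * statistics.pvariance(subset)
        if pseudo <= jackfactor * var_all:
            survivors.append(pick)
    # median test on the picks that passed the jackknife test
    med = statistics.median(survivors)
    return [p for p in survivors if abs(p - med) <= mdttolerance]

onsets = [10.0, 10.1, 9.9, 10.05, 25.0]   # one obvious outlier
good = jackknife_check(onsets)
```

Removing the outlier shrinks the subset variance drastically, so its jackknife pseudo value exceeds the threshold and it is discarded, while the clustered picks survive.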
#### 2. Wadati check
The Wadati check tests the consistency of S picks. For this, the SP-time, i.e. the time difference between S and P
onset, is plotted against the P onset time. A line that minimizes the error is fitted to the points. Then the deviation
of single picks from this line is checked. If the deviation in seconds is above the *wdttolerance*
parameter ([see here](tuning.md#Pick-quality-control)), the pick is marked as invalid.
<img src=images/gui/wadati_plot.png title="Output diagram of Wadati check">
*The Wadati plot in PyLoT shows the SP onset time difference over the P onset time. A first line is fitted (black). All
picks which deviate too much from this line are marked invalid (red). Then a second line is fitted which excludes the
invalid picks. From this line's slope, the ratio of P and S wave velocity is determined.*
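A minimal sketch of this procedure (illustrative only; PyLoT's implementation differs in detail, and the tolerance and onset values are made up):

```python
def fit_line(x, y):
    """Ordinary least squares fit y = slope * x + intercept."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    return slope, ybar - slope * xbar

def wadati_check(p_onsets, s_onsets, wdttolerance=3.0):
    """Fit SP-time over P onset time, flag picks deviating by more than
    wdttolerance seconds, refit without them and derive vp/vs."""
    sp = [s - p for p, s in zip(p_onsets, s_onsets)]
    slope, intercept = fit_line(p_onsets, sp)
    valid = [abs(spi - (slope * pi + intercept)) <= wdttolerance
             for pi, spi in zip(p_onsets, sp)]
    p_ok = [p for p, v in zip(p_onsets, valid) if v]
    sp_ok = [s for s, v in zip(sp, valid) if v]
    slope, _ = fit_line(p_ok, sp_ok)
    return valid, 1.0 + slope        # vp/vs ratio from the refitted slope

p_times = [5.0, 10.0, 15.0, 20.0, 12.0]
s_times = [8.65, 17.3, 25.95, 34.6, 32.0]   # last S pick is a mispick
valid, vpvs = wadati_check(p_times, s_times)
```

The four consistent picks lie on a line with slope 0.73, so the refit yields a vp/vs ratio of about 1.73 after the mispicked station is excluded.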
### Comparison between automatic and manual picks
Every pick in PyLoT consists of an earliest possible, latest possible and most likely onset time. The earliest and
latest possible onset time characterize the uncertainty of a pick. This approach is described in Diehl, Kissling and
Bormann (2012) - Tutorial for consistent phase picking at local to regional distances. These times are represented as a
Probability Density Function (PDF) for every pick. The PDF is implemented as two exponential distributions around the
most likely onset as the expected value.
To compare two single picks, their PDFs are cross correlated to create a new PDF. This corresponds to the subtraction of
the automatic pick from the manual pick.
<img src=images/gui/comparison/comparison_pdf.png title="Comparison between automatic and manual pick">
*Comparison between an automatic and a manual pick for a station in PyLoT by comparing their PDFs.*
*The upper plot shows the difference between the two single picks that are shown in the lower plot.*
*The difference is implemented as a cross correlation between the two PDFs and results in a new PDF, the comparison
PDF.*
*The expected value of the comparison PDF corresponds to the time difference between the most likely onsets of the
automatic and the manual pick.*
*The standard deviation corresponds to the combined uncertainty.*
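The construction of the comparison PDF can be illustrated numerically. This sketch assumes symmetric exponential branches and an arbitrary decay constant; PyLoT's two exponential branches are generally not symmetric:

```python
import math

def discretized_pdf(grid, t0, decay):
    """Two-sided exponential PDF around the most likely onset t0,
    discretized on a regular time grid and normalized to unit area."""
    dt = grid[1] - grid[0]
    vals = [math.exp(-abs(t - t0) / decay) for t in grid]
    area = sum(vals) * dt
    return [v / area for v in vals]

def pick_difference(t_auto, t_manual, decay=0.2, dt=0.02, half_width=2.0):
    """Cross-correlate the two pick PDFs; the expected value of the
    resulting comparison PDF is the (manual - automatic) onset difference."""
    n = int(2 * half_width / dt)
    grid = [-half_width + i * dt for i in range(n + 1)]
    pdf_a = discretized_pdf(grid, t_auto, decay)
    pdf_m = discretized_pdf(grid, t_manual, decay)
    taus, weights = [], []
    for k in range(-n, n + 1):           # lag in samples
        acc = 0.0
        for i in range(n + 1):
            j = i - k                    # pdf_a evaluated at t - tau
            if 0 <= j <= n:
                acc += pdf_m[i] * pdf_a[j]
        taus.append(k * dt)
        weights.append(acc)
    total = sum(weights)
    return sum(t * w for t, w in zip(taus, weights)) / total

delta = pick_difference(t_auto=0.30, t_manual=0.50)
```

The expected value of the comparison PDF recovers the 0.2 s offset between the two most likely onsets.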
To compare the automatic and manual picks between multiple stations of an event, the properties of all the comparison
PDFs are shown in a histogram.
<img src=images/gui/comparison/compare_widget.png title="Comparison between picks of an event">
*Comparison between the automatic and manual picks for an event in PyLoT.*
*The top left plot shows the standard deviation of the comparison PDFs for P picks.*
*The bottom left plot shows the expected values of the comparison PDFs for P picks.*
*The top right plot shows the standard deviation of the comparison PDFs for S picks.*
*The bottom right plot shows the expected values of the comparison PDFs for S picks.*
*The standard deviation plots show that most P picks have an uncertainty between 1 and 2 seconds, while S pick
uncertainties have a much larger spread between 1 and 15 seconds.*
*This means P picks have higher quality classes on average than S picks.*
*The expected values are largely negative, meaning that the algorithm tends to pick earlier than the analyst with the
applied settings (Manual - Automatic).*
*The number of samples mentioned in the plots' legends is the number of stations that have both an automatic and a
manual P pick.*
### Export and Import of automatic picks
Picks can be saved in *.xml* format.
# Location determination
To be added.
# FAQ
Q: During manual picking the error "No channel to plot for phase ..." is displayed, and I am unable to create a pick.
A: Select a channel that should be used for the corresponding phase in the Pickwindow. For further information
read [Picking Window settings](#picking-window-settings).
Q: I see a warning "Mismatch in event identifiers" when loading picks from a file.
A: This means that PyLoT doesn't recognize the picks in the file as belonging to this specific event. They could have
been saved under a different installation of PyLoT but with the same waveform data, in which case they are still
compatible and you can continue loading them. Or they could be the picks of a different event, in which case loading
them is not recommended.

BIN
docs/gui.pdf Normal file


151
docs/tuning.md Normal file
View File

@ -0,0 +1,151 @@
# AutoPyLoT Tuning
A description of the parameters used for determining automatic picks.
## Filter parameters and cut times
Parameters applied to the traces before picking algorithm starts.
| Name | Description |
|------|-------------|
| *P Start*, *P Stop* | Define time interval relative to trace start time for CF calculation on vertical trace. Value is relative to theoretical onset time if 'Use TauPy' option is enabled in main settings of 'Tune Autopicker' dialogue. |
| *S Start*, *S Stop* | Define time interval relative to trace start time for CF calculation on horizontal traces. Value is relative to theoretical onset time if 'Use TauPy' option is enabled in main settings of 'Tune Autopicker' dialogue. |
| *Bandpass Z1* | Filter settings for Butterworth bandpass applied to vertical trace for calculation of initial P pick. |
| *Bandpass Z2* | Filter settings for Butterworth bandpass applied to vertical trace for calculation of precise P pick. |
| *Bandpass H1* | Filter settings for Butterworth bandpass applied to horizontal traces for calculation of initial S pick. |
| *Bandpass H2* | Filter settings for Butterworth bandpass applied to horizontal traces for calculation of precise S pick. |
## Initial P pick
Parameters used for determination of the initial P pick.
| Name | Description |
|------|-------------|
| *tLTA* | Size of gliding LTA window in seconds used for calculation of HOS-CF. |
| *pickwinP* | Size of time window in seconds in which the minimum of the AIC-CF in front of the maximum of the HOS-CF is determined. |
| *AICtsmooth* | Average of samples in this time window will be used for smoothing of the AIC-CF. |
| *checkwinP* | Time in front of the global maximum of the HOS-CF in which to search for a second local extremum. |
| *minfactorP* | Used with *checkwinP*. If a second local maximum is found, it has to be at least as big as the first maximum * *minfactorP*. |
| *tsignal* | Time window in seconds after the initial P pick used for determining signal amplitude. |
| *tnoise* | Time window in seconds in front of the initial P pick used for determining noise amplitude. |
| *tsafetey* | Time in seconds between *tsignal* and *tnoise*. |
| *tslope* | Time window in seconds after the initial P pick in which the slope of the onset is calculated. |
## Initial S pick
Parameters used for determination of the initial S pick.
| Name | Description |
|------|-------------|
| *tdet1h* | Length of time window in seconds in which AR params of the waveform are determined. |
| *tpred1h* | Length of time window in seconds in which the waveform is predicted using the AR model. |
| *AICtsmoothS* | Average of samples in this time window is used for smoothing the AIC-CF. |
| *pickwinS* | Time window in which the minimum in the AIC-CF in front of the maximum in the ARH-CF is determined. |
| *checkwinS* | Time in front of the global maximum of the ARH-CF in which to search for a second local extremum. |
| *minfactorS* | Used with *checkwinS*. If a second local maximum is found, it has to be at least as big as the first maximum * *minfactorS*. |
| *tsignal* | Time window in seconds after the initial P pick used for determining signal amplitude. |
| *tnoise* | Time window in seconds in front of the initial P pick used for determining noise amplitude. |
| *tsafetey* | Time in seconds between *tsignal* and *tnoise*. |
| *tslope* | Time window in seconds after the initial P pick in which the slope of the onset is calculated. |
## Precise P pick
Parameters used for determination of precise P pick.
| Name | Description |
|------|-------------|
| *Precalcwin* | Time window in seconds for recalculation of the HOS-CF. The new CF will be two times the size of *Precalcwin*, since it will be calculated from the initial pick to +/- *Precalcwin*. |
| *tsmoothP* | Average of samples in this time window will be used for smoothing the second HOS-CF. |
| *ausP* | Controls artificial uplift of samples during precise picking. A common local minimum of the smoothed and unsmoothed HOS-CF is found when the previous sample is larger or equal to the current sample times (1+*ausP*). |
## Precise S pick
Parameters used for determination of precise S pick.
| Name | Description |
|------|-------------|
| *tdet2h* | Time window for determination of AR coefficients. |
| *tpred2h* | Time window in which the waveform is predicted using the determined AR parameters. |
| *Srecalcwin* | Time window for recalculation of ARH-CF. New CF will be calculated from initial pick +/- *Srecalcwin*. |
| *tsmoothS* | Average of samples in this time window will be used for smoothing the second ARH-CF. |
| *ausS* | Controls artificial uplift of samples during precise picking. A common local minimum of the smoothed and unsmoothed ARH-CF is found when the previous sample is larger or equal to the current sample times (1+*ausS*). |
| *pickwinS* | Time window around initial pick in which to look for a precise pick. |
## Pick quality control
Parameters used for checking quality and integrity of automatic picks.
| Name | Description |
|------|-------------|
| *minAICPslope* | Initial P picks with a slope lower than this value will be discarded. |
| *minAICPSNR* | Initial P picks with an SNR below this value will be discarded. |
| *minAICSslope* | Initial S picks with a slope lower than this value will be discarded. |
| *minAICSSNR* | Initial S picks with an SNR below this value will be discarded. |
| *minsiglength*, *noisefactor*, *minpercent* | Parameters for checking signal length. In the time window of size *minsiglength* after the initial P pick, *minpercent* of samples have to be larger than the RMS value. |
| *zfac* | To recognize misattributed S picks, the RMS amplitudes of vertical and horizontal traces are compared. The RMS amplitude of the vertical trace has to be at least *zfac* times higher than the RMS amplitude on the horizontal traces for the pick to be accepted as a valid P pick. |
| *jackfactor* | A P pick is removed if the jackknife pseudo value of the variance of its subgroup is larger than the variance of all picks multiplied by the *jackfactor*. |
| *mdttolerance* | Maximum allowed deviation of P onset times from the median. Value in seconds. |
| *wdttolerance* | Maximum allowed deviation of S onset times from the line during the Wadati test. Value in seconds. |
## Pick quality determination
Parameters for discrete quality classes.
| Name | Description |
|------|-------------|
| *timeerrorsP* | Widths of the time windows in seconds between earliest and latest possible pick which represent the quality classes 0, 1, 2, 3 for P onsets. |
| *timeerrorsS* | Widths of the time windows in seconds between earliest and latest possible pick which represent the quality classes 0, 1, 2, 3 for S onsets. |
| *nfacP*, *nfacS* | For determination of the latest possible onset time. The time when the signal reaches an amplitude of *nfac* * mean value of the RMS amplitude in the time window *tnoise* corresponds to the latest possible onset time. |
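Mapping the earliest/latest spread of a pick to a discrete quality class can be sketched as follows. This is a hypothetical illustration of the thresholding idea; the threshold values are examples and PyLoT's exact comparison may differ:

```python
def quality_class(earliest, latest, timeerrors=(0.01, 0.02, 0.04, 0.08)):
    """Map the spread between earliest and latest possible onset [s] to a
    discrete quality class 0-3 using timeerrorsP / timeerrorsS style
    thresholds; class 4 marks an unusable pick."""
    spread = latest - earliest
    for cls, limit in enumerate(timeerrors):
        if spread <= limit:
            return cls
    return 4

# spreads of 5 ms, 15 ms and 100 ms around an onset at 10.0 s
classes = [quality_class(10.0, 10.0 + w) for w in (0.005, 0.015, 0.1)]
```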

BIN
docs/tuning.pdf Normal file


View File

@ -1,8 +0,0 @@
git pull
Removing qrc_resources.py
CONFLICT (modify/delete): pylot/core/pick/getSNR.py deleted in HEAD and modified in 67dd66535a213ba5c7cfe2be52aa6d5a7e8b7324. Version 67dd66535a213ba5c7cfe2be52aa6d5a7e8b7324 of pylot/core/pick/getSNR.py left in tree.
CONFLICT (modify/delete): pylot/core/pick/fmpicker.py deleted in HEAD and modified in 67dd66535a213ba5c7cfe2be52aa6d5a7e8b7324. Version 67dd66535a213ba5c7cfe2be52aa6d5a7e8b7324 of pylot/core/pick/fmpicker.py left in tree.
CONFLICT (modify/delete): pylot/core/pick/earllatepicker.py deleted in HEAD and modified in 67dd66535a213ba5c7cfe2be52aa6d5a7e8b7324. Version 67dd66535a213ba5c7cfe2be52aa6d5a7e8b7324 of pylot/core/pick/earllatepicker.py left in tree.
Auto-merging icons.qrc
Automatic merge failed; fix conflicts and then commit the result.

View File

@ -2,6 +2,8 @@
<qresource>
<file>icons/pylot.ico</file>
<file>icons/pylot.png</file>
<file>icons/back.png</file>
<file>icons/home.png</file>
<file>icons/newfile.png</file>
<file>icons/open.png</file>
<file>icons/openproject.png</file>
@ -27,10 +29,13 @@
<file>icons/map.png</file>
<file>icons/openloc.png</file>
<file>icons/compare_button.png</file>
<file>icons/pick_qualities_button.png</file>
<file>icons/eventlist_xml_button.png</file>
<file>icons/locate_button.png</file>
<file>icons/Matlab_PILOT_icon.png</file>
<file>icons/printer.png</file>
<file>icons/delete.png</file>
<file>icons/key_A.png</file>
<file>icons/key_E.png</file>
<file>icons/key_N.png</file>
<file>icons/key_P.png</file>
@ -43,6 +48,8 @@
<file>icons/key_W.png</file>
<file>icons/key_Z.png</file>
<file>icons/filter.png</file>
<file>icons/filter_p.png</file>
<file>icons/filter_s.png</file>
<file>icons/sync.png</file>
<file>icons/zoom_0.png</file>
<file>icons/zoom_in.png</file>

BIN
icons/back.png Normal file

BIN
icons/filter_p.png Normal file

BIN
icons/filter_s.png Normal file

BIN
icons/home.png Normal file

BIN
icons/key_A.png Normal file

icons_rc_3.py
File diff suppressed because it is too large

View File

@ -1,100 +0,0 @@
%This is a parameter input file for autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/DATA/Insheim #rootpath# %project path
EVENT_DATA/LOCAL #datapath# %data path
2013.02_Insheim #database# %name of data base
e0019.048.13 #eventID# %certain evnt ID for processing
True #apverbose#
PILOT #datastructure# %choose data structure
0 #iplot# %flag for plotting: 0 none, 1, partly, >1 everything
AUTOPHASES_AIC_HOS4_ARH #phasefile# %name of autoPILOT output phase file
AUTOLOC_AIC_HOS4_ARH #locfile# %name of autoPILOT output location file
AUTOFOCMEC_AIC_HOS4_ARH.in #focmecin# %name of focmec input file containing polarities
HYPOSAT #locrt# %location routine used ("HYPOINVERSE" or "HYPOSAT")
6 #pmin# %minimum required P picks for location
4 #p0min# %minimum required P picks for location if at least
%3 excellent P picks are found
2 #smin# %minimum required S picks for location
/home/ludger/bin/run_HYPOSAT4autoPILOT.csh #cshellp# %path and name of c-shell script to run location routine
7.6 8.5 #blon# %longitude bounding for location map
49 49.4 #blat# %lattitude bounding for location map
#parameters for moment magnitude estimation#
5000 #vp# %average P-wave velocity
2800 #vs# %average S-wave velocity
2200 #rho# %rock density [kg/m^3]
300 #Qp# %quality factor for P waves
100 #Qs# %quality factor for S waves
#common settings picker#
15 #pstart# %start time [s] for calculating CF for P-picking
40 #pstop# %end time [s] for calculating CF for P-picking
-1.0 #sstart# %start time [s] after or before(-) P-onset for calculating CF for S-picking
7 #sstop# %end time [s] after P-onset for calculating CF for S-picking
2 20 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
2 30 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
2 15 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
2 20 #bph2# %lower/upper corner freq. of second band pass filter z-comp. [Hz]
#special settings for calculating CF#
%!!Be careful when editing the following!!
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
7 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
1.2 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
0.4 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
0.6 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
0.2 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
3 0.1 0.5 0.1 #tsnrz# %for HOS/AR, window lengths for SNR-and slope estimation [tnoise,tsafetey,tsignal,tslope] [s]
3 #pickwinP# %for initial AIC pick, length of P-pick window [s]
8 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
0 #peps4aic# %for HOS/AR, artificial uplift of samples of AIC-function (P)
0.2 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
0.1 #tsmoothP# %for HOS/AR, take average of samples for smoothing CF [s]
0.001 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
1.3 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
0.8 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
0.4 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
0.6 #tdet2h# %for HOS/AR, length of AR-determinaton window [s], H-components, 2nd pick
0.3 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
6 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
3 #pickwinS# %for initial AIC pick, length of S-pick window [s]
2 0.2 1.5 0.5 #tsnrh# %for ARH/AR3, window lengths for SNR-and slope estimation [tnoise,tsafetey,tsignal,tslope] [s]
0.05 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
0.02 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.2 #pepsS# %for AR-picker, artificial uplift of samples of CF (S)
0.4 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
1.5 #nfacS# %for AR-picker, noise factor for noise level determination (S)
%first-motion picker%
1 #minfmweight# %minimum required p weight for first-motion determination
2 #minFMSNR# %miniumum required SNR for first-motion determination
0.2 #fmpickwin# %pick window around P onset for calculating zero crossings
%quality assessment%
#inital AIC onset#
0.01 0.02 0.04 0.08 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
0.04 0.08 0.16 0.32 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
80 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.2 #minAICPSNR# %below this SNR the initial P pick is rejected
50 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.5 #minAICSSNR# %below this SNR the initial S pick is rejected
#check duration of signal using envelope function#
1.5 #prepickwin# %pre-signal window length [s] for noise level estimation
0.7 #minsiglength# %minimum required length of signal [s]
0.2 #sgap# %safety gap between noise and signal window [s]
2 #noisefactor# %noiselevel*noisefactor=threshold
60 #minpercent# %per cent of samples required higher than threshold
#check for spuriously picked S-onsets#
3.0 #zfac# %P-amplitude must exceed zfac times RMS-S amplitude
#jackknife-processing for P-picks#
3 #thresholdweight#%minimum required weight of picks
3 #dttolerance# %maximum allowed deviation of P picks from median [s]
4 #minstats# %minimum number of stations with reliable P picks
3 #Sdttolerance# %maximum allowed deviation from Wadati-diagram

View File

@ -1,99 +0,0 @@
%This is a parameter input file for autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%Parameters are optimized for local data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/DATA/Insheim #rootpath# %project path
EVENT_DATA/LOCAL #datapath# %data path
2016.08_Insheim #database# %name of data base
e0007.224.16 #eventID# %event ID for single event processing
/DATA/Insheim/STAT_INFO #invdir# %full path to inventory or dataless-seed file
PILOT #datastructure#%choose data structure
0 #iplot# %flag for plotting: 0 none, 1 partly, >1 everything
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
/home/ludger/NLLOC #nllocbin# %path to NLLoc executable
/home/ludger/NLLOC/Insheim #nllocroot# %root of NLLoc-processing directory
AUTOPHASES.obs #phasefile# %name of autoPyLoT-output phase file for NLLoc
%(in nllocroot/obs)
Insheim_min1d032016_auto.in #ctrfile# %name of autoPyLoT-output control file for NLLoc
%(in nllocroot/run)
ttime #ttpatter# %pattern of NLLoc ttimes from grid
%(in nllocroot/times)
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%(returns 'eventID_outpatter')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#parameters for seismic moment estimation#
3530 #vp# %average P-wave velocity
2500 #rho# %average rock density [kg/m^3]
300 0.8 #Qp# %quality factor for P waves ([Qp, ap], Qp*f^a)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
AUTOFOCMEC_AIC_HOS4_ARH.in #focmecin# %name of focmec input file containing derived polarities
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
15.0 #pstart# %start time [s] for calculating CF for P-picking
60.0 #pstop# %end time [s] for calculating CF for P-picking
-1.0 #sstart# %start time [s] relative to P-onset for calculating CF for S-picking
10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
2 20 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
2 30 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
2 15 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
2 20 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Edit the following only if you know what you are doing!!%
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
7.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
1.2 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
0.4 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
0.6 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
0.2 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
3 0.1 0.5 0.5 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise,tsafety,tsignal,tslope] [s]
3.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
6.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
0.2 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
0.1 #tsmoothP# %for HOS/AR, take average of samples for smoothing CF [s]
0.001 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
1.3 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
0.8 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
0.4 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
0.6 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
0.3 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
5.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
3.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
2 0.2 1.5 0.5 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise,tsafety,tsignal,tslope] [s]
0.5 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
0.7 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.9 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
1.5 #nfacS# %for AR-picker, noise factor for noise level determination (S)
%first-motion picker%
1 #minfmweight# %minimum required P weight for first-motion determination
2 #minFMSNR# %minimum required SNR for first-motion determination
0.2 #fmpickwin# %pick window around P onset for calculating zero crossings
%quality assessment%
#initial AIC onset#
0.05 0.10 0.20 0.40 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
0.10 0.20 0.40 0.80 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
4 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.2 #minAICPSNR# %below this SNR the initial P pick is rejected
2 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.5 #minAICSSNR# %below this SNR the initial S pick is rejected
#check duration of signal using envelope function#
3 #minsiglength# %minimum required length of signal [s]
1.0 #noisefactor# %noiselevel*noisefactor=threshold
40 #minpercent# %required percentage of samples higher than threshold
#check for spuriously picked S-onsets#
2.0 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
#check statistics of P onsets#
2.5 #mdttolerance# %maximum allowed deviation of P picks from median [s]
#wadati check#
1.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram

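The parameter files in this comparison all use the same one-line layout: a value, the `#key#` marker, then a `%comment`. A minimal parsing sketch for that layout (a hypothetical helper, not PyLoT's actual reader):

```python
# Hypothetical sketch: parse one "value  #key#  %comment" line of the
# parameter file format shown above into a (key, value) pair.
def parse_param_line(line):
    # drop the trailing %comment (pure comment lines become empty)
    line = line.split('%', 1)[0]
    if '#' not in line:
        return None
    # everything before the first '#' is the value, between '#'s the key
    value, key = line.split('#', 2)[:2]
    return key.strip(), value.strip()
```

Multi-valued parameters such as `#tsnrz#` keep their whitespace-separated values in the returned string and can be split further by the caller.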

@@ -1,100 +0,0 @@
%This is a parameter input file for autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%Parameters are optimized for regional data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/DATA/Egelados #rootpath# %project path
EVENT_DATA/LOCAL #datapath# %data path
2006.01_Nisyros #database# %name of data base
e1412.008.06 #eventID# %event ID for single event processing
/DATA/Egelados/STAT_INFO #invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
0 #iplot# %flag for plotting: 0 none, 1 partly, >1 everything
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
/home/ludger/NLLOC #nllocbin# %path to NLLoc executable
/home/ludger/NLLOC/Insheim #nllocroot# %root of NLLoc-processing directory
AUTOPHASES.obs #phasefile# %name of autoPyLoT-output phase file for NLLoc
%(in nllocroot/obs)
Insheim_min1d2015_auto.in #ctrfile# %name of autoPyLoT-output control file for NLLoc
%(in nllocroot/run)
ttime #ttpatter# %pattern of NLLoc ttimes from grid
%(in nllocroot/times)
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%(returns 'eventID_outpatter')
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#parameters for seismic moment estimation#
3530 #vp# %average P-wave velocity
2700 #rho# %average rock density [kg/m^3]
1000 0.8 #Qp# %quality factor for P waves (Qp*f^a); list(Qp, a)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
AUTOFOCMEC_AIC_HOS4_ARH.in #focmecin# %name of focmec input file containing derived polarities
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
20 #pstart# %start time [s] for calculating CF for P-picking
100 #pstop# %end time [s] for calculating CF for P-picking
1.0 #sstart# %start time [s] after or before(-) P-onset for calculating CF for S-picking
100 #sstop# %end time [s] after P-onset for calculating CF for S-picking
3 10 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
3 12 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
3 8 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
3 6 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Be careful when editing the following!!
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
7 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
1.2 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
0.4 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
0.6 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
0.2 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
5 0.2 3.0 1.5 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise,tsafety,tsignal,tslope] [s]
3 #pickwinP# %for initial AIC and refined pick, length of P-pick window [s]
8 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
1.0 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
0.3 #tsmoothP# %for HOS/AR, take average of samples for smoothing CF [s]
0.3 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
1.3 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
0.8 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
0.4 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
0.6 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
0.3 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
10 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
25 #pickwinS# %for initial AIC and refined pick, length of S-pick window [s]
5 0.2 3.0 3.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise,tsafety,tsignal,tslope] [s]
3.5 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
1.0 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.2 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
1.5 #nfacS# %for AR-picker, noise factor for noise level determination (S)
%first-motion picker%
1 #minfmweight# %minimum required P weight for first-motion determination
2 #minFMSNR# %minimum required SNR for first-motion determination
6.0 #fmpickwin# %pick window around P onset for calculating zero crossings
%quality assessment%
#initial AIC onset#
0.04 0.08 0.16 0.32 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
0.04 0.08 0.16 0.32 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
3 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.2 #minAICPSNR# %below this SNR the initial P pick is rejected
5 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
2.5 #minAICSSNR# %below this SNR the initial S pick is rejected
#check duration of signal using envelope function#
30 #minsiglength# %minimum required length of signal [s]
2.5 #noisefactor# %noiselevel*noisefactor=threshold
60 #minpercent# %required percentage of samples higher than threshold
#check for spuriously picked S-onsets#
0.5 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
#check statistics of P onsets#
45 #mdttolerance# %maximum allowed deviation of P picks from median [s]
#wadati check#
3.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram


@@ -1,2 +0,0 @@
P bandpass 4 2.0 20.0
S bandpass 4 2.0 15.0


@@ -4,19 +4,17 @@
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/home/marcel/marcel_scratch #rootpath# %project path
alparray #datapath# %data path
waveforms #database# %name of data base
e0006.036.13 #eventID# %event ID for single event processing (* for all events found in database)
None #invdir# %full path to inventory or dataless-seed file
#datapath# %data path
#eventID# %event ID for single event processing (* for all events found in datapath)
#invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
/progs/bin #nllocbin# %path to NLLoc executable
/home/ludger/NLLOC/Insheim #nllocroot# %root of NLLoc-processing directory
AUTOPHASES.obs #phasefile# %name of autoPyLoT-output phase file for NLLoc
Insheim_min1d032016_auto.in #ctrfile# %name of autoPyLoT-output control file for NLLoc
None #nllocbin# %path to NLLoc executable
None #nllocroot# %root of NLLoc-processing directory
None #phasefile# %name of autoPyLoT-output phase file for NLLoc
None #ctrfile# %name of autoPyLoT-output control file for NLLoc
ttime #ttpatter# %pattern of NLLoc ttimes from grid
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -37,62 +35,65 @@ bandpass bandpass #filter_type# %filter type
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
global #extent# %extent of array ("local", "regional" or "global")
50.0 #pstart# %start time [s] for calculating CF for P-picking
600.0 #pstop# %end time [s] for calculating CF for P-picking
-150.0 #pstart# %start time [s] for calculating CF for P-picking (if TauPy: seconds relative to estimated onset)
600.0 #pstop# %end time [s] for calculating CF for P-picking (if TauPy: seconds relative to estimated onset)
200.0 #sstart# %start time [s] relative to P-onset for calculating CF for S-picking
1150.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from taupy for calculating windows for CF
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
iasp91 #taup_model# %define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6
P,Pdiff #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
0.05 0.5 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
0.01 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.001 0.5 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
0.05 0.5 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
0.01 0.5 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
0.001 0.5 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Edit the following only if you know what you are doing!!%
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
15.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
150.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
6.0 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
2.0 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
3.0 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
1.0 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
16.0 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
10.0 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
12.0 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
6.0 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
60.0 10.0 150.0 3.0 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
10.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
20.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
60.0 10.0 40.0 10.0 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
150.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
35.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
6.0 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
4.0 #tsmoothP# %for HOS/AR, take average of samples for smoothing CF [s]
0.001 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
1.1 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
6.0 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
4.0 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
6.0 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
3.0 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
12.0 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
6.0 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
8.0 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
4.0 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
5.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
15.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
100.0 10.0 40.0 6.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
2.0 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
3.0 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.9 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
30.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
195.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
100.0 10.0 45.0 10.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
22.0 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
10.0 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.001 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
1.2 #nfacS# %for AR-picker, noise factor for noise level determination (S)
#first-motion picker#
1 #minfmweight# %minimum required P weight for first-motion determination
2.0 #minFMSNR# %minimum required SNR for first-motion determination
0.2 #fmpickwin# %pick window around P onset for calculating zero crossings
3.0 #minFMSNR# %minimum required SNR for first-motion determination
10.0 #fmpickwin# %pick window around P onset for calculating zero crossings
#quality assessment#
1.0 2.0 4.0 8.0 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
4.0 8.0 16.0 32.0 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
0.5 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.1 #minAICPSNR# %below this SNR the initial P pick is rejected
1.0 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.5 #minAICSSNR# %below this SNR the initial S pick is rejected
1.3 #minAICSSNR# %below this SNR the initial S pick is rejected
5.0 #minsiglength# %length of signal part for which amplitudes must exceed noiselevel [s]
1.0 #noisefactor# %noiselevel*noisefactor=threshold
10.0 #minpercent# %required percentage of amplitudes exceeding threshold
1.5 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
6.0 #mdttolerance# %maximum allowed deviation of P picks from median [s]
1.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram
1.2 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
25.0 #mdttolerance# %maximum allowed deviation of P picks from median [s]
50.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram
5.0 #jackfactor# %pick is removed if the variance of the subgroup with the pick removed is larger than the mean variance of all subgroups times safety factor

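The quality-assessment block above combines `minsiglength`, `noisefactor`, and `minpercent` into one signal-duration test: within `minsiglength` seconds after the pick, at least `minpercent` percent of the envelope samples must exceed `noiselevel * noisefactor`. A hedged sketch of that rule (function name and envelope input are assumptions, not PyLoT's API):

```python
import numpy as np

# Hypothetical sketch of the signal-duration check configured above.
# `envelope` holds envelope samples starting at the pick, `fs` is the
# sampling rate in Hz, `noiselevel` the pre-pick noise estimate.
def signal_long_enough(envelope, fs, noiselevel,
                       minsiglength=5.0, noisefactor=1.0, minpercent=10.0):
    nsamples = int(minsiglength * fs)
    window = envelope[:nsamples]              # signal part after the pick
    threshold = noiselevel * noisefactor      # noiselevel*noisefactor=threshold
    above = np.count_nonzero(window > threshold)
    return 100.0 * above / len(window) >= minpercent
```

Picks failing this test are treated as spurious, which is why the global parameter set above relaxes `minpercent` to 10 while local sets demand more.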
inputs/pylot_local.in (new file)

@@ -0,0 +1,99 @@
%This is a parameter input file for PyLoT/autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
/DATA/Insheim/EVENT_DATA/LOCAL/2018.02_Insheim #datapath# %data path
e0006.038.18 #eventID# %event ID for single event processing (* for all events found in datapath)
/DATA/Insheim/STAT_INFO #invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
/home/ludger/NLLOC #nllocbin# %path to NLLoc executable
/home/ludger/NLLOC/Insheim #nllocroot# %root of NLLoc-processing directory
AUTOPHASES.obs #phasefile# %name of autoPyLoT-output phase file for NLLoc
Insheim_min1d032016_auto.in #ctrfile# %name of autoPyLoT-output control file for NLLoc
ttime #ttpatter# %pattern of NLLoc ttimes from grid
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#parameters for seismic moment estimation#
3530.0 #vp# %average P-wave velocity
2500.0 #rho# %average rock density [kg/m^3]
300.0 0.8 #Qp# %quality factor for P waves (Qp*f^a); list(Qp, a)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#settings local magnitude#
1.11 0.0009 -2.0 #WAscaling# %Scaling relation (log(Ao)+Alog(r)+Br+C) of Wood-Anderson amplitude Ao [nm] If zeros are set, original Richter magnitude is calculated!
0.0 0.0 #magscaling# %Scaling relation for derived local magnitude [a*Ml+b]. If zeros are set, no scaling of network magnitude is applied!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#filter settings#
2.0 2.0 #minfreq# %Lower filter frequency [P, S]
30.0 15.0 #maxfreq# %Upper filter frequency [P, S]
3 3 #filter_order# %filter order [P, S]
bandpass bandpass #filter_type# %filter type (bandpass, bandstop, lowpass, highpass) [P, S]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
local #extent# %extent of array ("local", "regional" or "global")
7.0 #pstart# %start time [s] for calculating CF for P-picking
16.0 #pstop# %end time [s] for calculating CF for P-picking
-0.5 #sstart# %start time [s] relative to P-onset for calculating CF for S-picking
10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
False #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
iasp91 #taup_model# %define TauPy model for traveltime estimation
P #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
2.0 20.0 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
2.0 30.0 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
2.0 10.0 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
2.0 15.0 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Edit the following only if you know what you are doing!!%
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
4.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
1.2 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
0.4 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
0.6 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
0.2 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
3.0 0.0 1.0 0.5 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
3.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
6.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
0.4 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
0.1 #tsmoothP# %for HOS/AR, take average of samples for smoothing CF [s]
0.4 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
1.3 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
0.8 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
0.4 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
0.6 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
0.3 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
5.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
4.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
2.0 0.2 1.5 1.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
1.0 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
0.7 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.9 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
1.5 #nfacS# %for AR-picker, noise factor for noise level determination (S)
#first-motion picker#
1 #minfmweight# %minimum required P weight for first-motion determination
2.0 #minFMSNR# %minimum required SNR for first-motion determination
0.2 #fmpickwin# %pick window around P onset for calculating zero crossings
#quality assessment#
0.04 0.08 0.16 0.32 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
0.05 0.10 0.20 0.40 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
0.8 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.1 #minAICPSNR# %below this SNR the initial P pick is rejected
1.0 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.5 #minAICSSNR# %below this SNR the initial S pick is rejected
1.0 #minsiglength# %length of signal part for which amplitudes must exceed noiselevel [s]
1.1 #noisefactor# %noiselevel*noisefactor=threshold
50.0 #minpercent# %required percentage of amplitudes exceeding threshold
1.1 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
5.0 #mdttolerance# %maximum allowed deviation of P picks from median [s]
1.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram
2.0 #jackfactor# %pick is removed if the variance of the subgroup with the pick removed is larger than the mean variance of all subgroups times safety factor

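The `#WAscaling#` line above encodes the local-magnitude relation Ml = log10(A0) + a*log10(r) + b*r + c, with A0 the Wood-Anderson amplitude in nm. A hedged numeric sketch (the distance unit km and the function name are assumptions):

```python
import math

# Hypothetical sketch of the #WAscaling# relation configured above:
# Ml = log10(A0) + a*log10(r) + b*r + c
# with A0 the Wood-Anderson amplitude [nm] and r the distance (km assumed).
def local_magnitude(a0_nm, r_km, scaling=(1.11, 0.0009, -2.0)):
    a, b, c = scaling
    return math.log10(a0_nm) + a * math.log10(r_km) + b * r_km + c
```

Per the comment in the file, setting the scaling coefficients to zeros falls back to the original Richter magnitude instead.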
inputs/pylot_regional.in (new file)

@@ -0,0 +1,99 @@
%This is a parameter input file for PyLoT/autoPyLoT.
%All main and special settings regarding data handling
%and picking are to be set here!
%Parameters are optimized for %extent data sets!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#main settings#
#datapath# %data path
#eventID# %event ID for single event processing (* for all events found in datapath)
#invdir# %full path to inventory or dataless-seed file
PILOT #datastructure# %choose data structure
True #apverbose# %choose 'True' or 'False' for terminal output
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#NLLoc settings#
None #nllocbin# %path to NLLoc executable
None #nllocroot# %root of NLLoc-processing directory
None #phasefile# %name of autoPyLoT-output phase file for NLLoc
None #ctrfile# %name of autoPyLoT-output control file for NLLoc
ttime #ttpatter# %pattern of NLLoc ttimes from grid
AUTOLOC_nlloc #outpatter# %pattern of NLLoc-output file
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#parameters for seismic moment estimation#
3530.0 #vp# %average P-wave velocity
2500.0 #rho# %average rock density [kg/m^3]
300.0 0.8 #Qp# %quality factor for P waves (Qp*f^a); list(Qp, a)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#settings local magnitude#
1.11 0.0009 -2.0 #WAscaling# %Scaling relation (log(Ao)+Alog(r)+Br+C) of Wood-Anderson amplitude Ao [nm] If zeros are set, original Richter magnitude is calculated!
1.0382 -0.447 #magscaling# %Scaling relation for derived local magnitude [a*Ml+b]. If zeros are set, no scaling of network magnitude is applied!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#filter settings#
1.0 1.0 #minfreq# %Lower filter frequency [P, S]
10.0 10.0 #maxfreq# %Upper filter frequency [P, S]
2 2 #filter_order# %filter order [P, S]
bandpass bandpass #filter_type# %filter type (bandpass, bandstop, lowpass, highpass) [P, S]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
#common settings picker#
local #extent# %extent of array ("local", "regional" or "global")
15.0 #pstart# %start time [s] for calculating CF for P-picking
60.0 #pstop# %end time [s] for calculating CF for P-picking
-1.0 #sstart# %start time [s] relative to P-onset for calculating CF for S-picking
10.0 #sstop# %end time [s] after P-onset for calculating CF for S-picking
True #use_taup# %use estimated traveltimes from TauPy for calculating windows for CF
iasp91 #taup_model# %define TauPy model for traveltime estimation
P #taup_phases# %Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.
2.0 10.0 #bpz1# %lower/upper corner freq. of first band pass filter Z-comp. [Hz]
2.0 12.0 #bpz2# %lower/upper corner freq. of second band pass filter Z-comp. [Hz]
2.0 8.0 #bph1# %lower/upper corner freq. of first band pass filter H-comp. [Hz]
2.0 10.0 #bph2# %lower/upper corner freq. of second band pass filter H-comp. [Hz]
#special settings for calculating CF#
%!!Edit the following only if you know what you are doing!!%
#Z-component#
HOS #algoP# %choose algorithm for P-onset determination (HOS, ARZ, or AR3)
7.0 #tlta# %for HOS-/AR-AIC-picker, length of LTA window [s]
4 #hosorder# %for HOS-picker, order of Higher Order Statistics
2 #Parorder# %for AR-picker, order of AR process of Z-component
1.2 #tdet1z# %for AR-picker, length of AR determination window [s] for Z-component, 1st pick
0.4 #tpred1z# %for AR-picker, length of AR prediction window [s] for Z-component, 1st pick
0.6 #tdet2z# %for AR-picker, length of AR determination window [s] for Z-component, 2nd pick
0.2 #tpred2z# %for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick
0.001 #addnoise# %add noise to seismogram for stable AR prediction
3.0 0.1 0.5 1.0 #tsnrz# %for HOS/AR, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
3.0 #pickwinP# %for initial AIC pick, length of P-pick window [s]
6.0 #Precalcwin# %for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)
0.2 #aictsmooth# %for HOS/AR, take average of samples for smoothing of AIC-function [s]
0.1 #tsmoothP# %for HOS/AR, take average of samples for smoothing CF [s]
0.001 #ausP# %for HOS/AR, artificial uplift of samples (aus) of CF (P)
1.3 #nfacP# %for HOS/AR, noise factor for noise level determination (P)
#H-components#
ARH #algoS# %choose algorithm for S-onset determination (ARH or AR3)
0.8 #tdet1h# %for HOS/AR, length of AR-determination window [s], H-components, 1st pick
0.4 #tpred1h# %for HOS/AR, length of AR-prediction window [s], H-components, 1st pick
0.6 #tdet2h# %for HOS/AR, length of AR-determination window [s], H-components, 2nd pick
0.3 #tpred2h# %for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick
4 #Sarorder# %for AR-picker, order of AR process of H-components
5.0 #Srecalcwin# %for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)
4.0 #pickwinS# %for initial AIC pick, length of S-pick window [s]
2.0 0.3 1.5 1.0 #tsnrh# %for ARH/AR3, window lengths for SNR and slope estimation [tnoise, tsafety, tsignal, tslope] [s]
1.0 #aictsmoothS# %for AIC-picker, take average of samples for smoothing of AIC-function [s]
0.7 #tsmoothS# %for AR-picker, take average of samples for smoothing CF [s] (S)
0.9 #ausS# %for HOS/AR, artificial uplift of samples (aus) of CF (S)
1.5 #nfacS# %for AR-picker, noise factor for noise level determination (S)
#first-motion picker#
1 #minfmweight# %minimum required P weight for first-motion determination
2.0 #minFMSNR# %minimum required SNR for first-motion determination
0.2 #fmpickwin# %pick window around P onset for calculating zero crossings
#quality assessment#
0.02 0.04 0.08 0.16 #timeerrorsP# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for P
0.04 0.08 0.16 0.32 #timeerrorsS# %discrete time errors [s] corresponding to picking weights [0 1 2 3] for S
0.8 #minAICPslope# %below this slope [counts/s] the initial P pick is rejected
1.1 #minAICPSNR# %below this SNR the initial P pick is rejected
1.0 #minAICSslope# %below this slope [counts/s] the initial S pick is rejected
1.5 #minAICSSNR# %below this SNR the initial S pick is rejected
1.0 #minsiglength# %length of signal part for which amplitudes must exceed noiselevel [s]
1.0 #noisefactor# %noiselevel*noisefactor=threshold
10.0 #minpercent# %required percentage of amplitudes exceeding threshold
1.5 #zfac# %P-amplitude must exceed at least zfac times RMS-S amplitude
6.0 #mdttolerance# %maximum allowed deviation of P picks from median [s]
1.0 #wdttolerance# %maximum allowed deviation from Wadati-diagram
5.0 #jackfactor# %pick is removed if the variance of the subgroup with the pick removed is larger than the mean variance of all subgroups times safety factor
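The discrete time errors above translate pick uncertainties into the integer picking weights [0 1 2 3] used for quality assessment. A minimal sketch of that mapping (the function name and the weight-4 fallback are illustrative assumptions, not code from the repository):

```python
def pick_weight(uncertainty, timeerrors=(0.02, 0.04, 0.08, 0.16)):
    """Map a pick's time uncertainty [s] to a discrete weight.

    Weights 0-3 correspond to the timeerrorsP/timeerrorsS thresholds;
    anything above the largest threshold gets weight 4 (unusable).
    """
    for weight, threshold in enumerate(timeerrors):
        if uncertainty <= threshold:
            return weight
    return len(timeerrors)  # weight 4: uncertainty too large

# e.g. a P pick with 0.05 s uncertainty falls into weight class 2
```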


@ -158,23 +158,31 @@ def buildPyLoT(verbosity=None):
def installPyLoT(verbosity=None):
files_to_copy = {'autoPyLoT_local.in': ['~', '.pylot'],
'autoPyLoT_regional.in': ['~', '.pylot']}
files_to_copy = {'pylot_local.in': ['~', '.pylot'],
'pylot_regional.in': ['~', '.pylot'],
'pylot_global.in': ['~', '.pylot']}
if verbosity > 0:
print('starting installation of PyLoT ...')
if verbosity > 1:
print('copying input files into destination folder ...')
ans = input('please specify scope of interest '
'([0]=local, 1=regional) :') or 0
'([0]=local, 1=regional, 2=global, 3=active) :') or 0
if not isinstance(ans, int):
ans = int(ans)
ans = 'local' if ans is 0 else 'regional'
if ans == 0:
ans = 'local'
elif ans == 1:
ans = 'regional'
elif ans == 2:
ans = 'global'
elif ans == 3:
ans = 'active'
link_dest = []
for file, destination in files_to_copy.items():
link_file = ans in file
if link_file:
link_dest = copy.deepcopy(destination)
link_dest.append('autoPyLoT.in')
link_dest.append('pylot.in')
link_dest = os.path.join(*link_dest)
destination.append(file)
destination = os.path.join(*destination)
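The if/elif chain that replaces the broken `ans is 0` comparison maps the numeric answer to a scope keyword. The same selection can be sketched as a lookup table (a hypothetical refactoring for illustration, not code from the diff):

```python
# scope keywords as used by the install routine in the diff
SCOPES = {0: 'local', 1: 'regional', 2: 'global', 3: 'active'}

def scope_name(ans):
    """Return the scope keyword for a user answer, defaulting to 'local'."""
    if not isinstance(ans, int):
        ans = int(ans)  # input() returns a string in Python 3
    return SCOPES.get(ans, 'local')
```

A dict lookup also makes the default explicit, whereas the if/elif chain leaves `ans` numeric for unexpected values.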

pylot.yml

@ -0,0 +1,12 @@
name: pylot_311
channels:
- conda-forge
- defaults
dependencies:
- cartopy=0.23.0=py311hcf9f919_1
- joblib=1.4.2=pyhd8ed1ab_0
- obspy=1.4.1=py311he736701_3
- pyaml=24.7.0=pyhd8ed1ab_0
- pyqtgraph=0.13.7=pyhd8ed1ab_0
- pyside2=5.15.8=py311h3d699ce_4
- pytest=8.3.2=pyhd8ed1ab_0

(binary image file changed, not shown; previous size 2.2 KiB)

@ -9,7 +9,7 @@ PyLoT - the Python picking and Localization Tool
This python library contains a graphical user interface for picking
seismic phases. This software needs ObsPy (http://github.com/obspy/obspy/wiki)
and the Qt4 libraries to be installed first.
and the Qt libraries to be installed first.
PILOT has been developed in Mathworks' MatLab. In order to distribute
PILOT without facing portability problems, it has been decided to re-


@ -6,15 +6,17 @@ Revised/extended summer 2017.
:author: Ludger Küperkoch / MAGS2 EP3 working group
"""
import matplotlib.pyplot as plt
import numpy as np
import obspy.core.event as ope
from obspy.geodetics import degrees2kilometers
from scipy import integrate, signal
from scipy.optimize import curve_fit
from pylot.core.pick.utils import getsignalwin, crossings_nonzero_all, \
select_for_phase
from pylot.core.util.utils import common_range, fit_curve
from scipy import integrate, signal
from scipy.optimize import curve_fit
def richter_magnitude_scaling(delta):
@ -117,12 +119,20 @@ class Magnitude(object):
pass
def updated_event(self, magscaling=None):
self.event.magnitudes.append(self.net_magnitude(magscaling))
net_ml = self.net_magnitude(magscaling)
if net_ml:
self.event.magnitudes.append(net_ml)
return self.event
def net_magnitude(self, magscaling=None):
if self:
if magscaling is not None and str(magscaling) is not '[0.0, 0.0]':
if magscaling is None:
scaling = False
elif magscaling[0] == 0.0 and magscaling[1] == 0.0:
scaling = False
else:
scaling = True
if scaling == True:
# scaling necessary
print("Scaling network magnitude ...")
mag = ope.Magnitude(
@ -133,7 +143,11 @@ class Magnitude(object):
station_count=len(self.magnitudes),
azimuthal_gap=self.origin_id.get_referred_object().quality.azimuthal_gap)
else:
# no saling necessary
# no scaling necessary
# Temporary fix needs rework
if (len(self.magnitudes.keys()) == 0):
print("Error in local magnitude calculation ")
return None
mag = ope.Magnitude(
mag=np.median([M.mag for M in self.magnitudes.values()]),
magnitude_type=self.type,
@ -141,7 +155,6 @@ class Magnitude(object):
station_count=len(self.magnitudes),
azimuthal_gap=self.origin_id.get_referred_object().quality.azimuthal_gap)
return mag
return None
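The reworked `net_magnitude` appends a magnitude only when one could actually be computed, and takes the median of all station magnitudes. A sketch of the core decision logic; the formula `a + b * median` for the scaled branch is an assumption for illustration, since the diff truncates that branch:

```python
import numpy as np

def network_magnitude(station_mags, magscaling=None):
    """Median network magnitude, mirroring the scaling check in the diff.

    station_mags: list of station magnitude values
    magscaling:   None or [a, b]; [0.0, 0.0] means "no scaling"
    """
    if not station_mags:
        # temporary fix in the diff: no station magnitudes -> None
        return None
    median_mag = float(np.median(station_mags))
    if magscaling is None or (magscaling[0] == 0.0 and magscaling[1] == 0.0):
        return median_mag  # no scaling necessary
    a, b = magscaling  # assumed linear scaling, for illustration only
    return a + b * median_mag
```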
class LocalMagnitude(Magnitude):
@ -212,15 +225,20 @@ class LocalMagnitude(Magnitude):
power = [np.power(tr.data, 2) for tr in st if tr.stats.channel[-1] not
in 'Z3']
if len(power) != 2:
raise ValueError('Wood-Anderson amplitude defintion only valid for '
'two horizontals: {0} given'.format(len(power)))
# checking horizontal count and calculating power_sum accordingly
if len(power) == 1:
print('WARNING: Only one horizontal found for station {0}.'.format(st[0].stats.station))
power_sum = power[0]
elif len(power) == 2:
power_sum = power[0] + power[1]
#
else:
raise ValueError('Wood-Anderson amplitude definition only valid for'
' up to two horizontals: {0} given'.format(len(power)))
sqH = np.sqrt(power_sum)
# get time array
th = np.arange(0, len(sqH) * dt, dt)
th = np.arange(0, st[0].stats.npts / st[0].stats.sampling_rate, dt)
# get maximum peak within pick window
iwin = getsignalwin(th, t0 - stime, self.calc_win)
ii = min([iwin[len(iwin) - 1], len(th)])
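The amplitude measurement combines the horizontal components into a single RMS trace; the change above relaxes the old strict two-horizontals requirement so that a single horizontal is accepted with a warning. A self-contained sketch of that branching (warning print omitted):

```python
import numpy as np

def rms_horizontal(traces):
    """Combine one or two horizontal traces into an RMS amplitude trace,
    following the branching in the diff (one horizontal tolerated,
    more than two rejected)."""
    power = [np.power(tr, 2) for tr in traces]
    if len(power) == 1:
        power_sum = power[0]   # fall back to the single horizontal
    elif len(power) == 2:
        power_sum = power[0] + power[1]
    else:
        raise ValueError('only up to two horizontals supported: '
                         '{0} given'.format(len(power)))
    return np.sqrt(power_sum)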
@ -233,17 +251,34 @@ class LocalMagnitude(Magnitude):
# check for plot flag (for debugging only)
fig = None
if iplot > 1:
st.plot()
fig = plt.figure()
ax = fig.add_subplot(111)
ax = fig.add_subplot(211)
ax.plot(th, st[0].data, 'k')
ax.plot(th, sqH)
ax.plot(th[iwin], sqH[iwin], 'g')
ax.plot([t0, t0], [0, max(sqH)], 'r', linewidth=2)
ax.title(
'Station %s, RMS Horizontal Traces, WA-peak-to-peak=%4.1f mm' \
% (st[0].stats.station, wapp))
ax.plot([t0 - stime, t0 - stime], [0, max(sqH)], 'r', linewidth=2)
ax.set_title('Station %s, Channel %s, RMS Horizontal Trace, '
'WA-peak-to-peak=%6.3f mm' % (st[0].stats.station,
st[0].stats.channel,
wapp))
ax.set_xlabel('Time [s]')
ax.set_ylabel('Displacement [mm]')
ax = fig.add_subplot(212)
ax.plot(th, st[1].data, 'k')
ax.plot(th, sqH)
ax.plot(th[iwin], sqH[iwin], 'g')
ax.plot([t0 - stime, t0 - stime], [0, max(sqH)], 'r', linewidth=2)
ax.set_title('Channel %s, RMS Horizontal Trace, '
'WA-peak-to-peak=%6.3f mm' % (st[1].stats.channel,
wapp))
ax.set_xlabel('Time [s]')
ax.set_ylabel('Displacement [mm]')
fig.show()
try:
input()
except SyntaxError:
pass
plt.close(fig)
return wapp, fig
@ -253,6 +288,11 @@ class LocalMagnitude(Magnitude):
continue
pick = a.pick_id.get_referred_object()
station = pick.waveform_id.station_code
# make sure calculating Ml only from reliable onsets
# NLLoc: time_weight = 0 => do not use onset!
if a.time_weight == 0:
print("Uncertain pick at Station {}, do not use it!".format(station))
continue
wf = select_for_phase(self.stream.select(
station=station), a.phase)
if not wf:
@ -286,6 +326,10 @@ class LocalMagnitude(Magnitude):
+ self.wascaling[0] * np.log10(delta) + self.wascaling[1]
* delta + self.wascaling[
2])
if self.verbose:
print(
"Local Magnitude for station {0}: ML = {1:3.1f}".format(
station, magnitude.mag))
magnitude.origin_id = self.origin_id
magnitude.waveform_id = pick.waveform_id
magnitude.amplitude_id = amplitude.resource_id
@ -349,26 +393,43 @@ class MomentMagnitude(Magnitude):
for a in self.arrivals:
if a.phase not in 'pP':
continue
# make sure calculating Mo only from reliable onsets
# NLLoc: time_weight = 0 => do not use onset!
if a.time_weight == 0:
continue
pick = a.pick_id.get_referred_object()
station = pick.waveform_id.station_code
scopy = self.stream.copy()
wf = scopy.select(station=station)
if len(self.stream) <= 2:
print("Station:" '{0}'.format(station))
print("WARNING: No instrument corrected data available,"
" no magnitude calculation possible! Go on.")
continue
wf = self.stream.select(station=station)
if not wf:
continue
try:
scopy = wf.copy()
except AssertionError:
print("WARNING: Something's wrong with the data,"
"station {},"
"no calculation of moment magnitude possible! Go on.".format(station))
continue
onset = pick.time
distance = degrees2kilometers(a.distance)
azimuth = a.azimuth
incidence = a.takeoff_angle
w0, fc = calcsourcespec(wf, onset, self.p_velocity, distance,
if not 0. <= incidence <= 360.:
if self.verbose:
print(f'WARNING: Incidence angle outside bounds - {incidence}')
return
w0, fc = calcsourcespec(scopy, onset, self.p_velocity, distance,
azimuth, incidence, self.p_attenuation,
self.plot_flag, self.verbose)
if w0 is None or fc is None:
if self.verbose:
print("WARNING: insufficient frequency information")
continue
WF = select_for_phase(self.stream.select(
station=station), a.phase)
WF = select_for_phase(WF, "P")
WF = select_for_phase(scopy, "P")
m0, mw = calcMoMw(WF, w0, self.rock_density, self.p_velocity,
distance, self.verbose)
self.moment_props = (station, dict(w0=w0, fc=fc, Mo=m0))
@ -379,6 +440,40 @@ class MomentMagnitude(Magnitude):
self.event.station_magnitudes.append(magnitude)
self.magnitudes = (station, magnitude)
# WIP JG
def getSourceSpec(self):
for a in self.arrivals:
if a.phase not in 'pP':
continue
# make sure calculating Mo only from reliable onsets
# NLLoc: time_weight = 0 => do not use onset!
if a.time_weight == 0:
continue
pick = a.pick_id.get_referred_object()
station = pick.waveform_id.station_code
if len(self.stream) <= 2:
print("Station:" '{0}'.format(station))
print("WARNING: No instrument corrected data available,"
" no magnitude calculation possible! Go on.")
continue
wf = self.stream.select(station=station)
if not wf:
continue
try:
scopy = wf.copy()
except AssertionError:
print("WARNING: Something's wrong with the data,"
"station {},"
"no calculation of moment magnitude possible! Go on.".format(station))
continue
onset = pick.time
distance = degrees2kilometers(a.distance)
azimuth = a.azimuth
incidence = a.takeoff_angle
w0, fc, plt = calcsourcespec(scopy, onset, self.p_velocity, distance,
azimuth, incidence, self.p_attenuation,
3, self.verbose)
return w0, fc, plt
def calcMoMw(wfstream, w0, rho, vp, delta, verbosity=False):
'''
@ -406,17 +501,18 @@ def calcMoMw(wfstream, w0, rho, vp, delta, verbosity=False):
if verbosity:
print(
"calcMoMw: Calculating seismic moment Mo and moment magnitude Mw for station {0} ...".format(
tr.stats.station))
"calcMoMw: Calculating seismic moment Mo and moment magnitude Mw \
for station {0} ...".format(tr.stats.station))
# additional common parameters for calculating Mo
rP = 2 / np.sqrt(
15) # average radiation pattern of P waves (Aki & Richards, 1980)
# average radiation pattern of P waves (Aki & Richards, 1980)
rP = 2 / np.sqrt(15)
freesurf = 2.0 # free surface correction, assuming vertical incidence
Mo = w0 * 4 * np.pi * rho * np.power(vp, 3) * delta / (rP * freesurf)
# Mw = np.log10(Mo * 1e07) * 2 / 3 - 10.7 # after Hanks & Kanamori (1979), defined for [dyn*cm]!
# Mw = np.log10(Mo * 1e07) * 2 / 3 - 10.7 # after Hanks & Kanamori (1979),
# defined for [dyn*cm]!
Mw = np.log10(Mo) * 2 / 3 - 6.7 # for metric units
if verbosity:
@ -476,11 +572,15 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
dist = delta * 1000 # hypocentral distance in [m]
fc = None
Fc = None
w0 = None
zdat = select_for_phase(wfstream, "P")
if len(zdat) == 0:
print("No vertical component found in stream:\n{}".format(wfstream))
print("No calculation of source spectrum possible!")
return w0, Fc
dt = zdat[0].stats.delta
freq = zdat[0].stats.sampling_rate
@ -488,7 +588,6 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
# trim traces to common range (for rotation)
trstart, trend = common_range(wfstream)
wfstream.trim(trstart, trend)
# rotate into LQT (ray-coordindate-) system using Obspy's rotate
# L: P-wave direction
# Q: SV-wave direction
@ -529,9 +628,7 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
print("calcsourcespec: Something is wrong with the waveform, "
"no zero crossings derived!\n")
print("No calculation of source spectrum possible!")
plotflag = 0
else:
plotflag = 1
index = min([3, len(zc) - 1])
calcwin = (zc[index] - zc[0]) * dt
iwin = getsignalwin(t, rel_onset, calcwin)
@ -539,14 +636,15 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
# fft
fny = freq / 2
l = len(xdat) / freq
# l = len(xdat) / freq
# number of fft bins after Bath
n = freq * l
# n = freq * l
# find next power of 2 of data length
m = pow(2, np.ceil(np.log(len(xdat)) / np.log(2)))
N = int(np.power(m, 2))
N = min(int(np.power(m, 2)), 16384)
# N = int(np.power(m, 2))
y = dt * np.fft.fft(xdat, N)
Y = abs(y[: N / 2])
Y = abs(y[: int(N / 2)])
L = (N - 1) / freq
f = np.arange(0, fny, 1 / L)
@ -573,26 +671,23 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
# use of implicit scipy otimization function
fit = synthsourcespec(F, w0in, Fcin)
[optspecfit, _] = curve_fit(synthsourcespec, F, YYcor, [w0in, Fcin])
w0 = optspecfit[0]
fc = optspecfit[1]
# w01 = optspecfit[0]
# fc1 = optspecfit[1]
w01 = optspecfit[0]
fc1 = optspecfit[1]
if verbosity:
print("calcsourcespec: Determined w0-value: %e m/Hz, \n"
"calcsourcespec: Determined corner frequency: %f Hz" % (w0, fc))
"calcsourcespec: Determined corner frequency: %f Hz" % (w01, fc1))
# use of conventional fitting
# [w02, fc2] = fitSourceModel(F, YYcor, Fcin, iplot, verbosity)
[w02, fc2] = fitSourceModel(F, YYcor, Fcin, iplot, verbosity)
# get w0 and fc as median of both
# source spectrum fits
# w0 = np.median([w01, w02])
# fc = np.median([fc1, fc2])
# if verbosity:
# print("calcsourcespec: Using w0-value = %e m/Hz and fc = %f Hz" % (
# w0, fc))
if iplot > 1:
w0 = np.median([w01, w02])
Fc = np.median([fc1, fc2])
if verbosity:
print("calcsourcespec: Using w0-value = %e m/Hz and fc = %f Hz" % (
w0, Fc))
if iplot >= 1:
f1 = plt.figure()
tLdat = np.arange(0, len(Ldat) * dt, dt)
plt.subplot(2, 1, 1)
@ -600,7 +695,7 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
p1, = plt.plot(t, np.multiply(inttrz, 1000), 'k')
p2, = plt.plot(tLdat, np.multiply(Ldat, 1000))
plt.legend([p1, p2], ['Displacement', 'Rotated Displacement'])
if plotflag == 1:
if iplot == 1:
plt.plot(t[iwin], np.multiply(xdat, 1000), 'g')
plt.title('Seismogram and P Pulse, Station %s-%s' \
% (zdat[0].stats.station, zdat[0].stats.channel))
@ -610,24 +705,32 @@ def calcsourcespec(wfstream, onset, vp, delta, azimuth, incidence,
plt.xlabel('Time since %s' % zdat[0].stats.starttime)
plt.ylabel('Displacement [mm]')
if plotflag == 1:
if iplot > 1:
plt.subplot(2, 1, 2)
p1, = plt.loglog(f, Y.real, 'k')
p2, = plt.loglog(F, YY.real)
p3, = plt.loglog(F, YYcor, 'r')
p4, = plt.loglog(F, fit, 'g')
plt.loglog([fc, fc], [w0 / 100, w0], 'g')
plt.legend([p1, p2, p3, p4], ['Raw Spectrum', \
'Used Raw Spectrum', \
'Q-Corrected Spectrum', \
plt.loglog([Fc, Fc], [w0 / 100, w0], 'g')
plt.legend([p1, p2, p3, p4], ['Raw Spectrum',
'Used Raw Spectrum',
'Q-Corrected Spectrum',
'Fit to Spectrum'])
plt.title('Source Spectrum from P Pulse, w0=%e m/Hz, fc=%6.2f Hz' \
% (w0, fc))
% (w0, Fc))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude [m/Hz]')
plt.grid()
if iplot == 3:
return w0, Fc, plt
plt.show()
try:
input()
except SyntaxError:
pass
plt.close(f1)
return w0, fc
return w0, Fc
def synthsourcespec(f, omega0, fcorner):
@ -701,13 +804,14 @@ def fitSourceModel(f, S, fc0, iplot, verbosity=False):
# check difference of il and ir in order to
# keep calculation time acceptable
idiff = ir - il
if idiff > 10000:
if idiff > 100000:
increment = 1000
elif idiff <= 100000 and idiff > 10000:
increment = 100
elif idiff <= 20:
increment = 1
else:
increment = 10
for i in range(il, ir, increment):
FC = f[i]
indexdc = np.where((f > 0) & (f <= FC))
@ -725,37 +829,43 @@ def fitSourceModel(f, S, fc0, iplot, verbosity=False):
# get best found w0 anf fc from minimum
if len(STD) > 0:
fc = fc[np.argmin(STD)]
Fc = fc[np.argmin(STD)]
w0 = w0[np.argmin(STD)]
elif len(STD) == 0:
fc = fc0
Fc = fc0
w0 = max(S)
if verbosity:
print(
"fitSourceModel: best fc: {0} Hz, best w0: {1} m/Hz".format(fc, w0))
if iplot > 1:
"fitSourceModel: best fc: {0} Hz, best w0: {1} m/Hz".format(Fc, w0))
if iplot >= 1:
plt.figure() # iplot)
plt.loglog(f, S, 'k')
plt.loglog([f[0], fc], [w0, w0], 'g')
plt.loglog([fc, fc], [w0 / 100, w0], 'g')
plt.loglog([f[0], Fc], [w0, w0], 'g')
plt.loglog([Fc, Fc], [w0 / 100, w0], 'g')
plt.title('Calculated Source Spectrum, Omega0=%e m/Hz, fc=%6.2f Hz' \
% (w0, fc))
% (w0, Fc))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude [m/Hz]')
plt.grid()
if iplot == 2:
plt.figure() # iplot + 1)
plt.subplot(311)
plt.plot(f[il:ir], STD, '*')
plt.plot(fc, STD, '*')
plt.title('Common Standard Deviations')
plt.xticks([])
plt.subplot(312)
plt.plot(f[il:ir], stdw0, '*')
plt.plot(fc, stdw0, '*')
plt.title('Standard Deviations of w0-Values')
plt.xticks([])
plt.subplot(313)
plt.plot(f[il:ir], stdfc, '*')
plt.plot(fc, stdfc, '*')
plt.title('Standard Deviations of Corner Frequencies')
plt.xlabel('Corner Frequencies [Hz]')
plt.show()
try:
input()
except SyntaxError:
pass
plt.close()
return w0, fc
return w0, Fc

View File

@ -1,240 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Created August/September 2015.
:author: Ludger Küperkoch / MAGS2 EP3 working group
"""
import matplotlib.pyplot as plt
import numpy as np
from obspy.core import Stream
from pylot.core.pick.utils import getsignalwin
from scipy.optimize import curve_fit
class Magnitude(object):
'''
Superclass for calculating Wood-Anderson peak-to-peak
amplitudes, local magnitudes and moment magnitudes.
'''
def __init__(self, wfstream, To, pwin, iplot):
'''
:param: wfstream
:type: `~obspy.core.stream.Stream
:param: To, onset time, P- or S phase
:type: float
:param: pwin, pick window [To To+pwin] to get maximum
peak-to-peak amplitude (WApp) or to calculate
source spectrum (DCfc)
:type: float
:param: iplot, no. of figure window for plotting interims results
:type: integer
'''
assert isinstance(wfstream, Stream), "%s is not a stream object" % str(wfstream)
self.setwfstream(wfstream)
self.setTo(To)
self.setpwin(pwin)
self.setiplot(iplot)
self.calcwapp()
self.calcsourcespec()
def getwfstream(self):
return self.wfstream
def setwfstream(self, wfstream):
self.wfstream = wfstream
def getTo(self):
return self.To
def setTo(self, To):
self.To = To
def getpwin(self):
return self.pwin
def setpwin(self, pwin):
self.pwin = pwin
def getiplot(self):
return self.iplot
def setiplot(self, iplot):
self.iplot = iplot
def getwapp(self):
return self.wapp
def getw0(self):
return self.w0
def getfc(self):
return self.fc
def calcwapp(self):
self.wapp = None
def calcsourcespec(self):
self.sourcespek = None
class WApp(Magnitude):
'''
Method to derive peak-to-peak amplitude as seen on a Wood-Anderson-
seismograph. Has to be derived from instrument corrected traces!
'''
def calcwapp(self):
print("Getting Wood-Anderson peak-to-peak amplitude ...")
print("Simulating Wood-Anderson seismograph ...")
self.wapp = None
stream = self.getwfstream()
# poles, zeros and sensitivity of WA seismograph
# (see Uhrhammer & Collins, 1990, BSSA, pp. 702-716)
paz_wa = {
'poles': [5.6089 - 5.4978j, -5.6089 - 5.4978j],
'zeros': [0j, 0j],
'gain': 2080,
'sensitivity': 1}
stream.simulate(paz_remove=None, paz_simulate=paz_wa)
trH1 = stream[0].data
trH2 = stream[1].data
ilen = min([len(trH1), len(trH2)])
# get RMS of both horizontal components
sqH = np.sqrt(np.power(trH1[0:ilen], 2) + np.power(trH2[0:ilen], 2))
# get time array
th = np.arange(0, len(sqH) * stream[0].stats.delta, stream[0].stats.delta)
# get maximum peak within pick window
iwin = getsignalwin(th, self.getTo(), self.getpwin())
self.wapp = np.max(sqH[iwin])
print("Determined Wood-Anderson peak-to-peak amplitude: %f mm") % self.wapp
if self.getiplot() > 1:
stream.plot()
f = plt.figure(2)
plt.plot(th, sqH)
plt.plot(th[iwin], sqH[iwin], 'g')
plt.plot([self.getTo(), self.getTo()], [0, max(sqH)], 'r', linewidth=2)
plt.title('Station %s, RMS Horizontal Traces, WA-peak-to-peak=%4.1f mm' \
% (stream[0].stats.station, self.wapp))
plt.xlabel('Time [s]')
plt.ylabel('Displacement [mm]')
plt.show()
raw_input()
plt.close(f)
class DCfc(Magnitude):
'''
Method to calculate the source spectrum and to derive from that the plateau
(so-called DC-value) and the corner frequency assuming Aki's omega-square
source model. Has to be derived from instrument corrected displacement traces!
'''
def calcsourcespec(self):
print("Calculating source spectrum ....")
self.w0 = None # DC-value
self.fc = None # corner frequency
stream = self.getwfstream()
tr = stream[0]
# get time array
t = np.arange(0, len(tr) * tr.stats.delta, tr.stats.delta)
iwin = getsignalwin(t, self.getTo(), self.getpwin())
xdat = tr.data[iwin]
# fft
fny = tr.stats.sampling_rate / 2
l = len(xdat) / tr.stats.sampling_rate
n = tr.stats.sampling_rate * l # number of fft bins after Bath
# find next power of 2 of data length
m = pow(2, np.ceil(np.log(len(xdat)) / np.log(2)))
N = int(np.power(m, 2))
y = tr.stats.delta * np.fft.fft(xdat, N)
Y = abs(y[: N / 2])
L = (N - 1) / tr.stats.sampling_rate
f = np.arange(0, fny, 1 / L)
# remove zero-frequency and frequencies above
# corner frequency of seismometer (assumed
# to be 100 Hz)
fi = np.where((f >= 1) & (f < 100))
F = f[fi]
YY = Y[fi]
# get plateau (DC value) and corner frequency
# initial guess of plateau
DCin = np.mean(YY[0:100])
# initial guess of corner frequency
# where spectral level reached 50% of flat level
iin = np.where(YY >= 0.5 * DCin)
Fcin = F[iin[0][np.size(iin) - 1]]
fit = synthsourcespec(F, DCin, Fcin)
[optspecfit, pcov] = curve_fit(synthsourcespec, F, YY.real, [DCin, Fcin])
self.w0 = optspecfit[0]
self.fc = optspecfit[1]
print("DCfc: Determined DC-value: %e m/Hz, \n" \
"Determined corner frequency: %f Hz" % (self.w0, self.fc))
# if self.getiplot() > 1:
iplot = 2
if iplot > 1:
print("DCfc: Determined DC-value: %e m/Hz, \n"
"Determined corner frequency: %f Hz" % (self.w0, self.fc))
if self.getiplot() > 1:
f1 = plt.figure()
plt.subplot(2, 1, 1)
# show displacement in mm
plt.plot(t, np.multiply(tr, 1000), 'k')
plt.plot(t[iwin], np.multiply(xdat, 1000), 'g')
plt.title('Seismogram and P pulse, station %s' % tr.stats.station)
plt.xlabel('Time since %s' % tr.stats.starttime)
plt.ylabel('Displacement [mm]')
plt.subplot(2, 1, 2)
plt.loglog(f, Y.real, 'k')
plt.loglog(F, YY.real)
plt.loglog(F, fit, 'g')
plt.title('Source Spectrum from P Pulse, DC=%e m/Hz, fc=%4.1f Hz' \
% (self.w0, self.fc))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude [m/Hz]')
plt.grid()
plt.show()
raw_input()
plt.close(f1)
def synthsourcespec(f, omega0, fcorner):
'''
Calculates synthetic source spectrum from given plateau and corner
frequency assuming Akis omega-square model.
:param: f, frequencies
:type: array
:param: omega0, DC-value (plateau) of source spectrum
:type: float
:param: fcorner, corner frequency of source spectrum
:type: float
'''
# ssp = omega0 / (pow(2, (1 + f / fcorner)))
ssp = omega0 / (1 + pow(2, (f / fcorner)))
return ssp

View File

@ -2,35 +2,51 @@
# -*- coding: utf-8 -*-
import copy
import logging
import os
from PySide2.QtWidgets import QMessageBox
from obspy import read_events
from obspy.core import read, Stream, UTCDateTime
from obspy.core.event import Event as ObsPyEvent
from obspy.io.sac import SacIOError
import pylot.core.loc.focmec as focmec
import pylot.core.loc.hypodd as hypodd
import pylot.core.loc.velest as velest
from pylot.core.io.phases import readPILOTEvent, picks_from_picksdict, \
picksdict_from_pilot, merge_picks
picksdict_from_pilot, merge_picks, PylotParameter
from pylot.core.util.errors import FormatError, OverwriteError
from pylot.core.util.event import Event
from pylot.core.util.utils import fnConstructor, full_range
import pylot.core.loc.velest as velest
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT
from pylot.core.util.utils import fnConstructor, full_range, check4rotated, \
check_for_gaps_and_merge, trim_station_components, check_for_nan
class Data(object):
"""
Data container with attributes wfdata holding ~obspy.core.stream.
:type parent: PySide.QtGui.QWidget object, optional
:param parent: A PySide.QtGui.QWidget object utilized when
called by a GUI to display a PySide.QtGui.QMessageBox instead of printing
:type parent: PySide2.QtWidgets.QWidget object, optional
:param parent: A PySide2.QtWidgets.QWidget object utilized when
called by a GUI to display a PySide2.QtWidgets.QMessageBox instead of printing
to standard out.
:type evtdata: ~obspy.core.event.Event object, optional
:param evtdata ~obspy.core.event.Event object containing all derived or
loaded event. Container object holding, e.g. phase arrivals, etc.
"""
def __init__(self, parent=None, evtdata=None):
def __init__(self, parent=None, evtdata=None, picking_parameter=None):
self._parent = parent
if not picking_parameter:
if hasattr(parent, '_inputs'):
picking_parameter = parent._inputs
else:
logging.warning('No picking parameters found! Using default input parameters!!!')
picking_parameter = PylotParameter()
self.picking_parameter = picking_parameter
if self.getParent():
self.comp = parent.getComponent()
else:
@ -45,7 +61,7 @@ class Data(object):
elif isinstance(evtdata, str):
try:
cat = read_events(evtdata)
if len(cat) is not 1:
if len(cat) != 1:
raise ValueError('ambiguous event information for file: '
'{file}'.format(file=evtdata))
evtdata = cat[0]
@ -58,7 +74,9 @@ class Data(object):
elif 'LOC' in evtdata:
raise NotImplementedError('PILOT location information '
'read support not yet '
'implemeted.')
'implemented.')
elif 'event.pkl' in evtdata:
evtdata = qml_from_obspyDMT(evtdata)
else:
raise e
else:
@ -71,6 +89,7 @@ class Data(object):
self.wforiginal = None
self.cuttimes = None
self.dirty = False
self.processed = None
def __str__(self):
return str(self.wfdata)
@ -82,9 +101,17 @@ class Data(object):
if other.isNew() and not self.isNew():
picks_to_add = other.get_evt_data().picks
old_picks = self.get_evt_data().picks
for pick in picks_to_add:
if pick not in old_picks:
old_picks.append(pick)
wf_ids_old = [pick.waveform_id for pick in old_picks]
for new_pick in picks_to_add:
wf_id = new_pick.waveform_id
if wf_id in wf_ids_old:
for old_pick in old_picks:
comparison = [old_pick.waveform_id == new_pick.waveform_id,
old_pick.phase_hint == new_pick.phase_hint,
old_pick.method_id == new_pick.method_id]
if all(comparison):
del (old_pick)
old_picks.append(new_pick)
elif not other.isNew() and self.isNew():
new = other + self
self.evtdata = new.get_evt_data()
@ -99,6 +126,11 @@ class Data(object):
return self
def getPicksStr(self):
"""
Return picks in event data
:return: picks seperated by newlines
:rtype: str
"""
picks_str = ''
for pick in self.get_evt_data().picks:
picks_str += str(pick) + '\n'
@ -106,18 +138,11 @@ class Data(object):
def getParent(self):
"""
:return:
Get PySide.QtGui.QWidget parent object
"""
return self._parent
def isNew(self):
"""
:return:
"""
return self._new
def setNew(self):
@ -125,9 +150,9 @@ class Data(object):
def getCutTimes(self):
"""
:return:
Returns earliest start and latest end of all waveform data
:return: minimum start time and maximum end time as a tuple
:rtype: (UTCDateTime, UTCDateTime)
"""
if self.cuttimes is None:
self.updateCutTimes()
@ -135,22 +160,33 @@ class Data(object):
def updateCutTimes(self):
"""
Update cuttimes to contain earliest start and latest end time
of all waveform data
:rtype: None
"""
self.cuttimes = full_range(self.getWFData())
def getEventFileName(self):
"""
:return:
"""
ID = self.getID()
# handle forbidden filenames especially on windows systems
return fnConstructor(str(ID))
def checkEvent(self, event, fcheck, forceOverwrite=False):
"""
Check information in supplied event and own event and replace with own
information if no other information are given or forced by forceOverwrite
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param fcheck: check and delete existing information
can be a str or a list of strings of ['manual', 'auto', 'origin', 'magnitude']
:type fcheck: str, [str]
:param forceOverwrite: Set to true to force overwrite own information. If false,
supplied information from event is only used if there is no own information in that
category (given in fcheck: manual, auto, origin, magnitude)
:type forceOverwrite: bool
:return:
:rtype: None
"""
if 'origin' in fcheck:
self.replaceOrigin(event, forceOverwrite)
if 'magnitude' in fcheck:
@ -161,26 +197,63 @@ class Data(object):
self.replacePicks(event, 'manual')
def replaceOrigin(self, event, forceOverwrite=False):
"""
Replace own origin with the one supplied in event if own origin is not
existing or forced by forceOverwrite = True
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param forceOverwrite: always replace own information with supplied one if true
:type forceOverwrite: bool
:return:
:rtype: None
"""
if self.get_evt_data().origins or forceOverwrite:
if event.origins:
print("Found origin, replace it by new origin.")
event.origins = self.get_evt_data().origins
def replaceMagnitude(self, event, forceOverwrite=False):
"""
Replace own magnitude with the one supplied in event if own magnitude is not
existing or forced by forceOverwrite = True
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param forceOverwrite: always replace own information with supplied one if true
:type forceOverwrite: bool
:return:
:rtype: None
"""
if self.get_evt_data().magnitudes or forceOverwrite:
if event.magnitudes:
print("Found magnitude, replace it by new magnitude")
event.magnitudes = self.get_evt_data().magnitudes
def replacePicks(self, event, picktype):
checkflag = 0
"""
Replace picks in event with own picks
:param event: Event that supplies information for comparison
:type event: pylot.core.util.event.Event
:param picktype: 'auto' or 'manual' picks
:type picktype: str
:return:
:rtype: None
"""
checkflag = 1
picks = event.picks
# remove existing picks
for j, pick in reversed(list(enumerate(picks))):
try:
if picktype in str(pick.method_id.id):
picks.pop(j)
checkflag = 1
if checkflag:
checkflag = 2
except AttributeError as e:
msg = '{}'.format(e)
print(e)
checkflag = 0
if checkflag > 0:
if checkflag == 1:
print("Write new %s picks to catalog." % picktype)
if checkflag == 2:
print("Found %s pick(s), remove them and append new picks to catalog." % picktype)
# append new picks
@ -189,15 +262,14 @@ class Data(object):
picks.append(pick)
def exportEvent(self, fnout, fnext='.xml', fcheck='auto', upperErrors=None):
"""
Export event to file
:param fnout: basename of file
:param fnext: file extension
:param fnext: file extensions xml, cnv, obs, focmec, or/and pha
:param fcheck: check and delete existing information
can be a str or a list of strings of ['manual', 'auto', 'origin', 'magnitude']
"""
from pylot.core.util.defaults import OUTPUTFORMATS
if not type(fcheck) == list:
fcheck = [fcheck]
@ -208,6 +280,13 @@ class Data(object):
'supported'.format(e, fnext)
raise FormatError(errmsg)
if hasattr(self.get_evt_data(), 'notes'):
try:
with open(os.path.join(os.path.dirname(fnout), 'notes.txt'), 'w') as notes_file:
notes_file.write(self.get_evt_data().notes)
except Exception as e:
print('Warning: Could not save notes.txt: ', str(e))
# check for already existing xml-file
if fnext == '.xml':
if os.path.isfile(fnout + fnext):
@ -219,11 +298,13 @@ class Data(object):
raise IOError('No event information in file {}'.format(fnout + fnext))
event = cat[0]
if not event.resource_id == self.get_evt_data().resource_id:
raise IOError("Missmatching event resource id's: {} and {}".format(event.resource_id,
self.get_evt_data().resource_id))
QMessageBox.warning(self, 'Warning', 'Different resource IDs!')
return
self.checkEvent(event, fcheck)
self.setEvtData(event)
self.get_evt_data().write(fnout + fnext, format=evtformat)
# try exporting event
else:
evtdata_org = self.get_evt_data()
@ -240,44 +321,70 @@ class Data(object):
mstation = picks[i].waveform_id.station_code
mstation_ext = mstation + '_'
for k in range(len(picks_copy)):
if ((picks_copy[k].waveform_id.station_code == mstation) or \
if ((picks_copy[k].waveform_id.station_code == mstation) or
(picks_copy[k].waveform_id.station_code == mstation_ext)) and \
(picks_copy[k].method_id == 'auto'):
del picks_copy[k]
break
lendiff = len(picks) - len(picks_copy)
if lendiff is not 0:
if lendiff != 0:
print("Manual as well as automatic picks available. Preferred the {} manual ones!".format(lendiff))
no_uncertainties_p = []
no_uncertainties_s = []
if upperErrors:
# check for pick uncertainties exceeding adjusted upper errors
# Picks with larger uncertainties will not be saved in output file!
for j in range(len(picks)):
for i in range(len(picks_copy)):
if picks_copy[i].phase_hint[0] == 'P':
if (picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[0]) or \
(picks_copy[i].time_errors['uncertainty'] == None):
# Skipping pick if no upper_uncertainty is found and warning user
if picks_copy[i].time_errors['upper_uncertainty'] is None:
#print("{1} P-Pick of station {0} does not have upper_uncertainty and cant be checked".format(
# picks_copy[i].waveform_id.station_code,
# picks_copy[i].method_id))
if not picks_copy[i].waveform_id.station_code in no_uncertainties_p:
no_uncertainties_p.append(picks_copy[i].waveform_id.station_code)
continue
#print ("checking for upper_uncertainty")
if (picks_copy[i].time_errors['uncertainty'] is None) or \
(picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[0]):
print("Uncertainty exceeds or equals adjusted upper time error!")
print("Adjusted uncertainty: {}".format(upperErrors[0]))
print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty']))
print("{1} P-Pick of station {0} will not be saved in outputfile".format(
picks_copy[i].waveform_id.station_code,
picks_copy[i].method_id))
print("#")
del picks_copy[i]
break
if picks_copy[i].phase_hint[0] == 'S':
if (picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[1]) or \
(picks_copy[i].time_errors['uncertainty'] == None):
# Skipping pick if no upper_uncertainty is found and warning user
if picks_copy[i].time_errors['upper_uncertainty'] is None:
#print("{1} S-Pick of station {0} does not have upper_uncertainty and cant be checked".format(
#picks_copy[i].waveform_id.station_code,
#picks_copy[i].method_id))
if not picks_copy[i].waveform_id.station_code in no_uncertainties_s:
no_uncertainties_s.append(picks_copy[i].waveform_id.station_code)
continue
if (picks_copy[i].time_errors['uncertainty'] is None) or \
(picks_copy[i].time_errors['upper_uncertainty'] >= upperErrors[1]):
print("Uncertainty exceeds or equals adjusted upper time error!")
print("Adjusted uncertainty: {}".format(upperErrors[1]))
print("Pick uncertainty: {}".format(picks_copy[i].time_errors['uncertainty']))
print("{1} S-Pick of station {0} will not be saved in outputfile".format(
picks_copy[i].waveform_id.station_code,
picks_copy[i].method_id))
print("#")
del picks_copy[i]
break
for s in no_uncertainties_p:
print("P-Pick of station {0} does not have upper_uncertainty and can't be checked".format(s))
for s in no_uncertainties_s:
print("S-Pick of station {0} does not have upper_uncertainty and can't be checked".format(s))
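The two nearly identical P/S branches above implement one filter: drop picks whose upper uncertainty reaches the phase-specific limit, keep unverifiable picks, and collect their stations for a single summary warning per phase. Condensed into a standalone sketch (picks as plain dicts and the upper errors as a per-phase mapping, both assumptions for illustration):

```python
def filter_picks_by_uncertainty(picks, upper_errors):
    """Keep only picks whose upper_uncertainty stays below the per-phase limit.

    Picks without an upper_uncertainty cannot be checked; they are kept and
    their station is collected so one summary warning per phase can be printed.
    """
    kept = []
    unchecked = {'P': set(), 'S': set()}
    for pick in picks:
        phase = pick['phase_hint'][0]
        if pick.get('upper_uncertainty') is None:
            unchecked[phase].add(pick['station'])
            kept.append(pick)
            continue
        if pick['upper_uncertainty'] >= upper_errors[phase]:
            continue  # uncertainty exceeds the adjusted upper error: drop pick
        kept.append(pick)
    return kept, unchecked
```

With limits `{'P': 0.1, 'S': 0.2}`, a P pick at 0.05 s survives, one at 0.5 s is dropped, and an S pick without an uncertainty is kept but reported as unchecked.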
if fnext == '.obs':
try:
@@ -287,6 +394,14 @@ class Data(object):
header = '# EQEVENT: Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' % evid
nllocfile = open(fnout + fnext)
l = nllocfile.readlines()
# Adding A0/Generic Amplitude to .obs file
# l2 = []
# for li in l:
# for amp in evtdata_org.amplitudes:
# if amp.waveform_id.station_code == li[0:5].strip():
# li = li[0:64] + '{:0.2e}'.format(amp.generic_amplitude) + li[73:-1] + '\n'
# l2.append(li)
# l = l2
nllocfile.close()
l.insert(0, header)
nllocfile = open(fnout + fnext, 'w')
@@ -297,24 +412,32 @@ class Data(object):
not implemented: {1}'''.format(evtformat, e))
if fnext == '.cnv':
try:
velest.export(picks_copy, fnout + fnext, eventinfo=self.get_evt_data())
velest.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data())
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
if fnext == '_focmec.in':
try:
focmec.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data())
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
if fnext == '.pha':
try:
hypodd.export(picks_copy, fnout + fnext, self.picking_parameter, eventinfo=self.get_evt_data())
except KeyError as e:
raise KeyError('''{0} export format
not implemented: {1}'''.format(evtformat, e))
def getComp(self):
"""
:return:
Get component (ZNE)
"""
return self.comp
def getID(self):
"""
:return:
Get unique resource id
"""
try:
return self.evtdata.get('resource_id').id
@@ -323,31 +446,84 @@ class Data(object):
def filterWFData(self, kwargs):
"""
:param kwargs:
Filter waveform data
:param kwargs: arguments to pass through to filter function
"""
self.getWFData().filter(**kwargs)
data = self.getWFData()
data.detrend('linear')
data.taper(0.02, type='cosine')
data.filter(**kwargs)
self.dirty = True
def setWFData(self, fnames):
def setWFData(self, fnames, fnames_alt=None, checkRotated=False, metadata=None, tstart=0, tstop=0):
"""
Clear current waveform data and set given waveform data
:param fnames: waveform data names to append
:param fnames_alt: alternative data to show (e.g. synthetic/processed)
:type fnames: list
"""
def check_fname_exists(filenames: list) -> list:
if filenames:
filenames = [fn for fn in filenames if os.path.isfile(fn)]
return filenames
:param fnames:
"""
self.wfdata = Stream()
self.wforiginal = None
self.wf_alt = Stream()
if tstart == tstop:
tstart = tstop = None
self.tstart = tstart
self.tstop = tstop
# remove directories
fnames = check_fname_exists(fnames)
fnames_alt = check_fname_exists(fnames_alt)
# if obspy_dmt:
# wfdir = 'raw'
# self.processed = False
# for fname in fnames:
# if fname.endswith('processed'):
# wfdir = 'processed'
# self.processed = True
# break
# for fpath in fnames:
# if fpath.endswith(wfdir):
# wffnames = [os.path.join(fpath, fname) for fname in os.listdir(fpath)]
# if 'syngine' in fpath.split('/')[-1]:
# wffnames_syn = [os.path.join(fpath, fname) for fname in os.listdir(fpath)]
# else:
# wffnames = fnames
if fnames is not None:
self.appendWFData(fnames)
if fnames_alt is not None:
self.appendWFData(fnames_alt, alternative=True)
else:
return False
# various pre-processing steps:
# remove possible underscores in station names
# self.wfdata = remove_underscores(self.wfdata)
# check for gaps and merge
self.wfdata, _ = check_for_gaps_and_merge(self.wfdata)
# check for nans
check_for_nan(self.wfdata)
# check for stations with rotated components
if checkRotated and metadata is not None:
self.wfdata = check4rotated(self.wfdata, metadata, verbosity=0)
# trim station components to same start value
trim_station_components(self.wfdata, trim_start=True, trim_end=False)
# make a copy of original data
self.wforiginal = self.getWFData().copy()
self.dirty = False
return True
def appendWFData(self, fnames):
def appendWFData(self, fnames, alternative=False):
"""
:param fnames:
Read waveform data from fnames and append it to current wf data
:param fnames: waveform data to append
:type fnames: list
"""
assert isinstance(fnames, list), "input parameter 'fnames' is " \
"supposed to be of type 'list' " \
@@ -356,70 +532,71 @@ class Data(object):
if self.dirty:
self.resetWFData()
orig_or_alternative_data = {True: self.wf_alt,
False: self.wfdata}
warnmsg = ''
for fname in fnames:
for fname in set(fnames):
try:
self.wfdata += read(fname)
orig_or_alternative_data[alternative] += read(fname, starttime=self.tstart, endtime=self.tstop)
except TypeError:
try:
self.wfdata += read(fname, format='GSE2')
orig_or_alternative_data[alternative] += read(fname, format='GSE2', starttime=self.tstart, endtime=self.tstop)
except Exception as e:
try:
orig_or_alternative_data[alternative] += read(fname, format='SEGY', starttime=self.tstart,
endtime=self.tstop)
except Exception as e:
warnmsg += '{0}\n{1}\n'.format(fname, e)
except SacIOError as se:
warnmsg += '{0}\n{1}\n'.format(fname, se)
if warnmsg:
warnmsg = 'WARNING: unable to read\n' + warnmsg
warnmsg = 'WARNING in appendWFData: unable to read waveform data\n' + warnmsg
print(warnmsg)
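appendWFData above tries a plain read, then GSE2, then SEGY before giving up on a file and recording a warning. That cascade generalizes to a list of reader callables tried in order; the readers below are stand-ins for illustration, not obspy's `read`:

```python
def read_with_fallback(fname, readers, warnmsgs):
    """Try each reader in order; record one warning only if all of them fail."""
    last_err = None
    for reader in readers:
        try:
            return reader(fname)
        except Exception as err:
            last_err = err
    warnmsgs.append('{0}\n{1}\n'.format(fname, last_err))
    return None


def read_default(fname):
    # stand-in for a strict first-choice reader that rejects the format
    raise TypeError('unknown format: ' + fname)


def read_gse2(fname):
    # stand-in for a more permissive fallback reader
    return 'GSE2 stream from ' + fname
```

`read_with_fallback('x.gse', [read_default, read_gse2], msgs)` returns the fallback reader's result and leaves `msgs` empty; only when every reader raises is a warning appended, matching the nested try/except chain above.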
def getWFData(self):
"""
:return:
"""
return self.wfdata
def getOriginalWFData(self):
"""
:return:
"""
return self.wforiginal
def getAltWFdata(self):
return self.wf_alt
def resetWFData(self):
"""
Set waveform data to original waveform data
"""
if self.getOriginalWFData():
self.wfdata = self.getOriginalWFData().copy()
else:
self.wfdata = Stream()
self.dirty = False
def resetPicks(self):
"""
Clear all picks from event
"""
self.get_evt_data().picks = []
def get_evt_data(self):
"""
:return:
"""
return self.evtdata
def setEvtData(self, event):
self.evtdata = event
def applyEVTData(self, data, typ='pick', authority_id='rub'):
def applyEVTData(self, data, typ='pick'):
"""
:param data:
:param typ:
:param authority_id:
Either takes an `obspy.core.event.Event` object and applies all new
information on the event to the actual data if typ is 'event', or
creates ObsPy pick objects and appends them to the picks list from the
PyLoT dictionary containing all picks if typ is 'pick'
:param data: data to apply, either picks or complete event
:type data:
:param typ: which event data to apply, 'pick' or 'event'
:type typ: str
:param authority_id: (currently unused)
:type: str
:raise OverwriteError:
"""
@@ -435,14 +612,15 @@ class Data(object):
# check for automatic picks
print("Writing phases to ObsPy-quakeml file")
for key in picks:
if not picks[key].get('P'):
continue
if picks[key]['P']['picker'] == 'auto':
print("Existing picks will be overwritten!")
print("Existing auto-picks will be overwritten in pick-dictionary!")
picks = picks_from_picksdict(picks)
break
else:
if self.get_evt_data().picks:
raise OverwriteError('Existing picks would be overwritten!')
break
else:
picks = picks_from_picksdict(picks)
break
@@ -459,6 +637,9 @@ class Data(object):
information on the event to the actual data
:param event:
"""
if event is None:
print("applyEvent: Received None")
return
if self.isNew():
self.setEvtData(event)
else:
@@ -655,8 +836,24 @@ class PilotDataStructure(GenericDataStructure):
def __init__(self, **fields):
if not fields:
fields = {'database': '2006.01',
'root': '/data/Egelados/EVENT_DATA/LOCAL'}
fields = {'database': '',
'root': ''}
GenericDataStructure.__init__(self, **fields)
self.setExpandFields(['root', 'database'])
class ObspyDMTdataStructure(GenericDataStructure):
"""
Object containing the data access information for the obspyDMT data
structure.
"""
def __init__(self, **fields):
if not fields:
fields = {'database': '',
'root': ''}
GenericDataStructure.__init__(self, **fields)


@@ -1,35 +1,30 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
defaults = {'rootpath': {'type': str,
'tooltip': 'project path',
'value': '',
'namestring': 'Root path'},
"""
Default parameters used for picking
"""
'datapath': {'type': str,
'tooltip': 'data path',
defaults = {'datapath': {'type': str,
'tooltip': 'path to eventfolders',
'value': '',
'namestring': 'Data path'},
'database': {'type': str,
'tooltip': 'name of data base',
'value': '',
'namestring': 'Database path'},
'eventID': {'type': str,
'tooltip': 'event ID for single event processing (* for all events found in database)',
'value': '',
'tooltip': 'event ID for single event processing (* for all events found in datapath)',
'value': '*',
'namestring': 'Event ID'},
'extent': {'type': str,
'tooltip': 'extent of array ("local", "regional" or "global")',
'tooltip': 'extent of array ("active", "local", "regional" or "global")',
'value': 'local',
'namestring': 'Array extent'},
'invdir': {'type': str,
'tooltip': 'full path to inventory or dataless-seed file',
'value': '',
'namestring': 'Inversion dir'},
'namestring': 'Inventory directory'},
'datastructure': {'type': str,
'tooltip': 'choose data structure',
@@ -58,7 +53,7 @@ defaults = {'rootpath': {'type': str,
'ctrfile': {'type': str,
'tooltip': 'name of autoPyLoT-output control file for NLLoc',
'value': 'Insheim_min1d2015_auto.in',
'value': '',
'namestring': 'Control filename'},
'ttpatter': {'type': str,
@@ -74,11 +69,15 @@ defaults = {'rootpath': {'type': str,
'vp': {'type': float,
'tooltip': 'average P-wave velocity',
'value': 3530.,
'min': 0.,
'max': np.inf,
'namestring': 'P-velocity'},
'rho': {'type': float,
'tooltip': 'average rock density [kg/m^3]',
'value': 2500.,
'min': 0.,
'max': np.inf,
'namestring': 'Density'},
'Qp': {'type': (float, float),
@@ -90,42 +89,58 @@ defaults = {'rootpath': {'type': str,
'tooltip': 'start time [s] for calculating CF for P-picking (if TauPy:'
' seconds relative to estimated onset)',
'value': 15.0,
'min': -np.inf,
'max': np.inf,
'namestring': 'P start'},
'pstop': {'type': float,
'tooltip': 'end time [s] for calculating CF for P-picking (if TauPy:'
' seconds relative to estimated onset)',
'value': 60.0,
'min': -np.inf,
'max': np.inf,
'namestring': 'P stop'},
'sstart': {'type': float,
'tooltip': 'start time [s] relative to P-onset for calculating CF for S-picking',
'value': -1.0,
'min': -np.inf,
'max': np.inf,
'namestring': 'S start'},
'sstop': {'type': float,
'tooltip': 'end time [s] after P-onset for calculating CF for S-picking',
'value': 10.0,
'min': -np.inf,
'max': np.inf,
'namestring': 'S stop'},
'bpz1': {'type': (float, float),
'tooltip': 'lower/upper corner freq. of first band pass filter Z-comp. [Hz]',
'value': (2, 20),
'min': (0., 0.),
'max': (np.inf, np.inf),
'namestring': ('Z-bandpass 1', 'Lower', 'Upper')},
'bpz2': {'type': (float, float),
'tooltip': 'lower/upper corner freq. of second band pass filter Z-comp. [Hz]',
'value': (2, 30),
'min': (0., 0.),
'max': (np.inf, np.inf),
'namestring': ('Z-bandpass 2', 'Lower', 'Upper')},
'bph1': {'type': (float, float),
'tooltip': 'lower/upper corner freq. of first band pass filter H-comp. [Hz]',
'value': (2, 15),
'min': (0., 0.),
'max': (np.inf, np.inf),
'namestring': ('H-bandpass 1', 'Lower', 'Upper')},
'bph2': {'type': (float, float),
'tooltip': 'lower/upper corner freq. of second band pass filter z-comp. [Hz]',
'value': (2, 20),
'min': (0., 0.),
'max': (np.inf, np.inf),
'namestring': ('H-bandpass 2', 'Lower', 'Upper')},
'algoP': {'type': str,
@@ -136,76 +151,106 @@ defaults = {'rootpath': {'type': str,
'tlta': {'type': float,
'tooltip': 'for HOS-/AR-AIC-picker, length of LTA window [s]',
'value': 7.0,
'min': 0.,
'max': np.inf,
'namestring': 'LTA window'},
'hosorder': {'type': int,
'tooltip': 'for HOS-picker, order of Higher Order Statistics',
'value': 4,
'min': 0,
'max': np.inf,
'namestring': 'HOS order'},
'Parorder': {'type': int,
'tooltip': 'for AR-picker, order of AR process of Z-component',
'value': 2,
'min': 0,
'max': np.inf,
'namestring': 'AR order P'},
'tdet1z': {'type': float,
'tooltip': 'for AR-picker, length of AR determination window [s] for Z-component, 1st pick',
'value': 1.2,
'min': 0.,
'max': np.inf,
'namestring': 'AR det. window Z 1'},
'tpred1z': {'type': float,
'tooltip': 'for AR-picker, length of AR prediction window [s] for Z-component, 1st pick',
'value': 0.4,
'min': 0.,
'max': np.inf,
'namestring': 'AR pred. window Z 1'},
'tdet2z': {'type': float,
'tooltip': 'for AR-picker, length of AR determination window [s] for Z-component, 2nd pick',
'value': 0.6,
'min': 0.,
'max': np.inf,
'namestring': 'AR det. window Z 2'},
'tpred2z': {'type': float,
'tooltip': 'for AR-picker, length of AR prediction window [s] for Z-component, 2nd pick',
'value': 0.2,
'min': 0.,
'max': np.inf,
'namestring': 'AR pred. window Z 2'},
'addnoise': {'type': float,
'tooltip': 'add noise to seismogram for stable AR prediction',
'value': 0.001,
'min': 0.,
'max': np.inf,
'namestring': 'Add noise'},
'tsnrz': {'type': (float, float, float, float),
'tooltip': 'for HOS/AR, window lengths for SNR- and slope estimation [tnoise, tsafety, tsignal, tslope] [s]',
'value': (3, 0.1, 0.5, 1.0),
'min': (0., 0., 0., 0.),
'max': (np.inf, np.inf, np.inf, np.inf),
'namestring': ('SNR windows P', 'Noise', 'Safety', 'Signal', 'Slope')},
'pickwinP': {'type': float,
'tooltip': 'for initial AIC pick, length of P-pick window [s]',
'value': 3.0,
'min': 0.,
'max': np.inf,
'namestring': 'AIC window P'},
'Precalcwin': {'type': float,
'tooltip': 'for HOS/AR, window length [s] for recalculation of CF (relative to 1st pick)',
'value': 6.0,
'min': 0.,
'max': np.inf,
'namestring': 'Recal. window P'},
'aictsmooth': {'type': float,
'tooltip': 'for HOS/AR, take average of samples for smoothing of AIC-function [s]',
'value': 0.2,
'min': 0.,
'max': np.inf,
'namestring': 'AIC smooth P'},
'tsmoothP': {'type': float,
'tooltip': 'for HOS/AR, take average of samples for smoothing CF [s]',
'tooltip': 'for HOS/AR, take average of samples in this time window for smoothing CF [s]',
'value': 0.1,
'min': 0.,
'max': np.inf,
'namestring': 'CF smooth P'},
'ausP': {'type': float,
'tooltip': 'for HOS/AR, artificial uplift of samples (aus) of CF (P)',
'value': 0.001,
'min': 0.,
'max': np.inf,
'namestring': 'Artificial uplift P'},
'nfacP': {'type': float,
'tooltip': 'for HOS/AR, noise factor for noise level determination (P)',
'value': 1.3,
'min': 0.,
'max': np.inf,
'namestring': 'Noise factor P'},
'algoS': {'type': str,
@@ -216,61 +261,85 @@ defaults = {'rootpath': {'type': str,
'tdet1h': {'type': float,
'tooltip': 'for HOS/AR, length of AR-determination window [s], H-components, 1st pick',
'value': 0.8,
'min': 0.,
'max': np.inf,
'namestring': 'AR det. window H 1'},
'tpred1h': {'type': float,
'tooltip': 'for HOS/AR, length of AR-prediction window [s], H-components, 1st pick',
'value': 0.4,
'min': 0.,
'max': np.inf,
'namestring': 'AR pred. window H 1'},
'tdet2h': {'type': float,
'tooltip': 'for HOS/AR, length of AR-determination window [s], H-components, 2nd pick',
'value': 0.6,
'min': 0.,
'max': np.inf,
'namestring': 'AR det. window H 2'},
'tpred2h': {'type': float,
'tooltip': 'for HOS/AR, length of AR-prediction window [s], H-components, 2nd pick',
'value': 0.3,
'min': 0.,
'max': np.inf,
'namestring': 'AR pred. window H 2'},
'Sarorder': {'type': int,
'tooltip': 'for AR-picker, order of AR process of H-components',
'value': 4,
'min': 0,
'max': np.inf,
'namestring': 'AR order S'},
'Srecalcwin': {'type': float,
'tooltip': 'for AR-picker, window length [s] for recalculation of CF (2nd pick) (H)',
'value': 5.0,
'min': 0.,
'max': np.inf,
'namestring': 'Recal. window S'},
'pickwinS': {'type': float,
'tooltip': 'for initial AIC pick, length of S-pick window [s]',
'value': 3.0,
'min': 0.,
'max': np.inf,
'namestring': 'AIC window S'},
'tsnrh': {'type': (float, float, float, float),
'tooltip': 'for ARH/AR3, window lengths for SNR- and slope estimation [tnoise, tsafety, tsignal, tslope] [s]',
'value': (2, 0.2, 1.5, 0.5),
'min': (0., 0., 0., 0.),
'max': (np.inf, np.inf, np.inf, np.inf),
'namestring': ('SNR windows S', 'Noise', 'Safety', 'Signal', 'Slope')},
'aictsmoothS': {'type': float,
'tooltip': 'for AIC-picker, take average of samples for smoothing of AIC-function [s]',
'tooltip': 'for AIC-picker, take average of samples in this time window for smoothing of AIC-function [s]',
'value': 0.5,
'min': 0.,
'max': np.inf,
'namestring': 'AIC smooth S'},
'tsmoothS': {'type': float,
'tooltip': 'for AR-picker, take average of samples for smoothing CF [s] (S)',
'value': 0.7,
'min': 0.,
'max': np.inf,
'namestring': 'CF smooth S'},
'ausS': {'type': float,
'tooltip': 'for HOS/AR, artificial uplift of samples (aus) of CF (S)',
'value': 0.9,
'min': 0.,
'max': np.inf,
'namestring': 'Artificial uplift S'},
'nfacS': {'type': float,
'tooltip': 'for AR-picker, noise factor for noise level determination (S)',
'value': 1.5,
'min': 0.,
'max': np.inf,
'namestring': 'Noise factor S'},
'minfmweight': {'type': int,
@@ -281,103 +350,143 @@ defaults = {'rootpath': {'type': str,
'minFMSNR': {'type': float,
'tooltip': 'minimum required SNR for first-motion determination',
'value': 2.,
'min': 0.,
'max': np.inf,
'namestring': 'Min SNR'},
'fmpickwin': {'type': float,
'tooltip': 'pick window around P onset for calculating zero crossings',
'tooltip': 'pick window [s] around P onset for calculating zero crossings',
'value': 0.2,
'min': 0.,
'max': np.inf,
'namestring': 'Zero crossings window'},
'timeerrorsP': {'type': (float, float, float, float),
'tooltip': 'discrete time errors [s] corresponding to picking weights [0 1 2 3] for P',
'value': (0.01, 0.02, 0.04, 0.08),
'min': (0., 0., 0., 0.),
'max': (np.inf, np.inf, np.inf, np.inf),
'namestring': ('Time errors P', '0', '1', '2', '3')},
'timeerrorsS': {'type': (float, float, float, float),
'tooltip': 'discrete time errors [s] corresponding to picking weights [0 1 2 3] for S',
'value': (0.04, 0.08, 0.16, 0.32),
'min': (0., 0., 0., 0.),
'max': (np.inf, np.inf, np.inf, np.inf),
'namestring': ('Time errors S', '0', '1', '2', '3')},
'minAICPslope': {'type': float,
'tooltip': 'below this slope [counts/s] the initial P pick is rejected',
'value': 0.8,
'min': 0.,
'max': np.inf,
'namestring': 'Min. slope P'},
'minAICPSNR': {'type': float,
'tooltip': 'below this SNR the initial P pick is rejected',
'value': 1.1,
'min': 0.,
'max': np.inf,
'namestring': 'Min. SNR P'},
'minAICSslope': {'type': float,
'tooltip': 'below this slope [counts/s] the initial S pick is rejected',
'value': 1.,
'min': 0.,
'max': np.inf,
'namestring': 'Min. slope S'},
'minAICSSNR': {'type': float,
'tooltip': 'below this SNR the initial S pick is rejected',
'value': 1.5,
'min': 0.,
'max': np.inf,
'namestring': 'Min. SNR S'},
'minsiglength': {'type': float,
'tooltip': 'length of signal part for which amplitudes must exceed noiselevel [s]',
'value': 1.,
'min': 0.,
'max': np.inf,
'namestring': 'Min. signal length'},
'noisefactor': {'type': float,
'tooltip': 'noiselevel*noisefactor=threshold',
'value': 1.0,
'min': 0.,
'max': np.inf,
'namestring': 'Noise factor'},
'minpercent': {'type': float,
'tooltip': 'required percentage of amplitudes exceeding threshold',
'value': 10.,
'min': 0.,
'max': np.inf,
'namestring': 'Min amplitude [%]'},
'zfac': {'type': float,
'tooltip': 'P-amplitude must exceed at least zfac times RMS-S amplitude',
'value': 1.5,
'min': 0.,
'max': np.inf,
'namestring': 'Z factor'},
'mdttolerance': {'type': float,
'tooltip': 'maximum allowed deviation of P picks from median [s]',
'value': 6.0,
'min': 0.,
'max': np.inf,
'namestring': 'Median tolerance'},
'wdttolerance': {'type': float,
'tooltip': 'maximum allowed deviation from Wadati-diagram',
'value': 1.0,
'min': 0.,
'max': np.inf,
'namestring': 'Wadati tolerance'},
'jackfactor': {'type': float,
'tooltip': 'pick is removed if the variance of the subgroup with the pick removed is larger than the mean variance of all subgroups times safety factor',
'value': 5.0,
'min': 0.,
'max': np.inf,
'namestring': 'Jackknife safety factor'},
'WAscaling': {'type': (float, float, float),
'tooltip': 'Scaling relation (log(Ao)+Alog(r)+Br+C) of Wood-Anderson amplitude Ao [nm] \
If zeros are set, original Richter magnitude is calculated!',
'value': (0., 0., 0.),
'min': (-np.inf, -np.inf, -np.inf),
'max': (np.inf, np.inf, np.inf),
'namestring': ('Wood-Anderson scaling', '', '', '')},
'magscaling': {'type': (float, float),
'tooltip': 'Scaling relation for derived local magnitude [a*Ml+b]. \
If zeros are set, no scaling of network magnitude is applied!',
'value': (0., 0.),
'min': (0., -np.inf),
'max': (np.inf, np.inf),
'namestring': ('Local mag. scaling', '', '')},
'minfreq': {'type': (float, float),
'tooltip': 'Lower filter frequency [P, S]',
'value': (1.0, 1.0),
'min': (0., 0.),
'max': (np.inf, np.inf),
'namestring': ('Lower freq.', 'P', 'S')},
'maxfreq': {'type': (float, float),
'tooltip': 'Upper filter frequency [P, S]',
'value': (10.0, 10.0),
'min': (0., 0.),
'max': (np.inf, np.inf),
'namestring': ('Upper freq.', 'P', 'S')},
'filter_order': {'type': (int, int),
'tooltip': 'filter order [P, S]',
'value': (2, 2),
'min': (0, 0),
'max': (np.inf, np.inf),
'namestring': ('Order', 'P', 'S')},
'filter_type': {'type': (str, str),
@@ -391,16 +500,19 @@ defaults = {'rootpath': {'type': str,
'namestring': 'Use TauPy'},
'taup_model': {'type': str,
'tooltip': 'define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6',
'tooltip': 'Define TauPy model for traveltime estimation. Possible values: 1066a, 1066b, ak135, ak135f, herrin, iasp91, jb, prem, pwdk, sp6',
'value': 'iasp91',
'namestring': 'TauPy model'}
'namestring': 'TauPy model'},
'taup_phases': {'type': str,
'tooltip': 'Specify possible phases for TauPy (comma separated). See Obspy TauPy documentation for possible values.',
'value': 'ttall',
'namestring': 'TauPy phases'},
}
settings_main = {
'dirs': [
'rootpath',
'datapath',
'database',
'eventID',
'invdir',
'datastructure',
@@ -432,6 +544,7 @@ settings_main = {
'sstop',
'use_taup',
'taup_model',
'taup_phases',
'bpz1',
'bpz2',
'bph1',


@@ -0,0 +1,84 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Script to get event parameters from a PyLoT xml file and write
them into an eventlist.
LK, igem, 03/2021
Edited for use in PyLoT
JG, igem, 01/2022
"""
import os
import argparse
import numpy as np
import matplotlib.pyplot as plt
import glob
from obspy.core.event import read_events
from pyproj import Proj
"""
Creates an eventlist file summarizing all events found in a given folder. Only called by pressing the UI button eventlis_xml_action.
:param path: path to the root folder containing the single event folders
"""
def geteventlistfromxml(path, outpath):
p = Proj(proj='utm', zone=32, ellps='WGS84')
# open eventlist file and write header
evlist = outpath + '/eventlist'
evlistobj = open(evlist, 'w')
evlistobj.write(
'EventID Date To Lat Lon EAST NORTH Dep Ml NoP NoS RMS errH errZ Gap \n')
# data path
dp = path + "/e*/*.xml"
# list of all available xml-files
xmlnames = glob.glob(dp)
# read all onset weights
for names in xmlnames:
print("Getting location parameters from {}".format(names))
cat = read_events(names)
try:
st = cat.events[0].origins[0].time
Lat = cat.events[0].origins[0].latitude
Lon = cat.events[0].origins[0].longitude
EAST, NORTH = p(Lon, Lat)
Dep = cat.events[0].origins[0].depth / 1000
Ml = cat.events[0].magnitudes[1].mag
NoP = []
NoS = []
except IndexError:
print('Insufficient data found for event (not localised): ' + names.split('/')[-1].split('_')[-1][
:-4] + ' Skipping event for eventlist.')
continue
for i in range(len(cat.events[0].origins[0].arrivals)):
if cat.events[0].origins[0].arrivals[i].phase == 'P':
NoP.append(cat.events[0].origins[0].arrivals[i].phase)
elif cat.events[0].origins[0].arrivals[i].phase == 'S':
NoS.append(cat.events[0].origins[0].arrivals[i].phase)
# NoP = cat.events[0].origins[0].quality.used_station_count
errH = cat.events[0].origins[0].origin_uncertainty.max_horizontal_uncertainty
errZ = cat.events[0].origins[0].depth_errors.uncertainty
Gap = cat.events[0].origins[0].quality.azimuthal_gap
# evID = names.split('/')[6]
evID = names.split('/')[-1].split('_')[-1][:-4]
Date = str(st.year) + str('%02d' % st.month) + str('%02d' % st.day)
To = str('%02d' % st.hour) + str('%02d' % st.minute) + str('%02d' % st.second) + \
'.' + str('%06d' % st.microsecond)
# write into eventlist
evlistobj.write('%s %s %s %9.6f %9.6f %13.6f %13.6f %8.6f %3.1f %d %d NaN %d %d %d\n' % (evID, \
Date, To, Lat, Lon,
EAST, NORTH, Dep, Ml,
len(NoP), len(NoS),
errH, errZ, Gap))
print('Adding Event ' + names.split('/')[-1].split('_')[-1][:-4] + ' to eventlist')
print('Eventlist created and saved in: ' + outpath)
evlistobj.close()
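Per event, the loop above reduces to counting P and S arrivals and emitting one fixed-width row (RMS is not available from the xml, so it stays NaN). That formatting step isolated, with the event flattened into a plain dict (an assumption for illustration):

```python
def format_eventlist_line(ev):
    """Build one fixed-width eventlist row; RMS is not in the xml, hence NaN."""
    n_p = sum(1 for phase in ev['phases'] if phase == 'P')
    n_s = sum(1 for phase in ev['phases'] if phase == 'S')
    return ('%s %s %s %9.6f %9.6f %13.6f %13.6f %8.6f %3.1f %d %d NaN %d %d %d'
            % (ev['id'], ev['date'], ev['to'], ev['lat'], ev['lon'],
               ev['east'], ev['north'], ev['dep'], ev['ml'],
               n_p, n_s, ev['errH'], ev['errZ'], ev['gap']))
```

For an event with two P and one S arrival the row ends in `2 1 NaN <errH> <errZ> <gap>`, matching the header columns NoP, NoS, RMS, errH, errZ, Gap written at the top of the eventlist.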


@@ -1,5 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import os
from pylot.core.io import default_parameters
from pylot.core.util.errors import ParameterError
@@ -48,12 +50,19 @@ class PylotParameter(object):
self.__init_default_paras()
self.__init_subsettings()
self.__filename = fnin
self.__parameter = {}
self._verbosity = verbosity
self._parFileCont = {}
# io from parsed arguments alternatively
for key, val in kwargs.items():
self._parFileCont[key] = val
self.from_file()
# if no filename or kwargs given, use default values
if not fnin and not kwargs:
self.reset_defaults()
if fnout:
self.export2File(fnout)
@@ -70,6 +79,7 @@ class PylotParameter(object):
# Set default values of parameter names
def __init_default_paras(self):
"""set default values of parameter names"""
parameters = default_parameters.defaults
self.__defaults = parameters
@@ -86,12 +96,17 @@ class PylotParameter(object):
return bool(self.__parameter)
def __getitem__(self, key):
try:
if key in self.__parameter:
return self.__parameter[key]
except:
return None
else:
logging.warning(f'{key} not found in PylotParameter')
def __setitem__(self, key, value):
try:
value = self.check_range(value, self.__defaults[key]['max'], self.__defaults[key]['min'])
except KeyError:
# no min/max values in defaults
pass
self.__parameter[key] = value
def __delitem__(self, key):
@@ -104,15 +119,32 @@ class PylotParameter(object):
return len(self.__parameter.keys())
def iteritems(self):
"""
Iterate over parameters
:return: key, value tupel
:rtype:
"""
for key, value in self.__parameter.items():
yield key, value
def hasParam(self, parameter):
if parameter in self.__parameter.keys():
return True
return False
"""
Check if parameter is in keys
:param parameter: parameter to look for in keys
:type parameter:
:return:
:rtype: bool
"""
return parameter in self.__parameter.keys()
def get(self, *args):
"""
Get first available parameter in args
:param args:
:type args:
:return:
:rtype:
"""
try:
for param in args:
try:
@@ -128,15 +160,35 @@ class PylotParameter(object):
raise ParameterError(e)
def get_defaults(self):
"""
get default parameters
:return:
:rtype: dict
"""
return self.__defaults
def get_main_para_names(self):
"""
Get main parameter names
:return: list of keys available in parameters
:rtype:
"""
return self._settings_main
def get_special_para_names(self):
"""
Get pick parameter names
:return: list of keys available in parameters
:rtype:
"""
return self._settings_special_pick
def get_all_para_names(self):
"""
Get all parameter names
:return:
:rtype: list
"""
all_names = []
all_names += self.get_main_para_names()['dirs']
all_names += self.get_main_para_names()['nlloc']
@@ -150,7 +202,46 @@ class PylotParameter(object):
all_names += self.get_special_para_names()['quality']
return all_names
def reinit_default_parameters(self):
self.__init_default_paras()
@staticmethod
def check_range(value, max_value, min_value):
"""
Check if value is within the min/max values defined in default_parameters. Works for tuple and scalar values.
:param value: Value to be checked against min/max range
:param max_value: Maximum allowed value, tuple or scalar
:param min_value: Minimum allowed value, tuple or scalar
:return: value tuple/scalar clamped to the valid range
>>> check_range(-5, 10, 0)
0
>>> check_range((-5., 100.), (10., 10.), (0., 0.))
(0.0, 10.0)
"""
try:
# Try handling tuples by comparing their elements
comparisons = [(a > b) for a, b in zip(value, max_value)]
if True in comparisons:
value = tuple(max_value[i] if comp else value[i] for i, comp in enumerate(comparisons))
comparisons = [(a < b) for a, b in zip(value, min_value)]
if True in comparisons:
value = tuple(min_value[i] if comp else value[i] for i, comp in enumerate(comparisons))
except TypeError:
value = max(min_value, min(max_value, value))
return value
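Given the min/max entries added to the defaults above, check_range clamps both scalars and tuples element-wise. An equivalent standalone clamp, written with min/max instead of explicit comparisons (a re-implementation for illustration, not the class method itself):

```python
def check_range(value, max_value, min_value):
    """Clamp value into [min_value, max_value]; element-wise for tuples."""
    try:
        # tuple case: zip raises TypeError immediately for scalar arguments
        return tuple(max(lo, min(hi, v))
                     for v, hi, lo in zip(value, max_value, min_value))
    except TypeError:
        # scalar case
        return max(min_value, min(max_value, value))
```

`check_range(-5, 10, 0)` clamps to the lower bound and returns 0, and `check_range((-5., 100.), (10., 10.), (0., 0.))` returns `(0.0, 10.0)`, matching the doctest above; in-range values pass through unchanged.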
def checkValue(self, param, value):
"""
Check type of value against expected type of param.
Print warning message if type check fails
:param param:
:type param:
:param value:
:type value:
:return:
:rtype:
"""
is_type = type(value)
expect_type = self.get_defaults()[param]['type']
if not is_type == expect_type and not is_type == tuple:
@@ -159,9 +250,25 @@ class PylotParameter(object):
print(Warning(message))
def setParamKV(self, param, value):
"""
set parameter param to value
:param param:
:type param:
:param value:
:type value:
:return:
:rtype: None
"""
self.__setitem__(param, value)
def setParam(self, **kwargs):
"""
Set multiple parameters
:param kwargs:
:type kwargs:
:return:
:rtype: None
"""
for key in kwargs:
self.__setitem__(key, kwargs[key])
@ -170,11 +277,23 @@ class PylotParameter(object):
print('ParameterError:\n non-existent parameter %s' % errmsg)
def reset_defaults(self):
"""
Reset current parameters to default parameters
:return:
:rtype: None
"""
defaults = self.get_defaults()
for param in defaults:
self.setParamKV(param, defaults[param]['value'])
for param_name, param in defaults.items():
self.setParamKV(param_name, param['value'])
def from_file(self, fnin=None):
"""
read parameters from file and set values to read values
:param fnin: filename
:type fnin: str
:return:
:rtype: None
"""
if not fnin:
if self.__filename is not None:
fnin = self.__filename
@ -221,6 +340,13 @@ class PylotParameter(object):
self.__parameter = self._parFileCont
def export2File(self, fnout):
"""
Export parameters to file
:param fnout: Filename of export file
:type fnout: str
:return:
:rtype:
"""
fid_out = open(fnout, 'w')
lines = []
# for key, value in self.iteritems():
@ -257,6 +383,19 @@ class PylotParameter(object):
'quality assessment', None)
def write_section(self, fid, names, title, separator):
"""
write a section of parameters to file
:param fid: File object to write to
:type fid:
:param names: which parameter names to write to file
:type names:
:param title: title of section
:type title: str
:param separator: section separator, written at start of section
:type separator: str
:return:
:rtype:
"""
if separator:
fid.write(separator)
fid.write('#{}#\n'.format(title))
@ -287,6 +426,28 @@ class PylotParameter(object):
line = value + name + ttip
fid.write(line)
@staticmethod
def check_deprecated_parameters(parameters):
if parameters.hasParam('database') and parameters.hasParam('rootpath'):
parameters['datapath'] = os.path.join(parameters['rootpath'], parameters['datapath'],
parameters['database'])
logging.warning(
f'Parameters database and rootpath are deprecated. '
f'Tried to merge them into new path: {parameters["datapath"]}.'
)
remove_keys = []
for key in parameters:
if not key in default_parameters.defaults.keys():
remove_keys.append(key)
logging.warning(f'Removing deprecated parameter: {key}')
for key in remove_keys:
del parameters[key]
parameters._settings_main = default_parameters.settings_main
parameters._settings_special_pick = default_parameters.settings_special_pick
class FilterOptions(object):
'''
@ -341,7 +502,9 @@ class FilterOptions(object):
def parseFilterOptions(self):
if self:
robject = {'type': self.getFilterType(), 'corners': self.getOrder()}
robject = {'type': self.getFilterType(),
'corners': self.getOrder(),
'zerophase': False}
if not self.getFilterType() in ['highpass', 'lowpass']:
robject['freqmin'] = self.getFreq()[0]
robject['freqmax'] = self.getFreq()[1]

View File

@ -1,7 +1,7 @@
from obspy import UTCDateTime
from obspy.core import event as ope
from pylot.core.util.utils import getLogin, getHash
from pylot.core.util.utils import get_login, get_hash
def create_amplitude(pickID, amp, unit, category, cinfo):
@ -54,14 +54,14 @@ def create_arrival(pickresID, cinfo, phase, azimuth=None, dist=None):
def create_creation_info(agency_id=None, creation_time=None, author=None):
'''
get creation info of obspy event
:param agency_id:
:param creation_time:
:param author:
:return:
'''
if author is None:
author = getLogin()
author = get_login()
if creation_time is None:
creation_time = UTCDateTime()
return ope.CreationInfo(agency_id=agency_id, author=author,
@ -197,9 +197,9 @@ def create_pick(origintime, picknum, picktime, eventnum, cinfo, phase, station,
def create_resourceID(timetohash, restype, authority_id=None, hrstr=None):
'''
:param timetohash:
:type timetohash
create unique resource id
:param timetohash: event origin time to hash
:type timetohash: class: `~obspy.core.utcdatetime.UTCDateTime` object
:param restype: type of the resource, e.g. 'orig', 'earthquake' ...
:type restype: str
:param authority_id: name of the institution carrying out the processing
@ -210,7 +210,7 @@ def create_resourceID(timetohash, restype, authority_id=None, hrstr=None):
'''
assert isinstance(timetohash, UTCDateTime), "'timetohash' is not an ObsPy" \
"UTCDateTime object"
hid = getHash(timetohash)
hid = get_hash(timetohash)
if hrstr is None:
resID = ope.ResourceIdentifier(restype + '/' + hid[0:6])
else:

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import glob
import logging
import os
import warnings
@ -12,11 +12,13 @@ import scipy.io as sio
from obspy.core import UTCDateTime
from obspy.core.event import read_events
from obspy.core.util import AttribDict
from pylot.core.io.inputs import PylotParameter
from pylot.core.io.location import create_event, \
create_magnitude
from pylot.core.pick.utils import select_for_phase
from pylot.core.util.utils import getOwner, full_range, four_digits
from pylot.core.pick.utils import select_for_phase, get_quality_class
from pylot.core.util.utils import get_owner, full_range, four_digits, transformFilterString4Export, \
backtransformFilterString, loopIdentifyPhase, identifyPhase
def add_amplitudes(event, amplitudes):
@ -57,7 +59,7 @@ def readPILOTEvent(phasfn=None, locfn=None, authority_id='RUB', **kwargs):
if phasfn is not None and os.path.isfile(phasfn):
phases = sio.loadmat(phasfn)
phasctime = UTCDateTime(os.path.getmtime(phasfn))
phasauthor = getOwner(phasfn)
phasauthor = get_owner(phasfn)
else:
phases = None
phasctime = None
@ -65,7 +67,7 @@ def readPILOTEvent(phasfn=None, locfn=None, authority_id='RUB', **kwargs):
if locfn is not None and os.path.isfile(locfn):
loc = sio.loadmat(locfn)
locctime = UTCDateTime(os.path.getmtime(locfn))
locauthor = getOwner(locfn)
locauthor = get_owner(locfn)
else:
loc = None
locctime = None
@ -118,6 +120,13 @@ def readPILOTEvent(phasfn=None, locfn=None, authority_id='RUB', **kwargs):
def picksdict_from_pilot(fn):
"""
Create pick dictionary from matlab file
:param fn: matlab file
:type fn:
:return: pick dictionary
:rtype: dict
"""
from pylot.core.util.defaults import TIMEERROR_DEFAULTS
picks = dict()
phases_pilot = sio.loadmat(fn)
@ -147,6 +156,13 @@ def picksdict_from_pilot(fn):
def stations_from_pilot(stat_array):
"""
Create stations list from pilot station array
:param stat_array:
:type stat_array:
:return:
:rtype: list
"""
stations = list()
cur_stat = None
for stat in stat_array:
@ -164,6 +180,13 @@ def stations_from_pilot(stat_array):
def convert_pilot_times(time_array):
"""
Convert pilot times to UTCDateTimes
:param time_array: pilot times
:type time_array:
:return:
:rtype:
"""
times = [int(time) for time in time_array]
microseconds = int((time_array[-1] - times[-1]) * 1e6)
times.append(microseconds)
@ -171,6 +194,13 @@ def convert_pilot_times(time_array):
def picksdict_from_obs(fn):
"""
create pick dictionary from obs file
:param fn: filename
:type fn:
:return:
:rtype:
"""
picks = dict()
station_name = str()
for line in open(fn, 'r'):
@ -188,7 +218,7 @@ def picksdict_from_obs(fn):
return picks
def picksdict_from_picks(evt):
def picksdict_from_picks(evt, parameter=None):
"""
Takes an Event object and return the pick dictionary commonly used within
PyLoT
@ -201,20 +231,30 @@ def picksdict_from_picks(evt):
'auto': {}
}
for pick in evt.picks:
errors = None
phase = {}
station = pick.waveform_id.station_code
if pick.waveform_id.channel_code is None:
channel = ''
else:
channel = pick.waveform_id.channel_code
network = pick.waveform_id.network_code
mpp = pick.time
spe = pick.time_errors.uncertainty
if pick.filter_id:
filter_id = backtransformFilterString(str(pick.filter_id.id))
else:
filter_id = None
try:
picker = str(pick.method_id)
if picker.startswith('smi:local/'):
picker = picker.split('smi:local/')[1]
pick_method = str(pick.method_id)
if pick_method.startswith('smi:local/'):
pick_method = pick_method.split('smi:local/')[1]
except IndexError:
picker = 'manual' # MP MP TODO maybe improve statement
pick_method = 'manual' # MP MP TODO maybe improve statement
if pick_method == 'None':
pick_method = 'manual'
try:
onsets = picksdict[picker][station]
onsets = picksdict[pick_method][station]
except KeyError as e:
# print(e)
onsets = {}
@ -222,24 +262,59 @@ def picksdict_from_picks(evt):
lpp = mpp + pick.time_errors.upper_uncertainty
epp = mpp - pick.time_errors.lower_uncertainty
except TypeError as e:
msg = e + ',\n falling back to symmetric uncertainties'
warnings.warn(msg)
if not spe:
msg = 'No uncertainties found for pick: {}. Uncertainty set to 0'.format(pick)
lpp = mpp
epp = mpp
else:
msg = str(e) + ',\n falling back to symmetric uncertainties'
lpp = mpp + spe
epp = mpp - spe
warnings.warn(msg)
phase['mpp'] = mpp
phase['epp'] = epp
phase['lpp'] = lpp
phase['spe'] = spe
weight = phase.get('weight')
if not weight:
if not parameter:
logging.warning('Using default input parameter')
parameter = PylotParameter()
pick.phase_hint = identifyPhase(pick.phase_hint)
if pick.phase_hint == 'P':
errors = parameter['timeerrorsP']
elif pick.phase_hint == 'S':
errors = parameter['timeerrorsS']
if errors:
weight = get_quality_class(spe, errors)
phase['weight'] = weight
phase['channel'] = channel
phase['network'] = network
phase['picker'] = picker
phase['picker'] = pick_method
if pick.polarity == 'positive':
phase['fm'] = 'U'
elif pick.polarity == 'negative':
phase['fm'] = 'D'
else:
phase['fm'] = 'N'
phase['filter_id'] = filter_id if filter_id is not None else ''
onsets[pick.phase_hint] = phase.copy()
picksdict[picker][station] = onsets.copy()
picksdict[pick_method][station] = onsets.copy()
return picksdict
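The dictionary returned above is nested as `picksdict[method][station][phase]`; a minimal hand-built example of that shape (station, channel, and time values are hypothetical, chosen only to illustrate the keys filled in by the loop above):

```python
# Hypothetical example of the nested pick dictionary structure;
# station name, channel and times are made up for illustration.
picksdict = {'manual': {}, 'auto': {}}
phase = {'mpp': 10.0, 'epp': 9.9, 'lpp': 10.1, 'spe': 0.1,
         'channel': 'HHZ', 'network': 'GR', 'picker': 'manual',
         'fm': 'N', 'filter_id': ''}
# One P onset for a (hypothetical) station STA01, picked manually:
picksdict['manual'].setdefault('STA01', {})['P'] = phase
```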
def picks_from_picksdict(picks, creation_info=None):
"""
Create a list of picks out of a pick dictionary
:param picks: pick dictionary
:type picks: dict
:param creation_info: obspy creation information to apply to picks
:type creation_info:
:param creation_info: obspy creation information to apply to picks
:return: list of picks
:rtype: list
"""
picks_list = list()
for station, onsets in picks.items():
for label, phase in onsets.items():
@ -275,25 +350,30 @@ def picks_from_picksdict(picks, creation_info=None):
channel_code=ccode,
network_code=ncode)
try:
polarity = phase['fm']
if polarity == 'U' or '+':
filter_id = phase['filteroptions']
filter_id = transformFilterString4Export(filter_id)
except KeyError as e:
warnings.warn(str(e), RuntimeWarning)
filter_id = ''
pick.filter_id = filter_id
try:
polarity = picks[station][label]['fm']
if polarity == 'U' or polarity == '+':
pick.polarity = 'positive'
elif polarity == 'D' or '-':
elif polarity == 'D' or polarity == '-':
pick.polarity = 'negative'
else:
pick.polarity = 'undecidable'
except KeyError as e:
if 'fm' in str(e): # no polarity information found for this phase
pass
else:
raise e
except:
pick.polarity = 'undecidable'
print("No polarity information available!")
picks_list.append(pick)
return picks_list
def reassess_pilot_db(root_dir, db_dir, out_dir=None, fn_param=None, verbosity=0):
import glob
# TODO: change root to datapath
db_root = os.path.join(root_dir, db_dir)
evt_list = glob.glob1(db_root, 'e????.???.??')
@ -308,9 +388,7 @@ def reassess_pilot_event(root_dir, db_dir, event_id, out_dir=None, fn_param=None
from pylot.core.io.inputs import PylotParameter
from pylot.core.pick.utils import earllatepicker
if fn_param is None:
fn_param = defaults.AUTOMATIC_DEFAULTS
# TODO: change root to datapath
default = PylotParameter(fn_param, verbosity)
@ -400,7 +478,6 @@ def reassess_pilot_event(root_dir, db_dir, event_id, out_dir=None, fn_param=None
os.makedirs(out_dir)
fnout_prefix = os.path.join(out_dir, 'PyLoT_{0}.'.format(event_id))
evt.write(fnout_prefix + 'xml', format='QUAKEML')
# evt.write(fnout_prefix + 'cnv', format='VELEST')
def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
@ -408,44 +485,43 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
Function of methods to write phases to the following standard file
formats used for locating earthquakes:
HYPO71, NLLoc, VELEST, HYPOSAT, and hypoDD
HYPO71, NLLoc, VELEST, HYPOSAT, FOCMEC, and hypoDD
:param: arrivals
:type: dictionary containing all phase information including
station ID, phase, first motion, weight (uncertainty),
....
:param arrivals:dictionary containing all phase information including
station ID, phase, first motion, weight (uncertainty), ...
:type arrivals: dict
:param: fformat
:type: string, chosen file format (location routine),
:param fformat: chosen file format (location routine),
choose between NLLoc, HYPO71, HYPOSAT, VELEST,
HYPOINVERSE, and hypoDD
HYPOINVERSE, FOCMEC, and hypoDD
:type fformat: str
:param: filename, full path and name of phase file
:type: string
:param filename: full path and name of phase file
:type filename: string
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
:param: eventinfo, optional, needed for VELEST-cnv file
:param eventinfo: optional, needed for VELEST-cnv file
and FOCMEC- and HASH-input files
:type: `obspy.core.event.Event` object
:type eventinfo: `obspy.core.event.Event` object
"""
if fformat == 'NLLoc':
print("Writing phases to %s for NLLoc" % filename)
fid = open("%s" % filename, 'w')
# write header
fid.write('# EQEVENT: %s Label: EQ%s Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n' %
(parameter.get('database'), parameter.get('eventID')))
(parameter.get('datapath'), parameter.get('eventID')))
arrivals = chooseArrivals(arrivals)
for key in arrivals:
# P onsets
if arrivals[key].has_key('P'):
if 'P' in arrivals[key]:
try:
fm = arrivals[key]['P']['fm']
except KeyError as e:
print(e)
fm = None
if fm == None:
if fm is None:
fm = '?'
onset = arrivals[key]['P']['mpp']
year = onset.year
@ -460,6 +536,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
try:
if arrivals[key]['P']['weight'] >= 4:
pweight = 0 # do not use pick
print("Station {}: Uncertain pick, do not use it!".format(key))
except KeyError as e:
print(e.message + '; no weight set during processing')
fid.write('%s ? ? ? P %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
@ -472,7 +549,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
ss_ms,
pweight))
# S onsets
if arrivals[key].has_key('S') and arrivals[key]['S']:
if 'S' in arrivals[key] and arrivals[key]['S']['mpp'] is not None:
fm = '?'
onset = arrivals[key]['S']['mpp']
year = onset.year
@ -489,7 +566,11 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
sweight = 0 # do not use pick
except KeyError as e:
print(str(e) + '; no weight set during processing')
fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
Ao = arrivals[key]['S']['Ao'] # peak-to-peak amplitude
if Ao is None:
Ao = 0.0
# fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 %d \n' % (key,
fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 %9.2f 0 0 %d \n' % (key,
fm,
year,
month,
@ -497,6 +578,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
hh,
mm,
ss_ms,
Ao,
sweight))
fid.close()
@ -506,6 +588,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
# write header
fid.write(' %s\n' %
parameter.get('eventID'))
arrivals = chooseArrivals(arrivals)  # prefer manual picks over automatic ones if both are available
for key in arrivals:
if arrivals[key]['P']['weight'] < 4:
stat = key
@ -581,10 +664,11 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
print("Writing phases to %s for HYPOSAT" % filename)
fid = open("%s" % filename, 'w')
# write header
fid.write('%s, event %s \n' % (parameter.get('database'), parameter.get('eventID')))
fid.write('%s, event %s \n' % (parameter.get('datapath'), parameter.get('eventID')))
arrivals = chooseArrivals(arrivals)
for key in arrivals:
# P onsets
if arrivals[key].has_key('P'):
if 'P' in arrivals[key] and arrivals[key]['P']['mpp'] is not None:
if arrivals[key]['P']['weight'] < 4:
Ponset = arrivals[key]['P']['mpp']
pyear = Ponset.year
@ -598,10 +682,22 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
# use symmetrized picking error as std
# (read the HYPOSAT manual)
pstd = arrivals[key]['P']['spe']
if pstd is None:
errorsP = parameter.get('timeerrorsP')
if arrivals[key]['P']['weight'] == 0:
pstd = errorsP[0]
elif arrivals[key]['P']['weight'] == 1:
pstd = errorsP[1]
elif arrivals[key]['P']['weight'] == 2:
pstd = errorsP[2]
elif arrivals[key]['P']['weight'] == 3:
pstd = errorsP[3]
else:
pstd = errorsP[4]
fid.write('%-5s P1 %4.0f %02d %02d %02d %02d %05.02f %5.3f -999. 0.00 -999. 0.00\n'
% (key, pyear, pmonth, pday, phh, pmm, Pss, pstd))
# S onsets
if arrivals[key].has_key('S') and arrivals[key]['S']:
if 'S' in arrivals[key] and arrivals[key]['S']['mpp'] is not None:
if arrivals[key]['S']['weight'] < 4:
Sonset = arrivals[key]['S']['mpp']
syear = Sonset.year
@ -613,6 +709,18 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
sms = Sonset.microsecond
Sss = sss + sms / 1000000.0
sstd = arrivals[key]['S']['spe']
if sstd is None:
errorsS = parameter.get('timeerrorsS')
if arrivals[key]['S']['weight'] == 0:
sstd = errorsS[0]
elif arrivals[key]['S']['weight'] == 1:
sstd = errorsS[1]
elif arrivals[key]['S']['weight'] == 2:
sstd = errorsS[2]
elif arrivals[key]['S']['weight'] == 3:
sstd = errorsS[3]
else:
sstd = errorsS[4]
fid.write('%-5s S1 %4.0f %02d %02d %02d %02d %05.02f %5.3f -999. 0.00 -999. 0.00\n'
% (key, syear, smonth, sday, shh, smm, Sss, sstd))
fid.close()
@ -647,60 +755,87 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
syear, stime.month, stime.day, stime.hour, stime.minute, stime.second, eventsource['latitude'],
cns, eventsource['longitude'], cew, eventsource['depth'], eventinfo.magnitudes[0]['mag'], ifx))
n = 0
for key in arrivals:
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) is False:
# convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter)
# check for automatic and manual picks
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
# P onsets
if arrivals[key].has_key('P'):
if arrivals[key]['P']['weight'] < 4:
if 'P' in usedarrivals[key]:
if usedarrivals[key]['P']['weight'] < 4:
n += 1
stat = key
if len(stat) > 4: # VELEST handles only 4-string station IDs
stat = stat[1:5]
Ponset = arrivals[key]['P']['mpp']
Pweight = arrivals[key]['P']['weight']
Ponset = usedarrivals[key]['P']['mpp']
Pweight = usedarrivals[key]['P']['weight']
Prt = Ponset - stime # onset time relative to source time
if n % 6 is not 0:
if n % 6 != 0:
fid.write('%-4sP%d%6.2f' % (stat, Pweight, Prt))
else:
fid.write('%-4sP%d%6.2f\n' % (stat, Pweight, Prt))
# S onsets
if arrivals[key].has_key('S'):
if arrivals[key]['S']['weight'] < 4:
if 'S' in usedarrivals[key]:
if usedarrivals[key]['S']['weight'] < 4:
n += 1
stat = key
if len(stat) > 4: # VELEST handles only 4-string station IDs
stat = stat[1:5]
Sonset = arrivals[key]['S']['mpp']
Sweight = arrivals[key]['S']['weight']
Sonset = usedarrivals[key]['S']['mpp']
Sweight = usedarrivals[key]['S']['weight']
Srt = Ponset - stime # onset time relative to source time
if n % 6 is not 0:
if n % 6 != 0:
fid.write('%-4sS%d%6.2f' % (stat, Sweight, Srt))
else:
fid.write('%-4sS%d%6.2f\n' % (stat, Sweight, Srt))
fid.close()
elif fformat == 'hypoDD':
elif fformat == 'HYPODD':
print("Writing phases to %s for hypoDD" % filename)
fid = open("%s" % filename, 'w')
# get event information needed for hypoDD-phase file
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no hypoDD-infile creation possible!")
return
stime = eventsource['time']
event = parameter.get('eventID')
try:
event = eventinfo['pylot_id']
hddID = event.split('.')[0][1:5]
except:
print("Error 1111111!")
hddID = "00000"
# write header
fid.write('# %d %d %d %d %d %5.2f %7.4f +%6.4f %7.4f %4.2f 0.1 0.5 %4.2f %s\n' % (
stime.year, stime.month, stime.day, stime.hour, stime.minute, stime.second,
eventsource['latitude'], eventsource['longitude'], eventsource['depth'] / 1000,
eventinfo.magnitudes[0]['mag'], eventsource['quality']['standard_error'], hddID))
for key in arrivals:
if arrivals[key].has_key('P'):
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) == False:
# convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter)
# check for automatic and manual picks
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
if 'P' in usedarrivals[key]:
# P onsets
if arrivals[key]['P']['weight'] < 4:
Ponset = arrivals[key]['P']['mpp']
if usedarrivals[key]['P']['weight'] < 4:
Ponset = usedarrivals[key]['P']['mpp']
Prt = Ponset - stime # onset time relative to source time
fid.write('%s %6.3f 1 P\n' % (key, Prt))
if 'S' in usedarrivals[key]:
# S onsets
if arrivals[key]['S']['weight'] < 4:
Sonset = arrivals[key]['S']['mpp']
if usedarrivals[key]['S']['weight'] < 4:
Sonset = usedarrivals[key]['S']['mpp']
Srt = Sonset - stime # onset time relative to source time
fid.write('%-5s %6.3f 1 S\n' % (key, Srt))
@ -710,10 +845,21 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
print("Writing phases to %s for FOCMEC" % filename)
fid = open("%s" % filename, 'w')
# get event information needed for FOCMEC-input file
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no FOCMEC-infile creation possible!")
return
stime = eventsource['time']
# avoid printing '*' in focmec-input file
if parameter.get('eventid') == '*' or parameter.get('eventid') is None:
evID = 'e0000'
else:
evID = parameter.get('eventid')
# write header line including event information
fid.write('%s %d%02d%02d%02d%02d%02.0f %7.4f %6.4f %3.1f %3.1f\n' % (parameter.get('eventID'),
fid.write('%s %d%02d%02d%02d%02d%02.0f %7.4f %6.4f %3.1f %3.1f\n' % (evID,
stime.year, stime.month, stime.day,
stime.hour, stime.minute, stime.second,
eventsource['latitude'],
@ -721,9 +867,18 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
eventsource['depth'] / 1000,
eventinfo.magnitudes[0]['mag']))
picks = eventinfo.picks
for key in arrivals:
if arrivals[key].has_key('P'):
if arrivals[key]['P']['weight'] < 4 and arrivals[key]['P']['fm'] is not None:
# check whether arrivals are dictionaries (autoPyLoT) or pick object (PyLoT)
if isinstance(arrivals, dict) == False:
# convert pick object (PyLoT) into dictionary
evt = ope.Event(resource_id=eventinfo['resource_id'])
evt.picks = arrivals
arrivals = picksdict_from_picks(evt, parameter=parameter)
# check for automatic and manual picks
# prefer manual picks
usedarrivals = chooseArrivals(arrivals)
for key in usedarrivals:
if 'P' in usedarrivals[key]:
if usedarrivals[key]['P']['weight'] < 4 and usedarrivals[key]['P']['fm'] is not None:
stat = key
for i in range(len(picks)):
station = picks[i].waveform_id.station_code
@ -742,7 +897,7 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
fid.write('%-4s %6.2f %6.2f%s \n' % (stat,
az,
inz,
arrivals[key]['P']['fm']))
usedarrivals[key]['P']['fm']))
break
fid.close()
@ -757,6 +912,11 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
print("Writing phases to %s for HASH for HASH-driver 2" % filename2)
fid2 = open("%s" % filename2, 'w')
# get event information needed for HASH-input file
try:
eventsource = eventinfo.origins[0]
except:
print("No source origin calculated yet, thus no cnv-file creation possible!")
return
eventsource = eventinfo.origins[0]
event = parameter.get('eventID')
hashID = event.split('.')[0][1:5]
@ -795,10 +955,11 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
eventsource['quality']['used_phase_count'],
erh, erz, eventinfo.magnitudes[0]['mag'],
hashID))
# Prefer Manual Picks over automatic ones if possible
arrivals = chooseArrivals(arrivals)
# write phase lines
for key in arrivals:
if arrivals[key].has_key('P'):
if 'P' in arrivals[key]:
if arrivals[key]['P']['weight'] < 4 and arrivals[key]['P']['fm'] is not None:
stat = key
ccode = arrivals[key]['P']['channel']
@ -843,6 +1004,25 @@ def writephases(arrivals, fformat, filename, parameter=None, eventinfo=None):
fid2.close()
def chooseArrivals(arrivals):
"""
Takes arrivals and returns the manual picks if both manual and automatic ones are present;
returns the automatic picks if only automatic picks are available.
:param arrivals: dictionary with automatic and/or manual arrivals
:return: arrivals, with the manual picks preferred if possible
"""
# If len of arrivals is greater than 2 it comes from autopicking so only autopicks are available
if len(arrivals) > 2:
return arrivals
if arrivals['auto'] and arrivals['manual']:
usedarrivals = arrivals['manual']
elif arrivals['auto']:
usedarrivals = arrivals['auto']
else:
usedarrivals = arrivals['manual']
return usedarrivals
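A self-contained sketch of the selection rule above, written defensively with `dict.get` so that a dictionary lacking one of the two keys falls back to an empty result instead of raising (an assumption beyond the original, which indexes the keys directly):

```python
def choose_arrivals(arrivals):
    """Prefer manual picks over automatic ones; pass station-keyed dicts through."""
    # More than two top-level keys means the dict is keyed by station
    # (autopicking output), so there is nothing to choose between.
    if len(arrivals) > 2:
        return arrivals
    # Manual picks win whenever any are present.
    if arrivals.get('manual'):
        return arrivals['manual']
    return arrivals.get('auto', {})
```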
def merge_picks(event, picks):
"""
takes an event object and a list of picks and searches for matching
@ -862,37 +1042,73 @@ def merge_picks(event, picks):
network = pick.waveform_id.network_code
method = pick.method_id
for p in event.picks:
if p.waveform_id.station_code == station\
and p.waveform_id.network_code == network\
and p.phase_hint == phase\
and (str(p.method_id) in str(method)\
if p.waveform_id.station_code == station \
and p.waveform_id.network_code == network \
and p.phase_hint == phase \
and (str(p.method_id) in str(method)
or str(method) in str(p.method_id)):
p.time, p.time_errors, p.waveform_id.network_code, p.method_id = time, err, network, method
del time, err, phase, station, network, method
return event
def getQualitiesfromxml(xmlnames, ErrorsP, ErrorsS, plotflag=1):
def getQualitiesfromxml(path, errorsP, errorsS, plotflag=1, figure=None, verbosity=0):
"""
Script to get onset uncertainties from QuakeML files created by PyLoT.
Uncertainties are transformed into quality classes and visualized via histogram if desired.
Ludger Küperkoch, BESTEC GmbH, 07/2017
:param path: path containing xml files
:type path: str
:param errorsP: time errors of P waves for the four discrete quality classes
:type errorsP:
:param errorsS: time errors of S waves for the four discrete quality classes
:type errorsS:
:param plotflag:
:type plotflag:
:return:
:rtype:
"""
from pylot.core.pick.utils import getQualityFromUncertainty
from pylot.core.util.utils import loopIdentifyPhase, identifyPhase
def calc_perc(uncertainties, ntotal):
''' simple function that calculates percentage of number of uncertainties (list length)'''
if len(uncertainties) == 0:
return 0
else:
return 100. / ntotal * len(uncertainties)
def calc_weight_perc(psweights, weight_ids):
''' calculate percentages of different weights (pick classes!?) of total number of uncertainties of a phase'''
# count total number of list items for this phase
numWeights = np.sum([len(weight) for weight in psweights.values()])
# iterate over all available weights to return a list with percentages for plotting
plot_list = []
for weight_id in weight_ids:
plot_list.append(calc_perc(psweights[weight_id], numWeights))
return plot_list, numWeights
# get all xmlfiles in path (maybe this should be changed to one xml file for this function, selectable via GUI?)
xmlnames = glob.glob(os.path.join(path, '*.xml'))
if len(xmlnames) == 0:
print(f'No files found in path {path}.')
return False
# first define possible phases here
phases = ['P', 'S']
# define possible weights (0-4)
weight_ids = list(range(5))
# put both error lists in a dictionary with P/S key so that the amount of code can be halved by simply using P/S as key
errors = dict(P=errorsP, S=errorsS)
# create dictionaries for each phase (P/S) with a dictionary of empty list for each weight defined in weights
# tuple above
weights = {}
for phase in phases:
weights[phase] = {weight_id: [] for weight_id in weight_ids}
# read all onset weights
Pw0 = []
Pw1 = []
Pw2 = []
Pw3 = []
Pw4 = []
Sw0 = []
Sw1 = []
Sw2 = []
Sw3 = []
Sw4 = []
for names in xmlnames:
print("Getting onset weights from {}".format(names))
cat = read_events(names)
@ -900,117 +1116,60 @@ def getQualitiesfromxml(xmlnames, ErrorsP, ErrorsS, plotflag=1):
arrivals = cat.events[0].picks
arrivals_copy = cat_copy.events[0].picks
# Prefer manual picks if qualities are sufficient!
for Pick in arrivals:
if (Pick.method_id.id).split('/')[1] == 'manual':
mstation = Pick.waveform_id.station_code
for pick in arrivals:
if pick.method_id.id.split('/')[1] == 'manual':
mstation = pick.waveform_id.station_code
mstation_ext = mstation + '_'
for mpick in arrivals_copy:
phase = identifyPhase(loopIdentifyPhase(Pick.phase_hint))
if phase == 'P':
if ((mpick.waveform_id.station_code == mstation) or \
phase = identifyPhase(loopIdentifyPhase(pick.phase_hint)) # MP MP catch if this fails?
if ((mpick.waveform_id.station_code == mstation) or
(mpick.waveform_id.station_code == mstation_ext)) and \
((mpick.method_id).split('/')[1] == 'auto') and \
(mpick.time_errors['uncertainty'] <= ErrorsP[3]):
del mpick
break
elif phase == 'S':
if ((mpick.waveform_id.station_code == mstation) or \
(mpick.waveform_id.station_code == mstation_ext)) and \
((mpick.method_id).split('/')[1] == 'auto') and \
(mpick.time_errors['uncertainty'] <= ErrorsS[3]):
(mpick.method_id.id.split('/')[1] == 'auto') and \
(mpick.time_errors['uncertainty'] <= errors[phase][3]):
del mpick
break
lendiff = len(arrivals) - len(arrivals_copy)
if lendiff is not 0:
if lendiff != 0:
print("Found manual as well as automatic picks, prefered the {} manual ones!".format(lendiff))
for Pick in arrivals_copy:
phase = identifyPhase(loopIdentifyPhase(Pick.phase_hint))
if phase == 'P':
Pqual = getQualityFromUncertainty(Pick.time_errors.uncertainty, ErrorsP)
if Pqual == 0:
Pw0.append(Pick.time_errors.uncertainty)
elif Pqual == 1:
Pw1.append(Pick.time_errors.uncertainty)
elif Pqual == 2:
Pw2.append(Pick.time_errors.uncertainty)
elif Pqual == 3:
Pw3.append(Pick.time_errors.uncertainty)
elif Pqual == 4:
Pw4.append(Pick.time_errors.uncertainty)
elif phase == 'S':
Squal = getQualityFromUncertainty(Pick.time_errors.uncertainty, ErrorsS)
if Squal == 0:
Sw0.append(Pick.time_errors.uncertainty)
elif Squal == 1:
Sw1.append(Pick.time_errors.uncertainty)
elif Squal == 2:
Sw2.append(Pick.time_errors.uncertainty)
elif Squal == 3:
Sw3.append(Pick.time_errors.uncertainty)
elif Squal == 4:
Sw4.append(Pick.time_errors.uncertainty)
else:
for pick in arrivals_copy:
phase = identifyPhase(loopIdentifyPhase(pick.phase_hint))
uncertainty = pick.time_errors.uncertainty
if not uncertainty:
if verbosity > 0:
print('No uncertainty, pick {} invalid!'.format(pick.method_id.id))
continue
# check P/S phase
if phase not in phases:
print("Phase hint not defined for picking!")
pass
continue
qual = get_quality_class(uncertainty, errors[phase])
weights[phase][qual].append(uncertainty)
if plotflag == 0:
Punc = [Pw0, Pw1, Pw2, Pw3, Pw4]
Sunc = [Sw0, Sw1, Sw2, Sw3, Sw4]
return Punc, Sunc
p_unc = [weights['P'][weight_id] for weight_id in weight_ids]
s_unc = [weights['S'][weight_id] for weight_id in weight_ids]
return p_unc, s_unc
else:
if not figure:
fig = plt.figure()
ax = fig.add_subplot(111)
# get percentage of weights
numPweights = np.sum([len(Pw0), len(Pw1), len(Pw2), len(Pw3), len(Pw4)])
numSweights = np.sum([len(Sw0), len(Sw1), len(Sw2), len(Sw3), len(Sw4)])
if len(Pw0) > 0:
P0perc = 100 / numPweights * len(Pw0)
else:
P0perc = 0
if len(Pw1) > 0:
P1perc = 100 / numPweights * len(Pw1)
else:
P1perc = 0
if len(Pw2) > 0:
P2perc = 100 / numPweights * len(Pw2)
else:
P2perc = 0
if len(Pw3) > 0:
P3perc = 100 / numPweights * len(Pw3)
else:
P3perc = 0
if len(Pw4) > 0:
P4perc = 100 / numPweights * len(Pw4)
else:
P4perc = 0
if len(Sw0) > 0:
S0perc = 100 / numSweights * len(Sw0)
else:
S0perc = 0
if len(Sw1) > 0:
S1perc = 100 / numSweights * len(Sw1)
else:
S1perc = 0
if len(Sw2) > 0:
S2perc = 100 / numSweights * len(Sw2)
else:
S2perc = 0
if len(Sw3) > 0:
S3perc = 100 / numSweights * len(Sw3)
else:
S3perc = 0
if len(Sw4) > 0:
S4perc = 100 / numSweights * len(Sw4)
else:
S4perc = 0
listP, numPweights = calc_weight_perc(weights['P'], weight_ids)
listS, numSweights = calc_weight_perc(weights['S'], weight_ids)
weights = ('0', '1', '2', '3', '4')
y_pos = np.arange(len(weights))
y_pos = np.arange(len(weight_ids))
width = 0.34
plt.bar(y_pos - width, [P0perc, P1perc, P2perc, P3perc, P4perc], width, color='black')
plt.bar(y_pos, [S0perc, S1perc, S2perc, S3perc, S4perc], width, color='red')
plt.ylabel('%')
plt.xticks(y_pos, weights)
plt.xlim([-0.5, 4.5])
plt.xlabel('Qualities')
plt.title('{0} P-Qualities, {1} S-Qualities'.format(numPweights, numSweights))
plt.show()
ax.bar(y_pos - width, listP, width, color='black')
ax.bar(y_pos, listS, width, color='red')
ax.set_ylabel('%')
ax.set_xticks(y_pos, weight_ids)
ax.set_xlim([-0.5, 4.5])
ax.set_xlabel('Qualities')
ax.set_title('{0} P-Qualities, {1} S-Qualities'.format(numPweights, numSweights))
if not figure:
fig.show()
return listP, listS
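The hunk above replaces the hand-rolled percentage blocks with a `calc_weight_perc` helper whose body lies outside this diff. A minimal sketch consistent with the call sites (the signature and return order are assumptions, not the PyLoT implementation):

```python
def calc_weight_perc(phase_weights, weight_ids):
    """Hypothetical sketch of the helper called above: convert the
    per-quality-class uncertainty lists into percentages and return
    them together with the total number of weighted picks."""
    counts = [len(phase_weights[wid]) for wid in weight_ids]
    total = sum(counts)
    if total == 0:
        # avoid division by zero when no picks carry a weight
        return [0.0] * len(weight_ids), 0
    return [100.0 * c / total for c in counts], total
```

With e.g. two quality-0 picks, one quality-1 pick and one quality-4 pick, this yields `([50.0, 25.0, 0.0, 0.0, 25.0], 4)`, which is exactly the shape `plt.bar` expects above.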

View File

@ -18,11 +18,11 @@ def export(picks, fnout, parameter, eventinfo):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
:param: eventinfo, source information needed for focmec format
:type: list object
:param eventinfo: source information needed for focmec format
:type eventinfo: list object
'''
# write phases to FOCMEC-phase file
writephases(picks, 'FOCMEC', fnout, parameter, eventinfo)

View File

@ -18,11 +18,11 @@ def export(picks, fnout, parameter, eventinfo):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
:param: eventinfo, source information needed for HASH format
:type: list object
:param eventinfo: source information needed for HASH format
:type eventinfo: list object
'''
# write phases to HASH-phase file
writephases(picks, 'HASH', fnout, parameter, eventinfo)

View File

@ -18,8 +18,8 @@ def export(picks, fnout, parameter):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
'''
# write phases to HYPO71-phase file
writephases(picks, 'HYPO71', fnout, parameter)

View File

@ -18,11 +18,11 @@ def export(picks, fnout, parameter, eventinfo):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
:param: eventinfo, source information needed for hypoDD format
:type: list object
:param eventinfo: source information needed for hypoDD format
:type eventinfo: list object
'''
# write phases to hypoDD-phase file
writephases(picks, 'hypoDD', fnout, parameter, eventinfo)
writephases(picks, 'HYPODD', fnout, parameter, eventinfo)

View File

@ -18,8 +18,8 @@ def export(picks, fnout, parameter):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
'''
# write phases to HYPOSAT-phase file
writephases(picks, 'HYPOSAT', fnout, parameter)

View File

@ -6,8 +6,10 @@ import os
import subprocess
from obspy import read_events
from pylot.core.io.phases import writephases
from pylot.core.util.utils import getPatternLine, runProgram, which
from pylot.core.util.gui import which
from pylot.core.util.utils import getPatternLine, runProgram
from pylot.core.util.version import get_git_version as _getVersionString
__version__ = _getVersionString()
@ -28,8 +30,8 @@ def export(picks, fnout, parameter):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
'''
# write phases to NLLoc-phase file
writephases(picks, 'NLLoc', fnout, parameter)
@ -38,19 +40,19 @@ def export(picks, fnout, parameter):
def modify_inputs(ctrfn, root, nllocoutn, phasefn, tttn):
'''
:param ctrfn: name of NLLoc-control file
:type: str
:type ctrfn: str
:param root: root path to NLLoc working directory
:type: str
:type root: str
:param nllocoutn: name of NLLoc-location output file
:type: str
:type nllocoutn: str
:param phasefn: name of NLLoc-input phase file
:type: str
:type phasefn: str
:param tttn: pattern of precalculated NLLoc traveltime tables
:type: str
:type tttn: str
'''
# For locating the event the NLLoc-control file has to be modified!
# create comment line for NLLoc-control file NLLoc-output file
@ -73,18 +75,15 @@ def modify_inputs(ctrfn, root, nllocoutn, phasefn, tttn):
nllfile.close()
def locate(fnin, infile=None):
def locate(fnin, parameter=None):
"""
takes an external program name
:param fnin:
:return:
Takes the name of an input file and tries to run NonLinLoc on it
:param parameter: PyLoT Parameter object
:param fnin: name of the input (control) file for the external program
:return: None
"""
if infile is None:
exe_path = which('NLLoc')
else:
exe_path = which('NLLoc', infile)
if exe_path is None:
exe_path = os.path.join(parameter['nllocbin'], 'NLLoc')
if not os.path.isfile(exe_path):
raise NLLocError('NonLinLoc executable not found; check your '
'environment variables')
@ -97,10 +96,15 @@ def locate(fnin, infile=None):
def read_location(fn):
path, file = os.path.split(fn)
file = glob.glob1(path, file + '.[0-9]*.grid0.loc.hyp')
if len(file) > 1:
raise IOError('ambiguous location name {0}'.format(file))
fn = os.path.join(path, file[0])
nllfile = glob.glob1(path, file + '.[0-9]*.grid0.loc.hyp')
if len(nllfile) > 1:
# get most recent file
print("Found several location files matching pattern!")
print("Using the most recent one ...")
files_to_search = '{0}/{1}'.format(path, file) + '.[0-9]*.grid0.loc.hyp'
fn = max(glob.glob(files_to_search), key=os.path.getctime)
else:
fn = os.path.join(path, nllfile[0])
return read_events(fn)[0]
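The fallback added to `read_location` (prefer the newest of several matching `.grid0.loc.hyp` files) can be isolated into a small helper. A sketch with a hypothetical function name, not part of PyLoT:

```python
import glob
import os

def most_recent_location_file(path, basename):
    # mirrors the fallback above: when several NLLoc runs left matching
    # output files behind, take the newest one by creation time
    pattern = os.path.join(path, basename + '.[0-9]*.grid0.loc.hyp')
    matches = glob.glob(pattern)
    if not matches:
        raise IOError('no location file matching {0}'.format(pattern))
    return max(matches, key=os.path.getctime)
```

Note that `os.path.getctime` has platform-dependent semantics (metadata-change time on Unix, creation time on Windows); for the "most recent run" heuristic either is usually good enough.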

View File

@ -18,11 +18,11 @@ def export(picks, fnout, eventinfo, parameter=None):
:param fnout: complete path to the exporting obs file
:type fnout: str
:param: eventinfo, source time needed for VELEST-cnv format
:type: list object
:param eventinfo: source time needed for VELEST-cnv format
:type eventinfo: list object
:param: parameter, all input information
:type: object
:param parameter: all input information
:type parameter: object
'''
# write phases to VELEST-phase file
writephases(picks, 'VELEST', fnout, parameter, eventinfo)

File diff suppressed because it is too large Load Diff

View File

@ -16,39 +16,39 @@ autoregressive prediction: application to local and regional distances, Geophys.
:author: MAGS2 EP3 working group
"""
import numpy as np
try:
from scipy.signal import tukey
except ImportError:
from scipy.signal.windows import tukey
from obspy.core import Stream
from pylot.core.pick.utils import PickingFailedException
class CharacteristicFunction(object):
'''
"""
SuperClass for different types of characteristic functions.
'''
"""
def __init__(self, data, cut, t2=None, order=None, t1=None, fnoise=None, stealthMode=False):
'''
def __init__(self, data, cut, t2=None, order=None, t1=None, fnoise=None):
"""
Initialize data type object with information from the original
Seismogram.
:param: data
:type: `~obspy.core.stream.Stream`
:param: cut
:type: tuple
:param: t2
:type: float
:param: order
:type: int
:param: t1
:type: float (optional, only for AR)
:param: fnoise
:type: float (optional, only for AR)
'''
:param data: stream object containing traces for which the cf should
be calculated
:type data: ~obspy.core.stream.Stream
:param cut: (starttime, endtime) in seconds relative to beginning of trace
:type cut: tuple
:param t2:
:type t2: float
:param order:
:type order: int
:param t1: float (optional, only for AR)
:param fnoise: (optional, only for AR)
:type fnoise: float
"""
assert isinstance(data, Stream), "%s is not a stream object" % str(data)
@ -60,10 +60,9 @@ class CharacteristicFunction(object):
self.setOrder(order)
self.setFnoise(fnoise)
self.setARdetStep(t2)
self.calcCF(self.getDataArray())
self.calcCF()
self.arpara = np.array([])
self.xpred = np.array([])
self._stealthMode = stealthMode
def __str__(self):
return '''\n\t{name} object:\n
@ -79,13 +78,13 @@ class CharacteristicFunction(object):
t2=self.getTime2(),
order=self.getOrder(),
fnoise=self.getFnoise(),
ardetstep=self.getARdetStep[0]())
ardetstep=self.getARdetStep()[0]())
def getCut(self):
return self.cut
def setCut(self, cut):
self.cut = cut
self.cut = (int(cut[0]), int(cut[1]))
def getTime1(self):
return self.t1
@ -121,6 +120,10 @@ class CharacteristicFunction(object):
return self.dt
def getTimeArray(self):
"""
:return: array of time indices
:rtype: np.array
"""
incr = self.getIncrement()
self.TimeArray = np.arange(0, len(self.getCF()) * incr, incr) + self.getCut()[0]
return self.TimeArray
@ -137,23 +140,22 @@ class CharacteristicFunction(object):
def getXCF(self):
return self.xcf
def _getStealthMode(self):
return self._stealthMode()
def getDataArray(self, cut=None):
'''
"""
If cut times are given, time series is cut from cut[0] (start time)
till cut[1] (stop time) in order to calculate CF for certain part
only where you expect the signal!
input: cut (tuple) ()
cutting window
'''
:param cut: contains (start time, stop time) for cutting the time series
:type cut: tuple
:return: cut data/time series
:rtype:
"""
if cut is not None:
if len(self.orig_data) == 1:
if self.cut[0] == 0 and self.cut[1] == 0:
start = 0
stop = len(self.orig_data[0])
elif self.cut[0] == 0 and self.cut[1] is not 0:
elif self.cut[0] == 0 and self.cut[1] != 0:
start = 0
stop = self.cut[1] / self.dt
else:
@ -162,13 +164,15 @@ class CharacteristicFunction(object):
zz = self.orig_data.copy()
z1 = zz[0].copy()
zz[0].data = z1.data[int(start):int(stop)]
if zz[0].stats.npts == 0: # cut times do not fit data length!
zz[0].data = z1.data # take entire data
data = zz
return data
elif len(self.orig_data) == 2:
if self.cut[0] == 0 and self.cut[1] == 0:
start = 0
stop = min([len(self.orig_data[0]), len(self.orig_data[1])])
elif self.cut[0] == 0 and self.cut[1] is not 0:
elif self.cut[0] == 0 and self.cut[1] != 0:
start = 0
stop = min([self.cut[1] / self.dt, len(self.orig_data[0]),
len(self.orig_data[1])])
@ -188,7 +192,7 @@ class CharacteristicFunction(object):
start = 0
stop = min([self.cut[1] / self.dt, len(self.orig_data[0]),
len(self.orig_data[1]), len(self.orig_data[2])])
elif self.cut[0] == 0 and self.cut[1] is not 0:
elif self.cut[0] == 0 and self.cut[1] != 0:
start = 0
stop = self.cut[1] / self.dt
else:
@ -208,29 +212,25 @@ class CharacteristicFunction(object):
data = self.orig_data.copy()
return data
def calcCF(self, data=None):
self.cf = data
def calcCF(self):
pass
class AICcf(CharacteristicFunction):
'''
Function to calculate the Akaike Information Criterion (AIC) after
Maeda (1985).
:param: data, time series (whether seismogram or CF)
:type: tuple
Output: AIC function
'''
def calcCF(self, data):
# if self._getStealthMode() is False:
# print 'Calculating AIC ...'
def calcCF(self):
"""
Function to calculate the Akaike Information Criterion (AIC) after Maeda (1985).
:return: AIC function
:rtype:
"""
x = self.getDataArray()
xnp = x[0].data
nn = np.isnan(xnp)
if len(nn) > 1:
xnp[nn] = 0
ind = np.where(~np.isnan(xnp))[0]
if ind.size:
xnp[:ind[0]] = xnp[ind[0]]
xnp = tukey(len(xnp), alpha=0.05) * xnp
xnp = xnp - np.mean(xnp)
datlen = len(xnp)
k = np.arange(1, datlen)
cf = np.zeros(datlen)
@ -241,7 +241,7 @@ class AICcf(CharacteristicFunction):
np.log((cumsumcf[datlen - 1] - cumsumcf[k - 1]) / (datlen - k + 1)))
cf[0] = cf[1]
inf = np.isinf(cf)
ff = np.where(inf == True)
ff = np.where(inf)[0]
if len(ff) >= 1:
cf[ff] = 0
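For reference, the Maeda (1985) AIC that `AICcf.calcCF` computes can be written down compactly as AIC(k) = k * log(var(x[:k])) + (N - k - 1) * log(var(x[k:])); its global minimum marks the most likely onset sample. A bare sketch, without the tapering, smoothing and NaN handling of the class above:

```python
import numpy as np

def aic_maeda(x):
    """Maeda (1985) AIC straight from a time series (no AR fit needed).
    Sketch only: no taper, smoothing or offset removal as in AICcf."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.nan)
    for k in range(1, n - 1):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        if v1 <= 0 or v2 <= 0:
            continue  # log of zero variance is undefined
        aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return aic

# weak noise followed by a stronger signal; the AIC minimum sits
# near the variance change point
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(0, 1.0, 200)])
onset = int(np.nanargmin(aic_maeda(x)))
```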
@ -250,27 +250,31 @@ class AICcf(CharacteristicFunction):
class HOScf(CharacteristicFunction):
'''
def __init__(self, data, cut, pickparams):
"""
Call parent constructor while extracting the right parameters:
:param pickparams: PylotParameters instance
"""
super(HOScf, self).__init__(data, cut, pickparams["tlta"], pickparams["hosorder"])
def calcCF(self):
"""
Function to calculate skewness (statistics of order 3) or kurtosis
(statistics of order 4), using one long moving window, as published
in Kueperkoch et al. (2010).
'''
def calcCF(self, data):
in Kueperkoch et al. (2010), or order 2, i.e. STA/LTA.
:return: HOS cf
:rtype:
"""
x = self.getDataArray(self.getCut())
xnp = x[0].data
nn = np.isnan(xnp)
if len(nn) > 1:
xnp[nn] = 0
if self.getOrder() == 3: # this is skewness
# if self._getStealthMode() is False:
# print 'Calculating skewness ...'
y = np.power(xnp, 3)
y1 = np.power(xnp, 2)
elif self.getOrder() == 4: # this is kurtosis
# if self._getStealthMode() is False:
# print 'Calculating kurtosis ...'
y = np.power(xnp, 4)
y1 = np.power(xnp, 2)
@ -302,13 +306,24 @@ class HOScf(CharacteristicFunction):
if ind.size:
first = ind[0]
LTA[:first] = LTA[first]
self.cf = LTA
self.xcf = x
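The skewness/kurtosis idea behind `HOScf` can be demonstrated outside the class: higher-order moments in a moving window jump as soon as an impulsive onset enters the window. A standalone sketch (not the PyLoT implementation, which additionally handles initial-window padding):

```python
import numpy as np

def hos_cf(x, nwin, order=4):
    """Moving-window kurtosis (order=4) or skewness (order=3),
    normalised by the window variance; after the idea in
    Kueperkoch et al. (2010). Sketch only."""
    x = np.asarray(x, dtype=float)
    cf = np.zeros(len(x))
    for i in range(nwin, len(x)):
        w = x[i - nwin:i]
        m2 = np.mean(w ** 2)
        if m2 > 0:
            cf[i] = np.mean(w ** order) / m2 ** (order / 2)
    return cf
```

For Gaussian noise the kurtosis hovers around 3; a single impulsive sample entering the window pushes it far above that, which is what makes the CF pickable.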
class ARZcf(CharacteristicFunction):
def calcCF(self, data):
def __init__(self, data, cut, t1, t2, pickparams):
super(ARZcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Parorder"],
fnoise=pickparams["addnoise"])
def calcCF(self):
"""
function used to calculate the AR prediction error from a single vertical trace. Can be used to pick
P onsets.
:return: ARZ cf
:rtype:
"""
print('Calculating AR-prediction error from single trace ...')
x = self.getDataArray(self.getCut())
xnp = x[0].data
@ -345,13 +360,14 @@ class ARZcf(CharacteristicFunction):
cf = tap * cf
io = np.where(cf == 0)
ino = np.where(cf > 0)
if np.size(ino):
cf[io] = cf[ino[0][0]]
self.cf = cf
self.xcf = x
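The AR machinery used by `arDetZ`/`arPredZ` boils down to a least-squares fit via the Moore-Penrose pseudoinverse (`np.linalg.pinv`) followed by a prediction whose error serves as the CF. A self-contained sketch with hypothetical helper names, not the PyLoT routines:

```python
import numpy as np

def ar_coeffs(x, order):
    # least-squares AR(p) fit via the Moore-Penrose pseudoinverse,
    # in the spirit of arDetZ (sketch only)
    n = len(x)
    # design matrix: column k holds the series lagged by k+1 samples
    A = np.column_stack([x[order - k - 1: n - k - 1] for k in range(order)])
    return np.linalg.pinv(A) @ x[order:]

def prediction_error(x, a):
    # one-step-ahead prediction error; the AR-CF above grows where the
    # waveform is no longer well predicted by noise-trained coefficients
    p = len(a)
    err = np.zeros(len(x))
    for i in range(p, len(x)):
        err[i] = abs(x[i] - np.dot(a, x[i - p:i][::-1]))
    return err
```

On a synthetic AR(2) series driven by weak noise, `ar_coeffs` recovers the generating coefficients closely and `prediction_error` stays at the driving-noise level until the process changes character.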
def arDetZ(self, data, order, rind, ldet):
'''
"""
Function to calculate AR parameters arpara after Thomas Meier (CAU), published
in Kueperkoch et al. (2012). This function solves SLE using the Moore-
Penrose inverse, i.e. the least-squares approach.
@ -368,7 +384,7 @@ class ARZcf(CharacteristicFunction):
:type: int
Output: AR parameters arpara
'''
"""
# recursive calculation of data vector (right part of eq. 6.5 in Kueperkoch et al. (2012)
rhs = np.zeros(self.getOrder())
@ -392,7 +408,7 @@ class ARZcf(CharacteristicFunction):
self.arpara = np.dot(np.linalg.pinv(A), rhs)
def arPredZ(self, data, arpara, rind, lpred):
'''
"""
Function to predict waveform, assuming an autoregressive process of order
p (=size(arpara)), with AR parameters arpara calculated in arDet. After
Thomas Meier (CAU), published in Kueperkoch et al. (2012).
@ -409,8 +425,8 @@ class ARZcf(CharacteristicFunction):
:type: int
Output: predicted waveform z
'''
# be sure of the summation indeces
"""
# be sure of the summation indices
if rind < len(arpara):
rind = len(arpara)
if rind > len(data) - lpred:
@ -430,11 +446,27 @@ class ARZcf(CharacteristicFunction):
class ARHcf(CharacteristicFunction):
def calcCF(self, data):
def __init__(self, data, cut, t1, t2, pickparams):
super(ARHcf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"],
fnoise=pickparams["addnoise"])
def calcCF(self):
"""
Function to calculate a characteristic function using autoregressive modelling of the waveform of
both horizontal traces.
The waveform is predicted in a moving time window using the calculated AR parameters. The difference
between the predicted and the actual waveform serves as a characteristic function.
:return: ARH cf
:rtype:
"""
print('Calculating AR-prediction error from both horizontal traces ...')
xnp = self.getDataArray(self.getCut())
if len(xnp[0]) == 0:
raise PickingFailedException('calcCF: Found empty data trace for cut times. Return')
n0 = np.isnan(xnp[0].data)
if len(n0) > 1:
xnp[0].data[n0] = 0
@ -466,9 +498,9 @@ class ARHcf(CharacteristicFunction):
# AR prediction of waveform using calculated AR coefficients
self.arPredH(xnp, self.arpara, i + 1, lpred)
# prediction error = CF
cf[i + lpred] = np.sqrt(np.sum(np.power(self.xpred[0][i:i + lpred] - xnp[0][i:i + lpred], 2) \
+ np.power(self.xpred[1][i:i + lpred] - xnp[1][i:i + lpred], 2)) / (
2 * lpred))
cf[i + lpred] = np.sqrt(np.sum(np.power(self.xpred[0][i:i + lpred] - xnp[0][i:i + lpred], 2)
+ np.power(self.xpred[1][i:i + lpred] - xnp[1][i:i + lpred], 2)
) / (2 * lpred))
nn = np.isnan(cf)
if len(nn) > 1:
cf[nn] = 0
@ -477,13 +509,14 @@ class ARHcf(CharacteristicFunction):
cf = tap * cf
io = np.where(cf == 0)
ino = np.where(cf > 0)
if np.size(ino):
cf[io] = cf[ino[0][0]]
self.cf = cf
self.xcf = xnp
def arDetH(self, data, order, rind, ldet):
'''
"""
Function to calculate AR parameters arpara after Thomas Meier (CAU), published
in Kueperkoch et al. (2012). This function solves SLE using the Moore-
Penrose inverse, i.e. the least-squares approach. "data" is a structured array.
@ -502,7 +535,7 @@ class ARHcf(CharacteristicFunction):
:type: int
Output: AR parameters arpara
'''
"""
# recursive calculation of data vector (right part of eq. 6.5 in Kueperkoch et al. (2012)
rhs = np.zeros(self.getOrder())
@ -517,15 +550,15 @@ class ARHcf(CharacteristicFunction):
for i in range(rind, ldet):
ki = k - 1
ji = j - 1
A[ki, ji] = A[ki, ji] + data[0, i - ji] * data[0, i - ki] + data[1, i - ji] * data[1, i - ki]
A[ki, ji] = A[ki, ji] + data[0, i - ji] * data[0, i - ki] \
+ data[1, i - ji] * data[1, i - ki]
A[ji, ki] = A[ki, ji]
# apply Moore-Penrose inverse for SVD yielding the AR-parameters
self.arpara = np.dot(np.linalg.pinv(A), rhs)
def arPredH(self, data, arpara, rind, lpred):
'''
"""
Function to predict waveform, assuming an autoregressive process of order
p (=size(arpara)), with AR parameters arpara calculated in arDet. After
Thomas Meier (CAU), published in Kueperkoch et al. (2012).
@ -543,7 +576,7 @@ class ARHcf(CharacteristicFunction):
Output: predicted waveform z
:type: structured array
'''
"""
# be sure of the summation indices
if rind < len(arpara) + 1:
rind = len(arpara) + 1
@ -567,8 +600,20 @@ class ARHcf(CharacteristicFunction):
class AR3Ccf(CharacteristicFunction):
def calcCF(self, data):
def __init__(self, data, cut, t1, t2, pickparams):
super(AR3Ccf, self).__init__(data, cut, t1=t1, t2=t2, order=pickparams["Sarorder"],
fnoise=pickparams["addnoise"])
def calcCF(self):
"""
Function to calculate a characteristic function using autoregressive modelling of the waveform of
all three traces.
The waveform is predicted in a moving time window using the calculated AR parameters. The difference
between the predicted and the actual waveform serves as a characteristic function.
:return: AR3C cf
:rtype:
"""
print('Calculating AR-prediction error from all 3 components ...')
xnp = self.getDataArray(self.getCut())
@ -607,10 +652,10 @@ class AR3Ccf(CharacteristicFunction):
# AR prediction of waveform using calculated AR coefficients
self.arPred3C(xnp, self.arpara, i + 1, lpred)
# prediction error = CF
cf[i + lpred] = np.sqrt(np.sum(np.power(self.xpred[0][i:i + lpred] - xnp[0][i:i + lpred], 2) \
+ np.power(self.xpred[1][i:i + lpred] - xnp[1][i:i + lpred], 2) \
+ np.power(self.xpred[2][i:i + lpred] - xnp[2][i:i + lpred], 2)) / (
3 * lpred))
cf[i + lpred] = np.sqrt(np.sum(np.power(self.xpred[0][i:i + lpred] - xnp[0][i:i + lpred], 2)
+ np.power(self.xpred[1][i:i + lpred] - xnp[1][i:i + lpred], 2)
+ np.power(self.xpred[2][i:i + lpred] - xnp[2][i:i + lpred], 2)
) / (3 * lpred))
nn = np.isnan(cf)
if len(nn) > 1:
cf[nn] = 0
@ -619,13 +664,14 @@ class AR3Ccf(CharacteristicFunction):
cf = tap * cf
io = np.where(cf == 0)
ino = np.where(cf > 0)
if np.size(ino):
cf[io] = cf[ino[0][0]]
self.cf = cf
self.xcf = xnp
def arDet3C(self, data, order, rind, ldet):
'''
"""
Function to calculate AR parameters arpara after Thomas Meier (CAU), published
in Kueperkoch et al. (2012). This function solves SLE using the Moore-
Penrose inverse, i.e. the least-squares approach. "data" is a structured array.
@ -644,7 +690,7 @@ class AR3Ccf(CharacteristicFunction):
:type: int
Output: AR parameters arpara
'''
"""
# recursive calculation of data vector (right part of eq. 6.5 in Kueperkoch et al. (2012)
rhs = np.zeros(self.getOrder())
@ -660,7 +706,8 @@ class AR3Ccf(CharacteristicFunction):
for i in range(rind, ldet):
ki = k - 1
ji = j - 1
A[ki, ji] = A[ki, ji] + data[0, i - ji] * data[0, i - ki] + data[1, i - ji] * data[1, i - ki] \
A[ki, ji] = A[ki, ji] + data[0, i - ji] * data[0, i - ki] \
+ data[1, i - ji] * data[1, i - ki] \
+ data[2, i - ji] * data[2, i - ki]
A[ji, ki] = A[ki, ji]
@ -669,7 +716,7 @@ class AR3Ccf(CharacteristicFunction):
self.arpara = np.dot(np.linalg.pinv(A), rhs)
def arPred3C(self, data, arpara, rind, lpred):
'''
"""
Function to predict waveform, assuming an autoregressive process of order
p (=size(arpara)), with AR parameters arpara calculated in arDet3C. After
Thomas Meier (CAU), published in Kueperkoch et al. (2012).
@ -687,7 +734,7 @@ class AR3Ccf(CharacteristicFunction):
Output: predicted waveform z
:type: structured array
'''
"""
# be sure of the summation indices
if rind < len(arpara) + 1:
rind = len(arpara) + 1

View File

@ -7,9 +7,8 @@ import os
import matplotlib.pyplot as plt
import numpy as np
from obspy import read_events
from obspy.core import AttribDict
from pylot.core.io.phases import picksdict_from_picks
from pylot.core.util.pdf import ProbabilityDensityFunction
from pylot.core.util.utils import find_in_list
from pylot.core.util.version import get_git_version as _getVersionString
@ -109,25 +108,27 @@ class Comparison(object):
Comparison is carried out with the help of pdf representation of the picks
and a probabilistic approach to the time difference of two onset
measurements.
:param a: filename for pickset A
:type a: str
:param b: filename for pickset B
:type b: str
:param type: type of the returned `~pylot.core.util.pdf.ProbabilityDensityFunction` object.
Possible values: 'exp' and 'gauss', representing the type of branches of the PDF
:type type: str
:return: dictionary containing the resulting comparison pdfs for all picks
:rtype: dict
"""
compare_pdfs = dict()
pdf_a = self.get(self.names[0]).generate_pdf_data(type)
pdf_b = self.get(self.names[1]).generate_pdf_data(type)
pdf_a = self.get('auto').generate_pdf_data(type)
pdf_b = self.get('manu').generate_pdf_data(type)
for station, phases in pdf_a.items():
if station in pdf_b.keys():
compare_pdf = dict()
for phase in phases:
if phase in pdf_b[station].keys():
try:
compare_pdf[phase] = phases[phase] - pdf_b[station][
phase]
except:
compare_pdf = None
if compare_pdf is not None:
compare_pdfs[station] = compare_pdf
@ -142,8 +143,7 @@ class Comparison(object):
istations = range(nstations)
fig, axarr = plt.subplots(nstations, 2, sharex='col', sharey='row')
for n in istations:
station = stations[n]
for n, station in enumerate(stations):
if station not in self.comparison.keys():
continue
compare_pdf = self.comparison[station]
@ -190,6 +190,20 @@ class Comparison(object):
return self.get_array(phase, 'standard_deviation')
def hist_expectation(self, phases='all', bins=20, normed=False):
"""
Plot a histogram of the expectation values of the PDFs.
Expectation represents the time difference between the two most likely arrival times.
:param phases: type of phases to compare
:type phases: str
:param bins: number of bins in histogram
:type bins: int
:param normed: Normalize histogram
:type normed: bool
:return: None
:rtype: None
"""
phases = phases.strip()
if phases.find('all') == 0:
phases = 'ps'
@ -210,6 +224,20 @@ class Comparison(object):
plt.show()
def hist_standard_deviation(self, phases='all', bins=20, normed=False):
"""
Plot a histogram of the compared standard deviation values of two arrivals.
Standard deviation of two compared picks represents the combined uncertainties/pick errors
(earliest possible pick, latest possible pick)
:param phases: type of phases to compare
:type phases: str
:param bins: number of bins in histogram
:type bins: int
:param normed: Normalize histogram
:type normed: bool
:return: None
:rtype: None
"""
phases = phases.strip()
if phases.find('all') == 0:
phases = 'ps'
@ -370,10 +398,12 @@ class PDFDictionary(object):
class PDFstatistics(object):
"""
This object can be used to get various statistic values from probabillity density functions.
This object can be used to get various statistic values from probability density functions.
Takes a path as argument.
"""
# TODO: change root to datapath
def __init__(self, directory):
"""Initiates some values needed when dealing with pdfs later"""
self._rootdir = directory
@ -480,7 +510,8 @@ class PDFstatistics(object):
return rlist
def writeThetaToFile(self, array, out_dir):
@staticmethod
def writeThetaToFile(array, out_dir):
"""
Method to write array like data to file. Useful since acquiring can take
serious amount of time when dealing with large databases.

View File

@ -23,47 +23,52 @@ import warnings
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import argrelmax, argrelmin
from pylot.core.pick.charfuns import CharacteristicFunction
from pylot.core.pick.utils import getnoisewin, getsignalwin
class AutoPicker(object):
'''
"""
Superclass of different, automated picking algorithms applied on a CF determined
using AIC, HOS, or AR prediction.
'''
"""
warnings.simplefilter('ignore')
def __init__(self, cf, TSNR, PickWindow, iplot=0, aus=None, Tsmooth=None, Pick1=None, fig=None):
'''
:param: cf, characteristic function, on which the picking algorithm is applied
:type: `~pylot.core.pick.CharFuns.CharacteristicFunction` object
:param: TSNR, length of time windows around pick used to determine SNR [s]
:type: tuple (T_noise, T_gap, T_signal)
:param: PickWindow, length of pick window [s]
:type: float
:param: iplot, no. of figure window for plotting interims results
:type: integer
:param: aus ("artificial uplift of samples"), find local minimum at i if aic(i-1)*(1+aus) >= aic(i)
:type: float
:param: Tsmooth, length of moving smoothing window to calculate smoothed CF [s]
:type: float
:param: Pick1, initial (prelimenary) onset time, starting point for PragPicker and
EarlLatePicker
:type: float
'''
def __init__(self, cf, TSNR, PickWindow, iplot=0, aus=None, Tsmooth=None, Pick1=None,
fig=None, linecolor='k', ogstream=None):
"""
Create AutoPicker object
:param cf: characteristic function, on which the picking algorithm is applied
:type cf: `~pylot.core.pick.CharFuns.CharacteristicFunction`
:param TSNR: length of time windows around pick used to determine SNR [s], tuple (T_noise, T_gap, T_signal)
:type TSNR: (float, float, float)
:param PickWindow: length of pick window [s]
:type PickWindow: float
:param iplot: flag used for plotting, if > 1, results will be plotted. Use iplot = 0 to disable plotting
:type iplot: int
:param aus: ("artificial uplift of samples"), find local minimum at i if aic(i-1)*(1+aus) >= aic(i)
:type aus: float
:param Tsmooth: length of moving smoothing window to calculate smoothed CF [s]
:type Tsmooth: float
:param Pick1: initial (preliminary) onset time, starting point for PragPicker and EarlLatePicker
:type Pick1: float
:param fig: matplotlib figure used for plotting. If not given and plotting is enabled, a new figure will
be created
:type fig: `~matplotlib.figure.Figure`
:param linecolor: matplotlib line color string
:type linecolor: str
:param ogstream: original stream (waveform), e.g. for plotting purposes
:type ogstream: `~obspy.core.stream.Stream`
"""
assert isinstance(cf, CharacteristicFunction), "%s is not a CharacteristicFunction object" % str(cf)
self._linecolor = linecolor
self._pickcolor_p = 'b'
self.cf = cf.getCF()
self.ogstream = ogstream
self.Tcf = cf.getTimeArray()
self.Data = cf.getXCF()
self.dt = cf.getIncrement()
@ -77,6 +82,11 @@ class AutoPicker(object):
self.calcPick()
def __str__(self):
"""
String representation of AutoPicker object
:return: string representation of the AutoPicker object
:rtype: str
"""
return '''\n\t{name} object:\n
TSNR:\t\t\t{TSNR}\n
PickWindow:\t{PickWindow}\n
@ -140,12 +150,12 @@ class AutoPicker(object):
class AICPicker(AutoPicker):
'''
"""
Method to derive the onset time of an arriving phase based on CF
derived from AIC. In order to get an impression of the quality of this inital pick,
derived from AIC. In order to get an impression of the quality of this initial pick,
a quality assessment is applied based on SNR and slope determination derived from the CF,
from which the AIC has been calculated.
'''
"""
def calcPick(self):
@ -167,12 +177,14 @@ class AICPicker(AutoPicker):
nn = np.isnan(self.cf)
if len(nn) > 1:
self.cf[nn] = 0
# taper AIC-CF to get rid off side maxima
# taper AIC-CF to get rid of side maxima
tap = np.hanning(len(self.cf))
aic = tap * self.cf + max(abs(self.cf))
# smooth AIC-CF
ismooth = int(round(self.Tsmooth / self.dt))
aicsmooth = np.zeros(len(aic))
# MP MP better to start from the original data than from zeros if the array shall be smoothed;
# starting from zeros created an artificial value in the i in range(1...) loop below once the offset was subtracted
aicsmooth = np.copy(aic)
if len(aic) < ismooth:
print('AICPicker: Tsmooth larger than CF!')
return
@ -182,20 +194,39 @@ class AICPicker(AutoPicker):
ii1 = i - ismooth
aicsmooth[i] = aicsmooth[i - 1] + (aic[i] - aic[ii1]) / ismooth
else:
aicsmooth[i] = np.mean(aic[1: i])
aicsmooth[i] = np.mean(aic[0: i]) # MP MP created np.nan for i=1
# remove offset in AIC function
offset = abs(min(aic) - min(aicsmooth))
aicsmooth = aicsmooth - offset
cf = self.Data[0].data
# get maximum of HOS/AR-CF as startimg point for searching
# minimum in AIC function
icfmax = np.argmax(self.Data[0].data)
icfmax = np.argmax(cf)
# TODO: If this shall be kept, maybe add thresh_factor to pylot parameters
thresh_hit = False
thresh_factor = 0.7
thresh = thresh_factor * cf[icfmax]
for index, sample in enumerate(cf):
if sample >= thresh:
thresh_hit = True
# go on searching for the following maximum
if index > 0 and thresh_hit:
if sample <= cf[index - 1]:
icfmax = index - 1
break
# find minimum in AIC-CF front of maximum of HOS/AR-CF
lpickwindow = int(round(self.PickWindow / self.dt))
for i in range(icfmax - 1, max([icfmax - lpickwindow, 2]), -1):
if aicsmooth[i - 1] >= aicsmooth[i]:
self.Pick = self.Tcf[i]
break
tsafety = self.TSNR[1] # safety gap, AIC is usually a little bit too late
left_corner_ind = max([icfmax - lpickwindow, 2])
right_corner_ind = icfmax + int(tsafety / self.dt)
aic_snip = aicsmooth[left_corner_ind: right_corner_ind]
minima = argrelmin(aic_snip)[0] # 0th entry of tuples for axes
if len(minima) > 0:
pickindex = minima[-1] + left_corner_ind
self.Pick = self.Tcf[pickindex]
# if no minimum could be found:
# search in 1st derivative of AIC-CF
if self.Pick is None:
@ -210,18 +241,12 @@ class AICPicker(AutoPicker):
for i in range(icfmax - 1, max([icfmax - lpickwindow, 2]), -1):
if diffcf[i - 1] >= diffcf[i]:
self.Pick = self.Tcf[i]
pickindex = i
break
# quality assessment using SNR and slope from CF
if self.Pick is not None:
# get noise window
inoise = getnoisewin(self.Tcf, self.Pick, self.TSNR[0], self.TSNR[1])
# check, if these are counts or m/s, important for slope estimation!
# this is quick and dirty, better solution?
if max(self.Data[0].data < 1e-3) and max(self.Data[0].data >= 1e-6):
self.Data[0].data = self.Data[0].data * 1000000.
elif max(self.Data[0].data < 1e-6):
self.Data[0].data = self.Data[0].data * 1e13
# get signal window
isignal = getsignalwin(self.Tcf, self.Pick, self.TSNR[2])
if len(isignal) == 0:
@ -229,31 +254,41 @@ class AICPicker(AutoPicker):
ii = min([isignal[len(isignal) - 1], len(self.Tcf)])
isignal = isignal[0:ii]
try:
self.Data[0].data[isignal]
cf[isignal]
except IndexError as e:
msg = "Time series out of bounds! {}".format(e)
print(msg)
return
# calculate SNR from CF
self.SNR = max(abs(self.Data[0].data[isignal] - np.mean(self.Data[0].data[isignal]))) / \
max(abs(self.Data[0].data[inoise] - np.mean(self.Data[0].data[inoise])))
self.SNR = max(abs(cf[isignal])) / \
abs(np.mean(cf[inoise]))
# calculate slope from CF after initial pick
# get slope window
tslope = self.TSNR[3] # slope determination window
islope = np.where((self.Tcf <= min([self.Pick + tslope, self.Tcf[-1]])) \
if tsafety >= 0:
islope = np.where((self.Tcf <= min([self.Pick + tslope + tsafety, self.Tcf[-1]])) \
& (self.Tcf >= self.Pick))  # TODO: put this in a separate function like getsignalwin
else:
islope = np.where((self.Tcf <= min([self.Pick + tslope, self.Tcf[-1]])) \
& (
self.Tcf >= self.Pick + tsafety))  # TODO: put this in a separate function like getsignalwin
# find maximum within slope determination window
# 'cause slope should be calculated up to first local minimum only!
try:
dataslope = self.Data[0].data[islope[0][0:-1]]
dataslope = cf[islope[0][0:-1]]
except IndexError:
print("Slope Calculation: empty array islope, check signal window")
return
if len(dataslope) < 1:
print('No data in slope window found!')
if len(dataslope) < 2:
print('No or not enough data in slope window found!')
return
try:
imaxs, = argrelmax(dataslope)
imax = imaxs[0]
except (ValueError, IndexError) as e:
print(e, 'picker: argrelmax not working!')
imax = np.argmax(dataslope)
iislope = islope[0][0:imax+1]
iislope = islope[0][0:imax + 1]
if len(iislope) < 2:
# calculate slope from initial onset to maximum of AIC function
print("AICPicker: Not enough data samples left for slope calculation!")
@@ -263,52 +298,65 @@ class AICPicker(AutoPicker):
print("AICPicker: Maximum for slope determination right at the beginning of the window!")
print("Choose longer slope determination window!")
if self.iplot > 1:
if self.fig == None or self.fig == 'None':
fig = plt.figure() # self.iplot) ### WHY? MP MP
plt_flag = 1
if self.fig is None or self.fig == 'None':
fig = plt.figure()
plt_flag = iplot
else:
fig = self.fig
ax = fig.add_subplot(111)
x = self.Data[0].data
ax.plot(self.Tcf, x / max(x), 'k', label='(HOS-/AR-) Data')
ax.plot(self.Tcf, cf / max(cf), color=self._linecolor, linewidth=0.7, label='(HOS-/AR-) Data')
ax.plot(self.Tcf, aicsmooth / max(aicsmooth), 'r', label='Smoothed AIC-CF')
ax.legend(loc=1)
ax.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime)
ax.set_yticks([])
ax.set_title(self.Data[0].stats.station)
if plt_flag == 1:
if plt_flag in [1, 2]:
fig.show()
try: input()
except SyntaxError: pass
try:
input()
except SyntaxError:
pass
plt.close(fig)
return
iislope = islope[0][0:imax+1]
iislope = islope[0][0:imax + 1]
dataslope = self.Data[0].data[iislope]
# calculate slope as polynomial fit of order 1
xslope = np.arange(0, len(dataslope), 1)
try:
P = np.polyfit(xslope, dataslope, 1)
datafit = np.polyval(P, xslope)
if datafit[0] >= datafit[-1]:
print('AICPicker: Negative slope, bad onset skipped!')
return
else:
self.slope = 1 / (len(dataslope) * self.Data[0].stats.delta) * (datafit[-1] - datafit[0])
# normalize slope to maximum of cf to make it unit independent
self.slope /= self.Data[0].data[icfmax]
except Exception as e:
print("AICPicker: Problems with data fitting! {}".format(e))
else:
self.SNR = None
self.slope = None
if iplot > 1:
if self.fig == None or self.fig == 'None':
if self.fig is None or self.fig == 'None':
fig = plt.figure() # self.iplot)
plt_flag = 1
plt_flag = iplot
else:
fig = self.fig
fig._tight = True
ax1 = fig.add_subplot(211)
x = self.Data[0].data
if len(self.Tcf) > len(self.Data[0].data): # why? LK
self.Tcf = self.Tcf[0:len(self.Tcf)-1]
ax1.plot(self.Tcf, x / max(x), 'k', label='(HOS-/AR-) Data')
if len(self.Tcf) > len(cf): # why? LK
self.Tcf = self.Tcf[0:len(self.Tcf) - 1]
ax1.plot(self.Tcf, cf / max(cf), color=self._linecolor, linewidth=0.7, label='(HOS-/AR-) Data')
ax1.plot(self.Tcf, aicsmooth / max(aicsmooth), 'r', label='Smoothed AIC-CF')
# plot the original waveform also for evaluation of the CF and pick
if self.ogstream:
data = self.ogstream[0].data
if len(data) == len(self.Tcf):
ax1.plot(self.Tcf, 0.5 * data / max(data), 'k', label='Seismogram', alpha=0.3, zorder=0,
lw=0.5)
if self.Pick is not None:
ax1.plot([self.Pick, self.Pick], [-0.1, 0.5], 'b', linewidth=2, label='AIC-Pick')
ax1.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime)
@@ -317,7 +365,7 @@ class AICPicker(AutoPicker):
if self.Pick is not None:
ax2 = fig.add_subplot(2, 1, 2, sharex=ax1)
ax2.plot(self.Tcf, x, 'k', label='Data')
ax2.plot(self.Tcf, aicsmooth, color='r', linewidth=0.7, label='Smoothed AIC-CF')
ax1.axvspan(self.Tcf[inoise[0]], self.Tcf[inoise[-1]], color='y', alpha=0.2, lw=0, label='Noise Window')
ax1.axvspan(self.Tcf[isignal[0]], self.Tcf[isignal[-1]], color='b', alpha=0.2, lw=0,
label='Signal Window')
@@ -329,37 +377,43 @@ class AICPicker(AutoPicker):
label='Signal Window')
ax2.axvspan(self.Tcf[iislope[0]], self.Tcf[iislope[-1]], color='g', alpha=0.2, lw=0,
label='Slope Window')
ax2.plot(self.Tcf[iislope], datafit, 'g', linewidth=2, label='Slope')
ax2.plot(self.Tcf[iislope], datafit, 'g', linewidth=2,
label='Slope') # MP MP changed temporarily!
if self.slope is not None:
ax1.set_title('Station %s, SNR=%7.2f, Slope= %12.2f counts/s' % (self.Data[0].stats.station,
self.SNR, self.slope))
else:
ax1.set_title('Station %s, SNR=%7.2f' % (self.Data[0].stats.station, self.SNR))
ax2.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime)
ax2.set_ylabel('Counts')
ax2.set_yticks([])
ax2.legend(loc=1)
if plt_flag == 1:
fig.show()
try: input()
except SyntaxError: pass
plt.close(fig)
else:
ax1.set_title(self.Data[0].stats.station)
if plt_flag == 1:
fig.show()
try: input()
except SyntaxError: pass
plt.close(fig)
if self.Pick == None:
if plt_flag in [1, 2]:
fig.show()
try:
input()
except SyntaxError:
pass
plt.close(fig)
if plt_flag == 3:
stats = self.Data[0].stats
netstlc = '{}.{}.{}'.format(stats.network, stats.station, stats.location)
fig.savefig('aicfig_{}_{}.png'.format(netstlc, stats.channel))
if self.Pick is None:
print('AICPicker: Could not find minimum, picking window too short?')
return
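The minimum search above (argrelmin on a window of the smoothed AIC-CF left of the CF maximum, taking the last local minimum) can be condensed into a standalone sketch. Names and window bounds follow the snippet, but this is a simplified illustration in plain NumPy, not PyLoT's implementation:

```python
import numpy as np

def last_aic_minimum(aicsmooth, icfmax, lpickwindow, safety_samples=0):
    """Index of the last local minimum of the smoothed AIC-CF within
    [icfmax - lpickwindow, icfmax + safety_samples), or None.
    Simplified sketch mirroring the argrelmin-based search above."""
    left = max(icfmax - lpickwindow, 2)
    snip = aicsmooth[left:icfmax + safety_samples]
    inner = snip[1:-1]
    # strict local minima (smaller than both neighbours), interior points only
    minima = np.where((inner < snip[:-2]) & (inner < snip[2:]))[0] + 1
    if len(minima) == 0:
        return None
    return int(minima[-1]) + left
```

Taking the *last* minimum (closest to the CF maximum) matches the `minima[-1]` choice in the code above.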
class PragPicker(AutoPicker):
'''
"""
Method of pragmatic picking exploiting information given by CF.
'''
"""
def calcPick(self):
@@ -408,11 +462,11 @@ class PragPicker(AutoPicker):
# prominent trend: decrease aus
# flat: use given aus
cfdiff = np.diff(cfipick)
if len(cfdiff)<20:
if len(cfdiff) < 20:
print('PragPicker: Very few samples for CF. Check LTA window dimensions!')
i0diff = np.where(cfdiff > 0)
cfdiff = cfdiff[i0diff]
if len(cfdiff)<1:
if len(cfdiff) < 1:
print('PragPicker: Negative slope for CF. Check LTA window dimensions! STOP')
self.Pick = None
return
@@ -425,20 +479,22 @@ class PragPicker(AutoPicker):
cfpick_r = 0
cfpick_l = 0
lpickwindow = int(round(self.PickWindow / self.dt))
for i in range(max(np.insert(ipick, 0, 2)), min([ipick1 + lpickwindow + 1, len(self.cf) - 1])):
if self.cf[i + 1] > self.cf[i] and self.cf[i - 1] >= self.cf[i]:
if cfsmooth[i - 1] * (1 + aus1) >= cfsmooth[i]:
if cfpick1 >= self.cf[i]:
pick_r = self.Tcf[i]
self.Pick = pick_r
flagpick_l = 1
cfpick_r = self.cf[i]
break
# for i in range(max(np.insert(ipick, 0, 2)), min([ipick1 + lpickwindow + 1, len(self.cf) - 1])):
# # local minimum
# if self.cf[i + 1] > self.cf[i] <= self.cf[i - 1]:
# if cfsmooth[i - 1] * (1 + aus1) >= cfsmooth[i]:
# if cfpick1 >= self.cf[i]:
# pick_r = self.Tcf[i]
# self.Pick = pick_r
# flagpick_l = 1
# cfpick_r = self.cf[i]
# break
# now we look to the left
if len(self.cf) > ipick1 +1:
if len(self.cf) > ipick1 + 1:
for i in range(ipick1, max([ipick1 - lpickwindow + 1, 2]), -1):
if self.cf[i + 1] > self.cf[i] and self.cf[i - 1] >= self.cf[i]:
# local minimum
if self.cf[i + 1] > self.cf[i] <= self.cf[i - 1]:
if cfsmooth[i - 1] * (1 + aus1) >= cfsmooth[i]:
if cfpick1 >= self.cf[i]:
pick_l = self.Tcf[i]
@@ -447,7 +503,7 @@ class PragPicker(AutoPicker):
cfpick_l = self.cf[i]
break
else:
msg ='PragPicker: Initial onset too close to start of CF! \
msg = 'PragPicker: Initial onset too close to start of CF! \
Stop finalizing pick to the left.'
print(msg)
@@ -455,9 +511,9 @@ class PragPicker(AutoPicker):
if flagpick_l > 0 and flagpick_r > 0 and cfpick_l <= 3 * cfpick_r:
self.Pick = pick_l
pickflag = 1
elif flagpick_l > 0 and flagpick_r > 0 and cfpick_l >= cfpick_r:
self.Pick = pick_r
pickflag = 1
# elif flagpick_l > 0 and flagpick_r > 0 and cfpick_l >= cfpick_r:
# self.Pick = pick_r
# pickflag = 1
elif flagpick_l == 0 and flagpick_r > 0 and cfpick_l >= cfpick_r:
self.Pick = pick_l
pickflag = 1
@@ -467,24 +523,28 @@ class PragPicker(AutoPicker):
pickflag = 0
if iplot > 1:
if self.fig == None or self.fig == 'None':
if self.fig is None or self.fig == 'None':
fig = plt.figure() # self.getiplot())
plt_flag = 1
else:
fig = self.fig
fig._tight = True
ax = fig.add_subplot(111)
ax.plot(Tcfpick, cfipick, 'k', label='CF')
ax.plot(Tcfpick, cfipick, color=self._linecolor, linewidth=0.7, label='CF')
ax.plot(Tcfpick, cfsmoothipick, 'r', label='Smoothed CF')
if pickflag > 0:
ax.plot([self.Pick, self.Pick], [min(cfipick), max(cfipick)], 'b', linewidth=2, label='Pick')
ax.plot([self.Pick, self.Pick], [min(cfipick), max(cfipick)], self._pickcolor_p, linewidth=2,
label='Pick')
ax.set_xlabel('Time [s] since %s' % self.Data[0].stats.starttime)
ax.set_yticks([])
ax.set_title(self.Data[0].stats.station)
ax.legend(loc=1)
if plt_flag == 1:
fig.show()
try: input()
except SyntaxError: pass
try:
input()
except SyntaxError:
pass
plt.close(fig)
return
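The leftward refinement loop above can be sketched as a standalone function. Parameter names (`aus1`, `cfpick1`, `lpickwindow`) are taken from the code; the CF smoothing and right-hand bookkeeping are omitted, so this is an illustration, not the class method:

```python
import numpy as np

def search_left(cf, cfsmooth, ipick1, lpickwindow, aus1, cfpick1):
    """Walk left from the initial pick index ipick1 and return the first
    local minimum of cf whose smoothed value satisfies the aus1 tolerance
    and lies below the initial CF pick value; None if nothing qualifies."""
    for i in range(ipick1, max(ipick1 - lpickwindow + 1, 2), -1):
        if cf[i + 1] > cf[i] <= cf[i - 1]:                 # local minimum
            if cfsmooth[i - 1] * (1 + aus1) >= cfsmooth[i]:
                if cfpick1 >= cf[i]:
                    return i
    return None
```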

@@ -1,998 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
"""
Created Mar/Apr 2015
Collection of helpful functions for manual and automatic picking.
:author: Ludger Kueperkoch / MAGS2 EP3 working group
"""
import warnings
import matplotlib.pyplot as plt
import numpy as np
from obspy.core import Stream, UTCDateTime
def earllatepicker(X, nfac, TSNR, Pick1, iplot=0, stealthMode=False):
'''
Function to derive earliest and latest possible pick after Diehl & Kissling (2009)
as reasonable uncertainties. Latest possible pick is based on noise level,
earliest possible pick is half a signal wavelength in front of most likely
pick given by PragPicker or manually set by analyst. Most likely pick
(initial pick Pick1) must be given.
:param: X, time series (seismogram)
:type: `~obspy.core.stream.Stream`
:param: nfac (noise factor), nfac times noise level to calculate latest possible pick
:type: int
:param: TSNR, length of time windows around pick used to determine SNR [s]
:type: tuple (T_noise, T_gap, T_signal)
:param: Pick1, initial (most likely) onset time, starting point for earllatepicker
:type: float
:param: iplot, if given, results are plotted in figure(iplot)
:type: int
'''
assert isinstance(X, Stream), "%s is not a stream object" % str(X)
LPick = None
EPick = None
PickError = None
if stealthMode is False:
print('earllatepicker: Get earliest and latest possible pick relative to most likely pick ...')
x = X[0].data
t = np.arange(0, X[0].stats.npts / X[0].stats.sampling_rate,
X[0].stats.delta)
inoise = getnoisewin(t, Pick1, TSNR[0], TSNR[1])
# get signal window
isignal = getsignalwin(t, Pick1, TSNR[2])
# remove mean
x = x - np.mean(x[inoise])
# calculate noise level
nlevel = np.sqrt(np.mean(np.square(x[inoise]))) * nfac
# get time where signal exceeds nlevel
ilup, = np.where(x[isignal] > nlevel)
ildown, = np.where(x[isignal] < -nlevel)
if not ilup.size and not ildown.size:
print("earllatepicker: Signal lower than noise level!")
print("Skip this trace!")
return LPick, EPick, PickError
il = min(np.min(ilup) if ilup.size else float('inf'),
np.min(ildown) if ildown.size else float('inf'))
LPick = t[isignal][il]
# get earliest possible pick
EPick = np.nan
count = 0
pis = isignal
# if EPick stays NaN the signal window size will be doubled
while np.isnan(EPick):
if count > 0 and stealthMode is False:
print("earllatepicker: Doubled signal window size %s time(s) "
"because of NaN for earliest pick." % count)
isigDoubleWinStart = pis[-1] + 1
isignalDoubleWin = np.arange(isigDoubleWinStart,
isigDoubleWinStart + len(pis))
if (isigDoubleWinStart + len(pis)) < X[0].data.size:
pis = np.concatenate((pis, isignalDoubleWin))
else:
print("Could not double signal window. Index out of bounds.")
break
count += 1
# determine all zero crossings in signal window (demeaned)
zc = crossings_nonzero_all(x[pis] - x[pis].mean())
# calculate mean half period T0 of signal as the average of the zero-crossing intervals
T0 = np.mean(np.diff(zc)) * X[0].stats.delta # this is half wave length
# T0/2 (a quarter wavelength, since T0 is the half period) is assumed as time difference between most likely and earliest possible pick!
EPick = Pick1 - T0 / 2
# get symmetric pick error as mean from earliest and latest possible pick
# by weighting latest possible pick two times earliest possible pick
diffti_tl = LPick - Pick1
diffti_te = Pick1 - EPick
PickError = (diffti_te + 2 * diffti_tl) / 3
if iplot > 1:
p = plt.figure(iplot)
p1, = plt.plot(t, x, 'k')
p2, = plt.plot(t[inoise], x[inoise])
p3, = plt.plot(t[isignal], x[isignal], 'r')
p4, = plt.plot([t[0], t[int(len(t)) - 1]], [nlevel, nlevel], '--k')
p5, = plt.plot(t[isignal[zc]], np.zeros(len(zc)), '*g',
markersize=14)
plt.legend([p1, p2, p3, p4, p5],
['Data', 'Noise Window', 'Signal Window', 'Noise Level',
'Zero Crossings'],
loc='best')
plt.plot([t[0], t[int(len(t)) - 1]], [-nlevel, -nlevel], '--k')
plt.plot([Pick1, Pick1], [max(x), -max(x)], 'b', linewidth=2)
plt.plot([LPick, LPick], [max(x) / 2, -max(x) / 2], '--k')
plt.plot([EPick, EPick], [max(x) / 2, -max(x) / 2], '--k')
plt.plot([Pick1 + PickError, Pick1 + PickError],
[max(x) / 2, -max(x) / 2], 'r--')
plt.plot([Pick1 - PickError, Pick1 - PickError],
[max(x) / 2, -max(x) / 2], 'r--')
plt.xlabel('Time [s] since %s' % X[0].stats.starttime)
plt.yticks([])
plt.title(
'Earliest-/Latest Possible/Most Likely Pick & Symmetric Pick Error, %s' %
X[0].stats.station)
plt.show()
input()
plt.close(p)
return EPick, LPick, PickError
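The symmetric pick error computed at the end of earllatepicker (the latest-pick deviation weighted twice against the earliest-pick deviation) reduces to a one-line formula; a minimal standalone sketch:

```python
def symmetric_pick_error(epick, pick, lpick):
    """Symmetric pick error as in earllatepicker above: weighted mean of
    the earliest- and latest-pick deviations, with the latest possible
    pick counted twice."""
    diff_late = lpick - pick    # latest possible pick minus most likely pick
    diff_early = pick - epick   # most likely pick minus earliest possible pick
    return (diff_early + 2.0 * diff_late) / 3.0
```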
def fmpicker(Xraw, Xfilt, pickwin, Pick, iplot=0):
'''
Function to derive first motion (polarity) of given phase onset Pick.
Calculation is based on zero crossings determined within time window pickwin
after given onset time.
:param: Xraw, unfiltered time series (seismogram)
:type: `~obspy.core.stream.Stream`
:param: Xfilt, filtered time series (seismogram)
:type: `~obspy.core.stream.Stream`
:param: pickwin, time window after onset Pick within zero crossings are calculated
:type: float
:param: Pick, initial (most likely) onset time, starting point for fmpicker
:type: float
:param: iplot, if given, results are plotted in figure(iplot)
:type: int
'''
warnings.simplefilter('ignore', np.RankWarning)
assert isinstance(Xraw, Stream), "%s is not a stream object" % str(Xraw)
assert isinstance(Xfilt, Stream), "%s is not a stream object" % str(Xfilt)
FM = None
if Pick is not None:
print("fmpicker: Get first motion (polarity) of onset using unfiltered seismogram...")
xraw = Xraw[0].data
xfilt = Xfilt[0].data
t = np.arange(0, Xraw[0].stats.npts / Xraw[0].stats.sampling_rate,
Xraw[0].stats.delta)
# get pick window
ipick = np.where(
(t <= min([Pick + pickwin, len(Xraw[0])])) & (t >= Pick))
# remove mean
xraw[ipick] = xraw[ipick] - np.mean(xraw[ipick])
xfilt[ipick] = xfilt[ipick] - np.mean(xfilt[ipick])
# get zero crossings after most likely pick
# initial onset is assumed to be the first zero crossing
# first from unfiltered trace
zc1 = []
zc1.append(Pick)
index1 = []
i = 0
for j in range(ipick[0][1], ipick[0][len(t[ipick]) - 1]):
i = i + 1
if xraw[j - 1] <= 0 <= xraw[j]:
zc1.append(t[ipick][i])
index1.append(i)
elif xraw[j - 1] > 0 >= xraw[j]:
zc1.append(t[ipick][i])
index1.append(i)
if len(zc1) == 3:
break
# if time difference between 1st and 2nd zero crossing
# is too short, get time difference between 1st and 3rd
# to derive maximum
if zc1[1] - zc1[0] <= Xraw[0].stats.delta:
li1 = index1[1]
else:
li1 = index1[0]
if np.size(xraw[ipick[0][1]:ipick[0][li1]]) == 0:
print("fmpicker: Onset on unfiltered trace too emergent for first motion determination!")
P1 = None
else:
imax1 = np.argmax(abs(xraw[ipick[0][1]:ipick[0][li1]]))
if imax1 == 0:
imax1 = np.argmax(abs(xraw[ipick[0][1]:ipick[0][index1[1]]]))
if imax1 == 0:
print("fmpicker: Zero crossings too close!")
print("Skip first motion determination!")
return FM
islope1 = np.where((t >= Pick) & (t <= Pick + t[imax1]))
# calculate slope as polynomial fit of order 1
xslope1 = np.arange(0, len(xraw[islope1]), 1)
P1 = np.polyfit(xslope1, xraw[islope1], 1)
datafit1 = np.polyval(P1, xslope1)
# now using filtered trace
# next zero crossings after most likely pick
zc2 = []
zc2.append(Pick)
index2 = []
i = 0
for j in range(ipick[0][1], ipick[0][len(t[ipick]) - 1]):
i = i + 1
if xfilt[j - 1] <= 0 <= xfilt[j]:
zc2.append(t[ipick][i])
index2.append(i)
elif xfilt[j - 1] > 0 >= xfilt[j]:
zc2.append(t[ipick][i])
index2.append(i)
if len(zc2) == 3:
break
# if time difference between 1st and 2nd zero crossing
# is too short, get time difference between 1st and 3rd
# to derive maximum
if zc2[1] - zc2[0] <= Xfilt[0].stats.delta:
li2 = index2[1]
else:
li2 = index2[0]
if np.size(xfilt[ipick[0][1]:ipick[0][li2]]) == 0:
print("fmpicker: Onset on filtered trace too emergent for first motion determination!")
P2 = None
else:
imax2 = np.argmax(abs(xfilt[ipick[0][1]:ipick[0][li2]]))
if imax2 == 0:
imax2 = np.argmax(abs(xfilt[ipick[0][1]:ipick[0][index2[1]]]))
if imax2 == 0:
print("fmpicker: Zero crossings too close!")
print("Skip first motion determination!")
return FM
islope2 = np.where((t >= Pick) & (t <= Pick + t[imax2]))
# calculate slope as polynomial fit of order 1
xslope2 = np.arange(0, len(xfilt[islope2]), 1)
P2 = np.polyfit(xslope2, xfilt[islope2], 1)
datafit2 = np.polyval(P2, xslope2)
# compare results
if P1 is not None and P2 is not None:
if P1[0] < 0 and P2[0] < 0:
FM = 'D'
elif P1[0] >= 0 > P2[0]:
FM = '-'
elif P1[0] < 0 <= P2[0]:
FM = '-'
elif P1[0] > 0 and P2[0] > 0:
FM = 'U'
elif P1[0] <= 0 < P2[0]:
FM = '+'
elif P1[0] > 0 >= P2[0]:
FM = '+'
print("fmpicker: Found polarity %s" % FM)
if iplot > 1:
plt.figure(iplot)
plt.subplot(2, 1, 1)
plt.plot(t, xraw, 'k')
p1, = plt.plot([Pick, Pick], [max(xraw), -max(xraw)], 'b', linewidth=2)
if P1 is not None:
p2, = plt.plot(t[islope1], xraw[islope1])
p3, = plt.plot(zc1, np.zeros(len(zc1)), '*g', markersize=14)
p4, = plt.plot(t[islope1], datafit1, '--g', linewidth=2)
plt.legend([p1, p2, p3, p4],
['Pick', 'Slope Window', 'Zero Crossings', 'Slope'],
loc='best')
plt.text(Pick + 0.02, max(xraw) / 2, '%s' % FM, fontsize=14)
ax = plt.gca()
plt.yticks([])
plt.title('First-Motion Determination, %s, Unfiltered Data' % Xraw[
0].stats.station)
plt.subplot(2, 1, 2)
plt.title('First-Motion Determination, Filtered Data')
plt.plot(t, xfilt, 'k')
p1, = plt.plot([Pick, Pick], [max(xfilt), -max(xfilt)], 'b',
linewidth=2)
if P2 is not None:
p2, = plt.plot(t[islope2], xfilt[islope2])
p3, = plt.plot(zc2, np.zeros(len(zc2)), '*g', markersize=14)
p4, = plt.plot(t[islope2], datafit2, '--g', linewidth=2)
plt.text(Pick + 0.02, max(xraw) / 2, '%s' % FM, fontsize=14)
ax = plt.gca()
plt.xlabel('Time [s] since %s' % Xraw[0].stats.starttime)
plt.yticks([])
plt.show()
input()
plt.close(iplot)
return FM
def crossings_nonzero_all(data):
pos = data > 0
npos = ~pos
return ((pos[:-1] & npos[1:]) | (npos[:-1] & pos[1:])).nonzero()[0]
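crossings_nonzero_all feeds the mean half-period estimate T0 in earllatepicker; a self-contained sketch of that use (demeaning and signal-window selection are omitted, so this is an illustration of the idea rather than the module's code path):

```python
import numpy as np

def mean_half_period(x, delta):
    """Mean half-period of x estimated from the spacing of its zero
    crossings, as used for the earliest-pick offset in earllatepicker."""
    pos = x > 0
    # indices i where the sign changes between samples i and i+1
    zc = ((pos[:-1] & ~pos[1:]) | (~pos[:-1] & pos[1:])).nonzero()[0]
    return np.mean(np.diff(zc)) * delta
```

For a 2 Hz sine sampled at 1 kHz the zero crossings are 0.25 s apart, which the estimate recovers.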
def getSNR(X, TSNR, t1):
'''
Function to calculate SNR of certain part of seismogram relative to
given time (onset) out of given noise and signal windows. A safety gap
between noise and signal part can be set. Returns SNR and SNR [dB] and
noiselevel.
:param: X, time series (seismogram)
:type: `~obspy.core.stream.Stream`
:param: TSNR, length of time windows [s] around t1 (onset) used to determine SNR
:type: tuple (T_noise, T_gap, T_signal)
:param: t1, initial time (onset) from which noise and signal windows are calculated
:type: float
'''
assert isinstance(X, Stream), "%s is not a stream object" % str(X)
x = X[0].data
t = np.arange(0, X[0].stats.npts / X[0].stats.sampling_rate,
X[0].stats.delta)
# get noise window
inoise = getnoisewin(t, t1, TSNR[0], TSNR[1])
# get signal window
isignal = getsignalwin(t, t1, TSNR[2])
if np.size(inoise) < 1:
print("getSNR: Empty array inoise, check noise window!")
return
elif np.size(isignal) < 1:
print("getSNR: Empty array isignal, check signal window!")
return
# demean over entire waveform
x = x - np.mean(x[inoise])
# calculate ratios
noiselevel = np.sqrt(np.mean(np.square(x[inoise])))
signallevel = np.sqrt(np.mean(np.square(x[isignal])))
SNR = signallevel / noiselevel
SNRdB = 10 * np.log10(SNR)
return SNR, SNRdB, noiselevel
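The core of getSNR (RMS of the demeaned signal window over RMS of the noise window, plus the dB value) can be sketched with the window selection factored out; `snr_from_windows` is an illustrative helper, not part of this module:

```python
import numpy as np

def snr_from_windows(x, noise_idx, signal_idx):
    """RMS signal-to-noise ratio and its dB value, mirroring getSNR above;
    the noise/signal index arrays are passed in directly."""
    x = x - np.mean(x[noise_idx])                      # demean on noise window
    noiselevel = np.sqrt(np.mean(np.square(x[noise_idx])))
    signallevel = np.sqrt(np.mean(np.square(x[signal_idx])))
    snr = signallevel / noiselevel
    return snr, 10 * np.log10(snr)
```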
def getnoisewin(t, t1, tnoise, tgap):
'''
Function to extract indices of data out of time series for noise calculation.
Returns an array of indices.
:param: t, array of time stamps
:type: numpy array
:param: t1, time relative to which the noise window is extracted
:type: float
:param: tnoise, length of time window [s] for noise part extraction
:type: float
:param: tgap, safety gap between t1 (onset) and noise window to
ensure, that noise window contains no signal
:type: float
'''
# get noise window
inoise, = np.where((t <= max([t1 - tgap, 0])) \
& (t >= max([t1 - tnoise - tgap, 0])))
if np.size(inoise) < 1:
print("getnoisewin: Empty array inoise, check noise window!")
return inoise
def getsignalwin(t, t1, tsignal):
'''
Function to extract data out of time series for signal level calculation.
Returns an array of indices.
:param: t, array of time stamps
:type: numpy array
:param: t1, time relative to which the signal window is extracted
:type: float
:param: tsignal, length of time window [s] for signal level calculation
:type: float
'''
# get signal window
isignal, = np.where((t <= min([t1 + tsignal, len(t)])) \
& (t >= t1))
if np.size(isignal) < 1:
print("getsignalwin: Empty array isignal, check signal window!")
return isignal
def getResolutionWindow(snr):
"""
Number -> Float
produce the half of the time resolution window width from given SNR
value
SNR >= 3 -> 2 sec HRW
3 > SNR >= 2 -> 5 sec MRW
2 > SNR >= 1.5 -> 10 sec LRW
1.5 > SNR -> 15 sec VLRW
see also Diehl et al. 2009
>>> getResolutionWindow(0.5)
7.5
>>> getResolutionWindow(1.8)
5.0
>>> getResolutionWindow(2.3)
2.5
>>> getResolutionWindow(4)
1.0
>>> getResolutionWindow(2)
2.5
"""
res_wins = {'HRW': 2., 'MRW': 5., 'LRW': 10., 'VLRW': 15.}
if snr < 1.5:
time_resolution = res_wins['VLRW']
elif snr < 2.:
time_resolution = res_wins['LRW']
elif snr < 3.:
time_resolution = res_wins['MRW']
else:
time_resolution = res_wins['HRW']
return time_resolution / 2
def wadaticheck(pickdic, dttolerance, iplot):
'''
Function to calculate Wadati-diagram from given P and S onsets in order
to detect S pick outliers. If a certain S-P time deviates by more than
dttolerance from the S-P time regression, the S pick is marked and downgraded.
: param: pickdic, dictionary containing picks and quality parameters
: type: dictionary
: param: dttolerance, maximum adjusted deviation of S-P time from
S-P time regression
: type: float
: param: iplot, if iplot > 1, Wadati diagram is shown
: type: int
'''
checkedonsets = pickdic
# search for good quality picks and calculate S-P time
Ppicks = []
Spicks = []
SPtimes = []
for key in pickdic:
if pickdic[key]['P']['weight'] < 4 and pickdic[key]['S']['weight'] < 4:
# calculate S-P time
spt = pickdic[key]['S']['mpp'] - pickdic[key]['P']['mpp']
# add S-P time to dictionary
pickdic[key]['SPt'] = spt
# add P onsets and corresponding S-P times to list
UTCPpick = UTCDateTime(pickdic[key]['P']['mpp'])
UTCSpick = UTCDateTime(pickdic[key]['S']['mpp'])
Ppicks.append(UTCPpick.timestamp)
Spicks.append(UTCSpick.timestamp)
SPtimes.append(spt)
if len(SPtimes) >= 3:
# calculate slope
p1 = np.polyfit(Ppicks, SPtimes, 1)
wdfit = np.polyval(p1, Ppicks)
wfitflag = 0
# calculate vp/vs ratio before check
vpvsr = p1[0] + 1
print("###############################################")
print("wadaticheck: Average Vp/Vs ratio before check: %f" % vpvsr)
checkedPpicks = []
checkedSpicks = []
checkedSPtimes = []
# calculate deviations from Wadati regression
ii = 0
ibad = 0
for key in pickdic:
if 'SPt' in pickdic[key]:
wddiff = abs(pickdic[key]['SPt'] - wdfit[ii])
ii += 1
# check, if deviation is larger than adjusted
if wddiff > dttolerance:
# mark onset and downgrade S-weight to 9
# (not used anymore)
marker = 'badWadatiCheck'
pickdic[key]['S']['weight'] = 9
ibad += 1
else:
marker = 'goodWadatiCheck'
checkedPpick = UTCDateTime(pickdic[key]['P']['mpp'])
checkedPpicks.append(checkedPpick.timestamp)
checkedSpick = UTCDateTime(pickdic[key]['S']['mpp'])
checkedSpicks.append(checkedSpick.timestamp)
checkedSPtime = pickdic[key]['S']['mpp'] - pickdic[key]['P']['mpp']
checkedSPtimes.append(checkedSPtime)
pickdic[key]['S']['marked'] = marker
if len(checkedPpicks) >= 3:
# calculate new slope
p2 = np.polyfit(checkedPpicks, checkedSPtimes, 1)
wdfit2 = np.polyval(p2, checkedPpicks)
# calculate vp/vs ratio after check
cvpvsr = p2[0] + 1
print("wadaticheck: Average Vp/Vs ratio after check: %f" % cvpvsr)
print("wadaticheck: Skipped %d S pick(s)" % ibad)
else:
print("###############################################")
print("wadaticheck: Not enough checked S-P times available!")
print("Skip Wadati check!")
checkedonsets = pickdic
else:
print("wadaticheck: Not enough S-P times available for reliable regression!")
print("Skip wadati check!")
wfitflag = 1
# plot results
if iplot > 1:
plt.figure(iplot)
f1, = plt.plot(Ppicks, SPtimes, 'ro')
if wfitflag == 0:
f2, = plt.plot(Ppicks, wdfit, 'k')
f3, = plt.plot(checkedPpicks, checkedSPtimes, 'ko')
f4, = plt.plot(checkedPpicks, wdfit2, 'g')
plt.title('Wadati-Diagram, %d S-P Times, Vp/Vs(raw)=%5.2f, ' \
'Vp/Vs(checked)=%5.2f' % (len(SPtimes), vpvsr, cvpvsr))
plt.legend([f1, f2, f3, f4], ['Skipped S-Picks', 'Wadati 1',
'Reliable S-Picks', 'Wadati 2'], loc='best')
else:
plt.title('Wadati-Diagram, %d S-P Times' % len(SPtimes))
plt.ylabel('S-P Times [s]')
plt.xlabel('P Times [s]')
plt.show()
input()
plt.close(iplot)
return checkedonsets
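The Vp/Vs recovery in wadaticheck rests on the Wadati relation: for origin time t0, S-P = (vp/vs - 1) * (tp - t0), so the slope of a line fit of S-P times against P arrival times, plus one, gives vp/vs. A sketch with synthetic arrivals (the distances and velocities are illustrative, not from PyLoT):

```python
import numpy as np

def wadati_vpvs(tp, sp):
    """Vp/Vs from the slope of S-P times against P arrival times
    (slope + 1), as computed in wadaticheck above."""
    return np.polyfit(tp, sp, 1)[0] + 1.0

# synthetic arrivals from hypothetical distances and velocities
t0, vp, vs = 5.0, 6.0, 3.46
dists = np.array([10., 25., 40., 80., 120.])
tp = t0 + dists / vp            # P arrival times
sp = dists / vs - dists / vp    # S-P times
```

With noise-free arrivals the fit recovers vp/vs = 6.0 / 3.46 exactly (up to float error).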
def checksignallength(X, pick, TSNR, minsiglength, nfac, minpercent, iplot):
'''
Function to detect spuriously picked noise peaks.
Uses RMS trace of all 3 components (if available) to determine,
how many samples [per cent] after P onset are below certain
threshold, calculated from noise level times noise factor.
: param: X, time series (seismogram)
: type: `~obspy.core.stream.Stream`
: param: pick, initial (AIC) P onset time
: type: float
: param: TSNR, length of time windows around initial pick [s]
: type: tuple (T_noise, T_gap, T_signal)
: param: minsiglength, minimum required signal length [s] to
declare pick as P onset
: type: float
: param: nfac, noise factor (nfac * noise level = threshold)
: type: float
: param: minpercent, minimum required percentage of samples
above calculated threshold
: type: float
: param: iplot, if iplot > 1, results are shown in figure
: type: int
'''
assert isinstance(X, Stream), "%s is not a stream object" % str(X)
print("Checking signal length ...")
if len(X) > 1:
# all three components available
# make sure, all components have equal lengths
ilen = min([len(X[0].data), len(X[1].data), len(X[2].data)])
x1 = X[0][0:ilen]
x2 = X[1][0:ilen]
x3 = X[2][0:ilen]
# get RMS trace
rms = np.sqrt((np.power(x1, 2) + np.power(x2, 2) + np.power(x3, 2)) / 3)
else:
x1 = X[0].data
ilen = len(x1)
rms = np.sqrt(np.power(x1, 2))
t = np.arange(0, ilen / X[0].stats.sampling_rate,
X[0].stats.delta)
# get noise window in front of pick plus safety gap
inoise = getnoisewin(t, pick - 0.5, TSNR[0], TSNR[1])
# get signal window
isignal = getsignalwin(t, pick, minsiglength)
# calculate minimum adjusted signal level
minsiglevel = max(rms[inoise]) * nfac
# minimum adjusted number of samples over minimum signal level
minnum = len(isignal) * minpercent / 100
# get number of samples above minimum adjusted signal level
numoverthr = len(np.where(rms[isignal] >= minsiglevel)[0])
if numoverthr >= minnum:
print("checksignallength: Signal reached required length.")
returnflag = 1
else:
print("checksignallength: Signal shorter than required minimum signal length!")
print("Presumably picked noise peak, pick is rejected!")
print("(min. signal length required: %s s)" % minsiglength)
returnflag = 0
if iplot == 2:
plt.figure(iplot)
p1, = plt.plot(t, rms, 'k')
p2, = plt.plot(t[inoise], rms[inoise], 'c')
p3, = plt.plot(t[isignal], rms[isignal], 'r')
p4, = plt.plot([t[isignal[0]], t[isignal[len(isignal) - 1]]],
[minsiglevel, minsiglevel], 'g', linewidth=2)
p5, = plt.plot([pick, pick], [min(rms), max(rms)], 'b', linewidth=2)
plt.legend([p1, p2, p3, p4, p5], ['RMS Data', 'RMS Noise Window',
'RMS Signal Window', 'Minimum Signal Level',
'Onset'], loc='best')
plt.xlabel('Time [s] since %s' % X[0].stats.starttime)
plt.ylabel('Counts')
plt.title('Check for Signal Length, Station %s' % X[0].stats.station)
plt.yticks([])
plt.show()
input()
plt.close(iplot)
return returnflag
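The three-component RMS trace built at the top of checksignallength can be sketched as a small helper (the name `rms_trace` is hypothetical, and the single-component fallback is omitted):

```python
import numpy as np

def rms_trace(x1, x2, x3):
    """Component-wise RMS of three traces after truncating them to a
    common length, as built at the start of checksignallength above."""
    n = min(len(x1), len(x2), len(x3))  # make sure all components have equal lengths
    return np.sqrt((x1[:n] ** 2 + x2[:n] ** 2 + x3[:n] ** 2) / 3.0)
```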
def checkPonsets(pickdic, dttolerance, iplot):
'''
Function to check statistics of P-onset times: Control deviation from
median (maximum adjusted deviation = dttolerance) and apply pseudo-
bootstrapping jackknife.
: param: pickdic, dictionary containing picks and quality parameters
: type: dictionary
: param: dttolerance, maximum adjusted deviation of P-onset time from
median of all P onsets
: type: float
: param: iplot, if iplot > 1, Wadati diagram is shown
: type: int
'''
checkedonsets = pickdic
# search for good quality P picks
Ppicks = []
stations = []
for key in pickdic:
if pickdic[key]['P']['weight'] < 4:
# add P onsets to list
UTCPpick = UTCDateTime(pickdic[key]['P']['mpp'])
Ppicks.append(UTCPpick.timestamp)
stations.append(key)
# apply jackknife bootstrapping on variance of P onsets
print("###############################################")
print("checkPonsets: Apply jackknife bootstrapping on P-onset times ...")
[xjack, PHI_pseudo, PHI_sub] = jackknife(Ppicks, 'VAR', 1)
# get pseudo variances smaller than average variances
# (times safety factor), these picks passed jackknife test
ij = np.where(PHI_pseudo <= 2 * xjack)
# these picks did not pass jackknife test
badjk = np.where(PHI_pseudo > 2 * xjack)
badjkstations = np.array(stations)[badjk]
print("checkPonsets: %d pick(s) did not pass jackknife test!" % len(badjkstations))
# calculate median from these picks
pmedian = np.median(np.array(Ppicks)[ij])
# find picks that deviate less than dttolerance from median
ii = np.where(abs(np.array(Ppicks)[ij] - pmedian) <= dttolerance)
jj = np.where(abs(np.array(Ppicks)[ij] - pmedian) > dttolerance)
igood = ij[0][ii]
ibad = ij[0][jj]
goodstations = np.array(stations)[igood]
badstations = np.array(stations)[ibad]
print("checkPonsets: %d pick(s) deviate too much from median!" % len(ibad))
print("checkPonsets: Skipped %d P pick(s) out of %d" % (len(badstations) \
+ len(badjkstations), len(stations)))
goodmarker = 'goodPonsetcheck'
badmarker = 'badPonsetcheck'
badjkmarker = 'badjkcheck'
for i in range(0, len(goodstations)):
# mark P onset as checked and keep P weight
pickdic[goodstations[i]]['P']['marked'] = goodmarker
for i in range(0, len(badstations)):
# mark P onset and downgrade P weight to 9
# (not used anymore)
pickdic[badstations[i]]['P']['marked'] = badmarker
pickdic[badstations[i]]['P']['weight'] = 9
for i in range(0, len(badjkstations)):
# mark P onset and downgrade P weight to 9
# (not used anymore)
pickdic[badjkstations[i]]['P']['marked'] = badjkmarker
pickdic[badjkstations[i]]['P']['weight'] = 9
checkedonsets = pickdic
if iplot > 1:
p1, = plt.plot(np.arange(0, len(Ppicks)), Ppicks, 'r+', markersize=14)
p2, = plt.plot(igood, np.array(Ppicks)[igood], 'g*', markersize=14)
p3, = plt.plot([0, len(Ppicks) - 1], [pmedian, pmedian], 'g',
linewidth=2)
for i in range(0, len(Ppicks)):
plt.text(i, Ppicks[i] + 0.2, stations[i])
plt.xlabel('Number of P Picks')
plt.ylabel('Onset Time [s] from 1.1.1970')
plt.legend([p1, p2, p3], ['Skipped P Picks', 'Good P Picks', 'Median'],
loc='best')
plt.title('Check P Onsets')
plt.show()
input()
return checkedonsets
def jackknife(X, phi, h):
'''
Function to calculate the jackknife estimator for a given quantity,
a special type of bootstrapping. Returns the jackknife estimator
PHI_jack, the pseudo values PHI_pseudo and the subgroup parameters
PHI_sub.
: param: X, given quantity
: type: list
: param: phi, chosen estimator, choose between:
"MED" for median
"MEA" for arithmetic mean
"VAR" for variance
: type: string
: param: h, size of subgroups, optional, default = 1
: type: integer
'''
PHI_jack = None
PHI_pseudo = None
PHI_sub = None
# determine number of subgroups
g = len(X) // h
if len(X) % h != 0:
print("jackknife: Cannot divide quantity X in equal sized subgroups!")
print("Choose another size for subgroups!")
return PHI_jack, PHI_pseudo, PHI_sub
else:
# estimator of undisturbed spot check
if phi == 'MEA':
phi_sc = np.mean(X)
elif phi == 'VAR':
phi_sc = np.var(X)
elif phi == 'MED':
phi_sc = np.median(X)
# estimators of subgroups
PHI_pseudo = []
PHI_sub = []
for i in range(g):
# subgroup i, remove i-th sample
xx = X[:]
del xx[i]
# calculate estimators of disturbed spot check
if phi == 'MEA':
phi_sub = np.mean(xx)
elif phi == 'VAR':
phi_sub = np.var(xx)
elif phi == 'MED':
phi_sub = np.median(xx)
PHI_sub.append(phi_sub)
# pseudo values
phi_pseudo = g * phi_sc - ((g - 1) * phi_sub)
PHI_pseudo.append(phi_pseudo)
# jackknife estimator
PHI_jack = np.mean(PHI_pseudo)
return PHI_jack, PHI_pseudo, PHI_sub
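Illustrative sketch (not part of PyLoT, assumes subgroup size h = 1): the delete-one jackknife of the mean using the same pseudo-value formula as above, g * phi_sc - (g - 1) * phi_sub. For the arithmetic mean, each pseudo value reduces to the deleted sample itself, so the jackknife estimator equals the sample mean.

```python
import numpy as np

def jackknife_mean(X):
    """Delete-one jackknife estimator of the mean (subgroup size h = 1)."""
    X = np.asarray(X, dtype=float)
    g = len(X)
    phi_sc = X.mean()  # estimator of the undisturbed sample
    pseudo = []
    for i in range(g):
        phi_sub = np.delete(X, i).mean()  # estimator with sample i removed
        pseudo.append(g * phi_sc - (g - 1) * phi_sub)
    return np.mean(pseudo)
```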
def checkZ4S(X, pick, zfac, checkwin, iplot):
'''
Function to compare the energy content of the vertical trace with the
energy content of the horizontal traces to detect spuriously
picked S onsets instead of P onsets. Usually, the P coda shows
larger longitudinal energy on the vertical trace than on the horizontal
traces, whereas the transversal energy is larger within the S coda.
Be careful: there are special circumstances where this is not
the case!
: param: X, filtered(!) time series, three traces
: type: `~obspy.core.stream.Stream`
: param: pick, initial (AIC) P onset time
: type: float
: param: zfac, factor for threshold determination,
vertical energy must exceed coda level times zfac
to declare a pick as P onset
: type: float
: param: checkwin, window length [s] for calculating P-coda
energy content
: type: float
: param: iplot, if iplot > 1, energy content and threshold
are shown
: type: int
'''
assert isinstance(X, Stream), "%s is not a stream object" % str(X)
print("Check for spuriously picked S onset instead of P onset ...")
returnflag = 0
# split components
zdat = X.select(component="Z")
edat = X.select(component="E")
if len(edat) == 0: # check for other components
edat = X.select(component="2")
ndat = X.select(component="N")
if len(ndat) == 0: # check for other components
ndat = X.select(component="1")
z = zdat[0].data
tz = np.arange(0, zdat[0].stats.npts / zdat[0].stats.sampling_rate,
zdat[0].stats.delta)
# calculate RMS trace from vertical component
absz = np.sqrt(np.power(z, 2))
# calculate RMS trace from both horizontal traces
# make sure, both traces have equal lengths
lene = len(edat[0].data)
lenn = len(ndat[0].data)
minlen = min([lene, lenn])
absen = np.sqrt(np.power(edat[0].data[0:minlen], 2) \
+ np.power(ndat[0].data[0:minlen], 2))
# get signal window
isignal = getsignalwin(tz, pick, checkwin)
# calculate energy levels
zcodalevel = max(absz[isignal])
encodalevel = max(absen[isignal])
# calculate threshold
minsiglevel = encodalevel * zfac
# vertical P-coda level must exceed horizontal P-coda level
# zfac times encodalevel
if zcodalevel < minsiglevel:
print("checkZ4S: Maybe S onset? Skip this P pick!")
else:
print("checkZ4S: P onset passes checkZ4S test!")
returnflag = 1
if iplot > 1:
te = np.arange(0, edat[0].stats.npts / edat[0].stats.sampling_rate,
edat[0].stats.delta)
tn = np.arange(0, ndat[0].stats.npts / ndat[0].stats.sampling_rate,
ndat[0].stats.delta)
plt.plot(tz, z / max(z), 'k')
plt.plot(tz[isignal], z[isignal] / max(z), 'r')
plt.plot(te, edat[0].data / max(edat[0].data) + 1, 'k')
plt.plot(te[isignal], edat[0].data[isignal] / max(edat[0].data) + 1, 'r')
plt.plot(tn, ndat[0].data / max(ndat[0].data) + 2, 'k')
plt.plot(tn[isignal], ndat[0].data[isignal] / max(ndat[0].data) + 2, 'r')
plt.plot([tz[isignal[0]], tz[isignal[len(isignal) - 1]]],
[minsiglevel / max(z), minsiglevel / max(z)], 'g',
linewidth=2)
plt.xlabel('Time [s] since %s' % zdat[0].stats.starttime)
plt.ylabel('Normalized Counts')
plt.yticks([0, 1, 2], [zdat[0].stats.channel, edat[0].stats.channel,
ndat[0].stats.channel])
plt.title('CheckZ4S, Station %s' % zdat[0].stats.station)
plt.show()
input()
return returnflag
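Illustrative sketch (not part of PyLoT): the checkZ4S criterion reduced to plain arrays. A pick passes if the peak vertical amplitude exceeds zfac times the combined horizontal RMS level; the function name is hypothetical.

```python
import numpy as np

def z4s_passes(z, e, n, zfac):
    """True if the vertical peak exceeds zfac times the horizontal peak level."""
    minlen = min(len(e), len(n))  # make sure both horizontal traces align
    absz = np.abs(np.asarray(z, dtype=float))
    absen = np.sqrt(np.asarray(e[:minlen], dtype=float) ** 2
                    + np.asarray(n[:minlen], dtype=float) ** 2)
    return bool(absz.max() >= absen.max() * zfac)
```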
def writephases(arrivals, fformat, filename):
'''
Function to write phases to the following standard file
formats used for locating earthquakes:
HYPO71, NLLoc, VELEST, HYPOSAT, HYPOINVERSE and hypoDD
:param: arrivals
:type: dictionary containing all phase information including
station ID, phase, first motion, weight (uncertainty),
....
:param: fformat
:type: string, chosen file format (location routine),
choose between NLLoc, HYPO71, HYPOSAT, VELEST,
HYPOINVERSE, and hypoDD
:param: filename, full path and name of phase file
:type: string
'''
if fformat == 'NLLoc':
print("Writing phases to %s for NLLoc" % filename)
fid = open("%s" % filename, 'w')
# write header
fid.write('# EQEVENT: Label: EQ001 Loc: X 0.00 Y 0.00 Z 10.00 OT 0.00 \n')
for key in arrivals:
if arrivals[key]['P']['weight'] < 4:
# write phase information to NLLoc-phase file
# see the NLLoc tutorial at www.alomax.free.fr/nlloc/
fm = arrivals[key]['P']['fm']
onset = arrivals[key]['P']['mpp']
year = onset.year
month = onset.month
day = onset.day
hh = onset.hour
mm = onset.minute
ss = onset.second
ms = onset.microsecond
ss_ms = ss + (ms / 1E06)
fid.write('%s ? ? ? P %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 1 \n' \
% (key, fm, year, month, day, hh, mm, ss_ms))
if arrivals[key]['S']['weight'] < 4:
fm = '?'
onset = arrivals[key]['S']['mpp']
year = onset.year
month = onset.month
day = onset.day
hh = onset.hour
mm = onset.minute
ss = onset.second
ms = onset.microsecond
ss_ms = ss + (ms / 1E06)
fid.write('%s ? ? ? S %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 1 \n' \
% (key, fm, year, month, day, hh, mm, ss_ms))
fid.close()
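Illustrative sketch (not part of PyLoT): formatting a single NLLoc observation line as in the NLLoc branch above, but with a plain datetime instead of an ObsPy UTCDateTime; the helper name is hypothetical.

```python
from datetime import datetime

def nlloc_phase_line(station, phase, fm, onset):
    """Format one NLLoc phase line (see www.alomax.free.fr/nlloc/)."""
    # seconds including the fractional part from microseconds
    ss_ms = onset.second + onset.microsecond / 1e6
    return '%s ? ? ? %s %s %d%02d%02d %02d%02d %7.4f GAU 0 0 0 0 1' % (
        station, phase, fm, onset.year, onset.month, onset.day,
        onset.hour, onset.minute, ss_ms)
```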
if __name__ == '__main__':
import doctest
doctest.testmod()


@ -0,0 +1,742 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import traceback
import cartopy.crs as ccrs
import cartopy.feature as cf
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
import matplotlib
import matplotlib.patheffects as PathEffects
import matplotlib.pyplot as plt
import numpy as np
import obspy
from PySide2 import QtWidgets, QtGui
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from obspy import UTCDateTime
from pylot.core.util.utils import identifyPhaseID
from scipy.interpolate import griddata
from pylot.core.pick.utils import get_quality_class
from pylot.core.util.widgets import PickDlg
matplotlib.use('Qt5Agg')
class MplCanvas(FigureCanvas):
def __init__(self, extern_axes=None, projection=None, width=15, height=5, dpi=100):
if extern_axes is None:
self.fig = plt.figure(figsize=(width, height), dpi=dpi)
self.axes = self.fig.add_subplot(111, projection=projection)
else:
self.fig = extern_axes.figure
self.axes = extern_axes
super(MplCanvas, self).__init__(self.fig)
class Array_map(QtWidgets.QWidget):
def __init__(self, parent, metadata, parameter=None, axes=None, annotate=True, pointsize=25.,
linewidth=1.5, width=5e6, height=2e6):
QtWidgets.QWidget.__init__(self, parent=parent)
assert (parameter is not None or parent is not None), 'either parent or parameter has to be set'
# set properties
self._parent = parent
self.metadata = metadata
self.pointsize = pointsize
self.linewidth = linewidth
self.extern_plot_axes = axes
self.width = width
self.height = height
self.annotate = annotate
self.picks = None
self.picks_dict = None
self.uncertainties = None
self.autopicks_dict = None
self.hybrids_dict = None
self.eventLoc = None
self.parameter = parameter if parameter else parent._inputs
self.picks_rel = {}
self.picks_rel_mean_corrected = {}
self.marked_stations = []
self.highlighted_stations = []
# call functions to draw everything
self.projection = ccrs.PlateCarree()
self.init_graphics()
self.ax = self.canvas.axes
self.ax.set_adjustable('datalim')
self.init_stations()
self.init_crtpyMap()
self.init_map()
# set original map limits to fall back on when home button is pressed
self.org_xlim = self.ax.get_xlim()
self.org_ylim = self.ax.get_ylim()
# initial map without event
self.ax.set_xlim(self.org_xlim[0], self.org_xlim[1])
self.ax.set_ylim(self.org_ylim[0], self.org_ylim[1])
self._style = None if not hasattr(parent, '_style') else parent._style
def init_map(self):
self.init_colormap()
self.connectSignals()
self.draw_everything()
def init_graphics(self):
"""
Initializes all GUI components and figure elements to be populated by other functions
"""
# initialize figure elements
if self.extern_plot_axes is None:
self.canvas = MplCanvas(projection=self.projection)
else:
self.canvas = MplCanvas(extern_axes=self.extern_plot_axes)
self.plotWidget = self.canvas
# initialize GUI elements
self.status_label = QtWidgets.QLabel()
self.map_reset_button = QtWidgets.QPushButton('Reset Map View')
self.save_map_button = QtWidgets.QPushButton('Save Map')
self.go2eq_button = QtWidgets.QPushButton('Go to Event Location')
self.subtract_mean_cb = QtWidgets.QCheckBox('Subtract mean')
self.main_box = QtWidgets.QVBoxLayout()
self.setLayout(self.main_box)
self.top_row = QtWidgets.QHBoxLayout()
self.main_box.addLayout(self.top_row, 0)
self.comboBox_phase = QtWidgets.QComboBox()
self.comboBox_phase.insertItem(0, 'P')
self.comboBox_phase.insertItem(1, 'S')
self.comboBox_am = QtWidgets.QComboBox()
self.comboBox_am.insertItem(0, 'hybrid (prefer manual)')
self.comboBox_am.insertItem(1, 'manual')
self.comboBox_am.insertItem(2, 'auto')
self.annotations_box = QtWidgets.QCheckBox('Annotate')
self.annotations_box.setChecked(True)
self.auto_refresh_box = QtWidgets.QCheckBox('Automatic refresh')
self.auto_refresh_box.setChecked(True)
self.refresh_button = QtWidgets.QPushButton('Refresh')
self.cmaps_box = QtWidgets.QComboBox()
self.cmaps_box.setMaxVisibleItems(20)
[self.cmaps_box.addItem(map_name) for map_name in sorted(plt.colormaps())]
# try to set to plasma as default
self.cmaps_box.setCurrentIndex(self.cmaps_box.findText('plasma'))
self.top_row.addWidget(QtWidgets.QLabel('Select a phase: '))
self.top_row.addWidget(self.comboBox_phase)
self.top_row.setStretch(1, 1) # set stretch of item 1 to 1
self.top_row.addWidget(QtWidgets.QLabel('Pick type: '))
self.top_row.addWidget(self.comboBox_am)
self.top_row.setStretch(3, 1) # set stretch of item 1 to 1
self.top_row.addWidget(self.cmaps_box)
self.top_row.addWidget(self.annotations_box)
self.top_row.addWidget(self.auto_refresh_box)
self.top_row.addWidget(self.refresh_button)
self.main_box.addWidget(self.plotWidget, 10)
self.bot_row = QtWidgets.QHBoxLayout()
self.main_box.addLayout(self.bot_row, 0)
self.bot_row.addWidget(QtWidgets.QLabel(''), 5)
self.bot_row.addWidget(self.map_reset_button, 2)
self.bot_row.addWidget(self.go2eq_button, 2)
self.bot_row.addWidget(self.save_map_button, 2)
self.bot_row.addWidget(self.subtract_mean_cb, 0)
self.bot_row.addWidget(self.status_label, 5)
def init_colormap(self):
self.init_lat_lon_dimensions()
self.init_lat_lon_grid()
def init_crtpyMap(self):
self.ax.add_feature(cf.LAND)
self.ax.add_feature(cf.OCEAN)
self.ax.add_feature(cf.COASTLINE, linewidth=1, edgecolor='gray')
self.ax.add_feature(cf.BORDERS, alpha=0.7)
self.ax.add_feature(cf.LAKES, alpha=0.7)
self.ax.add_feature(cf.RIVERS, linewidth=1)
# parallels and meridians
self.add_merid_paral()
self.canvas.fig.tight_layout()
def add_merid_paral(self):
self.gridlines = self.ax.gridlines(draw_labels=False, alpha=0.6, color='gray',
linewidth=self.linewidth / 2, zorder=7, crs=ccrs.PlateCarree())
def remove_merid_paral(self):
if len(self.gridlines.xline_artists):
self.gridlines.xline_artists[0].remove()
self.gridlines.yline_artists[0].remove()
def org_map_view(self):
self.ax.set_xlim(self.org_xlim[0], self.org_xlim[1])
self.ax.set_ylim(self.org_ylim[0], self.org_ylim[1])
# parallels and meridians
#self.remove_merid_paral()
#self.add_merid_paral()
self.canvas.draw_idle()
def go2eq(self):
if self.eventLoc:
lats, lons = self.eventLoc
self.ax.set_xlim(lons - 10, lons + 10)
self.ax.set_ylim(lats - 5, lats + 5)
# parallels and meridians
#self.remove_merid_paral()
#self.add_merid_paral()
self.canvas.draw_idle()
else:
self.status_label.setText('No event information available')
def connectSignals(self):
self.comboBox_phase.currentIndexChanged.connect(self._refresh_drawings)
self.comboBox_am.currentIndexChanged.connect(self._refresh_drawings)
self.cmaps_box.currentIndexChanged.connect(self._refresh_drawings)
self.annotations_box.stateChanged.connect(self.switch_annotations)
self.refresh_button.clicked.connect(self._refresh_drawings)
self.map_reset_button.clicked.connect(self.org_map_view)
self.go2eq_button.clicked.connect(self.go2eq)
self.save_map_button.clicked.connect(self.saveFigure)
self.subtract_mean_cb.stateChanged.connect(self.toggle_subtract_mean)
self.plotWidget.mpl_connect('motion_notify_event', self.mouse_moved)
self.plotWidget.mpl_connect('scroll_event', self.mouse_scroll)
self.plotWidget.mpl_connect('button_press_event', self.mouseLeftPress)
self.plotWidget.mpl_connect('button_release_event', self.mouseLeftRelease)
# set mouse events -----------------------------------------------------
def mouse_moved(self, event):
if not event.inaxes == self.ax:
return
else:
cont, inds = self.sc.contains(event)
lat = event.ydata
lon = event.xdata
text = f'Longitude: {lon:3.3f}, Latitude: {lat:3.3f}'
if cont:
indices = inds['ind']
text += ' | Station: ' if len(indices) == 1 else ' | Stations: '
text += ' - '.join([self._station_onpick_ids[index] for index in indices[:5]])
if len(indices) > 5:
text += '...'
self.status_label.setText(text)
def mouse_scroll(self, event):
if not event.inaxes == self.ax:
return
zoom = {'up': 1. / 2., 'down': 2.}
if event.button in zoom:
xlim = self.ax.get_xlim()
ylim = self.ax.get_ylim()
x, y = event.xdata, event.ydata
factor = zoom[event.button]
xdiff = (xlim[1] - xlim[0]) * factor
xl = x - 0.5 * xdiff
xr = x + 0.5 * xdiff
ydiff = (ylim[1] - ylim[0]) * factor
yb = y - 0.5 * ydiff
yt = y + 0.5 * ydiff
self.ax.set_xlim(xl, xr)
self.ax.set_ylim(yb, yt)
# parallels and meridians
#self.remove_merid_paral()
#self.add_merid_paral()
self.ax.figure.canvas.draw_idle()
def mouseLeftPress(self, event):
if not event.inaxes == self.ax:
return
self.map_x = event.xdata
self.map_y = event.ydata
self.map_xlim = self.ax.get_xlim()
self.map_ylim = self.ax.get_ylim()
def mouseLeftRelease(self, event):
if not event.inaxes == self.ax:
return
new_x = event.xdata
new_y = event.ydata
dx = new_x - self.map_x
dy = new_y - self.map_y
self.ax.set_xlim((self.map_xlim[0] - dx, self.map_xlim[1] - dx))
self.ax.set_ylim(self.map_ylim[0] - dy, self.map_ylim[1] - dy)
# parallels and meridians
#self.remove_merid_paral()
#self.add_merid_paral()
self.ax.figure.canvas.draw_idle()
def onpick(self, event):
btn_msg = {1: ' in selection. Aborted', 2: ' to delete a pick on. Aborted', 3: ' to display info.'}
ind = event.ind
button = event.mouseevent.button
msg_reason = None
if len(ind) > 1:
self._parent.update_status(f'Found more than one station {btn_msg.get(button)}')
return
if button == 1:
self.openPickDlg(ind)
elif button == 2:
self.deletePick(ind)
elif button == 3:
self.pickInfo(ind)
# data handling -----------------------------------------------------
def update_hybrids_dict(self):
self.hybrids_dict = self.picks_dict.copy()
for station, pick in self.autopicks_dict.items():
if not station in self.hybrids_dict.keys():
self.hybrids_dict[station] = pick
return self.hybrids_dict
def deletePick(self, ind):
self.update_hybrids_dict()
for index in ind:
network, station = self._station_onpick_ids[index].split('.')[:2]
try:
phase = self.comboBox_phase.currentText()
picks = self.current_picks_dict()[station]
pick = picks.get(phase)
if pick:
picker = pick['picker']
message = 'Deleted {} pick for phase {}, station {}.{} at timestamp {}'
message = message.format(picker, phase, network, station,
pick['mpp'])
if picker == 'auto':
del (self.autopicks_dict[station])
elif picker == 'manual':
del (self.picks_dict[station])
else:
raise TypeError('Unknown "picker" {}'.format(picker))
print(message)
pyl_mw = self._parent
pyl_mw.deletePicks(station, pick, type=picker)
pyl_mw.setDirty(True)
pyl_mw.update_status(message)
if self.auto_refresh_box.isChecked():
self._refresh_drawings()
else:
self.highlight_station(network, station, color='red')
pyl_mw.drawPicks(station)
pyl_mw.draw()
except Exception as e:
print('Could not delete pick for station {}.{}: {}'.format(network, station, e))
def pickInfo(self, ind):
self.update_hybrids_dict()
for index in ind:
network, station = self._station_onpick_ids[index].split('.')[:2]
dic = self.current_picks_dict()[station]
for phase, picks in dic.items():
# because of wadati...
if phase == 'SPt':
continue
print('{} - Pick:'.format(phase))
for key, info in picks.items():
print('{}: {}'.format(key, info))
def _from_dict(self, function, key):
return function(self.stations_dict.values(), key=lambda x: x[key])[key]
def get_min_from_stations(self, key):
return self._from_dict(min, key)
def get_max_from_stations(self, key):
return self._from_dict(max, key)
def current_picks_dict(self):
picktype = self.comboBox_am.currentText().split(' ')[0]
auto_manu = {'auto': self.autopicks_dict,
'manual': self.picks_dict,
'hybrid': self.hybrids_dict}
return auto_manu[picktype]
def init_stations(self):
self.stations_dict = self.metadata.get_all_coordinates()
self.latmin = self.get_min_from_stations('latitude')
self.lonmin = self.get_min_from_stations('longitude')
self.latmax = self.get_max_from_stations('latitude')
self.lonmax = self.get_max_from_stations('longitude')
def init_picks(self):
def get_picks(station_dict):
self.update_hybrids_dict()
picks = {}
uncertainties = {}
# selected phase
phase = self.comboBox_phase.currentText()
for st_id in station_dict.keys():
try:
station_name = st_id.split('.')[-1]
# current_picks_dict: auto or manual
station_picks = self.current_picks_dict().get(station_name)
if not station_picks:
continue
for phase_hint, pick in station_picks.items():
if identifyPhaseID(phase_hint) == phase:
break
else:
continue
if pick['picker'] == 'auto':
if not pick['spe']:
continue
picks[st_id] = pick['mpp']
uncertainties[st_id] = pick['spe']
except KeyError:
continue
except Exception as e:
print('Cannot display pick for station {}. Reason: {}'.format(station_name, e))
return picks, uncertainties
def get_picks_rel(picks, func=min):
picks_rel = {}
picks_utc = []
for pick in picks.values():
if type(pick) is UTCDateTime:
picks_utc.append(pick.timestamp)
if picks_utc:
self._reference_picktime = UTCDateTime(func(picks_utc))
for st_id, pick in picks.items():
if type(pick) is UTCDateTime:
pick -= self._reference_picktime
picks_rel[st_id] = pick
return picks_rel
def get_picks_rel_mean_corr(picks):
return get_picks_rel(picks, func=np.nanmean)
self.picks, self.uncertainties = get_picks(self.stations_dict)
self.picks_rel = get_picks_rel(self.picks)
self.picks_rel_mean_corrected = get_picks_rel_mean_corr(self.picks)
def toggle_subtract_mean(self):
if self.subtract_mean_cb.isChecked():
cmap = 'seismic'
else:
cmap = 'viridis'
self.cmaps_box.setCurrentIndex(self.cmaps_box.findText(cmap))
self._refresh_drawings()
def init_lat_lon_dimensions(self):
# init minimum and maximum lon and lat dimensions
self.londim = self.lonmax - self.lonmin
self.latdim = self.latmax - self.latmin
def init_lat_lon_grid(self, nstep=250):
# create a regular grid to display colormap
lataxis = np.linspace(self.latmin, self.latmax, nstep)
lonaxis = np.linspace(self.lonmin, self.lonmax, nstep)
self.longrid, self.latgrid = np.meshgrid(lonaxis, lataxis)
def init_picksgrid(self):
picks, uncertainties, lats, lons = self.get_picks_lat_lon()
try:
self.picksgrid_active = griddata((lats, lons), picks, (self.latgrid, self.longrid), method='linear')
except Exception as e:
self._warn('Could not init picksgrid: {}'.format(e))
def get_st_lat_lon_for_plot(self):
stations = []
latitudes = []
longitudes = []
for st_id, coords in self.stations_dict.items():
stations.append(st_id)
latitudes.append(coords['latitude'])
longitudes.append(coords['longitude'])
return stations, latitudes, longitudes
def get_picks_lat_lon(self):
picks_rel = self.picks_rel_mean_corrected if self.subtract_mean_cb.isChecked() else self.picks_rel
picks = []
uncertainties = []
latitudes = []
longitudes = []
for st_id, pick in picks_rel.items():
picks.append(pick)
uncertainties.append(self.uncertainties.get(st_id))
latitudes.append(self.stations_dict[st_id]['latitude'])
longitudes.append(self.stations_dict[st_id]['longitude'])
return picks, uncertainties, latitudes, longitudes
# plotting -----------------------------------------------------
def highlight_station(self, network, station, color):
stat_dict = self.stations_dict['{}.{}'.format(network, station)]
lat = stat_dict['latitude']
lon = stat_dict['longitude']
self.highlighted_stations.append(self.ax.scatter(lon, lat, s=self.pointsize, edgecolors=color,
facecolors='none', zorder=12,
transform=ccrs.PlateCarree(), label='deleted'))
def openPickDlg(self, ind):
try:
wfdata = self._parent.get_data().getWFData()
except AttributeError:
QtWidgets.QMessageBox.warning(
self, "PyLoT Warning",
"No waveform data found. Check if they were already loaded in Waveform plot tab."
)
return
wfdata_comp = self._parent.get_data().getAltWFdata()
for index in ind:
network, station = self._station_onpick_ids[index].split('.')[:2]
pyl_mw = self._parent
try:
wfdata = wfdata.select(station=station)
wfdata_comp = wfdata_comp.select(station=station)
if not wfdata:
self._warn('No data for station {}'.format(station))
return
pickDlg = PickDlg(self._parent, parameter=self.parameter,
data=wfdata.copy(), data_compare=wfdata_comp.copy(), network=network, station=station,
picks=self._parent.get_current_event().getPick(station),
autopicks=self._parent.get_current_event().getAutopick(station),
filteroptions=self._parent.filteroptions, metadata=self.metadata,
event=pyl_mw.get_current_event())
except Exception as e:
message = 'Could not generate Plot for station {st}.\n {er}'.format(st=station, er=e)
self._warn(message)
print(message, e)
print(traceback.format_exc())
return
try:
if pickDlg.exec_():
pyl_mw.setDirty(True)
pyl_mw.update_status('picks accepted ({0})'.format(station))
pyl_mw.addPicks(station, pickDlg.getPicks(picktype='manual'), type='manual')
pyl_mw.addPicks(station, pickDlg.getPicks(picktype='auto'), type='auto')
if self.auto_refresh_box.isChecked():
self._refresh_drawings()
else:
self.highlight_station(network, station, color='yellow')
pyl_mw.drawPicks(station)
pyl_mw.draw()
else:
pyl_mw.update_status('picks discarded ({0})'.format(station))
except Exception as e:
message = 'Could not save picks for station {st}.\n{er}'.format(st=station, er=e)
self._warn(message)
print(message, e)
print(traceback.format_exc())
def draw_contour_filled(self, nlevel=51):
if self.subtract_mean_cb.isChecked():
abs_max = self.get_residuals_absmax()
levels = np.linspace(-abs_max, abs_max, nlevel)
else:
levels = np.linspace(min(self.picks_rel.values()), max(self.picks_rel.values()), nlevel)
self.contourf = self.ax.contourf(self.longrid, self.latgrid, self.picksgrid_active, levels,
linewidths=self.linewidth * 5, transform=ccrs.PlateCarree(),
alpha=0.4, zorder=8, cmap=self.get_colormap())
def get_residuals_absmax(self):
return np.max(np.absolute(list(self.picks_rel_mean_corrected.values())))
def get_colormap(self):
return plt.get_cmap(self.cmaps_box.currentText())
def scatter_all_stations(self):
stations, lats, lons = self.get_st_lat_lon_for_plot()
self.sc = self.ax.scatter(lons, lats, s=self.pointsize * 3, facecolor='none', marker='.',
zorder=10, picker=True, edgecolor='0.5', label='Not Picked',
transform=ccrs.PlateCarree())
self.cid = self.plotWidget.mpl_connect('pick_event', self.onpick)
self._station_onpick_ids = stations
if self.eventLoc:
lats, lons = self.eventLoc
self.sc_event = self.ax.scatter(lons, lats, s=5 * self.pointsize, facecolor='red', zorder=11,
label='Event (might be outside map region)', marker='*',
edgecolors='black',
transform=ccrs.PlateCarree())
def scatter_picked_stations(self):
picks, uncertainties, lats, lons = self.get_picks_lat_lon()
if len(lons) < 1 and len(lats) < 1:
return
phase = self.comboBox_phase.currentText()
timeerrors = self.parameter['timeerrors{}'.format(phase)]
sizes = np.array([self.pointsize * (5. - get_quality_class(uncertainty, timeerrors))
for uncertainty in uncertainties])
cmap = self.get_colormap()
vmin = vmax = None
if self.subtract_mean_cb.isChecked():
vmin, vmax = -self.get_residuals_absmax(), self.get_residuals_absmax()
self.sc_picked = self.ax.scatter(lons, lats, s=sizes, edgecolors='white', cmap=cmap, vmin=vmin, vmax=vmax,
c=picks, zorder=11, label='Picked', transform=ccrs.PlateCarree())
def annotate_ax(self):
self.annotations = []
stations, ys, xs = self.get_st_lat_lon_for_plot()
# MP MP testing station highlighting if they have high impact on mean gradient of color map
# if self.picks_rel:
# self.test_gradient()
color_marked = {True: 'red',
False: 'white'}
for st, x, y in zip(stations, xs, ys):
if st in self.picks_rel:
color = 'white'
else:
color = 'lightgrey'
if st in self.marked_stations:
color = 'red'
self.annotations.append(
self.ax.annotate(f'{st}', xy=(x + 0.003, y + 0.003), fontsize=self.pointsize / 4.,
fontweight='semibold', color=color, alpha=0.8,
transform=ccrs.PlateCarree(), zorder=14,
path_effects=[PathEffects.withStroke(
linewidth=self.pointsize / 15., foreground='k')]))
self.legend = self.ax.legend(loc=1, framealpha=1)
self.legend.set_zorder(100)
self.legend.get_frame().set_facecolor((1, 1, 1, 0.95))
def add_cbar(self, label):
self.cbax_bg = inset_axes(self.ax, width="6%", height="75%", loc=5)
cbax = inset_axes(self.ax, width='2%', height='70%', loc=5)
cbar = self.ax.figure.colorbar(self.sc_picked, cax=cbax)
cbar.set_label(label)
cbax.yaxis.tick_left()
cbax.yaxis.set_label_position('left')
for spine in self.cbax_bg.spines.values():
spine.set_visible(False)
self.cbax_bg.yaxis.set_ticks([])
self.cbax_bg.xaxis.set_ticks([])
self.cbax_bg.patch.set_facecolor((1, 1, 1, 0.75))
return cbar
# handle drawings -----------------------------------------------------
def refresh_drawings(self, picks=None, autopicks=None):
self.picks_dict = picks
self.autopicks_dict = autopicks
self._refresh_drawings()
def _refresh_drawings(self):
self.remove_drawings()
self.init_stations()
self.init_colormap()
self.draw_everything()
def switch_annotations(self):
if self.annotations_box.isChecked():
self.annotate = True
else:
self.annotate = False
self._refresh_drawings()
def draw_everything(self):
picktype = self.comboBox_am.currentText()
picks_available = (self.picks_dict and picktype == 'manual') \
or (self.autopicks_dict and picktype == 'auto') \
or ((self.autopicks_dict or self.picks_dict) and picktype.startswith('hybrid'))
if picks_available:
self.init_picks()
if len(self.picks) >= 3:
self.init_picksgrid()
self.draw_contour_filled()
self.scatter_all_stations()
if picks_available:
self.scatter_picked_stations()
if hasattr(self, 'sc_picked'):
self.cbar = self.add_cbar(
label='Time relative to reference onset ({}) [s]'.format(self._reference_picktime)
)
self.comboBox_phase.setEnabled(True)
else:
self.comboBox_phase.setEnabled(False)
if self.annotate:
self.annotate_ax()
self.plotWidget.draw_idle()
def remove_drawings(self):
self.remove_annotations()
for item in reversed(self.highlighted_stations):
item.remove()
self.highlighted_stations.remove(item)
if hasattr(self, 'cbar'):
try:
self.cbar.remove()
self.cbax_bg.remove()
except Exception as e:
print('Warning: could not remove color bar or color bar bg.\nReason: {}'.format(e))
del (self.cbar, self.cbax_bg)
if hasattr(self, 'sc_picked'):
self.sc_picked.remove()
del self.sc_picked
if hasattr(self, 'sc_event'):
self.sc_event.remove()
del self.sc_event
if hasattr(self, 'contourf'):
self.remove_contourf()
del self.contourf
if hasattr(self, 'cid'):
self.plotWidget.mpl_disconnect(self.cid)
del self.cid
try:
self.sc.remove()
except Exception as e:
print('Warning: could not remove station scatter plot.\nReason: {}'.format(e))
try:
self.legend.remove()
except Exception as e:
print('Warning: could not remove legend. Reason: {}'.format(e))
self.plotWidget.draw_idle()
def remove_contourf(self):
for item in self.contourf.collections:
item.remove()
def remove_annotations(self):
for annotation in self.annotations:
annotation.remove()
self.annotations = []
def saveFigure(self):
if self.canvas.fig:
fd = QtWidgets.QFileDialog()
fname, filter = fd.getSaveFileName(self.parent(), filter='Images (*.png *.svg *.jpg)')
if not fname:
return
if not any([fname.endswith(item) for item in ['.png', '.svg', '.jpg']]):
fname += '.png'
self.canvas.fig.savefig(fname)
def _warn(self, message):
self.qmb = QtWidgets.QMessageBox(QtWidgets.QMessageBox.Icon.Warning, 'Warning', message)
self.qmb.show()
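Illustrative sketch (not part of PyLoT): the relative pick times computed in get_picks_rel above, reduced to a plain dictionary transform. Subtracting the earliest pick (func=min) yields the default reference; subtracting the mean (func=np.nanmean) yields the mean-corrected residuals used by the "Subtract mean" option.

```python
import numpy as np

def picks_relative(picks, func=min):
    """Map station -> absolute pick time [s] to times relative to func of all picks."""
    ref = func(list(picks.values()))  # reference time: min or mean of all picks
    return {st: t - ref for st, t in picks.items()}
```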


@ -2,12 +2,20 @@
# -*- coding: utf-8 -*-
try:
# noinspection PyUnresolvedReferences
from urllib2 import urlopen
except ImportError:
from urllib.request import urlopen
def checkurl(url='https://git.geophysik.ruhr-uni-bochum.de/marcel/pylot/'):
"""
check if URL is available
:param url: url
:type url: str
:return: available: True/False
:rtype: bool
"""
try:
urlopen(url, timeout=1)
return True


@ -2,14 +2,344 @@
# -*- coding: utf-8 -*-
import glob
import logging
import os
import sys
import numpy as np
from obspy import UTCDateTime, read_inventory, read
from obspy.io.xseed import Parser
from pylot.core.util.utils import key_for_set_value, find_in_list, \
gen_Pool
class Metadata(object):
def __init__(self, inventory=None, verbosity=1):
self.inventories = []
# saves read metadata objects (Parser/inventory) for a filename
self.inventory_files = {}
# saves filenames holding metadata for a seed_id
# seed id as key, path to file as value
self.seed_ids = {}
self.stations_dict = {}
# saves which metadata files are from obspy dmt
self.obspy_dmt_invs = []
if inventory:
# make sure that no accidental backslashes mess up the path
if isinstance(inventory, str):
inventory = inventory.replace('\\', '/')
inventory = os.path.abspath(inventory)
if os.path.isdir(inventory):
self.add_inventory(inventory)
if os.path.isfile(inventory):
self.add_inventory_file(inventory)
self.verbosity = verbosity
def __str__(self):
repr = 'PyLoT Metadata object including the following inventories:\n\n'
ntotal = len(self.inventories)
for index, inventory in enumerate(self.inventories):
if index < 2 or (ntotal - index) < 3:
repr += '{}\n'.format(inventory)
if ntotal > 4 and int(ntotal / 2) == index:
repr += '...\n'
if ntotal > 4:
repr += '\nTotal of {} inventories. Use Metadata.inventories to see all.'.format(ntotal)
return repr
def __repr__(self):
return self.__str__()
def add_inventory(self, path_to_inventory, obspy_dmt_inv=False):
"""
Add path to list of inventories.
:param path_to_inventory: Path to a folder
:type path_to_inventory: str
:return: None
"""
path_to_inventory = path_to_inventory.replace('\\', '/')
path_to_inventory = os.path.abspath(path_to_inventory)
assert (os.path.isdir(path_to_inventory)), '{} is no directory'.format(path_to_inventory)
if path_to_inventory not in self.inventories:
self.inventories.append(path_to_inventory)
if obspy_dmt_inv:
self.obspy_dmt_invs.append(path_to_inventory)
def add_inventory_file(self, path_to_inventory_file):
"""
Add the folder in which the file exists to the list of inventories.
:param path_to_inventory_file: full path including filename
:type path_to_inventory_file: str
:return: None
"""
assert (os.path.isfile(path_to_inventory_file)), '{} is no file'.format(path_to_inventory_file)
self.add_inventory(os.path.split(path_to_inventory_file)[0])
if path_to_inventory_file not in self.inventory_files.keys():
self.read_single_file(path_to_inventory_file)
def remove_all_inventories(self):
self.__init__()
def remove_inventory(self, path_to_inventory):
"""
Remove a path from inventories list. If path is not in inventories list, do nothing.
:param path_to_inventory: Path to a folder
"""
if path_to_inventory not in self.inventories:
print('Path {} not in inventories list.'.format(path_to_inventory))
return
self.inventories.remove(path_to_inventory)
for filename in list(self.inventory_files.keys()):
if filename.startswith(path_to_inventory):
del (self.inventory_files[filename])
for seed_id in list(self.seed_ids.keys()):
if self.seed_ids[seed_id].startswith(path_to_inventory):
del (self.seed_ids[seed_id])
# have to clean self.stations_dict as well
# this will be rebuilt for the next init of the arraymap anyway, so just reset it
self.stations_dict = {}
def clear_inventory(self):
for inv in self.obspy_dmt_invs:
self.remove_inventory(inv)
self.obspy_dmt_invs = []
def get_metadata(self, seed_id, time=None):
"""
Get metadata for seed id at time. When time is not specified, metadata for current time is fetched.
:param seed_id: Seed id such as BW.WETR..HHZ (Network.Station.Location.Channel)
:type seed_id: str
:param time: Time for which the metadata should be returned
:type time: UTCDateTime
:return: Dictionary with keys data and invtype.
data is a obspy.io.xseed.parser.Parser or an obspy.core.inventory.inventory.Inventory depending on the metadata
file.
invtype is a string denoting the type of the value stored under the data key. It can take the values 'dless',
'dseed', 'xml' or 'resp', according to the filetype of the metadata.
:rtype: dict
"""
# try most recent data if no time is specified
if not time:
time = UTCDateTime()
# get metadata for a specific seed_id, if not already read, try to read from inventories
if not seed_id in self.seed_ids.keys():
self._read_inventory_data(seed_id)
# if seed id is not found read all inventories and try to find it there
if not seed_id in self.seed_ids.keys():
if self.verbosity:
print('No data found for seed id {}. Trying to find it in all known inventories...'.format(seed_id))
self.read_all()
for inv_fname, metadata_dict in self.inventory_files.items():
# use get_coordinates to check for seed_id
try:
metadata_dict['data'].get_coordinates(seed_id, time)
self.seed_ids[seed_id] = inv_fname
if self.verbosity:
print('Found metadata for station {}!'.format(seed_id))
return metadata_dict
except Exception as e:
continue
print('Could not find metadata for station {}'.format(seed_id))
return None
fname = self.seed_ids[seed_id]
return self.inventory_files[fname]
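The lookup chain implemented above (cached seed id, then a targeted inventory read, then a full scan over every known inventory file) can be sketched without ObsPy using plain dictionaries. `lookup`, `cache` and `readers` below are illustrative names, not PyLoT API:

```python
def lookup(seed_id, cache, readers):
    """Return metadata for seed_id: first from the cache, then by scanning
    all readers; a successful scan remembers the file in the cache."""
    if seed_id in cache:
        return readers[cache[seed_id]]
    for fname, metadata in readers.items():
        if seed_id in metadata.get('channels', ()):
            cache[seed_id] = fname  # remember where this seed id was found
            return metadata
    return None


cache = {}
readers = {'a.xml': {'channels': ['BW.WETR..HHZ'], 'invtype': 'xml'}}
meta = lookup('BW.WETR..HHZ', cache, readers)
```

A second lookup of the same seed id then hits the cache and skips the scan entirely, which is the point of keeping `self.seed_ids` around.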
def read_all(self):
"""
Read all metadata files found in all inventories
"""
# iterate over all inventory folders
for inventory in self.inventories:
# iterate over all inventory files in the current folder
for inv_fname in os.listdir(inventory):
inv_fname = os.path.join(inventory, inv_fname)
if not self.read_single_file(inv_fname):
continue
def read_single_file(self, inv_fname):
"""
Try to read a single file as Parser/Inventory and add its dictionary to inventory files if reading succeeded.
:param inv_fname: path/filename of inventory file
:type inv_fname: str
:rtype: None
"""
# return if it was read already
if self.inventory_files.get(inv_fname, None):
return
try:
invtype, robj = self._read_metadata_file(inv_fname)
if robj is None:
return
except Exception as e:
print('Could not read file {}'.format(inv_fname))
return
self.inventory_files[inv_fname] = {'invtype': invtype,
'data': robj}
return True
def get_coordinates(self, seed_id, time=None):
"""
Get coordinates of given seed id.
:param seed_id: Seed id such as BW.WETR..HHZ (Network.Station.Location.Channel)
:type seed_id: str
:param time: Used when a station has data available at multiple time intervals
:type time: UTCDateTime
:return: dict containing position information of the station
:rtype: dict
"""
# try most recent data if no time is specified
if not time:
time = UTCDateTime()
metadata = self.get_metadata(seed_id, time)
if not metadata:
return
try:
return metadata['data'].get_coordinates(seed_id, time)
# no specific exception defined in obspy inventory
except Exception as e:
logging.warning(f'Could not get metadata for {seed_id}')
def get_all_coordinates(self):
def stat_info_from_parser(parser):
for station in parser.stations:
station_name = station[0].station_call_letters
network_name = station[0].network_code
if not station_name in self.stations_dict.keys():
st_id = '{}.{}'.format(network_name, station_name)
self.stations_dict[st_id] = {'latitude': station[0].latitude,
'longitude': station[0].longitude,
'elevation': station[0].elevation}
def stat_info_from_inventory(inventory):
for network in inventory.networks:
for station in network.stations:
station_name = station.code
network_name = network.code
if not station_name in self.stations_dict.keys():
st_id = '{}.{}'.format(network_name, station_name)
self.stations_dict[st_id] = {'latitude': station.latitude,
'longitude': station.longitude,
'elevation': station.elevation}
read_stat = {'xml': stat_info_from_inventory,
'dless': stat_info_from_parser}
self.read_all()
for item in self.inventory_files.values():
inventory = item['data']
invtype = item['invtype']
read_stat[invtype](inventory)
return self.stations_dict
def get_paz(self, seed_id, time):
"""
:param seed_id: Seed id such as BW.WETR..HHZ (Network.Station.Location.Channel)
:type seed_id: str
:param time: Used when a station has data available at multiple time intervals
:type time: UTCDateTime
:rtype: dict
"""
metadata = self.get_metadata(seed_id, time)
if not metadata:
return
if metadata['invtype'] in ['dless', 'dseed']:
return metadata['data'].get_paz(seed_id, time)
elif metadata['invtype'] in ['resp', 'xml']:
resp = metadata['data'].get_response(seed_id, time)
return resp.get_paz(seed_id)
def _read_inventory_data(self, seed_id):
for inventory in self.inventories:
if self._read_metadata_iterator(path_to_inventory=inventory, station_seed_id=seed_id):
return
def _read_metadata_iterator(self, path_to_inventory, station_seed_id):
"""
Search for metadata for a specific station iteratively.
"""
network, station, location, channel = station_seed_id.split('.')
# search for station seed id in filenames in inventory
fnames = glob.glob(os.path.join(path_to_inventory, '*' + station_seed_id + '*'))
if not fnames:
# search for station name in filename
fnames = glob.glob(os.path.join(path_to_inventory, '*' + station + '*'))
if not fnames:
if self.verbosity:
print('Could not find filenames matching station name, network name or seed id')
return
for fname in fnames:
if fname in self.inventory_files.keys():
if self.inventory_files[fname]:
# file already read
continue
invtype, robj = self._read_metadata_file(os.path.join(path_to_inventory, fname))
try:
robj.get_coordinates(station_seed_id)
self.inventory_files[fname] = {'invtype': invtype,
'data': robj}
if station_seed_id in self.seed_ids.keys():
print('WARNING: Overwriting metadata for station {}'.format(station_seed_id))
self.seed_ids[station_seed_id] = fname
return True
except Exception as e:
logging.warning(e)
continue
print('Could not find metadata for station_seed_id {} in path {}'.format(station_seed_id, path_to_inventory))
def _read_metadata_file(self, path_to_inventory_filename):
"""
function reading metadata files (either dataless seed, xml or resp)
:param path_to_inventory_filename:
:return: file type/ending, inventory object (Parser or Inventory)
:rtype: (str, obspy.io.xseed.Parser or obspy.core.inventory.inventory.Inventory)
"""
# functions used to read metadata for different file endings (or file types)
read_functions = {'dless': self._read_dless,
'dataless': self._read_dless,
'dseed': self._read_dless,
'xml': self._read_inventory_file,
'resp': self._read_inventory_file}
file_ending = path_to_inventory_filename.split('.')[-1]
if file_ending in read_functions.keys():
robj, exc = read_functions[file_ending](path_to_inventory_filename)
if exc is not None:
raise exc
return file_ending, robj
# in case file endings did not match the above keys, try and error
for file_type in ['dless', 'xml']:
try:
robj, exc = read_functions[file_type](path_to_inventory_filename)
if exc is None:
if self.verbosity:
print('Read file {} as {}'.format(path_to_inventory_filename, file_type))
return file_type, robj
except Exception as e:
if self.verbosity:
print('Could not read file {} as {}'.format(path_to_inventory_filename, file_type))
return None, None
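The dispatch in `_read_metadata_file` (pick a reader by file extension, otherwise fall back to trial and error over the known readers) can be illustrated with a self-contained sketch; the reader functions below are stand-ins for the actual Parser/`read_inventory` calls:

```python
def read_xml(fname):
    """Stand-in xml reader: succeeds only for filenames containing 'station'."""
    if 'station' in fname:
        return 'inventory-object'
    raise ValueError('not an xml file')


def read_dless(fname):
    """Stand-in dataless reader: always fails in this sketch."""
    raise ValueError('not a dataless file')


def read_by_extension(fname, read_functions):
    """Choose a reader by file extension; fall back to trying each reader."""
    ending = fname.split('.')[-1]
    if ending in read_functions:
        return ending, read_functions[ending](fname)
    for ftype, func in read_functions.items():
        try:
            return ftype, func(fname)
        except ValueError:
            continue
    return None, None


readers = {'xml': read_xml, 'dless': read_dless}
```

A file with an unknown ending such as `station.bin` is still read correctly via the fallback loop, mirroring the try-and-error branch above.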
@staticmethod
def _read_dless(path_to_inventory):
exc = None
try:
parser = Parser(path_to_inventory)
except Exception as e:
parser = None
exc = e
return parser, exc
@staticmethod
def _read_inventory_file(path_to_inventory):
exc = None
try:
inv = read_inventory(path_to_inventory)
except Exception as e:
inv = None
exc = e
return inv, exc
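One detail worth noting in the two static readers above: in Python 3 the name bound by `except Exception as exc:` is unbound again when the except block ends, so the caught exception must be copied into a longer-lived variable inside the block if it is to be returned afterwards. A minimal demonstration:

```python
def risky(fail):
    """Return (result, exception): the caught exception is copied to 'exc'
    inside the except block, because 'e' is unbound once the block exits."""
    exc = None
    result = None
    try:
        if fail:
            raise RuntimeError('boom')
        result = 'ok'
    except Exception as e:
        exc = e  # copy before 'e' is deleted at the end of the block
    return result, exc


res, err = risky(True)
```

Returning `e` directly after the except block would raise `NameError` on the failure path.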
def time_from_header(header):
@ -32,25 +362,25 @@ def check_time(datetime):
:type datetime: list
:return: returns True if Values are in supposed range, returns False otherwise
>>> check_time([1999, 01, 01, 23, 59, 59, 999000])
>>> check_time([1999, 1, 1, 23, 59, 59, 999000])
True
>>> check_time([1999, 01, 01, 23, 59, 60, 999000])
>>> check_time([1999, 1, 1, 23, 59, 60, 999000])
False
>>> check_time([1999, 01, 01, 23, 59, 59, 1000000])
>>> check_time([1999, 1, 1, 23, 59, 59, 1000000])
False
>>> check_time([1999, 01, 01, 23, 60, 59, 999000])
>>> check_time([1999, 1, 1, 23, 60, 59, 999000])
False
>>> check_time([1999, 01, 01, 23, 60, 59, 999000])
>>> check_time([1999, 1, 1, 23, 60, 59, 999000])
False
>>> check_time([1999, 01, 01, 24, 59, 59, 999000])
>>> check_time([1999, 1, 1, 24, 59, 59, 999000])
False
>>> check_time([1999, 01, 31, 23, 59, 59, 999000])
>>> check_time([1999, 1, 31, 23, 59, 59, 999000])
True
>>> check_time([1999, 02, 30, 23, 59, 59, 999000])
>>> check_time([1999, 2, 30, 23, 59, 59, 999000])
False
>>> check_time([1999, 02, 29, 23, 59, 59, 999000])
>>> check_time([1999, 2, 29, 23, 59, 59, 999000])
False
>>> check_time([2000, 02, 29, 23, 59, 59, 999000])
>>> check_time([2000, 2, 29, 23, 59, 59, 999000])
True
>>> check_time([2000, 13, 29, 23, 59, 59, 999000])
False
@ -62,6 +392,7 @@ def check_time(datetime):
return False
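The `check_time` doctests above can be satisfied by delegating all range and calendar checks to `datetime`. Since the function body itself is not part of this hunk, the following is a hedged reimplementation that merely matches the doctests:

```python
from datetime import datetime


def check_time(values):
    """Return True if [year, month, day, hour, minute, second, microsecond]
    forms a valid datetime, False otherwise (sketch matching the doctests;
    the original implementation is not shown in this hunk)."""
    try:
        datetime(*values)
        return True
    except ValueError:
        return False
```

`datetime` already rejects second 60, hour 24, month 13 and non-leap-year Feb 29, which covers every case exercised in the doctests.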
# TODO: change root to datapath
def get_file_list(root_dir):
"""
Function searches a directory for all *.gse files and returns them in a list.
@ -125,7 +456,7 @@ def evt_head_check(root_dir, out_dir=None):
"""
if not out_dir:
print('WARNING files are going to be overwritten!')
inp = str(raw_input('Continue? [y/N]'))
inp = str(input('Continue? [y/N]'))
if not inp == 'y':
sys.exit()
filelist = get_file_list(root_dir)
@ -192,16 +523,55 @@ def read_metadata(path_to_inventory):
robj = inv[invtype]
else:
print("Reading metadata information from inventory-xml file ...")
robj = inv[invtype]
robj = read_inventory(inv[invtype])
return invtype, robj
# idea to optimize read_metadata
# def read_metadata_new(path_to_inventory):
# metadata_objects = []
# # read multiple files from directory
# if os.path.isdir(path_to_inventory):
# fnames = os.listdir(path_to_inventory)
# # read single file
# elif os.path.isfile(path_to_inventory):
# fnames = [path_to_inventory]
# else:
# print("Neither dataless-SEED file, inventory-xml file nor "
# "RESP-file found!")
# print("!!WRONG CALCULATION OF SOURCE PARAMETERS!!")
# fnames = []
#
# for fname in fnames:
# path_to_inventory_filename = os.path.join(path_to_inventory, fname)
# try:
# ftype, robj = read_metadata_file(path_to_inventory_filename)
# metadata_objects.append((ftype, robj))
# except Exception as e:
# print('Could not read metadata file {} '
# 'because of the following Exception: {}'.format(path_to_inventory_filename, e))
# return metadata_objects
def restitute_trace(input_tuple):
tr, invtype, inobj, unit, force = input_tuple
def no_metadata(tr, seed_id):
print('no metadata file found '
'for trace {0}'.format(seed_id))
return tr, True
tr, metadata, unit, force = input_tuple
remove_trace = False
seed_id = tr.get_id()
mdata = metadata.get_metadata(seed_id, time=tr.stats.starttime)
if not mdata:
return no_metadata(tr, seed_id)
invtype = mdata['invtype']
inobj = mdata['data']
# check, whether this trace has already been corrected
if 'processing' in tr.stats.keys() \
and np.any(['remove' in p for p in tr.stats.processing]) \
@ -213,8 +583,7 @@ def restitute_trace(input_tuple):
if invtype == 'resp':
fresp = find_in_list(inobj, seed_id)
if not fresp:
raise IOError('no response file found '
'for trace {0}'.format(seed_id))
return no_metadata(tr, seed_id)
fname = fresp
seedresp = dict(filename=fname,
date=stime,
@ -225,20 +594,16 @@ def restitute_trace(input_tuple):
fname = Parser(find_in_list(inobj, seed_id))
else:
fname = inobj
seedresp = dict(filename=fname,
date=stime,
units=unit)
kwargs = dict(pre_filt=prefilt, seedresp=seedresp)
paz = fname.get_paz(tr.id, datetime=tr.stats.starttime)
kwargs = dict(pre_filt=prefilt, paz_remove=paz, remove_sensitivity=True)
elif invtype == 'xml':
invlist = inobj
if len(invlist) > 1:
finv = find_in_list(invlist, seed_id)
inventory = find_in_list(invlist, seed_id)
else:
finv = invlist[0]
inventory = read_inventory(finv, format='STATIONXML')
elif invtype == None:
print("No restitution possible, as there are no station-meta data available!")
return tr, True
inventory = invlist[0]
elif invtype is None:
return no_metadata(tr, seed_id)
else:
remove_trace = True
# apply restitution to data
@ -248,14 +613,20 @@ def restitute_trace(input_tuple):
if invtype in ['resp', 'dless']:
try:
tr.simulate(**kwargs)
print("Done")
except ValueError as e:
vmsg = '{0}'.format(e)
print(vmsg)
else:
try:
tr.attach_response(inventory)
tr.remove_response(output=unit,
pre_filt=prefilt)
except UnboundLocalError as e:
vmsg = '{0}'.format(e)
print(vmsg)
except ValueError as e:
msg0 = 'Response for {0} not found in Parser'.format(seed_id)
msg1 = 'evalresp failed to calculate response'
@ -269,32 +640,34 @@ def restitute_trace(input_tuple):
return tr, remove_trace
def restitute_data(data, invtype, inobj, unit='VEL', force=False, ncores=0):
def restitute_data(data, metadata, unit='VEL', force=False, ncores=0):
"""
takes a data stream and a path_to_inventory and returns the corrected
waveform data stream
:param data: seismic data stream
:param invtype: type of found metadata
:param inobj: either list of metadata files or `obspy.io.xseed.Parser`
object
:param unit: unit to correct for (default: 'VEL')
:param force: force restitution for already corrected traces (default:
False)
:return: corrected data stream
"""
restflag = list()
data = remove_underscores(data)
# data = remove_underscores(data)
if not data:
return
# loop over traces
input_tuples = []
for tr in data:
input_tuples.append((tr, invtype, inobj, unit, force))
input_tuples.append((tr, metadata, unit, force))
data.remove(tr)
if ncores == 0:
result = []
for input_tuple in input_tuples:
result.append(restitute_trace(input_tuple))
else:
pool = gen_Pool(ncores)
result = pool.map(restitute_trace, input_tuples)
result = pool.imap_unordered(restitute_trace, input_tuples)
pool.close()
for tr, remove_trace in result:
@ -305,10 +678,6 @@ def restitute_data(data, invtype, inobj, unit='VEL', force=False, ncores=0):
# better try restitution for smaller subsets of data (e.g. station by
# station)
# if len(restflag) > 0:
# restflag = bool(np.all(restflag))
# else:
# restflag = False
return data
@ -344,7 +713,7 @@ def get_prefilt(trace, tlow=(0.5, 0.9), thi=(5., 2.), verbosity=0):
fny = trace.stats.sampling_rate / 2
fc21 = fny - (fny * thi[0] / 100.)
fc22 = fny - (fny * thi[1] / 100.)
return (tlow[0], tlow[1], fc21, fc22)
return tlow[0], tlow[1], fc21, fc22
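The corner-frequency arithmetic in `get_prefilt` can be checked in isolation: the low corners are taken directly from `tlow`, while the high corners sit `thi` percent below the Nyquist frequency. `get_prefilt_corners` below mirrors that formula using only the sampling rate, so it is a sketch rather than the PyLoT function itself:

```python
def get_prefilt_corners(sampling_rate, tlow=(0.5, 0.9), thi=(5., 2.)):
    """Band-pass corner frequencies for restitution (mirrors get_prefilt)."""
    fny = sampling_rate / 2.0              # Nyquist frequency
    fc21 = fny - (fny * thi[0] / 100.0)    # e.g. 5 % below Nyquist
    fc22 = fny - (fny * thi[1] / 100.0)    # e.g. 2 % below Nyquist
    return tlow[0], tlow[1], fc21, fc22


corners = get_prefilt_corners(100.0)  # Nyquist = 50 Hz
```

For a 100 Hz trace the defaults give corners of 0.5, 0.9, 47.5 and 49.0 Hz.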
if __name__ == "__main__":
@ -9,13 +9,12 @@ Created on Wed Feb 26 12:31:25 2014
import os
import platform
from pylot.core.util.utils import readDefaultFilterInformation
from pylot.core.loc import hypo71
from pylot.core.loc import hypodd
from pylot.core.loc import hyposat
from pylot.core.loc import nll
from pylot.core.loc import velest
from pylot.core.util.utils import readDefaultFilterInformation
# determine system dependent path separator
system_name = platform.system()
@ -27,9 +26,7 @@ elif system_name == "Windows":
# suffix for phase name if not phase identified by last letter (P, p, etc.)
ALTSUFFIX = ['diff', 'n', 'g', '1', '2', '3']
FILTERDEFAULTS = readDefaultFilterInformation(os.path.join(os.path.expanduser('~'),
'.pylot',
'pylot.in'))
FILTERDEFAULTS = readDefaultFilterInformation()
TIMEERROR_DEFAULTS = os.path.join(os.path.expanduser('~'),
'.pylot',
@ -37,59 +34,8 @@ TIMEERROR_DEFAULTS = os.path.join(os.path.expanduser('~'),
OUTPUTFORMATS = {'.xml': 'QUAKEML',
'.cnv': 'CNV',
'.obs': 'NLLOC_OBS'}
'.obs': 'NLLOC_OBS',
'_focmec.in': 'FOCMEC',
'.pha': 'HYPODD'}
LOCTOOLS = dict(nll=nll, hyposat=hyposat, velest=velest, hypo71=hypo71, hypodd=hypodd)
class SetChannelComponents(object):
def __init__(self):
self.setDefaultCompPosition()
def setDefaultCompPosition(self):
# default component order
self.compPosition_Map = dict(Z=2, N=1, E=0)
self.compName_Map = {'3': 'Z',
'1': 'N',
'2': 'E'}
def _getCurrentPosition(self, component):
for key, value in self.compName_Map.items():
if value == component:
return key, value
errMsg = 'getCurrentPosition: Could not find former position of component {}.'.format(component)
raise ValueError(errMsg)
def _switch(self, component, component_alter):
# Without switching, multiple definitions of the same alter_comp are possible
old_alter_comp, _ = self._getCurrentPosition(component)
old_comp = self.compName_Map[component_alter]
if not old_alter_comp == component_alter and not old_comp == component:
self.compName_Map[old_alter_comp] = old_comp
print('switch: Automatically switched component {} to {}'.format(old_alter_comp, old_comp))
def setCompPosition(self, component_alter, component, switch=True):
component_alter = str(component_alter)
if not component_alter in self.compName_Map.keys():
errMsg = 'setCompPosition: Unrecognized alternative component {}. Expecting one of {}.'
raise ValueError(errMsg.format(component_alter, self.compName_Map.keys()))
if not component in self.compPosition_Map.keys():
errMsg = 'setCompPosition: Unrecognized target component {}. Expecting one of {}.'
raise ValueError(errMsg.format(component, self.compPosition_Map.keys()))
print('setCompPosition: set component {} to {}'.format(component_alter, component))
if switch:
self._switch(component, component_alter)
self.compName_Map[component_alter] = component
def getCompPosition(self, component):
return self._getCurrentPosition(component)[0]
def getPlotPosition(self, component):
component = str(component)
if component in self.compPosition_Map.keys():
return self.compPosition_Map[component]
elif component in self.compName_Map.keys():
return self.compPosition_Map[self.compName_Map[component]]
else:
errMsg = 'getCompPosition: Unrecognized component {}. Expecting one of {} or {}.'
raise ValueError(errMsg.format(component, self.compPosition_Map.keys(), self.compName_Map.keys()))
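The `_switch` logic of `SetChannelComponents` keeps the alias map one-to-one: assigning an alias to a component hands the component's previous alias over to the alias's previous component. A simplified, standalone illustration (function and variable names here are illustrative, not PyLoT API):

```python
def set_alias(alias_map, alias, component):
    """Assign alias -> component, swapping with the alias that previously
    pointed at component so no two aliases map to the same component
    (simplified version of SetChannelComponents._switch)."""
    old_alias = next(k for k, v in alias_map.items() if v == component)
    old_component = alias_map[alias]
    if old_alias != alias:
        alias_map[old_alias] = old_component  # swap to stay one-to-one
    alias_map[alias] = component
    return alias_map


aliases = {'3': 'Z', '1': 'N', '2': 'E'}
set_alias(aliases, '1', 'Z')  # '1' now means Z; '3' takes over N
```

Without the swap, setting `'1'` to `'Z'` would leave both `'3'` and `'1'` mapped to `'Z'`, which is exactly the multiple-definition case the comment in `_switch` warns about.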
@ -6,7 +6,9 @@ import os
from obspy import UTCDateTime
from obspy.core.event import Event as ObsPyEvent
from obspy.core.event import Origin, ResourceIdentifier
from pylot.core.io.phases import picks_from_picksdict
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT
class Event(ObsPyEvent):
@ -15,158 +17,310 @@ class Event(ObsPyEvent):
'''
def __init__(self, path):
"""
Initialize event by event directory
:param path: path to event directory
:type path: str
"""
self.pylot_id = path.split('/')[-1]
# initialize super class
super(Event, self).__init__(resource_id=ResourceIdentifier('smi:local/' + self.pylot_id))
self.path = path
self.database = path.split('/')[-2]
self.datapath = path.split('/')[-3]
self.rootpath = '/' + os.path.join(*path.split('/')[:-3])
self.datapath = os.path.split(path)[0] # path.split('/')[-3]
self.pylot_autopicks = {}
self.pylot_picks = {}
self.notes = ''
self._testEvent = False
self._refEvent = False
self.get_notes()
self.get_obspy_event_info()
self.dirty = False
def get_notes_path(self):
"""
Notes files is freely editable by the user and can contain notes regarding the event
:return: path to notes file
:rtype: str
"""
notesfile = os.path.join(self.path, 'notes.txt')
return notesfile
def get_obspy_event_info(self):
infile_pickle = os.path.join(self.path, 'info/event.pkl')
if not os.path.isfile(infile_pickle):
return
try:
event_dmt = qml_from_obspyDMT(infile_pickle)
except Exception as e:
print('Could not get obspy event info: {}'.format(e))
return
self.magnitudes = event_dmt.magnitudes
self.origins = event_dmt.origins
def get_notes(self):
"""
set self.note attribute to content of notes file
:return:
:rtype: None
"""
notesfile = self.get_notes_path()
if os.path.isfile(notesfile):
with open(notesfile) as infile:
path = str(infile.readlines()[0].split('\n')[0])
text = '[eventInfo: ' + path + ']'
lines = infile.readlines()
if not lines:
return
text = lines[0]
self.addNotes(text)
try:
datetime = UTCDateTime(path.split('/')[-1])
datetime = UTCDateTime(self.path.split('/')[-1])
origin = Origin(resource_id=self.resource_id, time=datetime, latitude=0, longitude=0, depth=0)
self.origins.append(origin)
except:
pass
def addNotes(self, notes):
"""
Set new notes string
:param notes: notes to save in Event object
:type notes: str
:return:
:rtype: None
"""
self.notes = str(notes)
def clearNotes(self):
"""
Clear event notes
:return:
:rtype: None
"""
self.notes = None
def isRefEvent(self):
"""
Return reference event flag
:return: True if event is reference event
:rtype: bool
"""
return self._refEvent
def isTestEvent(self):
"""
Return test event flag
:return: True if event is test event
:rtype: bool
"""
return self._testEvent
def setRefEvent(self, bool):
"""
Set reference event flag
:param bool: new reference event flag
:type bool: bool
:return:
:rtype: None
"""
self._refEvent = bool
if bool: self._testEvent = False
def setTestEvent(self, bool):
"""
Set test event flag
:param bool: new test event flag
:type bool: bool
:return:
:rtype: None
"""
self._testEvent = bool
if bool: self._refEvent = False
def clearObsPyPicks(self, picktype):
"""
Remove picks of a certain type from event
:param picktype: type of picks to remove, 'auto' or 'manual'
:type picktype: str
:return:
:rtype: None
"""
for index, pick in reversed(list(enumerate(self.picks))):
if picktype in str(pick.method_id):
self.picks.pop(index)
self.dirty = True
def addPicks(self, picks):
'''
"""
add pylot picks and overwrite existing ones
'''
:param picks: picks to add to event in pick dictionary
:type picks: dict
:return:
:rtype: None
"""
for station in picks:
self.pylot_picks[station] = picks[station]
# add ObsPy picks (clear old manual and copy all new manual from pylot)
self.clearObsPyPicks('manual')
self.picks += picks_from_picksdict(self.pylot_picks)
self.dirty = True
def addAutopicks(self, autopicks):
"""
Add automatic picks to event
:param autopicks: automatic picks to add to event
:return:
:rtype: None
"""
for station in autopicks:
self.pylot_autopicks[station] = autopicks[station]
# add ObsPy picks (clear old auto and copy all new auto from pylot)
self.clearObsPyPicks('auto')
self.picks += picks_from_picksdict(self.pylot_autopicks)
self.dirty = True
def setPick(self, station, pick):
"""
Set pick for a station
:param station: station name
:type station: str
:param pick:
:type pick: dict
:return:
:rtype:
"""
if pick:
self.pylot_picks[station] = pick
else:
try:
if station in self.pylot_picks:
self.pylot_picks.pop(station)
except Exception as e:
print('Could not remove pick {} from station {}: {}'.format(pick, station, e))
self.clearObsPyPicks('manual')
self.picks += picks_from_picksdict(self.pylot_picks)
self.dirty = True
def setPicks(self, picks):
'''
set pylot picks and delete and overwrite all existing
'''
"""
Set pylot picks and delete and overwrite all existing
:param picks: new picks
:type picks: dict
:return:
:rtype: None
"""
self.pylot_picks = picks
self.clearObsPyPicks('manual')
self.picks += picks_from_picksdict(self.pylot_picks)
self.dirty = True
def getPick(self, station):
"""
Get pick at station
:param station: station name
:type station: str
:return: pick dictionary of station
:rtype: dict
"""
if station in self.pylot_picks.keys():
return self.pylot_picks[station]
def getPicks(self):
"""
Return pylot picks
:return:
:rtype: dict
"""
return self.pylot_picks
def setAutopick(self, station, pick):
"""
Set autopylot pick at station
:param station: station name
:type station: str
:param pick:
:type pick: dict
:return:
:rtype: None
"""
if pick:
self.pylot_autopicks[station] = pick
else:
try:
if station in self.pylot_autopicks:
self.pylot_autopicks.pop(station)
except Exception as e:
print('Could not remove pick {} from station {}: {}'.format(pick, station, e))
self.clearObsPyPicks('auto')
self.picks += picks_from_picksdict(self.pylot_autopicks)
self.dirty = True
def setAutopicks(self, picks):
'''
set pylot picks and delete and overwrite all existing
'''
"""
Set autopylot picks and delete and overwrite all existing
:param picks: new picks
:type picks: dict
:return:
:rtype: None
"""
self.pylot_autopicks = picks
self.clearObsPyPicks('auto')
self.picks += picks_from_picksdict(self.pylot_autopicks)
self.dirty = True
def getAutopick(self, station):
"""
Return autopick at station
:param station: station name
:type station: str
:return: pick dictionary
:rtype: dict
"""
if station in self.pylot_autopicks.keys():
return self.pylot_autopicks[station]
def getAutopicks(self):
"""
Get autopicks of event
:return: dict containing automatic picks
:rtype: dict
"""
return self.pylot_autopicks
def save(self, filename):
'''
"""
Save PyLoT Event to a file.
Can be loaded by using event.load(filename).
'''
Uses pickling to save event object to file
:param filename: filename to save project under
:type filename: str
:return:
:rtype: None
"""
try:
import cPickle
import pickle
except ImportError:
import _pickle as cPickle
import _pickle as pickle
try:
outfile = open(filename, 'wb')
cPickle.dump(self, outfile, -1)
pickle.dump(self, outfile, -1)
self.dirty = False
except Exception as e:
print('Could not pickle PyLoT event. Reason: {}'.format(e))
@staticmethod
def load(filename):
'''
Load project from filename.
'''
"""
Load project from filename
:param filename: to load event file
:type filename: str
:return: event loaded from file
:rtype: Event
"""
try:
import cPickle
import pickle
except ImportError:
import _pickle as cPickle
import _pickle as pickle
infile = open(filename, 'rb')
event = cPickle.load(infile)
event = pickle.load(infile)
event.dirty = False
print('Loaded %s' % filename)
return event
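The pickle-based `save`/`load` round trip above can be exercised with a toy class; `MiniEvent` below is illustrative only and not part of PyLoT, but it follows the same pattern: dump with the highest protocol, clear the dirty flag on both save and load:

```python
import os
import pickle
import tempfile


class MiniEvent:
    """Toy stand-in for the PyLoT Event class (illustrative only)."""

    def __init__(self, pylot_id):
        self.pylot_id = pylot_id
        self.dirty = True

    def save(self, filename):
        with open(filename, 'wb') as outfile:
            pickle.dump(self, outfile, -1)  # -1: highest pickle protocol
        self.dirty = False

    @staticmethod
    def load(filename):
        with open(filename, 'rb') as infile:
            event = pickle.load(infile)
        event.dirty = False
        return event


path = os.path.join(tempfile.mkdtemp(), 'event.pkl')
MiniEvent('e0001').save(path)
restored = MiniEvent.load(path)
```

Under Python 3 the `cPickle` import always fails and the `_pickle` fallback is used; importing plain `pickle`, as the diff now does, is the idiomatic choice.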
@ -0,0 +1,97 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# small script that creates array maps for each event within a previously generated PyLoT project
import os
num_thread = "16"
os.environ["OMP_NUM_THREADS"] = num_thread
os.environ["OPENBLAS_NUM_THREADS"] = num_thread
os.environ["MKL_NUM_THREADS"] = num_thread
os.environ["VECLIB_MAXIMUM_THREADS"] = num_thread
os.environ["NUMEXPR_NUM_THREADS"] = num_thread
os.environ["NUMEXPR_MAX_THREADS"] = num_thread
import multiprocessing
import sys
import glob
import matplotlib
matplotlib.use('Qt5Agg')
sys.path.append(os.path.join('/'.join(sys.argv[0].split('/')[:-1]), '../../..'))
from PyLoT import Project
from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.array_map import Array_map
import matplotlib.pyplot as plt
import argparse
def main(project_file_path, manual=False, auto=True, file_format='png', f_ext='', ncores=None):
project = Project.load(project_file_path)
nEvents = len(project.eventlist)
input_list = []
print('\n')
for index, event in enumerate(project.eventlist):
kwargs = dict(project=project, event=event, nEvents=nEvents, index=index, manual=manual, auto=auto,
file_format=file_format, f_ext=f_ext)
input_list.append(kwargs)
if ncores == 1:
for item in input_list:
array_map_worker(item)
else:
pool = multiprocessing.Pool(ncores, maxtasksperchild=1000)
pool.map(array_map_worker, input_list)
pool.close()
pool.join()
def array_map_worker(input_dict):
event = input_dict['event']
eventdir = event.path
print('Working on event: {} ({}/{})'.format(eventdir, input_dict['index'] + 1, input_dict['nEvents']))
xml_picks = glob.glob(os.path.join(eventdir, f'*{input_dict["f_ext"]}.xml'))
if not xml_picks:
print('Event {} does not have any picks associated with event file extension {}'.format(eventdir,
input_dict['f_ext']))
return
# check for picks
manualpicks = event.getPicks()
autopicks = event.getAutopicks()
# prepare event and get metadata
metadata_path = os.path.join(eventdir, 'resp')
metadata = None
for pick_type in ['manual', 'auto']:
if pick_type == 'manual' and (not manualpicks or not input_dict['manual']):
continue
if pick_type == 'auto' and (not autopicks or not input_dict['auto']):
continue
if not metadata:
metadata = Metadata(inventory=metadata_path, verbosity=0)
# create figure to plot on
fig, ax = plt.subplots(figsize=(15, 9))
# create array map object (avoid shadowing the builtin 'map')
array_map = Array_map(None, metadata, parameter=input_dict['project'].parameter, axes=ax,
width=2.13e6, height=1.2e6, pointsize=25., linewidth=1.0)
# set combobox to auto/manual to plot correct pick type
array_map.comboBox_am.setCurrentIndex(array_map.comboBox_am.findText(pick_type))
# add picks to map and save file
array_map.refresh_drawings(manualpicks, autopicks)
fpath_out = os.path.join(eventdir, 'array_map_{}_{}{}.{}'.format(event.pylot_id, pick_type, input_dict['f_ext'],
input_dict['file_format']))
fig.savefig(fpath_out, dpi=300.)
print('Wrote file: {}'.format(fpath_out))
if __name__ == '__main__':
cl = argparse.ArgumentParser()
cl.add_argument('--dataroot', help='Directory containing the PyLoT .plp file', type=str)
cl.add_argument('--infiles', help='.plp files to use', nargs='+')
cl.add_argument('--ncores', help='Specify number of parallel processes', type=int, default=1)
args = cl.parse_args()
for infile in args.infiles:
main(os.path.join(args.dataroot, infile), f_ext='_correlated_0.03-0.1', ncores=args.ncores)
pylot/core/util/gui.py Normal file
@ -0,0 +1,104 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from functools import lru_cache
try:
import pyqtgraph as pg
except Exception as e:
pg = None
print('Warning: Could not import module pyqtgraph.')
try:
from PySide2 import QtCore
except Exception as e:
QtCore = None
print('Warning: Could not import module QtCore.')
from pylot.core.util.utils import pick_color
def pick_linestyle_pg(picktype, key):
"""
Get Qt line style by picktype and pick parameter (earliest/latest possible pick, symmetric picking error or
most probable pick)
:param picktype: 'manual' or 'automatic'
:type picktype: str
:param key: which pick parameter should be plotted, 'mpp', 'epp', 'lpp' or 'spe'
:type key: str
:return: Qt line style parameters
:rtype:
"""
linestyles_manu = {'mpp': (QtCore.Qt.SolidLine, 2),
'epp': (QtCore.Qt.DashLine, 1),
'lpp': (QtCore.Qt.DashLine, 1),
'spe': (QtCore.Qt.DashLine, 1)}
linestyles_auto = {'mpp': (QtCore.Qt.DotLine, 2),
'epp': (QtCore.Qt.DashDotLine, 1),
'lpp': (QtCore.Qt.DashDotLine, 1),
'spe': (QtCore.Qt.DashDotLine, 1)}
linestyles = {'manual': linestyles_manu,
'auto': linestyles_auto}
return linestyles[picktype][key]
def which(program, parameter):
"""
takes a program name and returns the full path to the executable or None
modified after: http://stackoverflow.com/questions/377017/test-if-executable-exists-in-python
:param program: name of the desired external program
:type program: str
:return: full path of the executable file
:rtype: str
"""
try:
from PySide2.QtCore import QSettings
settings = QSettings()
for key in settings.allKeys():
if 'binPath' in key:
os.environ['PATH'] += ':{0}'.format(settings.value(key))
nllocpath = ":" + parameter.get('nllocbin')
os.environ['PATH'] += nllocpath
except Exception as e:
print(e)
def is_exe(fpath):
return os.path.exists(fpath) and os.access(fpath, os.X_OK)
def ext_candidates(fpath):
yield fpath
for ext in os.environ.get("PATHEXT", "").split(os.pathsep):
yield fpath + ext
fpath, fname = os.path.split(program)
if fpath:
if is_exe(program):
return program
else:
for path in os.environ["PATH"].split(os.pathsep):
exe_file = os.path.join(path, program)
for candidate in ext_candidates(exe_file):
if is_exe(candidate):
return candidate
return None
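The PATH-scan part of `which` can be exercised on its own, without the Qt settings lookup. A minimal stdlib sketch of the same candidate search (the function name `find_executable` and the optional `path` argument are additions for illustration):

```python
import os


def find_executable(program, path=None):
    """Minimal re-implementation of the PATH scan in ``which`` above:
    return the full path of the first executable match, honouring
    PATHEXT suffixes on Windows, else None."""

    def is_exe(fpath):
        return os.path.exists(fpath) and os.access(fpath, os.X_OK)

    def ext_candidates(fpath):
        # try the bare name first, then any PATHEXT suffix (Windows)
        yield fpath
        for ext in os.environ.get("PATHEXT", "").split(os.pathsep):
            if ext:
                yield fpath + ext

    fpath, _ = os.path.split(program)
    if fpath:
        # program was given with a directory component: check it directly
        return program if is_exe(program) else None
    for directory in (path or os.environ.get("PATH", "")).split(os.pathsep):
        for candidate in ext_candidates(os.path.join(directory, program)):
            if is_exe(candidate):
                return candidate
    return None
```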
@lru_cache(maxsize=128)
def make_pen(picktype, phase, key, quality):
"""
Make PyQtGraph.QPen
:param picktype: 'manual' or 'automatic'
:type picktype: str
:param phase: 'P' or 'S'
:type phase: str
:param key: 'mpp', 'epp', 'lpp' or 'spe', (earliest/latest possible pick, symmetric picking error or
most probable pick)
:type key: str
:param quality: quality class of pick, decides color modifier
:type quality: int
:return: PyQtGraph QPen
:rtype: `~QPen`
"""
if pg:
rgba = pick_color(picktype, phase, quality)
linestyle, width = pick_linestyle_pg(picktype, key)
pen = pg.mkPen(rgba, width=width, style=linestyle)
return pen


@ -1,376 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np
import obspy
from PySide import QtGui
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar
from mpl_toolkits.basemap import Basemap
from pylot.core.util.widgets import PickDlg
from scipy.interpolate import griddata
plt.interactive(False)
class map_projection(QtGui.QWidget):
def __init__(self, parent, figure=None):
'''
:param picked: can be False, 'auto' or 'manual'
:type picked: str
'''
QtGui.QWidget.__init__(self)
self._parent = parent
self.metadata = parent.metadata
self.parser = parent.metadata[1]
self.picks = None
self.picks_dict = None
self.eventLoc = None
self.figure = figure
self.init_graphics()
self.init_stations()
self.init_basemap(resolution='l')
self.init_map()
# self.show()
def init_map(self):
self.init_lat_lon_dimensions()
self.init_lat_lon_grid()
self.init_x_y_dimensions()
self.connectSignals()
self.draw_everything()
def onpick(self, event):
ind = event.ind
button = event.mouseevent.button
if len(ind) == 0 or button != 1:
return
data = self._parent.get_data().getWFData()
for index in ind:
station = str(self.station_names[index].split('.')[-1])
try:
pickDlg = PickDlg(self, parameter=self._parent._inputs,
data=data.select(station=station),
station=station,
picks=self._parent.get_current_event().getPick(station),
autopicks=self._parent.get_current_event().getAutopick(station))
except Exception as e:
message = 'Could not generate Plot for station {st}.\n {er}'.format(st=station, er=e)
self._warn(message)
print(message, e)
return
pyl_mw = self._parent
try:
if pickDlg.exec_():
pyl_mw.setDirty(True)
pyl_mw.update_status('picks accepted ({0})'.format(station))
replot = pyl_mw.get_current_event().setPick(station, pickDlg.getPicks())
self._refresh_drawings()
if replot:
pyl_mw.plotWaveformData()
pyl_mw.drawPicks()
pyl_mw.draw()
else:
pyl_mw.drawPicks(station)
pyl_mw.draw()
else:
pyl_mw.update_status('picks discarded ({0})'.format(station))
except Exception as e:
message = 'Could not save picks for station {st}.\n{er}'.format(st=station, er=e)
self._warn(message)
print(message, e)
def connectSignals(self):
self.comboBox_phase.currentIndexChanged.connect(self._refresh_drawings)
self.zoom_id = self.basemap.ax.figure.canvas.mpl_connect('scroll_event', self.zoom)
def init_graphics(self):
if not self.figure:
if not hasattr(self._parent, 'am_figure'):
self.figure = plt.figure()
self.toolbar = NavigationToolbar(self.figure.canvas, self)
else:
self.figure = self._parent.am_figure
self.toolbar = self._parent.am_toolbar
self.main_ax = self.figure.add_subplot(111)
self.canvas = self.figure.canvas
self.main_box = QtGui.QVBoxLayout()
self.setLayout(self.main_box)
self.top_row = QtGui.QHBoxLayout()
self.main_box.addLayout(self.top_row)
self.comboBox_phase = QtGui.QComboBox()
self.comboBox_phase.insertItem(0, 'P')
self.comboBox_phase.insertItem(1, 'S')
self.comboBox_am = QtGui.QComboBox()
self.comboBox_am.insertItem(0, 'auto')
self.comboBox_am.insertItem(1, 'manual')
self.top_row.addWidget(QtGui.QLabel('Select a phase: '))
self.top_row.addWidget(self.comboBox_phase)
self.top_row.setStretch(1, 1) # set stretch of item 1 to 1
self.main_box.addWidget(self.canvas)
self.main_box.addWidget(self.toolbar)
def init_stations(self):
def get_station_names_lat_lon(parser):
station_names = []
lat = []
lon = []
for station in parser.stations:
station_name = station[0].station_call_letters
network = station[0].network_code
if not station_name in station_names:
station_names.append(network + '.' + station_name)
lat.append(station[0].latitude)
lon.append(station[0].longitude)
return station_names, lat, lon
station_names, lat, lon = get_station_names_lat_lon(self.parser)
self.station_names = station_names
self.lat = lat
self.lon = lon
def init_picks(self):
phase = self.comboBox_phase.currentText()
def get_picks(station_names):
picks = []
for station in station_names:
try:
station = station.split('.')[-1]
picks.append(self.picks_dict[station][phase]['mpp'])
except (KeyError, TypeError):
picks.append(np.nan)
return picks
def get_picks_rel(picks):
picks_rel = []
picks_utc = []
for pick in picks:
if type(pick) is obspy.core.utcdatetime.UTCDateTime:
picks_utc.append(pick)
minp = min(picks_utc)
for pick in picks:
if type(pick) is obspy.core.utcdatetime.UTCDateTime:
pick -= minp
picks_rel.append(pick)
return picks_rel
self.picks = get_picks(self.station_names)
self.picks_rel = get_picks_rel(self.picks)
def init_picks_active(self):
def remove_nan_picks(picks):
picks_no_nan = []
for pick in picks:
if not np.isnan(pick):
picks_no_nan.append(pick)
return picks_no_nan
self.picks_no_nan = remove_nan_picks(self.picks_rel)
def init_stations_active(self):
def remove_nan_lat_lon(picks, lat, lon):
lat_no_nan = []
lon_no_nan = []
for index, pick in enumerate(picks):
if not np.isnan(pick):
lat_no_nan.append(lat[index])
lon_no_nan.append(lon[index])
return lat_no_nan, lon_no_nan
self.lat_no_nan, self.lon_no_nan = remove_nan_lat_lon(self.picks_rel, self.lat, self.lon)
def init_lat_lon_dimensions(self):
def get_lon_lat_dim(lon, lat):
londim = max(lon) - min(lon)
latdim = max(lat) - min(lat)
return londim, latdim
self.londim, self.latdim = get_lon_lat_dim(self.lon, self.lat)
def init_x_y_dimensions(self):
def get_x_y_dim(x, y):
xdim = max(x) - min(x)
ydim = max(y) - min(y)
return xdim, ydim
self.x, self.y = self.basemap(self.lon, self.lat)
self.xdim, self.ydim = get_x_y_dim(self.x, self.y)
def init_basemap(self, resolution='l'):
# basemap = Basemap(projection=projection, resolution = resolution, ax=self.main_ax)
basemap = Basemap(projection='lcc', resolution=resolution, ax=self.main_ax,
width=5e6, height=2e6,
lat_0=(min(self.lat) + max(self.lat)) / 2.,
lon_0=(min(self.lon) + max(self.lon)) / 2.)
# basemap.fillcontinents(color=None, lake_color='aqua',zorder=1)
basemap.drawmapboundary(zorder=2) # fill_color='darkblue')
basemap.shadedrelief(zorder=3)
basemap.drawcountries(zorder=4)
basemap.drawstates(zorder=5)
basemap.drawcoastlines(zorder=6)
self.basemap = basemap
self.figure.tight_layout()
def init_lat_lon_grid(self):
def get_lat_lon_axis(lat, lon):
steplat = (max(lat) - min(lat)) / 250
steplon = (max(lon) - min(lon)) / 250
lataxis = np.arange(min(lat), max(lat), steplat)
lonaxis = np.arange(min(lon), max(lon), steplon)
return lataxis, lonaxis
def get_lat_lon_grid(lataxis, lonaxis):
longrid, latgrid = np.meshgrid(lonaxis, lataxis)
return latgrid, longrid
self.lataxis, self.lonaxis = get_lat_lon_axis(self.lat, self.lon)
self.latgrid, self.longrid = get_lat_lon_grid(self.lataxis, self.lonaxis)
def init_picksgrid(self):
self.picksgrid_no_nan = griddata((self.lat_no_nan, self.lon_no_nan),
self.picks_no_nan, (self.latgrid, self.longrid),
method='linear') ##################
def draw_contour_filled(self, nlevel=50):
levels = np.linspace(min(self.picks_no_nan), max(self.picks_no_nan), nlevel)
self.contourf = self.basemap.contourf(self.longrid, self.latgrid, self.picksgrid_no_nan,
levels, latlon=True, zorder=9, alpha=0.5)
def scatter_all_stations(self):
self.sc = self.basemap.scatter(self.lon, self.lat, s=50, facecolor='none', latlon=True,
zorder=10, picker=True, edgecolor='m', label='Not Picked')
self.cid = self.canvas.mpl_connect('pick_event', self.onpick)
if self.eventLoc:
lat, lon = self.eventLoc
self.sc_event = self.basemap.scatter(lon, lat, s=100, facecolor='red',
latlon=True, zorder=11, label='Event (might be outside map region)')
def scatter_picked_stations(self):
lon = self.lon_no_nan
lat = self.lat_no_nan
# workaround because of an issue with latlon transformation of arrays with len <3
if len(lon) <= 2 and len(lat) <= 2:
self.sc_picked = self.basemap.scatter(lon[0], lat[0], s=50, facecolor='white',
c=self.picks_no_nan[0], latlon=True, zorder=11, label='Picked')
if len(lon) == 2 and len(lat) == 2:
self.sc_picked = self.basemap.scatter(lon[1], lat[1], s=50, facecolor='white',
c=self.picks_no_nan[1], latlon=True, zorder=11)
else:
self.sc_picked = self.basemap.scatter(lon, lat, s=50, facecolor='white',
c=self.picks_no_nan, latlon=True, zorder=11, label='Picked')
def annotate_ax(self):
self.annotations = []
for index, name in enumerate(self.station_names):
self.annotations.append(self.main_ax.annotate(' %s' % name, xy=(self.x[index], self.y[index]),
fontsize='x-small', color='white', zorder=12))
self.legend = self.main_ax.legend(loc=1)
def add_cbar(self, label):
cbar = self.main_ax.figure.colorbar(self.sc_picked, fraction=0.025)
cbar.set_label(label)
return cbar
def refresh_drawings(self, picks=None):
self.picks_dict = picks
self._refresh_drawings()
def _refresh_drawings(self):
self.remove_drawings()
self.draw_everything()
def draw_everything(self):
if self.picks_dict:
self.init_picks()
self.init_picks_active()
self.init_stations_active()
if len(self.picks_no_nan) >= 3:
self.init_picksgrid()
self.draw_contour_filled()
self.scatter_all_stations()
if self.picks_dict:
self.scatter_picked_stations()
self.cbar = self.add_cbar(label='Time relative to first onset [s]')
self.comboBox_phase.setEnabled(True)
else:
self.comboBox_phase.setEnabled(False)
self.annotate_ax()
self.canvas.draw()
def remove_drawings(self):
if hasattr(self, 'sc_picked'):
self.sc_picked.remove()
del (self.sc_picked)
if hasattr(self, 'sc_event'):
self.sc_event.remove()
del (self.sc_event)
if hasattr(self, 'cbar'):
self.cbar.remove()
del (self.cbar)
if hasattr(self, 'contourf'):
self.remove_contourf()
del (self.contourf)
if hasattr(self, 'cid'):
self.canvas.mpl_disconnect(self.cid)
del (self.cid)
try:
self.sc.remove()
except Exception as e:
print('Warning: could not remove station scatter plot.\nReason: {}'.format(e))
try:
self.legend.remove()
except Exception as e:
print('Warning: could not remove legend. Reason: {}'.format(e))
self.canvas.draw()
def remove_contourf(self):
for item in self.contourf.collections:
item.remove()
def remove_annotations(self):
for annotation in self.annotations:
annotation.remove()
def zoom(self, event):
map = self.basemap
xlim = map.ax.get_xlim()
ylim = map.ax.get_ylim()
x, y = event.xdata, event.ydata
zoom = {'up': 1. / 2.,
'down': 2.}
if not event.xdata or not event.ydata:
return
if event.button in zoom:
factor = zoom[event.button]
xdiff = (xlim[1] - xlim[0]) * factor
xl = x - 0.5 * xdiff
xr = x + 0.5 * xdiff
ydiff = (ylim[1] - ylim[0]) * factor
yb = y - 0.5 * ydiff
yt = y + 0.5 * ydiff
if xl < map.xmin or yb < map.ymin or xr > map.xmax or yt > map.ymax:
xl, xr = map.xmin, map.xmax
yb, yt = map.ymin, map.ymax
map.ax.set_xlim(xl, xr)
map.ax.set_ylim(yb, yt)
map.ax.figure.canvas.draw()
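The scroll-zoom handler above recenters the axis limits around the cursor and falls back to the full map extent when the new window would leave it. A pure-Python sketch of just that arithmetic (the function name and tuple interface are illustrative):

```python
def zoom_limits(xlim, ylim, x, y, direction, bounds):
    """Sketch of the zoom arithmetic in ``map_projection.zoom``: halve
    (scroll 'up') or double (scroll 'down') the axis ranges around the
    cursor position (x, y); if the new window would exceed the map
    bounds (xmin, xmax, ymin, ymax), snap back to the full extent."""
    factor = {'up': 0.5, 'down': 2.0}[direction]
    xmin, xmax, ymin, ymax = bounds
    xdiff = (xlim[1] - xlim[0]) * factor
    ydiff = (ylim[1] - ylim[0]) * factor
    xl, xr = x - 0.5 * xdiff, x + 0.5 * xdiff
    yb, yt = y - 0.5 * ydiff, y + 0.5 * ydiff
    if xl < xmin or yb < ymin or xr > xmax or yt > ymax:
        return (xmin, xmax), (ymin, ymax)
    return (xl, xr), (yb, yt)
```

Zooming in halves the window around the cursor; zooming out past the map edge restores the full extent rather than panning off the map.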
def _warn(self, message):
self.qmb = QtGui.QMessageBox(QtGui.QMessageBox.Icon.Warning,
'Warning', message)
self.qmb.show()


@ -0,0 +1,59 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
from obspy import UTCDateTime
def check_obspydmt_structure(path):
'''
Check path for obspyDMT event structure.
:param path:
:return:
'''
ev_info = os.path.join(path, 'EVENTS-INFO')
if os.path.isdir(ev_info):
if os.path.isfile(os.path.join(ev_info, 'logger_command.txt')):
return True
return False
def check_obspydmt_eventfolder(folder):
try:
time = folder.split('.')[0]
time = time.replace('_', 'T')
time = UTCDateTime(time)
return True, time
except Exception as e:
return False, e
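`check_obspydmt_eventfolder` relies on obspyDMT event folders encoding the origin time in their name, with `_` standing in for the ISO `T` separator. A stdlib sketch of the same parsing, using `datetime.strptime` in place of obspy's `UTCDateTime` (the folder name `20170501_141822.a` is an assumed example, matching the pickfile naming mentioned elsewhere in this changeset):

```python
from datetime import datetime


def parse_event_folder(folder):
    """Stdlib sketch of ``check_obspydmt_eventfolder``: the part of the
    folder name before the first '.' encodes the origin time; '_' is
    replaced by 'T' (UTCDateTime in the original accepts the resulting
    string directly)."""
    try:
        time = folder.split('.')[0].replace('_', 'T')
        return True, datetime.strptime(time, '%Y%m%dT%H%M%S')
    except ValueError as e:
        return False, e
```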
def qml_from_obspyDMT(path):
import pickle
from obspy.core.event import Event, Magnitude, Origin
if not os.path.exists(path):
raise IOError('Could not find Event at {}'.format(path))
with open(path, 'rb') as infile:
event_dmt = pickle.load(infile) # , fix_imports=True)
event_dmt['origin_id'].id = str(event_dmt['origin_id'].id)
ev = Event(resource_id=event_dmt['event_id'])
# small bugfix "unhashable type: 'newstr' "
event_dmt['origin_id'].id = str(event_dmt['origin_id'].id)
origin = Origin(resource_id=event_dmt['origin_id'],
time=event_dmt['datetime'],
longitude=event_dmt['longitude'],
latitude=event_dmt['latitude'],
depth=event_dmt['depth'])
mag = Magnitude(mag=event_dmt['magnitude'],
magnitude_type=event_dmt['magnitude_type'],
origin_id=event_dmt['origin_id'])
ev.magnitudes.append(mag)
ev.origins.append(origin)
return ev


@ -5,6 +5,7 @@ import warnings
import numpy as np
from obspy import UTCDateTime
from pylot.core.util.utils import fit_curve, clims
from pylot.core.util.version import get_git_version as _getVersionString


@ -6,7 +6,7 @@ Created on Wed Jan 26 17:47:25 2015
@author: sebastianw
"""
from pylot.core.io.data import SeiscompDataStructure, PilotDataStructure
from pylot.core.io.data import SeiscompDataStructure, PilotDataStructure, ObspyDMTdataStructure
DATASTRUCTURE = {'PILOT': PilotDataStructure, 'SeisComP': SeiscompDataStructure,
None: None}
'obspyDMT': ObspyDMTdataStructure, None: PilotDataStructure}


@ -1,43 +1,11 @@
# -*- coding: utf-8 -*-
import sys, os, traceback
import multiprocessing
from PySide.QtCore import QThread, Signal, Qt, Slot, QRunnable, QObject
from PySide.QtGui import QDialog, QProgressBar, QLabel, QHBoxLayout, QPushButton
import os
import sys
import traceback
class AutoPickThread(QThread):
message = Signal(str)
finished = Signal()
def __init__(self, parent, func, infile, fnames, eventid, savepath):
super(AutoPickThread, self).__init__()
self.setParent(parent)
self.func = func
self.infile = infile
self.fnames = fnames
self.eventid = eventid
self.savepath = savepath
def run(self):
sys.stdout = self
picks = self.func(None, None, self.infile, self.fnames, self.eventid, self.savepath)
print("Autopicking finished!\n")
try:
for station in picks:
self.parent().addPicks(station, picks[station], type='auto')
except AttributeError:
print(picks)
sys.stdout = sys.__stdout__
self.finished.emit()
def write(self, text):
self.message.emit(text)
def flush(self):
pass
from PySide2.QtCore import QThread, Signal, Qt, Slot, QRunnable, QObject
from PySide2.QtWidgets import QDialog, QProgressBar, QLabel, QHBoxLayout, QPushButton
class Thread(QThread):
@ -54,12 +22,14 @@ class Thread(QThread):
self.abortButton = abortButton
self.finished.connect(self.hideProgressbar)
self.showProgressbar()
self.old_stdout = None
def run(self):
if self.redirect_stdout:
self.old_stdout = sys.stdout
sys.stdout = self
try:
if self.arg:
if self.arg is not None:
self.data = self.func(self.arg)
else:
self.data = self.func()
@ -68,13 +38,19 @@ class Thread(QThread):
self._executed = False
self._executedError = e
traceback.print_exc()
exctype, value = sys.exc_info ()[:2]
self._executedErrorInfo = '{} {} {}'.\
exctype, value = sys.exc_info()[:2]
self._executedErrorInfo = '{} {} {}'. \
format(exctype, value, traceback.format_exc())
sys.stdout = sys.__stdout__
if self.redirect_stdout:
sys.stdout = self.old_stdout
def showProgressbar(self):
if self.progressText:
# # generate widget if not given in init
# if not self.pb_widget:
# self.pb_widget = ProgressBarWidget(self.parent())
# self.pb_widget.setWindowFlags(Qt.SplashScreen)
# self.pb_widget.setModal(True)
# generate widget if not given in init
if not self.pb_widget:
@ -110,6 +86,7 @@ class Worker(QRunnable):
'''
Worker class to be run by MultiThread(QThread).
'''
def __init__(self, fun, args,
progressText=None,
pb_widget=None,
@ -117,28 +94,30 @@ class Worker(QRunnable):
super(Worker, self).__init__()
self.fun = fun
self.args = args
#self.kwargs = kwargs
# self.kwargs = kwargs
self.signals = WorkerSignals()
self.progressText = progressText
self.pb_widget = pb_widget
self.redirect_stdout = redirect_stdout
self.old_stdout = None
@Slot()
def run(self):
if self.redirect_stdout:
self.old_stdout = sys.stdout
sys.stdout = self
try:
result = self.fun(self.args)
except:
#traceback.print_exc()
exctype, value = sys.exc_info ()[:2]
exctype, value = sys.exc_info()[:2]
print(exctype, value, traceback.format_exc())
self.signals.error.emit ((exctype, value, traceback.format_exc ()))
self.signals.error.emit((exctype, value, traceback.format_exc()))
else:
self.signals.result.emit(result)
finally:
self.signals.finished.emit('Done')
sys.stdout = self.old_stdout
def write(self, text):
self.signals.message.emit(text)
@ -170,18 +149,20 @@ class MultiThread(QThread):
self.progressText = progressText
self.pb_widget = pb_widget
self.redirect_stdout = redirect_stdout
self.old_stdout = None
self.finished.connect(self.hideProgressbar)
self.showProgressbar()
def run(self):
if self.redirect_stdout:
self.old_stdout = sys.stdout
sys.stdout = self
try:
if not self.ncores:
self.ncores = multiprocessing.cpu_count()
pool = multiprocessing.Pool(self.ncores)
pool = multiprocessing.Pool(self.ncores, maxtasksperchild=1000)
self.data = pool.map_async(self.func, self.args, callback=self.emitDone)
#self.data = pool.apply_async(self.func, self.shotlist, callback=self.emitDone) #emit each time returned
# self.data = pool.apply_async(self.func, self.shotlist, callback=self.emitDone) #emit each time returned
pool.close()
self._executed = True
except Exception as e:
@ -190,7 +171,7 @@ class MultiThread(QThread):
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
print('Exception: {}, file: {}, line: {}'.format(exc_type, fname, exc_tb.tb_lineno))
sys.stdout = sys.__stdout__
sys.stdout = self.old_stdout
def showProgressbar(self):
if self.progressText:

File diff suppressed because it is too large


@ -35,9 +35,9 @@ from __future__ import print_function
__all__ = "get_git_version"
import inspect
# NO IMPORTS FROM PYLOT IN THIS FILE! (file gets used at installation time)
import os
import inspect
from subprocess import Popen, PIPE
# NO IMPORTS FROM PYLOT IN THIS FILE! (file gets used at installation time)

File diff suppressed because it is too large


@ -0,0 +1,2 @@
# -*- coding: utf-8 -*-
#


@ -0,0 +1,101 @@
############################# correlation parameters #####################################
# min_corr_stacking: minimum correlation coefficient for building beam trace
# min_corr_export: minimum correlation coefficient for pick export
# min_stack: minimum number of stations for building beam trace
# t_before: correlation window before pick
# t_after: correlation window after pick
# cc_maxlag: maximum shift for initial correlation
# cc_maxlag2: maximum shift for second (final) correlation (also for calculating pick uncertainty)
# initial_pick_outlier_threshold: threshold for excluding large outliers of initial (AIC) picks
# export_threshold: automatically exclude all onsets which deviate more than this threshold from corrected taup onsets
# min_picks_export: minimum number of correlated picks for export
# min_picks_autopylot: minimum number of reference auto picks to continue with event
# check_RMS: do RMS check to search for restitution errors (very experimental)
# use_taupy_onsets: use taupy onsets as reference picks instead of external picks
# station_list: use the following stations as reference for stacking
# use_stacked_trace: use existing stacked trace if found (spare re-computation)
# data_dir: obspyDMT data subdirectory (e.g. 'raw', 'processed')
# pickfile_extension: use quakeML files (PyLoT output) with the following extension, e.g. '_autopylot' for pickfiles
# such as 'PyLoT_20170501_141822_autopylot.xml'
# dt_stacking: time shift for stacking (e.g. [0, 250] for 0 and 250 seconds shift)
# filter_options: filter for first correlation (rough)
# filter_options_final: filter for second correlation (fine)
# filter_type: e.g. 'bandpass'
# sampfreq: sampling frequency of the data
logging: info
pick_phases: ['P', 'S']
# P-phase
P:
min_corr_stacking: 0.8
min_corr_export: 0.6
min_stack: 20
t_before: 30.
t_after: 50.
cc_maxlag: 50.
cc_maxlag2: 5.
initial_pick_outlier_threshold: 30.
export_threshold: 2.5
min_picks_export: 100
min_picks_autopylot: 50
check_RMS: True
use_taupy_onsets: False
station_list: ['HU.MORH', 'HU.TIH', 'OX.FUSE', 'OX.BAD']
use_stacked_trace: False
data_dir: 'processed'
pickfile_extension: '_autopylot'
dt_stacking: [250, 250]
# filter for first correlation (rough)
filter_options:
freqmax: 0.5
freqmin: 0.03
# filter for second correlation (fine)
filter_options_final:
freqmax: 0.5
freqmin: 0.03
filter_type: bandpass
sampfreq: 20.0
# ignore if autopylot fails to pick master-trace (not recommended if absolute onset times matter)
ignore_autopylot_fail_on_master: True
# S-phase
S:
min_corr_stacking: 0.7
min_corr_export: 0.6
min_stack: 20
t_before: 60.
t_after: 60.
cc_maxlag: 100.
cc_maxlag2: 25.
initial_pick_outlier_threshold: 30.
export_threshold: 5.0
min_picks_export: 200
min_picks_autopylot: 50
check_RMS: True
use_taupy_onsets: False
station_list: ['HU.MORH','HU.TIH', 'OX.FUSE', 'OX.BAD']
use_stacked_trace: False
data_dir: 'processed'
pickfile_extension: '_autopylot'
dt_stacking: [250, 250]
# filter for first correlation (rough)
filter_options:
freqmax: 0.1
freqmin: 0.01
# filter for second correlation (fine)
filter_options_final:
freqmax: 0.2
freqmin: 0.01
filter_type: bandpass
sampfreq: 20.0
# ignore if autopylot fails to pick master-trace (not recommended if absolute onset times matter)
ignore_autopylot_fail_on_master: True
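The per-phase blocks above can be sanity-checked before a run. A minimal sketch, assuming the parameters have already been loaded into a plain dict (e.g. via a YAML parser, not shown); the specific cross-checks are illustrative assumptions, not constraints enforced by the correlation code:

```python
def validate_phase_params(params):
    """Sanity checks for one phase block of the correlation parameter
    file above (key names match the YAML; thresholds are illustrative)."""
    # both filter stages need a valid passband
    for key in ('filter_options', 'filter_options_final'):
        fopts = params[key]
        assert 0 < fopts['freqmin'] < fopts['freqmax'], key
    # export threshold should not exceed the (stricter) stacking threshold
    assert 0.0 < params['min_corr_export'] <= params['min_corr_stacking'] <= 1.0
    # the fine correlation searches a narrower lag window than the rough one
    assert params['cc_maxlag2'] <= params['cc_maxlag']
    return True


# values taken from the P-phase block above
p_params = {
    'min_corr_stacking': 0.8, 'min_corr_export': 0.6,
    'cc_maxlag': 50., 'cc_maxlag2': 5.,
    'filter_options': {'freqmin': 0.03, 'freqmax': 0.5},
    'filter_options_final': {'freqmin': 0.03, 'freqmax': 0.5},
}
```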

File diff suppressed because it is too large


@ -0,0 +1,41 @@
#!/bin/bash
#ulimit -s 8192
#ulimit -v $(ulimit -v | awk '{printf("%d",$1*0.95)}')
#ulimit -v
#655360
source /opt/anaconda3/etc/profile.d/conda.sh
conda activate pylot_311
NSLOTS=20
#qsub -l low -cwd -l "os=*stretch" -pe smp 40 submit_pick_corr_correction.sh
#$ -l low
#$ -l h_vmem=6G
#$ -cwd
#$ -pe smp 20
#$ -N corr_pick
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/pylot_tools/"
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/"
export PYTHONPATH="$PYTHONPATH:/home/marcel/git/pylot/"
#export MKL_NUM_THREADS=${NSLOTS:=1}
#export NUMEXPR_NUM_THREADS=${NSLOTS:=1}
#export OMP_NUM_THREADS=${NSLOTS:=1}
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 0 -istop 100
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 100 -istop 200
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M6.0-6.5' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 0 -istop 100
#python pick_correlation_correction.py '/data/AlpArray_Data/dmt_database_mantle_M5.8-6.0' '/home/marcel/.pylot/pylot_alparray_mantle_corr_stack_0.03-0.5.in' -pd -n ${NSLOTS:=1} -istart 100 -istop 200
#python pick_correlation_correction.py 'H:\sciebo\dmt_database' 'H:\Sciebo\dmt_database\pylot_alparray_mantle_corr_S_0.01-0.2.in' -pd -n 4 -t
#pylot_infile='/home/marcel/.pylot/pylot_alparray_syn_fwi_mk6_it3.in'
pylot_infile='/home/marcel/.pylot/pylot_adriaarray_corr_P_and_S.in'
# THIS SCRIPT SHOULD BE CALLED BY "submit_to_grid_engine.py" using the following line:
# use -pd for detailed plots in eventdir/correlation_XX_XX/figures
python pick_correlation_correction.py $1 $pylot_infile -n ${NSLOTS:=1} -istart $2 --params 'parameters_adriaarray.yaml' # -pd
#--event_blacklist eventlist.txt


@ -0,0 +1,28 @@
#!/usr/bin/env python
import subprocess
fnames = [
('/data/AdriaArray_Data/dmt_database_mantle_M5.0-5.4', 0),
('/data/AdriaArray_Data/dmt_database_mantle_M5.4-5.7', 0),
('/data/AdriaArray_Data/dmt_database_mantle_M5.7-6.0', 0),
('/data/AdriaArray_Data/dmt_database_mantle_M6.0-6.3', 0),
('/data/AdriaArray_Data/dmt_database_mantle_M6.3-10.0', 0),
# ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M5.0-5.4', 0),
# ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M5.4-5.7', 0),
# ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M5.7-6.0', 0),
# ('/data/AdriaArray_Data/dmt_database_ISC_mantle_M6.0-10.0', 0),
]
#fnames = [('/data/AlpArray_Data/dmt_database_mantle_0.01-0.2_SKS-phase', 0),
# ('/data/AlpArray_Data/dmt_database_mantle_0.01-0.2_S-phase', 0),]
####
script_location = '/home/marcel/VersionCtrl/git/pylot/pylot/correlation/submit_pick_corr_correction.sh'
####
for fnin, istart in fnames:
input_cmds = f'qsub -q low.q@minos15,low.q@minos14,low.q@minos13,low.q@minos12,low.q@minos11 {script_location} {fnin} {istart}'
print(input_cmds)
print(subprocess.check_output(input_cmds.split()))


@ -0,0 +1,61 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import glob
import json
from obspy import read_events
from pylot.core.util.dataprocessing import Metadata
from pylot.core.util.obspyDMT_interface import qml_from_obspyDMT
def get_event_obspy_dmt(eventdir):
event_pkl_file = os.path.join(eventdir, 'info', 'event.pkl')
if not os.path.exists(event_pkl_file):
raise IOError('Could not find event path for event: {}'.format(eventdir))
event = qml_from_obspyDMT(event_pkl_file)
return event
def get_event_pylot(eventdir, extension=''):
event_id = get_event_id(eventdir)
filename = os.path.join(eventdir, 'PyLoT_{}{}.xml'.format(event_id, extension))
if not os.path.isfile(filename):
return
cat = read_events(filename)
return cat[0]
def get_event_id(eventdir):
event_id = os.path.split(eventdir)[-1]
return event_id
def get_picks(eventdir, extension=''):
event_id = get_event_id(eventdir)
filename = 'PyLoT_{}{}.xml'
filename = filename.format(event_id, extension)
fpath = os.path.join(eventdir, filename)
fpaths = glob.glob(fpath)
if len(fpaths) == 1:
cat = read_events(fpaths[0])
picks = cat[0].picks
return picks
elif len(fpaths) == 0:
print('get_picks: File not found: {}'.format(fpath))
return
print(f'WARNING: Ambiguous pick file specification. Found the following pick files {fpaths}\nFilemask: {fpath}')
return
def write_json(object, fname):
with open(fname, 'w') as outfile:
json.dump(object, outfile, sort_keys=True, indent=4)
def get_metadata(eventdir):
metadata_path = os.path.join(eventdir, 'resp')
metadata = Metadata(inventory=metadata_path, verbosity=0)
return metadata
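`get_picks` resolves the pick file by globbing a filename mask built from the event directory name, and only accepts a unique match (the `extension` argument may contain glob wildcards). A condensed stdlib sketch of that lookup (`find_pick_file` is an illustrative name):

```python
import glob
import os


def find_pick_file(eventdir, extension=''):
    """Sketch of the lookup in ``get_picks``: build the PyLoT pick-file
    mask from the event directory name and return a path only if it
    matches exactly one file, else None (ambiguous or missing)."""
    event_id = os.path.basename(eventdir.rstrip(os.sep))
    mask = os.path.join(eventdir, 'PyLoT_{}{}.xml'.format(event_id, extension))
    matches = glob.glob(mask)
    return matches[0] if len(matches) == 1 else None
```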


File diff suppressed because it is too large

File diff suppressed because it is too large

pylot/styles/__init__.py Normal file

@ -0,0 +1 @@
# -*- coding: utf-8 -*-

pylot/styles/bright.qss Normal file

@ -0,0 +1,258 @@
QMainWindow{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(255, 255, 255, 255));
color: rgba(0, 0, 0, 255);
}
QWidget{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(235, 235, 235, 255), stop:1 rgba(230, 230, 230, 255));
color: rgba(0, 0, 0, 255);
}
QToolBar QWidget:checked{
background-color: transparent;
border-color: rgba(230, 230, 230, 255);
border-width: 2px;
border-style:inset;
}
QComboBox{
background-color: rgba(255, 255, 255, 255);
color: rgba(0, 0, 0, 255);
min-height: 1.5em;
selection-background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 55, 140, 150), stop:1 rgba(0, 70, 180, 150));
}
QComboBox *{
background-color: rgba(255, 255, 255, 255);
color: rgba(0, 0, 0, 255);
selection-background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 55, 140, 150), stop:1 rgba(0, 70, 180, 150));
selection-color: rgba(255, 255, 255, 255);
}
QMenuBar{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:2, stop:0 rgba(240, 240, 240, 255), stop:1 rgba(230, 230, 230, 255));
padding:1px;
}
QMenuBar::item{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:2, stop:0 rgba(240, 240, 240, 255), stop:1 rgba(230, 230, 230, 255));
color: rgba(0, 0, 0, 255);
padding:3px;
padding-left:5px;
padding-right:5px;
}
QMenu{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(230, 230, 230, 255));
color: rgba(0, 0, 0, 255);
padding:0;
}
*::item:selected{
color: rgba(0, 0, 0, 255);
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 55, 140, 150), stop:1 rgba(0, 70, 180, 150));
}
QToolBar{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(255, 255, 255, 255));
border-style:solid;
border-color:rgba(200, 200, 200, 150);
border-width:1px;
}
QToolBar *{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(255, 255, 255, 255));
}
QMessageBox{
background-color: rgba(255, 255, 255, 255);
color: rgba(0, 0, 0, 255);
}
QTableWidget{
background-color: rgba(255, 255, 255, 255);
color:rgba(0, 0, 0, 255);
border-color:rgba(0, 0, 0, 255);
selection-background-color: rgba(200, 210, 230, 255);
}
QTableCornerButton::section{
border: none;
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(255, 255, 255, 255), stop:1 rgba(230, 230, 230, 255));
}
QHeaderView::section{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(255, 255, 255, 255), stop:1 rgba(230, 230, 230, 255));
border:none;
border-top-style:solid;
border-width:1px;
border-top-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(255, 255, 255, 255), stop:1 rgba(230, 230, 230, 255));
color:rgba(0, 0, 0, 255);
padding:5px;
}
QHeaderView::section:checked{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 55, 140, 150), stop:1 rgba(0, 70, 180, 150));
border-top-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 55, 140, 150), stop:1 rgba(0, 70, 180, 150));
}
QHeaderView{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(255, 255, 255, 255), stop:1 rgba(230, 230, 230, 255));
border:none;
border-top-style:solid;
border-width:1px;
border-top-color:rgba(230, 230, 230, 255);
color:rgba(0, 0, 0, 255);
}
QListWidget{
background-color:rgba(230, 230, 230, 255);
color:rgba(0, 0, 0, 255);
}
QStatusBar{
background-color:rgba(255, 255, 255, 255);
color:rgba(0, 0, 0, 255);
}
QPushButton{
background-color:qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(245, 245, 245, 255));
color:rgba(0, 0, 0, 255);
border-style: outset;
border-width: 1px;
border-color: rgba(100, 100, 120, 255);
padding: 4px;
padding-left:5px;
padding-right:5px;
border-radius: 2px;
}
QPushButton:pressed{
background-color: rgba(230, 230, 230, 255);
border-style: inset;
}
QPushButton:checked{
background-color: rgba(230, 230, 230, 255);
border-style: inset;
}
*:disabled{
color:rgba(100, 100, 120, 255);
}
QTabBar{
background-color:transparent;
}
QTabBar::tab{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(210, 210, 210, 255));
color: rgba(0, 0, 0, 255);
border-style:solid;
border-color:rgba(210, 210, 210, 255);
border-bottom-color: transparent;
border-width:1px;
padding:5px;
}
QTabBar::tab:selected{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(255, 255, 255, 255), stop:1 rgba(245, 245, 245, 255));
color: rgba(0, 0, 0, 255);
border-style:solid;
border-color:rgba(245, 245, 245, 255);
border-bottom-color: transparent;
border-width:1px;
padding:5px;
}
QTabBar::tab:disabled{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(230, 230, 230, 255), stop:1 rgba(210, 210, 210, 255));
color: rgba(100, 100, 120, 255);
}
QTabWidget{
background-color:transparent;
}
QTabWidget::pane{
background-color:rgba(0, 0, 0, 255);
margin: 0px 0px 0px 0px;
padding: 0px;
border-width:0px;
}
QTabWidget::tab{
background-color:rgba(255, 255, 255, 255);
}
QTabWidget > QWidget{
background-color: rgba(245, 245, 245, 255);
color: rgba(0, 0, 0, 255);
}
QScrollArea{
background: transparent;
}
QScrollArea>QWidget>QWidget{
background: transparent;
}
QLabel{
color: rgba(0, 0, 0, 255);
background-color: transparent;
}
QTextEdit{
color: rgba(0, 0, 0, 255);
background-color: rgba(255, 255, 255, 255);
}
QSpinBox{
color: rgba(0, 0, 0, 255);
background-color: rgba(255, 255, 255, 255);
}
QDoubleSpinBox{
color: rgba(0, 0, 0, 255);
background-color: rgba(255, 255, 255, 255);
}
QCheckBox{
background-color:transparent;
border:none;
}
QLineEdit{
background-color: rgba(255, 255, 255, 255);
border: 1px inset;
border-radius:0;
border-color: rgba(100, 100, 120, 255);
}
QLineEdit:disabled{
background-color: rgba(255, 255, 255, 255);
border: 1px inset;
border-radius:0;
border-color: rgba(200, 200, 200, 255);
}
QListWidget{
background-color:rgba(255, 255, 255, 255);
}
QProgressBar{
background-color:rgba(230, 230, 230, 255);
}
QProgressBar::chunk{
background-color:qlineargradient(spread:reflect, x1:0, y1:0, x2:0.5, y2:0, stop:0 transparent, stop:1 rgba(0, 70, 180, 150));
}
QStatusBar{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(235, 235, 235, 255), stop:1 rgba(230, 230, 230, 255));
color: rgba(0, 0, 0, 255);
}

pylot/styles/dark.qss Normal file

@@ -0,0 +1,257 @@
QMainWindow{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(255, 255, 255, 255);
}
QWidget{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(255, 255, 255, 255);
}
QToolBar QWidget:checked{
background-color: transparent;
border-color: rgba(100, 100, 120, 255);
border-width: 2px;
border-style:inset;
}
QComboBox{
background-color: rgba(90, 90, 100, 255);
color: rgba(255, 255, 255, 255);
min-height: 1.5em;
selection-background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 144, 180, 255), stop:1 rgba(0, 150, 190, 255));
}
QComboBox *{
background-color: rgba(90, 90, 100, 255);
color: rgba(255, 255, 255, 255);
selection-background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 144, 180, 255), stop:1 rgba(0, 150, 190, 255));
selection-color: rgba(255, 255, 255, 255);
}
QMenuBar{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
padding:1px;
}
QMenuBar::item{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(255, 255, 255, 255);
padding:3px;
padding-left:5px;
padding-right:5px;
}
QMenu{
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(255, 255, 255, 255);
padding:0;
}
*::item:selected{
color: rgba(255, 255, 255, 255);
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 144, 180, 255), stop:1 rgba(0, 150, 190, 255));
}
QToolBar{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
border-style:solid;
border-color:rgba(80, 80, 90, 255);
border-width:1px;
}
QToolBar *{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
}
QMessageBox{
background-color: rgba(60, 60, 70, 255);
color: rgba(255, 255, 255, 255);
}
QTableWidget{
background-color: rgba(80, 80, 90, 255);
color:rgba(255, 255, 255, 255);
border-color:rgba(255, 255, 255, 255);
selection-background-color: rgba(200, 210, 230, 255);
}
QTableCornerButton::section{
border: none;
background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(60, 60, 70, 255), stop:1 rgba(70, 70, 80, 255));
}
QHeaderView::section{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(60, 60, 70, 255), stop:1 rgba(70, 70, 80, 255));
border:none;
border-top-style:solid;
border-width:1px;
border-top-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(60, 60, 70, 255), stop:1 rgba(70, 70, 80, 255));
color:rgba(255, 255, 255, 255);
padding:5px;
}
QHeaderView::section:checked{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 120, 150, 255), stop:1 rgba(0, 150, 190, 255));
border-top-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(0, 120, 150, 255), stop:1 rgba(0, 150, 190, 255));
}
QHeaderView{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgba(60, 60, 70, 255), stop:1 rgba(70, 70, 80, 255));
border:none;
border-top-style:solid;
border-width:1px;
border-top-color:rgba(70, 70, 80, 255);
color:rgba(255, 255, 255, 255);
}
QListWidget{
background-color:rgba(200, 200, 200, 255);
color:rgba(255, 255, 255, 255);
}
QStatusBar{
background-color:rgba(60, 60, 70, 255);
color:rgba(255, 255, 255, 255);
}
QPushButton{
background-color:qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color:rgba(255, 255, 255, 255);
border-style: outset;
border-width: 2px;
border-color: rgba(50, 50, 60, 255);
padding: 4px;
padding-left:5px;
padding-right:5px;
border-radius: 2px;
}
QPushButton:pressed{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(80, 80, 90, 255), stop:1 rgba(70, 70, 80, 255));
border-style: inset;
}
QPushButton:checked{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(80, 80, 90, 255), stop:1 rgba(70, 70, 80, 255));
border-style: inset;
}
*:disabled{
color: rgba(130, 130, 130, 255);
}
QTabBar{
background-color:transparent;
}
QTabBar::tab{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(255, 255, 255, 255);
border-style:solid;
border-color:rgba(70, 70, 80, 255);
border-bottom-color: transparent;
border-width:1px;
padding:5px;
}
QTabBar::tab:selected{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(80, 80, 90, 255), stop:1 rgba(70, 70, 80, 255));
color: rgba(255, 255, 255, 255);
border-style:solid;
border-color:rgba(70, 70, 80, 255);
border-bottom-color: transparent;
border-width:1px;
padding:5px;
}
QTabBar::tab:disabled{
background-color:qlineargradient(spread:pad, x1:0, y1:0, x2:0, y2:1, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(100, 100, 120, 255);
}
QTabWidget{
background-color:transparent;
}
QTabWidget::pane{
background-color:rgba(70, 70, 80, 255);
margin: 0px 0px 0px 0px;
padding: 0px;
border-width:0px;
}
QTabWidget::tab{
background-color:rgba(70, 70, 80, 255);
}
QTabWidget > QWidget{
background-color: rgba(70, 70, 80, 255);
color: rgba(255, 255, 255, 255);
}
QScrollArea{
background: transparent;
}
QScrollArea>QWidget>QWidget{
background: transparent;
}
QLabel{
color: rgba(255, 255, 255, 255);
background-color: transparent;
}
QTextEdit{
color: rgba(255, 255, 255, 255);
background-color: rgba(90, 90, 100, 255);
}
QSpinBox{
color: rgba(255, 255, 255, 255);
background-color: rgba(90, 90, 100, 255);
}
QDoubleSpinBox{
color: rgba(255, 255, 255, 255);
background-color: rgba(90, 90, 100, 255);
}
QCheckBox{
background-color:transparent;
border:none;
}
QLineEdit{
background-color: rgba(90, 90, 100, 255);
border: 1px inset;
border-radius:0;
border-color: rgba(100, 100, 120, 255);
}
QLineEdit:disabled{
background-color: rgba(90, 90, 100, 255);
border: 1px inset;
border-radius:0;
border-color: rgba(200, 200, 200, 255);
}
QListWidget{
background-color:rgba(60, 60, 70, 255);
}
QProgressBar{
background-color:rgba(60, 60, 70, 255);
}
QProgressBar::chunk{
background-color:qlineargradient(spread:reflect, x1:0, y1:0, x2:0.5, y2:0, stop:0 transparent, stop:1 rgba(0, 150, 190, 255));
}
QStatusBar{
background-color: qlineargradient(spread:reflect, x1:0, y1:0, x2:0, y2:0.5, stop:0 rgba(70, 70, 80, 255), stop:1 rgba(60, 60, 70, 255));
color: rgba(255, 255, 255, 255);
}
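The two stylesheets above (bright.qss and dark.qss) are plain Qt style sheets. As a minimal sketch of how such a file could be applied to a running application: the helper below reads a .qss file and the commented-out lines show typical Qt usage. The function name and file path are illustrative assumptions, not code from this repository.

```python
import os

def load_stylesheet(path):
    """Return the stylesheet text at `path`, or an empty string if the file is missing."""
    if not os.path.isfile(path):
        return ''
    with open(path) as fobj:
        return fobj.read()

# Typical Qt usage (commented out so the sketch stays dependency-free):
# from PySide2.QtWidgets import QApplication
# app = QApplication([])
# app.setStyleSheet(load_stylesheet('pylot/styles/dark.qss'))
```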

@@ -0,0 +1,69 @@
# -*- coding: utf-8 -*-
# Set base phase colors for manual and automatic picks
# together with a modifier (r, g, or b) used to alternate
# the base color
phasecolors = {
'manual': {
'P': {
'rgba': (0, 0, 255, 255),
'modifier': 'g'},
'S': {
'rgba': (255, 0, 0, 255),
'modifier': 'b'}
},
'auto': {
'P': {
'rgba': (140, 0, 255, 255),
'modifier': 'g'},
'S': {
'rgba': (255, 140, 0, 255),
'modifier': 'b'}
}
}
# Set plot colors and stylesheet for each style
stylecolors = {
'default': {
'linecolor': {
'rgba': (0, 0, 0, 255)},
'background': {
'rgba': (255, 255, 255, 255)},
'multicursor': {
'rgba': (255, 190, 0, 255)},
'ref': {
'rgba': (200, 210, 230, 255)},
'test': {
'rgba': (200, 230, 200, 255)},
'stylesheet': {
'filename': None}
},
'dark': {
'linecolor': {
'rgba': (230, 230, 230, 255)},
'background': {
'rgba': (50, 50, 60, 255)},
'multicursor': {
'rgba': (0, 150, 190, 255)},
'ref': {
'rgba': (80, 110, 170, 255)},
'test': {
'rgba': (130, 190, 100, 255)},
'stylesheet': {
'filename': 'dark.qss'}
},
'bright': {
'linecolor': {
'rgba': (0, 0, 0, 255)},
'background': {
'rgba': (255, 255, 255, 255)},
'multicursor': {
'rgba': (100, 100, 190, 255)},
'ref': {
'rgba': (200, 210, 230, 255)},
'test': {
'rgba': (200, 230, 200, 255)},
'stylesheet': {
'filename': 'bright.qss'}
}
}
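The comment above the `phasecolors` dict says the `modifier` entry names a channel (r, g, or b) used to alternate the base color. One plausible reading of that intent is sketched below: raising the named channel in steps to derive shaded variants of a base phase color. This is an assumption about how the key might be consumed, not PyLoT's actual implementation.

```python
# Hypothetical helper: derive a shaded variant of a base rgba color by
# raising the channel named in `modifier`, capped at 255.
def shade(rgba, modifier, step):
    """Return `rgba` with the 'r', 'g' or 'b' channel raised by `step`."""
    index = 'rgb'.index(modifier)
    channel = min(255, rgba[index] + step)
    return rgba[:index] + (channel,) + rgba[index + 1:]

# Manual P picks use a blue base with a green modifier:
base = {'rgba': (0, 0, 255, 255), 'modifier': 'g'}
print(shade(base['rgba'], base['modifier'], 60))  # (0, 60, 255, 255)
```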

@@ -1,13 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
from PySide.QtGui import QApplication
from pylot.core.util.widgets import HelpForm
app = QApplication(sys.argv)
win = HelpForm()
win.show()
app.exec_()

Some files were not shown because too many files have changed in this diff.