Earthquake Research in China  2018, Vol. 32 Issue (1): 1-14
A Review on the Research Progress in Operational Earthquake Forecasting (OEF) in the World
Bi Jinmeng1, Jiang Changsheng2
1. Tianjin Earthquake Agency, Tianjin 200301, China;
2. Institute of Geophysics, China Earthquake Administration, Beijing 100081, China
Abstract: In this paper, the research progress of Operational Earthquake Forecasting (OEF) is reviewed in terms of its major areas of concern, the concept of probability gain, hybrid model development, and applications in earthquake disaster reduction. Because OEF has developed on the basis of the global "Collaboratory for the Study of Earthquake Predictability (CSEP)" plan, it provides a significant technical foundation for earthquake forecast modeling and a practical foundation for solving actual problems in earthquake preparedness and disaster mitigation. The related research and technical ideas therefore offer inspiration and reference for earthquake forecasting/prediction.
Key words: Operational earthquake forecasting     Probability gain     Hybrid model     Emergency evacuation     Seismic fortification

INTRODUCTION

On April 6, 2009, an MW6.3 earthquake beneath the city of L'Aquila, central Italy, triggered the "trial of seismologists" event, which directly promoted the research and development of "Operational Earthquake Forecasting (OEF)" (Jordan et al., 2010, 2011). Unlike conventional earthquake forecasting/prediction, the core objective of OEF is to provide the public with authoritative, timely information on seismic hazard. Its main application is to guide the public in taking protective measures in advance of a potentially destructive earthquake, in order to effectively reduce earthquake disaster (Jordan et al., 2010, 2014).

Since the introduction of OEF, it has received extensive attention from seismologists worldwide. For example, the "Operational Earthquake Forecasting and Decision Making" conference held on June 8-11, 2014 in Varenna, Italy covered OEF topics including "The Problem of Earthquake Forecasting", "Testing Short-term Earthquake Forecasting Models", "Operational Earthquake Forecasting", "From Short-term Hazard to Risk", and "Decision-making Process and Risk Communication in a Low-probability Forecasting Environment". In addition, the United States Geological Survey (USGS) has held a series of OEF-related working meetings since 2015, and the AGU Fall Meeting 2016 also set up thematic sessions on OEF-related issues.

To understand the overall context of OEF research and its development trends accurately and comprehensively, this paper teases out the general ideas of current OEF research and the key technologies involved, including "probability gain", the model foundation and the fields of OEF application, and attempts to discuss these issues in combination with earthquake prediction research experience in China, to provide references for the study of earthquake forecasting and prediction in China.

1 THE MAJOR CONCERN AREAS OF OEF

At present, the research and development of OEF is embodied mainly in the practical application of earthquake forecasting, with emphasis on serving the public and providing valuable reference information to decision-makers. It involves the following elements.

(1) The concept of "probability gain" has become an important technical basis for the realization of OEF. As a core tool for assessing the likelihood of future earthquakes, recent seismicity in a region may generate a high probability gain relative to the background seismicity, even though its effect on the absolute earthquake rate is small. The use of "probability gain" can show changes of the earthquake occurrence rate by orders of magnitude, which is more conducive to the "operability" of earthquake forecasting. For example, retrospective ETAS calculations for the day before the L'Aquila mainshock yield probability gains of 5-25 in a large area (~3,600km2) around the hypocenter; in other words, the occurrence of an L'Aquila-size event on April 6, 2009 was 5-25 times more likely in this area than forecast by the long-term reference model (Marzocchi et al., 2009). In operational earthquake forecasting, the United States Geological Survey (USGS) has studied the "Short-term Earthquake Probability (STEP)" model (Gerstenberger et al., 2005); compared with the long-term earthquake forecast model, small earthquake events (M=3.0-4.0) yield probability gains of 10-100 around the hypocenter (Gerstenberger et al., 2007). The ETAS model has been used to study earthquake sequences in California, USA and in Italy, yielding probability gains of 10-100 (Helmstetter et al., 2006; Console et al., 2010). Thus, the introduction of "probability gain" to describe the degree of relative seismic risk brings more scientific decision-making reference information for reducing earthquake disasters.

(2) The global "Collaboratory for the Study of Earthquake Predictability (CSEP)" plan (http://www.cseptesting.org) provides an important forecasting model foundation for OEF. Launched by the Southern California Earthquake Center (SCEC) in 2006, the CSEP plan has developed a large number of earthquake forecasting models on different time scales after nearly 10 years of development and improvement. CSEP adopts unified research areas, seismic catalogues and strict statistical testing methods. Forecast models participate by "competition" in retrospective forecasting tests, in order to screen out superior models for prospective forecasting. Thus, the forecast models obtained through CSEP make it possible to build models for the construction of an OEF system for more effective disaster reduction. In order to overcome the limitation of single forecast models in the development of the CSEP plan, researchers have combined single forecast models based on different "precursor" hypotheses into "hybrid models" and tested their forecast efficiency. Meanwhile, hybrid models have gradually become the main choice for the construction of OEF systems.

(3) Attaching importance to the combination of seismic fortification and emergency preparedness, and building practical application-specific OEF systems. In order to achieve the target of earthquake disaster reduction, OEF attaches great importance to its role in earthquake emergency evacuation and seismic fortification. At present, seismic fortification is based mainly on medium- and long-term earthquake forecasting and the corresponding "probabilistic seismic hazard analysis (PSHA)", which guides building codes for foundation design and earthquake-resistant design. According to the needs of medium- and long-term earthquake forecasting and seismic fortification, OEF makes corresponding disaster-reduction decisions based on seismic risk probability and the corresponding cost-benefit analysis (CBA). In order to realize operational earthquake forecasting, some researchers have begun to build applicable OEF systems, which are being developed in the direction of being public, transparent, repeatable and verifiable (Marzocchi et al., 2014). At present, OEF systems have been established and partly automated in operation in the United States, Italy and New Zealand (Jordan et al., 2011).

2 PROBABILITY GAIN

2.1 Definition and Representation of "Probability Gain"

An important reason why probability gain is introduced in the OEF study is to solve the problem of seismic risk assessment in areas with a low earthquake occurrence rate. For the "probability gain", the conditional probability P(B|A) of earthquake B under "precursor" condition A is given by the earthquake forecast model as

 ${\rm{P}}\left({B|A} \right) = \left[ {\frac{{P\left({A|B} \right)}}{{P\left(A \right)}}} \right]P\left(B \right)$ (1)

The term in square brackets is the "probability gain" (Aki, 1981; McGuire et al., 2005). In addition, based on the Molchan Error Diagram, which can evaluate the effectiveness of earthquake forecasts (Molchan, 1991), the "probability gain" can be specifically expressed as

 ${\rm{G}} = \frac{{p\left({B|A} \right)}}{{p\left(B \right)}} = \frac{{1 - \upsilon }}{\tau }$ (2)

where τ is the fraction of space-time occupied by alarms, υ is the miss rate, and G is the slope of the line from (0, 1) to (τ, υ) in the Molchan Error Diagram.
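As a minimal illustration, the Molchan-diagram form of the probability gain in equation (2) can be computed as follows; the alarm fraction and miss rate used here are hypothetical:

```python
def probability_gain(tau, nu):
    """Probability gain G = (1 - nu) / tau (eq. 2), where tau is the
    fraction of space-time occupied by alarms and nu is the miss rate."""
    if not (0.0 < tau <= 1.0 and 0.0 <= nu <= 1.0):
        raise ValueError("tau must be in (0, 1], nu in [0, 1]")
    return (1.0 - nu) / tau

# An alarm strategy covering 5% of space-time that misses 20% of target
# events has a probability gain of 16 over random guessing.
print(probability_gain(0.05, 0.2))  # -> 16.0
```

A strategy that alarms everywhere all the time (tau=1, nu=0) gives G=1, i.e. no gain over the reference model.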

In addition, the classical paired Student's t-test can be used to provide a confidence interval on the information gain per earthquake (IGPE) of model A over model B (Imoto, 2007), where

 ${\rm{IGPE}}\left({A, B} \right) = \frac{{\ln {L_A} - \ln {L_B}}}{N}$ (3)

where N is the number of target earthquakes. The log-likelihood function can be expressed as

 $\ln L = \sum\limits_{m = 1}^N {\ln \lambda \left({{i_m}} \right)} - \sum\limits_{i = 1}^n {\lambda \left(i \right)}$ (4)

where n is the total number of cells, m is the serial number of an earthquake event, im is the cell containing the mth earthquake, and λ(i) is the expected number of earthquakes in the ith cell. According to the Akaike Information Criterion (AIC) (Akaike, 1974), we get

 ${\rm{AIC}} = - 2\ln L + 2k$ (5)

where k is the number of fitting parameters to be determined. The AIC value describes the applicability of a model to the actual data: the smaller the value, the better the applicability. Specifically, the IGPE of model A over model B can be expressed as

 ${\rm{IGPE}}\left({A, B} \right) = \frac{{\mathit{\Delta }{\rm{AIC}}}}{{2N}} = \frac{{{\rm{AI}}{{\rm{C}}_B} - {\rm{AI}}{{\rm{C}}_A}}}{{2N}}$ (6)

Therefore, the "probability gain" is obtained as

 $G = \exp \left[ {\mathit{\Delta }{\rm{AIC/2}}N} \right]$ (7)
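The chain from log-likelihood to AIC to probability gain (equations (5)-(7)) can be sketched as follows; the model scores and parameter counts used here are hypothetical:

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion (eq. 5): AIC = -2 ln L + 2k."""
    return -2.0 * log_likelihood + 2.0 * k

def probability_gain_from_aic(aic_a, aic_b, n_events):
    """IGPE of model A over reference model B (eq. 6) and the implied
    probability gain per earthquake G = exp(dAIC / 2N) (eq. 7)."""
    d_aic = aic_b - aic_a
    igpe = d_aic / (2.0 * n_events)
    return igpe, math.exp(igpe)

# Hypothetical numbers: model A fits 50 target events with ln L = -120
# using 4 parameters; reference model B gives ln L = -150 with 2 parameters.
aic_a = aic(-120.0, 4)  # -> 248.0
aic_b = aic(-150.0, 2)  # -> 304.0
igpe, gain = probability_gain_from_aic(aic_a, aic_b, 50)
print(igpe, gain)
```

A positive IGPE (here 0.56) means model A is more informative per target earthquake than the reference model, with the corresponding probability gain G = exp(0.56).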

The Stationary Uniform Poisson (SUP) model and the Relative Intensity (RI) model are commonly used as reference models in earthquake forecasting, and they are also the basis for calculating the probability gain of other forecast models. The seismic occurrence rate intensity of the Poisson model is expressed as

 ${\lambda _{{\rm{SUP}}}}\left({x, y, M} \right) = {\mu _0}\left({x, y} \right)\beta {{\rm{e}}^{ - \beta \left({M - {M_c}} \right)}}$ (8)

where β=bln10, μ0(x, y) is the background earthquake rate, M is the magnitude of earthquake events, and Mc is the magnitude of completeness. The seismic occurrence rate intensity of the RI model is expressed as

 ${\lambda _{{\rm{RI}}}} = \frac{{{P_i}}}{{{P_{\max }}}} = \frac{{{N_i}}}{{{N_{\max }}}}$ (9)

where Ni is the number of past earthquakes in the ith cell and Nmax is the maximum count over all cells.
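A minimal sketch of the two reference rate models (equations (8) and (9)); the cell counts and parameter values used here are hypothetical:

```python
import numpy as np

def sup_rate(mu0, b, m, mc):
    """Magnitude-frequency rate density of the SUP model (eq. 8):
    lambda = mu0 * beta * exp(-beta * (M - Mc)), with beta = b * ln 10."""
    beta = b * np.log(10.0)
    return mu0 * beta * np.exp(-beta * (m - mc))

def ri_rate(counts):
    """RI model (eq. 9): cell rate proportional to the past event count,
    normalized by the maximum count over all cells."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.max()

# Hypothetical grid: past event counts per cell, and a b-value of 1.0.
print(ri_rate([2, 8, 4]))            # cell weights relative to the busiest cell
print(sup_rate(0.1, 1.0, 5.0, 3.0))  # SUP rate density at M=5 with Mc=3
```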
2.2 The Main Research Results of "Probability Gain" in OEF

Console et al. (2010) made a retrospective study of 33 earthquake events with M≥5.0 occurring in Italy from 1990 to 2006, and found the probability gain of the ETAS model over the Poisson model to be 0.93-32,000. In Japan, the probability gain of the ETAS model over the Poisson model is 973 (Zhuang, 2011). A retrospective study by Lombardi et al. (2009) of M≥5.5 earthquakes occurring in Italy revealed the probability gain of the Double Branching Model (DBM) (Marzocchi et al., 2008) over the Poisson model to be 1.54. Rhoades et al. (2009) made a retrospective study of M≥5.0 earthquakes occurring in California from 1984 to 2004, and found that the probability gains of the Proximity to Past Earthquakes (PPE) model, the Every Earthquake a Precursor According to Scale (EEPAS) model and the STEP model over the SUP model are 5.31, 8.76 and 10.13, respectively. At the 95% confidence level, the probability gains of the Geodetic, Neokinema, Pattern Informatics (PI), Kagan, geodetic 8.1 and geodetic 8.5 models over the Helmstetter-Kagan-Jackson (HKJ) model are about 1.77, 1.35, 1.30, 1.22, 1.20 and 1.22, respectively, in the retrospective test on the 5-year scale in California (Rhoades et al., 2014). The probability gain of the stress release model over the Poisson model is 1.11-1.22 (Zheng et al., 1994). Some researchers found that the probability gain of the PPE model over the HKJ model is about 4.5, 1.6, 1.6 and 3.4 in New Zealand, Japan, Greece and California, U.S.A., respectively, while the probability gain of the EEPAS model over the SUP model is about 1.82, 3.65 and 7.69 in Japan, Greece and California, U.S.A., respectively (Rhoades et al., 2005; Console et al., 2006; Rhoades, 2007).

Using retrospective tests of the "EAST*EEPAS", EEPAS and EASTR models to evaluate forecast efficiency in California, U.S.A., Shebalin et al. (2014) found that the probability gains of the "EAST*EEPAS" model over the EEPAS and EASTR models are 1.65 and 1.05, respectively, and the spatial probability gain of the "EAST*EEPAS" model over the EASTR model is 1.28, showing that this hybrid model has better predictive efficiency. Rhoades et al. (2014) found the probability gains of the "HKJ&Neokinema&PI" and "HKJ&Geodetic&PI" hybrid models over the HKJ model to be 1.42 and 2.20, respectively. Some researchers have also studied the "probability gain" of many other hybrid models (Rhoades, 2013; Rhoades et al., 2015, 2016).

Helmstetter et al. (2006) used the ETES model for short-term forecast research, considering the influence of the minimum magnitude on the forecast effect, especially on the "probability gain" (G): Mmin=2, G=11.13; Mmin=3, G=11.94; Mmin=4, G=8.46; Mmin=5, G=4.41; Mmin=6, G=2.46. For the long-time-scale Renewal model (Daley et al., 2004), when the time interval is fixed, the correspondence between the parameter κ and the probability gain is as follows: κ=0.2, G=1.9; κ=5, G=0.4; κ=25, G=1.2 (Harte et al., 2005).

3 EARTHQUAKE FORECAST HYBRID MODELS

In the construction of an OEF technology system, the forecast models with the best tested efficiency are usually adopted and mixed, on different time scales or the same time scale, with certain weight coefficients; after optimal design, specific "hybrid models" are obtained.

3.1 "STEP&EEPAS&NSHMBG&PPE" Hybrid Models

According to the experience summarized by experts at the New Zealand GNS Science symposium in 2011, McVerry (2012) defined a hybrid model called the "Expert Elicitation (EE)" model. The EE hybrid model is based on a simple, informal Average-Maximum (AVMAX) hybrid model, defined as the maximum of a long-term component and a time-varying component (Gerstenberger et al., 2014). In order to find the best model with the structure of the EE model at each time-lag, optimal weighting parameters for the long-term and time-varying model components are derived using a test dataset; the new optimum model is denoted as the LT_TV_OPT hybrid model by Rhoades et al. (2016).

On the basis of expert opinions, model weightings in the AVMAX and EE hybrid models are as follows:

 ${\rm{AVMAX}} = {p_1}{\rm{STEP\_TV}} + {p_2}{\rm{EEPAS\_0F\_TV}} + q{\rm{PPE}}$ (10)
 $\begin{array}{l} {\rm{EE}} = {m_1}{\rm{STEP\_TV}} + {m_2}{\rm{ETAS\_TV}} + \\ {m_3}{\rm{EEPAS\_0F\_TV}} + {m_4}{\rm{EEPAS\_1F\_TV}}\\ + {n_1}{\rm{NSHMBG\_B\_POLY}} + {n_2}{\rm{PPE}} + \\ {n_3}{\rm{PPE\_FROM\_}}1840 + {n_4}{\rm{PPE\_DECLUS}} \end{array}$ (11)

where pi and mi are the weights of the time-varying components, and q and ni are the weights of the long-term components; p1 and p2 are both 0.5; q is 1; m1, m2, m3, m4 are 0.36, 0.19, 0.24, 0.21, respectively; n1, n2, n3, n4 are 0.58, 0.13, 0.16, 0.12, respectively. The long-term (LT) and time-varying (TV) component models are thus of the form

 $\begin{array}{l} {\rm{LT}} = {a_1}{\rm{PPE}} + {a_2}{\rm{PPE\_DECLUS}} + \\ {a_3}{\rm{NSHMBG\_B\_POLY}} + {a_4}{\rm{SUP}} + {a_5}{\rm{PPE\_1950}} \end{array}$ (12)
 $\begin{array}{l} {\rm{TV}} = {b_1}{\rm{STEP\_TV}} + {b_2}{\rm{ETAS\_TV}} + \\ {b_3}{\rm{EEPAS\_0F\_TV}} + {b_4}{\rm{EEPAS\_1F\_TV}} \end{array}$ (13)

where, $\sum\limits_{i = 1}^5 {{a_i} = 1}, \sum\limits_{i = 1}^4 {{b_i} = 1}$. The parameters of the two components are simultaneously optimized to maximize the log likelihood of the LT_TV_OPT hybrid over the whole catalogue, using the Downhill Simplex method (Nelder and Mead, 1965).
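As an illustration of weight optimization for a hybrid, the following sketch fits the mixing weight of a two-model combination by maximizing a Poisson log-likelihood over gridded rates. It uses a brute-force grid search in place of the Downhill Simplex method, and all rates and counts are hypothetical:

```python
import numpy as np

def poisson_loglik(rate, counts):
    """Poisson log-likelihood of observed counts per cell (constant
    factorial terms are dropped, as they do not affect the optimum)."""
    return np.sum(counts * np.log(rate) - rate)

def best_mixture_weight(rate_a, rate_b, counts, n_grid=1001):
    """Find the weight a in [0, 1] maximizing the log-likelihood of the
    mixture a*rate_a + (1-a)*rate_b, by brute-force grid search."""
    weights = np.linspace(0.0, 1.0, n_grid)
    scores = [poisson_loglik(a * rate_a + (1.0 - a) * rate_b, counts)
              for a in weights]
    return weights[int(np.argmax(scores))]

# Hypothetical 4-cell example: observed counts resemble model A more than B.
rate_a = np.array([0.5, 2.0, 1.0, 0.5])
rate_b = np.array([1.0, 1.0, 1.0, 1.0])
counts = np.array([0, 2, 1, 1])
print(best_mixture_weight(rate_a, rate_b, counts))
```

The Downhill Simplex method of Nelder and Mead (1965) plays the same role for the multi-component LT and TV mixtures, where a grid search over several weights would be too expensive.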

Retrospective tests of the operational hybrid models ("AVMAX", "EE", "LT_TV_OPT") and the individual models were carried out in the New Zealand Testing Center by Rhoades et al. (2016). Based on the test results, the operational hybrid models are expected to outperform most of the individual models.

3.2 SMA Hybrid Models

Marzocchi et al. (2014) set up the OEF-Italy system by combining the ETES model (Falcone et al., 2010), the ETAS model (Lombardi and Marzocchi, 2010) and the STEP model (Woessner et al., 2010) with different weights, through the ensemble procedure proposed by Marzocchi et al. (2012), known as Score Model Averaging (SMA). The SMA seismicity rate λij for the ith spatial cell and the jth magnitude bin is calculated as

 ${\lambda _{ij}} = \sum\limits_{n = 1}^3 {\lambda _{ij}^{\left(n \right)}} {\omega _n}$ (14)

where the index n refers to each single forecasting model and ωn is its weight, defined as

 ${\omega _{\rm{n}}} = 1/{L_n}$ (15)

where Ln is the logarithm of the cumulative space-time likelihood scored by the nth model up to the time at which the forecast starts.
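A small sketch of the SMA ensemble (equations (14)-(15)) with three hypothetical models; normalizing the weights to sum to 1 is an assumption here, as the source only states ωn = 1/Ln:

```python
import numpy as np

def sma_rates(model_rates, log_scores):
    """Score Model Averaging (eqs. 14-15): ensemble rate per cell as a
    weighted sum of single-model rates, with weight w_n = 1/L_n.
    Weights are normalized to sum to 1 (an assumption; the source gives
    only w_n = 1/L_n). model_rates has shape (n_models, n_cells)."""
    w = 1.0 / np.abs(np.asarray(log_scores, dtype=float))
    w /= w.sum()
    return w @ np.asarray(model_rates, dtype=float)

# Hypothetical example: three models over four cells; the best-scoring
# model (smallest |log-likelihood|) receives the largest weight.
rates = [[1.0, 2.0, 0.5, 0.1],
         [0.8, 1.5, 0.7, 0.2],
         [1.2, 2.5, 0.4, 0.1]]
print(sma_rates(rates, [-100.0, -200.0, -400.0]))
```

With scores -100, -200 and -400, the normalized weights are 4/7, 2/7 and 1/7, so the ensemble leans toward the best-performing model while still retaining the others.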

At present, Score Model Averaging (SMA) has been used in the OEF-Italy system. Woo and Marzocchi (2013) explored some possible ways in which the scientific information provided by OEF could be used to reduce risk in the short term.

3.3 "WGCEP-UCERF2&ETAS" Hybrid Models

The 2014 Working Group on California Earthquake Probabilities (WGCEP14) presented the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which combines the long-term forecast model WGCEP-UCERF2 with the short-term forecast model ETAS, thus solving the problem of differences in forecast earthquake occurrence rates caused by different time scales. The time-independent UCERF3 model (Field et al., 2014) has also been applied to the 2014 "National Seismic Hazard Distribution Map" (Petersen et al., 2014) published by the USGS. At the same time, a time-dependent long-term earthquake forecast model based on elastic rebound theory has been developed (Field et al., 2015).

3.4 "EAST&EEPAS" Hybrid Models

Shebalin et al. (2014) developed a new method combining the Early Aftershock Statistics (EAST) model (Shebalin et al., 2011) with the Every Earthquake a Precursor According to Scale (EEPAS) model (Rhoades et al., 2004, 2005, 2006, 2011; Console et al., 2006; Rhoades, 2007), obtaining two hybrid models, "EAST*EEPAS" and "EASTR+EEPAS". Among them, the EASTR model is a mixture of the EAST model and the RI model, the "EAST*EEPAS" model is a mixture of the EAST and EEPAS models, and the "EASTR+EEPAS" model is an average of the EASTR and EEPAS models. In addition, comparison tests were made on the forecasts of the EAST, EEPAS, "EAST*EEPAS" and "EASTR+EEPAS" models in California. Based on the test results, the "EAST*EEPAS" hybrid model is expected to outperform the "EASTR+EEPAS" hybrid model, and by and large, the operational hybrid models are expected to outperform most of the individual models.

3.5 "STEP&EEPAS&PPE" Hybrid Models

Considering different mixing methods, Rhoades et al. (2009) constructed linear combinations of the EEPAS model with different components of the STEP model, denoted as the SE1, SE2 and SE3 hybrid models, respectively. Meanwhile, combinations of STEP and PPE form the SP1 and SP2 hybrid models. For the different models, the earthquake occurrence rate densities are given by

 ${\lambda _{{\rm{STEP}}}}\left({t, M, x, y} \right) = \max [{\lambda _{{\rm{CLUST}}}}\left({t, M, x, y} \right), {\lambda _{{\rm{BCKG}}}}\left({t, M, x, y} \right)]$ (16)
 ${\lambda _{{\rm{SE1}}}}\left({t, M, x, y} \right) = {\lambda _{{\rm{CLUST}}}}\left({t, M, x, y} \right) + w{\lambda _{{\rm{EEPAS}}}}\left({t, M, x, y} \right)$ (17)
 ${\lambda _{{\rm{SE2}}}}\left({t, M, x, y} \right) = \left({1 - r} \right){\lambda _{{\rm{STEP}}}}\left({t, M, x, y} \right) + r{\lambda _{{\rm{EEPAS}}}}\left({t, M, x, y} \right)$ (18)
 ${\lambda _{{\rm{SP1}}}}\left({t, M, x, y} \right) = {\lambda _{{\rm{CLUST}}}}\left({t, M, x, y} \right) + u{\lambda _{{\rm{PPE}}}}\left({t, M, x, y} \right)$ (19)
 ${\lambda _{{\rm{SP2}}}}\left({t, M, x, y} \right) = \left({1 - t} \right){\lambda _{{\rm{STEP}}}}\left({t, M, x, y} \right) + t{\lambda _{{\rm{PPE}}}}\left({t, M, x, y} \right)$ (20)

where λCLUST is the clustering component and λBCKG the background component of the STEP model, and the mixing parameters (w, r, u, t)∈[0, 1).

 ${\lambda _{{\rm{SE3}}}}\left({t, M, x, y} \right) = {\lambda _{{\rm{CLUST}}}}\left({t, M, x, y} \right) + {\rm{P}}\left(M \right){\lambda _{{\rm{EEPAS}}}}\left({t, M, x, y} \right)$ (21)

where, P(M) denotes the corresponding probability of the earthquake with magnitude M.
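A minimal sketch of one of these linear combinations, the SE2 hybrid of equation (18), with hypothetical rate densities at a single (t, M, x, y) point:

```python
def se2_rate(step_rate, eepas_rate, r):
    """SE2 hybrid (eq. 18): convex combination of the STEP and EEPAS
    rate densities with mixing parameter r in [0, 1)."""
    if not 0.0 <= r < 1.0:
        raise ValueError("r must be in [0, 1)")
    return (1.0 - r) * step_rate + r * eepas_rate

# Hypothetical rate densities: STEP gives 0.02, EEPAS gives 0.05, r = 0.25.
print(se2_rate(0.02, 0.05, 0.25))  # -> 0.0275
```

The SP2 hybrid of equation (20) has exactly the same form with the PPE rate in place of the EEPAS rate.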

Using the hybrid models SE1, SE2, SE3, SP1 and SP2 and the individual models SUP, STEP, EEPAS and PPE, Rhoades et al. (2009) made retrospective studies of the M≥5.0 earthquakes occurring in California from 1984 to 2004. Based on the test results, the five hybrid models are expected to outperform most of the individual models, and the average probability gain of the hybrid models is 0.2 higher than that of the individual models. Comparing the hybrid models, it is found that SE1 is expected to outperform SP1, and SE2 is expected to outperform SP2. Models SP1, SP2, SE1 and SE2 were submitted for CSEP testing (SCEC Testing Center) of short- and medium-term forecasting on time scales of one day and three months, respectively.

3.6 "Janus" Hybrid Models

Rhoades (2013) combined three models, ETAS (Ogata, 1988, 1989, 2001; Zhuang, 2011), EEPAS and PPE, in certain proportions, forming what is called the "Janus" model. The "Janus" model has been submitted for CSEP testing in the New Zealand and California, USA regions.

For the ETAS model in the "Janus" model, the earthquake occurrence rate density is given by

 ${\lambda _{{\rm{ETAS}}}}\left({t, M, x, y} \right) = \upsilon {\lambda _{{\rm{PPE}}}}\left({t, M, x, y} \right) + {\lambda _{{\rm{AS}}}}\left({t, M, x, y} \right)$ (22)

where υ is a constant (0≤υ < 1), λPPE is the rate density of the PPE model, and λAS is the aftershock component. The EEPAS rate density has the general form

 ${\lambda _{{\rm{EEPAS}}}}\left({t, M, x, y} \right) = \mu {\lambda _{{\rm{PPE}}}}\left({t, M, x, y} \right) + {\lambda _\Psi }\left({t, M, x, y} \right)$ (23)

Where, μ is a constant (0≤μ < 1) and λΨ is the time varying component motivated by the predictive scaling relationships associated with the Ψ phenomenon (Evison et al., 2002, 2004). The "Janus" rate density is thus given by

 ${\lambda _{{\rm{JANUS}}}} = \frac{q}{{1 - \upsilon }}{\lambda _{{\rm{AS}}}} + \left({1 - q} \right){\lambda _{{\rm{EEPAS}}}}$ (24)

where q is a constant parameter to be optimized (0≤q≤1). From equations (23) and (24), we have

 ${\lambda _{{\rm{JANUS}}}} = \frac{q}{{1 - \upsilon }}{\lambda _{{\rm{AS}}}} + \left({1 - q} \right)\mu {\lambda _{{\rm{PPE}}}} + \left({1 - q} \right){\lambda _\Psi }$ (25)

These normalized components, which we will loosely refer to simply as the ETAS, PPE, and EEPAS components, have coefficients $\frac{q}{{1 - \upsilon }}, \left({1 - q} \right)\mu$, and (1-q), respectively, which represent their mixing proportions in the "Janus" model. According to the research results by Rhoades (2013), the Janus model outperforms the most informative of its component models with IGPEs ranging from 0.2 to 0.5.
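The consistency of equations (23)-(25) can be checked numerically; all component rates and parameters below are hypothetical:

```python
def janus_rate(lam_as, lam_ppe, lam_psi, q, upsilon, mu):
    """'Janus' rate density (eq. 25): ETAS aftershock, PPE, and EEPAS
    time-varying components with mixing proportions q/(1-upsilon),
    (1-q)*mu, and (1-q), respectively."""
    return (q / (1.0 - upsilon)) * lam_as \
        + (1.0 - q) * mu * lam_ppe \
        + (1.0 - q) * lam_psi

# Check against eq. (24), using lambda_EEPAS = mu*lambda_PPE + lambda_psi
# from eq. (23); the substituted form must give the same rate.
lam_as, lam_ppe, lam_psi = 0.3, 0.1, 0.05
q, upsilon, mu = 0.4, 0.2, 0.5
lam_eepas = mu * lam_ppe + lam_psi
via_eq24 = (q / (1.0 - upsilon)) * lam_as + (1.0 - q) * lam_eepas
print(abs(janus_rate(lam_as, lam_ppe, lam_psi, q, upsilon, mu) - via_eq24) < 1e-12)  # -> True
```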

3.7 "STEP&C-RS" Hybrid Models

Steacy et al. (2013) developed a new method which combines the statistical power of the Short-Term Earthquake Probability (STEP) model (Gerstenberger et al., 2005) with the spatial constraints of the Coulomb model (Dieterich, 1994; Parsons et al., 2000; Toda et al., 2005), and put forward two hybrid models, STEPC1 and STEPC2, for short-term aftershock forecasting. With these two hybrid models, retrospective and pseudo-prospective tests were made on the Canterbury earthquake sequence, using data from the 10 days before each main event to forecast the rate of M≥4.0 events in the following 100 days. The results suggest that incorporating the physical constraints from Coulomb stress changes can increase the forecasting power of statistical models, and clearly show the importance of good data quality if prospective forecasts are to be implemented in practice. This is consistent with a prior study of a hybrid model in which Bach and Hainzl (2012) retrospectively tested a combined Coulomb/ETAS model for three earthquakes (1992 Landers, 1999 Hector Mine, and 2004 Parkfield) and found that the addition of the Coulomb information improved forecasts of the locations of off-fault aftershocks.

3.8 "Multiplicative" Hybrid Models

Rhoades et al. (2014) studied two-model and three-model hybrids. The two-model hybrids use the HKJ model as the baseline (Helmstetter et al., 2007), combining it respectively with the Asperity-based Likelihood Model (ALM) (Wiemer et al., 2007), the Pattern Informatics (PI) model (Holliday et al., 2007), the Geodetic model (Shen et al., 2007), the Neokinema model (Bird et al., 2007), five different models developed by Ward (Ward, 2007), and two models developed by Ebel et al. (2007). To build suitable multiple hybrid models, Rhoades et al. (2014) combined the HKJ and Neokinema models with the PI model for earthquake forecasting in California, and combined the HKJ and Geodetic models with the PI model for earthquake forecasting in Southern California. Based on the test results, the operational hybrid models are expected to outperform most of the individual models.

With weights set on the basis of the forecast time scale, Rhoades et al. (2015) obtained 26 hybrid models by merging the National Seismic Hazard Model background (HBG), PPE, Proximity to Mapped Faults (PMF), Proximity to Plate Interface (PPI), and Fault in Cell (FLT) models. They retrospectively tested the hybrid models and the five individual models for earthquakes of M≥5.0 in the periods 1987-2006 and 2007-2014, respectively. The results show that hybrid models help improve the effect of earthquake forecasting at different time scales.

4 THE APPLICATION RESULTS OF OEF

While previous earthquake forecast results were used to guide earthquake disaster reduction, current OEF studies pay more attention to how to evaluate earthquake risk based on forecast results, and to providing operational, scientific decision-making reference information for possible earthquake emergency evacuation.

4.1 Research and Application in Emergency Evacuation

In response to the practical needs for emergency evacuation revealed by the 2009 MW6.3 L'Aquila earthquake, Van Stiphout et al. (2010) developed a new approach for the objective evaluation of short-term evacuation decisions, namely an OEF method that combines the probability gain of earthquakes with "cost-benefit analysis (CBA)". CBA is commonly used in other disciplines, such as climate forecasting (Katz et al., 1997), earthquake retrofitting of buildings (Smyth et al., 2004), avalanche risk mitigation (Fuchs et al., 2007), and volcanic risk mitigation (Marzocchi et al., 2007, 2009), and allows a transparent and quantitative scheme for the decision-making process.

The optimal probability threshold R for an acceptable risk is set as follows: given the cost C of a mitigation action and the potential loss L, the action is favorable whenever the probabilistic risk exceeds C/L (R>C/L). Based on earthquake-resistant capacity, inhabitants are divided into vulnerability classes A (30%), B (30%), C (30%), and D (10%), where A refers to the most vulnerable buildings in L'Aquila, Italy. Three different assumptions for the cost of evacuation were considered, i.e. 500, 50, and 20 per person per day. It was found that the CBA threshold curve lies above the loss probability curve. The study of optimal evacuation duration shows that evacuation action is rarely worthwhile, and future evacuation efforts should be concentrated in areas with poor seismic performance and poor site conditions. The development of more predictive statistical or physics-based models that accurately describe earthquake interaction remains the primary challenge for initiating mitigation actions.
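The cost-benefit decision rule R>C/L can be sketched as follows; the probability, cost and loss values below are hypothetical:

```python
def evacuation_favorable(p_event, cost, loss):
    """Cost-benefit rule: a mitigation action is favorable when the
    probabilistic risk exceeds the cost/loss ratio, i.e. R > C/L."""
    return p_event > cost / loss

# Hypothetical: daily probability of a destructive event of 1e-3, an
# evacuation cost of 50 per person per day, and an expected loss of 10,000.
print(evacuation_favorable(1e-3, 50.0, 10000.0))  # -> False
```

With these numbers the threshold C/L is 0.005, well above typical short-term earthquake probabilities, which illustrates why evacuation is rarely favorable unless the probability gain raises the forecast rate substantially.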

Based on probability gain combined with the emergency evacuation approach, Herrmann et al. (2016) simulated a repeat of the 1356 Basel earthquake, analyzed the simulated earthquake catalog, and explored how decision support and information might be provided throughout such a crisis. This approach is used to predict earthquake disasters, especially for the quantitative estimation of casualties. They found that if an aftershock of M5.5 or larger precedes the M6.6 mainshock, evacuations in the central part of the city would be justified. Gulia et al. (2016) took the change of b-value before the earthquake into consideration in a short-term earthquake forecast model: L'Aquila's time-dependent risk, when considering time-dependent b-values, is a factor of 50 higher than the risk using state-of-the-art time-dependent models. Assessed via CBA, this increase makes the difference in exceeding the threshold for evacuation. In their view, probabilistic time-dependent risk assessment considering time-dependent b-values is a promising new concept that deserves to be studied more extensively in the future.

4.2 Seismic Fortification

At present, the importance of OEF in long-term earthquake forecasting has been widely recognized. Jordan (2013) points out that using long-term earthquake forecast information to draw seismic hazard distribution maps and to guide building design codes, seismic design and other engineering practices for mitigating disasters may be the most effective way so far to ensure seismic safety.

In the application of long-term earthquake forecasting to seismic fortification, most countries at present use probabilistic seismic hazard analysis (PSHA) methods. According to historical earthquake intensity and strong motion records, the ground motion parameters (PGA, PGV) at given exceedance probabilities are determined by establishing an earthquake probability model. The seismic risk curve is given by the exceedance probability and ground motion parameters, and the required ground motion parameters are obtained at the "best" time scale and exceedance probability. Then, statistical analysis methods can determine the relationship between seismic intensity and ground motions (Wald et al., 1999a; Shabestari et al., 2001; Kuwata et al., 2002; Atkinson et al., 2015; Martin et al., 2015). In the PSHA method, the probability that the ground motion parameter Y at a site exceeds a given value y within t years is Pt(Y>y), given by

 ${P_t}\left({Y > y} \right) = 1 - \exp \left\{ { - t\sum\limits_{k = 1}^N {\sum\limits_{i = 1}^{{n_k}} {\sum\limits_{j = 1}^{{n_j}} {{\upsilon _{kj}}{\omega _{kij}}} \int\limits_{{M_{j - 1}}}^{{M_j}} {P\left[ {Y > y|{E_{ki}}\left(M \right)} \right]{f_{ki}}\left(M \right){\rm{d}}M} } } } \right\}$ (26)

where P[Y>y|Eki(M)] is the conditional probability that the ground motion parameter exceeds y given an earthquake of magnitude M in the ith potential source area of the kth seismic belt, fki(M) is the probability density function of magnitude, N is the number of seismic belts, nk is the number of potential source areas in the kth seismic belt, nj is the number of magnitude intervals, υkj is the annual average occurrence rate of earthquakes in the jth magnitude interval of the kth seismic belt, and ωkij is a spatial distribution function.
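For a single seismic belt with a small number of magnitude bins, equation (26) reduces to a simple Poissonian exceedance calculation; the annual rates and conditional exceedance probabilities below are hypothetical:

```python
import math

def exceedance_probability(t_years, annual_rates, cond_probs):
    """Simplified form of eq. (26) for a single seismic belt:
    P_t(Y > y) = 1 - exp(-t * sum_j v_j * P[Y > y | E(M_j)]),
    where annual_rates[j] is the rate of magnitude bin j and
    cond_probs[j] the conditional probability of exceeding y."""
    lam = sum(v * p for v, p in zip(annual_rates, cond_probs))
    return 1.0 - math.exp(-t_years * lam)

# Hypothetical three magnitude bins over a 50-year window: frequent small
# events rarely exceed y, rare large events almost always do.
print(exceedance_probability(50.0, [0.1, 0.02, 0.005], [0.01, 0.2, 0.8]))
```

The full CPSHA formula additionally sums over seismic belts and potential source areas and weights each by the spatial distribution function ωkij; the structure of the calculation is the same.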

Generally speaking, the ground shaking parameter is expressed as the modified Mercalli intensity (MMI) (Wood and Neumann, 1931). Intensity maps are derived from instrumental ground motions using the regression relationships of Wald et al. (1999a). MMI can be obtained from peak ground acceleration (PGA, in cm/s2) and peak ground velocity (PGV, in cm/s) using the following relationships:

 ${\rm MMI} = 3.66\lg({\rm PGA}) - 1.66$ (27)
 ${\rm MMI} = 3.47\lg({\rm PGV}) + 2.35$ (28)

Wald et al. (1999b) found that the observed MMI is best reproduced by using the PGA relationship for MMI≤V, the PGV relationship for MMI≥VII, and a weighted mean of the PGA-derived and PGV-derived values for V<MMI<VII. Seismic intensity maps generated from this combination of the two parameters have been widely used internationally; for example, both peak ground acceleration (PGA) and peak ground velocity (PGV) are used jointly when computing forecast intensities in the United States and Italy (Wald et al., 1999a; Michelini et al., 2008; Lauciani et al., 2012; Marzocchi et al., 2014).
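Equations (27)-(28) and the Wald et al. (1999b) selection rule can be combined into a single conversion routine. The sketch below is a minimal Python implementation; the linear blending weight used between MMI V and VII is our own assumption about how the "weighted mean" is formed, not necessarily the exact scheme of the original paper.

```python
import math

def mmi_from_pga(pga_cm_s2):
    """Equation (27): MMI from peak ground acceleration (cm/s^2)."""
    return 3.66 * math.log10(pga_cm_s2) - 1.66

def mmi_from_pgv(pgv_cm_s):
    """Equation (28): MMI from peak ground velocity (cm/s)."""
    return 3.47 * math.log10(pgv_cm_s) + 2.35

def combined_mmi(pga_cm_s2, pgv_cm_s):
    """Blend the two estimates in the spirit of Wald et al. (1999b):
    PGA-based below MMI V, PGV-based above MMI VII, and a linear
    weighted mean in between (the linear weighting is an assumption)."""
    mmi_a = mmi_from_pga(pga_cm_s2)
    mmi_v = mmi_from_pgv(pgv_cm_s)
    if mmi_a <= 5.0:
        return mmi_a          # low shaking: PGA relationship
    if mmi_v >= 7.0:
        return mmi_v          # strong shaking: PGV relationship
    # Transition zone: weight shifts linearly from PGA at MMI V to PGV at MMI VII.
    w = min(max((mmi_a - 5.0) / 2.0, 0.0), 1.0)
    return (1.0 - w) * mmi_a + w * mmi_v
```

For example, `combined_mmi(100.0, 10.0)` falls between the pure PGA estimate (5.66) and the pure PGV estimate (5.82), since both lie in the transition zone.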

5 CONCLUSION AND DISCUSSION

Research on Operational Earthquake Forecasting (OEF) is being carried out internationally. This paper has introduced the research trends and main technical ideas of OEF, focusing on the major areas of concern, the concept of probability gain, hybrid model development, and applications in earthquake disaster reduction.

Although the starting point of OEF was the 2009 L'Aquila earthquake in Italy, its foundations and technology were developed on the basis of the earthquake forecast model research carried out under the CSEP plan over the past decade. The large number of quantitative earthquake forecast models developed by CSEP, together with its exploration of earthquake predictability, has made OEF research and the construction of corresponding systems feasible. The development of earthquake forecast research from CSEP to OEF is worthy of consideration and reference. China's earthquake forecasting work began with the 1966 Xingtai earthquake; over the half century since, a large amount of multi-time-scale "forward" prediction experience has been accumulated.

The service products of China's earthquake forecasting/prediction system consist mainly of short-term and imminent forecasts, medium-term "earthquake trend forecasts" and "annual key earthquake hazard area zoning", and long-term "key monitoring and defense zones". All forecast service products except the "key monitoring and defense zones" take forecast opinions as their final output form. Compared with the application of OEF, an output that gives only a seismic hazard assessment is not linked to decision-making and preparation for earthquake emergency evacuation, which may bring difficulties to disaster-reduction decision-making. Referring to the development context, key technologies and application fields of OEF, possible inspirations for our relevant work lie in the following aspects. First, OEF adopts the concept of "probability gain", which addresses the problem of earthquake risk assessment under conditions of low background earthquake occurrence rates and significantly reduces the uncertainty of earthquake disaster decision-making. Second, hybrid models are used to optimally combine forecasts on the basis of the single-model testing and evaluation of the CSEP plan; the resulting objective improvement in prediction efficiency makes operational earthquake forecasting feasible, and this important technical basis is worthy of reference. Finally, OEF provides an important scientific basis for decision-making in earthquake disaster reduction with respect to aseismic design, engineering disaster mitigation, and emergency rescue preparation.

ACKNOWLEDGEMENTS

This work was guided by the preparatory working group of the China CSEP Testing Center under the global "Collaboratory for the Study of Earthquake Predictability (CSEP)" plan. We are also grateful to doctoral candidate Zhang Shengfeng and Masters candidates Xie Xi and Wang Yawen for their help in collecting materials for this study.

REFERENCES