Let $$\mathbf{X} = (X_1,\ldots,X_p)$$ be a stochastic vector with joint density function $$f_{\mathbf{X}}(\mathbf{x})$$, partitioned into $$\mathbf{X}_1 = (X_1,\ldots,X_k)$$ and $$\mathbf{X}_2 = (X_{k+1},\ldots,X_p)$$. A new method for estimating the conditional density function of $$\mathbf{X}_1$$ given $$\mathbf{X}_2$$ is presented. It is based on locally Gaussian approximations, but simplified in order to tackle the curse of dimensionality in multivariate applications, where both response and explanatory variables can be vectors. We compare our method to some available competitors; the approximation error is shown to be small in a series of examples using real and simulated data, and the estimator is shown to be particularly robust against noise caused by independent variables. We also present examples of practical applications of our conditional density estimator in the analysis of time series. Typical values for $$k$$ in our examples are 1 and 2, and we include simulation experiments with values of $$p$$ up to 6. Large-sample theory is established under a strong mixing condition.
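For orientation, and as a standard relation the abstract implicitly relies on rather than a statement taken from the paper, the estimand is the conditional density obtained from the joint and marginal densities:

$$f_{\mathbf{X}_1\mid\mathbf{X}_2}(\mathbf{x}_1\mid\mathbf{x}_2) = \frac{f_{\mathbf{X}}(\mathbf{x}_1,\mathbf{x}_2)}{f_{\mathbf{X}_2}(\mathbf{x}_2)}, \qquad \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2).$$

A plug-in estimator of the kind described, sketched under the assumption that locally Gaussian approximations $$\hat{f}_{\mathbf{X}}$$ and $$\hat{f}_{\mathbf{X}_2}$$ of the joint and marginal densities are available, would then take the form $$\hat{f}_{\mathbf{X}_1\mid\mathbf{X}_2} = \hat{f}_{\mathbf{X}}/\hat{f}_{\mathbf{X}_2}$$; the specific simplification that tackles the curse of dimensionality is detailed in the paper and not reproduced here.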