npcdistbw {np} | R Documentation

npcdistbw

npcdistbw computes a condbandwidth object for a p+q-variate kernel
conditional cumulative distribution estimator defined over mixed
continuous and discrete (unordered, ordered) data, using either the
normal-reference rule-of-thumb or least-squares cross-validation via
the methods of Li and Racine (2008) and Li, Lin and Racine (2013).
Usage

npcdistbw(...)

## S3 method for class 'formula'
npcdistbw(formula, data, subset, na.action, call, gdata = NULL, ...)

## S3 method for class 'NULL'
npcdistbw(xdat = stop("data 'xdat' missing"),
          ydat = stop("data 'ydat' missing"),
          bws, ...)

## S3 method for class 'condbandwidth'
npcdistbw(xdat = stop("data 'xdat' missing"),
          ydat = stop("data 'ydat' missing"),
          gydat = NULL,
          bws, bandwidth.compute = TRUE,
          auto = TRUE, nmulti, remin = TRUE, itmax = 10000,
          fast.cdf = TRUE, do.full.integral = FALSE, ngrid = 100,
          ftol = 1.19209e-07, tol = 1.49012e-08, small = 2.22045e-16, ...)

## Default S3 method:
npcdistbw(xdat = stop("data 'xdat' missing"),
          ydat = stop("data 'ydat' missing"),
          gydat, bws, bandwidth.compute = TRUE,
          auto, nmulti, remin, itmax, fast.cdf, do.full.integral, ngrid,
          ftol, tol, small,
          bwmethod, bwscaling, bwtype,
          cxkertype, cxkerorder, cykertype, cykerorder,
          uxkertype, oxkertype, oykertype, ...)
Arguments

formula: a symbolic description of variables on which bandwidth
    selection is to be performed. The details of constructing a
    formula are described below.

data: an optional data frame, list or environment (or object coercible
    to a data frame by as.data.frame) containing the variables in the
    model.

subset: an optional vector specifying a subset of observations to be
    used in the fitting process.

na.action: a function which indicates what should happen when the data
    contain NAs. The default is set by the na.action setting of
    options.

call: the original function call. This is passed internally by np when
    a bandwidth search has been implied by a call to another function.
    It is not recommended that the user set this.

gdata: a grid of data on which the indicator function for
    least-squares cross-validation is to be computed (can be the
    sample or a grid of quantiles).
xdat: a p-variate data frame of explanatory data on which bandwidth
    selection will be performed. The data types may be continuous,
    discrete (unordered and ordered factors), or some combination
    thereof.

ydat: a q-variate data frame of dependent data on which bandwidth
    selection will be performed. The data types may be continuous,
    discrete (unordered and ordered factors), or some combination
    thereof.

gydat: a grid of data on which the indicator function for
    least-squares cross-validation is to be computed (can be the
    sample or a grid of quantiles for ydat).

bws: a bandwidth specification. This can be set as a condbandwidth
    object returned from a previous invocation, or as a p+q-vector of
    bandwidths, with each element corresponding to the bandwidth for a
    column in xdat and ydat.

...: additional arguments supplied to specify the bandwidth type,
    kernel types, selection methods, and so on, detailed below.
auto: a logical value specifying whether to allow the code to attempt
    to automatically select (via a heuristic) the fastest routine for
    computing the least-squares cross-validation function. Defaults to
    TRUE.

bwmethod: which method to use to select bandwidths. cv.ls specifies
    least-squares cross-validation, while normal-reference computes
    the rule-of-thumb bandwidths. Defaults to cv.ls.

bwscaling: a logical value that when set to TRUE causes the supplied
    bandwidths to be interpreted as scale factors rather than raw
    bandwidths. Defaults to FALSE.

bwtype: character string used for the continuous variable bandwidth
    type, specifying the type of bandwidth to compute and return in
    the condbandwidth object. Can be set as fixed, generalized_nn, or
    adaptive_nn. Defaults to fixed.

bandwidth.compute: a logical value which specifies whether to do a
    numerical search for bandwidths or not. If set to FALSE, a
    condbandwidth object will be returned with bandwidths set to those
    specified in bws. Defaults to TRUE.
cxkertype: character string used to specify the continuous kernel type
    for the explanatory data. Can be set as gaussian, epanechnikov, or
    uniform. Defaults to gaussian.

cxkerorder: numeric value specifying kernel order for the continuous
    explanatory data (one of (2, 4, 6, 8)). Defaults to 2.

cykertype: character string used to specify the continuous kernel type
    for the dependent data. Can be set as gaussian, epanechnikov, or
    uniform. Defaults to gaussian.

cykerorder: numeric value specifying kernel order for the continuous
    dependent data (one of (2, 4, 6, 8)). Defaults to 2.

uxkertype: character string used to specify the unordered categorical
    kernel type for the explanatory data. Can be set as
    aitchisonaitken or liracine.

oxkertype: character string used to specify the ordered categorical
    kernel type for the explanatory data. Can be set as wangvanryzin
    or liracine.

oykertype: character string used to specify the ordered categorical
    kernel type for the dependent data. Can be set as wangvanryzin or
    liracine.
nmulti: integer number of times to restart the process of finding
    extrema of the cross-validation function from different (random)
    initial points. Defaults to min(5, ncol(xdat) + ncol(ydat)).

remin: a logical value which when set as TRUE restarts the numerical
    search from the located minima in an attempt to improve upon them.
    Defaults to TRUE.

itmax: integer number of iterations before failure in the numerical
    optimization routine. Defaults to 10000.

fast.cdf: a logical value which when set as TRUE uses the fast routine
    for computing the cumulative distribution required by the
    cross-validation objective. Defaults to TRUE.

do.full.integral: a logical value which when set as TRUE computes the
    full integral in the cross-validation objective rather than the
    moment-based approximation. Defaults to FALSE.

ngrid: integer number of grid points to use when computing the
    moment-based integral. Defaults to 100.

ftol: tolerance on the value of the cross-validation function
    evaluated at located minima. Defaults to 1.19209e-07.

tol: tolerance on the position of located minima of the
    cross-validation function. Defaults to 1.49012e-08.

small: a small number, at about the precision of the data type used.
    Defaults to 2.22045e-16.
Details

npcdistbw implements a variety of methods for choosing bandwidths for
multivariate (p+q-variate) conditional distributions defined over a
set of possibly continuous and/or discrete (unordered, ordered) data.
The approach is based on Li and Racine (2004) who employ 'generalized
product kernels' that admit a mix of continuous and discrete data
types.
The cross-validation methods employ multivariate numerical search algorithms (direction set (Powell's) methods in multidimensions).
Bandwidths can (and will) differ for each variable, which is, of course, desirable.
Three classes of kernel estimators for the continuous data types are available: fixed, adaptive nearest-neighbor, and generalized nearest-neighbor. Adaptive nearest-neighbor bandwidths change with each sample realization in the set, x[i], when estimating the cumulative distribution at the point x. Generalized nearest-neighbor bandwidths change with the point at which the cumulative distribution is estimated, x. Fixed bandwidths are constant over the support of x.
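The three bandwidth classes above are selected via the bwtype argument.
A minimal sketch (assuming the np package is installed; gdp and year
come from np's Italy data set used in the examples below):

```r
## Requests adaptive nearest-neighbor bandwidths rather than the fixed
## default; the nearest-neighbor choice applies to the continuous
## variable (gdp), while the ordered variable (year) retains a discrete
## kernel bandwidth.
library(np)
data("Italy")
bw.ann <- npcdistbw(formula = gdp ~ ordered(year),
                    data = Italy,
                    bwtype = "adaptive_nn")
summary(bw.ann)
```

With bwtype = "fixed" (the default) the returned object instead holds
constant bandwidths for the continuous variables.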
npcdistbw may be invoked either with a formula-like symbolic
description of variables on which bandwidth selection is to be
performed or through a simpler interface whereby data is passed
directly to the function via the xdat and ydat parameters. Use of
these two interfaces is mutually exclusive.

Data contained in the data frames xdat and ydat may be a mix of
continuous (default), unordered discrete (to be specified in the data
frames using factor), and ordered discrete (to be specified in the
data frames using ordered). Data can be entered in an arbitrary order
and data types will be detected automatically by the routine (see np
for details).
Data for which bandwidths are to be estimated may be specified
symbolically. A typical description has the form dependent data ~
explanatory data, where dependent data and explanatory data are both
series of variables specified by name, separated by the separation
character '+'. For example, y1 + y2 ~ x1 + x2 specifies that the
bandwidths for the joint distribution of variables y1 and y2
conditioned on x1 and x2 are to be estimated. See below for further
examples.
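A sketch of the formula syntax just described (the data frame and the
variable names y1, y2, x1, x2 are hypothetical placeholders, assuming
the np package is installed):

```r
## Hypothetical mixed-type data: two continuous dependent variables,
## one continuous and one ordered explanatory variable.
library(np)
set.seed(42)
n  <- 100
df <- data.frame(y1 = rnorm(n),
                 y2 = rnorm(n),
                 x1 = rnorm(n),
                 x2 = ordered(sample(1:3, n, replace = TRUE)))
## Bandwidths for the joint distribution of (y1, y2) conditional on (x1, x2)
bw <- npcdistbw(formula = y1 + y2 ~ x1 + x2, data = df)
summary(bw)
```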
A variety of kernels may be specified by the user. Kernels implemented for continuous data types include the second, fourth, sixth, and eighth order Gaussian and Epanechnikov kernels, and the uniform kernel. Unordered discrete data types use a variation on Aitchison and Aitken's (1976) kernel, while ordered data types use a variation of the Wang and van Ryzin (1981) kernel.
Value

npcdistbw returns a condbandwidth object, with the following
components:

xbw: bandwidth(s), scale factor(s) or nearest neighbours for the
    explanatory data, xdat

ybw: bandwidth(s), scale factor(s) or nearest neighbours for the
    dependent data, ydat

fval: objective function value at minimum
If bwtype is set to fixed, an object containing bandwidths (or scale
factors if bwscaling = TRUE) is returned. If it is set to
generalized_nn or adaptive_nn, then instead the kth nearest neighbors
are returned for the continuous variables while the discrete kernel
bandwidths are returned for the discrete variables.
The functions predict, summary and plot support objects of type
condbandwidth.

If you are using data of mixed types, then it is advisable to use the
data.frame function to construct your input data and not cbind, since
cbind will typically not work as intended on mixed data types and will
coerce the data to the same type.
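A minimal base-R sketch of the point above (the variable names are
hypothetical):

```r
## cbind() coerces mixed columns to a common type: an ordered factor is
## reduced to its integer codes inside a numeric matrix, so the type
## information that np relies on for kernel selection is lost.
x.num <- c(1.2, 3.4, 5.6)
x.ord <- ordered(c("low", "mid", "high"),
                 levels = c("low", "mid", "high"))

bad  <- cbind(x.num, x.ord)       # numeric matrix; ordering info lost
good <- data.frame(x.num, x.ord)  # columns keep their types

class(good$x.ord)                 # "ordered" "factor"
```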
Caution: multivariate data-driven bandwidth selection methods are, by
their nature, computationally intensive. Virtually all methods require
dropping the ith observation from the data set, computing an object,
repeating this for all observations in the sample, then averaging each
of these leave-one-out estimates for a given value of the bandwidth
vector, and only then repeating this a large number of times in order
to conduct multivariate numerical minimization/maximization.
Furthermore, due to the potential for local minima/maxima, restarting
this procedure a large number of times may often be necessary. This
can be frustrating for users possessing large datasets. For
exploratory purposes, you may wish to override the default search
tolerances, say, setting ftol = 0.01 and tol = 0.01, and conduct
multistarting (the default is to restart min(5, ncol(xdat) +
ncol(ydat)) times) as is done for a number of examples. Once the
procedure terminates, you can restart the search with default
tolerances using those bandwidths obtained from the less rigorous
search (i.e., set bws = bw on subsequent calls to this routine, where
bw is the initial bandwidth object). A version of this package using
the Rmpi wrapper is under development that allows one to deploy this
software in a clustered computing environment to facilitate
computation involving large datasets.
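The coarse-then-fine strategy described above can be sketched as
follows (assuming the np package is installed; gdp and year come from
np's Italy data set used in the examples below):

```r
library(np)
data("Italy")
## Coarse exploratory pass with loose tolerances (much faster)
bw <- npcdistbw(formula = gdp ~ ordered(year), data = Italy,
                ftol = 0.01, tol = 0.01)
## Refinement pass: restart from the coarse bandwidths with the
## default (tight) tolerances
bw <- npcdistbw(formula = gdp ~ ordered(year), data = Italy, bws = bw)
summary(bw)
```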
Author(s)

Tristen Hayfield hayfield@mpia.de, Jeffrey S. Racine racinej@mcmaster.ca
References

Aitchison, J. and C.G.G. Aitken (1976), "Multivariate binary discrimination by the kernel method," Biometrika, 63, 413-420.
Hall, P. and J.S. Racine and Q. Li (2004), “Cross-validation and the estimation of conditional probability densities,” Journal of the American Statistical Association, 99, 1015-1026.
Li, Q. and J.S. Racine (2007), Nonparametric Econometrics: Theory and Practice, Princeton University Press.
Li, Q. and J.S. Racine (2008), “Nonparametric estimation of conditional CDF and quantile functions with mixed categorical and continuous data,” Journal of Business and Economic Statistics, 26, 423-434.
Li, Q. and J. Lin and J.S. Racine (2013), “Optimal Bandwidth Selection for Nonparametric Conditional Distribution and Quantile Functions”, Journal of Business and Economic Statistics, 31, 57-65.
Pagan, A. and A. Ullah (1999), Nonparametric Econometrics, Cambridge University Press.
Scott, D.W. (1992), Multivariate Density Estimation. Theory, Practice and Visualization, New York: Wiley.
Silverman, B.W. (1986), Density Estimation, London: Chapman and Hall.
Wang, M.C. and J. van Ryzin (1981), “A class of smooth estimators for discrete distributions,” Biometrika, 68, 301-309.
See Also

bw.nrd, bw.SJ, hist, npudens, npudist
Examples

## Not run:
# EXAMPLE 1 (INTERFACE=FORMULA): For this example, we compute the
# cross-validated bandwidths (default) using a second-order Gaussian
# kernel (default). Note - this may take a minute or two depending on
# the speed of your computer.

data("Italy")
attach(Italy)

bw <- npcdistbw(formula=gdp~ordered(year))

# The object bw can be used for further estimation using
# npcdist(), plotting using plot() etc. Entering the name of
# the object provides useful summary information, and names() will also
# provide useful information.

summary(bw)

# Note - see the example for npudensbw() for multiple illustrations
# of how to change the kernel function, kernel order, and so forth.

detach(Italy)

# EXAMPLE 1 (INTERFACE=DATA FRAME): For this example, we compute the
# cross-validated bandwidths (default) using a second-order Gaussian
# kernel (default). Note - this may take a minute or two depending on
# the speed of your computer.

data("Italy")
attach(Italy)

bw <- npcdistbw(xdat=ordered(year), ydat=gdp)

# The object bw can be used for further estimation using npcdist(),
# plotting using plot() etc. Entering the name of the object provides
# useful summary information, and names() will also provide useful
# information.

summary(bw)

# Note - see the example for npudensbw() for multiple illustrations
# of how to change the kernel function, kernel order, and so forth.

detach(Italy)

## End(Not run)