* Fix to `summary.emmGrid` relating to calculation of adjustment factors when `by` variables are nested (#536)
* Added `ci.method` argument to `hpd.summary()` to allow the option of producing quantile-based intervals (#538).
* Modified `make.scale()` to play nice with new R requirements
* A change involving ordinal modes (`"prob"` and `"exc.prob"`, but not `"cum.prob"`), postponed from CRAN version 1.11.1 to allow time for other package(s) to adapt
* Fix to `as.data.frame.summary_emm` so it can't loop infinitely (#525)
* Clarifications in the `ref_grid` documentation and FAQs vignette on how we use `all.vars()` to identify predictors; e.g., if a model formula contains `log(dose)`, the covariate is `dose`, not `log(dose)` (#523)
* A change involving ordinal modes (`"prob"` and `"exc.prob"`, but not `"cum.prob"`). [This change is on hold as it breaks another package.]
* New `opoly.emmc()` contrast function that does not rescale the coefficients to integers, and allows unequally-spaced levels to be specified as scores (#527). In addition, unlike `poly.emmc`, `opoly.emmc` supports the `exclude` and `include` arguments.
* New `helmert.emmc` and `nrmlz.emmc` contrast functions. The latter is a wrapper that can be used to normalize the contrast coefficients from any other `.emmc` function.
* Fix to `contrast.emmGrid` to make a custom `.emmc` function easier to find. This bug prevented some examples from being rendered correctly in all contexts.
* Restored `joint_tests()` code that was omitted in 1.11.0, because apparently it was right the first time (#528)
* Added `npts` argument to `make.meanint()` and `make.symmint()` to facilitate generating an interval with more than two points.
* Fix to `test()` for situations with non-estimability and infinite df (#528)
* New `linfct.emm_list()` method
* Made `emmobj()` less rigid, so that `as.emmGrid(as.list(obj))` more faithfully reproduces `obj`
* Added a `drop` argument (`TRUE` by default) to the `emm_list` methods.
* Added `se.bhat` and `se.diff` to `emmobj()` (#529)
* New `linfct()` generic and default method that returns `object@linfct`
* Fix to `joint_tests()` that prevented some terms from being tested in nested models. Alas, this is still not perfect.
* We now allow `.` in the `specs` argument of `emmeans()` -- e.g., `emmeans(mod, ".")` or `emmeans(mod, pairwise ~ . | drug)` (#522). This creates a list of all sets of means (and contrasts), thus creating an `emm_list` object. This also works in `emtrends()`.
* In the `emm_list` methods, we changed the default for `which` from 1 to `NULL`.
* Revised handling of counterfactuals: with counterfactual factors `A` and `B`, the reference grid comprises combinations of `A`, `B`, `actual_A`, and `actual_B`, where the latter two are used to track the original settings of `A` and `B` in the dataset. We always average over combinations of these factors. The previous code was a memory hog, and we have made it much more efficient for large datasets.
* `emmeans()` has also been revised to do special handling of counterfactual reference grids. Whenever we average over a counterfactual `B`, we only use the cases where `B == actual_B`, thus obtaining the same results as would be obtained when `B` is not regarded as a counterfactual.
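A minimal sketch of this workflow, assuming a hypothetical fitted model `mod` with factors `A` and `B`, and using the `counterfactuals` argument of `ref_grid()` described elsewhere in this file:

```r
rg <- ref_grid(mod, counterfactuals = c("A", "B"))
# rg comprises combinations of A, B, actual_A, and actual_B

emmeans(rg, "B")   # averaging uses only the cases where B == actual_B
```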
* Fix to `regrid()` to create the `@post.beta` slot correctly when there are non-estimable cases.
* Fix to `subset.emmGrid()` (#518)
* Change to `print.emmGrid()` so that it calls `show()` unless `export = TRUE`. This change was made because I noticed that pkgdown uses `print()` rather than `show()` to display example results.
* New `add_submodels()` function that allows for comparison of estimates from different submodels (when supported)
* Documentation changes for `eff_size()`. Also, a questionable example was deleted. It is so easy to misuse this function, and I don't even buy into the idea of standardized effect sizes except in the simplest of cases. So I am considering deprecating `eff_size()` and letting some other package be to blame for unsuitable or misleading results.
* Fix for a `weights` bug in `lme()` (#356)
* New `with_emm_options()` function to run code with options temporarily set
* `SE` values are now shown to 3 significant digits
* Fix involving `geeglm` and `glmgee` (#496)
* Support for factors with `NA` levels (#500). This was previously not allowed, and we added an `"allow.na.levs"` option (defaults to `TRUE`) just in case we broke anything that used to work.
* Fix to `qdrg()` (#501)
* Fix involving `nuisance` (#503)
* New `mvregrid()` function for multivariate response transformations such as a compositional response.
* Updated `mice::mira` support to use Barnard-Rubin adjusted d.f. (#494)
* Update to `gls` support code (#495)

This update is focused mostly on trying to clear up confusion with some users on the distinction between `emmGrid` objects and their summaries, since they display identically; and on encouraging users not to bypass important annotations. See `help(untidy)`.

* New `rbind` method for `summary_emm` objects (#480). Note that `summary_emm` objects already have estimates, P values, etc. computed, so rbinding them preserves those results. On the other hand, rbinding `emmGrid` or `emm_list` objects produces new `emmGrid` objects which have not yet been summarized, and any `adjust` methods are applied to the whole result.
* Bug fix: for `gls` or `lme` models, `mode = "satterthwaite"` and `mode = "appx-satterthwaite"` failed when the model was fitted with no explicit `data` argument (#465)
* Changes to the documentation of the `.emmc` functions, just to make it easier to see and use them
* New `wtcon.emmc(levs, wts, cmtype, ...)`, which generates contrasts via `multcomp::contrMat(wts, type = cmtype, ...)`
* `contrast()` gains a new argument `wts` which can be passed to some `.emmc` functions including `eff.emmc`, `del.eff.emmc`, and `wtcon.emmc`. If `wts` is left missing, equal weights are used. If we specify `wts = NA`, we retrieve weights from the object (potentially different in each `by` group). Otherwise, the same fixed `wts` are used in each group.
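A brief sketch of the three ways `wts` can be supplied (here `emm` is a hypothetical `emmeans()` result whose primary factor has three levels):

```r
contrast(emm, "eff")                        # wts missing: equal weights
contrast(emm, "eff", wts = NA)              # weights recovered from the object, per 'by' group
contrast(emm, "del.eff", wts = c(2, 1, 1))  # the same fixed weights used in every group
```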
* New `weights()` method for `emmGrid` objects
* Fix to `pwpp()` to play along if `contrast()` changes the `by` variable via options (#472)
* We now allow `strata()` factors to be included in the reference grid for survival models. It is up to the user to decide what is sensible. (#429, #473)
* The `tau` argument (now optional) for `rq` models (#458)
* Fix involving `at` even when apparently valid (#458)
* Added `cross.adjust` to the legal arguments that can be passed via the `misc` slot
* Fix related to `cross.adjust`
* Fix to `vcov.` in `glmgee` support (#460)
* New `xtable()` method for `summary_emm` objects
* New `inner` argument to `make.tran()` to allow for a compound transformation; e.g., `make.tran("inverse", inner = "sqrt")` is reciprocal square root (#462)
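A hedged sketch of using such a compound transformation (the model, data, and variable names are made up; the `with(tran, ...)` idiom is the usual pattern for `make.tran()` results):

```r
tran <- make.tran("inverse", inner = "sqrt")   # reciprocal square root
mod <- with(tran, lm(linkfun(conc) ~ treatment, data = mydata))  # hypothetical data
emmeans(mod, "treatment", type = "response")   # back-transform to the response scale
```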
* Fix to MuMIn support with the `subset` argument (#455)
* Added `glmtoolbox::glmgee` support (#454)
* `qdrg()` modified such that we often don't need to specify `data` when `object` is specified.
* Support for `rq`, `rqs` now incorporates all `tau` values in the model as a pseudo-factor (#458). The `tau` argument itself is deprecated and ignored if specified.
* New `make.meanint()` and `make.symmint()` functions that return functions that compute symmetric intervals. The old `meanint()` and `symmint()` functions that return symmetric intervals of width 2 are retained for back-compatibility
* Fix to `multinom` support so it works with a model where the response is a matrix of counts (#439)
* Fix to MuMIn support (#442)
* `qdrg()` has replaced its `ordinal.dim` argument with `ordinal`, a list with elements `dim` and `mode` -- which now fully supports all the modes available for ordinal models (#444). (`ordinal.dim` still works for backward compatibility.)
* Fix to `emtrends` (#448)
* Fix to `averaging` support with certain predictor function calls (#449)
* Fix to `contrast` when `tran` is a list (#428)
* Fix involving `strata()` terms in survival models (#429)
* Fix involving `at` (#430)
* Fix to `qdrg()` where we didn't always get `V` right
* `vcov.emmGrid()` now only returns elements where `object@misc$display == TRUE`. We also label the dimensions and provide a `sep` argument for creating labels.
* Correction to a bug introduced in version 1.8.4, where we tried to provide for an `offset` argument in the same way as an `offset()` term in the model formula. Unfortunately, that change also caused wrong estimates to be computed when the offset involves a nonlinear function such as `log()`, and made for whopping inconsistencies in the narrative about offsets in the "sophisticated" vignette; I apologize for these embarrassing errors. We now provide for both kinds of offset specifications, but in different ways, as explained in a new section in the "xplanations" vignette. The "subtle difference" mentioned in the NEWS for 1.8.4 no longer applies.
* Change in `qdrg()`: if `object` is specified, the default for `df` is `df.residual(object)` rather than `object$df.residual`, since `df.residual()` is a standard method.
* `as.mcmc()` now uses `get_emm_option("sep")` in labeling factor combinations (#425).
* Changes to `emm_basis.averaging` to take care of quirks in these models (#402, #409)
* Added `decreasing` argument to `cld.emmGrid()` for compatibility with `multcomp::cld.glht()` and others.
* Fix to `emtrends()` when `data` is specified (semTools issue 119), and a related tune-up to `ref_grid()` to avoid issues with repeat calls (#413)
* Changes to `emm_list` methods to make them more user-friendly (#417)
* Added `pwts` argument to `recover_data.call()`, needed because prior weights did not always come through. This provides a reliable way of passing prior weights in a `recover_data()` method
* Fix to `emmip_ggplot()` (#397)
* Change in `as.data.frame` behavior. It has been made more forceful in preserving annotations (i.e., `summary_emm` behavior) so that users don't blind themselves to potentially important information. Also, some users seem to force display of the data frame in order to see more digits; so we are now taking a compromise approach: showing more digits but still as a `summary_emm` object with annotations also displayed.
* Added a `Chisq` value to the results of `test(..., joint = TRUE)` and `joint_tests()` when `df2` is infinite (per request in #400)
* The `basics` vignette has undergone a major revision that I hope helps more in getting users oriented. It starts by discussing the fact that EMMs' underpinnings are more in experiments than observational data, and emphasizes more the process of first getting a good model.
* The `confidence-intervals` vignette has been updated to reflect the same example with `pigs` as is used in `basics`
* More careful handling of the `sigma` value in the `@misc` slot. For any models that are not in the `"gaussian"` family, `sigma` is initialized to `NA`, and this has some implications:
  - Bias adjustment requires the user to specify a `sigma` value; however, for generalized linear models the value of `sigma(model)` is often inappropriate for bias adjustment (and in fact you should not do that anyway*), and for mixed models, you should calculate `sigma` based on the random effects. See the vignette on transformations.
  - `predict(..., interval = "prediction")` will refuse to work, with no option to override. The same goes for specifying `PIs = TRUE` in `plot()` or `emmip()`. The calculations done for prediction intervals are only valid for Gaussian models. You may do predictions for non-Gaussian models by simulating a posterior predictive distribution with a Bayesian approach; see an illustration in the "sophisticated" vignette.
  - Some models may still have an inappropriate `sigma` value associated with them, resulting in incorrect PIs and incorrect bias adjustments. I have not figured out how I might help prevent that, but it probably will involve making tedious modifications to these models' `emm_basis` methods. Maybe some future improvements to be made.
* Fix for `averaging` objects (#402)
* Fix for `mira` objects when `data` is required (#406)
* Fix to the `scale()` response transformation when either `center` or `scale` is `FALSE`. I also added support for `center()` and `standardize()` from the datawizard package as response transformations, though these are mapped to `scale()`.
Citation correction (#391)
Removed a message about contrasting transformed objects that even confuses me! (I added a topic in the FAQs vignette instead)
Added new exported function inverse available as a
response transformation
I have quietly deprecated the previous I_bet()
function, because it produced a message that was confusing to
inexperienced users. Instead, we have tweaked some functions/methods so
they seem to work the same way with an emm_list object
(using its first element) as an emmGrid object.
We have removed the functions convert_workspace()
and convert_scripts() that were intended to clean up
existing code and objects for the ancient version of
lsmeans. We also completely removed several old
functions from the codebase. Previously, we just ignored them.
More reliable dispatching of recover_data() and
emm_basis() methods (#392)
New permute_levels() function to change the order of
levels of a factor (#393)
This may alter results of existing code for models
involving offsets: A user discovered an issue whereby offsets
specified in an offset() model term are accounted for, but
those specified in an offset = ... argument are ignored. We
have revised the recover_data() and ref_grid()
code so that offsets specified either way (or even both) are treated the
same way (which is to include them in predictions unless
overridden by an offset argument in emmeans()
or ref_grid()).
This change creates a subtle difference in cases where you want
offsets to depend on other predictors: In a model with formula
y ~ trt + offset(off), if you used to specify
cov.reduce = off ~ trt, now you need
cov.reduce = .offset. ~ trt. The latter will work the same
with the model y ~ trt, offset = off.
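A sketch of the two specifications under the revised behavior (the model, data, and variable names here are hypothetical):

```r
mod1 <- glm(y ~ trt + offset(off), family = poisson, data = dat)   # offset() term
mod2 <- glm(y ~ trt, offset = off, family = poisson, data = dat)   # offset argument

# Either way, to let the offset depend on trt, use the new form:
ref_grid(mod1, cov.reduce = .offset. ~ trt)
ref_grid(mod2, cov.reduce = .offset. ~ trt)
```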
Recoded some portions of the support functions for
zeroinfl and hurdle objects. We now use
numerical differentiation to do the delta method, and this comes out a
lot cleaner.
Per the improved count-model support, we are now exporting and
have documented two new functions hurdle.support() and
zi.support() that may be useful in providing comparable
support in other packages that offer zero-inflated models.
Efficiency improvements: Several places in the code where we
multiply a matrix by a diagonal matrix, we replace this by equivalent
code using the sweep() function.
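For the record, the equivalence being exploited (plain base R; the values are just toy numbers):

```r
X <- matrix(1:6, nrow = 2)
d <- c(10, 20, 30)
all.equal(X %*% diag(d), sweep(X, 2, d, "*"))   # TRUE: scales each column of X by d
                                                # without forming the diagonal matrix
```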
Over time, too many users have latched on to `emmeans(model, pairwise ~ treatment(s))` as the recipe for using `emmeans()`. It works okay when you have just one factor, but when you have three factors, say,
just one factor, but when you have three factors, say,
pairwise ~ fac1*fac2*fac3 gives you every possible
comparison among cell means; often, this creates an intractable amount
of output (e.g., 378 comparisons in a 3x3x3 case) – most of which are
diagonal comparisons.
So now, if a user is in interactive mode, specifies contrasts in a
direct emmeans() call (i.e.,
sys.parent() == 0), there is more than one primary
factor (not including by factors), and there are more than
21 contrasts as a result (e.g. more than 7 levels compared pairwise), we
issue an advisory warning message: “You may have generated more
contrasts than you really wanted…”. Because of the restrictions on when
this warning is issued, it should not affect reverse-dependent package
checks at all.
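A sketch of the kind of usage the advisory message nudges people toward (factor and model names here are hypothetical):

```r
# Rather than emmeans(mod, pairwise ~ fac1 * fac2 * fac3), which yields every
# cell-vs-cell comparison, compute the means first and compare within groups:
emm <- emmeans(mod, ~ fac1 | fac2 * fac3)
pairs(emm)   # only comparisons of fac1 levels within each fac2:fac3 combination
```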
* Fix to `regrid()` (#287, revisited)
* Fix to the `nbasis` calculation in ordinal models (#387)
* New `addl.vars` argument allows including variables (say, for random slopes) in the reference grid.
* `xtable` methods are now dynamically registered. This reduces the number of package dependencies from 8 to 7 (as of this version).
* Added `"atanh"` to the options in `make.tran()` and to the "named" response transformations that are auto-detected
* `make.tran()` replaces the `param` argument with `alpha` and `beta` (`param` is still supported for backward compatibility), and documentation has been revised in hopes of making everything clearer
* Extensions to `cld()` so it can show findings rather than non-findings, in two different ways: using `delta`, groupings are based on actual tests of equivalence with threshold `delta`; or, with `signif.sets = TRUE`, means that have the same letter are significantly *different*. We also added a vignette on "Re-engineering CLDs".
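A sketch of the two new displays (assumes the multcomp package is available and `emm` is a hypothetical set of estimated marginal means):

```r
multcomp::cld(emm, delta = 0.5)         # letters group means found *equivalent* within 0.5
multcomp::cld(emm, signif.sets = TRUE)  # means sharing a letter differ significantly
```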
* Changes to `emtrends()` (#133)
* Changes to `emmip()` so that we can specify that `color`, `linetype`, and `symbol` are all associated with groupings; and addition of an example to produce a black-and-white plot. Note: while the default appearance of plots is unchanged, plots from your existing code may be altered if you have used `linearg`, `dotarg`, etc.
* We now allow `vcov.` to be coercible to a matrix, or a function that yields a result coercible to a matrix (#383)
* Fix to the `"appx-satterthwaite"` method (#384)
* New `counterfactuals` argument to `ref_grid()`, setting up a reference grid consisting of the stated factors and a constructed factor, `.obs.no.`. We then (by default) average this grid over the covariate distribution. This facilitates G-computation under the exchangeability assumption for counterfactuals.
* Fix to `summary()` introduced in #359 and reported in #364
* Fix to `as.data.frame.emm_list()` so it preserves annotations like in `as.data.frame.emmGrid()`
* Changes to `mgcv::gam` support to accommodate fancier smoothers and more accurately detect random terms (#365, #366, #369)
* Fix for calling `summary()` from inside a function (#367)
* Added a `delta` argument to `hpd.summary()`, thus allowing a way to assess equivalence with Bayesian estimates (#370)
* Fix to `stanreg` estimability code when `subset` was used in the model.
* `emmip()` and `plot.emmGrid()` now do appropriate things if `point.est` or `frequentist` appear among the `...` arguments when we have Bayesian models (note also, `frequentist` was removed from the visible arguments for `plot.emmGrid`).
* `emmip()` plotted intervals regardless of `CIs`; this has been corrected
* New `head()` and `tail()` methods for `emmGrid` objects
* In `[.summary_emm()`, we changed the default to `as.df = FALSE` so that annotations are still visible by default. This also preserves annotations in `head()` and `tail()` for summaries
* New `emm_example()` function used to tidy up certain help-file examples when they are conditional on an external package
* `summary()`, `confint()`, `test()`, and `as.data.frame()` all produce data frames with annotations intact and visible. Additional wrapping in `data.frame()`, `as.data.frame()`, etc. is completely unnecessary, and if you send questions or bug reports with such code, I will regard it as willful ignorance and will refuse to respond. See also the news for version 1.8.0.
* Fix to `lme` support (#356)
* Support for `svyolr` objects from the survey package (#350)
* Fix to `mgcv::gam` support. Previously, random smoothers were included. Thanks to Maarten Jung for observing this and helping to identify them.
* Fix to `test(..., joint = TRUE)` and `joint_tests()` so that the `"est.fcns"` attribute is actually estimable
* The `(confounded)` entry in `joint_tests()` is now much better formulated and more robust.
* Additions to the `xplanations` vignette
* We now require estimability (>= 1.4.1) due to a bug in version 1.4
* In `joint_tests()`, we changed the default from `cov.reduce = range` to `cov.reduce = meanint`, where `meanint(x)` returns `mean(x) + c(-1, 1)`. This centers the covariate values around their means, rather than their midranges, and is more in line with the default of `ref_grid(..., cov.reduce = mean)`. However, this change in default will change the results of `joint_tests()` from past experiences with models having covariates that interact with factors or other covariates. We also added a section on covariates to the help for `joint_tests()`, and added another function `symmint()` for use in `cov.reduce`.
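A sketch contrasting the new and old defaults (with a hypothetical model `mod` containing covariates that interact with factors):

```r
joint_tests(mod)                       # new default: cov.reduce = meanint, i.e. mean(x) + c(-1, 1)
joint_tests(mod, cov.reduce = range)   # reproduces the previous default behavior
```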
* `print.summary_emm()` now puts `by` groups in correct order rather than in order of appearance.
* The `as.data.frame` method has a new argument `destroy.annotations`, which defaults to `FALSE` -- in which case it returns a `summary_emm` object (which inherits from `data.frame`). I see that many users routinely wrap their results in `as.data.frame` because they want to access displayed results in later steps. But in doing so they have missed potentially useful annotations. Users who have used `as.data.frame` to see results with lots of digits should instead use `emm_options(opt.digits = FALSE)`.
* We now require R >= 4.1.0, allowing freedom to use the forward pipe operator `|>` and other features.
* We removed the `trend` argument in `emmeans()`, which has long since been deprecated. We removed wrappers that implement `pmmeans()`, `pmtrends()`, etc. -- which I believe nobody ever used.
* We have reorganized `emm_list` support and added more complete documentation. We also added hidden `emm_list` support to several functions like `add_grouping()`, `emmip()`, and `emmeans()` itself. These changes, we hope, help in situations where users create objects like `emm <- emmeans(model, pairwise ~ treatment)` but are not experienced or attuned to the distinction between `emmGrid` and `emm_list` objects. The mechanism for this is to provide a default for which element of the `emm_list` to use. A message is shown that specifies which element was selected and encourages the user to specify it explicitly in the future via either `[[ ]]` or a `which` argument; for example, `plot(emm[[1]])` or `plot(emm, which = 1)`.
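A sketch of the behavior being described (hypothetical model and factor names):

```r
emm <- emmeans(mod, pairwise ~ treatment)   # an emm_list with $emmeans and $contrasts
plot(emm[[1]])          # explicit selection of the first element (the means)
plot(emm, which = 1)    # equivalent, via the which argument
# With no selection, a message reports which element was used and suggests the above.
```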
* `joint_tests()` and `test(..., joint = TRUE)` now have an `"est.fcns"` attribute, which is a list of the linear functions associated with the joint test(s).
* `joint_tests()` results now possibly include a `(confounded)` entry for effects not purely explained by a model term.
* New `cross.adjust` argument in `summary.emmGrid()` allows for additional P-value adjustment across `by` groups.
* `glm.nb` support no longer requires `data` (#355), so the documentation was updated.
* New argument `enhance.levels` to `contrast()` that allows better labeling of the levels being contrasted. For example, now (by default) if a factor `treat` has numeric levels, then comparisons will have levels like `treat1 - treat2` rather than `1 - 2`. We can request similar behavior with non-numeric levels, but only if we specify which factors.
* New functions `comb_facs()` and `split_fac()` for manipulating the factors in an `emmGrid`.
* Added argument `wts` to `eff.emmc` and `del.eff.emmc`, which allows for weighted versions of effect-style contrasts (#346)
* Made `qdrg()` more robust in accommodating various manifestations of rank-deficient models.
* `qdrg()` now always uses `df` if provided. Previously it forced `df = Inf` when a link function was provided.
* Fix to the `df.error` calculation with `gls` (#347)
* `ref_grid(..., transform = ...)` now should be `ref_grid(..., regrid = ...)`, to avoid confusing `transform` with the `tran` option (which kind of does the opposite). If we match `transform` and don't match `tran`, it will still work, but a message is displayed with advice to use `regrid` instead.
* Repairs to `averaging` support (#324). Previous versions were potentially dead wrong except for models created by `lm()` (and maybe some of those were bad too)
* New `which` argument to `emm()` to select which list elements to pass to `multcomp::glht()`
* Support for certain `gls` models (note that nlme allows such models with `gls`, but not `lme`)
* Fix to `lqm` / `lqmm` support (#340)
* Fix to `averaging` support (#319)
* Bug fix for `by = NULL` (#321) (this bug was a subtle byproduct of the name-checking in #305). Note this fixes visible errors in the vignettes for ver 1.7.1-1
* Fix to `gamlss` support (#323)
* Added `withAutoprint()` to documentation examples with `require()` clauses, so we see interactive-style results
* Fix to `summary.emmGrid` (#31)
* Change to `summary.emmGrid()` so that if we have both a response transformation and a link function, then both transformations are followed through with `type = "response"`. Previously, I took the lazy way out and used `summary(regrid(object, transform = "unlink"), type = "response")` (see #325)
* Fix to `force_regular()`, which caused an unintended warning (#326)
* Fix to `emtrends()` (#327)
* We now check `by` variable names for validity (#305). Related to this are:
  - `plot.emmGrid()` now forces all names to be syntactically valid
  - In `as.data.frame.emmGrid()`, we changed the `optional` argument to `check.names` (defaulting to `TRUE`), and it actually has an effect. So by default, the result will have syntactically valid names; this is a change, but only because `optional` did not work right (because it is an argument for `as.data.frame.list()`).
* Fix involving `linfct` from `emmeans()` (#308)
* Added `gnls` support (#313, #314, thanks to Fernando Miguez)
* Change to `glm` support so that `df.residual` is used when the family is gaussian or gamma. Thus, e.g., we match `lm` results when the model is fitted with a Gaussian family. Previously we ignored the d.f. for all `glm` objects.
* New `rg.limit` option (and argument for `ref_grid()`) to limit the number of rows in the reference grid (#282, #292). This change could affect existing code that used to work -- but only in fairly extreme situations. Some users report extreme performance issues that can be traced to the size of the reference grid being in the billions, causing memory to be paged, etc. So providing this limit really is necessary. The default is 10,000 rows. I hope that most existing users don't bump up against that too often. The `nuisance` (or `non.nuisance`) argument in `ref_grid()` (see below) can help work around this limit.
* New `nuisance` option in `ref_grid()`, by which we can specify names of factors to exclude from the reference grid (accommodating them by averaging) (#282, #292). These must be factors that don't interact with anything, even other nuisance factors. This provides a remedy for excessive grid sizes.
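A sketch with hypothetical nuisance factors (they must not interact with anything):

```r
rg <- ref_grid(mod, nuisance = c("subject", "batch"))  # averaged over, not crossed into the grid
emmeans(rg, "treatment")
```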
* Improvements to `qdrg()`: `contrasts` now defaults to `object$contrasts` when `object` is specified, and a new `ordinal.dim` argument supports ordinal models
* New `force_regular()` function adds invisible rows to an irregular `emmGrid` to make it regular (i.e., covers all factor combinations)
* Fix to `regrid()` with nested structures (#287)
* Fix to `rbind()`, which mishandled `@grid$.offset.`
* Changes to `clm` and `clmm` support to fix issues related to rank deficiency and nested models, particularly with `mode = "prob"` (#300)
* We now allow `type` to be passed in `emmeans()` when `object` is already an `emmGrid` (incidentally noticed in #287)
* Fix to `add_grouping` with multiple reference factors (#291)
* Fix to `ref_grid(object, vcov. = ...)` (#283)
* Fix to `emtrends()` with covariate formulas (#284)
* Fix to `add_grouping()` (#286)
* Fix to `contrast()` to avoid all-nonEst results in irregular nested structures
* Fix to `cld()` results. Also am providing a method for `emm_list` objects.
* New `mvcontrast()` function (#281) and associated vignette material
* Fix to `update.summary_emm()`
* Change to `contrast()` so that `log2` and `log10` transformations are handled just like `log` (#273). Also disabled making ratios with `genlog`, as it seems ill-advised.
* Support for the `log1p` transformation
* New `type = "scale"` argument to `plot.emmGrid()` and `emmip()`. This is the same as `type = "response"` except the scale itself is transformed (i.e., a log scale if the log transformation was used). Since the same transformation is used, the appearance of the plot will be the same as with `type = "lp"`, but with an altered axis scale. Currently this is implemented only with `engine = "ggplot"`.
* Fix for when `scheffe.rank > 1` was specified (#171)
* New `subset()` method for `emmGrid` objects
* Support for `mcmc` and `mcmc.list` objects (#278, #279)
* `test()` shows the null value whenever it is nonzero on the chosen scale (#280)
This version has some changes that affect all users, e.g., not saving `.Last.ref_grid`, so we incremented the sub-version number.

* Change to `contrast()` so that the odds-ratio transformation persists into subsequent `contrast()` calls, e.g., interaction contrasts.
* Made `contrast(..., type = ...)` work correctly
* Made `p.adjust.methods` work (#267)
* `mblogit` support extended to work with `mmblogit` models (#268). (However, the mclogit package now incorporates its own interface)
* New `export` option in `print.emmGrid()` and `print.emm_summary()`
* New default `emm_options(save.ref_grid = FALSE)`. Years ago, it seemed potentially useful to save the last reference grid, but this is extra overhead, and writes in the user's global environment. The option remains if you want it.
* Documentation discouraging use of `as.data.frame` (because we lose potentially important annotations), and information/example on how to see more digits (which I guess is why I'm seeing users do this).
* Nesting detection: `y ~ A:B` detected `A %in% B` and `B %in% A`, and hence `A %in% A*B` and `B %in% A*B`, due to a change in 1.4.6. Now we omit cases where factors are nested in themselves!
* Extension of `cov.reduce` formulas to allow use of custom models for predicting mediating covariates
* The `multinom` "correction" in version 1.5.4 was actually an "incorrection." It was right before, and I made it wrong! If analyzing `multinom` models, use a version other than 1.5.4
* Support for `mblogit` models
* Repairs to `survreg` support (#258) -- `survreg()` doesn't handle missing factor levels the same way as `lm()`. This also affects results from `coxph()`, `AER::tobit()`, ...
* Changes to the `auto.noise` dataset, and changing that example and vignette example to have `noise/10` as the response variable. (Thanks to speech and hearing professor Stuart Rosen for pointing out this issue in an e-mail comment.)
* Fix to the `appx-satterthwaite` mode in `gls`/`lme` models (#263)
* Added `mode = "asymptotic"` for `gls`/`lme` models.
* New `facetlab` argument to `emmip_ggplot()` so the user can control how facets are labeled (#261)
* Fix to `joint_tests()` (#265)
* Fix to `joint_tests()` and interaction contrasts for nested models (#266)
* Improvement to `multinom` support suggested by an SO question
* Added a default for `which` in `rbind.emm_list()`
* Fix for `gee` models (#249)
* Support for `svyglm` objects (#248)
* Repairs to `lqm` and `lqmm` support, and added support for `rq` & `rqs` objects (quantreg package). User may pass `summary` or `boot` arguments such as `method`, `se`, `R`, ... (#250)
* Fix to `multinom` objects (SEs were previously incorrect) and addition of support for related `mclogit::mblogit` objects. If at all possible, users should re-run any pre-1.5.4 analyses of multinomial models
* Change regarding the `N.sim` argument of `regrid()`: we are no longer calling this a posterior sample, because this is not really a Bayesian method; it is just a simulated set of regression coefficients.
* We have removed `CLD()` once and for all. We tried in version 1.5.0, but were forced to cave due to downstream problems.
* New `levels<-` method that maps to `update(... levels =)` (#237)
* Fix to `cld()` so it works with nested cases (#239)
* Extended the `coef()` method to work with contrasts of nested models. This makes it possible for `pwpp()` to work (#239)
* Fix to a bug in `plot()` that occurs if we use `type = "response"` but there is in fact no transformation (reported on StackOverflow)
* Added `"log10"` and `"log2"` as legal transformations in `regrid()` (per `stats::make.link()`)
* Restructured `emmip()` to route plot output to rendering functions `emmip_ggplot()` and `emmip_lattice()`. These functions allow more customization to the plot and can also be called independently. (To do later, maybe next update: the same for `plot.emmGrid()`. What to name rendering functions?? -- suggestions?)
* Changes to `.emmc` functions so that parenthesization of levels does not get in the way of `ref`, `exclude`, or `include` arguments (#246)
* Fix to `emtrends()` when `data` is specified (#247)
* We now use the `$model` slot in a `lm` object, as long as there are no predictor transformations. This provides a little bit more safety in cases where the data have been removed or altered.
* Modified `rbind.emm_list()` to allow subsetting. (Also documentation & example)
* Fix to `plot.emmGrid(... comparisons = TRUE)` where we now determine arrow bounds and unnecessary-arrow deletions separately in each `by` group. See also a Stack Overflow posting
* `emmeans()` with contrasts specified ignores `adjust` and passes it to `contrast()` instead. Associated documentation improved (I hope)
* Fix to `plot(..., comparisons = TRUE)` (#228)
* Change to `plot.emmGrid()` so that comparison arrows work correctly with back-transformations. (Previously we used `regrid()` in that case, causing different CIs and PIs depending on `comparisons`) (#230)
* Support for `stan_polr` models.
* Modified `as.list()` and `as.emmGrid()` to fully support nesting and submodels.
* Improved `submodel` support. Also, much more memory-efficient code therein (#218, #219)
* New option `enable.submodel` so the user can switch off submodel support when unwanted or to save memory.
* `multinom` support for the `N.sim` option
* Changes to `recover_data` and `emm_basis` so that an external package's methods are always found and given priority, whether or not they are registered (#220)
* Improvements to `gamlss` support. Smoothers are not supported, but other aspects are more reliable. See CV posting
* New `aes` argument in `pwpp()` for more control over rendering (#178)
* Fix to a bug in `plot.emmGrid()` where ordering of factor levels could change depending on `CIs` and `PIs` (#225)
* Change to `joint_tests()` to reflect `cov.keep` (ver. 1.4.2)
* `emm_options()` gains a `disable` argument to use for setting aside any existing options. Useful for reproducible bug reporting.
* In `emmeans()` with a `contr` argument or two-sided formula, we now suppress several particular `...` arguments from being passed on to `contrast()` when they should apply only to the construction of the EMMs (#214)
* Improved handling of which `...` arguments are passed to methods
* `CLD()` was deprecated in version 1.3.4. THIS IS THE LAST VERSION where it will continue to be available. Users should use `multcomp::cld()` instead, for which an `emmGrid` method will continue to exist.
* New `submodel` option
* Improvements to `mgcv::gam` support (#216)
* New `ubds` dataset for testing with messy situations
* Support for `lqm` and `lqmm` models (#213)
* Fix for `stanreg` models (#212)
* Fix for `stanreg` objects (#202)
* Modified `emmip()` to be consistent between one curve and several, in whether points are displayed (`style` option)
* New `"scale"` option to `make.tran()`
* Fix to `emtrends()` (#201)
* `trt.vs.ctrl.emmc()` now throws an error (#208)
* Added a default `linfct` (the identity) to `emmobj`
* New `emm_options` `"sep"` and `"parens"`, and a `parens` argument in `contrast()`. `sep` controls how factor levels are combined when plotted or contrasted, and `parens` sets whether, what, and how labels are parenthesized in `contrast()`. In constructing contrasts of contrasts, for example, labels like `A - B - C - D` are now `(A - B) - (C - D)`, by default. To reproduce old labeling, do `emm_options(sep = ",", parens = "a^")`
* Fix to `pwpp()` so it plays nice with nonestimable cases
* Expanded the `"xplanations"` vignette with additional documentation on methods used (comparison arrows, for starters)
* Improvements to `plot()`, especially regarding comparison arrows
* Fix for `stanreg` models (#196)
* Fix to `emmeans(obj, "1", by = "something")` (#197)
* `eff_size()` now supports `emm_list` objects with a `$contrasts` component, using those contrasts. This helps those who specify `pairwise ~ treatment`.
* Labels in `contrast()` for factor combinations with `by` groups were wacky (#199)
* `emtrends()` screwed up with multivariate models (#200)
* Added argument `calc` to `summary()`. For example, `calc = c(n = ~.wgt.)` will add a column of sample sizes to the summary.
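For example (with `emm` a hypothetical `emmGrid`):

```r
summary(emm, calc = c(n = ~ .wgt.))   # adds a column n of sample sizes
```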
* `coxph` support for models with `strata`
* `emmeans()` with `specs` of class `list` now passes any `offset` and `trend` arguments (#179)
* Added `plim` argument to `pwpp()` to allow controlling the scale
* Fix involving `params` (#180)
* Fix for `gls` objects when data are incomplete (#181)
* Fix to `joint_tests()` and `test(..., joint = TRUE)` that can occur with nontrivial `@dffun()` slots (#184)
* Fix for `gls` (#185), and renamed `boot-satterthwaite` to `appx-satterthwaite` (#176)
* Changed the `transform` argument in `ref_grid()` so it is the same as in `regrid()` (#188)
* New `pwpm()` function for displaying estimates, pairwise comparisons, and P values in matrix form.
* Fix related to `all.vars()` that addresses #170
* New argument `scheffe.rank` in `summary.emmGrid()` to manually specify the desired dimensionality of a Scheffe adjustment (#171)
* `...` arguments may now be included in options in calls to `emmeans()` and `contrast()`. This allows passing any `summary()` argument more easily, e.g., `emmeans(..., type = "response", bias.adjust = TRUE, infer = c(TRUE, TRUE))` (Before, we would have had to wrap this in `summary()`)
* Added a `plotit` argument to `plot.emmGrid()` that works similarly to that in `emmip()`.
* Support for `character` predictors in `at` (#175)
* Fix to `emmeans()` associated with non-factors such as `Date` (#162)
* Added `nesting.order` option to `emmip()` (#163)
* New `style` argument for `emmip()` allows plotting on a numeric scale
* `pwpp()` has tick marks on the P-value axis (#167)
* Fix to `regrid()` for an error when estimates exceed bounds
* Fix for a case where `formula.tools:::as.character.formula` messes me up (thanks to Berwin Turloch, UWA, for alerting me)
* Made `qdrg()` more visible in the documentation (because it's often useful)
* More methods for `emm_list` objects, e.g. `rbind()`, `as.data.frame()`, `as.list()`, and `as.emm_list()`
* Added `"bcnPower"` option to `make.tran()` (per `car::bcnPower()`)
* Fix to `emtrends()` (#153)
* `...` is now passed to hook functions (need exposed by #154)
* Addition to `regrid()` whereby we can fake any response transformation -- not just `"log"` (again inspired by #154)
* Fix involving `merMod` objects (#157)
* Improvements to `pwpp()` to make extremely small P values more distinguishable
* The first argument of `emtrends()` is now `object`, not `model`, to avoid potential mis-matching of the latter with the optional `mode` argument
* `emtrends()` now uses more robust and efficient code whereby a single reference grid is constructed containing all needed values of `var`. The old version could fail, e.g., in cases where the reference grid involves post-processing. (#145)
* Added a `scale` argument to `contrast()`
* New `"identity"` contrast method
* New `eff_size()` function for Cohen effect sizes
* Added a `cov.keep` argument in `ref_grid()` for specifying covariates to be treated just like factors (#148). A side effect is that the system default for indicator variables as covariates is to treat them like 2-level factors. This could change the results obtained from some analyses using earlier versions. To replicate old analyses, set `emm_options(cov.keep = character(0))`.
* Bug fixes: `regrid` ignored offsets with Bayesian models; `emtrends()` did not supply `options` and `misc` arguments to `emm_basis()` (#143)
* Fix for `stanreg` in particular (#114)
* New `max.degree` argument in `emtrends()` making it possible to obtain higher-order trends (#133). Plus minor tune-ups, e.g., a smaller default increment for difference quotients
* Made `emmeans()` more forgiving with `by` variables; e.g., `emmeans(model, ~ dose | treat, by = "route")` will find both `by` variables, whereas previously `"route"` would be ignored.
* External `emm_basis()` and `recover_data()` methods are used in preference to internal ones -- so package developers can provide improvements over what I've cobbled together.
* Improved message when `recover_data()` fails
* Fix to `contrast()` in identifying true contrasts (#134)
* Fix to `plot.summary_emm()` regarding `CIs` and `intervals` (#137)
* Response transformations like `log(y + 1) ~ ...` and `2*sqrt(y + 0.5) ~ ...` are now auto-detected. [This may cause discrepancies with examples in past usages, but if so, that would be because the response transformation was previously incorrectly interpreted.]
* Added a `ratios` argument to `contrast()` to decide how to handle `log` and `logit`
* Changes for the case where `type = "response"` is specified but there is no way to back-transform the estimates (or we opted out with `ratios = FALSE`).
* New function `emm_register()` to make it easier for other packages to register their emmeans support methods
* Documentation for `infer`, explaining that Bayesian models are handled differently (#128)
* Added a `PIs` option to `plot.emmGrid()` and `emmip()` (#131). Also, in `plot.emmGrid()`, the `intervals` argument has been changed to `CIs` for the sake of consistency and less confusion; `intervals` is still supported for backward compatibility.
* `plot.emmGrid` gains a `colors` argument so we can customize the colors used.
* Improved `glht` support (#132, contributed by Balazs Banfai)
* `regrid` gains `sim` and `N.sim` arguments whereby we can generate a fake posterior sample from a frequentist model.
* Fix for `gls` objects with non-matrix `apVar` member (#119)
* Added a `sigma` argument to `ref_grid()` (defaults to `sigma(object)` if available)
* Added an `interval` argument in `predict.emmGrid()`
* Added a `likelihood` argument in `as.mcmc` to allow for simulating from the posterior predictive distribution
* Changes involving `sigma` in `object`
* Changes to `cld()` and `CLD()`
* Fix involving `exclude` (#107)
* Changes in the handoff from `recover_data` to `emm_basis`
* `MCMCglmm` support
* Care in how we use `do.call(paste, ...)` and `do.call(order, ...)`, to prevent problems with factor names like `method` that are argument names for these functions (#94)
* Fix to a bug in `summary.emmGrid()` whereby transformations of class `list` were ignored.
* New `update.emmGrid(..., levels = levs)` capability, whereby we can easily relabel the reference grid and ensure that the `grid` and `roles` slots stay consistent. Added vignette example.
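A hedged sketch of relabeling (the factor name and labels are hypothetical, and I take `levels` to be a named list):

```r
# rg is a hypothetical emmGrid whose factor 'dose' currently has levels "1", "2", "3"
rg2 <- update(rg, levels = list(dose = c("low", "medium", "high")))
```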
* Change in `emmeans()`: we now ensure that the original order of the reference grid is preserved. Previously, the grid was re-ordered if any numeric or character levels occurred out of order, per `order()`
* Deprecation of `CLD()` due to its misleading display of pairwise-comparison tests.
* Fix for `betareg` objects, where the wrong `terms` component was sometimes used.
* Fix involving cases where `by` variables are present (#98).
* New `pwpp()` function to plot P values of comparisons
* Improvement to `summary(..., adjust = "scheffe")`. We now actually compute and use the rank of the matrix of linear functions to obtain the F numerator d.f., rather than trying to guess the likely correct value.
* Fix to `contrast()` results if they are later used by `emmeans()`. This was first noticed with ordinal models in `prob` mode (#83).
* Support for `sommer::mmer`, `MuMIn::averaging`, and `mice::mira` objects
* Fix to `nnet::multinom` support when there are 2 outcomes (#19)
* Fix for `gls` objects
* `famSize` is now correct when `exclude` or `include` is used in a contrast function (see #68)
* Improvements for `aovList` objects, in part due to the popularity of `afex::aov_ez()`, which uses these models.
* New `emm_options(opt.digits = FALSE)` option
* Added `include` argument to most `.emmc` functions (#67)
* Consistent handling of `ref`, `exclude`, and `include` in `.emmc` functions (#68)
* Fix to handling of `...` arguments in `emmeans()` when two-sided formulas are present
* Fix to `clm` support when the model is rank-deficient
* Fix to a `regrid(..., transform = "log")` error when there are existing non-estimable cases (issue #65)
* Improved `brmsfit` support (#43)
* Support for `mgcv::gam` and `mgcv::gamm` models
* `my.vcov()` now passes `...` to clients
* `manova` object no longer requires the `data` keyword (#72)
* Fix for `aovlist` models (#73)
* Fix to a `CLD` fatal error when `sort = TRUE` (#77)
* Fix for `lme` objects (#75)
* Fix to a bug where the `"mvt"` adjustment was ignored by grouping
* `contrast()` mis-labeled estimates when levels varied among `by` groups (most prominently this happened in `CLD(..., details = TRUE)`)
* Changed `aovlist` support so it re-fits the model when non-sum-to-zero contrasts were used
* `print.summary_emm()` now cleans up numeric columns with `zapsmall()`
* More robust detection of `nesting` in `ref_grid()` and `update()`, and addition of a `covnest` argument for whether to include covariates when auto-detecting nesting
* Changes to `hpd.summary()` and the handoff to it from `summary()`
* Fix to a bug whereby `ref_grid()` ignored `mult.levs`
* Fixes to `...` being passed where it shouldn't
* `CLD()` now works for MCMC models (uses frequentist summary)
* New `opt.digits` option
* `ref.grid()` is put to final rest, and we no longer support packages that provide `recover.data` or `lsm.basis` methods.
* Export of `.recover_data()` and `.emm_basis()` to provide access for extension developers to all available methods
* Additions in `inst/extdata`
* Fix to a bug in use of `all.vars()` that could cause errors when the response variable has a function call with character constants.
* Relabeling change in `regrid()` (so results match `summary()` labeling with `type = "response"`).
* `plot.emmGrid(..., comparisons = TRUE, type = "response")` produced incorrect comparison arrows; now fixed
* Support for models fitted like `df$y ~ df$treat + df[["cov"]]`. This had failed previously for two obscure reasons, but now works correctly.
* New `simplify.names` option for the above types of models
* `emm_options()` with no arguments now returns all options in force, including the defaults. This makes it more consistent with `options()`
* Fix to `emtrends()`; it produced incorrect results in models with offsets.
* Fixes to `update.emmGrid()` and `emm_options()`
* New `qdrg()` function (quick and dirty reference grid) for help with unsupported model objects
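A sketch with made-up inputs, using arguments I believe `qdrg()` accepts (a model formula, the data, the coefficients, their covariance matrix, and the degrees of freedom):

```r
rg <- qdrg(formula = y ~ treat + x, data = dat,
           coef = bhat, vcov = V, df = 27)   # all objects here are hypothetical
emmeans(rg, "treat")
```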
* `cld()` has been deprecated in favor of `CLD()`. This had been a headache. multcomp is the wrong place for the generic to be; it is too fancy a dance to export `cld` with or without having multcomp installed.
* New `xtending.Rmd` vignette on how to export methods
* Fix to `revpairwise.emmc` and `cld` regarding comparing only 1 EMM
* `cld.emm_list` now returns results only for `object[[ which[1] ]]`, along with a warning message.
* Removed the capability for `emmeans` specs like `cld ~ group`, a vestige of lsmeans, as it did not work correctly (and was already undocumented)
* Packages moved to Suggests (dozens and dozens fewer dependencies)
* Update regarding `lme` models in the "models" vignette
* Changes to `.emmc` functions (#22)
* Added `exclude` argument to most `.emmc` functions: allows the user to omit certain levels when computing contrasts
* New `hpd.summary()` function for Bayesian models to show HPD intervals rather than frequentist summary. Note: `summary()` automatically reroutes to it. Also `plot()` and `emmip()` play along.
* Changes involving `nlme::lme` models
* Fix to a bug whereby `Surv()` was interpreted as a response transformation.
* Fix to a bug when `cld()` is applied to an `emm_list` (issue #24)
* Added an `offset` argument to `ref_grid()` (scalar offset only) and to `emmeans()` (vector offset allowed) -- (issue #18)
* New argument in `[.summary_emm` to choose whether to retain its class or coerce to a `data.frame` (relates to issue #14)
* Added a `reverse` option for `trt.vs.ctrl` and relatives (#27)
* Changed how `terms` is accessed with `lme` objects to make it more robust
* `emmeans:::convert_scripts()` renames the output file more simply
* Added a `[` method for class `summary_emm`
* New `simple` argument for `contrast` -- essentially the complement of `by`
* Changes to `joint_tests()`
* `ref_grid()` now accepts a `ylevs` list of length > 1; also a slight argument change: `mult.name` -> `mult.names`
* Fix to a bug in `emmeans()` wherein `weights` was ignored when `specs` is a `list`
* The `data` argument, if supplied, is coerced to a `data.frame` (`recover_data()` doesn't like tibbles...)
* Added an `as.data.frame` method for `emmGrid` objects, making it often possible to pass it directly to other functions as a `data` argument.
* Fix to a bug in `contrast()` where `by` was ignored for interaction contrasts
* Fix to a bug in `as.glht()` where it choked on `df = Inf`
* Fix involving `data` or `subset`
* New `joint_tests()` function tests all [interaction] contrasts
* Support for `gamlss` objects (but doesn't support smoothing). An additional argument is `what = c("mu", "sigma", "nu", "tau")`. It seems to be flaky when the model of interest is just `~ 1`.
* Fix to a bug whereby `emmeans()` might pass `data` to `contrast()`
* Changes to `summary.emmGrid()`
* Fixed `emm_options(summary = ...)` to work as advertised.
* Renamed the `emmGrid()` function to `emm()`, as had been intended, as an alternative to `mcp()` in `multcomp::glht()` (result of ditto).
* Fix to `cld.emm_list()`
* We now use `Inf` to display d.f. for asymptotic (z) tests. (`NA` will still work too, but `Inf` is a better choice for consistency and meaning.)
* `recover_data()` now throws an error when it finds recovered data not reproducible
* Revised `vcov()` calls to comply with recent R-devel changes

This is the initial major version that replaces the lsmeans package. Changes shown below are changes made to the last real release of lsmeans (version 2.27-2). lsmeans versions greater than that are transitional to that package being retired.
* Core functions are now `emmeans()`, `emtrends()`, `emmip()`, etc. But `lsmeans()`, `lstrends()`, etc., as well as `pmmeans()` etc., are mapped to their corresponding `emxxxx()` functions.
* Renamings: `ref.grid` -> `ref_grid`, `lsm.options` -> `emm_options`, etc.
* The classes `ref.grid` and `lsmobj` are gone. Both are replaced by class `emmGrid`. An `as.emmGrid()` function is provided to convert old objects to class `emmGrid`.
* Support for `lmerMod` models. Also added options `disable.lmerTest` and `lmerTest.limit`, similar to those for pbkrtest.
* Added the `neuralgia` and `pigs` datasets
* Dispatching of `emmeans()` methods is now top-down rather than a convoluted intermingling of S3 methods
* Changed `-s` in labels to `/s` to emphasize that these results are ratios.
* The most recent reference grid is saved by `ref_grid`. (Can be disabled via `emm_options()`)
* `plot()` and `emmip()` are now ggplot2-based. Old lattice-based functionality is still available too, and there is a `graphics.engine` option to choose the default.
* Moved Suggests pkgs to Enhances when not needed for building/testing

New developments will take place in emmeans, and lsmeans will remain static and eventually will be archived.