[tlocoh-info] auto.a() function suggests much too high a-values

André Zehnder andrezehnder at hotmail.com
Wed May 20 11:17:09 CEST 2015

Hi Andy,


Thank you for your quick answer. That would mean that the lxy.nn.add()
function works correctly, but does not perform well with my kind of data. I
tried out the lxy.amin.add() function you mentioned. I did not find an
example of it, but I think this is the correct way of calling it:
lxy.amin.add("lxy-data", s=svalue, nnn=kvalue, ptp=0.90). Although the
a-value produced by this function is somewhat lower than that of
lxy.nn.add(), it is still much higher than what I would choose.


Unfortunately, the MSHC method for choosing the a-/k-value is not an option,
since there are no large features in the study area that are known to be
avoided by the animals. I will try to look for another criterion that
supports a more objective choice of the a-value. If I find anything, I will
let you know.


Best regards

André Zehnder


From: Andy Lyons [mailto:lyons.andy at gmail.com] 
Sent: Tuesday, 19 May 2015 09:27
To: André Zehnder
Cc: tlocoh-info at lists.r-forge.r-project.org
Subject: Re: [tlocoh-info] auto.a() function suggests much too high a-values


Hi André,

Good questions. Your diagnosis shows that you understand what's going on.
You're quite right that selecting a value for 'a' is not intuitive, in part
because it represents a cumulative distance from each point to several of its
nearest neighbors, and in the case where time is incorporated in the
selection of nearest neighbors (s>0), the distance is not a physical
distance but a time-scaled one.

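To make the time-scaled idea concrete, here is a sketch in Python (the list's own code is R, but the idea is language-neutral). It illustrates the general form of a time-scaled distance, in which the time gap between two fixes is converted into a distance term via a maximum-speed scaling; the exact formula T-LoCoH uses is given in Lyons et al. 2013, so treat this as an illustration rather than the package's implementation.

```python
import math

def tsd(p1, p2, s, vmax):
    """Sketch of a time-scaled distance between two fixes (x, y, t).

    The time gap dt is converted into a distance term via the maximum
    observed speed vmax and the scaling parameter s, then combined with
    the ordinary Euclidean distance. With s = 0 this reduces to the
    plain spatial distance.
    """
    dx = p1[0] - p2[0]
    dy = p1[1] - p2[1]
    dt = p1[2] - p2[2]
    return math.sqrt(dx**2 + dy**2 + (s * vmax * dt)**2)

# Two fixes 300 m apart in space and one hour apart in time
a = (0.0, 0.0, 0.0)
b = (300.0, 0.0, 3600.0)
print(tsd(a, b, s=0.0, vmax=0.5))   # s = 0: plain Euclidean distance, 300.0
print(tsd(a, b, s=0.01, vmax=0.5))  # s > 0: the hour apart inflates the distance
```

This is why an 'a' value tuned for s=0 is no longer comparable once s>0: the cumulative "distances" being summed now include a temporal component.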
The auto.a() function provides a starting point that has proven useful for
many datasets, but it really is just that - a starting point to narrow down
the range of 'a' values that provide a reasonable balance between
over-estimation and under-estimation. The ultimate selection of 'a' should
be based on your (admittedly subjective) assessment of minimizing spurious
holes in the utilization distribution, and spurious cross-overs. Part of the
subjectivity in selecting a parameter value (for any home range estimation
method really) involves reflecting upon whether your research question
requires better fidelity to the core area or overall 'home range'.  In other
words, there is no recommended 'a' value. There are only recommended
principles for selecting 'a' or 'k' (see appendix I of Lyons et al 2013),
along with some tools (plots) to help you select a value. All of which is
less convenient, to be sure, than a one-click solution, but it hopefully keeps
you close to your data and pushes you to think about what you want from your
space use model. 

Why the upper and lower ranges returned by the auto.a() function did
a poor job for your dataset is hard to say, but it could be related to the
geometry of the points or the sampling frequency. Remember that auto.a(ptp
= 0.98, nnn = 2) returns the value of 'a' such that 98% of points have at
least two nearest neighbors. If the distribution of points is wide ranging,
this could result in a large "lower bound" that blows up the core areas. The
suggestion to let k = sqrt(numberOfPoints) is likewise a starting point
and not meant to be a recommended value. 
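The principle behind that rule can be sketched in a few lines of Python (for illustration only; this is not the actual auto.a() source, and the function name and details here are my own): compute each point's cumulative distance to its nnn nearest neighbors, then take the ptp quantile of those values. A single far-flung point is enough to inflate the result, which is one way a wide-ranging distribution "blows up" the suggested 'a'.

```python
import numpy as np

def auto_a_sketch(xy, nnn=2, ptp=0.98):
    """Sketch of the auto.a() principle (hypothetical, not tlocoh code):
    return 'a' such that a proportion ptp of points have at least nnn
    nearest neighbors within cumulative distance 'a'."""
    xy = np.asarray(xy, dtype=float)
    # full pairwise Euclidean distance matrix
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=2))
    d.sort(axis=1)                      # row i: sorted distances from point i
    cum = d[:, 1:nnn + 1].sum(axis=1)   # cumulative distance to nnn nearest neighbors
    return float(np.quantile(cum, ptp))

# A tight cluster plus one distant outlier: the outlier's remote
# neighbors dominate the upper quantiles of the cumulative distances
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (100, 100)]
print(auto_a_sketch(pts, nnn=2, ptp=0.5))   # median point: 2.0
print(auto_a_sketch(pts, nnn=2, ptp=1.0))   # driven up by the outlier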

There is an alternative function called lxy.amin.add() to help identify upper
and lower bounds for 'a', but it is more of a convenience function and it
operates on similar principles as auto.a(). There is also a relatively new
function in tlocoh.dev that opens a GUI (Shiny app) designed to help you
select parameter values. It isn't documented yet, but see the sample code
below.

Hope this helps.


if (!require(tlocoh.dev)) stop("Please install tlocoh.dev")
## Loading required package: tlocoh.dev
if (packageVersion("tlocoh.dev") < "1.2.02") stop("Please update your tlocoh.dev package")

## Load suggested packages
pkgs <- c("rgdal", "raster", "shiny", "dismo")
not.installed <- pkgs[!sapply(pkgs, function(p) require(p, character.only=TRUE))]
if (length(not.installed) > 0) stop(paste("Please install:", paste(not.installed, collapse=", ")))

## Create a hullset with evenly spaced parameter values (in this case 'k' values;
## could also be evenly spaced 'a' values, using something like
## a=seq(from=1000, to=15000, by=1000))
raccoon.lhs <- lxy.lhs(raccoon.lxy, s=0.05, k=4:20, iso.add=TRUE)

## Download a background image for display
raccoon.gmap <- lhs.gmap(raccoon.lhs, gmap="hybrid")

## Graphically select one of the parameter values by examining the
## isopleths, EAR plot, and ISO area plot
raccoon.sel <- lhs.selection(raccoon.lhs)
raccoon.sel <- lhs.shiny.select(raccoon.lhs, selection=raccoon.sel, gmap=raccoon.gmap)

## Apply selection
raccoon.lhs <- lhs.select(raccoon.lhs, selection=raccoon.sel)

On 5/18/2015 10:38 AM, André Zehnder wrote:

Hi all,


I am performing a home range analysis with GPS data from some leopards and
lions. The input data has a highly variable point density and results in
quite large areas (roughly on the order of 500 to 1,000 km2 for the 95%
isopleth). Following the tutorial, I begin by selecting the value
for the temporal parameter s and then select suitable k values. As
guidance for that I use the rule of thumb (k = sqrt(numberOfPoints)) and
the plots. Once a k-value has been chosen, the tutorial recommends using
the auto.a() function (lxy.nn.add(toni.lxy, s=0.003, a=auto.a(nnn=15,
ptp=0.98))).


However, the recommended a-value is massively too high and results in an
oversmoothed home range that lacks any detail. The higher the s-value, the
more severe this issue becomes. While the result of the suggested a-value still
shows a few weak spatial details for s=0, almost circular home ranges result
for all isopleths with s>0. I checked whether this issue occurs for only one
dataset, but it is the same for all 5 datasets I have checked. I attached
two images that present the result when using the recommended a-value
(auto_) and one that presents a manually selected a-value (manually_). For
example, for s=0.005, I would rather take an a-value between 150,000 and
190,000 than the recommended value of 1,150,000. The auto.a() function
should thereby ensure that at least k points are included in 90% of all hulls.


Hence my question: Has anyone experienced the same issue, or is it even
a known technical problem with the package? My datasets contain 5,000 to
30,000 fixes, have some gaps, and sometimes include different sampling
intervals. Could the auto.a() function have severe problems because of that? The
choice of an a-value is rather subjective and not really intuitive,
especially when s>0. But if the auto.a() function can't be used to get an
approximate reference, what other measures are available to justify
the choice of the a-value?


PS: I use T-LoCoH version 1.34.00 with RStudio 0.98.1103.


Best regards,

André Zehnder



Tlocoh-info mailing list
Tlocoh-info at lists.r-forge.r-project.org


